A Scientific Approach to Violin Making

The question of which geometry is best for a violin is still open in the instrument-making field. In our activity as researchers of the Musical Acoustics Lab of the Politecnico di Milano, located in the prestigious Violin Museum of Cremona, Italy, we are often asked about the “secrets” of Stradivarius. The craft of violin making, in fact, is often perceived as veiled in mystique: it is not clear why Guarneris have one shape and Stradivaris another, which outline best yields a given vibrational feature, or how a change in shape affects the final response of the instrument.

Our current goal is to shed light on these aspects and to develop a methodology that scientists and luthiers can use together to optimize the shape of the violin under given constraints. As a first step, we focus on the vibrational response of free violin top plates, but our studies can easily be extended to the whole instrument. The method we developed allows us to analyse how the modal response of a violin plate changes as its shape varies and, conversely, to optimize its geometry depending on the response we want to obtain.

Starting from the 3D scan of the top plate of a historic violin, we build a parametric model that controls its shape. The parameterization that we define is meaningful to a luthier, so that we believe our findings can have a relevant impact on the violin-making community. Furthermore, for the first time to our knowledge, we show that machine learning can be successfully applied to traditional violin making. This could indeed be a game-changer for this field, as not only will it help luthiers do better than the greatest “masters” of the past, but it will also help them explore the potential of new designs and materials.

Research outcomes:

  • Design of a parameterized model of a violin top plate
  • Introduction of machine learning in the violin making field
  • Optimization of the top plate geometry based on the vibrational response we want to obtain from it (see the sketch below)
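To make the last outcome more concrete, below is a minimal, hypothetical Python sketch of the kind of optimization loop we have in mind: a learned surrogate maps a handful of shape parameters of the top plate to its predicted modal frequencies, and a standard optimizer searches for the parameters whose predicted response best matches a target. All names, dimensions and numbers are illustrative, and the fixed linear map is only a stand-in for the actual trained model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N_PARAMS = 8   # hypothetical outline/arching controls meaningful to a luthier
N_MODES = 5    # first few free-plate modes of interest

# Stand-in for a trained regressor mapping shape parameters to modal frequencies [Hz]
W = rng.normal(scale=20.0, size=(N_MODES, N_PARAMS))
f0 = np.array([90.0, 150.0, 230.0, 310.0, 390.0])

def predict_modes(theta):
    """Hypothetical surrogate: predicted modal frequencies for shape parameters theta."""
    return f0 + W @ theta

def objective(theta, target):
    """Squared distance between predicted and desired modal frequencies."""
    return np.sum((predict_modes(theta) - target) ** 2)

target_modes = np.array([95.0, 160.0, 225.0, 320.0, 400.0])  # desired vibrational response
bounds = [(-1.0, 1.0)] * N_PARAMS                             # keep the shape plausible

res = minimize(objective, x0=np.zeros(N_PARAMS), args=(target_modes,),
               method="L-BFGS-B", bounds=bounds)
print("optimized shape parameters:", np.round(res.x, 3))
print("predicted modes [Hz]:", np.round(predict_modes(res.x), 1))
```

The point of the sketch is only the structure of the loop: once a cheap predictor of the vibrational response is available, shape optimization under constraints becomes tractable.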

Mirco Pezzoli PhD dissertation – Space-time Parametric approach to Extended Audio Reality

Yesterday, Mirco Pezzoli successfully defended his Ph.D. dissertation titled “Space-time Parametric approach to Extended Audio Reality.” 
In his work, Mirco proposed a novel parametric model for sound field representation.
The parametric framework allows navigation and manipulation of recorded sound scenes and the rendering of directional virtual acoustic sources for extended reality applications.

Congratulations!

Abstract

The extended reality field is rapidly growing, primarily through augmented and virtual reality applications. In this context, Extended Audio Reality (EAR) refers to the subset of extended reality operations related to the audio domain. In this thesis, we propose a parametric approach to EAR conceived to provide an effective and intuitive framework for implementing EAR applications. The main challenges of EAR concern the processing of real sound fields and the rendering of virtual acoustic sources (VSs).

We introduce a novel parametric model for sound field representation based on a small set of parameters. The proposed model allows both the navigation and the manipulation of a recorded sound scene. The main feature of our solution is that it models the acoustic source directivity directly through the parameters of the representation.

Moreover, virtual sources can be seamlessly implemented within the same parametric representation, enabling EAR. We studied VS implementation through a case study focused on violins, outlining different solutions according to their invasiveness, ranging from measurements of historical instruments to vibroacoustic simulations of violin models.

Lastly, through a proof-of-concept simulation, we showcase the benefit of the proposed parametric approach to EAR.
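As a purely illustrative companion to the abstract (and not the model proposed in the thesis), the following Python sketch shows the general flavour of a parametric virtual source: a handful of parameters (position, orientation and a first-order directivity coefficient) suffice to render a gain and a propagation delay at an arbitrary listener position. All names and values are hypothetical.

```python
import numpy as np

C = 343.0  # speed of sound [m/s]

def render_gain_delay(src_pos, src_axis, alpha, listener_pos):
    """Gain and delay of a parametric virtual source at the listener position.

    alpha in [0, 1] blends omnidirectional (0) and figure-of-eight (1) radiation,
    i.e. a first-order pattern g(theta) = (1 - alpha) + alpha * cos(theta).
    """
    r = np.asarray(listener_pos, float) - np.asarray(src_pos, float)
    dist = np.linalg.norm(r)
    cos_theta = np.dot(r / dist, np.asarray(src_axis, float) / np.linalg.norm(src_axis))
    directivity = (1.0 - alpha) + alpha * cos_theta
    return directivity / dist, dist / C  # free-field 1/r gain, propagation delay [s]

gain, delay = render_gain_delay(src_pos=[0, 0, 0], src_axis=[1, 0, 0],
                                alpha=0.5, listener_pos=[2.0, 1.0, 0.0])
print(f"gain = {gain:.3f}, delay = {delay * 1e3:.2f} ms")
```

In such a toy setting, navigating or manipulating the scene simply amounts to re-evaluating the renderer with updated parameters.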

Marco Paracchini PhD dissertation – Remote Biometric Signal Processing based on Deep Learning using SPAD Cameras

Yesterday, Marco Paracchini successfully defended his Ph.D. dissertation titled “Remote Biometric Signal Processing based on Deep Learning using SPAD Cameras.” 
In his study, Marco aimed at developing a compact system able to check a person’s health condition (heart rate, etc.) remotely and in real time, just by using a camera. This could be used in the automotive field to monitor the driver’s health state.

This thesis is part of the Horizon 2020 DEIS project!

Congratulations!

Abstract

Remote PhotoPlethysmoGraphy (rPPG) allows the extraction of cardiac information simply by analyzing a video stream of a person’s face. In particular, the blood flowing in the vessels underneath the subject’s face introduces variations in the light intensity reflected by the skin, which can be analyzed to obtain information about the subject’s heart activity. In this research, the adoption of an rPPG application in an automotive environment is investigated in order to monitor, in a non-invasive fashion, the driver’s health state and potentially avoid accidents caused by acute illness. The main goal of this work is to study and develop an rPPG system able to estimate numerous biomedical measurements in real time and in a dependable fashion.

Moreover, this work explores the possibility of adopting a SPAD (Single-Photon Avalanche Diode) array camera instead of a traditional RGB camera. To compensate for the SPAD camera’s low spatial resolution, a novel facial skin segmentation method based on a deep learning model is proposed. This method can precisely associate a skin label with each pixel of an image depicting a face, even when working with low-resolution grayscale face images (64×32 pixels), and it works under general environmental conditions regarding illumination, facial expressions and object occlusions, regardless of the gender, age and ethnicity of the subject.

To perform and validate biometric measurements with a SPAD camera, and to compare them with estimates obtained from a traditional RGB camera, multiple experiments were conducted using a portable ECG device as reference. Moreover, metrics were developed to monitor the dependability of the heart rate estimation and to detect situations where an optical solution such as rPPG could fail. Finally, an rPPG application was developed that runs in real time on a small ARM device mounted in a car: after receiving data from the SPAD camera, it executes the deep-learning-based pulse signal extraction and analyzes the result to constantly monitor the driver’s health condition by estimating multiple biometric parameters.
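For readers unfamiliar with rPPG, the following Python sketch illustrates the classical processing chain the abstract refers to: average the skin pixels of each frame, band-pass the resulting signal around plausible heart rates, and read the heart rate off the spectral peak. It is a simplified, hypothetical stand-in for the real system; in particular, the deep-learning skin segmentation is replaced by a precomputed boolean mask, and the frame rate and image size are only assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 30.0  # assumed camera frame rate [frames/s]

def estimate_heart_rate(frames, skin_mask):
    """frames: (T, H, W) grayscale video, skin_mask: (H, W) boolean skin map."""
    # 1) spatial average over skin pixels -> one sample per frame
    signal = frames[:, skin_mask].mean(axis=1)
    signal = signal - signal.mean()

    # 2) band-pass 0.7-4 Hz (42-240 bpm) to isolate the pulse component
    b, a = butter(3, [0.7 / (FS / 2), 4.0 / (FS / 2)], btype="band")
    pulse = filtfilt(b, a, signal)

    # 3) dominant spectral peak -> heart rate in beats per minute
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / FS)
    return 60.0 * freqs[np.argmax(spectrum)]

# toy usage: synthetic 20 s clip with a 1.2 Hz (72 bpm) intensity oscillation
T, H, W = int(20 * FS), 64, 32
t = np.arange(T) / FS
frames = (100
          + 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
          + np.random.default_rng(0).normal(0, 0.2, (T, H, W)))
mask = np.zeros((H, W), dtype=bool)
mask[16:48, 8:24] = True  # hypothetical skin region
print(f"estimated heart rate: {estimate_heart_rate(frames, mask):.1f} bpm")
```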

OccamyPy: a Python library for inverse problems

We are happy to announce the publication of OccamyPy, a Python library for solving small- and large-scale inverse problems!

Codes and tutorials: https://github.com/fpicetti/occamypy

This library has been developed at Stanford University by Ettore Biondi, Guillaume Barnier, Robert Clapp, Stuart Farris and our Francesco Picetti.

The key features are:
 – CPU and GPU vector operations
 – linear and nonlinear abstract operators, with a number of signal processing operators already implemented (derivatives, convolutions, smoothing)
 – scaling from laptops to HPC clusters in one click
 – most common problems already implemented (linear and nonlinear regularized least-squares with preconditioning)
 – state-of-the-art algorithms (CG, LSQR, FISTA, ISTC, SplitBregman, MCMC, L-BFGS, L-BFGS-B, Truncated Newton)
 – a number of utilities for handling runs (stoppers, restart, logging)

You can install it with “pip install occamypy”. Play with the tutorials and let us know!
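To give a flavour of the problem class OccamyPy targets (without reproducing its API, for which we refer to the tutorials linked above), here is a tiny damped least-squares example written in plain NumPy/SciPy: a smoothing forward operator is inverted with Tikhonov damping via LSQR. The operator, model and damping value are all illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import lsqr

n = 200
# forward operator: a simple moving-average (smoothing) matrix
h = np.zeros(n)
h[:9] = 1.0 / 9.0
G = toeplitz(h, np.zeros(n))

# true model: a few blocky anomalies
m_true = np.zeros(n)
m_true[40:60] = 1.0
m_true[120:140] = -0.5

# noisy data
rng = np.random.default_rng(0)
d = G @ m_true + rng.normal(0, 0.01, n)

# damped least squares: min ||G m - d||^2 + damp^2 ||m||^2
m_est = lsqr(G, d, damp=0.1, iter_lim=200)[0]
print("relative model error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```

OccamyPy wraps exactly this kind of workflow in abstract vector and operator classes, so that the same problem definition can run on CPUs, GPUs, or a cluster.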

The acoustics of brass instruments

We are pleased to announce the seminar “The acoustics of brass instruments”, which will be held by Vincent Freour on

December 18th from 4.45pm to 6.00pm.

Join the seminar (open to all) in this Webex room: https://politecnicomilano.webex.com/meet/mirco.pezzoli

Abstract

Understanding the functioning of a musical instrument, as well as the functioning of the musician and the way he/she controls it, are crucial questions for musicians, but also for physicists! This talk will focus on the particular case of brass (or lip-reed) instruments such as the trumpet or the trombone. After reviewing the fundamentals of the acoustics of brass instruments, we will present different studies that aim at bringing new knowledge to the domain of brass instrument performance and pedagogy, as well as at developing new technologies for the numerically aided design of brass instruments.

Biography

I received a PhD in Music Technology from McGill University in 2013. My research concentrates on the experimental analysis and modelling of the musician, his/her musical instrument, and the interactions between the two systems. In recent years, I have had the opportunity to collaborate with various leading research institutes, including IRCAM (France), CIRMMT (Canada), the bioengineering department of Politecnico di Milano (Italy), LAUM (France) and LMA (France). Since 2015, I have worked as a researcher in the R&D division of the Japanese musical instrument company YAMAHA. After three years at the company headquarters in Japan, I moved to Marseille in 2018 in the framework of a collaboration between YAMAHA and the Laboratory of Mechanics and Acoustics (LMA).

Vincent Freour

https://www.facebook.com/musicalacoustics/posts/1351211311883033

The Rise of Spatial Audio: New Mathematical Tools for Interactive, Intelligent and Environment-Aware Acoustics

A seminar organized by the Intelligent Signal Processing and MultiMedia (ISPAMM) group of Università La Sapienza di Roma for the Audio Engineering Society (AES).

The seminar is part of the “Virtual Seminar Series on Artificial Intelligence for Sound Synthesis and Analysis”
http://ispac.diet.uniroma1.it/event/audioai-seminars/

Thursday, December 10, 2020 – 10:00-11:00am

The seminar can be attended at this Zoom link: https://uniroma1.zoom.us/j/86743644029?pwd=SktqTkUrTFE4VEthRENtb2FUMTBMdz09

Abstract

After more than a century of relative stasis, research over the last three decades has brought profound transformations to spatial audio, thanks to the cross-contamination of very different disciplines such as signal processing, computational geometry, 3D computer graphics/vision, and artificial intelligence. In this seminar I will illustrate the mathematical tools that these disciplines have provided and the use we have made of them. I will discuss the impact of projective geometry on interactive acoustic modeling; plenacoustic techniques for the capture and interactive rendering of sound fields; new acoustic “inpainting” techniques; and more.

Bio

After his undergraduate and doctoral studies at the University of Padova (1988-1993) and UC-Berkeley, Augusto Sarti joined the faculty of Politecnico di Milano (PoliMI) in 1993, where he is currently a Full Professor. From 2013 to 2018 he was also a Full Professor at UC-Davis in California. At PoliMI he is the founder and coordinator of the Musical Acoustics Lab and the Sound and Music Computing Lab, as well as of the MSc program in Music and Acoustic Engineering. He has promoted, coordinated, or contributed to more than 30 European projects in the area of multimedia signal processing. He is co-author of more than 350 scientific publications and patents. His research interests are in the areas of audio/acoustic signal processing, computational and musical acoustics, and music information retrieval.
He is an elected member of the EURASIP Board of Directors and an IEEE Senior Member. He has been an elected member of the IEEE TC on Audio and Acoustics Signal Processing, an Associate Editor of IEEE/ACM Transactions on Audio, Speech and Language Processing, and a Senior Area Editor of IEEE Signal Processing Letters. He has been chairman or co-chairman of various international conferences in the multimedia area, including IEEE AVSS-05, DAFx-09, and IEEE WASPAA-19.

MAE Seminar – Rhythmic complexity and tension in music

A musical Seminar/Clinic with Yogev Gabay

Organized for the course of “Computer Music Representations and Models” and for the Master Program in Music and Acoustic Engineering

Wed Dec 9, 2020 – 10:15am to 2:15pm

Open to all who register in advance at the following link: https://forms.office.com/Pages/ResponsePage.aspx?id=K3EXCvNtXUKAjjCd8ope69UeO9RiF0ZHuAS79eyaanxUQU1RR1FCTVY1T1RWSE02TE1DVEdOTldOVS4u

In order to participate, please click on the following link 15 minutes before the seminar begins: https://politecnicomilano.webex.com/meet/augusto.sarti (someone will let you in)

Organizer and contact person: Prof. Augusto Sarti, augusto.sarti@polimi.it

Abstract

This clinic intends to offer a new perspective on the language of music from the rhythmic standpoint. Yogev Gabay will cover (with the help of a drum set) various aspects of rhythmic modeling in a wide range of musical genres, from jazz to progressive rock, and will show how rhythms are conceived and formed in the performer’s mind, but also how they are perceived by the listener. Among the topics covered in this seminar are: where music and math collide (rhythm, meter and metric subdivisions); rhythmic perception across genres and cultures; a discussion of the role of kick, snare and hi-hat; rethinking time signatures with a divide-and-conquer approach; and composite rhythms, polyrhythms, linear rhythms and illusions (perceptual masking).
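As a tiny, purely illustrative taste of the “music and math” side of the clinic (not material from the seminar itself), the Python snippet below lays out a 3-against-4 polyrhythm on the shared grid of lcm(3, 4) = 12 subdivisions per bar, which is the usual arithmetic behind composite rhythms.

```python
from math import lcm

def polyrhythm_grid(a: int, b: int) -> str:
    """Render an a-against-b polyrhythm on a shared subdivision grid."""
    steps = lcm(a, b)
    line_a = "".join("x" if i % (steps // a) == 0 else "." for i in range(steps))
    line_b = "".join("x" if i % (steps // b) == 0 else "." for i in range(steps))
    return f"{a}-feel: {line_a}\n{b}-feel: {line_b}"

print(polyrhythm_grid(3, 4))
# 3-feel: x...x...x...
# 4-feel: x..x..x..x..
```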

Biography

Born and raised in Israel, Yogev Gabay is a Berklee College of Music alumnus. As a drummer he has played in venues and festivals all over the world, including Israel, India, the Netherlands, Azerbaijan, Mexico, Russia, Italy, the USA and many more. Over the years, Yogev has recorded, performed and been part of numerous albums, EPs, sessions, singles, tours and more. He is an active player in many genres, from metal (Distorted Harmony, ARP, HAGO) to jazz (Tigran Hamasyan, Sivan Arbel Septet, Emil Afrasiyab), electro pop (SHEER, RINI), experimental (Kundalini), funk pop (Shmemel), world music (Shiran Avraham, Tamar Shuki) and many more. As a Berklee alumnus, Yogev also prepares students for the Berklee audition. Yogev is endorsed by Meinl Cymbals.

MAE Seminar – The role of the acoustics engineer in the musical instrument industry

We are pleased to announce the seminar “The role of the acoustics engineer in the musical instrument industry”, which will be held by Henna Tahvanainen, PhD student at Aalto University in Finland, on December 11th (this Friday) from 4.45pm to 6.00pm.
Join the seminar (open to all) in this Webex room:
https://politecnicomilano.webex.com/meet/mirco.pezzoli

Abstract

The acoustics engineer in the musical instrument industry works at the boundary between luthiery and acoustics. One of the main goals is computer-aided design of musical instruments, up to the point that we can listen to the instruments before building them. Another goal is to understand the structure-acoustic behavior of the instrument. Ultimately, we would like to build high-quality instruments that suit a specific purpose and have the acoustic characteristics that the user wants. This talk is an introduction to the guitar industry, the work of a guitar engineer, and the setup of numerical modelling tools for guitar engineering in an industrial context.

Biography

I am passionate about acoustics, music, and musical instruments. My research focus is on the analysis, simulation, and perception of both concert halls and musical instruments. For my master’s thesis, I modelled the acoustics of a Finnish string instrument called the kantele to complete my studies in Acoustics and Audio Signal Processing at Aalto University in Finland. Currently, I am finalizing my doctorate on concert hall acoustics there. My passion has taken me to work as a research engineer on acoustic guitars at Yamaha Corporation, Japan, and to intern on sound in VR at Facebook Reality Labs in the US. Currently, I am teaching digital signal processing to music technology students at Uniarts in Helsinki, and I work as an acoustics specialist at A-Insinöörit with a focus on acoustical modelling and room acoustics. I am also collaborating with kantele builders to further research the acoustics of the instrument. I have been a hobbyist kantele player since 1990.

MAE Seminar – Art and Technology: a Multidisciplinary Approach to Creative Production

Multidisciplinarity in creative production: art, science and technology together. In this seminar, fuse* will take us on a journey through the multidisciplinary creative process that led to the realization of their projects.

Friday Dec. 4th 2020 from 10:15 to 13:30 CET.

fuse*, a well-known multidisciplinary studio and artistic production company (Mutek, TodaysArt, Sónar Istanbul, Artechouse, STRP Biennial, RomaEuropa, etc.), works at the intersection of art and science, exploring the expressive potential of emerging technologies through their creative use. The objective of fuse* is to push past accepted limits, spur empathy and seek out new interplay between light, space, sound, and movement.

Register here! https://www.facebook.com/events/424759658899832/

Abstract

The lecture will focus on the history and evolution of fuse* and on an in-depth study of some of the principles that underlie the studio’s modus operandi, through a description of the creative process that led to the realization of some of its most representative projects, as well as the work of technological exploration and artistic research done in recent years. Multiverse and Dökk, two of the studio’s most iconic productions, will be described in their complexity and peculiar mixture of multidisciplinary languages and scientific references, with a specific focus on audio experimentation and interaction technologies developed in real time. Through the introduction of some recent projects, including Artificial Botany, Mimesis and Treu, an overview will be given of the technologies that the studio is currently experimenting with in the fields of machine learning, generative design, photogrammetry and data analysis.

FUSE*FACTORY

fuse* is a studio and production company at the intersection of art and science, exploring the expressive potential of emerging technologies through their creative use. Since its inception, the studio’s research has focused primarily on the production of installations and live media performances which instill wonder and motivate audiences to challenge what is possible. As the studio evolved, the creation of new projects became more holistic and placed increasingly higher value on pure experimentation. The objective of fuse* is to push past accepted limits, spur empathy and seek out new interplay between light, space, sound, and movement. fuse* maintains close ties to its community by developing, supporting, and promoting projects with the intent of propagating culture and knowledge. In this vein, fuse* has co-produced NODE, an electronic music and digital arts festival, since 2016. Over the years, fuse* has exhibited and performed internationally at art institutions and festivals including Mutek, TodaysArt, Sónar Istanbul, Artechouse, STRP Biennial, RomaEuropa, Kikk, Scopitone and the National Centre for the Performing Arts of China.

MAE Seminar – Harmonic Complexity and Tension in Music

We are pleased to announce the musical seminar/clinic “Harmonic Complexity and Tension in Music”, which will be held by the renowned jazz pianist Davide Logiri on Wednesday December 2nd from 10:15am to 1:30pm.
The seminar is organized for the course of “Computer Music Representations and Models” and for the Master Program in Music and Acoustic Engineering.
It is open to all who register in advance at the following link:
https://forms.office.com/Pages/ResponsePage.aspx
In order to participate, please click on the following link 15 minutes before the seminar begins: https://politecnicomilano.webex.com/meet/augusto.sarti (someone will let you in)
The organizer and contact person is Prof. Augusto Sarti (email: augusto.sarti@polimi.it)

Pianist Davide Logiri

Abstract

We will explore what it means to create an enticing musical narration by controlling tension and complexity in harmonic (and rhythmic) progressions. We will also explore the role of musical structure in designing an emotional narration/prosody. We will finally discuss the role of melody in keeping everything together. We will do so in an interactive fashion, while visiting various genres and styles of music.

Davide Logiri

Davide Logiri graduated from the Milan Music Conservatory “G. Verdi”, where he studied cello, piano and composition. While pursuing his degree in classical music studies, he also focused on jazz studies under the guidance of Diego Baiardi, Sante Palumbo, Antonio Faraò, Phil DeGreg, Dan Haerle, and Harold Danko. In 1995 he was ranked “best performer” at the Clinics of the Berklee College of Music in Boston, which granted him the opportunity to perform at Umbria Jazz Winter ’95. This jumpstarted a career as a jazz pianist that has brought him to perform throughout Europe, Russia, Argentina, Brazil and the USA.