Recent musical research of note

Music is an integral part of human life, whether you listen to the top 40 on the radio, are an acoustic connoisseur, or even make it from scratch yourself.

In fact, today is International Jazz Day, declared by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 2011 “to highlight jazz and its diplomatic role of uniting people in all corners of the globe.”

The Nature of Music, the 2021 SCINEMA International Film Festival winner of the SCINEMA Junior Award, tunes into the mathematical foundations of music.

In the film, Malia Barrele and Ryan Twemlow reveal the evolution of music in nature, Pythagoras’ discovery of how numbers govern musical tone, and how the orbits of TRAPPIST-1 planets form ratios found in musical harmonies.
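
For a feel of the arithmetic at play, the sketch below applies Pythagoras’ whole-number ratios to a reference pitch; it is a minimal illustration of the general idea, not material taken from the film itself.

```python
# Minimal sketch (not from the film): Pythagoras' whole-number ratios applied to
# a reference pitch. Multiplying a frequency by a simple ratio such as 3:2 gives
# one of the consonant intervals his school described.
base_hz = 440.0  # A4, a common modern tuning reference

intervals = {
    "unison": (1, 1),
    "perfect fourth": (4, 3),
    "perfect fifth": (3, 2),
    "octave": (2, 1),
}

for name, (num, den) in intervals.items():
    print(f"{name:>14}: ratio {num}:{den} -> {base_hz * num / den:.1f} Hz")
```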

You can watch the short film here.

There’s no shortage of captivating new musical research; here’s some from this week that you might not have heard about.

“I know this song!” – can other species still recognise an altered melody?

If a song sounds higher or lower, faster or slower, or the instruments have been changed, humans can still recognise a marvellous melody they have heard before – even if it no longer sounds exactly like the original.

Now we know this ability isn’t unique to humans. Rats can also still recognise a song when its tempo or pitch has been altered, but not when the instrument playing it is changed, according to a new study published in Animal Cognition.

“Our group is dedicated to understanding how these skills have evolved in humans and to what extent some of their components are shared with other species,” explains author Juan Manuel Toro, director of the Language and Comparative Cognition Group (LCC) at Pompeu Fabra University in Spain.

To investigate, the researchers trained 40 laboratory rats (Rattus norvegicus) to identify a single melody – the second half of the song “Happy Birthday”.

Over 20 daily sessions of 10 minutes each, the team individually familiarised the rats with the song by repeating it 40 times (no wonder it got stuck in their heads).

After this, three test sessions were held in which modified versions of the song were played and the rats’ responses were analysed. The song was played at different pitches (higher and lower), at different speeds (faster and slower), or on a violin instead of a piano.

They found that the rats responded differently only when the tune was played on a new instrument, not when it was played at an altered tempo or pitch.

“Our results show that the rats recognised the song even when there were changes in frequency and tempo,” Toro explains. “But when we changed the timbre, they were no longer able to recognise the song.

“The results suggest that the ability to recognise patterns over changes in pitch and tempo present in humans might emerge from pre-existing abilities in other species.”
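
That ability comes down to hearing a melody as a relative pattern rather than a list of absolute frequencies and durations. The toy sketch below makes the point with an invented tune (these are not the study’s stimuli or its analysis): transposing the pitch or scaling the tempo leaves the pattern intact, while timbre sits outside the representation entirely.

```python
import math

# Toy illustration (not the study's stimuli or analysis): a melody encoded as
# intervals relative to its first note plus relative note durations. Transposing
# the pitch or changing the tempo leaves this pattern untouched; timbre (which
# instrument plays it) is not captured by the representation at all.
melody_hz = [262, 294, 330, 262]   # invented tune, absolute frequencies in Hz
durations = [1.0, 1.0, 2.0, 1.0]   # note lengths in beats

def relative_pattern(freqs, durs):
    semitones = [round(12 * math.log2(f / freqs[0])) for f in freqs]
    rel_durs = [d / durs[0] for d in durs]
    return semitones, rel_durs

transposed = [f * 2 for f in melody_hz]   # shifted one octave up
faster = [d / 2 for d in durations]       # played at double tempo

print(relative_pattern(melody_hz, durations) == relative_pattern(transposed, durations))  # True
print(relative_pattern(melody_hz, durations) == relative_pattern(melody_hz, faster))      # True
```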

Musical score for the familiar stimulus and its pitch-modified versions. One version is shifted one octave higher, while the other is shifted one octave lower. Credit: UPF

The physics of the song of a musical saw

The eerie and ethereal sound emitted by the musical saw – also known as a singing saw – is very similar to that of the theremin, an electronic musical instrument. The saw has been part of the folk music tradition since the proliferation of cheap, flexible steel in the early 19th century; its sound is made by bending a metal hand saw and bowing it like a cello.

It turns out that the mathematical physics of this remarkable instrument might hold the key to designing high-quality resonators for a range of applications, according to a new study published in the Proceedings of the National Academy of Sciences (PNAS).

“How the singing saw sings is based on a surprising effect,” explains co-first author Petur Bryde, a graduate student from the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University, US. “When you strike a flat elastic sheet, such as a sheet of metal, the entire structure vibrates. The energy is quickly lost through the boundary where it is held, resulting in a dull sound that dissipates quickly.

“The same result is observed if you curve it into a J-shape,” he says. “But, if you bend the sheet into an S-shape, you can make it vibrate in a very small area, which produces a clear, long-lasting tone.”

These small areas are what physicists call localised vibrational modes – and what musicians call the “sweet spot” – a confined region of the sheet that resonates without losing energy at the edges.

The underlying mechanism behind this effect had remained a mystery until now; the team found the explanation through an analogy with a very different class of physical systems – topological insulators.

These are materials that conduct electricity on their surface or edge, but not in their interior, no matter how you cut them.

The “sweet spot” acts as an internal edge of the saw.

“By using experiments, theoretical and numerical analysis, we showed that the S-curvature in a thin shell can localise topologically-protected modes at the ‘sweet spot’ or inflection line, similar to exotic edge states in topological insulators,” says Bryde. “This phenomenon is material independent, meaning it will appear in steel, glass or even graphene.”

The researchers also found that they could tune the localisation of the mode by changing the shape of the S-curve, which is important in potential applications such as sensing, where you need a resonator that is tuned to very specific frequencies.
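
The shell mechanics worked out in the PNAS paper are far beyond a few lines of code, but the bare phenomenon – a spatially varying stiffness trapping a vibration near one spot – shows up even in a toy one-dimensional eigenvalue problem. The “confining” term below is an assumption made purely for illustration; it is not the Harvard team’s model.

```python
import numpy as np

# Toy 1D eigenproblem, purely illustrative and NOT the shallow-shell model from
# the PNAS study: a vibrating line whose effective stiffness grows away from a
# chosen point x0, loosely mimicking how curvature stiffens the sheet away from
# the inflection line. The lowest mode ends up concentrated near x0.
n = 400
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
x0 = 0.0                              # assumed location of the "sweet spot"
stiffening = 400.0 * (x - x0) ** 2    # zero at x0, large elsewhere (illustrative choice)

# Finite-difference second derivative with fixed (clamped) ends.
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
H = -lap + np.diag(stiffening)

vals, vecs = np.linalg.eigh(H)        # eigenvalues in ascending order
mode = vecs[:, 0]                     # lowest mode
print("lowest mode peaks at x =", round(float(x[np.argmax(np.abs(mode))]), 3))  # close to x0
```

Shifting x0 in this toy moves where the mode is trapped, loosely echoing the finding that reshaping the S-curve retunes the localisation.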

The researchers clamped the saw in two configurations: a J shape (left) and an S shape (right). The S shape has an inflection point (the sweet spot) in its profile, while the J shape doesn’t. Credit: Mahadevan Lab/Harvard SEAS

Exploring acoustical perception in infants

Differences in pitch and rhythm could be key factors that enable people, starting from infancy, to distinguish speech from music, according to research presented on 26 April at the annual meeting of the Cognitive Neuroscience Society (CNS) in San Francisco, US.

“We know that from age four, children can and readily do explicitly differentiate between music and language,” says cognitive neuroscientist Christina Vanden Bosch der Nederlanden, assistant professor of Psychology at the University of Toronto Mississauga in Canada.

But what about babies?

To investigate, der Nederlanden and colleagues exposed 32 four-month-old infants to speech and song, both in a sing-song, infant-directed manner (baby-talk) and in a monotone speaking voice, while recording electrical brain activity with a non-invasive electroencephalogram (EEG).

Their findings suggest that infants are actually better at tracking infant-directed sounds when they’re spoken, compared with being sung.

“This is different from what we see in adults, who are better at neural tracking sung compared to spoken utterances,” says der Nederlanden.

They also found that pitch affected the infants’ brain activity for speech, as exaggerated pitch was related to better neural tracking (perception) of baby-talk speech, compared with song.

This indicates that a lack of “pitch stability” (a wider pitch range) is an important feature for guiding babies’ attention and that the exaggerated, unstable pitch of baby-talk also helps to signal whether an infant is hearing speech or song.

According to der Nederlanden, pitch stability is a feature that “might signal to a listener ‘oh this sounds like someone singing’,” and the lack of pitch stability can conversely signal to infants that they are hearing speech.
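
As a rough acoustic illustration of what “pitch stability” means, the toy measure below quantifies how widely a voice’s fundamental frequency (f0) wanders, in semitones. The contours are invented for the example, and the CNS work measured neural tracking with EEG rather than a simple statistic like this.

```python
import numpy as np

# Illustrative only: a crude proxy for "pitch stability" as the spread of an
# f0 contour in semitones. The numbers below are invented; the study itself
# used EEG neural tracking, not this acoustic measure.
def pitch_spread_semitones(f0_hz):
    semitones = 12 * np.log2(np.asarray(f0_hz, dtype=float) / 440.0)  # relative to A4
    return float(np.std(semitones))

baby_talk_speech = [220, 310, 180, 350, 240, 400]   # wide, exaggerated sweeps
sung_phrase = [262, 262, 294, 262, 349, 330]        # steadier, discrete pitches

print("baby-talk spread:", round(pitch_spread_semitones(baby_talk_speech), 2), "semitones")
print("song spread:     ", round(pitch_spread_semitones(sung_phrase), 2), "semitones")
```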
