
The science and mystery of listening

Ashley Burgoyne researches how humans listen to music. Why are some songs catchy while others are not? How does my listening differ from yours? And why is the Spotify algorithm so good?

17 March 2021, Iris Proff

Photo by Dollar Gill. Edited by Arco Mul.

As a boy growing up in a village in the northeastern tip of the US, Ashley Burgoyne started singing at his aunt’s piano along with his cousins. More often than not, Ashley would stay and keep singing long after his cousins had left. His fascination with music only grew in the following years. He entered university to study music and mathematics and soon became a semi-professional choral singer. Only when he moved to Europe for a postdoc at the ILLC did Ashley give up his choral singing to make more room for his research into how humans listen to music.

“It’s almost embarrassing that we don’t know what people hear when they listen to music,” he says. Traditionally, psychology assumes that there are two types of listening. On the one hand, there is background music – the kind of music that plays in the supermarket while you do your shopping, or in the bathroom of a fancy restaurant. On the other, there is deep listening, where people concentrate fully on the music and nothing else.

In real life, however, this dichotomy does not hold. Most of the time, people consciously choose music they enjoy, but still do something else while listening – they read the news, cook, run, drive a car or work. When our attention is divided between two tasks, our brain filters things out. What is it that people pick up from the music they listen to? What sticks and what gets filtered out? And how might this differ between individuals?

What makes songs stick in your head

Ashley Burgoyne is a musicologist at the ILLC.

Working towards these grand questions, Ashley investigates what it is that makes some pieces of music catchy – and others not. To that end, the Music Cognition Group at the ILLC devised Hooked on Music, a new type of experiment to measure whether someone remembers a song. Usually, this is done by asking for the title and artist of a song. However, this approach conflates two different memory processes, Ashley points out. You might very well recognize a song without having a clue about its title or the artist performing it.

Singing or humming-along tasks are problematic as well: some people are simply not good at singing, or uncomfortable with it. Instead, in the new experiment people are asked to sing along in their head while the song is paused for a few seconds. Afterwards, they indicate whether they believe the song resumes at the right or the wrong moment.
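To make the paradigm concrete, the sketch below models the logic of one such trial in Python. It is only an illustration, not the Hooked on Music implementation: the timings, offsets and function names are invented for the example, and actual audio playback is left out.

```python
import random
from dataclasses import dataclass

# Hypothetical parameters; the real experiment's timings and offsets may differ.
MUTE_SECONDS = 4.0          # how long the participant sings along "in their head"
WRONG_OFFSET_SECONDS = 2.0  # how far a "wrong" continuation is shifted

@dataclass
class TrialResult:
    song_id: str
    continuation_correct: bool      # did the song actually resume at the right spot?
    participant_said_correct: bool  # what the participant reported
    recognized: bool                # response matches the ground truth

def run_trial(song_id: str, cue_seconds: float, ask_participant) -> TrialResult:
    """One recognition trial: play up to the cue, pause for a few seconds,
    then resume either at the true position or at a shifted ('wrong') one."""
    continuation_correct = random.random() < 0.5
    resume_at = cue_seconds + MUTE_SECONDS
    if not continuation_correct:
        resume_at += random.choice([-1, 1]) * WRONG_OFFSET_SECONDS

    # In a real experiment these lines would drive an audio player;
    # here we only model the decision logic.
    # play(song_id, start=0.0, stop=cue_seconds)
    # wait(MUTE_SECONDS)
    # play(song_id, start=resume_at)

    said_correct = ask_participant(song_id)
    return TrialResult(song_id, continuation_correct, said_correct,
                       recognized=(said_correct == continuation_correct))

# Example run: a 'participant' who always answers "yes, correct continuation".
result = run_trial("bohemian-rhapsody", cue_seconds=30.0,
                   ask_participant=lambda song: True)
print(result)
```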

By now, the researchers have designed four versions of this task. They started off with an international version based on hits from the British charts and a Dutch version based on the famous Top 2000, a list of popular songs that people in the Netherlands vote on each year. Currently, the research team is running two new editions of the experiment, with Eurovision and Christmas songs. You can participate in the experiments here.

According to this research, what is it that makes music memorable? “The melody seems to dominate every other aspect of the music”, says Ashley. Simple, conventional melodies that are easy to sing seem to be more memorable than complex and unusual ones. It goes without saying that factors other than the music itself play an important role as well. Take, for instance, Queen’s famous Bohemian Rhapsody. The song does not score particularly high on musical memorability. That it became a tune everyone can sing along with might have as much to do with its history and context as with the music itself.

Marketing, the timing of release, the name of the artist or label, and pure luck – all of this influences whether a piece of music becomes successful. Ashley found that 20 percent of a song’s memorability was explained by musical features; the remaining 80 percent was explained by all the other factors.

Hunting for the right representation of musical experience

“I see musical features as poor creatures trying to survive in the world of music business,” says Ashley. Typically, musical features are properties from music theory, such as harmony, melody, and rhythm. But do those features really describe what people pick up from music? Unless you have had musical training, you are not likely to consciously notice whether a song is in D major or A minor. You will probably not be able to tell a four-four from a three-four rhythm, or to admire the intricate succession of intervals from which a melody is constructed. These properties certainly influence how you perceive music – but do they capture your experience in its entirety?

Music-theoretical properties were long thought to express facts about music, but this assumption appears questionable when comparing music perception across cultures. The concept of dissonance is at the root of Western music theory: there are certain ‘consonant’ combinations of notes that sound pleasant to our ears, and other ‘dissonant’ combinations that are unpleasant, that don’t quite fit. A study from 2016 found that native Amazonians, who have not been exposed to Western music, perceive consonant and dissonant sounds as equally pleasant – challenging the idea that dissonance is an objective property of music.

Might there be musical features that are more universal, and that map better onto human listening in the real world than music-theoretical properties do?

Defining musical experience with AI

The audio streaming provider Spotify also works with musical features, but those have little to do with music theory: they capture how happy or sad, how danceable and how energetic a song is, whether it is acoustic, and whether it contains vocals. Spotify uses artificial intelligence to automatically assign these features to songs, a method called feature learning. The resulting features are then used by other algorithms to suggest new songs to Spotify users or to create playlists of songs that are similar to each other. How exactly any of this works is a well-kept company secret, but if you are curious, you can check how Spotify classifies your favorite songs here.
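For the curious, the sketch below shows how such features could be retrieved with spotipy, a third-party Python client for the Spotify Web API. The client credentials and the search query are placeholders, the field names follow Spotify’s documented audio-features object, and the endpoint’s availability may of course change over time.

```python
# Sketch: fetching Spotify's audio features for one track with the spotipy client.
# Requires `pip install spotipy` and your own Spotify API credentials;
# the client ID and secret below are placeholders.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
))

# Look up a track and pull the features mentioned in the text.
track = sp.search(q="Queen Bohemian Rhapsody", type="track", limit=1)["tracks"]["items"][0]
features = sp.audio_features([track["id"]])[0]

for name in ("valence", "energy", "danceability", "acousticness", "instrumentalness"):
    print(f"{name:>16}: {features[name]:.2f}")  # each feature ranges from 0 to 1
```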

The graph below shows the Spotify Audio Features of some popular songs, where every feature ranges from 0 to 1. Most feature scores make intuitive sense: the rhythmic, happy Booty Swing by Parov Stelar is an invitation to dance. Beethoven’s Moonlight Sonata – quietly and elegantly performed on a piano – scores extremely low on energy, while OutKast’s Hey Ya! is an outburst of energy.

According to Spotify’s valence feature, Adele’s heartbroken ballad Make You Feel My Love is almost as sad as it gets, while Earth, Wind & Fire’s September scores 98 percent happy. Queen’s legendary Bohemian Rhapsody sits near the middle for most of the features.

As with many artificial intelligence tools, this method is vulnerable to biases. For instance, Latin music tends to be labeled as ‘happy’ by the Spotify algorithm. “On the surface, it feels like Latin music might be happier in general,” says Ashley. “But once you start thinking about the broader implications of that – that entire regions of the world are systematically listening to happier music than the rest of us – all of a sudden it feels more problematic.”

Despite such issues, using artificial intelligence to better understand human listening is the future of the field, the researcher believes. In the near future, the Music Cognition Group plans to do research into automatic feature learning as well, in order to find musical features that map better onto human listening. “I do still sing occasionally,” says Ashley. “But there just isn’t enough time to do it all.”