July 4, 2024

Unraveling the Auditory Code: How Our Brains Discern Music from Speech

Music and speech are two distinct kinds of sound that permeate our daily lives. But have you ever wondered how our brains tell them apart?

An international research team, led by Andrew Chang, a postdoctoral fellow at New York University’s Department of Psychology, has shed light on this intriguing question through a series of experiments. Their findings could help optimize therapeutic programs that use music to aid speech recovery in individuals with aphasia. This language disorder affects more than 1 million Americans, including renowned personalities such as Wendy Williams and Bruce Willis.

Despite the apparent differences between music and speech in pitch, timbre, and sound texture, the researchers discovered that our auditory system relies on surprisingly basic acoustic features to tell the two apart. In essence, clips of mere noise whose volume fluctuates slowly and steadily tend to be heard as music, while clips with faster, more erratic fluctuations tend to be heard as speech.
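To make this concrete, here is a minimal sketch in Python of how one might synthesize amplitude-modulated noise clips like those described. This is not the researchers' actual stimulus code: the sample rate, modulation rates, and the `irregularity` parameter are illustrative assumptions chosen to match the rates discussed in this article.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (an assumed value)

def am_noise(duration_s, mod_rate_hz, irregularity=0.0, seed=0):
    """White noise whose loudness envelope fluctuates at roughly mod_rate_hz.

    irregularity=0.0 gives a steady sinusoidal envelope (music-like);
    larger values add random phase drift, making it erratic (speech-like).
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    noise = rng.standard_normal(t.size)
    # A random walk in phase makes the envelope fluctuate irregularly.
    drift = irregularity * np.cumsum(rng.standard_normal(t.size)) / SAMPLE_RATE
    envelope = 0.5 * (1 + np.sin(2 * np.pi * mod_rate_hz * t + drift))
    return noise * envelope

# Slow, steady modulation (~2 Hz): tends to be heard as music.
music_like = am_noise(duration_s=4.0, mod_rate_hz=2.0, irregularity=0.0)

# Faster, erratic modulation (~5 Hz): tends to be heard as speech.
speech_like = am_noise(duration_s=4.0, mod_rate_hz=5.0, irregularity=50.0)
```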

The team measured the rate of these fluctuations in hertz (Hz): a higher value means more occurrences (or cycles) per second. For instance, people usually walk at a pace of 1.5 to 2 steps per second, which translates to 1.5–2 Hz. The beat of Stevie Wonder’s iconic 1972 hit “Superstition” hovers around 1.6 Hz, while Anna Karina’s 1967 hit “Roller Girl” clocks in at a brisk 2 Hz. Speech, by contrast, is typically two to three times faster, falling within the 4–5 Hz range.
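As a quick sanity check on these numbers, beat rate in Hz is just tempo divided by 60. The beats-per-minute figures below (roughly 96 BPM for “Superstition” and 120 BPM for “Roller Girl”) are assumptions back-calculated from the Hz values quoted above, not verified tempo measurements.

```python
def bpm_to_hz(bpm: float) -> float:
    """Convert beats per minute to beats (cycles) per second."""
    return bpm / 60.0

print(bpm_to_hz(96))   # ~96 BPM -> 1.6 Hz, matching "Superstition"
print(bpm_to_hz(120))  # ~120 BPM -> 2.0 Hz, matching "Roller Girl"
# Typical speech syllable rates of 4-5 Hz are thus 2-3x faster than
# these musical beats, as noted above.
```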
