You’re in a noisy restaurant. How does your brain focus on your conversation over all the others?

Picture this: you are in a crowded restaurant, chatting with a friend about the details of your day. You are both engrossed in the conversation, as if you were the only people in the room. Meanwhile, dozens of other conversations are happening, music is blaring, and a football game is playing on the TV behind you. How can you possibly focus your attention on one conversation when there are hundreds of other auditory stimuli around you? New research uncovers the science behind the brain’s capacity to do just that.

A recent study from the University of Rochester Medical Center (URMC) explores what happens in the brain when a person focuses on a single speaker or sound while surrounded by many others. Led by Edmund Lalor, Ph.D., associate professor of Neuroscience and Biomedical Engineering at URMC, the researchers observed distinct neural activity that allowed listeners to process one stream of auditory input while tuning out another.

In the study, researchers asked participants to listen to two stories at once and focus their attention on only one. Using EEG brainwave recordings, they observed that the attended story was converted into phonemes, the linguistic units of sound that distinguish one word from another, while the unattended story was not.

These results indicate that the brain makes an extra effort to understand the words coming from the speaker a listener is attending to, but does not do the same with surrounding noises and conversations. That additional step is what separates sound that gets converted into recognizable words from sound that does not.

“Our findings suggest that the acoustics of both the attended story and the unattended or ignored story are processed similarly, but we found there was a clear distinction between what happened next in the brain,” Lalor explains in a statement. “That conversion is the first step towards understanding the attended story. Sounds need to be recognized as corresponding to specific linguistic categories like phonemes and syllables, so that we can ultimately determine what words are being spoken – even if they sound different — for example, spoken by people with different accents or different voice pitches.”
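To make that distinction concrete, the sketch below illustrates in Python the general logic behind this kind of analysis: a simple encoding model predicts a brain signal from the acoustics of a story alone, and then from acoustics plus phoneme-level features. If adding the phoneme features improves the prediction, the recording carries linguistic information beyond raw sound, which is the signature reported for the attended story. This is only a minimal illustration with made-up placeholder data and hypothetical regressors (`envelope`, `phoneme_onsets`), not the study’s actual analysis pipeline.

```python
import numpy as np

def ridge_fit_predict(X_train, y_train, X_test, lam=1e2):
    """Fit a ridge regression (features -> EEG) and predict held-out EEG."""
    w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(X_train.shape[1]),
                        X_train.T @ y_train)
    return X_test @ w

# Synthetic placeholder data for one story: an acoustic envelope and a
# hypothetical phoneme-onset regressor (both invented for illustration).
rng = np.random.default_rng(1)
n = 4000
envelope = np.abs(rng.standard_normal(n))
phoneme_onsets = (rng.random(n) < 0.1).astype(float)

# A simulated EEG channel that tracks both acoustics and phoneme onsets,
# standing in for the response to an attended story.
eeg = 0.6 * envelope + 0.4 * phoneme_onsets + rng.standard_normal(n)

half = n // 2
X_acoustic = envelope[:, None]                          # acoustics only
X_full = np.column_stack([envelope, phoneme_onsets])    # acoustics + phonemes

pred_acoustic = ridge_fit_predict(X_acoustic[:half], eeg[:half], X_acoustic[half:])
pred_full = ridge_fit_predict(X_full[:half], eeg[:half], X_full[half:])

r_acoustic = np.corrcoef(pred_acoustic, eeg[half:])[0, 1]
r_full = np.corrcoef(pred_full, eeg[half:])[0, 1]

# A gain from the phoneme features means the signal carries linguistic
# information beyond raw sound; for an ignored story, no such gain is expected.
print(f"acoustic-only r = {r_acoustic:.3f}, acoustic+phoneme r = {r_full:.3f}")
```

In the study itself, the acoustic response looked similar for both stories; only the attended one showed the extra, phoneme-level step.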

The study also points to a new way to monitor human auditory processing. Over the course of their research, Lalor’s team demonstrated that EEG brainwave signals can be used to identify which speaker an individual is focusing on in a multi-speaker environment. “Our research showed that – almost in real time – we could decode signals to accurately figure out who you were paying attention to,” said Lalor.
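As a rough illustration of how such decoding can work, here is a minimal Python sketch of the widely used stimulus-reconstruction idea: a linear model is trained to reconstruct a speech envelope from time-lagged EEG, and whichever speaker’s envelope best matches the reconstruction is taken as the attended one. The data are synthetic placeholders and the function names (`train_decoder`, `attended_speaker`) are illustrative, not the study’s actual code.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of every EEG channel as decoder features."""
    n_samples, n_ch = eeg.shape
    X = np.zeros((n_samples, n_ch * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_ch:(lag + 1) * n_ch] = eeg[:n_samples - lag]
    return X

def train_decoder(eeg, envelope, n_lags=16, ridge=1e3):
    """Fit a backward (EEG -> speech envelope) linear model with ridge regularization."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)

def attended_speaker(eeg, env_a, env_b, weights, n_lags=16):
    """Reconstruct the envelope from EEG and report which speaker it matches better."""
    recon = lagged(eeg, n_lags) @ weights
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return ("speaker A" if r_a > r_b else "speaker B"), round(r_a, 3), round(r_b, 3)

# Synthetic placeholders: 64-channel EEG at 64 Hz for 60 s, plus two speech envelopes.
rng = np.random.default_rng(0)
fs, dur, n_ch = 64, 60, 64
env_a = np.abs(rng.standard_normal(fs * dur))
env_b = np.abs(rng.standard_normal(fs * dur))
eeg = 0.5 * env_a[:, None] + rng.standard_normal((fs * dur, n_ch))  # EEG weakly tracks speaker A

weights = train_decoder(eeg, env_a)   # trained on a segment where attention is known
print(attended_speaker(eeg, env_a, env_b, weights))
```

In practice, a decoder like this would be evaluated on held-out data using short decision windows, which is roughly what a near-real-time readout of attention requires.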

John Foxe, Ph.D., director of the Del Monte Institute for Neuroscience, was a co-author of the study. By going beyond the standard approach of looking at effects on averaged brain signals, the work is expected to contribute to a more comprehensive understanding of human auditory processing in future studies.

This study was published in The Journal of Neuroscience.


About the Author

Steve Fink

Steve Fink is the Editor-in-Chief of BrainTomorrow.com, GutNews.com and StudyFinds.com. He was formerly the Vice President of News Engagement for CBS Television Stations’ websites and spent 20 years with CBS.
