The Listening Beyond Your Comfort Zone

Novais
7 min read · Nov 7, 2023


https://www.globehealer.com/procedures/auditory-brainstem-response-abr/

👋Hi language enthusiasts!

We all have our comfortable native language, but what happens when we hear languages we don’t understand?

Today, we’re delving into the world of listening to languages that aren’t your native tongue. It’s a bit like enjoying a song that has a few unexpected notes. But what if those “off-key” moments had some hidden perks?

In this blog, we’ll explore the effects of listening to non-native languages, and I’ll share my own hands-on project experience. We’ll also go behind the scenes to explain the science of working with EEG data and review some research papers in the field. Now, let’s dive into this exciting journey! 🚀

🌐 Studying a second language is incredibly important in our globalized world. It opens doors to new cultures, deepens our understanding of the world, and enhances communication skills. While it can be challenging, the rewards are worth the effort. Learning a new language often requires persistence and dedication. So, while it may not be a walk in the park, the journey of mastering a second language is an enriching adventure that can transform your life.

In my personal experience, I’ve encountered the idea that one of the most effective ways to learn a new language is by immersing myself in an environment where that language is spoken. This immersive approach involves activities like listening to music, watching movies, or interacting with people who speak the language you’re trying to learn. I decided to give it a try, and to my surprise, it actually worked quite well for me. Notably, my English skills, particularly my listening comprehension, showed noticeable improvement.

This positive outcome fueled my curiosity and motivated me to delve deeper into the concept of language immersion and its impact on the brain. Hence, I’m excited to embark on the project “The Listening Beyond Your Comfort Zone”.

⚡What is EEG Data?

An electroencephalogram (EEG) records the brain’s electrical activity from the scalp using electrodes placed on the head. In the picture below, you’ll see a 32-electrode montage known as the 10–20 system.

In this article, you’ll come across various unusual names composed of two letters followed by a number. Keep in mind that these are electrode labels, and the letters indicate the scalp region:

  • Fp (pre-frontal or frontal pole)
  • F (frontal)
  • C (central)
  • T (temporal)
  • P (parietal)
  • O (occipital)

👩🏻‍💻Let’s get your hands dirty with EEG data

Data pipeline

Dataset

Firstly, you need to download the data, which can be found on the OpenNeuro website. In this dataset, participants are native English speakers who were tasked with listening to various stimuli in shuffled order. The stimuli encompassed a wide range, including 6 music genres and 6 speech types. However, for my project, I narrowed my attention to the specific task of listening to Chinese and English audio.

The tasks of interest are Chinese audio (10005) and English audio (10002)
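Once the event table is loaded, keeping only these two tasks is a one-liner. Here is a minimal sketch with NumPy — the three-column layout (sample, previous value, event ID) mirrors MNE-Python’s events array, but the rows below are made up purely for illustration:

```python
import numpy as np

# Hypothetical events array in MNE's (sample, previous, event_id) layout.
events = np.array([
    [1000, 0, 10001],   # some other stimulus
    [3000, 0, 10005],   # Chinese audio
    [5000, 0, 10002],   # English audio
    [7000, 0, 10005],   # Chinese audio
])

# Keep only the Chinese (10005) and English (10002) listening tasks.
tasks_of_interest = [10005, 10002]
mask = np.isin(events[:, 2], tasks_of_interest)
selected = events[mask]
print(selected[:, 2])  # → [10005 10002 10005]
```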

Clean it up!!🧹

In our world, things can get messy and complicated. This also applies to EEG (brain signal) data. It often has unwanted signals, like electrical noise from the surroundings or even tiny eye movements from participants. That’s why we have to clean up the EEG data before we can really dig into it and understand what’s going on in the brain.

Our data-cleaning workflow involves five essential steps:

1. Filter: This step screens out unwanted frequencies from the EEG data. For example, in this project I use a band of 0.1–40 Hz, since frequencies outside this range may include AC electrical interference. After filtering, you can see that the data appears much smoother. NOTE: Always filter EEG data first, before epoching, because low-frequency drifts are easier to remove cleanly from continuous data.
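In MNE-Python this is a single call, `raw.filter(0.1, 40.)`. The underlying idea can be sketched with SciPy on a simulated single channel (the sampling rate and signal are illustrative, not from the dataset):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250.0                      # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)    # 10 s of simulated data

# Simulated channel: a 10 Hz "brain" rhythm plus 50 Hz AC line noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

# 0.1-40 Hz band-pass in second-order sections (numerically stable),
# applied forwards and backwards so there is no phase shift.
sos = butter(4, [0.1, 40], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, signal)

# Power at 50 Hz drops sharply while the 10 Hz rhythm survives.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
```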

2. Epoch: This process segments the continuous EEG data into smaller chunks (“epochs”) that correspond to the tasks participants were performing at specific times. Once you’ve epoched the data, you will notice vertical lines that separate the EEG data into individual epochs.
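Conceptually, epoching just slices fixed windows out of the continuous recording around each stimulus onset (MNE’s `mne.Epochs` does this plus bookkeeping). A NumPy sketch with made-up onsets:

```python
import numpy as np

fs = 250                         # sampling rate (Hz)
n_channels, n_samples = 32, 30 * fs
rng = np.random.default_rng(0)
data = rng.standard_normal((n_channels, n_samples))   # continuous EEG

# Hypothetical stimulus onsets (in samples) for one task.
onsets = [500, 2000, 4000]
tmin, tmax = -0.2, 1.0           # window around each onset, in seconds
pre = int(round(-tmin * fs))     # samples before onset
post = int(round(tmax * fs))     # samples after onset

# Stack one (channels x time) slice per onset.
epochs = np.stack([data[:, on - pre: on + post] for on in onsets])
print(epochs.shape)  # → (3, 32, 300): (n_epochs, n_channels, n_times)
```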

3. Remove artifacts: As mentioned above, EEG data is often contaminated with other signals such as muscle movements, eye blinks, or electrode movements. Now we detect and remove these unwanted signals from the data. Thanks to technological advancements, this process has become much more accessible: the Autoreject library in Python can help identify artifact components for us.

The picture above displays components that were flagged as artifacts. For instance, the first component corresponds to an eye blink. Note that the number of artifact components may vary when you apply Autoreject to your own data.
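Autoreject learns per-channel rejection thresholds from the data itself (`AutoReject().fit_transform(epochs)`); a deliberately crude fixed-threshold version of the same idea looks like this (the threshold and simulated amplitudes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# 10 epochs x 4 channels x 300 samples of simulated EEG, in microvolts.
epochs = rng.normal(0, 10, size=(10, 4, 300))
epochs[3, 0, 100] += 200        # inject a blink-like spike into epoch 3

# Reject any epoch whose peak-to-peak amplitude exceeds a fixed threshold.
threshold_uv = 150
ptp = epochs.max(axis=2) - epochs.min(axis=2)   # peak-to-peak per channel
bad = (ptp > threshold_uv).any(axis=1)          # any channel over threshold
clean = epochs[~bad]
print(bad.nonzero()[0], clean.shape)  # → [3] (9, 4, 300)
```

Autoreject improves on this by estimating the thresholds per channel via cross-validation and by repairing, not just dropping, bad epochs.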

4. Re-referencing: Re-referencing EEG data is crucial for improving data quality by establishing a common reference point across electrodes. The choice of reference matters; using the mastoids as a reference is advantageous when available because it minimizes common noise sources and helps preserve signal differences for analysis and source localization.

TP9 and TP10 are the mastoid electrodes used as the reference
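In MNE this is `raw.set_eeg_reference(ref_channels=['TP9', 'TP10'])`; numerically it just subtracts the mastoid average from every channel. A sketch with illustrative channel names and random data:

```python
import numpy as np

channels = ["Fp1", "Fz", "Cz", "TP9", "TP10"]
rng = np.random.default_rng(2)
data = rng.standard_normal((len(channels), 1000))   # channels x samples

# Average the two mastoid channels and subtract it from every channel.
mastoid_idx = [channels.index("TP9"), channels.index("TP10")]
reference = data[mastoid_idx].mean(axis=0)
rereferenced = data - reference

# After re-referencing, the mastoid average is zero at every time point.
print(np.allclose(rereferenced[mastoid_idx].mean(axis=0), 0.0))  # → True
```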

5. Evoked: The evoked response is computed by averaging all epochs of the same task into one chunk. Now our data is ready for visualization. 🎉
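In MNE this is just `epochs.average()`. The same step in plain NumPy, using a simulated ERP-like bump buried in per-trial noise, shows why averaging works — the time-locked response survives while the noise shrinks:

```python
import numpy as np

rng = np.random.default_rng(3)
n_epochs, n_channels, n_times = 40, 32, 300

# Simulated epochs: a common ERP-like Gaussian bump peaking at sample 150,
# plus independent noise on every trial and channel.
erp = np.exp(-((np.arange(n_times) - 150) ** 2) / (2 * 20 ** 2))
epochs = erp + rng.normal(0, 1.0, size=(n_epochs, n_channels, n_times))

# The evoked response is simply the average across trials of one task.
evoked = epochs.mean(axis=0)            # shape: (n_channels, n_times)
print(evoked.shape)                     # → (32, 300)
print(int(evoked.mean(axis=0).argmax()))  # peak lands near sample 150
```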

🕵 Let’s uncover the story hidden within the data

This visualization is called an event-related potential (ERP) graph; it shows voltage changes over time for each electrode. Electrodes around the left frontal area exhibit similar patterns for both native and non-native languages at around 1 second after onset. However, one noticeable difference is that, at 0.6 seconds after onset, the native language shows lower voltage than the non-native language.

As we zoom out to observe the broader perspective of overall voltage patterns, a noteworthy trend emerges. The native language consistently displays higher overall voltage levels in comparison to the non-native language. This suggests that there may be distinct electrical activity patterns associated with language processing, with the native language eliciting stronger voltage responses throughout the observed period.
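This kind of “overall voltage” comparison boils down to comparing mean absolute ERP amplitude between the two conditions. A sketch, where the two evoked arrays are simulated stand-ins for the native and non-native ERPs (the scaling is invented so the contrast is visible):

```python
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_times = 32, 300

# Stand-ins for the two evoked responses; "native" is scaled up slightly.
evoked_nonnative = rng.normal(0, 1.0, size=(n_channels, n_times))
evoked_native = 1.5 * rng.normal(0, 1.0, size=(n_channels, n_times))

# Compare overall response magnitude across all channels and time points.
amp_native = np.abs(evoked_native).mean()
amp_nonnative = np.abs(evoked_nonnative).mean()
print(amp_native > amp_nonnative)  # → True (by construction here)
```

With real data you would run a proper statistical test across participants rather than eyeballing a single mean, which is exactly the limitation discussed below.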

🔬How about what other studies have found?

Robert Jagiello and his colleagues examined how the brain responds to “familiar music” versus “unfamiliar music”.

Their study revealed two distinct clusters of channels that displayed significant differences in responses under these conditions. The first cluster, situated in the left parietal region, exhibited a significant disparity between conditions from 540 to 750 milliseconds. Meanwhile, the second cluster, located in the right frontotemporal region, showed a significant difference from 350 to 750 milliseconds. These observations are consistent with findings from previous old/new recognition memory studies and support the framework of dual-process theories of memory. Specifically, the right frontotemporal cluster is associated with feelings of familiarity, while the later responses in the left-parietal cortex are linked to retrieval processes.

But I wondered why my results seemed strange 🤔

So, I looked for answers and came across a second paper that might hold the key.

In the second study, participants performed a plausibility-judgment task on 300 sentences with different levels of difficulty, varying factors like meaning, grammar, clarity, and speech speed. The study used a method called functional near-infrared spectroscopy (fNIRS) to measure brain activity.

What they discovered was quite interesting. They found that the middle frontal gyrus (MFG), particularly the right orbital MFG (BA10), plays a key role in processing complex auditory information and allocating attentional resources for decision making. When the listening task became more challenging, activity in BA10 increased, but during the most difficult conditions there was a significant decrease in activation.

These findings suggest that listeners paid more attention as the task became harder but disengaged when it became too difficult. This could be because the task exceeded the cognitive resources available for comprehension, which aligns with behavioral findings and existing models of auditory cognition and listening effort. This study may be the first to uncover a potential neural marker for sustained auditory attention and listening effort.

💭Discussion

Based on my results, it seems that the frontal lobe exhibits a significant difference between native and non-native listening. This difference may be associated with the complexity of listening. When we listen to unfamiliar things, we need to pay more attention, leading to increased frontal lobe activity, as one of its responsibilities is related to attention.

However, it’s important to note that my experiment included only a single participant’s data, which weakens the strength of any conclusion. Replicating the analysis with a larger sample size is therefore necessary.

As we draw our journey to a close, I’d like to express my deep gratitude to Brain Coding Camp and all the mentors for their consistent support and guidance throughout my learning journey. I also want to extend my appreciation to you, the reader, for participating in this exploration with me. Your continued interest and active involvement are greatly appreciated, and I eagerly anticipate sharing further discoveries with you.

💖💗🥰💞
