Components of Communication

NU Sci Staff
Published in NU Sci
Jan 2, 2020

By Victoria Miller-Browne, Biology, 2020

Source: Pixabay

This article was originally posted as part of Issue 36: Local.

As light shines into the office of David Lewkowicz, the lab coordinator of the Communication Development Lab (CDL) at Northeastern, it illuminates textbooks with subjects ranging from sensation and perception to the analysis of infancy as a life stage. “In our everyday world we are constantly bombarded by sensory inputs that are stimulating different sensory systems,” notes Dr. Lewkowicz as he recounts the major aims of the research done at this lab.

The goal of research at the CDL is to understand the basic processes of perception and cognition in infants and young children, which includes analyzing multisensory integration and processing. In layman’s terms, it is how a person processes multiple stimuli, such as seeing someone’s face and hearing their voice, at the same time. The lab achieves its goals by utilizing electroencephalography (EEG) and eye-tracking technology to analyze attention shifts of infants and children in a multitude of situations.

Some of the studies demonstrate perceptual narrowing, the process by which experience shapes perception, which occurs in all infants and children as they grow and develop. This concept can be seen in the results of a race-effect study, published in 2017, that examined the tendency of children to recognize faces of their own race more easily. The study, which tested four- to six-month-old and 10- to 12-month-old infants across three experiments, found that the 10- to 12-month-olds discriminated mostly own-race faces when the faces were accompanied by silence or non-speech sound, while the four- to six-month-olds discriminated both other-race and own-race faces.

One might be confused by data showing that older infants have more trouble overall with facial discrimination than younger ones. This could be because babies are born with a primitive ability to detect low-level features, including audio-visual (AV) synchrony. As they develop, instead of absorbing all the stimuli the world offers, children begin to detect identity patterns such as human facial features and the characteristics of their native language. Slowly but surely, their perception narrows onto more commonly encountered categories, such as specific races, animals, etc. This is the point when humans move to the next stage of life, deemed perceptual broadening, with increased expertise in native categories of information.

Similar results were found in a monkey study done by the CDL that dealt with matching a monkey’s moving face to a corresponding vocalization. The results showed that four- to six-month-old subjects were better able to match monkey faces to vocalizations based on AV synchrony, while eight-month-olds were better able to match human faces than monkey faces. Dr. Lewkowicz summarizes this change as infants “becom[ing] an expert with the things you have to deal with most.” Both processes of perceptual broadening and narrowing are exemplified in studies analyzing selective attention to a talker’s mouth from infancy onward.

Using eye-tracking technology, researchers studied whether increased attentional focus on the mouth is mediated by both AV synchrony and language experience. They found that, regardless of language, AV synchrony mediated attention to the speaker’s mouth in the initial stages of language development, with this effect decreasing as language expertise grew. Through this research, the lab hopes to better understand the underlying mechanisms that allow infants and young children to learn, and eventually to provide insights into developmental challenges such as dyslexia and autism.

In future studies, the lab plans to use head-mounted eye-trackers, worn by both the baby and the parent, to analyze social interactions and joint attention between babies and caregivers. This data can hopefully give insight into a developing child’s world and how certain multisensory cues can affect it. One of the last things I asked Dr. Lewkowicz was his favorite aspect of the research process and what motivates him through it. “To finally get the data from a study done for a long time and to finally get the story from what the numbers tell you… it’s the process of discovery, and that’s why I do science.”

Developmental Science (2017). DOI: 10.1111/desc.12381

Infancy (2010). Vol. 15, No. 1, pp. 46–60

Infant Behavior and Development (1992). DOI: 10.1016/0163-6383(92)80002-C

NU Sci is Northeastern University’s student-run science magazine, publishing science news since 2009.