Parvati Jayakumar

Graduate Research Assistant

As a graduate student at the University of Washington, Parvati is pursuing her master's in Data Science. Her background in Electronics and Communication and her experience as a Data Analyst in health tech perfectly complement her lifelong fascination with auditory neuroscience. She thrives on working with data (and people!) and aspires to be a leading data scientist who can make positive changes in people's lives. When she's not around her laptop, she might be found honing her athletic skills, playing team sports, exploring Washington, painting, cooking, talking, or just dreaming big (:D)!

Claudia Conceicao

Lab Manager

Claudia is a University of South Florida graduate. She majored in Biomedical Sciences and minored in International Studies. She hopes to become a physician who works to address healthcare disparities, especially in underserved communities. She is passionate about learning new skills, such as languages. In her free time, Claudia loves to play with her kitty, Pluma, explore new places, hang out with friends, and try new foods.

Bilingual Infants’ Perceptual Narrowing and Speech Sound Awareness

Presentation by Ines Sohn

Ines Juhee Sohn, Bonnie K. Lau

Through the Life Sciences Summer Undergraduate Research Program (LSSURP) at the University of Minnesota, Ines investigated perceptual narrowing in bilingual infants: the process by which infants' initially language-general phoneme perception tunes to their native language(s) over the first year of life. They compared "typically developing" monolingual and bilingual infants' phoneme sensitivity and found that the two groups of infants were largely similar: both could discriminate phonemes from all languages at around 4 months of age, but could only distinguish phonemes from their native language(s) by around 8-12 months of age. Due to the potential confounds of acoustic similarity in stimulus selection and the similarity between the chosen languages, more research is necessary to clarify the bilingual phoneme sensitivity timeline.

Ines Sohn with Poster

Multitalker speech perception thresholds in autistic young adults

Presentation by Katie Emmons

Katherine Emmons, Annette Estes, Stephen Dager, Adrian KC Lee, Bonnie K. Lau

Multitalker speech perception, the ability to listen to one speaker in the presence of several competing speakers, is an important skill used in everyday life. Spatial cues, or timing and intensity differences between the two ears, play a critical role in segregating competing talkers and help listeners localize where the voice of interest is coming from. For example, if the speaker of interest is standing to a listener's right, their voice will arrive at the right ear faster and louder than at the left ear. Normal-hearing (NH) neurotypical (NT) listeners can use spatial cues to selectively attend to a specific speaker while ignoring competing speakers' voices. In this study, researchers found that participants with autism spectrum disorder (ASD) may struggle to use spatial cues to selectively attend to one talker in the presence of competing talkers.
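
To make the timing cue concrete, here is a small illustrative Python sketch (not from the study) that estimates the interaural time difference (ITD) using the classic Woodworth spherical-head approximation; the head radius and speed of sound are assumed textbook values.

    import math

    def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound_mps=343.0):
        """Approximate the interaural time difference (ITD), in seconds, for a
        sound source at a given azimuth, using the Woodworth spherical-head
        model: ITD = (a / c) * (sin(theta) + theta), with theta in radians."""
        theta = math.radians(azimuth_deg)
        return (head_radius_m / speed_of_sound_mps) * (math.sin(theta) + theta)

    # A talker standing 90 degrees to the listener's right arrives at the
    # right ear roughly 0.66 ms earlier than at the left ear:
    print(f"ITD at 90 degrees: {woodworth_itd(90) * 1e6:.0f} microseconds")  # ~656

Even sub-millisecond timing differences like this, together with intensity differences between the ears, are enough for the auditory system to localize a talker.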

Research question:

Can autistic young adults use spatial cues to selectively attend to one of three simultaneous sentences?

What did the researchers do?

Researchers asked 24 participants (12 ASD participants; 12 comparison group participants), aged 21-23 years, to take part in a study at the University of Washington. Participants completed a multitalker listening task in which they listened to three people talking at once. Each person said a sentence containing a keyword, a color, and a number. Participants were asked to report back the color and number spoken by the talker whose sentence contained the keyword "Charlie"; this speaker was known as the 'target talker.' The target talker was always a male voice that came from directly in front of the participant.

Example of the different speakers presented to the participant. Red (left) and blue (right) are masker voices, and green (middle) is the target talker.

Participants were asked to indicate the color and number spoken by the target talker using a response panel.

Response panel presented to the participant when answering.

Researchers measured speech perception thresholds in terms of target-to-masker ratios (TMR; the level difference, in decibels, between the target talker and the masker talkers) for each participant.
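
As a rough illustration of what a TMR measures, here is a minimal Python sketch (not the study's code), assuming equal-level maskers and RMS-based signal levels:

    import math

    def rms(samples):
        """Root-mean-square amplitude of a signal."""
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    def tmr_db(target, masker):
        """Target-to-masker ratio in decibels: positive values mean the
        target talker is louder than each masker."""
        return 20 * math.log10(rms(target) / rms(masker))

    # Toy example: the target tone has twice the masker's amplitude,
    # so the TMR comes out to about +6 dB.
    target = [2.0 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(8000)]
    masker = [1.0 * math.sin(2 * math.pi * 330 * t / 8000) for t in range(8000)]
    print(f"TMR = {tmr_db(target, masker):.1f} dB")  # ~6.0 dB

A threshold near 0 dB means a listener can follow the target even when it is no louder than the maskers; a positive threshold means the target must be turned up relative to the maskers before the listener can follow it.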

What were the results?

Participants in the ASD group were able to complete the multitalker listening task, though they required higher target-to-masker ratios than comparison group participants (i.e., the target talker had to be louder relative to the two competing talkers).

Multitalker speech perception thresholds in ASD and comparison groups. Mean ± SE shown with bars. Individual data points shown with solid points. Overall, ASD group participants required the target talker to be louder than competing talkers (ASD group M = 2.00 dB, SE = 1.18; comparison group M = -1.91 dB, SE = 1.48).

Why is this important?

Results suggest adults with ASD may have difficulty using spatial cues to separate simultaneous auditory streams. In multitalker situations, autistic adults may benefit from having their communication partner step away from competing talkers, so that the partner's voice becomes louder than other competing voices in the room.
