ASL mismatch responses in MEG

Auditory mismatch responses (MMR) have been used in spoken language research to examine the automatic detection of linguistic anomalies and changes at the phonetic, phonological, lexical, and morpho-syntactic levels. To date, very few studies have addressed similar questions for sign languages using the visual mismatch response (vMMR). Comparing the localization of lexical MMR effects in spoken and sign languages can provide further insight into cross-modal neural mechanisms of lexical access. In the current study, we aim to elicit lexical mismatch responses in American Sign Language (ASL) and to localize the resulting vMMR using MEG.
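As a rough illustration of the kind of analysis involved, the sketch below computes a deviant-minus-standard vMMR contrast and localizes it with MNE-Python. The file names, event labels, and inverse-solution settings are placeholder assumptions for illustration only; the poster does not specify the actual analysis software or parameters.

```python
# Minimal sketch (assumed pipeline, not the authors' code): vMMR contrast and
# source localization with MNE-Python. File paths and event labels
# ("standard", "deviant") are placeholders.
import mne

# Preprocessed, epoched MEG data with standard and deviant sign stimuli
epochs = mne.read_epochs("subject01_asl_vmmr-epo.fif")

# Average each condition and form the mismatch contrast (deviant - standard)
evoked_std = epochs["standard"].average()
evoked_dev = epochs["deviant"].average()
vmmr = mne.combine_evoked([evoked_dev, evoked_std], weights=[1, -1])

# Source localization: forward model, noise covariance, inverse operator
fwd = mne.read_forward_solution("subject01-fwd.fif")
noise_cov = mne.compute_covariance(epochs, tmax=0.0)  # baseline interval
inv = mne.minimum_norm.make_inverse_operator(vmmr.info, fwd, noise_cov)

# Distributed source estimate of the vMMR (dSPM; lambda2 = 1/SNR^2)
stc = mne.minimum_norm.apply_inverse(vmmr, inv, lambda2=1.0 / 9.0, method="dSPM")
stc.plot(subjects_dir="freesurfer_subjects", hemi="both", initial_time=0.25)
```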

Cheng, Q., & Zhao, C., (Oct. 2022) Localizing visual mismatch responses in American Sign Language (ASL) using MEG. Poster Presentation at the Society of the Neurobiology of Language (SNL) Annual Meeting, Philadelphia, PA. [poster]