We are pleased to invite you to our upcoming Brown Bag meeting featuring Dr. Jieun Lee from the University of Kansas. Below are the details of the event:
Time: 10:00 AM, Friday, January 10
Venue: Online meeting
Registration: Register here
Online meeting: https://meet.google.
“Individual Differences in Native Speech Perception and Their Relations to Learning Nonnative Phonological Contrasts”
Abstract: This talk will explore my research within a modern theoretical framework in which listeners perceive speech categories in a gradient manner, preserving fine-grained acoustic detail to enable more flexible speech processing. This approach highlights the adaptability of the speech perception mechanism and challenges the earlier theoretical view known as Categorical Perception (CP) (Liberman et al., 1957). Empirical results from my research suggest that gradient representations of speech categories help resolve ambiguity and variance in the speech input during perception. I will discuss data on speech perception in both native and nonnative languages at the individual level to understand why some listeners process continuous speech input more gradiently than others, and to identify the benefits of this gradient perception, phenomena that the CP framework cannot explain.
One possible benefit concerns listeners' adaptability in challenging contexts, such as when a primary acoustic cue is obscured and listeners must rely on secondary cues. My findings indicate that when primary cues are ambiguous and uninformative in native speech, listeners tend to depend on secondary acoustic cues to compensate for uncertainty during phoneme identification. However, this trade-off between cues (i.e., increased reliance on secondary cues) does not occur uniformly across listeners, indicating substantial individual differences. In my talk, I will argue that these differences can be predicted by the degree to which listeners perceive speech categories gradiently, as assessed through a Visual Analogue Scaling (VAS) task.
Another scenario in which listeners benefit from the perceptual flexibility afforded by gradient speech perception is nonnative speech. Given that gradient listeners are more resilient to poor and degraded acoustic input, I will discuss whether these listeners can use this adaptability to perceive unfamiliar nonnative speech sounds by adjusting their perceptual weighting of acoustic dimensions to define nonnative speech categories. The discussion will include a study of native English listeners learning the Korean three-way stop contrast and the results of High Variability Phonetic Training, which provided short-term exposure to the target Korean contrast. We will explore how far listeners' adaptability and perceptual flexibility extend even in extremely challenging situations, such as nonnative speech perception, and discuss the transfer of within-category acoustic cue sensitivities from native to nonnative speech perception.