Native listeners perceive illusory sounds, typically when presented with sound sequences that do not respect the word-internal phonotactic constraints of their language (Dehaene-Lambertz et al., 2000; Dupoux et al., 1999; Kabak and Idsardi, 2007; inter alia). For example, Japanese listeners report hearing an illusory vowel [u] inside illicit consonant clusters, perceiving a sequence like /ebzo/ as [ebuzo] (Dupoux et al., 1999). Such perceptual illusions have been claimed to be driven by surface phonotactics and the phonetic characteristics of segments (Davidson and Shaw, 2012; Dupoux et al., 2011). In this talk, I will argue that phonological knowledge at large is a crucial modulatory factor in such illusions.

Inspired by Bayesian models of speech perception (Feldman and Griffiths, 2007; Sonderegger and Yu, 2010), I suggest that the listener's task in speech perception is to infer the best parse of the underlying representation given their native-language phonology and the acoustics of the input stream. Since underlying representations are abstractions that depend on the phonology of the language, this view predicts the recruitment of phonological knowledge, beyond surface phonotactics, during speech perception. Consistent with this expectation, I will present results from research on perceptual illusions showing that knowledge of both phonological alternations and higher-level prosodic structure is utilized during speech perception.
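The inference scheme sketched above can be illustrated with a toy computation. The sketch below is not the speaker's model: the candidate parses, prior, and likelihood values are invented for illustration. It simply shows how Bayes' rule combines a phonology-derived prior over underlying parses with an acoustic likelihood, so that a phonotactically licit parse with epenthesis can win over the acoustically faithful but illicit one.

```python
# Toy Bayesian parse selection: P(parse | acoustics) ∝ P(acoustics | parse) * P(parse).
# All numbers are illustrative assumptions, not fitted or attested values.

# Candidate underlying parses for an input like [ebzo], heard by a listener
# whose native phonology disallows word-internal [bz] clusters.
candidates = ["ebzo", "ebuzo"]

# Prior from native phonology: the cluster-violating parse is strongly dispreferred.
prior = {"ebzo": 0.05, "ebuzo": 0.95}

# Acoustic likelihood: the faithful parse matches the signal better, but a
# reduced/devoiced [u] keeps the epenthetic parse acoustically viable.
likelihood = {"ebzo": 0.9, "ebuzo": 0.4}

def posterior(cands, prior, likelihood):
    """Normalize likelihood * prior over the candidate set (Bayes' rule)."""
    unnorm = {c: likelihood[c] * prior[c] for c in cands}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

post = posterior(candidates, prior, likelihood)
best = max(post, key=post.get)
print(best, round(post[best], 2))  # the epenthetic parse wins under these assumptions
```

Under these invented numbers the listener's best parse is the epenthetic [ebuzo], mirroring the illusory-vowel percept: the phonological prior overrides the acoustic advantage of the faithful parse.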