Realtime integration of acoustic cues and semantic expectations in speech processing: Evidence from EEG

Abstract

A critical debate in speech perception concerns the stages of processing and their interactions. One source of evidence is the time course over which different sources of information affect ongoing processing. We used electroencephalography (EEG) to ask when semantic expectations and acoustic cues are integrated neurophysiologically. Participants (N=31) heard target words from a voicing continuum (bark/park) in which both voice onset time (VOT) and preceding coarticulation were manipulated. Targets were embedded in sentences predicting one phoneme or the other (Good dogs sometimes—). We used a component-independent analysis every 2 msec to determine when each cue affected the continuous EEG signal. This revealed an early window (125-225 msec) sensitive exclusively to perceptual information (VOT), a later window (400-575 msec) sensitive to semantic information, and a critical intermediate window (225-350 msec) in which VOT and coarticulation were processed simultaneously with semantic expectations. This suggests continuous cascades and interactions between lower-level and higher-level processes.
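The per-2-msec, component-independent analysis described above amounts to fitting the same trial-level regression independently at every EEG sample (a mass-univariate approach), then asking where each predictor's weight departs reliably from zero. The sketch below is an illustration of that general idea, not the authors' actual pipeline: the data shapes, predictor names, and random placeholder data are all assumptions, and a real analysis would add statistical thresholding (e.g., cluster-based correction) across timepoints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: trials x timepoints, sampled every 2 ms
n_trials, n_times = 480, 400          # 400 samples * 2 ms = 800 ms epoch
times_ms = np.arange(n_times) * 2

# Trial-level predictors (standardized); values here are placeholders
X = np.column_stack([
    np.ones(n_trials),                 # intercept
    rng.standard_normal(n_trials),     # VOT
    rng.standard_normal(n_trials),     # coarticulation
    rng.standard_normal(n_trials),     # semantic expectation
])

# Placeholder single-channel EEG epochs (trials x timepoints)
eeg = rng.standard_normal((n_trials, n_times))

# Mass-univariate regression: solve the same linear model at every
# 2 ms sample in one least-squares call; betas has shape (4, n_times)
betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)

# A cue "affects" the signal where its beta weight is reliably nonzero;
# here we simply report each predictor's timepoint of peak absolute effect
for name, b in zip(["intercept", "VOT", "coartic", "semantic"], betas):
    print(f"{name:10s} peak |beta| at {times_ms[np.abs(b).argmax()]} ms")
```

With real data, the windows reported in the abstract would correspond to contiguous runs of timepoints where a given predictor's beta weights survive correction.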

