There are two major research projects currently underway. These are:
Aging and the Neural Architecture Supporting Perceptual Organization of Speech
Sponsor: Natural Sciences and Engineering Research Council of Canada
Aging and the Neural Architecture Supporting Perceptual Organization of Speech
Sponsor: Canadian Institutes of Health Research
Each of these projects consists of multiple experiments. Some of the experiments currently underway include:
Neural Correlates of Change Deafness - Kristina Backer, Claude Alain
- Change deafness is the auditory analogue of change blindness: the failure to detect salient changes in a complex auditory scene. In a typical change deafness paradigm, two auditory scenes (each composed of 3 or more sounds) are played one after the other, and some aspect of the second scene is changed (e.g., a sound changes location or is removed). Often, this change goes unnoticed. Both attentional and working memory accounts have been independently proposed to explain change deafness. In the current study, an attentional cue will be manipulated to further explore: 1) the extent to which change deafness arises from a working memory failure, and 2) the interaction between memory and attention with respect to change deafness.
Neural correlates of ineffective study and effective retrieval - Alice Kim, Claude Alain, and Endel Tulving
- Recent research has produced a surprising finding: standard conditions of repeated studying are essentially useless for long-term retention after an item's initial recall; instead, repeated testing is the critical factor. Although the facilitative role of testing (retrieval) for long-term retention is not an altogether new finding, this is the first time that the ineffectiveness of repeated studying has been demonstrated so strikingly. As yet there is no explanation for why repeated testing, but not repeated studying, leads to better long-term retention. To further examine this phenomenon of "useless study and useful testing," we will first replicate the behavioural effects of repeated studying and repeated testing, and then measure their neural signatures using event-related potential (ERP) methodology.
Pitch-Encoding Differences Between Tone and Non-Tone Language-Speaking Musicians With and Without Absolute Pitch - Claude Alain, Stefanie Hutka
- This study was the first to examine pitch encoding in tone-language (Mandarin and Cantonese) and non-tone-language (English) speaking musicians with and without absolute pitch (AP) (n = 32). AP is the ability to label pitches without a reference pitch. Though AP is generally a rare ability, research suggests that its development may be facilitated by speaking a tone language (Deutsch et al., 2006). Neuroimaging studies suggest that language-related areas are activated when AP musicians process musical stimuli. We hypothesized that AP musicians, and particularly tone-language speakers with AP, would demonstrate greater accuracy and faster response times than non-tone-language-speaking musicians with and without AP on two audio-visual encoding tasks. Significant differences between AP and non-AP groups were found for both musical and non-musical stimuli, suggesting that individuals with AP may encode and combine audio-visual information more effectively than those without AP. A marginally significant interaction between tone-language background and reaction time (but not accuracy) was found for encoding musical stimuli across tasks. Reference: Deutsch, D., Henthorn, T., Marvin, E., & Xu, H. (2006). Absolute pitch among American and Chinese conservatory students: Prevalence differences and evidence for a speech-related critical period. Journal of the Acoustical Society of America, 119(2), 719-722.
Neuroimaging studies of auditory perception and attention: Attentional consequences of harmonic mistuning -- Ada W. S. Leung and Claude Alain
- To segregate concurrent sounds successfully, the auditory system often relies on the harmonic relations between the components of a physical sound source. This mechanism, though believed to depend on low-level processes along the ascending auditory pathways, has recently been found to be sensitive to attention. However, the extent to which attention is deployed, and how it is allocated during sound segregation, remains unknown. The present study aims to explicitly test the deployment of attention during the processing of complex sounds. A series of experiments examines whether attention allocated to a mistuned harmonic can improve or hinder gap detection. Since gap detection is an attention-demanding task, gap detection performance allows us to evaluate the attentional deployment to the mistuned harmonic. Several experiments manipulate the degree of mistuning and the duration of the gap. The idea is that attention drawn to the mistuned harmonic might compete with the attention required to detect the gap, and hence impair gap detection. Both behavioural data and event-related potentials will be recorded for analysis.
Sleep, Consolidation and Experience-Based Changes in Performance and Neuromagnetic Brain Activity - Claude Alain, Bernhard Ross, Kuang Da Zhu
- Sleep has been shown to be important in the consolidation of newly acquired skills in the visual, motor and auditory domains. Previous studies have found training-related changes in auditory perceptual learning in the N1 and P2 responses, with the latter possibly indexing a slow, sleep-dependent learning process. In this experiment, participants will learn over multiple sessions to identify two simultaneously presented vowels that differ in frequency. We will manipulate the time of day (TOD) of testing for each session to better understand the role of sleep in auditory learning, as measured behaviourally and with magnetoencephalography (MEG).
Neuroimaging studies of auditory perception and attention: MEG study for auditory attentional blink - Dawei Shen and Claude Alain
- The attentional blink (AB) occurs when two targets must be identified among distractors in a rapid serial auditory (or visual) presentation stream. In this situation, correct identification of the first target may produce a deficit in processing the second target (the probe), an effect that lasts several hundred milliseconds. Previous studies of the auditory attentional blink have mainly concerned the influence of bottom-up factors (e.g., stimulus onset asynchrony and the effects of distractors). Here, we investigate the influence of top-down factors on the auditory AB using magnetoencephalography (MEG) in order to further elucidate its nature.
Effects of multiple source characteristics on word and speaker recognition: An ERP study - Sandra Campeanu, Dr. Fergus Craik and Dr. Claude Alain
- Context reinstatement has been shown to facilitate word and source recognition. In an auditory ERP experiment, participants performed both recognition tasks with words spoken in four voices. Two voice parameters varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicate that reinstating the study voice at test facilitates both word and speaker memory, compared with no benefit when only one voice parameter is similar. This implies that voices are encoded as acoustic patterns rather than as the sum of their vocal attributes. ERPs revealed, in addition to three expected memory-related modulations, a pre-recollection positivity associated with this reinstatement benefit in both tests. This positivity, likely reflecting acoustic recognition, occurred at 400 ms over parietal regions in the word test and started as early as 120 ms and 175 ms over right frontal and right temporal areas, respectively, in the speaker test.
Speaker Identity in Memory: Exploring the Nature of Voice Reinstatement at Test - Sandra Campeanu, Dr. Fergus Craik, Dr. Claude Alain
- In a previous study we found evidence that voice information is encoded as a whole, rather than as the sum of its acoustic parts. In a follow-up study we are now investigating the effect of attention allocation on the representation of voice in memory, both implicitly and explicitly. The purpose of this work is to discern whether voices, like faces, are distinctively processed by their corresponding sensory system.
Dissociable Changes in Auditory Evoked Responses for Speech Identification Performance and Task Repetition - Boaz Ben-David, Sandra Campeanu, Kelly Tremblay and Claude Alain
- Auditory perceptual learning, which is accompanied by rapid changes in sensory and response pathways, is a fundamental process central to speech perception, yet the neural mechanisms underlying auditory learning remain poorly understood. Here, we report rapid physiological changes in the human auditory system that coincide with learning. During a one-hour test session, participants learned to identify two consonant-vowel syllables that differed in voice-onset-time (VOT). They also carried out a simple tone identification task, to determine whether changes in auditory evoked potentials were specific to the trained speech cue or simply reflected task repetition. The ability to identify the speech sounds improved from the first to the fourth block of trials, as revealed by higher d′ scores, while β (response bias) measures remained constant throughout the experiment. This behavioural improvement coincided with a decrease in N1 and P2 amplitude, and these learning-related changes differed from those observed during the tone identification task, which did not yield changes in performance. Training-induced changes in sensory evoked responses were followed by a decrease in sustained activity over the parietal regions that was specific to the speech sounds. The results are consistent with a top-down, non-specific attention effect on neural activity during learning, as well as a more learning-specific modulation, which is coincident with behavioural improvements in speech identification.
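For readers unfamiliar with the signal-detection measures reported above, d′ (sensitivity) and β (response bias) can be computed from hit and false-alarm rates. The sketch below is illustrative only, not the study's analysis code; the function name and interface are our own:

```python
from math import exp
from statistics import NormalDist

def dprime_beta(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Signal-detection sensitivity (d') and response bias (beta) for one block."""
    z = NormalDist().inv_cdf                    # inverse of the standard normal CDF
    z_hit, z_fa = z(hit_rate), z(fa_rate)
    d_prime = z_hit - z_fa                      # separation of signal and noise distributions
    beta = exp((z_fa ** 2 - z_hit ** 2) / 2)    # likelihood ratio at the observer's criterion
    return d_prime, beta
```

For example, a block with a 90% hit rate and a 20% false-alarm rate yields d′ ≈ 2.12; when hits and false alarms sit symmetrically about chance (e.g., 80% and 20%), β = 1, indicating no response bias.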
Recently Completed Projects:
Neuroimaging studies of auditory perception and attention: The effect of pitch and location difference on concurrent vowel segregation and identification – Yi Du and Claude Alain
Human communication, during social gatherings and in other noisy environments, involves the detection and identification of concurrent sound sources (e.g., voices, music). Difficulty in auditory stream segregation may contribute to the problems in speech perception often observed in older adults. Previous psychophysical studies have demonstrated that performance in identifying speech sounds improves with either increasing separation between frequency components (e.g., fundamental frequency, ƒ0) or increasing spatial separation of concurrent streams of sounds. This study uses whole-head magnetoencephalography (MEG) to record the brain activity underlying identification of concurrent English vowels. The goals are to investigate the relative contributions of ƒ0 and location differences in parsing concurrent vowels; how fast, and where in the brain, ƒ0-guided and location-guided segregation take place; how these two neural processes interact; and what role attention plays in each.
Neuroimaging studies of auditory perception and attention: Detecting and Identifying sounds presented at high rate - Dawei Shen and Claude Alain
This study involves recording electrical brain activity (EEG) while you hear different short sequences of sounds presented at a high rate. In some sequences, one or two target sounds will be embedded in the sequence. You will be asked to indicate whenever you hear these target sounds by pressing a button. You may find the task difficult. We will be measuring your electrical brain potentials to these sounds. We ask that you try not to move around too much, as this can interfere with the recording.
Modulations of the face and eye gaze processing networks - Roxane Itier, Claude Alain, Randy McIntosh
The human face is arguably the most important and complex social stimulus that we process every day. Although numerous studies have focused on the difference in processing faces compared to objects, they have neglected the important social context of face processing, wherein the need to attend to the eyes seems vital for proper social interactions. The eyes are central to all aspects of social communication, such as identity, emotion or direction of attention, and it has been suggested that they could be processed by a dedicated mechanism. This hypothesis of an eye processor system in the human brain is supported by behavioural and clinical data, but has not been well studied in neuroimaging research. The goal of the present study is to test for the existence of this putative eye detector and to study its integration within the larger face processing system. We are using the ERP and fMRI techniques to characterize both the temporal and spatial dynamics of the face and eye brain networks. Participants perform a categorization task (responding whether or not there are eyes in the picture) while viewing faces, isolated eyes, isolated mouths, faces without eyes and objects on a computer screen.
Specificity of human face and eye processing - Roxane Itier, Claude Alain
Faces and eyes trigger an early brain response recorded on the scalp, known as the N170 component. This response is consistently larger for faces and eyes than for other objects; for this reason, the N170 is believed to reflect the early processes necessary to encode the structure of a face into memory. However, whether this sensitivity is specific to human faces or simply reflects a sensitivity to the broad category of faces is debated. This project aims to determine whether human faces and eyes are processed in the same way as the faces and eyes of other animals. N170 responses to human, ape, dog and cat faces, eyes and faces-without-eyes are compared. The inversion effect, a manipulation that disrupts the face configuration and reflects face-specific processing mechanisms, is also used. This study will inform us about the particular neural properties underlying face and eye processing.
Brain activity during the learning of words for later recall: An event-related potential study - Endel Tulving, Terence Picton, and Alice Kim
Previous studies have demonstrated that the ability to remember verbal and visual items can be predicted by the magnitude of activation in various brain regions during the encoding of these items. Previous studies have not, however, examined what is responsible for the different patterns of neural activation that are associated with the encoding of subsequently remembered and subsequently forgotten items. To test a possible answer to this question, as provided by a new scientific idea that we refer to as “camatosis”, we will examine how the encoding of items in single-trial free recall lists depends on the encoding of preceding items in the list. To do so, we will examine participants’ free recall performance and then compare the event-related potentials (ERPs) recorded during encoding between recalled and non-recalled items. The camatosis hypothesis will be specifically tested by comparing the ERPs to items that follow an item that was later recalled to the ERPs to items that follow an item that was not recalled.
The Involvement of the Inferior Parietal Lobe in Sound Localization, as Demonstrated with Functional Magnetic Resonance Imaging - Claude Alain, and Laura Vecchio
There is a considerable body of evidence to support the 'dual pathway hypothesis' in the human auditory system, suggesting that identifying a sound and localizing a sound involve distinct and separate cortical pathways. Past studies involving both primates and humans revealed that a dorsal stream is responsible for sound localization, while a ventral stream is responsible for identification. Some researchers, however, believe that activity in the dorsal stream, particularly in the parietal cortex, is a result of goal-directed behaviour. The 'sensory-motor account' asserts that activity in the inferior parietal lobe (IPL) results from the sensory integration and goal-directed processes involved in making a response to auditory stimuli. The 'memory account' suggests that activity in the parietal lobe results from localizing a sound and encoding spatial information into auditory working memory. The present study employs functional magnetic resonance imaging (fMRI) to examine the cortical regions activated during location and pitch tasks. Participants are presented with pairs of sounds and instructed to make judgements regarding the pitch or location of the second sound relative to the first. The aim is to study activity patterns in the IPL during sound localization and pitch discrimination. If activity in this region were primarily observed during location tasks, it would imply that at least some activity in this region results from processing spatial information, and thus support the 'memory account'. If there were no difference in the patterns of activity during location and pitch tasks, it would suggest that most neurons in the IPL are active as a result of sensory integration and goal-directed processes.
Human event-related potentials - Terry Picton and Sasha John
Objective: To evaluate how the amplitudes and latencies of auditory steady-state responses (ASSRs) to multiple stimuli presented at rates between 80 and 105 Hz vary with the ear of stimulation, the handedness or gender of a participant and the rate of stimulation. Design: ASSRs were recorded in a group of 40 young adults (19 female, 12 left-handed) using several stimulus conditions. In the two main conditions, four sinusoidally amplitude-modulated tones (each uniquely modulated using rates between 80 and 105 Hz) with carrier frequencies of 500, 1000, 2000 and 4000 Hz, were presented concurrently to each ear (eight total). In the first condition the modulation rates for the left ear were slower than those for the right and in the second condition this relationship was reversed. Other conditions evaluated the responses to single stimuli, to multiple stimuli presented in one ear only and to multiple stimuli (4 in each ear) presented with rates that decreased rather than increased with increasing carrier frequency. All stimuli were presented at an intensity of 73 dB SPL. Results: Multiple-stimulus ASSRs were significantly reduced (monotic or dichotic) compared to single-stimulus ASSRs, especially at 1000 and 2000 Hz. There were significant differences between monotic and dichotic stimulation. When the stimuli were presented dichotically, the amplitude of the response was largely determined by the relative rates of modulation for the stimuli presented in each ear. ASSRs were larger in the ear with the higher rate when the carrier frequencies were 500 and 1000 Hz and when the modulation rates were less than 90 Hz. Female participants showed larger responses than male participants, particularly at frequencies 1000 and 2000 Hz, but this difference reached only borderline levels of significance. In some of the analyses, the responses in the right ear were significantly larger than in the left. 
The estimated latency of the responses increased with decreasing carrier frequency and was significantly shorter for the female participants than for the male participants, and for dichotic rather than monotic stimuli. There was no significant effect of handedness, nor any interaction of handedness with ear, for either the amplitude or the latency of the responses. Conclusions: Presenting multiple stimuli at 73 dB SPL in the same ear decreases the amplitude of the ASSR compared to when the stimuli are presented singly. This is caused by the masking effect of low on higher frequencies and some other effect (such as suppression) of high on lower frequencies. Dichotic stimulation can increase the amplitude of the response to stimuli modulated more rapidly (and concomitantly decrease the responses to the stimuli modulated more slowly). This effect occurs only for carrier frequencies less than 2000 Hz and for modulation frequencies less than 90 Hz. Dichotic stimulation also causes a small but highly significant decrease in the latency of the response compared to monotic stimulation. Female participants had significantly earlier responses than male participants.
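Because each stimulus in a steady-state paradigm is tagged with a unique modulation rate, the response to each can be read off the EEG amplitude spectrum at that rate's frequency bin. The following is a simplified sketch of that bin-lookup idea, not the recording software used in the study; the function name and interface are illustrative:

```python
import numpy as np

def assr_amplitudes(eeg: np.ndarray, fs: float, mod_rates: list[float]) -> dict[float, float]:
    """Amplitude of the recording at each modulation rate, via FFT-bin lookup.

    Assumes the recording length is chosen so that every modulation rate
    falls exactly on a frequency bin (standard in steady-state paradigms).
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) * 2 / n   # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    df = freqs[1] - freqs[0]                      # bin width in Hz
    return {f: spectrum[int(round(f / df))] for f in mod_rates}
```

With a 2-second epoch (0.5 Hz bins), modulation rates such as 81 or 90.5 Hz land on exact bins, so each response amplitude is recovered without spectral leakage.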
Responding to relevant and irrelevant aspects of sound - Ben Dyson
This on-going project is concerned with how we allocate attention to both relevant and irrelevant aspects of our sound world. Stimuli are presented which vary according to a number of parameters, only one of which will be task-relevant at any one point in time. Both behavioural and ERP measures are of interest in order to further uncover the mechanisms of attentional distribution as a function of the environment.
The Effect of Attention and Processing Time on Consolidation of Words During Episodic Encoding: An Event-related Potential Study - Erol Ozcelik
Recent research on visual cognition proposes that human beings have a significant limitation on the consolidation of early perceptual representations into a more durable form of memory. Although several studies have examined the nature of consolidation in working memory, no research has investigated this post-perceptual process in episodic memory. The goal of this study is to investigate the role of attention and processing time in the short-term consolidation of information during episodic encoding, and to reveal the neural activity associated with successful episodic encoding by contrasting event-related potential recordings for words that are remembered versus forgotten.
Envelope-following responses to slow (2-8 Hz) speech envelopes in sentences - Steve Aiken and T.W. Picton
Recent research suggests that the speech envelope (the pattern of slow amplitude changes between 2 and 8 Hz) is very important for speech understanding. Speech can be understood when everything except the envelope information is removed, as long as the envelope information is maintained in more than one frequency band. Conversely, removal of the envelope information greatly reduces speech intelligibility. Near-perfect intelligibility is maintained when these slow envelope modulations are presented in at least eight discrete bands. It is not known how the brain uses this envelope information to understand speech, and there have been few studies investigating the representation of envelope information in the brain. The present study seeks to elucidate this representation by measuring electrical brain responses to sentences (using a 65-electrode cap) and then calculating the coherence between this scalp activity and the sentence envelopes (in discrete frequency bands). If coherence between sentence envelope information and brain activity can be reliably detected, it may be possible to use envelope-following responses to speech envelopes as a tool to assess the encoding of speech information in the brains of children and other difficult-to-test populations.
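The analysis described above, extracting the slow envelope of a sentence and computing its coherence with a scalp channel, can be sketched as follows. This is a minimal illustration under our own assumptions (Hilbert-transform envelope, 4th-order Butterworth band-pass, 2-second coherence segments), not the study's actual pipeline:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, coherence

def slow_envelope(audio: np.ndarray, fs: float) -> np.ndarray:
    """Amplitude envelope of the waveform, band-limited to 2-8 Hz."""
    env = np.abs(hilbert(audio))                       # instantaneous amplitude
    b, a = butter(4, [2, 8], btype="bandpass", fs=fs)
    return filtfilt(b, a, env)                         # keep only the slow modulations

def envelope_coherence(eeg: np.ndarray, audio: np.ndarray, fs: float):
    """Magnitude-squared coherence spectrum between a scalp channel and the speech envelope."""
    env = slow_envelope(audio, fs)
    freqs, cxy = coherence(eeg, env, fs=fs, nperseg=int(2 * fs))  # 0.5 Hz bins
    return freqs, cxy
```

High coherence in the 2-8 Hz bins would indicate that the scalp activity follows the sentence envelope; coherence near the chance level would not.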
Envelope-following responses to vowels (VEFR) - Steve Aiken
Envelope-following responses to vowels will be recorded in normal-hearing people using a high recording bandwidth (30-3000 Hz). One goal is to determine the frequency range over which significant envelope-following responses to the harmonics in speech can be reliably recorded. Such responses provide frequency-specific information about speech encoding in the brainstem, and may be useful for validating hearing-aid fittings in infants. Another goal is to determine whether responses are reliably related to the phase of the stimulus or are relatively insensitive to it. The final goal is to determine whether responses can be recorded to vowel formants when no harmonics are present.
Vowel Training: Neurophysiological studies of age-related changes in auditory perception - Cristina Saverino, Claude Alain
Sensory cortices exhibit an astounding degree of plasticity during development, particularly in childhood and early adolescence. The objective of the current study will be to examine whether young and older adults display similar changes in cortical activity during an auditory perceptual learning task. Cortical activity will be assessed by means of Magnetoencephalography (MEG), a highly useful tool to localize sources of activation in perceptual and motor areas. Participants will receive training on a concurrent vowel recognition task, in which younger and older adults will be required to distinguish between two different English vowels played simultaneously. Unpublished results demonstrated that MEG activity in the auditory cortex coincided with performance in young adults. We will establish whether these cortical changes in response to training also occur for older adults and whether they are similar to the changes seen in younger individuals.
Neurophysiological studies of age-related changes in concurrent sound segregation - Olga Kciuk, Claude Alain
Concurrent sounds are parsed using acoustic features of the incoming sound wave, such as frequency or spatial cues. Deficits in these parsing processes are thought to play a role in the difficulties experienced by older adults in understanding speech in situations with background noise. A dual-vowel identification task is being used to examine the effects of such acoustic cues on speech segregation in older adults. The auditory evoked fields (AEFs) elicited by the task in the primary auditory cortex are measured using magnetoencephalography (MEG). Unpublished results show that differing frequencies within a vowel pair are registered in the auditory cortex, even though behaviourally, two-vowel identification rates are low irrespective of cue availability. These results indicate that certain cue information, though registered centrally, is not integrated at a higher level so that it may help improve concurrent speech sound identification.
Dynamical range mapping in the young and old brain - Antonio Vallesi
A dynamical range map is the collection of brain regions that, when considered as a network, are able to support a particular behavioral operation. We will examine how dynamical range maps change with task demands and with senescence, using ERP and fMRI in participants performing psychophysical tasks. An important feature of psychophysical studies is that they allow us to minimize differences between age groups due to poor task performance or strategy by equating task performance. Data analyzed at the group level describe the dynamical range for a given task; data from each participant will be analyzed at the single-trial level to capture the dynamical range for that person.
Neuroelectric Correlates of Rapid Perceptual Learning of Speech Sounds - Claude Alain, Sandra Campeanu and Kelly Tremblay
Learning perceptual skills is characterized by rapid improvements in performance within the first hour of training (fast perceptual learning) followed by more gradual improvements that take place over several daily practice sessions (slow perceptual learning). While it is widely accepted that slow perceptual learning is accompanied by enhanced stimulus representation in sensory cortices, there is considerable controversy about the neural substrates underlying early and rapid improvements in learning perceptual skills. Here we measured event-related brain potentials (ERPs) while listeners were trained to identify two consonant-vowel syllables. Listeners were also presented with a broadband noise to examine whether training-related changes in ERPs were specific to the trained speech cue. Participants performed 10 blocks with 90 trials in each block. The ability to identify both speech sounds improved from the first to the fourth block of trials, and remained relatively constant thereafter. Behavioral improvement coincided with an increased negative peak (between 180 and 350 ms) over frontocentral sites, and an increase in sustained activity over the parietal regions. While the former was also observed for the noise, the latter was specific to speech sounds. The results are consistent with a top-down non-specific attention effect on neural activity during learning, as well as a more learning-specific modulation, reflecting behavioral improvements in speech identification.
Recording auditory evoked potentials to speech sounds in cochlear implant listeners - Lendra Friesen and Terry Picton
An objective physiological demonstration of hearing in individuals with cochlear implants would be extremely useful in device fitting and in monitoring auditory perceptual performance. This is especially true in the pediatric cochlear implant population, because it is often difficult to assess how well a child with an implant is processing sounds. A major problem with recording the brain's response to sound in these participants is the electrical artifact generated by the implant as it processes sound and stimulates auditory nerve fibers. This artifact can overlap the neural response and make measurements difficult.
The goal of this research is an artifact-free measure of the brain's response to sound in participants with cochlear implants. We shall first examine how cortical responses in listeners with cochlear implants vary with speech syllables and tones presented at different inter-stimulus intervals (ISIs). The scalp-recorded P1-N1-P2 response will then be measured in 30 individuals with normal hearing and in 30 listeners with cochlear implants. Based on previous research, we hypothesize that the amplitude of the N1 response will increase with increasing ISI in both groups, whereas the cochlear implant artifact will remain the same across ISIs. Brain electric source analysis (BESA) will then be used to model the activity in the auditory cortex of both groups and to account for the electrical artifact generated by the cochlear implant. We hypothesize that we shall be able to separate the activity generated by the implant from that generated by the brain using both BESA and the ISI stimulation paradigm.