
Cortical auditory evoked potentials using the speech stimulus /ma/

ABSTRACT

Purpose:

to compare cortical auditory evoked responses using two speech stimuli, /ma/ and /da/, in normally hearing young adults.

Methods:

a cross-sectional, observational, and analytical study with a convenience sample of nineteen normally hearing young adults of both genders, aged between 18 and 25 years. Cortical auditory evoked potentials (CAEP) were recorded monaurally in two conditions: 1) with the pair of speech stimuli /ba/ and /da/, and 2) with the pair of speech stimuli /ba/ and /ma/. The order of the experiments was randomized, and within each experiment the two stimuli were presented in equal proportion (50% each), totaling 100 stimuli. Speech sounds were presented at 70 dB SPL. Descriptive and analytical statistical tests were performed.

Results:

mean latency values of the P1, N1, P2, N2, and P3 complex were lower for /ma/ than for /da/ (p < 0.05). There was no difference in amplitude between the responses evoked by /ma/ and /da/.

Conclusion:

cortical auditory evoked potentials elicited by the speech stimulus /ma/ had, on average, lower P1-N1-P2-N2 and P3 peak latencies than those elicited by the speech stimulus /da/.

Keywords:
Electrophysiology; Evoked Potentials, Auditory; Speech Therapy

Introduction

Auditory Evoked Potentials (AEP) are electrophysiological responses evoked by sound and characterized by changes in electrical activity along the auditory pathway1. Cortical auditory evoked potentials (CAEP) are represented by positive (P) and negative (N) peaks. Peaks P1, N1, and P2 are mostly exogenous potentials, and N2 is considered a mixed peak. Their latencies fall, respectively, between 60 and 80 ms, 90 and 100 ms, 100 and 160 ms, and 180 and 200 ms2. Another cortical response is a positive peak occurring between 220 and 280 ms, called P3. This peak originates mostly in the frontal or fronto-central regions3 and is related to an initial sensory process, as well as to attention to new stimuli4.
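As an illustration of how these components can be identified, the sketch below picks each peak within its typical latency window from an averaged waveform. It is a minimal example assuming a single-channel average sampled at 1 kHz; the window limits are the approximate ranges cited above, not values prescribed by this study.

```python
import numpy as np

# Approximate latency windows (ms) and expected polarity for each CAEP component,
# taken from the ranges cited in the text.
WINDOWS = {
    "P1": (60, 80, +1),
    "N1": (90, 100, -1),
    "P2": (100, 160, +1),
    "N2": (180, 200, -1),
    "P3": (220, 280, +1),
}

def pick_peaks(avg_uv, fs=1000.0):
    """Return {component: (latency_ms, amplitude_uV)} from an averaged waveform.

    avg_uv : 1-D array of the averaged response in microvolts, time-locked to
             stimulus onset (sample 0 = 0 ms).
    fs     : sampling rate in Hz (assumed 1 kHz here).
    """
    peaks = {}
    for name, (t0, t1, polarity) in WINDOWS.items():
        i0, i1 = int(t0 * fs / 1000), int(t1 * fs / 1000) + 1
        segment = avg_uv[i0:i1] * polarity          # flip negative peaks upward
        idx = int(np.argmax(segment))               # most prominent deflection in the window
        peaks[name] = ((i0 + idx) * 1000.0 / fs, avg_uv[i0 + idx])
    return peaks

# Example with a synthetic waveform (500 ms at 1 kHz); placeholder data only.
if __name__ == "__main__":
    t = np.arange(0, 0.5, 0.001)
    synthetic = 5 * np.sin(2 * np.pi * 4 * t)
    print(pick_peaks(synthetic))
```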

Cortical responses can be evoked by several types of stimuli, such as clicks, pure tones, or speech sounds. Speech sounds have different temporal and spectral parameters that are used as contrasts to evoke cortical responses. Usually, two syllables that differ in voice onset time and/or point of articulation are used, such as /ta/ and /da/ or /ba/ and /da/5.

The speech syllables used to evoke cortical responses are usually composed of a consonant and a vowel, for example /da/, /ta/, and /ba/. The consonant is briefer and evokes a transient response, whereas the vowel evokes a sustained response, called the frequency-following response (FFR)6. The syllable /da/ is the stimulus most frequently used in studies on CAEP (e.g., Kraus and Nicol, 20057; Rocha et al., 20106; Massa et al., 20118; Oppitz et al., 20159), although other syllables, such as /ba/ and /ta/, are also used.

One study10 investigated cortical responses of children with learning disabilities and reported that only speech stimuli were able to identify learning problems, because they induce a more complex decoding process. Therefore, researchers have been searching for new speech stimuli that may be sensitive in detecting speech and hearing problems8.

The syllable /ma/ may be especially interesting because of its learning context in early childhood. According to generative linguistic theory, the way a young child acquires language is universal. One of the first phonemes experienced by babies, native speakers of many different languages, is /ma/, a sound frequently heard in childhood11. In addition, another language acquisition theory, the emergentist theory, holds that repetition of a phoneme in early childhood contributes to its consolidation in memory12.

Considering that different neural regions are activated by speech sounds2 and that speech perception is the most relevant social function of the auditory system, studying CAEP with a new stimulus, such as /ma/, may contribute to understanding how the auditory system processes speech sounds and possibly help in diagnosing auditory and/or language problems. Thus, the present study aimed to compare the cortical auditory evoked responses elicited by the speech stimuli /ma/ and /da/ in normally hearing young adults.

Methods

This research protocol follows Resolution No. 466/2012 of the Brazilian National Health Council of the Ministry of Health for research involving human beings and was approved by the Research Ethics Committee of the Universidade Federal de Pernambuco, Brazil, under number 2.767.511. This is a cross-sectional, observational, and analytical study.

Nineteen (19) young adults of both genders, aged between 18 and 25 years, were recruited by convenience sampling and agreed to participate.

All participants presented: hearing thresholds equal to or better than 25 dB HL from 250 Hz to 8000 Hz, including the inter-octave frequencies of 3000 and 6000 Hz; type "A" tympanograms, with ipsilateral and contralateral acoustic reflexes present; absolute and interpeak latencies of the click-evoked Auditory Brainstem Response (ABR) within normal limits; and a Montreal Cognitive Assessment (MoCA) score equal to or greater than 26 points. Participants with auditory processing and/or cognitive complaints, or a history of middle or outer ear infections, were excluded from the study.

Data Collection Procedures

After signing a consent form, participants underwent a first-step procedure including inspection of the external auditory canal; application of the MoCA test to rule out mild cognitive deficits; immittance measures; pure-tone and speech audiometry; and click-evoked ABR.

Subsequently, CAEP were recorded in two conditions: 1) with the pair of speech stimuli /ba/ and /da/, and 2) with the pair of speech stimuli /ba/ and /ma/. The order of the experiments was randomized, and the procedures are described below.

For the CAEP experiments, the participant remained in an acoustically treated booth, sitting comfortably in a reclining chair, and was asked to watch a subtitled movie on a tablet in silent mode. The equipment used was the Opti-Amp 8008 from Intelligent Hearing Systems (IHS). Electrodes with a concave gold-plated contact area were placed at Cz/A1-A2 (vertex/right and left earlobes), with the ground electrode at Fpz (forehead). The stimuli were presented monaurally, to the right ear only, in a random proportion of 50% for each of the two stimuli, totaling 100 stimuli per experiment. Responses were recorded in a 500-millisecond window, with a band-pass filter of 1-30 Hz, amplification of 25,000x, alternating polarity, and a stimulation rate of 0.7 stimuli per second. Speech stimuli were presented at an intensity of 70 dB SPL.
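For readers who wish to reproduce the averaging off-line, the sketch below applies the recording parameters described above (1-30 Hz band-pass, 500 ms analysis window) to a hypothetical continuous single-channel trace and set of stimulus-onset samples. It is a minimal illustration, not the processing performed by the IHS system; the sampling rate, variable names, and absence of artifact rejection are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000          # assumed sampling rate (Hz); not specified in the study
WINDOW_MS = 500    # analysis window described in the text

def average_caep(eeg_uv, onset_samples, fs=FS):
    """Band-pass filter (1-30 Hz) a continuous single-channel trace and
    average the 500 ms epochs that follow each stimulus onset."""
    b, a = butter(2, [1.0, 30.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, eeg_uv)

    n_samples = int(WINDOW_MS * fs / 1000)
    epochs = [filtered[s:s + n_samples]
              for s in onset_samples
              if s + n_samples <= len(filtered)]
    return np.mean(epochs, axis=0)   # averaged cortical response

# Example with synthetic data: 100 stimuli, as in each experiment of the study.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.normal(0, 10, size=FS * 180)        # 3 minutes of noise as placeholder EEG
    onsets = np.arange(100) * int(FS / 0.7)       # approx. 0.7 stimuli per second
    onsets = onsets[onsets + int(0.5 * FS) < len(eeg)]
    print(average_caep(eeg, onsets).shape)
```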

The stimuli were naturally spoken phonemes, lasting 180 milliseconds, recorded by native female speakers of Brazilian Portuguese. They were extracted from a stable portion of the utterance in Praat® (version 4.2.31), at 48 kHz and 16 bits, and later saved in WAV format for insertion into the software. Temporal and frequency-domain representations are shown in Figure 1.
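The authors performed this editing in Praat; as a rough equivalent, the sketch below cuts a 180 ms segment from a recording and saves it as a 16-bit WAV file at the source sampling rate, using the soundfile library. The file names and the segment start time are hypothetical.

```python
import numpy as np
import soundfile as sf

# Hypothetical input recording of the syllable; path and start time are examples only.
audio, sr = sf.read("ma_recording.wav")          # expects a 48 kHz source file
if audio.ndim > 1:
    audio = audio.mean(axis=1)                   # mix down to mono if needed

start_s = 0.050                                  # start of the stable portion (assumed)
dur_s = 0.180                                    # 180 ms, as described in the study
segment = audio[int(start_s * sr): int((start_s + dur_s) * sr)]

# Short fade-in/out to avoid clicks at the cut points (an editing choice, not from the paper).
fade = int(0.005 * sr)
segment[:fade] *= np.linspace(0, 1, fade)
segment[-fade:] *= np.linspace(1, 0, fade)

sf.write("ma_stimulus.wav", segment, sr, subtype="PCM_16")   # 16-bit WAV output
```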

Figure 1:
Temporal and frequency-domain representations of the stimuli

Data Analysis

Data were tabulated and processed with the Statistical Package for the Social Sciences (SPSS, version 23.0). Tabular and graphical presentations, means, standard deviations, and hypothesis tests were used to analyze the data.

After the data were characterized with descriptive statistical techniques, the Kolmogorov-Smirnov test was applied to check the normality of the distributions of the variables. Student's t-test for paired data was then used to compare the responses evoked by the two stimuli, in the case of variables with normal distribution. Values were considered significant when p < 0.05.
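A minimal sketch of this analysis pipeline in Python is shown below, assuming the latencies for each stimulus are stored in paired arrays; the values and variable names are illustrative only, and the study itself used SPSS.

```python
import numpy as np
from scipy import stats

# Hypothetical paired latencies (ms) of one component, e.g. P3, for /da/ and /ma/ (n = 19).
lat_da = np.array([300, 312, 295, 305, 318, 290, 308, 301, 296, 310,
                   299, 315, 303, 307, 292, 311, 298, 304, 309], float)
lat_ma = lat_da - np.random.default_rng(1).normal(8, 4, lat_da.size)

# Kolmogorov-Smirnov test against a normal distribution with the sample's own parameters.
for name, x in (("da", lat_da), ("ma", lat_ma)):
    ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"/{name}/ KS p = {ks.pvalue:.3f}")

# Paired Student's t-test, significant when p < 0.05 (as in the study).
t = stats.ttest_rel(lat_da, lat_ma)
print(f"paired t = {t.statistic:.2f}, p = {t.pvalue:.4f}")
```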

Results

The sample consisted of 19 participants, of whom 13 (68%) were females. Participants were aged from 18 to 25 years (mean 22.60 years, standard deviation 1.79), and 18 of them presented right cerebral dominance. MoCA scores ranged from 26 to 30 points (mean 27.50, standard deviation 1.19). Regarding educational level, 3 participants (16%) had already completed higher education and 16 (84%) were undergraduate students.

With respect to the ABR, mean latencies of 1.65 ms (standard deviation 0.11), 3.85 ms (standard deviation 0.15), and 5.76 ms (standard deviation 0.2) were found for waves I, III, and V, respectively.

The distribution of mean hearing thresholds by frequency is shown in Figure 2. Speech recognition thresholds had a mean of 18.75 dB HL (standard deviation 0.71) in the right ear and 16.19 dB HL (standard deviation 0.97) in the left ear. Speech discrimination scores were 100% for most participants.

Figure 2:
Profile of the mean hearing thresholds, by frequency and ear

The normality of the samples, regardless of sex, was checked with the Kolmogorov-Smirnov test; the distributions were homogeneous and normal. Thus, the parametric Student's t-test was used for the paired comparisons.

Figure 3 shows the P1-N1-P2-N2-P3 complexes, comparing /ba/(1) (test performed with /ba/ and /da/) and /ba/(2) (test performed with /ba/ and /ma/). In Figure 4, the same complex can be observed, but evoked by the stimuli /da/ and /ma/. The results express the mean values found and their respective standard deviations.

Figure 3:
Latency and amplitude of the P1-N1-P2-N2-P3 Complex evoked by /ba/(1) and /ba/(2)

Figure 4:
Latency and amplitude of the P1-N1-P2-N2-P3 Complex evoked with /da/ and /ma/

As shown in Figure 3, there were no significant differences (p > 0.05) in latencies or amplitudes between /ba/(1) and /ba/(2) for any of the analyzed peaks (P1, N1, P2, N2, and P3).

Figure 4 shows that, on average, all peak latencies (P1, N1, P2, N2, and P3) were significantly lower when responses were evoked by the /ma/ stimulus. There were no significant differences in amplitude between the stimuli /da/ and /ma/. The p-values for each comparison are shown in Tables 1 and 2.

Table 1:
P-values for each of the paired comparisons regarding latency values
Table 2:
P-values of paired comparisons regarding amplitude values

Figure 5 shows the grand average of the cortical auditory evoked potentials elicited by the phonemes /da/ and /ma/.

Figure 5:
Grand average of cortical auditory evoked potentials elicited by the speech stimuli /ma/ and /da/

Discussion

Discussion of Methods

During the sample selection process, five incomplete recordings were excluded because paired data were required. This requirement is related to the fact that cortical sensory processing, even for identical stimuli, shows large variability among normal subjects13 and can be influenced by gender and age14.

Participants who failed to complete all CAEP stimuli in a single session were excluded because of the possibility of changes in the P2 component. Ross and Tremblay (2009)15 and Tremblay et al. (2014)16 suggest that mere exposure to a stimulus during baseline EEG recording sessions, even in the absence of training, could contribute to increased P2 amplitude. For this reason, the stimulus pairs were also randomized.

To compare the two stimuli of interest, /da/ and /ma/, /ba/ was chosen as a control stimulus in both conditions, so that the behavior of the two phonemes could be evaluated under equal testing conditions. The phoneme /ba/, presented in both situations (/ba/(1) with /da/, and /ba/(2) with /ma/), did not lead to significantly different results; differences appeared only between the stimuli of interest (/da/ and /ma/) (Tables 1 and 2), indicating that the testing conditions were equivalent and the differences were physiological.

For the CAEP assessment, the chosen proportion of presentation was 50% for each stimulus, out of a total of 100. Presenting the stimuli at equal rates favors the visibility of the individual characteristics of each stimulus, without an attention effect directed to one of the stimuli (the rare one), as occurs in the traditional oddball paradigm17.
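To make the contrast concrete, the sketch below builds a randomized 100-trial sequence in the equal-probability design used here and, for comparison, in a classic oddball design; the 80/20 oddball split is a common textbook value, not taken from this study, and the stimulus labels are illustrative.

```python
import random

def build_sequence(standard, target, n_trials=100, p_target=0.5, seed=42):
    """Return a shuffled list of stimulus labels with the requested target proportion."""
    n_target = round(n_trials * p_target)
    seq = [target] * n_target + [standard] * (n_trials - n_target)
    random.Random(seed).shuffle(seq)
    return seq

# Equal-probability design used in this study: 50 /ba/ and 50 /ma/ per experiment.
equal_prob = build_sequence("ba", "ma", p_target=0.5)

# Classic oddball design for comparison (80% standard, 20% rare target).
oddball = build_sequence("ba", "ma", p_target=0.2)

print(equal_prob.count("ma"), oddball.count("ma"))   # 50 vs 20 targets
```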

Discussion of the Results

The latency values of the P1, N1, P2, N2, and P3 components found here were similar to values reported in the literature: P1 between 54 and 75 ms, N1 between 80 and 150 ms, P2 between 145 and 200 ms, N2 between 180 and 250 ms18, and P3 between 220 and 350 ms19. However, the amplitude values found here were smaller than those reported in the literature. Several factors may influence this variation in amplitude, such as body temperature, time of day, food intake shortly before the examination, season, and even personality factors20.

The time taken to perceive a stimulus can be observed in latency recordings21. Here, the stimulus /ma/ showed a shorter perception time, revealed by CAEP latencies that were, on average, lower than those of the /da/ stimulus for all components.

To better explain these results, linguistic aspects of the stimuli need to be considered. Generative theory, for example, treats speech as a sequence of sets of distinctive features and holds that the phonological processes involved in its acquisition are motivated by acoustic perception22. This approach presupposes an innate mechanism responsible for language acquisition, called Universal Grammar, which guides the process of language acquisition in children through its interaction with the linguistic environment in which they are inserted23.

Within this generative framework, Clements (2009)24 proposed principles that determine the constitution of linguistic systems, such as the robustness scale, according to which there is a universal hierarchy of features in which higher contrasts are acquired earlier than lower ones. In this analysis, the labial feature of /m/ sits at the top of the robustness scale, as one of the most robust contrasting features, whereas the [±voice] feature of /d/ occupies a lower, less robust position in the hierarchy25. Thus, /m/ is learned before /d/ and is therefore a more frequently heard and practiced sound.

Taking the phoneme /m/ as an example, in most languages it appears as one of the earliest phonemes acquired by infants, and the word representing the mother usually contains the /m/ phoneme, making it easier for babies to say11. Furthermore, the syllable is repeatedly reinforced throughout early childhood12.

Repeatedly presented auditory stimuli can affect how sound is processed in the listener's brain and thereby modify auditory evoked responses15. Accordingly, studies26 have shown that the capacity for cortical discrimination early in childhood is increased by simple passive sound exposure.

In the present study, the P3 component evoked by /ma/ showed lower latency than that evoked by /da/. This result may be related to the processing of the acoustic characteristics of the sound /ma/, learned early and kept in memory, after comparison of the incoming stimulus with the previously stored neural representation27.

In fact, one of the functions of working memory is to compare the "new" information arriving in the brain through the sensory (auditory) pathways with older information, consolidated and stored in long-term memory since childhood28.

The latency of P3 increases with the difficulty of discriminating the stimulus29. The lower latency therefore indicates that the phoneme /ma/ was more easily discriminated, which may be associated with the ease of identifying "familiar" sounds present in memory. The same was observed in the lower latency of N2, whose latency shows the same positive correlation with the difficulty of discriminating a speech contrast as P330. The N2 component is known to be mixed, linked to the processing of stimulus identification and attention.

The other components evoked in the CAEP responses form the P1-N1-P2 wave complex. In this study, the mean latencies for /ma/ were shorter than those for /da/, as occurred with N2 and P3. P1 is the first positive peak of the complex and is believed to reflect the gating of auditory information passed on to the auditory cortex; N1 reflects the detection of acoustic changes; and P2 reflects auditory processing beyond sensation31.

The results of the present study showed that the /ma/ and /da/ stimuli were acoustically processed in different ways. The perception of consonants occurs through transient acoustic events, which can be perceived separately32; that is, the acoustic analysis proceeds meticulously according to the distinct characteristics of each phoneme.

One study33 reported that the P1-N1-P2 complex reflects the neural representation of perceptually relevant temporal cues, such as changes in voice onset time. Thus, when evoked by different stimuli, as in the present study, the P1-N1-P2 complex reacts in a very distinct way, indicating that this complex is highly dependent on the physical properties of the stimulus used to evoke it.

Research on speech-elicited CAEP is especially interesting because speech perception is the most important social function of the auditory system34. Clinical applications with speech stimuli have already been proposed for various uses, such as verifying amplification in hearing aids35 and assessing the results of auditory training36. Thus, the search for responses that are more sensitive to the stimuli and do not require the active participation of the subject calls for further studies on the phoneme /ma/ in other populations, such as children, and using other CAEP components.

Conclusion

Cortical auditory evoked potentials elicited by the speech stimulus /ma/ had, on average, lower P1-N1-P2-N2 and P3 peak latencies than those elicited by the speech stimulus /da/.

REFERENCES

  • 1
Lean Y, Shan F, Xuemei Q, Xiaojiang S. Effects of mental workload on long-latency auditory-evoked-potential, salivary cortisol, and immunoglobulin A. Neurosci Lett. 2011;491(1):31-4.
  • 2
    Prakash H, Abraham A, Rajashekar B, Yerraguntla K. The effect of intensity on the speech evoked auditory late latency response in normal hearing individuals. J Int Adv Otol. 2016;12(1):67-71.
  • 3
    Squires NK, Squires KC, Hillyard SA. Two varieties of long-latency positive waves evoked by unpredictable auditory stimuli in man. Electroencephalogr Clin Neurophysiol. 1975;38(4):387-401.
  • 4
    Polich J. Theoretical overview of P3a and P3b. In: Polich J, editor. Detection of Change: Event-Related Potential and fMRI Findings. Boston, MA: Kluwer Academic Press; 2003. p.83-98.
  • 5
    Kim JR, Ahn SY, Jeong SW, Kim LS, Park JS, Chung SH et al. Cortical auditory evoked potential in aging: effects of stimulus intensity and noise. Otol Neurotol. 2012;33(7):1105-12.
  • 6
    Rocha CN, Filippini R, Moreira R, Neves IF, Schochat E. Potencial evocado auditivo de tronco encefálico com estímulo de fala. Pró-Fono R. Atual. Cientif. 2010;22(4):479-84.
  • 7
    Kraus N, Nicol T. Brainstem origins for cortical 'what' and 'where' pathways in the auditory system. Trends Neurosci. 2005;28(4):176-81.
  • 8
    Massa CGP, Rabelo CM, Matas CG, Schochat E, Samelli AG. P300 with verbal and nonverbal stimuli in normal hearing adults. Braz J Otorhinolaryngol. 2011;77(6):686-90.
  • 9
    Oppitz SJ, Didonea DD, Silva DD, Gois M, Folgearini J, Ferreira GC et al. Long-latency auditory evoked potentials with verbal and nonverbal stimuli. Braz J Otorhinolaryngol. 2015;81(6):647-52.
  • 10
    Song JH, Banai K, Russo NM, Kraus N. On the relationship between speech- and nonspeech-evoked auditory brainstem responses. Audiol Neurootol. 2006;11(4):233-41.
  • 11
    Jakobson R. Child language, aphasia and phonological universals. Paris: Mouton; 1972[1941].
  • 12
    Vihman M. Word learning and the origins of phonological systems. In: Foster-Cohen S, editor. Language Acquisition. Palgrave Advances in Linguistics. London: Palgrave Macmillan; 2009;15-39.
  • 13
    Wagner M, Roychoudhury A, Campanelli L, Shafer LV, Martin B, Steinschneider M. Representation of spectro-temporal features of spoken words within the P1-N1-P2 and T-complex of the auditory evoked potentials (AEP). Neurosci Lett. 2016;12(614):119-26.
  • 14
    McPherson DL. Late potentials of the auditory system. San Diego: Singular; 1996.
  • 15
    Ross B, Tremblay K. Stimulus experience modifies auditory neuromagnetic responses in young and older listeners. Hear Res. 2009;248(1-2):48-59.
  • 16
    Tremblay KL, Ross B, Inoue K, McClannahan K, Collet G. Is the auditory evoked P2 response a biomarker of learning? Front Syst Neurosci. 2014;8:20-8.
  • 17
    Morlet D, Rubi P, André-Obadia N, Fischer C. The auditory oddball paradigm revised to improve bedside detection of consciousness in behaviorally unresponsive patients. Psychophysiology. 2017;54(11):1644-62.
  • 18
    Hall J. New handbook of auditory evoked responses. Boston (USA): Allyn & Bacon; 2006.
  • 19
Polich J, Howard L, Starr A. Effects of age on the P300 component of the event-related potential from auditory stimuli: peak definition, variation, and measurement. J Gerontol. 1985;40(6):721-6.
  • 20
    Machado CSS, Carvalho ACO, Silva PLG. Caracterização da normalidade do P300 em adultos jovens. Rev Soc Bras Fonoaudiol. 2009;14(1):83-90.
  • 21
    Advíncula KP. Estudo dos potenciais evocados auditivos de longa latência em crianças com desvios fonológicos [Thesis]. Recife: Universidade Católica de Pernambuco; 2004:143.
  • 22
    Lee SH. Fonologia gerativa. Fonologia, fonologias: uma introdução. In: da Hora D, Matzenauer CL, editors. São Paulo: Contexto; 2017.p.31-46.
  • 23
    Chomsky N. Knowledge of language: Its nature, origin and use. New York (USA): Praeger; 1986.
  • 24
    Clements GN. Phonological feature. In: Contemporary Views on Architecture and Representations in Phonology. Cambridge: MIT Press; 2009.p.19-68.
  • 25
    Lazzarotto-Volcão C. Uma proposta de Escala de Robustez para a aquisição fonológica do PB. Letrônica. 2010;3(1): 62-80.
  • 26
    Trainor LJ, Lee K, Bosnyak DJ. Cortical plasticity in 4-month-old infants: specific effects of experience with musical timbres. Brain Topogr. 2011;24:192-203.
  • 27
    Polich J. Updating P300: an integrative theory of P3a and P3b. Clin Neurophysiol. 2007;118(10):2128-48.
  • 28
    Andrade VM, Santos FH, Bueno OFA. Neuropsicologia hoje. São Paulo: Artes Médicas; 2004.
  • 29
    Swink S, Stuart A. Auditory long latency responses to tonal and speech stimuli. J Speech, Lang Hear Res. 2012;55(2):447-59.
  • 30
    Novak GP, Ritter W, Vaughan Jr HG, Wiznitzer ML. Differentiation of negative event-related potentials in an auditory discrimination task. Electroencephalogr Clin Neurophysiol. 1990;75(4):255-75.
  • 31
    Crowley KE, Colrain IM. A review of the evidence for P2 being an independent component process: age, sleep and modality. Clin Neurophysiol. 2004;115(4):732-44.
  • 32
Orduña I, Liu EH, Church BA, Eddins AC, Mercado E. Evoked potential changes following discrimination learning involving complex sounds. Clin Neurophysiol. 2012;123(4):711-9.
  • 33
    Tremblay KL, Piskosz M, Souza P. Effects of age and age-related hearing loss on the neural representation of speech cues. Clin Neurophysiol. 2003;114(7):1332-43.
  • 34
    Digeser FM, Wohlberedt T, Hoppe U. Contribution of spectrotemporal features on auditory event-related potentials elicited by consonant-vowel syllables. Ear Hear. 2009;30(6):704-12.
  • 35
    Tremblay KL, Billings CJ, Friesen LM, Souza PE. Neural representation of amplified speech sounds. Ear Hear. 2006;27(2):93-103.
  • 36
    Tremblay KL, Kraus N, McGee T, Ponton C, Otis B. Central auditory plasticity: Changes in the N1-P2 complex after speech-sound training. Ear Hear. 2001;22(2):79-90.

Publication Dates

  • Publication in this collection
    16 Sept 2022
  • Date of issue
    2022

History

  • Received
    11 Oct 2021
  • Accepted
    05 Aug 2022