
The Effects of Monaural Stimulation on Frequency-Following Responses in Adults Who Can Sing in Tune and Those Who Cannot

Abstract

Introduction

Musicians have an advantage over non-musicians in detecting, perceiving, and processing nonverbal sounds (e.g., environmental sounds and tones) and verbal sounds (e.g., consonants, vowels, and phrases), as well as instrumental sounds. In contrast to the high skill of musicians, there is another group of people who are tone-deaf and have difficulty distinguishing musical sounds or singing in tune. These sounds can originate in different ways, such as from a musical instrument, an orchestra, or the human voice.

Objectives

The objective of the present work is to study frequency-following responses (FFRs) in individuals who can sing in tune and those who sing off tune.

Methods

Electrophysiological responses were recorded in 37 individuals divided into two groups: (i) a control group (CG) of professional musicians, and (ii) an experimental group (EG) of non-musicians.

Results

The two groups were homogeneous regarding age and gender. Compared with the EG, the CG had more homogeneous FFR wave latencies when responses from the right and left ears were compared.

Conclusions

This study showed that monaural stimulation (right or left) in an FFR test is useful for demonstrating impairment of speech perception in individuals who sing off tune. The response of the left ear appears to offer more subtlety and reliability in identifying the coding of speech sounds in individuals who sing off tune.

Keywords
hearing; voice; auditory evoked potential; speech perception

Introduction

It is well known that musicians have an advantage over nonmusicians in detecting, perceiving, and processing sounds. Recent research has shown that this improved ability in sound processing occurs for nonverbal, verbal, and instrumental sounds.[1-3] In contrast to the high skill of musicians, there is another group of people who are tone-deaf and have difficulty distinguishing musical sounds or singing in tune. These sounds can originate in different ways, such as from a musical instrument, an orchestra, or the human voice.[4-7]

Singing off tune may be due to a lack of exposure to music. Although it is extremely difficult to say why certain individuals cannot sing in tune, two probable factors are at work, namely difficulties in sound perception and/or vocal production.[8] Other authors, however, emphasize additional potential problems, such as memory and language.[9] Musicians seem to have a refined ability to process and perceive sounds, while individuals who sing off tune are impaired in these areas.[10]

Off-tune singing is usually assessed by vocal emission techniques, which assume there is a gap in function along the auditory pathway. Based on this assumption, individuals who sing off tune should have both their hearing and their vocal abilities monitored. One way to evaluate and monitor a person's synchronized neural activity in response to sound is through noninvasive electrophysiological testing. Among the different electrophysiological procedures, we highlight the frequency-following response (FFR), which reflects the phase-locked activity of neural populations in the rostral brainstem and can track the fundamental frequency of a sound and its harmonics. Clinically, FFR responses are highly replicable, both within and across individuals.[11,12]

The right hemisphere is predominantly involved in perceiving music: it processes the prosodic and melodic characteristics of a sound, in contrast to language processing, which largely involves the left hemisphere.[13] It would, therefore, be interesting to see whether there is any difference in FFR responses between the right and left ears when they are stimulated monaurally.

Studies on the detection of neurophysiological changes resulting from the processing of speech sounds in individuals who sing off tune are scarce.[14] Our study group hypothesized that there could be a difference in FFR responses between the ears, with better performance in the right ear of individuals who sing in tune compared with those who sing off tune. Thus, the objective of the present work is to study FFR responses in individuals who can sing in tune and those who sing off tune. We used monaural stimulation (sounds supplied to the right and left ears separately) to investigate how speech sounds are coded in subcortical regions in these two kinds of people.

Materials and Methods

Statement of Ethics

This study was approved by the committee for ethics in research under protocol number 1.191.303 (CAAE: 41305515.9.0000.5511). Informed consent was obtained in writing from all participants after an explanation of the nature, purpose, and expected results of the study.

Participants

A total of 37 individuals participated in this study (20 female and 17 male), aged between 20 and 57 years, all seen at an institute for voice treatment. The subjects were divided into two groups according to the inclusion criteria described below.

The control group (CG) consisted of 17 professional musicians (10 females and 7 males) who could sing in tune, and the experimental group (EG) consisted of 20 nonmusicians (10 females and 10 males) who sang off tune. For the purposes of the present study, professional musicians were defined as individuals with musical experience who made their living from music, while nonmusicians were individuals without musical experience whose work was not related to music. To be included, all subjects had to have: (i) air conduction thresholds of up to 20 dB HL at octave frequencies from 0.25 to 8 kHz and bone conduction thresholds of up to 15 dB HL at octave frequencies from 0.5 to 4 kHz; (ii) a type A tympanogram, with compliance between 0.3 and 1.3 mmho and pressure between -100 and +200 daPa, associated with the presence of ipsilateral and contralateral acoustic reflexes in both ears; (iii) a click auditory brainstem response (ABR) with waves I, III, and V present and with I-III, III-V, and I-V interpeak intervals within normal limits in both ears; (iv) no syndromic hearing impairment; and (v) no current or prior psychiatric disorder. The hearing and pitch-matching assessments were performed at the institute for voice treatment.
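As a rough illustration, the audiometric inclusion screen described above could be sketched as follows. This is a hypothetical helper, not part of the study's actual pipeline; the function name and data layout are ours, and the cutoffs simply restate the criteria listed in the text.

```python
# Hypothetical sketch of the audiometric inclusion screen for one ear.
# Thresholds in dB HL, keyed by frequency in kHz; names are illustrative.

AIR_FREQS = [0.25, 0.5, 1, 2, 4, 8]   # air conduction octaves tested
BONE_FREQS = [0.5, 1, 2, 4]           # bone conduction octaves tested

def meets_inclusion_criteria(air, bone, tymp_compliance, tymp_pressure):
    """True if one ear passes the study's audiometric criteria:
    air conduction up to 20 dB HL (0.25-8 kHz), bone conduction up to
    15 dB HL (0.5-4 kHz), and a type A tympanogram (compliance
    0.3-1.3 mmho, pressure -100 to +200 daPa)."""
    if any(air[f] > 20 for f in AIR_FREQS):
        return False
    if any(bone[f] > 15 for f in BONE_FREQS):
        return False
    if not (0.3 <= tymp_compliance <= 1.3):
        return False
    if not (-100 <= tymp_pressure <= 200):
        return False
    return True
```

A subject would be screened per ear, with the ABR and reflex criteria checked separately.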

In addition, the CG was composed of professional musicians whose accurate tuning was confirmed by administration of a pitch-matching test, and the EG was composed of individuals with no musical ability whose tuning errors were likewise confirmed by a pitch-matching test.

Procedures

Audiological Evaluation

  • a. Audiometric evaluation was performed via air conduction at 0.25 to 8 kHz and bone conduction at 0.5 to 4 kHz. Auditory thresholds were considered normal if up to 15 dB for bone conduction and up to 20 dB for air conduction, according to the classification of Davis and Silverman.[15] Testing was performed using an Interacoustics AC 40 audiometer (Grason-Stadler, Eden Prairie, USA).

  • b. Speech recognition threshold (SRT). A list of disyllables was adopted, and the final result was the intensity at which the participant scored 50% of the words presented.

  • c. The speech recognition index (SRI) was tested at 40 dB above the mean tonal threshold at 0.5, 1, and 2 kHz using a list of monosyllabic words; it was considered normal if the percentage of correct answers was between 88 and 100%.

  • d. Immittanciometry (tympanometry and acoustic reflexes). Tympanometry was performed with a 226 Hz probe tone. Ipsilateral and contralateral acoustic reflexes were probed at frequencies of 0.5 to 4 kHz. Normal subjects presented a maximum compliance peak at atmospheric pressure (0 daPa) and an equivalent volume of 0.3 to 1.3 mL, according to the proposal of Jerger (1970).[16] Immittanciometry was performed using an Interacoustics AT 235h impedance audiometer (Grason-Stadler, Eden Prairie, USA). All equipment was calibrated according to the ISO-389 and IEC-645 standards. Subjects who had normal responses in the basic audiological evaluation were then tested by auditory electrophysiology.

Pitch-matching Tests

A pitch-matching test was administered individually to the participants in a quiet environment, with sound stimuli presented under free-field conditions at a self-selected normal loudness. In task 1, the individual had to listen to an isolated musical tone and then immediately repeat it vocally, a task repeated with five different tones. In task 2, the individual had to listen to a 3-tone sequence and then immediately repeat the sequence vocally, a task again repeated using five different sequences. The vocal reproductions were captured directly into a portable computer by means of a head-mounted microphone with a flat frequency response, placed at 45° and 5 cm from the mouth of the participant. The samples were recorded using Sound Forge software version 4.5c and imported into Vocalgrama 1.8i (CTS Informática, Pato Branco, PR, Brazil).

Pitch-matching Analysis

All voice samples were subjected to computerized acoustic analysis by means of the Vocalgrama software (CTS Informática, Paraná, Brazil). Vocalgrama uses autocorrelation to determine F0; a filter available in the software was used to reduce artifacts. The frequency of an individual's vocal imitation was compared with the frequency of the original tone. A match was considered correct when the reproduction had the same fundamental frequency as the original to within a semitone (►Fig. 1a), and the individual was then considered to have accurate pitch-matching. When the vocal imitation and the original tone had different frequencies, the match was considered wrong (►Fig. 1b). Participants who were able to sing the sequences of tones with 100% accuracy were considered able to sing in tune, whereas participants who were unable to correctly repeat the sequences were considered to sing off tune.[17] Fundamental frequency extraction was performed offline.
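As an illustration, a simplified autocorrelation-based F0 estimate and the within-a-semitone matching rule described above might be sketched as follows. This is a minimal stand-in, not the Vocalgrama algorithm; the one-semitone tolerance follows the text, while the search range and function names are our assumptions.

```python
import numpy as np

def estimate_f0_autocorr(signal, fs, fmin=60.0, fmax=1000.0):
    """Estimate F0 by picking the autocorrelation peak whose lag falls
    inside a plausible vocal range (a simplified stand-in for the
    autocorrelation method the text attributes to Vocalgrama)."""
    signal = signal - np.mean(signal)
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # lag bounds for fmax..fmin
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def semitone_distance(f_sung, f_target):
    """Signed distance in semitones between imitation and target tone."""
    return 12 * np.log2(f_sung / f_target)

def is_in_tune(f_sung, f_target, tol=1.0):
    """Match is 'correct' if the reproduction lies within one semitone
    of the target (tolerance per the criterion described in the text)."""
    return abs(semitone_distance(f_sung, f_target)) <= tol
```

A production analysis would add voicing detection and the artifact filter mentioned above; this sketch only shows the core comparison.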

Fig. 1
Example of the computerized acoustic evaluation. (A) Correct tuning in the pitch-matching test and (B) incorrect tuning in the pitch-matching test (Vocalgrama 1.8i – CTS Informática).

Electrophysiological Evaluation

Electrophysiological evaluation was conducted using Biologic Navigator Pro equipment (Natus, Middleton, USA) in an acoustically treated, soundproof, and electrically shielded room. Subjects were seated comfortably in a reclining chair. The skin of the subject's scalp was cleaned with abrasive paste before the electrodes were fixed in place with conductive paste and adhesive tape. Electrode impedance was kept below 3 kΩ and interelectrode impedance below 2 kΩ. The electrodes were positioned according to the 10-20 system: active electrode at the vertex (Cz), reference electrode on the ipsilateral mastoid, and ground electrode on the contralateral mastoid.[18] The right and left ears were assessed separately. The acquisition parameters were: (a) click-ABR: click stimulus, 0.1 ms duration, rarefaction polarity, 80 dB nHL intensity, rate of 19.3/s, 2,000 sweeps, 2 replications, 10 to 1,500 Hz filter, 10.66 ms window, ER-3A insert earphones; and (b) FFR: speech stimulus, 40 ms duration, alternating polarity, 80 dB SPL intensity, rate of 10.9/s, 3,000 sweeps, 2 replications, 10 to 200 Hz filter, 85.33 ms window, ER-3A insert earphones. During testing, subjects were instructed to keep their eyes closed to avoid artifacts. If necessary, the subject's position was adjusted to ensure stable recording conditions. Runs containing more than 10% artifacts were repeated.
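For readers implementing a comparable protocol, the two parameter sets above can be collected into a single configuration structure. The values restate what is reported in the text; the field names are ours.

```python
# The stimulation/recording parameters reported above, gathered into one
# mapping (values as stated in the text; key names are illustrative).
PROTOCOLS = {
    "click_ABR": {
        "stimulus": "click", "duration_ms": 0.1, "polarity": "rarefaction",
        "intensity": "80 dB nHL", "rate_per_s": 19.3, "sweeps": 2000,
        "replications": 2, "filter_hz": (10, 1500), "window_ms": 10.66,
        "transducer": "ER-3A insert",
    },
    "FFR": {
        "stimulus": "speech /da/", "duration_ms": 40, "polarity": "alternating",
        "intensity": "80 dB SPL", "rate_per_s": 10.9, "sweeps": 3000,
        "replications": 2, "filter_hz": (10, 200), "window_ms": 85.33,
        "transducer": "ER-3A insert",
    },
}
```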

All analyses were performed offline, and the response waveforms were visually identified and manually marked by an audiologist who was blinded to each participant’s age, gender, and group (CG or EG).

The ABR responses were recorded from the right and left ears separately at 80 dB nHL. Two waveforms were collected to verify reproducibility. The presence and absolute latencies of waves I, III, and V at 80 dB nHL were analyzed, as well as the I-III, III-V, and I-V interpeak intervals, according to the normality criteria of the Biologic Navigator Pro system (Natus, Middleton, USA).

The analysis was performed in the time domain. Latency and amplitude values of the seven waves elicited by the syllable /da/ (V, A, C, D, E, F, and O) were based on the analysis criteria of previously published studies.[14,19] If a wave was not detected, it was described as absent, and the data for this wave were not analyzed. In addition, the VA complex was analyzed in terms of: (i) the slope of the VA complex (µV/ms), which is related to the temporal synchronization of the response generators; and (ii) the area of the VA complex (µV × ms), which is related to the amount of neural activity contributing to generation of the wave.[19]
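The two VA-complex measures can be made concrete with a short sketch. The slope formula follows directly from the definition (amplitude change over the V-A interval); the area computation is one plausible operationalization (a rectified trapezoidal integral between the V and A latencies), since the source does not give an exact formula.

```python
import numpy as np

def va_slope(lat_v_ms, amp_v_uv, lat_a_ms, amp_a_uv):
    """Slope of the VA complex (uV/ms): amplitude drop from peak V to
    trough A divided by the V-to-A latency interval."""
    return (amp_a_uv - amp_v_uv) / (lat_a_ms - lat_v_ms)

def va_area(waveform_uv, t_ms, lat_v_ms, lat_a_ms):
    """Area of the VA complex (uV x ms): trapezoidal integral of the
    rectified waveform between the V and A latencies (our assumed
    operationalization, not necessarily the one used in the study)."""
    mask = (t_ms >= lat_v_ms) & (t_ms <= lat_a_ms)
    w, t = np.abs(waveform_uv[mask]), t_ms[mask]
    # manual trapezoid rule: mean of adjacent samples times the step
    return float(np.sum((w[:-1] + w[1:]) / 2 * np.diff(t)))
```

A steeper (more negative) slope indicates a sharper, better-synchronized onset response.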

Statistical Analysis

To compare the two groups (in tune and off tune) across each wave, analysis of variance (ANOVA) was used, testing the age and gender variables as well as their interactions. The variables group, gender, and age were fixed, with two levels each. The ANOVA used the Fisher-Snedecor distribution to determine whether there was a significant difference among the groups or their interactions. Since the FFR response presents seven wave peaks, which are also related to each other, the p-values from the ANOVA were adjusted for multiple comparisons using the false discovery rate (FDR) correction. To test the homogeneity of the sample, the Pearson chi-squared test was applied. The level of significance was set at 5% (p < 0.05). The statistical analyses were conducted in R (www.r-project.org).
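The FDR correction named above is most commonly the Benjamini-Hochberg procedure, as implemented by R's `p.adjust(method = "BH")`. A minimal sketch, assuming that is the variant used:

```python
def fdr_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, equivalent to R's
    p.adjust(method = "BH"). Walk the p-values from largest to
    smallest, scaling each by m/rank and enforcing monotonicity."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i], reverse=True)
    adjusted = [0.0] * m
    running_min = 1.0
    for rank_from_top, i in enumerate(order):
        rank = m - rank_from_top              # 1-based rank when sorted ascending
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

With seven correlated wave peaks per comparison, this keeps the expected proportion of false positives among the declared significant waves at the chosen 5% level.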

Results

Table 1 shows a statistical description of the demographic data based on the variables age and gender of individuals who could sing in tune and those who could not. There was homogeneity between the two groups regarding age and gender.

Table 1
Statistical description of demographic data based on the variables age and gender between groups

In the analyses of latency and amplitude responses between the right and left ears, there was no statistically significant difference, but important information could be drawn from the wave distributions. ►Fig. 2 shows the distribution of the latency values of waves V, A, C, D, E, F, and O between the right and left ears in the studied groups. The scatter plot for latency compares the right and left ears; it shows a fairly homogeneous distribution of all waves in the group of individuals who could sing in tune, whereas in the group who sang off tune there was greater dispersion in the latency of all FFR waves. In the latter group, there was also a gradual increase in the standard deviation for waves with higher latency.
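The dispersion pattern described here can be quantified per subject as the right-minus-left latency difference for each wave, with the group standard deviation of those differences serving as a simple interaural homogeneity index. This is an illustrative measure of our own, not the statistic computed in the study.

```python
import statistics

def interaural_diffs(right_ms, left_ms):
    """Per-subject right-minus-left latency differences for one wave."""
    return [r - l for r, l in zip(right_ms, left_ms)]

def dispersion(right_ms, left_ms):
    """Sample standard deviation of the interaural differences: a simple
    index of how (in)homogeneous a group's ear-to-ear latencies are."""
    return statistics.stdev(interaural_diffs(right_ms, left_ms))
```

Under this index, a group like the CG with near-identical ears yields values near zero, while wide ear-to-ear variation, as described for the EG, inflates it.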

Fig. 2
Comparison of latency values and ears between groups.

Table 2 displays a comparison between the CG and EG in terms of FFR latency (ms) measured in the right ear. There was a statistically significant difference in the latencies of waves A (p = 0.045), C (p = 0.002), D (p = 0.030), and F (p = 0.046) between the groups.

Table 2
Comparison between in-tune and off-tune individuals in terms of frequency-following response latency measured in the right ear

Table 3 displays the comparison between the CG and EG in terms of FFR latency (ms) measured in the left ear. There was a statistically significant difference in the latencies of waves C (p = 0.04), D (p = 0.02), E (p = 0.017), F (p = 0.01), and O (p = 0.018).

Table 3
Comparison between in-tune and off-tune individuals in terms of frequency-following response latency measured in the left ear

Fig. 3 shows the left-right distribution of amplitude values of waves V, A, C, D, E, F, and O. The scatter plot shows similar responses in the lower left quadrant, although there is more scatter in the EG.

Fig. 3
Comparison of amplitude values between groups and ears.

Table 4 compares the CG and EG regarding FFR amplitude (µV) in the right ear. There was no statistically significant difference between the groups.

Table 4
Comparison between in-tune and off-tune individuals in terms of frequency-following response amplitude measured in the right ear

Table 5 compares the CG and EG regarding FFR amplitude (µV) in the left ear. There was a statistically significant difference in amplitude values only for wave O (p = 0.047).

Table 5
Comparison between in-tune and off-tune individuals in terms of frequency-following response amplitude measured in the left ear

Discussion

In the analysis of the sample characterization, considering gender (male and female) and age group, the groups were homogeneous. The CG was composed of 17 participants (male = 7, female = 10; 28-49 years), and the EG was composed of 20 participants (male = 10, female = 10; 20-57 years).

The scatter plot for latency showed that the CG had more homogeneous responses in latency of the FFR waves when responses between the right and left ears were compared. In the EG, there were a greater number of individuals with wide variations in latency values between ears, mainly for waves C (which represents the transition between the consonant and the vowel) and O (which represents the end of the vowel).

The data above corroborate our group's hypothesis, based on previous studies, that there could be a difference in FFR responses between the ears, with better performance in the right ear of individuals who could sing in tune compared with those who could not. An earlier work found better neuronal responses in the right hemispheres of musicians compared with those of nonmusicians when complex musical stimuli were used to elicit FFRs.[20] This relates to the greater sensitivity of musicians in recognizing and discriminating the frequency of tones, owing to the greater number of specialized neurons involved in tonotopic organization.[21] In the present study, however, we again used a complex stimulus, but this time a verbal (speech) stimulus. This type of stimulus carries an important linguistic load that is preferentially processed in the left hemisphere.

When comparing latencies in the right ear between groups, there was a statistically significant difference in the values of waves A, C, D, and F between the CG and the EG. Waves A, C, and F are considered the most stable peaks in FFR responses.[22,23] The EG, therefore, had impaired sound processing speed throughout all waves of the FFR compared with the CG; this might be explained by an impaired perception of rapid changes in the time domain, in addition to a more limited neural representation of harmonics.

In the left ear, a statistically significant difference was observed in the latency of waves C, D, E, F, and O in the EG compared with the CG. The vowel coding region, also called the sustained portion, reflects encoding of the fundamental frequency and the harmonic structure of complex stimuli and has midbrain origins. Research has shown that musicians have a greater subcortical representation of speech syllables than nonmusicians, giving musicians a neural advantage in distinguishing complex sounds, including speech, even under adverse conditions.[24] In the present study, we ascertained that there was a correlation between detuning and the processing of auditory information, showing that individuals who sing off tune, in addition to having problems in vocal production, also have problems processing speech, presumably through inefficient processing by neurons in subcortical and cortical regions.

It is interesting to note that the vast majority of studies have evaluated the FFR with monaural stimulation of the right ear only. This can be explained by the advantage of the right ear, and therefore of the left hemisphere, in the processing of verbal sounds.[14,19,20,25] However, some researchers have analyzed FFR assessments considering the monaural responses of the right and left ears separately. They reported that the monaural responses of the right ear seem to be similar to those obtained in the left ear; however, each pointed out that the studies were carried out with individuals considered typical, that is, without speech, language, and/or communication complaints.[26-28] Thus, the present study analyzed the responses of the right and left ears (monaural stimulation), showing that the responses obtained in the left ear seem to be important in the differential diagnosis of individuals who sing in tune from those who do not. Therefore, our study calls attention to care in carrying out and analyzing FFR evaluations in individuals with pathologies; that is, in certain pathologies the evaluations should be performed with monaural stimulation of both the right and left ears. The evaluation of the FFR may allow us to understand how the encoding of speech sounds occurs in different pathological conditions affecting the communicative process.

The brain is divided into two hemispheres, right and left, each responsible for different functions, and there is an important relationship between brain hemisphere and auditory processing. In the vast majority of people, the left hemisphere/right ear (LH/RE) pathway is responsible for understanding speech and language: information received in the right ear is transferred directly to the left (language-dominant) hemisphere. The right hemisphere/left ear (RH/LE) pathway, however, has an important function in processing specific aspects, such as melody, pitch, prosody, and intonation, which are essential for understanding speech. Information arriving at the left ear is initially processed in the right hemisphere and, via the corpus callosum, is forwarded to the left hemisphere.[29] The present study thus suggests that musicians present more efficient and robust processing in the right ear due to their musical experience. In this way, there is probably a decrease in the neural transmission time of auditory information, since meaningfully decoding speech elements is a complex task that involves multiple stages of neural processing before reaching the auditory cortex.

Regarding the amplitude values, statistically significant differences were observed only for wave O in the left ear. The amplitude parameters seem to vary widely, and they have little accuracy in distinguishing individuals who can sing in tune from those who cannot. A recent study observed FFR responses only in the right ear of individuals who sing off tune; it also showed that amplitude values do not seem to be effective in identifying poor vocal tuning.[14] These findings corroborate previous FFR studies, which also highlighted that amplitude measures are not very reliable in distinguishing between normal and pathological individuals.[30,31]

Only one study was found in the literature associating the FFR and tuning.[14] That study revealed a difference in neural processing, as reflected in FFR responses to the syllable /da/, between individuals who can sing in tune and those who cannot; however, the analysis was restricted to the responses of the right ear. The authors suggested that an individual with good knowledge and experience of music is more likely to have developed efficient processing of language. Another interesting point highlighted was that the brainstem seems to have an active role in the neural decoding of sounds and that musical experience and sound stimulation throughout life could improve skills across the entire auditory pathway.[14]

The frequency-following response is a type of neurophysiological evaluation that allows investigation and monitoring of the coding of speech sounds in the brainstem and in subcortical and cortical regions. The present study demonstrates that individuals who sing off tune have a deficit in the processing of sound information, which may be one reason their vocal tuning is also negatively affected. These individuals seem to have a weaker neural network for the perception of speech sounds compared with individuals who are able to sing in tune.

Limitation and Future Research

In the present study, there was a predominance of females in the EG. Further studies should include equal numbers of males and females, as well as larger numbers of individuals in both groups. Furthermore, new research should continue to assess the FFR as a useful measure for evaluating and monitoring individuals who sing off tune.

Conclusion

The present study showed that monaural stimulation (right or left) in an FFR test is useful for demonstrating impairment of speech perception in individuals who sing off tune. Alterations were observed (i) in the right ear, where altered latencies of waves A, C, D, and F were associated with normal amplitude values; and (ii) in the left ear, where alterations in the latencies of waves C, D, E, F, and O were associated with an altered amplitude of wave O. The response of the left ear appears to offer more subtlety and reliability in identifying the coding of speech sounds in individuals who sing off tune.

References

  • 1
    Zuk J, Benjamin C, Kenyon A, Gaab N. Behavioral and neural correlates of executive functioning in musicians and non-musicians. PLoS One 2014;9(06):e99868
  • 2
    Strait DL, Slater J, O’Connell S, Kraus N. Music training relates to the development of neural mechanisms of selective auditory attention. Dev Cogn Neurosci 2015;12:94–104
  • 3
    Strait DL, Kraus N. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning. Hear Res 2014;308:109–121
  • 4
    Strait DL, Parbery-Clark A, O’Connell S, Kraus N. Biological impact of preschool music classes on processing speech in noise. Dev Cogn Neurosci 2013;6:51–60
  • 5
    Houaiss A. Dicionário da língua portuguesa. Rio de Janeiro: Objetiva; 2001
  • 6
    Sobreira S. Desafinação vocal. Rio de Janeiro: Musimed; 2003
  • 7
    Moura Fd. Análise do processamento auditivo em cantores afinados e desafinados. São Paulo: Centro Universitário das Faculdades Metropolitanas Unidas; 2008
  • 8
    Mawhinney T. Tone-deafness and low musical abilities: an investigation of prevalence, characteristics and tractability. Kingston: Queen’s University; 1986
  • 9
    Heresniak M. The care and training of adult bluebirds: teaching the singing impaired. J Singing 2004;61(01):9–25, Sep-Oct.
  • 10
    Ishii C, Arashiro PM, Desgualdo L. Ordering and temporal resolution in professional singers and in well tuned and out of tune amateur singers. Pró-Fono 2006;18(03):285–292. Doi: 10.1590/S0104-56872006000300008
  • 11
    Song JH, Nicol T, Kraus N. Test-retest reliability of the speech-evoked auditory brainstem response. Clin Neurophysiol 2011;122(02):346–355
  • 12
    Song JH, Nicol T, Kraus N. Reply to Test-retest reliability of the speech-evoked ABR is supported by tests of covariance. Clin Neurophysiol 2011;122(09):1893–1895
  • 13
    Kimura D. Cerebral dominance and the perception of verbal stimuli. Can J Psychol 1961;15:166–171. Doi: 10.1037/h0083219
  • 14
    Sanfins MD, Gielow I, Madazio G, Honorio F, Bordin T, Skarzynska MB, Behlau M. Frequency following response in adults who can or cannot sing in tune. J Hear Sci 2020;10(03):58–67. Doi: 10.17430/JHS.2020.10.3.6
  • 15
    Davis H, Silverman RS. Hearing and deafness. New York, NY: Rinehart & Winston; 1970
  • 16
    Jerger J. Clinical experience with impedance audiometry. Arch Otolaryngol 1970;92(04):311–324
  • 17
    Moreti F, Pereira LD, Gielow I. Pitch-matching scanning: comparison of musicians and non-musicians’ performance. J Soc Bras Fonoaudiol 2012;24(04):368–373
  • 18
    Jasper HH. The ten-twenty system of the International Federation. Electroencephalogr Clin Neurophysiol 1958;10:371–375
  • 19
    Skoe E, Kraus N. Auditory brain stem response to complex sounds: a tutorial. Ear Hear 2010;31(03):302–324
  • 20
    Siedenberg R, Goodin DS, Aminoff MJ, Rowley HA, Roberts TP. Comparison of late components in simultaneously recorded event-related electrical potentials and event-related magnetic fields. Electroencephalogr Clin Neurophysiol 1996;99(02):191–197
  • 21
    Woods DL, Alho K, Algazi A. Intermodal selective attention: evidence for processing in tonotopic auditory fields. Psychophysiology 1993;30(03):287–295
  • 22
    Russo N, Nicol T, Musacchia G, Kraus N. Brainstem responses to speech syllables. Clin Neurophysiol 2004;115(09):2021–2030
  • 23
    Russo NM, Nicol TG, Zecker SG, Hayes EA, Kraus N. Auditory training improves neural timing in the human brainstem. Behav Brain Res 2005;156(01):95–103
  • 24
    Parbery-Clark A, Anderson S, Hittner E, Kraus N. Musical experience strengthens the neural representation of sounds important for communication in middle-aged adults. Front Aging Neurosci 2012;4:30. Accessed January 11, 2023. Doi: 10.3389/fnagi.2012.00030
  • 25
    Sanfins MD, Hatzopoulos S, Donadon C, et al. An analysis of the parameters used in speech ABR assessment protocols. J Int Adv Otol 2018;14(01):100–105. Doi: 10.5152/IAO/2018.3574
  • 26
    Vander Werff KR, Burns KS. Brain stem responses to speech in younger and older adults. Ear Hear 2011;32(02):168–180
  • 27
    Akhoun I, Moulin A, Jeanvoine A, et al. Speech auditory brainstem response (speech ABR) characteristics depending on recording conditions, and hearing status: an experimental parametric study. J Neurosci Methods 2008;175(02):196–205
  • 28
    Sanfins MD, Borges LR, Ubiali T, et al. Speech-evoked brainstem response in normal adolescent and children speakers of Brazilian Portuguese. Int J Pediatr Otorhinolaryngol 2016;90:12–19
  • 29
    Knecht S, Dräger B, Deppe M, et al. Handedness and hemispheric language dominance in healthy humans. Brain 2000;123(Pt 12):2512–2518
  • 30
    Sanfins MD, Borges LR, Donadon C, Hatzopoulos S, Skarzynski PH, Colella-Santos MF. Electrophysiological responses to speech stimuli in children with otitis media. J Hear Sci 2017;7(04):9–19
  • 31
    Colella-Santos MF, Donadon C, Sanfins MD, Borges LR. Otitis media: long-term effect on central auditory nervous system. BioMed Res Int 2019;2019:8930904. Doi: 10.1155/2019/8930904

Publication Dates

  • Publication in this collection
    09 June 2023
  • Date of issue
    Apr 2023

History

  • Received
    03 Dec 2020
  • Accepted
    20 May 2021
Fundação Otorrinolaringologia R. Teodoro Sampaio, 483, 05405-000 São Paulo/SP Brasil, Tel.: (55 11) 3068-9855, Fax: (55 11) 3079-6769 - São Paulo - SP - Brazil
E-mail: iaorl@iaorl.org