Open Access Original Research
The Levels of Auditory Processing during Emotional Perception in Children with Autism
1 Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology of Russian Academy of Science, 117485 Moscow, Russian Federation
2 Laboratory for the Study of Tactile Communication, Pushkin State Russian Language Institute, 117485 Moscow, Russian Federation
3 Laboratory for Neurocognitive Research, Our Sunny World Center for Children with Autism, 109052 Moscow, Russian Federation
4 Laboratory of Physiology of Sensory Systems, Institute of Higher Nervous Activity and Neurophysiology of Russian Academy of Science, 117485 Moscow, Russian Federation
*Correspondence: caviter@list.ru (Galina V. Portnova)
J. Integr. Neurosci. 2023, 22(5), 112; https://doi.org/10.31083/j.jin2205112
Submitted: 11 March 2023 | Revised: 20 May 2023 | Accepted: 29 May 2023 | Published: 9 August 2023
Copyright: © 2023 The Author(s). Published by IMR Press.
This is an open access article under the CC BY 4.0 license.
Abstract

Background: The perception of basic emotional sounds, such as crying and laughter, is associated with effective interpersonal communication. Difficulties with the perception and analysis of such sounds, which complicate the understanding of emotions at an early developmental age, may contribute to communication deficits. Methods: This study focused on auditory nonverbal emotional perception, including emotional vocalizations with opposite valences (crying and laughter) and a neutral sound (the phoneme “Pᴂ”). We conducted event-related potential analysis and compared peak alpha frequencies (PAFs) for different conditions in children with autism spectrum disorder (ASD) and typically developing (TD) children aged 4 to 6 years (N = 25 for each group). Results: Children with ASD had a lower amplitude of P100 and a higher amplitude of N200 for all types of sounds, and a lower P270 in response to the neutral phoneme. During the perception of emotional sounds, children with ASD demonstrated a single P270 electroencephalography (EEG) component instead of the P200–P3a complex specific to TD children. However, the most significant differences were associated with the response to the emotional valence of the stimuli. The EEG differences between crying and laughter were expressed as a lower amplitude of N400 and a higher PAF for crying compared to laughter and were found only in TD children. Conclusions: Children with ASD showed not only abnormal acoustic perception but also altered emotional analysis of affective sounds.

Keywords
electroencephalography (EEG)
crying
laughter
phoneme
peak alpha frequency
event-related potential (ERP)
N400
late positivity (LP)
P270
1. Introduction

Individuals with autism spectrum disorder (ASD) have trouble recognizing and responding to the emotional and psychological states of other people [1, 2]. People with ASD tend to ignore emotional prosody and focus solely on semantics [3]. Their emotional reactions are usually immature [4] and thus hard to interpret. Different modalities of emotional expression, such as voice pitch and facial and body gestures, are often incongruent [5]. People with ASD differ from their neurotypical peers in their ability to process and understand emotional states conveyed through both facial gesture and voice intonation. Such differences have been found in both children [6] and adults [7]. The ability to properly understand emotional context is essential for adequate social perception and communication. Recent studies have shown that, in the processing of social information, the nonverbal auditory component of a complex multimodal message prevails over the visual component during the perception of an interlocutor’s emotional state [8]. Adults with ASD not only show abnormal processing of emotional prosody [9] but also have difficulty expressing basic emotions in speech [10]. Peculiarities of auditory perception in children with ASD have been shown in neurophysiological research [11, 12]. In particular, event-related potential (ERP) studies have shown altered sensory processing in people with ASD [13]. Children with ASD also show larger variability in auditory ERPs compared with typically developing (TD) peers aged 3 to 9 years [14] and demonstrate impaired perception of emotional speech prosody [15], altered perception of nonverbal emotional sounds [16], and difficulty with the emotional assessment of musical fragments [17]. Other findings have confirmed that both late and early components of ERPs may be an ideal tool to investigate emotional perception and thus could be successfully applied to clinical populations [18, 19, 20, 21]. Previous electroencephalography (EEG) studies have also demonstrated that the frequency characteristics of the EEG can be used to assess the perception of emotional sounds of different valences [22] and are sensitive to a subject’s emotional state during emotional stimulation [23]. Some clinical studies evaluating the association between peak alpha frequency (PAF) and emotional states have shown that PAF is significantly related to depressive symptomatology [24] and to the subjective perception of tonic pain [25].

The study had several goals. The first was to identify EEG traits related to nonspecific auditory perception of emotional and neutral stimuli that differ between children with ASD and TD children. The second was to identify EEG traits that distinguish emotionally significant stimuli from neutral ones. The third was to study differences in the ability to discriminate between emotional stimuli of different valences in children with ASD and their TD peers. The analysis approaches and stimuli were selected in accordance with these goals.

2. Methods
2.1 Participants

We recruited 25 children with ASD and 25 TD children (see Table 1). ASD was diagnosed according to the International Classification of Diseases, 10th revision criteria by a clinical psychologist, who also excluded cognitive or mental impairment. Parents were asked to complete the Childhood Autism Rating Scale (CARS) with the assistance of a clinical psychologist; in the TD group, the CARS was used to rule out ASD. All children in the ASD group were diagnosed with early childhood autism, and severity was assessed with the CARS. Children in the ASD group showed mainly motor, verbal, and play stereotypies and, to a lesser extent, social and dimensional stereotypies. None of the subjects had a history of epilepsy or other seizures. The Wechsler Intelligence Scale for Children, fourth edition (WISC-IV) was used to screen for intellectual disability (see Table 1).

Table 1. Descriptive statistics of demographic data.
Group | n | Age (min / max / mean / SD) | Sex (m / f) | CARS (mean / SD) | WISC-IV (mean / SD)
ASD | 25 | 4 / 5 / 4.84 / 1.8 | 14 / 11 | 40.4 / 9.3 | 87.2 / 10.2
TD | 25 | 4 / 6 / 5.25 / 1.9 | 13 / 12 | 15.4 / 4.1 | 90.1 / 9.5

ASD, autism spectrum disorder; TD, typically developing; CARS, Childhood Autism Rating Scale; SD, Standard deviation; WISC-IV, Wechsler Intelligence Scales for Children, fourth edition.

The inclusion criteria for the TD group included a CARS score <25 (average score, 17.8 ± 2.4) and age from 4 to 6 years old. The exclusion criteria included a history of neurological or mental disorders other than ASD, a history of brain injury or other comorbid conditions, active psychopharmacotherapy, and epileptic activity on EEG. The research protocols were approved by the ethics commission of the Institute of Higher Nervous Activity and Neurophysiology of RAS (Protocol No. 2 from 20.04.2016). Participants’ parents provided written informed consent for study participation.

2.2 Stimuli

The stimuli included recordings of infant crying and laughter vocalizations, which were purchased from internet sound databases (Sound Jay, Sound Library, Freesound, Soundboard). The raw audio files were downsampled to a rate of 44.1 kHz and converted to mono waveform (WAV) files. All files were normalized to a common root mean square (RMS) amplitude and adjusted in stimulus length with WaveLab 10.0 (Steinberg, Hamburg, Germany).

Twenty-four original audio files (13 crying and 11 laughing vocalizations) were selected for perceptual assessment in a pilot experiment by nineteen adult students (average age, 20.1 years; standard deviation = 3.7; range = 19–25; 10 females; none of these subjects participated further in the study). They were asked to rate each of the 24 stimuli, presented in random order, on the following scales (0–10): “unpleasant–pleasant”, “calming–arousing”, and “hardly recognized–well recognized”. After the pilot study, we removed hardly recognized stimuli and selected the sounds with the highest ratings of pleasantness (laughter) and unpleasantness (crying) and with similar ratings of arousal and similar physical characteristics (duration, pitch, and loudness). The phoneme “Pᴂ” was also selected after the same pilot study, as it was the easiest to recognize and had the most emotionally neutral prosody. One stimulus of each kind was used for all participants.

Finally, we presented crying, laughter, and phoneme vocalizations with the following physical parameters: “crying” had a duration of 751 ms, average pitch of 973 Hz, average loudness of 39.8 dB (RMS), maximum loudness of 45.0 dB (RMS), and minimum loudness of 25.7 dB (RMS). Laughter was 755 ms long and had an average pitch of 961 Hz, average loudness of 41.2 dB (RMS), maximum loudness of 47.1 dB (RMS), and minimum loudness of 26.1 dB (RMS). The phoneme “Pᴂ” had a duration of 403 ms, average pitch of 967 Hz, average loudness of 40.5 dB (RMS), maximum loudness of 45.2 dB (RMS), and minimum loudness of 35.9 dB (RMS) (measured with WaveLab 6; Steinberg Media Technologies GmbH, Hamburg, Germany). The sounds were presented using Presentation 22.0 (Neurobehavioral Systems, Inc., Berkeley, CA, USA).
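For readers who want to reproduce the loudness matching, the sketch below illustrates RMS normalization of WAV files in Python. This is only a minimal sketch: the original processing was done in WaveLab, and the target level of -40 dBFS and the file names are hypothetical.

```python
# Illustrative sketch of RMS normalization (not the authors' WaveLab workflow).
# Assumes mono 44.1 kHz WAV files; file names and the -40 dBFS target are hypothetical.
import numpy as np
import soundfile as sf  # assumed third-party dependency


def rms_db(x):
    """Root-mean-square level in dB relative to full scale (dBFS)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)


def normalize_rms(path_in, path_out, target_db=-40.0):
    x, fs = sf.read(path_in)
    if x.ndim > 1:                       # mix down to mono if needed
        x = x.mean(axis=1)
    gain = 10 ** ((target_db - rms_db(x)) / 20)
    y = np.clip(x * gain, -1.0, 1.0)     # avoid clipping after applying gain
    sf.write(path_out, y, fs)


for name in ["crying", "laughter", "phoneme_pae"]:   # hypothetical file names
    normalize_rms(f"{name}_raw.wav", f"{name}_norm.wav")
```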

2.3 Procedure

Crying, laughter, and phoneme sounds were presented in a randomized sequence. Each stimulus was presented 50 times. The interval between stimuli was randomized in the range of 1500–3000 ms. The sound stimuli were presented through loudspeakers, and participants kept their eyes open. The whole procedure took about 20 min.
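The randomization logic of the presentation sequence can be summarized in a short sketch. The actual experiment was run in Presentation 22.0; this Python fragment only reproduces the trial-list construction under the parameters given above.

```python
# Illustrative construction of the randomized trial list with jittered ISIs.
import random

STIMULI = ["crying", "laughter", "phoneme"]   # three sound files, 50 repetitions each
N_REPEATS = 50
ISI_RANGE_MS = (1500, 3000)                   # randomized inter-stimulus interval


def build_trial_list(seed=None):
    rng = random.Random(seed)
    trials = STIMULI * N_REPEATS
    rng.shuffle(trials)                        # fully randomized sequence
    # pair every stimulus with an ISI drawn uniformly from the jitter range
    return [(stim, rng.uniform(*ISI_RANGE_MS)) for stim in trials]


trial_list = build_trial_list(seed=1)
print(len(trial_list), "trials in randomized order")   # 150 trials
```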

2.4 EEG Registration

Participants were placed in a sitting position in an acoustically and electrically isolated chamber during the recording. In the resting-state session, they were instructed to close their eyes, remain calm, and avoid falling asleep or engaging in any movement, speech, or other activity. EEG was recorded using a 19-channel Encephalan EEG amplifier (Medicom MTD, Taganrog, Russian Federation). The amplifier bandpass filter was nominally set to 0.05–70 Hz. Continuous EEG was recorded with 19 AgCl electrodes located according to the International 10–20 system with an average mastoid reference. The sampling rate was 250 Hz, with impedances below 10 kilohms. Eye movements were recorded with additional electrodes located above and below the left eye (for vertical eye movements) and lateral to both lateral canthi (for horizontal eye movements).

2.5 EEG Preprocessing

An independent component analysis (ICA)-based algorithm in the EEGLAB toolbox [26] for MATLAB 7.11.0 (MathWorks, Natick, MA, USA) was used to filter eye movement artifacts out of the continuous EEG corresponding to the resting-state session of each subject. Muscle artifacts were removed by manual data inspection. Finally, we analyzed the data of 50 children. The continuous resting-state EEG of each subject was filtered with a band-pass filter set to 0.5–30 Hz. The artifact-free EEG epochs then underwent a Fast Fourier transform (FFT), which was used to calculate the power spectral density (PSD).
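As an illustration of the filtering and spectral steps, a minimal Python/SciPy sketch is given below. The original pipeline used EEGLAB/ICA in MATLAB; here ICA-based artifact removal is omitted, and the function names and the 2 s Welch segments are assumptions.

```python
# Sketch of the band-pass filtering and PSD steps (illustrative, not the authors' EEGLAB code).
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

FS = 250                      # sampling rate in Hz, as in the recording


def bandpass(eeg, low=0.5, high=30.0, fs=FS, order=4):
    """Zero-phase band-pass filter applied channel-wise (channels x samples)."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)


def power_spectral_density(eeg, fs=FS):
    """PSD of artifact-free EEG via Welch's method (an FFT-based estimate)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # 2 s segments, 50% overlap by default
    return freqs, psd


# usage on simulated data: 19 channels, 60 s of EEG
eeg = np.random.randn(19, FS * 60)
clean = bandpass(eeg)
freqs, psd = power_spectral_density(clean)
```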

2.6 Data Analysis
2.6.1 ERP Analysis

The next stage included analysis with EEGLAB 14 (a MATLAB toolbox). The data were filtered with a 1.6 Hz high-pass filter, a 30 Hz low-pass filter, and a 50 Hz notch filter. The reference was changed to a common average reference. Ocular and muscular artifacts were removed with ICA. The EEG was segmented into 1000 ms epochs starting 200 ms before stimulus onset. Individual ERP component traits (e.g., latency and amplitude) were extracted for further analyses. We measured and analyzed the amplitudes and latencies of the following ERP components: P100, N200, P200, P3a, late positivity (LP), and N400. Each component was selected for each subject based on the topographical distribution of the grand-averaged ERP activity. In cases where the peak was not detected at the selected electrodes and latencies, we considered that the participant did not have the ERP component. Fz, F3, F4, Cz, C3, and C4 electrodes were used for the analysis of the P100 component (latency window 50–150 ms), N200 component (120–220 ms), P200 component (180–300 ms), and P3a (250–400 ms). Cz, C3, C4, Pz, P3, and P4 electrodes were selected for analysis of the LP (450–650 ms) and N400 (400–600 ms) components. These electrodes were chosen because the ERP components of interest typically have fronto-central or centro-parieto-occipital localizations [27, 28].
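A minimal sketch of how a component's amplitude and latency can be extracted from an averaged ERP within the latency windows listed above is shown below. This is illustrative Python, not the authors' EEGLAB code; the epoch layout, channel indices, and helper names are assumptions.

```python
# Illustrative extraction of ERP peak amplitude and latency within a component window.
# Epochs are assumed to be baseline-corrected arrays of shape trials x channels x samples.
import numpy as np

FS = 250
PRESTIM_MS = 200      # epochs run from -200 ms to +800 ms around stimulus onset

COMPONENT_WINDOWS = {   # latency windows in ms after stimulus onset (from the text)
    "P100": (50, 150), "N200": (120, 220), "P200": (180, 300),
    "P3a": (250, 400), "LP": (450, 650), "N400": (400, 600),
}


def peak_in_window(erp, ch_idx, window_ms, positive=True):
    """Peak amplitude (µV) and latency (ms) of an ERP averaged over selected electrodes."""
    t0 = int(round((window_ms[0] + PRESTIM_MS) * FS / 1000))
    t1 = int(round((window_ms[1] + PRESTIM_MS) * FS / 1000))
    curve = erp[ch_idx, :].mean(axis=0)[t0:t1]       # average over the chosen channels
    idx = int(np.argmax(curve) if positive else np.argmin(curve))
    return curve[idx], window_ms[0] + idx * 1000 / FS


# usage on simulated data: average 50 epochs (19 channels x 250 samples = 1 s)
epochs = np.random.randn(50, 19, 250)
erp = epochs.mean(axis=0)
fronto_central = [2, 3, 4]                            # e.g., Fz, F3, F4 indices (assumed)
amp, lat = peak_in_window(erp, fronto_central, COMPONENT_WINDOWS["P100"])
```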

The LP component could be clearly identified in the individual ERPs of TD children only for crying sounds; it could not be identified for laughter or the phoneme in the TD group, or for any sound in children with ASD. To analyze differences in the LP component, we calculated the area between the curve and y = 0 over the 450–650 ms latency window (SLP). If the curve did not rise above y = 0 in this window, SLP was set to 0. To quantify ERP differences between crying and laughter, we also calculated the area between the crying and laughter curves over the 400–650 ms latency window (Sdiff). If the curve over 400–650 ms was more positive for laughter than for crying (as in some children with ASD), Sdiff was negative. SP150–450 was calculated as the area between the curve and y = 0 over the 150–450 ms latency window. Once the ERP components had been identified for each participant (or marked as absent), we also calculated the latency and amplitude of each component at each electrode to evaluate the topography of differences.
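The area measures described above can be illustrated with the following sketch, assuming baseline-corrected ERP curves in µV sampled at 250 Hz with a 200 ms pre-stimulus interval. The treatment of segments below y = 0 follows our reading of the text, and the function names are ours.

```python
# Minimal sketch of the SLP, Sdiff, and SP150-450 area measures (µV*ms).
import numpy as np

FS, PRESTIM_MS = 250, 200


def _segment(curve, start_ms, end_ms):
    """Slice of an ERP curve between two post-stimulus latencies (in ms)."""
    t0 = int(round((start_ms + PRESTIM_MS) * FS / 1000))
    t1 = int(round((end_ms + PRESTIM_MS) * FS / 1000))
    return curve[t0:t1]


def positive_area(curve, start_ms, end_ms):
    """Area (µV*ms) between the curve and y = 0, counting only the part above zero."""
    seg = np.clip(_segment(curve, start_ms, end_ms), 0, None)
    return seg.sum() * (1000.0 / FS)      # rectangle-rule integration, dx in ms


def s_diff(curve_crying, curve_laughter):
    """Signed area (µV*ms) between the crying and laughter curves over 400-650 ms;
    negative when laughter is more positive than crying."""
    diff = _segment(curve_crying, 400, 650) - _segment(curve_laughter, 400, 650)
    return diff.sum() * (1000.0 / FS)


# usage on simulated averaged curves (1000 ms epoch = 250 samples)
crying, laughter = np.random.randn(250), np.random.randn(250)
slp = positive_area(crying, 450, 650)        # SLP
sp_150_450 = positive_area(crying, 150, 450) # SP150-450
sdiff = s_diff(crying, laughter)             # Sdiff
```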

2.6.2 Peak Alpha Frequency (PAF)

To calculate the PAF, we selected 1.25 s EEG fragments beginning at stimulus onset, which yielded 47.8 ± 1.4 EEG fragments (trials) for laughter and 46.9 ± 1.6 fragments for crying. In a similar way, we selected 48.2 ± 1.2 fragments for phonemes. After visual inspection, 42 to 50 trials of each stimulus type per participant were retained for further analysis. The fixed fragment length was intended to reduce the possible effect of the difference in duration between the emotional sounds and the phoneme. We also selected 50 resting-state EEG fragments of 1.25 s and 50 of 0.9 s. The calculations were made in MATLAB (MathWorks, Natick, MA, USA) using a Hamming window with 50% overlap between contiguous sections; spectra were computed for each trial separately and then averaged.

PAF identification was based on the FFT. The PAF was estimated as the frequency with maximal PSD within the 8–13 Hz range, given the frequency resolution of the spectra. If no peak was present, the value was not counted. Because there were no differences between resting-state PAFs (rsPAFs) calculated from resting-state EEG fragments of different durations (p = 0.92), we averaged the rsPAFs for each subject and used the mean for further analysis.
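A sketch of this PAF estimation is given below. It is illustrative Python; the authors' MATLAB implementation may differ in segment length and other windowing details, which are assumptions here.

```python
# Illustrative PAF estimation: per-trial Welch spectra (Hamming window, 50% overlap),
# averaged over trials, then the frequency of maximal PSD within 8-13 Hz.
import numpy as np
from scipy.signal import welch, find_peaks

FS = 250


def trial_psd(trial, fs=FS):
    """Welch PSD of one 1.25 s trial with a Hamming window and 50% overlap."""
    nperseg = min(len(trial), int(0.5 * fs))            # 0.5 s segments (assumption)
    return welch(trial, fs=fs, window="hamming", nperseg=nperseg,
                 noverlap=nperseg // 2)


def peak_alpha_frequency(trials, fs=FS, band=(8.0, 13.0)):
    """PAF = frequency of maximal trial-averaged PSD in the alpha band,
    or None when no local peak exists within the band."""
    freqs, _ = trial_psd(trials[0], fs)
    mean_psd = np.mean([trial_psd(t, fs)[1] for t in trials], axis=0)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peaks, _ = find_peaks(mean_psd[in_band])             # require a genuine local maximum
    if peaks.size == 0:
        return None                                       # no peak -> value not counted
    return float(freqs[in_band][peaks[np.argmax(mean_psd[in_band][peaks])]])


# usage: ~48 trials of 1.25 s each for one electrode and condition
trials = np.random.randn(48, int(1.25 * FS))
paf = peak_alpha_frequency(trials)
```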

2.6.3 Statistical Analyses

Statistical analyses were conducted with STATISTICA version 13 (StatSoft Inc., Tulsa, OK, USA). Differences between groups were assessed with repeated measures analysis of variance (ANOVA) with Tukey’s post hoc comparisons (p < 0.05). Repeated measures ANOVA on the amplitude and latency of each component was conducted across emotional vocalizations (crying and laughter) and phonemes. Degrees of freedom for F-ratios were corrected according to the Bonferroni method. For statistics on the PAF, repeated measures ANOVA on the merged PAF values was applied. Correlation analysis between EEG parameters (Sdiff, SP150–450, SLP, PAF) was conducted for all children together and for each group separately. Spearman’s rank correlation was used to evaluate the relationships between EEG values (p < 0.05). Post hoc comparisons were adjusted for multiple comparisons with Bonferroni correction. All analytical steps were performed with STATISTICA version 13 and scripts implemented in MATLAB R2018b.
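As an illustration of the correlation step, the sketch below computes pairwise Spearman correlations with a Bonferroni-corrected significance threshold. It is illustrative Python on simulated data; the original analysis used STATISTICA and MATLAB, and only the variable names follow the text.

```python
# Sketch of Spearman correlations with a Bonferroni-corrected threshold (illustrative).
import numpy as np
from scipy.stats import spearmanr


def spearman_table(variables, alpha=0.05):
    """Pairwise Spearman correlations over a dict of equally long 1-D arrays,
    with the per-test alpha Bonferroni-corrected for the number of pairs."""
    names = list(variables)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    alpha_corr = alpha / len(pairs)                   # Bonferroni-corrected threshold
    rows = []
    for a, b in pairs:
        r, p = spearmanr(variables[a], variables[b])
        rows.append((a, b, r, p, p < alpha_corr))
    return rows


# usage on simulated per-subject EEG measures (names follow the text)
rng = np.random.default_rng(0)
n = 25
data = {"SP150_450": rng.normal(size=n), "Sdiff": rng.normal(size=n),
        "SLP": rng.normal(size=n), "dPAF": rng.normal(size=n)}
for a, b, r, p, sig in spearman_table(data):
    print(f"{a} vs {b}: R = {r:.2f}, p = {p:.3f}, significant = {sig}")
```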

3. Results
3.1 Group Differences in ERPs

The ERPs of the TD and ASD groups had similar structures; however, the ERP components of children with ASD and TD children showed some specific differences (Fig. 1). The differences in ERP components between children with ASD and TD children were most pronounced in the central, occipital, and parietal areas (see Fig. 1). The positive component P100 had a significantly higher amplitude in TD children for all types of stimuli (F(1, 47) = 13.982, p = 0.00009). The N200 had a significantly higher amplitude in the ASD group (F(1, 47) = 12.874, p = 0.00088). In the 200–400 ms latency range, both groups of children had positive components; however, for the emotional sounds, TD children had two positive components (P200 and P3a), whereas children with ASD had a single positive component (F(1, 47) = 13.272, p = 0.00012). During phoneme presentation, both groups had a single P3a; however, the amplitude was significantly higher in TD children (F(1, 47) = 14.025, p = 0.00006). There were no significant differences in peak latencies. Detailed information is presented in Table 2.

Fig. 1.

ERPs of children with ASD and TD children for both types of stimuli: phonemes (A) and emotional sounds (B). ERPs for Pz electrodes are averaged over all conditions (laughter and crying). Scalp maps indicate localizations of significantly different electrodes for N200 (A2) and P300 (A3, B2) components. Stars indicate significant group differences (ANOVA) between ERP components’ amplitude. **p < 0.01, *p < 0.05. ERP, event-related potential.

Table 2. Descriptive statistics of ERP components and of the SLP, Sdiff, and SP150–450 values (µV*ms) for two groups of subjects and three types of stimuli (crying, laughter, and phoneme), averaged over the specified sets of electrodes (see section “ERP Analysis”).
Group | Stimuli | Measure | P100 | N200 | P200 | P3a | LP | N400 | SLP | Sdiff | SP150–450
TD | phoneme | Amp | 2.23 ± 0.4 | –3.17 ± 1.2 | - | 6.22 ± 0.8 | - | –0.95 ± 1.2 | 7.3 ± 19 | 725 ± 68 | 377 ± 52
TD | phoneme | Lat | 101 ± 12 | 178 ± 22 | - | 308 ± 21 | - | 543 ± 27 | | |
TD | crying | Amp | 1.07 ± 0.6 | –2.31 ± 0.8 | 5.09 ± 0.8 | 3.88 ± 1.1 | 1.62 ± 1.0 | –0.12 ± 1.3 | 116.4 ± 26 | | 718 ± 49
TD | crying | Lat | 98 ± 11 | 169 ± 21 | 269 ± 22 | 347 ± 19 | 540 ± 17 | 589 ± 26 | | |
TD | laughter | Amp | 0.99 ± 0.8 | –2.82 ± 0.9 | 4.47 ± 1.1 | 3.71 ± 0.9 | - | –4.86 ± 0.8 | 1.8 ± 23 | | 755 ± 36
TD | laughter | Lat | 103 ± 12 | 181 ± 18 | 272 ± 19 | 342 ± 23 | - | 521 ± 23 | | |
ASD | phoneme | Amp | 0.02 ± 1.3 | –5.08 ± 1.3 | - | 1.84 ± 0.8 | - | –1.12 ± 1.4 | 1.6 ± 17 | –12.6 ± 94 | 164 ± 58
ASD | phoneme | Lat | 99 ± 18 | 175 ± 23 | - | 307 ± 24 | - | 556 ± 28 | | |
ASD | crying | Amp | 0.39 ± 1.0 | –3.72 ± 1.1 | - | 3.33 ± 1.1 | - | –3.90 ± 1.3 | 1.3 ± 13 | | 487 ± 61
ASD | crying | Lat | 122 ± 17 | 171 ± 23 | - | 310 ± 23 | - | 549 ± 27 | | |
ASD | laughter | Amp | 0.79 ± 1.1 | –3.38 ± 1.2 | - | 3.18 ± 1.2 | - | 3.87 ± 1.5 | 0.9 ± 19 | | 491 ± 55
ASD | laughter | Lat | 119 ± 22 | 168 ± 19 | - | 302 ± 26 | - | 562 ± 29 | | |

Amp, amplitude (µV); Lat, latency (ms); ERP, event-related potential; LP, late positivity. Sdiff (area between the crying and laughter curves) is a single value per group and is listed once per group.

3.2 Differences between Emotional Tones of Stimuli

We plotted ERP curves for the emotionally different stimuli in both groups (Fig. 2). TD children had a significantly higher amplitude of the N400 component during laughter compared to crying (F(1, 47) = 12.119, p = 0.00051), with the differences located in the left frontal and temporal areas. Children with ASD did not show significant differences in N400 amplitude between crying and laughter. At the same time, children with ASD showed a significantly higher amplitude of the N400 component for emotional sounds compared to the neutral phoneme. Similar differences were found in the TD group between laughter and phonemes (F(1, 47) = 15.836, p = 0.00002). In addition, the emotional sound of crying induced the LP component (or an equivalent) at latencies from 450 to 650 ms only in TD children, whereas for the other sounds (phoneme and laughter), this positive peak could hardly be detected. We also could not identify individual LP components in children with ASD.

Fig. 2.

ERPs for crying and laughter in two groups of subjects: (A) children of the control group; (B) children with ASD. Significant differences (ANOVA) between the sounds of crying and laughter were found for the amplitude of the P300 component; the localization of the differences is depicted in maps A2 and B2. Stars indicate significant differences (ANOVA) in ERP component amplitude between conditions. **p < 0.01.

To analyze differences in the LP component between sounds and groups, we calculated the area between the curve and y = 0 over the 450–650 ms latency window (SLP). The results showed that SLP in TD children was significantly higher for crying sounds compared to laughter and phonemes (F(1, 47) = 14.239, p = 0.00006). In the ASD group, SLP values for the different sounds did not differ statistically. Sdiff was significantly higher in TD children compared to children with ASD (F(1, 47) = 17.016, p < 0.00001). SP150–450 was significantly higher in TD children compared to children with ASD for each type of sound: phoneme (F(1, 49) = 12.064, p = 0.00049), crying (F(1, 47) = 16.228, p < 0.00001), and laughter (F(1, 47) = 18.354, p < 0.00001). No significant differences were found for any of the stimuli between boys and girls within groups.

3.3 Peak Alpha Frequency (PAF)

The results showed that PAF significantly increased during emotional stimuli perception only in TD children, whereas children with ASD did not show any significant difference between rest and stimulation (condition(3) × group effect F(2, 94) = 9.675, p = 0.0001, post hoc Bonferroni p < 0.0072 in TD children and p > 0.19 in children with ASD). The PAFs also did not differ between the ASD and TD groups during the resting state and phoneme perception (Fig. 3). We also found significant differences between PAFs for crying and laughter only in TD children (condition(2) × group effect F(1, 49) = 10.975, p < 0.0001, post hoc Bonferroni p < 0.0035 in TD children).

Fig. 3.

Peak alpha frequency (PAF) between groups for different conditions and its topography. (A) The group values of PAF averaged over all sites. Significant differences (Student’s t-test) were calculated within each group and are marked with curly brackets (**p < 0.01). (B) The topography of differences (after Bonferroni correction, marked with black dots) for the TD group between rest and crying (a), rest and laughter (b), and laughter and crying (c).

We also found significant differences between PAFs for crying and laughter (F(2, 94) = 10.9754, p = 0.00009) only in TD children.

3.4 Correlation between EEG Parameters

During the assessment of individual ERPs, we found that when a child had a pronounced P2–P3a complex, the difference between crying and laughter at late latencies was also pronounced. We therefore hypothesized a relationship between the positivity in the 150–450 ms latency range and the differences in processing between crying and laughter at later latencies. The results showed that SP150–450 was positively correlated with Sdiff, with SLP, and with the difference in PAF between crying and laughter (ΔPAF). The results of the correlation analysis are presented in Table 3.

Table 3. Spearman’s rank correlations (N, number of subjects; R, Spearman’s rank correlation coefficient) in all children (over all subjects), the TD group, and the ASD group.
Comparison | All children (N / R / p-level) | TD group (N / R / p-level) | ASD group (N / R / p-level)
SP150–450 vs. Sdiff | 48 / 0.66 / <0.001 | 25 / 0.72 / <0.001 | 23 / 0.21 / 0.09
SP150–450 vs. SLP | 29 / 0.61 / <0.001 | 25 / 0.69 / <0.001 | - / - / -
SP150–450 vs. ΔPAF | 50 / 0.58 / 0.001 | 25 / 0.64 / 0.001 | 25 / 0.32 / 0.03

SLP was >0 only in four children with ASD, so a correlation analysis for the ASD group was not conducted.

4. Discussion

The results showed that, compared to TD children, children with ASD were characterized by specific features of the perception of sound stimuli, some of which were nonspecific and concerned both emotionally significant and neutral stimuli, whereas others were associated with peculiarities of the perception of laughter and crying sounds. In particular, children with ASD had a lower amplitude of the P100 component and a higher amplitude of the N200 component for all stimulus types (both emotionally significant and neutral) compared to the control group, which corresponded with previously identified features of the perception of emotionally significant stimuli and phonemes in children with ASD [16, 29] and indicated specific features of sound perception in individuals with autism [30]. Changes in the amplitude and latency of the P100 and N200 components in subjects with ASD have been extensively studied in the context of the perception of complex stimuli that require considerable cognitive effort from children with autism, particularly the activation of attention and memory processes [31, 32]. As previously shown, the differences in the amplitude of these components identified in our study were more likely related to nonspecific features of sensory stimulus analysis than to emotional perception [32].

The other peculiarity of the EEG in children with ASD concerned the nonspecific response to emotional stimuli, regardless of their valence. In particular, careful analysis of individual ERPs revealed that a complex of positive components, which we labeled P200–P3a, was detected only in TD children presented with emotionally significant stimuli, whereas children with ASD had only one component when presented with emotionally significant stimuli. It was difficult to unequivocally distinguish between P200 and P3a in individuals with ASD, but we considered P3a to be dominant over P200. Such an effect was previously shown in an oddball paradigm and is considered to be linked to challenged stimulus recognition [33]. In our paradigm, all stimuli had the same frequency, so this effect could be explained by challenged recognition of the stimuli (i.e., TD children recognized the repeated sounds of crying and laughter as repetitions, while children with ASD found this more difficult). This is supported by previous work showing greater trial-to-trial variability (and thus complicated perception of repetitive stimuli) in individuals with ASD [34]. According to some studies, the presence of a double-positive complex in TD children can be regarded as a nonspecific response to emotional stimuli and a consequence of the activation of cognitive processes necessary for analysis of the stimulus [33, 35]. The emotional nature of the double peak in TD children is also supported by our results, according to which a single positive complex was detected when the neutral phoneme was presented to both TD children and children with ASD.

The presence of the P2–P3a complex in children in the control group appeared to be closely related to the formation of later components, such as the LP and N400, which are associated with the analysis of the valence of emotionally significant stimuli [36]. In particular, we found that differences between laughter and crying sounds were observed only in TD children and manifested as a greater amplitude of the LP component and a smaller amplitude of the N400 component for crying sounds compared to laughter. Our results are consistent with previous findings showing that a higher N400 amplitude is associated with the processing of emotionally incongruent stimuli [37] and emotional vocal expressions [21]. The results of these studies indicate that increases in N400 amplitude are typically observed when analyzing more complex emotional stimuli that have either an incongruent or a verbal component or require the analysis of subtle social relationships; the contrast between laughter and crying is just such a case [38]. Regarding the differences between the valences of the two emotional states, it has been shown that an increase in the LP can be explained by a direct response to the unpleasantness of an emotionally significant sound stimulus [19, 39], that is, by the involvement of an emotional response to the stimulus.

Regarding the relationship between the presence of a double-positive P2–P3a peak and the Sdiff value reflecting differences between stimulus valences at the ERP level, we suggest that the analysis of emotionally significant sound stimuli involves the sequential activation of different brain structures, and any change in this sequence may be accompanied by disturbances in the perception of emotionally significant sounds. Thus, deviations in the early stages of stimulus analysis, which we see as changes in early ERP components [40], lead to disturbances in the later stages of stimulus analysis and, consequently, to the activation of other brain structures. As a result, the analysis of sound stimuli of different valences in subjects with ASD engages other neural networks. In particular, functional magnetic resonance imaging studies have shown significant differences between the neural networks engaged during the processing of sad and happy auditory stimuli in individuals with ASD and typical participants [41, 42]. Early impairments in the processing of emotionally significant stimuli that form in the early stages of child development result in features of auditory emotional perception that can be observed even in high-functioning autistic individuals [43]. However, by beginning intervention in the early stages of perceptual formation, it may be possible to modify the altered stimulus analysis process and influence the formation of later, more specific emotional perception processes.

The study had some limitations. First, it is difficult to communicate with children with ASD, so the extent of their compliance with the instructions (to avoid thinking of anything specific and simply listen to the sounds, which were not inherently interesting) remains unknown. Second, the phoneme “Pᴂ” itself may be quite different from the emotional stimuli; thus, it may be useful to use another control stimulus in further studies.

5. Conclusions

We compared auditory emotional perception in children with ASD and TD children with similar IQ levels, using ERP analysis of responses to the sounds of crying, laughter, and a neutral phoneme. Our findings indicated three levels of differences between the TD and ASD groups associated with the latency of the ERP response. First, children with ASD had a lower P100 and a higher N200 during the perception of both emotional and nonemotional sounds. Second, the positivity in the 150–450 ms latency range was significantly more pronounced in TD children, and their ERP response to emotional sounds consisted of two components, P200 and P3a, unlike that of children with ASD. Finally, a difference in the ERP response between crying and laughter was found only in TD children and was associated with the amplitudes of the late components (LP and N400) and with the PAF. We also found a correlation between greater positivity in the 150–450 ms period and the differences between the valences of the emotional stimuli.

Availability of Data and Materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Author Contributions

GP designed the study, analyzed the data, and produced the first draft. IS participated in data collection and text revision. LM participated in data analysis and text revision. All authors have read and approved the final manuscript. All authors have participated sufficiently in the work and agree to be accountable for all aspects of the work.

Ethics Approval and Consent to Participate

The study was approved by the ethics commission of the Institute of Higher Nervous Activity and Neurophysiology of RAS (Protocol No. 2 from 20.04.2016). Participants’ parents provided written informed consent for study participation.

Acknowledgment

Not applicable.

Funding

The study was supported by a grant from the Russian Science Foundation, project № 22-15-00324, “Social tactile contacts and their role in psycho-emotional rehabilitation”. https://rscf.ru/en/project/22-15-00324/.

Conflict of Interest

The authors declare no conflict of interest.

References
[1]
Dennis M, Lockyer L, Lazenby AL. How high-functioning children with autism understand real and deceptive emotion. Autism. 2000; 4: 370–381.
[2]
Philip RCM, Whalley HC, Stanfield AC, Sprengelmeyer R, Santos IM, Young AW, et al. Deficits in facial, body movement and vocal emotional processing in autism spectrum disorders. Psychological Medicine. 2010; 40: 1919–1929.
[3]
Zhang M, Chen Y, Lin Y, Ding H, Zhang Y. Multichannel perception of emotion in speech, voice, facial expression, and gesture in individuals with autism: a scoping review. Journal of Speech, Language, and Hearing Research. 2022; 65: 1435–1449.
[4]
Laurent AC, Rubin E. Challenges in emotional regulation in Asperger syndrome and high-functioning autism. Topics in Language Disorders. 2004; 24: 286–297.
[5]
Paul R, Wilson KP. Assessing speech, language, and communication in autism spectrum disorders. In Goldstein S, Naglieri JA, Ozonoff S (eds.) Assessment of autism spectrum disorders (pp. 171–208). 1st edn. Guilford Press: New York. 2009.
[6]
Dawson G, Webb SJ, Carver L, Panagiotides H, McPartland J. Young children with autism show atypical brain responses to fearful versus neutral facial expressions of emotion. Developmental Science. 2004; 7: 340–359.
[7]
Ozonoff S, Pennington BF, Rogers SJ. Executive function deficits in high-functioning autistic individuals: relationship to theory of mind. Journal of Child Psychology and Psychiatry, and Allied Disciplines. 1991; 32: 1081–1105.
[8]
Hellbernd N, Sammler D. Neural bases of social communicative intentions in speech. Social Cognitive and Affective Neuroscience. 2018; 13: 604–615.
[9]
Le Gall E, Iakimova G. Social cognition in schizophrenia and autism spectrum disorder: Points of convergence and functional differences. L’Encephale. 2018; 44: 523–537.
[10]
Hubbard DJ, Faso DJ, Assmann PF, Sasson NJ. Production and perception of emotional prosody by adults with autism spectrum disorder. Autism Research. 2017; 10: 1991–2001.
[11]
Demopoulos C, Yu N, Tripp J, Mota N, Brandes-Aitken AN, Desai SS, et al. Magnetoencephalographic Imaging of Auditory and Somatosensory Cortical Responses in Children with Autism and Sensory Processing Dysfunction. Frontiers in Human Neuroscience. 2017; 11: 259.
[12]
Lepistö T, Kujala T, Vanhala R, Alku P, Huotilainen M, Näätänen R. The discrimination of and orienting to speech and non-speech sounds in children with autism. Brain Research. 2005; 1066: 147–157.
[13]
Duffy FH, Als H. A stable pattern of EEG spectral coherence distinguishes children with autism from neuro-typical controls - a large case control study. BMC Medicine. 2012; 10: 64.
[14]
Yoshimura Y, Kikuchi M, Hiraishi H, Hasegawa C, Takahashi T, Remijn GB, et al. Atypical development of the central auditory system in young children with Autism spectrum disorder. Autism Research. 2016; 9: 1216–1226.
[15]
Lindström R, Lepistö-Paisley T, Vanhala R, Alén R, Kujala T. Impaired neural discrimination of emotional speech prosody in children with autism spectrum disorder and language impairment. Neuroscience Letters. 2016; 628: 47–51.
[16]
Malaia E, Cockerham D, Rublein K. Visual integration of fear and anger emotional cues by children on the autism spectrum and neurotypical peers: An EEG study. Neuropsychologia. 2019; 126: 138–146.
[17]
DePriest J, Glushko A, Steinhauer K, Koelsch S. Language and music phrase boundary processing in Autism Spectrum Disorder: An ERP study. Scientific Reports. 2017; 7: 14465.
[18]
Kotz SA, Paulmann S. Emotion, language, and the brain. Language and Linguistics Compass. 2011; 5: 108–125.
[19]
Fields EC, Kuperberg GR. It’s all about you: an ERP study of emotion and self-relevance in discourse. NeuroImage. 2012; 62: 562–574.
[20]
Paulmann S, Kotz SA. Early emotional prosody perception based on different speaker voices. Neuroreport. 2008; 19: 209–213.
[21]
Bostanov V, Kotchoubey B. Recognition of affective prosody: continuous wavelet measures of event-related brain potentials to emotional exclamations. Psychophysiology. 2004; 41: 259–268.
[22]
Kostyunina MB, Kulikov MA. Frequency characteristics of EEG spectra in the emotions. Neuroscience and Behavioral Physiology. 1996; 26: 340–343.
[23]
Portnova GV. Lack of a Sense of Threat and Higher Emotional Lability in Patients With Chronic Microvascular Ischemia as Measured by Non-linear EEG Parameters. Frontiers in Neurology. 2020; 11: 122.
[24]
Tement S, Pahor A, Jaušovec N. EEG alpha frequency correlates of burnout and depression: The role of gender. Biological Psychology. 2016; 114: 1–12.
[25]
Nir RR, Sinai A, Raz E, Sprecher E, Yarnitsky D. Pain assessment by continuous EEG: association between subjective perception of tonic pain and peak frequency of alpha oscillations during stimulation and at rest. Brain Research. 2010; 1344: 77–86.
[26]
Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods. 2004; 134: 9–21.
[27]
Liu X, Liao Y, Zhou L, Sun G, Li M, Zhao L. Mapping the time course of the positive classification advantage: an ERP study. Cognitive, Affective and Behavioral Neuroscience. 2013; 13: 491–500.
[28]
Spychalska M, Kontinen J, Werning M. Investigating scalar implicatures in a truth-value judgement task: Evidence from event-related brain potentials. Language, Cognition and Neuroscience. 2016; 31: 817–840.
[29]
Irwin J, Avery T, Kleinman D, Landi N. Audiovisual Speech Perception in Children with Autism Spectrum Disorders: Evidence from Visual Phonemic Restoration. Journal of Autism and Developmental Disorders. 2022; 52: 28–37.
[30]
Bidet-Caulet A, Latinus M, Roux S, Malvy J, Bonnet-Brilhault F, Bruneau N. Atypical sound discrimination in children with ASD as indicated by cortical ERPs. Journal of Neurodevelopmental Disorders. 2017; 9: 13.
[31]
Sysoeva OV, Constantino JN, Anokhin AP. Event-related potential (ERP) correlates of face processing in verbal children with autism spectrum disorders (ASD) and their first-degree relatives: a family study. Molecular Autism. 2018; 9: 41.
[32]
Khuntia AT, Divakar R, Apicella F, Muratori F, Das K. Visual processing and attention rather than face and emotion processing play a distinct role in ASD: An EEG study. BioRxiv. 2019; 517664. (preprint)
[33]
Bolduc-Teasdale J, Jolicoeur P, McKerral M. Multiple electrophysiological markers of visual-attentional processing in a novel task directed toward clinical use. Journal of Ophthalmology. 2012; 2012: 618654.
[34]
Haigh SM, Van Key L, Brosseau P, Eack SM, Leitman DI, Salisbury DF, et al. Assessing trial-to-trial variability in auditory ERPs in autism and schizophrenia. Journal of Autism and Developmental Disorders. 2022. (online ahead of print)
[35]
Hajcak G, Foti D. Significance?... Significance! Empirical, methodological, and theoretical connections between the late positive potential and P300 as neural responses to stimulus significance: An integrative review. Psychophysiology. 2020; 57: e13570.
[36]
Day TC, Malik I, Boateng S, Hauschild KM, Lerner MD. Vocal emotion recognition in autism: behavioral performance and Event-Related Potential (ERP) response. Journal of Autism and Developmental Disorders. 2023. (online ahead of print)
[37]
Schirmer A, Kotz SA. ERP evidence for a sex-specific Stroop effect in emotional speech. Journal of Cognitive Neuroscience. 2003; 15: 1135–1148.
[38]
Portnova GV, Gladun KV. Laugh and crying perception in patients with severe and moderate TBI using FFT analysis. 2017 IEEE 30th International Symposium on Computer-Based Medical Systems. Thessaloniki, Greece. 22–24 June 2017. IEEE: Piscataway, NJ, USA. 2017; 123–126.
[39]
Kotz SA, Paulmann S. When emotional prosody and semantics dance cheek to cheek: ERP evidence. Brain Research. 2007; 1151: 107–118.
[40]
O’Connor K. Auditory processing in autism spectrum disorder: a review. Neuroscience and Biobehavioral Reviews. 2012; 36: 836–854.
[41]
Gebauer L, Skewes J, Westphael G, Heaton P, Vuust P. Intact brain processing of musical emotions in autism spectrum disorder, but more cognitive load and arousal in happy vs. sad music. Frontiers in Neuroscience. 2014; 8: 192.
[42]
Charpentier J, Latinus M, Andersson F, Saby A, Cottier JP, Bonnet-Brilhault F, et al. Brain correlates of emotional prosodic change detection in autism spectrum disorder. NeuroImage: Clinical. 2020; 28: 102512.
[43]
O’Connor K, Hamm JP, Kirk IJ. Neurophysiological responses to face, facial regions and objects in adults with Asperger’s syndrome: an ERP investigation. International Journal of Psychophysiology. 2007; 63: 283–293.

Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
