IMR Press / JIN / Volume 19 / Issue 4 / DOI: 10.31083/j.jin.2020.04.166
Open Access Original Research
Attractiveness-related recognition bias captures the memory of the beholder
1 School of Educational Science, Huazhong University of Science and Technology, Wuhan, 430074, P. R. China
2 Department of Orthodontics, Shanghai Xuhui District Dental Disease Prevention and Control Institute, Shanghai, 200032, P. R. China
3 Department of Oral and Cranio-maxillofacial Surgery, Ninth People’s Hospital, Shanghai JiaoTong University School of Medicine, Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, National Clinical Research Center of Stomatology, Shanghai, 200011, P. R. China
4 City College, Wuhan University of Science and Technology, Wuhan, 430081, P. R. China
5 Center for Brain, Mind and Education, Shaoxing University, Shaoxing, 312099, P. R. China
6 School of Teacher Education, Shaoxing University, Shaoxing, 312099, P. R. China
*Correspondence: zhangyan1981@hust.edu.cn (Yan Zhang); xieyufei9@163.com (Yu-Fei Xie); wujinyang7029@foxmail.com (Jin-Yang Wu)
These authors contributed equally.
J. Integr. Neurosci. 2020, 19(4), 629–639; https://doi.org/10.31083/j.jin.2020.04.166
Submitted: 27 May 2020 | Revised: 11 October 2020 | Accepted: 13 October 2020 | Published: 30 December 2020
Copyright: © 2020 Zhang et al. Published by IMR Press.
This is an open access article under the CC BY 4.0 license ( https://creativecommons.org/licenses/by/4.0/).
Abstract

Earlier electroencephalographic studies have compared attractive with unattractive faces, and faces with other objects such as flowers, without revealing whether a recognition memory bias toward faces and flowers exists, or whether humans exhibit enhanced specific components toward all attractive objects or only toward attractive faces. Using objects of comparable attractiveness, we sought to determine whether the N170, P1, and N250 components reflect attractiveness by comparing the event-related potentials elicited while participants recognized high attractive faces and high attractive flowers. Repeated high attractive faces tended to elicit a larger N170, whereas the P1 was preferentially associated with repeated high attractive flowers; together, these results indicated that the repetition enhancement effect occurred only for repeated attractive faces. Thus, the perceptual mechanisms for processing repeated high attractive faces differ from those for repeated high attractive flowers. However, there was no significant N250 difference between repeated faces and repeated flowers, or between high attractive faces and high attractive flowers. Consequently, high attractive faces and high attractive flowers capture the beholder's memory bias at different processing stages. The N170 and P1 components are affected by attractiveness, demonstrating differences between the human perceptual mechanisms for recognizing high attractive faces and objects.

Keywords
Event-related potentials
attractive faces
attractive flowers
repetitive enhancement effect
recognition
perceptual mechanisms
1. Introduction

The ability to recognize faces and objects is a vital human skill, and plenty of evidence has shown that the former fundamentally differs from the latter. For instance, newborns prefer face-like configurations to other pictures (Johnson et al., 1991; Macchi Cassia et al., 2004), even when the image is presented upside-down (Mondloch et al., 1999; Picozzi et al., 2009). For adults, face recognition is more influenced by inversion than object recognition and is also highly dependent on the spatial relations among features (Farah et al., 1998). Using a deviant-standard-reversed paradigm, Wang et al. (2014) provided electrophysiological evidence that face orientation changes elicited larger event-related potential (ERP) components than object spatial changes.

Electroencephalographic (EEG) recordings of gross electrical activity, such as ERPs over the visual cortex, indicate that components such as the N1, P1, N170, and P170 index periods related to face perception (Halit et al., 2000). Numerous studies have confirmed that a face-selective response peaks early, at approximately 170 ms after a facial stimulus is presented (Bentin et al., 1996). Also, the N170 amplitude for faces is significantly larger than for other objects (Bentin et al., 1996). Moreover, the face-specific N170 component is entirely unaffected by facial expression, suggesting that emotional expression analysis and structural face encoding are parallel processes (Eimer and Holmes, 2002; Eimer et al., 2003). The N170 is also not influenced by race (Caldara et al., 2004) or familiarity (Bentin and Deouell, 2012; Eimer, 2000a).

Past studies examining cognitive processes with ERPs have interpreted N170 face sensitivity either as evidence of brain mechanisms specialized for face processing (Duchaine and Nakayama, 2005) or as the result of adults typically having a higher level of expertise with faces than with other object categories (Kanwisher and Yovel, 2007).

Rossion et al. (2002) found that these amplitude effects are face-specific and mainly reflect the contribution of the eye region. Compared with other objects, a small part of the human visual cortex (the fusiform face area) is more active when people look at faces (Itier et al., 2006).

The N170 component is most responsive to facial stimuli over the temporoparietal regions of the human scalp, and the response to faces is larger than to other visual objects (Bentin et al., 1996; Kanwisher et al., 1997). The electrophysiological activity evoked by faces in the bilateral occipitotemporal regions has also been shown to differ from that evoked by objects at approximately 170 ms. Furthermore, N170 sources corresponding to the fusiform gyrus (FG) are located in the ventral temporal cortical areas (Rossion et al., 1999). A common bilateral source of the N170 for faces, words, and cars lies in the posterior FG (Itier and Taylor, 2002), with differences among categories found in the lateralization, intensity, and orientation of dipoles. By comparison, N170 sources for faces have been found in the temporal cortex (Schweinberger et al., 2002).

Although the N170 is earlier and larger for faces than for all other objects (Rossion et al., 2003; Shibata et al., 2002), its specificity to faces remains to be fully explored. Only a few object categories have been used in ERP studies of the face-sensitive N170, such as words, houses, cars, and butterflies (Itier and Taylor, 2004a). Other non-face objects (patterns, road signs, tools, lions, houses, and mushrooms) have been added to identify the neural sources of the face-sensitive N170 and to confirm whether the N170 amplitudes for upright and inverted faces differ significantly from those for other objects. However, no statistically significant differences in N170 latency have been found between images of faces and flowers (Carmel and Bentin, 2002). Thus, a certain similarity must exist between flowers and faces, especially since attractive people are often praised as "as pretty as flowers". Recognition memory for attractive and unattractive faces has also been explored, with results indicating that reaction times are longer when identifying unattractive faces, whereas accuracy is higher when identifying attractive faces (Aharon et al., 2001; Leder et al., 2011, 2019). ERP studies have also revealed that attractive faces elicit larger ERP amplitudes (P160, N250, and P400) than unattractive faces in recognition tasks (Rossion and Jacques, 2008).

Human faces can also elicit early ERP components (Braeutigam et al., 2001), such as P1 and N250. The P1 component is thought to reflect early attention-based visual perceptual processing (Hillyard and Anllo-Vento, 1998) and originates in the bilateral occipital lobe and FG (Mangun and Buck, 1998). Furthermore, P1 is associated with spatial visual attention and search resources (Luo et al., 2002; Mangun, 1995; Mangun and Buck, 1998); it may reflect the visuospatial orienting of attention and faster attention capture. P1 can also reflect human face processing (Herrmann et al., 2006; Mitsudo et al., 2011), and the processing mechanism for human faces differs from that for non-human faces, as reflected in the P1 component (Rossion and Caharel, 2011).

The evidence for N250 face sensitivity mainly comes from its modulation by face familiarity (Nasr and Esteky, 2009). A significant correlation has been confirmed between the N250 component and the recognition of known faces (Barragan-Jason et al., 2016; Wuttke and Schweinberger, 2019). Furthermore, the N250 amplitude increases with face familiarity (Alzueta et al., 2019). It is exclusively sensitive to face visibility, even when non-face stimuli serve as the task target, and a correlation between the evoked N170 and N250 has also been observed (Nasr and Esteky, 2009).

The ERP effect of attractive faces can also be observed at the Pz electrode site at approximately 250 ms (Johnston and Oliver-Rodriguez, 1997). Therefore, the P1 and N250 components associated with face processing may be related to memory bias in face recognition.

2. Methods
2.1 Participants

Before the study, G*Power software (Faul et al., 2009) was used to calculate the total sample size needed to achieve a power of 0.95 for the repeated-measures ANOVA. Theoretical considerations gave us reason to expect a "large" effect size (f = 0.40) (Cohen, 1969, p. 348), so an a priori analysis was selected to calculate the required sample size. The output indicated that a total sample size of 24 was needed, with an actual power of 0.82. Therefore, 30 adult participants aged 19-24 years (mean age = 22.45) were invited to take part in the experiment; 15 were male and 15 were female. Each participant gave informed consent after fully understanding the procedure and being given time to consider whether or not to take part. All participants were right-handed, had self-reported normal vision, had no history of neurological or psychiatric disorders, and were not taking psychoactive medication.
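An a priori power search of this kind can be sketched in code. The sketch below is illustrative only: it computes power for a one-way fixed-effects F test using G*Power's noncentrality convention (λ = f²·N), whereas G*Power's repeated-measures routine additionally accounts for the number of measurements and their correlation, so the resulting N will not match the 24 reported here. `f_test_power` and `required_n` are hypothetical helper names.

```python
from scipy.stats import f as f_dist, ncf

def f_test_power(f_effect, n_total, k_groups=2, alpha=0.05):
    """Power of a one-way fixed-effects ANOVA F test.
    Noncentrality follows the G*Power convention: lambda = f^2 * N."""
    df1 = k_groups - 1
    df2 = n_total - k_groups
    lam = f_effect ** 2 * n_total
    f_crit = f_dist.ppf(1 - alpha, df1, df2)       # critical F under H0
    return 1.0 - ncf.cdf(f_crit, df1, df2, lam)    # P(F > crit | H1)

def required_n(f_effect, target_power=0.95, k_groups=2, alpha=0.05):
    """Smallest total N whose power reaches the target (a priori analysis)."""
    n = k_groups + 2
    while f_test_power(f_effect, n, k_groups, alpha) < target_power:
        n += 1
    return n
```

Power is monotone in N, so the linear search always terminates; for a "large" effect (f = 0.40) the one-way approximation demands a considerably larger N than a within-subject design, which is precisely why repeated measures are more efficient.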

2.2 Experimental stimuli

Before the ERP experiment, each participant was asked to select the images they judged to be the 160 high/low attractive faces and the 160 high/low attractive flowers from a picture pool (Itier and Taylor, 2004b).

First, we collected 845 unfamiliar Chinese female face images and 894 flower images from open picture material resources on Google's website1 (1http://www.google.com.hk/images?q=%E8%AF%81%E4%BB%B6%E7%85%A7&hl=zh-CN&newwindow=1&safe=strict&client=aff-cs-360se&hs=RyH&source=lnt&tbs=isch:1,itp:photo&prmd=ivnsu&source=lnt&sa=X&ei=OFUQTdGrLYaGvAP-q-jIDQ&ved=0CA8QpwU, dates: 9-July-2009). After low-resolution images were removed, 796 face stimuli and 772 flower stimuli remained. They were edited to a uniform format (6 by 9 cm; 150 by 300 pixels) and converted to 8-bit grayscale with identical white backgrounds. The photographs were digitally edited using Adobe Photoshop. The external face features, including the hair, ears, and neck, were retained, as were the inner features, including the eyes, nose, mouth, and cheeks.

All faces were in frontal view. For the 646 face and 667 flower images selected by the two specialists, 80 Chinese college students (mean age 21.98 years) conducted a further 9-step rating on the dimensions of Attractiveness (a beauty that appeals to the senses), Joviality (the participant feels jolly and full of good humor when looking at the image), Arousal (a state of heightened physiological activity when looking at the image), and Distinctiveness (the degree to which the image has distinguishing traits), as well as a 3-step rating of Emotion valence (1 = positive, 2 = neutral, 3 = negative).

Finally, 226 high attractive face images and 249 high attractive flower images (rating range: 6-9), and 260 low attractive face images and 245 low attractive flower images (rating range: 1-4), were chosen. T-tests indicated that the high and low attractive categories differed significantly, whereas no rating category differed significantly between faces and flowers (P > 0.05), meaning that the face and flower images were matched for attractiveness level (see Table 1).

Table 1. The t-test of flower and face image ratings, M (S.D.) (N = 80).

Rating                   Attractiveness  Joviality     Arousal      Dominance    Emotion valence
high attractive faces    7.68 (0.89)     7.50 (1.00)   7.17 (1.24)  6.59 (1.06)  2
low attractive faces     3.37 (0.92)     3.60 (0.69)   7.28 (0.89)  6.83 (0.77)  2
t                        28.693          31.981        -0.607       -1.444
P                        < 0.001         < 0.001       0.545        0.153
high attractive flowers  7.57 (0.98)     7.71 (0.85)   7.10 (0.77)  6.65 (0.98)  2
low attractive flowers   3.43 (0.54)     3.55 (0.59)   7.11 (0.71)  6.70 (0.75)  2
t                        37.29           32.468        -0.092       -0.495
P                        < 0.001         < 0.001       0.927        0.622
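The comparisons reported in Table 1 can be reproduced with a standard t-test. The ratings below are simulated placeholders drawn from the table's means and standard deviations, not the study's raw data, and an independent-samples test is assumed since the table does not state whether tests were paired.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_raters = 80

# Simulated 9-point attractiveness ratings (placeholders generated from
# the mean/SD values in Table 1, NOT the study's raw data)
high_faces = rng.normal(7.68, 0.89, n_raters)
low_faces = rng.normal(3.37, 0.92, n_raters)

t_stat, p_value = ttest_ind(high_faces, low_faces)
```

With group means more than four rating points apart and SDs under one point, the statistic is very large and the difference is significant at P < 0.001, as in the table.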

An experimental procedure was then built from these pictures; thus, each participant's experimental stimuli were unique and distinct from those of other participants. The ERP experiment was carried out a week later. The study phase contained 80 pictures of high attractive faces, 80 of low attractive faces, 80 of high attractive flowers, and 80 of low attractive flowers, randomly selected from the pictures chosen before the experiment. The test phase contained 160 high/low attractive faces and 160 high/low attractive flowers (80 fresh pictures of high/low attractive faces, 80 fresh pictures of high/low attractive flowers, 80 repeated pictures of high/low attractive faces, and 80 repeated pictures of high/low attractive flowers). All face and flower pictures were edited to a uniform format (grayscale; 6 by 9 cm; 150 by 300 pixels) and were modified before the experiment in Adobe Photoshop 7.0.1 so as to share the same values of physical properties, including saturation, color gamut, luminance, lightness, contrast, and color gradation. Nevertheless, several features might have profoundly affected observers' fixation patterns; in particular, visual saliency has been shown to affect perceptual (i.e., fixation patterns) and post-perceptual processes (Santangelo, 2015). Based on the saliency literature (Santangelo, 2015), visual saliency was checked in the current study to confirm that no nose, mouth, or eye region was more salient than the others (Zhang et al., 2018). Therefore, low-level features were well controlled between faces and flowers.
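Matching low-level properties across stimuli can be illustrated with a minimal luminance/contrast normalization. This is a sketch, not the study's Photoshop workflow: the target mean and contrast are arbitrary placeholders, and `match_luminance_contrast` is a hypothetical helper.

```python
import numpy as np

def match_luminance_contrast(img, target_mean=128.0, target_sd=40.0):
    """Rescale an 8-bit grayscale image (2-D uint8 array) so that all
    stimuli share the same mean luminance and RMS contrast (SD).
    Target values are arbitrary placeholders."""
    img = img.astype(float)
    sd = img.std()
    if sd == 0:  # flat image: just set the mean luminance
        out = np.full_like(img, target_mean)
    else:
        out = (img - img.mean()) / sd * target_sd + target_mean
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

Applying the same transform to every face and flower image equates mean luminance and global contrast, although (as noted above) region-wise saliency still has to be checked separately.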

2.3 Procedure

Participants performed a study-test paradigm. Thus, they completed two continuous phases: the study phase and the test phase. Initially, each participant performed 10 training trials.

Participants were presented with a series of the selected images, including faces and flowers, in the study phase. To avoid explicit learning and memorization of the face and flower images, a modified location-matching paradigm (Zhang et al., 2011) was used. One image appeared randomly at one of the four corners of the screen for 300 ms, followed by a fixation point for 500 ms and then the next image for 1000 ms. Participants judged whether the current image appeared in the same visuospatial location as the previous image. The current images also appeared randomly at one of the four corners of the screen, with the four possible locations given equal probability; the expected probabilities of same- and different-position trials were therefore 25% and 75%, respectively. Participants pressed the "1" and "2" keys of the keypad to indicate the same and a different position, respectively. Of the 320 trials, 160 comprised high/low attractive flowers and 160 comprised high/low attractive female faces. Following the study phase, the test phase was initiated after a five-minute break.
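The 25%/75% same/different-position split follows directly from sampling four equiprobable corners independently on each trial: the chance that a trial repeats the previous location is 1/4. A minimal stdlib sketch (`make_study_sequence` and `same_position_rate` are hypothetical helper names):

```python
import random

CORNERS = ("top-left", "top-right", "bottom-left", "bottom-right")

def make_study_sequence(n_trials=320, seed=1):
    """Assign each study image a random corner; with four equiprobable
    corners, consecutive trials match locations 25% of the time on average."""
    rng = random.Random(seed)
    return [rng.choice(CORNERS) for _ in range(n_trials)]

def same_position_rate(sequence):
    """Observed proportion of trials repeating the previous location."""
    pairs = list(zip(sequence, sequence[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)
```

Over 320 trials the observed rate fluctuates around the expected 0.25, which is why the text describes the probabilities as expected rather than fixed per block.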

In the test phase, participants were instructed to recognize which face and flower images had been seen in the study phase, pressing the "1" or "2" key to indicate whether or not each stimulus had been presented before. The face and flower images, including repeated and fresh ones, were drawn randomly from the same image pool and presented in random order. First, a fixation appeared at the center of the screen for 500 ms, followed by a blank screen for 300 ms. Next, the target stimulus appeared for 1000 ms, followed by a blank screen for 1500 ms. There were 640 trials divided into two blocks, comprising 320 previously viewed images and 320 fresh images. Each block consisted of 320 trials, including 40 repeated high/low attractive faces, 40 repeated high/low attractive flowers, 40 fresh high/low attractive faces, and 40 fresh high/low attractive flowers. Stimuli were presented in random order. A schematic overview of the experiment is shown in Fig. 1.

Fig. 1.

A schematic example of one study-test trial from the experiment.

2.4 ERP recording

Electroencephalograph (EEG) voltages from 64 scalp sites were recorded with Brain Vision Recorder software (Version 1.10, Brain Products GmbH, Munich, Germany), referenced to the left and right mastoids (average mastoid reference) (Zhang et al., 2016). The EEG was amplified with a DC to 100 Hz bandpass and continuously sampled at 500 Hz per channel. Impedances were kept below 5 kΩ, and the signals were amplified with a band-pass filter of 0.1-70 Hz and stored on a hard disk for subsequent off-line processing and analysis (Zhang et al., 2016).

Off-line EEG data analysis was conducted in Brain Vision Analyzer (Version 2.1, Brain Products GmbH, Munich, Germany). During off-line signal processing, individual trials were band-pass filtered between 0.1 and 30.0 Hz. For the ERP analysis, epochs from 200 ms before to 1000 ms after stimulus onset were extracted for each face and flower stimulus. Segmented epochs with EEG voltages greater than ±80 μV were removed, and only trials with correct responses were used for further processing (Zhang et al., 2016). Average ERP waveforms were calculated separately for each test-phase condition described below. The mean number of trials remaining after EEG processing across conditions was about 40 ± 5 (repeated attractive faces: 40; repeated attractive flowers: 45; fresh attractive faces: 41; fresh attractive flowers: 38). No significant difference was observed in the number of trials among conditions.
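The epoching and artifact-rejection steps (200 ms pre- to 1000 ms post-stimulus at 500 Hz, rejecting epochs exceeding ±80 μV) can be sketched in NumPy. This illustrates only the stated parameters, not the Brain Vision Analyzer pipeline; `epoch_and_reject` is a hypothetical helper.

```python
import numpy as np

FS = 500                   # sampling rate, Hz
PRE_S, POST_S = 0.2, 1.0   # 200 ms pre- to 1000 ms post-stimulus epoch

def epoch_and_reject(eeg_uv, onsets, threshold_uv=80.0):
    """Cut epochs around stimulus-onset samples and drop any epoch whose
    absolute (baseline-corrected) voltage exceeds the artifact threshold.
    eeg_uv: channels x samples array in microvolts."""
    pre, post = int(PRE_S * FS), int(POST_S * FS)
    kept = []
    for onset in onsets:
        epoch = eeg_uv[:, onset - pre:onset + post].astype(float)
        # Baseline-correct against the 200 ms pre-stimulus mean
        epoch -= epoch[:, :pre].mean(axis=1, keepdims=True)
        if np.abs(epoch).max() <= threshold_uv:
            kept.append(epoch)
    shape = (len(kept), eeg_uv.shape[0], pre + post)
    return np.stack(kept) if kept else np.empty(shape)
```

Averaging the surviving epochs per condition (`clean.mean(axis=0)`) then yields the condition-wise ERP waveforms described above.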

2.5 Data analysis

For the ERP analysis, the P1 (100-180 ms), N170 (140-200 ms), and N250 (200-300 ms) components were identified based on a visual examination of the topographical maps and grand-averaged waveforms (Fig. 4 and Fig. 5, respectively) and on previous literature (Ip et al., 2017). These components, which were markedly elicited following targets with correct responses, were measured for latency and amplitude (baseline to peak) within the 100-300 ms time window. The four most representative electrode sites, located over parietal-occipital (PO7, PO8) and occipital (O1, O2) areas, were selected for analysis of these components. Amplitudes were measured relative to the mean pre-stimulus voltage level; a visual inspection of the grand-averaged waveforms confirmed positive and negative peaks within these time windows, with latencies reported in ms and amplitudes in μV. Five-way repeated-measures ANOVAs were conducted on the amplitudes (baseline to peak) and peak latencies (from stimulus onset to component peak) of the N170, P1, and N250, with Electrode site (PO7/PO8, O1/O2), Pictures (faces, flowers), Memory (repeated, fresh), Attractiveness (high, low), and Hemisphere (left, right) as within-subject factors.
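Baseline-to-peak amplitude and peak latency can be extracted from an averaged waveform as follows. This is a minimal sketch with a hypothetical `peak_measure` helper, assuming epochs start 200 ms before stimulus onset at 500 Hz; the defaults illustrate the N170's 140-200 ms window, with `polarity = -1` selecting the negative-going peak.

```python
import numpy as np

def peak_measure(erp, fs=500, pre_ms=200, window=(140, 200), polarity=-1):
    """Baseline-to-peak amplitude (uV) and peak latency (ms) of a component
    in a single-channel averaged ERP whose epoch starts pre_ms before onset.
    polarity: -1 for negative components (N170, N250), +1 for positive (P1)."""
    onset = int(pre_ms / 1000 * fs)
    baseline = erp[:onset].mean()                  # mean pre-stimulus voltage
    lo = onset + int(window[0] / 1000 * fs)
    hi = onset + int(window[1] / 1000 * fs)
    seg = polarity * (erp[lo:hi] - baseline)       # flip so the peak is a max
    i = int(np.argmax(seg))
    amplitude = erp[lo + i] - baseline             # baseline-to-peak
    latency_ms = (lo + i - onset) / fs * 1000      # from stimulus onset
    return amplitude, latency_ms
```

Calling it with `window=(100, 180), polarity=+1` would measure the P1 instead, and `window=(200, 300)` the N250.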

For the behavioral results, the recognition accuracy rate (ACC) was computed as the percentage of correct responses for repeated images (hits) and fresh images (correct rejections) for the high/low attractive faces and flowers. The ACC and reaction times (RTs) for correct recognition were analyzed with repeated-measures ANOVAs, with Memory (repeated/fresh), Attractiveness (high/low), and Picture (faces/flowers) as within-subject factors.

All data were exported to SPSS 20.0 for the repeated-measures ANOVAs. The Least Significant Difference (LSD) procedure was used for post-hoc tests when a main effect or interaction was significant. For all analyses, P-values were corrected for deviations from sphericity using the Greenhouse-Geisser method.
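The Greenhouse-Geisser correction multiplies both degrees of freedom of the F test by an epsilon estimated from the covariance of the within-subject conditions. A self-contained sketch of the epsilon estimate for one within-subject factor (illustrative; SPSS computes the same quantity internally, and `gg_epsilon` is a hypothetical helper):

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon for a subjects x conditions matrix.
    epsilon = 1 under perfect sphericity and 1/(k-1) in the worst case;
    both F-test dfs are multiplied by it before the P-value is looked up."""
    k = data.shape[1]
    cov = np.cov(data, rowvar=False)          # condition covariance matrix
    center = np.eye(k) - np.ones((k, k)) / k  # centering matrix
    cc = center @ cov @ center                # double-centered covariance
    lam = np.linalg.eigvalsh(cc)              # its eigenvalues
    return float(lam.sum() ** 2 / ((k - 1) * (lam ** 2).sum()))
```

For two-level factors (as in most contrasts here) the numerator df is 1, epsilon is exactly 1, and the correction changes nothing; it matters only for factors with three or more levels, such as the four-level Electrodes factor.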

3. Results
3.1 Analysis of behavioral data under the stimulus-presented condition in the test phase

In the ACC ANOVA, the main effect of Attractiveness was significant [F (1, 29) = 10.43, P = 0.003, ηp² = 0.265], and high attractive pictures (0.55 ± 0.02) obtained a higher ACC than low attractive pictures (0.45 ± 0.02). There were no other significant main effects or interactions on ACC (all P > 0.05).

In the RT ANOVA, the main effect of Memory was significant [F (1, 29) = 11.85, P = 0.002, ηp² = 0.290], and fresh pictures (764.74 ± 18.96 ms) yielded longer RTs than repeated pictures (715.04 ± 24.90 ms). The main effect of Pictures was significant [F (1, 29) = 17.55, P < 0.001, ηp² = 0.377], and faces (761.63 ± 19.66 ms) yielded longer RTs than flowers (718.15 ± 24.08 ms). A significant Memory × Pictures interaction was found [F (1, 29) = 10.35, P = 0.003, ηp² = 0.263], indicating longer RTs for fresh faces (815.66 ± 23.61 ms) than for fresh flowers (713.82 ± 22.77 ms), and longer RTs for repeated flowers (722.48 ± 25.78 ms) than for repeated faces (707.60 ± 24.39 ms). In addition, a significant Memory × Attractiveness interaction on RT was found [F (1, 29) = 4.76, P = 0.037, ηp² = 0.141], but there were no significant simple effects (all P > 0.05).

The descriptive statistics for ACC and RTs across conditions are shown in Fig. 2 and Fig. 3.

Fig. 2.

ACC of the recognition of faces vs. flowers.

Fig. 3.

RT of the recognition of faces vs. flowers.

3.2 Amplitude and latency of the N170, P1, and N250 components under the stimulus-presented condition in the test phase
3.2.1 Repeated-measure ANOVA based on the amplitude and latency of N170

The N170 amplitude was analyzed with a four-way repeated-measures ANOVA in which Pictures (faces, flowers), Attractiveness (high, low), Memory (repeated, fresh), and Hemisphere (PO7, PO8) were the within-subject factors. The main effect of Hemisphere was significant [F (1, 29) = 6.94, P = 0.013, ηp² = 0.193], and the amplitude was greater at PO8 (-0.83 ± 0.50 μV) than at PO7 (0.44 ± 0.42 μV). A significant Memory × Pictures interaction was found [F (1, 29) = 9.14, P = 0.005, ηp² = 0.240], indicating greater amplitude for repeated high attractive faces (-0.97 ± 0.55 μV) than for repeated high attractive flowers (0.44 ± 0.43 μV) [F (1, 29) = 5.24, P = 0.030, ηp² = 0.153]. A significant Attractiveness × Pictures × Hemisphere interaction was found [F (1, 29) = 7.72, P = 0.009, ηp² = 0.210], indicating greater amplitude for high attractive faces (-1.11 ± 0.67 μV) than for high attractive flowers (1.18 ± 0.62 μV) [F (1, 29) = 9.55, P = 0.004, ηp² = 0.248] at PO8. Moreover, a significant Memory × Attractiveness × Pictures × Hemisphere interaction was found [F (1, 29) = 4.44, P = 0.044, ηp² = 0.133], indicating that the amplitude was greater for repeated high attractive faces (-1.41 ± 0.92 μV) than for repeated high attractive flowers (1.37 ± 0.60 μV) [F (1, 29) = 13.63, P = 0.001, ηp² = 0.320] at PO7, and greater for repeated high attractive faces (-2.34 ± 0.82 μV) than for repeated high attractive flowers (0.10 ± 0.55 μV) [F (1, 29) = 6.55, P = 0.016, ηp² = 0.184] at PO8. The N170 amplitude was also greater for fresh high attractive faces (-0.80 ± 0.63 μV) than for fresh high attractive flowers (0.99 ± 0.77 μV) [F (1, 29) = 6.77, P = 0.014, ηp² = 0.189] at PO7. The other main effects and interactions were not significant (all P > 0.05).

The four-way repeated-measures ANOVA of N170 latencies revealed a significant main effect of Pictures [F (1, 29) = 10.80, P = 0.003, ηp² = 0.271]: the latency was shorter for faces (171.93 ± 2.04 ms) than for flowers (175.92 ± 1.76 ms). The main effect of Attractiveness was significant [F (1, 29) = 11.12, P = 0.002, ηp² = 0.277], the latency being shorter for high attractive pictures (171.86 ± 2.14 ms) than for low attractive pictures (175.99 ± 1.66 ms). The main effect of Hemisphere was also significant [F (1, 29) = 10.43, P = 0.003, ηp² = 0.264], with shorter latency in the right hemisphere (169.00 ± 2.70 ms) than in the left hemisphere (178.85 ± 1.98 ms). The other main effects and interactions were not significant (all P > 0.05).

3.2.2 Repeated-measure ANOVA based on the amplitude and latency of P1

The P1 amplitude was analyzed with a four-way repeated-measures ANOVA with Pictures (faces, flowers), Attractiveness (high, low), Memory (repeated, fresh), and Hemisphere (O1, O2) as within-subject factors. The main effect of Attractiveness was significant [F (1, 29) = 13.37, P = 0.001, ηp² = 0.316], and the amplitude was greater for low attractive pictures (5.41 ± 0.78 μV) than for high attractive pictures (4.27 ± 0.65 μV). The main effect of Hemisphere was significant [F (1, 29) = 7.74, P = 0.009, ηp² = 0.211], and the amplitude was greater at O1 (5.19 ± 0.77 μV) than at O2 (4.49 ± 0.64 μV). A significant Memory × Attractiveness interaction was found [F (1, 29) = 16.42, P = 0.001, ηp² = 0.362], indicating greater amplitude for repeated low attractive pictures (6.24 ± 0.91 μV) than for repeated high attractive pictures (3.68 ± 0.69 μV) [F (1, 29) = 23.39, P = 0.001, ηp² = 0.446]. Moreover, a significant Memory × Attractiveness × Pictures interaction was found [F (1, 29) = 12.69, P = 0.001, ηp² = 0.304], indicating that the P1 amplitude was greater for repeated high attractive flowers (4.55 ± 0.73 μV) than for repeated high attractive faces (2.80 ± 0.75 μV) [F (1, 29) = 11.62, P = 0.002, ηp² = 0.286]. The other main effects and interactions were not significant (all P > 0.05).

The four-way repeated-measures ANOVA of P1 latencies revealed a significant main effect of Hemisphere [F (1, 29) = 17.53, P = 0.001, ηp² = 0.377]: the latency was shorter at O1 (172.42 ± 3.59 ms) than at O2 (178.53 ± 2.89 ms). A significant Memory × Attractiveness × Pictures interaction was found [F (1, 29) = 8.96, P = 0.006, ηp² = 0.236], indicating that the latency was shorter for fresh low attractive faces (169.53 ± 3.49 ms) than for fresh low attractive flowers (175.20 ± 3.16 ms) [F (1, 29) = 12.99, P = 0.001, ηp² = 0.309], and shorter for fresh high attractive flowers (173.08 ± 3.86 ms) than for fresh high attractive faces (181.89 ± 3.17 ms) [F (1, 29) = 27.58, P = 0.001, ηp² = 0.487]. A significant Memory × Pictures × Hemisphere interaction was found [F (1, 29) = 13.08, P = 0.001, ηp² = 0.311], indicating that the latency at O1 was shorter for repeated faces (171.30 ± 3.46 ms) than for repeated flowers (175.27 ± 3.89 ms) [F (1, 29) = 4.75, P = 0.038, ηp² = 0.141]. The other main effects and interactions were not significant (all P > 0.05).

3.2.3 Repeated-measure ANOVA based on the amplitude and latency of N250

The N250 amplitude was analyzed with a five-way repeated-measures ANOVA with Pictures (faces, flowers), Attractiveness (high, low), Memory (repeated, fresh), Hemisphere (left, right), and Electrodes (O1, O2, PO7, and PO8) as within-subject factors. The main effect of Pictures was significant [F (1, 29) = 12.16, P = 0.002, ηp² = 0.295], with greater amplitude for faces (2.65 ± 0.45 μV) than for flowers (3.73 ± 0.32 μV). The main effect of Attractiveness was also significant [F (1, 29) = 7.60, P = 0.010, ηp² = 0.208], with greater amplitude for high attractive pictures (2.81 ± 0.42 μV) than for low attractive pictures (3.57 ± 0.35 μV). Similarly, the main effect of Hemisphere was significant [F (1, 29) = 13.39, P = 0.001, ηp² = 0.316], with greater amplitude for the left hemisphere (2.92 ± 0.38 μV) than for the right (3.46 ± 0.37 μV). A significant Hemisphere × Electrodes × Pictures interaction was also found [F (1, 29) = 13.56, P = 0.001, ηp² = 0.319], indicating that the N250 amplitude was greater for faces (O1: 5.80 ± 0.37 μV; PO8: 3.36 ± 0.39 μV) than for flowers (O1: 3.07 ± 0.49 μV; PO8: 2.18 ± 0.41 μV) at O1 [F (1, 29) = 29.24, P < 0.001, ηp² = 0.699] and PO8 [F (1, 29) = 10.36, P = 0.003, ηp² = 0.263]. Moreover, a significant Hemisphere × Electrodes × Attractiveness interaction was found [F (1, 29) = 9.59, P = 0.004, ηp² = 0.249], indicating that the N250 amplitude was greater for high attractive pictures (O1: 3.48 ± 0.50 μV; PO8: 2.37 ± 0.41 μV) than for low attractive pictures (O1: 5.39 ± 0.38 μV; PO8: 3.18 ± 0.33 μV) at O1 [F (1, 29) = 12.92, P = 0.001, ηp² = 0.308] and PO8 [F (1, 29) = 6.73, P = 0.015, ηp² = 0.188]. The other main effects and interactions were not significant (all P > 0.05). The five-way repeated-measures ANOVA of N250 latencies revealed no significant main effects or interactions (all P > 0.05).

4. Discussion

To investigate whether humans show a cognitive bias toward all attractive things or only toward attractive faces, and to demonstrate the differences in human perceptual mechanisms for recognizing faces and objects, a study-test paradigm was used to measure how attractiveness modulates the N170, P1, and N250 components during a face and flower recognition task.

The behavioral results showed that accuracy was higher for high attractive pictures than for low attractive pictures, indicating that people are more impressed by, and better at recognizing, high attractive pictures. Human beings are naturally keen on the pursuit of "beauty"; unsurprisingly, people paid more attention to high attractive objects. The response time for fresh pictures was significantly longer than for repeated pictures, because recognizing fresh pictures usually takes longer. Moreover, faces received longer reaction times than flowers, indicating that faces were more complex and more challenging to recognize. The human face is generally a valuable source of information; thus, people generally spend more time recognizing it.

The N170 amplitude of repeated high attractive faces was larger than that of repeated high attractive flowers, indicating the repetitive enhancement effect and faces’ sensitivity. Faces have long been argued to be a “special” as a category of visual stimuli, showing both cortical specificity (Ishai, 2008) and a wide range of face-specific perceptual effects (Lee et al., 2011). Although its exact neural generators are still a matter of debate (Itier and Taylor, 2004b; Rossion et al., 2003; Watanabe et al., 2003), this component is believed to reflect structural encoding (Rossion et al., 1999; Eimer, 2000b), that is, the extraction of a perceptual representation of the face. The N170 component is reliably larger toward faces than toward any other object category tested (Bentin et al., 1996; Carmel and Bentin, 2002; Eimer, 2000b; Itier and Taylor, 2004a) has become a marker for early face processing. The study phase may only involve the classification and evaluation of faces, but the test phase can involve tasks on memory and extraction of faces. High attractive pictures were repeated in the test phase, but the repetitive enhancement effect only occurred on repetitive attractive faces, which may be strengthened during the recognition extraction process. Thus, the amplitude difference between repeated attractive faces and repeated attractive flowers on N170 increased. This result is consistent with a previous study of repetitive priming effects using face recognition tasks (Schweinberger et al., 1995). More importantly, attractive flowers and attractive faces are of high aesthetic and rewarding values. For instance, women’s attractive faces are highly relevant to their economic activities (Elder, 2003). Attractive people also have more chances of going on a date than unattractive ones (Riggio, 1984). Several studies have proven that attractive people are considered positive (Lorenzo et al., 2010; Vermeir and Van de Sompel, 2013). 
Thus, attractive people may benefit from such enhanced positivity (Langlois et al., 2000). The grand mean waveforms in Fig. 4 and Fig. 5 show modulation of temporally distinct components. In particular, the main activated regions were the parietal-occipital regions, which respond strongly to faces. These results are consistent with existing research (Zheng and Segalowitz, 2011, 2015), supporting the face specificity of repetition enhancement and the importance of faces for early face-specific processing. Future studies could adopt fMRI to investigate the neural system underlying the face-sensitive N170 component for faces, flowers, or other objects of different attractiveness.

Fig. 4.

Grand-mean event-related potentials at representative electrode sites at four locations (Parietal-Occipital sites, PO7, PO8; Occipital sites, O1, O2) during the recognition of four conditions (repeated high attractive faces, fresh high attractive faces, repeated high attractive flowers, and fresh high attractive flowers).

Fig. 5.

Grand-mean event-related potentials at representative electrode sites at four locations (Parietal-Occipital sites, PO7, PO8; Occipital sites, O1, O2) during the recognition of four conditions (repeated low attractive faces, fresh low attractive faces, repeated low attractive flowers, and fresh low attractive flowers).

Greater P1 amplitude was found for repeated high attractive flowers than for repeated high attractive faces, and P1 latency was shorter for repeated faces than for repeated flowers. The P1 has been related to early visual processing in face perception (Zhang et al., 2011). This result suggests that people were more alert to faces than to flowers, even when viewing equally repetitive attractive images, indicating faster visual orientation toward, and attention capture by, faces. In the current study, people were more familiar with faces than with flowers, which explains why repeated attractive faces were processed differently from repeated attractive flowers. The effect of familiarity on perceptual and recognition processing has been observed in previous ERP studies.
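The P1 amplitude and latency contrasts discussed above depend on how the component is measured from the averaged waveform. A minimal sketch of the usual peak-picking approach is given below; the 80-130 ms search window, sampling rate, and synthetic waveform are illustrative assumptions, not parameters taken from this study:

```python
import numpy as np

def p1_peak(erp_uv, times_ms, window=(80, 130)):
    """Return (peak amplitude in microvolts, peak latency in ms) of the
    most positive point within the search window. `erp_uv` is a 1-D
    averaged waveform; `times_ms` gives the time of each sample.
    The 80-130 ms window is a common choice for P1, assumed here."""
    erp_uv = np.asarray(erp_uv, dtype=float)
    times_ms = np.asarray(times_ms, dtype=float)
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    idx = np.argmax(erp_uv[mask])  # P1 is a positive-going deflection
    return erp_uv[mask][idx], times_ms[mask][idx]

# Synthetic single-channel ERP: a Gaussian positivity peaking at 100 ms
times = np.arange(-100, 300, 2.0)  # 2 ms steps, i.e. 500 Hz sampling
erp = 3.0 * np.exp(-((times - 100.0) ** 2) / (2 * 15.0 ** 2))
amp, lat = p1_peak(erp, times)
print(amp, lat)  # peak of 3.0 microvolts at 100.0 ms
```

A larger amplitude or shorter latency returned by such a routine for one condition is what statements like "P1 latency was shorter for repeated faces" operationalize.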

Caharel et al. (2003) used this technique to record ERPs elicited by three categories of faces (an unfamiliar face, a famous face, and the participant's own face) and found a familiarity effect. Self-relevance is also processed by higher-order cognitive functions when participants view 'SELF' objects (objects owned by the participant), 'FAMILIAR' objects (disposable, public objects, that is, objects with reduced self-relevant familiarity), and 'UNFAMILIAR' objects (objects belonging to others) (Miyakoshi et al., 2007). Lower amplitudes for familiar self-faces have also been observed, suggesting that self-face recognition is facilitated by a reduced need for attentional resources (Alzueta et al., 2019).

Accordingly, compared with repeated attractive flowers, people allocated fewer attentional resources to repeated attractive faces. The P1 effect for faces might be attributed to the more automatic processing of faces relative to flowers, thus reflecting the early distribution of attentional resources between attractive faces and attractive objects. The current study also found a repetitive enhancement effect in P1 amplitudes and latencies for repeated faces: early visual processing demanded fewer attentional resources for repeated faces than for repeated flowers.

Moreover, the N250 component showed greater amplitude for faces than for flowers, again indicating sensitivity to faces. N170 and N250 are two components related to face processing, regulated by attentional resources and facial expressions, respectively (Calvo and Beltrán, 2014). Thus, the N250 responded more strongly to faces than to flowers. This result might also reflect active target detection (Kida et al., 2004) and discrimination (Calvo and Beltrán, 2014). The human face is a rich source of information, reflecting a person's identity, age, gender, and even feelings, and people are highly skilled at "reading" this information. The N250 has been proposed to reflect perceptual memory representations for individual faces (Herzmann, 2017). Faces therefore trigger stronger responses from face-selective neurons than other objects do, suggesting mechanisms specifically sensitive to human faces. However, in the current study, no significant N250 differences were observed between repeated faces and repeated flowers or between attractive faces and attractive flowers.

Given that the N170 is regarded as a marker of a face-specific system, merely showing that its amplitude is larger in response to faces than to other stimulus categories is insufficient. Other factors must also be considered, including the existence or absence of similar N170 distinctions across other categories, the interaction of these distinctions with task-related strategies (e.g., attention and categorization), and observer-related factors (e.g., level of expertise). In the current work, experiments were carried out under a modified location-matching paradigm; because the paradigm was task-independent, attention to attractive faces and attractive flowers was adequately controlled. The face and flower stimuli were edited to a unified format by equating the numerical values of their physical properties, including saturation, color gamut, luminance, lightness, contrast, and color gradation, and by presenting them against the same background and in the same position. No difference remained in low-level features between faces and flowers. Although the sample reached the size required to achieve a significant effect, the number of participants was limited and included only university students. On this basis, future research should consider the influence of other factors on the results, such as a broader set of experimental materials and particular groups of subjects.
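The low-level matching described above (equating luminance, contrast, and related properties across face and flower stimuli) can be approximated by linearly rescaling each image toward shared targets. The sketch below illustrates the idea for grayscale luminance and contrast only; the target values and the `match_luminance_contrast` helper are illustrative assumptions, not the procedure actually used in the study:

```python
import numpy as np

def match_luminance_contrast(img, target_mean=128.0, target_std=40.0):
    """Linearly rescale a grayscale image (2-D array, values 0-255) so
    that its pixel mean (luminance) and standard deviation (RMS
    contrast) match shared target values, then clip back into the
    displayable range. The targets are illustrative, not the study's."""
    img = np.asarray(img, dtype=float)
    std = img.std()
    if std == 0:  # flat image: only the mean can be matched
        out = np.full_like(img, target_mean)
    else:
        out = (img - img.mean()) / std * target_std + target_mean
    return np.clip(out, 0.0, 255.0)

# Two synthetic "stimuli" with different raw statistics
rng = np.random.default_rng(0)
face = rng.uniform(0, 255, size=(64, 64))
flower = rng.uniform(50, 200, size=(64, 64))
a = match_luminance_contrast(face)
b = match_luminance_contrast(flower)
```

After this step, both images share the same mean luminance and contrast, so any ERP difference between categories is less likely to stem from low-level features.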

The N170 amplitude elicited by repeated attractive faces was significantly larger than that elicited by repeated attractive flowers, whereas the P1 amplitude elicited by repeated attractive flowers was significantly larger than that elicited by repeated attractive faces. These results reveal that the repetitive enhancement effect on the N170 and the familiarity effect on the P1 were specific to attractive faces. Therefore, in a recognition memory task, attractiveness modulated the face-specific N170 and P1 components, but not the N250.

Author contributions

Conceived and designed the experiment: ZY, LN, HFF, YCH, WGX, ZPQ. Recruitment and payment of participants: XYF, WJY. Analyzed the data: LN, HFF, ZPQ. Wrote and revised the paper: LN, HFF, ZPQ, CJW, AK.

Ethics approval and consent to participate

Each participant gave their informed consent after fully understanding the procedure and being given time to consider whether or not to take part in the experiment.

Acknowledgment

This research was supported by grants from the Ministry of Education Humanities and Social Sciences Research Fund (19YJA880082), the Key Projects of Educational Science Planning of Hubei Province (2019GA003), and the Natural Science Foundation of Hubei Province (2019CFB425) to YZ. We thank all the participants for their time and interest and the reviewers for their valuable feedback.

Conflict of Interest

The authors declare no conflict of interest.

References
[1]
Aharon, I., Etcoff, N., Ariely, D., Chabris, C. F., O’Connor, E. and Breiter, H. C. (2001) Beautiful faces have variable reward value: fMRI and behavioral evidence. Neuron 32, 537-551.
[2]
Albonico, A., Furubacke, A., Barton, J. J. S. and Oruc, I. (2018) Perceptual efficiency and the inversion effect for faces, words and houses. Vision Research 153, 91-97.
[3]
Alzueta, E., Melcón, M., Poch, C. and Capilla, A. (2019) Is your own face more than a highly familiar face? Biological Psychology 142, 100-107.
[4]
Barragan-Jason, G., Cauchoix, M. and Barbeau, E. J. (2016) The neural speed of familiar face recognition. Neuropsychologia 75, 390-401.
[5]
Bentin, S. and Deouell, L. Y. (2012) Structural encoding and identification in face processing: ERP evidence for separate mechanisms. Cognitive Neuropsychology 17, 35-55.
[6]
Bentin, S., Allison, T., Puce, A., Perez, E. and McCarthy, G. (1996) Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience 8, 551-565.
[7]
Bötzel, K., Schulze, S. and Stodieck, S. R. (1995) Scalp topography and analysis of intracranial sources of face-evoked potentials. Experimental Brain Research 104, 135-143.
[8]
Braeutigam, S., Bailey, A. J. and Swithenby, S. J. (2001) Task-dependent early latency (30-60 ms) visual processing of human faces and other objects. Neuroreport 12, 1531-1536.
[9]
Caharel, S., Poiroux, S., Bernard, C., Thibaut, F., Lalonde, R. and Rebai, M. (2003) ERPs associated with familiarity and degree of familiarity during face recognition. The International Journal of Neuroscience 112, 1499-1512.
[10]
Caldara, R., Rossion, B., Bovet, P. and Hauert, C. (2004) Event-related potentials and time course of the ‘other-race’ face classification advantage. Neuroreport 15, 905-910.
[11]
Calvo, M. G. and Beltrán, D. (2014) Brain lateralization of holistic versus analytic processing of emotional facial expressions. Neuroimage 92, 237-247.
[12]
Calvo, M. G. and Beltrán, D. (2014) Recognition advantage of happy faces: tracing the neurocognitive processes. Neuropsychologia 51, 2051-2061.
[13]
Campanella, S., Hanoteau, C., Depy, D., Rossion, B., Bruyer, R., Crommelinck, M. and Guerit, J. M. (2000) Right N170 modulation in a face discrimination task: an account for categorical perception of familiar faces. Psychophysiology 37, 796-806.
[14]
Cao, X., Ma, X. and Qi, C. (2016) N170 adaptation effect for repeated faces and words. Neuroscience 294, 21-28.
[15]
Carmel, D. and Bentin, S. (2002) Domain specificity versus expertise: factors influencing distinct processing of faces. Cognition 83, 1-29.
[16]
Cassia, V. M., Turati, C. and Simion, F. (2004) Can a nonspecific bias toward top-heavy patterns explain newborns’ face preference? Psychological Science 15, 379-383.
[17]
Duchaine, B. and Nakayama, K. (2005) Dissociations of face and object recognition in developmental prosopagnosia. Journal of Cognitive Neuroscience 17, 249-261.
[18]
Eimer, M. (1999) Does the face-specific N170 component reflect the activity of a specialized eye processor? Neuroreport 9, 2945-2948.
[19]
Eimer, M. (2000a) Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clinical Neurophysiology 111, 694-705.
[20]
Eimer, M. (2000b) Effects of face inversion on the structural encoding and recognition of faces. Evidence from event-related brain potentials. Brain Research. Cognitive Brain Research 10, 145-158.
[21]
Eimer, M. (2001) The face-specific N170 component reflects late stages in the structural encoding of faces. Neuroreport 11, 2319-2324.
[22]
Eimer, M. and Holmes, A. (2002) An ERP study on the time course of emotional face processing. Neuroreport 13, 427-431.
[23]
Eimer, M., Holmes, A. and Mcglone, F. P. (2003) The role of spatial attention in the processing of facial expression: an ERP study of rapid brain responses to six basic emotions. Cognitive, Affective, & Behavioral Neuroscience 3, 97-110.
[24]
Elder, G. H. Jr. (1969) Occupational mobility, life patterns, and personality. Journal of Health and Social Behavior 10, 308-323.
[25]
Farah, M. J., Wilson, K. D., Drain, M. and Tanaka, J. N. (1998). What is “special” about face perception? Psychological Review 105, 482-498.
[26]
Faul, F., Erdfelder, E., Buchner, A. and Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods 41, 1149-1160.
[27]
Grill-Spector, K. (2006) Selectivity of adaptation in single units: implications for fMRI experiments. Neuron 49, 170-171.
[28]
Halit, H., de Haan, M. and Johnson, M. H. (2000) Modulation of event-related potentials by prototypical and atypical faces. Neuroreport 11, 1871-1875.
[29]
Handy, T. C. (2005) Event-related Potentials: A Methods Handbook. MIT Press, Cambridge, Mass.
[30]
Heisz, J. J., Watter, S. and Shedden, J. M. (2006) Progressive N170 habituation to unattended repeated faces. Vision Research 46, 47-56.
[31]
Heisz, J. J., Watter, S. and Shedden, J. M. (2007) Automatic face identity encoding at the N170. Vision Research 46, 4604-4614.
[32]
Henson, R. N., Goshen-Gottstein, Y., Ganel, T., Otten, L. J., Quayle, A. and Rugg, M. D. (2003) Electrophysiological and haemodynamic correlates of face perception, recognition and priming. Cerebral Cortex 13, 793-805.
[33]
Herrmann, M. J., Ehlis, A., Ellgring, H. and Fallgatter, A. J. (2006) Early stages (P100) of face perception in humans as measured with event-related potentials (ERPs). Journal of Neural Transmission 112, 1073-1081.
[34]
Herzmann, G. (2017) Increased N250 amplitudes for other-race faces reflect more effortful processing at the individual level. International Journal of Psychophysiology 105, 57-65.
[35]
Hillyard, S. A. and Anllo-Vento, L. (1998) Event-related brain potentials in the study of visual selective attention. Proceedings of the National Academy of Sciences of the United States of America 95, 781-787.
[36]
Ip, C., Wang, H. and Fu, S. (2017) Relative expertise affects N170 during selective attention to superimposed face-character images. Psychophysiology 54, 955-968.
[37]
Ishai, A. (2008) Let’s face it: it’s a cortical network. Neuroimage 40, 415-419.
[39]
Itier, R. J. and Taylor, M. J. (2002) Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: a repetition study using ERPs. Neuroimage 15, 353-372.
[40]
Itier, R. J. and Taylor, M. J. (2004a) N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cerebral Cortex 14, 132-142.
[41]
Itier, R. J. and Taylor, M. J. (2004b) Source analysis of the N170 to faces and objects. Neuroreport 15, 1261-1265.
[42]
Itier, R. J., Latinus, M. and Taylor, M. J. (2006) Face, eye and object early processing: what is the face specificity? Neuroimage 29, 667-676.
[43]
Jacques, C., d’Arripe, O. and Rossion, B. (2007) The time course of the inversion effect during individual face discrimination. Journal of Vision 7, 3.
[44]
Johnson, M. H., Dziurawiec, S., Ellis, H. and Morton, J. (1991) Newborns’ preferential tracking of face-like stimuli and its subsequent decline. Cognition 40, 1-19.
[45]
Johnston, V. S. and Oliver-Rodriguez, J. C. (1997) Facial beauty and the late positive component of event-related potentials. The Journal of Sex Research 34, 188-198.
[46]
Kanwisher, N. and Yovel, G. (2007) The fusiform face area: a cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 361, 2109-2128.
[47]
Kanwisher, N., McDermott, J. and Chun, M. M. (1997) The fusiform face area: a module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience 17, 4302-4311.
[48]
Kida, T., Nishihira, Y., Hatta, A., Wasaka, T., Nakata, H., Sakamoto, M. and Nakajima, T. (2004) Changes in the somatosensory N250 and P300 by the variation of reaction time. European Journal of Applied Physiology 89, 326-330.
[49]
Kranz, F. and Ishai, A. (2006) Face perception is modulated by sexual preference. Current Biology 16, 63-68.
[50]
Langlois, J. H., Kalakanis, L., Rubenstein, A. J., Larson, A., Hallam, M. and Smoot, M. (2000) Maxims or myths of beauty? A meta-analytic and theoretical review. Psychological Bulletin 126, 390-423.
[51]
Leder, H., Mitrovic, A. and Goller, J. (2019) How beauty determines gaze! facial attractiveness and gaze duration in images of real world scenes. i-Perception 7, 2041669516664355.
[52]
Leder, H., Tinio, P. P. L., Fuchs, I. M. and Bohrn, I. (2011) When attractiveness demands longer looks: the effects of situation and gender. Quarterly Journal of Experimental Psychology 63, 1858-1871.
[53]
Lee, K., Anzures, G., Quinn, P. C., Pascalis, O. and Slater, A. (2011) Development of face processing expertise. In, Rhodes, G., Calder, A., Johnson, M. and Haxby, J. V. (eds.) Oxford Handbook of Face Perception (pp. 753-778). New York, Oxford University Press.
[54]
Lorenzo, G. L., Biesanz, J. C. and Human, L. J. (2010) What is beautiful is good and more accurately understood. Physical attractiveness and accuracy in first impressions of personality. Psychological Science 21, 1777-1782.
[55]
Luo, Y. J., Greenwood, P. M. and Parasuraman, R. (2002) Dynamics of the spatial scale of visual attention revealed by brain event-related potentials. Brain Research. Cognitive Brain Research 12, 371-381.
[57]
Mangun, G. R. (1995) Neural mechanisms of visual selective attention. Psychophysiology 32, 4-18.
[58]
Mangun, G. R. and Buck, L. A. (1998) Sustained visual-spatial attention produces costs and benefits in response time and evoked neural activity. Neuropsychologia 36, 189-200.
[59]
Mangun, G. R., Buonocore, M. H., Girelli, M. and Jha, A. P. (1999) ERP and fMRI measures of visual spatial selective attention. Human Brain Mapping 6, 383-389.
[60]
Mitsudo, T., Kamio, Y., Goto, Y., Nakashima, T. and Tobimatsu, S. (2011) Neural responses in the occipital cortex to unrecognizable faces. Clinical Neurophysiology 122, 708-718.
[61]
Miyakoshi, M., Nomura, M. and Ohira, H. (2007) An ERP study on self-relevant object recognition. Brain and Cognition 63, 182-189.
[62]
Mondloch, C. J., Lewis, T. L., Budreau, D. R., Maurer, D., Dannemiller, J. L., Stephens, B. R. and Kleiner, K. A. (1999) Face perception during early infancy. Psychological Science 10, 419-422.
[63]
Nasr, S. and Esteky, H. (2009). A study of N250 event-related brain potential during face and non-face detection tasks. Journal of Vision 9, 1-14.
[64]
Nemrodov, D. and Itier, R. J. (2012) The role of eyes in early face processing: a rapid adaptation study of the inversion effect. British Journal of Psychology 102, 783-798.
[65]
Picozzi, M., Cassia, V. M., Turati, C. and Vescovo, E. (2009) The effect of inversion on 3- to 5-year-olds’ recognition of face and nonface visual objects. Journal of Experimental Child Psychology 102, 487-502.
[66]
Riggio, R. E. and Woll, S. B. (1984) The role of nonverbal cues and physical attractiveness in the selection of dating partners. Journal of Social and Personal Relationships 1, 347-357.
[67]
Rossion, B. and Caharel, S. (2011) ERP evidence for the speed of face categorization in the human brain: Disentangling the contribution of low-level visual cues from face perception. Vision Research 51, 1297-1311.
[68]
Rossion, B. and Jacques, C. (2011) The N170: understanding the time-course of face perception in the human brain. In, Kappenman, E. S. and Luck, S. J. (eds.) The Oxford Handbook of Event-Related Potential Components (pp.115-142). New York, Oxford University Press.
[69]
Rossion, B. and Jacques, C. (2008) Does physical interstimulus variance account for early electrophysiological face sensitive responses in the human brain? Ten lessons on the N170. Neuroimage 39, 1959-1979.
[70]
Rossion, B., Caldara, R., Seghier, M., Schuller, A., Lazeyras, F. and Mayer, E. (2003) A network of occipito-temporal face-sensitive areas besides the right middle fusiform gyrus is necessary for normal face processing. Brain: A Journal of Neurology 126, 2381-2395.
[71]
Rossion, B., Delvenne, J. F., Debatisse, D., Goffaux, V., Bruyer, R., Crommelinck, M. and Guérit, J. M. (1999) Spatio-temporal localization of the face inversion effect: an event-related potentials study. Biological Psychology 50, 173-189.
[72]
Rossion, B., Gauthier, I., Goffaux, V., Tarr, M. J. and Crommelinck, M. (2002). Expertise training with novel objects leads to left-lateralized facelike electrophysiological responses. Psychological Science 13, 250-257.
[73]
Rossion, B., Gauthier, I., Tarr, M. J., Despland, P., Bruyer, R., Linotte, S. and Crommelinck, M. (2000) The N170 occipitotemporal component is delayed and enhanced to inverted faces but not to inverted objects: an electrophysiological account of face-specific processes in the human brain. Neuroreport 11, 69-74.
[74]
Rossion, B., Joyce, C. A., Cottrell, G. W. and Tarr, M. J. (2004) Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. Neuroimage 20, 1609-1624.
[75]
Santangelo, V. (2015) Forced to remember: when memory is biased by salient information. Behavioural Brain Research 283, 1-10.
[76]
Schweinberger, S. R., Pfutze, E. M. and Sommer, W. (1995). Repetition priming and associative priming of face recognition: evidence from event-related potentials. Journal of Experimental Psychology 21, 722-736.
[77]
Schweinberger, S. R., Pickering, E. C., Jentzsch, I., Burton, A. M. and Kaufmann, J. M. (2002) Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cognitive Brain Research 14, 398-409.
[78]
Shibata, T., Nishijo, H., Tamura, R., Miyamoto, K., Eifuku, S., Endo, S. and Taketoshi, O. (2002) Generators of visual evoked potentials for faces and eyes in the human brain as determined by dipole localization. Brain Topography 15, 51-63.
[79]
Tanaka, J. W. and Curran, T. (2001) A neural basis for expert object recognition. Psychological Science 12, 43-47.
[80]
Valenza, E., Simion, F., Cassia, V. M. and Umiltà, C. (1996) Face preference at birth. Journal of Experimental Psychology. Human Perception and Performance 22, 892-903.
[81]
Vermeir, I. and Van de Sompel, D. (2013) Assessing the what is beautiful is good stereotype and the influence of moderately attractive and less attractive advertising models on self-perception, ad attitudes, and purchase intentions of 8-13-year-old children. Journal of Consumer Policy 37, 205-233.
[82]
Wang, W., Miao, D. and Zhao, L. (2014) Automatic detection of orientation changes of faces versus non-face objects: a visual MMN study. Biological Psychology 100, 71-78.
[83]
Watanabe, S., Kakigi, R. and Puce, A. (2003) The spatiotemporal dynamics of the face inversion effect: a magneto- and electro-encephalographic study. Neuroscience 116, 879-895.
[84]
Wuttke, S. J. and Schweinberger, S. R. (2019) The P200 predominantly reflects distance-to-norm in face space whereas the N250 reflects activation of identity-specific representations of known faces. Biological Psychology 140, 86-95.
[85]
Zhang, Y., Kong, F., Chen, H., Jackson, T., Han, L., Meng, J., Yang, Z., Gao, J. and Najam ul Hasan, A. (2011) Identifying cognitive preferences for attractive female faces: an event-related potential experiment using a study-test paradigm. Journal of Neuroscience Research 89, 1887-1893.
[86]
Zhang, Y., Wei, B., Zhao, P., Zheng, M. and Zhang, L. (2016) Gender differences in memory processing of female facial attractiveness: evidence from event-related potentials. Neurocase 22, 317-323.
[87]
Zhang, Y., Xiang, Y., Guo, Y. and Zhang, L. (2018) Beauty-related perceptual bias: who captures the mind of the beholder? Brain and Behavior 8, e00945.
[88]
Zhang, Y., Zheng, M. and Wang, X. (2017) Effects of facial attractiveness on personality stimuli in an implicit priming task: an ERP study. Neurological Research 38, 685-691.
[89]
Zheng, X. and Segalowitz, S. J. (2015) Putting a face in its place: in- and out-group membership alters the N170 response. Social Cognitive and Affective Neuroscience 9, 961-968.
[90]
Zheng, X. and Segalowitz, S. J. (2011) The N170 face inversion effect is both face-specific and domain-general: separate amplitude and latency effects. Psychophysiology 48, S46.