†These authors contributed equally.
Earlier electroencephalographic studies have compared attractive with unattractive faces, and faces with other objects such as flowers, without revealing whether a recognition memory bias toward faces and flowers exists, or whether humans exhibit enhanced specific components toward all attractive objects or only toward attractive faces. For objects with similar degrees of attractiveness, we sought to determine whether the N170, P1, and N250 components reflect the attractiveness of faces and flowers, comparing event-related potentials to demonstrate the different perceptual mechanisms humans use to recognize high attractive faces and high attractive flowers. Repeated high attractive faces tended to elicit a larger N170, whereas the P1 was preferentially associated with repeated high attractive flowers; both indicated that the repetition enhancement effect occurred only for repeated attractive faces. Thus, the perceptual mechanisms for processing repeated high attractive faces and repeated high attractive flowers differ. However, there was no significant difference in N250 between repeated faces and repeated flowers, or between high attractive faces and high attractive flowers. Consequently, high attractive faces and high attractive flowers capture the beholder's memory bias at different processing stages. The N170 and P1 components are affected by attractiveness, thereby demonstrating the differences between human perceptual mechanisms in recognizing high attractive faces and objects.
The abilities to recognize faces and objects are vital human skills, and plenty of evidence has shown that the former fundamentally differs from the latter. For instance, newborns prefer face-like configurations to other pictures (Johnson et al., 1991; Macchi Cassia et al., 2004), even when the image is upside-down (Mondloch et al., 1999; Picozzi et al., 2009). For adults, face recognition is more influenced by inversion than object recognition and is also highly dependent on the spatial relations among features (Farah et al., 1998). Using a deviant-standard-reversed paradigm, Wang et al. (2014) recently provided electrophysiological evidence that face orientation changes elicited larger event-related potential (ERP) components than object spatial changes.
Electroencephalographic (EEG) recordings of gross electrical activities, such as ERPs in the visual cortex, indicate that the N1, P1, N170, and P170 components are associated with face perception (Halit et al., 2000). Numerous studies have confirmed that a face-selective response peaks early, at approximately 170 ms after the presentation of a facial stimulus (Bentin et al., 1996). Also, the N170 amplitude for faces is significantly larger than that for other objects (Bentin et al., 1996). Moreover, the face-specific N170 component is entirely unaffected by facial expression, suggesting that emotional expression analysis and structural face encoding are parallel processes (Eimer and Holmes, 2002; Eimer et al., 2003). N170 is also not influenced by race (Caldara et al., 2004) or familiarity (Bentin and Deouell, 2012; Eimer, 2000a).
Past studies examining ERP-related cognitive processes have interpreted N170 face sensitivity either as evidence of brain mechanisms specialized for face processing (Duchaine and Nakayama, 2005) or as the result of adults typically having a higher level of expertise with faces than with other object categories (Kanwisher and Yovel, 2007).
Rossion et al. (2002) found that the amplitude effects are face-specific and mainly reflect the eye region contribution. Compared with other objects, a small part of the human visual cortex (fusiform face area) is more active when people look at faces (Itier et al., 2006).
The N170 component is most responsive to facial stimuli in the temporoparietal regions of the human scalp, and the response to facial stimuli is larger than that to other visual objects (Bentin et al., 1996; Kanwisher et al., 1997). The electrophysiological activities recorded by ERPs evoked by faces in the bilateral occipitotemporal regions have also been shown to differ from those evoked by objects at approximately 170 ms. Furthermore, N170 sources corresponding to the fusiform gyrus (FG) are located in the ventral temporal cortical areas (Rossion et al., 1999). A common bilateral source of N170 for faces, words, and cars is located in the posterior FG (Itier and Taylor, 2002). Differences among categories are found in the lateralization, intensity, and orientation of dipoles. By comparison, N170 sources for faces are found in the temporal cortex (Schweinberger et al., 2002).
Although the N170 is earlier and larger for faces than for all other objects (Rossion et al., 2003; Shibata et al., 2002), its specificity to faces remains to be fully explored. Only a few object categories have been used in ERP studies on the face-sensitive N170, such as words, houses, cars, and butterflies (Itier and Taylor, 2004a). Other non-face objects have been added to identify the neural sources of the face-sensitive N170 and to confirm whether or not N170 amplitudes for upright and inverted faces differ significantly from those for other objects, such as patterns, road signs, tools, lions, houses, and mushrooms. However, no statistically significant differences in N170 latency have been found between images of faces and flowers (Carmel and Bentin, 2002). Thus, a certain similarity must exist between flowers and faces, especially since attractive people are often praised as 'as pretty as flowers'. Recognition memory specificity toward attractive and unattractive faces has also been explored, with results indicating that reaction times are longer for unattractive faces while accuracy is higher for attractive faces (Aharon et al., 2001; Leder et al., 2011, 2019). ERPs have also revealed that attractive faces elicit larger ERP amplitudes (P160, N250, and P400) than unattractive faces in a recognition task (Rossion and Jacques, 2008).
Human faces can also elicit early ERP components (Braeutigam et al., 2001), such as P1 and N250. The P1 component is thought to reflect early attention-based visual perceptual processing (Hillyard and Anllo-Vento, 1998) and originates in the bilateral occipital lobe and FG (Mangun and Buck, 1998). Furthermore, P1 is associated with spatial visual attention and search resources (Luo et al., 2002; Mangun, 1995; Mangun and Buck, 1998), and may reflect the visuospatial orientation of attention and faster attention capture. P1 can also reflect human face processing (Herrmann et al., 2006; Mitsudo et al., 2011). The processing mechanism for human faces differs from that for non-human faces, as reflected in the P1 component (Rossion and Caharel, 2011).
The evidence of N250 face sensitivity mainly comes from face familiarity modulation (Nasr and Esteky, 2009). A significant correlation has been confirmed between the N250 component and the processing of known face recognition (Barragan-Jason et al., 2016; Wuttke and Schweinberger, 2019). Furthermore, the N250 amplitude increases with face familiarity (Alzueta et al., 2019). It is exclusively sensitive to face visibility, even when non-face stimuli serve as the task target. A correlation between the evoked N170 and N250 is also observed (Nasr and Esteky, 2009).
The ERP effect of attractive faces can also be observed at the Pz electrode point at approximately 250 ms (Johnston and Oliver-Rodriguez, 1997). Therefore, the P1 and N250 components associated with face processing may be related to memory bias in face recognition.
Before the study, G*Power software (Faul et al., 2009) was employed to calculate the total sample size needed to achieve a power of 0.95 for the repeated-measures ANOVA. Theoretical considerations suggested that we had reason to expect a "large" effect size (f = 0.40) (Cohen, 1969, p. 348); thus, we selected an a priori analysis to calculate the required sample size. The output indicated that the total sample size needed was 24, and the actual power was 0.82. Therefore, 30 adult participants aged 19-24 years (mean age = 22.45) were invited to take part in this experiment: 15 males and 15 females. Each participant gave informed consent after fully understanding the procedure and being given time to consider whether or not to take part in the experiment. All participants were right-handed, had self-reported normal vision, had no history of neurological or psychiatric disorders, and were not taking psychoactive medication.
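As an illustration of this kind of a priori calculation, the sketch below computes approximate power from the noncentral F distribution. It uses a one-way ANOVA approximation rather than G*Power's repeated-measures formula (which also depends on the number of measurements and their correlation), so the numbers are not expected to match those reported above.

```python
import numpy as np
from scipy.stats import f as f_dist, ncf

def anova_power(effect_f, n_total, k_groups, alpha=0.05):
    """Approximate power of a one-way ANOVA for Cohen's effect size f.

    Illustrative stand-in for the G*Power calculation in the text; the
    repeated-measures design there also depends on the number of
    measurements and their correlation, which are omitted here.
    """
    df1 = k_groups - 1                  # numerator degrees of freedom
    df2 = n_total - k_groups            # denominator degrees of freedom
    lam = (effect_f ** 2) * n_total     # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, lam)

# Power grows with sample size for a fixed "large" effect (f = 0.40).
for n in (24, 30, 60):
    print(n, round(anova_power(0.40, n, k_groups=2), 3))
```

The design choice here is to expose the noncentrality parameter explicitly, which is the quantity G*Power-style tools compute internally before inverting the noncentral F distribution.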
Before the ERP experiment, each participant was asked to select what they thought were the 160 high/low attractive faces and the 160 high/low attractive flowers from a picture pool (Itier and Taylor, 2004b).
First, we collected 845 unfamiliar Chinese female faces and 894 flower images from open picture material resources on Google's website1 (1http://www.google.com.hk/images?q=%E8%AF%81%E4%BB%B6%E7%85%A7&hl=zh-CN&newwindow=1&safe=strict&client=aff-cs-360se&hs=RyH&source=lnt&tbs=isch:1,itp:photo&prmd=ivnsu&source=lnt&sa=X&ei=OFUQTdGrLYaGvAP-q-jIDQ&ved=0CA8QpwU, dates: 9-July-2009). After low-resolution images were removed, 796 face stimuli and 772 flower stimuli remained. They were edited to a uniform format (6 by 9 cm; 150 by 300 pixels) and converted to 8-bit gray-scale with identical white backgrounds. The photographs were digitally edited using Adobe Photoshop. The external face features, including hair, ears, and neck, were removed, while the inner features, including eyes, nose, mouth, and cheeks, were kept.
All faces were in the frontal view. For the 646 face and 667 flower images selected by the two specialists, 80 Chinese college students (mean age 21.98 years) conducted a further 9-step rating on the dimensions of Attractiveness (a beauty that appeals to the senses), Joviality (participants feel jolly and full of good humor when looking at the image), Arousal (a state of heightened physiological activity when looking at the image), and Distinctiveness (the degree of distinguishing traits of the image), and a 3-step rating on Emotion valence (1 = positive, 2 = neutral, 3 = negative).
Finally, 226 high attractive face images and 249 high attractive flower images (rating range: 6-9), and 260 low attractive face images and 245 low attractive flower images (rating range: 1-4), were chosen. t-tests indicated that the high and low attractiveness categories differed significantly, whereas none of the rating dimensions differed significantly between faces and flowers (P
| Rating | Attractiveness | Joviality | Arousal | Dominance | Emotion valence |
| high attractive faces | 7.68 (0.89) | 7.50 (1.00) | 7.171 (0.24) | 6.59 (1.06) | 2 |
| low attractive faces | 3.37 (0.92) | 3.60 (0.69) | 7.28 (0.89) | 6.83 (0.77) | 2 |
| t | 28.693 | 31 (0.981) | -0.607 | -1.444 | |
| P | | | 0.545 | 0.153 | |
| high attractive flowers | 7.57 (0.98) | 7.71 (0.85) | 7.10 (0.77) | 6.65 (0.98) | 2 |
| low attractive flowers | 3.43 (0.54) | 3.55 (0.59) | 7.11 (0.71) | 6.70 (0.75) | 2 |
| t | 37.29 | 32.468 | -0.092 | -0.495 | |
| P | | | 0.927 | 0.622 | |
Then, an experimental procedure was developed from these pictures; each participant's experimental stimuli were therefore unique and distinct from those of the others. The ERP experiment was carried out a week later. The study phase comprised 80 pictures of high attractive faces, 80 pictures of low attractive faces, 80 pictures of high attractive flowers, and 80 pictures of low attractive flowers, randomly drawn from the pictures selected before the experiment. The test phase comprised 160 high/low attractive faces and 160 high/low attractive flowers (including 80 fresh pictures of high/low attractive faces, 80 fresh pictures of high/low attractive flowers, 80 repeated pictures of high/low attractive faces, and 80 repeated pictures of high/low attractive flowers). All face and flower pictures were edited to a uniform format (gray-scaled; 6 by 9 cm; 150 by 300 pixels). Before the experiment, they were modified in Adobe Photoshop 7.0.1 to have the same numerical values of physical properties, including saturation, color gamut, luminance, lightness, contrast, and color gradation. However, several features might have profoundly affected the observers' fixation patterns. In particular, visual saliency has been shown to affect perceptual (i.e., fixation patterns) and post-perceptual processes (Santangelo, 2015). Based on the saliency literature (Santangelo, 2015), visual saliency was checked in the current study to confirm that the nose, mouth, and eye regions were not more salient than other regions (Zhang et al., 2018). Therefore, low-level features were well controlled between faces and flowers.
Participants performed a study-test paradigm. Thus, they completed two continuous phases: the study phase and the test phase. Initially, each participant performed 10 training trials.
Participants were presented with a series of the selected images, including faces and flowers, in the study phase. To avoid explicit learning and memorization of the face and flower images, a modified location-matching paradigm (Zhang et al., 2011) was used. One of the images appeared randomly at one of the four corners of the screen for 300 ms, followed by the fixation point for 500 ms, and then an image for 1000 ms. Participants were instructed to judge whether the current image was presented in the same visuospatial location as the previous image. The current images also appeared randomly at one of the four corners of the screen, with the four possible visuospatial locations given equal probabilities; the expected probabilities of same- and different-position trials were therefore 25% and 75%, respectively. Participants pressed the "1" and "2" keys of the keypad to indicate the same and a different position, respectively. Of the 320 trials, 160 comprised high/low attractive flowers and 160 high/low attractive female faces. Following the study phase, the test phase was initiated after a five-minute break.
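The 25%/75% split follows directly from assigning each image to one of four corners with equal probability. A minimal simulation of this aspect of the paradigm (function and corner names are hypothetical, for illustration only):

```python
import random

CORNERS = ("top-left", "top-right", "bottom-left", "bottom-right")

def simulate_location_matching(n_trials, seed=0):
    """Simulate the study-phase location judgment: each image appears at
    one of four corners with equal probability, so a trial matches the
    previous trial's position with expected probability 1/4."""
    rng = random.Random(seed)
    positions = [rng.choice(CORNERS) for _ in range(n_trials)]
    same = sum(a == b for a, b in zip(positions, positions[1:]))
    return same / (n_trials - 1)

print(simulate_location_matching(100_000))  # ≈ 0.25
```

With many simulated trials, the observed same-position rate converges to the expected 25%, matching the probabilities stated in the procedure.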
In the test phase, participants were instructed to recognize which face and flower images had been seen in the study phase, pressing the "1" and "2" keys to indicate whether or not each stimulus had been presented before, respectively. The face and flower images, including repeated and fresh images, were selected randomly from the same image pool and presented in random order. First, a fixation appeared in the screen's center for 500 ms, followed by a clear screen for 300 ms. Next, the target stimulus appeared for 1000 ms, followed by a clear screen for 1500 ms. There were 640 trials divided into two blocks, including 320 previously viewed images and 320 fresh images. Each block consisted of 320 trials, including 40 repeated high/low attractive faces, 40 repeated high/low attractive flowers, 40 fresh high/low attractive faces, and 40 fresh high/low attractive flowers. Each stimulus was presented randomly. A schematic overview of the experiment is shown in Fig. 1.

A schematic example of the one study-test trial from the experiment.
The electroencephalograph (EEG) voltages from 64 scalp sites were recorded with Brain Vision Recorder software (Version 1.10, Brain Products GmbH, Munich, Germany), referenced to the left and right mastoids (average mastoid reference) (Zhang et al., 2016). The EEG voltages were amplified using a DC to 100 Hz bandpass and continuously sampled at 500 Hz per channel. Impedances were kept below 5 kΩ.
Off-line EEG data analysis was conducted in Brain Vision Analyzer (Version 2.1, Brain Products GmbH, Munich, Germany). During off-line signal processing, individual trials were bandpass-filtered between 0.1 and 30.0 Hz. For the ERP analysis, epochs from 200 ms before to 1000 ms after stimulus onset were extracted for each face and flower stimulus. The segmented epochs with EEG voltages greater than
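The epoching steps described above (200 ms pre-stimulus to 1000 ms post-stimulus at 500 Hz, baseline correction, and amplitude-based rejection) can be sketched as follows. The rejection threshold used here is a hypothetical placeholder, since the actual value is not stated above, and the function name is illustrative.

```python
import numpy as np

FS = 500                      # sampling rate (Hz), as in the recording setup
PRE, POST = 0.2, 1.0          # epoch window: 200 ms before to 1000 ms after onset

def epoch(signal_uv, event_samples, reject_uv=80.0):
    """Cut stimulus-locked epochs, baseline-correct to the pre-stimulus
    mean, and drop epochs exceeding a rejection threshold.
    `reject_uv` is a hypothetical value, not the one used in the study."""
    n_pre, n_post = int(PRE * FS), int(POST * FS)
    epochs = []
    for ev in event_samples:
        seg = signal_uv[ev - n_pre: ev + n_post].astype(float)
        seg -= seg[:n_pre].mean()               # baseline correction
        if np.abs(seg).max() <= reject_uv:      # artifact rejection
            epochs.append(seg)
    return np.array(epochs)

# Example: clean epochs are 600 samples long (1.2 s at 500 Hz).
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 5.0, 60_000)              # synthetic continuous EEG (µV)
kept = epoch(eeg, event_samples=[1000, 5000, 9000])
print(kept.shape)  # → (3, 600)
```

In practice this is what dedicated toolboxes do internally; the sketch only makes the window arithmetic and rejection rule explicit.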
For the ERP analysis, based on a visual examination of the topographical maps and grand averaged waveforms (Fig. 4 and Fig. 5, respectively), as well as on previous literature (Ip et al., 2017), the P1 (100-180 ms), N170 (140-200 ms), and N250 (200-300 ms) components were identified. These components, following targets with correct responses, were markedly elicited, after which the latency and amplitude (baseline to peak) were measured within the 100-300 ms time window. The four most representative electrode sites (PO7, PO8, O1, and O2), located at parieto-occipital and occipital sites, were selected for these components' analyses. The amplitude in each condition was measured relative to the mean pre-stimulus voltage level. A visual inspection of the grand-averaged waveforms suggested positive and negative peaks within these time windows; amplitudes are reported in µV and latencies in ms.
For the behavioral results, recognition accuracy (ACC) was assessed by calculating the percentage of correct responses for repeated (hits) and fresh (correct rejections) images of high/low attractive faces and flowers for each participant. The ACC and reaction times (RTs) for correct recognition were analyzed by repeated-measures ANOVA, with Memory (repeated/fresh), Attractiveness (high/low), and Picture (faces/flowers) as within-subject factors.
All data were exported into SPSS 20.0 for repeated-measures ANOVA. Least-Significant-Difference (LSD) post hoc tests were used when a main effect or interaction effect was significant. For all analyses, the P-values were corrected for deviation from sphericity according to the Greenhouse-Geisser method.
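The Greenhouse-Geisser correction estimates a sphericity factor epsilon from the covariance of the repeated measures and multiplies the ANOVA degrees of freedom by it. A sketch of the standard estimator (not SPSS's exact implementation, though the formula is the conventional one):

```python
import numpy as np

def greenhouse_geisser_epsilon(data):
    """Greenhouse-Geisser epsilon for an (n_subjects, k_levels) matrix of
    a repeated-measures factor; the sphericity correction that SPSS
    applies to the ANOVA degrees of freedom."""
    S = np.cov(data, rowvar=False)                 # k x k covariance matrix
    k = S.shape[0]
    # Double-center the covariance matrix (project out level means).
    row = S.mean(axis=1, keepdims=True)
    col = S.mean(axis=0, keepdims=True)
    S_star = S - row - col + S.mean()
    return np.trace(S_star) ** 2 / ((k - 1) * np.sum(S_star ** 2))

# Epsilon is bounded by 1/(k-1) and 1; corrected df = epsilon * df.
rng = np.random.default_rng(1)
eps = greenhouse_geisser_epsilon(rng.normal(size=(30, 3)))
print(1 / 2 <= eps <= 1.0)  # → True
```

Under perfect sphericity epsilon equals 1 (no correction); for a two-level factor, as in the 2 x 2 x 2 designs here, epsilon is always 1, so the correction only bites for factors with three or more levels.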
In the ACC ANOVA, the main effect for Attractive was significant [F (1, 29) = 10.43, P = 0.003,
In the RTs ANOVA, the main effect for Memory was significant [F (1, 29) = 11.85, P = 0.002,
Descriptive statistics of the ACC and RTs for each condition are shown in Fig. 2 and Fig. 3.

ACC of the recognition of faces vs. flowers.

RT of the recognition of faces vs. flowers.
The N170 amplitude was analyzed by a four-way repeated-measures ANOVA, with Pictures (faces, flowers), Attractive (high, low), Memory (repeated, fresh), and Hemisphere (PO7, PO8) as within-subject factors. The main effect for Hemisphere was significant [F (1, 29) = 6.94, P = 0.013,
The four-way repeated-measures ANOVA of N170 latencies revealed that the main effect of Pictures was significant [F (1, 29) = 10.80, P = 0.003,
The P1 amplitude was analyzed by a four-way repeated-measures ANOVA with Pictures (faces, flowers), Attractive (high, low), Memory (repeated, fresh), and Hemisphere (O1, O2) as within-subject factors. The main effect for Attractive was significant [F (1, 29) = 13.37, P = 0.001,
The four-way repeated-measures ANOVA of P1 latencies revealed that the main effect of Hemisphere was significant [F (1, 29) = 17.53, P = 0.001,
The N250 amplitude was analyzed by a five-way repeated-measures ANOVA with Pictures (faces, flowers), Attractive (high, low), Memory (repeated, fresh), Hemisphere (left, right), and Electrodes (O1, O2, PO7, and PO8) as within-subject factors. The main effect for Pictures was significant [F (1, 29) = 12.16, P = 0.002,
To investigate whether humans show a cognitive bias toward all attractive things or only toward attractive faces, and to demonstrate the differences between human perceptual mechanisms in recognizing faces and objects, a study-test paradigm was used to measure the role of attractiveness in modulating the N170, P1, and N250 components during a face and flower recognition task.
Behavioral results showed that accuracy was higher for high attractive pictures than for low attractive pictures, indicating that people are more impressed by, and better at recognizing, high attractive pictures. Human beings are naturally keen on the pursuit of "beauty"; thus, people unsurprisingly paid more attention to high attractive objects. The response time for fresh pictures was significantly longer than that for repeated pictures because recognizing fresh pictures usually takes longer. Moreover, the results indicated that reaction times were longer for faces than for flowers, suggesting that faces were much more complex and challenging to recognize than flowers. The human face is generally a valuable source of information; thus, people generally spend more time recognizing it.
The N170 amplitude of repeated high attractive faces was larger than that of repeated high attractive flowers, indicating the repetition enhancement effect and sensitivity to faces. Faces have long been argued to be a "special" category of visual stimuli, showing both cortical specificity (Ishai, 2008) and a wide range of face-specific perceptual effects (Lee et al., 2011). Although its exact neural generators are still a matter of debate (Itier and Taylor, 2004b; Rossion et al., 2003; Watanabe et al., 2003), the N170 component is believed to reflect structural encoding (Rossion et al., 1999; Eimer, 2000b), that is, the extraction of a perceptual representation of the face. The N170 component is reliably larger toward faces than toward any other object category tested (Bentin et al., 1996; Carmel and Bentin, 2002; Eimer, 2000b; Itier and Taylor, 2004a) and has become a marker for early face processing. The study phase may only involve the classification and evaluation of faces, but the test phase involves memory and the extraction of faces. High attractive pictures were repeated in the test phase, but the repetition enhancement effect occurred only for repeated attractive faces, which may be strengthened during the recognition extraction process. Thus, the amplitude difference between repeated attractive faces and repeated attractive flowers on N170 increased. This result is consistent with a previous study of repetition priming effects using face recognition tasks (Schweinberger et al., 1995). More importantly, attractive flowers and attractive faces are of high aesthetic and rewarding value. For instance, women's attractive faces are highly relevant to their economic activities (Elder, 2003). Attractive people also have more chances of going on a date than unattractive ones (Riggio, 1984). Several studies have proven that attractive people are perceived positively (Lorenzo et al., 2010; Vermeir and Van de Sompel, 2013).
Thus, attractive people may benefit from such enhanced positivity (Langlois et al., 2000). The grand mean waveforms in Fig. 4 and Fig. 5 indicate the modulation of temporally distinct components. In particular, the main regions, namely the parieto-occipital regions, were activated by faces. These results are consistent with the findings of existing research (Zheng and Segalowitz, 2011, 2015), thereby supporting the face specificity of repetition enhancement and the importance of faces for early face-specific processing. Future studies could adopt fMRI to investigate the neural system underlying the face-sensitive N170 component for different attractive faces, flowers, or other objects.

Grand-mean event-related potentials at representative electrode sites at four locations (Parietal-Occipital sites, PO7, PO8; Occipital sites, O1, O2) during the recognition of four conditions (repeated high attractive faces, fresh high attractive faces, repeated high attractive flowers, and fresh high attractive flowers).

Grand-mean event-related potentials at representative electrode sites at four locations (Parietal-Occipital sites, PO7, PO8; Occipital sites, O1, O2) during the recognition of four conditions (repeated low attractive faces, fresh low attractive faces, repeated low attractive flowers, and fresh low attractive flowers).
Greater P1 amplitude was found for repeated high attractive flowers than for repeated high attractive faces, and the latency was shorter for repeated faces than for repeated flowers. P1 has been related to early visual processing in face perception (Zhang et al., 2011). This result suggests that people were more alert to faces than to flowers, even when faced with the same repeated attractive images, and indicates the faster visual orientation and attention capture of faces. In the current study, people were more familiar with faces than with flowers, explaining why repeated attractive faces were distinct from repeated attractive flowers. The effect of familiarity on the cognitive processing of perception and recognition has been observed in past studies using the ERP technique.
Caharel et al. (2003) used this technique to record ERPs triggered by three different faces (i.e., an unfamiliar face, a famous face, and the face of the subject) and found a familiarity effect. Also, self-relevance is processed by high-order cognitive functions when participants view the following: 'SELF', objects owned by a participant; 'FAMILIAR', disposable and public objects, that is, objects with reduced self-relevant familiarity; and 'UNFAMILIAR', objects belonging to others (Heisz et al., 2006). Lower amplitudes for familiar self-faces have also been observed, suggesting that self-face recognition is facilitated by a reduced need for attentional resources (Alzueta et al., 2019).
Accordingly, compared with repeated pretty flowers, people allocated fewer attentional resources to repeated attractive faces. The P1 effect for faces might be attributed to the possibility of processing faces more automatically than flowers, thus reflecting the distribution of early attentional resources across attractive faces and attractive objects. The current study also found a repetition enhancement effect for P1 amplitudes and for the latencies of repeated faces; the attentional resources of early visual processing were lower for repeated faces than for repeated flowers.
Moreover, the N250 component showed greater amplitude for faces than for flowers, indicating sensitivity to faces. N170 and N250 are two components related to face processing, regulated by attention resources and facial expressions, respectively (Calvo and Beltrán, 2014). Thus, the N250 component responded more strongly to faces than to flowers. This result might also reflect active target detection (Kida et al., 2004) and discrimination (Calvo and Beltrán, 2014). The human face is generally a valuable source of information; it can reflect a person's identity, age, gender, and even feelings, and people are very skilled at "reading" these types of information. N250 has been proposed to reflect perceptual memory representations for individual faces (Herzmann, 2017). Therefore, faces trigger stronger responses than objects from face-selective neurons. This finding suggests sensitive mechanisms for human faces. However, in the current study, no significant N250 differences were observed between repeated faces and repeated flowers, or between attractive faces and attractive flowers.
Given that N170 is regarded as a marker of a face-specific system, merely showing that its amplitude is larger in response to faces than to other stimulus categories is insufficient. Other factors must also be considered, including the existence or absence of similar N170 distinctions across other categories, and the interaction of these distinctions with task-associated strategies (e.g., attention and categorization) and observer-associated factors (e.g., levels of expertise). In the current work, experiments were carried out under the modified location-matching paradigm. Because the paradigm was task-independent, attention to attractive faces and attractive flowers was adequately controlled. The face and flower stimuli were edited to a uniform format by controlling the same numerical values of their physical properties, including saturation, color gamut, luminance, lightness, contrast, and color gradation, and by presenting them with the same background and position. No difference was observed in the low-level features between faces and flowers. Although the sample in this study was well powered and reached the size required to achieve a significant effect, the number of participants was limited and included only university students. On this basis, future research should fully consider the influence of other factors on the experimental results, such as additional experimental materials and particular groups of subjects.
The N170 amplitude elicited by repeated attractive faces was significantly larger than that elicited by repeated attractive flowers. The P1 amplitude elicited by repeated attractive flowers was significantly larger than that elicited by repeated attractive faces. These results revealed that the repetition enhancement effect of N170 and the familiarity effect of P1 were specific to attractive faces. Therefore, in a recognition memory task, attractiveness modulated the face-specific N170 and P1 components, but not the N250.
Conceived and designed the experiment: ZY, LN, HFF, YCH, WGX, ZPQ. Recruitment and payment of participants: XYF, WJY. Analyzed the data: LN, HFF, ZPQ. Wrote and revised the paper: LN, HFF, ZPQ, CJW, AK.
Each participant gave their informed consent after fully understanding the procedure and being given time to consider whether or not to take part in the experiment.
This research was supported and granted by the ’Ministry of education of humanities and social sciences research fund (19YJA880082)’, ’Key projects of Educational Science Planning of Hubei Province (2019GA003)’ and ’Natural science foundation of Hubei Province (2019CFB425)’ to YZ. We thank all the participants for their time and interest and the reviewers for their valuable feedback.
The authors declare no conflict of interest.