Faces in context: Bottom-up and top-down influences on face perception
Friday, 04.06.2021:
13:00 - 14:30

Session chair: Julia Baum, Humboldt-Universität zu Berlin
Session chair: Rasha Abdel Rahman, Humboldt-Universität zu Berlin
Location: Attention and perception

Session summary

The way we process and evaluate others’ faces depends not only on the face alone, but also on the situational context, our goals and experiences, and what we know about the person. In this symposium, five talks highlight different aspects of face processing in context, ranging from task-related effects, through knowledge-based effects, to encounters in real-life situations. Enya Weidner presents data from intracranial amygdala recordings investigating the mechanisms and time course underlying the interaction of goal-directed attention and emotional processing. Employing ERPs, the next talks focus on how learning history shapes face processing. Anne Schacht shows how expression-induced salience and learning through conditioning modulate the processing of faces. Next, Julia Baum shows how more complex learning of verbal emotional information influences the processing of faces varying in attractiveness. Zooming out, we ask whether our understanding of face and emotion processing can benefit from putting faces back onto the body – as in real-life situations. Kirsten Stark uses videos of real-life intense emotional reactions to show how the affective valence communicated by the body interacts with the recognition of facial expressions. Noga Ensenberg then provides evidence that how strongly the context influences what we read from a face depends on individual differences between perceivers. Taken together, our symposium challenges the traditional assumption that faces are processed in relative isolation, demonstrating a wide range of contextual influences.


Task-driven modulations of electrophysiological responses to facial expressions: Insights from intracranial EEG recordings

Enya M. Weidner1, Sebastian Schindler2, Philip Grewe3, Christian G. Bien3, Johanna Kißler1

1Bielefeld University, Department of Psychology, Germany; 2Institute of Medical Psychology and Systems Neuroscience, University of Münster; 3Krankenhaus Mara, Bielefeld

Emotional facial expressions profit from their biological salience in visual processing. It is assumed that, via connections to the object recognition system, the amygdala rapidly modulates processing in favour of emotional objects. This effect, termed emotional attention, is often assumed to be independent of task-driven attention. To test this, we used an emotion-attention interaction task and analysed intracranial event-related potentials recorded from the healthy right amygdala of one patient undergoing pre-surgical epilepsy monitoring. Random sequences of angry, neutral and happy faces were presented in three different blocks, with one expression denoted as the target in each block. Data indicate an early (~50 ms) differentiation of angry faces when neutral faces were the target. A mid-latency (~200-300 ms) negative deflection in the “attend to happy” block indicated the differentiation of happy targets from angry and neutral faces. In a late time window (~600 ms), a positive deflection for target expressions was found. It was most pronounced for emotional targets, with angry and neutral faces exhibiting a more similar response profile than happy ones. So far, results suggest temporal differences in the goal-driven modulation of the processing of different facial expressions. The earliest differentiation was shown for angry faces from neutral targets, followed by a differentiation of happy targets. Angry targets elicited the latest differential signal. These results reveal an influence of attention orientation on the timing of emotion effects in the processing of facial expressions. Earliest responses to angry, potentially threatening faces seem most pronounced under conditions of ambiguity, when the target category is neutral.

How associated relevance and inherent salience shape human face processing: time-resolved evidence from event-related brain potentials

Annekathrin Schacht

Affective Neuroscience and Psychophysiology, Institute of Psychology, University of Goettingen

To support adaptive behaviour in complex environments, the human brain developed efficient selection mechanisms that bias perception in favour of salient information. The present study investigated, by means of event-related brain potentials (ERPs) and changes in pupil size, whether associated motivational salience causes preferential processing of inherently neutral faces similar to emotional expressions. To this end, neutral facial expressions were implicitly associated with monetary outcome, while participants (N = 44) performed a face-matching task with masked primes that ensured performance around chance level and thus an equal proportion of gain, loss, and zero outcomes. During learning, motivational context strongly impacted the processing of the fixation, prime and mask stimuli prior to the target face, indicated by enhanced amplitudes of subsequent ERP components and increased pupil size. In a separate test session, previously associated faces as well as novel faces with emotional expressions were presented within the same task but without motivational context and performance feedback. Most importantly, previously gain-associated faces amplified the LPC, although the individually contingent face-outcome assignments were not made explicit during the learning session. Emotional expressions impacted the N170 and EPN components. Modulations of pupil size were absent in both the motivationally associated and emotional conditions. Our findings demonstrate that neural representations of neutral stimuli can acquire increased salience via implicit learning, with an advantage for gain over loss associations.

Beautiful is good, moral is better: Social judgments based on facial attractiveness and affective information

Julia Baum1,2, Rasha Abdel Rahman1,2

1Humboldt-Universität zu Berlin, Faculty of Life Sciences, Department of Psychology; 2Humboldt-Universität zu Berlin, Faculty of Philosophy, Berlin School of Mind and Brain

Social-emotional impressions are formed based on the attractiveness of a person’s face, and we tend to judge beautiful people more positively. Further, knowing about the good or bad deeds of a person strongly influences how we perceive and judge them. Here, we investigated the interplay of attractiveness and person-related information in social judgments, employing event-related brain potentials. Participants associated negative, neutral or positive information with attractive or less attractive persons. In a separate test phase, they judged the persons based on the information. Attractiveness influenced social judgments only in the neutral but not in the positive or negative information condition. Reaction times revealed a congruency effect in the emotional knowledge conditions: positive information led to faster judgments of attractive faces, and negative information to faster judgments of less attractive faces. Modulations of early brain responses associated with reflexive emotional processing (early posterior negativity, EPN) showed independent effects of affective information and attractiveness, whereas later brain responses associated with more reflective emotional processing (late positive potential, LPP) revealed an interaction of person information and attractiveness, with stronger effects in the congruent conditions. Our findings suggest that social judgments are predominantly based on affective information, but may also be modulated by facial attractiveness.

Negative-appearing faces boost positive emotion perception

Kirsten Stark1,2, Ran R. Hassin3, Rasha Abdel Rahman1, Hillel Aviezer4

1Neurokognitive Psychologie, Humboldt-Universität zu Berlin; 2Charité – Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin; 3James Marshall Professor for Psychology, The Department of Psychology and the Federmann Center for the Study of Rationality, Hebrew University of Jerusalem, Israel; 4Department of Psychology, Hebrew University of Jerusalem, Israel

Growing evidence suggests that isolated intense facial expressions often convey misleading affective valence information, and that the body and other types of context play a crucial role in emotion recognition. This poses a puzzle: How is the affective information of faces and bodies integrated, especially when they contradict each other? We present results of three experiments (Ntotal = 431; 199 female) investigating the role of the face and body in emotion recognition, using a novel set of stimuli—authentic home videos documenting sports fans reacting ecstatically to their winning teams. In Experiment 1A, participants viewed videos of faces, bodies, or faces with bodies and rated the affective valence of the fans’ reactions. As expected, people easily identified the target’s valence as positive from the body, but not from the facial reactions. Intriguingly, negative-appearing faces boosted the perception of positive bodies: full images (faces with bodies) were rated more positively than isolated bodies, even when the isolated faces themselves were incorrectly rated as negative. Experiment 1B demonstrated that the presence of such negative-appearing faces actually increased confidence in the respective rating. Evidence from Experiment 2 showed that the integration did not emerge from a consciously controlled process. Rather, participants automatically read in valence information from the task-irrelevant body into the face, resulting in an illusion of facial positivity. We suggest that, instead of being averaged with the body, intense facial expressions amplify the valence read from the body and undergo a contextual disambiguation process, thereby contributing to an accurate perception of the entire gestalt.

Do you see what I see? Individual differences in contextualized emotion recognition

Noga Ensenberg

Hebrew University, Israel

Recent evidence suggests that real-life facial expressions are often more ambiguous than previously assumed. Accordingly, context plays an indispensable role in communicating emotion. In fact, even the recognition of stereotypical, exaggerated facial expressions can be shifted by context. For example, previous reports suggest that the body context in which a face is presented can bring about a categorical shift in the emotion recognized from the face. This effect has been studied extensively at the group level, but are we all affected in a similar way? Our results suggest the answer is no. Using a multiple-choice categorization task, 101 participants were presented with still presentations of incongruent facial and bodily emotional expressions. We asked whether individuals differ in their susceptibility to the bodily context when categorizing the face and, if so, whether these effects are consistent over time. Striking differences were found, and these were stable across two sessions (r = 0.84, p < 0.001). Our second study suggests that this phenomenon is not bound to the method used and also holds when using an open-question paradigm. Testing 83 participants, we show a robust correlation between the methods (r = 0.63, p < 0.01). Our third study shows that individual differences in the susceptibility to context hold even across modalities, presenting participants with dynamic audio-visual expressions (43 participants, r = 0.7, p < 0.001). We conclude that different people exposed to identical affective stimuli may perceive strikingly different emotions as a function of highly stable individual differences.