According to research, yes. Imagine this: you walk into a laboratory and are shown a series of 20-second video clips. In each clip, a different person is shown listening to someone else. You can’t hear what the speaker is saying; the clips have no sound. But you’re told that the speaker is talking about a time when they suffered. The researchers ask you to rate how compassionate the listener is, based only on what you can see: his or her body language and facial expressions.
This study was conducted by psychologists at the University of California, Berkeley, who found that people agreed on who was a compassionate listener. The participants all seemed to rely on the same cues to assess compassion: open body language, eye contact, head nods, and smiling. I was excited to see this finding because I teach compassionate listening as a skill in the Stanford Compassion Training. Students in the training learn to deliberately offer exactly the cues that participants in this study used to assess compassion.
1) The first step is what I call “listening with the whole body.” This means literally tuning in to the person who is speaking. “Compassionate” body language includes:
- Turning toward the speaker, not just with your head, but positioning your whole body to face the speaker.
- Open body language, such as arms and legs not crossed (and certainly no distractions, like a cell phone, in your hands!).
- “Approach” signals, such as leaning toward, not leaning back from, the speaker. This counters our usual instinct to “avoid” or withdraw from suffering, even at the subtle level of body language.
In previous studies, people who felt high levels of compassion spontaneously shifted into this posture. But in my experience, simply assuming this body language makes it easier to form a compassionate connection with someone.

2) The next step is what I call “soft eye contact.” When it comes to listening, eye contact is usually better than avoiding eye contact. But the most supportive and comfortable eye contact isn’t gazing deeply into a person’s eyes, or staring them down without a break. Instead, it’s a soft focus on the triangle created by a person’s eyes and mouth. This allows you to take in the speaker’s full facial expressions. It also includes occasional breaks in eye contact to reduce what can be an uncomfortable intensity.

3) The last step is to offer “connecting gestures.” These gestures let a person know that you feel connected to what they are saying. The most appropriate connecting gestures are smiles and head nods that don’t interrupt the speaker. Connecting gestures encourage a speaker to continue, and often feel more supportive than when the listener jumps in verbally to make comments. When appropriate, touch is an even more powerful connecting gesture. Previous research has shown that people can recognize compassion more easily through touch, such as a comforting hand on the shoulder, than through voice or facial expressions.

These three steps are simple: listen with the whole body; make soft eye contact with the intention of really seeing the speaker; and offer connecting gestures without interrupting the speaker to share your own comments or stories. Simple, but not always easy to do when we’re distracted, busy, or stressed out ourselves. This approach to compassionate listening can be a tremendous gift to the person who is talking, and to ourselves. It helps us stay grounded in the present moment, and more fully receive the gift of another person sharing his or her experience with us.
This practice is also a good reminder that we don’t need to wait for compassion to arise spontaneously. When we have the intention to experience and offer compassion, we can make choices, even small ones, like how we make eye contact, that can lead to an authentic experience of compassion.

Study Reference: Kogan, A., Saslow, L. R., Impett, E. A., Oveis, C., Keltner, D., & Saturn, S. R. (2011). Thin-slicing study of the oxytocin receptor (OXTR) gene and the evaluation and expression of the prosocial disposition. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1112658108.