The Neural Bases of Feeling Understood and Not Understood


Methods

Participants


Informed consent was obtained from 35 healthy University of California Los Angeles (UCLA) undergraduates during an initial behavioral session. Twenty-one of these students met criteria for the fMRI scanning session (i.e. right-handed, no metal, no psychoactive medications) and were scanned approximately 1 week later. One student was excluded from analyses due to a brain abnormality; a second student was excluded due to severe problems with normalization. Of the remaining 19 students, 9 were male and 10 were female (mean age = 18.9 years, SD = 1.15). The sample was 37% Caucasian, 47% Asian American and 16% Latino/a.

Initial Behavioral Session


Before arriving at the lab, participants were asked to write a paragraph on SurveyMonkey about each of the six most positive and six most negative events in their lives that they were willing to discuss in a lab setting and while being videotaped (following the procedure used by Zaki et al. [2008]). In addition, they gave each event a short title and rated its emotional intensity on a 9-point Likert scale. Before the lab session, the experimenter selected the four most intense positive and four most intense negative events and pseudorandomized the order of events, such that no more than two positive or two negative events occurred in a row.
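This ordering constraint lends itself to a simple rejection-sampling shuffle. The sketch below (in Python; the function name, valence labels and `max_run` parameter are ours for illustration, not part of the authors' procedure) shows one way to produce such an order:

```python
import random

def pseudorandomize(valences, max_run=2, seed=None):
    """Reshuffle until no more than `max_run` items of the same
    valence appear consecutively (rejection sampling)."""
    rng = random.Random(seed)
    order = valences[:]
    while True:
        rng.shuffle(order)
        # Any window of max_run + 1 identical labels is a violation.
        if all(len(set(order[i:i + max_run + 1])) > 1
               for i in range(len(order) - max_run)):
            return order

# Four positive and four negative events, as in the study.
print(pseudorandomize(['pos'] * 4 + ['neg'] * 4, seed=1))
```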

Once participants arrived at the laboratory, they were asked to videotape themselves while describing the details and emotions they experienced during each of the eight pre-selected events. Critically, participants were told that no one but the participants themselves would see these videos. For each event, participants were asked to read their own paragraph about the event, spend one minute reliving the event, self-record a video approximately 2 min long describing the event, and then rate how emotionally intense they felt while talking about the event. Some example positive events were acceptance into UCLA, a surprise birthday party, and winning a scholarship; some example negative events were failing a class, getting bullied, and a romantic breakup.

As the experimenter prepared the videos for playback, participants completed the Sensitivity to Rejection Scale (Mehrabian, 1970). Participants then watched each of their videos and continuously rated the affective valence they felt while discussing the event, using a digital sliding scale ranging from very negative (1) to very positive (9). Finally, participants were asked for their permission to have other UCLA students watch their videos in the upcoming week. In reality, no UCLA students ever watched their videos.

In the week between the behavioral session and fMRI scanning session, the experimenters used the participants' videos and continuous ratings to create short, emotionally intense video clips with a significant upshift or downshift in self-reported valence for positive and negative events, respectively. More specifically, a clip was selected from a positive event if the continuous ratings were above the midpoint and showed an increase of two points or more in a 20-s time period (e.g. ratings from 5 → 7 or 6 → 9). In contrast, a clip was selected from a negative event if the ratings were below the midpoint and showed a decrease of two points or more in the 20-s time period (e.g. ratings from 5 → 2 or 3 → 1). Using iMovie, we then spliced these time periods from the full-length videos. For each participant, all video clips were reviewed by two independent judges and assessed for perceived emotional intensity (i.e. strong facial and verbal expressions of emotion) and comprehensibility. After discussing and resolving discrepancies, judges then selected two positive and two negative clips (each from a separate full-length video) to include in the fMRI task. Participants who did not have enough clips that met these criteria were not invited to participate in the fMRI scanning session.
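As a minimal sketch of this selection rule (assuming the continuous ratings are sampled once per second, which the text does not specify, and comparing each window's first and last samples as a simplification of the two-point criterion):

```python
def find_clip_windows(ratings, valence, win=20, shift=2, midpoint=5):
    """Return (start, end) sample indices of candidate 20-s windows.

    Positive events: ratings stay at or above the midpoint and rise by
    >= `shift` points (e.g. 5 -> 7); negative events: ratings stay at
    or below the midpoint and fall by >= `shift` points (e.g. 5 -> 2).
    """
    hits = []
    for i in range(len(ratings) - win + 1):
        w = ratings[i:i + win]
        rose = min(w) >= midpoint and w[-1] - w[0] >= shift
        fell = max(w) <= midpoint and w[0] - w[-1] >= shift
        if (valence == 'positive' and rose) or (valence == 'negative' and fell):
            hits.append((i, i + win))
    return hits

# Example: a positive-event trace drifting upward from 5 to 8.
trace = [5] * 10 + [6] * 10 + [7] * 10 + [8] * 10
print(find_clip_windows(trace, 'positive'))
```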

fMRI Task


Before entering the scanner, participants were told that several UCLA students had come into the lab over the past week and that each student had viewed one of the participant's eight videos, selected at random. The experimenter then told participants that they would see how different students responded to each of their videos, that two responses per video would be shown, and that these students' responses were intentionally selected because of their different reactions to the same video. Next, participants were shown photos of the supposed UCLA students and told that each student responded to their video by choosing three sentences from a provided list. Finally, participants were familiarized with the structure of the experiment and given instructions about how to make responses in the scanner.

During the fMRI task, participants believed they were seeing how other UCLA students (i.e. responders) responded to two of their positive videos and two of their negative videos. For each of these four videos, participants saw responses from two different students that were intended to make the participant feel either understood or not understood. Participants saw a total of four 'Understood' blocks and four 'Not Understood' blocks. Each participant saw these blocks in one of five pseudorandomized orders.

In each block for the Understood and Not Understood conditions (Figure 1), participants saw the following: (1) the title of their event for 2 s; (2) a 20-s video clip of their event, cued to a moment of high emotionality; (3) a cue that they were about to see a student's response (e.g. 'Student 1') for 1 s; (4) the three sentences the responder supposedly chose in response to the participant's video (each shown for 5 s, with a 0.5-s transition between sentences); (5) a scale for rating how understood they felt, for 4 s; and (6) a fixation cross for 12 s.
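Summing the listed durations (a quick bookkeeping sketch in Python; the two 0.5-s transitions are counted as part of the feedback period) gives 55 s per block, or 440 s across the eight blocks:

```python
# Per-block event durations in seconds, as listed above.
block = {
    'event title':       2.0,
    'video clip':       20.0,
    'responder cue':     1.0,
    'three sentences':   3 * 5.0 + 2 * 0.5,   # the 16-s feedback period
    'understood rating': 4.0,
    'fixation cross':   12.0,
}
per_block = sum(block.values())
print(f'one block: {per_block:.0f} s; eight blocks: {8 * per_block:.0f} s')
# -> one block: 55 s; eight blocks: 440 s
```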



Figure 1. The experimental design for the fMRI task, depicting an example of an Understood block and a Not Understood block.
As described previously, the title of the event and video clip were drawn from each participant's initial behavioral session. The responders' three sentences for each of the 'understood' or 'not understood' blocks were generated by the authors and behaviorally piloted to verify that participants did indeed feel understood or not understood (Reis et al., 2000, 2004; Gable et al., 2004). Some examples of understanding sentences included the following: 'I know exactly how you felt,' 'I understand why that affected you a lot,' and 'I get why you responded like that.' Some examples of not understanding sentences included the following: 'I don't get why you reacted like that,' 'I would feel differently in that same situation,' and 'I don't understand why you felt that strongly.' After viewing the three sentences from the responder, participants then rated how understood they felt on a scale from not at all (1) to quite a bit (4).

Post Scanner Ratings


After exiting the scanner, participants were asked to provide additional ratings about their experiences in the scanner. Participants were re-shown the title of each event followed by the responders' three sentences for both the Understood and Not Understood conditions. After each block, participants were asked to rate how they felt in response to seeing the feedback on a scale from very negative (1) to very positive (9). To assess how much the participant liked the responder, we asked participants to rate (1) how much they liked the responder, (2) how warmly they felt towards the responder and (3) whether they would want to spend time with the responder.

fMRI Acquisition and Data Analysis


Scanning was performed on a Siemens Trio 3T at the UCLA Ahmanson-Lovelace Brain Mapping Center. The MATLAB Psychophysics Toolbox version 7.4 (Brainard, 1997) was used to present the task to participants and record their responses. Participants viewed the task through MR-compatible LCD goggles and responded with an MR-compatible button response box held in the right hand. For each participant, 278 functional T2*-weighted echo planar image volumes were acquired in one run (slice thickness = 3 mm, gap = 1 mm, 36 slices, TR = 2000 ms, TE = 25 ms, flip angle = 90°, matrix = 64 × 64, FOV = 200 mm). A T2-weighted, matched-bandwidth anatomical scan (slice thickness = 3 mm, gap = 1 mm, 36 slices, TR = 5000 ms, TE = 34 ms, flip angle = 90°, matrix = 128 × 128, FOV = 200 mm) and a T1-weighted, magnetization-prepared, rapid-acquisition, gradient echo (MPRAGE) anatomical scan (slice thickness = 1 mm, 192 slices, TR = 2170 ms, TE = 4.33 ms, flip angle = 7°, matrix = 256 × 256, FOV = 256 mm) were also acquired.
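From these parameters, the length of the functional run follows directly (our arithmetic, not a reported figure):

```python
# Functional run length implied by the parameters above.
n_volumes, tr_s = 278, 2.0
run_s = n_volumes * tr_s
print(f'{run_s:.0f} s (~{run_s / 60:.1f} min) of functional data')
# -> 556 s (~9.3 min), enough to cover the eight 55-s blocks (440 s)
```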

In SPM8 (Wellcome Department of Imaging Neuroscience, London), all functional and anatomical images were manually reoriented, realigned, co-registered to the MPRAGE, and normalized using the DARTEL procedure. First-level effects were estimated using the general linear model. Sixteen-second blocks (i.e. the three sentences of feedback from the responder, shown for 5 s each with 0.5-s transitions between sentences) were modeled and convolved with the canonical (double-gamma) hemodynamic response function. The model included four regressors of interest: Positive Event-Understood, Negative Event-Understood, Positive Event-Not Understood, and Negative Event-Not Understood. The event titles, video clips, rating scales and the standard six motion parameters were included as nuisance regressors. Based on a custom tool for assessing how different high-pass filters affect the estimation efficiency of an SPM design matrix, the time series was high-pass filtered with a cutoff period of 140 s. Serial autocorrelations were modeled as an AR(1) process.
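The block regressors can be illustrated outside SPM as boxcars convolved with a double-gamma HRF. The sketch below uses numpy/scipy with SPM-like default HRF parameters and hypothetical onsets; it is not the authors' SPM8 pipeline:

```python
import numpy as np
from scipy.stats import gamma

TR, N_VOLS = 2.0, 278      # from the acquisition parameters above

def double_gamma_hrf(tr, duration=32.0):
    """Double-gamma HRF with SPM-like defaults: response peaking
    ~6 s, undershoot ~16 s, undershoot ratio 1/6, sampled at the TR."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

# Hypothetical onsets (s) for one condition; the real onsets come
# from each participant's pseudorandomized run.
onsets, dur = [30.0, 140.0, 250.0, 360.0], 16.0

boxcar = np.zeros(N_VOLS)
for onset in onsets:
    boxcar[int(onset / TR):int((onset + dur) / TR)] = 1.0

regressor = np.convolve(boxcar, double_gamma_hrf(TR))[:N_VOLS]
```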

Random effects analyses of the group were computed using the contrast images generated for each participant (Friston et al., 1999). Because this is the first paradigm to examine the neural correlates of feeling understood and not understood, whole-brain group-level analyses were performed using an uncorrected P value of <0.005 with a cluster extent threshold of 25 voxels. For visualization of results, group contrasts were overlaid on a surface representation of the Montreal Neurological Institute (MNI) canonical brain using MRIcron (Rorden et al., 2007).
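This joint height-and-extent threshold can be sketched with connected-component labelling (an illustration on a z-map, not SPM's implementation, which operates on the t-maps themselves):

```python
import numpy as np
from scipy import ndimage
from scipy.stats import norm

def cluster_threshold(zmap, p_voxel=0.005, k_min=25):
    """Zero out voxels failing the height threshold (one-sided
    P < p_voxel) or sitting in clusters smaller than k_min voxels."""
    mask = zmap > norm.isf(p_voxel)        # z cutoff ~ 2.58 for P < 0.005
    labels, n = ndimage.label(mask)        # 3-D connected components
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep_ids = np.flatnonzero(sizes >= k_min) + 1
    return np.where(np.isin(labels, keep_ids), zmap, 0.0)
```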
