Decoding the Brain by Coding (2)

Seoho Hahm
2 min read · Nov 7, 2019


In the previous blog post, we went over the basics of fMRI data preprocessing in Python by looking at what psychiatry researchers (Fu et al., 2008) did in their study on sad facial emotion processing in depressed patients vs. healthy controls.

We learned how to (a condensed sketch of this pipeline follows the list):

  1. Install Nilearn
  2. Import Nilearn into IPython/Jupyter Notebook
  3. Import and plot data (images)
  4. Smooth images
  5. Mask images
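
As a quick refresher, here is a minimal sketch of that pipeline, assuming filename.nii is a single 3-D volume (the file name is a placeholder):

# install once from the command line: pip install nilearn
from nilearn import plotting
from nilearn.image import smooth_img
from nilearn.masking import apply_mask, compute_epi_mask

smoothed = smooth_img('filename.nii', fwhm=6)  # smooth with a 6 mm Gaussian kernel
plotting.plot_img(smoothed)                    # plot the image
mask = compute_epi_mask(smoothed)              # estimate a brain mask from the data
voxel_values = apply_mask(smoothed, mask)      # keep only the voxel values inside the mask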

Data Representation

We will now try to understand what they did in the next step: data representation.

Fu et al. (2008) say:

“Two methods for modeling the BOLD response were applied. In the first method, training/test examples were created by averaging the volumes within the event (e.g., if the event occurred in the first TR, the mean of the volumes acquired in the second and third TR) minus a control volume (i.e., mean of two baseline volumes preceding and two following the event), which produced 18 training examples for each intensity of facial expression (21). For each type of stimulus, the test examples were created by averaging all events for each subject.” (Fu et al., 2008)

Let’s pick apart this paragraph by first clearing up some terminology:

  • BOLD response: Blood Oxygen Level Dependent response, i.e., how much oxygenated blood is rushing to and being used in a region (a stronger BOLD signal = more activation)
  • volume: all of the slices put together, i.e., one whole 3-D image of the brain (see the short sketch after this list)
  • event: each distinct occurrence in the experiment (here, the presentation of a facial expression)
  • TR: Repetition Time, i.e., the time from the start of one volume acquisition to the start of the next (typically 1–4 s, but can be as fast as 100 ms)
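
To make “volume” and “TR” concrete, here is a minimal sketch, assuming run.nii is a hypothetical 4-D functional run:

from nilearn.image import load_img, index_img

img = load_img('run.nii')          # hypothetical 4-D run: x, y, z, time
print(img.shape)                   # e.g. (64, 64, 32, 180): 180 volumes, one per TR
first_volume = index_img(img, 0)   # one whole-brain 3-D image (one volume)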

Their 1st Method in layman’s terms:

  • They took the whole-brain images (volumes) captured while the facial expression stimulus was being shown
  • They averaged those images into a mean event volume
  • They then took whole-brain images (volumes) captured before and after the stimulus => the baseline/control volumes
  • They averaged those out as well
  • They then subtracted the mean baseline/control volume from the mean event volume
  • They repeated this process for each event, yielding 18 training examples per intensity of facial expression
  • For the test examples, they averaged all events of each stimulus type for each subject

Their 1st Method in Python terms:

from nilearn.image import math_img, mean_img

# average the event volumes, then the baseline/control volumes
# (file names are placeholders)
mean_event_image = mean_img(['filename.nii', 'filename1.nii'])
mean_control_image = mean_img(['filename2.nii', 'filename3.nii'])

# subtract the mean control volume from the mean event volume
difference_image = math_img('event - control',
                            event=mean_event_image,
                            control=mean_control_image)

They followed the same procedure for specific ROIs (regions of interest) that are known to be involved in processing emotional faces. The result is a set of training images for each facial expression stimulus.
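
As a rough sketch of what restricting an image to an ROI looks like in Nilearn (roi_mask.nii is a hypothetical binary mask of an emotion-related region, and difference_image is the image computed above):

from nilearn.input_data import NiftiMasker

masker = NiftiMasker(mask_img='roi_mask.nii')        # hypothetical ROI mask
roi_values = masker.fit_transform(difference_image)  # only the voxel values inside the ROI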

Next, using these resulting images as input features, they applied machine learning, specifically the Support Vector Machine (SVM) classifier, to see whether patterns of brain activation could distinguish depressed from healthy subjects. The next blog post will (finally!) cover SVM.
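
As a small preview, fitting an SVM with scikit-learn generally looks like this (the arrays below are hypothetical stand-ins for the real training examples and labels):

import numpy as np
from sklearn.svm import SVC

X_train = np.random.rand(18, 500)      # hypothetical: 18 examples, 500 voxel values each
y_train = np.random.randint(0, 2, 18)  # hypothetical binary labels (e.g., group membership)

clf = SVC(kernel='linear')             # a linear SVM, common in fMRI decoding
clf.fit(X_train, y_train)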
