# Introduction to neuroimaging data with Python

In this tutorial we will learn how to load, plot, and manipulate neuroimaging data in Python.

Written by Luke Chang

## Nibabel

Neuroimaging data is often stored in the nifti format (.nii), which can also be compressed using gzip (.nii.gz). These files store both 3D and 4D data and also contain structured metadata in the image header.

There is a very nice tool for accessing nifti data stored on your file system in Python called nibabel. If you don’t already have nibabel installed on your computer, it is easy to install via pip. The leading ! tells the Jupyter cell that you would like to run the command in the unix system outside of the notebook. You only need to run this once (unless you would like to update the version).

!pip install nibabel

nibabel objects can be initialized by simply pointing to a nifti file, even if it is compressed with gzip. First, we will import the nibabel module as nib (short and sweet so that we don’t have to type so much when using the tool). I’m also defining a path to where the data file is located so that I don’t have to constantly type it. It is easy to change this on your own computer.

import os
import nibabel as nib

base_dir = '/dartfs/rc/lab/P/Psych60/'
# the filename below is a placeholder; substitute the name of a nifti file in base_dir
data = nib.load(os.path.join(base_dir, 'example_anat.nii.gz'))
data.get_data()

If we want to get more help on how to work with the nibabel data object we can either consult the documentation or add a ?.

data?

The imaging data is stored in either a 3D or 4D numpy array. Just like a numpy array, it is easy to get the dimensions of the data using shape.

data.shape
(157, 189, 136)

We can also directly access the data and plot a single slice using standard matplotlib functions.

%matplotlib inline

import matplotlib.pyplot as plt

plt.imshow(data.get_data()[:,:,50])
<matplotlib.image.AxesImage at 0x11c469da0>

Try slicing different dimensions (x,y,z) yourself to get a feel for how the data is represented in this anatomical image.
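For example, you can view the three anatomical planes side by side by slicing each axis at its midpoint. In this sketch a random array stands in for the anatomical data so the snippet is self-contained; substitute `data.get_data()` to see the actual brain.

```python
import numpy as np
import matplotlib.pyplot as plt

# A random 3D array stands in for data.get_data() here (same shape as above);
# replace it with the real anatomical array to see the brain.
volume = np.random.rand(157, 189, 136)

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (axis_name, img) in zip(axes, [('x', volume[78, :, :]),
                                       ('y', volume[:, 94, :]),
                                       ('z', volume[:, :, 68])]):
    ax.imshow(img)                      # each axis gives a different plane
    ax.set_title(f'slice along {axis_name}')
```

Note how the shape of each 2D slice is the original shape with one dimension removed.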

We can also access data from the image header. Let’s assign the header of an image to a variable and print it to view its contents.

header = data.header
print(header)

sizeof_hdr      : 348
data_type       : b''
db_name         : b''
extents         : 0
session_error   : 0
regular         : b''
dim_info        : 0
dim             : [  3 157 189 136   1   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : none
datatype        : float32
bitpix          : 32
slice_start     : 0
pixdim          : [-1.  1.  1.  1.  0.  1.  1.  1.]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 10
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : b'FreeSurfer Mar  1 2013'
aux_file        : b''
qform_code      : scanner
sform_code      : scanner
quatern_b       : 0.0
quatern_c       : 1.0
quatern_d       : 0.0
qoffset_x       : 78.0
qoffset_y       : -112.0
qoffset_z       : -50.0
srow_x          : [-1.  0.  0. 78.]
srow_y          : [   0.    1.    0. -112.]
srow_z          : [  0.   0.   1. -50.]
intent_name     : b''
magic           : b'n+1'

Some of the most important information in the header describes the orientation of the image in space. This can be represented as the affine matrix.

data.affine
array([[  -1.,    0.,    0.,   78.],
       [   0.,    1.,    0., -112.],
       [   0.,    0.,    1.,  -50.],
       [   0.,    0.,    0.,    1.]])
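To see what the affine does, we can apply it to a voxel coordinate by hand. This is a minimal sketch using the affine values printed above; nibabel’s `nibabel.affines.apply_affine` does the same thing for you.

```python
import numpy as np

# the affine printed above, hard-coded so the snippet stands alone
affine = np.array([[-1.,  0.,  0.,   78.],
                   [ 0.,  1.,  0., -112.],
                   [ 0.,  0.,  1.,  -50.],
                   [ 0.,  0.,  0.,    1.]])

def voxel_to_scanner(affine, ijk):
    """Map a voxel index (i, j, k) to scanner RAS+ mm coordinates."""
    i, j, k = ijk
    # append a 1 to make the coordinate homogeneous, then drop it again
    return (affine @ np.array([i, j, k, 1.0]))[:3]

print(voxel_to_scanner(affine, (0, 0, 0)))      # -> [  78. -112.  -50.]
print(voxel_to_scanner(affine, (78, 112, 50)))  # -> [0. 0. 0.], the isocenter
```

The voxel at index (78, 112, 50) sits at the magnet isocenter, which is exactly what the translation column of the affine encodes.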

The affine matrix is a way to transform images between spaces.

Here is a short tutorial on affine transformations from the nibabel documentation.

Here is another nice tutorial from nilearn on affine transformations in 2D space.

We have voxel coordinates (in voxel space). We want to get scanner RAS+ coordinates corresponding to the voxel coordinates. We need a coordinate transform to take us from voxel coordinates to scanner RAS+ coordinates.

In general, we have some voxel space coordinate $(i, j, k)$, and we want to generate the reference space coordinate $(x, y, z)$.

Imagine we had solved this, and we had a coordinate transform function $f$ that accepts a voxel coordinate and returns a coordinate in the reference space:

$(x, y, z) = f(i, j, k)$

$f$ accepts a coordinate in the input space and returns a coordinate in the output space. In our case the input space is voxel space and the output space is scanner RAS+.

In theory $f$ could be a complicated non-linear function, but in practice, we know that the scanner collects data on a regular grid. This means that the relationship between $(i, j, k)$ and $(x, y, z)$ is linear (actually affine), and can be encoded with linear (actually affine) transformations comprising translations, rotations and zooms (see the Wikipedia page on linear transformations).

Scaling (zooming) in three dimensions can be represented by a diagonal 3 by 3 matrix. Here’s how to zoom the first dimension by $p$, the second by $q$ and the third by $r$ units:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} p & 0 & 0 \\ 0 & q & 0 \\ 0 & 0 & r \end{bmatrix} \begin{bmatrix} i \\ j \\ k \end{bmatrix} = \begin{bmatrix} p\,i \\ q\,j \\ r\,k \end{bmatrix}$$

A rotation in three dimensions can be represented as a 3 by 3 rotation matrix (see the Wikipedia page on rotation matrices). For example, here is a rotation by $\theta$ radians around the third array axis:

$$\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

This is a rotation by $\phi$ radians around the second array axis:

$$\begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix}$$

A rotation of $\gamma$ radians around the first array axis:

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix}$$

Zoom and rotation matrices can be combined by matrix multiplication.
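A quick numerical check of this claim, with arbitrary example zoom factors and angle:

```python
import numpy as np

p, q, r = 2.0, 3.0, 4.0    # example zoom factors
theta = 0.3                # example rotation angle, radians

zoom = np.diag([p, q, r])
rot_z = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0,              0,             1]])

combined = rot_z @ zoom    # zoom first, then rotate

coord = np.array([1.0, 2.0, 3.0])
# one matrix applied once == the two transformations applied in turn
print(np.allclose(combined @ coord, rot_z @ (zoom @ coord)))  # True
```

Note the order of the product: the transformation applied first sits rightmost.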

Here’s a scaling of $p, q, r$ units followed by a rotation of $\theta$ radians around the third axis followed by a rotation of $\phi$ radians around the second axis. Writing the scaling matrix as $S$ and the rotation matrices as $R_\theta$ and $R_\phi$, the combined matrix is:

$$M = R_\phi \, R_\theta \, S = \begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} p & 0 & 0 \\ 0 & q & 0 \\ 0 & 0 & r \end{bmatrix}$$

This can also be written as applying each transformation in turn:

$$(x, y, z) = R_\phi \left( R_\theta \left( S \begin{bmatrix} i \\ j \\ k \end{bmatrix} \right) \right)$$

This might be obvious because the matrix multiplication is the result of applying each transformation in turn on the coordinates output from the previous transformation. Combining the transformations into a single matrix $M$ works because matrix multiplication is associative – $ABCD = (ABC)D$.

A translation in three dimensions can be represented as a length 3 vector to be added to the length 3 coordinate. For example, a translation of $a$ units on the first axis, $b$ on the second and $c$ on the third might be written as:

$$(x, y, z) = (i + a, j + b, k + c)$$

We can write our function $f$ as a combination of matrix multiplication by some 3 by 3 rotation / zoom matrix $M$ followed by addition of a 3 by 1 translation vector $(a, b, c)$:

$$(x, y, z) = M \begin{bmatrix} i \\ j \\ k \end{bmatrix} + \begin{bmatrix} a \\ b \\ c \end{bmatrix}$$

We could record the parameters necessary for $f$ as the 3 by 3 matrix, $M$ and the 3 by 1 vector $(a, b, c)$.

In fact, the 4 by 4 image affine array does include exactly this information. If $m_{i,j}$ is the value in row $i$ column $j$ of matrix $M$, then the image affine matrix $A$ is:

$$A = \begin{bmatrix} m_{1,1} & m_{1,2} & m_{1,3} & a \\ m_{2,1} & m_{2,2} & m_{2,3} & b \\ m_{3,1} & m_{3,2} & m_{3,3} & c \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Why the extra row of $[0, 0, 0, 1]$? We need this row because we have rephrased the combination of rotations / zooms and translations as a transformation in homogeneous coordinates (see the Wikipedia page on homogeneous coordinates). This is a trick that allows us to put the translation part into the same matrix as the rotations / zooms, so that both translations and rotations / zooms can be applied by matrix multiplication. In order to make this work, we have to add an extra 1 to our input and output coordinate vectors:

$$\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = A \begin{bmatrix} i \\ j \\ k \\ 1 \end{bmatrix}$$

This results in the same transformation as applying $M$ and $(a, b, c)$ separately. One advantage of encoding transformations this way is that we can combine two sets of [rotations, zooms, translations] by matrix multiplication of the two corresponding affine matrices.
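A small numerical check that the 4 by 4 form gives the same answer as applying $M$ and $(a, b, c)$ separately (all values here are arbitrary examples):

```python
import numpy as np

M = np.array([[0., -1., 0.],     # an arbitrary rotation / zoom matrix
              [1.,  0., 0.],
              [0.,  0., 2.]])
t = np.array([10., -5., 3.])     # an arbitrary translation (a, b, c)

# Build the 4 x 4 affine A from M and t
A = np.eye(4)
A[:3, :3] = M
A[:3, 3] = t

ijk = np.array([4., 7., 1.])
homogeneous = A @ np.append(ijk, 1.0)   # 4-vector in, 4-vector out
separate = M @ ijk + t                  # 3 x 3 multiply, then add

print(np.allclose(homogeneous[:3], separate))  # True
```

nibabel’s `from_matvec(M, t)` (used later in this tutorial) builds exactly this kind of 4 by 4 matrix from the $M$ and $(a, b, c)$ parts.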

In practice, although it is common to combine 3D transformations using 4 by 4 affine matrices, we usually apply the transformations by breaking up the affine matrix into its component $M$ matrix and $(a, b, c)$ vector and doing:

$$(x, y, z) = M \begin{bmatrix} i \\ j \\ k \end{bmatrix} + \begin{bmatrix} a \\ b \\ c \end{bmatrix}$$

As long as the last row of the 4 by 4 is $[0, 0, 0, 1]$, applying the transformations in this way is mathematically the same as using the full 4 by 4 form, without the inconvenience of adding the extra 1 to our input and output vectors.

You can think of the image affine as a combination of a series of transformations to go from voxel coordinates to mm coordinates in terms of the magnet isocenter. The nibabel documentation breaks an EPI affine down into such a series of transformations, with the results shown on a localizer image.

Applying different affine transformations allows us to rotate, reflect, scale, and shear the image.

For example, let’s try to reflect the image so that it is facing the opposite direction.

import numpy as np
from nibabel.affines import apply_affine, from_matvec, to_matvec

reflect = np.array([[-1, 0, 0],
                    [0, 1, 0],
                    [0, 0, 1]])

reflect_affine = from_matvec(reflect)
reflect_affine
array([[-1,  0,  0,  0],
       [ 0,  1,  0,  0],
       [ 0,  0,  1,  0],
       [ 0,  0,  0,  1]])

Now maybe we would like to shift the image 10 units in the x direction.

translate_affine = from_matvec(reflect, [10, 0, 0])
translate_affine
array([[-1,  0,  0, 10],
       [ 0,  1,  0,  0],
       [ 0,  0,  1,  0],
       [ 0,  0,  0,  1]])
transformed = np.dot(data.get_data(),translate_affine)
plt.imshow(transformed[:,:,50])

---------------------------------------------------------------------

ValueError                          Traceback (most recent call last)

<ipython-input-13-070fda9712c9> in <module>
----> 1 transformed = np.dot(data.get_data(),translate_affine)
2 plt.imshow(transformed[:,:,50])

ValueError: shapes (157,189,136) and (4,4) not aligned: 136 (dim 2) != 4 (dim 0)
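The error tells us why this fails: an affine maps coordinates, not the data array itself, so it cannot simply be multiplied with a (157, 189, 136) volume. Here is how the affine acts on a single (example) voxel coordinate, using plain numpy homogeneous multiplication:

```python
import numpy as np

# the reflect-and-translate affine built above, hard-coded for self-containment
translate_affine = np.array([[-1, 0, 0, 10],
                             [ 0, 1, 0,  0],
                             [ 0, 0, 1,  0],
                             [ 0, 0, 0,  1]])

ijk = np.array([2, 0, 0, 1])   # an example voxel coordinate in homogeneous form
print(translate_affine @ ijk)  # -> [8 0 0 1]: x reflected to -2, then shifted by 10
```

The `apply_affine` helper imported earlier does the same thing without the homogeneous bookkeeping: `apply_affine(translate_affine, [2, 0, 0])`. To actually transform the image data onto a new grid, a resampling tool such as nilearn’s `resample_img` is the usual route rather than matrix-multiplying the array.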

What if we wanted to make the brain smaller by applying a scaling transformation?

scaling_affine = np.array([[3, 0, 0, 0],
                           [0, 3, 0, 0],
                           [0, 0, 3, 0],
                           [0, 0, 0, 1]])

scaling_affine
array([[3, 0, 0, 0],
       [0, 3, 0, 0],
       [0, 0, 3, 0],
       [0, 0, 0, 1]])

cos_gamma = np.cos(0.3)
sin_gamma = np.sin(0.3)
rotation_affine = np.array([[1, 0, 0, 0],
                            [0, cos_gamma, -sin_gamma, 0],
                            [0, sin_gamma, cos_gamma, 0],
                            [0, 0, 0, 1]])
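As a sketch, here is how the two matrices combine and act on a single example coordinate (the point chosen is arbitrary):

```python
import numpy as np

scaling_affine = np.array([[3., 0., 0., 0.],
                           [0., 3., 0., 0.],
                           [0., 0., 3., 0.],
                           [0., 0., 0., 1.]])

cos_gamma, sin_gamma = np.cos(0.3), np.sin(0.3)
rotation_affine = np.array([[1., 0., 0., 0.],
                            [0., cos_gamma, -sin_gamma, 0.],
                            [0., sin_gamma,  cos_gamma, 0.],
                            [0., 0., 0., 1.]])

combined = rotation_affine @ scaling_affine   # scale first, then rotate

point = np.array([10., 0., 0., 1.])           # example homogeneous coordinate
# x is tripled by the scaling; a rotation around the first axis leaves x alone
print(combined @ point)  # -> [30.  0.  0.  1.]
```

To apply such an affine to a whole image rather than a single point, a resampling function like nilearn’s `resample_img(img, target_affine=...)` is the usual approach.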

## Nilearn

There are many useful tools from the nilearn library to help manipulate and visualize neuroimaging data. See their documentation for an example.

Let’s make sure it is installed first.

!pip install nilearn

Now let’s load a few different plotting functions from their plotting module.

%matplotlib inline

from nilearn.plotting import view_img, plot_glass_brain, plot_anat, plot_epi
plot_anat(data)
<nilearn.plotting.displays.OrthoSlicer at 0x109aed7f0>

Nilearn plotting functions are very flexible and allow us to easily customize our plots.

plot_anat(data, draw_cross=False, display_mode='z')
<nilearn.plotting.displays.ZSlicer at 0x11ccc0cc0>

Try to get more information on how to use the function with ?, and try adding different arguments to change the plot.

nilearn also has a neat interactive viewer called view_img for examining images directly in the notebook.

view_img(data)