
Introduction to Neuroimaging Data

In this tutorial we will learn the basics of the organization of data folders, and how to load, plot, and manipulate neuroimaging data in Python.

To introduce the basics of fMRI data structures, watch this short video by Martin Lindquist.

Software Packages

There are many different software packages for analyzing neuroimaging data. Most of them are open source and free to use (with the exception of BrainVoyager). The most popular packages (SPM, FSL, & AFNI) have been around a long time and are where many new methods are developed and distributed. These packages have focused on implementing what their developers believe are the best statistical methods, on ease of use, and on computational efficiency. They have very large user bases, so many bugs have been identified and fixed over the years. There is also extensive publicly available documentation, along with listservs and online tutorials, which makes it very easy to get started with these tools.

There are also many boutique packages that focus on specific preprocessing steps and analyses, such as spatial normalization with ANTs, connectivity analyses with the conn-toolbox, representational similarity analyses with the rsaToolbox, and prediction/classification with pyMVPA.

Many packages have been developed within proprietary software such as Matlab (e.g., SPM, Conn, RSAToolbox, etc.). Unfortunately, this requires that your university has a site license for Matlab and many individual add-on toolboxes. If you are not affiliated with a university, you may have to pay for Matlab yourself, which can be fairly expensive. There are free alternatives such as Octave, but Octave does not include many of the add-on toolboxes offered by Matlab that may be required by a specific package. Because of this restrictive licensing, it is difficult to run Matlab on cloud computing servers and to use it with free online courses such as dartbrains. Other packages have been written in C/C++/C# and need to be compiled to run on your specific computer and operating system. While these tools are typically highly computationally efficient, it can sometimes be challenging to get them to install and run on particular computers and operating systems.

There has been a growing trend to adopt the open source Python framework in the data science and scientific computing communities, which has led to an explosion in the number of new packages available for statistics, visualization, machine learning, and web development. pyMVPA was an early leader in this trend, and there are many great tools being actively developed, such as nilearn, brainiak, neurosynth, nipype, fmriprep, and many more. One exciting thing is that these newer developments build on decades of experience with imaging analyses and leverage advances in high performance computing. There is also very tight integration with many cutting-edge developments in adjacent communities, such as machine learning with scikit-learn, tensorflow, and pytorch, which has made new types of analyses much more accessible to the neuroimaging community. There has also been an influx of younger contributors with software development expertise. You might be surprised to learn that several widely used tools have core contributors originating from the neuroimaging community (e.g., scikit-learn, seaborn, and many more).

For this course, I have chosen to focus on tools developed in Python, as it is an easy-to-learn programming language, has excellent tools, works well on distributed computing systems, has great ways to disseminate information (e.g., jupyter notebooks, jupyter-book, etc.), and is free! If you are just getting started, I would spend some time working with NiLearn and Brainiak, which have a lot of functionality, are very well tested, are reasonably computationally efficient, and, most importantly, have lots of documentation and tutorials to get you started.

We will be using many packages throughout the course such as fmriprep to perform preprocessing, and nltools, which is a package developed in my lab, to do basic data manipulation and analysis. NLtools is built using many other toolboxes such as nibabel and nilearn, and we will also be using these frequently throughout the course.

BIDS: Brain Imaging Data Structure

Recently, there has been growing interest in sharing datasets across labs and even posting them on public repositories such as openneuro. To make this a successful enterprise, it is necessary to have some standards for how the data are named and organized. Historically, each lab has used its own idiosyncratic conventions, which can make it difficult for outsiders to analyze the data. In the past few years, there have been heroic efforts by the neuroimaging community to create standardized file organization and naming practices. This specification is called BIDS, short for Brain Imaging Data Structure.

As you can imagine, individuals have their own distinct method of organizing their files. Think about how you keep track of your files on your personal laptop (versus your friend). This may be okay in the personal realm, but in science, it's best if anyone (especially yourself 6 months from now!) can follow your work and know which files mean what by their titles.

Our course dataset — the dartbrains/localizer dataset on HuggingFace — follows the BIDS layout. Here's the top-level structure of the raw side:

localizer/
├── dataset_description.json     # dataset name, BIDS version, authors
├── participants.tsv             # one row per subject (age, sex, …)
├── participants.json            # column descriptions for participants.tsv
├── task-localizer_bold.json     # task-level acquisition params (TR, slice timing, …)
├── README.md
├── sub-S01/
│   ├── anat/
│   │   └── metadata.csv
│   └── func/
│       ├── sub-S01_task-localizer_events.tsv   # stimulus onsets, durations, conditions
│       └── metadata.csv
├── sub-S02/ …
├── sub-S20/
└── derivatives/                 # processed outputs (see next section)

A few things to notice:

  1. Files are in NIfTI format, not raw DICOMs. (In this dataset the raw .nii.gz files aren't hosted to keep the download small — only the events.tsv per subject lives under raw, with the preprocessed scans available under derivatives/. A complete BIDS dataset would include sub-S01/anat/sub-S01_T1w.nii.gz and sub-S01/func/sub-S01_task-localizer_bold.nii.gz here.)
  2. Scans are broken up by modality — anat/, func/, dwi/, fmap/ — for each subject.
  3. Filenames carry metadata as key-value entities separated by underscores: sub-S01_task-localizer_events.tsv tells you the subject, task, and content type at a glance.
  4. Sidecar JSON files describe acquisition parameters in a machine-readable format (echo time, slice timing, phase encoding direction, …), either alongside each scan or "inherited" from a top-level file like task-localizer_bold.json.

Not only does this specification standardize organization within labs, it also makes collaboration, software development, and data publishing dramatically easier. Because the format is consistent, tools like pybids can programmatically index and query an entire BIDS directory. In this course, we use lightweight helper functions in dartbrains_tools.data that download individual files on demand from HuggingFace Hub.

The derivatives/ folder

BIDS makes a strict separation between raw data (what came off the scanner) and derivatives (anything produced by running a pipeline on that raw data). Derived files live in a sibling derivatives/ directory, with one subfolder per pipeline. Here's the actual layout for our dataset:

localizer/derivatives/
├── fmriprep/
│   ├── dataset_description.json
│   ├── sub-S01.html             # per-subject QC report
│   ├── sub-S01/
│   │   ├── anat/
│   │   │   ├── sub-S01_desc-preproc_T1w.nii.gz                        # T1 in native space
│   │   │   ├── sub-S01_desc-brain_mask.nii.gz                         # brain mask, native
│   │   │   ├── sub-S01_dseg.nii.gz                                    # tissue segmentation
│   │   │   ├── sub-S01_label-{GM,WM,CSF}_probseg.nii.gz               # tissue probabilities
│   │   │   ├── sub-S01_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5   # forward transform
│   │   │   ├── sub-S01_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5   # inverse transform
│   │   │   ├── sub-S01_space-MNI152NLin2009cAsym_desc-preproc_T1w.nii.gz   # T1 in MNI space
│   │   │   └── sub-S01_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
│   │   ├── func/
│   │   │   ├── sub-S01_task-localizer_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz
│   │   │   ├── sub-S01_task-localizer_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
│   │   │   ├── sub-S01_task-localizer_space-MNI152NLin2009cAsym_boldref.nii.gz
│   │   │   └── sub-S01_task-localizer_desc-confounds_regressors.tsv  # motion + physio regressors
│   │   └── figures/             # QC SVGs (carpetplot, flirtbbr, dseg, …)
│   ├── sub-S02/ …
│   └── logs/CITATION.{bib,html,md,tex}
└── betas/                       # condition-level GLM estimates
    ├── S01_beta_audio_computation.nii.gz
    ├── S01_beta_audio_left_hand.nii.gz
    │   …  (10 conditions per subject)
    ├── S01_betas.nii.gz         # stacked 4D image (10 conditions)
    ├── S02_beta_…
    └── …

Each pipeline gets its own subfolder under derivatives/ (here: fmriprep/ for preprocessing and betas/ for our first-level GLM outputs; other common ones are freesurfer/, mriqc/, xcp_d/). This means you can run multiple pipelines on the same dataset without them colliding, and deleting and re-running a pipeline never risks the raw data.

Derivative files follow BIDS naming conventions but add entities that describe the processing variant. The most important ones to recognize:

  • desc- describes what kind of derivative: desc-preproc_bold is the preprocessed BOLD timeseries; desc-brain_mask is a brain mask; desc-confounds_regressors is the confounds TSV.
  • space- identifies the coordinate space: space-MNI152NLin2009cAsym means the file has been warped into the MNI152 nonlinear 2009c asymmetric template; absence of space- means native subject space.
  • from-/to- on xfm.h5 files describe the direction of a transform (T1w → MNI for forward warps, MNI → T1w for inverse).
  • label- distinguishes tissue classes on segmentation outputs (GM, WM, CSF).

These conventions keep filenames self-describing: sub-S01_task-localizer_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz tells you it's subject S01's localizer task, preprocessed and resampled into MNI space — without opening the file.

In this course, our dartbrains_tools.data.get_file() helper takes a scope argument that distinguishes raw from derivative data: scope='raw' pulls from sub-S01/, scope='derivatives' pulls from derivatives/fmriprep/sub-S01/, and scope='betas' pulls from derivatives/betas/. The helper downloads on demand from HuggingFace and caches locally, so you don't need the full directory structure on disk.
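To make the scope idea concrete, here is a minimal sketch of how a scope could be translated into a path inside the repository. The SCOPE_PREFIXES mapping and build_repo_path function below are illustrative only, not the actual dartbrains_tools implementation:

```python
# Hypothetical sketch of scope -> repository-path resolution.
# (Not the real dartbrains_tools.data internals; names are illustrative.)
SCOPE_PREFIXES = {
    'raw': 'sub-{sub}',                        # raw BIDS side
    'derivatives': 'derivatives/fmriprep/sub-{sub}',  # fmriprep outputs
    'betas': 'derivatives/betas',              # first-level GLM estimates
}

def build_repo_path(subject: str, scope: str, filename: str) -> str:
    """Join the scope-specific prefix with a filename inside the repo."""
    prefix = SCOPE_PREFIXES[scope].format(sub=subject)
    return f"{prefix}/{filename}"

print(build_repo_path(
    'S01', 'derivatives',
    'func/sub-S01_task-localizer_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz'))
```

The real helper additionally handles downloading and caching; this sketch only shows how a scope keeps raw and derived files cleanly separated.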

Accessing the Dataset

The Localizer dataset is hosted on HuggingFace in BIDS format. We provide helper functions in dartbrains_tools.data that download files on demand and cache them locally:

from dartbrains_tools.data import get_file, get_subjects, load_events

# Get the preprocessed BOLD file for subject S01
bold_path = get_file('S01', 'derivatives', 'bold')

# Get a list of all subjects
subjects = get_subjects()  # ['S01', 'S02', ..., 'S20']

# Load event timing for a subject
events = load_events('S01')

Files are downloaded from HuggingFace Hub the first time you request them and cached locally for subsequent use.

With a BIDS dataset, we often want to know which subjects are available, and retrieve specific files by subject, data type, and scope (raw vs. derivatives). Let's start by listing the subjects in the dataset.

subjects = get_subjects()
subjects[:10]

We can also retrieve the path to a specific file. For example, let's get the preprocessed BOLD file for the first 10 subjects. The get_file function downloads the file from HuggingFace Hub on first access and returns the local cached path.

bold_files = [get_file(sub, 'derivatives', 'bold') for sub in get_subjects()[:10]]
bold_files

In a BIDS dataset, each file follows a structured naming convention. For example, a preprocessed BOLD file is named:

sub-S01_task-localizer_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz

The key-value pairs (sub-S01, task-localizer, space-..., desc-preproc) are called entities and they encode metadata directly in the filename. This is one of the core design principles of BIDS: you can understand what a file contains just by reading its name.

Common BIDS entities include: - sub-<label>: Subject identifier - task-<label>: Task name - space-<label>: Reference space (e.g., MNI152NLin2009cAsym) - desc-<label>: Description (e.g., preproc for preprocessed) - suffix: The type of data (bold, T1w, events, etc.)
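Because entities are just underscore-separated key-value pairs, they are simple to pull apart with plain string operations. The parse_bids_entities helper below is a teaching sketch, not part of any BIDS library (pybids does this robustly in practice):

```python
def parse_bids_entities(filename):
    """Split a BIDS filename into its key-value entities plus the suffix."""
    # Strip directory components and the compound extension (.nii.gz, .tsv, ...)
    name = filename.split('/')[-1]
    for ext in ('.nii.gz', '.nii', '.tsv', '.json'):
        if name.endswith(ext):
            name = name[: -len(ext)]
            break
    parts = name.split('_')
    # All chunks except the last are key-value entities; the last is the suffix.
    entities = dict(p.split('-', 1) for p in parts[:-1])
    entities['suffix'] = parts[-1]
    return entities

print(parse_bids_entities(
    'sub-S01_task-localizer_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz'))
```

Running this on the preprocessed BOLD filename recovers the subject, task, space, and description directly from the name.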

Let's look at the path for a single file to see this structure.

f = get_file('S01', 'derivatives', 'bold')
f
'/home/runner/.cache/huggingface/hub/datasets--dartbrains--localizer/snapshots/493f7614c8b7cdc0593a89eb0635f10669b30a10/derivatives/fmriprep/sub-S01/func/sub-S01_task-localizer_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz'

This dataset contains a single task called localizer. Look at the Download Data page for more information about this task.

We can also retrieve event files that describe the experimental conditions and their timing. Let's load the events for the first subject.

events_df = load_events('S01')
events_df.head(10)
   onset  duration               trial_type
0    0.0         1        video_computation
1    2.4         1        video_computation
2    8.7         1  horizontal_checkerboard
3   11.4         1         audio_right_hand
4   15.0         1           audio_sentence
5   18.0         1         video_right_hand
6   20.7         1           audio_sentence
7   23.7         1          audio_left_hand
8   26.7         1          video_left_hand
9   29.7         1           audio_sentence
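Because load_events returns an ordinary pandas DataFrame, summarizing the design is a one-liner. Here is a sketch using just the ten rows shown above, hard-coded so the example runs without downloading the dataset:

```python
import pandas as pd

# First ten events from sub-S01 (hard-coded from the output above).
events_df = pd.DataFrame({
    'onset': [0.0, 2.4, 8.7, 11.4, 15.0, 18.0, 20.7, 23.7, 26.7, 29.7],
    'duration': [1] * 10,
    'trial_type': ['video_computation', 'video_computation',
                   'horizontal_checkerboard', 'audio_right_hand',
                   'audio_sentence', 'video_right_hand', 'audio_sentence',
                   'audio_left_hand', 'video_left_hand', 'audio_sentence'],
})

# Tally how many trials of each condition appear.
print(events_df['trial_type'].value_counts())
```

On the full events table the same value_counts call shows how balanced the conditions are across the whole run.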

Loading Data with Nibabel

Neuroimaging data are often stored as NIfTI files (.nii), which can also be compressed with gzip (.nii.gz). These files can store both 3D and 4D data and also contain structured metadata in the image header.
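One key piece of that header metadata is the affine matrix, which links voxel indices to world (scanner or template) coordinates: multiplying a homogeneous voxel coordinate [i, j, k, 1] by the 4x4 affine yields millimeter coordinates. A minimal numpy illustration, using a made-up affine with 2 mm isotropic voxels and a shifted origin:

```python
import numpy as np

# Hypothetical affine: 2 mm isotropic voxels, origin shifted to (-90, -126, -72) mm.
affine = np.array([
    [2.0, 0.0, 0.0,  -90.0],
    [0.0, 2.0, 0.0, -126.0],
    [0.0, 0.0, 2.0,  -72.0],
    [0.0, 0.0, 0.0,    1.0],
])

voxel = np.array([45, 63, 36, 1])  # homogeneous voxel coordinate [i, j, k, 1]
world = affine @ voxel             # world coordinates in mm
print(world[:3])                   # -> [0. 0. 0.] (this voxel sits at the origin)
```

In nibabel, this matrix is available as the .affine attribute of a loaded image.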

There is a very nice Python tool for accessing NIfTI data on your file system called nibabel. If you don't already have nibabel installed on your computer, it is easy to install via pip. The leading ! tells the Jupyter cell to run the command in the shell outside of the notebook: !pip install nibabel. You only need to run this once (unless you would like to update the version).

nibabel objects can be initialized by simply pointing to a NIfTI file, even if it is compressed with gzip. First, we will import the nibabel module as nib (short and sweet so that we don't have to type so much when using the tool).

We will be loading an anatomical image from subject S01 from the localizer dataset. See this paper for more information about this dataset.

We will use our get_file helper to grab subject S01's T1 image.

import nibabel as nib

data = nib.load(get_file('S01', 'derivatives', 'T1w'))

If we want to get more help on how to work with the nibabel data object we can either consult the documentation or add a ?.

help(data)
Help on Nifti1Image in module nibabel.nifti1 object:

class Nifti1Image(Nifti1Pair, nibabel.filebasedimages.SerializableImage)
 |  Nifti1Image(dataobj, affine, header=None, extra=None, file_map=None, dtype=None)
 |
 |  Class for single file NIfTI1 format image
 |
 |  Method resolution order:
 |      Nifti1Image
 |      Nifti1Pair
 |      nibabel.analyze.AnalyzeImage
 |      nibabel.spatialimages.SpatialImage
 |      nibabel.dataobj_images.DataobjImage
 |      nibabel.filebasedimages.SerializableImage
 |      nibabel.filebasedimages.FileBasedImage
 |      builtins.object
 |
 |  Methods defined here:
 |
 |  update_header(self)
 |      Harmonize header with image data and affine
 |
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |
 |  __annotations__ = {}
 |
 |  files_types = (('image', '.nii'),)
 |
 |  header_class = <class 'nibabel.nifti1.Nifti1Header'>
 |      Class for NIfTI1 header
 |
 |      The NIfTI1 header has many more coded fields than the simpler Analyze
 |      variants.  NIfTI1 headers also have extensions.
 |
 |      Nifti allows the header to be a separate file, as part of a nifti image /
 |      header pair, or to precede the data in a single file.  The object needs to
 |      know which type it is, in order to manage the voxel offset pointing to the
 |      data, extension reading, and writing the correct magic string.
 |
 |      This class handles the header-preceding-data case.
 |
 |
 |  valid_exts = ('.nii',)
 |
 |  ----------------------------------------------------------------------
 |  Methods inherited from Nifti1Pair:
 |
 |  __init__(
 |      self,
 |      dataobj,
 |      affine,
 |      header=None,
 |      extra=None,
 |      file_map=None,
 |      dtype=None
 |  )
 |      Initialize image
 |
 |      The image is a combination of (array-like, affine matrix, header), with
 |      optional metadata in `extra`, and filename / file-like objects
 |      contained in the `file_map` mapping.
 |
 |      Parameters
 |      ----------
 |      dataobj : object
 |         Object containing image data.  It should be some object that returns an
 |         array from ``np.asanyarray``.  It should have a ``shape`` attribute
 |         or property
 |      affine : None or (4,4) array-like
 |         homogeneous affine giving relationship between voxel coordinates and
 |         world coordinates.  Affine can also be None.  In this case,
 |         ``obj.affine`` also returns None, and the affine as written to disk
 |         will depend on the file format.
 |      header : None or mapping or header instance, optional
 |         metadata for this image format
 |      extra : None or mapping, optional
 |         metadata to associate with image that cannot be stored in the
 |         metadata of this image type
 |      file_map : mapping, optional
 |         mapping giving file information for this image format
 |
 |              Notes
 |              -----
 |
 |              If both a `header` and an `affine` are specified, and the `affine` does
 |              not match the affine that is in the `header`, the `affine` will be used,
 |              but the ``sform_code`` and ``qform_code`` fields in the header will be
 |              re-initialised to their default values. This is performed on the basis
 |              that, if you are changing the affine, you are likely to be changing the
 |              space to which the affine is pointing.  The :meth:`set_sform` and
 |              :meth:`set_qform` methods can be used to update the codes after an image
 |              has been created - see those methods, and the :ref:`manual
 |              <default-sform-qform-codes>` for more details.
 |
 |  as_reoriented(self, ornt)
 |      Apply an orientation change and return a new image
 |
 |      If ornt is identity transform, return the original image, unchanged
 |
 |      Parameters
 |      ----------
 |      ornt : (n,2) orientation array
 |         orientation transform. ``ornt[N,1]` is flip of axis N of the
 |         array implied by `shape`, where 1 means no flip and -1 means
 |         flip.  For example, if ``N==0`` and ``ornt[0,1] == -1``, and
 |         there's an array ``arr`` of shape `shape`, the flip would
 |         correspond to the effect of ``np.flipud(arr)``.  ``ornt[:,0]`` is
 |         the transpose that needs to be done to the implied array, as in
 |         ``arr.transpose(ornt[:,0])``
 |
 |  get_data_dtype(self, finalize=False)
 |      Get numpy dtype for data
 |
 |      If ``set_data_dtype()`` has been called with an alias
 |      and ``finalize`` is ``False``, return the alias.
 |      If ``finalize`` is ``True``, determine the appropriate dtype
 |      from the image data object and set the final dtype in the
 |      header before returning it.
 |
 |  get_qform(self, coded=False)
 |      Return 4x4 affine matrix from qform parameters in header
 |
 |      Parameters
 |      ----------
 |      coded : bool, optional
 |          If True, return {affine or None}, and qform code.  If False, just
 |          return affine.  {affine or None} means, return None if qform code
 |          == 0, and affine otherwise.
 |
 |      Returns
 |      -------
 |      affine : None or (4,4) ndarray
 |          If `coded` is False, always return affine reconstructed from qform
 |          quaternion.  If `coded` is True, return None if qform code is 0,
 |          else return the affine.
 |      code : int
 |          Qform code. Only returned if `coded` is True.
 |
 |      See also
 |      --------
 |      set_qform
 |      get_sform
 |
 |  get_sform(self, coded=False)
 |      Return 4x4 affine matrix from sform parameters in header
 |
 |      Parameters
 |      ----------
 |      coded : bool, optional
 |          If True, return {affine or None}, and sform code.  If False, just
 |          return affine.  {affine or None} means, return None if sform code
 |          == 0, and affine otherwise.
 |
 |      Returns
 |      -------
 |      affine : None or (4,4) ndarray
 |          If `coded` is False, always return affine from sform fields. If
 |          `coded` is True, return None if sform code is 0, else return the
 |          affine.
 |      code : int
 |          Sform code. Only returned if `coded` is True.
 |
 |      See also
 |      --------
 |      set_sform
 |      get_qform
 |
 |  set_data_dtype(self, datatype)
 |      Set numpy dtype for data from code, dtype, type or alias
 |
 |      Using :py:class:`int` or ``"int"`` is disallowed, as these types
 |      will be interpreted as ``np.int64``, which is almost never desired.
 |      ``np.int64`` is permitted for those intent on making poor choices.
 |
 |      The following aliases are defined to allow for flexible specification:
 |
 |        * ``'mask'`` - Alias for ``uint8``
 |        * ``'compat'`` - The nearest Analyze-compatible datatype
 |          (``uint8``, ``int16``, ``int32``, ``float32``)
 |        * ``'smallest'`` - The smallest Analyze-compatible integer
 |          (``uint8``, ``int16``, ``int32``)
 |
 |      Dynamic aliases are resolved when ``get_data_dtype()`` is called
 |      with a ``finalize=True`` flag. Until then, these aliases are not
 |      written to the header and will not persist to new images.
 |
 |      Examples
 |      --------
 |      >>> ints = np.arange(24, dtype='i4').reshape((2,3,4))
 |
 |      >>> img = Nifti1Image(ints, np.eye(4))
 |      >>> img.set_data_dtype(np.uint8)
 |      >>> img.get_data_dtype()
 |      dtype('uint8')
 |      >>> img.set_data_dtype('mask')
 |      >>> img.get_data_dtype()
 |      dtype('uint8')
 |      >>> img.set_data_dtype('compat')
 |      >>> img.get_data_dtype()
 |      'compat'
 |      >>> img.get_data_dtype(finalize=True)
 |      dtype('<i4')
 |      >>> img.get_data_dtype()
 |      dtype('<i4')
 |      >>> img.set_data_dtype('smallest')
 |      >>> img.get_data_dtype()
 |      'smallest'
 |      >>> img.get_data_dtype(finalize=True)
 |      dtype('uint8')
 |      >>> img.get_data_dtype()
 |      dtype('uint8')
 |
 |      Note that floating point values will not be coerced to ``int``
 |
 |      >>> floats = np.arange(24, dtype='f4').reshape((2,3,4))
 |      >>> img = Nifti1Image(floats, np.eye(4))
 |      >>> img.set_data_dtype('smallest')
 |      >>> img.get_data_dtype(finalize=True)
 |      Traceback (most recent call last):
 |         ...
 |      ValueError: Cannot automatically cast array (of type float32) to an integer
 |      type with fewer than 64 bits. Please set_data_dtype() to an explicit data type.
 |
 |      >>> arr = np.arange(1000, 1024, dtype='i4').reshape((2,3,4))
 |      >>> img = Nifti1Image(arr, np.eye(4))
 |      >>> img.set_data_dtype('smallest')
 |      >>> img.set_data_dtype('implausible')
 |      Traceback (most recent call last):
 |         ...
 |      nibabel.spatialimages.HeaderDataError: data dtype "implausible" not recognized
 |      >>> img.set_data_dtype('none')
 |      Traceback (most recent call last):
 |         ...
 |      nibabel.spatialimages.HeaderDataError: data dtype "none" known but not supported
 |      >>> img.set_data_dtype(np.void)
 |      Traceback (most recent call last):
 |         ...
 |      nibabel.spatialimages.HeaderDataError: data dtype "<class 'numpy.void'>" known
 |      but not supported
 |      >>> img.set_data_dtype('int')
 |      Traceback (most recent call last):
 |         ...
 |      ValueError: Invalid data type 'int'. Specify a sized integer, e.g., 'uint8' or numpy.int16.
 |      >>> img.set_data_dtype(int)
 |      Traceback (most recent call last):
 |         ...
 |      ValueError: Invalid data type <class 'int'>. Specify a sized integer, e.g., 'uint8' or
 |      numpy.int16.
 |      >>> img.set_data_dtype('int64')
 |      >>> img.get_data_dtype() == np.dtype('int64')
 |      True
 |
 |  set_qform(self, affine, code=None, strip_shears=True, **kwargs)
 |      Set qform header values from 4x4 affine
 |
 |      Parameters
 |      ----------
 |      affine : None or 4x4 array
 |          affine transform to write into sform. If None, only set code.
 |      code : None, string or integer
 |          String or integer giving meaning of transform in *affine*.
 |          The default is None.  If code is None, then:
 |
 |          * If affine is None, `code`-> 0
 |          * If affine not None and existing qform code in header == 0,
 |            `code`-> 2 (aligned)
 |          * If affine not None and existing qform code in header != 0,
 |            `code`-> existing qform code in header
 |
 |      strip_shears : bool, optional
 |          Whether to strip shears in `affine`.  If True, shears will be
 |          silently stripped. If False, the presence of shears will raise a
 |          ``HeaderDataError``
 |      update_affine : bool, optional
 |          Whether to update the image affine from the header best affine
 |          after setting the qform. Must be keyword argument (because of
 |          different position in `set_qform`). Default is True
 |
 |      See also
 |      --------
 |      get_qform
 |      set_sform
 |
 |      Examples
 |      --------
 |      >>> data = np.arange(24, dtype='f4').reshape((2,3,4))
 |      >>> aff = np.diag([2, 3, 4, 1])
 |      >>> img = Nifti1Pair(data, aff)
 |      >>> img.get_qform()
 |      array([[2., 0., 0., 0.],
 |             [0., 3., 0., 0.],
 |             [0., 0., 4., 0.],
 |             [0., 0., 0., 1.]])
 |      >>> img.get_qform(coded=True)
 |      (None, 0)
 |      >>> aff2 = np.diag([3, 4, 5, 1])
 |      >>> img.set_qform(aff2, 'talairach')
 |      >>> qaff, code = img.get_qform(coded=True)
 |      >>> np.all(qaff == aff2)
 |      True
 |      >>> int(code)
 |      3
 |
 |  set_sform(self, affine, code=None, **kwargs)
 |      Set sform transform from 4x4 affine
 |
 |      Parameters
 |      ----------
 |      affine : None or 4x4 array
 |          affine transform to write into sform.  If None, only set `code`
 |      code : None, string or integer
 |          String or integer giving meaning of transform in *affine*.
 |          The default is None.  If code is None, then:
 |
 |          * If affine is None, `code`-> 0
 |          * If affine not None and existing sform code in header == 0,
 |            `code`-> 2 (aligned)
 |          * If affine not None and existing sform code in header != 0,
 |            `code`-> existing sform code in header
 |
 |      update_affine : bool, optional
 |          Whether to update the image affine from the header best affine
 |          after setting the qform.  Must be keyword argument (because of
 |          different position in `set_qform`). Default is True
 |
 |      See also
 |      --------
 |      get_sform
 |      set_qform
 |
 |      Examples
 |      --------
 |      >>> data = np.arange(24, dtype='f4').reshape((2,3,4))
 |      >>> aff = np.diag([2, 3, 4, 1])
 |      >>> img = Nifti1Pair(data, aff)
 |      >>> img.get_sform()
 |      array([[2., 0., 0., 0.],
 |             [0., 3., 0., 0.],
 |             [0., 0., 4., 0.],
 |             [0., 0., 0., 1.]])
 |      >>> saff, code = img.get_sform(coded=True)
 |      >>> saff
 |      array([[2., 0., 0., 0.],
 |             [0., 3., 0., 0.],
 |             [0., 0., 4., 0.],
 |             [0., 0., 0., 1.]])
 |      >>> int(code)
 |      2
 |      >>> aff2 = np.diag([3, 4, 5, 1])
 |      >>> img.set_sform(aff2, 'talairach')
 |      >>> saff, code = img.get_sform(coded=True)
 |      >>> np.all(saff == aff2)
 |      True
 |      >>> int(code)
 |      3
 |
 |  to_file_map(self, file_map=None, dtype=None)
 |      Write image to `file_map` or contained ``self.file_map``
 |
 |      Parameters
 |      ----------
 |      file_map : None or mapping, optional
 |         files mapping.  If None (default) use object's ``file_map``
 |         attribute instead
 |      dtype : dtype-like, optional
 |         The on-disk data type to coerce the data array.
 |
 |  ----------------------------------------------------------------------
 |  Data and other attributes inherited from Nifti1Pair:
 |
 |  rw = True
 |
 |  ----------------------------------------------------------------------
 |  Class methods inherited from nibabel.analyze.AnalyzeImage:
 |
 |  from_file_map(file_map, *, mmap=True, keep_file_open=None)
 |      Class method to create image from mapping in ``file_map``
 |
 |      Parameters
 |      ----------
 |      file_map : dict
 |          Mapping with (key, value) pairs of (``file_type``, FileHolder
 |          instance giving file-likes for each file needed for this image
 |          type.
 |      mmap : {True, False, 'c', 'r'}, optional, keyword only
 |          `mmap` controls the use of numpy memory mapping for reading image
 |          array data.  If False, do not try numpy ``memmap`` for data array.
 |          If one of {'c', 'r'}, try numpy memmap with ``mode=mmap``.  A
 |          `mmap` value of True gives the same behavior as ``mmap='c'``.  If
 |          image data file cannot be memory-mapped, ignore `mmap` value and
 |          read array from file.
 |      keep_file_open : { None, True, False }, optional, keyword only
 |          `keep_file_open` controls whether a new file handle is created
 |          every time the image is accessed, or a single file handle is
 |          created and used for the lifetime of this ``ArrayProxy``. If
 |          ``True``, a single file handle is created and used. If ``False``,
 |          a new file handle is created every time the image is accessed.
 |          If ``file_map`` refers to an open file handle, this setting has no
 |          effect. The default value (``None``) will result in the value of
 |          ``nibabel.arrayproxy.KEEP_FILE_OPEN_DEFAULT`` being used.
 |
 |      Returns
 |      -------
 |      img : AnalyzeImage instance
 |
 |  ----------------------------------------------------------------------
 |  Data and other attributes inherited from nibabel.analyze.AnalyzeImage:
 |
 |  ImageArrayProxy = <class 'nibabel.arrayproxy.ArrayProxy'>
 |      Class to act as proxy for the array that can be read from a file
 |
 |      The array proxy allows us to freeze the passed fileobj and header such that
 |      it returns the expected data array.
 |
 |      This implementation assumes a contiguous array in the file object, with one
 |      of the numpy dtypes, starting at a given file position ``offset`` with
 |      single ``slope`` and ``intercept`` scaling to produce output values.
 |
 |      The class ``__init__`` requires a spec which defines how the data will be
 |      read and rescaled. The spec may be a tuple of length 2 - 5, containing the
 |      shape, storage dtype, offset, slope and intercept, or a ``header`` object
 |      with methods:
 |
 |      * get_data_shape
 |      * get_data_dtype
 |      * get_data_offset
 |      * get_slope_inter
 |
 |      A header should also have a 'copy' method.  This requirement will go away
 |      when the deprecated 'header' property goes away.
 |
 |      This implementation allows us to deal with Analyze and its variants,
 |      including Nifti1, and with the MGH format.
 |
 |      Other image types might need more specific classes to implement the API.
 |      See :mod:`nibabel.minc1`, :mod:`nibabel.ecat` and :mod:`nibabel.parrec` for
 |      examples.
 |
 |
 |  makeable = True
 |
 |  ----------------------------------------------------------------------
 |  Methods inherited from nibabel.spatialimages.SpatialImage:
 |
 |  __getitem__(self, idx: 'object') -> 'None'
 |      No slicing or dictionary interface for images
 |
 |      Use the slicer attribute to perform cropping and subsampling at your
 |      own risk.
 |
 |  __str__(self) -> 'str'
 |      Return str(self).
 |
 |  orthoview(self) -> 'OrthoSlicer3D'
 |      Plot the image using OrthoSlicer3D
 |
 |      Returns
 |      -------
 |      viewer : instance of OrthoSlicer3D
 |          The viewer.
 |
 |      Notes
 |      -----
 |      This requires matplotlib. If a non-interactive backend is used,
 |      consider using viewer.show() (equivalently plt.show()) to show
 |      the figure.
 |
 |  ----------------------------------------------------------------------
 |  Class methods inherited from nibabel.spatialimages.SpatialImage:
 |
 |  from_image(img: 'SpatialImage | FileBasedImage') -> 'Self'
 |      Class method to create new instance of own class from `img`
 |
 |      Parameters
 |      ----------
 |      img : ``spatialimage`` instance
 |         In fact, an object with the API of ``spatialimage`` -
 |         specifically ``dataobj``, ``affine``, ``header`` and ``extra``.
 |
 |      Returns
 |      -------
 |      cimg : ``spatialimage`` instance
 |         Image, of our own class
 |
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from nibabel.spatialimages.SpatialImage:
 |
 |  affine
 |
 |  slicer
 |      Slicer object that returns cropped and subsampled images
 |
 |      The image is resliced in the current orientation; no rotation or
 |      resampling is performed, and no attempt is made to filter the image
 |      to avoid `aliasing`_.
 |
 |      The affine matrix is updated with the new intercept (and scales, if
 |      down-sampling is used), so that all values are found at the same RAS
 |      locations.
 |
 |      Slicing may include non-spatial dimensions.
 |      However, this method does not currently adjust the repetition time in
 |      the image header.
 |
 |      .. _aliasing: https://en.wikipedia.org/wiki/Aliasing
 |
 |  ----------------------------------------------------------------------
 |  Data and other attributes inherited from nibabel.spatialimages.SpatialImage:
 |
 |  ImageSlicer = <class 'nibabel.spatialimages.SpatialFirstSlicer'>
 |      Slicing interface that returns a new image with an updated affine
 |
 |      Checks that an image's first three axes are spatial
 |
 |
 |  ----------------------------------------------------------------------
 |  Methods inherited from nibabel.dataobj_images.DataobjImage:
 |
 |  get_data(self, caching='fill')
 |      Return image data from image with any necessary scaling applied
 |
 |      get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).
 |
 |      * deprecated from version: 3.0
 |      * Raises <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 5.0
 |
 |  get_fdata(
 |      self,
 |      caching: "ty.Literal['fill', 'unchanged']" = 'fill',
 |      dtype: 'npt.DTypeLike' = <class 'numpy.float64'>
 |  ) -> 'np.ndarray[ty.Any, np.dtype[np.floating]]'
 |      Return floating point image data with necessary scaling applied
 |
 |      The image ``dataobj`` property can be an array proxy or an array.  An
 |      array proxy is an object that knows how to load the image data from
 |      disk.  An image with an array proxy ``dataobj`` is a *proxy image*; an
 |      image with an array in ``dataobj`` is an *array image*.
 |
 |      The default behavior for ``get_fdata()`` on a proxy image is to read
 |      the data from the proxy, and store in an internal cache.  Future calls
 |      to ``get_fdata`` will return the cached array.  This is the behavior
 |      selected with `caching` == "fill".
 |
 |      Once the data has been cached and returned from an array proxy, if you
 |      modify the returned array, you will also modify the cached array
 |      (because they are the same array).  Regardless of the `caching` flag,
 |      this is always true of an array image.
 |
 |      Parameters
 |      ----------
 |      caching : {'fill', 'unchanged'}, optional
 |          See the Notes section for a detailed explanation.  This argument
 |          specifies whether the image object should fill in an internal
 |          cached reference to the returned image data array. "fill" specifies
 |          that the image should fill an internal cached reference if
 |          currently empty.  Future calls to ``get_fdata`` will return this
 |          cached reference.  You might prefer "fill" to save the image object
 |          from having to reload the array data from disk on each call to
 |          ``get_fdata``.  "unchanged" means that the image should not fill in
 |          the internal cached reference if the cache is currently empty.  You
 |          might prefer "unchanged" to "fill" if you want to make sure that
 |          the call to ``get_fdata`` does not create an extra (cached)
 |          reference to the returned array.  In this case it is easier for
 |          Python to free the memory from the returned array.
 |      dtype : numpy dtype specifier
 |          A numpy dtype specifier specifying a floating point type.  Data is
 |          returned as this floating point type.  Default is ``np.float64``.
 |
 |      Returns
 |      -------
 |      fdata : array
 |          Array of image data of data type `dtype`.
 |
 |      See also
 |      --------
 |      uncache: empty the array data cache
 |
 |      Notes
 |      -----
 |      All images have a property ``dataobj`` that represents the image array
 |      data.  Images that have been loaded from files usually do not load the
 |      array data from file immediately, in order to reduce image load time
 |      and memory use.  For these images, ``dataobj`` is an *array proxy*; an
 |      object that knows how to load the image array data from file.
 |
 |      By default (`caching` == "fill"), when you call ``get_fdata`` on a
 |      proxy image, we load the array data from disk, store (cache) an
 |      internal reference to this array data, and return the array.  The next
 |      time you call ``get_fdata``, you will get the cached reference to the
 |      array, so we don't have to load the array data from disk again.
 |
 |      Array images have a ``dataobj`` property that already refers to an
 |      array in memory, so there is no benefit to caching, and the `caching`
 |      keywords have no effect.
 |
 |      For proxy images, you may not want to fill the cache after reading the
 |      data from disk because the cache will hold onto the array memory until
 |      the image object is deleted, or you use the image ``uncache`` method.
 |      If you don't want to fill the cache, then always use
 |      ``get_fdata(caching='unchanged')``; in this case ``get_fdata`` will not
 |      fill the cache (store the reference to the array) if the cache is empty
 |      (no reference to the array).  If the cache is full, "unchanged" leaves
 |      the cache full and returns the cached array reference.
 |
 |      The cache can effect the behavior of the image, because if the cache is
 |      full, or you have an array image, then modifying the returned array
 |      will modify the result of future calls to ``get_fdata()``.  For example
 |      you might do this:
 |
 |      >>> import os
 |      >>> import nibabel as nib
 |      >>> from nibabel.testing import data_path
 |      >>> img_fname = os.path.join(data_path, 'example4d.nii.gz')
 |
 |      >>> img = nib.load(img_fname) # This is a proxy image
 |      >>> nib.is_proxy(img.dataobj)
 |      True
 |
 |      The array is not yet cached by a call to "get_fdata", so:
 |
 |      >>> img.in_memory
 |      False
 |
 |      After we call ``get_fdata`` using the default `caching` == 'fill', the
 |      cache contains a reference to the returned array ``data``:
 |
 |      >>> data = img.get_fdata()
 |      >>> img.in_memory
 |      True
 |
 |      We modify an element in the returned data array:
 |
 |      >>> data[0, 0, 0, 0]
 |      0.0
 |      >>> data[0, 0, 0, 0] = 99
 |      >>> data[0, 0, 0, 0]
 |      99.0
 |
 |      The next time we call 'get_fdata', the method returns the cached
 |      reference to the (modified) array:
 |
 |      >>> data_again = img.get_fdata()
 |      >>> data_again is data
 |      True
 |      >>> data_again[0, 0, 0, 0]
 |      99.0
 |
 |      If you had *initially* used `caching` == 'unchanged' then the returned
 |      ``data`` array would have been loaded from file, but not cached, and:
 |
 |      >>> img = nib.load(img_fname)  # a proxy image again
 |      >>> data = img.get_fdata(caching='unchanged')
 |      >>> img.in_memory
 |      False
 |      >>> data[0, 0, 0] = 99
 |      >>> data_again = img.get_fdata(caching='unchanged')
 |      >>> data_again is data
 |      False
 |      >>> data_again[0, 0, 0, 0]
 |      0.0
 |
 |  uncache(self) -> 'None'
 |      Delete any cached read of data from proxied data
 |
 |      Remember there are two types of images:
 |
 |      * *array images* where the data ``img.dataobj`` is an array
 |      * *proxy images* where the data ``img.dataobj`` is a proxy object
 |
 |      If you call ``img.get_fdata()`` on a proxy image, the result of reading
 |      from the proxy gets cached inside the image object, and this cache is
 |      what gets returned from the next call to ``img.get_fdata()``.  If you
 |      modify the returned data, as in::
 |
 |          data = img.get_fdata()
 |          data[:] = 42
 |
 |      then the next call to ``img.get_fdata()`` returns the modified array,
 |      whether the image is an array image or a proxy image::
 |
 |          assert np.all(img.get_fdata() == 42)
 |
 |      When you uncache an array image, this has no effect on the return of
 |      ``img.get_fdata()``, but when you uncache a proxy image, the result of
 |      ``img.get_fdata()`` returns to its original value.
 |
 |  ----------------------------------------------------------------------
 |  Class methods inherited from nibabel.dataobj_images.DataobjImage:
 |
 |  from_filename(
 |      filename: 'FileSpec',
 |      *,
 |      mmap: "bool | ty.Literal['c', 'r']" = True,
 |      keep_file_open: 'bool | None' = None
 |  ) -> 'Self'
 |      Class method to create image from filename `filename`
 |
 |      Parameters
 |      ----------
 |      filename : str
 |          Filename of image to load
 |      mmap : {True, False, 'c', 'r'}, optional, keyword only
 |          `mmap` controls the use of numpy memory mapping for reading image
 |          array data.  If False, do not try numpy ``memmap`` for data array.
 |          If one of {'c', 'r'}, try numpy memmap with ``mode=mmap``.  A
 |          `mmap` value of True gives the same behavior as ``mmap='c'``.  If
 |          image data file cannot be memory-mapped, ignore `mmap` value and
 |          read array from file.
 |      keep_file_open : { None, True, False }, optional, keyword only
 |          `keep_file_open` controls whether a new file handle is created
 |          every time the image is accessed, or a single file handle is
 |          created and used for the lifetime of this ``ArrayProxy``. If
 |          ``True``, a single file handle is created and used. If ``False``,
 |          a new file handle is created every time the image is accessed.
 |          The default value (``None``) will result in the value of
 |          ``nibabel.arrayproxy.KEEP_FILE_OPEN_DEFAULT`` being used.
 |
 |      Returns
 |      -------
 |      img : DataobjImage instance
 |
 |  load = from_filename(
 |      filename: 'FileSpec',
 |      *,
 |      mmap: "bool | ty.Literal['c', 'r']" = True,
 |      keep_file_open: 'bool | None' = None
 |  ) -> 'Self'
 |      Class method to create image from filename `filename`
 |
 |      Parameters
 |      ----------
 |      filename : str
 |          Filename of image to load
 |      mmap : {True, False, 'c', 'r'}, optional, keyword only
 |          `mmap` controls the use of numpy memory mapping for reading image
 |          array data.  If False, do not try numpy ``memmap`` for data array.
 |          If one of {'c', 'r'}, try numpy memmap with ``mode=mmap``.  A
 |          `mmap` value of True gives the same behavior as ``mmap='c'``.  If
 |          image data file cannot be memory-mapped, ignore `mmap` value and
 |          read array from file.
 |      keep_file_open : { None, True, False }, optional, keyword only
 |          `keep_file_open` controls whether a new file handle is created
 |          every time the image is accessed, or a single file handle is
 |          created and used for the lifetime of this ``ArrayProxy``. If
 |          ``True``, a single file handle is created and used. If ``False``,
 |          a new file handle is created every time the image is accessed.
 |          The default value (``None``) will result in the value of
 |          ``nibabel.arrayproxy.KEEP_FILE_OPEN_DEFAULT`` being used.
 |
 |      Returns
 |      -------
 |      img : DataobjImage instance
 |
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from nibabel.dataobj_images.DataobjImage:
 |
 |  dataobj
 |
 |  in_memory
 |      True when any array data is in memory cache
 |
 |      There are separate caches for `get_data` reads and `get_fdata` reads.
 |      This property is True if either of those caches are set.
 |
 |  ndim
 |
 |  shape
 |
 |  ----------------------------------------------------------------------
 |  Methods inherited from nibabel.filebasedimages.SerializableImage:
 |
 |  to_bytes(self, **kwargs) -> 'bytes'
 |      Return a ``bytes`` object with the contents of the file that would
 |      be written if the image were saved.
 |
 |      Parameters
 |      ----------
 |      \*\*kwargs : keyword arguments
 |          Keyword arguments that may be passed to ``img.to_file_map()``
 |
 |      Returns
 |      -------
 |      bytes
 |          Serialized image
 |
 |  to_stream(self, io_obj: 'io.IOBase', **kwargs) -> 'None'
 |      Save image to writable IO stream
 |
 |      Parameters
 |      ----------
 |      io_obj : IOBase object
 |          Writable stream
 |      \*\*kwargs : keyword arguments
 |          Keyword arguments that may be passed to ``img.to_file_map()``
 |
 |  ----------------------------------------------------------------------
 |  Class methods inherited from nibabel.filebasedimages.SerializableImage:
 |
 |  from_bytes(bytestring: 'bytes') -> 'Self'
 |      Construct image from a byte string
 |
 |      Class method
 |
 |      Parameters
 |      ----------
 |      bytestring : bytes
 |          Byte string containing the on-disk representation of an image
 |
 |  from_stream(io_obj: 'io.IOBase') -> 'Self'
 |      Load image from readable IO stream
 |
 |      Convert to BytesIO to enable seeking, if input stream is not seekable
 |
 |      Parameters
 |      ----------
 |      io_obj : IOBase object
 |          Readable stream
 |
 |  from_url(url: 'str | request.Request', timeout: 'float' = 5) -> 'Self'
 |      Retrieve and load an image from a URL
 |
 |      Class method
 |
 |      Parameters
 |      ----------
 |      url : str or urllib.request.Request object
 |          URL of file to retrieve
 |      timeout : float, optional
 |          Time (in seconds) to wait for a response
 |
 |  ----------------------------------------------------------------------
 |  Methods inherited from nibabel.filebasedimages.FileBasedImage:
 |
 |  get_filename(self) -> 'str | None'
 |      Fetch the image filename
 |
 |      Parameters
 |      ----------
 |      None
 |
 |      Returns
 |      -------
 |      fname : None or str
 |         Returns None if there is no filename, or a filename string.
 |         If an image may have several filenames associated with it (e.g.
 |         Analyze ``.img, .hdr`` pair) then we return the more characteristic
 |         filename (the ``.img`` filename in the case of Analyze')
 |
 |  set_filename(self, filename: 'str') -> 'None'
 |      Sets the files in the object from a given filename
 |
 |      The different image formats may check whether the filename has
 |      an extension characteristic of the format, and raise an error if
 |      not.
 |
 |      Parameters
 |      ----------
 |      filename : str or os.PathLike
 |         If the image format only has one file associated with it,
 |         this will be the only filename set into the image
 |         ``.file_map`` attribute. Otherwise, the image instance will
 |         try and guess the other filenames from this given filename.
 |
 |  to_filename(self, filename: 'FileSpec', **kwargs) -> 'None'
 |      Write image to files implied by filename string
 |
 |      Parameters
 |      ----------
 |      filename : str or os.PathLike
 |         filename to which to save image.  We will parse `filename`
 |         with ``filespec_to_file_map`` to work out names for image,
 |         header etc.
 |      \*\*kwargs : keyword arguments
 |         Keyword arguments to format-specific save
 |
 |      Returns
 |      -------
 |      None
 |
 |  ----------------------------------------------------------------------
 |  Class methods inherited from nibabel.filebasedimages.FileBasedImage:
 |
 |  filespec_to_file_map(filespec: 'FileSpec') -> 'FileMap'
 |      Make `file_map` for this class from filename `filespec`
 |
 |      Class method
 |
 |      Parameters
 |      ----------
 |      filespec : str or os.PathLike
 |          Filename that might be for this image file type.
 |
 |      Returns
 |      -------
 |      file_map : dict
 |          `file_map` dict with (key, value) pairs of (``file_type``,
 |          FileHolder instance), where ``file_type`` is a string giving the
 |          type of the contained file.
 |
 |      Raises
 |      ------
 |      ImageFileError
 |          if `filespec` is not recognizable as being a filename for this
 |          image type.
 |
 |  instance_to_filename(img: 'FileBasedImage', filename: 'FileSpec') -> 'None'
 |      Save `img` in our own format, to name implied by `filename`
 |
 |      This is a class method
 |
 |      Parameters
 |      ----------
 |      img : ``any FileBasedImage`` instance
 |
 |      filename : str
 |         Filename, implying name to which to save image.
 |
 |  make_file_map(mapping: 'ty.Mapping[str, str | io.IOBase] | None' = None) -> 'FileMap'
 |      Class method to make files holder for this image type
 |
 |      Parameters
 |      ----------
 |      mapping : None or mapping, optional
 |         mapping with keys corresponding to image file types (such as
 |         'image', 'header' etc, depending on image class) and values
 |         that are filenames or file-like.  Default is None
 |
 |      Returns
 |      -------
 |      file_map : dict
 |         dict with string keys given by first entry in tuples in
 |         sequence klass.files_types, and values of type FileHolder,
 |         where FileHolder objects have default values, other than
 |         those given by `mapping`
 |
 |  path_maybe_image(
 |      filename: 'FileSpec',
 |      sniff: 'FileSniff | None' = None,
 |      sniff_max: 'int' = 1024
 |  ) -> 'tuple[bool, FileSniff | None]'
 |      Return True if `filename` may be image matching this class
 |
 |      Parameters
 |      ----------
 |      filename : str or os.PathLike
 |          Filename for an image, or an image header (metadata) file.
 |          If `filename` points to an image data file, and the image type has
 |          a separate "header" file, we work out the name of the header file,
 |          and read from that instead of `filename`.
 |      sniff : None or (bytes, filename), optional
 |          Bytes content read from a previous call to this method, on another
 |          class, with metadata filename.  This allows us to read metadata
 |          bytes once from the image or header, and pass this read set of
 |          bytes to other image classes, therefore saving a repeat read of the
 |          metadata.  `filename` is used to validate that metadata would be
 |          read from the same file, re-reading if not.  None forces this
 |          method to read the metadata.
 |      sniff_max : int, optional
 |          The maximum number of bytes to read from the metadata.  If the
 |          metadata file is long enough, we read this many bytes from the
 |          file, otherwise we read to the end of the file.  Longer values
 |          sniff more of the metadata / image file, making it more likely that
 |          the returned sniff will be useful for later calls to
 |          ``path_maybe_image`` for other image classes.
 |
 |      Returns
 |      -------
 |      maybe_image : bool
 |          True if `filename` may be valid for an image of this class.
 |      sniff : None or (bytes, filename)
 |          Read bytes content from found metadata.  May be None if the file
 |          does not appear to have useful metadata.
 |
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from nibabel.filebasedimages.FileBasedImage:
 |
 |  header
 |
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from nibabel.filebasedimages.FileBasedImage:
 |
 |  __dict__
 |      dictionary for instance variables
 |
 |  __weakref__
 |      list of weak references to the object

The imaging data is stored in either a 3D or 4D numpy array. Just as with a numpy array, it is easy to get the dimensions of the data using the shape attribute.

data.shape

It looks like there are three dimensions (x, y, z), corresponding to the number of voxels along each axis. If we know the voxel size, we can convert these dimensions into millimeters.
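For this anatomical image the voxels happen to be 1 mm isotropic (see the pixdim field of the header printed further down), so the conversion is just an elementwise multiplication. Here is a small numpy sketch using the dimensions returned by data.shape:

```python
import numpy as np

shape = np.array([193, 229, 193])       # voxel counts, as returned by data.shape
voxel_size = np.array([1.0, 1.0, 1.0])  # voxel dimensions in mm (header pixdim)
fov_mm = shape * voxel_size             # field of view in millimeters
print(fov_mm)                           # [193. 229. 193.]
```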

We can also directly access the data and plot a single slice using standard matplotlib functions.

plt.imshow(data.get_fdata()[:,:,50], cmap='RdBu_r')

Try slicing different dimensions (x,y,z) yourself to get a feel for how the data is represented in this anatomical image.
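The three slicing patterns can be sketched with a stand-in numpy array (the shape below is assumed to match this anatomical image). Fixing one index of a 3D volume always yields a 2D slice, and which two dimensions remain tells you the slice orientation:

```python
import numpy as np

vol = np.zeros((193, 229, 193))  # stand-in for data.get_fdata()
sagittal = vol[96, :, :]   # fix x -> (229, 193) slice
coronal  = vol[:, 114, :]  # fix y -> (193, 193) slice
axial    = vol[:, :, 96]   # fix z -> (193, 229) slice
print(sagittal.shape, coronal.shape, axial.shape)
```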

We can also access data from the image header. Let's assign the header of an image to a variable and print it to view its contents.

header = data.header
print(header)
<class 'nibabel.nifti1.Nifti1Header'> object, endian='<'
sizeof_hdr      : 348
data_type       : b''
db_name         : b''
extents         : 0
session_error   : 0
regular         : b'r'
dim_info        : 0
dim             : [  3 193 229 193   1   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : none
datatype        : float32
bitpix          : 32
slice_start     : 0
pixdim          : [1. 1. 1. 1. 0. 0. 0. 0.]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 2
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : b'xform matrices modified by FixHeaderApplyTransforms (niworkflows v1.1.12).'
aux_file        : b''
qform_code      : mni
sform_code      : mni
quatern_b       : 0.0
quatern_c       : 0.0
quatern_d       : 0.0
qoffset_x       : -96.0
qoffset_y       : -132.0
qoffset_z       : -78.0
srow_x          : [  1.   0.   0. -96.]
srow_y          : [   0.    1.    0. -132.]
srow_z          : [  0.   0.   1. -78.]
intent_name     : b''
magic           : b'n+1'

Some of the important information in the header is information about the orientation of the image in space. This can be represented as the affine matrix, which can be used to transform images between different spaces.

data.affine
array([[   1.,    0.,    0.,  -96.],
       [   0.,    1.,    0., -132.],
       [   0.,    0.,    1.,  -78.],
       [   0.,    0.,    0.,    1.]])

We will dive deeper into affine transformations in the preprocessing tutorial.
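As a quick sanity check, we can apply the affine above by hand: multiplying a homogeneous voxel coordinate by the matrix gives that voxel's position in millimeters in MNI space. A minimal numpy sketch, using the affine values printed above:

```python
import numpy as np

affine = np.array([[1., 0., 0.,  -96.],
                   [0., 1., 0., -132.],
                   [0., 0., 1.,  -78.],
                   [0., 0., 0.,    1.]])
voxel = np.array([96, 132, 78, 1])  # homogeneous voxel coordinate (i, j, k, 1)
world = affine @ voxel
print(world[:3])  # [0. 0. 0.] -> this voxel sits at the MNI origin
```

nibabel provides the same operation without the homogeneous-coordinate bookkeeping as nib.affines.apply_affine(data.affine, [96, 132, 78]).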

Plotting Data with Nilearn

There are many useful tools from the nilearn library to help manipulate and visualize neuroimaging data. See their documentation for an example.

In this section, we will explore a few of their different plotting functions, which can work directly with nibabel instances.

A note on displaying plots in marimo

Marimo renders the last expression of each cell. Unlike Jupyter, it doesn't automatically call _repr_html_ on opaque return objects, so plot_anat(data) or plt.imshow(...) alone will just print the object's repr string (e.g. <OrthoSlicer object at 0x...>) instead of a figure.

No %matplotlib inline equivalent is needed — marimo always renders real Figure objects inline. You just have to make sure the last line of the cell is a Figure (or an HTML component). Three patterns, in order of recommendation:

1. Create the figure, pass it in, return it (preferred)

fig, ax = plt.subplots(figsize=(12, 4))
plot_anat(data, axes=ax)
fig
  • Explicit figure handle — safe across reactive re-runs
  • You control size, DPI, subplot layout
  • A couple extra lines per cell

2. Call the plotting function, then plt.gcf() (quick fix)

plot_anat(data)
plt.gcf()
  • One-line escape hatch — easy to retrofit
  • gcf() returns whichever figure pyplot touched most recently, which can be surprising when cells re-execute out of order in a reactive notebook
  • No control over figure dimensions

3. mo.Html(view.get_iframe()) for nilearn interactive views

mo.Html(view_img(data).get_iframe())

Used for view_img, view_connectome, view_surf — these return a nilearn HTMLDocument with embedded JavaScript. The get_iframe() call sandboxes the viewer into its own iframe so its JS doesn't collide with marimo's. Use .html instead of .get_iframe() if you want it inline in the main DOM and are sure there are no JS conflicts.

_fig, _ax = plt.subplots(figsize=(12, 4))
plot_anat(data, axes=_ax)
_fig

Nilearn plotting functions are very flexible and allow us to easily customize our plots.

plot_anat(data, draw_cross=False, display_mode='z')
plt.gcf()

Try getting more information about how to use the function with ?, and try adding different arguments to change the plot.

Nilearn also has a neat interactive viewer called view_img for examining images directly in the notebook.

view_img(data)

The view_img function is particularly useful for overlaying statistical maps over an anatomical image so that we can interactively examine where the results are located.

As an example, let's load a mask of the amygdala and try to find where it is located. We will download it from Neurovault using a function from nltools.

amygdala_mask = Brain_Data('https://neurovault.org/media/images/1290/FSL_BAmyg_thr0.nii.gz').to_nifti()

view_img(amygdala_mask, data)

We can also plot a glass brain which allows us to see through the brain from different slice orientations. In this example, we will plot the binary amygdala mask.

plot_glass_brain(amygdala_mask)
plt.gcf()

Manipulating Data with Nltools

Ok, we've now learned how to use nibabel to load imaging data and nilearn to plot it.

Next we are going to learn how to use the nltools package that tries to make loading, plotting, and manipulating data easier. It uses many functions from nibabel, nilearn, and other python libraries. The bulk of the nltools toolbox is built around the Brain_Data() class. The concept behind the class is to have a similar feel to a pandas dataframe, which means that it should feel intuitive to manipulate the data.

The Brain_Data() class has several attributes that may be helpful to know about. First, it stores imaging data in .data as a vectorized observations by features matrix, where each image is an observation and each voxel is a feature. Space is flattened using nifti_masker from nilearn. This object is also stored as an attribute in .nifti_masker to allow transformations between the 2D and 3D/4D representations. In addition, a brain mask is stored in .mask. Finally, there are attributes to store class labels for prediction/classification analyses in .Y and design matrices in .X. These are both expected to be pandas DataFrames.
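The vectorization step can be sketched in plain numpy (hypothetical shapes, with a mask covering every voxel for simplicity): a boolean brain mask pulls the in-brain voxels out of each 3D volume, turning a 4D (x, y, z, time) array into a 2D (images, voxels) matrix.

```python
import numpy as np

vol4d = np.random.rand(4, 5, 6, 10)    # (x, y, z, time) stand-in volume
mask = np.ones((4, 5, 6), dtype=bool)  # brain mask covering every voxel
flat = vol4d[mask].T                   # (images, voxels) matrix
print(flat.shape)                      # (10, 120)
```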

We will give a quick overview of basic Brain_Data operations, but we encourage you to see our documentation for more details.

Brain_Data basics

To get a feel for Brain_Data, let's load an example anatomical overlay image that comes packaged with the toolbox.

anat = Brain_Data(get_anatomical())
anat
nltools.data.brain_data.Brain_Data(data=(238955,), Y=(0, 0), X=(0, 0), mask=MNI152_2mm_mask.nii.gz)

To view the attributes of Brain_Data, use the vars() function.

print(vars(anat))
{'_h5_compression': 'gzip', 'mask': <nibabel.nifti1.Nifti1Image object at 0x7fbfe0432570>, 'nifti_masker': NiftiMasker(mask_img=<nibabel.nifti1.Nifti1Image object at 0x7fbfe0432570>), 'data': array([1875., 2127., 2182., ..., 5170., 5180., 2836.],
      shape=(238955,), dtype=float32), 'Y': Empty DataFrame
Columns: []
Index: [], 'X': Empty DataFrame
Columns: []
Index: []}

Brain_Data has many methods to help manipulate, plot, and analyze imaging data. We can use the dir() function to get a quick list of all of the available methods that can be used on this class.

To learn more about how to use these tools either use the ? function, or look up the function in the api documentation.

print(dir(anat))
['X', 'Y', '__add__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__firstlineno__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getstate__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__init_subclass__', '__isub__', '__iter__', '__itruediv__', '__le__', '__len__', '__lt__', '__module__', '__mul__', '__ne__', '__new__', '__radd__', '__reduce__', '__reduce_ex__', '__repr__', '__rmul__', '__rsub__', '__setattr__', '__setitem__', '__sizeof__', '__static_attributes__', '__str__', '__sub__', '__subclasshook__', '__truediv__', '__weakref__', '_check_shape_compatibility', '_h5_compression', '_validate_arithmetic_operands', 'aggregate', 'align', 'append', 'apply_mask', 'astype', 'bootstrap', 'copy', 'data', 'decompose', 'detrend', 'distance', 'dtype', 'empty', 'extract_roi', 'filter', 'find_spikes', 'groupby', 'icc', 'iplot', 'isempty', 'mask', 'mean', 'median', 'multivariate_similarity', 'nifti_masker', 'plot', 'predict', 'predict_multi', 'r_to_z', 'randomise', 'regions', 'regress', 'scale', 'shape', 'similarity', 'smooth', 'standardize', 'std', 'sum', 'temporal_resample', 'threshold', 'to_nifti', 'transform_pairwise', 'ttest', 'upload_neurovault', 'write', 'z_to_r']

Ok, now let's load a single subject's functional data from the localizer dataset. We will load one that has already been preprocessed with fmriprep and is stored in the derivatives folder.

Loading data can be a little bit slow especially if the data need to be resampled to the template, which is set at \(2mm^3\) by default. However, once it's loaded into the workspace it should be relatively fast to work with it.

data_1 = Brain_Data(get_file('S01', 'derivatives', 'bold'))
/home/runner/work/dartbrains/dartbrains/.venv/lib/python3.13/site-packages/nltools/data/brain_data.py:253: UserWarning: [NiftiMasker.fit] Generation of a mask has been requested (imgs != None) while a mask was given at masker creation. Given mask will be used.
  self.data = self.nifti_masker.fit_transform(data)

Here are a few quick basic data operations.

Find the number of images in a Brain_Data() instance

print(len(data_1))
128

Find the dimensions of the data (images x voxels)

print(data_1.shape())
(128, 238955)

We can use any type of indexing to slice the data such as integers, lists of integers, slices, or boolean vectors.

import numpy as np
print(data_1[5].shape())
print(data_1[[1, 6, 2]].shape())
print(data_1[0:10].shape())
index = np.zeros(len(data_1), dtype=bool)
index[[1, 5, 9, 16, 20, 22]] = True
print(data_1[index].shape())
(238955,)
(3, 238955)
(10, 238955)
(6, 238955)
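Under the hood, Brain_Data stores the data as a 2D (images x voxels) array, so its indexing mirrors NumPy row indexing. The same four indexing styles can be sketched on a plain NumPy array (the 128 x 10 array here is a stand-in for real data):

```python
import numpy as np

# Stand-in for the (images x voxels) data array: 128 images, 10 voxels
data = np.arange(128 * 10, dtype=float).reshape(128, 10)

print(data[5].shape)           # single integer -> one image: (10,)
print(data[[1, 6, 2]].shape)   # list of integers -> (3, 10)
print(data[0:10].shape)        # slice -> (10, 10)

index = np.zeros(len(data), dtype=bool)
index[[1, 5, 9, 16, 20, 22]] = True
print(data[index].shape)       # boolean vector -> (6, 10)
```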

Simple Arithmetic Operations

Calculate the mean for every voxel over images

data_1.mean()
nltools.data.brain_data.Brain_Data(data=(238955,), Y=(0, 0), X=(0, 0), mask=MNI152_2mm_mask.nii.gz)

Calculate the standard deviation for every voxel over images

data_1.std()
nltools.data.brain_data.Brain_Data(data=(238955,), Y=(0, 0), X=(0, 0), mask=MNI152_2mm_mask.nii.gz)

Methods can be chained. Here we get the shape of the mean.

print(data_1.mean().shape())
(238955,)

Brain_Data instances can be added and subtracted

new = data_1[1] + data_1[2]

Brain_Data instances can be manipulated with basic arithmetic operations.

Here we add 10 to every voxel and scale by 2

data2 = (data_1 + 10) * 2

Brain_Data instances can be copied

new_1 = data_1.copy()

Brain_Data instances can be easily converted to nibabel instances, which store the data in a 3D/4D matrix. This is useful for interfacing with other Python toolboxes such as nilearn.

data_1.to_nifti()
<nibabel.nifti1.Nifti1Image object at 0x7fbfe0433140>

Brain_Data instances can be concatenated using the append method

new_2 = new_1.append(data_1[4])

Lists of Brain_Data instances can also be concatenated by recasting as a Brain_Data object.

print(type([x for x in data_1[:4]]))
type(Brain_Data([x for x in data_1[:4]]))
<class 'list'>
<class 'nltools.data.brain_data.Brain_Data'>

Any Brain_Data object can be written out to a nifti file.

data_1.write('Tmp_Data.nii.gz')

Images within a Brain_Data() instance are iterable. Here we use a list comprehension to calculate the overall mean across all voxels within an image.

[x.mean() for x in data_1]

We could also do this with the mean method by setting axis=1.

data_1.mean(axis=1)
array([3632.536 , 3639.0664, 3636.0562, 3631.1252, 3627.157 , 3631.0984,
       3648.3142, 3657.7656, 3654.2617, 3658.1345, 3652.416 , 3647.3025,
       3648.4014, 3651.63  , 3648.5498, 3648.7568, 3654.2317, 3655.2266,
       3650.022 , 3644.6763, 3645.765 , 3645.067 , 3633.773 , 3635.3667,
       3634.096 , 3637.7717, 3636.5327, 3643.0576, 3643.0342, 3635.3635,
       3644.4868, 3659.792 , 3650.5303, 3642.402 , 3644.544 , 3638.2612,
       3638.79  , 3645.699 , 3641.147 , 3632.2188, 3624.9937, 3628.2786,
       3626.0632, 3623.8186, 3636.2056, 3628.5876, 3629.2666, 3622.416 ,
       3617.2483, 3609.94  , 3620.9905, 3626.5642, 3631.9675, 3629.2761,
       3631.3801, 3625.3945, 3621.508 , 3620.3164, 3625.7053, 3626.373 ,
       3623.368 , 3630.5776, 3630.8086, 3626.8875, 3622.809 , 3618.6863,
       3617.2356, 3615.7014, 3618.1897, 3622.9312, 3618.4062, 3611.3345,
       3613.9844, 3630.3782, 3630.0918, 3622.9006, 3613.8499, 3614.9834,
       3622.9841, 3622.4487, 3619.9285, 3612.1162, 3614.8423, 3608.7312,
       3614.767 , 3609.9822, 3606.212 , 3602.921 , 3602.1323, 3601.921 ,
       3611.0999, 3619.5076, 3616.9976, 3613.6519, 3607.4514, 3624.702 ,
       3628.3745, 3612.5479, 3596.0322, 3581.2942, 3580.8047, 3584.3643,
       3593.8281, 3605.6892, 3608.284 , 3622.2231, 3618.5676, 3613.901 ,
       3607.047 , 3598.382 , 3589.81  , 3588.8022, 3599.5337, 3603.5144,
       3602.968 , 3611.6597, 3611.822 , 3605.5686, 3593.081 , 3592.4163,
       3600.1592, 3612.8616, 3611.4587, 3619.3752, 3613.7874, 3599.8997,
       3595.626 , 3597.0574], dtype=float32)

Let's plot the mean to see how the global signal changes over time.

plt.plot(data_1.mean(axis=1))

Notice the slow linear drift over time, where the global signal intensity gradually decreases. We will learn how to remove this with a high pass filter in future tutorials.
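As a preview of what removing such a drift looks like, here is a minimal NumPy sketch that subtracts a first-order polynomial fit from a synthetic global-signal time course (the baseline, drift slope, and noise values are made up for illustration; this is not the high-pass filter used in later tutorials):

```python
import numpy as np

# Synthetic global signal: constant baseline + slow linear drift + noise
rng = np.random.default_rng(0)
t = np.arange(128)
signal = 3650 - 0.3 * t + rng.normal(0, 2, size=t.size)

# Fit a first-order (linear) trend and subtract it
slope, intercept = np.polyfit(t, signal, 1)
detrended = signal - (slope * t + intercept)

# The detrended signal fluctuates around zero with no remaining slope
print(detrended.mean())
print(np.polyfit(t, detrended, 1)[0])
```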

Plotting

There are multiple ways to plot your data.

For a very quick plot, the .plot() method returns a montage of axial slices. Here we use nilearn's plot_stat_map to plot the mean of each voxel over time.

f_2 = plot_stat_map(data_1.mean().to_nifti())
plt.gcf()

There is also an interactive .iplot() method based on nilearn's view_img.

data_1.mean().iplot()


/home/runner/work/dartbrains/dartbrains/.venv/lib/python3.13/site-packages/numpy/_core/fromnumeric.py:840: UserWarning: Warning: 'partition' will ignore the 'mask' of the MaskedArray.
  a.partition(kth, axis=axis, kind=kind, order=order)

Brain_Data() instances can be converted to a nibabel instance and plotted using any nilearn plot method such as glass brain.

plot_glass_brain(data_1.mean().to_nifti())
plt.gcf()

Ok, that's the basics. Brain_Data can do much more!

Check out some of our tutorials for more detailed examples.

We'll be using this tool throughout the course.

Exercises

For homework, let's practice our skills in working with data.

Exercise 1

A few subjects have already been preprocessed with fmriprep.

Use get_subjects() to figure out which subjects are available in the dataset.

Exercise 2

One question we are often interested in is where in the brain we have an adequate signal to noise ratio (SNR). There are many different metrics; here we will use temporal SNR (tSNR), which is the voxel mean over time divided by its standard deviation.

\[\text{tSNR} = \frac{\text{mean}(\text{voxel}_i)}{\text{std}(\text{voxel}_i)}\]

In Exercise 2, calculate the tSNR for 'S01' and plot it so we can see which regions have high and low SNR.
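The formula can be sketched with plain NumPy on a hypothetical (images x voxels) array; with a real Brain_Data instance you would work from its mean and std methods instead (the array sizes and intensity values below are made up for illustration):

```python
import numpy as np

# Hypothetical preprocessed run: 128 images x 1000 voxels
rng = np.random.default_rng(42)
bold = rng.normal(loc=3600, scale=30, size=(128, 1000))

# tSNR per voxel: mean over time divided by std over time (axis=0 = images)
tsnr = bold.mean(axis=0) / bold.std(axis=0)
print(tsnr.shape)   # one tSNR value per voxel: (1000,)
```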

Exercise 3

We are often interested in identifying outliers in our data. In this exercise, find any images from 'S01' whose global intensity lies outside 95% of all images (i.e., an absolute z-score greater than 2), and plot each one.
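The z-scoring step can be sketched with NumPy on a hypothetical vector of per-image global means (the baseline values and the two injected spikes are made up for illustration):

```python
import numpy as np

# Hypothetical global mean intensity for each of 128 images
rng = np.random.default_rng(7)
global_mean = rng.normal(3600, 5, size=128)
global_mean[[20, 75]] += 50   # inject two artificial outlier images

# Z-score across images, then flag images with |z| > 2
z = (global_mean - global_mean.mean()) / global_mean.std()
outliers = np.where(np.abs(z) > 2)[0]
print(outliers)   # indices of the flagged images
```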