fMRI Preprocessing Pipeline Guide

Domain-validated guidance for fMRI preprocessing decisions: motion correction, slice timing, spatial normalization, smoothing, confound regression, and quality control


Purpose

fMRI preprocessing transforms raw scanner data into a form suitable for statistical analysis. Unlike generic data cleaning, every preprocessing decision in fMRI involves domain-specific trade-offs: choosing the wrong step order can introduce artifacts that mimic neural signal, smoothing at the wrong scale destroys the spatial information needed for multivariate analyses, and failing to correct for susceptibility distortions misaligns brain regions by several millimeters.

A competent programmer without neuroimaging training would get many of these decisions wrong. This skill encodes the domain knowledge required to make correct preprocessing choices for different analysis goals.

When to Use This Skill

  • Setting up a preprocessing pipeline for task fMRI, resting-state fMRI, or MVPA
  • Choosing between preprocessing tools (fMRIPrep, FSL, SPM, AFNI)
  • Deciding which steps to include, skip, or modify for a specific analysis type
  • Performing quality control on preprocessed data
  • Reviewing or troubleshooting an existing preprocessing pipeline
  • Selecting parameters for motion correction, smoothing, or normalization

Research Planning Protocol

Before executing the domain-specific steps below, you MUST:

  1. State the research question — What analysis will follow preprocessing and what does it require?
  2. Justify the preprocessing choices — Why these steps in this order? What alternatives were considered?
  3. Declare expected quality metrics — What motion thresholds, SNR values, and exclusion criteria will you use?
  4. Note assumptions and limitations — What does this pipeline assume about the data? Where could it mislead?
  5. Present the plan to the user and WAIT for confirmation before proceeding.

For detailed methodology guidance, see the research-literacy skill.

⚠️ Verification Notice

This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.

Pipeline Order Decision Tree

The standard preprocessing order is well-established, but branch points exist depending on the analysis type. The canonical order is:

1. DICOM to NIfTI conversion (BIDS format)
2. Discard non-steady-state volumes
3. Slice timing correction (conditional)
4. Motion correction (rigid body)
5. Distortion correction (fieldmap-based)
6. Coregistration (functional to structural)
7. Spatial normalization (to MNI/template space)
8. Smoothing (conditional on analysis type)

Decision points:

Is your TR < 1 second (e.g., multiband)?
 |
 +-- YES --> Skip slice timing correction; use temporal derivative in GLM instead
 | (Sladky et al., 2011; HCP consortium recommendation)
 |
 +-- NO --> Apply slice timing correction before motion correction
 (AFNI and SPM convention; Sladky et al., 2011)

What is your analysis type?
 |
 +-- Task activation (univariate GLM)
 | --> Full pipeline with smoothing (FWHM = 2-3x voxel size)
 |
 +-- Resting-state connectivity
 | --> Full pipeline + aggressive confound regression; smoothing 4-6 mm
 |
 +-- MVPA / multivariate decoding
 --> Full pipeline WITHOUT smoothing (or minimal, <= 2 mm)
 Smooth only after first-level pattern extraction
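The decision tree above can be sketched as a small helper function. This is an illustrative encoding only; the function and option names are hypothetical, not part of any real package, and the returned values mirror the recommendations in this guide.

```python
# Hypothetical sketch of the pipeline decision tree above.
def plan_pipeline(tr_seconds, analysis):
    """Return preprocessing choices for a given TR and analysis type."""
    plan = {
        # Sub-second (multiband) TRs: skip STC, model timing in the GLM instead
        "slice_timing": tr_seconds >= 1.0,
        "temporal_derivative_in_glm": tr_seconds < 1.0,
    }
    if analysis == "task":
        plan["smoothing_fwhm"] = "2-3x voxel size"
    elif analysis == "resting-state":
        plan["smoothing_fwhm"] = "4-6 mm"
        plan["aggressive_confound_regression"] = True
    elif analysis == "mvpa":
        plan["smoothing_fwhm"] = "none (or <= 2 mm, after first-level patterns)"
    else:
        raise ValueError(f"unknown analysis type: {analysis}")
    return plan
```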

Core Preprocessing Steps

Step 1: DICOM to NIfTI Conversion

Convert raw DICOM files to NIfTI format organized in BIDS (Brain Imaging Data Structure; Gorgolewski et al., 2016). BIDS standardizes file naming and metadata, enabling automated pipeline tools like fMRIPrep to detect acquisition parameters automatically.

Tools: dcm2niix (Li et al., 2016), heudiconv, BIDScoin
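A typical dcm2niix invocation looks like the sketch below; all paths are placeholders for your own layout, and the filename template should be adapted to your BIDS naming scheme.

```shell
# Sketch: compressed NIfTI (-z y) with BIDS JSON sidecars (-b y);
# %i = patient ID, %p = protocol name in the output filename template.
dcm2niix -z y -b y -f "sub-%i_%p" -o /path/to/output /path/to/dicom_dir
```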

Step 2: Discard Non-Steady-State Volumes

The first few volumes of an fMRI run are acquired before the MR signal reaches T1 equilibrium, producing artificially high signal intensity. Discard the first 3-5 volumes (approximately 5-10 seconds) unless the scanner automatically acquired dummy scans (Poldrack et al., 2011, Ch. 5).

Domain insight: Modern scanners often acquire dummy scans that are not saved. Check the acquisition protocol. fMRIPrep detects non-steady-state volumes automatically using signal intensity changes.
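Trimming dummy volumes is a simple array slice along the time axis. The sketch below uses a random array in place of real BOLD data; in practice you would load and save the NIfTI with a library such as nibabel.

```python
import numpy as np

# Sketch: drop the first n_dummy volumes from a 4D series (x, y, z, time).
n_dummy = 4
bold = np.random.randn(64, 64, 36, 200)   # placeholder for loaded BOLD data
bold_trimmed = bold[..., n_dummy:]        # keep volumes n_dummy .. end
```

Remember to shift event onsets in the task timing files by `n_dummy * TR` seconds if onsets were logged relative to the first acquired volume.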

Step 3: Slice Timing Correction

Within each TR, slices are acquired sequentially (not simultaneously), creating temporal offsets of up to one full TR between first and last slices. Slice timing correction (STC) interpolates each slice to a common time point (Sladky et al., 2011).

| Condition | Recommendation | Rationale |
| --- | --- | --- |
| TR > 2 s, interleaved acquisition | Always apply STC | Temporal offset is large; significant benefit (Sladky et al., 2011) |
| TR 1-2 s | Apply STC; moderate benefit | Still corrects meaningful timing offsets |
| TR < 1 s (multiband) | Skip STC; use temporal derivative in GLM | Minimal offset; correction provides negligible benefit (HCP consortium) |
| Using dynamic causal modeling (DCM) | Mandatory STC | DCM requires precise timing alignment across regions |

Order debate: STC before motion correction is standard in AFNI and SPM. FSL applies STC after motion correction. Both are acceptable; the optimal order depends on the level of motion and slice acquisition pattern (Parker & Razlighi, 2019). Ideally, joint correction would be applied (Roche, 2011), but this is not yet standard in major packages.
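STC tools need the per-slice acquisition times (or the slice order) implied by the sequence. The sketch below computes acquisition times for an ascending interleaved (odd-first) sequence; it is illustrative only, and the actual slice order must be confirmed from the scanner protocol or the BIDS `SliceTiming` field.

```python
import numpy as np

# Sketch: acquisition time of each slice for an interleaved (odd-first,
# 0-indexed even slices first) sequence with evenly spaced excitations.
def slice_times_interleaved(n_slices, tr):
    order = list(range(0, n_slices, 2)) + list(range(1, n_slices, 2))
    times = np.empty(n_slices)
    for k, sl in enumerate(order):
        times[sl] = k * tr / n_slices   # k-th excitation within the TR
    return times

t = slice_times_interleaved(6, 2.0)
# slice 0 is excited first (t = 0); slice 1 is excited fourth (t = 1.0 s)
```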

Step 4: Motion Correction

Align all volumes to a reference volume using 6-parameter rigid body (3 translation, 3 rotation) transformation (Jenkinson et al., 2002). This is the single most critical preprocessing step.

| Parameter | Recommendation | Source |
| --- | --- | --- |
| Degrees of freedom | 6 (rigid body) | Jenkinson et al., 2002 |
| Reference volume | Mean image or middle volume | Jenkinson et al., 2002 |
| Cost function | Normalized correlation | Jenkinson et al., 2002 (MCFLIRT default) |
| Interpolation | Trilinear (during estimation); sinc or spline (final reslice) | Poldrack et al., 2011, Ch. 5 |

Domain insight: Motion correction is inherently imperfect because each slice within a volume was acquired at a different time. When the head moves during a TR, each slice has a slightly different rigid-body transformation, but whole-volume correction applies a single transformation. This is an unavoidable limitation (Jenkinson et al., 2002).

Output: Save the 6 motion parameters for use as confound regressors in the GLM (see fmri-glm-analysis-guide).

Step 5: Distortion Correction

EPI images suffer geometric distortions along the phase-encoding direction due to B0 field inhomogeneities. Distortions are worst near air-tissue boundaries: orbitofrontal cortex, anterior temporal lobes, and inferior temporal regions (Jezzard & Balaban, 1995).

| Method | Data Required | Tool | Source |
| --- | --- | --- | --- |
| Fieldmap-based (FUGUE) | Dual-echo gradient echo fieldmap | FSL FUGUE | Jezzard & Balaban, 1995 |
| Reverse phase-encoding (TOPUP) | Opposite-PE EPI pair (AP/PA) | FSL TOPUP | Andersson et al., 2003 |
| SyN-based (fieldmapless) | T1-weighted image only | ANTs SyN-SDC | fMRIPrep fallback |

Domain warning: If no fieldmap data were acquired, fMRIPrep can perform fieldmapless distortion correction using nonlinear registration to the T1, but this is less accurate than fieldmap-based methods. Always acquire fieldmap data when possible.

Step 6: Coregistration

Align the functional (EPI) image to the subject's structural (T1-weighted) image. This enables projecting functional results onto anatomical space and provides the bridge to template normalization.

  • Method: Boundary-based registration (BBR; Greve & Fischl, 2009) is preferred over intensity-based methods for EPI-to-T1 alignment because it uses white matter boundaries, which are well-defined in both modalities
  • Always visually inspect the coregistration overlay (functional edges on structural image)

Step 7: Spatial Normalization

Warp each subject's brain to a standard template space to enable group-level comparisons.

| Parameter | Recommendation | Source |
| --- | --- | --- |
| Template | MNI152NLin2009cAsym (fMRIPrep default) or MNI152NLin6Asym | Fonov et al., 2011 |
| Method | Nonlinear (ANTs SyN or SPM Unified Segmentation) | Ashburner & Friston, 2005; Avants et al., 2008 |
| Output resolution | 2 mm isotropic (standard) | Convention; matches MNI template resolution |
| For high-resolution data | Match native resolution (e.g., 1.5 mm) | Preserves spatial detail |

Domain warning: Always visually inspect normalization quality. Check that major sulci (central sulcus, Sylvian fissure) and subcortical structures (caudate, putamen) align with the template. Poor normalization is a common but silent source of error in group analyses.

Step 8: Spatial Smoothing

Smoothing with a Gaussian kernel increases SNR, satisfies the smoothness assumptions of Random Field Theory, and reduces inter-subject anatomical variability (Mikl et al., 2008).

| Analysis Type | FWHM Recommendation | Rationale | Source |
| --- | --- | --- | --- |
| Univariate (task GLM) | 2-3x voxel size (e.g., 6-8 mm for 2-3 mm voxels) | Matches expected activation extent; maximizes sensitivity | Mikl et al., 2008; Poldrack et al., 2011 |
| Resting-state connectivity | 4-6 mm | Balances noise reduction and spatial specificity | Ciric et al., 2017 |
| MVPA / decoding | None or <= 2 mm | Smoothing destroys fine-grained spatial patterns essential for decoding | Misaki et al., 2013 |
| Searchlight analysis | None | Searchlight already averages within the sphere | Etzel et al., 2013 |

Domain warning: Smoothing before MVPA is one of the most common preprocessing errors. For multivariate analyses, skip smoothing during preprocessing entirely. If group-level smoothing is needed, apply it only after the first-level pattern analysis is complete (Misaki et al., 2013).
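Neuroimaging tools specify the kernel as FWHM in millimeters, while generic filters take a standard deviation in voxels; the conversion is sigma = FWHM / (2 sqrt(2 ln 2)) ≈ FWHM / 2.3548. A minimal sketch, assuming isotropic voxels and using scipy in place of a neuroimaging package:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: Gaussian smoothing at a given FWHM (mm), isotropic voxels assumed.
def smooth_volume(vol, fwhm_mm, voxel_size_mm):
    # Convert FWHM in mm to a standard deviation in voxel units
    sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_size_mm
    return gaussian_filter(vol, sigma=sigma_vox)

vol = np.random.randn(32, 32, 24)                       # placeholder volume
smoothed = smooth_volume(vol, fwhm_mm=6.0, voxel_size_mm=3.0)
```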

Step 9: Confound Regression

Confound regression removes variance from non-neural sources. This step is typically performed during the statistical model (GLM) rather than as a separate preprocessing step, but the preprocessing pipeline must output the confound time series.

For detailed confound regression guidance, see the fmri-glm-analysis-guide skill.

Key confounds to extract during preprocessing:

  • 6 motion parameters (from Step 4) and their temporal derivatives and squared terms (24-parameter model; Friston et al., 1996)
  • Framewise displacement (FD; Power et al., 2012)
  • DVARS (Power et al., 2012)
  • aCompCor components from white matter and CSF (Behzadi et al., 2007)
  • Global signal (optional; controversial for connectivity analyses; Murphy & Fox, 2017)
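Two of these confounds are simple to compute from the saved motion parameters. The sketch below assumes a (T, 6) array with translations in mm and rotations in radians (column order is tool-dependent; check your software's output). Note that Friston et al. (1996) used one-volume-lagged parameters; some tools use temporal derivatives instead.

```python
import numpy as np

def framewise_displacement(motion, head_radius_mm=50.0):
    """Power et al. (2012) FD: sum of absolute backward differences,
    rotations converted to mm as arc length on a 50 mm sphere."""
    d = np.diff(motion, axis=0)
    d[:, 3:] *= head_radius_mm            # radians -> mm
    fd = np.abs(d).sum(axis=1)
    return np.concatenate([[0.0], fd])    # FD undefined for first volume

def friston_24(motion):
    """24-parameter model: params, one-volume-lagged params, and squares."""
    lagged = np.vstack([np.zeros((1, 6)), motion[:-1]])
    base = np.hstack([motion, lagged])
    return np.hstack([base, base ** 2])

motion = np.cumsum(np.random.randn(100, 6) * 0.01, axis=0)  # placeholder
fd = framewise_displacement(motion)
conf24 = friston_24(motion)
```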

Pipeline Variants by Analysis Type

| Step | Task Activation | Resting-State Connectivity | MVPA |
| --- | --- | --- | --- |
| Non-steady-state removal | Yes | Yes | Yes |
| Slice timing correction | Yes (if TR > 1 s) | Yes (if TR > 1 s) | Yes (if TR > 1 s) |
| Motion correction | Yes | Yes | Yes |
| Distortion correction | Yes | Yes | Yes |
| Coregistration | Yes | Yes | Yes |
| Normalization | Yes (2 mm) | Yes (2 mm) | Optional; can stay in native space |
| Smoothing | 6-8 mm FWHM | 4-6 mm FWHM | None |
| High-pass filter | 128 s (in GLM) | 0.01 Hz (in preprocessing) | 128 s (in GLM) |
| Band-pass filter | No | 0.01-0.1 Hz (controversial) | No |
| Motion threshold (FD) | 0.5 mm (spike regress) | 0.2 mm (scrub or regress) | 0.5 mm |
| Confound model | 24-param + aCompCor | 36-param or aCompCor + GSR | Minimal (6-param motion) |
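For resting-state data, the 0.01-0.1 Hz band-pass can be sketched with a zero-phase Butterworth filter, as below. This is a generic scipy illustration, not the filter any particular package uses; in practice, filtering should be applied after or jointly with confound regression to avoid reintroducing filtered-out noise.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Sketch: zero-phase 0.01-0.1 Hz band-pass for resting-state time series.
def bandpass(ts, tr, low=0.01, high=0.1, order=4):
    fs = 1.0 / tr                         # sampling rate from TR (seconds)
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, ts, axis=0)   # forward-backward: no phase shift

tr = 2.0
t = np.arange(240) * tr
# Passband sine (0.05 Hz) + stopband sine (0.2 Hz) + constant offset
sig = np.sin(2 * np.pi * 0.05 * t) + np.sin(2 * np.pi * 0.2 * t) + 1.0
filtered = bandpass(sig[:, None], tr)
```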

fMRIPrep as the Recommended Standard

fMRIPrep (Esteban et al., 2019) is recommended as the default preprocessing tool for most fMRI studies. It provides:

  • Automated, analysis-agnostic preprocessing that adapts to the specific dataset
  • Transparent, reproducible workflow with detailed visual reports for QC
  • Best-in-breed algorithms: ANTs for normalization, FreeSurfer for surface reconstruction, MCFLIRT/ANTs for motion correction
  • BIDS-compatible input and output
  • Comprehensive confound time series output (motion, CompCor, FD, DVARS)

What fMRIPrep does NOT do:

  • Smoothing (intentionally left to the user because the optimal kernel depends on analysis type)
  • Temporal filtering (left to the GLM stage)
  • Confound regression (outputs confounds but does not regress them)
  • Statistical analysis

Domain insight: fMRIPrep outputs the preprocessed BOLD data in both MNI space and native space. For MVPA, use the native-space output. For group analyses, use the MNI-space output.
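A typical invocation via the fmriprep-docker wrapper looks like the sketch below; paths, participant label, and FreeSurfer license location are placeholders for your own setup.

```shell
# Sketch: run fMRIPrep on one participant, requesting both MNI-space
# (2 mm) and native anatomical-space outputs.
fmriprep-docker /data/bids /data/derivatives participant \
    --participant-label 01 \
    --fs-license-file /path/to/license.txt \
    --output-spaces MNI152NLin2009cAsym:res-2 anat
```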

Common Pitfalls

  1. Smoothing before MVPA: Spatial smoothing destroys the fine-grained voxel patterns that multivariate methods rely on. Skip smoothing entirely for MVPA and searchlight analyses (Misaki et al., 2013)

  2. Wrong interpolation for final reslicing: Use sinc or spline interpolation for the final reslice step. Trilinear interpolation is acceptable during motion parameter estimation but introduces blurring in final images (Poldrack et al., 2011, Ch. 5)

  3. Not checking normalization quality: Normalization can fail silently, especially in populations with atypical anatomy (older adults, patients with lesions, pediatric). Always visually inspect the overlap of normalized functional images with the template

  4. Motion-connectivity confound in resting-state: Head motion creates spurious short-distance correlations and reduces long-distance correlations in functional connectivity (Power et al., 2012). For resting-state analyses, use stringent motion thresholds (FD < 0.2 mm; Power et al., 2014) and aggressive confound regression (Ciric et al., 2017)

  5. Skipping distortion correction: Without distortion correction, orbitofrontal and anterior temporal signals are mislocalized by several millimeters. This is especially problematic for studies of emotion, reward, and memory, which involve these regions (Jezzard & Balaban, 1995)

  6. Applying band-pass filtering for task fMRI: Band-pass filtering (0.01-0.1 Hz) is appropriate for resting-state connectivity analysis but removes task-related signal in event-related and block designs. For task fMRI, use only high-pass filtering in the GLM

  7. Not discarding non-steady-state volumes: The first few volumes have inflated signal intensity. If not removed, they can bias motion estimates and inflate variance (Poldrack et al., 2011, Ch. 5)

For detailed step-by-step parameters and software-specific guidance, see references/step-by-step-pipeline.md. For quality control metrics, thresholds, and exclusion criteria, see references/quality-control.md.

Key References

  • Andersson, J. L. R., Skare, S., & Ashburner, J. (2003). How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. NeuroImage, 20(2), 870-888.
  • Ashburner, J., & Friston, K. J. (2005). Unified segmentation. NeuroImage, 26(3), 839-851.
  • Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2008). Symmetric diffeomorphic image registration with cross-correlation. Medical Image Analysis, 12(1), 26-41.
  • Behzadi, Y., Restom, K., Liau, J., & Liu, T. T. (2007). A component based noise correction method (CompCor) for BOLD and perfusion based fMRI. NeuroImage, 37(1), 90-101.
  • Ciric, R., Wolf, D. H., Power, J. D., et al. (2017). Benchmarking of participant-level confound regression strategies. NeuroImage, 154, 174-187.
  • Esteban, O., Birman, D., Schaer, M., et al. (2017). MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites. PLoS ONE, 12(9), e0184661.
  • Esteban, O., Markiewicz, C. J., Blair, R. W., et al. (2019). fMRIPrep: a robust preprocessing pipeline for functional MRI. Nature Methods, 16, 111-116.
  • Fonov, V. S., Evans, A. C., Botteron, K., et al. (2011). Unbiased average age-appropriate atlases for pediatric studies. NeuroImage, 54(1), 313-327.
  • Friston, K. J., Williams, S., Howard, R., et al. (1996). Movement-related effects in fMRI time-series. Magnetic Resonance in Medicine, 35(3), 346-355.
  • Gorgolewski, K. J., Auer, T., Calhoun, V. D., et al. (2016). The brain imaging data structure. Scientific Data, 3, 160044.
  • Greve, D. N., & Fischl, B. (2009). Accurate and robust brain image alignment using boundary-based registration. NeuroImage, 48(1), 63-72.
  • Jenkinson, M., Bannister, P., Brady, J. M., & Smith, S. M. (2002). Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage, 17(2), 825-841.
  • Jezzard, P., & Balaban, R. S. (1995). Correction for geometric distortion in echo planar images from B0 field variations. Magnetic Resonance in Medicine, 34(1), 65-73.
  • Mikl, M., Marecek, R., Hlustik, P., et al. (2008). Effects of spatial smoothing on fMRI group inferences. Magnetic Resonance Imaging, 26(4), 490-503.
  • Misaki, M., Luh, W. M., & Bandettini, P. A. (2013). The effect of spatial smoothing on fMRI decoding of columnar-level organization. NeuroImage, 78, 13-22.
  • Murphy, K., & Fox, M. D. (2017). Towards a consensus regarding global signal regression for resting state functional connectivity MRI. NeuroImage, 154, 169-173.
  • Parker, D. B., & Razlighi, Q. R. (2019). The benefit of slice timing correction in common fMRI preprocessing pipelines. Frontiers in Neuroscience, 13, 821.
  • Poldrack, R. A., Mumford, J. A., & Nichols, T. E. (2011). Handbook of Functional MRI Data Analysis. Cambridge University Press.
  • Power, J. D., Barnes, K. A., Snyder, A. Z., et al. (2012). Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage, 59(3), 2142-2154.
  • Power, J. D., Mitra, A., Laumann, T. O., et al. (2014). Methods to detect, characterize, and remove motion artifact in resting state fMRI. NeuroImage, 84, 320-341.
  • Sladky, R., Friston, K., Trostl, J., et al. (2011). Slice-timing effects and their correction in functional MRI. NeuroImage, 58(2), 588-594.

See references/ for detailed pipeline parameters and quality control procedures.
