# ERP Data Analysis

## Purpose

This skill encodes expert methodological knowledge for analyzing event-related potentials (ERPs) from EEG data. It provides domain-specific parameter recommendations, processing order guidance, component identification criteria, and statistical analysis strategies that a general-purpose programmer or data scientist would not know without specialized training.
## When to Use This Skill

- Designing an ERP preprocessing pipeline for a new study
- Choosing filter settings, reference schemes, or artifact rejection criteria
- Identifying which ERP component to measure and how to define its time window and ROI
- Selecting appropriate amplitude measures (mean, peak, area) for a given component
- Choosing between traditional ANOVA-based analysis and mass univariate approaches
- Reviewing or troubleshooting an existing ERP analysis pipeline
## Research Planning Protocol

Before executing the domain-specific steps below, you MUST:

1. State the research question -- What specific question is this ERP analysis addressing?
2. Justify the method choice -- Why ERP (not fMRI, behavior-only, etc.)? What alternatives were considered?
3. Declare expected outcomes -- Which component(s) do you expect to differ, in what direction?
4. Note assumptions and limitations -- What does this method assume? Where could it mislead?
5. Present the plan to the user and WAIT for confirmation before proceeding.

For detailed methodology guidance, see the research-literacy skill.
## ⚠️ Verification Notice

This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.
## Preprocessing Pipeline Overview

The standard ERP preprocessing pipeline involves 7 ordered steps. Processing order matters and can influence results (the "multiverse" problem; see Lonedo et al., 2020).
### Recommended Default Order

1. Import and inspect raw data -- Check for gross artifacts, disconnected electrodes
2. Filter -- Apply bandpass and optional notch filter
3. Re-reference -- Choose and apply reference scheme
4. Bad channel identification and interpolation -- Identify and interpolate noisy channels
5. ICA-based artifact correction -- Remove eye blinks, saccades, cardiac artifacts
6. Epoching and baseline correction -- Segment continuous data around events
7. Epoch rejection -- Remove remaining bad epochs by amplitude threshold

Multiverse note: Steps 2-5 interact. Filtering before ICA improves decomposition quality (Winkler et al., 2015). Some researchers re-reference after ICA. Document your choices and consider running key alternatives to assess robustness.
For detailed parameters at each step, see `references/preprocessing-pipeline.md`.
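Steps 6 and 7 of the pipeline are simple enough to sketch directly. The following is a minimal numpy illustration (not a replacement for a full EEG toolbox; in practice you would use something like MNE-Python's `mne.Epochs`, which handles events, baseline, and rejection together). All array shapes and values here are hypothetical.

```python
import numpy as np

def epoch(data, sfreq, event_samples, tmin=-0.2, tmax=0.8):
    """Cut continuous data (channels x samples) into epochs around events.

    Assumes every event lies far enough from the recording edges.
    """
    n0, n1 = int(tmin * sfreq), int(tmax * sfreq)
    return np.stack([data[:, s + n0:s + n1] for s in event_samples])

def baseline_correct(epochs, sfreq, tmin=-0.2):
    """Subtract the mean prestimulus voltage from each epoch and channel."""
    n_base = int(-tmin * sfreq)  # number of samples before stimulus onset
    return epochs - epochs[:, :, :n_base].mean(axis=2, keepdims=True)

def reject_epochs(epochs, threshold=100e-6):
    """Drop epochs whose peak-to-peak amplitude exceeds the threshold (volts).

    The +/-100 uV default corresponds to a 200 uV peak-to-peak excursion at
    most; here we use a simple per-channel peak-to-peak criterion.
    """
    ptp = epochs.max(axis=2) - epochs.min(axis=2)  # epochs x channels
    keep = (ptp < threshold).all(axis=1)
    return epochs[keep], keep
```

The functions operate on a plain `(channels, samples)` array plus a sampling frequency, so they can be unit-tested against synthetic data before touching real recordings.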
## Key Parameter Defaults

| Step | Parameter | Default | Source |
|---|---|---|---|
| Filter | High-pass | 0.1 Hz | Luck, 2014, Ch. 5; Tanner et al., 2015 |
| Filter | Low-pass | 30 Hz | Luck, 2014, Ch. 5 |
| Filter | Filter type | FIR, zero-phase | Widmann et al., 2015 |
| Re-reference | Scheme | Average reference | Luck, 2014, Ch. 5; Keil et al., 2014 |
| Artifact rejection | Threshold | ±100 µV | Luck, 2014 |
| Bad channels | Max proportion | < 10% of total channels | Keil et al., 2014 |
| Epoching | Window | -200 ms to 800 ms | Luck, 2014, Ch. 5 |
| Baseline | Window | -200 ms to 0 ms | Luck, 2014, Ch. 5 |
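These defaults translate naturally into a single configuration object that a pipeline script can consume. The sketch below is a hypothetical layout; the key names loosely follow MNE-Python conventions (`l_freq`/`h_freq` as in `raw.filter`, `reject` in volts as in `mne.Epochs`), but nothing here is tied to a specific library.

```python
# Hypothetical configuration capturing the defaults above (SI units: Hz, V, s).
ERP_DEFAULTS = {
    "filter": {"l_freq": 0.1, "h_freq": 30.0, "method": "fir", "phase": "zero"},
    "reference": "average",
    "reject": {"eeg": 100e-6},          # ±100 µV threshold, expressed in volts
    "max_bad_channel_fraction": 0.10,   # interpolate at most 10% of channels
    "epoch": {"tmin": -0.2, "tmax": 0.8},
    "baseline": (-0.2, 0.0),
}
```

Keeping the parameters in one dict (or a YAML/JSON file) makes the multiverse problem tractable: alternative pipelines become alternative configs that you can diff and report.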
## ERP Component Identification

To measure an ERP component correctly, you need three pieces of information:

- Latency range -- The expected time window for the component
- Scalp distribution -- Where on the scalp the component is maximal
- Functional context -- What experimental manipulation elicits the component
### Component Selection Decision Logic

1. Identify your cognitive process of interest (e.g., semantic processing, error monitoring)
2. Look up the canonical component in `references/erp-components.md`
3. Verify the component matches your paradigm -- the same voltage deflection in a different paradigm may reflect a different component (Luck, 2014, Ch. 2)
4. Use the recommended ROI and time window as your starting point, then verify against your own grand average waveform
5. For disputed components, read the Interpretation Notes in `references/erp-components.md` before committing to a theoretical interpretation
### Component Domains at a Glance

| Domain | Key Components | Reference File Section |
|---|---|---|
| Visual perception | P1, N1, N170, N2pc | erp-components.md Section 1 |
| Language | N400, P600/LPC, ELAN, LAN | erp-components.md Section 2 |
| Executive control | ERN/Ne, Pe, N2, CNV | erp-components.md Section 3 |
| Memory | FN400, parietal old/new, Dm | erp-components.md Section 4 |
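In code, the three pieces of identifying information (latency range, scalp distribution, functional context) can be kept together per component. The windows and ROIs below are illustrative starting points drawn from commonly cited literature values, not authoritative definitions; always check the component database and your own grand-average waveform before committing.

```python
# Illustrative starting points only -- verify against references/erp-components.md
# and your own grand-average waveforms before use. Windows in seconds
# (stimulus-locked, except the response-locked ERN).
COMPONENT_HINTS = {
    "N170": {"window": (0.13, 0.20), "roi": ["P7", "P8", "PO7", "PO8"],
             "context": "faces vs. non-face objects"},
    "N400": {"window": (0.30, 0.50), "roi": ["Cz", "CPz", "Pz"],
             "context": "semantic anomaly / expectancy violation"},
    "ERN":  {"window": (0.00, 0.10), "roi": ["FCz", "Cz"],
             "context": "response-locked errors"},
    "P300": {"window": (0.30, 0.60), "roi": ["Pz", "CPz", "POz"],
             "context": "oddball / target detection"},
}
```

A lookup table like this makes step 4 of the decision logic explicit and auditable: the a priori window and ROI are declared in code before any condition difference is inspected.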
## Statistical Analysis Strategy

### Amplitude Measurement

Choose your measurement approach based on the component:

| Measure | Best For | Avoid When | Source |
|---|---|---|---|
| Mean amplitude | Broad components (N400, P300, LPC) | Component is sharp and brief | Luck, 2014, Ch. 9 |
| Peak amplitude | Sharp, well-defined peaks (P1, N1) | Component has no clear peak or has multiple peaks | Luck, 2014, Ch. 9 |
| Peak latency | Measuring processing speed | Component lacks a clear peak | Luck, 2014, Ch. 9 |
| 50% fractional area latency | Latency with unequal amplitudes across conditions | Rarely inappropriate; preferred over peak latency | Luck, 2014, Ch. 9; Kiesel et al., 2008 |
| Signed/unsigned area | Components spanning positive and negative voltages | Simple, unipolar components | Luck, 2014, Ch. 9 |
### Time Window and ROI Selection

- A priori selection (preferred): Choose the time window and electrodes from prior literature before looking at your data (Luck & Gaspelin, 2017)
- Collapsed localizer: Average across all conditions to identify the window/ROI, then test differences between conditions within that window (Luck & Gaspelin, 2017)
- Data-driven: Use a mass univariate approach to avoid arbitrary window selection (see below)

Critical: Never select a time window or ROI by looking at the difference between conditions. This inflates Type I error (Luck & Gaspelin, 2017).
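The collapsed-localizer logic can be demonstrated end to end with synthetic data. Everything below (subject counts, noise levels, the simulated positivity at 400 ms) is hypothetical; the point is that the window is chosen from the collapsed average, which is blind to the A-vs-B difference being tested.

```python
import numpy as np

rng = np.random.default_rng(0)
times = np.arange(-0.2, 0.8, 0.004)  # seconds, 4 ms sampling
n_sub = 20

# Simulated data: subjects x time, with a positivity peaking near 0.4 s
peak = np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
cond_a = 3.0 * peak + rng.normal(0, 0.5, (n_sub, times.size))
cond_b = 2.0 * peak + rng.normal(0, 0.5, (n_sub, times.size))

# Step 1: collapse across conditions -- this localizer never sees the A-B
# difference, so picking the window here does not bias the later test.
collapsed = (cond_a + cond_b).mean(axis=0) / 2
peak_t = times[np.argmax(collapsed)]
win = (times >= peak_t - 0.05) & (times <= peak_t + 0.05)  # ±50 ms window

# Step 2: test the condition difference inside the localizer-defined window
a = cond_a[:, win].mean(axis=1)
b = cond_b[:, win].mean(axis=1)
diff = a - b
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n_sub))
```

Contrast this with the forbidden procedure: computing `cond_a - cond_b` first and centering the window on the largest difference, which is exactly the double-dipping that inflates Type I error.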
### Choosing a Statistical Framework

```
Is your hypothesis about a specific, well-characterized component?
|
+-- YES --> Do you have an a priori time window and ROI?
|           |
|           +-- YES --> Traditional ANOVA on mean/peak amplitude
|           |
|           +-- NO  --> Use collapsed localizer, then ANOVA
|
+-- NO --> Is your effect potentially distributed across time/space?
           |
           +-- YES --> Cluster-based permutation test (Maris & Oostenveld, 2007)
           |
           +-- NO  --> Mass univariate with FDR correction (Groppe et al., 2011)
```
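To make the cluster-based branch concrete, here is a minimal single-channel (1-D over time) sketch of the Maris & Oostenveld (2007) idea: form clusters of contiguous suprathreshold t-values, score each by its summed |t| ("cluster mass"), and build the null from sign-flip permutations of the paired differences. Real analyses use validated implementations (e.g., MNE-Python's `permutation_cluster_test` or FieldTrip), which also handle multiple channels and spatial adjacency; this sketch exists only to show the logic.

```python
import numpy as np

def clusters_above(t_vals, thresh):
    """Contiguous runs where |t| exceeds the cluster-forming threshold."""
    above = np.abs(t_vals) > thresh
    runs, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, above.size))
    return runs

def paired_t(diff):
    """Pointwise paired t statistic for a subjects x time difference array."""
    return diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(diff.shape[0]))

def cluster_permutation_test(cond_a, cond_b, thresh=2.0, n_perm=1000, seed=0):
    """Returns (start, end, mass, p) per observed cluster, p from the
    permutation distribution of the maximum cluster mass."""
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b                       # subjects x time
    t_obs = paired_t(diff)
    obs = [(s, e, np.abs(t_obs[s:e]).sum()) for s, e in clusters_above(t_obs, thresh)]
    null = np.zeros(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1, 1], size=diff.shape[0])[:, None]
        t_p = paired_t(diff * signs)             # sign-flip: valid under H0
        masses = [np.abs(t_p[s:e]).sum() for s, e in clusters_above(t_p, thresh)]
        null[p] = max(masses) if masses else 0.0
    return [(s, e, mass, (null >= mass).mean()) for s, e, mass in obs]

# Hypothetical demo: 15 subjects, a real effect between samples 30 and 50
rng = np.random.default_rng(1)
signal = np.zeros(100)
signal[30:50] = 1.0
a = signal + rng.normal(0, 0.5, (15, 100))
b = rng.normal(0, 0.5, (15, 100))
results = cluster_permutation_test(a, b)
```

Note the inferential fine print from the pitfalls below: the p-value attaches to the cluster as a whole, not to its individual time points, so the cluster's exact start and end should not be interpreted as the effect's onset and offset.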
For detailed statistical method descriptions, see `references/statistical-approaches.md`.
## Common Pitfalls

- Double-dipping: Selecting time windows or electrodes based on the effect of interest, then testing that same effect in the selected window (Kriegeskorte et al., 2009)
- Excessive filtering: High-pass cutoffs above 0.1 Hz can distort slow components such as the P300, N400, and LPC (Tanner et al., 2015; Widmann & Schröger, 2012)
- Component overlap: Apparent differences in one component may be driven by overlap from an adjacent component; consider difference waves and component-specific analyses (Luck, 2014, Ch. 2)
- Ignoring trial count imbalance: Unequal trial counts across conditions produce differential noise levels, biasing peak amplitude and latency measures (Luck, 2014, Ch. 9)
- Over-interpreting cluster extent: Cluster permutation tests control family-wise error at the cluster level, but cannot localize effects to specific time points or channels (Maris & Oostenveld, 2007)
- Reporting only p-values: Always report effect sizes (partial eta-squared for ANOVA, Cohen's d for t-tests) alongside p-values (Keil et al., 2014)
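The two effect sizes named in the last pitfall are one-liners, so there is no excuse to omit them. The paired Cohen's d below uses the SD of the difference scores as the standardizer (one of several conventions); partial eta-squared is recovered from the F statistic and its degrees of freedom.

```python
import numpy as np

def cohens_d_paired(x, y):
    """Cohen's d for paired samples: mean difference / SD of the differences.

    Note: standardizing by the difference-score SD is one convention among
    several; report which one you used.
    """
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return diff.mean() / diff.std(ddof=1)

def partial_eta_squared(f, df_effect, df_error):
    """Partial eta-squared recovered from an F statistic and its dfs."""
    return (f * df_effect) / (f * df_effect + df_error)
```

Pairing each reported p-value with one of these (plus the descriptive means) satisfies the effect-size item on the reporting checklist below at essentially zero cost.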
## Minimum Reporting Checklist

Based on Keil et al. (2014) and Luck (2014):

- Number of accepted trials per condition (minimum 30 per condition recommended; Boudewyn et al., 2018)
- Filter settings (type, cutoff frequencies, roll-off)
- Reference scheme
- Artifact rejection method and criteria (ICA, threshold, proportion rejected)
- Number and identity of interpolated channels
- Epoch window and baseline correction window
- Component time window and ROI electrodes (with justification)
- Amplitude measure used (mean, peak, area)
- Statistical test, correction method, and effect sizes
## References

- Boudewyn, M. A., Luck, S. J., Farrens, J. L., & Kappenman, E. S. (2018). How many trials does it take to get a significant ERP effect? Psychophysiology, 55(6), e13049.
- Groppe, D. M., Urbach, T. P., & Kutas, M. (2011). Mass univariate analysis of event-related brain potentials/fields I. Psychophysiology, 48(12), 1711-1725.
- Keil, A., et al. (2014). Committee report: Publication guidelines and recommendations for studies using EEG and MEG. Psychophysiology, 51(1), 1-21.
- Kiesel, A., Miller, J., Jolicoeur, P., & Brisson, B. (2008). Measurement of ERP latency differences. Psychophysiology, 45(4), 517-523.
- Kriegeskorte, N., Simmons, W. K., Bellgowan, P. S., & Baker, C. I. (2009). Circular analysis in systems neuroscience. Nature Neuroscience, 12(5), 535-540.
- Lonedo, A., et al. (2020). The multiverse of ERP analysis pipelines. NeuroImage, 209, 116465.
- Luck, S. J. (2014). An Introduction to the Event-Related Potential Technique (2nd ed.). MIT Press.
- Luck, S. J., & Gaspelin, N. (2017). How to get statistically significant effects in any ERP experiment (and why you shouldn't). Psychophysiology, 54(1), 146-157.
- Maris, E., & Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 164(1), 177-190.
- Tanner, D., Morgan-Short, K., & Luck, S. J. (2015). How inappropriate high-pass filters can produce artifactual effects. Psychophysiology, 52(8), 997-1009.
- Widmann, A., & Schröger, E. (2012). Filter effects and filter artifacts in the analysis of electrophysiological data. Frontiers in Psychology, 3, 233.
- Widmann, A., Schröger, E., & Maess, B. (2015). Digital filter design for electrophysiological data. Journal of Neuroscience Methods, 250, 34-46.
- Winkler, I., Debener, S., Müller, K. R., & Tangermann, M. (2015). On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP. Proceedings of the IEEE EMBC, 4101-4105.
See `references/` for detailed parameter tables, the component database, and statistical method descriptions.