Spiking Network Model Builder

Assists in building spiking neural network simulations: neuron models, connectivity, plasticity rules


Purpose

This skill encodes expert methodological knowledge for constructing biologically realistic spiking neural network simulations. A competent programmer without computational neuroscience training will get this wrong because:

  • Neuron model choice determines what phenomena can emerge. A leaky integrate-and-fire (LIF) neuron cannot produce bursting, adaptation, or rebound spikes. If your phenomenon depends on these, you need an AdEx or Izhikevich model, not just a "more complex" model (Izhikevich, 2004).
  • E/I balance is not optional. Cortical networks maintain a tight excitation/inhibition balance. Networks without proper E/I ratio produce either silence or epileptiform runaway activity, neither of which is biologically realistic (Brunel, 2000).
  • Synaptic time constants encode biology. AMPA (fast, ~5 ms), NMDA (slow, ~100 ms), and GABA_A (~10 ms) receptors have fundamentally different dynamics. Using a single generic synapse model erases critical temporal structure (Dayan & Abbott, 2001).
  • Time step selection affects correctness. Too-large integration steps cause LIF neurons to miss spikes and HH neurons to become numerically unstable. The correct step depends on the neuron model, not general ODE intuition (Rotter & Diesmann, 1999).
  • Weight scaling must respect network size. Naive weight choices produce firing rates that change with network size. Balanced networks require 1/sqrt(N) scaling (Brunel, 2000).

When to Use This Skill

  • Constructing a spiking neural network simulation for a research question
  • Choosing a neuron model appropriate for the phenomenon of interest
  • Setting biologically constrained connectivity parameters
  • Implementing synaptic plasticity (STDP, homeostatic, etc.)
  • Validating model outputs against known cortical statistics
  • Selecting simulation software and numerical parameters

Do NOT use this skill for:

  • Rate-based neural network models (use standard ML/deep learning frameworks)
  • Detailed compartmental modeling of single neurons (use NEURON with morphological data)
  • Analyzing experimental neural data (see neural-population-analysis-guide)

Research Planning Protocol

Before executing the domain-specific steps below, you MUST:

  1. State the research question -- What specific question is this analysis/paradigm addressing?
  2. Justify the method choice -- Why is this approach appropriate? What alternatives were considered?
  3. Declare expected outcomes -- What results would support vs. refute the hypothesis?
  4. Note assumptions and limitations -- What does this method assume? Where could it mislead?
  5. Present the plan to the user and WAIT for confirmation before proceeding.

For detailed methodology guidance, see the research-literacy skill.


⚠️ Verification Notice

This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.

Step 1: Select a Neuron Model

Neuron Model Decision Tree

What firing properties does your model need?
|
+-- "Just spikes, basic rate coding, large networks"
| --> Leaky Integrate-and-Fire (LIF)
| Simplest; fastest simulation; no adaptation or bursting
|
+-- "Spike initiation sharpness matters"
| --> Exponential IF (EIF)
| Adds realistic spike onset; still single-variable
|
+-- "Spike-frequency adaptation or bursting"
| --> Adaptive Exponential IF (AdEx)
| Two variables; can produce regular spiking, bursting,
| intrinsic oscillations, adaptation
|
+-- "Diverse firing patterns with minimal complexity"
| --> Izhikevich model
| Four parameters; 20+ firing patterns; fast to simulate
|
+-- "Biophysically detailed ion channel dynamics"
 --> Hodgkin-Huxley (HH)
 Four variables; channel-level accuracy; slow to simulate
 Use only when ion channel pharmacology is relevant

Neuron Model Parameters

Leaky Integrate-and-Fire (LIF)

| Parameter | Symbol | Value | Source |
| --- | --- | --- | --- |
| Resting potential | V_rest | -65 mV | Dayan & Abbott, 2001 |
| Threshold | V_thresh | -50 mV | Dayan & Abbott, 2001 |
| Reset potential | V_reset | -65 mV | Dayan & Abbott, 2001 |
| Membrane time constant | tau_m | 20 ms | Dayan & Abbott, 2001 |
| Membrane resistance | R_m | 100 MOhm (typical cortical) | Dayan & Abbott, 2001 |
| Refractory period | t_ref | 2 ms (absolute) | Dayan & Abbott, 2001 |
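
As a concrete illustration (ours, not part of the upstream skill), the LIF parameters above drop directly into a pure-Python forward-Euler loop. `simulate_lif` and its argument names are our own; units follow the table (R_m in MOhm, input current in nA, so R_m * I is in mV):

```python
def simulate_lif(I_ext_nA, dt=0.1, t_max=200.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 tau_m=20.0, r_m=100.0, t_ref=2.0):
    """Forward-Euler LIF: tau_m * dV/dt = -(V - V_rest) + R_m * I.
    Times in ms, voltages in mV; returns spike times in ms."""
    v = v_rest
    spikes = []
    refractory_until = -1.0
    for step in range(int(t_max / dt)):
        t = step * dt
        if t < refractory_until:      # absolute refractory period
            v = v_reset
            continue
        v += (-(v - v_rest) + r_m * I_ext_nA) * dt / tau_m
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
            refractory_until = t + t_ref
    return spikes
```

With 0.2 nA (a 20 mV steady-state depolarization) the neuron fires repeatedly; with 0.1 nA the steady state sits at -55 mV, below threshold, and the neuron stays silent, illustrating the LIF's hard rheobase.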

Exponential Integrate-and-Fire (EIF)

| Parameter | Symbol | Value | Source |
| --- | --- | --- | --- |
| All LIF parameters | -- | Same as above | Dayan & Abbott, 2001 |
| Sharpness of spike initiation | Delta_T | 2 mV | Fourcaud-Trocme et al., 2003 |
| Spike detection threshold | V_peak | 0 mV or 20 mV | Fourcaud-Trocme et al., 2003 |

Adaptive Exponential IF (AdEx)

| Parameter | Symbol | Value | Source |
| --- | --- | --- | --- |
| Subthreshold adaptation | a | 4 nS | Brette & Gerstner, 2005 |
| Spike-triggered adaptation | b | 0.08 nA (80 pA) | Brette & Gerstner, 2005 |
| Adaptation time constant | tau_w | 100--300 ms | Brette & Gerstner, 2005 |
| Spike initiation sharpness | Delta_T | 2 mV | Brette & Gerstner, 2005 |
| All EIF parameters | -- | Same as EIF above | Brette & Gerstner, 2005 |

AdEx firing patterns by parameter regime (Brette & Gerstner, 2005; Naud et al., 2008):

| Pattern | a (nS) | b (nA) | tau_w (ms) | Typical neuron type |
| --- | --- | --- | --- | --- |
| Regular spiking | 4 | 0.08 | 150 | Cortical pyramidal |
| Bursting | 4 | 0.5 | 100 | Intrinsically bursting |
| Fast spiking | 0 | 0 | -- | PV+ interneuron (no adaptation) |
| Adapting | 4 | 0.08 | 300 | Slow-adapting pyramidal |
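
For illustration only, a minimal forward-Euler AdEx integrator using the adaptation parameters above. The membrane constants (C = 281 pF, g_L = 30 nS, E_L = -70.6 mV, V_T = -50.4 mV, tau_w = 144 ms) are our assumed defaults taken from the Brette & Gerstner (2005) fits and should be verified against the paper, per this skill's verification notice:

```python
import math

def simulate_adex(I_pA, dt=0.1, t_max=500.0,
                  C=281.0, g_L=30.0, E_L=-70.6, V_T=-50.4,
                  delta_T=2.0, a=4.0, b=80.0, tau_w=144.0,
                  V_reset=-70.6, V_peak=0.0):
    """Forward-Euler AdEx (units: pF, nS, mV, ms, pA).
    Spike when V crosses V_peak; then V -> V_reset, w -> w + b."""
    v, w = E_L, 0.0
    spikes = []
    for step in range(int(t_max / dt)):
        exp_term = g_L * delta_T * math.exp((v - V_T) / delta_T)
        dv = (-g_L * (v - E_L) + exp_term - w + I_pA) * dt / C
        dw = (a * (v - E_L) - w) * dt / tau_w
        v += dv
        w += dw
        if v >= V_peak:               # spike detected; reset + adapt
            spikes.append(step * dt)
            v = V_reset
            w += b
    return spikes
```

Setting b = 0 removes spike-triggered adaptation and yields faster sustained firing for the same drive, which is one quick way to check that the adaptation variable is actually doing work.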

Izhikevich Model

The model uses two variables (v, u) with four parameters (a, b, c, d) (Izhikevich, 2003):

| Pattern | a | b | c (mV) | d | Source |
| --- | --- | --- | --- | --- | --- |
| Regular spiking | 0.02 | 0.2 | -65 | 8 | Izhikevich, 2003 |
| Intrinsically bursting | 0.02 | 0.2 | -55 | 4 | Izhikevich, 2003 |
| Chattering | 0.02 | 0.2 | -50 | 2 | Izhikevich, 2003 |
| Fast spiking | 0.1 | 0.2 | -65 | 2 | Izhikevich, 2003 |
| Low-threshold spiking | 0.02 | 0.25 | -65 | 2 | Izhikevich, 2003 |
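
A sketch of the model itself, using the regular-spiking row as defaults (the step current I = 10 is a commonly used test value, not a prescription from this skill). Integration follows the paper's scheme of 1 ms steps with v updated in two 0.5 ms half-steps:

```python
def simulate_izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                        t_max_ms=300):
    """Izhikevich (2003): v' = 0.04v^2 + 5v + 140 - u + I,
    u' = a(bv - u); on v >= 30 mV: v -> c, u -> u + d."""
    v = -65.0
    u = b * v
    spikes = []
    for t in range(t_max_ms):
        for _ in range(2):            # two 0.5 ms half-steps for v
            v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += a * (b * v - u)
        if v >= 30.0:                 # spike; record and reset
            spikes.append(t)
            v, u = c, u + d
    return spikes
```

Swapping in the fast-spiking row (a = 0.1, d = 2) reproduces the qualitative table prediction: weaker adaptation, at least as many spikes for the same drive.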

Hodgkin-Huxley (HH)

Use only when biophysical detail is required. See references/hh-parameters.md for the full parameter set. Key values (Hodgkin & Huxley, 1952):

  • g_Na = 120 mS/cm^2, E_Na = 50 mV
  • g_K = 36 mS/cm^2, E_K = -77 mV
  • g_L = 0.3 mS/cm^2, E_L = -54.4 mV
  • C_m = 1 uF/cm^2

Step 2: Configure Synapses

Synaptic Time Constants

| Receptor | tau_rise | tau_decay | Net tau_syn | Source |
| --- | --- | --- | --- | --- |
| AMPA | ~0.5 ms | ~5 ms | 5 ms (single exponential) | Dayan & Abbott, 2001 |
| NMDA | ~2 ms | ~100 ms | 100 ms (single exponential) | Dayan & Abbott, 2001 |
| GABA_A | ~0.5 ms | ~10 ms | 10 ms (single exponential) | Dayan & Abbott, 2001 |
| GABA_B | ~50 ms | ~200 ms | 200 ms (single exponential) | Dayan & Abbott, 2001 |
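
The "Net tau_syn" column is what a single-exponential synapse model consumes. As an illustrative sketch (our function, not upstream code), an exact per-step decay factor exp(-dt/tau) keeps the exponential decay error-free regardless of dt:

```python
import math

def exp_synapse_trace(spike_times_ms, tau_ms, w=1.0, dt=0.1, t_max=50.0):
    """Single-exponential synapse: the gating variable g jumps by w at
    each presynaptic spike and decays as dg/dt = -g / tau between
    spikes. Returns the sampled trace of g."""
    decay = math.exp(-dt / tau_ms)        # exact one-step propagator
    spike_steps = {round(t / dt) for t in spike_times_ms}
    g, trace = 0.0, []
    for step in range(int(t_max / dt)):
        if step in spike_steps:
            g += w
        trace.append(g)
        g *= decay
    return trace
```

Because the update is linear, responses to multiple spikes superpose; with the AMPA value tau = 5 ms, the trace falls to 1/e of its peak exactly 5 ms after an isolated spike.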

Conductance-Based vs. Current-Based Synapses

| Type | Equation | When to Use | Source |
| --- | --- | --- | --- |
| Current-based | I_syn = w * g(t) | Large networks; faster simulation; when voltage-dependent effects are unimportant | Brunel, 2000 |
| Conductance-based | I_syn = g(t) * (V - E_rev) | When synaptic interactions depend on membrane potential (e.g., NMDA voltage dependence, shunting inhibition) | Dayan & Abbott, 2001 |

Domain judgment: Current-based synapses are appropriate for most network-level studies. Switch to conductance-based when the research question involves voltage-dependent effects (NMDA Mg2+ block, shunting inhibition) or when accurate I-V relationships matter (Brunel, 2000; Dayan & Abbott, 2001).

Short-Term Plasticity: Tsodyks-Markram Model

The Tsodyks-Markram (TM) model captures short-term facilitation and depression (Tsodyks & Markram, 1997):

| Parameter | Facilitating synapse | Depressing synapse | Source |
| --- | --- | --- | --- |
| U (initial release prob.) | 0.1 | 0.5 | Tsodyks & Markram, 1997 |
| tau_rec (recovery time) | 800 ms | 800 ms | Tsodyks & Markram, 1997 |
| tau_fac (facilitation time) | 1000 ms | 0 ms (no facilitation) | Tsodyks & Markram, 1997 |
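
One common event-driven formulation of the TM model, as an illustrative sketch (update conventions vary slightly between papers, so treat this as one reasonable variant rather than the canonical one):

```python
import math

def tm_release_amplitudes(isi_ms, n_spikes, U=0.5,
                          tau_rec=800.0, tau_fac=0.0):
    """Event-driven Tsodyks-Markram synapse: relative release
    amplitude u*x at each of n_spikes spikes with a fixed ISI.
    tau_fac == 0 pins u at U (purely depressing synapse)."""
    x = 1.0                              # available resources
    u = U if tau_fac == 0 else 0.0       # utilization
    amps = []
    for i in range(n_spikes):
        if i > 0:
            # resources recover toward 1 between spikes
            x = 1.0 + (x - 1.0) * math.exp(-isi_ms / tau_rec)
            if tau_fac > 0:
                u *= math.exp(-isi_ms / tau_fac)
        if tau_fac > 0:
            u += U * (1.0 - u)           # facilitation increment
        amps.append(u * x)
        x -= u * x                       # resources consumed
    return amps
```

With the table's depressing parameters, successive releases at 20 Hz shrink monotonically; with the facilitating parameters, the second release exceeds the first.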

Step 3: Configure Network Connectivity

Excitatory/Inhibitory Balance

| Parameter | Value | Source |
| --- | --- | --- |
| Excitatory fraction | 80% of neurons | Braitenberg & Schüz, 1998 |
| Inhibitory fraction | 20% of neurons | Braitenberg & Schüz, 1998 |
| E-to-E connection probability | 10--20% (random) | Brunel, 2000 |
| E-to-I connection probability | 10--20% | Brunel, 2000 |
| I-to-E connection probability | 10--20% | Brunel, 2000 |
| I-to-I connection probability | 10--20% | Brunel, 2000 |

Weight Scaling for Balanced Networks

For a balanced network to produce biologically realistic asynchronous irregular (AI) firing (Brunel, 2000):

  • Excitatory weight: J_E = J_0 / sqrt(N_E * p), where N_E is excitatory population size and p is connection probability
  • Inhibitory weight: J_I = -g * J_E, where g > 1 (typically g = 4--8 for the AI regime)
  • External drive: Poisson input to maintain target firing rates

Domain judgment: The ratio g = J_I/J_E (relative inhibitory strength) determines the network regime. g < 4 produces synchronous regular firing; g = 4--8 produces the biologically realistic asynchronous irregular (AI) state; g >> 8 produces very low firing rates or silence (Brunel, 2000).
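
The scaling rule is a one-liner; this illustrative helper (our naming, J in arbitrary units) makes the size-invariance property explicit:

```python
import math

def balanced_weights(n_exc, p=0.1, j0=1.0, g=5.0):
    """1/sqrt(K) weight scaling for the balanced AI regime, where
    K = n_exc * p is the expected excitatory in-degree.
    Returns (J_E, J_I); g is the relative inhibitory strength."""
    j_e = j0 / math.sqrt(n_exc * p)
    j_i = -g * j_e
    return j_e, j_i
```

Quadrupling N_E halves J_E, so input fluctuations stay the same magnitude as the network grows; naive (unscaled) weights would instead drive firing rates up with network size.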

Network Size

| Scale | Neurons | Typical Use | Source |
| --- | --- | --- | --- |
| Minimal | 100--500 | Quick tests; parameter exploration | Expert consensus |
| Cortical column | 1,000--10,000 | Standard for cortical circuit models | Brunel, 2000 |
| Large-scale | 10,000--100,000 | Multi-area models; detailed column | Potjans & Diesmann, 2014 |

Step 4: Implement Plasticity Rules

Spike-Timing-Dependent Plasticity (STDP)

Standard pair-based STDP parameters (Bi & Poo, 1998; Song et al., 2000):

| Parameter | Symbol | Value | Source |
| --- | --- | --- | --- |
| Potentiation time constant | tau_+ | 20 ms | Bi & Poo, 1998 |
| Depression time constant | tau_- | 20 ms | Bi & Poo, 1998 |
| Potentiation amplitude | A_+ | 0.01 (relative) | Song et al., 2000 |
| Depression amplitude | A_- | -0.012 (magnitude slightly greater than A_+) | Song et al., 2000 |
| Maximum weight | w_max | Set to prevent runaway | Song et al., 2000 |

Domain judgment: The asymmetry |A_-| > A_+ is critical. Without it, STDP drives all weights to their maximum value (runaway potentiation). The slight depression bias ensures stable weight distributions (Song et al., 2000). Additional stabilization mechanisms (weight dependence, homeostatic plasticity) are often needed in practice.
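
The pair-based window with these parameters can be sketched as follows (an illustrative helper; the `delta_t == 0` convention is ours, and papers differ on it):

```python
import math

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=-0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window. delta_t = t_post - t_pre:
    positive (pre before post) -> potentiation, otherwise
    depression (delta_t == 0 falls in the depression branch here)."""
    if delta_t_ms > 0:
        return a_plus * math.exp(-delta_t_ms / tau_plus)
    return a_minus * math.exp(delta_t_ms / tau_minus)
```

The depression bias is visible directly: at equal |delta_t|, the depression branch has the larger magnitude, which is what keeps the weight distribution from collapsing onto w_max.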

Rate-Based Plasticity: BCM Rule

The Bienenstock-Cooper-Munro (BCM) rule provides a stable, rate-based plasticity rule (Bienenstock et al., 1982):

  • Learning rule: dw/dt = eta * y * (y - theta_m) * x
  • Sliding threshold: theta_m = E[y^2], ensuring stability
  • Where y = postsynaptic rate, x = presynaptic rate, eta = learning rate
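
The rule above, written out as an illustrative one-step update (our function; theta_m would be maintained as a running average of y^2 in a real simulation):

```python
def bcm_dw(x, y, theta_m, eta=1e-3):
    """BCM weight change dw/dt = eta * y * (y - theta_m) * x:
    potentiation when the postsynaptic rate y exceeds the sliding
    threshold theta_m, depression when it falls below (for x > 0)."""
    return eta * y * (y - theta_m) * x
```

The sign flip around theta_m is the stabilizing mechanism: strongly driven cells raise their own threshold, converting further potentiation into depression.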

Homeostatic Plasticity

For long simulations with STDP, add homeostatic mechanisms to prevent runaway dynamics:

  • Synaptic scaling: Multiplicatively scale all incoming weights to maintain target firing rate (Turrigiano et al., 1998)
  • Intrinsic plasticity: Adjust neuronal excitability (threshold or adaptation) to maintain target rate (Desai et al., 1999)
  • Target firing rate: 1--5 Hz for excitatory cortical neurons (expert consensus based on in vivo recordings)
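
A minimal sketch of the synaptic-scaling option (our function and gain parameter; real implementations integrate the rate estimate over minutes of simulated time):

```python
def scale_weights(weights, rate_hz, target_hz=3.0, eta=0.01):
    """Multiplicative synaptic scaling: nudge every incoming weight
    by a common factor toward the target firing rate, preserving the
    relative weight structure learned by STDP."""
    factor = 1.0 + eta * (target_hz - rate_hz) / target_hz
    return [w * factor for w in weights]
```

The multiplicative form matters: scaling all inputs by one factor restores the rate without erasing the weight ratios that STDP produced.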

Step 5: Set Simulation Parameters

Integration Time Step

| Neuron Model | Recommended dt | Maximum dt | Rationale | Source |
| --- | --- | --- | --- | --- |
| LIF | 0.1 ms | 0.5 ms | Exact integration possible; larger steps miss coincident spikes | Rotter & Diesmann, 1999 |
| EIF / AdEx | 0.1 ms | 0.1 ms | Exponential term requires small steps near threshold | Brette & Gerstner, 2005 |
| Izhikevich | 0.1 ms | 0.5 ms (with Euler) | Use 0.5 ms with two half-steps per Izhikevich (2003) | Izhikevich, 2003 |
| Hodgkin-Huxley | 0.01--0.05 ms | 0.05 ms | Gating variable dynamics require fine resolution | Rotter & Diesmann, 1999 |
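
A small illustration of why dt matters even for the easiest case, passive membrane decay (our example; note that in full simulations the dominant errors are missed or mistimed spikes near threshold, which are harsher than this subthreshold bound suggests):

```python
import math

def euler_decay_error(dt_ms, tau_m=20.0, t_total=100.0):
    """Relative error of forward-Euler vs the exact exponential decay
    of a passive membrane toward rest after t_total ms. The error
    shrinks roughly linearly with dt."""
    n = int(t_total / dt_ms)
    exact = math.exp(-t_total / tau_m)
    euler = (1.0 - dt_ms / tau_m) ** n
    return abs(euler - exact) / exact
```

For tau_m = 20 ms, dt = 1 ms already accumulates over 10% relative error across 100 ms, while dt = 0.01 ms is comfortably below 1%.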

Simulation Duration

| Phenomenon | Minimum Duration | Rationale | Source |
| --- | --- | --- | --- |
| Network stabilization (transient) | 500 ms discard | Allow initial transient to decay | Expert consensus |
| Asynchronous irregular state | 1--5 s after transient | Sufficient for firing rate and CV statistics | Brunel, 2000 |
| STDP weight development | 10--100 s | Weights evolve slowly | Song et al., 2000 |
| Oscillation analysis | 2--10 s | Need multiple cycles for spectral analysis | Expert consensus |

Step 6: Validate the Model

Essential Validation Metrics

| Metric | Target Value | What It Indicates | Source |
| --- | --- | --- | --- |
| Mean firing rate (excitatory) | 1--10 Hz | Realistic cortical activity | Brunel, 2000 |
| Mean firing rate (inhibitory) | 5--30 Hz | Fast-spiking interneurons fire faster | Brunel, 2000 |
| CV of ISI | ~1.0 (0.8--1.2) | Irregular firing (Poisson-like) | Brunel, 2000; Softky & Koch, 1993 |
| Fano factor (spike count) | ~1.0 | Poisson-like variability | Softky & Koch, 1993 |
| Population synchrony (chi) | < 0.2 for AI state | Asynchronous activity | Brunel, 2000 |
| Pairwise correlation | 0.01--0.1 | Weak correlations as in cortex | Cohen & Kohn, 2011 |

Domain judgment: A network with mean firing rate in range but CV << 1 (regular firing) is NOT in a biologically realistic regime. Cortical neurons fire irregularly (CV ~ 1) even when the network is in a stationary state. If your CV is much less than 1, inhibition is likely too weak or connectivity too structured (Brunel, 2000).
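
The CV of ISI check is a few lines of code; this illustrative helper (our function, population variance convention) is enough for validation runs:

```python
import math

def cv_isi(spike_times_ms):
    """Coefficient of variation of inter-spike intervals: ~1 for
    Poisson-like irregular firing, ~0 for clock-like regular firing.
    Returns NaN when fewer than two ISIs are available."""
    isis = [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    if len(isis) < 2:
        return float("nan")
    mean = sum(isis) / len(isis)
    var = sum((i - mean) ** 2 for i in isis) / len(isis)
    return math.sqrt(var) / mean
```

A perfectly regular train scores 0; per the table, a validated AI-state train should land near 1. Computing this per neuron and inspecting the distribution catches subpopulations that have fallen into regular firing.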


Simulator Selection

| Simulator | Language | Best For | Limitations | Source |
| --- | --- | --- | --- | --- |
| NEST | Python/C++ | Large-scale LIF/IF networks; exact integration | Less flexible for custom models | Gewaltig & Diesmann, 2007 |
| Brian2 | Python | Rapid prototyping; custom equations; education | Slower than NEST for very large networks | Stimberg et al., 2019 |
| NEURON | Python/HOC | Compartmental models; biophysical detail | Overkill for point-neuron networks | Hines & Carnevale, 1997 |
| GeNN | C++/Python | GPU-accelerated; very large networks | Requires NVIDIA GPU; steeper learning curve | Yavuz et al., 2016 |

Recommendation: Start with Brian2 for prototyping and model development. Use NEST for production runs of large-scale networks. Use NEURON only when compartmental morphology is needed. Use GeNN when GPU acceleration is required for network size (Stimberg et al., 2019).


Common Pitfalls

1. No E/I Balance

Networks without proper E/I ratio (80/20) and weight scaling produce unrealistic dynamics: runaway excitation, epileptiform synchrony, or silence. Always verify the network operates in the AI regime (Brunel, 2000).

2. Ignoring the Initial Transient

The first 200--500 ms of simulation reflect initial conditions, not the network's steady state. Always discard this transient period before computing statistics (expert consensus).

3. Wrong Time Step for the Neuron Model

Using dt = 1 ms for HH models causes numerical instability. Using dt = 0.01 ms for LIF networks wastes computation. Match dt to the model (Rotter & Diesmann, 1999).

4. STDP Without Stabilization

Pair-based STDP alone drives weights to bimodal (all 0 or all w_max) distributions. Add weight dependence, homeostatic scaling, or use triplet STDP rules for stable learning (Song et al., 2000; Turrigiano et al., 1998).

5. Network Size-Dependent Behavior

Changing network size N without rescaling weights (1/sqrt(N)) changes firing rates and dynamics. Always verify that results are robust to network size or explicitly rescale (Brunel, 2000).

6. Using Conductance-Based Synapses When Current-Based Suffice

Conductance-based synapses are slower to simulate and add complexity. Unless voltage-dependent effects (NMDA, shunting inhibition) are central to the question, current-based synapses are appropriate and much faster (Brunel, 2000).


Minimum Reporting Checklist

Based on Nordlie et al. (2009) model description standards and Brunel (2000):

  • Neuron model type and all parameters (with units)
  • Synapse model type (current vs. conductance) and time constants per receptor type
  • Network size (N_E, N_I) and connection probability
  • Weight values and scaling rule (how weights relate to N)
  • External input description (Poisson rate, current injection)
  • Plasticity rule and parameters (if applicable)
  • Integration method and time step
  • Simulation duration (including discarded transient)
  • Validation metrics: mean firing rates, CV of ISI, synchrony measure
  • Simulator name and version
  • Random seed or number of independent realizations

Key References

  • Bi, G.-Q., & Poo, M.-M. (1998). Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience, 18(24), 10464--10472.
  • Bienenstock, E. L., Cooper, L. N., & Munro, P. W. (1982). Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience, 2(1), 32--48.
  • Braitenberg, V., & Schüz, A. (1998). Cortex: Statistics and Geometry of Neuronal Connectivity (2nd ed.). Springer.
  • Brette, R., & Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94(5), 3637--3642.
  • Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8(3), 183--208.
  • Cohen, M. R., & Kohn, A. (2011). Measuring and interpreting neuronal correlations. Nature Neuroscience, 14(7), 811--819.
  • Dayan, P., & Abbott, L. F. (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press.
  • Desai, N. S., Rutherford, L. C., & Turrigiano, G. G. (1999). Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nature Neuroscience, 2(6), 515--520.
  • Fourcaud-Trocme, N., Hansel, D., van Vreeswijk, C., & Brunel, N. (2003). How spike generation mechanisms determine the neuronal response to fluctuating inputs. Journal of Neuroscience, 23(37), 11628--11640.
  • Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. (2014). Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press.
  • Gewaltig, M.-O., & Diesmann, M. (2007). NEST (NEural Simulation Tool). Scholarpedia, 2(4), 1430.
  • Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Computation, 9(6), 1179--1209.
  • Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117(4), 500--544.
  • Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6), 1569--1572.
  • Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15(5), 1063--1070.
  • Naud, R., Marcille, N., Clopath, C., & Gerstner, W. (2008). Firing patterns in the adaptive exponential integrate-and-fire model. Biological Cybernetics, 99(4--5), 335--347.
  • Nordlie, E., Gewaltig, M.-O., & Plesser, H. E. (2009). Towards reproducible descriptions of neuronal network models. PLoS Computational Biology, 5(8), e1000456.
  • Potjans, T. C., & Diesmann, M. (2014). The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model. Cerebral Cortex, 24(3), 785--806.
  • Rotter, S., & Diesmann, M. (1999). Exact digital simulation of time-invariant linear systems with applications to neuronal modeling. Biological Cybernetics, 81(5--6), 381--402.
  • Softky, W. R., & Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. Journal of Neuroscience, 13(1), 334--350.
  • Song, S., Miller, K. D., & Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3(9), 919--926.
  • Stimberg, M., Brette, R., & Goodman, D. F. M. (2019). Brian 2, an intuitive and efficient neural simulator. eLife, 8, e47314.
  • Tsodyks, M. V., & Markram, H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proceedings of the National Academy of Sciences, 94(2), 719--723.
  • Turrigiano, G. G., Leslie, K. R., Desai, N. S., Rutherford, L. C., & Nelson, S. B. (1998). Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature, 391(6670), 892--896.
  • Yavuz, E., Turner, J., & Nowotny, T. (2016). GeNN: a code generation framework for accelerated brain simulations. Scientific Reports, 6, 18854.

See references/hh-parameters.md for full Hodgkin-Huxley parameter tables. See references/network-regimes.md for Brunel network regime diagrams and extended parameter sweeps.
