Draft:State space model
State Space Models (SSMs) are a class of neural network architectures for processing sequential data, such as time series, that model sequences using principles from control theory. SSMs have emerged as efficient alternatives to Transformer and recurrent neural network (RNN) architectures, particularly for handling long-range dependencies in sequence modeling tasks.[1] Unlike Transformers, which have quadratic complexity with respect to sequence length, SSMs achieve linear or near-linear time complexity, making them well suited to processing very long sequences.[2][3]
Overview
State Space Models in deep learning are based on continuous-time state space representations from classical control theory.[2] At their core, SSMs map a one-dimensional input signal u(t) to an output signal y(t) through a hidden state x(t) using a system of differential equations.[4] The basic SSM is defined by the equations:
x'(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
where A is the state matrix, B is the control matrix, C is the output matrix, and D is a direct feedthrough term (often treated as a skip connection in deep learning applications).[1]
SSMs offer several key advantages: they can naturally handle continuous data, automatically adapt to different sampling rates without retraining, and provide mathematically tractable analysis of their dynamics.[2][5] Through discretization, SSMs can be viewed from three complementary perspectives: as continuous-time systems, as recurrent networks during inference, and as convolutional models during training.[1]
History and Development
Origins in Neuroscience
The application of state space models to deep learning traces back to theoretical neuroscience research. In 2018, Aaron R. Voelker and Chris Eliasmith of the University of Waterloo proposed that the dynamical systems underlying SSMs can effectively model the "time cells" observed in the hippocampus and cortex, which led to their work on applying SSMs to neural networks.[6][2]
Legendre Memory Units (2019)
The Legendre Memory Unit (LMU), introduced by Voelker, Kajić, and Eliasmith in 2019, was among the first successful applications of SSMs in deep learning.[7] LMUs are mathematically derived to orthogonalize continuous-time history by solving coupled ordinary differential equations, with their phase space mapping onto sliding windows of time via Legendre polynomials. LMUs demonstrated the ability to handle temporal dependencies spanning 100,000 time steps and achieved state-of-the-art performance on permuted sequential MNIST, exceeding 97% accuracy.[7]
HiPPO Framework (2020)
The High-Order Polynomial Projection Operators (HiPPO) framework, introduced in 2020 by Gu et al. at Stanford University, provided a unified mathematical foundation for memory in sequence models.[8] HiPPO optimally projects continuous signals onto polynomial bases, yielding linear dynamics for the projection coefficients. This framework produces several instantiations, including HiPPO-LegS (scaled Legendre) and HiPPO-LegT (translated Legendre), which achieve timescale robustness and bounded gradients.[8][9] The HiPPO framework achieved 98.3% accuracy on permuted MNIST, surpassing previous RNN approaches by a significant margin.[8]
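For illustration, the HiPPO-LegS state matrix admits a compact closed form. The sketch below follows the construction used in the S4 literature (e.g. The Annotated S4); the function name and the use of NumPy are illustrative choices rather than an official API, and the matrix is returned negated, as in common implementations, so that the dynamics decay.

import numpy as np

def make_hippo_legs(N: int) -> np.ndarray:
    """HiPPO-LegS state matrix, returned negated for use in x'(t) = Ax(t) + Bu(t)."""
    p = np.sqrt(1 + 2 * np.arange(N))             # sqrt(2n + 1) for n = 0..N-1
    A = p[:, None] * p[None, :]                   # A[n, k] = sqrt(2n+1) * sqrt(2k+1)
    A = np.tril(A) - np.diag(np.arange(N))        # keep the lower triangle; diagonal becomes n + 1
    return -A

print(make_hippo_legs(4))                         # small example: 4 x 4 LegS matrix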
Parallelization (2021)
Chilkuri and Eliasmith proposed and demonstrated a method for efficiently training SSMs in parallel on GPUs.[10] This addressed concerns that the recurrence in SSMs would be difficult to train efficiently on GPUs, a limitation that had contributed to other recurrent networks, such as LSTMs, falling out of favour. Subsequently, the first large language model (LLM) based on SSMs was demonstrated to scale better than either LSTMs or Transformers using this method.[11]
Structured State Space Models (S4, 2021)
The Structured State Space sequence model (S4), introduced by Gu, Goel, and Ré in 2021, marked a breakthrough in making SSMs practical for large-scale deep learning.[12] S4 addressed the computational challenges of naive SSM implementations through a novel parameterization involving:
- Structured initialization: Using the HiPPO matrix for the state matrix A
- Normal plus low-rank (NPLR) decomposition: Allowing A to be diagonalized stably
- Efficient computation: Reducing the SSM to a Cauchy kernel computation[12]
S4 achieved remarkable results across multiple domains:[12][13]
- 91% accuracy on sequential CIFAR-10 (matching 2D ResNets with no data augmentation)
- State-of-the-art on all tasks in the Long Range Arena benchmark
- First model to solve the Path-X task (sequence length 16,384) with 88% accuracy
- 60× faster generation than Transformers on language modeling
The model demonstrated the ability to handle sequences exceeding 10,000 steps while maintaining linear scaling in sequence length.[12]
Key Architectural Innovations
Mamba (2023)
Mamba, introduced by Gu and Dao in December 2023, represents a major advancement in SSM architectures through the introduction of selective state space models.[11] Traditional SSMs use time-invariant parameters, meaning the matrices A, B, and C remain constant across the sequence. Mamba's key innovation is making these parameters functions of the input, allowing the model to selectively propagate or forget information based on content.[11]
Key features of Mamba:
- Selective SSMs (S6): Input-dependent parameters (B, C, and step size Δ) that enable content-based reasoning[14]
- Hardware-aware algorithms: Parallel scan, kernel fusion, and selective recomputation to achieve efficient training[14]
- Linear-time complexity: O(n) scaling with sequence length, compared to O(n²) for Transformers[15]
- Simplified architecture: Replaces attention and MLP blocks with a unified SSM block[16]
Mamba achieved competitive or superior performance compared to Transformers while providing 5× higher throughput and linear scaling to million-length sequences.[14] On language modeling, Mamba-3B matched Transformers twice its size in both pretraining and downstream evaluation.[14]
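To make the selectivity idea concrete, the toy sketch below runs a plain sequential scan in which the step size Δ and the matrices B and C are computed from the current input. It is a simplified illustration, not Mamba's fused, hardware-aware kernel; the projection matrices W_delta, W_B, W_C, their shapes, and the diagonal per-channel A are assumptions made for the example.

import numpy as np

def selective_ssm(u, A, W_delta, W_B, W_C):
    """Toy selective scan: the step size and the B, C matrices depend on the input.

    u: (L, D) input sequence; A: (D, N) diagonal state matrix (one N-dim state per channel);
    W_delta: (D, D); W_B, W_C: (D, N) projections (illustrative shapes only).
    """
    L, D = u.shape
    N = A.shape[1]
    x = np.zeros((D, N))                              # hidden state carried across steps
    ys = np.zeros((L, D))
    for t in range(L):
        delta = np.log1p(np.exp(u[t] @ W_delta))      # softplus keeps step sizes positive, (D,)
        B_t = u[t] @ W_B                              # input-dependent B, (N,)
        C_t = u[t] @ W_C                              # input-dependent C, (N,)
        A_bar = np.exp(delta[:, None] * A)            # per-channel discretized A, (D, N)
        x = A_bar * x + (delta[:, None] * B_t[None, :]) * u[t][:, None]   # state update
        ys[t] = x @ C_t                               # readout, (D,)
    return ys

rng = np.random.default_rng(0)
L, D, N = 32, 8, 16
u = rng.standard_normal((L, D))
A = -np.exp(rng.standard_normal((D, N)))              # negative entries keep the scan stable
y = selective_ssm(u, A,
                  0.1 * rng.standard_normal((D, D)),
                  0.1 * rng.standard_normal((D, N)),
                  0.1 * rng.standard_normal((D, N)))

Because A_bar and the input term depend on u[t], the scan can effectively gate how much of the past state is retained at each step, which is the content-based propagation and forgetting described above.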
Mamba-2 (2024)
In May 2024, Dao and Gu introduced Mamba-2 through their "Transformers are SSMs" paper, which established theoretical connections between SSMs and attention mechanisms via structured semiseparable matrices.[17] The State Space Duality (SSD) framework enabled the design of Mamba-2, which is 2-8× faster than Mamba while maintaining competitive performance with Transformers on language modeling.[17]
Mamba-2 achieves faster computation by leveraging matrix multiplication primitives and tensor cores on modern GPUs, allowing for larger state expansion (typically N=128-256 compared to N=16 in Mamba) while remaining computationally efficient.[18] The model also enables better system-level optimizations including tensor parallelism and sequence parallelism.[19]
Hybrid Architectures
Jamba (2024)
In March 2024, AI21 Labs introduced Jamba, the first production-grade model combining Mamba SSM layers with Transformer attention and mixture-of-experts (MoE) components.[20] Jamba features:
- A hybrid architecture interleaving Transformer and Mamba layers at a 1:7 ratio[20]
- 52B total parameters with only 12B active parameters during inference[20]
- Support for 256K token context windows
- 3× throughput improvement on long contexts compared to Mixtral 8x7B[20]
The architecture demonstrated that hybrid approaches can effectively balance the strengths of both SSMs (efficiency, long context) and Transformers (performance, in-context learning).[21] Jamba 1.5, released in August 2024, scaled to 398B total parameters with 94B active, making it the largest hybrid SSM-Transformer architecture at the time of its release.[22]
Other Hybrid Models
Recent work has explored various hybrid architectures:
- Vision Mamba (Vim): Bidirectional Mamba blocks for visual data processing[16]
- MambaByte: Byte-level language modeling without tokenization[23]
- MoE Mamba: Integration of mixture-of-experts with Mamba, requiring 2.2× fewer training steps than standard Mamba[16]
Mathematical Framework
Continuous Representation
The continuous-time SSM is defined by linear ordinary differential equations:[1][2]
x'(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
where:
- x(t) ∈ ℝᴺ is the state vector (N-dimensional latent state)
- u(t) ∈ ℝ is the input signal
- y(t) ∈ ℝ is the output signal
- A ∈ ℝᴺˣᴺ is the state transition or dynamics matrix
- B ∈ ℝᴺˣ¹ is the control matrix
- C ∈ ℝ¹ˣᴺ is the output matrix
- D ∈ ℝ is the feedthrough term
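As a concrete illustration of these definitions and shapes, the sketch below builds a small random single-input, single-output system and integrates it with a naive forward-Euler step. The parameter values and the Euler scheme are assumptions for illustration only; practical SSM layers use the zero-order-hold discretization described in the next subsection.

import numpy as np

rng = np.random.default_rng(0)
N = 4
A = -np.eye(N) + 0.1 * rng.standard_normal((N, N))   # (N, N) state matrix, roughly stable
B = rng.standard_normal((N, 1))                      # (N, 1) control matrix
C = rng.standard_normal((1, N))                      # (1, N) output matrix
D = np.zeros((1, 1))                                 # scalar feedthrough (skip) term

dt, T = 1e-2, 1000                                   # integration step and number of steps
x = np.zeros((N, 1))                                 # state x(t)
ys = []
for k in range(T):
    u = np.array([[np.sin(0.05 * k * dt)]])          # scalar input signal u(t)
    x = x + dt * (A @ x + B @ u)                     # forward-Euler step of x'(t) = Ax(t) + Bu(t)
    ys.append((C @ x + D @ u).item())                # y(t) = Cx(t) + Du(t)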
Discretization
To implement SSMs on digital computers, the continuous system must be discretized. The most common approach uses the zero-order hold (ZOH) method with step size Δ:[1][24]
xₖ = Āxₖ₋₁ + B̄uₖ
yₖ = Cxₖ + Duₖ
where the discrete parameters are:
Ā = exp(ΔA)
B̄ = (exp(ΔA) - I)A⁻¹B
This discretization highlights two complementary views:[1]
- Recurrent view: linear-time, constant-memory inference by updating the hidden state one token at a time
- Convolutional view: parallel training in near-linear time (O(L log L) for sequence length L) via FFT-based convolutions[10]
The convolution kernel K̄ can be precomputed as:
K̄ₖ = CĀᵏB̄
This duality allows SSMs to combine the inference efficiency of RNNs with the training parallelism of CNNs and Transformers.[1]
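A minimal NumPy sketch of this recurrent/convolutional duality, assuming a small random system: it applies the ZOH formulas above, runs the recurrence, and checks that convolving the input with the kernel K̄ gives the same output. The direct convolution here is quadratic for clarity (practical implementations use the FFT), the feedthrough D is taken as zero, and SciPy's expm is used for the matrix exponential.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, L, dt = 4, 64, 0.1
A = -np.eye(N) + 0.1 * rng.standard_normal((N, N))   # random but roughly stable system
B = rng.standard_normal((N, 1))
C = rng.standard_normal((1, N))
u = rng.standard_normal(L)                           # input sequence (feedthrough D taken as 0)

# Zero-order hold: A_bar = exp(dt*A), B_bar = A^{-1}(exp(dt*A) - I)B
A_bar = expm(dt * A)
B_bar = np.linalg.solve(A, A_bar - np.eye(N)) @ B

# Recurrent view: x_k = A_bar x_{k-1} + B_bar u_k, y_k = C x_k
x = np.zeros((N, 1))
y_rec = np.zeros(L)
for k in range(L):
    x = A_bar @ x + B_bar * u[k]
    y_rec[k] = (C @ x).item()

# Convolutional view: y = K_bar * u with K_bar_k = C A_bar^k B_bar
K = np.array([(C @ np.linalg.matrix_power(A_bar, k) @ B_bar).item() for k in range(L)])
y_conv = np.array([sum(K[j] * u[k - j] for j in range(k + 1)) for k in range(L)])

assert np.allclose(y_rec, y_conv)                    # both views produce the same outputs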
Computational Complexity and Efficiency
Comparison with Transformers
A fundamental advantage of SSMs is their computational complexity compared to Transformers:[2][12][24][25]
Transformers:
- Training complexity: O(N²D), where N is the sequence length and D the model dimension
- Inference complexity: O(ND) per generated token with a KV cache (O(N²D) for a full sequence), since each token attends over all previous tokens
- Memory: O(N) KV cache, growing linearly with sequence length
State Space Models:
- Training complexity: O(N log N) via FFT in the convolutional view (or O(N) with parallel scans)
- Inference complexity: O(1) per generated token (a single state update), O(N) for a full sequence
- Memory: O(1) constant-size state, independent of sequence length
This means SSMs scale better during training and achieve linear-time generation, while Transformers have quadratic complexity that becomes prohibitive for very long sequences.[3][24] At sequence lengths beyond 8,000-16,000 tokens, SSMs typically become significantly faster than Transformers.[25]
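The difference in generation cost can be illustrated with a toy sketch: a caricature "attention" step whose work and memory grow with the number of tokens generated so far, versus an SSM-style step that only updates a fixed-size state. The shapes and the single-head, single-layer setup are illustrative assumptions, not an actual Transformer or trained SSM.

import numpy as np

rng = np.random.default_rng(0)
D, N = 64, 16                                     # model dimension, SSM state size

kv_cache = []                                     # Transformer-style cache: grows with every token
def attention_step(h):
    kv_cache.append(h)
    K = np.stack(kv_cache)                        # (t, D): per-token cost grows as O(t * D)
    scores = np.exp(K @ h)
    return (scores / scores.sum()) @ K            # weighted sum over all previous tokens

A_bar = 0.9 * np.eye(N)                           # fixed-size SSM state: O(1) memory in t
B_bar = 0.1 * rng.standard_normal((N, D))
C = 0.1 * rng.standard_normal((D, N))
state = np.zeros(N)
def ssm_step(h):
    global state
    state = A_bar @ state + B_bar @ h             # constant-cost update, independent of t
    return C @ state

tokens = rng.standard_normal((8, D))
out_attn = [attention_step(h) for h in tokens]    # kv_cache now holds 8 vectors and keeps growing
out_ssm = [ssm_step(h) for h in tokens]           # still just one (N,)-sized state vector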
Hardware Optimization
Modern SSM implementations employ several hardware-aware optimizations:[14][19]
- Kernel fusion: Combining multiple operations to minimize memory transfers between GPU memory hierarchies
- Parallel scan algorithms: Efficient parallelization of recurrent computation during training
- Selective recomputation: Recomputing intermediate values during backpropagation rather than storing them
- Tensor parallelism: Distributing computation across multiple GPUs
These optimizations have been crucial for achieving competitive wall-clock time performance with highly optimized Transformer implementations.[14]
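Of these, the parallel scan is the most SSM-specific idea: the linear recurrence xₖ = aₖxₖ₋₁ + bₖ is associative under the composition (a₁, b₁) followed by (a₂, b₂) → (a₁a₂, a₂b₁ + b₂), so prefixes can be combined in logarithmic parallel depth. The pure-Python recursion below is a minimal sketch of that idea (no kernel fusion, recomputation, or GPU specifics), checked against a sequential loop.

import numpy as np

def combine(first, second):
    """Compose two affine updates x -> a*x + b, applying `first` then `second`."""
    a1, b1 = first
    a2, b2 = second
    return (a1 * a2, a2 * b1 + b2)

def parallel_scan(pairs):
    """Inclusive scan over (a_k, b_k); element k encodes x_k for the recurrence with x_{-1} = 0.

    Written recursively to mirror the logarithmic dependency depth a GPU scan exploits.
    """
    if len(pairs) == 1:
        return list(pairs)
    half = len(pairs) // 2
    left = parallel_scan(pairs[:half])
    right = parallel_scan(pairs[half:])
    carry = left[-1]                               # composition of the entire left half
    return left + [combine(carry, p) for p in right]

rng = np.random.default_rng(0)
a = rng.uniform(0.5, 1.0, size=8)
b = rng.standard_normal(8)

x, reference = 0.0, []                             # sequential reference: x_k = a_k x_{k-1} + b_k
for ak, bk in zip(a, b):
    x = ak * x + bk
    reference.append(x)

scanned = parallel_scan(list(zip(a, b)))
assert np.allclose(reference, [bk for _, bk in scanned])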
Applications
Natural Language Processing
SSMs have demonstrated strong performance on various NLP tasks:[14][26]
- Language modeling competitive with Transformers of similar or larger size
- Long-document understanding with contexts up to 256K tokens
- Strong performance on question answering, summarization, and text classification
The selective mechanism in Mamba has proven particularly effective for discrete modalities like language, addressing early limitations of S4 in this domain.[14]
Computer Vision
Vision applications of SSMs include:[16][27]
- Image classification on ImageNet
- Sequential image tasks (e.g., sequential CIFAR-10)
- Video understanding and generation
- Vision Mamba (Vim) achieving competitive results with Vision Transformers[28]
Audio and Speech
SSMs excel at audio tasks due to their continuous-time formulation:[29]
- Speech generation with models like SaShiMi
- Audio classification on benchmarks like Speech Commands (96.3% accuracy with S4)
- Music generation
- Robustness to different sampling rates without retraining
Time Series and Scientific Computing
The continuous nature of SSMs makes them well-suited for:[30]
- Time series forecasting
- Genomic sequence modeling (million-length DNA sequences)
- Climate and weather prediction
- Medical time series analysis
- Financial modeling
Limitations and Open Challenges
Despite their advantages, SSMs face several challenges:[14][31]
- Associative recall: Pure SSM architectures may struggle with tasks requiring precise retrieval of specific information from long contexts, where attention mechanisms excel
- Training speed at short sequences: Highly optimized Transformer implementations can be faster than SSMs at sequence lengths below 2,000-4,000 tokens
- State capacity: Fixed-size hidden states may saturate on extremely long sequences, though this can be mitigated with larger state dimensions
- Discrete modalities: While Mamba addressed this, earlier SSMs like S4 showed higher perplexity on language tasks compared to Transformers
These limitations have motivated hybrid architectures that combine SSMs with attention mechanisms to leverage the strengths of both approaches.[20][21]
See Also
- Recurrent neural network
- Transformer (machine learning model)
- Attention (machine learning)
- Long short-term memory
- Control theory
- Mamba (deep learning architecture)
External Links
- Mamba official GitHub repository
- S4 official GitHub repository
- The Annotated S4 tutorial
- HiPPO official code
- AI21 Jamba models
- ^ a b c d e f g Bourdois, L. (2024). "Introduction to State Space Models (SSM)". Hugging Face Blog. https://huggingface.co/blog/lbourdois/get-on-the-ssm-train
- ^ a b c d e f Voelker, A. R. (2019). "Dynamical Systems in Spiking Neuromorphic Hardware". Doctoral Thesis, University of Waterloo. https://compneuro.uwaterloo.ca/publications/voelker2019.html
- ^ a b Somvanshi, S., et al. (2025). "From S4 to Mamba: A Comprehensive Survey on Structured State Space Models". arXiv:2503.18970
- ^ Grootendorst, M. (2024). "A Visual Guide to Mamba and State Space Models". https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-mamba-and-state
- ^ Gu, A., Johnson, I., Goel, K., Saab, K., Dao, T., Rudra, A., & Ré, C. (2021). "Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers". Advances in Neural Information Processing Systems (NeurIPS), 34. arXiv:2110.13985
- ^ Voelker, A. R., & Eliasmith, C. (2018). "Improving Spiking Dynamical Networks: Accurate Delays, Higher-Order Synapses, and Time Cells". Neural Computation.
- ^ a b Voelker, A. R., Kajić, I., & Eliasmith, C. (2019). "Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks". Advances in Neural Information Processing Systems (NeurIPS), 32, 15544-15553.
- ^ a b c Gu, A., Dao, T., Ermon, S., Rudra, A., & Ré, C. (2020). "HiPPO: Recurrent Memory with Optimal Polynomial Projections". Advances in Neural Information Processing Systems (NeurIPS), 33. arXiv:2008.07669
- ^ Gu, A., Dao, T., Ermon, S., Rudra, A., & Ré, C. (2020). "HiPPO: Recurrent Memory with Optimal Polynomial Projections". Stanford Hazy Research Blog. https://hazyresearch.stanford.edu/blog/2020-12-05-hippo
- ^ a b Chilkuri, N., & Eliasmith, C. (2021). "Parallelizing Legendre Memory Unit Training". Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 1898–1907. https://proceedings.mlr.press/v139/chilkuri21a.html
- ^ a b c Chilkuri, N., Hunsberger, E., Voelker, A., Malik, G., & Eliasmith, C. (2021). "Language Modeling Using LMUs: 10x Better Data Efficiency or Improved Scaling Compared to Transformers". arXiv preprint. https://arxiv.org/abs/2110.02402
- ^ a b c d e Gu, A., Goel, K., & Ré, C. (2021). "Efficiently Modeling Long Sequences with Structured State Spaces". International Conference on Learning Representations (ICLR). arXiv:2111.00396
- ^ Rush, S., & Karamcheti, S. (2022). "The Annotated S4". https://srush.github.io/annotated-s4/
- ^ a b c d e f g h i Gu, A., & Dao, T. (2023). "Mamba: Linear-Time Sequence Modeling with Selective State Spaces". arXiv:2312.00752
- ^ Mohapatra, A. (2024). "MAMBA and State Space Models Explained". Medium. https://athekunal.medium.com/mamba-and-state-space-models-explained-b1bf3cb3bb77
- ^ a b c d "Mamba (deep learning architecture)". Wikipedia. https://en.wikipedia.org/wiki/Mamba_(deep_learning_architecture)
- ^ a b Dao, T., & Gu, A. (2024). "Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality". Proceedings of Machine Learning Research, 235, 10041-10071.
- ^ Dao, T. (2024). "State Space Duality (Mamba-2) Part I - The Model". https://tridao.me/blog/2024/mamba2-part1-model/
- ^ a b Dao, T. (2024). "State Space Duality (Mamba-2) Part IV - The Systems". https://tridao.me/blog/2024/mamba2-part4-systems/
- ^ a b c d e Lieber, O., et al. (2024). "Introducing Jamba: AI21's Groundbreaking SSM-Transformer Model". AI21 Labs Blog. https://www.ai21.com/blog/announcing-jamba/
- ^ a b Lieber, O., et al. (2024). "Jamba: A Hybrid Transformer-Mamba Language Model". arXiv:2403.19887
- ^ AI21 Labs (2024). "Jamba-1.5: Hybrid Transformer-Mamba Models at Scale". arXiv:2408.12570. https://www.ai21.com/research/jamba-1-5-hybrid-transformer-mamba-models-at-scale/
- ^ Wang, J., et al. (2024). "MambaByte: Token-free Selective State Space Model". arXiv:2401.13660
- ^ a b c Nichkawde, C. "Beyond Transformers: Structured State Space Sequence Models". https://cnichkawde.github.io/statespacesequencemodels.html
- ^ a b Alkin, B., et al. (2024). "Mamba-360: Survey of State Space Models as Transformer Alternative for Long Sequence Modelling". arXiv:2404.16112
- ^ "State Space Models". Aman's AI Journal. https://aman.ai/primers/ai/state-space-models/
- ^ Zhu, X., Ruan, Q., Qian, S. & Zhang, M. (2025). "A hybrid model based on transformer and Mamba for enhanced sequence modeling". Scientific Reports. https://www.nature.com/articles/s41598-025-87574-8
- ^ Zhu, L., et al. (2024). "Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model". arXiv:2401.09417
- ^ Goel, K., Gu, A., Donahue, C., & Ré, C. (2022). "It's Raw! Audio Generation with State-Space Models". International Conference on Machine Learning (ICML). https://proceedings.mlr.press/v162/goel22a/goel22a.pdf
- ^ Zhou, H., et al. (2020). "Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting". arXiv:2012.07436
- ^ Tiezzi, M., Casoni, M., Betti, A., Guidi, T., Gori, M. & Melacci, S. (2025). "Back to recurrent processing at the crossroad of transformers and state-space models". Nature Machine Intelligence. https://www.nature.com/articles/s42256-025-01034-6
