LION

A framework for converting linear-attention Transformers into bidirectional RNN equivalents, enabling fast training and efficient inference. It provides three variants (LION-Lit, LION-D, LION-S) for image classification and masked language modeling, with configurable model size, masking, patch order, and format, and achieves competitive accuracy with reduced training time.
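To illustrate the idea behind the description above, the sketch below shows how (non-normalized) bidirectional linear attention can be evaluated as two RNN-style scans, one forward and one backward, and checks the result against the closed-form (q kᵀ) v. This is a minimal, assumed example, not the project's actual implementation: the real LION variants additionally use feature maps, decay/selectivity terms, and normalization.

```python
import torch


def linear_attention_bidirectional_rnn(q, k, v):
    """Bidirectional linear attention computed as two RNN scans.

    Illustrative sketch only (hypothetical helper, not LION's API).
    q, k, v: tensors of shape (seq_len, dim).
    Returns y with y_t = sum_s (q_t . k_s) v_s over all positions s.
    """
    seq_len, dim = q.shape
    y = torch.zeros(seq_len, dim)

    # Forward recurrence: state S accumulates k_s v_s^T for s <= t.
    S = torch.zeros(dim, dim)
    for t in range(seq_len):
        S = S + torch.outer(k[t], v[t])
        y[t] = q[t] @ S

    # Backward recurrence: add contributions from strictly future positions s > t.
    S = torch.zeros(dim, dim)
    for t in reversed(range(seq_len)):
        y[t] = y[t] + q[t] @ S
        S = S + torch.outer(k[t], v[t])

    return y


def linear_attention_full(q, k, v):
    """Same quantity in one shot: full (unmasked) linear attention (q k^T) v."""
    return (q @ k.T) @ v


if __name__ == "__main__":
    q, k, v = (torch.randn(8, 4) for _ in range(3))
    # The RNN form and the attention form agree, which is the equivalence
    # the framework exploits for training and inference.
    assert torch.allclose(
        linear_attention_bidirectional_rnn(q, k, v),
        linear_attention_full(q, k, v),
        atol=1e-5,
    )
```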

Deep Neural Networks
Key facts
Maturity
  • Technical
Support
  • C4DT: Inactive
  • Lab: Unknown

Laboratory for Information and Inference Systems

Prof. Volkan Cevher

At LIONS, we focus on optimal information extraction from signals and data volumes. To this end, we develop mathematical theory and computational methods for recovering information from highly incomplete data.
