TiMoE

Modular framework for temporally grounded language models with time-aware expert routing.

Modular Mixture-of-Experts (MoE) framework for temporally grounded LLMs. Separate GPT experts are trained on two-year data slices, and each query is routed only to the experts covering data up to the query timestamp. Expert outputs are aggregated via equal weighting, learned routing, or joint co-adaptation. The repository includes a structure overview and quickstart instructions.
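
To illustrate the routing scheme described above, here is a minimal, hypothetical PyTorch sketch: tiny linear layers stand in for the per-slice GPT experts, the query timestamp masks out experts trained on later slices, and the remaining outputs are combined by equal weighting or a learned softmax router. The class and parameter names (TimeGatedMoE, slice_end_years, query_year) are illustrative assumptions, not TiMoE's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TimeGatedMoE(nn.Module):
    """Toy time-gated mixture of experts (hypothetical names, not TiMoE's API)."""

    def __init__(self, d_model: int, slice_end_years: list[int]):
        super().__init__()
        # slice_end_years[i] is the last year covered by expert i (e.g. 2015 for a
        # 2014-2015 slice); each small linear layer stands in for a GPT expert
        # trained on that two-year slice.
        self.slice_end_years = slice_end_years
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in slice_end_years)
        # Router used by the "learned routing" aggregation mode.
        self.router = nn.Linear(d_model, len(slice_end_years))

    def forward(self, x: torch.Tensor, query_year: int, mode: str = "equal") -> torch.Tensor:
        # Gate out experts whose slice ends after the query timestamp, so the
        # prediction never conditions on data from the query's "future".
        valid = torch.tensor([end <= query_year for end in self.slice_end_years])
        assert valid.any(), "query predates every expert's data slice"

        outputs = torch.stack([expert(x) for expert in self.experts])  # (E, B, D)

        if mode == "equal":
            # Equal weighting over the admissible experts.
            weights = valid.float().expand(x.shape[0], -1) / valid.sum()  # (B, E)
        else:
            # Learned routing: softmax over admissible experts only.
            logits = self.router(x).masked_fill(~valid, float("-inf"))
            weights = F.softmax(logits, dim=-1)  # (B, E)

        return torch.einsum("be,ebd->bd", weights, outputs)


# A query dated 2018 is served only by experts whose slices end in or before 2018.
moe = TimeGatedMoE(d_model=16, slice_end_years=[2013, 2015, 2017, 2019, 2021])
hidden = torch.randn(4, 16)
print(moe(hidden, query_year=2018, mode="equal").shape)  # torch.Size([4, 16])
```

Joint co-adaptation, the third aggregation mode mentioned above, would additionally train the experts together with the router; it is omitted from this sketch.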

Large Language Model
Key facts
  • Maturity: Technical
  • Support: C4DT: Inactive, Lab: Active

Machine Learning and Optimization Laboratory

Prof. Martin Jaggi

The Machine Learning and Optimization Laboratory is interested in machine learning, optimization algorithms and text understanding, as well as several application domains.
