Modular Mixture-of-Experts (MoE) framework for temporally grounded LLMs. Trains a separate GPT expert on each two-year data slice and routes each query only to experts whose slices end at or before the query timestamp. Aggregates expert outputs via equal weighting, learned routing, or joint co-adaptation. Includes repository structure and quickstart instructions.
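A minimal sketch of the timestamp-gated routing and equal-weight aggregation described above. The `TemporalExpert` interface, slice boundaries, and function names are illustrative assumptions, not the repository's actual API:

```python
# Sketch only: expert interface and names below are hypothetical.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np


@dataclass
class TemporalExpert:
    """One GPT expert trained on a two-year data slice."""
    start_year: int                         # first year of the slice (inclusive)
    end_year: int                           # last year of the slice (inclusive)
    predict: Callable[[str], np.ndarray]    # returns next-token logits for a prompt


def route_and_aggregate(prompt: str, query_year: int,
                        experts: List[TemporalExpert]) -> np.ndarray:
    """Route to every expert whose slice ends at or before the query timestamp,
    then aggregate their logits with equal weights."""
    eligible = [e for e in experts if e.end_year <= query_year]
    if not eligible:
        raise ValueError("no expert covers data up to the query timestamp")
    logits = np.stack([e.predict(prompt) for e in eligible])
    return logits.mean(axis=0)  # equal weighting across eligible experts
```

Under the other aggregation modes listed above, the uniform mean would presumably be replaced by softmax weights from a learned router (learned routing), or the router and experts would be fine-tuned together (joint co-adaptation).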