CoMiGS

On-device collaborative language modeling via a mixture of generalists and specialists

CoMiGS introduces a Mixture-of-Experts (MoE) architecture for personalized federated fine-tuning of language models on heterogeneous devices. Experts are categorized as generalists (whose parameters are aggregated across clients via FedAvg) or specialists (whose parameters remain local), and the split is determined through a bi-level optimization formulation. The method handles both system heterogeneity (varying LoRA ranks per device) and data heterogeneity, outperforming baselines including FedAvg, FlexLoRA, HetLoRA, FFA-LoRA, FDLoRA, and pFedMoE on standard benchmarks.
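The core mechanics can be sketched in a few lines: each client holds a small MoE layer whose generalist experts are synchronized across clients via FedAvg at each communication round, while specialist experts stay on the device. The following is a minimal PyTorch sketch of that idea; the names (`CoMiGSLayer`, `fedavg_generalists`) and the fixed one-generalist/one-specialist split are illustrative assumptions, not the authors' implementation, and the bi-level optimization that learns the routing is only noted in a comment.

```python
# Minimal sketch of a generalist/specialist MoE layer with FedAvg applied
# only to generalist experts. Illustrative, not the reference implementation.
import torch
import torch.nn as nn


class CoMiGSLayer(nn.Module):
    """MoE layer routing tokens over generalist + specialist experts."""

    def __init__(self, d_model: int, n_generalists: int = 1, n_specialists: int = 1):
        super().__init__()
        self.generalists = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(n_generalists)
        )
        self.specialists = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(n_specialists)
        )
        # Token-level router; in the paper its parameters are learned via a
        # bi-level formulation (router updated on validation data).
        self.router = nn.Linear(d_model, n_generalists + n_specialists)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        experts = list(self.generalists) + list(self.specialists)
        weights = torch.softmax(self.router(x), dim=-1)         # (..., n_experts)
        outputs = torch.stack([e(x) for e in experts], dim=-1)  # (..., d, n_experts)
        return (outputs * weights.unsqueeze(-2)).sum(dim=-1)


@torch.no_grad()
def fedavg_generalists(clients: list[CoMiGSLayer]) -> None:
    """FedAvg on generalist experts only; specialists (and, in this sketch,
    the router) never leave the client."""
    for i in range(len(clients[0].generalists)):
        ref = clients[0].generalists[i].state_dict()
        avg = {
            name: torch.mean(
                torch.stack([c.generalists[i].state_dict()[name] for c in clients]),
                dim=0,
            )
            for name in ref
        }
        for c in clients:
            c.generalists[i].load_state_dict(avg)
```

In a federated loop, each client would take local gradient steps on its own data, then `fedavg_generalists` would be called once per round, which is what makes the generalist experts shared knowledge while the specialists absorb client-specific patterns.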

Tags: Decentralized, Federated Learning, Large Language Model
Key facts

Maturity:
  • Technical
  • Research papers

Support:
  • C4DT: Inactive
  • Lab: Active

Machine Learning and Optimization Laboratory

Prof. Martin Jaggi

The Machine Learning and Optimization Laboratory is interested in machine learning, optimization algorithms and text understanding, as well as several application domains.
