CoMiGS introduces a Mixture-of-Experts (MoE) architecture for personalized federated fine-tuning of language models on heterogeneous devices. Experts are categorized as generalists (whose parameters are aggregated across clients via FedAvg) or specialists (whose parameters remain local), and the split is determined through a bi-level optimization formulation. The method handles both system heterogeneity (varying LoRA ranks per device) and data heterogeneity, outperforming baselines including FedAvg, FlexLoRA, HetLoRA, FFA-LoRA, FDLoRA, and pFedMoE on standard benchmarks.
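To make the generalist/specialist split concrete, below is a minimal Python sketch of one aggregation round under that scheme: generalist experts are averaged across clients FedAvg-style, while specialist experts stay on-device. The function and variable names (`aggregate_round`, `client_experts`, `is_generalist`) are illustrative assumptions, not identifiers from the CoMiGS codebase, and the sketch ignores LoRA ranks and the bi-level optimization that decides the split.

```python
# Minimal sketch (not the authors' implementation) of the generalist/specialist
# split during aggregation; names are hypothetical.
from statistics import mean

def aggregate_round(client_experts, is_generalist):
    """FedAvg generalist experts across clients; leave specialist experts local.

    client_experts: list (one entry per client) of dicts mapping
                    expert name -> list of parameter values.
    is_generalist:  dict mapping expert name -> bool (True = shared).
    Returns the updated per-client expert dicts.
    """
    expert_names = client_experts[0].keys()
    for name in expert_names:
        if not is_generalist[name]:
            continue  # specialist: parameters stay on-device, skip aggregation
        # Equal-weight FedAvg: element-wise mean of this expert's parameters
        averaged = [mean(vals) for vals in zip(*(c[name] for c in client_experts))]
        for c in client_experts:
            c[name] = list(averaged)  # broadcast the shared generalist back
    return client_experts

# Toy usage: two clients, one shared generalist and one local specialist expert
clients = [
    {"generalist_0": [1.0, 2.0], "specialist_0": [0.1, 0.2]},
    {"generalist_0": [3.0, 4.0], "specialist_0": [0.9, 0.8]},
]
roles = {"generalist_0": True, "specialist_0": False}
print(aggregate_round(clients, roles))
# generalist_0 becomes [2.0, 3.0] on both clients; specialist_0 stays per-client
```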