Name:
RelaySGD
Description:
Improved information propagation in decentralized learning
Professor — Lab:
Martin Jaggi — Machine Learning and Optimization Laboratory

Technical description:
In decentralized learning, workers communicate with only a few neighbors and without central coordination, so model updates propagate progressively over the network. This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers. A key challenge, particularly in decentralized deep learning, remains handling the differences between the workers' local data distributions. To tackle this challenge, we introduce the RelaySum mechanism for information propagation in decentralized learning. RelaySum uses spanning trees to distribute information exactly uniformly across all workers, with finite delays that depend on the distance between nodes. In contrast, the typical gossip averaging mechanism distributes information uniformly only asymptotically, while using the same communication volume per step as RelaySum.
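To illustrate the idea, the following is a minimal sketch of a RelaySum-style message recurrence on a spanning tree, not the lab's implementation: each worker relays to a neighbor its own value plus everything it received from its other neighbors, together with a worker count. The function name, the static scalar values, and the edge-list setup are illustrative assumptions; in actual training the relayed quantities would be model updates.

```python
def relaysum_average(x, edges, steps):
    """Simulate a RelaySum-style recurrence on a tree.

    x: scalar value held by each worker; edges: tree edges (i, j);
    steps: number of communication rounds (>= tree diameter for exactness).
    Returns each worker's estimate of the global average.
    """
    n = len(x)
    neighbors = {i: [] for i in range(n)}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    # m[i][j]: value sum relayed from worker i to neighbor j
    # c[i][j]: number of workers that sum covers
    m = {i: {j: 0.0 for j in neighbors[i]} for i in range(n)}
    c = {i: {j: 0 for j in neighbors[i]} for i in range(n)}
    for _ in range(steps):
        new_m = {i: {} for i in range(n)}
        new_c = {i: {} for i in range(n)}
        for i in range(n):
            for j in neighbors[i]:
                # relay own value plus messages received from the other side
                new_m[i][j] = x[i] + sum(m[k][i] for k in neighbors[i] if k != j)
                new_c[i][j] = 1 + sum(c[k][i] for k in neighbors[i] if k != j)
        m, c = new_m, new_c
    # each worker combines its own value with all incoming relayed sums
    return [
        (x[i] + sum(m[k][i] for k in neighbors[i]))
        / (1 + sum(c[k][i] for k in neighbors[i]))
        for i in range(n)
    ]
```

On a chain of five workers holding values 1..5, every worker's estimate equals the exact global average 3.0 after a number of rounds equal to the chain's diameter (4), illustrating the finite, distance-dependent delay; gossip averaging on the same chain would only approach the average asymptotically.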
Papers:
Project status:
inactive — entered showcase: 2021-11-04 — entry updated: 2024-04-09

Source code:
Lab GitHub - last commit: 2023-04-21
Code quality:
This project has not yet been evaluated by the C4DT Factory team. We will be happy to evaluate it upon request.
Project type:
Library, Experiments
Programming language:
Python
License:
MIT