PowerGossip
Practical low-rank communication compression in decentralized deep learning
Inspired by the PowerSGD algorithm for centralized deep learning, this algorithm uses power iteration steps to maximize the information transferred per bit. The method requires no additional hyperparameters, and we prove that it converges faster than prior methods, with a rate asymptotically independent of both the network topology and the compression.
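As a rough illustration of the low-rank compression idea, the sketch below approximates a model-difference matrix with a rank-1 factorization obtained from a few power-iteration steps, so that only the two thin factors need to be communicated instead of the full matrix. This is a minimal NumPy sketch of the general technique; the function name, signature, and details are illustrative assumptions, not the library's actual API.

```python
import numpy as np

def power_iteration_compress(delta, rank=1, num_iters=1, seed=None):
    """Illustrative sketch (not the PowerGossip API): approximate an
    m x n matrix `delta` by a rank-`rank` product p @ q.T using a few
    power-iteration steps. Communicating p (m x rank) and q (n x rank)
    costs far fewer bits than sending the full m x n matrix."""
    rng = np.random.default_rng(seed)
    m, n = delta.shape
    q = rng.standard_normal((n, rank))     # random starting direction
    for _ in range(num_iters):
        p = delta @ q                      # project onto column space
        p, _ = np.linalg.qr(p)             # orthonormalize the factor
        q = delta.T @ p                    # refine the row-space factor
    return p, q                            # reconstruction: p @ q.T

# Usage: an exactly rank-1 "difference" is recovered exactly.
delta = np.outer(np.arange(4.0), np.ones(3))
p, q = power_iteration_compress(delta, rank=1, num_iters=2, seed=0)
approx = p @ q.T
```

Because each power-iteration step only multiplies by `delta` and its transpose, the compressors can be warm-started across rounds, which is what lets this family of methods extract more information per transmitted bit.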
inactive
entered showcase: 2021-03-05
entry updated: 2024-04-09
This project has not yet been evaluated by the C4DT Factory team.
We will be happy to evaluate it upon request.
Library
Python
MIT