Paper Title

Federated Learning with Compression: Unified Analysis and Sharp Guarantees

Paper Authors

Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi

Paper Abstract

In federated learning, communication cost is often a critical bottleneck to scale up distributed optimization algorithms to collaboratively learn a model from millions of devices with potentially unreliable or limited communication and heterogeneous data distributions. Two notable trends to deal with the communication overhead of federated algorithms are gradient compression and local computation with periodic communication. Despite many attempts, characterizing the relationship between these two approaches has proven elusive. We address this by proposing a set of algorithms with periodical compressed (quantized or sparsified) communication and analyze their convergence properties in both homogeneous and heterogeneous local data distribution settings. For the homogeneous setting, our analysis improves existing bounds by providing tighter convergence rates for both strongly convex and non-convex objective functions. To mitigate data heterogeneity, we introduce a local gradient tracking scheme and obtain sharp convergence rates that match the best-known communication complexities without compression for convex, strongly convex, and nonconvex settings. We complement our theoretical results and demonstrate the effectiveness of our proposed methods by several experiments on real-world datasets.
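
To make the high-level recipe concrete, below is a minimal sketch of one communication round of local SGD with periodic compressed (top-k sparsified) communication. It illustrates the general idea only, not the paper's exact algorithms; the helper `top_k_sparsify`, the callback `client_grads_fn`, and parameters such as `local_steps`, `lr`, and `k` are names assumed for this example.

```python
import numpy as np

def top_k_sparsify(vec, k):
    """Keep the k largest-magnitude entries of vec; zero out the rest."""
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

def federated_round(global_model, client_grads_fn, num_clients, local_steps, lr, k):
    """One communication round: each client runs several local SGD steps,
    then sends a compressed (top-k sparsified) model update to the server."""
    compressed_updates = []
    for c in range(num_clients):
        local_model = global_model.copy()
        for _ in range(local_steps):                # local computation, no communication
            grad = client_grads_fn(c, local_model)  # stochastic gradient on client c's data
            local_model -= lr * grad
        update = local_model - global_model         # accumulated local change
        compressed_updates.append(top_k_sparsify(update, k))  # compress before sending
    # Server averages the compressed client updates and applies them globally.
    return global_model + np.mean(compressed_updates, axis=0)
```

For the heterogeneous setting, the abstract additionally mentions a local gradient tracking scheme that corrects for differences between local data distributions; that correction term is omitted from this sketch.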
