Paper Title
FLECS: A Federated Learning Second-Order Framework via Compression and Sketching
Paper Authors
Paper Abstract
Inspired by the recent work FedNL (Safaryan et al., "FedNL: Making Newton-Type Methods Applicable to Federated Learning"), we propose a new communication-efficient second-order framework for federated learning, namely FLECS. The proposed method reduces the high memory requirements of FedNL by using an L-SR1-type update for the Hessian approximation, which is stored on the central server. Each device needs only a low-dimensional 'sketch' of the Hessian to generate an update, so both the memory cost and the number of Hessian-vector products per agent are low. Biased and unbiased compression operators are used to keep communication costs low as well. Convergence guarantees for FLECS are provided in both the strongly convex and nonconvex cases, and local linear convergence is also established under strong convexity. Numerical experiments confirm the practical benefits of the new FLECS algorithm.
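To make the two ingredients in the abstract concrete, the following is a minimal, hypothetical Python sketch, not the authors' implementation: it forms a low-dimensional sketch of a local Hessian using only a handful of Hessian-vector products, and compresses the resulting message with a biased top-k operator. The function names (`sketch_hessian`, `top_k_compress`), the Gaussian choice of sketching matrix, and all shapes are our own illustrative assumptions.

```python
import numpy as np

def sketch_hessian(hvp, d, m, rng):
    """Form a low-dimensional sketch H @ S of a d x d Hessian using only
    m Hessian-vector products, where hvp(v) returns H @ v.
    (Illustrative; the paper's exact sketch construction may differ.)"""
    S = rng.standard_normal((d, m)) / np.sqrt(m)  # random sketching matrix
    HS = np.column_stack([hvp(S[:, j]) for j in range(m)])  # m HVPs, never H itself
    return S, HS

def top_k_compress(M, k):
    """Biased top-k compressor: keep the k largest-magnitude entries,
    zero out the rest."""
    flat = M.ravel()
    out = np.zeros_like(flat)
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out[idx] = flat[idx]
    return out.reshape(M.shape)

# Toy usage: a quadratic objective with SPD Hessian A, so hvp(v) = A @ v.
rng = np.random.default_rng(0)
d, m = 50, 5
A = rng.standard_normal((d, d))
A = A @ A.T / d                               # symmetric positive definite
S, HS = sketch_hessian(lambda v: A @ v, d, m, rng)
msg = top_k_compress(HS, k=d)                 # sparse device-to-server message
print(msg.shape, np.count_nonzero(msg))       # (50, 5) with only d nonzeros
```

Under these assumptions, each device sends a compressed d x m sketch (m Hessian-vector products) rather than a full d x d Hessian, which is the source of the memory and communication savings the abstract describes.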