Paper Title
Fast Server Learning Rate Tuning for Coded Federated Dropout
Paper Authors
Paper Abstract
In cross-device Federated Learning (FL), clients with low computational power train a common machine learning model by exchanging parameter updates instead of potentially private data. Federated Dropout (FD) is a technique that improves the communication efficiency of an FL session by selecting a \emph{subset} of model parameters to be updated in each training round. However, compared to standard FL, FD produces considerably lower accuracy and suffers from a longer convergence time. In this paper, we leverage \textit{coding theory} to enhance FD by allowing a different sub-model to be used at each client. We also show that by carefully tuning the server learning rate hyper-parameter, we can achieve higher training speed while reaching up to the same final accuracy as the no-dropout case. For the EMNIST dataset, our mechanism achieves 99.6\% of the final accuracy of the no-dropout case while requiring $2.43\times$ less bandwidth to reach this level of accuracy.