Paper Title
Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization
Paper Authors
Paper Abstract
Federated learning (FL) provides a distributed learning framework for multiple participants to collaboratively learn without sharing raw data. In many practical FL scenarios, participants have heterogeneous resources due to disparities in hardware and inference dynamics that require quickly loading models of different sizes and levels of robustness. The heterogeneity and dynamics together impose significant challenges on existing FL approaches and thus greatly limit FL's applicability. In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness. Specifically, we achieve customization by learning a set of base sub-networks of different sizes and robustness levels, which are later aggregated on demand according to inference requirements. This split-mix strategy achieves customization with high efficiency in communication, storage, and inference. Extensive experiments demonstrate that our method provides better in-situ customization than existing heterogeneous-architecture FL methods. Code and pre-trained models are available: https://github.com/illidanlab/SplitMix.
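The on-demand aggregation described above can be illustrated with a minimal sketch. Everything below is hypothetical and not the authors' implementation: `BaseNet` stands in for one small base sub-network, and `mix_predict` shows the core idea of mixing the first `k` bases (a larger `k` yields a larger, and in the paper's setting more capable, effective model) by averaging their logits at inference time.

```python
import numpy as np

class BaseNet:
    """Stand-in for one small base sub-network (hypothetical toy linear model)."""
    def __init__(self, seed, in_dim=4, num_classes=10):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((in_dim, num_classes))

    def __call__(self, x):
        # Return class logits for input batch x.
        return x @ self.w

def mix_predict(bases, x, k):
    """On-demand customization: run only the first k bases and average their logits.

    k is chosen at inference time to meet the current budget; the set of
    trained bases never changes, so no retraining is needed.
    """
    logits = [b(x) for b in bases[:k]]
    return np.mean(logits, axis=0)

# Suppose FL training produced K = 4 base sub-networks.
bases = [BaseNet(seed=s) for s in range(4)]
x = np.ones((1, 4))
small = mix_predict(bases, x, k=1)  # smallest budget: a single base
full = mix_predict(bases, x, k=4)   # full-size model: all bases mixed
```

Averaging logits is just one plausible aggregation rule used here for illustration; the key property it demonstrates is that a single trained set of bases serves every model size without retraining.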