Title
Deep Hierarchical Quantization Compression Algorithm Based on Dynamic Sampling
Authors
Abstract
Unlike traditional distributed machine learning, federated learning keeps training data on local devices and aggregates only the models on a server, which avoids the data security problems that can arise in traditional distributed machine learning. However, transmitting model parameters during training can place a significant load on network bandwidth, and prior work has shown that the vast majority of the transmitted model parameters are redundant. Building on this observation, we study the data distribution of a selected subset of model parameters and propose a deep hierarchical quantization compression algorithm, which further compresses the model and reduces the network load of data transmission through hierarchical quantization of the model parameters. We also adopt a dynamic sampling strategy for client selection to accelerate model convergence. Experimental results on several public datasets demonstrate the effectiveness of our algorithm.
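The abstract does not specify the paper's actual hierarchical quantization scheme; as a minimal sketch of the general idea it builds on, the following applies uniform low-bit quantization independently to each layer's parameters before upload. All function names, and the 8-bit setting, are illustrative assumptions, not the paper's method:

```python
import numpy as np

def quantize_layer(weights, num_bits=8):
    """Uniformly quantize one layer's parameters to num_bits integer levels."""
    w_min, w_max = weights.min(), weights.max()
    # Step size between adjacent quantization levels (guard against a constant layer).
    scale = (w_max - w_min) / (2 ** num_bits - 1) or 1.0
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, w_min, scale

def dequantize_layer(q, w_min, scale):
    """Recover an approximate float tensor from the quantized representation."""
    return q.astype(np.float32) * scale + w_min

# Example: compress one layer's weights; each client would do this per layer
# before sending its update to the server.
layer = np.random.randn(256, 128).astype(np.float32)
q, w_min, scale = quantize_layer(layer, num_bits=8)
restored = dequantize_layer(q, w_min, scale)
# Rounding bounds the per-weight reconstruction error by half a quantization step,
# while the payload shrinks from 32-bit floats to 8-bit integers.
```

Sending `q` plus the two scalars `w_min` and `scale` per layer is what yields the roughly 4x bandwidth reduction relative to raw 32-bit parameters; the paper's hierarchical variant refines how levels are assigned across layers.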