Paper Title
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Paper Authors
Paper Abstract
Byzantine-robust federated learning aims to enable a service provider to learn an accurate global model when a bounded number of clients are malicious. The key idea of existing Byzantine-robust federated learning methods is that the service provider performs statistical analysis among the clients' local model updates and removes suspicious ones before aggregating them to update the global model. However, malicious clients can still corrupt the global models in these methods by sending carefully crafted local model updates to the service provider. The fundamental reason is that there is no root of trust in existing federated learning methods. In this work, we bridge the gap by proposing FLTrust, a new federated learning method in which the service provider itself bootstraps trust. In particular, the service provider itself collects a small, clean training dataset (called the root dataset) for the learning task, and the service provider maintains a model (called the server model) based on it to bootstrap trust. In each iteration, the service provider first assigns a trust score to each local model update from the clients, where a local model update has a lower trust score if its direction deviates more from the direction of the server model update. Then, the service provider normalizes the magnitudes of the local model updates such that they lie on the same hyper-sphere as the server model update in the vector space. Our normalization limits the impact of malicious local model updates with large magnitudes. Finally, the service provider computes the average of the normalized local model updates, weighted by their trust scores, as the global model update, which is used to update the global model. Our extensive evaluations on six datasets from different domains show that our FLTrust is secure against both existing attacks and strong adaptive attacks.
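To make the aggregation rule described in the abstract concrete, the following is a minimal Python/NumPy sketch (not the authors' released implementation). It assumes each model update is flattened into a 1-D array and, consistent with the abstract, instantiates the trust score as a cosine similarity between a local update and the server update clipped at zero, so updates pointing away from the server update receive zero weight.

```python
import numpy as np

def fltrust_aggregate(local_updates, server_update):
    """Sketch of the FLTrust aggregation rule described in the abstract.

    local_updates: list of 1-D numpy arrays, one flattened update per client
    server_update: 1-D numpy array, the update computed on the root dataset
    Returns the trust-weighted global model update.
    """
    server_norm = np.linalg.norm(server_update)
    trust_scores = []
    normalized_updates = []
    for g in local_updates:
        g_norm = np.linalg.norm(g)
        # Trust score: cosine similarity with the server update, clipped at 0
        # (assumption: zero weight for updates whose direction opposes it).
        cos_sim = np.dot(g, server_update) / (g_norm * server_norm + 1e-12)
        trust_scores.append(max(cos_sim, 0.0))
        # Normalize the magnitude so the local update lies on the same
        # hyper-sphere as the server update.
        normalized_updates.append(g * server_norm / (g_norm + 1e-12))
    total = sum(trust_scores)
    if total == 0.0:
        # No client update earned any trust; fall back to the server update.
        return server_update
    # Average of the normalized local updates, weighted by trust scores.
    return sum(t * g for t, g in zip(trust_scores, normalized_updates)) / total
```

In this sketch, normalizing every local update to the server update's norm bounds the contribution any single (possibly malicious) client can make, while the clipped cosine weighting down-weights updates whose direction deviates from the server model update.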