Paper Title
Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning
Paper Authors
Paper Abstract
Federated learning is particularly susceptible to model poisoning and backdoor attacks because individual users have direct control over the training data and model updates. At the same time, the attack power of an individual user is limited because their updates are quickly drowned out by those of many other users. Existing attacks do not account for the future behaviors of other users; they therefore require many sequential updates, and their effects are quickly erased. We propose an attack that anticipates and accounts for the entire federated learning pipeline, including the behaviors of other clients, and ensures that backdoors take effect quickly and persist even after multiple rounds of community updates. We show that this new attack is effective in realistic scenarios where the attacker contributes to only a small fraction of randomly sampled rounds, and we demonstrate the attack on image classification, next-word prediction, and sentiment analysis.
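To make the anticipation idea concrete, here is a minimal PyTorch sketch of the general technique the abstract describes, not the authors' implementation: the attacker unrolls a few simulated FedAvg rounds using an estimate of the benign clients' future updates, and optimizes its own update by differentiating through that simulation so the backdoor still fires afterwards. The toy linear model, the `backdoor_loss` and `estimate_benign_update` helpers, uniform FedAvg averaging, and modeling benign updates as simple shrinkage are all illustrative assumptions.

```python
# Hypothetical sketch of "anticipating future rounds" in a backdoor attack.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)          # toy global model (stand-in assumption)
trigger = torch.randn(1, 10)      # hypothetical backdoor trigger input
target = torch.tensor([1])        # attacker's desired label for the trigger

def backdoor_loss(params):
    # Functional forward pass so we can differentiate through future rounds.
    w, b = params
    logits = trigger @ w.t() + b
    return nn.functional.cross_entropy(logits, target)

def estimate_benign_update(params, scale=0.01):
    # Assumed stand-in for the attacker's model of other clients' future
    # updates; here, a simple shrinkage of the current parameters.
    return [-scale * p for p in params]

n_clients, horizon = 10, 3        # clients per round, rounds to anticipate
global_params = [p.detach().clone() for p in model.parameters()]
malicious = [torch.zeros_like(p, requires_grad=True) for p in global_params]
opt = torch.optim.Adam(malicious, lr=0.1)

for step in range(200):
    opt.zero_grad()
    # Round 0: the server averages the malicious update with benign ones.
    benign = estimate_benign_update(global_params)
    params = [g + (m + (n_clients - 1) * b) / n_clients
              for g, m, b in zip(global_params, malicious, benign)]
    # Rounds 1..horizon-1: only benign clients participate; unroll them so
    # the optimizer "sees" how community updates will dilute the backdoor.
    for _ in range(horizon - 1):
        params = [p + b for p, b in zip(params, estimate_benign_update(params))]
    # Optimize so the backdoor still fires after the anticipated rounds.
    loss = backdoor_loss(params)
    loss.backward()
    opt.step()
```

After optimization, the `malicious` list is the update the attacker would submit in one of its sampled rounds; the objective is that `backdoor_loss` stays low even after the simulated benign rounds have been averaged in, which is the persistence property the abstract claims.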