Paper Title
CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning
Paper Authors
Paper Abstract
Federated Learning (FL) is a setting for training machine learning models in distributed environments where clients do not share their raw data but instead send model updates to a server. However, model updates can be subject to attacks and can leak private information. Differential Privacy (DP) is a leading mitigation strategy which involves adding noise to clipped model updates, trading off performance for strong theoretical privacy guarantees. Previous work has shown that the threat model of DP is conservative and that the obtained guarantees may be vacuous or may overestimate information leakage in practice. In this paper, we aim to achieve a tighter measurement of the model exposure by considering a realistic threat model. We propose a novel method, CANIFE, which uses canaries, samples carefully crafted by a strong adversary, to evaluate the empirical privacy of a training round. We apply this attack to vision models trained on CIFAR-10 and CelebA and to language models trained on Sent140 and Shakespeare. In particular, in realistic FL scenarios, we demonstrate that the empirical per-round epsilon obtained with CANIFE is 4-5x lower than the theoretical bound.
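To make the two ingredients of the abstract concrete, the sketch below illustrates (a) the standard DP aggregation it describes, clipping each client update and adding Gaussian noise to the sum, and (b) how a per-round empirical epsilon could be derived from canary observations, assuming the round behaves like a one-dimensional Gaussian mechanism along the canary direction. This is a minimal illustration under those assumptions, not the CANIFE implementation; all function names (dp_aggregate, gaussian_delta, empirical_epsilon) and parameters are hypothetical.

```python
import numpy as np
from scipy.stats import norm


def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each client update to an L2 norm bound and add Gaussian noise
    to the sum (a DP-FedAvg-style server aggregation; illustrative only)."""
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in updates]
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=updates[0].shape)
    return np.sum(clipped, axis=0) + noise


def gaussian_delta(eps, mu):
    """delta(eps) of a Gaussian mechanism with sensitivity-to-noise ratio mu
    (analytical Gaussian mechanism of Balle & Wang, 2018)."""
    return norm.cdf(mu / 2 - eps / mu) - np.exp(eps) * norm.cdf(-mu / 2 - eps / mu)


def empirical_epsilon(scores_with, scores_without, delta=1e-5):
    """Turn canary scores observed in rounds with / without the canary into
    a per-round epsilon estimate, assuming both score distributions are
    Gaussian with a shared scale."""
    mu_hat = abs(np.mean(scores_with) - np.mean(scores_without))
    mu_hat /= max(np.std(scores_without), 1e-12)  # effective sensitivity/noise
    mu_hat = max(mu_hat, 1e-12)
    # Invert delta(eps) by bisection: gaussian_delta is decreasing in eps.
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if gaussian_delta(mid, mu_hat) > delta:
            lo = mid
        else:
            hi = mid
    return hi


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated scores: the canary shifts the test statistic by ~1 noise unit.
    scores_with = rng.normal(1.0, 1.0, size=2000)
    scores_without = rng.normal(0.0, 1.0, size=2000)
    print(f"empirical per-round epsilon ~ "
          f"{empirical_epsilon(scores_with, scores_without):.2f}")
```

The design choice worth noting is that the score distributions are used to estimate an effective noise multiplier, which is then converted to an (epsilon, delta) pair by inverting the analytical Gaussian mechanism; comparing such an empirical per-round epsilon against the theoretical accounting is what yields a gap like the 4-5x reported above.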