Paper Title

MYSTIKO: Cloud-Mediated, Private, Federated Gradient Descent

Paper Authors

K. R. Jayaram, Archit Verma, Ashish Verma, Gegi Thomas, Colin Sutcher-Shepard

Paper Abstract

Federated learning enables multiple, distributed participants (potentially on different clouds) to collaborate and train machine/deep learning models by sharing parameters/gradients. However, sharing gradients, instead of centralizing data, may not be as private as one would expect. Reverse engineering attacks on plaintext gradients have been demonstrated to be practically feasible. Existing solutions for differentially private federated learning, while promising, lead to less accurate models and require nontrivial hyperparameter tuning. In this paper, we examine the use of additive homomorphic encryption (specifically the Paillier cipher) to design secure federated gradient descent techniques that (i) do not require addition of statistical noise or hyperparameter tuning, (ii) do not alter the final accuracy or utility of the final model, (iii) ensure that the plaintext model parameters/gradients of a participant are never revealed to any other participant or third-party coordinator involved in the federated learning job, (iv) minimize the trust placed in any third-party coordinator, and (v) are efficient, with minimal overhead, and cost effective.
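The property the abstract relies on is that Paillier ciphertexts can be added without decryption, so an aggregator can combine encrypted gradients it cannot read. The snippet below is a minimal sketch of that property only, not the MYSTIKO protocol itself; it assumes the open-source python-paillier (`phe`) package, and the key size and gradient values are illustrative placeholders.

```python
# Minimal sketch of Paillier's additive homomorphism applied to gradient
# aggregation. This is NOT the MYSTIKO protocol; it only illustrates why an
# aggregator holding only ciphertexts can still compute a sum of gradients.
# Requires the open-source `phe` (python-paillier) package.
from phe import paillier

# In a real federated job the keypair would be generated and held by the
# participants rather than the coordinator; here we generate one locally.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical per-participant gradients for a single model parameter.
participant_gradients = [0.12, -0.07, 0.03]

# Each participant encrypts its own gradient with the shared public key.
encrypted = [public_key.encrypt(g) for g in participant_gradients]

# The coordinator adds ciphertexts without ever seeing a plaintext gradient.
encrypted_sum = encrypted[0]
for c in encrypted[1:]:
    encrypted_sum = encrypted_sum + c

# Only a holder of the private key can recover the aggregated gradient.
aggregated = private_key.decrypt(encrypted_sum)
print(aggregated)  # ~0.08, the sum of the individual gradients
```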
