Paper Title


GOAT: GPU Outsourcing of Deep Learning Training With Asynchronous Probabilistic Integrity Verification Inside Trusted Execution Environment

Paper Authors

Aref Asvadishirehjini, Murat Kantarcioglu, Bradley Malin

Paper Abstract


Machine learning models based on Deep Neural Networks (DNNs) are increasingly deployed in a wide range of applications, ranging from self-driving cars to COVID-19 treatment discovery. To support the computational power necessary to learn a DNN, cloud environments with dedicated hardware support have emerged as critical infrastructure. However, there are many integrity challenges associated with outsourcing computation. Various approaches have been developed to address these challenges, building on trusted execution environments (TEEs). Yet, no existing approach scales up to support realistic integrity-preserving DNN model training for heavy workloads (deep architectures and millions of training examples) without sustaining a significant performance hit. To mitigate the time gap between pure TEE (full integrity) and pure GPU (no integrity), we combine random verification of selected computation steps with systematic adjustments of DNN hyper-parameters (e.g., a narrow gradient clipping range), thereby limiting the attacker's ability to shift the model parameters significantly in any step that is not selected for verification during the training phase. Experimental results show the new approach achieves 2X to 20X performance improvement over pure-TEE-based solutions while guaranteeing a very high probability of integrity (e.g., 0.999) with respect to state-of-the-art DNN backdoor attacks.
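The core idea in the abstract can be illustrated with a minimal sketch: an untrusted GPU applies SGD updates with component-wise gradient clipping (bounding how far any single, possibly tampered, step can move the parameters), and a simulated TEE randomly re-executes a fraction of the steps to check the claimed results. This is an illustrative toy, not the paper's implementation; the function names, the component-wise clipping, and the verification probability `p_verify` are all assumptions made here.

```python
import random


def clip(grad, bound):
    # Clip each gradient component into [-bound, bound]; a narrow range
    # limits how far one (possibly tampered) step can shift the model.
    return [max(-bound, min(bound, g)) for g in grad]


def gpu_step(params, grad, lr, bound):
    # One SGD update with clipped gradients, as the untrusted GPU
    # is supposed to compute it.
    g = clip(grad, bound)
    return [p - lr * gi for p, gi in zip(params, g)]


def verify_training(steps, p_verify=0.1, lr=0.1, bound=1.0, seed=0):
    # steps: list of (gradient, claimed_params_after_step) pairs reported
    # by the GPU. With probability p_verify, re-execute a step inside the
    # (simulated) TEE and compare; any mismatch flags tampering.
    rng = random.Random(seed)
    params = [0.0, 0.0]
    for grad, claimed in steps:
        expected = gpu_step(params, grad, lr, bound)
        if rng.random() < p_verify and claimed != expected:
            return False  # integrity violation detected on a verified step
        params = claimed  # continue from the GPU's (claimed) state
    return True
```

An attacker who tampers with `k` steps escapes detection only if none of them is sampled, which happens with probability `(1 - p_verify)**k`; this is how a modest verification rate can still yield a high overall integrity probability, while the clipping bound caps the damage from any single unverified step.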
