Title

The Risks of Machine Learning Systems

Authors

Samson Tan, Araz Taeihagh, Kathy Baxter

Abstract


The speed and scale at which machine learning (ML) systems are deployed are accelerating even as an increasing number of studies highlight their potential for negative impact. There is a clear need for companies and regulators to manage the risk from proposed ML systems before they harm people. To achieve this, private and public sector actors first need to identify the risks posed by a proposed ML system. A system's overall risk is influenced by its direct and indirect effects. However, existing frameworks for ML risk/impact assessment often address an abstract notion of risk or do not concretize this dependence. We propose to address this gap with a context-sensitive framework for identifying ML system risks comprising two components: a taxonomy of the first- and second-order risks posed by ML systems, and their contributing factors. First-order risks stem from aspects of the ML system, while second-order risks stem from the consequences of first-order risks. These consequences are system failures that result from design and development choices. We explore how different risks may manifest in various types of ML systems, the factors that affect each risk, and how first-order risks may lead to second-order effects when the system interacts with the real world. Throughout the paper, we show how real events and prior research fit into our Machine Learning System Risk framework (MLSR). MLSR operates on ML systems rather than technologies or domains, recognizing that a system's design, implementation, and use case all contribute to its risk. In doing so, it unifies the risks that are commonly discussed in the ethical AI community (e.g., ethical/human rights risks) with system-level risks (e.g., application, design, control risks), paving the way for holistic risk assessments of ML systems.
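The framework's two-component structure (first-order risks stemming from aspects of the ML system, second-order risks stemming from their consequences, each with contributing factors) can be sketched as a minimal data model. The class names, field names, and example risk instances below are illustrative assumptions, not the paper's actual taxonomy entries; only the category labels ("design risk", "human rights risk") are drawn from the abstract.

```python
from dataclasses import dataclass, field


@dataclass
class Risk:
    """A single risk identified for an ML system."""
    name: str
    contributing_factors: list[str] = field(default_factory=list)


@dataclass
class FirstOrderRisk(Risk):
    """Stems from aspects of the ML system itself,
    e.g., design and development choices."""


@dataclass
class SecondOrderRisk(Risk):
    """Stems from the consequences of first-order risks when the
    system interacts with the real world."""
    caused_by: list[FirstOrderRisk] = field(default_factory=list)


# Hypothetical example: a first-order design risk contributing
# to a second-order human rights risk.
design_risk = FirstOrderRisk(
    name="design risk",
    contributing_factors=["unrepresentative training data"],
)
rights_risk = SecondOrderRisk(
    name="human rights risk",
    caused_by=[design_risk],
)
```

This captures the framework's key dependency: second-order risks are not assessed in isolation but are traced back to the first-order risks (and ultimately the design choices) that produce them.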
