Title
Convex Fairness Constrained Model Using Causal Effect Estimators
Authors
Abstract
Recent years have seen much research on fairness in machine learning. Here, mean difference (MD), or demographic parity, is one of the most popular measures of fairness. However, MD quantifies not only discrimination but also explanatory bias, which is the difference in outcomes justified by explanatory features. In this paper, we devise novel models, called FairCEEs, which remove discrimination while keeping explanatory bias. The models are based on estimators of causal effect utilizing propensity score analysis. We prove that FairCEEs with the squared loss theoretically outperform a naive MD constraint model. We provide an efficient algorithm for solving FairCEEs in regression and binary classification tasks. In our experiments on synthetic and real-world data for these two tasks, FairCEEs outperformed an existing model that considers explanatory bias in specific cases.
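To make the two quantities the abstract contrasts concrete, here is a minimal sketch with hypothetical toy data: the mean difference (MD, the demographic parity gap) and a standard inverse-propensity-weighting (IPW) estimate of the causal effect of the sensitive attribute. The FairCEE estimators build on propensity score analysis, but their exact form is not given in the abstract, so the IPW formula below is only the textbook version, not the paper's.

```python
import numpy as np

# Hypothetical toy data: s is a binary sensitive attribute, y is the model
# outcome, e is an assumed propensity score P(s = 1 | x) for each sample.
s = np.array([1, 1, 0, 0, 1, 0])
y = np.array([1.0, 0.5, 0.2, 0.4, 0.9, 0.3])
e = np.array([0.8, 0.6, 0.3, 0.4, 0.7, 0.2])

# Mean difference (MD), i.e. the demographic parity gap:
#   MD = E[y | s = 1] - E[y | s = 0]
# This mixes discrimination with explanatory bias, as the abstract notes.
md = y[s == 1].mean() - y[s == 0].mean()

# Textbook IPW estimate of the causal effect of s on y, which separates
# the effect of s itself from differences explained by other features.
ipw = np.mean(s * y / e) - np.mean((1 - s) * y / (1 - e))

print(round(md, 2))   # 0.5
print(round(ipw, 2))
```

Constraining an estimator like `ipw` (rather than `md`) is what lets a model suppress discrimination while leaving the explanatory part of the outcome gap intact.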