Paper Title

Fairness and bias correction in machine learning for depression prediction: results from four study populations

Authors

Dang, Vien Ngoc, Cascarano, Anna, Mulder, Rosa H., Cecil, Charlotte, Zuluaga, Maria A., Hernández-González, Jerónimo, Lekadir, Karim

Abstract

A significant level of stigma and inequality exists in mental healthcare, especially in under-served populations. Inequalities are reflected in the data collected for scientific purposes. When not properly accounted for, machine learning (ML) models learned from data can reinforce these structural inequalities or biases. Here, we present a systematic study of bias in ML models designed to predict depression in four different case studies covering different countries and populations. We find that standard ML approaches regularly exhibit biased behavior. We also show that mitigation techniques, both standard methods and our own post-hoc method, can be effective in reducing the level of unfair bias. No single best ML model for depression prediction provides equality of outcomes. This emphasizes the importance of analyzing fairness during model selection and of transparent reporting about the impact of debiasing interventions. Finally, we provide practical recommendations for developing bias-aware ML models for depression risk prediction.
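
The abstract calls for analyzing fairness during model selection and checking for equality of outcomes across groups. As a minimal sketch of what such a check can look like (this is a generic illustration with synthetic data, not the authors' implementation; the function names and the binary `group` attribute are hypothetical), the snippet below computes two widely used group-fairness gaps for a binary classifier:

```python
# Minimal sketch (not the paper's code): measuring group fairness of a binary
# depression classifier. Assumes numpy arrays y_true, y_pred, and a binary
# sensitive attribute `group` (e.g., 0/1 for two demographic groups).
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute difference in true-positive rates (recall) between the two groups."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)  # truly positive cases in group g
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

# Toy usage with synthetic labels and predictions
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)

print("Demographic parity diff:", demographic_parity_diff(y_pred, group))
print("Equal opportunity diff:", equal_opportunity_diff(y_true, y_pred, group))
```

Demographic parity compares raw positive-prediction rates, while equal opportunity compares recall among the truly positive cases in each group; a gap near zero on either metric indicates more equal outcomes across groups, which is the kind of quantity a fairness-aware model selection procedure would track alongside accuracy.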
