Paper Title

Visual Identification of Problematic Bias in Large Label Spaces

Paper Authors

Alex Bäuerle, Aybuke Gul Turker, Ken Burke, Osman Aka, Timo Ropinski, Christina Greer, Mani Varadarajan

Paper Abstract

While the need for well-trained, fair ML systems is increasing ever more, measuring fairness for modern models and datasets is becoming increasingly difficult as they grow at an unprecedented pace. One key challenge in scaling common fairness metrics to such models and datasets is the requirement of exhaustive ground truth labeling, which cannot always be done. Indeed, this often rules out the application of traditional analysis metrics and systems. At the same time, ML-fairness assessments cannot be made algorithmically, as fairness is a highly subjective matter. Thus, domain experts need to be able to extract and reason about bias throughout models and datasets to make informed decisions. While visual analysis tools are of great help when investigating potential bias in DL models, none of the existing approaches have been designed for the specific tasks and challenges that arise in large label spaces. Addressing the lack of visualization work in this area, we propose guidelines for designing visualizations for such large label spaces, considering both technical and ethical issues. Our proposed visualization approach can be integrated into classical model and data pipelines, and we provide an implementation of our techniques open-sourced as a TensorBoard plug-in. With our approach, different models and datasets for large label spaces can be systematically and visually analyzed and compared to make informed fairness assessments tackling problematic bias.
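
The abstract notes that the techniques are open-sourced as a TensorBoard plug-in that integrates into classical model and data pipelines. As a rough illustration of the kind of per-label, per-subgroup signal such a tool might consume, here is a minimal, hypothetical sketch, not the authors' plug-in: the metric (positive-prediction rate per label and subgroup), the function name log_per_label_rates, and the label and subgroup names are all illustrative assumptions. Only standard tf.summary calls are used, so the logged scalars can be browsed in TensorBoard.

```python
# Hypothetical sketch: log per-label positive-prediction rates for each
# subgroup to TensorBoard, so disparities across a large label space can
# be inspected visually. Metric, names, and data are illustrative only.
import numpy as np
import tensorflow as tf

def log_per_label_rates(predictions, groups, label_names, logdir):
    """predictions: (num_examples, num_labels) binary model outputs;
    groups: (num_examples,) subgroup id per example."""
    writer = tf.summary.create_file_writer(logdir)
    with writer.as_default():
        for g in np.unique(groups):
            mask = groups == g
            # Fraction of subgroup-g examples predicted positive, per label.
            rates = predictions[mask].mean(axis=0)
            for label, rate in zip(label_names, rates):
                tf.summary.scalar(f"positive_rate/{label}/group_{g}",
                                  rate, step=0)
    writer.flush()

# Toy usage: 100 examples, 3 labels, 2 subgroups of random data.
preds = np.random.randint(0, 2, size=(100, 3))
groups = np.random.randint(0, 2, size=100)
log_per_label_rates(preds, groups, ["cat", "dog", "bird"], "/tmp/bias_logs")
```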
