Paper Title

GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection

Authors

Aakash Varma Nadimpalli, Ajita Rattani

Abstract

Facial forgery by deepfakes has raised severe societal concerns. Several solutions have been proposed by the vision community to effectively combat misinformation on the internet via automated deepfake detection systems. Recent studies have demonstrated that facial analysis-based deep learning models can discriminate based on protected attributes. For the commercial adoption and massive roll-out of deepfake detection technology, it is vital to evaluate and understand the fairness (the absence of any prejudice or favoritism) of deepfake detectors across demographic variations such as gender and race, because a performance differential between demographic subgroups would impact millions of people in the disadvantaged subgroup. This paper aims to evaluate the fairness of deepfake detectors across males and females. However, existing deepfake datasets are not annotated with demographic labels to facilitate fairness analysis. To this aim, we manually annotated existing popular deepfake datasets with gender labels and evaluated the performance differential of current deepfake detectors across gender. Our analysis of the gender-labeled versions of these datasets suggests that (a) current deepfake datasets have a skewed distribution across gender, and (b) commonly adopted deepfake detectors obtain unequal performance across gender, with detectors mostly performing better on male faces than on female faces. Finally, we contribute a gender-balanced and annotated deepfake dataset, GBDF, to mitigate the performance differential and to promote research and development towards fairness-aware deepfake detectors. The GBDF dataset is publicly available at: https://github.com/aakash4305/GBDF
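The performance differential described in the abstract is typically quantified by evaluating detection metrics separately per gender subgroup. Below is a minimal sketch of such a per-gender evaluation, assuming hypothetical arrays `scores` (detector outputs), `labels`, and `gender` (annotations such as those provided with GBDF); the paper's exact protocol and metrics may differ. It reports ROC-AUC and an approximate equal error rate (EER) for each subgroup.

```python
# Minimal sketch (not the authors' exact protocol): per-gender evaluation of a
# deepfake detector to quantify the performance differential across subgroups.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def equal_error_rate(labels, scores):
    """Approximate EER: operating point where false-positive rate equals false-negative rate."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fpr - fnr))
    return (fpr[idx] + fnr[idx]) / 2.0

def per_gender_report(labels, scores, gender):
    """Compute AUC, EER, and sample count for each gender subgroup."""
    report = {}
    for g in np.unique(gender):
        mask = gender == g
        report[g] = {
            "auc": roc_auc_score(labels[mask], scores[mask]),
            "eer": equal_error_rate(labels[mask], scores[mask]),
            "n": int(mask.sum()),
        }
    return report

# Toy usage with synthetic scores (illustration only, not real detector outputs).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=2000)          # 1 = fake, 0 = real
gender = rng.choice(["male", "female"], size=2000)
noise = np.where(gender == "male", 0.8, 1.2)    # simulate a detector that is weaker on one subgroup
scores = labels + rng.normal(0.0, noise)
print(per_gender_report(labels, scores, gender))
```

Comparing the per-subgroup AUC or EER values (e.g., their absolute gap) gives a simple measure of the unequal performance the paper reports and that a gender-balanced training set such as GBDF aims to mitigate.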
