Paper Title
AuthNet: A Deep Learning based Authentication Mechanism using Temporal Facial Feature Movements
Paper Authors
Paper Abstract
Biometric systems based on machine learning and deep learning are extensively used as authentication mechanisms in resource-constrained environments such as smartphones and other small computing devices. These AI-powered facial recognition mechanisms have gained enormous popularity in recent years due to their transparent, contact-less, and non-invasive nature. While they are effective to a large extent, there are ways to gain unauthorized access using photographs, masks, glasses, etc. In this paper, we propose an alternative authentication mechanism that uses both facial recognition and the unique movements of that particular face while uttering a password, that is, temporal facial feature movements. The proposed model is not inhibited by language barriers because a user can set a password in any language. When evaluated on the standard MIRACL-VC1 dataset, the proposed model achieved an accuracy of 98.1%, underscoring its effectiveness and robustness. The proposed method is also data-efficient, since the model gave good results even when trained with only 10 positive video samples. The competence of the trained network is further demonstrated by benchmarking the proposed system against various compound facial recognition and lip reading models.
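The core idea of the abstract, namely pairing static identity with how the face moves over time while a password is spoken, can be sketched minimally. The snippet below is a hypothetical illustration, not the paper's actual architecture: it assumes per-frame facial landmarks are already available (e.g. from an off-the-shelf landmark detector) and computes frame-to-frame displacement magnitudes, the kind of temporal signal a downstream classifier could consume.

```python
import numpy as np

def temporal_movement_features(landmarks):
    """Frame-to-frame displacement of facial landmarks.

    landmarks: array of shape (T, N, 2) -- T frames, N (x, y) landmark points.
    Returns a (T-1, N) array of per-landmark displacement magnitudes,
    summarizing how each point on the face moves over the video.
    (Illustrative feature only; the paper's model learns its own features.)
    """
    deltas = np.diff(landmarks, axis=0)      # (T-1, N, 2) per-frame motion vectors
    return np.linalg.norm(deltas, axis=-1)   # (T-1, N) motion magnitudes

# Toy example: 5 frames, 3 landmarks; only landmark 2 (say, a lip point) moves.
seq = np.zeros((5, 3, 2))
seq[:, 2, 1] = [0.0, 1.0, 0.0, 1.0, 0.0]  # vertical lip oscillation while speaking
feats = temporal_movement_features(seq)
print(feats.shape)   # (4, 3)
print(feats[:, 2])   # [1. 1. 1. 1.]
```

A static photograph presented to the camera would yield near-zero motion magnitudes across all frames, which is one intuition for why a temporal signal resists the photo-based spoofing attacks mentioned above.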