Paper Title
Post-Train Adaptive MobileNet for Fast Anti-Spoofing
Paper Authors
Paper Abstract
Many applications require high accuracy from neural networks as well as low latency and user data privacy guarantees. Face anti-spoofing is one such task. However, a single model might not give the best results for different device performance categories, while training multiple models is time-consuming. In this work we present the Post-Train Adaptive (PTA) block. Such a block is simple in structure and offers a drop-in replacement for the MobileNetV2 Inverted Residual block. The PTA block has multiple branches with different computational costs. The branch to execute can be selected on demand at runtime, thus offering different inference times and configuration capability for multiple device tiers. Crucially, the model is trained once and can be easily reconfigured after training, even directly on a mobile device. In addition, the proposed approach shows substantially better overall performance than the original MobileNetV2 when tested on the CelebA-Spoof dataset. Different PTA block configurations are sampled at training time, which also decreases the overall wall-clock time needed to train the model. While we present computational results for the anti-spoofing problem, MobileNetV2 with PTA blocks is applicable to any problem solvable with convolutional neural networks, which makes the presented results practically significant.
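To make the idea of a runtime-selectable block concrete, below is a minimal PyTorch sketch of a PTA-style block built from inverted residual branches of different expansion ratios. The class and method names (`PTABlock`, `set_branch`) and the choice of expansion ratios are illustrative assumptions, not the authors' implementation; they only demonstrate the pattern of training with randomly sampled branches and reconfiguring a single branch after training.

```python
# Illustrative sketch of a Post-Train Adaptive (PTA) style block (assumed PyTorch).
# Names and branch designs are hypothetical; the paper's exact block may differ.
import random
import torch
import torch.nn as nn


def inverted_residual(in_ch, out_ch, expand_ratio, stride=1):
    """Standard MobileNetV2-style inverted residual branch."""
    hidden = in_ch * expand_ratio
    return nn.Sequential(
        nn.Conv2d(in_ch, hidden, 1, bias=False),           # pointwise expand
        nn.BatchNorm2d(hidden),
        nn.ReLU6(inplace=True),
        nn.Conv2d(hidden, hidden, 3, stride, 1,
                  groups=hidden, bias=False),               # depthwise
        nn.BatchNorm2d(hidden),
        nn.ReLU6(inplace=True),
        nn.Conv2d(hidden, out_ch, 1, bias=False),           # pointwise project
        nn.BatchNorm2d(out_ch),
    )


class PTABlock(nn.Module):
    """Drop-in replacement with branches of different computational cost."""

    def __init__(self, in_ch, out_ch, expand_ratios=(6, 3, 1)):
        super().__init__()
        # One branch per expansion ratio: larger ratio = higher cost.
        self.branches = nn.ModuleList(
            inverted_residual(in_ch, out_ch, r) for r in expand_ratios
        )
        self.use_residual = in_ch == out_ch
        self.active = 0  # branch used at inference time

    def set_branch(self, idx):
        # Reconfigure after training, e.g. to match a device tier.
        self.active = idx

    def forward(self, x):
        # Sample a random branch while training; use the selected one otherwise.
        idx = random.randrange(len(self.branches)) if self.training else self.active
        out = self.branches[idx](x)
        return x + out if self.use_residual else out
```

In this sketch, deployment on a slower device would amount to iterating over the PTA blocks of a trained model and calling `set_branch` with the index of a cheaper branch, with no retraining required.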