Cross-Dataset Face Anti-Spoofing Using Domain Adaptation Techniques
Keywords:
Face anti-spoofing, domain adaptation, cross-dataset learning, presentation attack detection, feature alignment, transfer learning.

Abstract
Face recognition systems have become integral to modern security and authentication mechanisms, yet they remain vulnerable to presentation attacks, including print, replay, and 3D mask spoofs. Traditional anti-spoofing methods often rely on training data from a single dataset, which limits their generalization capability to unseen domains. Cross-dataset face anti-spoofing seeks to bridge this performance gap by leveraging domain adaptation techniques to transfer learned knowledge between source and target datasets. This paper presents a comprehensive study on cross-dataset anti-spoofing using advanced domain adaptation frameworks, including adversarial training, feature alignment, and style transfer methods.
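One widely used feature-alignment criterion in domain adaptation is to minimize the discrepancy between source- and target-domain feature statistics. As an illustration only (the paper does not specify its exact loss), the sketch below computes a linear-kernel Maximum Mean Discrepancy between two batches of backbone features; the function name and array shapes are assumptions for this example.

```python
import numpy as np

def linear_mmd(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Linear-kernel MMD: squared Euclidean distance between the mean
    feature vectors of the source and target batches.

    source_feats, target_feats: arrays of shape (batch_size, feature_dim),
    e.g. embeddings from a ResNet-50 or ViT backbone.
    """
    mean_gap = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(np.sum(mean_gap ** 2))
```

During training, a weighted `linear_mmd` term would typically be added to the classification loss so that the backbone learns features whose statistics match across datasets.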
We evaluate the effectiveness of these techniques across three benchmark datasets — CASIA-FASD, Replay-Attack, and OULU-NPU — using ResNet-50 and Vision Transformer backbones. Statistical analysis demonstrates significant performance improvement when domain adaptation is incorporated, reducing the average Half Total Error Rate (HTER) from 21.4% to 9.6% in cross-dataset testing scenarios. The results underscore the importance of distribution alignment in enhancing the robustness of face anti-spoofing models against unseen attack modalities.
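The Half Total Error Rate reported above is conventionally defined as the mean of the False Acceptance Rate and the False Rejection Rate at a given decision threshold. A minimal sketch of that computation (the score convention, with higher scores meaning "genuine", is an assumption for this example):

```python
def compute_hter(scores, labels, threshold):
    """Half Total Error Rate: HTER = (FAR + FRR) / 2.

    scores: liveness scores, higher = more likely genuine (assumed convention).
    labels: 1 for genuine faces, 0 for presentation attacks.
    threshold: scores >= threshold are accepted as genuine.
    """
    accepts = [s >= threshold for s in scores]
    attack_accepts = [a for a, l in zip(accepts, labels) if l == 0]
    genuine_accepts = [a for a, l in zip(accepts, labels) if l == 1]
    far = sum(attack_accepts) / len(attack_accepts)          # attacks wrongly accepted
    frr = sum(not a for a in genuine_accepts) / len(genuine_accepts)  # genuine wrongly rejected
    return (far + frr) / 2.0
```

In cross-dataset evaluation, the threshold is typically fixed on the source (development) set and HTER is then measured on the unseen target dataset.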
License
Copyright (c) 2026. The journal retains copyright of all published articles, ensuring that authors have control over their work while allowing wide dissemination.

Articles are published under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), allowing others to distribute, remix, adapt, and build upon the work for non-commercial purposes while crediting the original author.
