To view other types of publications on this topic, follow the link: Face presentation attack detection.

Journal articles on the topic "Face presentation attack detection"

Format your citation in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 journal articles for your research on the topic "Face presentation attack detection."

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles across disciplines and format your bibliography correctly.

1

Abdullakutty, Faseela, Pamela Johnston, and Eyad Elyan. "Fusion Methods for Face Presentation Attack Detection." Sensors 22, no. 14 (July 12, 2022): 5196. http://dx.doi.org/10.3390/s22145196.

Abstract:
Face presentation attacks (PA) are a serious threat to face recognition (FR) applications. These attacks are easy to execute and difficult to detect. An attack can be carried out simply by presenting a video, photo, or mask to the camera. The literature shows that both modern, pre-trained, deep learning-based methods, and traditional hand-crafted, feature-engineered methods have been effective in detecting PAs. However, the question remains as to whether features learned in existing, deep neural networks sufficiently encompass traditional, low-level features in order to achieve optimal performance on PA detection tasks. In this paper, we present a simple feature-fusion method that integrates features extracted by using pre-trained, deep learning models with more traditional colour and texture features. Extensive experiments clearly show the benefit of enriching the feature space to improve detection rates by using three common public datasets, namely CASIA, Replay Attack, and SiW. This work opens future research to improve face presentation attack detection by exploring new characterizing features and fusion strategies.
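As an aside for readers, the fusion idea summarised above can be pictured with a minimal sketch (not the authors' code): a hand-crafted texture descriptor such as an LBP histogram is simply concatenated with an embedding from a pre-trained backbone before a conventional classifier is trained. The `deep_embedding` helper below is a hypothetical stand-in for any frozen pre-trained network.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, points=8, radius=1):
    # Uniform LBP histogram of a greyscale face crop with values in [0, 1].
    as_uint8 = (gray_face * 255).astype(np.uint8)
    codes = local_binary_pattern(as_uint8, P=points, R=radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def deep_embedding(gray_face):
    # Hypothetical stand-in for a frozen pre-trained backbone: a fixed random
    # projection of the pixels to a 128-D vector.
    rng = np.random.default_rng(0)
    projection = rng.normal(size=(gray_face.size, 128))
    return gray_face.ravel() @ projection

def fused_feature(gray_face):
    # Feature fusion: deep embedding concatenated with the hand-crafted histogram.
    return np.concatenate([deep_embedding(gray_face), lbp_histogram(gray_face)])

# Toy usage on random "face crops" with bona-fide (1) / attack (0) labels.
faces = [np.random.rand(64, 64) for _ in range(20)]
labels = np.array([0, 1] * 10)
X = np.stack([fused_feature(face) for face in faces])
classifier = SVC(probability=True).fit(X, labels)
print(classifier.predict_proba(X[:2]))
```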
2

Zhu, Shuaishuai, Xiaobo Lv, Xiaohua Feng, Jie Lin, Peng Jin, and Liang Gao. "Plenoptic Face Presentation Attack Detection." IEEE Access 8 (2020): 59007–14. http://dx.doi.org/10.1109/access.2020.2980755.

3

Kowalski, Marcin. "A Study on Presentation Attack Detection in Thermal Infrared." Sensors 20, no. 14 (July 17, 2020): 3988. http://dx.doi.org/10.3390/s20143988.

Abstract:
Face recognition systems face real challenges from various presentation attacks. New, more sophisticated methods of presentation attacks are becoming more difficult to detect using traditional face recognition systems. Thermal infrared imaging offers specific physical properties that may boost presentation attack detection capabilities. The aim of this paper is to present outcomes of investigations on the detection of various face presentation attacks in thermal infrared in various conditions including thermal heating of masks and various states of subjects. A thorough analysis of presentation attacks using printed and displayed facial photographs, 3D-printed, custom flexible 3D-latex and silicone masks is provided. The paper presents the intensity analysis of thermal energy distribution for specific facial landmarks during long-lasting experiments. Thermalization impact, as well as varying the subject’s state due to physical effort on presentation attack detection are investigated. A new thermal face spoofing dataset is introduced. Finally, a two-step deep learning-based method for the detection of presentation attacks is presented. Validation results of a set of deep learning methods across various presentation attack instruments are presented.
4

Wan, Jun, Guodong Guo, Sergio Escalera, Hugo Jair Escalante, and Stan Z. Li. "Multi-Modal Face Presentation Attack Detection." Synthesis Lectures on Computer Vision 9, no. 1 (July 27, 2020): 1–88. http://dx.doi.org/10.2200/s01032ed1v01y202007cov017.

5

Alshareef, Norah, Xiaohong Yuan, Kaushik Roy, and Mustafa Atay. "A Study of Gender Bias in Face Presentation Attack and Its Mitigation." Future Internet 13, no. 9 (September 14, 2021): 234. http://dx.doi.org/10.3390/fi13090234.

Abstract:
In biometric systems, the process of identifying or verifying people using facial data must be highly accurate to ensure a high level of security and credibility. Many researchers investigated the fairness of face recognition systems and reported demographic bias. However, there was not much study on face presentation attack detection technology (PAD) in terms of bias. This research sheds light on bias in face spoofing detection by implementing two phases. First, two CNN (convolutional neural network)-based presentation attack detection models, ResNet50 and VGG16, were used to evaluate the fairness of detecting imposter attacks on the basis of gender. In addition, different sizes of Spoof in the Wild (SiW) testing and training data were used in the first phase to study the effect of gender distribution on the models’ performance. Second, the debiasing variational autoencoder (DB-VAE) (Amini, A., et al., Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure) was applied in combination with VGG16 to assess its ability to mitigate bias in presentation attack detection. Our experiments exposed minor gender bias in CNN-based presentation attack detection methods. In addition, it was proven that imbalance in training and testing data does not necessarily lead to gender bias in the model’s performance. Results proved that the DB-VAE approach (Amini, A., et al., Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure) succeeded in mitigating bias in detecting spoof faces.
6

Benlamoudi, Azeddine, Salah Eddine Bekhouche, Maarouf Korichi, Khaled Bensid, Abdeldjalil Ouahabi, Abdenour Hadid, and Abdelmalik Taleb-Ahmed. "Face Presentation Attack Detection Using Deep Background Subtraction." Sensors 22, no. 10 (May 15, 2022): 3760. http://dx.doi.org/10.3390/s22103760.

Abstract:
Currently, face recognition technology is the most widely used method for verifying an individual’s identity. Nevertheless, it has increased in popularity, raising concerns about face presentation attacks, in which a photo or video of an authorized person’s face is used to obtain access to services. Based on a combination of background subtraction (BS) and convolutional neural network(s) (CNN), as well as an ensemble of classifiers, we propose an efficient and more robust face presentation attack detection algorithm. This algorithm includes a fully connected (FC) classifier with a majority vote (MV) algorithm, which uses different face presentation attack instruments (e.g., printed photo and replayed video). By including a majority vote to determine whether the input video is genuine or not, the proposed method significantly enhances the performance of the face anti-spoofing (FAS) system. For evaluation, we considered the MSU MFSD, REPLAY-ATTACK, and CASIA-FASD databases. The obtained results are very interesting and are much better than those obtained by state-of-the-art methods. For instance, on the REPLAY-ATTACK database, we were able to attain a half-total error rate (HTER) of 0.62% and an equal error rate (EER) of 0.58%. We attained an EER of 0% on both the CASIA-FASD and the MSU MFSD databases.
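The HTER and EER figures quoted in this abstract are the standard anti-spoofing error metrics. The sketch below is a generic illustration (not the paper's evaluation code) of how they are commonly computed from per-sample scores: EER at the operating point where the false acceptance and false rejection rates cross, HTER as their average at a chosen decision threshold.

```python
import numpy as np
from sklearn.metrics import roc_curve

def eer_and_hter(labels, scores, decision_threshold=None):
    # Convention assumed here: label 1 = bona fide, 0 = attack; higher score = more bona fide.
    fpr, tpr, thresholds = roc_curve(labels, scores)   # fpr ~ FAR, 1 - tpr ~ FRR
    fnr = 1.0 - tpr
    idx = int(np.nanargmin(np.abs(fnr - fpr)))         # operating point where FAR ~= FRR
    eer = (fpr[idx] + fnr[idx]) / 2.0
    threshold = thresholds[idx] if decision_threshold is None else decision_threshold
    far = np.mean(scores[labels == 0] >= threshold)    # attacks wrongly accepted
    frr = np.mean(scores[labels == 1] < threshold)     # bona fide wrongly rejected
    hter = (far + frr) / 2.0
    return eer, hter

labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
scores = np.array([0.90, 0.80, 0.75, 0.40, 0.35, 0.30, 0.20, 0.10])
print(eer_and_hter(labels, scores))   # (0.0, 0.0) for this perfectly separable toy example
```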
7

Wan, Jun, Sergio Escalera, Hugo Jair Escalante, Guodong Guo, and Stan Z. Li. "Special Issue on Face Presentation Attack Detection." IEEE Transactions on Biometrics, Behavior, and Identity Science 3, no. 3 (July 2021): 282–84. http://dx.doi.org/10.1109/tbiom.2021.3089903.

8

Nguyen, Dat, Tuyen Pham, Min Lee, and Kang Park. "Visible-Light Camera Sensor-Based Presentation Attack Detection for Face Recognition by Combining Spatial and Temporal Information." Sensors 19, no. 2 (January 20, 2019): 410. http://dx.doi.org/10.3390/s19020410.

Abstract:
Face-based biometric recognition systems that can recognize human faces are widely employed in places such as airports, immigration offices, and companies, and applications such as mobile phones. However, the security of this recognition method can be compromised by attackers (unauthorized persons), who might bypass the recognition system using artificial facial images. In addition, most previous studies on face presentation attack detection have only utilized spatial information. To address this problem, we propose a visible-light camera sensor-based presentation attack detection that is based on both spatial and temporal information, using the deep features extracted by a stacked convolutional neural network (CNN)-recurrent neural network (RNN) along with handcrafted features. Through experiments using two public datasets, we demonstrate that the temporal information is sufficient for detecting attacks using face images. In addition, it is established that the handcrafted image features efficiently enhance the detection performance of deep features, and the proposed method outperforms previous methods.
9

Ramachandra, Raghavendra, and Christoph Busch. "Presentation Attack Detection Methods for Face Recognition Systems." ACM Computing Surveys 50, no. 1 (April 13, 2017): 1–37. http://dx.doi.org/10.1145/3038924.

10

Peng, Fei, Le Qin, and Min Long. "Face presentation attack detection using guided scale texture." Multimedia Tools and Applications 77, no. 7 (May 13, 2017): 8883–909. http://dx.doi.org/10.1007/s11042-017-4780-0.

11

Nguyen, Dat Tien, Tuyen Danh Pham, Ganbayar Batchuluun, Kyoung Jun Noh, and Kang Ryoung Park. "Presentation Attack Face Image Generation Based on a Deep Generative Adversarial Network." Sensors 20, no. 7 (March 25, 2020): 1810. http://dx.doi.org/10.3390/s20071810.

Abstract:
Although face-based biometric recognition systems have been widely used in many applications, this type of recognition method is still vulnerable to presentation attacks, which use fake samples to deceive the recognition system. To overcome this problem, presentation attack detection (PAD) methods for face recognition systems (face-PAD), which aim to classify real and presentation attack face images before performing a recognition task, have been developed. However, the performance of PAD systems is limited and biased due to the lack of presentation attack images for training PAD systems. In this paper, we propose a method for artificially generating presentation attack face images by learning the characteristics of real and presentation attack images using a few captured images. As a result, our proposed method helps save time in collecting presentation attack samples for training PAD systems and possibly enhance the performance of PAD systems. Our study is the first attempt to generate PA face images for PAD system based on CycleGAN network, a deep-learning-based framework for image generation. In addition, we propose a new measurement method to evaluate the quality of generated PA images based on a face-PAD system. Through experiments with two public datasets (CASIA and Replay-mobile), we show that the generated face images can capture the characteristics of presentation attack images, making them usable as captured presentation attack samples for PAD system training.
12

Du, Yuting, Tong Qiao, Ming Xu, and Ning Zheng. "Towards Face Presentation Attack Detection Based on Residual Color Texture Representation." Security and Communication Networks 2021 (March 15, 2021): 1–16. http://dx.doi.org/10.1155/2021/6652727.

Abstract:
Most existing face authentication systems have limitations when facing the challenge raised by presentation attacks, which probably leads to some dangerous activities when using facial unlocking for smart device, facial access to control system, and face scan payment. Accordingly, as a security guarantee to prevent the face authentication from being attacked, the study of face presentation attack detection is developed in this community. In this work, a face presentation attack detector is designed based on residual color texture representation (RCTR). Existing methods lack effective data preprocessing, and we propose to adopt a DW-filter for obtaining the residual image, which can effectively improve the detection efficiency. Subsequently, a powerful CM texture descriptor is introduced, which performs better than widely used descriptors such as LBP or LPQ. Additionally, representative texture features are extracted from not only RGB space but also more discriminative color spaces such as HSV, YCbCr, and CIE 1976 L∗a∗b (LAB). Meanwhile, the RCTR is fed into the well-designed classifier. Specifically, we compare and analyze the performance of advanced classifiers, among which an ensemble classifier based on a probabilistic voting decision is our optimal choice. Extensive experimental results empirically verify the proposed face presentation attack detector’s superior performance both in the cases of intradataset and interdataset (mismatched training-testing samples) evaluation.
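A rough illustration of the multi-colour-space texture idea mentioned above (not the authors' RCTR pipeline, which additionally relies on residual images and a CM descriptor): convert a face crop to HSV, YCbCr, and CIELAB with scikit-image and concatenate per-channel LBP histograms into a single descriptor.

```python
import numpy as np
from skimage.color import rgb2hsv, rgb2ycbcr, rgb2lab
from skimage.feature import local_binary_pattern

def channel_lbp_hist(channel, points=8, radius=1):
    # Rescale the channel to 8-bit before computing a uniform LBP histogram.
    lo, hi = float(channel.min()), float(channel.max())
    scaled = ((channel - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)
    codes = local_binary_pattern(scaled, P=points, R=radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def colour_texture_descriptor(rgb_face):
    # rgb_face: float RGB image in [0, 1], shape (H, W, 3).
    spaces = [rgb2hsv(rgb_face), rgb2ycbcr(rgb_face), rgb2lab(rgb_face)]
    feats = [channel_lbp_hist(space[..., c]) for space in spaces for c in range(3)]
    return np.concatenate(feats)   # 3 spaces * 3 channels * 10 bins = 90-D

face = np.random.rand(64, 64, 3)
print(colour_texture_descriptor(face).shape)   # (90,)
```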
13

Kowalski, Marcin, and Krzysztof Mierzejewski. "Detection of 3D face masks with thermal infrared imaging and deep learning techniques." Photonics Letters of Poland 13, no. 2 (June 30, 2021): 22. http://dx.doi.org/10.4302/plp.v13i2.1091.

Abstract:
Biometric systems are becoming more and more efficient due to increasing performance of algorithms. These systems are also vulnerable to various attacks. Presentation of falsified identity to a biometric sensor is one of the most urgent challenges for recent biometric recognition systems. Exploration of specific properties of thermal infrared seems to be a comprehensive solution for detecting face presentation attacks. This letter presents the outcome of our study on detecting 3D face masks using thermal infrared imaging and deep learning techniques. We demonstrate results of a two-step neural network-featured method for detecting presentation attacks.
14

Wang, Caixun, Bingyao Yu, and Jie Zhou. "A Learnable Gradient operator for face presentation attack detection." Pattern Recognition 135 (March 2023): 109146. http://dx.doi.org/10.1016/j.patcog.2022.109146.

15

Fang, Meiling, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper. "Real masks and spoof faces: On the masked face presentation attack detection." Pattern Recognition 123 (March 2022): 108398. http://dx.doi.org/10.1016/j.patcog.2021.108398.

16

Yu, Bingyao, Jiwen Lu, Xiu Li, and Jie Zhou. "Salience-Aware Face Presentation Attack Detection via Deep Reinforcement Learning." IEEE Transactions on Information Forensics and Security 17 (2022): 413–27. http://dx.doi.org/10.1109/tifs.2021.3135748.

17

Sequeira, Ana F., Tiago Gonçalves, Wilson Silva, João Ribeiro Pinto, and Jaime S. Cardoso. "An exploratory study of interpretability for face presentation attack detection." IET Biometrics 10, no. 4 (June 12, 2021): 441–55. http://dx.doi.org/10.1049/bme2.12045.

18

Raghavendra, R., Kiran B. Raja, and Christoph Busch. "Presentation Attack Detection for Face Recognition Using Light Field Camera." IEEE Transactions on Image Processing 24, no. 3 (March 2015): 1060–75. http://dx.doi.org/10.1109/tip.2015.2395951.

19

Li, Lei, Zhaoqiang Xia, Xiaoyue Jiang, Fabio Roli, and Xiaoyi Feng. "CompactNet: learning a compact space for face presentation attack detection." Neurocomputing 409 (October 2020): 191–207. http://dx.doi.org/10.1016/j.neucom.2020.05.017.

20

Jia, Shan, Guodong Guo, Zhengquan Xu, and Qiangchang Wang. "Face presentation attack detection in mobile scenarios: A comprehensive evaluation." Image and Vision Computing 93 (January 2020): 103826. http://dx.doi.org/10.1016/j.imavis.2019.11.004.

21

Qin, Le, Fei Peng, Min Long, Raghavendra Ramachandra, and Christoph Busch. "Vulnerabilities of Unattended Face Verification Systems to Facial Components-based Presentation Attacks: An Empirical Study." ACM Transactions on Privacy and Security 25, no. 1 (February 28, 2022): 1–28. http://dx.doi.org/10.1145/3491199.

Abstract:
As face presentation attacks (PAs) are realistic threats for unattended face verification systems, face presentation attack detection (PAD) has been intensively investigated in past years, and the recent advances in face PAD have significantly reduced the success rate of such attacks. In this article, an empirical study on a novel and effective face impostor PA is made. In the proposed PA, a facial artifact is created by using the most vulnerable facial components, which are optimally selected based on the vulnerability analysis of different facial components to impostor PAs. An attacker can launch a face PA by presenting a facial artifact on his or her own real face. With a collected PA database containing various types of artifacts and presentation attack instruments (PAIs), the experimental results and analysis show that the proposed PA poses a more serious threat to face verification and PAD systems compared with the print, replay, and mask PAs. Moreover, the generalization ability of the proposed PA and the vulnerability analysis with regard to commercial systems are also investigated by evaluating unknown face verification and real-world PAD systems. It provides a new paradigm for the study of face PAs.
22

H, Vinutha, and Thippeswamy G. "Antispoofing in face biometrics: a comprehensive study on software-based techniques." Computer Science and Information Technologies 4, no. 1 (March 1, 2023): 1–13. http://dx.doi.org/10.11591/csit.v4i1.p1-13.

Abstract:
The vulnerability of the face recognition system to spoofing attacks has piqued the biometric community's interest, motivating them to develop anti-spoofing techniques to secure it. Photo, video, or mask attacks can compromise face biometric systems (types of presentation attacks). Spoofing attacks are detected using liveness detection techniques, which determine whether the facial image presented at a biometric system is a live face or a fake version of it. We discuss the classification of face anti-spoofing techniques in this paper. Anti-spoofing techniques are divided into two categories: hardware and software methods. Hardware-based techniques are summarized briefly. A comprehensive study on software-based countermeasures for presentation attacks is discussed, which are further divided into static and dynamic methods. We cited a few publicly available presentation attack datasets and calculated a few metrics to demonstrate the value of anti-spoofing techniques.
23

Yu, Zitong, Xiaobai Li, Pichao Wang, and Guoying Zhao. "TransRPPG: Remote Photoplethysmography Transformer for 3D Mask Face Presentation Attack Detection." IEEE Signal Processing Letters 28 (2021): 1290–94. http://dx.doi.org/10.1109/lsp.2021.3089908.

24

Costa‐Pazo, Artur, Daniel Pérez‐Cabo, David Jiménez‐Cabello, José Luis Alba‐Castro, and Esteban Vazquez‐Fernandez. "Face presentation attack detection. A comprehensive evaluation of the generalisation problem." IET Biometrics 10, no. 4 (June 28, 2021): 408–29. http://dx.doi.org/10.1049/bme2.12049.

25

Muhammad, Usman, Zitong Yu, and Jukka Komulainen. "Self-supervised 2D face presentation attack detection via temporal sequence sampling." Pattern Recognition Letters 156 (April 2022): 15–22. http://dx.doi.org/10.1016/j.patrec.2022.03.001.

26

Li, Lei, Zhaoqiang Xia, Jun Wu, Lei Yang, and Huijian Han. "Face presentation attack detection based on optical flow and texture analysis." Journal of King Saud University - Computer and Information Sciences 34, no. 4 (April 2022): 1455–67. http://dx.doi.org/10.1016/j.jksuci.2022.02.019.

27

Ma, Yukun, Yaowen Xu, and Fanghao Liu. "Multi-Perspective Dynamic Features for Cross-Database Face Presentation Attack Detection." IEEE Access 8 (2020): 26505–16. http://dx.doi.org/10.1109/access.2020.2971224.

28

George, Anjith, Zohreh Mostaani, David Geissenbuhler, Olegs Nikisins, Andre Anjos, and Sebastien Marcel. "Biometric Face Presentation Attack Detection With Multi-Channel Convolutional Neural Network." IEEE Transactions on Information Forensics and Security 15 (2020): 42–55. http://dx.doi.org/10.1109/tifs.2019.2916652.

29

Wang, Guoqing, Hu Han, Shiguang Shan, and Xilin Chen. "Unsupervised Adversarial Domain Adaptation for Cross-Domain Face Presentation Attack Detection." IEEE Transactions on Information Forensics and Security 16 (2021): 56–69. http://dx.doi.org/10.1109/tifs.2020.3002390.

30

Li, Lei, Zhaoqiang Xia, Xiaoyue Jiang, Yupeng Ma, Fabio Roli, and Xiaoyi Feng. "3D face mask presentation attack detection based on intrinsic image analysis." IET Biometrics 9, no. 3 (March 11, 2020): 100–108. http://dx.doi.org/10.1049/iet-bmt.2019.0155.

31

Sun, Pengcheng, Dan Zeng, Xiaoyan Li, Lin Yang, Liyuan Li, Zhouxia Chen, and Fansheng Chen. "A 3D Mask Presentation Attack Detection Method Based on Polarization Medium Wave Infrared Imaging." Symmetry 12, no. 3 (March 3, 2020): 376. http://dx.doi.org/10.3390/sym12030376.

Abstract:
Facial recognition systems are often spoofed by presentation attack instruments (PAI), especially by the use of three-dimensional (3D) face masks. However, nonuniform illumination conditions and significant differences in facial appearance will lead to the performance degradation of existing presentation attack detection (PAD) methods. Based on conventional thermal infrared imaging, a PAD method based on the medium wave infrared (MWIR) polarization characteristics of the surface material is proposed in this paper for countering a flexible 3D silicone mask presentation attack. A polarization MWIR imaging system for face spoofing detection is designed and built, taking advantage of the fact that polarization-based MWIR imaging is not restricted by external light sources (including visible light and near-infrared light sources) in spite of facial appearance. A sample database of real face images and 3D face mask images is constructed, and the gradient amplitude feature extraction method, based on MWIR polarization facial images, is designed to better distinguish the skin of a real face from the material used to make a 3D mask. Experimental results show that, compared with conventional thermal infrared imaging, polarization-based MWIR imaging is more suitable for the PAD method of 3D silicone masks and shows a certain robustness in the change of facial temperature.
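The gradient-amplitude feature mentioned in this abstract can be pictured with a generic sketch; the polarization MWIR imaging itself requires dedicated hardware, so the code below only illustrates the feature-extraction step on an arbitrary single-channel image.

```python
import numpy as np
from scipy import ndimage

def gradient_amplitude_histogram(image, bins=32):
    # Sobel gradient magnitude of a single-channel image, summarised as a histogram.
    gx = ndimage.sobel(image, axis=1)
    gy = ndimage.sobel(image, axis=0)
    magnitude = np.hypot(gx, gy)
    hist, _ = np.histogram(magnitude, bins=bins,
                           range=(0.0, float(magnitude.max()) + 1e-9), density=True)
    return hist

polarization_frame = np.random.rand(128, 128)   # stand-in for a polarization/MWIR facial image
print(gradient_amplitude_histogram(polarization_frame)[:5])
```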
32

González‐Soler, Lázaro J., Marta Gomez‐Barrero, and Christoph Busch. "On the generalisation capabilities of Fisher vector‐based face presentation attack detection." IET Biometrics 10, no. 5 (May 2, 2021): 480–96. http://dx.doi.org/10.1049/bme2.12041.

33

Nguyen, Hoai Phuong, Anges Delahaies, Florent Retraint, and Frederic Morain-Nicolier. "Face Presentation Attack Detection Based on a Statistical Model of Image Noise." IEEE Access 7 (2019): 175429–42. http://dx.doi.org/10.1109/access.2019.2957273.

34

Rehman, Yasar Abbas Ur, Lai-Man Po, and Jukka Komulainen. "Enhancing deep discriminative feature maps via perturbation for face presentation attack detection." Image and Vision Computing 94 (February 2020): 103858. http://dx.doi.org/10.1016/j.imavis.2019.103858.

35

Pinto, Allan, Siome Goldenstein, Alexandre Ferreira, Tiago Carvalho, Helio Pedrini, and Anderson Rocha. "Leveraging Shape, Reflectance and Albedo From Shading for Face Presentation Attack Detection." IEEE Transactions on Information Forensics and Security 15 (2020): 3347–58. http://dx.doi.org/10.1109/tifs.2020.2988168.

36

Einy, Sajad, Cemil Oz, and Yahya Dorostkar Navaei. "IoT Cloud-Based Framework for Face Spoofing Detection with Deep Multicolor Feature Learning Model." Journal of Sensors 2021 (August 30, 2021): 1–18. http://dx.doi.org/10.1155/2021/5047808.

Abstract:
A face-based authentication system has become an important topic in various fields of IoT applications such as identity validation for social care, crime detection, ATM access, computer security, etc. However, these authentication systems are vulnerable to different attacks. Presentation attacks have become a clear threat for facial biometric-based authentication and security applications. To address this issue, we proposed a deep learning approach for face spoofing detection systems in IoT cloud-based environment. The deep learning approach extracted features from multicolor space to obtain more information from the input face image regarding luminance and chrominance data. These features are combined and selected by the Minimum Redundancy Maximum Relevance (mRMR) algorithm to provide an efficient and discriminate feature set. Finally, the extracted deep color-based features of the face image are used for face spoofing detection in a cloud environment. The proposed method achieves stable results with less training data compared to conventional deep learning methods. This advantage of the proposed approach reduces the time of processing in the training phase and optimizes resource management in storing training data on the cloud. The proposed system was tested and evaluated based on two challenging public access face spoofing databases, namely, Replay-Attack and ROSE-Youtu. The experimental results based on these databases showed that the proposed method achieved satisfactory results compared to the state-of-the-art methods based on an equal error rate (EER) of 0.2% and 3.8%, respectively, for the Replay-Attack and ROSE-Youtu databases.
37

Liu, Si-Qi, Xiangyuan Lan, and Pong C. Yuen. "Multi-Channel Remote Photoplethysmography Correspondence Feature for 3D Mask Face Presentation Attack Detection." IEEE Transactions on Information Forensics and Security 16 (2021): 2683–96. http://dx.doi.org/10.1109/tifs.2021.3050060.

38

Sepas-Moghaddam, Alireza, Fernando Pereira, and Paulo Lobato Correia. "Light Field-Based Face Presentation Attack Detection: Reviewing, Benchmarking and One Step Further." IEEE Transactions on Information Forensics and Security 13, no. 7 (July 2018): 1696–709. http://dx.doi.org/10.1109/tifs.2018.2799427.

39

Dwivedi, Abhishek, and Shekhar Verma. "SCNN Based Classification Technique for the Face Spoof Detection Using Deep Learning Concept." Scientific Temper 13, no. 02 (December 12, 2022): 165–72. http://dx.doi.org/10.58414/scientifictemper.2022.13.2.25.

Abstract:
Face spoofing refers to “tricking” a facial recognition system to gain unauthorized access to a particular system. It is mostly used to steal data and money or spread malware. The malicious impersonation of oneself is a critical component of face spoofing to gain access to a system. It is observed in many identity theft cases, particularly in the financial sector. In 2015, Wen et al. presented experimental results for cutting-edge commercial off-the-shelf face recognition systems. These demonstrated the probability of fake face images being accepted as genuine. The probability could be as high as 70%. Despite this, the vulnerabilities of face recognition systems to attacks were frequently overlooked. The Presentation Attack Detection (PAD) method that determines whether the source of a biometric sample is a live person or a fake representation is known as Liveness Detection. Algorithms are used to accomplish this by analyzing biometric sensor data for the determination of the authenticity of a source.
40

Ma, Yukun, Lifang Wu, Zeyu Li, and Fanghao Liu. "A novel face presentation attack detection scheme based on multi-regional convolutional neural networks." Pattern Recognition Letters 131 (March 2020): 261–67. http://dx.doi.org/10.1016/j.patrec.2020.01.002.

41

Ming, Zuheng, Muriel Visani, Muhammad Muzzamil Luqman, and Jean-Christophe Burie. "A Survey on Anti-Spoofing Methods for Facial Recognition with RGB Cameras of Generic Consumer Devices." Journal of Imaging 6, no. 12 (December 15, 2020): 139. http://dx.doi.org/10.3390/jimaging6120139.

Abstract:
The widespread deployment of facial recognition-based biometric systems has made facial presentation attack detection (face anti-spoofing) an increasingly critical issue. This survey thoroughly investigates facial Presentation Attack Detection (PAD) methods that only require RGB cameras of generic consumer devices over the past two decades. We present an attack scenario-oriented typology of the existing facial PAD methods, and we provide a review of over 50 of the most influential facial PAD methods over the past two decades till today and their related issues. We adopt a comprehensive presentation of the reviewed facial PAD methods following the proposed typology and in chronological order. By doing so, we depict the main challenges, evolutions and current trends in the field of facial PAD and provide insights on its future research. From an experimental point of view, this survey paper provides a summarized overview of the available public databases and an extensive comparison of the results reported in PAD-reviewed papers.
42

Tamtama, Gabriel Indra Widi, and I. Kadek Dendy Senapartha. "Fake Face Detection System Using MobileNets Architecture." CESS (Journal of Computer Engineering, System and Science) 8, no. 2 (July 3, 2023): 329. http://dx.doi.org/10.24114/cess.v8i2.43762.

Abstract:
The facial recognition system is a biometric method that uses faces to identify or verify a person. This technology does not require physical contact such as fingerprint verification and is claimed to be safer because everyone's face has a different character. There are two main phases in a facial biometric system, namely fake face detection (a Presentation Attack (PA) detector) and facial recognition. This study conducted experiments with the aim of building a mobile-based machine learning model to detect fake faces or verify facial authenticity using the MobileNets architecture. Verification of facial authenticity is needed to improve the facial recognition system so that it can distinguish fake faces from real ones. Fake faces can be presented by showing video recordings or pictures of someone's face in order to manipulate the system. A real-face verification method can therefore improve system security and minimize misuse. We use three public datasets, namely Replay-Mobile, Record-MPAD, and LLC-FSAD, to train the anti-spoof model. The facial anti-spoof model is built on the MobileNetV2 architecture with 3 added neural network layers used for classification. Controlled testing with a computer program yielded an HTER of 0.17, while uncontrolled testing with an Android prototype application yielded an HTER of 0.21. The difference of 0.04 in HTER indicates that the facial anti-spoof model's performance tends to decrease when it is used in real-world conditions.
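A minimal transfer-learning sketch of the kind of model described above (a MobileNetV2 backbone with a small classification head); the layer sizes and training settings here are illustrative assumptions, not the study's exact configuration.

```python
import tensorflow as tf

# Frozen ImageNet backbone plus three trainable layers for bona-fide/spoof classification.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # 1 = bona fide, 0 = spoof
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # dataset loading is application-specific
```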
43

George, Anjith, and Sebastien Marcel. "Learning One Class Representations for Face Presentation Attack Detection Using Multi-Channel Convolutional Neural Networks." IEEE Transactions on Information Forensics and Security 16 (2021): 361–75. http://dx.doi.org/10.1109/tifs.2020.3013214.

44

Safaa El‐Din, Yomna, Mohamed N. Moustafa, and Hani Mahdi. "Deep convolutional neural networks for face and iris presentation attack detection: survey and case study." IET Biometrics 9, no. 5 (July 29, 2020): 179–93. http://dx.doi.org/10.1049/iet-bmt.2020.0004.

45

Galdi, Chiara, Valeria Chiesa, Christoph Busch, Paulo Lobato Correia, Jean-Luc Dugelay, and Christine Guillemot. "Light Fields for Face Analysis." Sensors 19, no. 12 (June 14, 2019): 2687. http://dx.doi.org/10.3390/s19122687.

Abstract:
The term “plenoptic” comes from the Latin words plenus (“full”) + optic. The plenoptic function is the 7-dimensional function representing the intensity of the light observed from every position and direction in 3-dimensional space. Thanks to the plenoptic function it is thus possible to define the direction of every ray in the light-field vector function. Imaging systems are rapidly evolving with the emergence of light-field-capturing devices. Consequently, existing image-processing techniques need to be revisited to match the richer information provided. This article explores the use of light fields for face analysis. This field of research is very recent but already includes several works reporting promising results. Such works deal with the main steps of face analysis and include but are not limited to: face recognition; face presentation attack detection; facial soft-biometrics classification; and facial landmark detection. This article aims to review the state of the art on light fields for face analysis, identifying future challenges and possible applications.
46

Peng, Fei, Le Qin, and Min Long. "Face presentation attack detection based on chromatic co-occurrence of local binary pattern and ensemble learning." Journal of Visual Communication and Image Representation 66 (January 2020): 102746. http://dx.doi.org/10.1016/j.jvcir.2019.102746.

47

Shen, Meng, Yaqian Wei, Zelin Liao, and Liehuang Zhu. "IriTrack." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 2 (June 23, 2021): 1–21. http://dx.doi.org/10.1145/3463515.

Abstract:
With a growing adoption of face authentication systems in various application scenarios, face Presentation Attack Detection (PAD) has become of great importance to withstand artefacts. Existing methods of face PAD generally focus on designing intelligent classifiers or customized hardware to differentiate between the image or video samples of a real legitimate user and the imitated ones. Although effective, they can be resource-consuming and suffer from performance degradation due to environmental changes. In this paper, we propose IriTrack, which is a simple and efficient PAD system that takes iris movement as a significant evidence to identify face artefacts. More concretely, users are required to move their eyes along with a randomly generated poly-line, where the resulting trajectories of their irises are used as an evidence for PAD i.e., a presentation attack will be identified if the deviation of one's actual iris trajectory from the given poly-line exceeds a threshold. The threshold is carefully selected to balance the latency and accuracy of PAD. We have implemented a prototype and conducted extensive experiments to evaluate the performance of the proposed system. The results show that IriTrack can defend against artefacts with moderate time and memory overheads.
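The core check described above, comparing the recorded iris trajectory against the prompted poly-line and thresholding the deviation, can be sketched with simple point-to-segment geometry; the threshold value below is an arbitrary placeholder, not the paper's.

```python
import numpy as np

def point_to_segment(p, a, b):
    # Euclidean distance from point p to the line segment a-b (2-D numpy arrays).
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def trajectory_deviation(trajectory, polyline):
    # Mean distance from each tracked iris position to the nearest poly-line segment.
    segments = list(zip(polyline[:-1], polyline[1:]))
    return float(np.mean([min(point_to_segment(p, a, b) for a, b in segments)
                          for p in trajectory]))

polyline = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])          # prompted path
trajectory = np.array([[0.10, 0.05], [0.50, -0.02], [1.02, 0.40], [0.97, 0.90]])
DEVIATION_THRESHOLD = 0.1                                           # placeholder, not from the paper
deviation = trajectory_deviation(trajectory, polyline)
print(deviation, "attack" if deviation > DEVIATION_THRESHOLD else "bona fide")
```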
48

Hassani, Ali, Jon Diedrich, and Hafiz Malik. "Monocular Facial Presentation–Attack–Detection: Classifying Near-Infrared Reflectance Patterns." Applied Sciences 13, no. 3 (February 3, 2023): 1987. http://dx.doi.org/10.3390/app13031987.

Abstract:
This paper presents a novel material spectroscopy approach to facial presentation–attack–defense (PAD). Best-in-class PAD methods typically detect artifacts in the 3D space. This paper proposes similar features can be achieved in a monocular, single-frame approach by using controlled light. A mathematical model is produced to show how live faces and their spoof counterparts have unique reflectance patterns due to geometry and albedo. A rigorous dataset is collected to evaluate this proposal: 30 diverse adults and their spoofs (paper-mask, display-replay, spandex-mask and COVID mask) under varied pose, position, and lighting for 80,000 unique frames. A panel of 13 texture classifiers are then benchmarked to verify the hypothesis. The experimental results are excellent. The material spectroscopy process enables a conventional MobileNetV3 network to achieve 0.8% average-classification-error rate, outperforming the selected state-of-the-art algorithms. This demonstrates the proposed imaging methodology generates extremely robust features.
49

B R, Rohini, Yogish H K, and Deepa Y. "Deep Learning Based Challenge Response Liveliness Matching for Presentation Attack Detection in Face Recognition Biometric Authentication Systems." Indian Journal of Computer Science and Engineering 13, no. 3 (June 20, 2022): 709–20. http://dx.doi.org/10.21817/indjcse/2022/v13i3/221303063.

50

Favorskaya, Margarita N., and Andrey I. Pakhirka. "Building depth maps for detection of presentation attacks in face recognition systems." Информационные и математические технологии в науке и управлении, no. 3 (2022): 40–48. http://dx.doi.org/10.38028/esi.2022.27.3.005.
