Journal articles on the topic "Unconstrained face recognition"

To see other types of publications on this topic, follow the link: Unconstrained face recognition.


Consult the top 50 journal articles for your research on the topic "Unconstrained face recognition".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide range of disciplines and organise your bibliography correctly.

1

Deng, Weihong, Jiani Hu, Zhongjun Wu, and Jun Guo. "Lighting-aware face frontalization for unconstrained face recognition." Pattern Recognition 68 (August 2017): 260–71. http://dx.doi.org/10.1016/j.patcog.2017.03.024.

2

Masi, Iacopo, Anh Tuấn Trần, Tal Hassner, Gozde Sahin, and Gérard Medioni. "Face-Specific Data Augmentation for Unconstrained Face Recognition." International Journal of Computer Vision 127, no. 6-7 (April 1, 2019): 642–67. http://dx.doi.org/10.1007/s11263-019-01178-0.

3

Tyagi, Ranbeer, Geetam Singh Tomar, and Laxmi Shrivastava. "Unconstrained Face Recognition Quality: A Review." International Journal of Signal Processing, Image Processing and Pattern Recognition 9, no. 11 (November 30, 2016): 199–210. http://dx.doi.org/10.14257/ijsip.2016.9.11.18.

4

Vinay, A., Abhijay Gupta, Aprameya Bharadwaj, Arvind Srinivasan, K. N. Balasubramanya Murthy, and S. Natarajan. "Unconstrained Face Recognition using Bayesian Classification." Procedia Computer Science 143 (2018): 519–27. http://dx.doi.org/10.1016/j.procs.2018.10.425.

5

Rifaee, Mustafa, Mohammad Al Rawajbeh, Basem AlOkosh, and Farhan AbdelFattah. "A New approach to Recognize Human Face Under Unconstrained Environment." International Journal of Advances in Soft Computing and its Applications 14, no. 2 (July 20, 2022): 2–13. http://dx.doi.org/10.15849/ijasca.220720.01.

Abstract:
The human face is considered one of the most useful traits in biometrics and has been widely used in education, security, military and many other applications. However, most currently deployed face recognition systems assume ideal imaging conditions, capturing fully featured images of sufficient quality to perform recognition. As unmasked faces would have a considerable impact on the number of new infections in the era of the COVID-19 pandemic, a new unconstrained partial facial recognition method must be developed. In this research we propose a mask detection method based on the HOG (Histogram of Oriented Gradients) feature descriptor and an SVM (Support Vector Machine) to determine whether a face is masked or not. The proposed method was tested on 10,000 randomly selected images from the MaskedFace-Net database and correctly classified 98.73% of the tested images. Moreover, to extract sufficient features from partially occluded face images, a new geometrical feature extraction algorithm based on the Contourlet transform is proposed. The method achieved 97.86% recognition accuracy when tested on 4,784 correctly masked face images from the MaskedFace-Net database. Keywords: facial recognition, unconstrained conditions, masked faces, HOG, Support Vector Machine.
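The descriptor at the core of the mask-detection step above can be sketched briefly. The following NumPy snippet is an illustrative simplification, not the authors' implementation: it computes a single magnitude-weighted histogram of gradient orientations over the whole image (real HOG divides the image into cells and applies block normalization, and the paper pairs the descriptor with an SVM classifier).

```python
import numpy as np

def grad_orientation_histogram(img, bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations,
    the core idea behind HOG, computed over the whole image for brevity."""
    gy, gx = np.gradient(img.astype(float))        # row (y) and column (x) derivatives
    mag = np.hypot(gx, gy)                          # gradient magnitude per pixel
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation in [0, 180)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist              # L2-normalized descriptor
```

A vertical step edge, for example, puts nearly all of its energy into the 0-degree bin, which is what makes such histograms discriminative for edge-dominated patterns like mask boundaries.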
6

Yu, Aihua, Gang Li, Beiping Hou, Hongan Wang, and Gaoya Zhou. "A novel framework for face recognition using robust local representation–based classification." International Journal of Distributed Sensor Networks 15, no. 3 (March 2019): 155014771983608. http://dx.doi.org/10.1177/1550147719836082.

Abstract:
Face recognition via representation-based classification has been a trending technique in recent years. However, the recognition performance of systems using this technique degrades in an unconstrained environment. In this article, a novel framework is proposed for representation-based face recognition. To deal with the unconstrained environment, a pre-processing step is used to frontalize face images, and aligned-downsampling local binary pattern features of the frontalized images are used for classification. Dimension reduction is then adopted to reduce computational complexity via an optimized projection matrix. The recognition is carried out using an improved robust sparse coding algorithm, which is expected to avoid the overfitting problem. The open-universe test on the Labeled Faces in the Wild dataset shows that the recognition rate of the proposed system can reach 95% with a recall rate of 80%, the best among representation-based classification face recognition systems.
7

TORBATI, Ali, and Önsen TOYGAR. "MASKED AND UNMASKED FACE RECOGNITION ON UNCONSTRAINED FACIAL IMAGES USING HAND-CRAFTED METHODS." Kahramanmaraş Sütçü İmam Üniversitesi Mühendislik Bilimleri Dergisi 26, Özel Sayı (December 12, 2023): 1133–39. http://dx.doi.org/10.17780/ksujes.1339868.

Abstract:
In this study, face recognition is applied to masked and unmasked faces using hand-crafted methods. Owing to COVID-19 and mask wearing, facial identification from unconstrained images has become a hot topic. To avoid COVID-19, most people wear masks outdoors, and in many cases typical facial recognition technology is useless. The majority of contemporary advanced face recognition methods are based on deep learning, which relies primarily on a huge number of training examples; however, masked face recognition may be investigated using hand-crafted approaches at a lower computational cost than deep learning systems. The aim is to construct a low-cost system for recognizing masked faces and to compare its performance to that of face recognition systems for unmasked faces. The proposed method fuses hand-crafted methods using a feature-level fusion strategy, and this study compares the performance of masked and unmasked face recognition systems. The experiments are undertaken on two publicly accessible datasets for masked face recognition: Masked Labeled Faces in the Wild (MLFW) and Cross-Age Labeled Faces in the Wild (CALFW). The best accuracy achieved is 94.8% on the MLFW dataset. The rest of the results on different train and test sets from the CALFW and MLFW datasets are encouraging compared to state-of-the-art models.
8

Ruan, Shuai, Chaowei Tang, Xu Zhou, Zhuoyi Jin, Shiyu Chen, Haotian Wen, Hongbin Liu, and Dong Tang. "Multi-Pose Face Recognition Based on Deep Learning in Unconstrained Scene." Applied Sciences 10, no. 13 (July 7, 2020): 4669. http://dx.doi.org/10.3390/app10134669.

Abstract:
At present, deep learning drives the rapid development of face recognition. However, in unconstrained scenarios, changes in facial pose have a great impact on face recognition, and current models still have shortcomings in accuracy and robustness. Existing research has formulated two methods to solve these problems. One method is to model and train each pose separately and then make a fusion decision. The other is to synthesize "frontal" faces at the image or feature level and transform the task into frontal face recognition. Based on the second idea, we propose a profile-to-frontal revise mapping (PTFRM) module. This module revises arbitrary poses at the feature level and transforms multi-pose features into an approximately frontal representation to enhance the recognition ability of existing recognition models. Finally, we evaluate PTFRM on unconstrained face validation benchmark datasets such as Labeled Faces in the Wild (LFW), Celebrities in Frontal Profile (CFP), and the IARPA Janus Benchmark A (IJB-A). Results show that the proposed method achieves good performance.
9

Tong, Ying, Jiachao Zhang, and Rui Chen. "Discriminative Sparsity Graph Embedding for Unconstrained Face Recognition." Electronics 8, no. 5 (May 7, 2019): 503. http://dx.doi.org/10.3390/electronics8050503.

Abstract:
In this paper, we propose a new dimensionality reduction method named Discriminative Sparsity Graph Embedding (DSGE), which considers local structure information and global distribution information simultaneously. Firstly, we adopt an intra-class compactness constraint to automatically construct the intrinsic adjacency graph, which enhances the reconstruction relationship between a given sample and non-neighbor samples of the same class. Meanwhile, an inter-class compactness constraint is exploited to construct the penalty adjacency graph, which reduces the reconstruction influence between a given sample and pseudo-neighbor samples of different classes. Then, global distribution constraints are introduced into the projection objective function to seek the optimal subspace, which compacts intra-class samples and separates inter-class samples at the same time. Extensive experiments are carried out on the AR, Extended Yale B, LFW and PubFig databases, four representative face datasets, and the experimental results illustrate the effectiveness of the proposed method.
10

Agrawal, Amrit Kumar, and Yogendra Narain Singh. "Unconstrained face recognition using deep convolution neural network." International Journal of Information and Computer Security 12, no. 2/3 (2020): 332. http://dx.doi.org/10.1504/ijics.2020.10026788.

11

Agrawal, Amrit Kumar, and Yogendra Narain Singh. "Unconstrained face recognition using deep convolution neural network." International Journal of Information and Computer Security 12, no. 2/3 (2020): 332. http://dx.doi.org/10.1504/ijics.2020.105183.

12

Agrawal, Amrit Kumar, and Yogendra Narain Singh. "Evaluation of Face Recognition Methods in Unconstrained Environments." Procedia Computer Science 48 (2015): 644–51. http://dx.doi.org/10.1016/j.procs.2015.04.147.

13

Zhang, Monica M. Y., Kun Shang, and Huaming Wu. "Deep compact discriminative representation for unconstrained face recognition." Signal Processing: Image Communication 75 (July 2019): 118–27. http://dx.doi.org/10.1016/j.image.2019.03.015.

14

Tyagi, Ranbeer. "Reference Face Based Technique for Unconstrained Face Recognition from Images Gallery." International Journal of Computer Graphics 10, no. 1 (November 30, 2019): 1–16. http://dx.doi.org/10.21742/ijcg.2019.10.1.01.

15

Khalifa, Aly, Ahmed A. Abdelrahman, Dominykas Strazdas, Jan Hintz, Thorsten Hempel, and Ayoub Al-Hamadi. "Face Recognition and Tracking Framework for Human–Robot Interaction." Applied Sciences 12, no. 11 (May 30, 2022): 5568. http://dx.doi.org/10.3390/app12115568.

Abstract:
Recently, face recognition has become a key element in social cognition, used in various applications including human–robot interaction (HRI), pedestrian identification, and surveillance systems. Deep convolutional neural networks (CNNs) have achieved notable progress in recognizing faces. However, achieving accurate, real-time face recognition is still a challenging problem, especially in unconstrained environments, due to occlusion, lighting conditions, and the diversity of head poses. In this paper, we present a robust face recognition and tracking framework for unconstrained settings. We developed our framework based on lightweight CNNs for all face recognition stages, including face detection, alignment and feature extraction, to achieve higher accuracy in these challenging circumstances while maintaining the real-time capabilities required for HRI systems. To maintain accuracy, single-shot multi-level face localization in the wild (RetinaFace) is utilized for face detection, and the additive angular margin loss (ArcFace) is employed for recognition. For further enhancement, we introduce a face tracking algorithm that combines the information from tracked faces with the recognized identity for use in subsequent frames; this tracking algorithm improves the overall processing time and accuracy. The proposed system's performance is tested in real-time experiments applied in an HRI study. Our framework achieves real-time capabilities with an average of 99% precision, 95% recall, and a 97% F-score. In addition, we implemented our system as a modular ROS package that makes it straightforward to integrate into different real-world HRI systems.
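The recognition step in a pipeline like the one above typically reduces to comparing an ArcFace-style embedding of the detected face against a gallery of enrolled embeddings by cosine similarity. The following is a minimal NumPy sketch of that matching step only; the function name, gallery layout, and threshold are illustrative assumptions, not part of the paper's framework.

```python
import numpy as np

def match_identity(embedding, gallery, threshold=0.5):
    """Match a face embedding against a gallery of enrolled embeddings
    using cosine similarity; returns (name, score), or (None, score)
    when the best similarity falls below the decision threshold."""
    names = list(gallery)
    G = np.stack([gallery[n] for n in names])
    G = G / np.linalg.norm(G, axis=1, keepdims=True)   # unit-normalize gallery
    q = embedding / np.linalg.norm(embedding)          # unit-normalize query
    sims = G @ q                                       # cosine similarity to each entry
    best = int(np.argmax(sims))
    score = float(sims[best])
    return (names[best], score) if score >= threshold else (None, score)
```

Rejecting matches below a threshold is what distinguishes open-set identification (unknown visitors are possible) from simple nearest-neighbor classification.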
16

Moghekar, Rajeshwar, and Sachin Ahuja. "Deep Learning Model for Face Recognition in Unconstrained Environment." Journal of Computational and Theoretical Nanoscience 16, no. 10 (October 1, 2019): 4309–12. http://dx.doi.org/10.1166/jctn.2019.8518.

Abstract:
Face recognition from videos is a challenging problem, as the captured face images vary in pose, occlusion, blur and resolution. It has many applications, including security monitoring and authentication. A subset of the Indian Movie Face Database (IMFDB), a collection of actors' face images retrieved from movies/videos that vary in blur, pose, noise and illumination, is used in our work. Our work focuses on pre-trained deep learning models and applies transfer learning to the features extracted from the CNN layers, which we then compare with a fine-tuned model. The results show an accuracy of 99.89% using the CNN as a feature extractor and 96.3% when fine-tuning VGG-Face. The fine-tuned VGG-Face network learnt more generic features compared with its transfer learning counterpart. When applied to VGG16, transfer learning achieved 93.9%.
17

M. P, Milan. "CHALLENGES IN FACE RECOGNITION TECHNIQUE." Journal of University of Shanghai for Science and Technology 23, no. 07 (July 24, 2021): 1201–4. http://dx.doi.org/10.51201/jusst/21/07253.

Abstract:
Face detection is an application capable of detecting, tracking, and recognizing human faces in images or video captured by a camera. Many advances have been made in face recognition for security, identification, and attendance purposes, but it is still difficult to match human-like accuracy. There are various challenges in facial appearance, such as lighting conditions, image noise, scale, and presentation. Unconstrained face detection remains a difficult problem due to intra-class variations caused by occlusion, disguise, unpredictable orientations, facial expressions, age variations, etc. The detection rate of face recognition algorithms is low under these conditions. With the popularity of AI in recent years, a massive number of enterprises have deployed AI algorithms in real-life settings. It is clear that face patterns observed by machines depend strongly on variations such as pose, lighting environment, and location.
18

Borovikov, Eugene, Szilard Vajda, and Michael Gill. "Face Match for Family Reunification." International Journal of Computer Vision and Image Processing 7, no. 2 (April 2017): 19–35. http://dx.doi.org/10.4018/ijcvip.2017040102.

Abstract:
Despite the many advances in face recognition technology, practical face detection and matching for unconstrained images remain challenging. A real-world Face Image Retrieval (FIR) system is described in this paper. It is based on an optimally weighted image descriptor ensemble utilized in a single-image-per-person (SIPP) approach that works with large unconstrained digital photo collections. The described visual search can be deployed in many applications, e.g. locating persons in post-disaster scenarios, helping families reunite more quickly. It provides efficient means for face detection, matching and annotation, works with images of variable quality, requires no time-consuming training, and shows commercial-level performance.
19

Ramos-Cooper, Solange, Erick Gomez-Nieto, and Guillermo Camara-Chavez. "VGGFace-Ear: An Extended Dataset for Unconstrained Ear Recognition." Sensors 22, no. 5 (February 23, 2022): 1752. http://dx.doi.org/10.3390/s22051752.

Abstract:
Recognition using ear images has been an active field of research in recent years. Besides faces and fingerprints, ears have a unique structure suitable for identifying people and can be captured from a distance, contactlessly, and without the subject's cooperation. They therefore represent an appealing choice for building surveillance, forensic, and security applications. However, many techniques used in those applications—e.g., convolutional neural networks (CNNs)—usually demand large-scale datasets for training. This work introduces a new dataset of ear images taken under uncontrolled conditions that presents high inter-class and intra-class variability. We built this dataset using an existing face dataset called VGGFace, which gathers more than 3.3 million images. In addition, we perform ear recognition using transfer learning with CNNs pretrained on image and face recognition. Finally, we performed two experiments on two unconstrained datasets and report our results using rank-based metrics.
20

Zheng, Jingxiao, Rajeev Ranjan, Ching-Hui Chen, Jun-Cheng Chen, Carlos D. Castillo, and Rama Chellappa. "An Automatic System for Unconstrained Video-Based Face Recognition." IEEE Transactions on Biometrics, Behavior, and Identity Science 2, no. 3 (July 2020): 194–209. http://dx.doi.org/10.1109/tbiom.2020.2973504.

21

Zhao, Jian, Lin Xiong, Jianshu Li, Junliang Xing, Shuicheng Yan, and Jiashi Feng. "3D-Aided Dual-Agent GANs for Unconstrained Face Recognition." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 10 (October 1, 2019): 2380–94. http://dx.doi.org/10.1109/tpami.2018.2858819.

22

Chen, Yi-Chen, Vishal M. Patel, P. Jonathon Phillips, and Rama Chellappa. "Dictionary-Based Face and Person Recognition From Unconstrained Video." IEEE Access 3 (2015): 1783–98. http://dx.doi.org/10.1109/access.2015.2485400.

23

Selvi, Murugesan Chengathir, and Karuppiah Muneeswaran. "Unconstrained face recognition in surveillance videos using moment invariants." International Journal of Biomedical Engineering and Technology 25, no. 2/3/4 (2017): 282. http://dx.doi.org/10.1504/ijbet.2017.087729.

24

Selvi, Murugesan Chengathir, and Karuppiah Muneeswaran. "Unconstrained face recognition in surveillance videos using moment invariants." International Journal of Biomedical Engineering and Technology 25, no. 2/3/4 (2017): 282. http://dx.doi.org/10.1504/ijbet.2017.10008626.

25

Anubha Pearline, S. "FACE RECOGNITION UNDER VARYING BLUR IN AN UNCONSTRAINED ENVIRONMENT." International Journal of Research in Engineering and Technology 05, no. 04 (April 25, 2016): 376–81. http://dx.doi.org/10.15623/ijret.2016.0504070.

26

Gao, Yongbin, and Hyo Jong Lee. "Learning warps based similarity for pose-unconstrained face recognition." Multimedia Tools and Applications 77, no. 2 (January 24, 2017): 1927–42. http://dx.doi.org/10.1007/s11042-017-4359-9.

27

Sharma, Poonam. "Face recognition under unconstrained environment for videos from internet." CSI Transactions on ICT 8, no. 2 (June 2020): 241–48. http://dx.doi.org/10.1007/s40012-020-00302-7.

28

Lv, Jiang-Jing, Cheng Cheng, Guo-Dong Tian, Xiang-Dong Zhou, and Xi Zhou. "Landmark perturbation-based data augmentation for unconstrained face recognition." Signal Processing: Image Communication 47 (September 2016): 465–75. http://dx.doi.org/10.1016/j.image.2016.03.011.

29

Haghighat, Mohammad, Mohamed Abdel-Mottaleb, and Wadee Alhalabi. "Fully automatic face normalization and single sample face recognition in unconstrained environments." Expert Systems with Applications 47 (April 2016): 23–34. http://dx.doi.org/10.1016/j.eswa.2015.10.047.

30

Beham, M. Parisa, S. M. Mansoor Roomi, J. Alageshan, and V. Kapileshwaran. "Performance Analysis of Pose Invariant Face Recognition Approaches in Unconstrained Environments." International Journal of Computer Vision and Image Processing 5, no. 1 (January 2015): 66–81. http://dx.doi.org/10.4018/ijcvip.2015010104.

Abstract:
Face recognition and authentication are two significant and dynamic research issues in computer vision applications. Many factors must be accounted for in face recognition; among them, pose variation is a major challenge which severely influences performance. To improve performance, several methods have been developed to perform face recognition under pose-invariant conditions in constrained and unconstrained environments. In this paper, the authors analyze the performance of popular texture descriptors, viz. the Local Binary Pattern, Local Derivative Pattern and Histograms of Oriented Gradients, on the pose-invariance problem. State-of-the-art preprocessing techniques such as the Discrete Cosine Transform, Difference of Gaussians, Multi-Scale Retinex and Gradientface have also been applied before feature extraction. In the recognition phase, a K-nearest neighbor classifier is used to accomplish the classification task. To evaluate the efficiency of the pose-invariant face recognition algorithms, three publicly available databases, viz. the UMIST, ORL and LFW datasets, have been used. These databases have very wide pose variations, and it is shown that the state-of-the-art methods are efficient only in constrained situations.
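The descriptor-plus-KNN pipeline the abstract describes can be sketched compactly. The snippet below is an illustrative NumPy toy, not the paper's implementation: it computes a basic 8-neighbour Local Binary Pattern histogram over the whole image (the paper's variants use cell-wise histograms and additional descriptors) and classifies with a 1-nearest-neighbour rule.

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour Local Binary Pattern histogram, one of the texture
    descriptors the paper compares; no cell division, for brevity."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]                       # center pixels (borders skipped)
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        codes |= (nb >= c).astype(np.uint8) << bit   # set bit if neighbour >= center
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                  # normalized 256-bin histogram

def knn_predict(query_hist, train_hists, train_labels):
    """1-nearest-neighbour classification, as in the paper's recognition phase."""
    d = np.linalg.norm(np.asarray(train_hists) - query_hist, axis=1)
    return train_labels[int(np.argmin(d))]
```

In the paper's setting, each gallery face contributes one (cell-wise) histogram, and a probe face is assigned the label of its closest gallery histogram.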
31

Amjed, Noor, Fatimah Khalid, Rahmita Wirza O. K. Rahmat, and Hizmawati Bint Madzin. "A Robust Geometric Skin Colour Face Detection Method under Unconstrained Environment of Smartphone Database." Applied Mechanics and Materials 892 (June 2019): 31–37. http://dx.doi.org/10.4028/www.scientific.net/amm.892.31.

Abstract:
Face detection is the primary task in building a vision-based human-computer interaction system and in special applications such as face recognition, face tracking, face identification, expression recognition and content-based image retrieval. A potent face detection system must be able to detect faces irrespective of illumination, shadows, cluttered backgrounds, orientation and facial expressions. Many approaches to face detection have been proposed in the literature; however, face detection in outdoor images with uncontrolled illumination and in images with complex backgrounds is still a serious problem. Hence, in this paper, we propose a Geometric Skin Colour (GSC) method for detecting faces accurately in real-world images, captured both indoors and outdoors, under a variety of illuminations and in cluttered backgrounds. The method was evaluated on two different face-video smartphone databases, and the obtained results prove that the proposed method outperforms others under the unconstrained environment of these databases.
32

Zhuang, Weiwei, Liang Chen, Chaoqun Hong, Yuxin Liang, and Keshou Wu. "FT-GAN: Face Transformation with Key Points Alignment for Pose-Invariant Face Recognition." Electronics 8, no. 7 (July 19, 2019): 807. http://dx.doi.org/10.3390/electronics8070807.

Abstract:
Face recognition has been comprehensively studied. However, face recognition in the wild still suffers from unconstrained face directions. Frontal face synthesis is a popular solution, but some facial features are missed after synthesis. This paper presents a novel method for pose-invariant face recognition. It is based on face transformation with key points alignment based on generative adversarial networks (FT-GAN). In this method, we introduce CycleGAN for pixel transformation to achieve coarse face transformation results, and these results are refined by key point alignment. In this way, frontal face synthesis is modeled as a two-task process. The results of comprehensive experiments show the effectiveness of FT-GAN.
33

Wang, Dongshu, Heshan Wang, Jiwen Sun, Jianbin Xin, and Yong Luo. "Face Recognition in Complex Unconstrained Environment with An Enhanced WWN Algorithm." Journal of Intelligent Systems 30, no. 1 (July 3, 2020): 18–39. http://dx.doi.org/10.1515/jisys-2019-0114.

Abstract:
Face recognition is one of the core and challenging issues in the computer vision field. Compared to computer vision systems, the human visual system can identify a target in complex backgrounds quickly and accurately. This paper proposes a new network model derived from Where-What Networks (WWNs), which can approximately simulate the information processing pathways (i.e., the dorsal and ventral pathways) of the human visual cortex and recognize different types of faces at different locations and sizes in complex backgrounds. To enhance recognition performance, a synapse maintenance mechanism and a neuron regenesis mechanism are both introduced. Synapse maintenance is used to reduce background interference, while the neuron regenesis mechanism regulates neuron resources dynamically to improve network usage efficiency. Experiments have been conducted on human face images of 5 types, 11 sizes, and 225 locations in complex backgrounds. The results demonstrate that the proposed WWN model can learn three concepts (type, location and size) simultaneously, and show the advantages of the enhanced WWN-7 model for face recognition in comparison with several existing methods.
34

Lakshmi, Napa, and Megha P. Arakeri. "A novel sketch based face recognition in unconstrained video for criminal investigation." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 2 (April 1, 2023): 1499. http://dx.doi.org/10.11591/ijece.v13i2.pp1499-1509.

Abstract:
Face recognition in video surveillance helps to identify an individual by comparing the facial features of a given photograph or sketch with a video for criminal investigations. Generally, a face sketch is used by the police when a suspect's photo is not available. Manually matching a facial sketch against the suspect's image in a long video is a tedious and time-consuming task. To overcome these drawbacks, this paper proposes an accurate face recognition technique to recognize a person based on a sketch in unconstrained video surveillance. In the proposed method, a surveillance video and a sketch of the suspect are taken as input. Firstly, the input video is converted into frames and summarized using the proposed quality-indexed three-step cross search algorithm. Next, faces are detected by the proposed modified Viola-Jones algorithm. Then, the necessary features are selected using the proposed salp-cat optimization algorithm. Finally, these features are fused with scale-invariant feature transform (SIFT) features, and the Euclidean distance is computed between the feature vectors of the sketch and each face in the video. The face in the video with the lowest Euclidean distance to the query sketch is considered the suspect's face. The proposed method's performance is analyzed on the ChokePoint dataset, and the system works efficiently with 89.02% precision, 91.25% recall and 90.13% F-measure.
35

S., Yallamandaiah, and Purnachand N. "An effective face recognition method using guided image filter and convolutional neural network." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 3 (September 1, 2021): 1699. http://dx.doi.org/10.11591/ijeecs.v23.i3.pp1699-1707.

Abstract:
In the area of computer vision, face recognition is a challenging task because of pose, facial expression, and illumination variations, and the performance of face recognition systems drops in an unconstrained environment. In this work, a new face recognition approach is proposed using a guided image filter and a convolutional neural network (CNN). The guided image filter is a smoothing operator that performs well near edges. Initially, the Viola-Jones algorithm is used to detect the face region, which is then smoothed by a guided image filter. The proposed CNN is then used to extract features and recognize faces. The experiments were performed on face databases such as ORL, JAFFE, and YALE, attaining recognition rates of 98.33%, 99.53%, and 98.65% respectively. The experimental results show that the suggested face recognition method attains better results than some state-of-the-art techniques.
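The guided image filter mentioned above (He et al.) is simple enough to sketch in a few lines of NumPy. The version below is self-guided (the image acts as its own guide, which yields edge-preserving smoothing) with illustrative radius and regularization values; it is a stand-in for the smoothing step the paper describes, not the authors' code.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _box(img, r):
    """Mean over a (2r+1) x (2r+1) window, with edge padding."""
    p = np.pad(img, r, mode="edge")
    return sliding_window_view(p, (2*r + 1, 2*r + 1)).mean(axis=(-2, -1))

def guided_filter(I, p, r=4, eps=1e-2):
    """Guided image filter: smooth p using guide I. In flat regions the
    output approaches the local mean; near strong edges of I the linear
    model a*I + b keeps the edge sharp."""
    mI, mp = _box(I, r), _box(p, r)
    cov_Ip = _box(I * p, r) - mI * mp          # local covariance of guide and input
    var_I = _box(I * I, r) - mI * mI           # local variance of the guide
    a = cov_Ip / (var_I + eps)                 # per-window linear coefficient
    b = mp - a * mI
    return _box(a, r) * I + _box(b, r)         # average coefficients, then apply
```

Larger `eps` gives stronger smoothing; as `eps` approaches zero the self-guided filter approaches the identity, which is a handy sanity check.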
36

BARR, JEREMIAH R., KEVIN W. BOWYER, PATRICK J. FLYNN, and SOMA BISWAS. "FACE RECOGNITION FROM VIDEO: A REVIEW." International Journal of Pattern Recognition and Artificial Intelligence 26, no. 05 (August 2012): 1266002. http://dx.doi.org/10.1142/s0218001412660024.

Abstract:
Driven by key law enforcement and commercial applications, research on face recognition from video sources has intensified in recent years. The ensuing results have demonstrated that videos possess unique properties that allow both humans and automated systems to perform recognition accurately in difficult viewing conditions. However, significant research challenges remain as most video-based applications do not allow for controlled recordings. In this survey, we categorize the research in this area and present a broad and deep review of recently proposed methods for overcoming the difficulties encountered in unconstrained settings. We also draw connections between the ways in which humans and current algorithms recognize faces. An overview of the most popular and difficult publicly available face video databases is provided to complement these discussions. Finally, we cover key research challenges and opportunities that lie ahead for the field as a whole.
37

Tyagi, Ranbeer, Geetam Singh Tomar, and Namkyun Baik. "A Survey of Unconstrained Face Recognition Algorithm and Its Applications." International Journal of Security and Its Applications 10, no. 12 (December 31, 2016): 369–76. http://dx.doi.org/10.14257/ijsia.2016.10.12.30.

38

Prabhu, U., Jingu Heo, and M. Savvides. "Unconstrained Pose-Invariant Face Recognition Using 3D Generic Elastic Models." IEEE Transactions on Pattern Analysis and Machine Intelligence 33, no. 10 (October 2011): 1952–61. http://dx.doi.org/10.1109/tpami.2011.123.

39

Santiago-Ramírez, Everardo, J. A. González-Fraga, and Sixto Lázaro-Martínez. "Face recognition and tracking using unconstrained non-linear correlation filters." Procedia Engineering 35 (2012): 192–201. http://dx.doi.org/10.1016/j.proeng.2012.04.180.

40

Chen, Guanhao, Yanqing Shao, Chaowei Tang, Zhuoyi Jin, and Jinkun Zhang. "Deep transformation learning for face recognition in the unconstrained scene." Machine Vision and Applications 29, no. 3 (January 12, 2018): 513–23. http://dx.doi.org/10.1007/s00138-018-0907-1.

41

Pinto, Nicolas, and David D. Cox. "High-throughput-derived biologically-inspired features for unconstrained face recognition." Image and Vision Computing 30, no. 3 (March 2012): 159–68. http://dx.doi.org/10.1016/j.imavis.2011.12.009.

42

Gupta, Sandeep Kumar, Seid Hassen Yesuf, and Neeta Nain. "Real-Time Gender Recognition for Juvenile and Adult Faces." Computational Intelligence and Neuroscience 2022 (March 17, 2022): 1–15. http://dx.doi.org/10.1155/2022/1503188.

Abstract:
Facial gender recognition is a crucial research topic due to its comprehensive use cases, including demographic gender surveys, visitor profile identification, targeted advertisement, access control, security, and surveillance from CCTV. In these real-time applications, a person's face can be oriented at any angle to the camera axis, and the person can be of any age group, including juveniles. A child's face has immature craniofacial features, with weaker texture and edge cues than an adult face, which makes gender recognition from a child's face very hard. Real-world faces captured in unconstrained environments are oriented arbitrarily, which further complicates correct gender prediction. These factors reduce the accuracy of the existing state-of-the-art models developed so far for real-time facial gender prediction. This paper presents a novel facial gender recognition method for juvenile, adult, and unconstrained, arbitrarily oriented faces. In the proposed model, a progressive calibration network (PCN) detects faces in a rotation-invariant manner. A Gabor filter is then applied to extract unique edge and texture features from the detected face. The Gabor filter is robust to illumination and produces texture and edge features, but its coefficients are redundant and high-dimensional. These drawbacks are resolved by the proposed meanDWT feature optimization method, which improves the system's accuracy, model size, and computation time. The proposed feature engineering model is evaluated with different classifiers, such as Naïve Bayes, logistic regression, and SVM with linear and RBF kernels. Its results are compared with state-of-the-art techniques, and a detailed experimental analysis is presented to support the argument. We also present a review of approaches based on conventional and deep learning methods, with their pros and cons, on the datasets available for facial gender recognition.
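The Gabor feature extraction step that this abstract relies on follows a standard closed-form kernel. The sketch below builds the real part of a Gabor filter and a small multi-orientation bank in NumPy; the parameter defaults are illustrative assumptions, and the paper's meanDWT optimization step is not reproduced here.

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter: Gaussian envelope times an oriented cosine."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lambd + psi)

def gabor_bank(orientations=4, **kwargs):
    """Kernels at evenly spaced orientations, as used for texture/edge features."""
    return [gabor_kernel(theta=i * np.pi / orientations, **kwargs)
            for i in range(orientations)]
```

Convolving a face crop with each kernel in the bank and concatenating the responses yields the redundant, high-dimensional feature vector that a subsequent reduction step (such as the paper's meanDWT) would then compress.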
43

Shyam, Radhey, and Yogendra Narain Singh. "Multialgorithmic Frameworks for Human Face Recognition." Journal of Electrical and Computer Engineering 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/4645971.

Abstract:
This paper presents a critical evaluation of multialgorithmic face recognition systems for human authentication in unconstrained environment. We propose different frameworks of multialgorithmic face recognition system combining holistic and texture methods. Our aim is to combine the uncorrelated methods of the face recognition that supplement each other and to produce a comprehensive representation of the biometric cue to achieve optimum recognition performance. The multialgorithmic frameworks are designed to combine different face recognition methods such as (i) Eigenfaces and local binary pattern (LBP), (ii) Fisherfaces and LBP, (iii) Eigenfaces and augmented local binary pattern (A-LBP), and (iv) Fisherfaces and A-LBP. The matching scores of these multialgorithmic frameworks are processed using different normalization techniques whereas their performance is evaluated using different fusion strategies. The robustness of proposed multialgorithmic frameworks of face recognition system is tested on publicly available databases, for example, AT & T (ORL) and Labeled Faces in the Wild (LFW). The experimental results show a significant improvement in recognition accuracies of the proposed frameworks of face recognition system in comparison to their individual methods. In particular, the performance of the multialgorithmic frameworks combining face recognition methods with the devised face recognition method such as A-LBP improves significantly.
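The score normalization and fusion that these multialgorithmic frameworks depend on can be illustrated in a few lines. The sketch below uses min-max normalization with an (optionally weighted) sum rule, which is one common choice among the techniques the abstract mentions, not necessarily the exact configuration the authors evaluate.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matching scores to [0, 1] so heterogeneous matchers are comparable."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def sum_rule_fusion(score_sets, weights=None):
    """Weighted sum-rule fusion of normalized score vectors (one per matcher)."""
    mats = [min_max_normalize(s) for s in score_sets]
    if weights is None:
        weights = [1.0 / len(mats)] * len(mats)  # equal weights by default
    return sum(w * m for w, m in zip(weights, mats))
```

In a framework combining, say, Eigenfaces and LBP, each matcher scores the probe against the whole gallery, the two score vectors are fused, and the gallery identity with the highest fused score is reported.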
44

Hemathilaka, Susith, and Achala Aponso. "A Comprehensive Study on Occlusion Invariant Face Recognition under Face Mask Occlusions." Machine Learning and Applications: An International Journal 8, no. 4 (December 31, 2021): 1–10. http://dx.doi.org/10.5121/mlaij.2021.8401.

Abstract:
The face mask became an essential sanitary item in daily life during the pandemic period and poses a serious threat to current face recognition systems. Masks destroy many details over a large area of the face, making masked faces difficult to recognize even for humans; evaluation reports clearly show this difficulty. The rapid development of deep learning in the recent past has produced highly promising results from face recognition algorithms, but they still perform far from satisfactorily in unconstrained environments under challenges such as varying lighting conditions, low resolution, facial expressions, pose variation, and occlusions. Facial occlusion is considered one of the most intractable problems, especially when the occlusion occupies a large region of the face, because it destroys many facial features.
45

Wu, Qin, and Guodong Guo. "Gender Recognition from Unconstrained and Articulated Human Body." Scientific World Journal 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/513240.

Abstract:
Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts are needed to combine together, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on the partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition.
46

Adjabi, Insaf, Abdeldjalil Ouahabi, Amir Benzaoui, and Sébastien Jacques. "Multi-Block Color-Binarized Statistical Images for Single-Sample Face Recognition." Sensors 21, no. 3 (January 21, 2021): 728. http://dx.doi.org/10.3390/s21030728.

Abstract:
Single-Sample Face Recognition (SSFR) is a computer vision challenge. In this scenario, there is only one example from each individual on which to train the system, making it difficult to identify persons in unconstrained environments, mainly when dealing with changes in facial expression, posture, lighting, and occlusion. This paper discusses the relevance of an original method for SSFR, called Multi-Block Color-Binarized Statistical Image Features (MB-C-BSIF), which exploits several kinds of features, namely, local, regional, global, and textured-color characteristics. First, the MB-C-BSIF method decomposes a facial image into three channels (e.g., red, green, and blue), then it divides each channel into equal non-overlapping blocks to select the local facial characteristics that are consequently employed in the classification phase. Finally, the identity is determined by calculating the similarities among the characteristic vectors adopting a distance measurement of the K-nearest neighbors (K-NN) classifier. Extensive experiments on several subsets of the unconstrained Alex and Robert (AR) and Labeled Faces in the Wild (LFW) databases show that the MB-C-BSIF achieves superior and competitive results in unconstrained situations when compared to current state-of-the-art methods, especially when dealing with changes in facial expression, lighting, and occlusion. The average classification accuracies are 96.17% and 99% for the AR database with two specific protocols (i.e., Protocols I and II, respectively), and 38.01% for the challenging LFW database. These performances are clearly superior to those obtained by state-of-the-art methods. Furthermore, the proposed method uses algorithms based only on simple and elementary image processing operations that do not imply higher computational costs as in holistic, sparse or deep learning methods, making it ideal for real-time identification.
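The per-channel, per-block feature extraction plus K-NN matching that the abstract describes can be sketched as follows. For simplicity, this sketch substitutes plain intensity histograms per block for the BSIF texture descriptor and uses a city-block (L1) 1-NN match, so it illustrates the shape of the pipeline rather than reproducing MB-C-BSIF itself.

```python
import numpy as np

def block_color_features(img_rgb, blocks=4, bins=16):
    """Split into R/G/B channels, tile each into non-overlapping blocks, and
    concatenate one normalized histogram per block (stand-in for BSIF codes)."""
    h, w, _ = img_rgb.shape
    bh, bw = h // blocks, w // blocks
    feats = []
    for c in range(3):
        for i in range(blocks):
            for j in range(blocks):
                patch = img_rgb[i*bh:(i+1)*bh, j*bw:(j+1)*bw, c]
                hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
                feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def nearest_neighbor(probe_feat, gallery_feats, gallery_labels):
    """1-NN identification with the city-block (L1) distance."""
    dists = [np.abs(probe_feat - g).sum() for g in gallery_feats]
    return gallery_labels[int(np.argmin(dists))]
```

Because each gallery identity contributes a single feature vector, this structure matches the single-sample setting: identification reduces to one distance computation per enrolled subject.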
47

Zhang, Mengya, Yuan Zhang, and Qinghui Zhang. "Attention-Mechanism-Based Models for Unconstrained Face Recognition with Mask Occlusion." Electronics 12, no. 18 (September 17, 2023): 3916. http://dx.doi.org/10.3390/electronics12183916.

Abstract:
Masks cover most areas of the face, resulting in a serious loss of facial identity information; thus, how to alleviate or eliminate the negative impact of occlusion is a significant problem in the field of unconstrained face recognition. Inspired by the successful application of attention mechanisms and capsule networks in computer vision, we propose ECA-Inception-Resnet-Caps, which is a novel framework based on Inception-Resnet-v1 for learning discriminative face features in unconstrained mask-wearing conditions. Firstly, Squeeze-and-Excitation (SE) modules and Efficient Channel Attention (ECA) modules are applied to Inception-Resnet-v1 to increase the attention on unoccluded face areas, which is used to eliminate the negative impact of occlusion during feature extraction. Secondly, the effects of the two attention mechanisms on the different modules in Inception-Resnet-v1 are compared and analyzed, which is the foundation for further constructing the ECA-Inception-Resnet-Caps framework. Finally, ECA-Inception-Resnet-Caps is obtained by improving Inception-Resnet-v1 with capsule modules, which is explored to increase the interpretability and generalization of the model after reducing the negative impact of occlusion. The experimental results demonstrate that both attention mechanisms and the capsule network can effectively enhance the performance of Inception-Resnet-v1 for face recognition in occlusion tasks, with the ECA-Inception-Resnet-Caps model being the most effective, achieving an accuracy of 94.32%, which is 1.42% better than the baseline model.
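The ECA channel-attention idea used above (global average pooling, a small 1-D convolution across channels, then a sigmoid gate that rescales each channel) can be illustrated with NumPy. This is only a forward-pass sketch: the fixed 1/k averaging kernel stands in for the weights ECA would learn, and the SE and capsule components of the paper's model are not shown.

```python
import numpy as np

def eca_attention(x, k=3):
    """ECA-style channel attention on a (C, H, W) feature map.
    The fixed 1/k kernel is a stand-in for ECA's learned 1-D conv weights."""
    y = x.mean(axis=(1, 2))                                 # global average pooling -> (C,)
    yp = np.pad(y, k // 2, mode="edge")                     # pad so the conv keeps length C
    z = np.convolve(yp, np.full(k, 1.0 / k), mode="valid")  # local cross-channel mixing
    s = 1.0 / (1.0 + np.exp(-z))                            # sigmoid gate per channel
    return x * s[:, None, None], s
```

In a masked-face setting, the intent is that channels responding to visible regions (eyes, forehead) receive larger gates than channels dominated by the occluded lower face.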
48

Yuan, Ge, Huicheng Zheng, and Jiayu Dong. "MSML: Enhancing Occlusion-Robustness by Multi-Scale Segmentation-Based Mask Learning for Face Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3197–205. http://dx.doi.org/10.1609/aaai.v36i3.20228.

Abstract:
In unconstrained scenarios, face recognition remains challenging, particularly when faces are occluded. Existing methods generalize poorly due to the distribution distortion induced by unpredictable occlusions. To tackle this problem, we propose a hierarchical segmentation-based mask learning strategy for face recognition, enhancing occlusion-robustness by integrating segmentation representations of occlusion into face recognition in the latent space. We present a novel multi-scale segmentation-based mask learning (MSML) network, which consists of a face recognition branch (FRB), an occlusion segmentation branch (OSB), and hierarchical elaborate feature masking (FM) operators. With the guidance of hierarchical segmentation representations of occlusion learned by the OSB, the FM operators can generate multi-scale latent masks to eliminate mistaken responses introduced by occlusions and purify the contaminated facial features at multiple layers. In this way, the proposed MSML network can effectively identify and remove the occlusions from feature representations at multiple levels and aggregate features from visible facial areas. Experiments on face verification and recognition under synthetic or realistic occlusions demonstrate the effectiveness of our method compared to state-of-the-art methods.
49

Wang, Rong, ZaiFeng Shi, Qifeng Li, Ronghua Gao, Chunjiang Zhao, and Lu Feng. "Pig Face Recognition Model Based on a Cascaded Network." Applied Engineering in Agriculture 37, no. 5 (2021): 879–90. http://dx.doi.org/10.13031/aea.14482.

Abstract:
Highlights:
- A pig face recognition model that cascades a pig face detection network and a pig face recognition network is proposed.
- The pig face detection network can automatically extract pig face images to reduce the influence of the background.
- The proposed cascaded model reaches accuracies of 99.38%, 98.96%, and 97.66% on the three datasets.
- An application is developed to automatically recognize individual pigs.

Abstract. The identification and tracking of livestock using artificial intelligence technology have been a research hotspot in recent years. Automatic individual recognition is the key to realizing intelligent feeding. Although RFID can achieve identification tasks, it is expensive and easily fails. In this article, a pig face recognition model that cascades a pig face detection network and a pig face recognition network is proposed. First, the pig face detection network is utilized to crop the pig face images from videos and eliminate the complex background of the pig shed. Second, batch normalization, dropout, skip connections, and residual modules are exploited to design a pig face recognition network for individual identification. Finally, the cascaded network model based on the pig face detection and recognition networks is deployed on a GPU server, and an application is developed to automatically recognize individual pigs. Additionally, class activation maps generated by Grad-CAM are used to analyze the features of pig faces learned by the model. Under free and unconstrained conditions, 46 pigs are selected to build a frontal pig face dataset, an original multiangle pig face dataset, and an enhanced multiangle pig face dataset to verify the cascaded model. The proposed cascaded model reaches accuracies of 99.38%, 98.96%, and 97.66% on the three datasets, which are higher than those of other pig face recognition models. The results of this study improve the recognition performance of pig faces under multiangle and multi-environment conditions.

Keywords: CNN, Deep learning, Pig face detection, Pig face recognition.
50

Kim, Daeok, Jongkwang Hong, and Hyeran Byun. "Face Recognition Based on Facial Landmark Feature Descriptor in Unconstrained Environments." Journal of KIISE 41, no. 9 (September 15, 2014): 666–73. http://dx.doi.org/10.5626/jok.2014.41.9.666.
