Click this link to see other types of publications on this topic: FACIAL DATASET.

Journal articles on the topic "FACIAL DATASET"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Check out the 50 best journal articles on the topic "FACIAL DATASET".

An "Add to bibliography" button is available next to each work in the bibliography. Use it and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the work's abstract online, if the relevant parameters are available in the metadata.

Browse journal articles from a wide variety of disciplines and compile accurate bibliographies.

1

Xu, Xiaolin, Yuan Zong, Cheng Lu and Xingxun Jiang. "Enhanced Sample Self-Revised Network for Cross-Dataset Facial Expression Recognition". Entropy 24, no. 10 (17.10.2022): 1475. http://dx.doi.org/10.3390/e24101475.

Abstract:
Recently, cross-dataset facial expression recognition (FER) has obtained wide attention from researchers. Thanks to the emergence of large-scale facial expression datasets, cross-dataset FER has made great progress. Nevertheless, facial images in large-scale datasets with low quality, subjective annotation, severe occlusion, and rare subject identity can lead to the existence of outlier samples in facial expression datasets. These outlier samples are usually far from the clustering center of the dataset in the feature space, thus resulting in considerable differences in feature distribution, which severely restricts the performance of most cross-dataset facial expression recognition methods. To eliminate the influence of outlier samples on cross-dataset FER, we propose the enhanced sample self-revised network (ESSRN) with a novel outlier-handling mechanism, whose aim is first to seek these outlier samples and then suppress them in dealing with cross-dataset FER. To evaluate the proposed ESSRN, we conduct extensive cross-dataset experiments across RAF-DB, JAFFE, CK+, and FER2013 datasets. Experimental results demonstrate that the proposed outlier-handling mechanism can reduce the negative impact of outlier samples on cross-dataset FER effectively and our ESSRN outperforms classic deep unsupervised domain adaptation (UDA) methods and the recent state-of-the-art cross-dataset FER results.
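The outlier-handling idea described above can be illustrated with a minimal sketch: flag samples that lie far from their class centroid in feature space and down-weight them before training. This is only a generic illustration (the z-score threshold, the 0.1 weight, and the random features are assumptions), not the authors' ESSRN mechanism:

import numpy as np

def outlier_weights(features, labels, z_thresh=2.0):
    """Down-weight samples far from their class centroid in feature space.

    Mimics the general idea of suppressing outlier samples before
    cross-dataset training; it is not the ESSRN mechanism itself.
    """
    weights = np.ones(len(features))
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        z = (dists - dists.mean()) / (dists.std() + 1e-8)
        weights[idx[z > z_thresh]] = 0.1  # suppress, but do not discard entirely
    return weights

# Example: 500 samples with 128-D features and 7 expression classes
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 128))
labs = rng.integers(0, 7, size=500)
print(outlier_weights(feats, labs).min(), outlier_weights(feats, labs).max())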
2

Kim, Jung Hwan, Alwin Poulose and Dong Seog Han. "The Extensive Usage of the Facial Image Threshing Machine for Facial Emotion Recognition Performance". Sensors 21, no. 6 (12.03.2021): 2026. http://dx.doi.org/10.3390/s21062026.

Abstract:
Facial emotion recognition (FER) systems play a significant role in identifying driver emotions. Accurate facial emotion recognition of drivers in autonomous vehicles reduces road rage. However, training even an advanced FER model without proper datasets causes poor performance in real-time testing. FER system performance is affected more heavily by the quality of the datasets than by the quality of the algorithms. To improve FER system performance for autonomous vehicles, we propose a facial image threshing (FIT) machine that uses advanced features of pre-trained facial recognition and training from the Xception algorithm. The FIT machine involves removing irrelevant facial images, collecting facial images, correcting misplaced face data, and merging original datasets on a massive scale, in addition to the data-augmentation technique. The final FER results of the proposed method improved the validation accuracy by 16.95% over the conventional approach with the FER 2013 dataset. The confusion matrix evaluation based on the unseen private dataset shows a 5% improvement over the original approach with the FER 2013 dataset, confirming the real-time testing.
3

Oliver, Miquel Mascaró, and Esperança Amengual Alcover. "UIBVFED: Virtual facial expression dataset". PLOS ONE 15, no. 4 (6.04.2020): e0231266. http://dx.doi.org/10.1371/journal.pone.0231266.

4

Bodavarapu, Pavan Nageswar Reddy, and P. V. V. S. Srinivas. "Facial expression recognition for low resolution images using convolutional neural networks and denoising techniques". Indian Journal of Science and Technology 14, no. 12 (27.03.2021): 971–83. http://dx.doi.org/10.17485/ijst/v14i12.14.

Abstract:
Background/Objectives: Only limited research work is going on in the field of facial expression recognition on low-resolution images. Most images in the real world are of low resolution and may also contain noise, so this study designs a novel convolutional neural network model (FERConvNet) that can perform better on low-resolution images. Methods: We proposed a model and then compared it with state-of-the-art models on the FER2013 dataset. There is no publicly available dataset containing low-resolution images for facial expression recognition (Anger, Sad, Disgust, Happy, Surprise, Neutral, Fear), so we created a Low Resolution Facial Expression (LRFE) dataset, which contains more than 6000 images of seven types of facial expressions. The existing FER2013 dataset and the LRFE dataset were used. These datasets were divided in an 80:20 ratio for training and for testing and validation purposes. A hybrid denoising method (HDM) is proposed, which is a combination of a Gaussian filter, a bilateral filter, and a non-local means denoising filter. This hybrid denoising method helps to increase the performance of the convolutional neural network. The proposed model was then compared with the VGG16 and VGG19 models. Findings: The experimental results show that the proposed FERConvNet_HDM approach is more effective than VGG16 and VGG19 in facial expression recognition on both the FER2013 and LRFE datasets. The proposed FERConvNet_HDM approach achieved 85% accuracy on the FER2013 dataset, outperforming the VGG16 and VGG19 models, whose accuracies are 60% and 53% on the FER2013 dataset, respectively. The same FERConvNet_HDM approach, when applied to the LRFE dataset, achieved 95% accuracy. Analyzing the results, our FERConvNet_HDM approach performs better than VGG16 and VGG19 on both the FER2013 and LRFE datasets. Novelty/Applications: HDM with convolutional neural networks helps in increasing the performance of convolutional neural networks in facial expression recognition. Keywords: Facial expression recognition; facial emotion; convolutional neural network; deep learning; computer vision
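The hybrid denoising method named in this abstract combines a Gaussian filter, a bilateral filter, and non-local means denoising. A minimal OpenCV sketch of such a chain is given below; the order, the parameters, and any weighting used in the paper are not stated here, so these values are assumptions:

import cv2
import numpy as np

def hybrid_denoise(gray):
    """Apply the three filters named in the abstract in sequence.

    The actual HDM combination (order, parameters, weighting) is not given
    in the abstract, so this is only an illustrative pipeline.
    """
    g = cv2.GaussianBlur(gray, (5, 5), 0)
    b = cv2.bilateralFilter(g, d=9, sigmaColor=75, sigmaSpace=75)
    return cv2.fastNlMeansDenoising(b, None, h=10, templateWindowSize=7, searchWindowSize=21)

# Example on a synthetic noisy 48x48 grayscale "face"
img = (np.random.rand(48, 48) * 255).astype(np.uint8)
print(hybrid_denoise(img).shape)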
5

Wang, Xiaoqing, Xiangjun Wang and Yubo Ni. "Unsupervised Domain Adaptation for Facial Expression Recognition Using Generative Adversarial Networks". Computational Intelligence and Neuroscience 2018 (9.07.2018): 1–10. http://dx.doi.org/10.1155/2018/7208794.

Abstract:
In the facial expression recognition task, a good-performing convolutional neural network (CNN) model trained on one dataset (source dataset) usually performs poorly on another dataset (target dataset). This is because the feature distribution of the same emotion varies in different datasets. To improve the cross-dataset accuracy of the CNN model, we introduce an unsupervised domain adaptation method, which is especially suitable for unlabelled small target dataset. In order to solve the problem of lack of samples from the target dataset, we train a generative adversarial network (GAN) on the target dataset and use the GAN generated samples to fine-tune the model pretrained on the source dataset. In the process of fine-tuning, we give the unlabelled GAN generated samples distributed pseudolabels dynamically according to the current prediction probabilities. Our method can be easily applied to any existing convolutional neural networks (CNN). We demonstrate the effectiveness of our method on four facial expression recognition datasets with two CNN structures and obtain inspiring results.
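The dynamic pseudo-labelling step described above can be sketched generically: the current model's prediction probabilities on unlabelled (e.g., GAN-generated) samples are turned into soft pseudo-labels. The temperature-sharpening rule below is an assumption, not necessarily the authors' exact scheme:

import numpy as np

def dynamic_pseudo_labels(probs, temperature=1.0):
    """Turn current softmax predictions on unlabelled samples into
    distributed pseudo-labels; sharpening with a temperature is an
    assumption made for illustration.
    """
    logits = np.log(np.clip(probs, 1e-8, 1.0)) / temperature
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

preds = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
print(dynamic_pseudo_labels(preds, temperature=0.5))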
6

Manikowska, Michalina, Damian Sadowski, Adam Sowinski and Michal R. Wrobel. "DevEmo—Software Developers’ Facial Expression Dataset". Applied Sciences 13, no. 6 (17.03.2023): 3839. http://dx.doi.org/10.3390/app13063839.

Abstract:
The COVID-19 pandemic has increased the relevance of remote activities and digital tools for education, work, and other aspects of daily life. This reality has highlighted the need for emotion recognition technology to better understand the emotions of computer users and provide support in remote environments. Emotion recognition can play a critical role in improving the remote experience and ensuring that individuals are able to effectively engage in computer-based tasks remotely. This paper presents a new dataset, DevEmo, that can be used to train deep learning models for the purpose of emotion recognition of computer users. The dataset consists of 217 video clips of 33 students solving programming tasks. The recordings were collected in the participants’ actual work environment, capturing the students’ facial expressions as they engaged in programming tasks. The DevEmo dataset is labeled to indicate the presence of the four emotions (anger, confusion, happiness, and surprise) and a neutral state. The dataset provides a unique opportunity to explore the relationship between emotions and computer-related activities, and has the potential to support the development of more personalized and effective tools for computer-based learning environments.
7

Bordjiba, Yamina, Hayet Farida Merouani and Nabiha Azizi. "Facial expression recognition via a jointly-learned dual-branch network". International Journal of Electrical and Computer Engineering Systems 13, no. 6 (1.09.2022): 447–56. http://dx.doi.org/10.32985/ijeces.13.6.4.

Abstract:
Human emotion recognition depends on facial expressions, and essentially on the extraction of relevant features. Accurate feature extraction is generally difficult due to the influence of external interference factors and the mislabelling of some datasets, such as the Fer2013 dataset. Deep learning approaches permit automatic and intelligent feature extraction based on the input database, but in the case of poor database distribution or insufficient diversity of database samples, the extracted features will be negatively affected. Furthermore, one of the main challenges for efficient facial feature extraction and accurate facial expression recognition is the facial expression datasets themselves, which are usually considerably small compared to other image datasets. To solve these problems, this paper proposes a new approach based on a dual-branch convolutional neural network for facial expression recognition, which is formed by three modules: the first two carry out the feature engineering stage in two branches, and feature fusion and classification are performed by the third. In the first branch, an improved convolutional part of the VGG network is used to benefit from its known robustness; in the second branch, the transfer learning technique with the EfficientNet network is applied to improve the quality of limited training samples in datasets. Finally, in order to improve the recognition performance, a classification decision is made based on the fusion of both branches' feature maps. Based on the experimental results obtained on the Fer2013 and CK+ datasets, the proposed approach shows its superiority compared to several state-of-the-art results as well as to using one model at a time. Those results are very competitive, especially for the CK+ dataset, for which the proposed dual-branch model reaches an accuracy of 99.32%, while for the FER-2013 dataset the VGG-inspired CNN obtains an accuracy of 67.70%, which is considered acceptable given the difficulty of the images in this dataset.
8

Büdenbender, Björn, Tim T. A. Höfling, Antje B. M. Gerdes and Georg W. Alpers. "Training machine learning algorithms for automatic facial coding: The role of emotional facial expressions’ prototypicality". PLOS ONE 18, no. 2 (10.02.2023): e0281309. http://dx.doi.org/10.1371/journal.pone.0281309.

Abstract:
Automatic facial coding (AFC) is a promising new research tool to efficiently analyze emotional facial expressions. AFC is based on machine learning procedures to infer emotion categorization from facial movements (i.e., Action Units). State-of-the-art AFC accurately classifies intense and prototypical facial expressions, whereas it is less accurate for non-prototypical and less intense facial expressions. A potential reason might be that AFC is typically trained with standardized and prototypical facial expression inventories. Because AFC would be useful to analyze less prototypical research material as well, we set out to determine the role of prototypicality in the training material. We trained established machine learning algorithms either with standardized expressions from widely used research inventories or with unstandardized emotional facial expressions obtained in a typical laboratory setting and tested them on identical or cross-over material. All machine learning models’ accuracies were comparable when trained and tested with held-out dataset from the same dataset (acc. = [83.4% to 92.5%]). Strikingly, we found a substantial drop in accuracies for models trained with the highly prototypical standardized dataset when tested in the unstandardized dataset (acc. = [52.8%; 69.8%]). However, when they were trained with unstandardized expressions and tested with standardized datasets, accuracies held up (acc. = [82.7%; 92.5%]). These findings demonstrate a strong impact of the training material’s prototypicality on AFC’s ability to classify emotional faces. Because AFC would be useful for analyzing emotional facial expressions in research or even naturalistic scenarios, future developments should include more naturalistic facial expressions for training. This approach will improve the generalizability of AFC to encode more naturalistic facial expressions and increase robustness for future applications of this promising technology.
9

Yap, Chuin Hong, Ryan Cunningham, Adrian K. Davison and Moi Hoon Yap. "Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer". Journal of Imaging 7, no. 8 (11.08.2021): 142. http://dx.doi.org/10.3390/jimaging7080142.

Abstract:
Long video datasets of facial macro- and micro-expressions remain in strong demand with the current dominance of data-hungry deep learning methods. There are limited methods of generating long videos which contain micro-expressions. Moreover, there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generate synthetic long videos and recommend assessment methods to inspect dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method, StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image of the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH by conducting an analysis based on the facial action units detected by OpenFace. For quantitative measurement, our findings show high correlation on two Action Units (AUs), i.e., AU12 and AU6, of the original and synthetic data, with a Pearson's correlation of 0.74 and 0.72, respectively. This is further supported by the evaluation method proposed by OpenFace on those AUs, which also have high scores of 0.85 and 0.59. Additionally, optical flow is used to visually compare the original facial movements and the transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool of micro-expression research, especially in the spotting task.
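The quantitative check reported above (Pearson's correlation between action-unit traces of the original and synthetic videos) is easy to reproduce once per-frame AU intensities are available, for example from OpenFace. A minimal sketch with placeholder data:

import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-frame AU12 intensities for the original and synthetic videos;
# in the paper these come from OpenFace, here they are random placeholders.
rng = np.random.default_rng(1)
au12_original = rng.random(300)
au12_synthetic = 0.8 * au12_original + 0.2 * rng.random(300)

r, p = pearsonr(au12_original, au12_synthetic)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")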
10

Jin, Zhijia, Xiaolu Zhang, Jie Wang, Xiaolin Xu and Jiangjian Xiao. "Fine-Grained Facial Expression Recognition in Multiple Smiles". Electronics 12, no. 5 (22.02.2023): 1089. http://dx.doi.org/10.3390/electronics12051089.

Abstract:
Smiling has often been incorrectly interpreted as “happy” in the popular facial expression datasets (AffectNet, RAF-DB, FERPlus). Smiling is the most complex human expression, with positive, neutral, and negative smiles. We focused on fine-grained facial expression recognition (FER) and built a new smiling face dataset, named Facial Expression Emotions. This dataset categorizes smiles into six classes of smiles, containing a total of 11,000 images labeled with corresponding fine-grained facial expression classes. We propose Smile Transformer, a network architecture for FER based on the Swin Transformer, to enhance the local perception capability of the model and improve the accuracy of fine-grained face recognition. Moreover, a convolutional block attention module (CBAM) was designed, to focus on important features of the face image and suppress unnecessary regional responses. For better classification results, an image quality evaluation module was used to assign different labels to images with different qualities. Additionally, a dynamic weight loss function was designed, to assign different learning strategies according to the labels during training, focusing on hard yet recognizable samples and discarding unidentifiable samples, to achieve better recognition. Overall, we focused on (a) creating a novel dataset of smiling facial images from online annotated images, and (b) developing a method for improved FER in smiling images. Facial Expression Emotions achieved an accuracy of 88.56% and could serve as a new benchmark dataset for future research on fine-grained FER.
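The convolutional block attention module (CBAM) mentioned above is a standard building block; a minimal PyTorch sketch of the usual channel-plus-spatial formulation follows (the paper's exact variant and hyperparameters may differ):

import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal convolutional block attention module (channel + spatial attention)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: conv over channel-wise mean and max maps
        sa_in = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sa_in))

feat = torch.randn(2, 64, 28, 28)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 28, 28])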
11

Wafi, Muhammad, Fitra A. Bachtiar and Fitri Utaminingrum. "Feature extraction comparison for facial expression recognition using adaptive extreme learning machine". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 1 (1.02.2023): 1113. http://dx.doi.org/10.11591/ijece.v13i1.pp1113-1122.

Abstract:
Facial expression recognition is an important part of the field of affective computing. Automatic analysis of human facial expressions is a challenging problem with many applications. Most of the existing automated systems for facial expression analysis attempt to recognize a few prototypical emotional expressions such as anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. This paper aims to compare feature extraction methods that are used to detect human facial expressions. The study compares the gray level co-occurrence matrix, local binary pattern, and facial landmark (FL) features on two facial expression datasets, namely the Japanese female facial expression (JFFE) and extended Cohn-Kanade (CK+) datasets. In addition, we also propose an enhancement of the extreme learning machine (ELM) method, adaptive ELM (aELM), which can adaptively select the best number of hidden neurons to reach its maximum performance. The result of this paper is that our proposed method can slightly improve the performance of the basic ELM method using the feature extractions mentioned before. Our proposed method obtains a maximum mean accuracy score of 88.07% on the CK+ dataset and 83.12% on the JFFE dataset with FL feature extraction.
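The extreme learning machine at the core of this comparison trains in closed form: a random hidden layer followed by a least-squares solve for the output weights. A bare-bones sketch is below; the adaptive hidden-neuron selection (aELM) itself is not reproduced, and the random features are placeholders:

import numpy as np

class SimpleELM:
    """Bare-bones extreme learning machine: random hidden layer, closed-form output weights."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # hidden activations
        self.beta = np.linalg.pinv(H) @ y_onehot  # least-squares output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

X = np.random.rand(200, 64)             # e.g. LBP or landmark features (placeholder)
y = np.random.randint(0, 7, 200)        # seven expression classes
print(SimpleELM(150).fit(X, np.eye(7)[y]).predict(X[:5]))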
12

Tiwari, Er Shesh Mani, and Er Mohd Shah Alam. "Facial Emotion Recognition". International Journal for Research in Applied Science and Engineering Technology 11, no. 2 (28.02.2023): 490–94. http://dx.doi.org/10.22214/ijraset.2023.49067.

Abstract:
Facial emotion recognition plays a significant role in interacting with computers and helps us in various fields such as medical processes, presenting content on the basis of human mood, and security. It is challenging because of the heterogeneity of human faces, lighting, orientation, poses, and noise. This paper aims to improve the accuracy of facial expression recognition. There has been much research done on the fer2013 dataset using CNNs (convolutional neural networks), and the results are quite impressive. In this work we applied a CNN to the fer2013 dataset, adding images to improve the accuracy. To the best of our knowledge, our model achieves an accuracy of 70.23% on the fer2013 dataset after adding images to the training and testing parts of the disgusted class.
13

Davison, Adrian K., Cliff Lansley, Nicholas Costen, Kevin Tan and Moi Hoon Yap. "SAMM: A Spontaneous Micro-Facial Movement Dataset". IEEE Transactions on Affective Computing 9, no. 1 (1.01.2018): 116–29. http://dx.doi.org/10.1109/taffc.2016.2573832.

14

Bhatti, Yusra Khalid, Afshan Jamil, Nudrat Nida, Muhammad Haroon Yousaf, Serestina Viriri and Sergio A. Velastin. "Facial Expression Recognition of Instructor Using Deep Features and Extreme Learning Machine". Computational Intelligence and Neuroscience 2021 (30.04.2021): 1–17. http://dx.doi.org/10.1155/2021/5570870.

Abstract:
Classroom communication involves teacher’s behavior and student’s responses. Extensive research has been done on the analysis of student’s facial expressions, but the impact of instructor’s facial expressions is yet an unexplored area of research. Facial expression recognition has the potential to predict the impact of teacher’s emotions in a classroom environment. Intelligent assessment of instructor behavior during lecture delivery not only might improve the learning environment but also could save time and resources utilized in manual assessment strategies. To address the issue of manual assessment, we propose an instructor’s facial expression recognition approach within a classroom using a feedforward learning model. First, the face is detected from the acquired lecture videos and key frames are selected, discarding all the redundant frames for effective high-level feature extraction. Then, deep features are extracted using multiple convolution neural networks along with parameter tuning which are then fed to a classifier. For fast learning and good generalization of the algorithm, a regularized extreme learning machine (RELM) classifier is employed which classifies five different expressions of the instructor within the classroom. Experiments are conducted on a newly created instructor’s facial expression dataset in classroom environments plus three benchmark facial datasets, i.e., Cohn–Kanade, the Japanese Female Facial Expression (JAFFE) dataset, and the Facial Expression Recognition 2013 (FER2013) dataset. Furthermore, the proposed method is compared with state-of-the-art techniques, traditional classifiers, and convolutional neural models. Experimentation results indicate significant performance gain on parameters such as accuracy, F1-score, and recall.
15

Suryani, Dewi, Valentino Ekaputra and Andry Chowanda. "Multi-modal Asian Conversation Mobile Video Dataset for Recognition Task". International Journal of Electrical and Computer Engineering (IJECE) 8, no. 5 (1.10.2018): 4042. http://dx.doi.org/10.11591/ijece.v8i5.pp4042-4046.

Abstract:
Images, audio, and videos have been used by researchers for a long time to develop several tasks regarding human facial recognition and emotion detection. Most of the available datasets usually focus on either static expressions, short videos of an emotion changing from neutral to peak, or differences in sounds to detect the current emotion of a person. Moreover, the common datasets were collected and processed in the United States (US) or Europe, and only a few datasets originated from Asia. In this paper, we present our effort to create a unique dataset that can fill the gap left by currently available datasets. At the time of writing, our dataset contains 10 full HD (1920 × 1080) video clips with an annotated JSON file, in total 100 minutes in duration and 13 GB in size. We believe this dataset will be useful as training and benchmark data for a variety of research topics regarding human facial and emotion recognition.
16

Fang, Bei, Yujie Zhao, Guangxin Han and Juhou He. "Expression-Guided Deep Joint Learning for Facial Expression Recognition". Sensors 23, no. 16 (13.08.2023): 7148. http://dx.doi.org/10.3390/s23167148.

Abstract:
In recent years, convolutional neural networks (CNNs) have played a dominant role in facial expression recognition. While CNN-based methods have achieved remarkable success, they are notorious for having an excessive number of parameters, and they rely on a large amount of manually annotated data. To address this challenge, we expand the number of training samples by learning expressions from a face recognition dataset to reduce the impact of a small number of samples on the network training. In the proposed deep joint learning framework, the deep features of the face recognition dataset are clustered, and simultaneously, the parameters of an efficient CNN are learned, thereby marking the data for network training automatically and efficiently. Specifically, first, we develop a new efficient CNN based on the proposed affinity convolution module with much lower computational overhead for deep feature learning and expression classification. Then, we develop an expression-guided deep facial clustering approach to cluster the deep features and generate abundant expression labels from the face recognition dataset. Finally, the AC-based CNN is fine-tuned using an updated training set and a combined loss function. Our framework is evaluated on several challenging facial expression recognition datasets as well as a self-collected dataset. In the context of facial expression recognition applied to the field of education, our proposed method achieved an impressive accuracy of 95.87% on the self-collected dataset, surpassing other existing methods.
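The pseudo-labelling idea above (clustering deep features of a face recognition dataset to obtain expression labels automatically) can be sketched with off-the-shelf k-means; the expression-guided clustering and the AC-based CNN of the paper are not reproduced, and the features are random placeholders:

import numpy as np
from sklearn.cluster import KMeans

# Placeholder "deep features" for faces from a recognition dataset;
# in the paper these come from a CNN and the clustering is expression-guided.
features = np.random.rand(1000, 128)

kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(features)
pseudo_labels = kmeans.labels_          # one pseudo expression label per face
print(np.bincount(pseudo_labels))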
17

Fan, Deng-Ping, Ziling Huang, Peng Zheng, Hong Liu, Xuebin Qin and Luc Van Gool. "Facial-sketch Synthesis: A New Challenge". Machine Intelligence Research 19, no. 4 (30.07.2022): 257–87. http://dx.doi.org/10.1007/s11633-022-1349-9.

Abstract:
This paper aims to conduct a comprehensive study on facial-sketch synthesis (FSS). However, due to the high cost of obtaining hand-drawn sketch datasets, there is a lack of a complete benchmark for assessing the development of FSS algorithms over the last decade. We first introduce a high-quality dataset for FSS, named FS2K, which consists of 2,104 image-sketch pairs spanning three types of sketch styles, image backgrounds, lighting conditions, skin colors, and facial attributes. FS2K differs from previous FSS datasets in difficulty, diversity, and scalability and should thus facilitate the progress of FSS research. Second, we present the largest-scale FSS investigation by reviewing 89 classic methods, including 25 handcrafted feature-based facial-sketch synthesis approaches, 29 general translation methods, and 35 image-to-sketch approaches. In addition, we elaborate comprehensive experiments on the existing 19 cutting-edge models. Third, we present a simple baseline for FSS, named FSGAN. With only two straightforward components, i.e., facial-aware masking and style-vector expansion, our FSGAN surpasses the performance of all previous state-of-the-art models on the proposed FS2K dataset by a large margin. Finally, we conclude with lessons learned over the past years and point out several unsolved challenges. Our code is available at https://github.com/DengPingFan/FSGAN.
18

Lie, Wen-Nung, Dao-Quang Le, Chun-Yu Lai and Yu-Shin Fang. "Heart Rate Estimation from Facial Image Sequences of a Dual-Modality RGB-NIR Camera". Sensors 23, no. 13 (1.07.2023): 6079. http://dx.doi.org/10.3390/s23136079.

Abstract:
This paper presents an RGB-NIR (Near Infrared) dual-modality technique to analyze the remote photoplethysmogram (rPPG) signal and hence estimate the heart rate (in beats per minute), from a facial image sequence. Our main innovative contribution is the introduction of several denoising techniques such as Modified Amplitude Selective Filtering (MASF), Wavelet Decomposition (WD), and Robust Principal Component Analysis (RPCA), which take advantage of RGB and NIR band characteristics to uncover the rPPG signals effectively through this Independent Component Analysis (ICA)-based algorithm. Two datasets, of which one is the public PURE dataset and the other is the CCUHR dataset built with a popular Intel RealSense D435 RGB-D camera, are adopted in our experiments. Facial video sequences in the two datasets are diverse in nature with normal brightness, under-illumination (i.e., dark), and facial motion. Experimental results show that the proposed method has reached competitive accuracies among the state-of-the-art methods even at a shorter video length. For example, our method achieves MAE = 4.45 bpm (beats per minute) and RMSE = 6.18 bpm for RGB-NIR videos of 10 and 20 s in the CCUHR dataset and MAE = 3.24 bpm and RMSE = 4.1 bpm for RGB videos of 60-s in the PURE dataset. Our system has the advantages of accessible and affordable hardware, simple and fast computations, and wide realistic applications.
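After denoising, rPPG-based heart-rate estimation typically reduces to locating the dominant spectral peak of the recovered pulse signal inside the plausible heart-rate band. A minimal sketch of that final step follows; the paper's MASF/WD/RPCA/ICA pipeline is far more involved, and the synthetic trace is only a placeholder:

import numpy as np

def estimate_bpm(rppg, fps=30.0, lo=0.7, hi=4.0):
    """Estimate heart rate from an rPPG trace via the dominant spectral peak
    inside the 42-240 bpm band."""
    rppg = rppg - rppg.mean()
    freqs = np.fft.rfftfreq(len(rppg), d=1.0 / fps)
    power = np.abs(np.fft.rfft(rppg)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 20-second trace with a 1.2 Hz (72 bpm) pulse plus noise
t = np.arange(0, 20, 1 / 30.0)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(len(t))
print(round(estimate_bpm(signal)))  # ~72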
19

Quintana, Marcos, Sezer Karaoglu, Federico Alvarez, Jose Menendez and Theo Gevers. "Three-D Wide Faces (3DWF): Facial Landmark Detection and 3D Reconstruction over a New RGB–D Multi-Camera Dataset". Sensors 19, no. 5 (4.03.2019): 1103. http://dx.doi.org/10.3390/s19051103.

Abstract:
Latest advances of deep learning paradigm and 3D imaging systems have raised the necessity for more complete datasets that allow exploitation of facial features such as pose, gender or age. In our work, we propose a new facial dataset collected with an innovative RGB–D multi-camera setup whose optimization is presented and validated. 3DWF includes 3D raw and registered data collection for 92 persons from low-cost RGB–D sensing devices to commercial scanners with great accuracy. 3DWF provides a complete dataset with relevant and accurate visual information for different tasks related to facial properties such as face tracking or 3D face reconstruction by means of annotated density normalized 2K clouds and RGB–D streams. In addition, we validate the reliability of our proposal by an original data augmentation method from a massive set of face meshes for facial landmark detection in 2D domain, and by head pose classification through common Machine Learning techniques directed towards proving alignment of collected data.
20

Lin, Qing, Ruili He and Peihe Jiang. "Feature Guided CNN for Baby’s Facial Expression Recognition". Complexity 2020 (22.11.2020): 1–10. http://dx.doi.org/10.1155/2020/8855885.

Abstract:
State-of-the-art facial expression methods outperform human beings, especially, thanks to the success of convolutional neural networks (CNNs). However, most of the existing works focus mainly on analyzing an adult’s face and ignore the important problems: how can we recognize facial expression from a baby’s face image and how difficult is it? In this paper, we first introduce a new face image database, named BabyExp, which contains 12,000 images from babies younger than two years old, and each image is with one of three facial expressions (i.e., happy, sad, and normal). To the best of our knowledge, the proposed dataset is the first baby face dataset for analyzing a baby’s face image, which is complementary to the existing adult face datasets and can shed some light on exploring baby face analysis. We also propose a feature guided CNN method with a new loss function, called distance loss, to optimize interclass distance. In order to facilitate further research, we provide the benchmark of expression recognition on the BabyExp dataset. Experimental results show that the proposed network achieves the recognition accuracy of 87.90% on BabyExp.
21

Bie, Mei, Huan Xu, Quanle Liu, Yan Gao, Kai Song and Xiangjiu Che. "DA-FER: Domain Adaptive Facial Expression Recognition". Applied Sciences 13, no. 10 (22.05.2023): 6314. http://dx.doi.org/10.3390/app13106314.

Abstract:
Facial expression recognition (FER) is an important field in computer vision with many practical applications. However, one of the challenges in FER is dealing with small sample data, where the number of samples available for training machine learning algorithms is limited. To address this issue, a domain adaptive learning strategy is proposed in this paper. The approach uses a public dataset with sufficient samples as the source domain and a small sample dataset as the target domain. Furthermore, the maximum mean discrepancy with kernel mean embedding is utilized to reduce the disparity between the source and target domain data samples, thereby enhancing expression recognition accuracy. The proposed Domain Adaptive Facial Expression Recognition (DA-FER) method integrates the SSPP module and Slice module to fuse expression features of different dimensions. Moreover, this method retains the regions of interest of the five senses to accomplish more discriminative feature extraction and improve the transfer learning capability of the network. Experimental results indicate that the proposed method can effectively enhance the performance of expression recognition. Specifically, when the self-collected Selfie-Expression dataset is used as the target domain, and the public datasets RAF-DB and Fer2013 are used as the source domain, the performance of expression recognition is improved to varying degrees, which demonstrates the effectiveness of this domain adaptive method.
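The maximum mean discrepancy with kernel mean embedding used above has a simple empirical estimate; a generic RBF-kernel sketch is shown below (the DA-FER network itself, with its SSPP and Slice modules, is not part of this sketch, and the feature arrays are placeholders):

import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between two samples with an RBF kernel
    (biased empirical estimate)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

source = np.random.rand(128, 64)   # e.g. features from a large public source domain
target = np.random.rand(64, 64)    # e.g. features from a small target dataset
print(rbf_mmd2(source, target, gamma=0.1))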
22

Rathod, Manish, Chirag Dalvi, Kulveen Kaur, Shruti Patil, Shilpa Gite, Pooja Kamat, Ketan Kotecha, Ajith Abraham and Lubna Abdelkareim Gabralla. "Kids’ Emotion Recognition Using Various Deep-Learning Models with Explainable AI". Sensors 22, no. 20 (21.10.2022): 8066. http://dx.doi.org/10.3390/s22208066.

Abstract:
Human ideas and sentiments are mirrored in facial expressions. They give the spectator a plethora of social cues, such as the viewer’s focus of attention, intention, motivation, and mood, which can help develop better interactive solutions in online platforms. This could be helpful for children while teaching them, which could help in cultivating a better interactive connect between teachers and students, since there is an increasing trend toward the online education platform due to the COVID-19 pandemic. To solve this, the authors proposed kids’ emotion recognition based on visual cues in this research with a justified reasoning model of explainable AI. The authors used two datasets to work on this problem; the first is the LIRIS Children Spontaneous Facial Expression Video Database, and the second is an author-created novel dataset of emotions displayed by children aged 7 to 10. The authors identified that the LIRIS dataset has achieved only 75% accuracy, and no study has worked further on this dataset in which the authors have achieved the highest accuracy of 89.31% and, in the authors’ dataset, an accuracy of 90.98%. The authors also realized that the face construction of children and adults is different, and the way children show emotions is very different and does not always follow the same way of facial expression for a specific emotion as compared with adults. Hence, the authors used 3D 468 landmark points and created two separate versions of the dataset from the original selected datasets, which are LIRIS-Mesh and Authors-Mesh. In total, all four types of datasets were used, namely LIRIS, the authors’ dataset, LIRIS-Mesh, and Authors-Mesh, and a comparative analysis was performed by using seven different CNN models. The authors not only compared all dataset types used on different CNN models but also explained for every type of CNN used on every specific dataset type how test images are perceived by the deep-learning models by using explainable artificial intelligence (XAI), which helps in localizing features contributing to particular emotions. The authors used three methods of XAI, namely Grad-CAM, Grad-CAM++, and SoftGrad, which help users further establish the appropriate reason for emotion detection by knowing the contribution of its features in it.
23

Porta-Lorenzo, Manuel, Manuel Vázquez-Enríquez, Ania Pérez-Pérez, José Luis Alba-Castro and Laura Docío-Fernández. "Facial Motion Analysis beyond Emotional Expressions". Sensors 22, no. 10 (19.05.2022): 3839. http://dx.doi.org/10.3390/s22103839.

Abstract:
Facial motion analysis is a research field with many practical applications, and has been strongly developed in the last years. However, most effort has been focused on the recognition of basic facial expressions of emotion and neglects the analysis of facial motions related to non-verbal communication signals. This paper focuses on the classification of facial expressions that are of the utmost importance in sign languages (Grammatical Facial Expressions) but also present in expressive spoken language. We have collected a dataset of Spanish Sign Language sentences and extracted the intervals for three types of Grammatical Facial Expressions: negation, closed queries and open queries. A study of several deep learning models using different input features on the collected dataset (LSE_GFE) and an external dataset (BUHMAP) shows that GFEs can be learned reliably with Graph Convolutional Networks simply fed with face landmarks.
24

Sajid, Muhammad, Nouman Ali, Saadat Hanif Dar, Naeem Iqbal Ratyal, Asif Raza Butt, Bushra Zafar, Tamoor Shafique, Mirza Jabbar Aziz Baig, Imran Riaz and Shahbaz Baig. "Data Augmentation-Assisted Makeup-Invariant Face Recognition". Mathematical Problems in Engineering 2018 (4.12.2018): 1–10. http://dx.doi.org/10.1155/2018/2850632.

Abstract:
Recently, face datasets containing celebrities' photos with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve the performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem becomes more confounded as the makeup alters the bilateral size and symmetry of certain face components such as eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities owing to the context of interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose to use a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and those with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup. We also evaluate the role of partial and full makeup in face images to improve the recognition performance. The experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
25

Xiao, Huafei, Wenbo Li, Guanzhong Zeng, Yingzhang Wu, Jiyong Xue, Juncheng Zhang, Chengmou Li and Gang Guo. "On-Road Driver Emotion Recognition Using Facial Expression". Applied Sciences 12, no. 2 (13.01.2022): 807. http://dx.doi.org/10.3390/app12020807.

Abstract:
With the development of intelligent automotive human-machine systems, driver emotion detection and recognition has become an emerging research topic. Facial expression-based emotion recognition approaches have achieved outstanding results on laboratory-controlled data. However, these studies cannot represent the environment of real driving situations. In order to address this, this paper proposes a facial expression-based on-road driver emotion recognition network called FERDERnet. This method divides the on-road driver facial expression recognition task into three modules: a face detection module that detects the driver’s face, an augmentation-based resampling module that performs data augmentation and resampling, and an emotion recognition module that adopts a deep convolutional neural network pre-trained on FER and CK+ datasets and then fine-tuned as a backbone for driver emotion recognition. This method adopts five different backbone networks as well as an ensemble method. Furthermore, to evaluate the proposed method, this paper collected an on-road driver facial expression dataset, which contains various road scenarios and the corresponding driver’s facial expression during the driving task. Experiments were performed on the on-road driver facial expression dataset that this paper collected. Based on efficiency and accuracy, the proposed FERDERnet with Xception backbone was effective in identifying on-road driver facial expressions and obtained superior performance compared to the baseline networks and some state-of-the-art networks.
26

Lee, Jiann-Der, Tzu-Yen Lan, Li-Chang Liu, Chung-Hsien Huang, Shin-Tseng Lee, Chien-Tsai Wu and Jyi-Feng Chen. "A VOICE-CONTROL-AID REGISTRATION AND TRACKING SYSTEM USING NDI POLARIS". Biomedical Engineering: Applications, Basis and Communications 19, no. 04 (August 2007): 231–37. http://dx.doi.org/10.4015/s1016237207000318.

Abstract:
This paper describes an on site registration system for facial point data. This system registers two facial point datasets from the floating dataset obtained from NDI Polaris equipment to the reference dataset extracted from CT images. To benefit the user's flexibility on using Polaris without the remote assistant's help, the voice command ability is added into the system. The user is able to control the procedure and examine the data via his voice without the remote assistant as a traditional way of using Polaris. This system offers different display functions such as point-data, mesh and polygonal functions for the registration results, and is able to track the probe position in the registration result and the CT images.
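Point-based registration of the kind described above ultimately needs a rigid transform mapping the floating (tracker) points onto the reference (CT) points. A minimal sketch of least-squares rigid alignment with known correspondences follows; establishing correspondences (e.g., with ICP) is a separate step the system would also have to perform, and the example points are synthetic:

import numpy as np

def rigid_align(floating, reference):
    """Least-squares rigid transform (rotation + translation) mapping `floating`
    points onto `reference` points with known correspondences (Kabsch method)."""
    fc, rc = floating.mean(axis=0), reference.mean(axis=0)
    H = (floating - fc).T @ (reference - rc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = rc - R @ fc
    return R, t

ref = np.random.rand(50, 3)                      # facial points from CT (placeholder)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
flt = ref @ R_true.T + np.array([1.0, 2.0, 3.0])  # same points seen by the tracker
R, t = rigid_align(flt, ref)
print(np.allclose(flt @ R.T + t, ref))            # True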
27

Xiang, Xiaoyu, Yang Cheng, Shaoyuan Xu, Qian Lin and Jan Allebach. "The Blessing and the Curse of the Noise behind Facial Landmark Annotations". Electronic Imaging 2020, no. 8 (26.01.2020): 186–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.8.imawm-186.

Abstract:
The evolving algorithms for 2D facial landmark detection empower people to recognize faces, analyze facial expressions, etc. However, existing methods still encounter problems of unstable facial landmarks when applied to videos. Because previous research shows that the instability of facial landmarks is caused by the inconsistency of labeling quality among the public datasets, we want to have a better understanding of the influence of annotation noise in them. In this paper, we make the following contributions: 1) we propose two metrics that quantitatively measure the stability of detected facial landmarks, 2) we model the annotation noise in an existing public dataset, 3) we investigate the influence of different types of noise in training face alignment neural networks, and propose corresponding solutions. Our results demonstrate improvements in both accuracy and stability of detected facial landmarks.
28

Zhao, Lei, Zengcai Wang and Guoxin Zhang. "Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram". Mathematical Problems in Engineering 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/7206041.

Abstract:
This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a one-versus-one strategy based multiclass support vector machine (SVM) classifier is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.
29

Mansouri-Benssassi, Esma, and Juan Ye. "Generalisation and robustness investigation for facial and speech emotion recognition using bio-inspired spiking neural networks". Soft Computing 25, no. 3 (16.01.2021): 1717–30. http://dx.doi.org/10.1007/s00500-020-05501-7.

Abstract:
Emotion recognition through facial expression and non-verbal speech represents an important area in affective computing. They have been extensively studied from classical feature extraction techniques to more recent deep learning approaches. However, most of these approaches face two major challenges: (1) robustness—in the face of degradation such as noise, can a model still make correct predictions? and (2) cross-dataset generalisation—when a model is trained on one dataset, can it be used to make inferences on another dataset? To directly address these challenges, we first propose the application of a spiking neural network (SNN) in predicting emotional states based on facial expression and speech data, then investigate and compare their accuracy when facing data degradation or unseen new input. We evaluate our approach on third-party, publicly available datasets and compare to the state-of-the-art techniques. Our approach demonstrates robustness to noise, where it achieves an accuracy of 56.2% for facial expression recognition (FER) compared to 22.64% and 14.10% for CNN and SVM, respectively, when input images are degraded with a noise intensity of 0.5, and the highest accuracy of 74.3% for speech emotion recognition (SER) compared to 21.95% for CNN and 14.75% for SVM when audio white noise is applied. For generalisation, our approach achieves consistently high accuracy of 89% for FER and 70% for SER in cross-dataset evaluation and suggests that it can learn more effective feature representations, which lead to good generalisation of facial features and vocal characteristics across subjects.
30

Thenmozhi, M., and P. Gnanaskanda Parthiban. "Robust Face Recognition from NIR Dataset via Sparse Representation". Applied Mechanics and Materials 573 (June 2014): 495–500. http://dx.doi.org/10.4028/www.scientific.net/amm.573.495.

Abstract:
A biometric identification system is a computer application for automatically identifying or verifying an individual from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image against a facial database. This paper proposes dynamic face recognition from near-infrared images using a sparse representation classifier. Most of the existing datasets for facial expressions are captured in the visible light spectrum. However, visible light (VIS) can change with time and location, causing significant variations in appearance and texture. This new framework was designed to achieve robustness to pose variation and occlusion and to handle uncontrolled environmental illumination for reliable biometric identification. The paper presents a novel analysis of dynamic facial expression recognition using near-infrared (NIR) datasets and LBP (local binary pattern) feature descriptors. It shows good and robust results against illumination variations by using an infrared imaging system.
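The LBP descriptors referred to above are straightforward to compute; a minimal sketch of a uniform-LBP histogram for a face crop follows (the sparse representation classifier itself is not included, and the face crop is a random placeholder):

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram of a (NIR or visible-light) face crop, the kind of
    illumination-robust descriptor referred to in the abstract."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

face = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder NIR face crop
print(lbp_histogram(face))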
31

Kingsley, Akputu Oryina, Udoinyang G. Inyang, Ortil Msugh, Fiza T. Mughal and Abel Usoro. "Recognizing facial emotions for educational learning settings". IAES International Journal of Robotics and Automation (IJRA) 11, no. 1 (1.03.2022): 21. http://dx.doi.org/10.11591/ijra.v11i1.pp21-32.

Abstract:
Educational learning settings exploit cognitive factors as ultimate feedback to enhance personalization in teaching and learning. But besides cognition, the emotions of the learner, which reflect the affective learning dimension, also play an important role in the learning process. The emotions can be recognized by tracking explicit behaviors of the learner, such as facial or vocal expressions. Despite reasonable efforts to recognize emotions, the research community is currently constrained by two issues, namely: i) the lack of efficient feature descriptors to accurately represent and prospectively recognize (detect) the emotions of the learner; ii) the lack of contextual datasets to benchmark performances of emotion recognizers in learning-specific scenarios, resulting in poor generalizations. This paper presents a facial emotion recognition technique (FERT). The FERT is realized through the results of a preliminary analysis across various facial feature descriptors. Emotions are classified using the multiple kernel learning (MKL) method, which reportedly possesses good merits. A contextually relevant simulated learning emotion (SLE) dataset is introduced to validate the FERT scheme. Recognition performance of the FERT scheme generalizes to 90.3% on the SLE dataset. On more popular but non-contextual datasets, the scheme achieved 90.0% and 82.8% on the extended Cohn-Kanade (CK+) and acted facial expressions in the wild (AFEW) datasets, respectively. A test of the null hypothesis that there is no significant difference in the performance accuracies of the descriptors proved otherwise (χ² = 14.619, df = 5, p = 0.01212) for a model considered at a 95% confidence interval.
32

BOLCAȘ, Radu-Daniel, and Diana DRANGA. "Facial Emotions Recognition in Machine Learning". Electrotehnica, Electronica, Automatica 69, no. 4 (15.11.2021): 87–94. http://dx.doi.org/10.46904/eea.21.69.4.1108010.

Abstract:
Facial expression recognition (FER) is a field where many researchers have tried to create a model able to recognize emotions from a face. With many applications such as human-machine interfaces, safety, and medicine, this field has continued to develop with the increase of processing power. This paper contains a broad description of the psychological aspects of FER and provides a description of the datasets and algorithms that make the neural networks possible. Then a literature review is performed on recent studies in facial emotion recognition, detailing the methods and algorithms used to improve the capabilities of systems using machine learning. Each interesting aspect of the studies is discussed to highlight the novelty and the related concepts and strategies that let the recognition attain good accuracy. In addition, challenges related to machine learning are discussed, such as overfitting, its possible causes and solutions, and challenges related to the dataset, such as expression-unrelated discrepancies in head orientation, illumination, and dataset class bias. Those aspects are discussed in detail, with the review of the difficulties that come with using deep neural networks serving as a guideline for advancing the domain. Finally, those challenges offer insight into possible future directions for developing better FER systems.
33

Zulkarnain, Syavira Tiara, and Nanik Suciati. "Selective local binary pattern with convolutional neural network for facial expression recognition". International Journal of Electrical and Computer Engineering (IJECE) 12, no. 6 (1.12.2022): 6724. http://dx.doi.org/10.11591/ijece.v12i6.pp6724-6735.

Abstract:
Variation in images in terms of head pose and illumination is a challenge in facial expression recognition. This research presents a hybrid approach that combines conventional and deep learning to improve facial expression recognition performance and aims to solve that challenge. We propose a selective local binary pattern (SLBP) method to obtain a more stable image representation fed to the learning process in a convolutional neural network (CNN). In the preprocessing stage, we use an adaptive gamma transformation to reduce illumination variability. The proposed SLBP selects the discriminant features in facial images with head pose variation using the median-based standard deviation of local binary pattern images. We experimented on the Karolinska directed emotional faces (KDEF) dataset, containing thousands of images with variations in head pose and illumination, and the Japanese female facial expression (JAFFE) dataset, containing seven facial expressions of Japanese females' frontal faces. The experiments show that the proposed method is superior compared to the other related approaches, with an accuracy of 92.21% on the KDEF dataset and 94.28% on the JAFFE dataset.
34

Chu, Zixuan. "Facial expression recognition for a seven-class small and medium-sized dataset based on transfer learning CNNs". Applied and Computational Engineering 4, no. 1 (14.06.2023): 696–701. http://dx.doi.org/10.54254/2755-2721/4/2023394.

Abstract:
As one of the most common biological information for human to express their emotions, facial expression plays an important role in biological research, psychological analysis and even human-computer interaction in the computer field. It is very important to use the efficient computing and processing power of computers to realize automatic facial expression recognition. However, in the research process of this field, most people's technology and model achievements are based on large datasets, which lack the universality of small and medium-sized datasets. Therefore, this project provides an optimization model on a specific seven-class small and medium face image dataset and provides a possible technical optimization reference direction for facial expression recognition models on similar small and medium-sized datasets. During the experiment, the training performance of VGG16 and MobileNet is compared. A comparative experiment is set up to observe the effect of transfer learning mechanism on training results. The results show that transfer learning has a significant effect on the performance of the model, and the accuracy of the optimal test set is more than 90%. Regardless of whether transfer learning mechanism is used, the training performance of VGG16 model structure is better than that of MobileNet structure in the same dataset.
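A typical transfer-learning setup of the kind compared in this abstract freezes an ImageNet-pretrained backbone and trains only a small classification head. A minimal Keras sketch follows; the input size, head layout, and optimizer are assumptions rather than the paper's settings:

import tensorflow as tf

# ImageNet-pretrained VGG16 base, frozen, with a small seven-class head.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(96, 96, 3))
base.trainable = False  # the transfer-learning mechanism: reuse frozen features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(7, activation="softmax"),  # seven expression classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()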
35

Khoirullah, Habib Bahari, Novanto Yudistira and Fitra Abdurrachman Bachtiar. "Facial Expression Recognition Using Convolutional Neural Network with Attention Module". JOIV: International Journal on Informatics Visualization 6, no. 4 (31.12.2022): 897. http://dx.doi.org/10.30630/joiv.6.4.963.

Abstract:
Human Activity Recognition (HAR) is the recognition of human activities, referring to the movements performed by an individual with specific body parts. One branch of HAR is human emotion. Facial emotion is vital in human communication to help convey emotional states and intentions. Facial Expression Recognition (FER) is crucial to understanding how humans communicate. Misinterpreting facial expressions can lead to misunderstanding and difficulty reaching a common ground. Deep learning can help in recognizing these facial expressions. To improve the performance of facial expression recognition, we propose ResNet with an attached attention module to push the performance forward. This approach performs better than the standalone ResNet because the localization and sampling grid allows the model to learn how to perform spatial transformations on the input image. Consequently, it improves the model's geometric invariance and picks up the features of the expressions from the human face, resulting in better classification results. This study proves that the proposed method with attention is better than the one without, with a test accuracy of 0.7789 on the FER dataset and 0.8327 on the FER+ dataset. It concludes that the attention module is essential in recognizing facial expressions using a convolutional neural network (CNN). Advice for further research: first, add more datasets besides FER and FER+, and second, add a scheduler to decrease the learning rate during training.
36

Anjani, Suputri Devi D., and Eluri Suneetha. "Facial emotion recognition using hybrid features-novel leaky rectified triangle linear unit activation function based deep convolutional neural network". i-manager’s Journal on Image Processing 9, no. 2 (2022): 12. http://dx.doi.org/10.26634/jip.9.2.18968.

Abstract:
Facial Expression Recognition (FER) is an important topic used in many areas. FER categorizes facial expressions according to human emotions. Most networks designed for facial emotion recognition still have problems such as performance degradation and low classification accuracy. To achieve greater classification accuracy, this paper proposes a new Leaky Rectified Triangle Linear Unit (LRTLU) activation function for a Deep Convolutional Neural Network (DCNN). The input images are pre-processed using the new Adaptive Bilateral Filter Contourlet Transform (ABFCT) filtering algorithm. The face is then detected in the filtered image using the Chehra face detector. From the detected face image, facial landmarks are extracted using a cascading regression tree, and important features are extracted based on the detected landmarks. The extracted feature set is then passed as input to the Leaky Rectified Triangle Linear Unit Activation Function Based Deep Convolutional Neural Network (LRTLU-DCNN), which classifies the input image into one of six emotions: happiness, sadness, neutrality, anger, disgust, and surprise. Experiments with the proposed method are carried out using the Extended Cohn-Kanade (CK+) and Japanese Female Facial Expression (JAFFE) datasets. The proposed work achieves a classification accuracy of 99.67347% on the CK+ dataset and 99.65986% on the JAFFE dataset.
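The exact LRTLU formula and the ABFCT filter are not given in the abstract, so they are not reproduced here; the landmark-extraction step, however, is a standard ensemble-of-regression-trees operation and can be sketched with dlib. In this sketch the dlib frontal detector stands in for the Chehra detector used by the authors, and the 68-point model file name is the commonly distributed one; both are assumptions.

```python
import dlib
import numpy as np

# Hedged sketch: dlib's face detector replaces the Chehra detector, and the
# standard 68-point regression-tree predictor provides the facial landmarks from
# which geometric features would be derived for the classifier.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(gray_image):
    """Return a (68, 2) array of landmark coordinates, or None if no face is found."""
    faces = detector(gray_image, 1)
    if not faces:
        return None
    shape = predictor(gray_image, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float32)
```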
37

Naser, Omer Abdulhaleem, Sharifah Mumtazah Syed Ahmad, Khairulmizam Samsudin, Marsyita Hanafi, Siti Mariam Binti Shafie and Nor Zamri Zarina. "Facial recognition for partially occluded faces". Indonesian Journal of Electrical Engineering and Computer Science 30, no. 3 (1.06.2023): 1846. http://dx.doi.org/10.11591/ijeecs.v30.i3.pp1846-1855.

Abstract:
Facial recognition is a highly developed method of determining a person's identity just from an image of their face, and it has been used in a wide range of contexts. However, the facial recognition models of previous researchers typically have trouble identifying faces behind masks, glasses, or other obstructions. This paper therefore proposes a method to recognise faces obscured by masks and glasses efficiently and to address the issue of partially obscured faces in facial recognition. The datasets collected for this study include the CelebA, MFR2, WiderFace, LFW, and MegaFace Challenge datasets, all of which contain photos of occluded faces. Masked facial images are analysed using multi-task cascaded convolutional neural networks (MTCNN), FaceNet is used to compute embeddings for face verification, and support vector classification (SVC) labels the data to produce reliable prediction probabilities. This study achieved around 99.50% accuracy on the training set and 95% on the testing set. The model also recognizes partially obscured faces captured by digital cameras using the same datasets. We compare our results with studies on comparable datasets to show that our method is more effective and accurate.
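A rough sketch of that detection–embedding–classification pipeline is given below, assuming the facenet-pytorch implementations of MTCNN and a VGGFace2-pretrained FaceNet embedder together with scikit-learn's SVC; the file handling, kernel choice, and label encoding are illustrative, not the authors' setup.

```python
import numpy as np
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1
from sklearn.svm import SVC
from PIL import Image

mtcnn = MTCNN(image_size=160)                                # face detection + alignment
embedder = InceptionResnetV1(pretrained="vggface2").eval()   # FaceNet-style embeddings

def embed(path):
    """Detect, align, and embed a single (possibly occluded) face image."""
    face = mtcnn(Image.open(path).convert("RGB"))            # aligned face tensor, or None
    if face is None:
        return None
    with torch.no_grad():
        return embedder(face.unsqueeze(0)).squeeze(0).numpy()

def train_identity_classifier(paths, labels):
    """Fit an SVC on embeddings; probability=True yields prediction probabilities."""
    X = np.stack([embed(p) for p in paths])
    clf = SVC(kernel="linear", probability=True)
    clf.fit(X, labels)
    return clf
```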
38

Tran, Thi-Dung, Junghee Kim, Ngoc-Huynh Ho, Hyung-Jeong Yang, Sudarshan Pant, Soo-Hyung Kim and Guee-Sang Lee. "Stress Analysis with Dimensions of Valence and Arousal in the Wild". Applied Sciences 11, no. 11 (3.06.2021): 5194. http://dx.doi.org/10.3390/app11115194.

Abstract:
In the field of stress recognition, the majority of research has conducted experiments on datasets collected from controlled environments with limited stressors. As these datasets cannot represent real-world scenarios, stress identification and analysis are difficult. There is a dire need for reliable, large datasets that are specifically acquired for stress emotion with varying degrees of expression for this task. In this paper, we introduced a dataset for Stress Analysis with Dimensions of Valence and Arousal of Korean Movie in Wild (SADVAW), which includes video clips with diversity in facial expressions from different Korean movies. The SADVAW dataset contains continuous dimensions of valence and arousal. We presented a detailed statistical analysis of the dataset. We also analyzed the correlation between stress and continuous dimensions. Moreover, using the SADVAW dataset, we trained a deep learning-based model for stress recognition.
39

Wan, Xintong, Yifan Wu and Xiaoqiang Li. "Learning Robust Shape-Indexed Features for Facial Landmark Detection". Applied Sciences 12, no. 12 (8.06.2022): 5828. http://dx.doi.org/10.3390/app12125828.

Abstract:
In facial landmark detection, extracting shape-indexed features is widely applied in existing methods to impose a shape constraint over the landmarks. Commonly, these methods crop shape-indexed patches surrounding the landmarks of a given initial shape; all landmarks are then detected jointly from these patches, with the shape constraint naturally embedded in the regressor. However, two remaining challenges degrade these methods. First, the initial shape may deviate seriously from the ground truth under a large pose, resulting in considerable noise in the shape-indexed features. Second, extracting local patch features is vulnerable to occlusion, since facial context information is missing under severe occlusion. To address these issues, this paper proposes a facial landmark detection algorithm named Sparse-To-Dense Network (STDN). First, STDN employs a lightweight network to detect sparse facial landmarks and form a reinitialized shape, which efficiently improves the quality of the cropped patches under large poses. Then, a group-relational module is used to exploit the inherent geometric relations of the face, which further strengthens the shape constraint against occlusion. Our method achieves a 4.64% mean error with a 1.97% failure rate on the COFW68 dataset, a 3.48% mean error with a 0.43% failure rate on the 300W dataset, and a 7.12% mean error with an 11.61% failure rate on the Masked 300W dataset. The results demonstrate that STDN achieves outstanding performance in comparison to state-of-the-art methods, especially on occlusion datasets.
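For readers unfamiliar with shape-indexed features, the basic operation discussed above, cropping a local patch around each landmark of the current shape estimate, can be sketched as below. This generic NumPy version is not the authors' STDN implementation, and it assumes a grayscale image with landmarks inside the image bounds.

```python
import numpy as np

def crop_shape_indexed_patches(image, landmarks, patch_size=16):
    """Crop a square patch around each landmark of the current shape estimate.
    Edge padding keeps patches full-sized for landmarks near the border."""
    half = patch_size // 2
    padded = np.pad(image, ((half, half), (half, half)), mode="edge")
    patches = []
    for (x, y) in np.round(landmarks).astype(int):
        px, py = x + half, y + half                  # offset for the padding
        patches.append(padded[py - half:py + half, px - half:px + half])
    return np.stack(patches)                         # (num_landmarks, patch, patch)
```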
40

Bukhari, Nimra, Shabir Hussain, Muhammad Ayoub, Yang Yu and Akmal Khan. "Deep Learning based Framework for Emotion Recognition using Facial Expression". Pakistan Journal of Engineering and Technology 5, no. 3 (17.11.2022): 51–57. http://dx.doi.org/10.51846/vol5iss3pp51-57.

Abstract:
Humans convey their messages in different forms, and expressing emotions and moods through facial expressions is one of them. In this work, to avoid the traditional feature-extraction process (geometry-based, template-based, and appearance-based methods), a CNN model is used as a feature extractor for emotion detection from facial expressions. In this study we also used three pre-trained models: VGG-16, ResNet-50, and Inception-V3. The experiments are conducted on the FER-2013 facial expression dataset and the Extended Cohn-Kanade (CK+) dataset. On the FER-2013 dataset, the accuracy rates for CNN, ResNet-50, VGG-16, and Inception-V3 are 76.74%, 85.71%, 85.78%, and 97.93%, respectively. Similarly, on the CK+ dataset, the accuracy rates for CNN, ResNet-50, VGG-16, and Inception-V3 are 84.18%, 92.91%, 91.07%, and 73.16%, respectively. The results are strongest for Inception-V3 with 97.93% on the FER-2013 dataset and for ResNet-50 with 91.92% on the CK+ dataset.
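As an illustration of using a pretrained CNN as a feature extractor in place of hand-crafted geometry, template, or appearance features, the following sketch pulls 2048-dimensional features from a frozen ResNet-50. The preprocessing constants are the standard ImageNet values, and the choice of backbone is an assumption, since the abstract compares several.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()  # drop the final fc layer

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def expression_features(path):
    """Return a 2048-dim feature vector for one face image, ready for a classifier."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return feature_extractor(x).flatten(1)
```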
41

Zhu, Xiaoliang, Shihao Ye, Liang Zhao and Zhicheng Dai. "Hybrid Attention Cascade Network for Facial Expression Recognition". Sensors 21, no. 6 (12.03.2021): 2003. http://dx.doi.org/10.3390/s21062003.

Abstract:
Improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence and the corresponding face ROI (region of interest) is extracted to obtain the face images, which are then aligned based on the positions of the facial feature points. Second, the aligned face images are input to a residual neural network to extract the spatial features of the corresponding facial expressions, and these spatial features are passed to the hybrid attention module to obtain fused expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are fed to a fully connected layer to classify and recognize the expressions. Experiments using the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets obtained recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also delivers more than a 2% improvement on the AFEW dataset, showing markedly better facial expression recognition in natural environments.
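The spatial-then-temporal cascade described here (per-frame CNN features followed by a recurrent unit over the sequence) can be sketched in PyTorch as follows; the hybrid attention module is omitted, and the ResNet-18 backbone, hidden size, and seven-class head are assumptions made only for the sketch.

```python
import torch
import torch.nn as nn
from torchvision import models

class SpatialTemporalFER(nn.Module):
    """Illustrative cascade: per-frame ResNet features -> GRU over time -> classifier."""
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()                  # 512-dim per-frame features
        self.cnn = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clips):                        # clips: (batch, frames, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, h = self.gru(feats)
        return self.fc(h[-1])                        # classify from the last hidden state
```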
42

Hussein, Arezu Rezgar, and Rasber Dhahir Rashid. "KurdFace Morph Dataset Creation Using OpenCV". Science Journal of University of Zakho 10, no. 4 (14.12.2022): 258–67. http://dx.doi.org/10.25271/sjuoz.2022.10.4.943.

Abstract:
Automated facial recognition is increasingly used to reliably establish the identities of individuals in a variety of applications, from automated border control to unlocking mobile phones. Morphing attacks present a significant risk to face recognition systems (FRS) at automated border control. Face morphing is a technique for blending the facial images of two or more people so that the outcome looks like all of them; for example, a morphing attack may be used to obtain a fake passport from a morphed image, and this passport can then be used by each contributor to the morphed image while crossing the border. Because criminals can carry out face morphing attacks with publicly available digital image-editing tools, Morph Attack Detection (MAD) systems have received a lot of attention in recent years, and in the absence of automated morphing detection, face recognition systems are extremely susceptible to such attacks. Publicly available face morph datasets are limited, and to our knowledge there is no Kurdish morph dataset. In this work, we therefore generated a new face dataset including morphed images, which we named the "KurdFace" dataset; OpenCV was used to generate the morphed images. We then study the susceptibility of biometric systems to such morphed-face attacks by designing a Morph Attack Detection model that distinguishes morphed images from genuine ones. To evaluate the robustness of our dataset with regard to morphing attack detection, we compare it with the AMSL dataset by determining the classification error rate on both datasets and examining how our dataset differs from others. Local Binary Pattern and Uniform Local Binary Pattern are used as feature extraction techniques, and SVM is used as the classifier. The experimental results show that our dataset is suitable for research purposes.
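The MAD classifier described above (LBP-based features with an SVM) can be sketched as follows; the uniform-LBP parameters, histogram binning, and RBF kernel are assumptions of this sketch, not the paper's reported configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # assumed LBP neighbourhood

def lbp_histogram(gray):
    """Normalized uniform-LBP histogram of one grayscale face image."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_mad(genuine_images, morphed_images):
    """genuine_images / morphed_images: lists of grayscale numpy arrays."""
    X = np.stack([lbp_histogram(img) for img in genuine_images + morphed_images])
    y = np.array([0] * len(genuine_images) + [1] * len(morphed_images))  # 1 = morphed
    clf = SVC(kernel="rbf")
    return clf.fit(X, y)
```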
43

Ngo, Quan T., and Seokhoon Yoon. "Facial Expression Recognition Based on Weighted-Cluster Loss and Deep Transfer Learning Using a Highly Imbalanced Dataset". Sensors 20, no. 9 (5.05.2020): 2639. http://dx.doi.org/10.3390/s20092639.

Abstract:
Facial expression recognition (FER) is a challenging problem in the fields of pattern recognition and computer vision. The recent success of convolutional neural networks (CNNs) in object detection and object segmentation tasks has shown promise in building an automatic deep CNN-based FER model. However, in real-world scenarios, performance degrades dramatically owing to the great diversity of factors unrelated to facial expressions, and due to a lack of training data and an intrinsic imbalance in the existing facial emotion datasets. To tackle these problems, this paper not only applies deep transfer learning techniques, but also proposes a novel loss function called weighted-cluster loss, which is used during the fine-tuning phase. Specifically, the weighted-cluster loss function simultaneously improves the intra-class compactness and the inter-class separability by learning a class center for each emotion class. It also takes the imbalance in a facial expression dataset into account by giving each emotion class a weight based on its proportion of the total number of images. In addition, a recent, successful deep CNN architecture, pre-trained in the task of face identification with the VGGFace2 database from the Visual Geometry Group at Oxford University, is employed and fine-tuned using the proposed loss function to recognize eight basic facial emotions from the AffectNet database of facial expression, valence, and arousal computing in the wild. Experiments on an AffectNet real-world facial dataset demonstrate that our method outperforms the baseline CNN models that use either weighted-softmax loss or center loss.
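The abstract gives the intuition behind the weighted-cluster loss (learned class centers plus class weights derived from class proportions) but not its exact formula. The following PyTorch sketch is therefore only one possible center-loss-style reading of that idea, intended to be added to a softmax loss during fine-tuning; the inverse-frequency weighting shown is an assumption.

```python
import torch
import torch.nn as nn

class WeightedClusterLoss(nn.Module):
    """Center-loss-style sketch: one learnable center per emotion class, with each
    class weighted inversely to its share of the training set. This is an
    interpretation of the abstract, not the paper's exact formulation."""
    def __init__(self, num_classes, feat_dim, class_counts):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        counts = torch.as_tensor(class_counts, dtype=torch.float)
        self.register_buffer("weights", counts.sum() / (num_classes * counts))

    def forward(self, features, labels):
        diff = features - self.centers[labels]             # distance to each sample's class center
        per_sample = self.weights[labels] * diff.pow(2).sum(dim=1)
        return per_sample.mean()                           # added to the softmax loss in training
```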
44

Yang, Yuanzhe, Zhiyi Niu, Yuying Qiu, Biao Song, Xinchang Zhang and Yuan Tian. "A Multi-Input Fusion Model for Privacy and Semantic Preservation in Facial Image Datasets". Applied Sciences 13, no. 11 (2.06.2023): 6799. http://dx.doi.org/10.3390/app13116799.

Abstract:
The widespread application of multimedia technologies such as video surveillance, online meetings, and drones facilitates the acquisition of a large amount of data that may contain facial features, posing significant concerns with regard to privacy. Protecting privacy while preserving the semantic contents of facial images is a challenging but crucial problem. Contemporary techniques for protecting the privacy of images lack the incorporation of the semantic attributes of faces and disregard the protection of dataset privacy. In this paper, we propose the Facial Privacy and Semantic Preservation (FPSP) model that utilizes similar facial feature replacement to achieve identity concealment, while adding semantic evaluation to the loss function to preserve semantic features. The proposed model is versatile and efficient in different task scenarios, preserving image utility while concealing privacy. Our experiments on the CelebA dataset demonstrate that the model achieves a semantic preservation rate of 77% while concealing the identities in facial images in the dataset.
45

Yang, B., Z. Li and E. Cao. "Facial Expression Recognition Based on Multi-dataset Neural Network". Radioengineering 29, no. 1 (14.04.2020): 259–66. http://dx.doi.org/10.13164/re.2020.0259.

46

Benton, C., A. Clark, R. Cooper, I. Penton-Voak and S. Nikolov. "Different views of facial expressions: an image sequence dataset". Journal of Vision 7, no. 9 (30.03.2010): 945. http://dx.doi.org/10.1167/7.9.945.

47

Yan, Haibin. "Transfer subspace learning for cross-dataset facial expression recognition". Neurocomputing 208 (October 2016): 165–73. http://dx.doi.org/10.1016/j.neucom.2015.11.113.

48

Salari, Seyed Reza, and Habib Rostami. "Pgu-Face: A dataset of partially covered facial images". Data in Brief 9 (December 2016): 288–91. http://dx.doi.org/10.1016/j.dib.2016.09.002.

49

Ye, Yuping, Zhan Song, Junguang Guo and Yu Qiao. "SIAT-3DFE: A High-Resolution 3D Facial Expression Dataset". IEEE Access 8 (2020): 48205–11. http://dx.doi.org/10.1109/access.2020.2979518.

50

Liu, Yinglu, Hailin Shi, Hao Shen, Yue Si, Xiaobo Wang and Tao Mei. "A New Dataset and Boundary-Attention Semantic Segmentation for Face Parsing". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (3.04.2020): 11637–44. http://dx.doi.org/10.1609/aaai.v34i07.6832.

Abstract:
Face parsing has recently attracted increasing interest due to its numerous potential applications, such as facial makeup and facial image generation. In this paper, we make contributions to the face parsing task from two aspects. First, we develop a high-efficiency framework for pixel-level face parsing annotation and construct a new large-scale Landmark guided face Parsing dataset (LaPa). It consists of more than 22,000 facial images with abundant variations in expression, pose, and occlusion, and each image of LaPa is provided with an 11-category pixel-level label map and 106-point landmarks. The dataset is publicly accessible to the community to boost the advance of face parsing. Second, a simple yet effective Boundary-Attention Semantic Segmentation (BASS) method is proposed for face parsing, which contains a three-branch network with elaborately developed loss functions to fully exploit the boundary information. Extensive experiments on our LaPa benchmark and the public Helen dataset show the superiority of our proposed method.
