Journal articles on the topic "Facial video processing"

To see the other types of publications on this topic, follow the link: Facial video processing.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Facial video processing".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its online abstract, whenever these are available in the source metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Muthanna Shibel, Ahmed, Sharifah Mumtazah Syed Ahmad, Luqman Hakim Musa, and Mohammed Nawfal Yahya. "DEEP LEARNING DETECTION OF FACIAL BIOMETRIC PRESENTATION ATTACK." LIFE: International Journal of Health and Life-Sciences 8 (October 23, 2023): 61–78. http://dx.doi.org/10.20319/lijhls.2022.8.6178.

Abstract:
Face recognition systems have gained increasing importance in today's society, with applications ranging from access control for secure systems to electronic devices such as mobile phones and laptops. However, the security of face recognition systems is currently threatened by the emergence of spoofing attacks, which happen when someone tries to bypass the biometric system without authorization by presenting a photo, 3-dimensional mask, or replayed video of a legitimate user. Video attacks are perhaps among the most frequent, cheapest, and simplest spoofing techniques for cheating face recognition systems. This research paper focuses on face liveness detection against video attacks, aiming to determine whether the provided biometric samples come from a live face or a spoof attack by extracting frames from the videos and classifying them with the ResNet-50 deep learning model. A majority voting mechanism is used as decision fusion to derive the final verdict. The experiment was conducted on the spoof videos of the Replay-Attack dataset. The results demonstrate that the optimal number of frames for video liveness detection is three, with an accuracy of 96.93%. This result is encouraging, since a low number of frames requires minimal processing time.
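For readers who want to prototype the frame-sampling and majority-voting fusion this abstract describes, here is a minimal sketch in Python (OpenCV + PyTorch). The checkpoint file name, the live/spoof label convention, and the preprocessing details are assumptions for illustration, not details taken from the paper.

```python
# Frame-level liveness classification with majority-vote decision fusion.
import cv2
import torch
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # live vs. spoof
model.load_state_dict(torch.load("liveness_resnet50.pt"))  # hypothetical checkpoint
model.eval()

def is_live(video_path: str, n_frames: int = 3) -> bool:
    """Classify n_frames evenly spaced frames and fuse by majority vote."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    votes = []
    for i in range(n_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // n_frames)
        ok, frame = cap.read()
        if not ok:
            continue
        x = preprocess(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).unsqueeze(0)
        with torch.no_grad():
            votes.append(model(x).argmax(1).item())  # 1 = live (assumed)
    cap.release()
    return sum(votes) > len(votes) / 2
```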
2

Kroczek, Leon O. H., Angelika Lingnau, Valentin Schwind, Christian Wolff, and Andreas Mühlberger. "Angry facial expressions bias towards aversive actions." PLOS ONE 16, no. 9 (September 1, 2021): e0256912. http://dx.doi.org/10.1371/journal.pone.0256912.

Abstract:
Social interaction requires fast and efficient processing of another person’s intentions. In face-to-face interactions, aversive or appetitive actions typically co-occur with emotional expressions, allowing an observer to anticipate action intentions. In the present study, we investigated the influence of facial emotions on the processing of action intentions. Thirty-two participants were presented with video clips showing virtual agents displaying a facial emotion (angry vs. happy) while performing an action (punch vs. fist-bump) directed towards the observer. During each trial, video clips stopped at varying durations of the unfolding action, and participants had to recognize the presented action. Naturally, participants’ recognition accuracy improved with increasing duration of the unfolding actions. Interestingly, while facial emotions did not influence accuracy, there was a significant influence on participants’ action judgements. Participants were more likely to judge a presented action as a punch when agents showed an angry compared to a happy facial emotion. This effect was more pronounced in short video clips, showing only the beginning of an unfolding action, than in long video clips, showing near-complete actions. These results suggest that facial emotions influence anticipatory processing of action intentions allowing for fast and adaptive responses in social interactions.
3

Tej, Maddimsetty Bullaiaha. "Eye of Devil: Face Recognition in Real World Surveillance Video with Feature Extraction and Pattern Matching." International Journal for Research in Applied Science and Engineering Technology 9, no. 12 (December 31, 2021): 2334–37. http://dx.doi.org/10.22214/ijraset.2021.39711.

Abstract:
Abstract: "Person lost", "person missing": these are words we come across whenever mass-gathering events are going on or in crowded areas. To address this issue, some traditional approaches, such as announcements, are in use. One idea is to identify the person using face recognition and pattern matching techniques. There are several ways to implement face recognition, such as extracting facial features from the positions of the eyes, nose, and jawbone, or through skin texture analysis. Using these techniques, a unique dataset can be created for each person. A photograph of the missing person can be used to extract these facial features, and once that individual's dataset is obtained, pattern matching techniques offer scope for finding a person with the same facial features in crowd images or videos. Keywords: Face-Recognition, Image-Processing, Feature extraction, Video-Processing, Pattern-Matching.
4

Mahalim, Vaishnavi Sanjay, Seema Goroba Admane, Divya Vinod Kundgar, and Ankit Hirday Narayan Singh. "Development of Real-Time Emotion Recognition System Using Facial Expressions." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 10 (October 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem26415.

Abstract:
This research presents a real-time emotion recognition system that combines human-friendly machine interaction with image processing. Facial detection has been available for many years. Going a step further, the emotions that people express on their faces and experience in their brains, captured via video, electrical signals, or images, can be approximated. Since detecting emotions from images or videos is hard for computers and difficult even for the human eye, machine emotion detection requires a variety of image processing approaches for feature extraction. The approach proposed in this paper consists of two primary processes: face detection and facial expression recognition (FER). The experimental investigation of facial emotion recognition is the main topic of this study. The workflow of an emotion detection system consists of image acquisition, pre-processing, face detection, feature extraction, and classification. The emotion identification system uses the Haar cascade algorithm, an object detection algorithm, to recognize faces in an image or a real-time video, and a KNN classifier for image classification to identify the emotions. The system operates on real-time images captured with a webcam. The goal of this research is to develop an automatic facial expression recognition system that can recognize various emotions. Based on these experiments, the system can distinguish between people who are fearful, furious, shocked, sad, or pleased, among other emotions.
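The Haar-cascade-plus-KNN pipeline sketched in this abstract is simple to prototype. Below is a hedged sketch in Python (OpenCV + scikit-learn); the training arrays, the 48x48 crop size, and k=5 are placeholder assumptions, and a real system would train on a labeled FER dataset.

```python
# Webcam emotion recognition: Haar cascade face detection + KNN classification.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# X_train: flattened 48x48 grayscale face crops; y_train: emotion labels.
X_train = np.load("faces.npy")      # hypothetical training data
y_train = np.load("labels.npy")
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

cap = cv2.VideoCapture(0)  # webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y+h, x:x+w], (48, 48)).flatten()[None, :]
        emotion = knn.predict(face)[0]
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        cv2.putText(frame, str(emotion), (x, y-10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```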
5

Blanes-Vidal, Victoria, Tomas Majtner, Luis David Avendaño-Valencia, Knud B. Yderstraede, and Esmaeil S. Nadimi. "Invisible Color Variations of Facial Erythema: A Novel Early Marker for Diabetic Complications?" Journal of Diabetes Research 2019 (September 2, 2019): 1–7. http://dx.doi.org/10.1155/2019/4583895.

Abstract:
Aim. (1) To quantify the invisible variations of facial erythema that occur as blood flows in and out of the face of diabetic patients during the blood pulse wave, using an innovative image processing method on videos recorded with a conventional digital camera, and (2) to determine whether this "unveiled" facial red coloration and its periodic variations present specific characteristics in diabetic patients that differ from those in control subjects. Methods. We video-recorded the faces of 20 diabetic patients with peripheral neuropathy, retinopathy, and/or nephropathy and 10 nondiabetic control subjects, using a Canon EOS camera, for 240 s. Only one participant presented visible facial erythema. We applied novel image processing methods to make the facial redness and its variations visible, and automatically detected and extracted the redness intensity of eight facial patches from each frame. We compared the averages and standard deviations of redness in the two groups using t-tests. Results. Facial redness varies, imperceptibly and periodically, between redder and paler, following the heart pulsation. This variation is consistently and significantly larger in diabetic patients compared to controls (p value < 0.001). Conclusions. Our study and its results (i.e., larger variations of facial redness with the heartbeats in diabetic patients) are unprecedented. One limitation is the sample size. Confirmation in a larger study would ground the development of a noninvasive, cost-effective, automatic tool for early detection of diabetic complications, based on measuring invisible redness variations by image processing of facial videos captured at home with the patient's smartphone.
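The per-frame redness extraction and group comparison described here can be illustrated as follows. This is a deliberately simplified sketch (Python, OpenCV + SciPy) under strong assumptions: fixed patch coordinates, hypothetical file names, and the standard deviation as the variability measure; the authors' actual image processing method is more sophisticated.

```python
# Per-frame facial redness extraction followed by a two-sample t-test.
import cv2
import numpy as np
from scipy import stats

PATCHES = [(100, 120, 40, 40), (220, 120, 40, 40)]  # (x, y, w, h), assumed

def redness_series(video_path: str) -> np.ndarray:
    """Per-frame mean red-channel intensity averaged over facial patches."""
    cap = cv2.VideoCapture(video_path)
    series = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        vals = [frame[y:y+h, x:x+w, 2].mean()  # BGR: index 2 = red
                for x, y, w, h in PATCHES]
        series.append(np.mean(vals))
    cap.release()
    return np.asarray(series)

# Standard deviation of redness as a simple per-subject variability measure.
diabetic = [redness_series(f"diabetic_{i}.mp4").std() for i in range(20)]
control = [redness_series(f"control_{i}.mp4").std() for i in range(10)]
t, p = stats.ttest_ind(diabetic, control)
print(f"t = {t:.2f}, p = {p:.4f}")
```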
6

Singh, Mr Ankit. "Real-Time Emotion Recognition System Using Facial Expressions." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (April 19, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem31021.

Abstract:
This paper describes a real-time emotion detection system using image processing with human-friendly machine interaction. Facial detection has been around for decades. Taking a step ahead, human expressions displayed by the face and felt by the brain, captured via video, electrical signals, or images, can be approximated. Recognizing emotions from images or videos is a difficult task for the human eye and challenging for machines; thus, machine emotion detection requires many image processing techniques for feature extraction. This paper proposes a system with two main processes: face detection and facial expression recognition (FER). The research focuses on an experimental study of identifying facial emotions. The flow of an emotion detection system includes image acquisition, preprocessing, face detection, feature extraction, and classification. To identify emotions, the system uses a KNN classifier for image classification and the Haar cascade algorithm, an object detection algorithm, to identify faces in an image or a real-time video. The system works by taking live images from the webcam. The objective of this research is to produce an automatic facial emotion detection system that identifies different emotions; based on these experiments, the system can identify people who are sad, surprised, happy, in fear, angry, disgusted, etc.
7

S, Manjunath, Banashree P, Shreya M, Sneha Manjunath Hegde, and Nischal H P. "Driver Drowsiness Detection System." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 129–35. http://dx.doi.org/10.22214/ijraset.2022.42109.

Abstract:
Abstract: Recently, in addition to research and development on autonomous vehicle technology, machine learning methods have been used to predict a driver's condition and emotions in order to provide information that improves road safety. A driver's condition can be estimated not only from basic characteristics such as gender, age, and driving experience, but also from facial expressions, bio-signals, and driving behaviour. Recent developments in video processing using machine learning have enabled images obtained from cameras to be analysed with high accuracy. Therefore, based on the relationship between facial features and a driver's drowsy state, variables that reflect facial features have been established. In this paper, we propose a method for extracting detailed features of the eyes, the mouth, and the position of the head using the OpenCV and Dlib libraries in order to estimate a driver's level of drowsiness. Keywords: Drowsiness, OpenCV, Dlib, facial features, video processing
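A minimal illustration of the OpenCV/Dlib landmark extraction this abstract refers to, using Dlib's standard 68-point model (downloaded separately); the webcam source and the drawing code are incidental choices, not the paper's setup.

```python
# Extract eye and mouth landmarks per frame with Dlib's 68-point predictor.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# 68-point model: points 36-47 are the eyes, 48-67 the mouth.
RIGHT_EYE, LEFT_EYE, MOUTH = range(36, 42), range(42, 48), range(48, 68)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        for i in list(RIGHT_EYE) + list(LEFT_EYE) + list(MOUTH):
            p = shape.part(i)
            cv2.circle(frame, (p.x, p.y), 1, (0, 255, 0), -1)
    cv2.imshow("landmarks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```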
8

Lee, Seongmin, Hyunse Yoon, Sohyun Park, Sanghoon Lee, and Jiwoo Kang. "Stabilized Temporal 3D Face Alignment Using Landmark Displacement Learning." Electronics 12, no. 17 (September 4, 2023): 3735. http://dx.doi.org/10.3390/electronics12173735.

Abstract:
One of the most crucial aspects of 3D facial models is facial reconstruction. However, it is unclear if face shape distortion is caused by identity or expression when the 3D morphable model (3DMM) is fitted into largely expressive faces. In order to overcome the problem, we introduce neural networks to reconstruct stable and precise faces in time. The reconstruction network extracts the 3DMM parameters from video sequences to represent 3D faces in time. Meanwhile, our displacement networks learn the changes in facial landmarks. In particular, the networks learn changes caused by facial identity, facial expression, and temporal cues, respectively. The proposed facial alignment network exhibits reliable and precise performance in reconstructing static and dynamic faces by leveraging these displacement networks. The 300 Videos in the Wild (300VW) dataset is utilized for qualitative and quantitative evaluations to confirm the effectiveness of our method. The results demonstrate the considerable advantages of our method in reconstructing 3D faces from video sequences.
9

Rocha Neto, Aluizio, Thiago P. Silva, Thais Batista, Flávia C. Delicato, Paulo F. Pires, and Frederico Lopes. "Leveraging Edge Intelligence for Video Analytics in Smart City Applications." Information 12, no. 1 (December 31, 2020): 14. http://dx.doi.org/10.3390/info12010014.

Abstract:
In smart city scenarios, the huge proliferation of monitoring cameras scattered in public spaces has posed many challenges to network and processing infrastructure. A few dozen cameras are enough to saturate the city’s backbone. In addition, most smart city applications require a real-time response from the system in charge of processing such large-scale video streams. Finding a missing person using facial recognition technology is one of these applications that require immediate action on the place where that person is. In this paper, we tackle these challenges presenting a distributed system for video analytics designed to leverage edge computing capabilities. Our approach encompasses architecture, methods, and algorithms for: (i) dividing the burdensome processing of large-scale video streams into various machine learning tasks; and (ii) deploying these tasks as a workflow of data processing in edge devices equipped with hardware accelerators for neural networks. We also propose the reuse of nodes running tasks shared by multiple applications, e.g., facial recognition, thus improving the system’s processing throughput. Simulations showed that, with our algorithm to distribute the workload, the time to process a workflow is about 33% faster than a naive approach.
10

Selva, Selva, and Selva Kumar S. "Hybridization of Deep Sequential Network for Emotion Recognition Using Unconstraint Video Analysis." Journal of Cybersecurity and Information Management 13, no. 2 (2024): 109–23. http://dx.doi.org/10.54216/jcim.130209.

Abstract:
Facial expressions have proven to be a reliable way to discern human emotions in various circumstances. Facial expression recognition (FER) has emerged as a research topic for identifying essential emotions, amid the present exponential rise in research on emotion detection. Happiness is one of the basic emotions everyone may experience, and facial expressions detect it better than other emotion-measuring methods. Most techniques are designed to recognize a range of emotions to achieve the highest overall precision; maximizing the recognition accuracy for one particular emotion is challenging for researchers. Some techniques exist to identify a single happy mood recorded in unconstrained video, but they are all limited by the extreme head-pose fluctuations they need to handle, and their accuracy still needs improvement. This research proposes a novel hybrid facial emotion recognition method for unconstrained video to improve accuracy. A Deep Belief Network (DBN) with Long Short-Term Memory (LSTM) is employed to extract dynamic data from the video frames. The experiments apply decision-level and feature-level fusion techniques to an unconstrained video dataset. The outcomes show that the proposed hybrid approach can be more precise than existing facial expression models.
11

Namboodiri, Sandhya Parameswaran, and Venkataraman D. "A computer vision based image processing system for depression detection among students for counseling." Indonesian Journal of Electrical Engineering and Computer Science 14, no. 1 (April 1, 2019): 503. http://dx.doi.org/10.11591/ijeecs.v14.i1.pp503-512.

Abstract:
Psychological problems in college students, such as depression, pessimism, eccentricity, and anxiety, are caused principally by the neglect of continuous monitoring of students' psychological well-being. Identifying depression at the college level is desirable so that it can be addressed through better counseling at an early stage. The disturbed mental state of a student suffering from depression is clearly evident in the student's facial expressions. Identifying depression in a large group of college students is a tedious task for an individual, but advances in the image processing field have led to effective systems capable of detecting emotions from facial images in a much simpler way. Thus, an automated system is needed that captures facial images of students and analyzes them for effective detection of depression. The proposed system uses image processing techniques to study the frontal face features of college students and predict depression. The system is trained with facial features of positive and negative facial emotions. To predict depression, a video of the student is captured, from which the student's face is extracted. The facial features are then extracted using Gabor filters and classified with an SVM classifier. The level of depression is identified by calculating the amount of negative emotion present in the entire video. Based on the level of depression, a notification is sent to the class advisor, department counselor, or university counselor, indicating the student's disturbed mental state. The present system works with an accuracy of 64.38%. The paper concludes with a description of an extended architecture for depression detection as future work.
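The Gabor-plus-SVM stage of this pipeline might be sketched as follows (Python, OpenCV + scikit-learn). The filter-bank parameters, training data files, and the negative-frame-fraction readout are assumptions for illustration, not the authors' exact configuration.

```python
# Gabor filter-bank features on face crops, SVM classification, and a
# depression score as the fraction of negative-emotion frames.
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(gray_face: np.ndarray) -> np.ndarray:
    """Mean and std of responses to a small bank of Gabor filters."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):  # 4 orientations
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5)
        response = cv2.filter2D(gray_face, cv2.CV_32F, kernel)
        feats += [response.mean(), response.std()]
    return np.asarray(feats)

# faces: grayscale face crops; labels: 1 = negative emotion, 0 = positive.
faces, labels = np.load("faces.npy"), np.load("labels.npy")  # hypothetical
X = np.stack([gabor_features(f) for f in faces])
clf = SVC(kernel="rbf").fit(X, labels)

# Depression level estimated as the fraction of negative-emotion frames.
video_frames = faces[:100]  # stand-in for frames extracted from one video
test = np.stack([gabor_features(f) for f in video_frames])
level = clf.predict(test).mean()
print(f"proportion of negative-emotion frames: {level:.2f}")
```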
12

Bhat, Mr Abhilash L., N. Nithesh Kumar, Poojitha Y, Siripireddy Thulasi, and V. Arvind. "CNN Based Facial Recognition with Age Invariance." International Journal for Research in Applied Science and Engineering Technology 11, no. 11 (November 30, 2023): 1061–65. http://dx.doi.org/10.22214/ijraset.2023.56680.

Abstract:
Abstract: As the world has seen exponential expansion over the previous decade, there has been an unusual increase in the crime rate as well as in the number of criminals and missing persons. Face recognition can extract the individual characteristics of the human face and is a straightforward and adaptable biometric technology. The technology used to recognize and identify faces in images or videos is called face detection and recognition. As technology has advanced, extracting facial features has become easier. This study describes the use of an automated security camera for real-time face recognition. With this system, we can instantly identify and detect the faces of criminals in a live video feed captured by a camera. Criminal records typically include the offender's picture and personal information, so these photos can be used together with that information. The security camera's recorded footage is transformed into frames. After a face is identified in a frame, it undergoes pre-processing and feature extraction. The characteristics extracted from the real-time image are compared with those of the images kept in the criminal database.
13

P.Dahake, R., and M. U. Kharat. "Face Detection and Processing: a Survey." International Journal of Engineering & Technology 7, no. 4.19 (November 27, 2018): 1066. http://dx.doi.org/10.14419/ijet.v7i4.19.28287.

Abstract:
In the recent era, facial image processing has gained importance, and face detection from images or video has a number of applications, including video surveillance, entertainment, security, multimedia, communication, and ubiquitous computing. Various research works have been carried out on face detection and processing, including detection, tracking of the face, pose estimation, and clustering of detected faces. Although significant advances have been made, face detection systems perform satisfactorily only under controlled environments and may degrade in challenging scenarios such as real-time video face detection and processing. There are many real-time applications where the human face serves as an identity, and these applications are time-bound, so the time taken to detect a face in an image or video and to perform further processing is essential. Our goal here is therefore to give an overview of face detection systems and to review various human-skin-color-based approaches and the Haar-feature-based approach for better detection performance. Tagging and clustering of detected faces is essential in some cases, so the time factor plays an important role in such further processing. Some recent approaches to improving detection speed, such as using the Graphics Processing Unit, are discussed, and future directions in this area are provided.
14

Abraham, Jobin Reji, Jobin, Saniya P. M, George Dominic, and M. Arjun. "Surveillance System with Face Recognition Using Hog." International Journal for Research in Applied Science and Engineering Technology 10, no. 12 (December 31, 2022): 199–206. http://dx.doi.org/10.22214/ijraset.2022.47854.

Abstract:
Abstract: Due to various suspicious activities, surveillance systems continue to evolve in the technical field. Everyday security threats can have a serious impact on people's day-to-day activity. Numerous techniques have been developed in this area, but some issues have not yet been overcome. The research described in this paper offers more precise video surveillance with less processing complexity. The system's most crucial components are face localization, detection, and recognition. The system extracts salient facial information from a live environment or a video dataset. The face and background frames are then extracted from the currently recorded video data. The extracted facial image data are compared with the face images in the database. If no match with the existing data is found, a security alarm or signal is issued to prompt the security team to take action. The suggested solution outperforms current systems in terms of accuracy, efficiency, and cost.
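One common way to realize HOG-based face matching of the kind the title refers to is the face_recognition library, which wraps Dlib's HOG detector by default. Below is a hedged sketch; the database file names and the alert hook are placeholders, not the authors' implementation.

```python
# HOG face detection + encoding comparison against a known-face database.
import face_recognition

# Known faces: encodings precomputed from database images.
known = face_recognition.face_encodings(
    face_recognition.load_image_file("database_person.jpg"))
assert known, "no face found in the database image"

frame = face_recognition.load_image_file("camera_frame.jpg")
locations = face_recognition.face_locations(frame, model="hog")
encodings = face_recognition.face_encodings(frame, locations)

for enc in encodings:
    matches = face_recognition.compare_faces(known, enc, tolerance=0.6)
    if not any(matches):
        print("unknown face detected - raise security alert")  # placeholder hook
```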
15

Syahputra, Eswin, Irpan Nursukmi, Sony Putra, Bayu Sukma Sani, and Rian Farta Wijaya. "EYE ASPECT RATIO ADJUSTMENT DETECTION FOR STRONG BLINKING SLEEPINESS BASED ON FACIAL LANDMARKS WITH EYE-BLINK DATASET." ZERO: Jurnal Sains, Matematika dan Terapan 6, no. 2 (February 10, 2023): 147. http://dx.doi.org/10.30829/zero.v6i2.14751.

Abstract:
Blink detection is an important technique in a variety of settings, including facial motion analysis and signal processing. However, automatic blink detection is challenging because of the blink rate. This paper proposes a real-time method for detecting eye blinks in a video series. The method is based on automatic facial landmark detection trained on real-world datasets and demonstrates robustness against various environmental factors, including lighting conditions, facial emotions, and head position. The proposed algorithm calculates the positions of facial landmarks, extracts a scalar value using the Eye Aspect Ratio (EAR), and characterises eye closeness in each frame. For each video frame, the method calculates the facial landmark locations and extracts the vertical distance between the eyelids from those positions. Blinks are detected by applying an EAR threshold and recognising the pattern of EAR values in a short temporal window. Results on a common dataset show that the proposed approach is more efficient than state-of-the-art techniques.
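The EAR computation and thresholding described here follow a well-known formula; a short sketch is given below. The threshold and the consecutive-frame window are typical values from the EAR literature, not numbers taken from this paper, and `eye` is assumed to hold the six landmark points of one eye in Dlib 68-point order.

```python
# Eye Aspect Ratio (EAR) blink detection over a series of per-frame values.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR = (||p2-p6|| + ||p3-p5||) / (2 * ||p1-p4||)."""
    a = np.linalg.norm(eye[1] - eye[5])   # vertical distance 1
    b = np.linalg.norm(eye[2] - eye[4])   # vertical distance 2
    c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (a + b) / (2.0 * c)

EAR_THRESHOLD = 0.21   # assumed; tune per dataset
CONSEC_FRAMES = 3      # frames below threshold that count as one blink

def count_blinks(ear_series) -> int:
    """Detect blinks as short runs of EAR values below the threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < EAR_THRESHOLD:
            run += 1
        else:
            if run >= CONSEC_FRAMES:
                blinks += 1
            run = 0
    return blinks
```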
16

Savostyanov, A. N., E. G. Vergunov, A. E. Saprygin, and D. A. Lebedkin. "Validation of a face image assessment technology to study the dynamics of human functional states in the EEG resting-state paradigm." Vavilov Journal of Genetics and Breeding 26, no. 8 (January 4, 2023): 765–72. http://dx.doi.org/10.18699/vjgb-22-92.

Abstract:
The article presents the results of a study aimed at finding covariates that account for the activity of implicit cognitive processes while subjects are at functional rest and while they are presented with their own or someone else's face, for use in the joint analysis of EEG experiment data. The proposed approach is based on analyzing the dynamics of the subject's facial muscles recorded on video. The pilot study involved 18 healthy volunteers. In the experiment, the subjects sat in front of a computer screen and performed the following task: they sequentially closed their eyes (three trials of 2 minutes each) and opened them (three trials of the same duration between the closed-eyes periods) while the screen was either empty or showing a video recording of their own face or the face of an unfamiliar person of the same gender as the participant. EEG, ECG, and a video of the face were recorded for all subjects. The work also addressed a separate subtask: validating a technique for assessing the dynamics of the subjects' facial muscle activity from the recorded videos of the "eyes open" trials, in order to obtain covariates that can be included in subsequent processing along with EEG correlates in neurocognitive experiments whose paradigm does not involve active cognitive tasks ("resting-state conditions"). It was shown that the subject's gender, the stimulus type (empty screen or own/other face), and the trial number are accompanied by differences in facial activity and can be used as study-specific covariates. It was concluded that analyzing the dynamics of facial activity from video recordings of "eyes open" trials can be used as an additional method in neurocognitive research to study implicit cognitive processes associated with the perception of self and other in the functional-rest paradigm.
17

Sridhar, M. Bhanu, Sai Himaja Kinthada, and Bhargavi Marni. "A Unique Framework for Contactless Estimation of Body Vital Signs using Facial Recognition." International Journal of Computer Science and Mobile Computing 10, no. 12 (December 30, 2021): 14–20. http://dx.doi.org/10.47760/ijcsmc.2021.v10i12.002.

Abstract:
As one of the consequences of the COVID-19 pandemic, many new technologies are developing at a fast-track pace in clinical practice. The main idea of our project is to design a contactless technology, using a machine learning approach, to support patients who suffer from blood pressure disorders and coronary heart disease. This may enable people to monitor their heart rate, pulse rate, respiratory rate, and oxygen saturation levels with ease. The orientation of this paper is to monitor blood pressure from the facial changes and movements in a video, in order to get rid of cuff-based blood pressure measurement. We analyzed whether blood pressure can be obtained in a contactless way using novel technologies such as image processing and machine learning. This innovation estimates subtle facial blood-flow changes from video recordings captured by a camera, with the help of machine learning and image processing techniques.
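A plausible building block for such a system is remote photoplethysmography: averaging the green channel over a detected face region, band-pass filtering, and reading the dominant frequency. The sketch below (Python, OpenCV + SciPy) estimates pulse rate this way; it illustrates the facial blood-flow signal extraction idea, not the paper's actual blood pressure model.

```python
# Contactless pulse estimation from facial video (rPPG-style).
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_bpm(video_path: str) -> float:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces):
            x, y, w, h = faces[0]
            signal.append(frame[y:y+h, x:x+w, 1].mean())  # green channel
    cap.release()
    sig = np.asarray(signal) - np.mean(signal)
    # Keep 0.7-3.0 Hz, i.e., roughly 42-180 beats per minute.
    b, a = butter(3, [0.7 / (fps / 2), 3.0 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, sig)
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    peak = freqs[np.argmax(np.abs(np.fft.rfft(filtered)))]
    return peak * 60.0  # beats per minute
```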
18

Poorna, S. S., S. Devika Nair, Arsha Narayan, Veena Prasad, Parsi Sai Himaja, Suraj S. Kamath, and G. J. Nair. "Bimodal Emotion Recognition Using Audio and Facial Features." Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 189–94. http://dx.doi.org/10.1166/jctn.2020.8649.

Abstract:
A multimodal emotion recognition system is proposed using speech and facial images. For this purpose, a video database was developed containing emotions in three affective states: anger, sadness, and happiness. The audio and the snapshots of facial expressions acquired from the videos constituted the bimodal input for recognizing emotions. The spoken sentences in the database included text-dependent as well as text-independent sentences in the Malayalam language. The audio features were obtained by short-time processing of speech: energy, zero-crossing count, pitch, and Mel Frequency Cepstral Coefficients. For facial expressions, landmark features of the face (eyebrows, eyes, and mouth), obtained using the Viola-Jones algorithm, were used. The supervised learning methods K-Nearest Neighbor and Artificial Neural Network were used for emotion analysis. The system performance was evaluated for three cases: using audio features and facial features separately, and using both together. Further, the effect of text-dependent and text-independent audio was also analyzed. The analysis shows that text-independent videos (utilizing both modalities) with K-Nearest Neighbor (highest accuracy 82.78%) are the most effective in recognizing emotions from the database considered.
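The audio feature set named here (energy, zero-crossing count, pitch, MFCCs) is straightforward to extract with librosa; a sketch with a KNN classifier follows. The file names, sample rate, and tiny label set are placeholders, not details from the paper.

```python
# Short-time audio features + K-Nearest Neighbor emotion classification.
import librosa
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def audio_features(wav_path: str) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=16000)
    energy = librosa.feature.rms(y=y).mean()
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)  # pitch track
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.concatenate([[energy, zcr, np.nanmean(f0)], mfcc])

# Hypothetical training files, one per affective state.
X = np.stack([audio_features(f) for f in ["anger.wav", "sad.wav", "happy.wav"]])
y = np.array(["anger", "sad", "happy"])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict(X[:1]))  # sanity check on a training sample
```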
19

Bianchini, Edoardo, Domiziana Rinaldi, Marika Alborghetti, Marta Simonelli, Flavia D’Audino, Camilla Onelli, Elena Pegolo, and Francesco E. Pontieri. "The Story behind the Mask: A Narrative Review on Hypomimia in Parkinson’s Disease." Brain Sciences 14, no. 1 (January 22, 2024): 109. http://dx.doi.org/10.3390/brainsci14010109.

Abstract:
Facial movements are crucial for social and emotional interaction and well-being. Reduced facial expressions (i.e., hypomimia) is a common feature in patients with Parkinson’s disease (PD) and previous studies linked this manifestation to both motor symptoms of the disease and altered emotion recognition and processing. Nevertheless, research on facial motor impairment in PD has been rather scarce and only a limited number of clinical evaluation tools are available, often suffering from poor validation processes and high inter- and intra-rater variability. In recent years, the availability of technology-enhanced quantification methods of facial movements, such as automated video analysis and machine learning application, led to increasing interest in studying hypomimia in PD. In this narrative review, we summarize the current knowledge on pathophysiological hypotheses at the basis of hypomimia in PD, with particular focus on the association between reduced facial expressions and emotional processing and analyze the current evaluation tools and management strategies for this symptom, as well as future research perspectives.
20

Antoszczyszyn, P. M., J. M. Hannah, and P. M. Grant. "Facial Motion Analysis for Content-based Video Coding." Real-Time Imaging 6, no. 1 (February 2000): 3–16. http://dx.doi.org/10.1006/rtim.1998.0152.

21

Komar, Mayur, Vinod Kolhe, Preeti Gajul, and Rohan Bhukan. "Video Based Student Attendance Management System." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 1788–93. http://dx.doi.org/10.22214/ijraset.2022.42628.

Abstract:
Abstract: There are two conventional methods of marking attendance: calling out the roll or collecting students' signatures on paper. Both are time-consuming and cumbersome. Hence, there is a requirement for a computer-based student attendance management system that assists the faculty in maintaining attendance records automatically. In this project, we have implemented an automated attendance system using MATLAB, realizing our idea of an "Automated Attendance System Based on Facial Recognition", which has a wide range of applications. Keywords: Attendance System, Automated Attendance, Image Processing, Face Detection, Feature Matching, Face Recognition
22

Hajarolasvadi, Noushin, Enver Bashirov, and Hasan Demirel. "Video-based person-dependent and person-independent facial emotion recognition." Signal, Image and Video Processing 15, no. 5 (January 19, 2021): 1049–56. http://dx.doi.org/10.1007/s11760-020-01830-0.

23

Antoszczyszyn, P. M., J. M. Hannah, and P. M. Grant. "Reliable tracking of facial features in semantic-based video coding." IEE Proceedings - Vision, Image, and Signal Processing 145, no. 4 (1998): 257. http://dx.doi.org/10.1049/ip-vis:19982153.

24

Rao, K. Sreenivasa, and Shashidhar G. Koolagudi. "Recognition of emotions from video using acoustic and facial features." Signal, Image and Video Processing 9, no. 5 (July 24, 2013): 1029–45. http://dx.doi.org/10.1007/s11760-013-0522-6.

25

Hajarolasvadi, Noushin, and Hasan Demirel. "Deep facial emotion recognition in video using eigenframes." IET Image Processing 14, no. 14 (December 1, 2020): 3536–46. http://dx.doi.org/10.1049/iet-ipr.2019.1566.

26

Peng, Zhuang, Boyi Jiang, Haofei Xu, Wanquan Feng, and Juyong Zhang. "Facial optical flow estimation via neural non-rigid registration." Computational Visual Media 9, no. 1 (October 18, 2022): 109–22. http://dx.doi.org/10.1007/s41095-021-0267-z.

Abstract:
Optical flow estimation in human facial video, which provides 2D correspondences between adjacent frames, is a fundamental pre-processing step for many applications, like facial expression capture and recognition. However, it is quite challenging as human facial images contain large areas of similar textures, rich expressions, and large rotations. These characteristics also result in the scarcity of large, annotated real-world datasets. We propose a robust and accurate method to learn facial optical flow in a self-supervised manner. Specifically, we utilize various shape priors, including face depth, landmarks, and parsing, to guide the self-supervised learning task via a differentiable non-rigid registration framework. Extensive experiments demonstrate that our method achieves remarkable improvements for facial optical flow estimation in the presence of significant expressions and large rotations.
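As a point of reference, classical dense optical flow between adjacent frames can be computed with OpenCV's Farneback method; the learned, registration-guided flow in this paper is designed to outperform such baselines on faces. A short sketch, with a hypothetical input file:

```python
# Dense Farneback optical flow between consecutive facial video frames.
import cv2

cap = cv2.VideoCapture("face_video.mp4")  # hypothetical input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (dx, dy): per-pixel 2D correspondence to the next frame.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print(f"mean motion: {magnitude.mean():.3f} px")
    prev_gray = gray
cap.release()
```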
27

Oguine, Ozioma Collins, Kanyifeechukwu Jane Oguine, Hashim Ibrahim Bisallah, and Daniel Ofuani. "Hybrid facial expression recognition (FER2013) modelfor real-time emotion classification and prediction." BOHR International Journal of Internet of things, Artificial Intelligence and Machine Learning 1, no. 1 (2022): 56–64. http://dx.doi.org/10.54646/bijiam.2022.09.

Abstract:
Facial expression recognition is a vital research topic in most fields ranging from artificial intelligence and gaming to human-computer interaction (HCI) and psychology. This paper proposes a hybrid model for facial expression recognition, which comprises a deep convolutional neural network (DCNN) and a Haar Cascade deep learning architecture. The objective is to classify real-time and digital facial images into one of the seven facial emotion categories considered. The DCNN employed in this research has more convolutional layers, ReLU activation functions, and multiple kernels to enhance filtering depth and facial feature extraction. In addition, a Haar Cascade model was also mutually used to detect facial features in real-time images and video frames. Grayscale images from the Kaggle repository (FER2013) were used, and graphics processing unit (GPU) computation was exploited to expedite the training and validation process. Pre-processing and data augmentation techniques are applied to improve training efficiency and classification performance. The experimental results show a significantly improved classification performance compared to state-of-the-art (SoTA) experiments and research. Also, compared to other conventional models, this paper validates that the proposed architecture is superior in classification performance, with an improvement of up to 6%, totaling up to 70% accuracy, and with a lower execution time of 2,098.8 s.
28

Sumathi, J. k. "Dynamic Image Forensics and Forgery Analytics using Open Computer Vision Framework." Wasit Journal of Computer and Mathematics Science 1, no. 1 (March 17, 2021): 1–8. http://dx.doi.org/10.31185/wjcm.vol1.iss1.3.

Abstract:
The key advances in Computer Vision and Optical Image Processing are among the emerging technologies of today, with diverse applications including facial recognition, biometric verification, the Internet of Things (IoT), criminal investigation, and signature identification in banking, among several others. These applications use image and live-video processing to facilitate analysis and forecasting. Computer vision is used in a great many activities, such as monitoring, face recognition, motion recognition, and object detection. The development of social networking platforms such as Facebook and Instagram has led to an increase in the volume of image data being generated. The use of image and video processing software is a major concern for Facebook, because some of the photos and videos that people post to the social network are doctored images. Such images are frequently circulated as fakes and used in malevolent ways, such as inciting violence. Questionable images need to be authenticated before action is taken, yet it is very hard to ensure photo authenticity given the power of photo manipulation. Image forensic techniques can determine how an image was formed. The technique of image duplication is commonly used by forgers to conceal missing areas.
29

Wang, Rui Hu, and Bin Fang. "Emotion Fusion Recognition for Intelligent Surveillance with PSO-CSVM." Advanced Materials Research 225-226 (April 2011): 51–56. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.51.

Abstract:
The next generation of intelligent surveillance systems should be able to recognize humans' spontaneous emotional states automatically. Compared to speaker recognition, sensor-signal analysis, fingerprint or iris recognition, etc., facial expression and body gesture processing are the two main non-intrusive vision modalities, providing potential action information for video surveillance. In our work, we consider only one kind of facial expression, i.e., anxiety, together with gesture motion. First, facial expression and body gesture features are extracted. The Particle Swarm Optimization algorithm is used for feature subset selection and parameter optimization. The selected features are used to train and test a cascaded Support Vector Machine to obtain a high-accuracy classifier.
30

D’Ulizia, Arianna, Alessia D’Andrea, Patrizia Grifoni, and Fernando Ferri. "Detecting Deceptive Behaviours through Facial Cues from Videos: A Systematic Review." Applied Sciences 13, no. 16 (August 12, 2023): 9188. http://dx.doi.org/10.3390/app13169188.

Abstract:
Interest in detecting deceptive behaviours by various application fields, such as security systems, political debates, advanced intelligent user interfaces, etc., makes automatic deception detection an active research topic. This interest has stimulated the development of many deception-detection methods in the literature in recent years. This work systematically reviews the literature focused on facial cues of deception. The most relevant methods applied in the literature of the last decade have been surveyed and classified according to the main steps of the facial-deception-detection process (video pre-processing, facial feature extraction, and decision making). Moreover, datasets used for the evaluation and future research directions have also been analysed.
31

Zekhnine, Chérifa, and Nasr Eddine Berrached. "Human-Robots Interaction by Facial Expression Recognition." International Journal of Engineering Research in Africa 46 (January 2020): 76–87. http://dx.doi.org/10.4028/www.scientific.net/jera.46.76.

Abstract:
This paper presents a facial expression recognition system to command both a mobile robot and an arm robot. The proposed system mainly consists of two modules: facial expression recognition and robot command. The first module extracts the regions of interest (ROI: mouth, eyes, eyebrows) using Gradient Vector Flow (GVF) snake segmentation and Euclidean distance calculation (compatible with the MPEG-4 description of the six universal emotions). To preserve the temporal aspect of processing the FEEDTUM database (video files), a Time Delay Neural Network (TDNN) is used as the classifier of the universal facial expressions: happiness, sadness, surprise, anger, fear, disgust, and neutral. The second module analyzes the recognized facial expressions and translates them into a command language for communicating with the robots.
32

Chu, Wangbin, and Yepeng Guan. "Identity Verification Based on Facial Pose Pool and Bag of Words Model." Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 3 (May 19, 2017): 448–55. http://dx.doi.org/10.20965/jaciii.2017.p0448.

Abstract:
There are many challenges in face-based identity verification, one of the fundamental topics in image processing and video analysis. A novel approach has been developed for facial identity verification based on a facial pose pool, which is constructed in an incremental clustering way to capture both facial spatial information and orientation diversity. A bag-of-words model is used to extract image features from the facial pose pool with the affine-SIFT descriptor. The visual codebook is generated with k-means and a Gaussian mixture model. Posterior pseudo-probabilities are used to compute the similarities between each visual word and the corresponding local features for image representation. Comparisons with several state-of-the-art methods highlight the superior performance of the proposed method.
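The bag-of-visual-words representation at the core of this method can be sketched as follows (Python, OpenCV + scikit-learn): SIFT descriptors quantized against a k-means codebook, yielding a histogram per face image. The image files and codebook size are assumptions, and the pose-pool and posterior pseudo-probability steps of the paper are not reproduced here.

```python
# Bag-of-visual-words: SIFT descriptors -> k-means codebook -> histograms.
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()

def descriptors(image_path: str) -> np.ndarray:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

train_paths = ["face1.jpg", "face2.jpg"]          # hypothetical images
all_desc = np.vstack([descriptors(p) for p in train_paths])
codebook = KMeans(n_clusters=64, n_init=10).fit(all_desc)  # 64 visual words

def bow_histogram(image_path: str) -> np.ndarray:
    words = codebook.predict(descriptors(image_path))
    hist = np.bincount(words, minlength=64).astype(float)
    return hist / (hist.sum() or 1.0)  # L1-normalized word histogram

# Verify identity by comparing histograms, e.g., with cosine similarity.
a, b = bow_histogram("probe.jpg"), bow_histogram("gallery.jpg")
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
print(f"similarity: {cos:.3f}")
```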
33

Jakkaew, Prasara, and Takao Onoye. "Non-Contact Respiration Monitoring and Body Movements Detection for Sleep Using Thermal Imaging." Sensors 20, no. 21 (November 5, 2020): 6307. http://dx.doi.org/10.3390/s20216307.

Abstract:
Monitoring of respiration and body movements during sleep is part of screening for sleep disorders related to health status. Thermal-based methods can monitor a sleeping person without any sensors attached to the body, protecting privacy. Non-contact respiration monitoring based on thermal video normally requires visible facial landmarks such as the nostrils and mouth, and such techniques fail when the face cannot be detected from a fixed camera position during sleep. This study presents a non-contact respiration monitoring approach that does not require facial landmark visibility under a natural sleep environment, which implies uncontrolled sleep postures, darkness, and subjects covered with a blanket. Automatic region-of-interest (ROI) extraction by temperature detection is integrated with breathing-motion detection, based on image processing, to obtain the respiration signals. A signal processing technique was used to estimate respiration and body movement information from a sequence of thermal video. The proposed approach was tested on 16 volunteers, who carried out the video recordings themselves. The participants were also asked to wear the Go Direct respiratory belt to capture reference data. The results revealed that the proposed respiratory-rate measurement obtains a root mean square error (RMSE) of 1.82±0.75 bpm. The advantage of this approach lies in its simplicity and its accessibility to users who need to monitor respiration during sleep by themselves, without direct contact.
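The final step, estimating a respiratory rate from a per-frame ROI signal, can be illustrated with standard signal processing (SciPy); the breathing band and the sampling rate below are assumptions, not the paper's parameters.

```python
# Respiration rate from the dominant frequency of a band-passed ROI signal.
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

def respiration_rate_bpm(roi_means: np.ndarray, fps: float) -> float:
    """roi_means: per-frame mean ROI temperature/intensity values."""
    sig = roi_means - roi_means.mean()
    # Typical adult breathing: about 0.1-0.5 Hz (6-30 breaths per minute).
    b, a = butter(2, [0.1 / (fps / 2), 0.5 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, sig)
    freqs, power = periodogram(filtered, fs=fps)
    return freqs[np.argmax(power)] * 60.0

# Usage with a synthetic 0.25 Hz (15 bpm) breathing signal at 10 fps:
t = np.arange(0, 60, 1 / 10.0)
fake = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)
print(f"{respiration_rate_bpm(fake, fps=10.0):.1f} bpm")
```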
34

Liao, Yuxuan, Zhenyu Tang, Jiehong Lei, Jiajia Chen, and Zhong Tang. "Video Face Detection Technology and Its Application in Health Information Management System." Scientific Programming 2022 (February 4, 2022): 1–11. http://dx.doi.org/10.1155/2022/3828478.

Abstract:
Computer face detection, as an early step and prerequisite for applications such as face recognition and face analysis, has long attracted attention. With the popularization of computer applications, improvements in performance, and the gradual maturing of research in image processing and pattern recognition, face-related applications have increasingly become a reality, so research on face detection and positioning is receiving more and more attention. Face detection and positioning are an important part of face analysis technology. The goal is to search for the locations of facial features (such as the eyes, nose, mouth, and ears) in images or image sequences. It can be widely used in face tracking, face recognition, gesture recognition, facial expression recognition, head image compression and reconstruction, facial animation, and related fields. Based on a health information management system, this study mainly discusses the application of face recognition technology in video systems. Compared with other biological characteristics, such as fingerprints and irises, human faces are easier to acquire. Stable and effective face detection and face recognition algorithms have been proposed that achieve good recognition results even in real-time video surveillance. Focusing on automatic face recognition in video surveillance, this study introduces in detail the video face detection technology of the health information management system: video image collection, image preprocessing, face detection, and face recognition. A prototype of the health information management system is implemented.
35

Dewi, Christine, Rung-Ching Chen, Xiaoyi Jiang, and Hui Yu. "Adjusting eye aspect ratio for strong eye blink detection based on facial landmarks." PeerJ Computer Science 8 (April 18, 2022): e943. http://dx.doi.org/10.7717/peerj-cs.943.

Abstract:
Blink detection is an important technique in a variety of settings, including facial movement analysis and signal processing. However, automatic blink detection is very challenging because of the blink rate. This research work proposes a real-time method for detecting eye blinks in a video series. Automatic facial landmark detectors are trained on a real-world dataset and demonstrate exceptional resilience to a wide range of environmental factors, including lighting conditions, facial emotions, and head position. For each video frame, the proposed method calculates the facial landmark locations and extracts the vertical distance between the eyelids using the facial landmark positions. Our results show that the recognizable landmarks are sufficiently accurate to determine the degree of eye opening and closing consistently. The proposed algorithm estimates the facial landmark positions, extracts a single scalar quantity using the Modified Eye Aspect Ratio (Modified EAR), and characterizes the eye closeness in each frame. Finally, blinks are detected by the Modified EAR threshold value, recognizing eye blinks as a pattern of EAR values in a short temporal window. Results on a typical dataset show that the suggested approach is more efficient than the state-of-the-art technique.
36

Ozioma, Collins Oguine, Jane Oguine Kanyifeechukwu, Ibrahim Bisallah Hashim, and Ofuani Daniel. "Hybrid Facial Expression Recognition (FER2013) Model for Real-Time Emotion Classification and Prediction." BOHR International Journal of Internet of things, Artificial Intelligence and Machine Learning 1, no. 1 (2022): 63–71. http://dx.doi.org/10.54646/bijiam.011.

Abstract:
Facial expression recognition is a vital research topic in most fields ranging from artificial intelligence and gaming to human–computer interaction (HCI) and psychology. This paper proposes a hybrid model for facial expression recognition, which comprises a deep convolutional neural network (DCNN) and a Haar Cascade deep learning architecture. The objective is to classify real-time and digital facial images into one of the seven facial emotion categories considered. The DCNN employed in this research has more convolutional layers, ReLU activation functions, and multiple kernels to enhance filtering depth and facial feature extraction. In addition, a Haar Cascade model was also mutually used to detect facial features in real-time images and video frames. Grayscale images from the Kaggle repository (FER-2013) were used, and graphics processing unit (GPU) computation was exploited to expedite the training and validation process. Pre-processing and data augmentation techniques are applied to improve training efficiency and classification performance. The experimental results show a significantly improved classification performance compared to state-of-the-art (SoTA) experiments and research. Also, compared to other conventional models, this paper validates that the proposed architecture is superior in classification performance, with an improvement of up to 6%, totaling up to 70% accuracy, and with a lower execution time of 2,098.8 s.
37

Alemayehu, Kidist, Worku Jifara, and Demissie Jobir. "Attention-Based Image-to-Video Translation for Synthesizing Facial Expression Using GAN." Journal of Electrical and Computer Engineering 2023 (November 14, 2023): 1–13. http://dx.doi.org/10.1155/2023/6645356.

Abstract:
The fundamental challenge in video generation is not only generating high-quality image sequences but also generating consistent frames with no abrupt shifts. With the development of generative adversarial networks (GANs), great progress has been made in image generation tasks, which can be used for facial expression synthesis. Most previous works focused on synthesizing frontal and near-frontal faces and relied on manual annotation. However, considering only the frontal and near-frontal area is not sufficient for many real-world applications, and manual annotation fails when the video is incomplete. AffineGAN, a recent study, uses an affine transformation in latent space to infer the expression intensity value automatically; however, that work requires extracting features of the target ground-truth image, and the generated sequence of images is also not sufficient. This study addresses these issues by inferring the expression intensity value automatically, without the need to extract features of the ground-truth images. A local dataset was prepared with frontal faces and two different face positions (the left and right sides). Average content distance (ACD) metrics of the proposed solution were measured across different experiments, and the proposed solution shows improvements: it improves the ACD-I of AffineGAN from 1.606 ± 0.018 to 1.584 ± 0.00, the ACD-C from 1.452 ± 0.008 to 1.430 ± 0.009, and the ACD-G from 1.769 ± 0.007 to 1.744 ± 0.01. This work concludes that integrating self-attention into the generator network improves the quality of the generated image sequences. In addition, evenly distributing values based on frame size to assign the expression intensity value improves the consistency of the generated image sequences, and enables the generator to generate videos of different frame sizes while the intensity remains within the range [0, 1].
38

Parikibanda, Sushmitha. "Face Recognition Framework based on Convolution Neural Network with modified Long Short Term memory Method." Journal of Computational Science and Intelligent Technologies 1, no. 3 (2020): 22–28. http://dx.doi.org/10.53409/mnaa.jcsit20201304.

Abstract:
Face recognition is critical for real-world applications such as video monitoring, human-machine interaction, and safety systems. Deep learning approaches have demonstrated better results than conventional methods in image recognition, in terms of both precision and processing speed. While facial detection problems with different commercial applications have been studied extensively for several decades, they still face problems in many specific scenarios, owing to issues such as severe facial occlusions, very low resolutions, intense lighting, and exceptional image or video compression artifacts. The aim of this work is to address the issues listed above robustly with a facial detection approach called Convolutional Neural Network with modified Long Short-Term Memory (CNN-mLSTM). This method first flattens the original frame and calculates the gradient image with a Gaussian filter. The Canny-Kirsch edge detection method is then used to identify the edges of the human face. The experimental findings suggest that the proposed technique outperforms current state-of-the-art face detection methods.
39

Li, Yichun, Shuanglin Li, Christian Nash, Syed Mohsen Naqvi, and Rajesh Nair. "24 Intelligent sensing in ADHD trial (ISAT) – pilot study." Journal of Neurology, Neurosurgery & Psychiatry 94, no. 12 (November 15, 2023): e2.35. http://dx.doi.org/10.1136/jnnp-2023-bnpa.40.

Abstract:
Objectives/Aims: To the best of our knowledge, there are no studies using intelligent sensing to diagnose ADHD. The aim of this interdisciplinary (medical and engineering) research is to contribute intelligent-sensing-based multimodal (audio, video, touch, and text) data and trustworthy, reproducible AI for diagnosing ADHD. Methods: Unmedicated subjects with ADHD were interviewed at the Intelligent Sensing Lab at Newcastle University, UK, and the multimodal data were captured (figure 1: recording room layout for the intelligent sensing, audio-video-touch/keypad sensors, ADHD datasets). They completed CANTAB tasks and watched stimulating and neutral videos to elicit hyperfocus and distraction. Other distraction cues, such as noise and images on a monitor, were introduced. Healthy volunteers were used as controls. Data were analysed with speech analysis, action analysis, and the Facial Action Coding System, using the existing multimodal signal and information processing algorithms developed at Newcastle University. Results: The accuracy of the audio-based ADHD diagnosis reached over 80%. The accuracy of the action-based ADHD diagnosis system reached over 90%. The accuracy of the Facial Action Coding System was 94%. Conclusions: Even with a small number of subjects and controls, we were able to develop a proof-of-concept system that generates highly accurate results. We are now performing multimodal fusion and conducting a much larger study.
40

Winmalar D, Haretha, Vani A K, Sudharsan R, and Hari Krishna R. "Generalized Omnipresence Detection (GOD)." Journal of Innovative Image Processing 2, no. 2 (June 5, 2020): 85–92. http://dx.doi.org/10.36548/jiip.2020.2.003.

Abstract:
Identification and tracking of a person in a video are useful in applications such as video surveillance. Two levels of tracking are carried out: classification and monitoring of individuals. The human body's color histogram is used as the basis for monitoring individuals. Our system can detect a human face in a video and store the detected facial features as a Local Binary Pattern Histogram (LBPH). Once a person is detected in a video, the system automatically tracks that individual and assigns a label; the stored LBPH features are then used to track the same person in other videos. In this paper, we propose and compare the efficiency of two algorithms: one constantly updates the background to cope with illumination changes, and the other uses depth information together with RGB. This is the first step in many complex computer vision algorithms, such as human activity identification and behavior recognition. The main challenges in human/object detection and tracking are changing illumination and background. Our work is based on image processing, and it also learns and stores activities using machine learning with the help of OpenCV, an open-source computer vision library.
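OpenCV's contrib module exposes the LBPH recognizer this abstract relies on; a minimal train-then-predict sketch, assuming cropped grayscale face images on disk (file names and labels are made up):

```python
import cv2
import numpy as np

# Requires the opencv-contrib-python package for the cv2.face module.
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Hypothetical training crops: two images of person 0, one of person 1.
faces = [cv2.imread(f, cv2.IMREAD_GRAYSCALE)
         for f in ("p0_a.png", "p0_b.png", "p1_a.png")]
labels = np.array([0, 0, 1])
recognizer.train(faces, labels)

# In another video, predict which stored person a new face crop matches.
label, confidence = recognizer.predict(
    cv2.imread("unknown_face.png", cv2.IMREAD_GRAYSCALE))
print(label, confidence)  # lower confidence value = closer match
```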
41

Santos, Isabel M., Pedro Bem-Haja, André Silva, Catarina Rosa, Diâner F. Queiroz, Miguel F. Alves, Talles Barroso, Luíza Cerri, and Carlos F. Silva. "The Interplay between Chronotype and Emotion Regulation in the Recognition of Facial Expressions of Emotion." Behavioral Sciences 13, no. 1 (December 31, 2022): 38. http://dx.doi.org/10.3390/bs13010038.

Abstract:
Emotion regulation strategies affect the experience and processing of emotions and emotional stimuli. Chronotype has also been shown to influence the processing of emotional stimuli, with late chronotypes showing a bias towards better processing of negative stimuli. Additionally, greater eveningness has been associated with increased difficulties in emotion regulation and preferential use of expressive suppression strategies. Therefore, the present study aimed to understand the interplay between chronotype and emotion regulation in the recognition of dynamic facial expressions of emotion. To that end, 287 participants answered self-report measures and performed an online facial emotion recognition task with short video clips in which a neutral face gradually morphed into a full-emotion expression (one of the six basic emotions). Participants were instructed to press the spacebar to stop each video as soon as they recognized the emotional expression, and then to identify it from six provided labels/emotions. Greater eveningness was associated with shorter response times (RT) in the identification of sadness, disgust and happiness. Higher scores of expressive suppression were associated with longer RT in identifying sadness, disgust, anger and surprise. Expressive suppression significantly moderated the relationship between chronotype and the recognition of sadness and anger, with chronotype being a significant predictor of emotion recognition times only at higher levels of expressive suppression. No significant effects were observed for cognitive reappraisal. These results are consistent with a negative bias in emotion processing in late chronotypes and increased difficulty in anger and sadness recognition for morning-types who habitually use expressive suppression.
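The moderation analysis reported here (chronotype × expressive suppression on recognition RT) corresponds to a regression with an interaction term; a sketch using statsmodels, with a hypothetical data file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant data: chronotype score, expressive
# suppression score, and mean RT (ms) for recognizing sadness.
df = pd.read_csv("emotion_rt.csv")  # columns: chronotype, suppression, rt_sadness

# "chronotype * suppression" expands to both main effects plus their
# interaction term, which carries the moderation test.
model = smf.ols("rt_sadness ~ chronotype * suppression", data=df).fit()
print(model.summary())
```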
42

Wu, Haopeng, Zhiying Lu, Jianfeng Zhang, Xin Li, Mingyue Zhao, and Xudong Ding. "Facial Expression Recognition Based on Multi-Features Cooperative Deep Convolutional Network." Applied Sciences 11, no. 4 (February 4, 2021): 1428. http://dx.doi.org/10.3390/app11041428.

Abstract:
This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often cause overfitting problems or yield incomplete information due to insufficient data and manual selection of features. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall features of the face and the trends of its key parts. The processing of video data is the first stage: the ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and then an attention model is used to pick out the parts of the face that are more susceptible to expressions. Under the combined effect of these two methods, an image that can be called a local feature map is obtained. After that, the video data are sent to MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained through the sequence of images, the selection of key parts can better capture the changes in facial expressions brought about by subtle facial movements. By combining local and global features, the proposed method can acquire more information, leading to better performance. The experimental results show that MC-DCN can achieve recognition rates of 95%, 78.6% and 78.3% on the three datasets SAVEE, MMI, and edited GEMEP, respectively.
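The ensemble-of-regression-trees (ERT) landmark step mentioned in this abstract is what dlib's shape predictor implements; a sketch that extracts the facial contour points, assuming the standard 68-point model file has been downloaded (the path is an assumption):

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# dlib's shape predictor is an ERT landmark model.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
for face in detector(gray):
    shape = predictor(gray, face)
    # Points 0-16 of the 68-point model trace the overall face contour.
    contour = [(shape.part(i).x, shape.part(i).y) for i in range(17)]
    print(contour)
```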
43

Sun, Yanjia, Hasan Ayaz, and Ali N. Akansu. "Multimodal Affective State Assessment Using fNIRS + EEG and Spontaneous Facial Expression." Brain Sciences 10, no. 2 (February 6, 2020): 85. http://dx.doi.org/10.3390/brainsci10020085.

Abstract:
Human facial expressions are regarded as a vital indicator of one’s emotion and intention, and even reveal the state of health and wellbeing. Emotional states have been associated with information processing within and between subcortical and cortical areas of the brain, including the amygdala and prefrontal cortex. In this study, we evaluated the relationship between spontaneous human facial affective expressions and multimodal brain activity measured via non-invasive, wearable sensors: functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG). The affective states of twelve male participants detected via fNIRS, EEG, and spontaneous facial expressions were investigated in response to both image-content and video-content stimuli. We propose a method to jointly evaluate fNIRS and EEG signals for affective state detection (emotional valence as positive or negative). Experimental results reveal a strong correlation between spontaneous facial affective expressions and the perceived emotional valence. Moreover, the affective states were estimated from the fNIRS, EEG, and fNIRS + EEG brain activity measurements, and we show that the proposed EEG + fNIRS hybrid method outperforms fNIRS-only and EEG-only approaches. Our findings indicate that the dynamic (video-content-based) stimuli trigger a larger affective response than the static (image-content-based) stimuli. These findings also suggest the joint utilization of facial expressions and wearable neuroimaging (fNIRS and EEG) for improved emotional analysis and affective brain–computer interface applications.
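A simple way to realize the fNIRS + EEG hybrid described above is feature-level fusion: concatenate per-trial features from both modalities and train one classifier. A sketch with scikit-learn; the feature files, shapes, and SVM choice are assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-trial feature matrices saved earlier.
X_fnirs = np.load("fnirs_features.npy")  # (n_trials, n_fnirs_features)
X_eeg = np.load("eeg_features.npy")      # (n_trials, n_eeg_features)
y = np.load("valence_labels.npy")        # 0 = negative, 1 = positive

# Hybrid fNIRS + EEG: concatenate modality features trial by trial.
X_hybrid = np.concatenate([X_fnirs, X_eeg], axis=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_hybrid, y)
```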
44

Korolkova, O. A. "The effect of perceptual adaptation to dynamic facial expressions." Experimental Psychology (Russia) 10, no. 1 (2017): 67–88. http://dx.doi.org/10.17759/exppsy.2017100106.

Abstract:
We present three experiments investigating perceptual adaptation to dynamic facial emotional expressions. Dynamic expressions of six basic emotions were obtained by video recording a poser’s face. In Experiment 1, participants (n=20) evaluated the intensity of the 6 emotions, the neutral state, and the genuineness and naturalness of the dynamic expressions. The validated stimuli were then used as adaptors in Experiments 2 and 3, which explored the structure of the perceptual space of facial expressions through adaptation effects. In Experiment 2, participants (n=16) categorized neutral/emotion morphs after adaptation to dynamic expressions. In Experiment 3 (n=26), the task of the first stage was to categorize static frames derived from the video records of the poser. Individual psychometric functions were then fitted for each participant and each emotion to find the frame at which the emotion was recognized correctly in 50% of trials. These latter images were presented in the second stage, an adaptation experiment with the dynamic video records as adaptors. Based on the three experiments, we found that facial expressions of happiness and sadness are perceived as opponent emotions and mutually facilitate the recognition of each other, whereas disgust and anger, and fear and surprise, are perceptually similar and reduce each other’s recognition accuracy. We describe the categorical fields of dynamic facial expressions and of static images of the initial phases of expression development. The obtained results suggest that dimensional and categorical approaches to the perception of emotions are not mutually exclusive and probably describe different stages of face information processing. The study was supported by the Russian Foundation for Basic Research, project № 15-36-01281 “Structure of dynamic facial expressions perception”.
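The psychometric-function step in Experiment 3 — finding the frame recognized correctly on 50% of trials — can be sketched as a logistic fit over frame index; the data values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: P(correct recognition) vs. frame index."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

frames = np.arange(1, 11)
# Invented proportions of correct responses per frame for one emotion.
p_correct = np.array([0.05, 0.08, 0.15, 0.30, 0.45,
                      0.62, 0.78, 0.88, 0.94, 0.98])

params, _ = curve_fit(logistic, frames, p_correct, p0=[5.0, 1.0])
threshold_frame = params[0]  # frame with ~50% correct recognition
print(threshold_frame)
```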
45

Kulkarni, Narayan, and Ashok V. Sutagundar. "Detection of Human Facial Parts Using Viola-Jones Algorithm in Group of Faces." International Journal of Applied Evolutionary Computation 10, no. 1 (January 2019): 39–48. http://dx.doi.org/10.4018/ijaec.2019010103.

Abstract:
Face detection is an image processing technique used in computer systems to detect faces in digital images. This article proposes an approach to detect faces and facial parts in an image of a group of people using the Viola-Jones algorithm. Face detection is used in face recognition and identification systems. Automatic face detection and recognition is a most challenging and fast-growing research area in real-time applications such as CCTV surveillance, video tracking, facial expression recognition, gesture recognition, human-computer interaction, computer vision, and gender recognition. Various techniques and methods have been applied to face detection. In the proposed system, the Viola-Jones algorithm is implemented to detect multiple faces and facial parts with a high rate of accuracy.
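OpenCV ships the pre-trained Viola-Jones (Haar cascade) models this article builds on; a sketch that detects every face in a group photo and then searches for eyes inside each face region (the input file name is an assumption):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("group_photo.jpg")  # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    roi = gray[y:y + h, x:x + w]  # restrict the eye search to the face
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
        cv2.rectangle(img, (x + ex, y + ey),
                      (x + ex + ew, y + ey + eh), (255, 0, 0), 2)
```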
46

Yu, Zitong, Xiaobai Li, and Guoying Zhao. "Facial-Video-Based Physiological Signal Measurement: Recent advances and affective applications." IEEE Signal Processing Magazine 38, no. 6 (November 2021): 50–58. http://dx.doi.org/10.1109/msp.2021.3106285.

47

Dudekula, Usen, and Purnachand Nalluri. "Analysis of facial emotion recognition rate for real-time application using NVIDIA Jetson Nano in deep learning models." Indonesian Journal of Electrical Engineering and Computer Science 30, no. 1 (April 1, 2023): 598. http://dx.doi.org/10.11591/ijeecs.v30.i1.pp598-605.

Abstract:
Detecting facial emotion expressions is a classic research problem in image processing. Face expression detection can be used to help human users monitor their stress levels, and perceiving an individual’s failure to display specific expressions may help diagnose early psychological disorders. Several issues such as lighting changes, rotations, occlusions, and accessories persist; beyond these traditional image processing issues, the facial action units involved make gathering expression information and classifying the expression difficult. In this study, we use the Xception convolutional neural network (CNN), which readily focuses on salient regions such as the face, and the visual geometric group network (VGG-19) to extract facial features using the OpenCV framework, classifying the image into one of the basic facial emotions. The NVIDIA Jetson Nano provides a high video processing frame rate, achieving better accuracy than previously developed software models. On the standard CK+ dataset running on the NVIDIA Jetson Nano, the accuracy rate is 97.1% for the Xception model, 98.4% for VGG-19, and 95.6% in a real-time environment using OpenCV.
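A per-frame inference loop of the kind evaluated here might look as follows; this is a sketch, and the model file, input size (48×48 grayscale), and emotion label order are all assumptions rather than the paper's actual configuration:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
model = load_model("emotion_cnn.h5")  # hypothetical pre-trained model

cap = cv2.VideoCapture(0)  # camera attached to the Jetson Nano
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = cv2.resize(gray, (48, 48)).astype("float32") / 255.0
    probs = model.predict(face[np.newaxis, :, :, np.newaxis], verbose=0)[0]
    print(EMOTIONS[int(np.argmax(probs))])
cap.release()
```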
48

Li, Kai, Qionghai Dai, Ruiping Wang, Yebin Liu, Feng Xu, and Jue Wang. "A Data-Driven Approach for Facial Expression Retargeting in Video." IEEE Transactions on Multimedia 16, no. 2 (February 2014): 299–310. http://dx.doi.org/10.1109/tmm.2013.2293064.

49

Setiawan, Arif Budi, Kaspul Anwar, Laelatul Azizah, and Adhi Prahara. "Real-time Facial Expression Recognition to Track Non-verbal Behaviors as Lie Indicators During Interview." Signal and Image Processing Letters 1, no. 1 (March 31, 2019): 25–31. http://dx.doi.org/10.31763/simple.v1i1.144.

Abstract:
During an interview, a psychologist should pay attention to every gesture and response, both verbal and nonverbal, made by the client. Psychologists certainly have limitations in recognizing every gesture and response that indicates a lie, especially in interpreting nonverbal behaviors that usually occur in a short time. In this research, real-time facial expression recognition is proposed to track nonverbal behaviors and help the psychologist stay informed about changes of facial expression that indicate a lie. The method tracks eye gaze, wrinkles on the forehead, and false smiles using a combination of face detection and facial landmark recognition to find the facial features, and image processing methods to track the nonverbal behaviors in those features. Every nonverbal behavior is recorded and logged against the video timeline to assist the psychologist in analyzing the client’s behavior. The results of tracking nonverbal facial behaviors are accurate, and the system is expected to be a useful assistant for psychologists.
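Logging nonverbal events against the video timeline, as described, could be sketched like this; the landmark model path and the false-smile rule are placeholders, not the authors' actual criteria:

```python
import csv
import cv2
import dlib

def looks_like_false_smile(shape) -> bool:
    """Toy placeholder rule: unusually wide mouth relative to face width.
    A real detector would use a validated smile/eye-region criterion."""
    mouth_w = shape.part(54).x - shape.part(48).x
    face_w = shape.part(16).x - shape.part(0).x
    return mouth_w > 0.5 * face_w

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("interview.mp4")  # hypothetical recording
fps = cap.get(cv2.CAP_PROP_FPS)

with open("behavior_log.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["time_s", "event"])
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            if looks_like_false_smile(predictor(gray, face)):
                log.writerow([frame_idx / fps, "false_smile"])
        frame_idx += 1
```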
50

Dhandapani, Ragavesh, and Sara Marhoon Humaid Al-Ghafri. "Implementation of facial mask detection and verification of vaccination certificate using Jetson Xavier kit." IOP Conference Series: Earth and Environmental Science 1055, no. 1 (July 1, 2022): 012013. http://dx.doi.org/10.1088/1755-1315/1055/1/012013.

Abstract:
Computer-vision-based object detection systems are used for locating and recognizing objects of interest in digital images and videos/live streams. In this work, real-time face mask detection and verification of vaccination certificates are implemented using a Jetson Xavier kit. First, live video is captured through a Universal Serial Bus (USB) High Definition (HD) camera, then pre-processed using the standard Python CV library to grab a frame for further processing. Using a Haar-cascade classifier and face encoding, the face in the frame is recognized and labeled appropriately; a pre-trained deep learning (DL) model is then used to locate the target class (mask) in the frame. The Quick Response (QR) code embedded in the vaccination certificate is also analyzed from the frame, and the beneficiary and vaccination details are obtained from the website of the concerned authority. Finally, the unique QR code with the relevant information is stored on the Jetson Xavier kit for detection and retrieval during the next instance. The system’s performance is verified and found to be good in terms of accuracy compared to the other methods considered in this work. This implementation will cater to the local community and help achieve the nation’s sustainable development goals.
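Two building blocks of this pipeline — QR-code decoding and face localization — are available directly in OpenCV; a sketch under that assumption (the mask classifier itself is a pre-trained model and is only indicated by a comment):

```python
import cv2

qr = cv2.QRCodeDetector()
cap = cv2.VideoCapture(0)  # USB HD camera

ok, frame = cap.read()
if ok:
    # Decode a vaccination-certificate QR code if one is visible.
    text, points, _ = qr.detectAndDecode(frame)
    if text:
        print("QR payload:", text)

    # Locate faces with a Haar cascade; a pre-trained mask/no-mask
    # model (not reproduced here) would then classify each face crop.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray):
        face_crop = frame[y:y + h, x:x + w]
cap.release()
```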
