Selected scientific literature on the topic "Facial video processing"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Browse the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Facial video processing".

Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the work's abstract online, if it is available in the metadata.

Journal articles on the topic "Facial video processing":

1

Muthanna Shibel, Ahmed, Sharifah Mumtazah Syed Ahmad, Luqman Hakim Musa, and Mohammed Nawfal Yahya. "DEEP LEARNING DETECTION OF FACIAL BIOMETRIC PRESENTATION ATTACK". LIFE: International Journal of Health and Life-Sciences 8 (23 October 2023): 61–78. http://dx.doi.org/10.20319/lijhls.2022.8.6178.

Abstract:
Face recognition systems have gained increasing importance in today's society, with applications ranging from access control for secure systems to electronic devices such as mobile phones and laptops. However, the security of face recognition systems is currently being threatened by the emergence of spoofing attacks, which happen when someone tries to bypass the biometric system without authorization by presenting a photo, 3-dimensional mask, or replayed video of a legitimate user. Video attacks are perhaps among the most frequent, cheapest, and simplest spoofing techniques used to cheat face recognition systems. This research paper focuses on face liveness detection in video attacks, intending to determine whether the provided input biometric samples came from a live face or a spoof attack by extracting frames from the videos and classifying them using the ResNet-50 deep learning algorithm. A majority voting mechanism is used as decision fusion to derive a final verdict. The experiment was conducted on the spoof videos of the Replay-Attack dataset. The results demonstrated that the optimal number of frames for video liveness detection is 3, with an accuracy of 96.93%. This result is encouraging since the low number of frames requires minimal processing time.
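As a rough illustration of the frame-sampling and majority-voting fusion this abstract describes (not the authors' code), the sketch below assumes a hypothetical, already-trained binary classifier `liveness_model` (e.g., a fine-tuned ResNet-50) that maps a face frame to 0 (spoof) or 1 (live).

```python
# Minimal sketch of per-frame liveness classification fused by majority voting.
from collections import Counter

import cv2  # OpenCV for video decoding


def predict_liveness(video_path, liveness_model, n_frames=3):
    """Sample n_frames evenly from the video, classify each, fuse by majority vote."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    votes = []
    for i in range(n_frames):
        # jump to an evenly spaced frame index
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * max(total // n_frames, 1))
        ok, frame = cap.read()
        if not ok:
            continue
        frame = cv2.resize(frame, (224, 224))   # typical ResNet input size
        votes.append(liveness_model(frame))     # hypothetical: returns 0 (spoof) or 1 (live)
    cap.release()
    return Counter(votes).most_common(1)[0][0] if votes else None
```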
2

Kroczek, Leon O. H., Angelika Lingnau, Valentin Schwind, Christian Wolff, and Andreas Mühlberger. "Angry facial expressions bias towards aversive actions". PLOS ONE 16, no. 9 (1 September 2021): e0256912. http://dx.doi.org/10.1371/journal.pone.0256912.

Abstract:
Social interaction requires fast and efficient processing of another person’s intentions. In face-to-face interactions, aversive or appetitive actions typically co-occur with emotional expressions, allowing an observer to anticipate action intentions. In the present study, we investigated the influence of facial emotions on the processing of action intentions. Thirty-two participants were presented with video clips showing virtual agents displaying a facial emotion (angry vs. happy) while performing an action (punch vs. fist-bump) directed towards the observer. During each trial, video clips stopped at varying durations of the unfolding action, and participants had to recognize the presented action. Naturally, participants’ recognition accuracy improved with increasing duration of the unfolding actions. Interestingly, while facial emotions did not influence accuracy, there was a significant influence on participants’ action judgements. Participants were more likely to judge a presented action as a punch when agents showed an angry compared to a happy facial emotion. This effect was more pronounced in short video clips, showing only the beginning of an unfolding action, than in long video clips, showing near-complete actions. These results suggest that facial emotions influence anticipatory processing of action intentions allowing for fast and adaptive responses in social interactions.
3

Tej, Maddimsetty Bullaiaha. "Eye of Devil: Face Recognition in Real World Surveillance Video with Feature Extraction and Pattern Matching". International Journal for Research in Applied Science and Engineering Technology 9, no. 12 (31 December 2021): 2334–37. http://dx.doi.org/10.22214/ijraset.2021.39711.

Abstract:
People lost, people missing, etc., are the words we come across whenever mass-gathering events are going on or in crowded areas. To solve this issue, some traditional approaches, like announcements, are in use. One idea is to identify the person using face recognition and pattern matching techniques. There are several techniques to implement face recognition, like extraction of facial features using the position of the eyes, nose, and jawbone, or skin texture analysis. By using these techniques, a unique dataset can be created for each human. Here the photograph of the missing person can be used to extract these facial features. After getting the dataset of that individual, by using pattern matching techniques, there is scope to find a person with the same facial features in crowd images or videos. Keywords: Face-Recognition, Image-Processing, Feature extraction, Video-Processing, Pattern-Matching.
4

Mahalim, Vaishnavi Sanjay, Seema Goroba Admane, Divya Vinod Kundgar, and Ankit Hirday Narayan Singh. "Development of Real-Time Emotion Recognition System Using Facial Expressions". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 10 (1 October 2023): 1–11. http://dx.doi.org/10.55041/ijsrem26415.

Abstract:
This research presents a real-time emotion recognition system that combines human-friendly machine interaction with picture processing. For many years, facial detection has been available. Moving further, it is possible to approximate the emotions that people express on their faces and experience in their brains through the use of video, electric signals, or image forms. Since detecting emotions from images or videos is hard for computers and a difficult task even for the human eye, machine emotion detection requires a variety of image processing approaches for feature extraction. The approach proposed in this paper consists of two primary processes: face detection and facial expression recognition (FER). The experimental investigation of facial emotion recognition is the main topic of this study. An emotion detection system's workflow consists of image acquisition, pre-processing, face detection, feature extraction, and classification. The emotion identification system uses the Haar cascade algorithm, an object detection algorithm, to recognize faces in an image or a real-time video, and the KNN classifier for image classification in order to identify such emotions. The system operates by capturing real-time images from a webcam. The goal of this research is to develop an automatic facial expression recognition system that can recognize various emotions. Based on these studies, the system may be able to distinguish between a number of people who are fearful, furious, shocked, sad, or pleased, among other emotions.
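A minimal sketch of the detect-then-classify arrangement mentioned above, assuming OpenCV's bundled Haar cascade and a scikit-learn KNN model trained elsewhere on labelled face crops (`X_train`/`y_train` are placeholders, not data from the paper):

```python
# Haar-cascade face detection followed by KNN classification of the face crop.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face_vector(frame, size=(48, 48)):
    """Detect the first face, crop and normalise it, return a flat feature vector."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    crop = cv2.resize(gray[y:y + h, x:x + w], size)
    return crop.flatten().astype(np.float32) / 255.0

# X_train / y_train are assumed precomputed face vectors and emotion labels:
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
# emotion = knn.predict([extract_face_vector(frame)])[0]
```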
5

Blanes-Vidal, Victoria, Tomas Majtner, Luis David Avendaño-Valencia, Knud B. Yderstraede, and Esmaeil S. Nadimi. "Invisible Color Variations of Facial Erythema: A Novel Early Marker for Diabetic Complications?" Journal of Diabetes Research 2019 (2 September 2019): 1–7. http://dx.doi.org/10.1155/2019/4583895.

Abstract:
Aim. (1) To quantify the invisible variations of facial erythema that occur as the blood flows in and out of the face of diabetic patients, during the blood pulse wave using an innovative image processing method, on videos recorded with a conventional digital camera and (2) to determine whether this “unveiled” facial red coloration and its periodic variations present specific characteristics in diabetic patients different from those in control subjects. Methods. We video recorded the faces of 20 diabetic patients with peripheral neuropathy, retinopathy, and/or nephropathy and 10 nondiabetic control subjects, using a Canon EOS camera, for 240 s. Only one participant presented visible facial erythema. We applied novel image processing methods to make the facial redness and its variations visible and automatically detected and extracted the redness intensity of eight facial patches, from each frame. We compared average and standard deviations of redness in the two groups using t-tests. Results. Facial redness varies, imperceptibly and periodically, between redder and paler, following the heart pulsation. This variation is consistently and significantly larger in diabetic patients compared to controls (p value < 0.001). Conclusions. Our study and its results (i.e., larger variations of facial redness with the heartbeats in diabetic patients) are unprecedented. One limitation is the sample size. Confirmation in a larger study would ground the development of a noninvasive cost-effective automatic tool for early detection of diabetic complications, based on measuring invisible redness variations, by image processing of facial videos captured at home with the patient’s smartphone.
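The following is an illustrative sketch, not the published method, of how a per-frame facial redness signal could be extracted with OpenCV; the fixed patch coordinates are an assumption standing in for the paper's automatically detected facial patches.

```python
# Extract a per-frame mean-red-channel time series from one facial patch.
import cv2
import numpy as np

def redness_signal(video_path, patch):
    """patch = (x, y, w, h) of a facial region assumed to stay roughly static."""
    x, y, w, h = patch
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        signal.append(roi[:, :, 2].mean())   # OpenCV stores BGR, index 2 = red
    cap.release()
    return np.asarray(signal)

# The standard deviation of such a signal (per patch, per subject) is the kind of
# statistic the study compares between diabetic and control groups.
```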
6

Singh, Mr Ankit. "Real-Time Emotion Recognition System Using Facial Expressions". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (19 April 2024): 1–5. http://dx.doi.org/10.55041/ijsrem31021.

Abstract:
This paper describes an emotion detection system based on real-time detection using image processing with human-friendly machine interaction. Facial detection has been around for decades. Taking a step ahead, human expressions displayed by the face and felt by the brain, captured via video, electric signal, or image form, can be approximated. Recognizing emotions via images or videos is a difficult task for the human eye and challenging for machines; thus, detection of emotion by a machine requires many image processing techniques for feature extraction. This paper proposes a system that has two main processes: face detection and facial expression recognition (FER). This research focuses on an experimental study on identifying facial emotions. The flow of an emotion detection system includes image acquisition, preprocessing of the image, face detection, feature extraction, and classification. To identify such emotions, the emotion detection system uses the KNN classifier for image classification and the Haar cascade algorithm, an object detection algorithm, to identify faces in an image or a real-time video. This system works by taking live images from the webcam. The objective of this research is to produce an automatic facial emotion detection system to identify different emotions; based on these experiments, the system could identify several people that are sad, surprised, happy, fearful, angry, disgusted, etc.
7

S, Manjunath, Banashree P, Shreya M, Sneha Manjunath Hegde, and Nischal H P. "Driver Drowsiness Detection System". International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (31 May 2022): 129–35. http://dx.doi.org/10.22214/ijraset.2022.42109.

Abstract:
Abstract: Recently, in addition to autonomous vehicle technology research and development, machine learning methods have been used to predict a driver's condition and emotions in order to provide information that will improve road safety. A driver's condition can be estimated not only by basic characteristics such as gender, age, and driving experience, but also by a driver's facial expressions, bio-signals, and driving behaviours. Recent developments in video processing using machine learning have enabled images obtained from cameras to be analysed with high accuracy. Therefore, based on the relationship between facial features and a driver’s drowsy state, variables that reflect facial features have been established. In this paper, we proposed a method for extracting detailed features of the eyes, the mouth, and positions of the head using OpenCV and Dlib library in order to estimate a driver’s level of drowsiness. Keywords: Drowsiness, OpenCV, Dlib, facial features, video processing
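As a hint of what such eye-feature extraction with OpenCV and Dlib can look like, here is a common eye-aspect-ratio measure, not necessarily the exact features used in the paper; the landmark model file path is an assumption and must be downloaded separately.

```python
# Eye aspect ratio (EAR) from dlib's 68-point facial landmarks; a low EAR over
# consecutive frames is a standard drowsiness cue.
import cv2
import dlib
from scipy.spatial import distance as dist

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    # pts: six (x, y) landmark points of one eye; small EAR => closed eye
    a = dist.euclidean(pts[1], pts[5])
    b = dist.euclidean(pts[2], pts[4])
    c = dist.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def left_eye_ear(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = [(shape.part(i).x, shape.part(i).y) for i in range(42, 48)]  # left-eye indices
    return eye_aspect_ratio(pts)
```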
8

Lee, Seongmin, Hyunse Yoon, Sohyun Park, Sanghoon Lee, and Jiwoo Kang. "Stabilized Temporal 3D Face Alignment Using Landmark Displacement Learning". Electronics 12, no. 17 (4 September 2023): 3735. http://dx.doi.org/10.3390/electronics12173735.

Abstract:
One of the most crucial aspects of 3D facial models is facial reconstruction. However, it is unclear if face shape distortion is caused by identity or expression when the 3D morphable model (3DMM) is fitted into largely expressive faces. In order to overcome the problem, we introduce neural networks to reconstruct stable and precise faces in time. The reconstruction network extracts the 3DMM parameters from video sequences to represent 3D faces in time. Meanwhile, our displacement networks learn the changes in facial landmarks. In particular, the networks learn changes caused by facial identity, facial expression, and temporal cues, respectively. The proposed facial alignment network exhibits reliable and precise performance in reconstructing static and dynamic faces by leveraging these displacement networks. The 300 Videos in the Wild (300VW) dataset is utilized for qualitative and quantitative evaluations to confirm the effectiveness of our method. The results demonstrate the considerable advantages of our method in reconstructing 3D faces from video sequences.
9

Rocha Neto, Aluizio, Thiago P. Silva, Thais Batista, Flávia C. Delicato, Paulo F. Pires, and Frederico Lopes. "Leveraging Edge Intelligence for Video Analytics in Smart City Applications". Information 12, no. 1 (31 December 2020): 14. http://dx.doi.org/10.3390/info12010014.

Abstract:
In smart city scenarios, the huge proliferation of monitoring cameras scattered in public spaces has posed many challenges to network and processing infrastructure. A few dozen cameras are enough to saturate the city’s backbone. In addition, most smart city applications require a real-time response from the system in charge of processing such large-scale video streams. Finding a missing person using facial recognition technology is one of these applications that require immediate action on the place where that person is. In this paper, we tackle these challenges presenting a distributed system for video analytics designed to leverage edge computing capabilities. Our approach encompasses architecture, methods, and algorithms for: (i) dividing the burdensome processing of large-scale video streams into various machine learning tasks; and (ii) deploying these tasks as a workflow of data processing in edge devices equipped with hardware accelerators for neural networks. We also propose the reuse of nodes running tasks shared by multiple applications, e.g., facial recognition, thus improving the system’s processing throughput. Simulations showed that, with our algorithm to distribute the workload, the time to process a workflow is about 33% faster than a naive approach.
10

Selva, Selva, and Selva Kumar S. "Hybridization of Deep Sequential Network for Emotion Recognition Using Unconstraint Video Analysis". Journal of Cybersecurity and Information Management 13, no. 2 (2024): 109–23. http://dx.doi.org/10.54216/jcim.130209.

Abstract:
The reliable way to discern human emotions in various circumstances has been proven to be through facial expressions. Facial expression recognition (FER) has emerged as a research topic to identify various essential emotions amid the present exponential rise in research on emotion detection. Happiness is one of these basic emotions everyone may experience, and facial expressions are better at detecting it than other emotion-measuring methods. Most techniques have been designed to recognize various emotions to achieve the highest level of general precision. Maximizing the recognition accuracy for a particular emotion is challenging for researchers. Some techniques exist to identify a single happy mood recorded in unconstrained video. Still, they are all limited by extreme head-posture fluctuations that they need to account for, and their accuracy still needs to be improved. This research proposes a novel hybrid facial emotion recognition approach using unconstrained video to improve accuracy. Here, a Deep Belief Network (DBN) with long short-term memory (LSTM) is employed to extract dynamic data from the video frames. The experiments apply decision-level and feature-level fusion techniques to an unconstrained video dataset. The outcomes show that the proposed hybrid approach may be more precise than some existing facial expression models.

Theses on the topic "Facial video processing":

1

Söderström, Ulrik. "Very low bitrate facial video coding : based on principal component analysis". Licentiate thesis, Umeå University, Applied Physics and Electronics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-895.

Abstract:

This thesis introduces a coding scheme for very low bitrate video coding through the aid of principal component analysis. Principal information of the facial mimic for a person can be extracted and stored in an Eigenspace. Entire video frames of this persons face can then be compressed with the Eigenspace to only a few projection coefficients. Principal component video coding encodes entire frames at once and increased frame size does not increase the necessary bitrate for encoding, as standard coding schemes do. This enables video communication with high frame rate, spatial resolution and visual quality at very low bitrates. No standard video coding technique provides these four features at the same time.

Theoretical bounds for using principal components to encode facial video sequences are presented. Two different theoretical bounds are derived. One that describes the minimal distortion when a certain number of Eigenimages are used and one that describes the minimum distortion when a minimum number of bits are used.

We investigate how the reconstruction quality for the coding scheme is affected when the Eigenspace, mean image and coefficients are compressed to enable efficient transmission. The Eigenspace and mean image are compressed through JPEG-compression while the coefficients are quantized. We show that high compression ratios can be used almost without any decrease in reconstruction quality for the coding scheme.

Different ways of re-using the Eigenspace for a person extracted from one video sequence to encode other video sequences are examined. The most important factor is the positioning of the facial features in the video frames.

Through a user test we find that it is extremely important to consider secondary workloads and how users make use of video when experimental setups are designed.
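A toy sketch of the eigenspace coding idea summarized in this abstract: vectorised face frames are projected onto a small personal Eigenspace so that each frame is transmitted as only a few coefficients. Component counts and data handling here are illustrative, not the thesis' settings.

```python
# Build a personal Eigenspace from training frames, then encode/decode frames
# as a handful of projection coefficients.
import numpy as np

def build_eigenspace(frames, n_components):
    """frames: (n_frames, n_pixels) matrix of vectorised face images."""
    mean = frames.mean(axis=0)
    # SVD of the centred data gives the principal components (Eigenimages)
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:n_components]             # (n_pixels,), (k, n_pixels)

def encode(frame, mean, eigenimages):
    return eigenimages @ (frame - mean)         # k coefficients per frame

def decode(coeffs, mean, eigenimages):
    return mean + eigenimages.T @ coeffs        # reconstructed frame
```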

2

Doyle, Jason Emory. "Automatic Dynamic Tracking of Horse Head Facial Features in Video Using Image Processing Techniques". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/87582.

Abstract:
The wellbeing of horses is very important to their care takers, trainers, veterinarians, and owners. This thesis describes the development of a non-invasive image processing technique that allows for automatic detection and tracking of horse head and ear motion, respectively, in videos or camera feed, both of which may provide indications of horse pain, stress, or well-being. The algorithm developed here can automatically detect and track head motion and ear motion, respectively, in videos of a standing horse. Results demonstrating the technique for nine different horses are presented, where the data from the algorithm is utilized to plot absolute motion vs. time, velocity vs. time, and acceleration vs. time for the head and ear motion, respectively, of a variety of horses and ponies. Two-dimensional plotting of x and y motion over time is also presented. Additionally, results of pilot work in eye detection in light colored horses is also presented. Detection of pain in horses is particularly difficult because they are prey animals and have mechanisms to disguise their pain, and these instincts may be particularly strong in the presence of an unknown human, such as a veterinarian. Current state-of-the art for detecting pain in horses primarily involves invasive methods, such as heart rate monitors around the body, drawing blood for cortisol levels, and pressing on painful areas to elicit a response, although some work has been done for humans to sort and score photographs subjectively in terms of a "horse grimace scale." The algorithms developed in this thesis are the first that the author is aware for exploiting proven image processing approaches from other applications for development of an automatic tool for detection and tracking of horse facial indicators. The algorithms were done in common open source programs Python and OpenCV, and standard image processing approaches including Canny Edge detection Hue, Saturation, Value color filtering, and contour tracking were utilized in algorithm development. The work in this thesis provides the foundational development of a non -invasive and automatic detection and tracking program for horse head and ear motion, including demonstration of the viability of this approach using videos of standing horses. This approach lays the groundwork for robust tool development for monitoring horses non-invasively and without the required presence of humans in such applications as post-operative monitoring, foaling, evaluation of performance horses in competition and/or training, as well as for providing data for research on animal welfare, among other scenarios.
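A rough sketch of the HSV-filtering and contour-tracking ingredients named in this abstract (the colour range is a placeholder, not the thesis' tuned values); calling it per frame yields a position-over-time trace from which velocity and acceleration can be derived by differencing.

```python
# Threshold an HSV colour range, find contours (OpenCV 4.x return signature),
# and return the centroid of the largest blob for frame-to-frame tracking.
import cv2
import numpy as np

def track_largest_blob(frame, hsv_lo=(0, 40, 40), hsv_hi=(25, 255, 255)):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # blob centroid (x, y)
```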
3

Cheng, Xin. "Nonrigid face alignment for unknown subject in video". Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/65338/1/Xin_Cheng_Thesis.pdf.

Abstract:
Non-rigid face alignment is a very important task in a large range of applications but the existing tracking based non-rigid face alignment methods are either inaccurate or requiring person-specific model. This dissertation has developed simultaneous alignment algorithms that overcome these constraints and provide alignment with high accuracy, efficiency, robustness to varying image condition, and requirement of only generic model.
4

Ouzar, Yassine. "Reconnaissance automatique sans contact de l'état affectif de la personne par fusion physio-visuelle à partir de vidéo du visage". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0076.

Abstract:
Human affective state recognition remains a challenging topic due to the complexity of emotions and stress, which involve experiential, behavioral, and physiological elements. Since it is difficult to describe a person's affective state comprehensively in terms of single modalities, recent studies have focused on fusion strategies that exploit the complementarity of multimodal signals using artificial intelligence approaches. The main objective of this thesis is to study the feasibility of a physio-visual fusion for the recognition of a person's affective state (emotions/stress) from facial videos. The fusion of facial expressions and physiological signals allows the advantages of each modality to be exploited. Facial expressions are easy to acquire and provide an external view of the affective state, while physiological signals improve reliability and address the problem of falsified facial expressions. The research developed in this thesis lies at the intersection of artificial intelligence, affective computing, and biomedical engineering. Our contribution focuses on two points. First, we propose a new end-to-end approach for instantaneous pulse rate estimation directly from facial video recordings using the principle of imaging photoplethysmography (iPPG). This method is based on a deep spatio-temporal network (X-iPPGNet) that learns the iPPG concept from scratch, without incorporating prior knowledge or going through manual iPPG signal extraction. The second contribution focuses on a physio-visual fusion for spontaneous emotion and stress recognition from facial videos. The proposed model includes two pipelines to extract the features of each modality. The physiological pipeline is common to both the emotion and stress recognition systems. It is based on MTTS-CAN, a recent method for estimating the iPPG signal, while two distinct neural models were used to predict the person's emotions and stress from the visual information contained in the video (e.g., facial expressions): a spatio-temporal network combining the Squeeze-Excitation module and the Xception architecture for estimating the emotional state, and a transfer learning approach for estimating the stress level. This approach reduces development effort and overcomes the lack of data. A fusion of physiological and facial features is then performed to predict the emotional or stress states.
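For orientation, a simplified sketch of the classical iPPG baseline that such work builds on (not X-iPPGNet itself): average the green channel over a face region per frame, band-pass the signal, and read the pulse rate from its spectral peak. The 0.7-4 Hz passband is a common assumption, not a value taken from the thesis.

```python
# Estimate pulse rate (beats per minute) from a per-frame mean green-channel signal.
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_rate_bpm(green_means, fps):
    """green_means: 1-D array, mean green value of the face ROI in each frame."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()
    # keep 0.7-4 Hz, i.e. roughly 42-240 beats per minute
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    x = filtfilt(b, a, x)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0
```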
5

Emir, Alkazhami. "Facial Identity Embeddings for Deepfake Detection in Videos". Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170587.

Abstract:
Forged videos of swapped faces, so-called deepfakes, have gained a lot of attention in recent years. Methods for automated detection of this type of manipulation are also seeing rapid progress in their development. The purpose of this thesis work is to evaluate the possibility and effectiveness of using deep embeddings from facial recognition networks as a basis for detection of such deepfakes. In addition, the thesis aims to answer whether or not the identity embeddings contain information that can be used for detection when analyzed over time, and whether it is suitable to include information about the person's head pose in this analysis. To answer these questions, three classifiers are created with the intent to answer one question each. Their performances are compared with each other, and it is shown that identity embeddings are suitable as a basis for deepfake detection. Temporal analysis of the embeddings also seems effective, at least for deepfake methods that only work on a frame-by-frame basis. Including information about head poses in the videos is shown to not improve a classifier like this.
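A hypothetical illustration of the core idea, with `embed_face` standing in for any pretrained face-recognition embedding network (not the thesis' specific model):

```python
# Temporal consistency of per-frame identity embeddings: frame-by-frame face
# swaps tend to make the identity drift between consecutive frames.
import numpy as np

def temporal_identity_scores(frames, embed_face):
    """Return cosine similarities between consecutive per-frame identity embeddings."""
    embs = [embed_face(f) for f in frames]                  # each a 1-D feature vector
    embs = [e / np.linalg.norm(e) for e in embs]
    return np.array([float(np.dot(a, b)) for a, b in zip(embs, embs[1:])])

# A downstream classifier (as in the thesis) would be trained on statistics of
# these similarity scores rather than on raw frames.
```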
6

Skalban, Yvonne. "Automatic generation of factual questions from video documentaries". Thesis, University of Wolverhampton, 2013. http://hdl.handle.net/2436/314607.

Abstract:
Questioning sessions are an essential part of teachers’ daily instructional activities. Questions are used to assess students’ knowledge and comprehension and to promote learning. The manual creation of such learning material is a laborious and time-consuming task. Research in Natural Language Processing (NLP) has shown that Question Generation (QG) systems can be used to efficiently create high-quality learning materials to support teachers in their work and students in their learning process. A number of successful QG applications for education and training have been developed, but these focus mainly on supporting reading materials. However, digital technology is always evolving; there is an ever-growing amount of multimedia content available, and more and more delivery methods for audio-visual content are emerging and easily accessible. At the same time, research provides empirical evidence that multimedia use in the classroom has beneficial effects on student learning. Thus, there is a need to investigate whether QG systems can be used to assist teachers in creating assessment materials from these different types of media that are being employed in classrooms. This thesis serves to explore how NLP tools and techniques can be harnessed to generate questions from non-traditional learning materials, in particular videos. A QG framework which allows the generation of factual questions from video documentaries has been developed and a number of evaluations to analyse the quality of the produced questions have been performed. The developed framework uses several readily available NLP tools to generate questions from the subtitles accompanying a video documentary. The reason for choosing video documentaries is two-fold: firstly, they are frequently used by teachers and secondly, their factual nature lends itself well to question generation, as will be explained within the thesis. The questions generated by the framework can be used as a quick way of testing students’ comprehension of what they have learned from the documentary. As part of this research project, the characteristics of documentary videos and their subtitles were analysed and the methodology has been adapted to be able to exploit these characteristics. An evaluation of the system output by domain experts showed promising results but also revealed that generating even shallow questions is a task which is far from trivial. To this end, the evaluation and subsequent error analysis contribute to the literature by highlighting the challenges QG from documentary videos can face. In a user study, it was investigated whether questions generated automatically by the system developed as part of this thesis and a state-of-the-art system can successfully be used to assist multimedia-based learning. Using a novel evaluation methodology, the feasibility of using a QG system’s output as ‘pre-questions’ with different types of pre-questions (text-based and with images) was examined. The psychometric parameters of the automatically generated questions by the two systems and of those generated manually were compared. The results indicate that the presence of pre-questions (preferably with images) improves the performance of test-takers, and they highlight that the psychometric parameters of the questions generated by the system are comparable if not better than those of the state-of-the-art system. In another experiment, the productivity of questions in terms of time taken to generate questions manually vs. time taken to post-edit system-generated questions was analysed. A post-editing tool which allows for the tracking of several statistics such as edit distance measures, editing time, etc., was used. The quality of questions before and after post-editing was also analysed. Not only did the experiments provide quantitative data about automatically and manually generated questions, but qualitative data in the form of user feedback, which provides an insight into how users perceived the quality of questions, was also gathered.
7

Baccouche, Moez. "Apprentissage neuronal de caractéristiques spatio-temporelles pour la classification automatique de séquences vidéo". Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00932662.

Abstract:
This thesis addresses the problem of automatic classification of video sequences. The idea is to depart from the dominant methodology, which relies on manually designed features, and to propose models that are as generic as possible and domain-independent. This is done by automating the feature-extraction stage; in our case the features are learned from examples, without any prior knowledge. To this end, we build on existing work on neural models for object recognition in still images and study their extension to video. More concretely, we propose two models for learning spatio-temporal features for video classification: (i) a deep supervised learning model, which can be seen as an extension of ConvNets to video, and (ii) an unsupervised learning model based on an auto-encoding scheme and an over-complete sparse representation of the data. Beyond the original aspects of each of these two approaches, an additional contribution of this thesis is a comparative study of several of the most popular state-of-the-art sequence-classification models. This study was carried out using hand-crafted features tailored to the problem of action recognition in soccer videos. It identified the best-performing classification model (a bidirectional long short-term memory recurrent neural network, BLSTM) and justified its use for the remaining experiments. Finally, to validate the genericity of the two proposed models, they were evaluated on two different problems: human action recognition (on the KTH dataset) and facial expression recognition (on the GEMEP-FERA dataset). The analysis of the results validates the approaches and shows that they achieve performance among the best in the state of the art (95.83% correct recognition on KTH and 87.57% on GEMEP-FERA).
8

Lin, Jie-Zhua, and 林界專. "Driver Smoking Behavior Detection with Facial Video Processing". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/22527977596947216050.

Abstract:
Master's thesis, National Chung Hsing University, Department of Electrical Engineering, 103 (ROC academic year, i.e., 2014).
Research and analyses from the US bureau of safety experts show that the probability of car accidents for smokers is 1.5 times larger than that for non-smokers. Experts in the United Kingdom and Germany believe that 5% of car accidents are related to smoking while driving. According to the US experts, smoking can lead to three driving hazards: reduced vision, distraction, and odor stimulation. People need a high degree of attention and clear vision while driving, and smoke may irritate the eyes and respiratory tract, blurring the vision. The acrid smoke causes coughing and distraction, and its influence on vision is severe. Statistically, smoking four cigarettes within a short time at dusk can reduce vision by 20-30%. When driving in such circumstances, the risk is very high. Therefore, this thesis proposes cigarette detection algorithms based on facial and mouth region detection to identify driver smoking behavior. The proposed algorithms rely on four major steps: face detection, face boundary detection, mouth positioning, and cigarette detection. Driver facial videos are captured through a camera, and the proposed algorithms use color information with separate statistical threshold conditions for different times of day. First, facial region detection is performed based on facial features to locate the candidate facial region. Next, based on the symmetry and concentration properties of facial features, the facial boundaries are detected, and the face size is estimated. Finally, we use a luminance threshold condition to detect the cigarette near the mouth region for driver smoking behavior detection. In the thesis, the proposed algorithms use brightness information for cigarette detection: the test video sequences are analyzed statistically, a threshold value is set to segment the cigarette, and a projection method is then used to locate it. Based on 29 self-recorded test video sequences, the experiments show that the proposed method II achieves the highest average detection accuracy and the lowest average false positive rate: 79.81% and 20.19%, respectively, for cigarette detection, and 95.71% and 4.29%, respectively, for non-cigarette detection. The experimental results also show a processing speed of up to 38 frames per second on a quad-core PC (2.6 GHz, 4 GB of memory).
9

Arpino, Vincent J. "An assessment of facial profile preference of surgical patients using video imaging". 1996. http://catalog.hathitrust.org/api/volumes/oclc/48079456.html.

10

Dahmane, Mohamed. "Analyse de mouvements faciaux à partir d'images vidéo". Thesis, 2011. http://hdl.handle.net/1866/7120.

Abstract:
In a face-to-face talk, language is supported by nonverbal communication, which plays a central role in human social behavior by adding cues to the meaning of speech, providing feedback, and managing synchronization. Information about the emotional state of a person is usually carried by facial attributes. In fact, 55% of a message is communicated by facial expressions, whereas only 7% is due to linguistic language and 38% to paralanguage. However, there are currently no established instruments to measure such behavior. The computer vision community is therefore interested in the development of automated techniques for prototypic facial expression analysis, for human-computer interaction applications, meeting video analysis, security, and clinical applications. For gathering observable cues, we try to design, in this research, a framework that can build a relatively comprehensive source of visual information, which will be able to distinguish the facial deformations, thus allowing us to point out the presence or absence of a particular facial action. A detailed review of identified techniques led us to explore two different approaches. The first approach involves appearance modeling, in which we use the gradient orientations to generate a dense representation of facial attributes. Besides the facial representation problem, the main difficulty of a system intended to be general is the implementation of a generic model independent of individual identity, face geometry, and size. We therefore introduce a concept of prototypic referential mapping through a SIFT-flow registration that demonstrates, in this thesis, its superiority to the conventional eyes-based alignment. In a second approach, we use a geometric model through which the facial primitives are represented by Gabor filtering. Motivated by the fact that facial expressions are not only ambiguous and inconsistent across humans but also dependent on the behavioral context, in this approach we present a personalized facial expression recognition system whose overall performance is directly related to the localization performance of a set of facial fiducial points. These points are tracked through a sequence of video frames by a modification of a fast Gabor phase-based disparity estimation technique. In this thesis, we revisit the confidence measure and introduce an iterative conditional procedure for displacement estimation that improves the robustness of the original methods.

Books on the topic "Facial video processing":

1

Colmenarez, Antonio J. Facial analysis from continuous video with applications to human-computer interface. Boston: Kluwer Academic Publishers, 2004.

2

Pandzic, Igor S., and Robert Forchheimer, eds. MPEG-4 facial animation: The standard, implementation and applications. Chichester: J. Wiley, 2002.
3

Fleming, Bill. Animating facial features and expression. Rockland, Mass: Charles River Media, 1999.

4

Forchheimer, Robert, and Igor S. Pandzic. MPEG-4 Facial Animation. John Wiley & Sons, Incorporated, 2003.
5

Colmenarez, Antonio J. Facial Analysis from Continuous Video with Applications to Human-Computer Interface. Springer, 2013.

6

Forchheimer, Robert, and Igor S. Pandzic. MPEG-4 Facial Animation: The Standard, Implementation and Applications. John Wiley & Sons, Incorporated, 2003.
7

Forchheimer, Robert, and Igor S. Pandzic. MPEG-4 Facial Animation: The Standard, Implementation and Applications. John Wiley & Sons, Incorporated, 2007.
8

Pandzic, Igor S., and Robert Forchheimer, eds. MPEG-4 Facial Animation: The Standard, Implementation and Applications. Wiley, 2002.
9

Forchheimer, Robert, and Igor S. Pandzic. MPEG-4 Facial Animation: The Standard, Implementation and Applications. John Wiley & Sons, Incorporated, 2003.
10

Dobbs, Darris, and Bill Fleming. Animating Facial Features & Expressions. Charles River Media, 1998.

Book chapters on the topic "Facial video processing":

1

Boccignone, Giuseppe, Vittorio Cuculo, Giuliano Grossi, Raffaella Lanzarotti, and Raffaella Migliaccio. "Virtual EMG via Facial Video Analysis". In Image Analysis and Processing - ICIAP 2017, 197–207. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68560-1_18.
2

Lin, Chenhan, Fei Long, Junfeng Yao, Ming-Ting Sun, and Jinsong Su. "Learning Spatiotemporal and Geometric Features with ISA for Video-Based Facial Expression Recognition". In Neural Information Processing, 435–44. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_45.
3

Clippingdale, Simon, and Mahito Fujii. "Face Recognition for Video Indexing: Randomization of Face Templates Improves Robustness to Facial Expression". In Visual Content Processing and Representation, 32–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39798-4_7.
4

Ouzar, Yassine, Lynda Lagha, Frédéric Bousefsaf, and Choubeila Maaoui. "Multimodal Stress State Detection from Facial Videos Using Physiological Signals and Facial Features". In Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges, 139–50. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37745-7_10.
5

Wang, Chao, Yunhong Wang, and Zhaoxiang Zhang. "Incremental Learning of Patch-Based Bag of Facial Words Representation for Online Face Recognition in Videos". In Advances in Multimedia Information Processing – PCM 2012, 1–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34778-8_1.
6

Weismayer, Christian, and Ilona Pezenka. "Cross-Cultural Differences in Emotional Response to Destination Commercials". In Information and Communication Technologies in Tourism 2024, 43–54. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_5.

Abstract:
In this paper we examine whether cultural characteristics lead to different emotion expressions whilst watching tourist destination ads via digital media, and thus whether ads might be perceived differently depending on the country of origin of the viewer. To test this assumption, participants from two different countries, located on two different continents, Austria/Europe and Colombia/South America, are exposed to a destination ad. Their faces are recorded, and post-processing analysis of the recorded videos is conducted using the AFFDEX algorithm, which is capable of inferring emotions based on the facial action coding system (FACS). Valence scores are compared among the viewers of the two countries over the time span of the whole commercial, and subpopulation differences in basic emotions (joy, surprise, anger, sadness, disgust, fear, and contempt) are explored using time series clustering along with optimizations for the dynamic time warping (DTW) distance. Screening sequences in this way reveals insight into the emotional reactions of different viewer groups. The findings instruct tourism marketers on how to fit the targeted emotions elicited by tourist destination advertising with various cultural settings.
7

Waters, Keith, and Demetri Terzopoulos. "The computer synthesis of expressive faces". In Processing the Facial Image, 87–94. Oxford: Oxford University Press, 1992. http://dx.doi.org/10.1093/oso/9780198522614.003.0013.

Abstract:
This paper presents a methodology for the computer synthesis of realistic faces capable of expressive articulations. A sophisticated three-dimensional model of the human face is developed that incorporates a physical model of facial tissue with an anatomical model of facial muscles. The tissue and muscle models are generic, in that their structures are independent of specific facial geometries. To synthesize specific faces, these models are automatically mapped onto geometrically accurate polygonal facial representations constructed by photogrammetry of stereo facial images or by non-uniform meshing of detailed facial topographies acquired by using range sensors. The methodology offers superior realism by utilizing physical modelling to emulate complex tissue deformations in response to coordinated facial muscle activity. To provide realistic muscle actions to the face model, a performance driven animation technique is developed which estimates the dynamic contractions of a performer's facial muscles from video imagery.
8

Ji, Yi, and Khalid Idrissi. "Automatic Facial Expression Recognition by Facial Parts Location with Boosted-LBP". In Intelligent Computer Vision and Image Processing, 42–55. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-3906-5.ch004.

Abstract:
This paper proposes an automatic facial expression recognition system, which uses new methods in both face detection and feature extraction. In this system, considering that facial expressions are related to a small set of muscles and limited ranges of motions, the facial expressions are recognized by these changes in video sequences. First, the differences between neutral and emotional states are detected. Faces can be automatically located from changing facial organs. Then, LBP features are applied and AdaBoost is used to find the most important features for each expression on essential facial parts. At last, SVM with polynomial kernel is used to classify expressions. The method is evaluated on JAFFE and MMI databases. The performances are better than other automatic or manual annotated systems.
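A plain sketch of an LBP-plus-SVM pipeline of the kind described in this chapter, with the AdaBoost feature selection omitted; `facial_parts` is a placeholder for the chapter's facial-part localization step, and the training arrays are assumptions, not data from the chapter.

```python
# Uniform LBP histograms over facial-part crops, classified by an SVM with a
# polynomial kernel.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_patch, p=8, r=1):
    """Normalised histogram of uniform LBP codes for one grayscale patch."""
    lbp = local_binary_pattern(gray_patch, P=p, R=r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

# X = np.vstack([np.concatenate([lbp_histogram(part) for part in facial_parts(img)])
#                for img in training_images])        # facial_parts() is assumed
# clf = SVC(kernel="poly", degree=3).fit(X, labels)
```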
9

Uddin, Md Zia. "A local feature-based facial expression recognition system from depth video". In Emerging Trends in Image Processing, Computer Vision and Pattern Recognition, 407–19. Elsevier, 2015. http://dx.doi.org/10.1016/b978-0-12-802045-6.00026-0.

10

Fatima, Eram, Ankit Kumar, and Anil Kumar Singh. "Face Recognition using Convolutional Neural Network Algorithms". In Artificial intelligence and Multimedia Data Engineering, 60–69. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815196443123010007.

Abstract:
Biometric applications have massive demand in today's era. The areas of application are mostly linked with the security of the system. Biometric features are regarded as the primary resource for security purposes due to their distinctiveness and non-volatile nature. System authentication using biometrics is considered to be a sophisticated technology. Noise induces variation in the biometric subject, which has an adverse impact on establishing the recognition. The proposed model supports the development of an effective method for performing facial biometric feature recognition. The model's goal is to reduce the number of false approvals and refusals. The proposed algorithm has been applied over a video dataset containing surveillance video frames that capture facial subjects dynamically. The first step carried out in the proposed model is the pre-processing of the video frames. Then, the Viola-Jones algorithm is applied to detect the facial subjects in the video frames. Feature extraction from the facial subject is accomplished by applying a deep reinforcement learning algorithm. Further, the proposed model applies a convolutional neural network (CNN) algorithm to perform feature recognition of facial identity accurately. The proposed technique aims to maintain a high recognition rate for dynamic facial subjects under various unprecedented noise variations. In the classification algorithm, the recognition accuracy is found to be 98.85%.
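A loose sketch of the detect-then-classify arrangement described above: OpenCV's Haar-cascade (Viola-Jones) detector localizes faces and a small CNN classifies the crops. The network architecture here is illustrative, not the chapter's exact model.

```python
# Viola-Jones face detection followed by a tiny PyTorch CNN classifier.
import cv2
import torch
import torch.nn as nn

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

class TinyFaceCNN(nn.Module):
    def __init__(self, n_identities):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Linear(32 * 16 * 16, n_identities)

    def forward(self, x):                    # x: (batch, 1, 64, 64) face crops
        return self.classifier(self.features(x).flatten(1))

def detect_and_classify(frame, model):
    """Return a predicted identity index for every face detected in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    results = []
    for (x, y, w, h) in faces:
        crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64)) / 255.0
        t = torch.tensor(crop, dtype=torch.float32).unsqueeze(0).unsqueeze(0)
        results.append(int(model(t).argmax(dim=1)))
    return results
```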

Conference papers on the topic "Facial video processing":

1

Kalayci, Sacide, Hazim Kemal Ekenel, and Hatice Gunes. "Automatic analysis of facial attractiveness from video". In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7025851.
2

Chen, Xin, Chen Cao, Zehao Xue, and Wei Chu. "Joint Audio-Video Driven Facial Animation". In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461502.
3

Liu, Sijiang, Yifeng Zhou, Wensha Tian, Chengrong Yu, and Bo Jiang. "An automatic facial beautification method for video post-processing". In Third International Workshop on Pattern Recognition, edited by Xudong Jiang, Guojian Chen, and Zhenxiang Chen. SPIE, 2018. http://dx.doi.org/10.1117/12.2501849.
4

Modak, Masooda, Namrata Patel, and Kalyani Pampattiwar. "Facial Expression based Assistant for Interviewee using Video Processing". In 2023 6th International Conference on Advances in Science and Technology (ICAST). IEEE, 2023. http://dx.doi.org/10.1109/icast59062.2023.10454925.
5

Saeed, Usman, and Jean-Luc Dugelay. "Person Recognition From Video using Facial Mimics". In 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.366724.
6

Chen, Bo, and Klara Nahrstedt. "FIS: Facial Information Segmentation for Video Redaction". In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE, 2019. http://dx.doi.org/10.1109/mipr.2019.00071.
7

Chen, Xu, Anustup Choudhury, Peter van Beek, and Andrew Segall. "Facial video super resolution using semantic exemplar components". In 2015 IEEE International Conference on Image Processing (ICIP). IEEE, 2015. http://dx.doi.org/10.1109/icip.2015.7351013.
8

Yu, Jiangang, and Bir Bhanu. "Super-resolution of deformed facial images in video". In 2008 15th IEEE International Conference on Image Processing. IEEE, 2008. http://dx.doi.org/10.1109/icip.2008.4711966.
9

Kim, Minsu, Hong Joo Lee, Sangmin Lee, and Yong Man Ro. "Robust Video Facial Authentication With Unsupervised Mode Disentanglement". In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191052.
10

Chatterjee, S., S. Banerjee, and K. K. Biswas. "Reconstruction of local features for facial video compression". In Proceedings of 7th IEEE International Conference on Image Processing. IEEE, 2000. http://dx.doi.org/10.1109/icip.2000.899272.
