Academic literature on the topic 'Reconnaissance faciale automatisée'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Reconnaissance faciale automatisée.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Reconnaissance faciale automatisée"
Messaoudi, Aïssa. "Les défis de l’IA dans l’éducation : de la protection des données aux biais algorithmiques." Médiations et médiatisations, no. 18 (October 30, 2024): 148–60. http://dx.doi.org/10.52358/mm.vi18.409.
Molnar, Petra, and Isabelle Saint-Saëns. "Les nouvelles technologies frontalières." Plein droit 140, no. 1 (May 28, 2024): 39–42. http://dx.doi.org/10.3917/pld.140.0041.
Fontaine, D., and S. Santucci-Sivolotto. "Évaluer la douleur par reconnaissance automatique de l’expression faciale : un espoir illusoire ou la réalité pour demain ?" Douleur et Analgésie 34, no. 3 (September 2021): 155–61. http://dx.doi.org/10.3166/dea-2021-0174.
Nzobonimpa, Stany. "Algorithmes et intelligence artificielle : une note sur l’état de la réglementation des technologies utilisant la reconnaissance faciale automatique au Canada et aux États-Unis." Revue Gouvernance 19, no. 2 (2022): 99. http://dx.doi.org/10.7202/1094078ar.
Braun Binder, Nadja, Eliane Kunz, and Liliane Obrecht. "Maschinelle Gesichtserkennung im öffentlichen Raum." sui generis, April 11, 2022. http://dx.doi.org/10.21257/sg.204.
Dissertations / Theses on the topic "Reconnaissance faciale automatisée"
Maalej, Ahmed. "Reconnaissance d'Expressions Faciale 3D Basée sur l'Analyse de Forme et l'Apprentissage Automatique." PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00726298.
Abdat, Faiza. "Reconnaissance automatique des émotions par données multimodales : expressions faciales et des signaux physiologiques." Thesis, Metz, 2010. http://www.theses.fr/2010METZ035S/document.
This thesis presents a generic method for the automatic recognition of emotions from a bimodal system based on facial expressions and physiological signals. This approach leads to better information extraction and is more reliable than a single modality. The proposed algorithm for facial expression recognition is based on the distance variation of facial muscles from the neutral state and on classification by means of Support Vector Machines (SVM). Emotion recognition from physiological signals is based on the classification of statistical parameters by the same classifier. In order to obtain a more reliable recognition system, we have combined facial expressions and physiological signals. The direct combination of such information is not trivial, given the differences in characteristics (such as frequency, amplitude, variation, and dimensionality). To remedy this, we have merged the information at different levels of implementation. For feature-level fusion, we have tested a mutual information approach for selecting the most relevant features and principal component analysis (PCA) to reduce their dimensionality. For decision-level fusion we have implemented two methods: the first based on a voting process and the second on dynamic Bayesian networks. The optimal results were obtained with feature fusion based on PCA. These methods have been tested on a database developed in our laboratory with healthy subjects, using IAPS pictures for emotion induction. A self-assessment step was applied to all subjects in order to improve the annotation of the images used for induction. The results obtained show good performance even in the presence of variability among individuals and variability of emotional state over several days.
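The feature-level fusion described in this abstract can be illustrated with a minimal sketch: facial-distance and physiological feature vectors are concatenated, standardized, reduced with PCA, and classified with an SVM. All data, dimensions, and labels below are hypothetical placeholders, not the thesis's actual features or pipeline.

```python
# Minimal sketch of feature-level fusion with PCA and an SVM classifier,
# loosely following the approach described in the abstract above.
# Feature dimensions and labels are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200
facial = rng.normal(size=(n_samples, 20))    # e.g., muscle-distance variations from neutral
physio = rng.normal(size=(n_samples, 12))    # e.g., statistical parameters of physiological signals
labels = rng.integers(0, 3, size=n_samples)  # hypothetical emotion classes

fused = np.hstack([facial, physio])          # feature-level fusion by concatenation

clf = make_pipeline(
    StandardScaler(),        # normalize the heterogeneous modalities
    PCA(n_components=10),    # reduce the dimensionality of the fused vector
    SVC(kernel="rbf"),       # the same SVM classifier for the fused features
)
clf.fit(fused[:150], labels[:150])
print("held-out accuracy:", clf.score(fused[150:], labels[150:]))
```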
Al Chanti, Dawood. "Analyse Automatique des Macro et Micro Expressions Faciales : Détection et Reconnaissance par Machine Learning." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT058.
Facial expression analysis is an important problem in many biometric tasks, such as face recognition, face animation, affective computing and human-computer interfaces. In this thesis, we aim at analyzing facial expressions using images and video sequences. We divided the problem into three main parts. First, we study macro facial expressions for emotion recognition and propose three different levels of feature representation: low-level features through a Bag of Visual Words model, mid-level features through sparse representation, and hierarchical features through a deep-learning-based method. The objective is to find the most effective and efficient representation that contains distinctive information about expressions and that overcomes various challenges coming from: 1) intrinsic factors such as appearance and expressiveness variability, and 2) extrinsic factors such as illumination, pose, scale and imaging parameters, e.g., resolution, focus and noise. We then incorporate the time dimension to extract spatio-temporal features, with the objective of describing subtle feature deformations in order to discriminate ambiguous classes. Second, we direct our research toward transfer learning, where we aim at adapting facial expression category models to new domains and tasks. We therefore study domain adaptation and zero-shot learning to develop a method that solves the two tasks jointly. Our method is suitable for unlabelled target datasets coming from data distributions different from the source domain, and for unlabelled target datasets with different label distributions but sharing the same context as the source domain. To permit knowledge transfer between domains and tasks, we use Euclidean learning and convolutional neural networks to design a mapping function that maps the visual information coming from facial expressions into a semantic space derived from a natural language model that encodes the visual attribute description or uses the label information. The consistency between the two subspaces is maximized by aligning them using the visual feature distribution. Third, we study micro facial expression detection. We propose an algorithm to spot micro-expression segments, including the onset and offset frames, and to spatially pinpoint in each image the regions involved in the micro-facial muscle movements. The problem is formulated as anomaly detection, since micro-expressions occur infrequently and thus generate little data compared to natural facial behaviours. First, we propose a deep recurrent convolutional auto-encoder to capture spatial and motion feature changes of natural facial behaviours. Then, a statistical model based on a Gaussian Mixture Model is learned to estimate the probability density function of normal facial behaviours and to assign a discriminating score for spotting micro-expressions. Finally, an adaptive thresholding technique for identifying micro-expressions among natural facial behaviour is proposed. Our algorithms are tested on deliberate and spontaneous facial expression benchmarks.
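The anomaly-detection formulation in the last part of this abstract, spotting rare micro-expressions as low-likelihood events under a density model of normal behaviour, can be sketched as follows. The thesis derives its features from a recurrent convolutional auto-encoder; this sketch replaces them with random placeholders, and the threshold rule is an illustrative assumption.

```python
# Sketch of GMM-based anomaly scoring for micro-expression spotting:
# fit a density model on features of "normal" facial behaviour, then flag
# frames whose log-likelihood falls below an adaptive threshold.
# Features here are random placeholders standing in for auto-encoder outputs.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_feats = rng.normal(size=(1000, 16))   # features of natural facial behaviour
test_feats = rng.normal(size=(50, 16))
test_feats[10:13] += 4.0                     # inject a few "micro-expression" outliers

gmm = GaussianMixture(n_components=5, random_state=0).fit(normal_feats)
scores = gmm.score_samples(test_feats)       # per-frame log-likelihood

# Adaptive threshold: a fixed number of standard deviations below the mean score.
threshold = scores.mean() - 2.0 * scores.std()
spotted = np.where(scores < threshold)[0]
print("frames flagged as micro-expressions:", spotted)
```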
Ouzar, Yassine. "Reconnaissance automatique sans contact de l'état affectif de la personne par fusion physio-visuelle à partir de vidéo du visage." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0076.
Human affective state recognition remains a challenging topic due to the complexity of emotions, which involve experiential, behavioral, and physiological elements. Since it is difficult to describe emotion comprehensively in terms of a single modality, recent studies have focused on fusion strategies that exploit the complementarity of multimodal signals using artificial intelligence approaches. The main objective is to study the feasibility of physio-visual fusion for recognizing a person's affective state (emotions/stress) from facial videos. The fusion of facial expressions and physiological signals makes it possible to take advantage of each modality: facial expressions are easy to acquire and provide an external view of the affective state, while physiological signals improve reliability and address the problem of falsified facial expressions. The research developed in this thesis lies at the intersection of artificial intelligence, affective computing, and biomedical engineering. Our contribution focuses on two points. First, we propose a new end-to-end approach for instantaneous pulse rate estimation directly from facial video recordings using the principle of imaging photoplethysmography (iPPG). This method is based on a deep spatio-temporal network (X-iPPGNet) that learns the iPPG concept from scratch, without incorporating prior knowledge or going through manual iPPG signal extraction. The second contribution focuses on physio-visual fusion for spontaneous emotion and stress recognition from facial videos. The proposed model includes two pipelines to extract the features of each modality. The physiological pipeline is common to both the emotion and stress recognition systems; it is based on MTTS-CAN, a recent method for estimating the iPPG signal. Two distinct neural models were used to predict the person's emotions and stress from the visual information contained in the video (e.g., facial expressions): a spatio-temporal network combining the Squeeze-Excitation module and the Xception architecture for estimating the emotional state, and a transfer learning approach for estimating the stress level, which reduces development effort and overcomes the lack of data. A fusion of physiological and facial features is then performed to predict the emotional or stress state.
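The iPPG principle mentioned in this abstract is often illustrated with the classical green-channel baseline: average the green channel over a face region in each frame, band-pass filter the resulting signal in the plausible heart-rate band, and read the pulse off the dominant spectral peak. The sketch below shows that baseline, not the X-iPPGNet or MTTS-CAN models the thesis actually uses; the frame rate and the synthetic signal are assumptions.

```python
# Classical green-channel iPPG baseline (not the thesis's deep models):
# spatially average the green channel over a face ROI per frame, band-pass
# filter in the human heart-rate band, and read the pulse off the FFT peak.
import numpy as np
from scipy.signal import butter, filtfilt

fps = 30.0                                   # assumed camera frame rate
rng = np.random.default_rng(0)
t = np.arange(0, 20, 1 / fps)                # 20 s of video
# Synthetic per-frame green-channel means: a 1.2 Hz (72 bpm) pulse buried in noise.
green_means = 0.05 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(scale=0.5, size=t.size)

# Band-pass 0.7-4.0 Hz (42-240 bpm), the usual plausible heart-rate band.
b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
filtered = filtfilt(b, a, green_means)

spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, d=1 / fps)
band = (freqs >= 0.7) & (freqs <= 4.0)
pulse_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated pulse rate: {60 * pulse_hz:.1f} bpm")
```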
Alashkar, Taleb. "3D dynamic facial sequences analysis for face recognition and emotion detection." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10109/document.
In this thesis, we investigate the problems of identity recognition and emotion detection from animated 3D facial shapes (called 4D faces). In particular, we study the role of facial shape dynamics in revealing human identity and spontaneously exhibited emotion. To this end, we adopt a comprehensive geometric framework for analyzing 3D faces and their dynamics across time. A sequence of 3D faces is first split into an indexed collection of short-term sub-sequences, each represented as a matrix (subspace); these subspaces are points on a special matrix manifold called the Grassmann manifold (the set of k-dimensional linear subspaces). The geometry of the underlying space is used to effectively compare 3D sub-sequences, compute statistical summaries (e.g., the sample mean) and densely quantify the divergence between subspaces. Two different representations are proposed to address the problems of face recognition and emotion detection: (1) a dictionary-of-subspaces representation associated with dictionary learning and sparse coding techniques, and (2) a time-parameterized curve (trajectory) representation on the underlying space, associated with a structured-output SVM classifier for early emotion detection. Experimental evaluations conducted on the publicly available BU-4DFE, BU4D-Spontaneous and Cam3D Kinect datasets illustrate the effectiveness of these representations and of the algorithmic solutions for identity recognition and emotion detection proposed in this thesis.
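The subspace comparison at the core of this framework can be sketched with principal angles, the standard ingredient of distances on the Grassmann manifold. The feature dimension and subspace dimension below are arbitrary placeholders for the thesis's 3D shape features, and the geodesic distance shown is one common choice among several Grassmannian metrics.

```python
# Sketch of comparing two short-term 3D-face sub-sequences as subspaces on
# the Grassmann manifold via principal angles. Each sub-sequence is a matrix
# of frame features; its column space (an orthonormal basis) is the subspace.
# Dimensions are arbitrary placeholders for landmark/shape features.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
d, k = 60, 5                                   # feature dimension, subspace dimension
seq_a = rng.normal(size=(d, k))                # k frames of d-dim shape features
seq_b = seq_a + 0.1 * rng.normal(size=(d, k))  # a nearby sub-sequence

# Orthonormal bases of the two column spaces (points on the Grassmannian).
qa, _ = np.linalg.qr(seq_a)
qb, _ = np.linalg.qr(seq_b)

theta = subspace_angles(qa, qb)                # principal angles in radians
geodesic_dist = np.linalg.norm(theta)          # arc-length distance on Gr(k, d)
print("principal angles:", np.round(theta, 3))
print("geodesic distance:", round(float(geodesic_dist), 3))
```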
Moufidi, Abderrazzaq. "Machine Learning-Based Multimodal Integration for Short Utterance-Based Biometrics Identification and Engagement Detection." Electronic Thesis or Diss., Angers, 2024. http://www.theses.fr/2024ANGE0026.
The rapid advancement and democratization of technology have led to an abundance of sensors. Consequently, the integration of these diverse modalities presents an advantage for numerous real-life applications, such as biometric recognition and engagement detection. In the field of multimodality, researchers have developed various fusion architectures, ranging from early and hybrid to late fusion approaches. However, these architectures may have limitations with short utterances and brief video segments, necessitating a paradigm shift towards multimodal machine learning techniques that promise precision and efficiency for short-duration data analysis. In this thesis, we rely on multimodal integration to tackle these challenges, ranging from supervised biometric identification to unsupervised student engagement detection. The first contribution integrates the multiscale Wavelet Scattering Transform with the x-vector architecture, enhancing the accuracy of speaker identification in scenarios involving short utterances. Exploiting the benefits of multimodality, a late fusion architecture combining lip depth videos and audio signals further improved identification accuracy on short utterances, using an effective and computationally lighter method to extract spatiotemporal features. Among the challenges in biometrics is the emerging threat of deepfakes. We therefore focused on developing deepfake detection methods based on shallow learning and on a fine-tuned version of our previous late fusion architecture, applied to RGB lip videos and audio. By employing hand-crafted anomaly detection methods for both audio and visual modalities, the study demonstrated robust detection capabilities across various datasets and conditions, emphasizing the importance of multimodal approaches in countering evolving deepfake techniques. Expanding to educational contexts, the dissertation explores multimodal student engagement detection in classrooms. Using low-cost sensors to capture heart rate signals and facial expressions, the study developed a reproducible dataset and pipeline for identifying significant moments, accounting for cultural nuances. The analysis of facial expressions using a Vision Transformer (ViT), fused with heart rate signal processing and validated through expert observations, showcased the potential of real-time monitoring to enhance educational outcomes through timely interventions.
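The late (decision-level) fusion this abstract builds on can be sketched as a weighted average of per-modality class posteriors. The two toy classifiers and the fusion weight below are illustrative assumptions standing in for the thesis's lip-depth and audio networks.

```python
# Sketch of late (decision-level) fusion: each modality produces class
# posteriors independently, and the fused decision is a weighted average.
# Both "modalities" here are toy classifiers over random stand-in features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_classes = 300, 4
audio_feats = rng.normal(size=(n, 24))   # stand-in for audio embeddings
visual_feats = rng.normal(size=(n, 32))  # stand-in for lip-video embeddings
labels = rng.integers(0, n_classes, size=n)

audio_clf = LogisticRegression(max_iter=1000).fit(audio_feats[:200], labels[:200])
visual_clf = LogisticRegression(max_iter=1000).fit(visual_feats[:200], labels[:200])

w = 0.6                                  # assumed weight favouring the audio stream
p_audio = audio_clf.predict_proba(audio_feats[200:])
p_visual = visual_clf.predict_proba(visual_feats[200:])
fused = w * p_audio + (1 - w) * p_visual # decision-level fusion of posteriors
pred = fused.argmax(axis=1)
print("fused accuracy:", (pred == labels[200:]).mean())
```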
Allaert, Benjamin. "Analyse des expressions faciales dans un flux vidéo." Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I021/document.
Facial expression recognition has attracted great interest over the past decade in wide application areas, such as human behavior analysis, e-health and marketing. In this thesis we explore a new approach that steps toward in-the-wild expression recognition. Special attention is paid to encoding both small and large facial expression amplitudes, and to analyzing facial expressions in the presence of varying head pose. The first challenge addressed concerns varying facial expression amplitudes. We propose an innovative motion descriptor called LMP, which takes into account the mechanical deformation properties of facial skin. When extracting motion information from the face, the approach deals with inconsistencies and noise caused by face characteristics. The main originality of our approach is that it handles both micro- and macro-expression recognition within a single, unified facial recognition framework. The second challenge addressed concerns large head pose variations. In facial expression analysis, the face registration step must ensure that minimal deformation appears. Registration techniques must be used with care in the presence of unconstrained head pose, as facial texture transformations apply. Hence, it is valuable to estimate the impact of alignment-induced noise on the overall recognition performance. For this, we propose a new database, called SNaP-2DFe, which allows studying the impact of head pose and intra-facial occlusions on expression recognition approaches. We show that common face registration approaches do not seem adequate for preserving the features that encode facial expression deformations.
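The LMP descriptor itself is specific to this thesis, but the general idea of extracting facial motion from consecutive frames can be sketched with dense optical flow aggregated into a direction histogram. OpenCV's Farnebäck flow stands in here for the skin-deformation-aware descriptor, and the frames are synthetic; this is a generic motion-encoding sketch, not LMP.

```python
# Generic facial-motion sketch: dense optical flow between consecutive frames
# aggregated into a histogram of motion directions. A stand-in for the LMP
# descriptor described above, which additionally models skin mechanics.
import cv2
import numpy as np

rng = np.random.default_rng(0)
noise = rng.integers(0, 255, size=(128, 128), dtype=np.uint8)
prev = cv2.GaussianBlur(noise, (21, 21), 0)  # smooth synthetic grayscale frame
curr = np.roll(prev, shift=2, axis=1)        # simulate a small horizontal motion

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])

# Magnitude-weighted histogram over 8 direction bins as a simple motion descriptor.
hist, _ = np.histogram(angle, bins=8, range=(0, 2 * np.pi), weights=magnitude)
descriptor = hist / (hist.sum() + 1e-8)
print("motion-direction descriptor:", np.round(descriptor, 3))
```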
Deramgozin, Mohammadmahdi. "Développement de modèles de reconnaissance des expressions faciales à base d’apprentissage profond pour les applications embarquées." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0286.
The field of Facial Emotion Recognition (FER) is pivotal in advancing human-machine interaction and finds essential applications in healthcare for conditions like depression and anxiety. Leveraging Convolutional Neural Networks (CNNs), this thesis presents a progression of models aimed at optimizing emotion detection and interpretation. The initial model is resource-frugal but competes favorably with state-of-the-art solutions, making it a strong candidate for embedded systems constrained in computational and memory resources. To capture the complexity and ambiguity of human emotions, the research presented in this thesis enhances this CNN-based foundational model by incorporating facial Action Units (AUs). This approach not only refines emotion detection but also provides interpretability by identifying the specific AUs tied to each emotion. Further sophistication is achieved by introducing neural attention mechanisms, both spatial and channel-based, improving the model's focus on salient facial features. This makes the CNN-based model well suited to real-world scenarios, such as partially obscured or subtle facial expressions. Building on these results, the thesis finally proposes an optimized yet computationally efficient CNN model that is ideal for resource-limited environments like embedded systems. While it provides a robust solution for FER, this research also identifies perspectives for future work, such as real-time applications and advanced techniques for model interpretability.
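Channel-based attention of the kind this abstract mentions is commonly implemented as a squeeze-and-excitation block: global-average-pool each channel, pass the result through a small bottleneck MLP, and rescale the channels by the resulting gates. The PyTorch sketch below shows the generic design, not the thesis's exact model; the channel count and reduction ratio are arbitrary choices.

```python
# Minimal squeeze-and-excitation (SE) channel-attention block in PyTorch:
# squeeze each channel to a scalar, excite via a bottleneck MLP, and rescale.
# Channel count and reduction ratio are arbitrary illustrative choices.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pool
        self.fc = nn.Sequential(             # excite: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates                     # rescale feature maps channel-wise

feat = torch.randn(2, 64, 28, 28)            # a hypothetical CNN feature map
print(SEBlock(64)(feat).shape)               # torch.Size([2, 64, 28, 28])
```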
Ruiz Hernandez, John Alexander. "Analyse faciale avec dérivées Gaussiennes." PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00646718.
Book chapters on the topic "Reconnaissance faciale automatisée"
Monteiro, Stephen. "La reconnaissance faciale automatique promue au rang de nécessité sociale." In Attentes et promesses technoscientifiques, 117–36. Les Presses de l’Université de Montréal, 2022. http://dx.doi.org/10.1515/9782760645028-006.