Dissertations on the topic "Visable learning"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Explore the top 17 dissertations for research on the topic "Visable learning".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Karlsson, Elin, and Rebecca Pontán. „"Elevinflytande är väl när det flyter på?" : Ett utvecklingsinriktat arbete om att synliggöra elevinflytandet i fritidshemmet“. Thesis, Linnéuniversitetet, Institutionen för didaktik och lärares praktik (DLP), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105482.
Xia, Baiqiang. „Learning 3D geometric features for soft-biometrics recognition“. Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10132/document.
Soft-biometric (gender, age, etc.) recognition has shown growing applications in different domains. Previous 2D face-based studies are sensitive to illumination and pose changes, and insufficient to represent the facial morphology. To overcome these problems, this thesis employs the 3D face in soft-biometric recognition. Based on a Riemannian shape analysis of facial radial curves, four types of Dense Scalar Field (DSF) features are proposed, which represent the averageness, the symmetry, the global spatiality and the local gradient of the 3D face. Experiments with Random Forest on the 3D FRGCv2 dataset demonstrate the effectiveness of the proposed features in soft-biometric recognition. Further, we demonstrate that the correlations between soft-biometrics are useful in recognition. To the best of our knowledge, this is the first work that studies age estimation, and the correlations of soft-biometrics, using the 3D face.
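As a rough illustration of the experimental setup this abstract describes, the sketch below trains a Random Forest on precomputed, fixed-length geometric feature vectors. It is a minimal sketch, not the thesis code: the DSF extraction (Riemannian analysis of radial curves) is assumed to have happened elsewhere, and all sizes, data and labels are invented placeholders.

```python
# Hypothetical sketch: classifying a soft-biometric trait (gender) from
# precomputed DSF-style feature vectors with a Random Forest. The feature
# extraction itself is assumed done elsewhere; data here is random.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(466, 300))   # stand-in for per-face DSF feature vectors
y = rng.integers(0, 2, size=466)  # stand-in gender labels (0/1)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```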
Cousin, Stéphanie. „Apprentissage dans le développement de la discrimination des stimuli sociaux chez l’enfant avec ou sans troubles du développement“. Thesis, Lille 3, 2013. http://www.theses.fr/2013LIL30016/document.
The role of the environment has been demonstrated in the development of the discrimination of social stimuli. The discrimination of social stimuli such as faces and facial expressions has been extensively studied during the past decades. In addition, people with autism show atypical responses to social stimuli compared to typically functioning individuals, and those discrepancies can be seen very early in life. However, there is still much to learn about how this learning takes place, particularly about which parts of the face are relevant for the discrimination. The focus of this work is to study more precisely how face parts come to control the responses of children with autism. The goal of our studies was, first, to build a task measuring precisely which parts of the face are involved in facial expression discrimination in children with autism and in typically developing children (Experiments 1 & 2). Subsequently, we devised a task evaluating the role of the eye and mouth regions in children with autism and typically developing children, in order to see the effect of modifying face-observation patterns on the way the eyes and mouth come to control the responses of children with autism (Experiments 3 & 4). Results are discussed in light of the role the environment plays in the development of facial expression discrimination. Implications for the study of early facial expression discrimination learning in typically developing children are discussed. Direction of gaze, in addition to the expression of the eye region, is discussed as a relevant element for the discrimination of facial stimuli.
Maalej, Ahmed. „3D Facial Expressions Recognition Using Shape Analysis and Machine Learning“. Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10025/document.
Facial expression recognition is a challenging task which has received growing interest within the research community, impacting important applications in fields related to human-machine interaction (HMI). Toward building human-like, emotionally intelligent HMI devices, scientists are trying to include the essence of the human emotional state in such systems. The recent development of 3D acquisition sensors has made 3D data more available, and this kind of data alleviates the problems inherent in 2D data, such as illumination, pose and scale variations as well as low resolution. Several 3D facial databases are publicly available for researchers in the field of face and facial expression recognition to validate and evaluate their approaches. This thesis deals with the facial expression recognition (FER) problem and proposes an approach based on shape analysis to handle both static and dynamic FER tasks. Our approach includes the following steps: first, a curve-based representation of the 3D face model is proposed to describe facial features. Then, once these curves are extracted, their shape information is quantified using a Riemannian framework. We end up with similarity scores between different local facial shapes, constituting feature vectors associated with each facial surface. Afterwards, these features are used as input parameters to machine learning and classification algorithms to recognize expressions. Exhaustive experiments are conducted to validate our approach, and results are presented and compared to related work.
Nicolle, Jérémie. „Reading Faces. Using Hard Multi-Task Metric Learning for Kernel Regression“. Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066043/document.
Collecting and labeling varied and relevant data for training automatic facial information prediction systems is both hard and time-consuming. As a consequence, the available data is often of limited size compared to the difficulty of the prediction tasks, which makes overfitting a particularly important issue in several face-related machine learning applications. In this PhD, we introduce a novel method for multi-dimensional label regression, namely Hard Multi-Task Metric Learning for Kernel Regression (H-MT-MLKR). Our proposed method has been designed with a particular focus on overfitting reduction. The Metric Learning for Kernel Regression (MLKR) method, proposed by Kilian Q. Weinberger in 2007, aims at learning a subspace that minimizes the quadratic training error of a Nadaraya-Watson estimator. In our method, we extend MLKR to multi-dimensional label regression by adding a novel multi-task regularization that reduces the degrees of freedom of the learned model, along with potential overfitting. We evaluate our regression method on two different applications, namely landmark localization and Action Unit intensity prediction. We also present our work on automatic emotion prediction in a continuous space, which is likewise based on the Nadaraya-Watson estimator. Two of our frameworks allowed us to win international data science challenges, namely the Audio-Visual Emotion Challenge (AVEC’12) and the fully continuous Facial Expression Recognition and Analysis challenge (FERA’15).
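The core estimator named in this abstract can be made concrete. Below is a minimal sketch of Nadaraya-Watson kernel regression with a learned linear projection A, the object that MLKR-style methods optimize; here A is simply fixed for illustration, and learning it (by gradient descent on the leave-one-out training error, as in MLKR) is only noted in a comment. All data and dimensions are invented.

```python
# Minimal sketch of the Nadaraya-Watson estimator at the heart of MLKR-style
# methods: the prediction is a Gaussian-kernel weighted average of training
# labels, with distances computed in a learned linear subspace A.
import numpy as np

def nadaraya_watson(A, X_train, y_train, x_query):
    """Predict y(x_query) as a kernel-weighted mean of y_train."""
    d = (X_train - x_query) @ A.T            # differences in the learned subspace
    w = np.exp(-np.sum(d * d, axis=1))       # Gaussian kernel weights
    return w @ y_train / (w.sum() + 1e-12)   # weighted average (eps avoids 0/0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)  # toy scalar labels
A = np.eye(3, 10)                              # fixed projection to a 3-D subspace
# MLKR would now minimize sum_i (y_i - yhat_{-i})^2 with respect to A
# by gradient descent; the multi-task variant adds a regularizer on A.
print(nadaraya_watson(A, X, y, X[0]))
```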
Al Chanti, Dawood. „Analyse Automatique des Macro et Micro Expressions Faciales : Détection et Reconnaissance par Machine Learning“. Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT058.
Facial expression analysis is an important problem in many biometric tasks, such as face recognition, face animation, affective computing and human-computer interfaces. In this thesis, we aim at analyzing the facial expressions of a face using images and video sequences. We divided the problem into three parts. First, we study macro facial expressions for emotion recognition and propose three different levels of feature representation: low-level features through a Bag of Visual Words model, mid-level features through sparse representation, and hierarchical features through a deep-learning-based method. The objective is to find the most effective and efficient representation that contains the distinctive information of expressions and that overcomes various challenges arising from: 1) intrinsic factors such as appearance and expressiveness variability, and 2) extrinsic factors such as illumination, pose, scale and imaging parameters, e.g., resolution, focus and noise. We then incorporate the time dimension to extract spatio-temporal features, with the objective of describing the subtle feature deformations that discriminate ambiguous classes. Second, we direct our research toward transfer learning, where we aim at adapting facial expression category models to new domains and tasks. We therefore study domain adaptation and zero-shot learning, developing a method that solves the two tasks jointly. Our method is suitable for unlabelled target datasets coming from data distributions different from the source domain, and for unlabelled target datasets with different label distributions that share the same context as the source domain. To permit knowledge transfer between domains and tasks, we use Euclidean learning and convolutional neural networks to design a mapping function that maps the visual information coming from facial expressions into a semantic space derived from a natural language model, which encodes the visual attribute descriptions or uses the label information. The consistency between the two subspaces is maximized by aligning them using the visual feature distribution. Third, we study micro facial expression detection. We propose an algorithm to spot micro-expression segments, including the onset and offset frames, and to spatially pinpoint in each image the regions involved in the micro facial muscle movements. The problem is formulated as anomaly detection, because micro-expressions occur infrequently and thus generate little data compared to natural facial behaviours. First, we propose a deep recurrent convolutional auto-encoder to capture spatial and motion feature changes of natural facial behaviours. Then, a statistical model that estimates the probability density function of normal facial behaviours, and associates a discriminative score used to spot micro-expressions, is learned with a Gaussian Mixture Model. Finally, an adaptive thresholding technique for identifying micro-expressions among natural facial behaviours is proposed. Our algorithms are tested on deliberate and spontaneous facial expression benchmarks.
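The micro-expression spotting stage described at the end of this abstract combines a density model of normal behaviour with an adaptive threshold. Below is a minimal sketch of that idea, assuming the auto-encoder features are already computed; the data and the threshold rule (mean plus three standard deviations) are invented placeholders.

```python
# Hypothetical sketch of GMM-based anomaly scoring for micro-expression
# spotting: fit a GMM to features of normal facial behaviour, score frames
# by negative log-likelihood, and flag frames above an adaptive threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_feats = rng.normal(size=(5000, 32))        # stand-in features, normal behaviour
test_feats = np.vstack([rng.normal(size=(95, 32)),
                        rng.normal(loc=3.0, size=(5, 32))])  # 5 "anomalous" frames

gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(normal_feats)

train_scores = -gmm.score_samples(normal_feats)   # negative log-likelihood
threshold = train_scores.mean() + 3 * train_scores.std()  # adaptive threshold
flags = -gmm.score_samples(test_feats) > threshold
print("frames flagged as micro-expressions:", np.flatnonzero(flags))
```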
Fayet, Cédric. „Multimodal anomaly detection in discourse using speech and facial expressions“. Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S131.
This thesis is about multimodal anomaly detection in discourse using facial expressions and speech expressivity. These two modalities are vectors of emotions and intentions, and can reflect the state of mind of a human being. In this work, a discourse corpus containing some induced and acted anomalies has been built. This corpus enabled testing a detection chain based on semi-supervised classification; GMMs, one-class SVMs and Isolation Forests are examples of the models that have been used. It also enabled studying the contribution of each modality, and their joint contribution, to detection performance.
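A minimal sketch of the semi-supervised detection chain this abstract mentions: one-class models are fitted on normal segments only and then flag outliers among new feature vectors. The features below are random placeholders standing in for concatenated facial-expression and speech-expressivity descriptors.

```python
# Sketch of one-class anomaly detection on multimodal feature vectors,
# comparing two of the model families named in the abstract.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(size=(1000, 20))              # stand-in normal segments
test = np.vstack([rng.normal(size=(45, 20)),
                  rng.normal(loc=4.0, size=(5, 20))])  # last 5 are anomalies

for model in (OneClassSVM(nu=0.05, gamma="scale"),
              IsolationForest(contamination=0.05, random_state=0)):
    model.fit(normal)
    pred = model.predict(test)                    # +1 = normal, -1 = anomaly
    print(type(model).__name__, "anomalies at:", np.flatnonzero(pred == -1))
```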
Biasutto-Lervat, Théo. „Modélisation de la coarticulation multimodale : vers l'animation d'une tête parlante intelligible“. Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0019.
This thesis deals with neural-network-based coarticulation modeling, and aims to synchronize the facial animation of a 3D talking head with speech. Predicting articulatory movements is not a trivial task: it is well known that the production of a phoneme is greatly affected by its phonetic context, a phenomenon called coarticulation. We propose in this work a coarticulation model, i.e. a model able to predict the spatial trajectories of articulators from speech. We rely on a sequential model, recurrent neural networks, and more specifically Gated Recurrent Units, which are able to treat articulatory dynamics as a central component of the modeling. Unfortunately, the typical amount of data in articulatory and audiovisual databases is quite low for a deep learning approach. To overcome this difficulty, we propose to integrate articulatory knowledge into the networks during initialization. The robustness of RNNs allows us to apply our coarticulation model to predict both face and tongue movements, in French and German for the face, and in English and German for the tongue. Evaluation was conducted through objective measures of the trajectories, and through experiments verifying that critical articulatory targets are fully reached. We also conducted a subjective evaluation to attest to the perceptual quality of the predicted articulation once applied to our facial animation system. Finally, we analyzed the model after training to explore the phonetic knowledge it had learned.
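As a sketch of the kind of sequence model this abstract describes (not the thesis implementation), the following maps a phoneme sequence to articulator trajectories with a bidirectional GRU, so each prediction can depend on both left and right phonetic context, as coarticulation requires. All sizes and names are invented.

```python
# Illustrative GRU-based coarticulation model: phoneme sequence in,
# per-frame articulator coordinates out.
import torch
import torch.nn as nn

class CoarticulationGRU(nn.Module):
    def __init__(self, n_phonemes=40, emb=32, hidden=128, n_articulators=12):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, emb)        # phoneme identities
        self.gru = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_articulators) # trajectory coordinates

    def forward(self, phonemes):                          # (batch, time)
        h, _ = self.gru(self.embed(phonemes))             # (batch, time, 2*hidden)
        return self.head(h)                               # (batch, time, n_articulators)

model = CoarticulationGRU()
traj = model(torch.randint(0, 40, (8, 25)))               # 8 sequences of 25 phonemes
print(traj.shape)                                         # torch.Size([8, 25, 12])
```

The bidirectional pass is the natural fit here, since coarticulation is both anticipatory and carryover; the thesis's articulatory-knowledge initialization would replace the default random weights.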
Zhang, Yuyao. „Non-linear dimensionality reduction and sparse representation models for facial analysis“. Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0019/document.
Face analysis techniques commonly require a proper representation of images by means of dimensionality reduction, leading to embedded manifolds that aim at capturing the relevant characteristics of the signals. In this thesis, we first provide a comprehensive survey of the state of the art in embedded manifold models. Then, we introduce a novel non-linear embedding method, Kernel Similarity Principal Component Analysis (KS-PCA), into Active Appearance Models, in order to model face appearance under variable illumination. The proposed algorithm outperforms the traditional linear PCA transform in capturing the salient features generated by different illuminations, and reconstructs the illuminated faces with high accuracy. We also consider the problem of automatically classifying human face poses from face views with varying illumination, as well as occlusion and noise. Based on sparse representation methods, we propose two dictionary-learning frameworks for this pose classification problem. The first is Adaptive Sparse Representation pose Classification (ASRC). It trains the dictionary via a linear model called Incremental Principal Component Analysis (Incremental PCA), which tends to decrease the intra-class redundancy that may affect classification performance, while keeping the extra-class redundancy that is critical for sparse representation. The second is the Dictionary-Learning Sparse Representation model (DLSR), which learns the dictionary so that it coincides with the classification criterion; this training goal is achieved by the K-SVD algorithm. In a series of experiments, we show the performance of the two dictionary-learning methods, respectively based on a linear transform and on a sparse representation model. In addition, we propose a novel Dictionary Learning framework for Illumination Normalization (DL-IN), based on sparse representation with coupled dictionaries. The dictionary pairs are jointly optimized from pairs of normally and irregularly illuminated face images. We further use a Gaussian Mixture Model (GMM) to enhance the framework's capability of modeling data with complex distributions: the GMM adapts each component model to a subset of the samples and then fuses them together. Experimental results demonstrate the effectiveness of sparsity as a prior for patch-based illumination normalization of face images.
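The common thread of these frameworks, learning a dictionary and sparse-coding samples against it, can be sketched as follows. K-SVD itself is not available in scikit-learn, so MiniBatchDictionaryLearning stands in here as the dictionary optimizer; data and sizes are placeholders.

```python
# Sketch of dictionary learning plus sparse coding: learn an overcomplete
# dictionary from training features, then encode new samples as sparse
# combinations of its atoms (OMP with a fixed sparsity budget).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                 # stand-in feature vectors / patches

dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                   batch_size=32, random_state=0)
D = dico.fit(X).components_                    # learned dictionary atoms (128 x 64)

codes = sparse_encode(X[:5], D, algorithm="omp",
                      n_nonzero_coefs=10)      # 10 active atoms per sample
print("nonzeros per sample:", (codes != 0).sum(axis=1))
```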
Bouges, Pierre. „Gestion de données manquantes dans des cascades de boosting : application à la détection de visages“. Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840842.
Reverdy, Clément. „Annotation et synthèse basée données des expressions faciales de la Langue des Signes Française“. Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS550.
French Sign Language (LSF) is part of the identity and culture of the deaf community in France. One way to promote this language is to generate signed content through virtual characters called signing avatars. The system we propose is part of a more general project of gestural synthesis of LSF by concatenation, which generates new sentences from a corpus of annotated motion data captured via a marker-based motion capture (MoCap) device, by editing existing data. In LSF, facial expressivity is particularly important since it is the vector of much information (e.g., affective, clausal or adjectival). This thesis aims to integrate the facial aspect of LSF into the concatenative synthesis system described above. A processing pipeline is thus proposed, from data capture via a MoCap device, to facial animation of the avatar from these data, to automatic annotation of the corpus thus constituted. The first contribution of this thesis concerns the methodology employed and the blendshape representation used both for the synthesis of facial animations and for automatic annotation; it enables the analysis/synthesis scheme to be handled at an abstract level, with homogeneous and meaningful descriptors. The second contribution concerns the development of an automatic annotation method based on the recognition of expressive facial expressions using machine learning techniques. The last contribution lies in the synthesis method, which is expressed as a rather classic optimization problem but in which we have included…
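The blendshape representation mentioned above can be illustrated with a small sketch: a captured frame is approximated as the neutral mesh plus a non-negative combination of blendshape displacement vectors, with the per-frame weights recovered by non-negative least squares. This is an assumption-laden toy example, not the thesis pipeline; all meshes and sizes are random placeholders.

```python
# Toy blendshape fit: recover per-frame activation weights w >= 0 such that
# target ~ neutral + deltas @ w, solved with non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_vertices, n_shapes = 300, 20
neutral = rng.normal(size=3 * n_vertices)             # flattened neutral mesh
deltas = rng.normal(size=(3 * n_vertices, n_shapes))  # blendshape displacement basis

true_w = np.clip(rng.normal(size=n_shapes), 0, None)  # synthetic activations
target = neutral + deltas @ true_w                    # synthetic captured frame

weights, _ = nnls(deltas, target - neutral)           # per-frame weight solve
print("max recovery error:", np.abs(weights - true_w).max())
```

These weight vectors are exactly the kind of homogeneous, meaningful descriptor the abstract says can drive both the avatar's animation and the automatic annotation.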
Guillaumin, Matthieu. „Données multimodales pour l'analyse d'image“. Phd thesis, Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00522278/en/.
Deregnaucourt, Thomas. „Prédiction spatio-temporelle de surfaces issues de l'imagerie en utilisant des processus stochastiques“. Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC088.
The prediction of a surface is now an important problem due to its use in multiple domains, such as computer vision and the simulation of avatars for cinematography or video games. Since a surface can be static or dynamic, i.e. evolving with time, the problem can be separated into two classes: a spatial prediction problem and a spatio-temporal one. In order to propose a new approach for each, the work of this thesis is divided into two parts. First, we sought to predict a static surface, assumed cylindrical, knowing it partially from curves. The proposed approach consists in deforming a cylinder onto the known curves in order to reconstruct the surface of interest. A correspondence between the known curves and the cylinder is first generated with the help of shape analysis tools. Once this step is done, an interpolation of the deformation field, which is assumed Gaussian, is estimated using maximum likelihood and Bayesian inference. This methodology was then applied to real data from two imaging domains: medical imaging and computer graphics. The results obtained show that the proposed approach exceeds the existing methods in the literature, with better results using Bayesian inference. Second, we were interested in the spatio-temporal prediction of dynamic surfaces: the objective was to predict a dynamic surface from its initial surface. Since the prediction needs to learn from known observations, we first developed a spatio-temporal surface analysis tool, based on shape analysis, which allows better learning. With this preliminary step done, we estimated the temporal deformation of the dynamic surface of interest; more precisely, we used an adaptation of the usual statistical estimators that is usable on the space of surfaces. Applying this estimated deformation to the initial surface yields an estimate of the dynamic surface. The process was then applied to predicting 4D facial expressions, allowing us to generate visually convincing expressions.
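The spatial part of this approach, interpolating a Gaussian deformation field from sparse known correspondences, can be sketched with an off-the-shelf Gaussian process regressor. In the toy example below, 1-D inputs stand in for positions along the cylinder's parameterization, and one scalar component of the deformation is interpolated; everything is an invented placeholder.

```python
# Sketch of Gaussian-process interpolation of one component of a deformation
# field from a few known samples, with posterior uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
s_known = rng.uniform(0, 1, size=(15, 1))          # locations of known curves
d_known = np.sin(2 * np.pi * s_known).ravel() \
          + 0.05 * rng.normal(size=15)             # noisy deformation samples

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
gp.fit(s_known, d_known)

s_grid = np.linspace(0, 1, 200).reshape(-1, 1)
d_mean, d_std = gp.predict(s_grid, return_std=True)  # posterior mean and std
print("max predictive std:", d_std.max())
```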
Khan, Rizwan Ahmed. „Détection des émotions à partir de vidéos dans un environnement non contrôlé“. Thesis, Lyon 1, 2013. http://www.theses.fr/2013LYO10227/document.
Communication in any form, verbal or non-verbal, is vital to completing various daily routine tasks and plays a significant role in life. Facial expression is the most effective form of non-verbal communication, and it provides a clue about emotional state, mindset and intention. Generally, an automatic facial expression recognition framework consists of three steps: face tracking, feature extraction and expression classification. In order to build a robust facial expression recognition framework capable of producing reliable results, it is necessary to extract features (from the appropriate facial regions) that have strong discriminative abilities. Recently, different methods for automatic facial expression recognition have been proposed, but invariably they are computationally expensive and spend computational time on the whole face image, or divide the facial image based on some mathematical or geometrical heuristic for feature extraction. None of them take inspiration from the human visual system when completing the same task. In this research thesis we took inspiration from the human visual system in order to determine from where (i.e. which facial regions) to extract features. We argue that expression analysis and recognition can be done in a more conducive manner if only some regions are selected for further processing (i.e. salient regions), as happens in the human visual system. We propose different frameworks for the automatic recognition of expressions, all drawing inspiration from human vision, with each subsequently proposed framework addressing the shortcomings of the previous one. Our proposed frameworks generally achieve results that exceed state-of-the-art methods for expression recognition. Moreover, they are computationally efficient and simple, as they process only the perceptually salient region(s) of the face for feature extraction. Processing only the perceptually salient region(s) of the face reduces both the feature vector dimensionality and the computational time for feature extraction, making the frameworks suitable for real-time applications.
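A toy sketch of the salient-region idea from this abstract: features are extracted only from eye and mouth crops rather than the whole face, shrinking the feature vector and the computation. The region coordinates below are invented; a real system would locate them with a landmark detector.

```python
# Extract normalized intensity histograms from assumed salient regions only,
# instead of processing all 128x128 pixels of the face.
import numpy as np

face = np.random.default_rng(0).random((128, 128))   # stand-in grayscale face

regions = {"left_eye": (30, 40, 35, 60),             # (row0, row1, col0, col1), assumed
           "right_eye": (30, 40, 68, 93),
           "mouth": (85, 105, 45, 83)}

def region_histogram(img, box, bins=16):
    r0, r1, c0, c1 = box
    hist, _ = np.histogram(img[r0:r1, c0:c1], bins=bins, range=(0, 1))
    return hist / hist.sum()                          # normalized histogram

feature = np.concatenate([region_histogram(face, b) for b in regions.values()])
print(feature.shape)   # (48,) versus 16384 raw pixels for the whole face
```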
Ballihi, Lahoucine. „Biométrie faciale 3D par apprentissage des caractéristiques géométriques : Application à la reconnaissance des visages et à la classification du genre“. Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00726299.
Honari, Sina. „Feature extraction on faces : from landmark localization to depth estimation“. Thèse, 2018. http://hdl.handle.net/1866/22658.
Couët-Garand, Alexandre. „Induction d'une stratégie visuelle de reconnaissance du genre“. Thèse, 2014. http://hdl.handle.net/1866/11214.
The goal of the following experiment was to make subjects unconsciously learn a visual strategy allowing them to use only part of the visual information available from the human face to correctly identify its gender. Normally, the gender of a face is recognized using certain regions, like the mouth and the eyes (Dupuis-Roy, Fortin, Fiset, & Gosselin, 2009). Our participants had to perform an operant conditioning task and were informed that points would be awarded according to their performance. At the end of training, the subjects who had been encouraged to use the left eye indeed used the left eye more than the right, and those conditioned to use the right eye used the right eye more than the left. We discuss the potential clinical applications of this conditioning method.