Academic literature on the topic 'Visible learning'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visible learning.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Visible learning"
Swinson, Jeremy. "Visible learning for teachers: maximizing impact on learning." Educational Psychology in Practice 28, no. 2 (June 2012): 215–16. http://dx.doi.org/10.1080/02667363.2012.693677.
Chen, Hung-Ching (Justin), Mark Goldberg, Malik Magdon-Ismail, and William A. Wallace. "Reverse Engineering a Social Agent-Based Hidden Markov Model — ViSAGE." International Journal of Neural Systems 18, no. 6 (December 2008): 491–526. http://dx.doi.org/10.1142/s0129065708001750.
Tambovceva, Tatjana, and Ineta Geipele. "Environmental Management Systems Experience among Latvian Construction Companies / Aplinkos apsaugos vadybos sistemų taikymo patirtis Latvijos statybos įmonėse." Technological and Economic Development of Economy 17, no. 4 (January 18, 2012): 595–610. http://dx.doi.org/10.3846/20294913.2011.603179.
Rimkutė, E., and M. Dovydaitienė. "Mokymosi negalios: skirtingi teoriniai požiūriai ir psichologinės pagalbos būdai" [Learning disabilities: different theoretical approaches and methods of psychological assistance]. Psichologija 44 (January 1, 2011): 118–33. http://dx.doi.org/10.15388/psichol.2011.44.2544.
Dissertations / Theses on the topic "Visible learning"
Karlsson, Elin, and Rebecca Pontán. ""Elevinflytande är väl när det flyter på?": Ett utvecklingsinriktat arbete om att synliggöra elevinflytandet i fritidshemmet" [Pupil influence is when things run smoothly, right? A development-oriented project on making pupil influence visible in the leisure-time centre]. Thesis, Linnéuniversitetet, Institutionen för didaktik och lärares praktik (DLP), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105482.
Full textXia, Baiqiang. "Learning 3D geometric features for soft-biometrics recognition." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10132/document.
Soft-Biometric (gender, age, etc.) recognition has shown growing applications in different domains. Previous 2D face-based studies are sensitive to illumination and pose changes, and insufficient to represent the facial morphology. To overcome these problems, this thesis employs the 3D face in Soft-Biometric recognition. Based on a Riemannian shape analysis of facial radial curves, four types of Dense Scalar Field (DSF) features are proposed, which represent the Averageness, the Symmetry, the global Spatiality and the local Gradient of the 3D face. Experiments with Random Forest on the 3D FRGCv2 dataset demonstrate the effectiveness of the proposed features in Soft-Biometric recognition. Further, we demonstrate that the correlations of Soft-Biometrics are useful in the recognition. To the best of our knowledge, this is the first work which studies age estimation, and the correlations of Soft-Biometrics, using the 3D face.
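The classification stage this abstract describes can be sketched with off-the-shelf tools. The following is a minimal illustration, not the thesis's implementation: it assumes the DSF features have already been extracted as fixed-length vectors, and all data and labels below are synthetic placeholders.

```python
# Minimal sketch of the classification stage only: Random Forest on precomputed
# Dense Scalar Field (DSF) feature vectors. Data and labels are synthetic
# placeholders; the DSF extraction from 3D faces is not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(466, 500))    # one placeholder DSF vector per subject
y = rng.integers(0, 2, size=466)   # placeholder gender labels (0/1)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("10-fold CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```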
Cousin, Stéphanie. "Apprentissage dans le développement de la discrimination des stimuli sociaux chez l’enfant avec ou sans troubles du développement" [Learning in the development of the discrimination of social stimuli in children with or without developmental disorders]. Thesis, Lille 3, 2013. http://www.theses.fr/2013LIL30016/document.
The role of the environment has been demonstrated in the development of the discrimination of social stimuli. The discrimination of social stimuli such as faces and facial expressions has been extensively studied during the past decades. In addition, people with autism show atypical responses to social stimuli compared to typically functioning individuals. Those discrepancies can be seen very early in life. However, there is still much to know about how this learning takes place, particularly about the face parts that are relevant for the discrimination. The focus of this work is to study more precisely how face parts come to control the responses of children with autism. The goal of our studies was, first, to build a task to measure precisely which parts of the face are involved in facial expression discrimination in children with autism and in typically developing children (Experiments 1 & 2). Subsequently, we devised a task which evaluated the role of the eye and mouth regions in children with autism and typically developing children, in order to see the effect of the modification of observing patterns of faces on the way eyes and mouth come to control the responses of children with autism (Experiments 3 & 4). Results are discussed in line with the role of the environment in the development of facial expression discrimination. Implications for the study of early facial expression discrimination learning in typically developing children are discussed. Direction of gaze, in addition to the expression of the eye region, is discussed as a relevant element for the discrimination of facial stimuli.
Maalej, Ahmed. "3D Facial Expressions Recognition Using Shape Analysis and Machine Learning." Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10025/document.
Facial expression recognition is a challenging task which has received growing interest within the research community, impacting important applications in fields related to human-machine interaction (HMI). Toward building human-like emotionally intelligent HMI devices, scientists are trying to include the essence of the human emotional state in such systems. The recent development of 3D acquisition sensors has made 3D data more available, and this kind of data alleviates the problems inherent in 2D data, such as illumination, pose and scale variations as well as low resolution. Several 3D facial databases are publicly available for researchers in the field of face and facial expression recognition to validate and evaluate their approaches. This thesis deals with the facial expression recognition (FER) problem and proposes an approach based on shape analysis to handle both static and dynamic FER tasks. Our approach includes the following steps: first, a curve-based representation of the 3D face model is proposed to describe facial features. Then, once these curves are extracted, their shape information is quantified using a Riemannian framework. We end up with similarity scores between different facial local shapes, constituting feature vectors associated with each facial surface. Afterwards, these features are used as input parameters to machine learning and classification algorithms to recognize expressions. Exhaustive experiments are carried out to validate our approach, and results are presented and compared to related work.
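As a rough illustration of the final step (similarity-score feature vectors fed to a classifier), here is a hedged sketch using an SVM from scikit-learn; the Riemannian curve analysis that would produce the scores is assumed, and the data is synthetic.

```python
# Minimal sketch: classify expressions from precomputed curve-similarity scores.
# The Riemannian shape analysis producing these scores is assumed, not shown.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((300, 70))          # placeholder similarity-score feature vectors
y = rng.integers(0, 6, size=300)   # six prototypical expression classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[:250], y[:250])
print("held-out accuracy:", clf.score(X[250:], y[250:]))
```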
Nicolle, Jérémie. "Reading Faces. Using Hard Multi-Task Metric Learning for Kernel Regression." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066043/document.
Collecting and labeling various and relevant data for training automatic facial information prediction systems is both hard and time-consuming. As a consequence, available data is often of limited size compared to the difficulty of the prediction tasks. This makes overfitting a particularly important issue in several face-related machine learning applications. In this PhD, we introduce a novel method for multi-dimensional label regression, namely Hard Multi-Task Metric Learning for Kernel Regression (H-MT-MLKR). Our proposed method has been designed with a particular focus on overfitting reduction. The Metric Learning for Kernel Regression (MLKR) method, proposed by Kilian Q. Weinberger in 2007, aims at learning a subspace that minimizes the quadratic training error of a Nadaraya-Watson estimator. In our method, we extend MLKR to multi-dimensional label regression by adding a novel multi-task regularization that reduces the degrees of freedom of the learned model, along with potential overfitting. We evaluate our regression method on two different applications, namely landmark localization and Action Unit intensity prediction. We also present our work on automatic emotion prediction in a continuous space, which is based on the Nadaraya-Watson estimator as well. Two of our frameworks won international data science challenges, namely the Audio-Visual Emotion Challenge (AVEC’12) and the fully continuous Facial Expression Recognition and Analysis challenge (FERA’15).
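The Nadaraya-Watson estimator at the heart of MLKR is compact enough to sketch directly. The snippet below shows only that estimator under a fixed linear map A; in MLKR (and in H-MT-MLKR) A would be learned by minimizing the training error, which is not shown here.

```python
# Minimal sketch of a Nadaraya-Watson estimator with a Gaussian kernel computed
# in a linearly projected space, the building block that MLKR optimizes.
# Here A is random rather than learned; labels may be multi-dimensional.
import numpy as np

def nadaraya_watson(A, X_train, Y_train, x):
    d2 = np.sum((X_train @ A.T - x @ A.T) ** 2, axis=1)  # ||A x_i - A x||^2
    w = np.exp(-d2)                                      # Gaussian kernel weights
    return (w @ Y_train) / (w.sum() + 1e-12)             # weighted label average

rng = np.random.default_rng(2)
X, Y = rng.normal(size=(100, 10)), rng.normal(size=(100, 3))
A = rng.normal(size=(4, 10))        # low-rank projection (learned in MLKR)
print(nadaraya_watson(A, X, Y, rng.normal(size=10)))
```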
Al Chanti, Dawood. "Analyse Automatique des Macro et Micro Expressions Faciales : Détection et Reconnaissance par Machine Learning" [Automatic analysis of macro- and micro-facial expressions: detection and recognition by machine learning]. Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT058.
Facial expression analysis is an important problem in many biometric tasks, such as face recognition, face animation, affective computing and human-computer interfaces. In this thesis, we aim at analyzing facial expressions of a face using images and video sequences. We divided the problem into three parts. First, we study macro facial expressions for emotion recognition and propose three different levels of feature representation: low-level features through a Bag of Visual Words model, mid-level features through sparse representation, and hierarchical features through a deep-learning-based method. The objective is to find the most effective and efficient representation that contains distinctive information of expressions and that overcomes various challenges coming from: 1) intrinsic factors such as appearance and expressiveness variability and 2) extrinsic factors such as illumination, pose, scale and imaging parameters, e.g., resolution, focus and noise. Then, we incorporate the time dimension to extract spatio-temporal features, with the objective of describing subtle feature deformations to discriminate ambiguous classes. Second, we direct our research toward transfer learning, where we aim at adapting facial expression category models to new domains and tasks. We thus study domain adaptation and zero-shot learning to develop a method that solves the two tasks jointly. Our method is suitable for unlabelled target datasets coming from different data distributions than the source domain, and for unlabelled target datasets with different label distributions but sharing the same context as the source domain. Therefore, to permit knowledge transfer between domains and tasks, we use Euclidean learning and convolutional neural networks to design a mapping function that maps the visual information coming from facial expressions into a semantic space coming from a natural language model that encodes the visual attribute description or uses the label information. The consistency between the two subspaces is maximized by aligning them using the visual feature distribution. Third, we study micro facial expression detection. We propose an algorithm to spot micro-expression segments, including the onset and offset frames, and to spatially pinpoint in each image the regions involved in the micro facial muscle movements. The problem is formulated as anomaly detection, because micro-expressions occur infrequently and thus generate few data compared to natural facial behaviours. First, we propose a deep recurrent convolutional auto-encoder to capture spatial and motion feature changes of natural facial behaviours. Then, a statistical model estimating the probability density function of normal facial behaviours, while associating a discriminating score to spot micro-expressions, is learned based on a Gaussian mixture model. Finally, we propose an adaptive thresholding technique for identifying micro-expressions from natural facial behaviour. Our algorithms are tested on deliberate and spontaneous facial expression benchmarks.
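The spotting stage of the third part lends itself to a brief sketch: fit a density model to features of normal facial behaviour, then flag low-likelihood frames. The snippet below is an illustrative approximation only; the recurrent convolutional auto-encoder features and the thesis's exact adaptive thresholding rule are assumed away, and the data is synthetic.

```python
# Minimal sketch of GMM-based micro-expression spotting: model normal facial
# behaviour, then flag frames whose log-likelihood falls below a data-driven
# threshold. Auto-encoder features are replaced by synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
normal_feats = rng.normal(size=(1000, 32))   # placeholder features of normal behaviour
test_feats = rng.normal(size=(200, 32))      # placeholder features of a test sequence

gmm = GaussianMixture(n_components=8, random_state=0).fit(normal_feats)
threshold = np.percentile(gmm.score_samples(normal_feats), 1)  # 1st-percentile cut-off
flagged = np.flatnonzero(gmm.score_samples(test_feats) < threshold)
print("candidate micro-expression frames:", flagged)
```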
Fayet, Cédric. "Multimodal anomaly detection in discourse using speech and facial expressions." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S131.
This thesis is about multimodal anomaly detection in discourse using facial expressions and speech expressivity. These two modalities are vectors of emotions and intentions, and can reflect the state of mind of a human being. In this work, a corpus of discourse containing some induced and acted anomalies has been built. This corpus has enabled testing a detection chain based on semi-supervised classification. GMM, one-class SVM and Isolation Forest are examples of models that have been used. It has also enabled studying the contribution of each modality, and their joint contribution, to detection efficiency.
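The detection chain named in this abstract maps directly onto standard scikit-learn estimators. A minimal sketch follows, trained on "normal" features only; the multimodal speech and facial-expression feature extraction is assumed, and the data is synthetic.

```python
# Minimal sketch of the semi-supervised detectors mentioned above, fitted on
# normal discourse features only. Feature extraction is assumed; data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)
normal = rng.normal(size=(500, 20))   # placeholder speech + facial-expression features
test = rng.normal(size=(50, 20))

for model in (OneClassSVM(nu=0.05, gamma="scale"), IsolationForest(random_state=0)):
    model.fit(normal)
    print(type(model).__name__, (model.predict(test) == -1).sum(), "anomalies")
```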
Biasutto-Lervat, Théo. "Modélisation de la coarticulation multimodale : vers l'animation d'une tête parlante intelligible" [Modeling multimodal coarticulation: toward the animation of an intelligible talking head]. Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0019.
This thesis deals with neural-network-based coarticulation modeling and aims to synchronize the facial animation of a 3D talking head with speech. Predicting articulatory movements is not a trivial task: it is well known that the production of a phoneme is greatly affected by its phonetic context, a phenomenon called coarticulation. We propose in this work a coarticulation model, i.e., a model able to predict the spatial trajectories of articulators from speech. We rely on a sequential model, recurrent neural networks, and more specifically Gated Recurrent Units, which are able to consider the articulation dynamics as a central component of the modeling. Unfortunately, the typical amount of data in articulatory and audiovisual databases is quite low for a deep learning approach. To overcome this difficulty, we propose to integrate articulatory knowledge into the networks during initialization. The robustness of RNNs allows us to apply our coarticulation model to predict both face and tongue movements, in French and German for the face, and in English and German for the tongue. Evaluation has been conducted through objective measures of the trajectories and through experiments ensuring a complete reach of critical articulatory targets. We also conducted a subjective evaluation to attest to the perceptual quality of the predicted articulation once applied to our facial animation system. Finally, we analyzed the model after training to explore the phonetic knowledge it learned.
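A GRU-based sequence regressor of the kind described here can be sketched in a few lines of PyTorch. The snippet below is illustrative only: the dimensions, layer sizes and bidirectional choice are assumptions, and the articulatory-knowledge initialization of the thesis is not reproduced.

```python
# Minimal sketch of a GRU mapping a speech feature sequence to articulator
# trajectories. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class CoarticulationSketch(nn.Module):
    def __init__(self, in_dim=40, hidden=128, out_dim=12):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, out_dim)  # articulator coordinates per frame

    def forward(self, x):          # x: (batch, time, in_dim) speech features
        h, _ = self.gru(x)
        return self.head(h)        # (batch, time, out_dim) trajectories

model = CoarticulationSketch()
speech = torch.randn(2, 100, 40)   # two utterances of 100 frames each
print(model(speech).shape)         # torch.Size([2, 100, 12])
```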
Zhang, Yuyao. "Non-linear dimensionality reduction and sparse representation models for facial analysis." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0019/document.
Face analysis techniques commonly require a proper representation of images by means of dimensionality reduction, leading to embedded manifolds which aim at capturing relevant characteristics of the signals. In this thesis, we first provide a comprehensive survey of the state of the art in embedded manifold models. Then, we introduce a novel non-linear embedding method, Kernel Similarity Principal Component Analysis (KS-PCA), into Active Appearance Models, in order to model face appearances under variable illumination. The proposed algorithm successfully outperforms the traditional linear PCA transform in capturing the salient features generated by different illuminations, and reconstructs the illuminated faces with high accuracy. We also consider the problem of automatically classifying human face poses from face views with varying illumination, as well as occlusion and noise. Based on sparse representation methods, we propose two dictionary-learning frameworks for this pose classification problem. The first framework is Adaptive Sparse Representation pose Classification (ASRC). It trains the dictionary via a linear model called Incremental Principal Component Analysis (Incremental PCA), tending to decrease the intra-class redundancy which may affect classification performance, while keeping the extra-class redundancy which is critical for sparse representation. The second is the Dictionary-Learning Sparse Representation model (DLSR), which learns the dictionary so as to coincide with the classification criterion; this training goal is achieved by the K-SVD algorithm. In a series of experiments, we show the performance of the two dictionary-learning methods, which are respectively based on a linear transform and a sparse representation model. Besides, we propose a novel Dictionary Learning framework for Illumination Normalization (DL-IN). DL-IN is based on sparse representation in terms of coupled dictionaries; the dictionary pairs are jointly optimized from normally illuminated and irregularly illuminated face image pairs. We further utilize a Gaussian Mixture Model (GMM) to enhance the framework's capability of modeling data under complex distributions: the GMM adapts each model to a part of the samples and then fuses them together. Experimental results demonstrate the effectiveness of sparsity as a prior for patch-based illumination normalization of face images.
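The dictionary-learning and sparse-coding stage underlying ASRC, DLSR and DL-IN can be approximated with scikit-learn's generic solver; note this is not the K-SVD algorithm the thesis uses, and the patches below are synthetic placeholders.

```python
# Minimal sketch of dictionary learning with sparse coding (scikit-learn's
# solver, standing in for K-SVD). Patches are synthetic placeholders.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(5)
patches = rng.normal(size=(400, 64))   # placeholder 8x8 image patches, flattened

dico = DictionaryLearning(n_components=100, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, random_state=0)
codes = dico.fit_transform(patches)    # sparse codes, shape (400, 100)
recon = codes @ dico.components_       # reconstruction from learned atoms
print("mean squared reconstruction error:", np.mean((patches - recon) ** 2))
```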
Bouges, Pierre. "Gestion de données manquantes dans des cascades de boosting : application à la détection de visages" [Handling missing data in boosting cascades: application to face detection]. PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840842.
Books on the topic "Visible learning"
Feuerstein, Reuven. La pédagogie à visage humain: La méthode Feuerstein [Pedagogy with a human face: the Feuerstein method]. [Latresne]: Bord de l'eau, 2006.
Conference papers on the topic "Visible learning"
Ravichander, Abhilasha, Supriya Vijay, Varshini Ramaseshan, and S. Natarajan. "VISAGE: A Support Vector Machine Approach to Group Dynamic Analysis." In 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA). IEEE, 2015. http://dx.doi.org/10.1109/icmla.2015.146.