Dissertations / Theses on the topic 'Facial expression'

To see the other types of publications on this topic, follow the link: Facial expression.


Consult the top 50 dissertations / theses for your research on the topic 'Facial expression.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Testa, Rafael Luiz. "Síntese de expressões faciais em fotografias para representação de emoções." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-31012019-165605/.

Full text
Abstract:
The ability to process and identify facial emotions is essential for social interaction. Some psychiatric disorders can limit an individual's ability to recognize emotions in facial expressions. This problem can be confronted by using computational techniques to build tools for the diagnosis, evaluation, and training of facial emotion recognition. With this motivation, the objective of this work is to define, implement and evaluate a method to synthesize realistic facial expressions that represent emotions in images of real people. The main idea in the literature is that the facial expression in one person's image can be reenacted in another person's image. This study differs from previous approaches by proposing a technique that considers the similarity between facial images when choosing the one to be used as the source for reenactment, with the aim of increasing the realism of the synthesized images. Besides searching for the most similar facial components in an image dataset, the approach deforms the facial elements and maps the illumination differences onto the target image. The realism of the generated images was measured objectively and subjectively using publicly available image databases. A visual analysis showed that images synthesized from similar faces presented an adequate degree of realism, especially when compared with images synthesized from random faces. The study contributes to the generation of images for tools that support the diagnosis and therapy of psychiatric disorders, and to computer science through the proposal of new techniques for facial expression synthesis.
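As a hedged illustration of the retrieval step (choosing the most similar face in a bank before reenacting its expression onto the target), the sketch below compares normalized facial landmark sets by Euclidean distance. The 68-point landmarks and the distance measure are assumptions for illustration, not the thesis's actual similarity criterion:

```python
import numpy as np

def normalize_landmarks(pts):
    """Remove translation and scale so faces are geometrically comparable."""
    pts = pts - pts.mean(axis=0)           # center on the centroid
    return pts / np.linalg.norm(pts)       # unit Frobenius norm

def most_similar_face(target, candidates):
    """Return the index of the candidate face whose landmark geometry
    is closest to the target face (smallest Euclidean distance)."""
    t = normalize_landmarks(target)
    dists = [np.linalg.norm(normalize_landmarks(c) - t) for c in candidates]
    return int(np.argmin(dists))

# Toy usage: random 68-point landmark sets standing in for a face bank.
rng = np.random.default_rng(0)
bank = [rng.normal(size=(68, 2)) for _ in range(10)]
probe = rng.normal(size=(68, 2))
print("best source face:", most_similar_face(probe, bank))
```

In a full pipeline, the selected source face would then feed the deformation and illumination-mapping stages described in the abstract.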
2

Neth, Donald C. "Facial configuration and the perception of facial expression." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1189090729.

Full text
3

Baltrušaitis, Tadas. "Automatic facial expression analysis." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245253.

Full text
Abstract:
Humans spend a large amount of their time interacting with computers of one type or another. However, computers are emotionally blind and indifferent to the affective states of their users. Human-computer interaction which does not consider emotions, ignores a whole channel of available information. Faces contain a large portion of our emotionally expressive behaviour. We use facial expressions to display our emotional states and to manage our interactions. Furthermore, we express and read emotions in faces effortlessly. However, automatic understanding of facial expressions is a very difficult task computationally, especially in the presence of highly variable pose, expression and illumination. My work furthers the field of automatic facial expression tracking by tackling these issues, bringing emotionally aware computing closer to reality. Firstly, I present an in-depth analysis of the Constrained Local Model (CLM) for facial expression and head pose tracking. I propose a number of extensions that make location of facial features more accurate. Secondly, I introduce a 3D Constrained Local Model (CLM-Z) which takes full advantage of depth information available from various range scanners. CLM-Z is robust to changes in illumination and shows better facial tracking performance. Thirdly, I present the Constrained Local Neural Field (CLNF), a novel instance of CLM that deals with the issues of facial tracking in complex scenes. It achieves this through the use of a novel landmark detector and a novel CLM fitting algorithm. CLNF outperforms state-of-the-art models for facial tracking in presence of difficult illumination and varying pose. Lastly, I demonstrate how tracked facial expressions can be used for emotion inference from videos. I also show how the tools developed for facial tracking can be applied to emotion inference in music.
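A CLM couples local feature detectors with a statistical shape model. The sketch below shows only the shape-model half: noisy landmark observations are projected onto a toy PCA basis, with Tikhonov regularization standing in for the shape prior. The basis and dimensions are synthetic assumptions, not the models trained in the thesis:

```python
import numpy as np

# Toy point distribution model: shape = mean + basis @ params
rng = np.random.default_rng(1)
n_points, n_modes = 68, 10
mean_shape = rng.normal(size=2 * n_points)
basis = np.linalg.qr(rng.normal(size=(2 * n_points, n_modes)))[0]  # orthonormal modes

def fit_shape_params(observed, lam=0.1):
    """Regularized least-squares projection of an observed shape onto the
    model modes; the penalty lam plays the role of the CLM shape prior."""
    A = basis.T @ basis + lam * np.eye(n_modes)
    b = basis.T @ (observed - mean_shape)
    return np.linalg.solve(A, b)

# A noisy observation generated from the model itself, then refitted.
observed = mean_shape + basis @ rng.normal(size=n_modes) \
           + 0.01 * rng.normal(size=2 * n_points)
params = fit_shape_params(observed)
reconstructed = mean_shape + basis @ params
print("fit residual:", np.linalg.norm(reconstructed - observed))
```

In an actual CLM, the "observed" positions come from patch-expert response maps rather than being given directly.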
4

Mikheeva, Olga. "Perceptual facial expression representation." Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217307.

Full text
Abstract:
Facial expressions play an important role in areas such as human communication and the evaluation of medical conditions. For machine learning tasks in those areas, it would be beneficial to have a representation of facial expressions which corresponds to human similarity perception. In this work, a data-driven approach to representation learning of facial expressions is taken. The methodology is built upon Variational Autoencoders and eliminates appearance-related features from the latent space by using neutral facial expressions as additional inputs. In order to improve the quality of the learned representation, we modify the prior distribution of the latent variable to impose a structure on the latent space that is consistent with human perception of facial expressions. We conduct experiments on two datasets and on additionally collected similarity data, show that the human-like topology in the latent representation helps to improve performance on the stereotypical emotion classification task, and demonstrate the benefits of using a probabilistic generative model when exploring the roles of the latent dimensions through the generative process.
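A minimal PyTorch sketch of the central idea, assuming flattened feature vectors rather than images: a VAE that receives the same person's neutral face as an additional input, so the latent code is pushed toward expression-only information. Dimensions, architecture and the KL weight are illustrative, and the perceptually structured prior proposed in the thesis is omitted:

```python
import torch
import torch.nn as nn

class ExpressionVAE(nn.Module):
    """VAE that encodes an expressive face together with the same person's
    neutral face, so appearance cues can be explained away and the latent
    code concentrates on expression information."""
    def __init__(self, dim=128, z=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z)
        self.logvar = nn.Linear(256, z)
        self.dec = nn.Sequential(nn.Linear(z + dim, 256), nn.ReLU(),
                                 nn.Linear(256, dim))

    def forward(self, expressive, neutral):
        h = self.enc(torch.cat([expressive, neutral], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        zs = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.dec(torch.cat([zs, neutral], dim=-1))     # decode with neutral
        return recon, mu, logvar

model = ExpressionVAE()
x, n = torch.randn(4, 128), torch.randn(4, 128)   # expressive / neutral pairs
recon, mu, logvar = model(x, n)
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
loss = nn.functional.mse_loss(recon, x) + 1e-3 * kl
print(float(loss))
```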
5

Li, Jingting. "Facial Micro-Expression Analysis." Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0007.

Full text
Abstract:
Micro-expressions (MEs) are very important nonverbal communication cues. However, due to their local and brief nature, spotting them is challenging. In this thesis, we address this problem with a dedicated local and temporal pattern (LTP) of facial movement. This pattern takes a specific shape (S-pattern) when an ME is displayed; using a classical classification algorithm (SVM), MEs can thus be distinguished from other facial movements. We also propose a global fusion analysis over the whole face to improve the distinction between MEs (local) and head movements (global). However, the learning of S-patterns is limited by the small number of ME databases and the low volume of ME samples. Hammerstein models (HMs) are known to be a good approximation of muscle movements. By approximating each S-pattern with an HM, we can both filter outliers and generate new, similar S-patterns. In this way, we perform data augmentation on the S-pattern training set and improve the ability to differentiate MEs from other facial movements. In the first ME spotting challenge of MEGC 2019, we took part in building the new result evaluation method; in addition, we applied our method to spotting MEs in long videos and provided the baseline result for the challenge. The spotting results, obtained on CASME I, CASME II, SAMM and CAS(ME)2, show that the proposed LTP outperforms the most popular spotting method in terms of F1-score, and adding the fusion process and data augmentation further improves spotting performance.
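The spotting stage can be sketched as follows, assuming synthetic one-dimensional motion profiles stand in for the real local temporal patterns: an SVM is trained to separate S-shaped ME-like profiles from slower movements. None of the feature generators below reproduce the thesis's actual LTP computation:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
T = 30  # frames in the local temporal window

def s_pattern(n):
    """Synthetic stand-in for an S-shaped ME profile: fast onset, brief
    apex, fast offset, plus noise."""
    t = np.linspace(-3, 3, T)
    base = 1.0 / (1.0 + np.exp(-4 * t)) * np.exp(-((t - 1.0) ** 2) / 4.0)
    return base + 0.05 * rng.normal(size=(n, T))

def other_motion(n):
    """Slower drifting motion standing in for head movement or
    macro-expressions."""
    t = np.linspace(0, 1, T)
    return np.outer(rng.uniform(0.2, 1.0, n), t) + 0.05 * rng.normal(size=(n, T))

X = np.vstack([s_pattern(200), other_motion(200)])
y = np.array([1] * 200 + [0] * 200)          # 1 = micro-expression window
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```

The Hammerstein-model augmentation described in the abstract would enlarge the positive class before this training step.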
6

Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.

Full text
Abstract:
This thesis examines the research and development of new approaches for face and facial expression recognition within the fields of computer vision and biometrics. Expression variation is a challenging issue in current face recognition systems and current approaches are not capable of recognizing facial variations effectively within human-computer interfaces, security and access control applications. This thesis presents new contributions for performing face and expression recognition simultaneously; face recognition in the wild; and facial expression recognition in challenging environments. The research findings include the development of new factor analysis and deep learning approaches which can better handle different facial variations.
7

Miao, Yu. "A Real Time Facial Expression Recognition System Using Deep Learning." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38488.

Full text
Abstract:
This thesis presents an image-based real-time facial expression recognition system that is capable of recognizing the basic facial expressions of several subjects simultaneously from a webcam. Our proposed methodology combines a supervised transfer learning strategy and a joint supervision method with a new supervision signal that is crucial for facial tasks. A convolutional neural network (CNN) model, MobileNet, which balances accuracy and speed, is deployed in both offline and real-time frameworks to enable fast and accurate real-time output. Evaluations of both offline and real-time experiments are provided in our work. The offline evaluation is carried out by first evaluating two publicly available datasets, JAFFE and CK+, and then presenting the results of the cross-dataset evaluation between these two datasets to verify the generalization ability of the proposed method. A comprehensive evaluation configuration for the CK+ dataset is given in this work, providing a baseline for fair comparison. The method reaches an accuracy of 95.24% on the JAFFE dataset, and an accuracy of 96.92% on the 6-class CK+ dataset, which contains only the last frames of image sequences. The resulting average run-time cost for recognition in the real-time implementation is approximately 3.57 ms/frame on an NVIDIA Quadro K4200 GPU. The results demonstrate that our proposed CNN-based framework for facial expression recognition, which does not require a massive preprocessing module, can not only achieve state-of-the-art accuracy on these two datasets but also perform the classification task much faster than a conventional machine learning methodology as a result of the lightweight structure of MobileNet.
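A rough Keras sketch of the kind of transfer-learning pipeline the abstract describes: a MobileNet backbone with a small classification head for expression classes. Here weights=None keeps the example self-contained (in a real transfer-learning setting one would load weights="imagenet" and freeze most of the base); the thesis's supervision signal and preprocessing are not reproduced:

```python
import tensorflow as tf

NUM_CLASSES = 7  # e.g. six basic emotions plus neutral (an assumption here)

base = tf.keras.applications.MobileNet(
    include_top=False, weights=None, input_shape=(224, 224, 3), pooling="avg")
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Smoke test on random tensors standing in for preprocessed face crops.
x = tf.random.uniform((8, 224, 224, 3))
y = tf.random.uniform((8,), maxval=NUM_CLASSES, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).shape)  # (1, NUM_CLASSES)
```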
8

Pierce, Meghan. "Facial Expression Intelligence Scale (FEIS): Recognizing and Interpreting Facial Expressions and Implications for Consumer Behavior." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26786.

Full text
Abstract:
Each time we meet a new person, we draw inferences based on our impressions. The first thing we are likely to notice is a person's face. The face functions as one source of information, which we combine with the spoken word, body language, past experience, and the context of the situation to form judgments. Facial expressions serve as pieces of information we use to understand what another person is thinking, saying, or feeling. While there is strong support for the universality of emotion recognition, the ability to identify and interpret facial expressions varies by individual. Existing scales fail to include the dynamicity of the face. Five studies are proposed to examine the viability of the Facial Expression Intelligence Scale (FEIS) to measure individual ability to identify and interpret facial expressions. Consumer behavior implications are discussed.
Ph. D.
9

Carter, Jeffrey R. "Facial expression analysis in schizophrenia." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ58398.pdf.

Full text
10

Yu, Kaimin. "Towards Realistic Facial Expression Recognition." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9459.

Full text
Abstract:
Automatic facial expression recognition has attracted significant attention over the past decades. Although substantial progress has been achieved for certain scenarios (such as frontal faces in strictly controlled laboratory settings), accurate recognition of facial expression in realistic environments remains unsolved for the most part. The main objective of this thesis is to investigate facial expression recognition in unconstrained environments. As one major problem faced by the literature is the lack of realistic training and testing data, this thesis presents a web search based framework to collect realistic facial expression dataset from the Web. By adopting an active learning based method to remove noisy images from text based image search results, the proposed approach minimizes the human efforts during the dataset construction and maximizes the scalability for future research. Various novel facial expression features are then proposed to address the challenges imposed by the newly collected dataset. Finally, a spectral embedding based feature fusion framework is presented to combine the proposed facial expression features to form a more descriptive representation. This thesis also systematically investigates how the number of frames of a facial expression sequence can affect the performance of facial expression recognition algorithms, since facial expression sequences may be captured under different frame rates in realistic scenarios. A facial expression keyframe selection method is proposed based on keypoint based frame representation. Comprehensive experiments have been performed to demonstrate the effectiveness of the presented methods.
11

de la Cruz, Nathan. "Autonomous facial expression recognition using the facial action coding system." University of the Western Cape, 2016. http://hdl.handle.net/11394/5121.

Full text
Abstract:
Magister Scientiae - MSc
The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness and surprise, as well as the neutral expression), or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System, in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, combinations of which are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole facial expression recognition with Action Unit recognition to achieve an enhanced classification approach.
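As a hedged illustration of the Action Unit route, the toy mapping below scores detected AUs against simplified versions of commonly cited FACS-based emotion prototypes using Jaccard overlap. The exact AU sets and the scoring rule are assumptions, not the hybrid classifier developed in the thesis:

```python
# Simplified, commonly cited AU combinations for the six basic emotions
# (real systems score AU intensities and allow partial matches).
EMOTION_AUS = {
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},
}

def classify_from_aus(detected):
    """Score each emotion by overlap between detected AUs and its template;
    fall back to 'neutral' when nothing overlaps."""
    detected = set(detected)
    scores = {emo: len(detected & aus) / len(aus | detected)
              for emo, aus in EMOTION_AUS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(classify_from_aus([6, 12]))        # happiness
print(classify_from_aus([1, 2, 5, 26]))  # surprise
print(classify_from_aus([]))             # neutral
```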
12

Wild-Wall, Nele. "Is there an interaction between facial expression and facial familiarity?" Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2004. http://dx.doi.org/10.18452/15042.

Full text
Abstract:
In contrast to traditional face recognition models, previous research has revealed that the recognition of facial expression and of familiarity may not be independent. This dissertation attempts to localize this interaction within the information-processing system by means of performance data and event-related potentials. Part I addressed the question of whether there is an interaction between facial familiarity and the discrimination of facial expression. Participants had to discriminate two expressions displayed on familiar and unfamiliar faces. The discrimination was faster and less error-prone for personally familiar faces displaying happiness. Results revealed a shorter peak latency for the P300 component (trend), reflecting stimulus categorization time, and for the onset of the lateralized readiness potential (S-LRP), reflecting the duration of pre-motor processes. This suggests a facilitation of perceptual stimulus categorization for personally familiar faces displaying happiness. The discrimination of expressions was not facilitated in further experiments using famous, experimentally familiarized, and unfamiliar faces. Part II raised the question of whether there is an interaction between facial expression and the discrimination of facial familiarity. In this task a facilitation was observable only for personally familiar faces displaying a neutral or happy expression, but not for experimentally familiarized or unfamiliar faces. Event-related potentials revealed a shorter S-LRP interval for personally familiar faces, suggesting a facilitated response-selection stage. In summary, the results suggest that an interaction of facial familiarity and facial expression is possible under some circumstances. Finally, the results are discussed in the context of possible interpretations, previous findings, and face recognition models.
13

Nelson, Nicole L. "A Facial Expression of Pax: Revisiting Preschoolers' "Recognition" of Expressions." Thesis, Boston College, 2011. http://hdl.handle.net/2345/2458.

Full text
Abstract:
Thesis advisor: James A. Russell
Prior research showing that children recognize emotional expressions has used a choice-from-array style task; for example, children are asked to find the fear face in an array of several expressions. However, these choice-from-array tasks allow for the use of a process of elimination strategy in which children could select an expression they are unfamiliar with when presented a label that does not apply to other expressions in the array. Across six studies (N = 144), 80% of 2- to 4-year-olds selected a novel expression when presented a target label and performed similarly when the label was novel (such as pax) or familiar (such as fear). In addition, 46% of children went on to freely label the expression with the target label in a subsequent task. These data are the first to show that children extend the process of elimination strategy to facial expressions and also call into question the findings of prior choice-from-array studies
Thesis (PhD) — Boston College, 2011
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Psychology
14

Ribeiro, João Paulo Alves. "Expressão facial da emoção: Fatores preditores da agressividade na expressão facial." Bachelor's thesis, [s.n.], 2014. http://hdl.handle.net/10284/4257.

Full text
Abstract:
Graduation project presented to Universidade Fernando Pessoa as part of the requirements for the degree of Licenciado in Criminology
The study of the facial expression of emotion in Portugal stands out, highlighting in particular the scientific work produced by Professor Freitas-Magalhães. However, the literature and the empirical evidence produced are still scant. The objective of this graduation project is not only to follow the scientific trend of applying the analysis of emotion through facial expression to different areas of knowledge, but also to deepen and disseminate this area. Thus, based on the characteristics usually associated with aggressiveness, we intend to establish a parallel, and a possible correlation, between certain markers in facial expression that act as predictors of aggression. To this end, we analyzed video footage and photographs of certain individuals in situations of confrontation with third parties (e.g. victims, judges, police officers).
15

Shang, Lifeng (尚利峰). "Facial expression analysis with graphical models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47849484.

Full text
Abstract:
Facial expression recognition has become an active research topic in recent years due to its applications in human-computer interfaces and data-driven animation. In this thesis, we focus on the problem of how to effectively use domain, temporal and categorical information of facial expressions to help computers understand human emotions. Over the past decades, many techniques (such as neural networks, Gaussian processes, support vector machines, etc.) have been applied to facial expression analysis. Recently, graphical models have emerged as a general framework for applying probabilistic models. They provide a natural framework for describing the generative process of facial expressions. However, these models often suffer from too many latent variables or too complex model structures, which makes learning and inference difficult. In this thesis, we analyze the deformation of facial expressions by introducing some recently developed graphical models (e.g. the latent topic model) and by improving the recognition ability of some widely used models (e.g. the HMM). We develop three different graphical models with different representational assumptions: categories being represented by prototypes, by sets of exemplars, and by topics in between. Our first model incorporates exemplar-based representation into graphical models. To further improve the computational efficiency of the proposed model, we build it in a local linear subspace constructed by principal component analysis. The second model is an extension of the recently developed topic model, introducing temporal and categorical information into the Latent Dirichlet Allocation (LDA) model. In our discriminative temporal topic model (DTTM), temporal information is integrated by placing an asymmetric Dirichlet prior over document-topic distributions, and the discriminative ability is improved by a supervised term-weighting scheme. We describe the resulting DTTM in detail and show how it can be applied to facial expression recognition. Our third model is a nonparametric discriminative variation of the HMM. An HMM can be viewed as a prototype model, with the transition parameters acting as the prototype for one category. To increase the discrimination ability of the HMM at both class level and state level, we introduce linear interpolation with maximum entropy (LIME) and membership coefficients to the HMM. Furthermore, we present a general formula for output probability estimation, which provides a way to develop new HMMs. Experimental results show that the performance of some existing HMMs can be improved by integrating the proposed nonparametric kernel method and parameter adaptation formula. In conclusion, this thesis develops three different graphical models by (i) combining exemplar-based models with graphical models, (ii) introducing temporal and categorical information into the Latent Dirichlet Allocation (LDA) topic model, and (iii) increasing the discrimination ability of the HMM at both hidden-state level and class level.
Computer Science
Doctoral
Doctor of Philosophy
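For readers unfamiliar with the HMM-as-prototype view mentioned in the abstract above, a minimal scaled forward algorithm is sketched below, with two toy per-class HMMs compared by log-likelihood. The LIME interpolation and membership coefficients proposed in the thesis are not reproduced; all parameters are illustrative:

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-emission
    HMM with start probabilities pi, transitions A and emissions B."""
    alpha = pi * B[:, obs[0]]
    log_like = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()          # rescale to avoid numerical underflow
        log_like += np.log(s)
        alpha /= s
    return log_like

# Two toy per-class HMMs acting as expression "prototypes"; a sequence is
# assigned to the class whose model explains it best.
pi = np.array([0.9, 0.1])
A1 = np.array([[0.9, 0.1], [0.1, 0.9]])   # slow state changes
A2 = np.array([[0.5, 0.5], [0.5, 0.5]])   # fast state changes
B = np.array([[0.8, 0.2], [0.2, 0.8]])
seq = [0, 0, 0, 1, 1, 1]                  # a "slow" observation sequence
scores = {"class1": hmm_log_likelihood(seq, pi, A1, B),
          "class2": hmm_log_likelihood(seq, pi, A2, B)}
print(max(scores, key=scores.get), scores)
```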
16

Fraser, Matthew Paul. "Repetition priming of facial expression recognition." Thesis, University of York, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431255.

Full text
17

Hsu, Shen-Mou. "Adaptation effects in facial expression recognition." Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403968.

Full text
18

Zalewski, Lukasz. "Statistical modelling for facial expression dynamics." Thesis, Queen Mary, University of London, 2012. http://qmro.qmul.ac.uk/xmlui/handle/123456789/2518.

Full text
Abstract:
One of the most powerful and fastest means of relaying emotions between humans is the facial expression. The ability to capture, understand and mimic those emotions and their underlying dynamics in a synthetic counterpart is a challenging task because of the complexity of human emotions, the different ways of conveying them, the non-linearities caused by facial feature and head motion, and the ever-critical eye of the viewer. This thesis sets out to address some of the limitations of existing techniques by investigating three components of an expression modelling and parameterisation framework: (1) feature and expression manifold representation, (2) pose estimation, and (3) expression dynamics modelling and their parameterisation for the purpose of driving a synthetic head avatar. First, we introduce a hierarchical representation based on the Point Distribution Model (PDM). Holistic representations imply that non-linearities caused by the motion of facial features, and intra-feature correlations, are implicitly embedded and hence have to be accounted for in the resulting expression space; such representations also require large training datasets to account for all possible variations. To address those shortcomings, and to provide a basis for learning more subtle, localised variations, our representation consists of a tree-like structure in which a holistic root component is decomposed into leaves containing the jaw outline, each of the eyes and eyebrows, and the mouth. Each of the hierarchical components is modelled according to its intrinsic functionality, rather than the final, holistic expression label. Secondly, we introduce a statistical approach for capturing an underlying low-dimensional expression manifold by utilising components of the previously defined hierarchical representation. As Principal Component Analysis (PCA) based approaches cannot reliably capture variations caused by large facial feature changes because of their linear nature, the underlying dynamics manifold for each of the hierarchical components is modelled using a Hierarchical Latent Variable Model (HLVM) approach. Whilst retaining PCA properties, such a model introduces a probability density model which can deal with missing or incomplete data and allows discovery of internal within-cluster structures. All of the model parameters and the underlying density model are automatically estimated during the training stage. We investigate the usefulness of such a model on larger and unseen datasets. Thirdly, we extend the concept of the HLVM to pose estimation, to address the non-linear shape deformations and the definition of a plausible pose space caused by large head motion. Since our head rarely stays still, and its movements are intrinsically connected with the way we perceive and understand expressions, pose information is an integral part of their dynamics. The proposed approach integrates into our existing hierarchical representation model. It is learned using a sparse and discretely sampled training dataset, and generalises to a larger and continuous view-sphere. Finally, we introduce a framework that models and extracts expression dynamics. In existing frameworks, an explicit definition of expression intensity and pose information is often overlooked, although it is usually implicitly embedded in the underlying representation. We investigate modelling of expression dynamics based on the use of static information only, and focus on its sufficiency for the task at hand. We compare a rule-based method that utilises the existing latent structure and provides a fusion of different components with holistic and Bayesian Network (BN) approaches. An Active Appearance Model (AAM) based tracker is used to extract relevant information from input sequences. Such information is subsequently used to define the parametric structure of the underlying expression dynamics. We demonstrate that such information can be utilised to animate a synthetic head avatar.
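The hierarchical representation can be sketched as a holistic PCA 'root' plus independent PCA 'leaves' fitted to facial-component sub-vectors. The landmark groupings and dimensions below are hypothetical placeholders, and plain PCA stands in for the HLVM used in the thesis:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
N = 200                                   # training shapes
shapes = rng.normal(size=(N, 136))        # 68 (x, y) landmarks, flattened

# Hypothetical index groups for the hierarchical leaves; real groupings
# would follow the landmark annotation scheme used for training.
components = {
    "jaw":   slice(0, 34),
    "eyes":  slice(34, 94),
    "mouth": slice(94, 136),
}

# One holistic root model plus one local model per facial component, each
# trained on its own sub-vector rather than on the holistic expression label.
root = PCA(n_components=10).fit(shapes)
leaves = {name: PCA(n_components=5).fit(shapes[:, idx])
          for name, idx in components.items()}

probe = shapes[0]
encoding = {name: pca.transform(probe[idx].reshape(1, -1))[0]
            for (name, idx), pca in zip(components.items(), leaves.values())}
print({k: v.shape for k, v in encoding.items()})
```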
19

Harris, Richard J. "The neural representation of facial expression." Thesis, University of York, 2012. http://etheses.whiterose.ac.uk/3990/.

Full text
Abstract:
Faces provide information critical for effective social interactions. A face can be used to determine who someone is, where they are looking and how they are feeling. How these different aspects of a face are processed has proved a popular topic of research over the last 25 years. However, much of this research has focused on the perception of facial identity and as a result less is known about how facial expression is represented in the brain. For this reason, the primary aim of this thesis was to explore the neural representation of facial expression. First, this thesis investigated which regions of the brain are sensitive to expression and how these regions represent facial expression. Two regions of the brain, the posterior superior temporal sulcus (pSTS) and the amygdala, were more sensitive to changes in facial expression than identity. There was, however, a dissociation between how these regions represented information about facial expression. The pSTS was sensitive to any change in facial expression, consistent with a continuous representation of expression. In comparison, the amygdala was only sensitive to changes in expression that resulted in a change in the emotion category. This reflects a more categorical response in which expressions are assigned into discrete categories of emotion. Next, the representation of expression was further explored by asking what information from a face is used in the perception of expression. Photographic negation was used to disrupt the surface-based facial cues (i.e. pattern of light and dark across the face) while preserving the shape-based information carried by the features of the face. This manipulation had a minimal effect on judgements of expression, highlighting the important role of the shape-based information in judgements of expression. Furthermore, combining the photo negation technique with fMRI demonstrated that the representation of faces in the pSTS was predominately based on feature shape information. Finally, the influence of facial identity on the neural representation of facial expression was measured. The pSTS, but not the amygdala, was most responsive to changes in facial expression when the identity of the face remained the same. It was found that this sensitivity to facial identity in the pSTS was a result of interactions with regions thought to be involved in the processing of facial identity. In this way identity information can be used to process expression in a socially meaningful way.
20

Yasuda, Maiko. "Color and facial expressions." 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1447610.

Full text
21

Andréasson, Per. "Emotional Empathy, Facial Reactions, and Facial Feedback." Doctoral thesis, Uppsala universitet, Institutionen för psykologi, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126825.

Full text
Abstract:
The human face has a fascinating capability to express emotions. The facial feedback hypothesis suggests that the human face not only expresses emotions but is also able to send feedback to the brain and modulate the ongoing emotional experience. It has furthermore been suggested that this feedback from the facial muscles could be involved in empathic reactions. This thesis explores the concept of emotional empathy and relates it to two aspects of activity in the facial muscles. First, do people high versus low in emotional empathy differ in the degree to which they spontaneously mimic emotional facial expressions? Second, is there any difference between people with high and low emotional empathy in how sensitive they are to feedback from their own facial muscles? Regarding the first question, people with high emotional empathy were found to spontaneously mimic pictures of emotional facial expressions, while people with low emotional empathy lacked this mimicking reaction. The answer to the second question is a bit more complicated. People with low emotional empathy were found to rate humorous films as funnier in a manipulated sulky facial expression than in a manipulated happy facial expression, whereas people with high emotional empathy did not react significantly. On the other hand, when the facial manipulations were a smile and a frown, people with low as well as high emotional empathy reacted in line with the facial feedback hypothesis. In conclusion, the experiments in the present thesis indicate that mimicking and feedback from the facial muscles may be involved in emotional contagion and thereby influence emotional empathic reactions. Thus, differences in emotional empathy may in part be accounted for by different degrees of mimicking reactions and different emotional effects of feedback from the facial muscles.
22

Sloan, Robin J. S. "Emotional avatars : choreographing emotional facial expression animation." Thesis, Abertay University, 2011. https://rke.abertay.ac.uk/en/studentTheses/2363eb4a-2eba-4f94-979f-77b0d6586e94.

Full text
Abstract:
As a universal element of human nature, the experience, expression, and perception of emotions permeate our daily lives. Many emotions are thought to be basic and common to all humanity, irrespective of social or cultural background. Of these emotions, the corresponding facial expressions of a select few are known to be truly universal, in that they can be identified by most observers without the need for training. Facial expressions of emotion are subsequently used as a method of communication, whether through close face-to-face contact, or the use of emoticons online and in mobile texting. Facial expressions are fundamental to acting for stage and screen, and to animation for film and computer games. Expressions of emotion have been the subject of intense experimentation in psychology and computer science research, both in terms of their naturalistic appearance and the virtual replication of facial movements. From this work much is known about expression universality, anatomy, psychology, and synthesis. Beyond the realm of scientific research, animation practitioners have scrutinised facial expressions and developed an artistic understanding of movement and performance. However, despite the ubiquitous quality of facial expressions in life and research, our understanding of how to produce synthetic, dynamic imitations of emotional expressions which are perceptually valid remains somewhat limited. The research covered in this thesis sought to unite an artistic understanding of expression animation with scientific approaches to facial expression assessment. Acting as both an animation practitioner and as a scientific researcher, the author set out to investigate emotional facial expression dynamics, with the particular aim of identifying spatio-temporal configurations of animated expressions that not only satisfied artistic judgement, but which also stood up to empirical assessment. These configurations became known as emotional expression choreographies. The final work presented in this thesis covers the performative, practice-led research into emotional expression choreography, the results of empirical experimentation (where choreographed animations were assessed by observers), and the findings of qualitative studies (which painted a more detailed picture of the potential context of choreographed expressions). The holistic evaluation of expression animation from these three epistemological perspectives indicated that emotional expressions can indeed be choreographed in order to create refined performances which have empirically measurable effects on observers, and which may be contextualised by the phenomenological interpretations of both student animators and general audiences.
23

Correa, Renata. "Animação facial por computador baseada em modelagem biomecanica." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259447.

Full text
Abstract:
Advisors: Leo Pini Magalhães, Jose Mario De Martino
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
The increasing search for realism in the virtual characters found in many applications, such as movies, education and games, is the motivation of this thesis. The thesis describes an animation model that employs a biomechanical strategy in the development of a computational prototype called SABiom. The method is based on simulating physical features of the human face, such as the skin layers and muscles, which are modelled to allow simulation of the mechanical behaviour of the facial tissue under the action of muscle forces. Although a face produces many movements, the current work limits itself to simulations of facial expressions focusing on the lips. To validate the results obtained with SABiom, we compared images of the virtual model, produced by the developed prototype, with images of a human model.
Master's
Computer Engineering
Master in Electrical Engineering
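The biomechanical idea, skin modelled as spring-connected masses deformed by muscle forces, can be caricatured in one dimension. The sketch below integrates a strip of nodes under a constant 'muscle' pull with explicit Euler; all constants are illustrative, and SABiom's layered tissue model is far richer:

```python
import numpy as np

# Minimal mass-spring sketch of a 1D strip of facial tissue: nodes joined
# by springs, a "muscle" force pulling the free end, explicit Euler steps.
n, k, c, m, dt = 10, 50.0, 2.0, 0.01, 1e-3
rest = 0.1                               # spring rest length
x = np.arange(n) * rest                  # node positions
v = np.zeros(n)

for step in range(2000):
    f = np.zeros(n)
    stretch = np.diff(x) - rest          # deviation from rest length
    f[:-1] += k * stretch                # spring pulls node toward neighbour
    f[1:]  -= k * stretch
    f -= c * v                           # damping
    f[-1] += 0.5                         # constant muscle force on free end
    f[0] = 0.0                           # first node anchored to "bone"
    v += dt * f / m
    v[0] = 0.0
    x += dt * v

print("displacement of free end:", x[-1] - (n - 1) * rest)
```

At equilibrium each spring carries the full muscle tension, so the free end settles at a displacement of roughly (n - 1) * 0.5 / k here.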
24

Bannerman, Rachel L. "Orienting to emotion : a psychophysical approach." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=59429.

Full text
25

Kaufmann, Jurgen Michael. "Interactions between the processing of facial identity, emotional expression and facial speech." Thesis, University of Glasgow, 2002. http://theses.gla.ac.uk/3110/.

Full text
Abstract:
The experiments investigate the functional relationship between the processing of facial identity, emotional expression and facial speech. They were designed to further explore a widely accepted model of parallel, independent face perception components (Bruce and Young, 1986), which has been challenged recently (e.g. Walker et al., 1995; Yakel et al., 2000; Schweinberger et al., 1998; Schweinberger et al., 1999). In addition to applying a selective attention paradigm (Garner, 1974; 1976), dependencies between face-related processes are explored by morphing, a digital graphic editing technique which allows for the selective manipulation of facial dimensions, and by studying the influence of face familiarity on the processing of emotional expression and speechreading. The role of dynamic information for speechreading (lipreading) is acknowledged by investigating the influence of natural facial speech movements on the integration of identity-specific talker information and facial speech cues. As for the relationship between the processing of facial identity and emotional expression, overall the results are in line with the notion of independent parallel routes. Recent findings of an "asymmetric interaction" between the two dimensions in the selective attention paradigm, in the sense that facial identity can be processed independently from expression but not vice versa (Schweinberger et al., 1998; Schweinberger et al., 1999), could not be unequivocally corroborated. Critical factors for the interpretation of results based on the selective attention paradigm when used with complex stimuli such as faces are outlined and tested empirically. However, the experiments do give evidence that stored facial representations might be less abstract than previously thought and might preserve some information about typical expressions. The results indicate that classifications of unfamiliar faces are not influenced by emotional expression, while familiar faces are recognized fastest with certain expressions.
26

Lee, Douglas Spencer. "Facial action determinants of pain judgment." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25812.

Full text
Abstract:
Nonverbal indices of pain are among the least researched sources of data for assessing pain. The extensive literature on the communicative functions of nonverbal facial expressions suggests that there is potentially much information to be gained in studying facial expressions associated with pain. Results from two studies support the position that facial expressions related to pain may indeed be a source of information for pain assessment. A review of the literature found several studies indicating that judges could make discriminations amongst levels of discomfort from viewing a person's facial expressions. Other studies found that the occurrence of a small set of facial movements could be used to discriminate amongst several levels of self-reported discomfort. However, there was no research directly addressing the question of whether judges' ratings would vary in response to different patterns of the identified facial movements. Issues regarding the facial cues used by naive judges in making ratings of another person's discomfort were investigated. Four hypotheses were developed. From prior research using the Facial Action Coding System (FACS) (Ekman & Friesen, 1978), a small set of facial muscle movements, termed Action Units (AUs), were found to be the best facial movements for discriminating amongst different levels of pain. The first hypothesis was that increasing the number of AUs per expression would lead to increased ratings of discomfort. The second hypothesis was that video segments with the AUs portrayed simultaneously would be rated higher than segments with the same AUs portrayed in a sequential configuration. Four encoders portrayed all configurations. The configurations were randomly edited onto video tape and presented to the judges. The judges used the scale of affective discomfort developed by Gracely, McGrath, and Dubner (1978). Twenty-five male and 25 female university students volunteered as judges. The results supported both hypotheses. Increasing the number of AUs per expression led to a sharp rise in judges' ratings. Video segments of overlapping AU configurations were rated higher than segments with non-overlapping configurations. Female judges always rated higher than male judges. The second study was methodologically similar to the first. The major hypothesis was that expressions with only upper-face AUs would be rated as more often indicating attempts to hide an expression than lower-face expressions. This study contained a subset of expressions identical to ones used in the first study, which allowed for testing of the fourth hypothesis, namely that the ratings of this subset of expressions would differ between the studies due to differences in the judgment conditions. Both hypotheses were again supported. Upper-face expressions were more often judged as portraying attempts by the encoders to hide their expressions. Analysis of the fourth hypothesis revealed that the expressions were rated higher in study 2 than in study 1. A sex-of-judge by judgment-condition interaction indicated that females rated higher in study 1 but males rated higher in study 2. The results from these studies indicated that the nonverbal communication of facial expressions of pain is defined by a number of parameters which led judges to alter their ratings depending on the parameters of the facial expressions being viewed. While studies of the micro-behavioral aspects of facial expressions are new, the present studies suggest that such research is integral to understanding the complex communication functions of nonverbal facial expressions.
Arts, Faculty of
Psychology, Department of
Graduate
27

Ersotelos, Nikolaos. "Highly automated method for facial expression synthesis." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4524.

Full text
Abstract:
The synthesis of realistic facial expressions has been a largely unexplored area for computer graphics scientists. Over the last three decades, several different construction methods have been formulated in order to obtain natural graphic results. Despite these advancements, current techniques still require costly resources, heavy user intervention and specific training, and their outcomes are still not completely realistic. This thesis therefore aims to achieve an automated synthesis that produces realistic facial expressions at a low cost. It proposes a highly automated approach to realistic facial expression synthesis, which allows for enhanced performance in speed (3 minutes maximum processing time) and quality with a minimum of user intervention. It also demonstrates a highly automated method of facial feature detection, allowing users to obtain their desired facial expression synthesis with minimal physical input. Moreover, it describes a novel approach to normalizing the illumination settings between source and target images, thereby allowing the algorithm to work accurately even in different lighting conditions. Finally, we present the results obtained from the proposed techniques, together with our conclusions, at the end of the thesis.
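One simple way to approximate the source-to-target illumination normalization mentioned above is moment matching on luminance: shift and scale the source so its mean and standard deviation agree with the target's. This is a hedged stand-in, not the thesis's algorithm:

```python
import numpy as np

def match_illumination(source, target):
    """Shift and scale the source luminance so its mean and standard
    deviation match the target's; a crude stand-in for normalizing the
    illumination settings between source and target images."""
    s_mu, s_sd = source.mean(), source.std() + 1e-8
    t_mu, t_sd = target.mean(), target.std()
    out = (source - s_mu) / s_sd * t_sd + t_mu
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(4)
src = rng.uniform(0.2, 0.6, size=(64, 64))   # dim source face region
tgt = rng.uniform(0.4, 1.0, size=(64, 64))   # brighter target face region
adj = match_illumination(src, tgt)
print(round(adj.mean(), 3), round(tgt.mean(), 3))  # now comparable
```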
28

Wang, Shihai. "Boosting learning applied to facial expression recognition." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511940.

Full text
29

Besel, Lana Diane Shyla. "Empathy : the role of facial expression recognition." Thesis, University of British Columbia, 2006. http://hdl.handle.net/2429/30730.

Full text
Abstract:
This research examined whether people with higher dispositional empathy are better at recognizing facial expressions of emotion at faster presentation speeds. Facial expressions of emotion, taken from Pictures of Facial Affect (Ekman & Friesen, 1976), were presented at two different durations: 47 ms and 2008 ms. Participants were 135 undergraduate students. They identified the emotion displayed in the expression from a list of the basic emotions. The first part of this research explored connections between expression recognition and the common cognitive empathy/emotional empathy distinction. Two factors from the Interpersonal Reactivity Scale (IRS; Davis, 1983) measured self-reported tendencies to experience cognitive empathy and emotional empathy: Perspective Taking (IRS-PT) and Empathic Concern (IRS-EC), respectively. Results showed that emotional empathy significantly positively predicted performance at 47 ms, but not at 2008 ms, and that cognitive empathy did not significantly predict performance at either duration. The second part examined empathy deficits. The kinds of emotional empathy deficits that comprise psychopathy were measured by the Self-Report Psychopathy Scale (SRP-III; Paulhus, Hemphill, & Hare, in press). Cognitive empathy deficits were explored using the Empathy Quotient (EQ; Shaw et al., 2004). Results showed that the callous affect factor of the SRP (SRP-CA) was the only significant predictor at 47 ms, with higher callous affect scores associated with lower performance. SRP-CA is a deficit in emotional empathy, and thus these results match those of the first part. At 2008 ms, the social skills factor of the EQ was significantly positively predictive, indicating that people with less social competence had more trouble recognizing facial expressions at longer presentation durations. Neither the total scores for the SRP nor the EQ were significant predictors of identification accuracy at 47 ms or 2008 ms. Together, the results suggest that a disposition to react emotionally to the emotions of others, and to remain other-focussed, provides a specific advantage for accurately recognizing briefly presented facial expressions, compared to people with lower dispositional emotional empathy.
Faculty of Arts, Department of Psychology, Graduate
30

Santos, Patrick John. "Facial Expression Cloning with Fuzzy Membership Functions." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/26260.

Full text
Abstract:
This thesis describes the development and experimental results of a system to explore cloning of facial expressions between dissimilar face models, so new faces can be animated using the animations from existing faces. The system described in this thesis uses fuzzy membership functions and subtractive clustering to represent faces and expressions in an intermediate space. This intermediate space allows expressions for face models with different resolutions to be compared. The algorithm is trained for each pair of faces using particle swarm optimization, which selects appropriate weights and radii to construct the intermediate space. These techniques allow the described system to be more flexible than previous systems, since it does not require prior knowledge of the underlying implementation of the face models to function.
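As a rough sketch of the kind of intermediate representation the abstract describes, the snippet below computes Gaussian memberships of mesh vertices around cluster centres. The centres and radii are assumed inputs here (the thesis derives centres via subtractive clustering and tunes weights and radii with particle swarm optimization), so this is an illustration of the idea rather than the thesis's exact formulation:

```python
import numpy as np

def gaussian_membership(vertices, centers, radii):
    """Map face-mesh vertices into an intermediate fuzzy space.

    vertices: (N, 3) mesh points; centers: (K, 3) cluster centres;
    radii: (K,) per-cluster spread. Returns an (N, K) membership
    matrix with values in [0, 1], independent of mesh resolution.
    """
    d2 = ((vertices[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * radii[None, :] ** 2))
```

Because two faces with different vertex counts both map to a K-dimensional membership space, their expressions become directly comparable, which is the property the thesis exploits for cloning.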
31

Fan, Xijian. "Spatio-temporal framework on facial expression recognition." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/88732/.

Full text
Abstract:
This thesis presents an investigation into two topics that are important in facial expression recognition: how to employ the dynamic information from facial expression image sequences, and how to efficiently extract context and other relevant information from different facial regions. This involves the development of spatio-temporal frameworks for recognising facial expression. The thesis proposes three novel frameworks. The first framework uses sparse representation to extract features from patches of a face to improve recognition performance, applying part-based methods that are robust to image misalignment. In addition, the use of sparse representation reduces the dimensionality of the features, improves their semantic meaning and represents a face image more efficiently. Since a facial expression involves a dynamic process, and that process contains information that describes the expression more effectively, it is important to capture such dynamic information so as to recognise facial expressions over the entire video sequence. Thus, the second framework uses two types of dynamic information to enhance recognition: a novel spatio-temporal descriptor based on PHOG (pyramid histogram of gradient) to represent changes in facial shape, and dense optical flow to estimate the movement (displacement) of facial landmarks. The framework views an image sequence as a spatio-temporal volume and uses temporal information to represent the dynamic movement of facial landmarks associated with a facial expression. Specifically, a spatial descriptor representing local shape is extended to the spatio-temporal domain to capture changes in the local shape of facial sub-regions along the temporal dimension, giving 3D facial component sub-regions of the forehead, mouth, eyebrows and nose. An optical flow descriptor is also employed to extract temporal information. The fusion of these two descriptors enhances the dynamic information and achieves better performance than either descriptor individually. The third framework also focuses on analysing the dynamics of facial expression sequences to represent spatio-temporal dynamic information (i.e., velocity). Two types of features are generated: a spatio-temporal shape representation to enhance the local spatial and dynamic information, and a dynamic appearance representation. In addition, an entropy-based method is introduced to provide the spatial relationship of different parts of a face by computing the entropy value of its sub-regions.
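To make the two dynamic descriptors concrete, the sketch below pairs dense optical flow (OpenCV's Farnebäck method) with a plain HOG descriptor as a stand-in for the thesis's PHOG variant; the function name, parameters and the flow-magnitude feature are illustrative assumptions, not the thesis's implementation:

```python
import cv2
import numpy as np
from skimage.feature import hog

def frame_dynamics(prev_gray, next_gray):
    """Per-frame dynamic features for a pair of consecutive grayscale
    frames: dense optical flow magnitude (landmark/pixel displacement)
    plus a HOG shape descriptor of the current frame."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)          # (H, W) displacement
    shape = hog(next_gray, orientations=8,
                pixels_per_cell=(16, 16), cells_per_block=(1, 1))
    return np.concatenate([magnitude.ravel(), shape])
```

Concatenating (fusing) the two feature groups per frame, and stacking them over the sequence, mirrors the descriptor-fusion idea the abstract credits with the performance gain.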
32

Zhou, Yun. "Embedded Face Detection and Facial Expression Recognition." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/583.

Full text
Abstract:
Face detection has been applied in many fields such as surveillance, human-machine interaction, entertainment and health care. Two main reasons for the extensive attention on this research domain are: 1) a strong need for face recognition systems due to widespread security applications, and 2) face recognition is user-friendly and fast, since it requires almost no effort from the user. The system is built on an ARM Cortex-A8 development board and includes porting the Linux operating system, developing drivers, and detecting faces using Haar-like features and the Viola-Jones algorithm. The face detection system uses the AdaBoost algorithm to detect human faces in the frames captured by the camera. The thesis also compares the pros and cons of several popular image processing algorithms. The facial expression recognition system involves face detection and emotion feature interpretation, and consists of an offline training part and an online test part. Active shape models (ASM) for facial feature point detection, optical flow for face tracking, and support vector machines (SVM) for classification are applied in this research.
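A minimal sketch of the Viola-Jones detection step described above, using OpenCV's stock frontal-face Haar cascade. The embedded ARM/Linux specifics of the thesis are not reproduced; the camera index and detection parameters are assumptions:

```python
import cv2

# Haar-cascade (Viola-Jones / AdaBoost) face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # camera index is an assumption
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                 # mark each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```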
33

Feffer, Michael A. (Michael Anthony). "Personalized machine learning for facial expression analysis." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119763.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 35-36).
For this MEng Thesis Project, I investigated the personalization of deep convolutional networks for facial expression analysis. While prior work focused on population-based ("one-size-fits-all") models for prediction of affective states (valence/arousal), I constructed personalized versions of these models to improve upon state-of-the-art general models through solving a domain adaptation problem. This was done by starting with pre-trained deep models for face analysis and fine-tuning the last layers to specific subjects or subpopulations. For prediction, a "mixture of experts" (MoE) solution was employed to select the proper outputs based on the given input. The research questions answered in this project are: (1) What are the effects of model personalization on the estimation of valence and arousal from faces? (2) What is the amount of (un)supervised data needed to reach a target performance? Models produced in this research provide the foundation of a novel tool for personalized real-time estimation of target metrics.
by Michael A. Feffer.
M. Eng.
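A hedged sketch of the personalization idea the abstract describes: start from a pre-trained backbone and fine-tune only the final layer on one subject's data for valence/arousal regression. The backbone choice (a generic ResNet-18 stand-in), head size and hyperparameters below are assumptions, not the thesis's actual models:

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic pre-trained backbone as a stand-in for a population-level model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # freeze shared layers
model.fc = nn.Linear(model.fc.in_features, 2)    # valence, arousal head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def personalize_step(images, targets):
    """One fine-tuning step on a single subject's (images, targets) batch;
    only the new head is updated, adapting the model to that subject."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training one such head per subject (or subpopulation) and routing inputs to the right head is the essence of the mixture-of-experts arrangement the abstract mentions.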
34

Tang, Wing Hei Iris. "Facial expression recognition for a sociable robot." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/46467.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 53-54).
In order to develop a sociable robot that can operate in the social environment of humans, we need to develop a robot system that can recognize the emotions of the people it interacts with and can respond to them accordingly. In this thesis, I present a facial expression system that recognizes the facial features of human subjects in an unsupervised manner and interprets the facial expressions of the individuals. The facial expression system is integrated with an existing emotional model for the expressive humanoid robot, Mertz.
by Wing Hei Iris Tang.
M.Eng.
35

Schulze, Martin Michael. "Facial expression recognition with support vector machines." [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10952963.

Full text
36

Ainsworth, Kirsty. "Facial expression recognition and the autism spectrum." Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/8287/.

Full text
Abstract:
An atypical recognition of facial expressions of emotion is thought to be part of the characteristics associated with an autism spectrum disorder diagnosis (DSM-5, 2013). However, despite over three decades of experimental research into facial expression recognition (FER) in autism spectrum disorder (ASD), conflicting results are still reported (Harms, Martin, and Wallace, 2010). The thesis presented here aims to explore FER in ASD using novel techniques, as well as assessing the contribution of a co-occurring emotion-blindness condition (alexithymia) and autism-like personality traits. Chapter 1 provides a review of the current literature surrounding emotion perception in ASD, focussing specifically on evidence for, and against, atypical recognition of facial expressions of emotion in ASD. The experimental chapters presented in this thesis (Chapters 2, 3 and 4) explore FER in adults with ASD, children with ASD and in the wider, typical population. In Chapter 2, a novel psychophysics method is presented along with its use in assessing FER in individuals with ASD. Chapter 2 also presents a research experiment in adults with ASD, indicating that FER is similar compared to typically developed (TD) adults in terms of the facial muscle components (action units; AUs), the intensity levels and the timing components utilised from the stimuli. In addition to this, individual differences within groups are shown, indicating that better FER ability is associated with lower levels of ASD symptoms in adults with ASD (measured using the ADOS; Lord et al. (2000)) and lower levels of autism-like personality traits in TD adults (measured using the Autism-Spectrum Quotient; (S. Baron-Cohen, Wheelwright, Skinner, Martin, and Clubley, 2001)). Similarly, Chapter 3 indicates that children with ASD are not significantly different from TD children in their perception of facial expressions of emotion as assessed using AU, intensity and timing components. Chapter 4 assesses the contribution of alexithymia and autism-like personality traits (AQ) to FER ability in a sample of individuals from the typical population. This chapter provides evidence against the idea that alexithymia levels predict FER ability over and above AQ levels. The importance of the aforementioned results is discussed in Chapter 5 in the context of previous research in the field, and in relation to established theoretical approaches to FER in ASD. In particular, arguments are made that FER cannot be conceptualised under an ‘all-or-nothing’ framework, which has been implied for a number of years (Harms et al., 2010). Instead it is proposed that FER is a multifaceted skill in individuals with ASD, which varies according to an individual’s skillset. Lastly, limitations of the research presented in this thesis are discussed in addition to suggestions for future research.
37

Mistry, Kamlesh. "Intelligent facial expression recognition with unsupervised facial point detection and evolutionary feature optimization." Thesis, Northumbria University, 2016. http://nrl.northumbria.ac.uk/36011/.

Full text
Abstract:
Facial expression is one of the most effective channels for conveying emotions and feelings. Many shape-based, appearance-based or hybrid methods for automatic facial expression recognition have been proposed. However, it is still a challenging task to identify emotions from facial images with scaling differences, pose variations and occlusions. In addition, it is also difficult to identify the significant discriminating facial features that could represent the characteristics of each expression, because of the subtlety and variability of facial expressions. In order to deal with the above challenges, this research proposes two novel approaches: unsupervised facial point detection, and texture-based facial expression recognition with feature optimisation. First, unsupervised automatic facial point detection, integrated with regression-based intensity estimation for facial Action Units (AUs) and emotion clustering, is proposed to deal with challenges such as scaling differences, pose variations and occlusions. The proposed facial point detector can detect 54 facial points in images of faces with occlusions, pose variations and scaling differences. AU intensity estimation is conducted using support vector regression and neural networks for 18 selected AUs, and fuzzy C-means (FCM) clustering is subsequently employed to recognise seven basic emotions as well as neutral expressions. It also shows great potential for detecting compound and newly arriving novel emotion classes. The second proposed system focuses on a texture-based approach to facial expression recognition, proposing a novel variant of the local binary pattern for discriminative feature extraction and Particle Swarm Optimization (PSO)-based feature optimisation. Multiple classifiers are applied to recognise seven facial expressions. Finally, evaluations are conducted to show the efficiency of the two proposed systems. Evaluated on well-known facial databases (Helen, Labelled Faces in the Wild, PUT and CK+), the proposed unsupervised facial point detector outperforms other supervised landmark detection models dramatically and shows excellent robustness and capability in dealing with rotations, occlusions and illumination changes. Moreover, a comprehensive evaluation is also conducted for the proposed texture-based facial expression recognition with mGA-embedded PSO feature optimisation. Evaluated on the CK+ and MMI benchmark databases, the experimental results indicate that it outperforms other state-of-the-art metaheuristic search methods and the facial emotion recognition research reported in the literature by a significant margin.
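For orientation, a baseline using the standard uniform LBP is sketched below. The thesis proposes a novel LBP variant plus PSO-based feature optimisation, neither of which is reproduced here, so treat this as the generic texture-descriptor recipe only:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, radius=1, points=8):
    """Texture descriptor from the standard uniform LBP: encode each
    pixel's neighbourhood as a binary code, then histogram the codes.
    Uniform LBP with P sampling points yields P + 2 distinct codes."""
    codes = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist
```

In a full pipeline, such histograms would typically be computed per facial sub-region and concatenated before feature selection and classification.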
38

Meyer, Eric C. "A visual scanpath study of facial affect recognition in schizotypy and social anxiety." Diss., Online access via UMI, 2005.

Find full text
39

Lin, Alice J. "THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/841.

Full text
Abstract:
Facial expression and animation are important aspects of the 3D environment featuring human characters. These animations are frequently used in many kinds of applications and there have been many efforts to increase the realism. Three aspects are still stimulating active research: the detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on the above three aspects. A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance, and allows users to directly manipulate it and see immediate results. Two unique methods for generating real-time, vivid, and animated tears have been developed and implemented. One method is for generating a teardrop that continually changes its shape as the tear drips down the face. The other is for generating a shedding tear, which is a kind of tear that seamlessly connects with the skin as it flows along the surface of the face, but remains an individual object. The methods both broaden CG and increase the realism of facial expressions. A new method to automatically set the bones on facial/head models to speed up the rigging process of a human face is also developed. To accomplish this, vertices that describe the face/head as well as relationships between each part of the face/head are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face with multi-density, the mean value of the vertices in a group is measured. The time saved with this method is significant. A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. The displacement vectors are calculated. Each vertex in the source model is mapped to the target model. The spatial relationships of each mapped vertex are constrained.
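The expression-transfer step can be illustrated with per-vertex displacement vectors once the source and target meshes share a topology, which the abstract says is obtained by transforming the source model into the target model. The sketch below is a minimal version under that assumption and omits the spatial-relationship constraints the thesis applies:

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral):
    """Map an expression onto a new face via per-vertex displacements.

    All inputs are (N, 3) vertex arrays assumed to share one topology
    and a consistent vertex ordering (correspondence is assumed to have
    been established beforehand)."""
    displacement = src_expr - src_neutral     # how each vertex moved
    return tgt_neutral + displacement         # apply the motion to target
```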
40

Stefani, Fabiane Miron. "Estudo eletromiográfico do padrão de contração muscular da face de adultos." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/5/5160/tde-19112008-162050/.

Full text
Abstract:
A motricidade orofacial é a especialidade da Fonoaudiologia, que tem como objetivo a prevenção, diagnóstico e tratamento das alterações miofuncionais do sistema estomatognático. Atualmente, muitos pesquisadores desta área, nacional e internacionalmente, têm buscado metodologias mais objetivas de avaliação e conduta. Dentre tais aparatos está a eletromiografia de superfície (EMG). A EMG é a medida da atividade elétrica de um músculo. Os objetivos deste trabalho foram o de identificar, por meio da EMG, a atividade elétrica dos músculos faciais de adultos saudáveis durante movimentos faciais normalmente utilizados terapeuticamente na clínica fonoaudiológica, para identificar o papel de cada músculo durante os movimentos e para diferenciar a atividade elétrica destes músculos nestes mesmos movimentos, bem como avaliar a validade da EMG na clínica fonoaudiológica. Foram avaliadas 31 pessoas (18 mulheres) com média de idade de 29,48 anos e sem queixas fonoaudiológicas ou odontológicas. Os eletrodos de superfície bipolares foram aderidos aos músculos masseteres, bucinadores e supra-hióides bilateralmente e aos músculos orbicular da boca superior e inferior. Os eletrodos foram conectados a um eletromiógrafo EMG 1000 da Lynx Tecnologia Eletrônica de oito canais, e foi pedido que cada participante realizasse os seguintes movimentos: Protrusão Labial (PL), Protrusão Lingual (L), Inflar Bochechas (IB), Sorriso Aberto (SA), Sorriso Fechado (SF), Lateralização Labial Direita (LD) e Esquerda (LE) e Pressão de um lábio contra o outro (AL). Os dados eletromiográficos foram registrados em microvolts (RMS) e foi considerada a média dos movimentos para a realização da análise dos dados, que foram normalizados utilizando como base o registro da EMG no repouso e os resultados demonstram que os músculos orbiculares da boca inferior e superior apresentam maior atividade elétrica que os outros músculos na maior parte dos movimentos, com exceção dos movimentos de L e SF, Nos movimentos de LD e LE, os orbiculares da boca também estavam mais ativos, mas os músculos bucinadores demonstraram participação importante, especialmente o bucinador direito em LD A Protrusão Lingual não demonstrou diferenças significativas entre os músculos estudados. O SA teve maior participação do orbicular da boca Inferior que o superior, e demonstrou ser o movimento que mais movimenta os músculos da face como um todo e o músculo com maior atividade durante o SF foi o bucinador. Concluímos que o aparato da EMG é eficiente não só para a avaliação dos músculos mastigatórios, mas também dos da mímica, a não ser no movimento de Protrusão lingual, onde o EMG de superfície não foi eficiente. Os músculos orbiculares foram mais ativos durante os movimentos testados, portanto, são também os mais exercitados durante os exercícios de motricidade oral. O movimento que envolve a maior atividade dos músculos da face como um todo foi o Sorriso Aberto
Speech therapy has been considered subjective for many years due to its manual and visual methods. Many researchers have been searching for more objective evaluation methodologies based on electronic devices. One of them is surface electromyography (EMG), the measurement of the electrical activity of a muscle. The literature presents many works in the TMJ and orthodontics areas, with special attention to the chewing muscles (temporalis and masseter), which, being larger muscles, present more evident results in EMG. Less attention is paid to the mimic muscles. The objective of our work is to identify, by means of EMG, the electrical activity of the facial muscles of healthy adults during facial movements normally used in the speech therapy clinic, to identify the role of each muscle during the movements, and to differentiate the electrical activity of these muscles across movements. 31 volunteers were evaluated (18 women), with a mean age of 29.48 years and no speech therapy or odontological complaints. Bipolar surface electrodes were adhered to the masseter, buccinator and suprahyoid muscles bilaterally and to the superior and inferior orbicularis oris muscles. The electrodes were connected to an 8-channel EMG 1000 from Lynx Tecnologia Eletrônica, and each participant was asked to carry out the following movements: Labial Protrusion (PL), Lingual Protrusion (L), Cheek Inflating (CI), Opened Smile (OS), Closed Smile (CS), Labial Lateralization (LL) and pressure of one lip against the other (LP). EMG data were registered in microvolts (RMS), the mean of each movement was considered for data analysis, and the data were normalized relative to the EMG at rest. Results show that the orbicularis oris muscles had higher electrical activity than the other muscles in PL, CI, OS, LL and LP. In the LL movements the orbicularis oris also showed greater activity, but the buccinator muscles participated effectively in the movement, especially in right LL. L did not show any significant differences among the evaluated muscles. The buccinator was the most active muscle during CS. We conclude that the orbicularis oris muscles were the most active during the tasks, with the exception of L and CS: in L no muscle was significantly higher, and in CS the buccinators were the most active. The Opened Smile is the movement in which the facial muscles as a whole are most activated. These results show that EMG is of great use for evaluating the mimic muscles, but should be used carefully for specific tongue assessment.
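A minimal sketch of the normalization described in the abstract: the RMS amplitude of each EMG channel expressed relative to the rest recording. The function name and array layout are illustrative assumptions:

```python
import numpy as np

def normalized_rms(emg, rest_emg):
    """RMS amplitude of an EMG channel, normalized by the rest recording,
    mirroring the rest-based normalization described in the abstract.
    Both inputs are 1-D arrays of samples in microvolts."""
    rms = np.sqrt(np.mean(np.square(emg)))
    rest = np.sqrt(np.mean(np.square(rest_emg)))
    return rms / rest          # values > 1 mean more activity than at rest
```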
41

Hattiangadi, Nina Uday. "Facial affect processing across a perceptual timeline : a comparison of two models of facial affect processing /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004278.

Full text
42

Saeed, Anwar Maresh Qahtan [Verfasser]. "Automatic facial analysis methods : facial point localization, head pose estimation, and facial expression recognition / Anwar Maresh Qahtan Saeed." Magdeburg : Universitätsbibliothek, 2018. http://d-nb.info/1162189878/34.

Full text
43

Kusano, Maria Elisa. "Assimetrias nos reconhecimentos de expressões faciais entre hemicampos visuais de homens e mulheres." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/59/59134/tde-15062015-210503/.

Full text
Abstract:
O reconhecimento das diferentes expressões faciais de emoções é de grande valia para as relações interpessoais. Porém, não há consenso sobre como ocorrem os processos inerentes a esse reconhecimento. Estudos sugerem diferenças no processamento de expressões faciais relacionadas à valência da emoção, assimetria funcional entre os hemisférios cerebrais, e características do observador (destreza manual, sexo e doenças) e ao tempo de exposição dos estímulos. Utilizando o método de campo visual dividido, associado ao de escolha forçada, foram investigados o desempenho de reconhecimento de faces tristes e alegres nos hemicampos visuais esquerdo e direito de 24 participantes (13M, 11H), todos adultos, destros e com acuidade normal ou superior. Todos foram submetidos a sessões experimentais em que pares de faces foram apresentados sucessivamente, por 100ms para um dos hemicampos visuais, direito ou esquerdo, sendo uma das faces neutra e outra emotiva (alegre ou triste), em ordem aleatória de apresentação. O gênero de cada face mantinha-se o mesmo no par (só masculina ou feminina), e trata-se da foto de uma mesma pessoa. As faces emotivas eram apresentadas aleatoriamente entre 11 níveis de intensidade emocional, obtidos pela técnica de morfinização gráfica. A tarefa do participante consistia em escolher em cada sucessão de par de faces em qual hemicampo visual encontrava-se a face mais emotiva. As taxas de acertos emcada nível de intensidade emocional de cada face emotiva permitiram estimar os parâmetros de curvas psicométricas ajustadas a curva acumulada normal para cada hemicampo visual de cada indivíduo. As análises estatísticas das taxas de acertos, em conjunto com os gráficos dos parâmetros das curvas psicométricas dos participantes, permitiu notar que houve maiores taxas de acerto às faces alegres em relação a faces tristes. Além disso, os resultados mostram que enquanto mulheres foram simétricas no reconhecimento das faces felizes e tristes, independente do hemicampo visual (HV); homens foram assimétricos: apresentaram superioridade do HV esquerdo no reconhecimento da face masculina e do HV direito em relação à face feminina. Foi possível observar diferenças no reconhecimento das faces, havendo interação entre sexo do participante e o da face-estímulo apresentada, valência da emoção e hemisfério cerebral. Este trabalho embasa parcialmente a Teoria do Hemisfério Direito e sugere que o tipo de delineamento experimental usado pode estar relacionado com a diferença de desempenho entre sexos feminino e masculino.
Recognizing different emotional facial expressions is valuable for interpersonal relationships, although there is no consensus on how this recognition process really occurs. Studies suggest differences in the processing of facial expressions related to the emotional valence, to functional asymmetry between the brain hemispheres, to observer characteristics (manual dexterity, sex and diseases) and to the stimulus exposure time. Using the divided visual field method associated with two-interval forced choice, we investigated the performance of recognizing sad and happy faces in the left and right visual hemifields of 24 participants (13 women, 11 men), all adults, right-handed and with normal or better visual acuity. All were submitted to experimental sessions in which pairs of faces were successively presented for 100 ms to one of the visual hemifields, right or left, one face neutral and the other emotive (happy or sad), in random order of presentation. Each pair of faces was only masculine or feminine, photos of a single person, and the emotive face's intensity was chosen randomly among 11 levels obtained by a computer-graphics morphing technique. The participant's task was to choose, for each pair, in which visual hemifield the more emotive face appeared. The hit rates at each level of emotional intensity of each face allowed estimation of the parameters of psychometric curves fitted to the cumulative normal distribution for each visual hemifield. The statistical analysis of the parameters of the psychometric curves showed that hit rates for happy faces were higher than for sad ones. Also, while women showed symmetric performance in recognizing happy and sad faces between the visual hemifields, men showed asymmetric performance, with superiority in recognizing the male face in the left visual hemifield and the female face in the right visual hemifield. There was evidence that the recognition of emotional faces differs with the interaction between the sex of the participant and the sex of the stimulus face, the emotional valence and the cerebral hemisphere. This research partially supports the Right Hemisphere Theory and suggests that the experimental design used may be related to the performance difference between the sexes.
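The psychometric-curve step can be sketched as fitting a cumulative normal to the hit rates measured at each morph intensity level. The example below uses SciPy with illustrative values; it is not the authors' analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_psychometric(intensity, hit_rate):
    """Fit a cumulative-normal psychometric function to hit rates per
    morph intensity level, returning the fitted (mean, sd) parameters."""
    popt, _ = curve_fit(lambda x, mu, sigma: norm.cdf(x, mu, sigma),
                        intensity, hit_rate,
                        p0=[np.median(intensity), 1.0])
    return popt

# e.g. 11 morph levels; the hit rates here are purely illustrative.
mu, sigma = fit_psychometric(
    np.arange(11),
    np.array([.05, .1, .2, .3, .45, .6, .7, .8, .9, .95, .98]))
```

Comparing the fitted means and slopes between left- and right-hemifield curves is what lets per-hemifield performance be quantified and tested.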
44

Chuang, YungChuan, and 莊詠筌. "Facial Expression Mapping Based on Facial Features." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/65155250635080513268.

Full text
Abstract:
Master's thesis
National Taipei University of Education
Department of Digital Technology Design (including the Master's Program in Toy and Game Design)
99
With the advancement of computer technology, communication and interaction between people have been enhanced through instant messaging software. However, users who communicate via such software cannot reveal their ‘true’ emotions with their own facial expressions, only through the use of emoticons, and emoticons cannot fully convey facial expressions. Inspired by this shortcoming, if a specific user's face could automatically make facial expressions, instant messaging would be more engaging. The purpose of this thesis is to produce specific pre-defined or user-defined facial expressions on images of a neutral facial expression by image processing techniques. The process is carried out in two phases: image pre-processing with feature extraction, and expression mapping. The main purpose of image pre-processing is to determine the face region by skin-tone segmentation and morphological operations. After that, the expression features, namely the brows, eyes and mouth, are localized by their color information, and their shape control points, simplified from the definition of the Facial Animation Parameters (FAP) in the MPEG-4 standard, are set. In the expression mapping phase, users can move those control points via an interactive interface. When this procedure is finished, the system changes the selected image texture to fit the new shape by Delaunay triangulation, image registration and image interpolation operations, generating a new facial expression on the original image.
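The texture-warping step (Delaunay triangulation plus interpolation) can be approximated with scikit-image's piecewise affine transform, which triangulates the control points internally. This is a stand-in sketch, not the thesis's implementation:

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_expression(image, src_points, dst_points):
    """Warp a neutral face so its control points move from src_points
    to dst_points (both (K, 2) arrays in (x, y) pixel coordinates).

    `warp` expects an inverse map (output coords -> input coords),
    so the transform is estimated from destination to source."""
    tform = PiecewiseAffineTransform()
    tform.estimate(dst_points, src_points)
    return warp(image, tform, output_shape=image.shape)
```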
45

Hsu, Wei-Cheng, and 徐瑋呈. "Facial Expression Recognition Based on Facial Features." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/50258463357861831524.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
101
We propose an expression recognition method based on facial features from a psychological perspective. Following the American psychologist Paul Ekman's work on action units, we divide a face into different facial feature regions for expression recognition via the movements of individual facial muscles during slight instantaneous changes in facial expression. This thesis starts by introducing Paul Ekman's work, the 6 basic emotions, and existing methods based on feature extraction or facial models. Our system has two main parts: preprocessing and the recognition method. Differences between training and test environments, such as illumination, or the face size and skin color of different test subjects, are usually the major factors affecting recognition accuracy. We therefore propose a preprocessing step as the first part of our system: we first perform face detection and facial feature detection to locate the facial features, and then perform a rotation calibration based on the horizontal line obtained by connecting both eyes. The complete face region can be extracted by using facial models. Lastly, the face region is calibrated for illumination and resized to the same resolution to fix the dimensionality of the feature vector. After preprocessing, the differences among images are reduced. The second part of the proposed system is the recognition method. Here we use Gabor filter banks with ROI capture to obtain the feature vector, with principal component analysis (PCA) and linear discriminant analysis (LDA) for dimensionality reduction to cut the computation time. Finally, a support vector machine (SVM) is adopted as our classifier. The experimental results show that the proposed method can achieve 86.1%, 96.9% and 89.0% accuracy on the three existing datasets JAFFE, TFEID and CK+ respectively (based on leave-one-person-out evaluation). We also tested the performance on the 101SC dataset, which we collected and prepared ourselves. This dataset is relatively difficult for recognition but closer to real-world scenarios; the proposed method achieves 62.1% accuracy on it. We also used this method to participate in the 8th UTMVP (Utechzone Machine Vision Prize) competition, where we ranked second out of 10 teams.
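A hedged sketch of the feature step: a small Gabor filter bank applied over the face image (the thesis restricts this to ROIs around eyes, brows and mouth), with PCA and an SVM indicated for the later stages. The bank size, kernel parameters and function names are assumptions:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def gabor_features(gray_face, thetas=4, scales=(9, 15)):
    """Concatenated responses of a Gabor filter bank (several scales
    and orientations) over a grayscale face image."""
    feats = []
    for k in scales:
        for i in range(thetas):
            kern = cv2.getGaborKernel((k, k), 4.0, np.pi * i / thetas,
                                      10.0, 0.5, 0)
            feats.append(cv2.filter2D(gray_face, cv2.CV_32F, kern).ravel())
    return np.concatenate(feats)

# X: stacked feature vectors, y: expression labels (assumed available)
# pca = PCA(n_components=100).fit(X)
# clf = SVC(kernel="rbf").fit(pca.transform(X), y)
```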
46

Ren, Yuan. "Facial Expression Recognition System." Thesis, 2008. http://hdl.handle.net/10012/3516.

Full text
Abstract:
A key requirement for developing any innovative system in a computing environment is to integrate a sufficiently friendly interface for the average end user. Accurate design of such a user-centered interface, however, means more than just the ergonomics of the panels and displays. It also requires that designers precisely define what information to use and how, where, and when to use it. Facial expression, as a natural, non-intrusive and efficient way of communication, has been considered one of the potential inputs to such interfaces. The work of this thesis aims at designing a robust Facial Expression Recognition (FER) system by combining various techniques from computer vision and pattern recognition. Expression recognition is closely related to face recognition, where a lot of research has been done and a vast array of algorithms has been introduced. FER can also be considered a special case of a pattern recognition problem, for which many techniques are available. In designing an FER system, we can take advantage of these resources and use existing algorithms as building blocks of our system, so a major part of this work is to determine the optimal combination of algorithms. To do this, we first divide the system into three modules, i.e. preprocessing, feature extraction and classification; then, for each of them, some candidate methods are implemented, and eventually the optimal configuration is found by comparing the performance of different combinations. Another issue of great interest to designers of facial expression recognition systems is the classifier, which is the core of the system. Conventional classification algorithms assume the image is a single-variable function of an underlying class label. However, this is not true in face recognition, where the appearance of the face is influenced by multiple factors: identity, expression, illumination and so on. To solve this problem, this thesis proposes two new algorithms, namely Higher Order Canonical Correlation Analysis and Simple Multifactor Analysis, which model the image as a multivariable function. The addressed issues are challenging problems and are substantial for developing a facial expression recognition system.
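The search for the optimal combination of module candidates can be illustrated with a scikit-learn pipeline and a grid search over interchangeable components. This mirrors the idea only; the candidate methods below are placeholders, not the thesis's actual choices:

```python
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Three-module pipeline: preprocessing, feature extraction, classification.
pipe = Pipeline([("scale", StandardScaler()),
                 ("reduce", PCA()),
                 ("classify", SVC())])

# Cross-validated comparison of module combinations.
grid = GridSearchCV(pipe, param_grid=[
    {"reduce__n_components": [50, 100], "classify": [SVC()]},
    {"reduce__n_components": [50, 100], "classify": [KNeighborsClassifier()]},
])
# grid.fit(X, y)   # X, y: face feature vectors and expression labels (assumed)
```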
47

蘇芳生. "Facial Expression Detection System." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/14647436644952580750.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Communications Engineering
92
In this research, we develop a system to recognize facial expressions automatically. First, we extract seventeen significant feature points of a face and then use these feature points to recognize the facial expression. During recognition, we first compare the feature points of the neutral face and the expressive face. We then project the feature vector onto the trained expression feature vectors in order to choose the possible emotion, using a database of facial expressions. Finally, we decide the emotion of the facial expression. Here, we train the expression feature vectors using multi-category gradient descent to improve the recognition rate. We use pictures offered by the Department of Psychology, NCCU, to recognize facial expressions in this research. The four emotions are anger, happiness, surprise and sadness, and we have successfully classified these four emotions with an average error smaller than 15%.
48

Hsia, Hua-Wei, and 夏華偉. "Facial expression synthesis system." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/90349791892751755828.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Computer Science and Information Engineering
99
It is an interesting and challenging problem to synthesize vivid facial expression images. In this thesis, we propose a "facial expression synthesis system" that imitates a reference facial expression image according to the difference between the shape feature vectors of the neutral and expressive images. Experiments show vivid and flexible results.
49

Zih-Syuan, Lin, and 林子軒. "2D Facial Expression Synthesis." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/68693376776668761519.

Full text
Abstract:
Master's thesis
National Ilan University
Master's Program, Department of Computer Science and Information Engineering
103
The most common and intuitive human communication method is via facial expressions. Due to the rapid development of technology, people communicate with each other more frequently nowadays, and much recent research has addressed human facial expression. The developed technologies have been used in everyday entertainment and communication, such as emoticons or stickers in social networking apps, virtual avatars, photo warping, and so on. Facial expression synthesis is a challenging research topic in computer animation. This thesis proposes an automatic facial expression synthesis system. The system automatically detects the facial components in the input facial image and extracts a set of facial feature points; if any feature points are misplaced, users may correct their positions through an adjustment interface. In the expression synthesis stage, users can choose between pre-defined and customized modes to warp the input photo and create the desired expressions. In the pre-defined mode, the system generates a set of common expressions automatically from the input facial photo. In the customized mode, users can freely adjust the feature points to create exaggerated or funny expressions. Keywords: face detection, facial feature point detection, facial expression synthesis, image warping
50

Chiang, Wei-Ting, and 姜威廷. "Interactive Facial Expression System." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/77a7uk.

Full text
