Theses on the topic "Facial expression"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Browse the top 50 theses for your research on the topic "Facial expression".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Testa, Rafael Luiz. "Síntese de expressões faciais em fotografias para representação de emoções". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-31012019-165605/.
The ability to process and identify facial emotions is an essential factor for an individual's social interaction. Some psychiatric disorders can limit an individual's ability to recognize emotions in facial expressions. This problem could be addressed by using computational techniques to develop learning environments for diagnosis, evaluation, and training in identifying facial emotions. With this motivation, the objective of this work is to define, implement, and evaluate a method to synthesize realistic facial expressions that represent emotions in images of real people. The main idea of the studies found in the literature is that a facial expression from one person's image can be reenacted in another person's image. This study differs from the approaches presented in the literature by proposing a technique that considers the similarity between facial images to choose the one that will be used as the origin for reenactment. As a result, we intend to increase the realism of the synthesized images. Our approach, besides searching for the most similar facial components in the image dataset, also deforms the facial elements and maps the differences in illumination onto the target image. A visual analysis showed that the images synthesized on the basis of similar faces presented an adequate degree of realism, especially when compared with images synthesized from random faces. The study will contribute to the generation of images applied to tools for the diagnosis and therapy of psychiatric disorders, and also to the computational field, through the proposition of new techniques for facial expression synthesis.
Neth, Donald C. "Facial configuration and the perception of facial expression". Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1189090729.
Baltrušaitis, Tadas. "Automatic facial expression analysis". Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245253.
Mikheeva, Olga. "Perceptual facial expression representation". Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217307.
Facial expressions play an important role in areas such as human communication and the assessment of medical conditions. To apply machine learning in these areas, it would be advantageous to have a representation of facial expressions that preserves human perception of similarity. In this work, a data-driven approach is taken to representation learning of facial expressions. The methodology builds on Variational Autoencoders and eliminates appearance-related features from the latent space by using neutral facial expressions as additional input data. To improve the quality of the learned representation, we modify the prior distribution of the latent variable to impose a structure on the latent space that is consistent with human perception of facial expressions. We perform experiments on two datasets, as well as collected similarity data, and show that the human-like topology of the latent representation helps improve performance on a typical emotion classification task, and demonstrate the advantages of using a probabilistic generative model when examining the role of latent dimensions in the generative process.
Li, Jingting. "Facial Micro-Expression Analysis". Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0007.
Micro-expressions (MEs) are very important nonverbal communication clues. However, due to their local and short nature, spotting them is challenging. In this thesis, we address this problem by using a dedicated local and temporal pattern (LTP) of facial movement. This pattern has a specific shape (S-pattern) when MEs are displayed. Thus, by using a classical classification algorithm (SVM), MEs are distinguished from other facial movements. We also propose a global final fusion analysis on the whole face to improve the distinction between ME (local) and head (global) movements. However, the learning of S-patterns is limited by the small number of ME databases and the low volume of ME samples. Hammerstein models (HMs) are known to be a good approximation of muscle movements. By approximating each S-pattern with an HM, we can both filter outliers and generate new similar S-patterns. In this way, we perform data augmentation for the S-pattern training dataset and improve the ability to differentiate MEs from other facial movements. In the first ME spotting challenge of MEGC2019, we took part in building the new result evaluation method. In addition, we applied our method to spotting MEs in long videos and provided the baseline result for the challenge. The spotting results, performed on CASME I, CASME II, SAMM and CAS(ME)2, show that our proposed LTP outperforms the most popular spotting method in terms of F1-score. Adding the fusion process and data augmentation further improves the spotting performance.
Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition". Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.
Miao, Yu. "A Real Time Facial Expression Recognition System Using Deep Learning". Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38488.
Texto completoPierce, Meghan. "Facial Expression Intelligence Scale (FEIS): Recognizing and Interpreting Facial Expressions and Implications for Consumer Behavior". Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26786.
Ph. D.
Carter, Jeffrey R. "Facial expression analysis in schizophrenia". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ58398.pdf.
Yu, Kaimin. "Towards Realistic Facial Expression Recognition". Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9459.
Texto completode, la Cruz Nathan. "Autonomous facial expression recognition using the facial action coding system". University of the Western Cape, 2016. http://hdl.handle.net/11394/5121.
The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness, surprise), as well as the neutral expression; or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System, in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, combinations of which are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole facial expression recognition with Action Unit recognition to achieve an enhanced classification approach.
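As a rough illustration of the Action Unit route described in this abstract, the sketch below classifies a set of detected AUs by matching them against prototypical AU combinations. These particular combinations (e.g. happiness as AU6 + AU12) are commonly cited simplifications, not the thesis's actual classifier:

```python
# Illustrative sketch only: maps detected FACS Action Units to prototypical
# expressions via subset matching. The AU combinations below are commonly
# cited prototypes (hypothetical simplification), not this thesis's mapping.
PROTOTYPES = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
    "surprise": {1, 2, 5, 26},
    "anger": {4, 5, 7, 23},
}

def classify_expression(detected_aus):
    """Return the prototype whose AUs are all present, preferring the largest match."""
    matches = [(len(aus), name) for name, aus in PROTOTYPES.items()
               if aus <= set(detected_aus)]
    return max(matches)[1] if matches else "neutral"

print(classify_expression({6, 12}))        # happiness
print(classify_expression({1, 2, 5, 26}))  # surprise
print(classify_expression(set()))          # neutral
```

A hybrid system like the one the abstract describes could then combine such AU-based decisions with a whole-face classifier's output.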
Wild-Wall, Nele. "Is there an interaction between facial expression and facial familiarity?" Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2004. http://dx.doi.org/10.18452/15042.
In contrast to traditional face recognition models, previous research has revealed that the recognition of facial expression and familiarity may not be independent. This dissertation attempts to localize this interaction within the information processing system by means of performance data and event-related potentials. Part I addressed the question of whether there is an interaction between facial familiarity and the discrimination of facial expression. Participants had to discriminate two expressions which were displayed on familiar and unfamiliar faces. The discrimination was faster and less error prone for personally familiar faces displaying happiness. Results revealed a shorter peak latency for the P300 component (trend), reflecting stimulus categorization time, and for the onset of the lateralized readiness potential (S-LRP), reflecting the duration of pre-motor processes. A facilitation of perceptual stimulus categorization for personally familiar faces displaying happiness is suggested. The discrimination of expressions was not facilitated in further experiments using famous or experimentally familiarized, and unfamiliar faces. Part II raised the question of whether there is an interaction between facial expression and the discrimination of facial familiarity. In this task a facilitation was only observable for personally familiar faces displaying a neutral or happy expression, but not for experimentally familiarized or unfamiliar faces. Event-related potentials revealed a shorter S-LRP interval for personally familiar faces, suggesting a facilitated response selection stage. In summary, the results suggest that an interaction of facial familiarity and facial expression might be possible under some circumstances. Finally, the results are discussed in the context of possible interpretations, previous results, and face recognition models.
Nelson, Nicole L. "A Facial Expression of Pax: Revisiting Preschoolers' "Recognition" of Expressions". Thesis, Boston College, 2011. http://hdl.handle.net/2345/2458.
Prior research showing that children recognize emotional expressions has used a choice-from-array task; for example, children are asked to find the fear face in an array of several expressions. However, these choice-from-array tasks allow for the use of a process-of-elimination strategy, in which children could select an expression they are unfamiliar with when presented a label that does not apply to the other expressions in the array. Across six studies (N = 144), 80% of 2- to 4-year-olds selected a novel expression when presented a target label, and performed similarly whether the label was novel (such as pax) or familiar (such as fear). In addition, 46% of children went on to freely label the expression with the target label in a subsequent task. These data are the first to show that children extend the process-of-elimination strategy to facial expressions, and they call into question the findings of prior choice-from-array studies.
Thesis (PhD) — Boston College, 2011
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Psychology
Ribeiro, João Paulo Alves. "Expressão facial da emoção: Fatores preditores da agressividade na expressão facial". Bachelor's thesis, [s.n.], 2014. http://hdl.handle.net/10284/4257.
The study of the facial expression of emotion in Portugal stands out, highlighting in particular the scientific work produced by Professor Freitas-Magalhães. However, the literature and the empirical evidence produced are still scant. The objective of this graduation project is not only to follow the scientific trend of applying the analysis of emotion through facial expression to different areas of knowledge, but also to deepen and disseminate this area. Thus, starting from the characteristics usually associated with aggressiveness, we intend to establish a parallel, and a possible correlation, between certain markers in facial expression that act as predictors of aggression. To this end, we analyzed video footage and photographs of certain individuals in confrontational situations with third parties (e.g. victims, judges, police).
Shang, Lifeng and 尚利峰. "Facial expression analysis with graphical models". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47849484.
Computer Science
Doctoral
Doctor of Philosophy
Fraser, Matthew Paul. "Repetition priming of facial expression recognition". Thesis, University of York, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431255.
Hsu, Shen-Mou. "Adaptation effects in facial expression recognition". Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403968.
Zalewski, Lukasz. "Statistical modelling for facial expression dynamics". Thesis, Queen Mary, University of London, 2012. http://qmro.qmul.ac.uk/xmlui/handle/123456789/2518.
Harris, Richard J. "The neural representation of facial expression". Thesis, University of York, 2012. http://etheses.whiterose.ac.uk/3990/.
Yasuda, Maiko. "Color and facial expressions". abstract (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1447610.
Andréasson, Per. "Emotional Empathy, Facial Reactions, and Facial Feedback". Doctoral thesis, Uppsala universitet, Institutionen för psykologi, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126825.
Sloan, Robin J. S. "Emotional avatars : choreographing emotional facial expression animation". Thesis, Abertay University, 2011. https://rke.abertay.ac.uk/en/studentTheses/2363eb4a-2eba-4f94-979f-77b0d6586e94.
Correa, Renata. "Animação facial por computador baseada em modelagem biomecanica". [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259447.
Texto completoDissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Abstract: The increasing search for realism in virtual characters found in many applications, such as movies, education and games, is the motivation of this thesis. The thesis describes an animation model that employs a biomechanical strategy for the development of a computing prototype called SABiom. The method used is based on the simulation of physical features of the human face, such as the layers of skin and muscles, which are modeled to allow simulation of the mechanical behavior of the facial tissue under the action of muscle forces. Although a face produces many movements, the current work limits itself to simulations of facial expressions focusing on the lips. To validate the results obtained from SABiom, we compared the images of the virtual model, obtained through the developed prototype, with images from a human model.
Master's
Computer Engineering
Master in Electrical Engineering
Bannerman, Rachel L. "Orienting to emotion : a psychophysical approach". Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=59429.
Kaufmann, Jurgen Michael. "Interactions between the processing of facial identity, emotional expression and facial speech". Thesis, University of Glasgow, 2002. http://theses.gla.ac.uk/3110/.
Lee, Douglas Spencer. "Facial action determinants of pain judgment". Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25812.
Texto completoArts, Faculty of
Psychology, Department of
Graduate
Ersotelos, Nikolaos. "Highly automated method for facial expression synthesis". Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4524.
Wang, Shihai. "Boosting learning applied to facial expression recognition". Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511940.
Besel, Lana Diane Shyla. "Empathy : the role of facial expression recognition". Thesis, University of British Columbia, 2006. http://hdl.handle.net/2429/30730.
Texto completoArts, Faculty of
Psychology, Department of
Graduate
Santos, Patrick John. "Facial Expression Cloning with Fuzzy Membership Functions". Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/26260.
Fan, Xijian. "Spatio-temporal framework on facial expression recognition". Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/88732/.
Zhou, Yun. "Embedded Face Detection and Facial Expression Recognition". Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/583.
Feffer, Michael A. (Michael Anthony). "Personalized machine learning for facial expression analysis". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119763.
Texto completoThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 35-36).
For this MEng Thesis Project, I investigated the personalization of deep convolutional networks for facial expression analysis. While prior work focused on population-based ("one-size-fits-all") models for prediction of affective states (valence/arousal), I constructed personalized versions of these models to improve upon state-of-the-art general models through solving a domain adaptation problem. This was done by starting with pre-trained deep models for face analysis and fine-tuning the last layers to specific subjects or subpopulations. For prediction, a "mixture of experts" (MoE) solution was employed to select the proper outputs based on the given input. The research questions answered in this project are: (1) What are the effects of model personalization on the estimation of valence and arousal from faces? (2) What is the amount of (un)supervised data needed to reach a target performance? Models produced in this research provide the foundation of a novel tool for personalized real-time estimation of target metrics.
by Michael A. Feffer.
M. Eng.
Tang, Wing Hei Iris. "Facial expression recognition for a sociable robot". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/46467.
Includes bibliographical references (p. 53-54).
In order to develop a sociable robot that can operate in the social environment of humans, we need to develop a robot system that can recognize the emotions of the people it interacts with and can respond to them accordingly. In this thesis, I present a facial expression system that recognizes the facial features of human subjects in an unsupervised manner and interprets the facial expressions of the individuals. The facial expression system is integrated with an existing emotional model for the expressive humanoid robot, Mertz.
by Wing Hei Iris Tang.
M.Eng.
Schulze, Martin Michael. "Facial expression recognition with support vector machines". [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10952963.
Ainsworth, Kirsty. "Facial expression recognition and the autism spectrum". Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/8287/.
Mistry, Kamlesh. "Intelligent facial expression recognition with unsupervised facial point detection and evolutionary feature optimization". Thesis, Northumbria University, 2016. http://nrl.northumbria.ac.uk/36011/.
Texto completoMeyer, Eric C. "A visual scanpath study of facial affect recognition in schizotypy and social anxiety". Diss., Online access via UMI:, 2005.
Lin, Alice J. "THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS". UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/841.
Stefani, Fabiane Miron. "Estudo eletromiográfico do padrão de contração muscular da face de adultos". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/5/5160/tde-19112008-162050/.
Speech therapy has been considered subjective for many years due to its manual and visual methods. Many researchers have been searching for more objective evaluation methodologies based on electronic devices. One of them is surface electromyography (EMG), which measures the electrical activity of a muscle. The literature presents many works in the TMJ and orthodontics areas, with special attention to the chewing muscles (temporalis and masseter), which, being larger muscles, present more evident results in EMG. Less attention is paid to the mimic muscles. The objective of our work is to identify, by means of EMG, the electrical activity of the facial muscles of healthy adults during facial movements normally used in the speech therapy clinic, to identify the role of each muscle during the movements, and to differentiate the electrical activity of these muscles during the movements. Thirty-one volunteers (18 women) were evaluated, with a mean age of 29.84 years and no speech therapy or odontological complaints. Bipolar surface electrodes were attached to the masseter, buccinator and suprahyoid muscles bilaterally and to the superior and inferior orbicularis oris muscles. The electrodes were connected to an 8-channel EMG 1000 from Lynx Tecnologia Eletrônica, and each participant was asked to carry out the following movements: labial protrusion (PL), lingual protrusion (L), cheek inflating (CI), opened smile (OS), closed smile (CS), labial lateralization (LL) and pressure of one lip against the other (LP). EMG data were registered in microvolts (RMS), the mean of each movement was considered for data analysis, and values were normalized against the resting EMG. Results show that the orbicularis oris muscles were more electrically active than the other muscles in PL, CI, OS, LL and LP. In LL movements, the orbicularis oris also showed greater activity, but the buccinator muscles showed effective participation in the movement, especially in right LL. L did not show any differences between the evaluated muscles. The buccinator was the most active muscle during CS. We conclude that the orbicularis oris muscles were the most active during the tasks, except for L and CS: in L no muscle was significantly more active, and in CS the buccinators were the most active. Opened smile is the movement in which the muscles as a whole are most activated. These results show that EMG is of great use for evaluating the mimic muscles, but it should be used carefully in specific tongue assessment.
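The normalization step described in this abstract (movement activity expressed relative to the resting EMG, in RMS microvolts) can be sketched as follows; the window contents and values are illustrative assumptions, not the study's data:

```python
import numpy as np

# Sketch: surface-EMG activity per movement expressed relative to the
# resting baseline, using root-mean-square (RMS) amplitude.
# Sample values are synthetic placeholders.

def rms(signal):
    """Root-mean-square amplitude of an EMG segment (microvolts)."""
    signal = np.asarray(signal, dtype=float)
    return np.sqrt(np.mean(signal ** 2))

def normalize_to_rest(movement_segment, rest_segment):
    """Movement RMS expressed as a multiple of the resting RMS."""
    return rms(movement_segment) / rms(rest_segment)

rest = np.array([1.0, -1.0, 1.0, -1.0])      # rest RMS = 1.0 µV
movement = np.array([3.0, -3.0, 3.0, -3.0])  # movement RMS = 3.0 µV
print(normalize_to_rest(movement, rest))     # 3.0
```

Comparing such rest-normalized values across muscles is what allows statements like "the orbicularis oris was the most active muscle" across participants with different skin impedances.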
Hattiangadi, Nina Uday. "Facial affect processing across a perceptual timeline : a comparison of two models of facial affect processing /". Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004278.
Texto completoSaeed, Anwar Maresh Qahtan [Verfasser]. "Automatic facial analysis methods : facial point localization, head pose estimation, and facial expression recognition / Anwar Maresh Qahtan Saeed". Magdeburg : Universitätsbibliothek, 2018. http://d-nb.info/1162189878/34.
Kusano, Maria Elisa. "Assimetrias nos reconhecimentos de expressões faciais entre hemicampos visuais de homens e mulheres". Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/59/59134/tde-15062015-210503/.
Recognizing different emotional facial expressions is valuable for interpersonal relationships, although there is no consensus on how this recognition process really occurs. Studies suggest differences during the processing of facial expressions related to emotional valence, functional asymmetry between the brain hemispheres, observer characteristics (manual dexterity, gender and diseases), and stimulus exposure time. Using the divided visual field method combined with a two-interval forced choice task, we investigated the performance of recognizing sad and happy faces in the left and right visual hemifields of 24 participants (13 women, 11 men), all right-handed adults with normal or better visual acuity. All were submitted to experimental sessions in which pairs of faces were successively presented for 100 ms in one of the visual hemifields, right or left, one neutral and the other emotive (happy or sad), in random order. Each pair of faces was either masculine or feminine, of a single person, and the emotional faces could have an emotional intensity chosen randomly among 11 intensity levels obtained by a computer graphics morphing technique. The participant's task was to choose which face of each pair presented in the visual hemifield was the more emotive. The hit rate for each level of emotional intensity of each face allowed us to estimate the parameters of psychometric curves fitted to a cumulative normal distribution for each visual hemifield. The statistical analysis of the parameters of the psychometric curves showed that the hit rates for the happy faces were higher than for the sad ones. Also, while women showed symmetric performance in recognizing happy and sad faces between the visual hemifields, men showed asymmetric performance, with superior recognition of the male face in the left visual field and of the female face in the right visual field.
There was evidence that the recognition of emotional faces differs depending on interactions between the gender of the participant and the gender of the stimulus face, emotional valence, and brain hemisphere. This research partially supports the Right Hemisphere Theory and suggests that experimental design influences sex differences in performance.
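The psychometric-curve estimation this abstract describes (hit rate per intensity level, fitted to a cumulative normal distribution) can be sketched with synthetic data; the intensity scale and parameter values below are assumptions for illustration, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Sketch: fit a cumulative-normal psychometric function to hit rates
# observed at each emotional-intensity level. Data here are synthetic.

def psychometric(intensity, mu, sigma):
    """Probability of judging the face 'more emotive' at a given intensity."""
    return norm.cdf(intensity, loc=mu, scale=sigma)

intensities = np.linspace(0, 10, 11)          # 11 intensity levels
true_mu, true_sigma = 5.0, 2.0                # hypothetical ground truth
hit_rates = psychometric(intensities, true_mu, true_sigma)

(mu_hat, sigma_hat), _ = curve_fit(psychometric, intensities, hit_rates,
                                   p0=[4.0, 1.0])
print(round(mu_hat, 2), round(sigma_hat, 2))  # recovers ~5.0 and ~2.0
```

Comparing the fitted `mu` (threshold) and `sigma` (slope) between left and right hemifields is how asymmetries like those reported above can be quantified.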
Chuang, YungChuan and 莊詠筌. "Facial Expression Mapping Based on Facial Features". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/65155250635080513268.
National Taipei University of Education
Department of Digital Technology Design (including the Master's Program in Toy and Game Design)
99
With the advancement of computer technology, communication and interaction between people has been enhanced through instant messaging software. However, users who communicate via this software cannot reveal their 'true' emotions with their own facial expressions, only through the use of emoticons. Yet these emoticons cannot fully express facial expressions. Inspired by this shortcoming, if a specific user's face could automatically make facial expressions, instant messaging would be more interesting. The purpose of this thesis is to make any specific type of, or user-defined, facial expression on images of neutral facial expression using image processing techniques. The process is carried out in two phases: image pre-processing with feature extraction, and expression mapping. The main purpose of image pre-processing is to determine the face region based on skin-tone segmentation and morphological operations. After that, the expression features (brows, eyes, and mouth) are localized using their color information. Then, shape control points are set, simplified from the definition of Facial Animation Parameters (FAP) in the MPEG-4 standard. In the expression mapping phase, users can move those control points via an interactive interface. When this procedure is finished, the system changes the selected image texture to fit the new shape using Delaunay triangulation, image registration, and image interpolation operations. Finally, a new facial expression on the original image is generated.
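The texture-deformation step named in this abstract (Delaunay triangulation over shape control points, with pixels carried along when a control point moves) can be sketched as follows; the point layout is a toy assumption, not the thesis's FAP-based control points:

```python
import numpy as np
from scipy.spatial import Delaunay

# Sketch: control points define a Delaunay triangulation; moving one control
# point carries interior points along via barycentric coordinates.
# The square layout and the moved corner are illustrative assumptions.

src_pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst_pts = src_pts.copy()
dst_pts[3] = [12.0, 12.0]  # e.g. a mouth-corner control point moved outward

tri = Delaunay(src_pts)

def warp_point(p):
    """Map a source-image point into the deformed shape via barycentric coords."""
    p = np.asarray(p, dtype=float)
    simplex = tri.find_simplex(p)
    if simplex < 0:
        return p  # outside the mesh: leave unchanged
    verts = tri.simplices[simplex]
    T = tri.transform[simplex]
    b = T[:2].dot(p - T[2])
    bary = np.append(b, 1.0 - b.sum())
    return bary.dot(dst_pts[verts])

print(warp_point([10.0, 10.0]))  # lands on the moved corner
print(warp_point([0.0, 0.0]))    # unmoved corner stays put
```

A full implementation would invert this mapping per pixel and interpolate the source texture, which is the "image registration and image interpolation" part of the pipeline.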
Hsu, Wei-Cheng and 徐瑋呈. "Facial Expression Recognition Based on Facial Features". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/50258463357861831524.
National Tsing Hua University
Department of Computer Science
101
We propose an expression recognition method based on facial features from a psychological perspective. Following the American psychologist Paul Ekman's work on action units, we divide the face into facial feature regions and recognize expressions via the movements of individual facial muscles during slight, instantaneous changes in expression. This thesis begins by introducing Paul Ekman's work, the six basic emotions, and existing methods based on feature extraction or facial models. Our system has two main parts: preprocessing and the recognition method. Differences between training and test environments, such as illumination or the face size and skin color of different subjects, are usually the major factors affecting recognition accuracy. We therefore propose a preprocessing step as the first part of the system: we perform face detection and facial feature detection to locate the facial features, then apply a rotation calibration based on the horizontal line connecting the two eyes. The complete face region is extracted using facial models. Finally, the face region is calibrated for illumination and resized to a common resolution so that all feature vectors have the same dimensionality. This preprocessing reduces the differences among images. The second part of the system is the recognition method. We use Gabor filter banks with ROI capture to obtain the feature vector, then principal component analysis (PCA) and linear discriminant analysis (LDA) for dimensionality reduction to cut computation time. Finally, a support vector machine (SVM) is adopted as the classifier. Experimental results show that the proposed method achieves 86.1%, 96.9%, and 89.0% accuracy on the three existing datasets JAFFE, TFEID, and CK+ respectively (based on leave-one-person-out evaluation). We also tested performance on the 101SC dataset, which we collected and prepared ourselves. This dataset is harder to recognize but closer to real-world scenarios; the proposed method achieves 62.1% accuracy on it. We also entered this method in the 8th UTMVP (Utechzone Machine Vision Prize) competition and ranked second out of 10 teams.
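The recognition pipeline this abstract describes (Gabor filter bank features, PCA and LDA for dimensionality reduction, then an SVM classifier) can be sketched roughly as follows. This is a minimal illustration on synthetic patches, not the thesis's implementation; the kernel parameters and the toy two-class data are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real part of a Gabor kernel, built directly from its definition."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter the image with a small bank; keep the mean |response| per orientation."""
    return np.array([np.abs(convolve2d(img, gabor_kernel(theta=t), mode="same")).mean()
                     for t in thetas])

# Toy stand-in for preprocessed face ROIs: two "expression" classes of 32x32 patches
# that differ in local contrast, so their Gabor responses are separable.
rng = np.random.default_rng(0)
X = np.array([gabor_features(rng.normal(0.0, s, size=(32, 32)))
              for s in (0.5, 2.0) for _ in range(20)])
y = np.repeat([0, 1], 20)

# Gabor features -> PCA -> LDA -> linear SVM, mirroring the described pipeline.
clf = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis(), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.score(X, y))
```

In a real system the feature vector would come from many filter scales and orientations over each ROI, which is exactly why the PCA/LDA reduction step matters.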
Ren, Yuan. "Facial Expression Recognition System". Thesis, 2008. http://hdl.handle.net/10012/3516.
Full text
蘇芳生. "Facial Expression Detection System". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/14647436644952580750.
Full text
National Chung Cheng University
Institute of Communications Engineering
92
In this research, we develop a system to recognize facial expressions automatically. First, we extract seventeen significant feature points of the face and use them to recognize the facial expression. During recognition, we first compare the feature points of the neutral face and the expressive face. We then project the resulting feature vector onto predetermined expression feature vectors to select the most likely emotion using a facial expression database, and finally decide the emotion of the facial expression. The expression feature vectors are trained with a multi-category gradient descent method to improve the recognition rate. We use pictures provided by the Department of Psychology, NCCU, to recognize facial expressions in this research. The four emotions are anger, happiness, surprise, and sadness, and we successfully classified them with an average error smaller than 15%.
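The projection step described above, comparing the displacement of facial feature points against per-emotion expression feature vectors, might look roughly like this. The 17-point templates and the sample displacement are purely hypothetical, not data from the thesis.

```python
import numpy as np

# Hypothetical per-emotion templates: the displacement of 17 feature points
# from the neutral face, flattened to 34-dim expression feature vectors.
rng = np.random.default_rng(1)
templates = {e: rng.normal(size=34)
             for e in ("anger", "happiness", "surprise", "sadness")}

def classify(neutral_pts, expr_pts, templates):
    """Project the observed 17-point displacement onto each unit-norm
    emotion template and return the emotion with the largest projection."""
    d = (expr_pts - neutral_pts).ravel()
    scores = {e: float(d @ (t / np.linalg.norm(t))) for e, t in templates.items()}
    return max(scores, key=scores.get)

neutral = rng.normal(size=(17, 2))
# A face displaced along the (hypothetical) "happiness" direction:
happy = neutral + 0.8 * templates["happiness"].reshape(17, 2)
print(classify(neutral, happy, templates))
```

The thesis additionally trains the templates themselves by multi-category gradient descent; here they are fixed for illustration.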
Hsia, Hua-Wei and 夏華偉. "Facial expression synthesis system". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/90349791892751755828.
Full text
Tamkang University
Master's Program, Department of Computer Science and Information Engineering
99
Synthesizing vivid facial expression images is an interesting and challenging problem. In this thesis, we propose a facial expression synthesis system that imitates a reference facial expression image according to the difference between the shape feature vectors of the neutral and expression images. Experimental results are vivid and flexible.
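The core idea, adding the neutral-to-expression shape difference of a reference face to the target's feature points before warping, can be sketched as follows; the toy mouth-corner coordinates are assumptions.

```python
import numpy as np

def transfer_expression(tgt_neutral, ref_neutral, ref_expr, scale=1.0):
    """Displace the target's feature points by the reference face's
    neutral-to-expression shape difference, optionally scaled.
    The returned points would then drive an image warp."""
    return tgt_neutral + scale * (ref_expr - ref_neutral)

# Toy demo: a reference face whose mouth corners lift and widen when smiling.
ref_neutral = np.array([[12.0, 8.0], [12.0, 16.0]])   # (row, col) mouth corners
ref_smile   = np.array([[10.5, 7.5], [10.5, 16.5]])   # corners lifted and widened
tgt_neutral = np.array([[14.0, 9.0], [14.0, 17.0]])
print(transfer_expression(tgt_neutral, ref_neutral, ref_smile))
# → [[12.5  8.5] [12.5 17.5]]
```

In practice the shapes would first be normalized (e.g. by interocular distance) so the displacement is comparable across faces.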
Lin, Zih-Syuan and 林子軒. "2D Facial Expression Synthesis". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/68693376776668761519.
Full text
National Ilan University
Master's Program, Department of Computer Science and Information Engineering
103
The most common and intuitive way humans communicate is through facial expressions. With the rapid development of technology, people now communicate with each other more frequently, and much recent research has studied human facial expression. The resulting technologies are used in everyday entertainment and communication, such as emoticons and stickers in social networking apps, virtual avatars, and photo warping. Facial expression synthesis is a challenging research topic in computer animation. This thesis proposes an automatic facial expression synthesis system. The system automatically detects the facial organs in an input facial image and extracts a set of facial feature points; if any feature points are misplaced, users can correct their positions through an adjustment interface. In the expression synthesis stage, users choose between a set of pre-defined or customized modes to warp the input photo and create the desired expressions. In the pre-defined mode, the system generates a set of common expressions automatically from the input facial photo. In the customized mode, users can freely adjust the feature points to create exaggerated or funny expressions. Keywords: face detection, facial feature point detection, facial expression synthesis, image warping
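A landmark-driven photo warp of the kind this abstract describes (move detected feature points to the positions of a chosen expression, then deform the image to follow) might be sketched as a backward warp over a densely interpolated displacement field. This is one illustrative approach, not the thesis's actual method; `warp_by_landmarks` and its corner-pinning are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import map_coordinates

def warp_by_landmarks(img, src_pts, dst_pts):
    """Backward-warp a grayscale image so that src_pts move to dst_pts.
    The sparse landmark displacement is interpolated to a dense field,
    with the image corners pinned so the border stays in place."""
    h, w = img.shape
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    disp = src_pts - dst_pts                          # backward displacement at landmarks
    anchors = np.vstack([dst_pts, [[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1]]])
    disp = np.vstack([disp, np.zeros((4, 2))])        # pin the four corners
    dy = griddata(anchors, disp[:, 0], (grid_y, grid_x), method="linear", fill_value=0)
    dx = griddata(anchors, disp[:, 1], (grid_y, grid_x), method="linear", fill_value=0)
    # For each output pixel, sample the input at the displaced location.
    return map_coordinates(img, [grid_y + dy, grid_x + dx], order=1, mode="nearest")

# Toy demo: with identical source and destination landmarks the warp is the identity.
img = np.arange(400, dtype=float).reshape(20, 20)
pts = np.array([[10.0, 10.0]])                        # one (row, col) landmark
same = warp_by_landmarks(img, pts, pts)
```

A production system would typically warp each facial region separately (e.g. via a piecewise-affine transform over a landmark triangulation) to keep features such as the eyes and mouth locally rigid.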
Chiang, Wei-Ting and 姜威廷. "Interactive Facial Expression System". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/77a7uk.
Full text