Dissertations / Theses on the topic 'Facial recognition'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Facial recognition.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Boraston, Zillah Louise. "Emotion recognition from facial and non-facial cues." Thesis, University College London (University of London), 2008. http://discovery.ucl.ac.uk/1445207/.
Sutherland, Kenneth Gavin Neil. "Automatic facial recognition based on facial feature analysis." Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/13048.
Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.
Bordon, Natalie Sarah. "Facial affect recognition in psychosis." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/22865.
Huang, Weilin. "Robust facial representation for recognition." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/robust-facial-representation-for-recognition(ee2f295c-7b1a-4966-bd12-17edba43b2b4).html.
Yu, Kaimin. "Towards Realistic Facial Expression Recognition." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9459.
Sheikh, Munaf. "Robust recognition of facial expressions on noise degraded facial images." Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_7054_1306828003.
We investigate the use of noise-degraded facial images in the application of facial expression recognition. In particular, we trained Gabor+SVM classifiers to recognize facial expression images with various types of noise. We applied Gaussian noise, Poisson noise, varying levels of salt and pepper noise, and speckle noise to noiseless facial images. Classifiers were trained with images without noise and then tested on the images with noise. Next, the classifiers were trained using images with noise, and then tested on both images that had noise and images that were noiseless. Finally, classifiers were tested on images while increasing the levels of salt and pepper noise in the test set. Our results reflected distinct degradation of recognition accuracy. We also discovered that certain types of noise, particularly Gaussian and Poisson noise, boost recognition rates to levels greater than those achieved with normal, noiseless images. We attribute this effect to the Gaussian envelope component of Gabor filters being sympathetic to Gaussian-like noise that is similar in variance to that of the Gabor filters. Finally, using linear regression, we fitted a mathematical model to this degradation and used it to suggest how recognition rates would degrade further should more noise be added to the images.
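The noise-injection protocol described in this abstract (corrupting clean images, then testing classifiers on them) can be sketched for the salt-and-pepper case. The list-of-lists grayscale image and the 0.2 density below are illustrative assumptions, not the thesis's actual settings:

```python
import random

def salt_and_pepper(image, density, rng=None):
    """Return a copy of a grayscale image (list of rows, values 0-255)
    with roughly a fraction `density` of pixels forced to 0 (pepper)
    or 255 (salt)."""
    rng = rng or random.Random()
    noisy = [row[:] for row in image]
    for y in range(len(noisy)):
        for x in range(len(noisy[y])):
            if rng.random() < density:
                # Salt and pepper are equally likely.
                noisy[y][x] = 255 if rng.random() < 0.5 else 0
    return noisy

clean = [[128] * 8 for _ in range(8)]
noisy = salt_and_pepper(clean, density=0.2, rng=random.Random(0))
# Number of pixels flipped to an extreme value:
corrupted = sum(p in (0, 255) for row in noisy for p in row)
```

Increasing `density` in the test set reproduces the "increasing levels of salt and pepper" protocol the abstract describes.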
De la Cruz, Nathan. "Autonomous facial expression recognition using the facial action coding system." University of the Western Cape, 2016. http://hdl.handle.net/11394/5121.
The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness, surprise) as well as the neutral expression; or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, combinations of which are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole facial expression recognition with Action Unit recognition to achieve an enhanced classification approach.
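The hybrid idea of composing whole expressions from Action Units can be illustrated with a toy lookup over commonly cited FACS prototype combinations (e.g. happiness as AU6 + AU12). The thesis's exact mapping is not given here, so these sets are assumptions:

```python
# Prototype AU combinations for some basic expressions (commonly cited
# FACS examples; assumed for illustration, not taken from the thesis).
PROTOTYPES = {
    "happiness": {6, 12},       # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "sadness": {1, 4, 15},      # inner brow raiser + brow lowerer + lip corner depressor
}

def classify(detected_aus):
    """Return the prototype whose AU set best overlaps the detected AUs."""
    def score(item):
        name, aus = item
        # Jaccard similarity between detected AUs and the prototype set.
        return len(aus & detected_aus) / len(aus | detected_aus)
    best_name, _ = max(PROTOTYPES.items(), key=score)
    return best_name
```

For example, `classify({1, 2, 5})` picks "surprise" because three of its four prototype AUs are present, even though the combination is incomplete.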
Fraser, Matthew Paul. "Repetition priming of facial expression recognition." Thesis, University of York, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431255.
Hsu, Shen-Mou. "Adaptation effects in facial expression recognition." Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403968.
Papazachariou, Konstantinos. "Facial analytics for emotional state recognition." Thesis, University of Strathclyde, 2017. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=28672.
Fält, Pontus. "ADVERSARIAL ATTACKS ON FACIAL RECOGNITION SYSTEMS." Thesis, Umeå universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-175887.
Henriques, Marco António Silva. "Facial recognition based on image compression." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17207.
Facial recognition has received considerable research attention, especially in recent years, and can be considered one of the most successful applications of image analysis and understanding; the many conferences and new articles published on the subject are proof of this. The research focus is due to the large number of applications that facial recognition can be related to, helping with many daily human tasks. Although there are many algorithms to perform facial recognition, many of them very precise, the problem is not completely solved: several obstacles associated with the conditions of the environment change the image's acquisition and therefore affect recognition. This thesis presents a new solution to the problem of face recognition, using metrics of similarity between images obtained by means of data compression, namely through the use of Finite Context Models. There are in the literature some approaches that relate facial recognition and data compression, mainly through transform-based methods. The method proposed in this thesis tries an innovative approach, based on the use of Finite Context Models to estimate the number of bits needed to encode an image of a subject, using a model trained on a database. This thesis studies the approach described above, i.e. solving the problem of facial recognition for possible use in a real authentication system. Detailed experimental results on well-known databases prove the effectiveness of the proposed approach.
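The compression-based similarity idea can be sketched with a tiny order-k finite context model over symbol sequences: train symbol counts on one source, then estimate how many bits another sequence would need under that model. This is a drastic simplification of the thesis's image models; the Laplace smoothing and two-symbol alphabet are assumptions made here for illustration:

```python
import math
from collections import defaultdict

class FiniteContextModel:
    """Order-k finite context model with Laplace smoothing."""
    def __init__(self, k, alphabet):
        self.k = k
        self.alphabet = sorted(set(alphabet))
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq):
        # Count symbol occurrences following each length-k context.
        for i in range(self.k, len(seq)):
            ctx = tuple(seq[i - self.k:i])
            self.counts[ctx][seq[i]] += 1

    def bits(self, seq):
        """Estimated number of bits to encode `seq` under the trained model."""
        total = 0.0
        for i in range(self.k, len(seq)):
            ctx = tuple(seq[i - self.k:i])
            c = self.counts[ctx]
            p = (c[seq[i]] + 1) / (sum(c.values()) + len(self.alphabet))
            total += -math.log2(p)
        return total

model = FiniteContextModel(k=1, alphabet="ab")
model.train("ababababab")
```

A sequence that matches the training statistics needs fewer estimated bits than one that does not, which is the basis of the similarity metric: `model.bits("abab")` is much smaller than `model.bits("aaaa")`.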
Kreklewetz, Kimberly. "Facial affect recognition in psychopathic offenders /." Burnaby B.C. : Simon Fraser University, 2005. http://ir.lib.sfu.ca/handle/1892/2166.
Edmonds, Emily Charlotte. "Cognitive Mechanisms of False Facial Recognition." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145362.
Sierra, Brandon Luis. "COMPARING AND IMPROVING FACIAL RECOGNITION METHOD." CSUSB ScholarWorks, 2017. https://scholarworks.lib.csusb.edu/etd/575.
Lincoln, Michael C. "Pose-independent face recognition." Thesis, University of Essex, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250063.
Montgomery, Tracy L. "Composite artistry meets facial recognition technology : exploring the use of facial recognition technology to identify composite images." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5477.
Approved for public release; distribution is unlimited
Forensic art has been used for decades as a tool for law enforcement. When crime witnesses can provide a suspect description, an artist can create a composite drawing in hopes that a member of the public will recognize the subject. In cases where a suspect is captured on film, that photograph can be submitted into a facial recognition program for comparison with millions of possible matches, offering abundant opportunities to identify the suspect. Because composite images are reliant on a chance opportunity for a member of the public to see and recognize the subject depicted, they are unable to leverage the robust number of comparative opportunities associated with facial recognition programs. This research investigates the efficacy of combining composite forensic artistry with facial recognition technology to create a viable investigative tool to identify suspects, as well as better informing artists and program creators on how to improve the success of merging these technologies. This research ultimately reveals that while facial recognition programs can recognize composite renderings, they cannot achieve a level of accuracy that is useful to investigators. It also suggests opportunities to better design facial recognition programs to be more successful in the identification of composite images.
Wang, Shihai. "Boosting learning applied to facial expression recognition." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511940.
Besel, Lana Diane Shyla. "Empathy : the role of facial expression recognition." Thesis, University of British Columbia, 2006. http://hdl.handle.net/2429/30730.
Arts, Faculty of
Psychology, Department of
Graduate
Oberst, Leah. "Facial and Body Emotion Recognition in Infancy." UKnowledge, 2014. http://uknowledge.uky.edu/psychology_etds/48.
張晶凝 and Ching-ying Crystal Cheung. "Facial emotion recognition after subcortical cerebrovascular diseases." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31224155.
Fan, Xijian. "Spatio-temporal framework on facial expression recognition." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/88732/.
Zhou, Yun. "Embedded Face Detection and Facial Expression Recognition." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/583.
Tang, Wing Hei Iris. "Facial expression recognition for a sociable robot." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/46467.
Includes bibliographical references (p. 53-54).
In order to develop a sociable robot that can operate in the social environment of humans, we need to develop a robot system that can recognize the emotions of the people it interacts with and can respond to them accordingly. In this thesis, I present a facial expression system that recognizes the facial features of human subjects in an unsupervised manner and interprets the facial expressions of the individuals. The facial expression system is integrated with an existing emotional model for the expressive humanoid robot, Mertz.
by Wing Hei Iris Tang.
M.Eng.
Muller, Neil. "Facial recognition, eigenfaces and synthetic discriminant functions." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51756.
ENGLISH ABSTRACT: In this thesis we examine some aspects of automatic face recognition, with specific reference to the eigenface technique. We provide a thorough theoretical analysis of this technique which allows us to explain many of the results reported in the literature. It also suggests that clustering can improve the performance of the system and we provide experimental evidence of this. From the analysis, we also derive an efficient algorithm for updating the eigenfaces. We demonstrate the ability of an eigenface-based system to represent faces efficiently (using at most forty values in our experiments) and also demonstrate our updating algorithm. Since we are concerned with aspects of face recognition, one of the important practical problems is locating the face in an image, subject to distortions such as rotation. We review two well-known methods for locating faces based on the eigenface technique. These algorithms are computationally expensive, so we illustrate how the Synthetic Discriminant Function can be used to reduce the cost. For our purposes, we propose the concept of a linearly interpolating SDF and we show how this can be used not only to locate the face, but also to estimate the extent of the distortion. We derive conditions which will ensure an SDF is linearly interpolating. We show how many of the more popular SDF-type filters are related to the classic SDF and thus extend our analysis to a wide range of SDF-type filters. Our analysis suggests that by carefully choosing the training set to satisfy our condition, we can significantly reduce the size of the training set required. This is demonstrated by using the equidistributing principle to design a suitable training set for the SDF. All this is illustrated with several examples. Our results with the SDF allow us to construct a two-stage algorithm for locating faces. We use the SDF-type filters to obtain initial estimates of the location and extent of the distortion.
This information is then used by one of the more accurate eigenface-based techniques to obtain the final location from a reduced search space. This significantly reduces the computational cost of the process.
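As a rough illustration of the eigenface representation described above, the sketch below computes the leading eigenvector of a toy set of three-pixel "faces" by power iteration and projects each face onto it. A real system would work on full images and keep up to forty such coefficients; the tiny data and single component are assumptions for illustration:

```python
import math

def mean_vector(faces):
    n, d = len(faces), len(faces[0])
    return [sum(f[j] for f in faces) / n for j in range(d)]

def top_eigenface(faces, iters=200):
    """Leading eigenvector of the sample covariance, via power iteration;
    a toy stand-in for the full eigenface decomposition."""
    mu = mean_vector(faces)
    d = len(mu)
    centered = [[f[j] - mu[j] for j in range(d)] for f in faces]
    # Unnormalized covariance matrix; scale does not affect the direction.
    cov = [[sum(c[i] * c[j] for c in centered) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return mu, v

def project(face, mu, v):
    """Coefficient of a face along the leading eigenface."""
    return sum((face[j] - mu[j]) * v[j] for j in range(len(v)))

# Toy "faces": 3-pixel images varying mostly along the first pixel.
faces = [[10.0, 5.0, 1.0], [20.0, 5.2, 1.1], [30.0, 4.8, 0.9]]
mu, v = top_eigenface(faces)
coeffs = [project(f, mu, v) for f in faces]
```

The single coefficient per face already captures most of the variation in this toy set, which is the sense in which the thesis represents faces with "at most forty values".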
Li, Zhenghong. "Automated Facial Action Unit Recognition in Horses." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281323.
In recent years, with the development of deep learning and its applications, computer vision tasks such as human facial action unit recognition have made great progress. Inspired by this work, we investigated the possibility of finding a model to automatically recognize horses' facial expressions. Using the Equine Facial Action Coding System recently created by veterinarians, we can detect the facial action units in horses defined by this system from images and videos. In this project, we proposed a cascade framework for recognizing horse facial action units from images. First, we trained several object detectors to detect the predefined regions of interest. Then we applied binary classifiers for each action unit in the related regions. We tested different classifier models and found that AlexNet worked best in our experiments. In addition, we also transferred a model for human facial action unit recognition to horses and explored strategies for learning the correlations between different action units.
Toure, Zikra. "Human-Machine Interface Using Facial Gesture Recognition." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1062841/.
Forch, Valentin, Julien Vitay, and Fred H. Hamker. "Recurrent Spatial Attention for Facial Emotion Recognition." Technische Universität Chemnitz, 2020. https://monarch.qucosa.de/id/qucosa%3A72453.
Schulze, Martin Michael. "Facial expression recognition with support vector machines." [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10952963.
Cheung, Ching-ying Crystal. "Facial emotion recognition after subcortical cerebrovascular diseases /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk:8888/cgi-bin/hkuto%5Ftoc%5Fpdf?B23425027.
Ainsworth, Kirsty. "Facial expression recognition and the autism spectrum." Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/8287/.
Vadapalli, Hima Bindu. "Recognition of facial action units from video streams with recurrent neural networks : a new paradigm for facial expression recognition." University of the Western Cape, 2011. http://hdl.handle.net/11394/5415.
This research investigated the application of recurrent neural networks (RNNs) for the recognition of facial expressions based on the facial action coding system (FACS). Support vector machines (SVMs) were used to validate the results obtained by RNNs. In this approach, instead of recognizing whole facial expressions, the focus was on the recognition of the action units (AUs) that are defined in FACS. Recurrent neural networks are capable of gaining knowledge from temporal data, while SVMs, which are time invariant, are known to be very good classifiers. Thus, the research consists of four important components: comparison of the use of image sequences against single static images, benchmarking feature selection and network optimization approaches, study of inter-AU correlations by implementing multiple-output RNNs, and study of difference images as an approach for performance improvement. In the comparative studies, image sequences were classified using a combination of Gabor filters and RNNs, while single static images were classified using Gabor filters and SVMs. Sets of 11 FACS AUs were classified by both approaches, where a single RNN/SVM classifier was used for classifying each AU. Results indicated that classifying FACS AUs using image sequences yielded better results than using static images. The average recognition rate (RR) and false alarm rate (FAR) using image sequences were 82.75% and 7.61%, respectively, while classification using single static images yielded an RR and FAR of 79.47% and 9.22%, respectively. The better performance with image sequences can be attributed to the ability of RNNs, as stated above, to extract knowledge from time-series data. Subsequent research then investigated benchmarking dimensionality reduction, feature selection and network optimization techniques, in order to improve the performance provided by the use of image sequences.
Results showed that an optimized network, using weight decay, gave best RR and FAR of 85.38% and 6.24%, respectively. The next study was of the inter-AU correlations existing in the Cohn-Kanade database and their effect on classification models. To accomplish this, a model was developed for the classification of a set of AUs by a single multiple output RNN. Results indicated that high inter-AU correlations do in fact aid classification models to gain more knowledge and, thus, perform better. However, this was limited to AUs that start and reach apex at almost the same time. This suggests the need for availability of a larger database of AUs, which could provide both individual and AU combinations for further investigation. The final part of this research investigated use of difference images to track the motion of image pixels. Difference images provide both noise and feature reduction, an aspect that was studied. Results showed that the use of difference image sequences provided the best results, with RR and FAR of 87.95% and 3.45%, respectively, which is shown to be significant when compared to use of normal image sequences classified using RNNs. In conclusion, the research demonstrates that use of RNNs for classification of image sequences is a new and improved paradigm for facial expression recognition.
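The difference-image representation from the final study above can be sketched directly; the frames are toy list-of-lists grayscale images here, standing in for the thesis's real image sequences:

```python
def difference_sequence(frames):
    """Pixel-wise absolute differences between consecutive grayscale
    frames: static background cancels out, motion survives."""
    diffs = []
    for prev, cur in zip(frames, frames[1:]):
        diffs.append([[abs(c - p) for p, c in zip(prow, crow)]
                      for prow, crow in zip(prev, cur)])
    return diffs

# Static background with one bright pixel moving one step to the right.
f0 = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
f1 = [[0, 0, 0], [0, 0, 9], [0, 0, 0]]
diffs = difference_sequence([f0, f1])
print(diffs[0])  # [[0, 0, 0], [0, 9, 9], [0, 0, 0]]
```

Only the old and new positions of the moving pixel remain non-zero, which is the noise- and feature-reduction property the abstract attributes to difference images.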
Mistry, Kamlesh. "Intelligent facial expression recognition with unsupervised facial point detection and evolutionary feature optimization." Thesis, Northumbria University, 2016. http://nrl.northumbria.ac.uk/36011/.
Saeed, Anwar Maresh Qahtan [Verfasser]. "Automatic facial analysis methods : facial point localization, head pose estimation, and facial expression recognition / Anwar Maresh Qahtan Saeed." Magdeburg : Universitätsbibliothek, 2018. http://d-nb.info/1162189878/34.
Poon, Bruce Siu-Lung. "Recognition of human faces in distorted images based on principal component analysis and gabor wavelets." Thesis, The University of Sydney, 2016. http://hdl.handle.net/2123/15891.
Spagnuolo, Imerio. "Landmark based facial recognition in the NAO robot." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13227/.
Dursun, Pinar. "Recognition Of Facial Expressions In Alcohol Dependent Inpatients." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608450/index.pdf.
Full textö
z June 2007, 130 pages The ability to recognize emotional facial expressions (EFE) is very critical for social interaction and daily functioning. Recent studies have shown that alcohol dependent individuals have deficits in the recognition of these expressions. Thereby, the objective of this study was to explore the presence of impairment in the decoding of universally recognized facial expressions -happiness, sadness, anger, disgust, fear, surprise, and neutral expressions- and to measure their manual reaction times (RT) toward these expressions in alcohol dependent inpatients. Demographic Information Form, CAGE Alcoholism Inventory, State- Trait Anxiety Inventory (STAI), Beck Depression Inventory (BDI), The Symptom Checklist, and lastly a constructed computer program (Emotion Recognition Test) were administered to 50 detoxified alcohol dependent inpatients and 50 matched-control group participants. It was hypothesized that alcohol dependents would show more deficits in the accuracy of reading EFE and would react more rapidly toward negative EFE -fear, anger, disgust, sadness than control group. Series of ANOVA, ANCOVA, MANOVA and MANCOVA analyses revealed that alcohol dependent individuals were more likely to have depression and anxiety disorders than non-dependents. They recognized less but responded faster toward disgusted expressions than non-dependent individuals. On the other hand, two groups did not differ significantly in the total accuracy responses. In addition, the levels of depression and anxiety did not affect the recognition accuracy or reaction times. Stepwise multiple regression analysis indicated that obsessive-compulsive subscale of SCL, BDI, STAI-S Form, and the recognition of fearful as well as disgusted expressions were associated with alcoholism. Results were discussed in relation to the previous findings in the literature. 
The inaccurate identification of disgusted faces might be associated with organic deficits resulting from alcohol consumption, or with cultural factors that play a very important role in displaying expressions.
Kokin, Jessica. "Facial Expression Recognition and Interpretation in Shy Children." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32079.
Sherman, Adam Grant. "Development of a test of facial affect recognition /." Access abstract and link to full text, 1994. http://0-wwwlib.umi.com.library.utulsa.edu/dissertations/fullcit/9510111.
Adam, Mohamad Z. "Unfamiliar facial identity registration and recognition performance enhancement." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/11431.
Clark, Clifford. "Recall and Recognition Tasks within Facial Composite Production." Thesis, Open University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518193.
Ng, Hau-hei. "The effect of mood on facial emotion recognition." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hdl.handle.net/10722/210312.
Alrasheed, Waleed. "Time and Space Efficient Techniques for Facial Recognition." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6238.
Ph.D.
Doctorate
Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering
Hsu, Wei-Cheng, and 徐瑋呈. "Facial Expression Recognition Based on Facial Features." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/50258463357861831524.
National Tsing Hua University
Department of Computer Science
Academic year 101 (2012-2013)
We propose an expression recognition method based on facial features from a psychological perspective. Following the American psychologist Paul Ekman's work on action units, we divide a face into different facial feature regions for expression recognition via the movements of individual facial muscles during the slight instant changes in facial expression. This thesis starts by introducing Paul Ekman's work, the 6 basic emotions, and existing methods based on feature extraction or facial models. Our system has two main parts: preprocessing and the recognition method. The difference between training and test environments, such as illumination, or the face size and skin color of different subjects under testing, is usually the major factor influencing recognition accuracy. We therefore propose a preprocessing step in the first part of our system: we first perform face detection and facial feature detection to locate facial features. We then perform a rotation calibration based on the horizontal line obtained by connecting both eyes. The complete face region can be extracted by using facial models. Lastly, the face region is calibrated for illumination and resized to the same resolution to fix the dimensionality of the feature vector. After preprocessing, we can reduce the difference among images. The second part of our proposed system is the recognition method. Here we use Gabor filter banks with ROI capture to obtain the feature vector, and principal component analysis (PCA) and linear discriminant analysis (LDA) for dimensionality reduction to reduce the computation time. Finally, a support vector machine (SVM) is adopted as our classifier. The experimental results show that the proposed method can achieve 86.1%, 96.9%, and 89.0% accuracy on the three existing datasets JAFFE, TFEID, and CK+ respectively (based on leave-one-person-out evaluation).
We also tested the performance on the 101SC dataset that we collected and prepared ourselves. This dataset is relatively difficult for recognition but closer to real-world scenarios. The proposed method achieves 62.1% accuracy on it. We also used this method to participate in the 8th UTMVP (Utechzone Machine Vision Prize) competition, where we ranked second out of 10 teams.
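The rotation-calibration step based on the line connecting both eyes can be sketched as follows. The eye coordinates are hypothetical, and image-coordinate conventions (y growing downward) are assumed:

```python
import math

def roll_angle(left_eye, right_eye):
    """In-plane rotation (degrees) of the line joining the eye centers;
    rotating the image by -angle levels the eyes."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def rotate_point(p, center, degrees):
    """Rotate a point about `center`; applied to every pixel or landmark,
    this is the calibration rotation."""
    a = math.radians(degrees)
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + x * math.cos(a) - y * math.sin(a),
            center[1] + x * math.sin(a) + y * math.cos(a))

left, right = (100, 120), (160, 140)   # right eye 20 px lower in the image
angle = roll_angle(left, right)
mid = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
l2 = rotate_point(left, mid, -angle)
r2 = rotate_point(right, mid, -angle)
# After rotating by -angle, both eyes share the same y coordinate.
```

The same `-angle` rotation would then be applied to the whole face region before illumination calibration and resizing.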
Ren, Yuan. "Facial Expression Recognition System." Thesis, 2008. http://hdl.handle.net/10012/3516.
Barbosa, Pedro Nelson Sampaio. "Automotive Facial Driver Recognition." Master's thesis, 2017. http://hdl.handle.net/1822/54768.
Full textDependency on technology is quite real when it comes to the automotive world. Currently there are several automated systems that can be found in cars: lights control, seats control, brakes control and the list goes on. These systems enhance the safety of drivers but raise other issues that need to be addressed. In the automotive market, when trying to stand out, some brands seek to attract consumers through the originality and provide options that were never seen before in their vehicles. Nowadays some vehicles have the option to change some funcionalities depending on the driver’s will, such as seat and steering wheel height, suspension/motor mode, among others. But with this level of changes a question arises: will the driver have to make these changes every time he uses the car? The answer is no, because if the user somehow performs a driver’s identification, the car system can load the personal costumizations maked by that particular person. This work realize the driver’s recognition automatically and with the minimum of hindrance. For this purpose it is acquire an image of the driver’s face in order to know his identification. This system will recognize the driver and send his identification to an external system, so that this second system can perform the required customization. The system to build in this dissertation have to be small in size, quickly realize the driver’s recognition and to inform her identity to an external system. The necessary process for the use of this system must be simple, and allow the graphic visualization of the whole process that is taking place. The final system aims to increase the technical knowledge of the "Project INNOVCAR", which is a project resulting from the partnership between the Minho’s University and the renowned multinational in the automotive world, Bosch. 
All the development and conclusions drawn during this work will revert to this project, and the system developed can also be integrated into the DSM (Driver Simulator Mockup). The DSM is a simulator of an automobile in a virtual world, in which all of its component systems use the TCP/IP protocol suite as their means of communication. The final system developed in this dissertation should likewise use the TCP/IP protocol suite to receive commands and to send the driver's identity.
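The abstract above describes a recognizer that sends the driver's identity to an external customization system over TCP/IP. A minimal sketch of what such a message exchange might look like is below; the newline-delimited JSON format, the field names, and the address are illustrative assumptions, not details taken from the thesis.

```python
import json

def encode_identity(driver_id: int, name: str) -> bytes:
    """Pack the recognized driver's identity into one newline-delimited JSON message."""
    return (json.dumps({"driver_id": driver_id, "name": name}) + "\n").encode("utf-8")

def decode_identity(payload: bytes) -> dict:
    """Parse a message produced by encode_identity back into a dict."""
    return json.loads(payload.decode("utf-8"))

# Sending the identity to the external system (host/port are placeholders):
# import socket
# with socket.create_connection(("192.168.0.10", 5000)) as sock:
#     sock.sendall(encode_identity(3, "driver_a"))
```

A newline delimiter lets the receiving side split a TCP byte stream back into individual messages, which matters because TCP itself preserves no message boundaries.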
This work was financed by the project "INNOVCAR: Inovação para Veículos Inteligentes" (n02797), co-financed by FEDER through Portugal 2020 – Programa Operacional Competitividade e Internacionalização (COMPETE2020).
Gupta, Muskan. "Facial Detection and Recognition." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19521.
Full text
Wu, Hung-Yu, and 吳弘裕. "Facial Feature Extraction and Recognition." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/48308473141317313685.
Full text大同工學院
電機工程研究所
87
In recent years, the authentication problem has become more serious. In this thesis, we propose an automatic face recognition system consisting of two parts: facial feature extraction and face recognition. In the feature-extraction part, we first extract the edges of the input image, obtaining a binary image that expresses the contour of the face. Then, based on the symmetry of the face and the spatial relationships among the hair, eyes, nose, mouth, and neck, the positions of the facial features can be located. This yields the rectangular facial region we want, which is then normalized. To reduce the influence of illumination, we apply histogram equalization to the image before building the reference database and before recognizing a test image. In the recognition part, the eigenface approach is used to identify the person.
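The pipeline this abstract describes (histogram equalization for illumination, then eigenface matching) can be sketched roughly as follows. This is a generic NumPy illustration of the eigenface technique, not the thesis's actual code; the array shapes, the parameter `k`, and the nearest-neighbour matching rule are assumptions.

```python
import numpy as np

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Histogram equalization of an 8-bit grayscale image to reduce illumination effects."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize CDF to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)            # remap each pixel through the CDF

def train_eigenfaces(faces: np.ndarray, k: int):
    """faces: (n, d) matrix, one flattened face per row.
    Returns the mean face, the top-k eigenfaces, and the gallery projections."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Right singular vectors of the centered data are the eigenfaces
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                                 # (k, d)
    return mean, eigenfaces, centered @ eigenfaces.T    # gallery coords: (n, k)

def recognize(probe: np.ndarray, mean, eigenfaces, gallery_proj) -> int:
    """Project a probe face into eigenface space; return the nearest gallery index."""
    p = (probe - mean) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(gallery_proj - p, axis=1)))
```

In practice `hist_equalize` would be applied to every face before both training and matching, mirroring the abstract's use of equalization on both the reference database and the test image.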
Hsueh, Ming-Kai, and 薛名凱. "Facial Expression Recognition with WebCam." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/24120668987886256360.
Full text
National Taipei University of Technology
Graduate Institute of Automation Technology
92 (ROC year; 2003)
It is very easy for a human being to recognize emotion from facial expressions, but it is not so simple for computers. In this research, we use a common video device to build a system that distinguishes a person's emotion automatically; the system works well with a neural network. The method is based on Ekman's Action Units (AUs) to extract facial features. First, images of the face are captured with a CMOS WebCam, and the motion of the facial features is detected with image-processing techniques. These features are then fed into a neural network to recognize the person's emotion. The system can recognize a changing emotion within a short time, with a recognition rate above eighty percent, which is strong enough to demonstrate its practicality.
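The final stage this abstract describes, a neural network mapping extracted facial-feature motions to emotion labels, can be illustrated with a tiny one-hidden-layer network trained by gradient descent. The AU-displacement inputs, labels, and network size below are synthetic placeholders for illustration, not the thesis's architecture or data.

```python
import numpy as np

def train_mlp(X, y, hidden=8, epochs=1000, lr=0.5, seed=0):
    """Train a one-hidden-layer tanh/softmax network on (AU-displacement, emotion) pairs."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], int(y.max()) + 1
    W1 = rng.normal(0.0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    Y = np.eye(n_out)[y]                             # one-hot targets
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                     # hidden activations
        logits = H @ W2 + b2
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)            # softmax probabilities
        G = (P - Y) / len(X)                         # softmax cross-entropy gradient
        dH = (G @ W2.T) * (1.0 - H**2)               # backprop through tanh
        W2 -= lr * (H.T @ G); b2 -= lr * G.sum(axis=0)
        W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    """Return the most likely emotion label for each feature vector."""
    return np.argmax(np.tanh(X @ W1 + b1) @ W2 + b2, axis=1)
```

On a toy separable dataset this fits quickly; a real system would train on feature trajectories extracted from webcam frames, as the abstract describes.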