Doctoral dissertations on the topic "Facial recognition"


Browse the top 50 doctoral dissertations on the topic "Facial recognition".


1

Boraston, Zillah Louise. "Emotion recognition from facial and non-facial cues". Thesis, University College London (University of London), 2008. http://discovery.ucl.ac.uk/1445207/.

Abstract:
The recognition of another's emotion is a vital component of social interaction, and a number of brain regions have been implicated in this process. This thesis describes a series of experiments which investigate further the neural basis of emotion recognition, and its disruption in autism, a disorder characterised by profound impairments in social and emotional understanding. First, I attempted to determine more precisely the role of two brain regions, the amygdala and fusiform gyrus, using multivariate analysis to investigate whether the identity of observed emotions is represented in the spatial pattern of activity in these regions. I next focused on a particular cue to emotion - that of social movement. For this purpose, I designed a novel test of emotion recognition using abstract animations. I used this in an fMRI study together with emotion recognition tasks relying on facial expression and prosody. I found that some brain regions involved in processing these more commonly studied cues were also recruited in emotion recognition from the animations. The final studies described here are concerned with emotion recognition in autism. I administered the social movement-based test of emotion recognition to adults with autism and found a deficit in sadness recognition, which extended to the recognition of sadness from facial expressions. Finally, I investigated the impact on emotion recognition of expertise with sensory cues, returning again to the processing of facial expressions. I employed a more subtle test of emotion processing, a posed smile discrimination task, and found impaired performance in the autism group and also reduced gaze to the eye region. These findings are discussed in view of current models of emotion recognition, with reference to the role of the amygdala and its interactions with specialised cortical regions, and the impact of early social experience on subsequent social perceptual and social cognitive ability.
2

Sutherland, Kenneth Gavin Neil. "Automatic facial recognition based on facial feature analysis". Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/13048.

Abstract:
As computerised storage and control of information is now a reality, it is increasingly necessary that personal identity verification be used as the automated method of access control to this information. Automatic facial recognition is now being viewed as an ideal solution to the problem of unobtrusive, high security, personal identity verification. However, few researchers have yet managed to produce a face recognition algorithm capable of performing successful recognition, without requiring substantial data storage for the personal information. This thesis reports the development of a feature and measurement based system of facial recognition, capable of storing the intrinsics of a facial image in a very small amount of data. The parameterisation of the face into its component characteristics is essential to both human and automated face recognition. Psychological and behavioural research has been reviewed in this thesis in an attempt to establish any key pointers, in human recognition, which can be exploited for use in an automated system. A number of different methods of automated facial recognition which perform facial parameterisation in a variety of different ways are discussed. In order to store the relevant characteristics and measurements about the face, the pertinent facial features must be precisely located from within the image data. A novel technique of Limited Feature Embedding, which locates the primary facial features with a minimum of computational load, has been successfully designed and implemented. The location process has been extended to isolate a number of other facial features. With regard to the earlier review, a new method of facial parameterisation has been devised. Incorporated in this feature set are local feature data and structural measurement information about the face. 
A probabilistic method of inter-person comparisons which facilitates recognition even in the presence of expressional and temporal changes, has been successfully implemented. Comprehensive results of this novel recognition technique are presented for a variety of different operating conditions.
3

Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition". Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.

Abstract:
This thesis examines the research and development of new approaches for face and facial expression recognition within the fields of computer vision and biometrics. Expression variation is a challenging issue in current face recognition systems and current approaches are not capable of recognizing facial variations effectively within human-computer interfaces, security and access control applications. This thesis presents new contributions for performing face and expression recognition simultaneously; face recognition in the wild; and facial expression recognition in challenging environments. The research findings include the development of new factor analysis and deep learning approaches which can better handle different facial variations.
4

Bordon, Natalie Sarah. "Facial affect recognition in psychosis". Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/22865.

Abstract:
While a correlation between psychosis and an increased risk of engaging in aggressive behaviours has been established, many factors that may contribute to this risk have been explored. Patients with a diagnosis of psychosis have been shown to have significant difficulties in facial affect recognition (FAR), and some authors have proposed that this may contribute to the risk of displaying aggressive or violent behaviours. A systematic review of the current evidence on the links between facial affect recognition and aggression was conducted. Results were varied, with some studies providing evidence of a link between emotion recognition difficulties and aggression, while others were unable to establish such an association. Results should be interpreted with some caution, as the quality of the included studies was poor owing to small sample sizes, insufficient power and limited reporting of results. Adequately powered, randomised controlled studies using appropriate blinding procedures and validated measures are therefore required. There is a substantial evidence base demonstrating difficulties in emotional perception in patients with psychosis, with evidence suggesting a relationship with reduced social functioning, increased aggression and more severe symptoms of psychosis. This review assesses whether there is a causal link between facial affect recognition difficulties and psychosis. The Bradford Hill criteria for establishing a causal relationship from observational data were used to generate key hypotheses, which were then tested against existing evidence. Where a published meta-analysis was not already available, new meta-analyses were conducted. A large effect of FAR difficulties in those with a diagnosis of psychosis was found, with a small to moderate correlation between FAR problems and symptoms of psychosis. Evidence was provided for the existence of FAR problems in those at clinical high risk of psychosis, while remediation of psychosis symptoms did not appear to affect FAR difficulties. There appears to be good evidence for a role of facial affect recognition difficulties in the causation of psychosis, though larger, longitudinal studies are required to provide further support.
5

Huang, Weilin. "Robust facial representation for recognition". Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/robust-facial-representation-for-recognition(ee2f295c-7b1a-4966-bd12-17edba43b2b4).html.

Abstract:
One of the main challenges in face recognition lies in robustly representing facial images in unconstrained real-world environments, where the facial appearance of the same person often varies significantly. This thesis investigates both holistic and local feature-based representations, and develops several novel representation models in an effort to mitigate within-person variations and enhance discriminative power. The work first focuses on feature extraction for high-dimensional holistic representations based on intensities. Several linear and nonlinear dimensionality reduction methods are systematically compared. One of the key findings is that linear PCA performs comparably to the most recent nonlinear methods for extracting low-dimensional facial features. Extensive experiments are conducted and results presented to support the findings, together with a quantitative measure of nonlinearity that offers theoretical insight. Following these findings, a robust framework combining an automatic outlier detector and a nearest-subspace classifier is presented. The detector identifies corrupted regions of face images by measuring their reconstructive capability, while the classifier models face data with multiple linear subspaces.
6

Yu, Kaimin. "Towards Realistic Facial Expression Recognition". Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9459.

Abstract:
Automatic facial expression recognition has attracted significant attention over the past decades. Although substantial progress has been achieved for certain scenarios (such as frontal faces in strictly controlled laboratory settings), accurate recognition of facial expressions in realistic environments remains largely unsolved. The main objective of this thesis is to investigate facial expression recognition in unconstrained environments. As one major problem in the literature is the lack of realistic training and testing data, this thesis presents a web-search-based framework for collecting a realistic facial expression dataset from the Web. By adopting an active-learning-based method to remove noisy images from text-based image search results, the proposed approach minimizes human effort during dataset construction and maximizes scalability for future research. Various novel facial expression features are then proposed to address the challenges posed by the newly collected dataset. Finally, a spectral-embedding-based feature fusion framework is presented to combine the proposed features into a more descriptive representation. This thesis also systematically investigates how the number of frames in a facial expression sequence affects the performance of recognition algorithms, since sequences may be captured at different frame rates in realistic scenarios. A facial expression keyframe selection method is proposed based on a keypoint-based frame representation. Comprehensive experiments demonstrate the effectiveness of the presented methods.
7

Sheikh, Munaf. "Robust recognition of facial expressions on noise degraded facial images". Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_7054_1306828003.

Abstract:

We investigate the use of noise-degraded facial images in facial expression recognition. In particular, we trained Gabor+SVM classifiers to recognize facial expressions in images with various types of noise. We applied Gaussian noise, Poisson noise, varying levels of salt-and-pepper noise, and speckle noise to noiseless facial images. Classifiers were trained on images without noise and then tested on the noisy images. Next, the classifiers were trained on noisy images and tested on both noisy and noiseless images. Finally, classifiers were tested while the level of salt-and-pepper noise in the test set was increased. Our results showed distinct degradation of recognition accuracy. We also discovered that certain types of noise, particularly Gaussian and Poisson noise, boost recognition rates to levels greater than those achieved on normal, noiseless images. We attribute this effect to the Gaussian envelope component of Gabor filters being sympathetic to Gaussian-like noise of similar variance. Finally, using linear regression, we fitted a mathematical model to this degradation and used it to suggest how recognition rates would degrade further should more noise be added to the images.
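The noise-degradation setup can be sketched as follows. This is a minimal illustration, not the thesis code: a random array stands in for a face image, and the Gabor+SVM classifier itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_gaussian_noise(img, sigma=0.1):
    """Additive zero-mean Gaussian noise, clipped back to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_and_pepper(img, amount=0.05):
    """Set roughly a fraction `amount` of pixels to 0 (pepper) or 1 (salt)."""
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0.0          # pepper
    noisy[mask > 1.0 - amount / 2] = 1.0    # salt
    return noisy

face = rng.random((64, 64))                 # stand-in for a face image
noisy_g = add_gaussian_noise(face)
noisy_sp = add_salt_and_pepper(face)
print(noisy_g.shape, noisy_sp.shape)        # (64, 64) (64, 64)
```

Sweeping `sigma` or `amount` reproduces the "increasing noise level" axis of the experiments described above.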

8

de, la Cruz Nathan. "Autonomous facial expression recognition using the facial action coding system". University of the Western Cape, 2016. http://hdl.handle.net/11394/5121.

Abstract:
Magister Scientiae - MSc
The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness and surprise, along with the neutral expression); or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System, in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, and combinations of them are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole facial expression recognition with Action Unit recognition.
9

Fraser, Matthew Paul. "Repetition priming of facial expression recognition". Thesis, University of York, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431255.

10

Hsu, Shen-Mou. "Adaptation effects in facial expression recognition". Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403968.

11

Papazachariou, Konstantinos. "Facial analytics for emotional state recognition". Thesis, University of Strathclyde, 2017. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=28672.

Abstract:
For more than 75 years, social scientists have studied human emotions. While numerous theories have been developed about the provenance and number of basic emotions, most agree that they can be grouped into six categories: anger, disgust, fear, joy, sadness and surprise. To evaluate emotions, psychologists have focused their research on facial expression analysis. In recent years, progress in digital technologies has steered researchers in psychology, computer science, linguistics, neuroscience, and related disciplines towards computer systems that analyze and detect human emotions. These algorithms are usually referred to in the literature as facial emotion recognition (FER) systems. In this thesis, two different approaches to recognizing the six basic emotions automatically from still images are described and evaluated. An effective face detection scheme, based on colour techniques and the well-known Viola and Jones (VJ) algorithm, is proposed for localizing the face and facial characteristics within an image. A novel algorithm that exploits the coordinates of the eye centres is applied to align the detected face. To reduce the effects of illumination, homomorphic filtering is applied to the face area. Three regions (mouth, eyes and glabella) are localized and further processed for texture analysis. Although many methods have been proposed in the literature to recognize emotion from the human face, they are not designed to handle partial occlusions and multiple faces. Therefore, a novel algorithm that extracts information through texture analysis from each region of interest is evaluated. Two popular techniques (histograms of oriented gradients and local binary patterns) are used for texture analysis in the aforementioned facial patches. By evaluating several combinations of their principal parameters and two classification techniques (support vector machines and linear discriminant analysis), three classifiers are proposed; these models are enabled depending on which regions are available. Although both classification approaches showed impressive results, LDA proved slightly better, especially in terms of the amount of data to be managed, so the final models used for comparison were trained with LDA. Experiments on the Cohn-Kanade plus (CK+) and Amsterdam Dynamic Facial Expression Set (ADFES) datasets demonstrate that the presented FER algorithm surpasses other significant FER systems in processing time and accuracy. The evaluation involved three experiments: an intra-dataset experiment (training and testing on the same dataset), cross-dataset training/testing between CK+ and ADFES, and finally a new database of selfie photos tested on the pre-trained models. The last two experiments provide evidence that the Emotion Recognition System (ERS) can operate under varied pose and lighting conditions.
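The local binary pattern descriptor used on the facial patches can be sketched as follows. This is a minimal 8-neighbour LBP, not the thesis implementation; a random patch stands in for a mouth or eye region.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern codes for the interior pixels."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Each neighbour >= centre contributes one bit to the code.
        codes |= ((n >= c) << bit).astype(np.uint8)
    return codes

def lbp_histogram(img, bins=256):
    """Normalised LBP histogram: the texture descriptor for one region."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=bins)
    return hist / hist.sum()

rng = np.random.default_rng(2)
patch = rng.integers(0, 256, (32, 32))    # stand-in for a mouth/eye patch
h = lbp_histogram(patch)
print(h.shape, round(float(h.sum()), 6))  # (256,) 1.0
```

Concatenating such per-region histograms (mouth, eyes, glabella) yields the feature vector fed to the SVM or LDA classifier.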
12

Fält, Pontus. "ADVERSARIAL ATTACKS ON FACIAL RECOGNITION SYSTEMS". Thesis, Umeå universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-175887.

Abstract:
In machine learning, neural networks have achieved state-of-the-art performance on image classification problems. However, recent work has exposed a threat to these high-performing networks in the form of adversarial attacks. These attacks fool networks by applying small, hardly perceptible perturbations, and they call the reliability of neural networks into question. This paper analyzes and compares the behavior of adversarial attacks within facial recognition systems, where reliability and safety are crucial.
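The attack family discussed here can be illustrated with a fast-gradient-sign-style perturbation on a toy linear classifier. This is a hedged sketch: the thesis targets neural networks, not this toy model, and all names below are illustrative.

```python
import numpy as np

# FGSM-style perturbation sketch on a toy logistic classifier,
# standing in for the neural networks discussed in the thesis.
rng = np.random.default_rng(3)
w = rng.normal(size=64)        # model weights
x = rng.normal(size=64)        # input "image", true label y = 1
y = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For logistic regression with cross-entropy loss, the gradient of the
# loss with respect to the input is (p - y) * w.
p = sigmoid(w @ x)
grad_x = (p - y) * w

eps = 0.1
x_adv = x + eps * np.sign(grad_x)   # small, hardly perceptible step

print(sigmoid(w @ x_adv) < sigmoid(w @ x))   # True: confidence drops
```

The perturbation is bounded by `eps` in the max norm, which is what makes such attacks hard to perceive while still degrading the classifier.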
13

Henriques, Marco António Silva. "Facial recognition based on image compression". Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17207.

Abstract:
Master's degree in Electronics and Telecommunications Engineering
Facial recognition has received substantial research attention, especially in recent years, and can be considered one of the most successful applications of image analysis and understanding, as the many conferences and new articles published on the subject attest. This research focus is due to the large number of applications facial recognition can serve, assisting with many daily human tasks. Although there are many algorithms for facial recognition, many of them very precise, the problem is not completely solved: several obstacles related to environmental conditions affect image acquisition and therefore the recognition. This thesis presents a new solution to the face recognition problem that uses similarity metrics between images, obtained through data compression, namely Finite Context Models. Some approaches in the literature relate facial recognition and data compression, mainly using transform-based methods. The method proposed in this thesis takes an innovative approach, using Finite Context Models to estimate the number of bits needed to encode an image of a subject, given a model trained on a database. This thesis studies the approach described above for possible use in a real authentication system. Detailed experimental results on well-known databases prove the effectiveness of the proposed approach.
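The core idea, estimating the bits a trained finite context model needs to encode a sample, can be sketched on byte strings (images would first be serialized). All names here are illustrative, not the thesis code.

```python
import math
from collections import defaultdict

def fcm_bits(train, target, k=2, alphabet=256):
    """Bits an order-k finite context model (trained on `train`, with
    Laplace smoothing) needs to encode the byte string `target`."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(train)):
        counts[train[i - k:i]][train[i]] += 1

    bits = 0.0
    for i in range(k, len(target)):
        ctx, sym = target[i - k:i], target[i]
        total = sum(counts[ctx].values())
        p = (counts[ctx][sym] + 1) / (total + alphabet)
        bits += -math.log2(p)
    return bits

# A model trained on one "subject" encodes a matching sample in fewer
# bits than a mismatched one -- the similarity cue used for recognition.
a = b"abcabcabcabc" * 20
b = b"xyzxyzxyzxyz" * 20
print(fcm_bits(a, a[:30]) < fcm_bits(a, b[:30]))   # True
```

In the recognition setting, the subject whose model yields the shortest code length for the probe image would be reported as the match.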
14

Kreklewetz, Kimberly. "Facial affect recognition in psychopathic offenders /". Burnaby B.C. : Simon Fraser University, 2005. http://ir.lib.sfu.ca/handle/1892/2166.

15

Edmonds, Emily Charlotte. "Cognitive Mechanisms of False Facial Recognition". Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145362.

Abstract:
Face recognition involves a number of complex cognitive processes, including memory, executive functioning, and perception. A breakdown of one or more of these processes may result in false facial recognition, a memory distortion in which one mistakenly believes that novel faces are familiar. This study examined the cognitive mechanisms underlying false facial recognition in healthy older and younger adults, patients with frontotemporal dementia, and individuals with congenital prosopagnosia. Participants completed face recognition memory tests that included several different types of lures, as well as tests of face perception. Older adults demonstrated a familiarity-based response strategy, reflecting a deficit in source monitoring and impaired recollection of context, as they could not reliably discriminate between study faces and highly familiar lures. In patients with frontotemporal dementia, temporal lobe atrophy alone was associated with a reduction of true facial recognition, while concurrent frontal lobe damage was associated with increased false recognition, a liberal response bias, and an overreliance on "gist" memory when making recognition decisions. Individuals with congenital prosopagnosia demonstrated deficits in configural processing of faces and a reliance on feature-based processing, leading to false recognition of lures that had features in common from study to test. These findings may have important implications for the development of training programs that could serve to help individuals improve their ability to accurately recognize faces.
16

Sierra, Brandon Luis. "COMPARING AND IMPROVING FACIAL RECOGNITION METHOD". CSUSB ScholarWorks, 2017. https://scholarworks.lib.csusb.edu/etd/575.

Abstract:
Facial recognition is the process by which a sample face can be correctly identified by a machine among a group of different faces. With the never-ending need for improvement in the fields of security, surveillance, and identification, facial recognition is becoming increasingly important. Given this importance, it is imperative that the correct faces be recognized and that the error rate be as low as possible. Despite the wide variety of current methods for facial recognition, there is no clear-cut best method. This project reviews and examines three different methods for facial recognition: Eigenfaces, Fisherfaces, and Local Binary Patterns, to determine which has the highest prediction accuracy. The three methods are reviewed and then compared experimentally, using OpenCV, CMake, and Visual Studio as tools. Analyses were conducted by feeding in a number of sample images of different people, who served as experimental subjects: the machine is first trained to generate features for each person among the subjects, and a new image is then tested against the "learned" data and labeled as one of the subjects. With experimental data analysis, the Eigenfaces method was determined to have the highest prediction rate of the three algorithms tested, and the Local Binary Pattern Histogram (LBPH) method the lowest. LBPH was therefore selected for algorithm improvement. In this project, LBPH was improved by identifying the most significant regions of the histograms for each person at training time, with region weights assigned according to grayscale contrast. At recognition time, given a new face, different weights are assigned to different regions to increase the prediction rate and speed up real-time recognition. The experimental results confirmed the performance improvement.
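The weighted-region comparison described above can be sketched as a weighted chi-square distance over per-region LBP histograms. This is a minimal illustration with random histograms, not the project's OpenCV code; the region count and weights are assumptions.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalised histograms."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def weighted_distance(regions_a, regions_b, weights):
    """LBPH-style face distance: per-region histogram distances combined
    with per-region weights (e.g. larger weights for eyes and mouth)."""
    return sum(w * chi_square(ha, hb)
               for ha, hb, w in zip(regions_a, regions_b, weights))

rng = np.random.default_rng(4)

def rand_hist(bins=59):
    h = rng.random(bins)
    return h / h.sum()

face_a = [rand_hist() for _ in range(8 * 8)]  # 8x8 grid of region histograms
face_b = [rand_hist() for _ in range(8 * 8)]
weights = np.ones(64)                         # uniform weights for the sketch

print(weighted_distance(face_a, face_a, weights))         # 0.0
print(weighted_distance(face_a, face_b, weights) > 0.0)   # True
```

The project's improvement amounts to learning non-uniform `weights` from grayscale contrast at training time rather than using the uniform weights shown here.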
17

Lincoln, Michael C. "Pose-independent face recognition". Thesis, University of Essex, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250063.

18

Montgomery, Tracy L. "Composite artistry meets facial recognition technology : exploring the use of facial recognition technology to identify composite images". Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5477.

Abstract:
CHDS State/Local
Approved for public release; distribution is unlimited
Forensic art has been used for decades as a tool for law enforcement. When crime witnesses can provide a suspect description, an artist can create a composite drawing in hopes that a member of the public will recognize the subject. In cases where a suspect is captured on film, that photograph can be submitted into a facial recognition program for comparison with millions of possible matches, offering abundant opportunities to identify the suspect. Because composite images are reliant on a chance opportunity for a member of the public to see and recognize the subject depicted, they are unable to leverage the robust number of comparative opportunities associated with facial recognition programs. This research investigates the efficacy of combining composite forensic artistry with facial recognition technology to create a viable investigative tool to identify suspects, as well as better informing artists and program creators on how to improve the success of merging these technologies. This research ultimately reveals that while facial recognition programs can recognize composite renderings, they cannot achieve a level of accuracy that is useful to investigators. It also suggests opportunities to better design facial recognition programs to be more successful in the identification of composite images.
19

Wang, Shihai. "Boosting learning applied to facial expression recognition". Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511940.

20

Besel, Lana Diane Shyla. "Empathy : the role of facial expression recognition". Thesis, University of British Columbia, 2006. http://hdl.handle.net/2429/30730.

Abstract:
This research examined whether people with higher dispositional empathy are better at recognizing facial expressions of emotion at faster presentation speeds. Facial expressions of emotion, taken from Pictures of Facial Affect (Ekman & Friesen, 1976), were presented at two durations: 47 ms and 2008 ms. Participants were 135 undergraduate students, who identified the emotion displayed in each expression from a list of the basic emotions. The first part of this research explored connections between expression recognition and the common cognitive empathy/emotional empathy distinction. Two factors from the Interpersonal Reactivity Scale (IRS; Davis, 1983) measured self-reported tendencies to experience cognitive empathy and emotional empathy: Perspective Taking (IRS-PT) and Empathic Concern (IRS-EC), respectively. Results showed that emotional empathy significantly positively predicted performance at 47 ms, but not at 2008 ms, and cognitive empathy did not significantly predict performance at either duration. The second part examined empathy deficits. The kinds of emotional empathy deficits that comprise psychopathy were measured by the Self-Report Psychopathy Scale (SRP-III; Paulhus, Hemphill, & Hare, in press). Cognitive empathy deficits were explored using the Empathy Quotient (EQ; Shaw et al., 2004). Results showed that the callous affect factor of the SRP (SRP-CA) was the only significant predictor at 47 ms, with higher callous affect scores associated with lower performance. SRP-CA is a deficit in emotional empathy, so these results match those of the first part. At 2008 ms, the social skills factor of the EQ was significantly positively predictive, indicating that people with less social competence had more trouble recognizing facial expressions at longer presentation durations. Neither the total SRP nor EQ scores significantly predicted identification accuracy at either duration. Together, the results suggest that a disposition to react emotionally to the emotions of others, and to remain other-focussed, provides a specific advantage for accurately recognizing briefly presented facial expressions, compared to people with lower dispositional emotional empathy.
Arts, Faculty of
Psychology, Department of
Graduate
Style APA, Harvard, Vancouver, ISO itp.
21

Oberst, Leah. "Facial and Body Emotion Recognition in Infancy". UKnowledge, 2014. http://uknowledge.uky.edu/psychology_etds/48.

Pełny tekst źródła
Streszczenie:
Adults are experts at assessing emotions, an ability essential for appropriate social interaction. The present study investigated this ability's development, examining infants' matching of facial and body emotional information. In Experiment 1, 18 6.5-month-olds were familiarized to angry or happy bodies or faces. Those familiarized to bodies were tested with familiar and novel emotional faces; those familiarized to faces were tested with bodies. The 6.5-month-old infants exhibited a preference for the familiar emotion, matching between faces and bodies. In Experiment 2, 18 6.5-month-olds were tested with faces and bodies displaying anger and sadness. Infants familiarized to faces showed a familiarity preference; infants familiarized to bodies failed to discriminate. Thus, infants generalized from faces to bodies, but failed in the reverse. A follow-up study increased the duration of familiarization: 12 additional 6.5-month-olds were exposed to two 30-s familiarizations with bodies and tested with faces. The additional exposure induced matching of emotions. In Experiment 3, 18 3.5-month-olds were tested using Experiment 1's stimuli and methodology. The 3.5-month-old infants did not discriminate during test trials. These results suggest that 6.5-month-old infants are capable of matching angry, sad and happy faces and bodies, whereas 3.5-month-olds are not, suggesting a developmental change between 3.5 and 6.5 months of age.
Style APA, Harvard, Vancouver, ISO itp.
22

張晶凝 i Ching-ying Crystal Cheung. "Facial emotion recognition after subcortical cerebrovascular diseases". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31224155.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
23

Fan, Xijian. "Spatio-temporal framework on facial expression recognition". Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/88732/.

Pełny tekst źródła
Streszczenie:
This thesis presents an investigation into two topics that are important in facial expression recognition: how to employ the dynamic information in facial expression image sequences, and how to efficiently extract context and other relevant information from different facial regions. This involves the development of spatio-temporal frameworks for recognising facial expression. The thesis proposes three novel frameworks for recognising facial expression. The first framework uses sparse representation to extract features from patches of a face to improve recognition performance, applying part-based methods that are robust to image misalignment. In addition, the use of sparse representation reduces the dimensionality of the features, improves their semantic meaning, and represents a face image more efficiently. Since a facial expression is a dynamic process, and that process contains information which describes the expression more effectively, it is important to capture such dynamic information so as to recognise facial expressions over an entire video sequence. Thus, the second framework uses two types of dynamic information to enhance recognition: a novel spatio-temporal descriptor based on PHOG (pyramid histogram of gradients) to represent changes in facial shape, and dense optical flow to estimate the movement (displacement) of facial landmarks. The framework views an image sequence as a spatio-temporal volume and uses temporal information to represent the dynamic movement of facial landmarks associated with a facial expression. Specifically, a spatial descriptor representing local shape is extended to the spatio-temporal domain to capture changes in the local shape of facial sub-regions along the temporal dimension, giving 3D facial component sub-regions of the forehead, mouth, eyebrows and nose. An optical flow descriptor is also employed to extract temporal information. 
The fusion of these two descriptors enhances the dynamic information and achieves better performance than either descriptor individually. The third framework also focuses on analysing the dynamics of facial expression sequences to represent spatio-temporal dynamic information (i.e., velocity). Two types of features are generated: a spatio-temporal shape representation to enhance the local spatial and dynamic information, and a dynamic appearance representation. In addition, an entropy-based method is introduced to capture the spatial relationships among different parts of a face by computing the entropy value of its sub-regions.
Style APA, Harvard, Vancouver, ISO itp.
24

Zhou, Yun. "Embedded Face Detection and Facial Expression Recognition". Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/583.

Pełny tekst źródła
Streszczenie:
Face detection has been applied in many fields such as surveillance, human-machine interaction, entertainment and health care. Two main reasons for the extensive attention on this research domain are: 1) there is a clear need for face recognition systems given their widespread use in security, and 2) face recognition is more user-friendly and faster than other approaches, since it requires almost no action from the user. The system is based on an ARM Cortex-A8 development board and includes the porting of a Linux operating system, the development of drivers, and face detection using Haar-like features and the Viola-Jones algorithm. The face detection system uses the AdaBoost algorithm to detect human faces in frames captured by the camera. The thesis compares the pros and cons of several popular image processing algorithms. The facial expression recognition system involves face detection and emotion feature interpretation, and consists of an offline training part and an online test part. An active shape model (ASM) is applied for facial feature node detection, optical flow for face tracking, and a support vector machine (SVM) for classification.
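As background, the speed of the Viola-Jones detector rests on Haar-like features evaluated over an integral image, which makes any rectangle sum an O(1) lookup. A minimal NumPy sketch of that core idea (illustrative only; it is not the thesis's ARM/Linux implementation) might look like:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h rectangle at (x, y), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle (edge) Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

A trained AdaBoost cascade thresholds many such feature responses in stages, rejecting most non-face windows after only a few cheap evaluations.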
Style APA, Harvard, Vancouver, ISO itp.
25

Tang, Wing Hei Iris. "Facial expression recognition for a sociable robot". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/46467.

Pełny tekst źródła
Streszczenie:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 53-54).
In order to develop a sociable robot that can operate in the social environment of humans, we need to develop a robot system that can recognize the emotions of the people it interacts with and can respond to them accordingly. In this thesis, I present a facial expression system that recognizes the facial features of human subjects in an unsupervised manner and interprets the facial expressions of the individuals. The facial expression system is integrated with an existing emotional model for the expressive humanoid robot, Mertz.
by Wing Hei Iris Tang.
M.Eng.
Style APA, Harvard, Vancouver, ISO itp.
26

Muller, Neil. "Facial recognition, eigenfaces and synthetic discriminant functions". Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51756.

Pełny tekst źródła
Streszczenie:
Thesis (PhD)--University of Stellenbosch, 2001.
ENGLISH ABSTRACT: In this thesis we examine some aspects of automatic face recognition, with specific reference to the eigenface technique. We provide a thorough theoretical analysis of this technique which allows us to explain many of the results reported in the literature. It also suggests that clustering can improve the performance of the system, and we provide experimental evidence of this. From the analysis, we also derive an efficient algorithm for updating the eigenfaces. We demonstrate the ability of an eigenface-based system to represent faces efficiently (using at most forty values in our experiments) and also demonstrate our updating algorithm. Since we are concerned with aspects of face recognition, one of the important practical problems is locating the face in an image, subject to distortions such as rotation. We review two well-known methods for locating faces based on the eigenface technique. These algorithms are computationally expensive, so we illustrate how the Synthetic Discriminant Function (SDF) can be used to reduce the cost. For our purposes, we propose the concept of a linearly interpolating SDF and we show how this can be used not only to locate the face, but also to estimate the extent of the distortion. We derive conditions which ensure an SDF is linearly interpolating. We show how many of the more popular SDF-type filters are related to the classic SDF and thus extend our analysis to a wide range of SDF-type filters. Our analysis suggests that by carefully choosing the training set to satisfy our condition, we can significantly reduce the size of the training set required. This is demonstrated by using the equidistributing principle to design a suitable training set for the SDF. All this is illustrated with several examples. Our results with the SDF allow us to construct a two-stage algorithm for locating faces. We use the SDF-type filters to obtain initial estimates of the location and extent of the distortion. 
This information is then used by one of the more accurate eigenface-based techniques to obtain the final location from a reduced search space. This significantly reduces the computational cost of the process.
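The efficient representation described above (a face captured by at most forty coefficients) is, at its core, PCA on vectorized face images. A minimal sketch via SVD is shown below; this is an illustrative outline under generic assumptions, not the thesis's updating algorithm or clustering scheme:

```python
import numpy as np

def eigenfaces(faces, k=40):
    """Compute the mean face and top-k eigenfaces from an
    (n_faces, n_pixels) data matrix of vectorized face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are principal directions ("eigenfaces") in pixel space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Represent a face by its k coefficients in the eigenface basis."""
    return basis @ (face - mean)

def reconstruct(coeffs, mean, basis):
    """Approximate the face back from its eigenface coefficients."""
    return mean + basis.T @ coeffs
```

Recognition then reduces to comparing coefficient vectors, e.g. by nearest neighbour in the k-dimensional eigenface space.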
AFRIKAANSE OPSOMMING: In hierdie tesis ondersoek ons sommige aspekte van automatiese gesigs- herkenning met spesifieke verwysing na die sogenaamde eigengesig ("eigen- face") tegniek. ‘n Deeglike teoretiese analise van hierdie tegniek stel ons in staat om heelparty van die resultate wat in die literatuur verskyn te verduidelik. Dit bied ook die moontlikheid dat die gedrag van die stelsel sal verbeter as die gesigte in verskillende klasse gegroepeer word. Uit die analise, herlei ons ook ‘n doeltreffende algoritme om die eigegesigte op te dateer. Ons demonstreer die vermoë van die stelsel om gesigte op ‘n doeltreffende manier te beskryf (ons gebruik hoogstens veertig eigegesigte) asook ons opdateringsalgoritme met praktiese voorbeelde. Verder ondersoek ons die belangrike probleem om gesigte in ‘n beeld te vind, veral as rotasie- en skaalveranderinge plaasvind. Ons bespreek twee welbekende algoritmes om gesigte te vind wat op eigengesigte gebaseer is. Hierdie algoritme is baie duur in terme van numerise berekeninge en ons ontwikkel n koste-effektiewe metode wat op die sogenaamde "Synthetic Discriminant Functions" (SDF) gebaseer is. Vir hierdie doel word die begrip van lineêr interpolerende SDF’s ingevoer. Dit stel ons in staat om nie net die gesig te vind nie, maar ook ‘n skatting van sy versteuring te bereken. Voorts kon ons voorwaardes aflei wat verseker dat ‘n SDF lineêr interpolerend is. Aangesien ons aantoon dat baie van die gewilde SDF-tipe filters aan die klassieke SDF verwant is, geld ons resultate vir ‘n hele verskeidenheid SDF- tipe filters. Ons analise toon ook dat ‘n versigtige keuse van die afrigdata mens in staat stel om die grootte van die afrigstel aansienlik te verminder. Dit word duidelik met behulp van die sogenaamde gelykverspreidings beginsel ("equidistributing principle") gedemonstreer. Al hierdie aspekte van die SDF’s word met voorbeelde geïllustreer. 
Ons resultate met die SDF laat ons toe om ‘n tweestap algoritme vir die vind van ‘n gesig in ‘n beeld te ontwikkel. Ons gebruik eers die SDF-tipe filters om skattings vir die posisie en versteuring van die gesig te kry en dan verfyn ons hierdie skattings deur een van die teknieke wat op eigengesigte gebaseer is te gebruik. Dit lei tot ‘n aansienlike vermindering in die berekeningstyd.
Style APA, Harvard, Vancouver, ISO itp.
27

Li, Zhenghong. "Automated Facial Action Unit Recognition in Horses". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281323.

Pełny tekst źródła
Streszczenie:
In recent years, with the development of deep learning and the applications of deep learning models, computer vision tasks such as human facial action unit recognition have made significant progress. Inspired by these works, we have investigated the possibility of training a model to recognize horse facial action units automatically. With the help of the Equine Facial Action Coding System (EquiFACS) created by veterinarians recently, our aim has been to detect EquiFACS units from images and videos. In this project, we proposed a cascade framework for horse facial action unit recognition from images. We firstly trained several object detectors to detect the predefined regions of interest. Then we applied binary classifiers for each action unit in related regions. We experimented with different types of neural network classifiers and found AlexNet to work the best in our framework. Additionally, we also transferred a model for human facial action unit recognition to horses and explored strategies to learn the correlations among different action units.
Under de senaste åren, med utvecklingen av djupinlärning och dess tillämpningar, har datorseendeuppgifter så som igenkänning av mänskliga ansiktsaktionsenheter gjort stora framsteg. Inspirerad av dessa arbeten har vi undersökt möjligheten att hitta en modell för att automatiskt känna igen hästars ansiktsuttryck. Med hjälp av Equine Facial Action Coding System som nyligen skapats av veterinärer kan vi upptäcka ansiktsaktionsenheter hos hästar som definieras i detta system från bilder och videor. I detta projekt föreslog vi ett kaskadramverk för igenkänning av hästens ansiktsaktionsenheter från bilder. Först tränade vi flera objektdetektorer för att upptäcka de fördefinierade regionerna av intresse. Sedan använde vi binära klassificeringar för varje aktionsenhet i relaterade regioner.Vi testade olika modeller av klassificerare och fann att AlexNet fungerade bäst i våra experiment. Dessutom överförde vi också en modell för mänsklig  ansiktsaktionsenhetsigenkänning till hästar och utforskade strategier för att lära sig korrelationerna mellan olika aktionsenheter.
Style APA, Harvard, Vancouver, ISO itp.
28

Toure, Zikra. "Human-Machine Interface Using Facial Gesture Recognition". Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1062841/.

Pełny tekst źródła
Streszczenie:
This Master's thesis proposes a human-computer interface for individuals with limited hand movement that incorporates facial gestures as a means of communication. The system recognizes faces and extracts facial gestures, mapping them into Morse code that is translated into English in real time. The system is implemented on a MacBook computer using Python, the OpenCV library, and the Dlib library. The system was tested by 6 students. Five of the testers were not familiar with Morse code; they performed the experiments in an average of 90 seconds. One tester was familiar with Morse code and performed the experiment in 53 seconds. It is concluded that errors occurred due to variations in the testers' features, lighting conditions, and unfamiliarity with the system. Implementing auto-correction and auto-prediction would decrease typing time considerably and make the system more robust.
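The Morse-to-English step of such an interface can be sketched with a simple lookup table. The mapping from particular facial gestures to dots and dashes is not specified in the abstract, so only a hypothetical decoding stage is shown here:

```python
# International Morse code for letters; a real system would add digits
# and punctuation.
MORSE = {
    '.-': 'A', '-...': 'B', '-.-.': 'C', '-..': 'D', '.': 'E',
    '..-.': 'F', '--.': 'G', '....': 'H', '..': 'I', '.---': 'J',
    '-.-': 'K', '.-..': 'L', '--': 'M', '-.': 'N', '---': 'O',
    '.--.': 'P', '--.-': 'Q', '.-.': 'R', '...': 'S', '-': 'T',
    '..-': 'U', '...-': 'V', '.--': 'W', '-..-': 'X', '-.--': 'Y',
    '--..': 'Z',
}

def decode(symbols):
    """Translate a gesture-derived Morse string into English text.
    Letters are separated by spaces, words by ' / '."""
    words = symbols.split(' / ')
    return ' '.join(
        ''.join(MORSE.get(code, '?') for code in word.split())
        for word in words
    )
```

For example, `decode('.... ..')` yields `'HI'`; the upstream gesture recognizer would emit one dot or dash per detected facial gesture.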
Style APA, Harvard, Vancouver, ISO itp.
29

Forch, Valentin, Julien Vitay i Fred H. Hamker. "Recurrent Spatial Attention for Facial Emotion Recognition". Technische Universität Chemnitz, 2020. https://monarch.qucosa.de/id/qucosa%3A72453.

Pełny tekst źródła
Streszczenie:
Automatic processing of emotion information through deep neural networks (DNN) can have great benefits (e.g., for human-machine interaction). Vice versa, machine learning can profit from concepts known from human information processing (e.g., visual attention). We employed a recurrent DNN incorporating a spatial attention mechanism for facial emotion recognition (FER) and compared the output of the network with results from human experiments. The attention mechanism enabled the network to select relevant face regions to achieve state-of-the-art performance on a FER database containing images from realistic settings. A visual search strategy showing some similarities with human saccading behavior emerged when the model’s perceptive capabilities were restricted. However, the model then failed to form a useful scene representation.
Style APA, Harvard, Vancouver, ISO itp.
30

Schulze, Martin Michael. "Facial expression recognition with support vector machines". [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10952963.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Cheung, Ching-ying Crystal. "Facial emotion recognition after subcortical cerebrovascular diseases /". Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk:8888/cgi-bin/hkuto%5Ftoc%5Fpdf?B23425027.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Ainsworth, Kirsty. "Facial expression recognition and the autism spectrum". Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/8287/.

Pełny tekst źródła
Streszczenie:
An atypical recognition of facial expressions of emotion is thought to be part of the characteristics associated with an autism spectrum disorder diagnosis (DSM-5, 2013). However, despite over three decades of experimental research into facial expression recognition (FER) in autism spectrum disorder (ASD), conflicting results are still reported (Harms, Martin, and Wallace, 2010). The thesis presented here aims to explore FER in ASD using novel techniques, as well as assessing the contribution of a co-occurring emotion-blindness condition (alexithymia) and autism-like personality traits. Chapter 1 provides a review of the current literature surrounding emotion perception in ASD, focussing specifically on evidence for, and against, atypical recognition of facial expressions of emotion in ASD. The experimental chapters presented in this thesis (Chapters 2, 3 and 4) explore FER in adults with ASD, children with ASD and in the wider, typical population. In Chapter 2, a novel psychophysics method is presented along with its use in assessing FER in individuals with ASD. Chapter 2 also presents a research experiment in adults with ASD, indicating that FER is similar compared to typically developed (TD) adults in terms of the facial muscle components (action units; AUs), the intensity levels and the timing components utilised from the stimuli. In addition to this, individual differences within groups are shown, indicating that better FER ability is associated with lower levels of ASD symptoms in adults with ASD (measured using the ADOS; Lord et al. (2000)) and lower levels of autism-like personality traits in TD adults (measured using the Autism-Spectrum Quotient; (S. Baron-Cohen, Wheelwright, Skinner, Martin, and Clubley, 2001)). Similarly, Chapter 3 indicates that children with ASD are not significantly different from TD children in their perception of facial expressions of emotion as assessed using AU, intensity and timing components. 
Chapter 4 assesses the contribution of alexithymia and autism-like personality traits (AQ) to FER ability in a sample of individuals from the typical population. This chapter provides evidence against the idea that alexithymia levels predict FER ability over and above AQ levels. The importance of the aforementioned results are discussed in Chapter 5 in the context of previous research in the field, and in relation to established theoretical approaches to FER in ASD. In particular, arguments are made that FER cannot be conceptualised under an ‘all-or-nothing’ framework, which has been implied for a number of years (Harms et al., 2010). Instead it is proposed that FER is a multifaceted skill in individuals with ASD, which varies according to an individual’s skillset. Lastly, limitations of the research presented in this thesis are discussed in addition to suggestions for future research.
Style APA, Harvard, Vancouver, ISO itp.
33

Vadapalli, Hima Bindu. "Recognition of facial action units from video streams with recurrent neural networks : a new paradigm for facial expression recognition". University of the Western Cape, 2011. http://hdl.handle.net/11394/5415.

Pełny tekst źródła
Streszczenie:
Philosophiae Doctor - PhD
This research investigated the application of recurrent neural networks (RNNs) for recognition of facial expressions based on the facial action coding system (FACS). Support vector machines (SVMs) were used to validate the results obtained by RNNs. In this approach, instead of recognizing whole facial expressions, the focus was on the recognition of action units (AUs) that are defined in FACS. Recurrent neural networks are capable of gaining knowledge from temporal data while SVMs, which are time invariant, are known to be very good classifiers. Thus, the research consists of four important components: comparison of the use of image sequences against single static images, benchmarking feature selection and network optimization approaches, study of inter-AU correlations by implementing multiple-output RNNs, and study of difference images as an approach for performance improvement. In the comparative studies, image sequences were classified using a combination of Gabor filters and RNNs, while single static images were classified using Gabor filters and SVMs. Sets of 11 FACS AUs were classified by both approaches, where a single RNN/SVM classifier was used for classifying each AU. Results indicated that classifying FACS AUs using image sequences yielded better results than using static images. The average recognition rate (RR) and false alarm rate (FAR) using image sequences were 82.75% and 7.61%, respectively, while classification using single static images yielded a RR and FAR of 79.47% and 9.22%, respectively. The better performance with image sequences can be attributed to the RNNs' ability, as stated above, to extract knowledge from time-series data. Subsequent research then investigated benchmarking dimensionality reduction, feature selection and network optimization techniques, in order to improve the performance provided by the use of image sequences. 
Results showed that an optimized network, using weight decay, gave best RR and FAR of 85.38% and 6.24%, respectively. The next study was of the inter-AU correlations existing in the Cohn-Kanade database and their effect on classification models. To accomplish this, a model was developed for the classification of a set of AUs by a single multiple output RNN. Results indicated that high inter-AU correlations do in fact aid classification models to gain more knowledge and, thus, perform better. However, this was limited to AUs that start and reach apex at almost the same time. This suggests the need for availability of a larger database of AUs, which could provide both individual and AU combinations for further investigation. The final part of this research investigated use of difference images to track the motion of image pixels. Difference images provide both noise and feature reduction, an aspect that was studied. Results showed that the use of difference image sequences provided the best results, with RR and FAR of 87.95% and 3.45%, respectively, which is shown to be significant when compared to use of normal image sequences classified using RNNs. In conclusion, the research demonstrates that use of RNNs for classification of image sequences is a new and improved paradigm for facial expression recognition.
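The difference images investigated in the final component are frame-to-frame subtractions, which suppress static appearance and retain motion. A minimal sketch, assuming grayscale frame arrays, could be:

```python
import numpy as np

def difference_sequence(frames):
    """Frame-to-frame absolute differences over a (t, h, w) stack:
    highlights moving pixels, suppresses static background and
    identity-specific texture (both noise and feature reduction)."""
    frames = np.asarray(frames, dtype=np.int16)  # avoid uint8 wraparound
    return np.abs(np.diff(frames, axis=0)).astype(np.uint8)
```

The resulting (t-1, h, w) stack would then be fed through the same Gabor-filter and RNN pipeline as the raw sequences.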
Style APA, Harvard, Vancouver, ISO itp.
34

Mistry, Kamlesh. "Intelligent facial expression recognition with unsupervised facial point detection and evolutionary feature optimization". Thesis, Northumbria University, 2016. http://nrl.northumbria.ac.uk/36011/.

Pełny tekst źródła
Streszczenie:
Facial expression is one of the most effective channels for conveying emotions and feelings. Many shape-based, appearance-based or hybrid methods for automatic facial expression recognition have been proposed. However, it is still a challenging task to identify emotions from facial images with scaling differences, pose variations, and occlusions. In addition, it is also difficult to identify the significant discriminating facial features that could represent the characteristics of each expression, because of the subtlety and variability of facial expressions. In order to deal with the above challenges, this research proposes two novel approaches: unsupervised facial point detection and texture-based facial expression recognition with feature optimisation. First of all, unsupervised automatic facial point detection, integrated with regression-based intensity estimation for facial Action Units (AUs) and emotion clustering, is proposed to deal with challenges such as scaling differences, pose variations, and occlusions. The proposed facial point detector can detect 54 facial points in images of faces with occlusions, pose variations and scaling differences. We conduct AU intensity estimation using support vector regression and neural networks, respectively, for 18 selected AUs. Fuzzy C-Means (FCM) clustering is subsequently employed to recognise seven basic emotions as well as neutral expressions. It also shows great potential for detecting compound and newly arrived novel emotion classes. The second proposed system focuses on a texture-based approach for facial expression recognition, proposing a novel variant of the local binary pattern for discriminative feature extraction and Particle Swarm Optimization (PSO)-based feature optimisation. Multiple classifiers are applied for recognising seven facial expressions. Finally, evaluations are conducted to show the efficiency of the above two proposed systems. 
Evaluated using the well-known facial databases Helen, Labeled Faces in the Wild, PUT, and CK+, the proposed unsupervised facial point detector outperforms other supervised landmark detection models dramatically and shows excellent robustness and capability in dealing with rotations, occlusions and illumination changes. Moreover, a comprehensive evaluation is also conducted for the proposed texture-based facial expression recognition with mGA-embedded PSO feature optimisation. Evaluated using the CK+ and MMI benchmark databases, the experimental results indicate that it outperforms other state-of-the-art metaheuristic search methods and the facial emotion recognition research reported in the literature by a significant margin.
Style APA, Harvard, Vancouver, ISO itp.
35

Saeed, Anwar Maresh Qahtan [Verfasser]. "Automatic facial analysis methods : facial point localization, head pose estimation, and facial expression recognition / Anwar Maresh Qahtan Saeed". Magdeburg : Universitätsbibliothek, 2018. http://d-nb.info/1162189878/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
36

Poon, Bruce Siu-Lung. "Recognition of human faces in distorted images based on principal component analysis and gabor wavelets". Thesis, The University of Sydney, 2016. http://hdl.handle.net/2123/15891.

Pełny tekst źródła
Streszczenie:
The technology of human face recognition is now mature enough to be widely used in security: by law enforcement agencies for dealing with criminal activities, by government agencies for border control, and by government organizations and private enterprises for access control. However, there is still a lot of room for improvement. In this thesis, we study strategies to improve recognition accuracy. Two proposed schemes for human face recognition, Principal Component Analysis (PCA) and Gabor wavelets, are discussed and the implementations of the sub-modules in each scheme are introduced. In principal component analysis based human face recognition, we started from the basic method by first identifying and developing the major testing criteria. Once the major testing criteria were developed, we analyzed those criteria which had significant impacts on the accuracy of recognition. More face databases were used in order to positively identify those criteria. Distorted images with poor illumination in the background were identified as one of the major factors affecting the accuracy of recognition. We then further investigated this criterion and found ways to improve the recognition results. In Gabor wavelet based human face recognition, we first examined the classification capability of 40 different basic Gabor phase representations. We utilized those Gabor features from facial images, tested on the selected distorted images, and compared the results and findings with the principal component analysis based human face recognition.
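The 40 basic Gabor representations mentioned above conventionally arise from a bank of 5 scales by 8 orientations. A sketch of such a bank in NumPy, with parameter choices that are standard defaults rather than those of the thesis:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a Gabor kernel: a Gaussian envelope modulating a
    sinusoid of wavelength lambd at orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lambd + psi)

# A 40-filter bank: 5 scales (wavelengths) x 8 orientations.
bank = [
    gabor_kernel(31, sigma=0.56 * lambd, theta=k * np.pi / 8, lambd=lambd)
    for lambd in (4, 6, 8, 11, 16)
    for k in range(8)
]
```

Convolving a face image with each filter and concatenating the responses gives the Gabor feature vector compared against the PCA scheme.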
Style APA, Harvard, Vancouver, ISO itp.
37

Spagnuolo, Imerio. "Landmark based facial recognition in the NAO robot". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13227/.

Pełny tekst źródła
Streszczenie:
The aim of this thesis is to enable a humanoid robot (NAO) to detect faces in its immediate vicinity and recognize them. Recognizing a person's face is a difficult ability to automate; the problem must first be decomposed into two distinct sub-problems: face detection and face recognition. Face detection is the process by which the presence of one or more faces is determined within an image or a stream of images. Once the presence of a face has been established, the recognition phase can proceed. This phase is carried out with the aid of a classifier that takes as input features extracted from the detected face and returns the name of the person to whom it may belong, together with the corresponding probability. The system proposed in this thesis comprises a face detection stage based on the histogram of oriented gradients technique, image pre-processing stages, and face classification implemented with two different neural networks in cascade. The first network is a convolutional neural network that takes the whole image as input and returns the person's name. The second network is a multilayer neural network that discriminates faces only among the classes the first network "confuses", based on specific measurements (distance between nose and mouth, nose length, etc.).
Style APA, Harvard, Vancouver, ISO itp.
38

Dursun, Pinar. "Recognition Of Facial Expressions In Alcohol Dependent Inpatients". Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608450/index.pdf.

Pełny tekst źródła
Streszczenie:
ABSTRACT: RECOGNITION OF EMOTIONAL FACIAL EXPRESSION IN ALCOHOL DEPENDENT INPATIENTS. Dursun, Pinar. M.S., Department of Psychology. Supervisor: Assoc. Prof. Faruk Gençöz. June 2007, 130 pages.
The ability to recognize emotional facial expressions (EFE) is very critical for social interaction and daily functioning. Recent studies have shown that alcohol dependent individuals have deficits in the recognition of these expressions. Therefore, the objective of this study was to explore the presence of impairment in the decoding of universally recognized facial expressions (happiness, sadness, anger, disgust, fear, surprise, and neutral expressions) and to measure manual reaction times (RT) toward these expressions in alcohol dependent inpatients. A Demographic Information Form, the CAGE Alcoholism Inventory, the State-Trait Anxiety Inventory (STAI), the Beck Depression Inventory (BDI), the Symptom Checklist (SCL), and lastly a purpose-built computer program (Emotion Recognition Test) were administered to 50 detoxified alcohol dependent inpatients and 50 matched control-group participants. It was hypothesized that alcohol dependents would show more deficits in the accuracy of reading EFE and would react more rapidly toward negative EFE (fear, anger, disgust, sadness) than the control group. Series of ANOVA, ANCOVA, MANOVA and MANCOVA analyses revealed that alcohol dependent individuals were more likely to have depression and anxiety disorders than non-dependents. They recognized disgusted expressions less accurately, but responded to them faster, than non-dependent individuals. On the other hand, the two groups did not differ significantly in total accuracy. In addition, levels of depression and anxiety did not affect recognition accuracy or reaction times. A stepwise multiple regression analysis indicated that the obsessive-compulsive subscale of the SCL, the BDI, the STAI-S Form, and the recognition of fearful as well as disgusted expressions were associated with alcoholism. Results are discussed in relation to previous findings in the literature. 
The inaccurate identification of disgusted faces might be associated with organic deficits resulting from alcohol consumption, or with cultural factors, which play a very important role in displaying expressions.
Style APA, Harvard, Vancouver, ISO itp.
39

Kokin, Jessica. "Facial Expression Recognition and Interpretation in Shy Children". Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32079.

Pełny tekst źródła
Streszczenie:
Two studies were conducted in which we examined the relation between shyness and facial expression processing in children. In Study 1, facial expression recognition was examined by asking 97 children ages 12 to 14 years to identify six different expressions displayed at 50% and 100% intensity, as well as a neutral expression. In Study 2, the focus shifted from the recognition of emotions to the interpretation of emotions. In this study, 123 children aged 12 to 14 years were asked a series of questions regarding how they would perceive different facial expressions. Findings from Study 1 showed that, in the case of shy boys, higher levels of shyness were related to lower recognition accuracy for sad faces displayed at 50% intensity. However, in most cases, shyness was not related to facial expression recognition. The results from Study 2 suggested broader implications for shy children. The findings of Study 2 demonstrated that shyness is predictive of biased facial expression interpretation and that rejection sensitivity mediates this relation. Overall the results of these two studies add to the research on facial expression processing in shy children and suggest that cognitive biases in the way facial expressions are interpreted may be related to shy children’s discomfort in social situations.
Style APA, Harvard, Vancouver, ISO itp.
40

Sherman, Adam Grant. "Development of a test of facial affect recognition /". Access abstract and link to full text, 1994. http://0-wwwlib.umi.com.library.utulsa.edu/dissertations/fullcit/9510111.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Adam, Mohamad Z. "Unfamiliar facial identity registration and recognition performance enhancement". Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/11431.

Pełny tekst źródła
Streszczenie:
The work in this thesis aims at studying the problems related to the robustness of a face recognition system, with specific attention given to the issues of handling image variation complexity and the inherently limited Unique Characteristic Information (UCI) within the scope of an unfamiliar identity recognition environment. These issues form the main themes in developing a mutual understanding of extraction and classification strategies, and are carried out as two interdependent but related blocks of research work. Naturally, the complexity of the image variation problem is built up from factors including viewing geometry, illumination, occlusion and other kinds of intrinsic and extrinsic image variation. Ideally, recognition performance increases whenever the variation is reduced and/or the UCI is increased. However, variation reduction on 2D facial images may result in the loss of important clues or UCI data for a particular face; conversely, increasing the UCI may also increase the image variation. To reduce the loss of information while reducing or compensating for the variation complexity, a hybrid technique is proposed in this thesis. The technique is derived from three conventional approaches to the variation compensation and feature extraction tasks. In the first research block, transformation, modelling and compensation approaches are combined to deal with the variation complexity. The ultimate aim of this combination is to represent (transformation) the UCI without losing important features through modelling, and to discard (compensation) and reduce the level of the variation complexity of a given face image. Experimental results have shown that discarding certain obvious variation enhances the desired information rather than losing the UCI of interest. The modelling and compensation stages benefit both variation reduction and UCI enhancement.
Colour, gray level and edge image information are used to manipulate the UCI, involving analysis of skin colour, facial texture and feature measurements respectively. The Derivative Linear Binary Transformation (DLBT) technique is proposed for consistency of the feature measurements. Prior knowledge of an input image with symmetrical properties, the informative region and the consistency of some features is fully utilized in preserving the UCI feature information. As a result, the similarity and dissimilarity representations for identity parameters or classes are obtained from the selected UCI representation, which involves derivative feature size and distance measurements, facial texture and skin colour. These are mainly used to accommodate the strategy of unfamiliar identity classification in the second block of the research work. Since all faces share a similar structure, a classification technique should increase the similarities within a class while increasing the dissimilarity between classes. Furthermore, a smaller class results in less burden on the identification or recognition processes. The collateral classification strategy of identity representation introduced in this thesis manipulates the availability of collateral UCI for classifying the identity parameters of regional appearance, gender and age classes. In this regard, the registration of collateral UCIs has been designed to collect more identity information. As a result, the performance of unfamiliar identity recognition is improved with respect to the specific UCI for class recognition, possibly with a small class size. The experiments were performed using data from our own database and an open database comprising three different regional appearances, two different age groups and two different genders, incorporating pose and illumination image variations.
Style APA, Harvard, Vancouver, ISO itp.
42

Clark, Clifford. "Recall and Recognition Tasks within Facial Composite Production". Thesis, Open University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518193.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
43

Ng, Hau-hei. "The effect of mood on facial emotion recognition". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hdl.handle.net/10722/210312.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
44

Alrasheed, Waleed. "Time and Space Efficient Techniques for Facial Recognition". Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6238.

Pełny tekst źródła
Streszczenie:
In recent years, there has been increasing interest in face recognition, and many new facial recognition techniques have been introduced. Recent developments in the field have also led to an increase in the number of available commercial face recognition products. However, face recognition techniques are currently constrained by three main factors: recognition accuracy, computational complexity, and storage requirements. The problem is that most current face recognition techniques improve one or two of these factors at the expense of the others. In this dissertation, four novel face recognition techniques that improve the storage and computational requirements of face recognition systems are presented and analyzed. Three of the four techniques, namely Quantized/Truncated Transform Domain (QTD), Frequency Domain Thresholding and Quantization (FD-TQ), and Normalized Transform Domain (NTD), utilize the two-dimensional Discrete Cosine Transform (DCT-II), which reduces the dimensionality of facial feature images and thereby the computational complexity. The fourth technique, Normalized Histogram Intensity (NHI), is based on the pixel intensity histograms of pose subimages, which reduces both the computational complexity and the storage requirements. Various simulation experiments using MATLAB were conducted to test the proposed methods. For benchmarking purposes, the simulation experiments were also performed using current state-of-the-art face recognition techniques, namely Two-Dimensional Principal Component Analysis (2DPCA), Two-Directional Two-Dimensional Principal Component Analysis ((2D)^2PCA), and Transform Domain Two-Dimensional Principal Component Analysis (TD2DPCA). The experiments were applied to the ORL, Yale, and FERET databases.
The experimental results confirm that any of the four novel techniques examined in this study yields a significant reduction in computational complexity and storage requirements compared to the state-of-the-art techniques, without sacrificing recognition accuracy.
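The DCT-based dimensionality reduction shared by the three transform-domain techniques can be sketched as follows. This is a hedged illustration rather than the dissertation's actual implementation: the function names, the 32x32 image size, and the 8x8 retained coefficient block are assumptions chosen for the example.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are cosine basis vectors).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def dct2_features(img, keep=8):
    # Separable 2D DCT-II of a square face image; keep only the
    # top-left keep x keep low-frequency block as the feature vector.
    d = dct_matrix(img.shape[0])
    coeffs = d @ img @ d.T
    return coeffs[:keep, :keep].ravel()

face = np.random.default_rng(0).random((32, 32))  # stand-in face image
feat = dct2_features(face, keep=8)
print(feat.shape)  # (64,): 64 features instead of 1024 raw pixels
```

Because the DCT concentrates a face image's energy in its low-frequency coefficients, truncating the block this way shrinks both storage and downstream matching cost, which is the trade the abstract describes.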
Ph.D.
Doctorate
Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering
Style APA, Harvard, Vancouver, ISO itp.
45

Hsu, Wei-Cheng, i 徐瑋呈. "Facial Expression Recognition Based on Facial Features". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/50258463357861831524.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 101 (2012–2013)
We propose an expression recognition method based on facial features from a psychological perspective. Following the American psychologist Paul Ekman's work on action units, we divide a face into different facial feature regions and recognize expressions via the movements of individual facial muscles during slight, instantaneous changes in facial expression. This thesis starts by introducing Paul Ekman's work, the six basic emotions, and existing methods based on feature extraction or facial models. Our system has two main parts: preprocessing and the recognition method. Differences between the training and test environments, such as illumination, or the face size and skin color of different test subjects, are usually the major factors affecting recognition accuracy. We therefore propose a preprocessing step as the first part of our system: we first perform face detection and facial feature detection to locate the facial features. We then perform a rotation calibration based on the horizontal line obtained by connecting both eyes. The complete face region can be extracted using facial models. Lastly, the face region is calibrated for illumination and resized to the same resolution to fix the dimensionality of the feature vector. After preprocessing, the differences among images are reduced. The second part of the proposed system is the recognition method. Here we use Gabor filter banks with ROI capture to obtain the feature vector, and principal component analysis (PCA) and linear discriminant analysis (LDA) for dimensionality reduction to reduce the computation time. Finally, a support vector machine (SVM) is adopted as our classifier. The experimental results show that the proposed method achieves 86.1%, 96.9%, and 89.0% accuracy on the three existing datasets JAFFE, TFEID, and CK+ respectively (based on leave-one-person-out evaluation). We also tested the performance on the 101SC dataset, which we collected and prepared ourselves.
This dataset is relatively difficult for recognition but closer to real-world scenarios; the proposed method achieves 62.1% accuracy on it. We also used this method to participate in the 8th UTMVP (Utechzone Machine Vision Prize) competition, where we ranked second out of 10 teams.
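The Gabor-bank feature extraction stage described above can be sketched roughly as below. This is a simplified illustration under stated assumptions: circular FFT-domain filtering stands in for the thesis's exact convolution, mean-magnitude pooling stands in for the full response vector, and the bank parameters are invented; the PCA/LDA reduction and SVM classifier are omitted.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5):
    # Real Gabor filter: a Gaussian envelope times an oriented cosine
    # carrier with wavelength lam and orientation theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def gabor_features(roi, orientations=4, scales=2):
    # Filter a facial ROI with a small bank (circular convolution via
    # the FFT) and pool each response map to its mean magnitude.
    feats = []
    F = np.fft.fft2(roi)
    for s in range(scales):
        lam = 4.0 * (s + 1)
        for o in range(orientations):
            k = gabor_kernel(9, np.pi * o / orientations, lam, sigma=lam / 2)
            K = np.fft.fft2(k, s=roi.shape)     # zero-pad kernel to ROI size
            response = np.fft.ifft2(F * K).real
            feats.append(np.abs(response).mean())
    return np.array(feats)

roi = np.random.default_rng(1).random((24, 24))  # stand-in eye/mouth region
print(gabor_features(roi).shape)  # one pooled value per filter: (8,)
```

In a full pipeline the per-ROI vectors would be concatenated across regions, reduced with PCA then LDA, and fed to the SVM.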
Style APA, Harvard, Vancouver, ISO itp.
46

Ren, Yuan. "Facial Expression Recognition System". Thesis, 2008. http://hdl.handle.net/10012/3516.

Pełny tekst źródła
Streszczenie:
A key requirement for developing any innovative system in a computing environment is a sufficiently friendly interface for the average end user. Accurate design of such a user-centered interface, however, means more than just the ergonomics of the panels and displays. It also requires that designers precisely define what information to use and how, where, and when to use it. Facial expression, as a natural, non-intrusive and efficient means of communication, has been considered one of the potential inputs to such interfaces. The work in this thesis aims at designing a robust Facial Expression Recognition (FER) system by combining various techniques from computer vision and pattern recognition. Expression recognition is closely related to face recognition, where a great deal of research has been done and a vast array of algorithms has been introduced. FER can also be considered a special case of a pattern recognition problem, for which many techniques are available. In designing an FER system, we can take advantage of these resources and use existing algorithms as building blocks of our system, so a major part of this work is to determine the optimal combination of algorithms. To do this, we first divide the system into three modules, i.e. preprocessing, feature extraction and classification; then, for each of them, some candidate methods are implemented, and eventually the optimal configuration is found by comparing the performance of different combinations. Another issue of great interest to designers of facial expression recognition systems is the classifier, which is the core of the system. Conventional classification algorithms assume the image is a single-variable function of an underlying class label. However, this is not true in the face recognition area, where the appearance of the face is influenced by multiple factors: identity, expression, illumination and so on.
To solve this problem, in this thesis we propose two new algorithms, namely Higher Order Canonical Correlation Analysis and Simple Multifactor Analysis, which model the image as a multivariable function. The addressed issues are challenging problems and are substantial for developing a facial expression recognition system.
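The module-wise search the thesis describes (candidate methods per module, then select the best-performing combination) can be sketched generically. Everything here is an assumption for illustration: the candidate functions, the toy brightness-based "faces", and a nearest-centroid classifier standing in for the thesis's real classifiers.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Candidate methods for two of the three modules; the classifier is fixed.
preprocess = {
    "identity": lambda x: x,
    "zscore": lambda x: (x - x.mean()) / (x.std() + 1e-8),
}
extract = {
    "raw": lambda x: x.ravel(),
    "rowmeans": lambda x: x.mean(axis=1),
}

def nearest_centroid(train_feats, train_labels, feat):
    # Predict the label whose class centroid is closest to feat.
    labels = sorted(set(train_labels))
    centroids = [np.mean([f for f, l in zip(train_feats, train_labels) if l == lab], axis=0)
                 for lab in labels]
    dists = [np.linalg.norm(feat - c) for c in centroids]
    return labels[int(np.argmin(dists))]

# Toy data: class 1 images are brighter than class 0 images.
train = [(rng.random((8, 8)) + 0.5 * l, l) for l in [0, 1] * 10]
test = [(rng.random((8, 8)) + 0.5 * l, l) for l in [0, 1] * 5]

best = None
for p_name, e_name in itertools.product(preprocess, extract):
    p, e = preprocess[p_name], extract[e_name]
    tf = [e(p(img)) for img, _ in train]
    tl = [l for _, l in train]
    acc = np.mean([nearest_centroid(tf, tl, e(p(img))) == l for img, l in test])
    if best is None or acc > best[0]:
        best = (acc, p_name, e_name)
print(best)  # highest-accuracy (accuracy, preprocess, extract) combination
```

The exhaustive product over candidate methods is exactly the "compare every configuration" strategy; in practice each combination would be scored with cross-validation rather than a single held-out set.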
Style APA, Harvard, Vancouver, ISO itp.
47

Barbosa, Pedro Nelson Sampaio. "Automotive Facial Driver Recognition". Master's thesis, 2017. http://hdl.handle.net/1822/54768.

Pełny tekst źródła
Streszczenie:
Master's dissertation in Industrial Electronics and Computers Engineering
Dependency on technology is very real in the automotive world. Currently there are several automated systems to be found in cars: light control, seat control, brake control, and the list goes on. These systems enhance driver safety but raise other issues that need to be addressed. In the automotive market, when trying to stand out, some brands seek to attract consumers through originality, offering options in their vehicles never seen before. Nowadays some vehicles offer the option to change certain functionalities according to the driver's preferences, such as seat and steering wheel height or suspension/motor mode, among others. With this level of customization a question arises: will the driver have to make these changes every time he uses the car? The answer is no, because if the system can somehow identify the driver, the car can load the personal customizations made by that particular person. This work performs driver recognition automatically and with minimal hindrance. For this purpose, an image of the driver's face is acquired in order to establish his identity. The system recognizes the driver and sends his identification to an external system, so that this second system can perform the required customization. The system built in this dissertation has to be small in size, perform driver recognition quickly, and report the driver's identity to an external system. The process needed to use the system must be simple, and must allow graphical visualization of the whole process taking place. The final system aims to expand the technical knowledge of "Project INNOVCAR", a project resulting from the partnership between the University of Minho and the renowned automotive multinational Bosch.
All the development and conclusions produced during this work will revert to that project, and the developed system can also be integrated into the DSM (Driver Simulator Mockup). The DSM is a simulator of an automobile in a virtual world, in which all of its constituent systems use the TCP/IP protocol suite as a means of communication. The final system developed in this dissertation should likewise use the TCP/IP protocol suite as a means of receiving commands and sending the driver's identity.
This work was financed by the project "INNOVCAR: Inovação para Veículos Inteligentes", n02797, co-financed by FEDER through Portugal 2020 – Programa Operacional Competitividade e Internacionalização (COMPETE2020).
Style APA, Harvard, Vancouver, ISO itp.
48

GUPTA, MUSKAN. "FACIAL DETECTION and RECOGNITION". Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19521.

Pełny tekst źródła
Streszczenie:
Face recognition has been one of the most interesting and important research fields of the past two decades. The reasons stem from the need for automatic recognition and surveillance systems, interest in the role of the human visual system in face recognition, and the design of human-computer interfaces, among others. This research draws on knowledge and researchers from disciplines such as neuroscience, psychology, computer vision, pattern recognition, image processing, and machine learning. Many papers have been published that overcome particular factors (such as illumination, expression, scale, or pose) and achieve better recognition rates, yet there is still no technique robust to uncontrolled practical cases, which may involve several of these factors simultaneously. In this report, we go through the general ideas and structures of recognition, important issues and factors concerning human faces, and critical techniques and algorithms, and finally give a conclusion.
Style APA, Harvard, Vancouver, ISO itp.
49

Wu, Hung-Yu, i 吳弘裕. "Facial Feature Extraction and Recognition". Thesis, 1999. http://ndltd.ncl.edu.tw/handle/48308473141317313685.

Pełny tekst źródła
Streszczenie:
Master's thesis
Tatung Institute of Technology
Graduate Institute of Electrical Engineering
Academic year 87 (1998–1999)
In recent years, the authentication problem has become more serious. In this thesis, we propose an automatic face recognition system. The system consists of two parts: facial feature extraction and face recognition. In the facial feature extraction part, we first extract the edges of the input image, obtaining a binary image expressing the contour of the face. Then, based on the symmetry of the face and the spatial relationships among the hair, eyes, nose, mouth and neck, we locate the positions of the facial features. We can thus obtain the desired square face region and normalize it. To reduce the influence of illumination, we apply histogram equalization to the image before building the reference database and before recognizing a test image. In the recognition part, the eigenface approach is used to identify the person.
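The eigenface recognition stage, together with a histogram equalization step of the kind used to reduce illumination effects, can be sketched as follows. This is a generic illustration of the standard eigenface approach, not the thesis's own code; the data shapes, the synthetic gallery, and the number of components are assumptions.

```python
import numpy as np

def hist_equalize(img):
    # Map 8-bit gray levels through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

def train_eigenfaces(gallery, k):
    # gallery: (n, h*w) matrix of flattened faces. PCA via SVD of the
    # mean-centered data; rows of vt are the principal directions.
    mean = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    basis = vt[:k]                         # top-k eigenfaces
    proj = (gallery - mean) @ basis.T      # gallery coordinates in face space
    return mean, basis, proj

def identify(probe, mean, basis, proj):
    # Project the probe and return the index of the nearest gallery face.
    p = (probe - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(proj - p, axis=1)))

rng = np.random.default_rng(0)
gallery = rng.random((10, 64))             # 10 synthetic 8x8 "faces"
mean, basis, proj = train_eigenfaces(gallery, k=5)
print(identify(gallery[3], mean, basis, proj))  # 3: recovers its own entry
```

Equalizing each face before flattening it into the gallery matrix is what makes the projection distances comparable across lighting conditions.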
Style APA, Harvard, Vancouver, ISO itp.
50

Hsueh, Ming-Kai, i 薛名凱. "Facial Expression Recognition with WebCam". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/24120668987886256360.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taipei University of Technology
Graduate Institute of Automation Technology
Academic year 92 (2003–2004)
It is very easy for human beings to recognize emotion from facial expressions, but not so simple for computers. In this research, we use a common video device to build a system that distinguishes people's emotions automatically; the system works well with a neural network. In this paper, we base our feature extraction on Ekman's Action Units (AUs). First, the system captures images of people's faces with a CMOS WebCam, and then tracks the movements of the facial features using image processing techniques. These characteristics are then fed to a neural network to recognize the person's emotion. Our system can recognize an emotion in motion within a short time, and its accuracy can exceed eighty percent, which is strong enough to demonstrate the viability of our system.
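The neural-network classification of AU-based movement features can be sketched as a forward pass through a small network. The layer sizes, weights, and emotion label set here are invented placeholders: the thesis does not specify its architecture, so this only illustrates the general mapping from a feature vector to emotion scores.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise"]  # placeholder labels

def softmax(z):
    # Numerically stable softmax over the class scores.
    e = np.exp(z - z.max())
    return e / e.sum()

def mlp_forward(au_features, w1, b1, w2, b2):
    # One hidden layer with tanh, softmax output over emotion classes.
    hidden = np.tanh(au_features @ w1 + b1)
    return softmax(hidden @ w2 + b2)

rng = np.random.default_rng(0)
n_features, n_hidden = 6, 8        # e.g. 6 AU displacement measurements
w1 = rng.normal(size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(size=(n_hidden, len(EMOTIONS)))
b2 = np.zeros(len(EMOTIONS))

scores = mlp_forward(rng.random(n_features), w1, b1, w2, b2)
print(EMOTIONS[int(np.argmax(scores))])  # the highest-scoring emotion
```

In a trained system the weights would of course be learned from labeled AU-movement sequences rather than drawn at random.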
Style APA, Harvard, Vancouver, ISO itp.