Dissertations on the topic "Facial recognition algorithms"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 29 dissertations for your research on the topic "Facial recognition algorithms."
Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract online, when these are available in the metadata.
Browse dissertations from many disciplines and compile your bibliography correctly.
Nordén, Frans, and Filip von Reis Marlevi. "A Comparative Analysis of Machine Learning Algorithms in Binary Facial Expression Recognition." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254259.
Full text of source
Silva, Eduardo Machado. "Padrões mapeados localmente em multiescala aplicados ao reconhecimento de faces." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/154142.
Full text of source
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Facial recognition is one of the most widely used biometric technologies in automated systems that need to verify a person’s identity for authorized access and monitoring. The use of the face has several advantages over other biometric technologies: it is natural, it does not require sophisticated equipment, data acquisition is based on non-invasive approaches, and it can be done at a distance, cooperatively or not. Although many facial recognition studies have been done, problems with lighting variation, facial occlusion, pose, expression, and aging are still challenges, because they degrade the performance of facial recognition systems and motivate the development of more reliable systems that can handle them. This work aims to evaluate the Multi-scale Local Mapped Pattern (MSLMP) technique for facial recognition. Techniques based on genetic algorithms and image processing were applied to increase the performance of the method. The results reach 100% accuracy on some databases. A particularly difficult database is MUCT, created in 2010 with the aim of providing the facial biometry literature with images of high variation in lighting, age, pose, and ethnicity, which makes it highly challenging for automated recognition. A new processing technique was also developed, based on the average gray levels of the images in the database.
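The local-pattern idea behind descriptors such as MSLMP can be illustrated with a plain LBP-style encoder. This is a generic sketch only, not the thesis's exact multi-scale mapping: each pixel is coded by comparing it with its 8 neighbours, and a face is described by the histogram of the codes.

```python
# Sketch of an LBP-style local pattern descriptor: each pixel is encoded by
# comparing it with its 8 neighbours, and a face image is described by the
# histogram of the resulting 8-bit codes. (Generic illustration; the thesis's
# MSLMP uses a different local mapping computed at several scales.)

def lbp_code(patch):
    """8-bit code for a 3x3 patch: bit i is set if neighbour i >= centre."""
    centre = patch[1][1]
    # Neighbours in clockwise order starting at the top-left corner.
    neigh = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
             patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, v in enumerate(neigh):
        if v >= centre:
            code |= 1 << i
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            patch = [row[c - 1:c + 2] for row in img[r - 1:r + 2]]
            hist[lbp_code(patch)] += 1
    return hist
```

A matcher would then compare two faces by a histogram distance (e.g. chi-square), computed region by region.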
Dragon, Carolyn Bradford. "Let’s Face It: The effect of orthognathic surgery on facial recognition algorithm analysis." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5778.
Full text of source
Garcia, Ivette Cristina Araujo, Eduardo Rodrigo Linares Salmon, Rosario Villalta Riega, and Alfredo Barrientos Padilla. "Implementation and customization of a smart mirror through a facial recognition authentication and a personalized news recommendation algorithm." Institute of Electrical and Electronics Engineers Inc, 2018. http://hdl.handle.net/10757/624657.
Full text of source
In recent years, advances in information and communication technologies (ICTs) have helped improve people's quality of life. The Internet of Things (IoT) paradigm presents innovative solutions that are changing people's lifestyles. For this reason, we propose the implementation of a smart mirror as part of a home automation system, with which we intend to save people time as they prepare to start their day. The device is built from a reflective glass, an LCD monitor, a Raspberry Pi 3, a camera, and a cloud-computing-oriented IoT platform, from which the information shown in the mirror is obtained through the consumption of web services. The information is customizable thanks to a mobile application, which in turn lets the user access the mirror through facial recognition authentication based on user photos, and uses profile information to predict which news to show. In addition, as part of the idea of providing the user a personalized experience, the smart mirror incorporates a news recommendation algorithm, implemented as a predictive model based on the naive Bayes algorithm.
Peer reviewed
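The naive Bayes news recommender described in the entry above can be sketched as a minimal word-count classifier. The training snippets and category names below are invented for illustration; a real system would train on the user's reading history.

```python
# Minimal multinomial naive Bayes over word counts, standing in for the smart
# mirror's news recommender. Training snippets and categories are invented
# for illustration.
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)  # per-class word counts
        self.label_counts = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        best, best_lp = None, float("-inf")
        total = sum(self.label_counts.values())
        for label, n in self.label_counts.items():
            lp = math.log(n / total)  # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in doc.lower().split():
                # Laplace smoothing so unseen words do not zero the score.
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

Usage: fit on labelled headlines, then `predict` returns the most probable category for a new headline, which the mirror would surface first.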
Silva, Jadiel Caparrós da [UNESP]. "Aplicação de sistemas imunológicos artificiais para biometria facial: Reconhecimento de identidade baseado nas características de padrões binários." Universidade Estadual Paulista (UNESP), 2015. http://hdl.handle.net/11449/127901.
Full text of source
This work aims to perform identity recognition with a method based on Artificial Immune Systems, the Negative Selection Algorithm. To this end, suitable resources and alternatives for analyzing 3D facial expressions were explored, drawing on the Binary Pattern technique that has been successfully applied to the 2D problem. First, the 3D facial geometry was converted into two 2D representations: the Depth Map and the Azimuthal Projection Distance Image, implemented together with other resources such as Local Phase Quantisers, Gabor Filters, and Monogenic Filters, to produce descriptors for the facial expression analysis. Afterwards, the Negative Selection Algorithm is applied, and comparisons and analyses are made between the images and the previously created detectors. If there is affinity with an image, then the image is classified; this classification is called matching. Finally, to validate and evaluate the performance of the method, tests were performed with images taken directly from the database and then with ten descriptors developed from the binary patterns. These tests aim to evaluate which descriptors and which expressions are best for recognizing identities, and to validate the performance of the new identity recognition solution based on Artificial Immune Systems. The results show efficiency, robustness, and precision in recognizing facial identity.
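The negative selection mechanism described in this abstract can be sketched in toy form: random detectors are generated, any candidate matching "self" data is discarded, and a sample is flagged as non-self when a surviving detector matches it. The bit-string representation and Hamming-distance matching below are deliberate simplifications; the thesis operates on facial descriptors.

```python
# Toy negative selection: detectors are random bit strings; candidates that
# match "self" samples are discarded, and a sample is flagged non-self when
# some surviving detector matches it. Matching here is a simple Hamming
# threshold, a stand-in for the thesis's affinity measure on descriptors.
import random

def matches(detector, sample, threshold):
    """A detector matches when it differs from the sample in < threshold bits."""
    dist = sum(a != b for a, b in zip(detector, sample))
    return dist < threshold

def train_detectors(self_set, n_detectors, length, threshold, seed=0):
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.randint(0, 1) for _ in range(length))
        # Keep only detectors that do NOT match any self sample.
        if not any(matches(cand, s, threshold) for s in self_set):
            detectors.append(cand)
    return detectors

def is_nonself(sample, detectors, threshold):
    return any(matches(d, sample, threshold) for d in detectors)
```

By construction, no detector ever fires on the self set, so anything that triggers a detector is treated as a foreign (non-matching) pattern.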
Silva, Jadiel Caparrós da. "Aplicação de sistemas imunológicos artificiais para biometria facial: Reconhecimento de identidade baseado nas características de padrões binários /." Ilha Solteira, 2015. http://hdl.handle.net/11449/127901.
Full text of source
Co-advisor: Jorge Manuel M. C. Pereira Batista
Committee member: Carlos Roberto Minussi
Committee member: Ricardo Luiz Barros de Freitas
Committee member: Díbio Leandro Borges
Committee member: Gelson da Cruz Junior
Doctorate
Grossard, Charline. "Evaluation et rééducation des expressions faciales émotionnelles chez l’enfant avec TSA : le projet JEMImE Serious games to teach social interactions and emotions to individuals with autism spectrum disorders (ASD) Children facial expression production : influence of age, gender, emotion subtype, elicitation condition and culture." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS625.
Full text of source
Autism spectrum disorder (ASD) is characterized by difficulties in social skills, such as emotion recognition and production. Several studies have focused on emotional facial expression (EFE) recognition, but few have addressed its production, either in typical children or in children with ASD. Nowadays, information and communication technologies are used to work on social skills in ASD, but few studies using these technologies focus on EFE production; after a literature review, we found only 4 games addressing it. Our final goal was to create the serious game JEMImE to work on EFE production with children with ASD using automatic feedback. We first created a dataset of EFE of typical children and children with ASD to train an EFE recognition algorithm and to study their production skills. Several factors modulate these skills, such as age, type of emotion, or culture. We observed that both human judges and the algorithm rate the quality of the EFE of children with ASD as poorer than that of typical children, and the recognition algorithm needs more features to classify their EFE. We then integrated the algorithm into JEMImE to give the child visual feedback in real time to correct his or her productions. A pilot study including 23 children with ASD showed that children are able to adapt their productions thanks to the feedback given by the algorithm, and illustrated an overall good subjective experience with JEMImE. The beta version of JEMImE shows promising potential and encourages further development of the game, in order to offer longer game exposure to children with ASD and thus allow a reliable assessment of the effect of this training on their production of EFE.
Ben, Soltana Wael. "Optimisation de stratégies de fusion pour la reconnaissance de visages 3D." Phd thesis, Ecole Centrale de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-01070638.
Full text of source
Surajpal, Dhiresh Ramchander. "An independent evaluation of subspace facial recognition algorithms." Thesis, 2008. http://hdl.handle.net/10539/5906.
Full text of source
Watkins, Elizabeth Anne. "The Polysemia of Recognition: Facial Recognition in Algorithmic Management." Thesis, 2021. https://doi.org/10.7916/d8-6qwc-0t83.
Full text of source
Gupta, Shalini 1979. "Novel algorithms for 3D human face recognition." Thesis, 2008. http://hdl.handle.net/2152/29595.
Full text of source
Lu, Yun-Jen, and 盧韻仁. "Facial Features Detection and Expression Recognition based on Loopy Belief Propagation Algorithms." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/45945653737477152143.
Full text of source
National Tsing Hua University
Department of Computer Science
95
In this thesis, we propose two graphical models for automatically detecting facial features and estimating optical flow on face images, in order to extract expression flow features. To accomplish these tasks, we apply the Loopy Belief Propagation (LBP) algorithm, a common inference framework for graphical models. In the first part, we learn feature PCA models and geometric relationships to build a graphical model of facial features. In the second part, we build a Markov Random Field (MRF) model for optical flow estimation; the model structure ensures that each patch of the neutral image moves to the correct corresponding position in the expression image. A local feature constraint makes the optical flow computation in the feature areas more precise. Finally, we combine these two algorithms with an SVM classifier to develop a facial expression recognition system.
YAN, YU-CHANG, and 顏毓昌. "A Study of Facial Recognition Using Deep Learning Algorithms and RGBD Images." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4nv63a.
Full text of source
National Kaohsiung First University of Science and Technology
Master's Program, Department of Information Management
106
Face recognition techniques have been developed for many years. Many applications, such as people identification, access control, and crime detection, are widely used in our daily lives. In addition, different features can be extracted from facial images to estimate a person's age and gender. These applications can bring companies great benefits for different commercial purposes. Recently, machine learning technology has progressed rapidly alongside NVIDIA's GPU hardware development, and deep learning algorithms can run far more efficiently by exploiting the GPU. The iPhone X's facial recognition application has successfully attracted worldwide attention. Therefore, many enterprises have begun researching ways to improve facial recognition techniques using deep learning models and big data. However, some problems remain in facial recognition: lighting conditions and poor image quality reduce recognition accuracy. In this study, we construct a new CNN model and build a small-scale facial image database. A Kinect v2 camera was used to collect the RGB and depth images. In the experiments, 4,962 images of 30 people were used in the training stage, split 8:2 between training and validation, and 12,164 images of 10 people in 8 different environments were used in the test stage. The experimental results show an accuracy rate of 84.46% and a top-3 accuracy rate of 90.13%. We also implemented several popular CNN models, such as AlexNet, GoogLeNet V3, and VGG-16, for comparison; the results show that the proposed method outperformed these models. We further designed an algorithm to discriminate a real 3D face from a 2D photo of a face. 3,174 3D depth images captured by Kinect cameras and 163 photo facial images of 10 people were used in this experiment. The results show that 100% accuracy can be obtained by computing the entropy of the images.
Finally, experiments with different types of facial image datasets, including RGB, D, and RGBD, were performed. Each dataset contains 3,074 images, divided into training, validation, and test sets with a ratio of 7:2:1. The accuracy rates are 96.75%, 99.35%, and 100%, respectively. In addition, 6 people from the dataset were invited to a real-time test in 4 different environments, where average accuracy rates of 79.51%, 88.01%, and 82.66%, respectively, were obtained.
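The photo-versus-real-face check described in this entry rests on a simple observation: a flat printed photo yields a nearly constant depth map, so its depth histogram has far lower entropy than a real 3D face. A hedged sketch follows; the threshold value is invented for illustration, not taken from the thesis.

```python
# Sketch of the 2D-photo vs. real-face test: compute the Shannon entropy of a
# depth map's value histogram. A flat printed photo gives an almost constant
# depth, hence near-zero entropy; a real face spans many depth values. The
# threshold below is illustrative only.
import math
from collections import Counter

def depth_entropy(depth_map):
    """Shannon entropy (bits) of the depth-value histogram."""
    values = [v for row in depth_map for v in row]
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_real_face(depth_map, threshold=1.0):
    return depth_entropy(depth_map) > threshold
```

In practice the entropy would be computed over the segmented face region of the Kinect depth frame, with the threshold calibrated on known photo and live samples.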
Stacy, Emily Margaret. "Human and algorithm facial recognition performance : face in a crowd." Thesis, 2017. http://hdl.handle.net/10453/116916.
Full text of source
Developing a method of identifying persons of interest (POIs) in uncontrolled environments, accurately and rapidly, is paramount in the 21st century. One such technique is the use of automated facial recognition systems (FRS). To date, FRS have mainly been tested in controlled laboratory conditions; there is little publicly available research indicating the performance levels, and therefore the feasibility, of using FRS in public, uncontrolled environments, known as face-in-a-crowd (FIAC). This research project was hence directed at determining the feasibility of FIAC technology in uncontrolled, operational environments, with the aim of identifying POIs. This was done by processing imagery obtained from a range of environments and camera technologies through one of the latest FR algorithms, to evaluate the current level of FIAC performance. The hypothesis was that higher-resolution imagery would produce better FR results, and that FIAC would be feasible in an operational environment when certain variables are controlled, such as camera type (resolution), lighting, and the number of people in the field of view. Key findings revealed that although facial recognition algorithms for FIAC applications have improved over the past decade, the feasibility of deployment in uncontrolled environments remains unclear. The results support previous literature showing that the quality of the processed imagery largely affects FRS performance: imagery from high-resolution cameras produced better results than imagery from CCTV cameras. The results suggest that current FR technology can be viable in a FIAC scenario if the operational environment can be modified to suit optimal image acquisition.
However, in areas where the environmental constraints were less controlled, performance decreased significantly. The essential conclusion is that the data should be processed with newer versions of the algorithms that can track subjects through the environment, which is expected to vastly increase performance, and that an additional trial should be run in alternate locations to gain a greater understanding of the feasibility of FIAC generically.
Michalski, Dana Jaclyn. "The impact of age-related variables on facial comparisons with images of children: algorithm and practitioner performance." Thesis, 2018. http://hdl.handle.net/2440/111184.
Full text of source
Thesis (Ph.D.) -- University of Adelaide, School of Psychology, 2018
Brito, Paulo. "Facial analysis with depth maps and deep learning." Doctoral thesis, 2018. http://hdl.handle.net/10400.2/7787.
Full text of source
Collecting and analyzing multimodal sensor data of a human face in real time is an important problem in computer vision, with applications in medical analysis and monitoring, entertainment, and security. However, due to the demanding nature of the problem, there is a lack of affordable and easy-to-use systems with real-time annotation capability, 3D analysis, replay capability, and a frame rate capable of detecting facial patterns in working environments. In the context of an ongoing effort to develop tools to support the monitoring and evaluation of human affective state in working environments, this research investigates the applicability of a facial analysis approach to map and evaluate human facial patterns. Our objective is to investigate a set of systems and techniques that make it possible to answer the question of how to use multimodal sensor data to obtain a classification system that identifies facial patterns. With that in mind, tools were developed to implement a real-time system able to recognize facial patterns from 3D data. The challenge is to interpret this multimodal sensor data and classify it with deep learning algorithms while fulfilling the following requirements: annotation capability, 3D analysis, and replay capability. In addition, the system continuously improves its output through a training process, in order to refine and evaluate different patterns of the human face. FACE ANALYSYS is a tool developed in the context of this doctoral thesis to research the relations between various sensor data and human facial affective state. This work is useful for developing an appropriate visualization system that gives better insight into large amounts of behavioral data.
Yang, Chien-wei, and 楊建葦. "Facial Expression Recognition based on AdaBoost Algorithm." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/73458390041204905263.
Full text of source
National Chung Cheng University
Graduate Institute of Electrical Engineering
96
Recently, facial expression recognition has become one of the main interests of many researchers. We develop a facial expression recognition system in this dissertation, which extracts facial features from the input image automatically and recognizes facial expressions via classifiers trained with the well-known AdaBoost algorithm. The facial expressions to be recognized include happy, angry, sad, disgust, surprise, fear, and neutral. We propose an AdaBoost-based facial expression recognition system with two types of sample data: single image frames and image sequences. For the former, an image texture code is extracted at each position within a face; for the latter, the optical flow sequence at each position is calculated. Both kinds of image features are adopted separately as the input of the classifiers to be designed. The AdaBoost algorithm iteratively selects the best weak classifier, i.e. the position in a face with the most discrimination, at each iteration, and combines the selected weak classifiers into a strong classifier that increases the recognition rate. Experimental results show that the more weak classifiers selected by the AdaBoost algorithm, the better the recognition rate. Moreover, when we apply an improved method that avoids selecting repeated weak classifiers, the performance of the recognition system also improves.
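The iterative weak-classifier selection described in this abstract can be sketched with discrete AdaBoost over threshold stumps. The 1-D features below stand in for the per-position image features of the thesis; labels are +1/-1.

```python
# Discrete AdaBoost with threshold stumps, mirroring the iterative selection
# of the single most discriminative weak classifier described above. Each
# feature here is a plain number standing in for a per-position image feature.
import math

def train_adaboost(xs, ys, n_rounds):
    n = len(xs)
    w = [1.0 / n] * n                      # sample weights
    ensemble = []                          # (alpha, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        # Pick the weighted-error-minimising stump over candidate thresholds.
        for t in sorted(set(xs)):
            for pol in (1, -1):
                preds = [pol if x >= t else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, pol, preds)
        err, t, pol, preds = best
        err = max(err, 1e-12)              # avoid log(0) on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Re-weight: boost the samples this stump got wrong.
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x >= t else -pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1
```

The "avoid repeated weak classifiers" refinement mentioned above would simply skip stumps already present in the ensemble during selection.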
Hong, Jung-Wei, and 洪濬尉. "A Fast Learning Algorithm for Robotic Facial Expression Recognition." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/27350443841945007606.
Full text of source
National Chiao Tung University
Department of Electrical and Control Engineering
95
A robotic facial expression recognition system very often misclassifies data from a new face, because different people may show their expressions in different ways. This thesis studies a facial expression recognition system that can learn new facial data and enable a robot to accommodate itself to various persons. The main idea of the proposed method is to adjust the hyperplane parameters of a support vector machine (SVM) to classify new facial data. The concept of support vector pursuit learning (SVPL) is adopted to retrain the hyperplane in the Gaussian kernel space. To expedite the training procedure, we propose to retrain the new SVM classifier using only the incorrectly classified samples and the critical sets (CSs) from previous samples. After adjusting the hyperplane parameters, the new classifier not only recognizes new facial data but also maintains acceptable performance on previous data. Further, to obtain reliable facial features, we adopt Gabor wavelets in the system's feature extraction method. The proposed algorithms have been successfully implemented on an entertainment robot platform. On-line experimental results show that the proposed system learns new facial data with a recognition rate of 81.3%, up from an original recognition rate of 58%, while keeping a satisfactory recognition rate of 78.7% on old facial samples.
Jeng, Kuang-Yo, and 鄭匡祐. "Optimization of Facial Expression Recognition Classifier by PSO Algorithm." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/98434283982039184508.
Full text of source
Chaoyang University of Technology
Master's Program, Department of Information and Communication Engineering
99
This paper proposes an image recognition method for distinguishing eight facial expressions. When the eyes and the mouth of a person are each either open or closed, there are eight possible facial expressions to be distinguished. The Active Basis Model is used to construct basis templates and deformation templates for the eyes and mouth. The characteristics of the basis templates are used as features for training Support Vector Machines. The eye and mouth areas of the test images are extracted and used to generate the basis and deformation templates. After the three Support Vector Machines are trained, particle swarm optimization (PSO) is applied to search for the optimal Support Vector Machine parameters, improving the classification accuracy; the eight facial expressions can then be classified according to the statuses of the two eyes and the mouth. The accuracy of facial expression recognition is 90.33% when testing 100 facial images. The results show that the proposed method for facial expression recognition is very effective.
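The PSO parameter search described in this abstract can be sketched with a minimal particle swarm. The 2-D objective below is a toy bowl function so the sketch stays self-contained; in the actual setting it would be the cross-validation error of the SVM over hyper-parameters such as C and gamma.

```python
# Minimal particle swarm optimisation of a 2-D objective, standing in for the
# search over SVM hyper-parameters described above. The toy objective used in
# the test replaces the SVM's cross-validation error.
import random

def pso(objective, bounds, n_particles=20, n_iters=60, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    w, c1, c2 = 0.7, 1.5, 1.5    # inertia, cognitive, social weights
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # Clamp each coordinate to its search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Plugging in an SVM would mean defining `objective(p)` as the validation error of a classifier trained with `C, gamma = p`.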
Lin, Yong-Fong, and 林泳鋒. "A Robust Active Appearance Models Search Algorithm for Facial Expression Recognition." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/62228928210754309468.
Full text of source
National Central University
Graduate Institute of Communication Engineering
96
Active Appearance Models (AAMs) are an image representation method for non-rigid visual objects with both shape and texture variations. It is a model-based representation that uses a mean vector and a linear combination of a set of variation modes to represent a non-rigid object. By adjusting the coefficients of the linear combination of the variation modes (the model parameters), we can synthesize any non-rigid object. With this, we can express facial expressions using a model-based approach. For facial expression recognition, an AAMs search algorithm is required to find the optimal model parameters such that the synthesized expression is similar to the facial expression in the image. In this paper, we propose a novel iterative AAMs search algorithm. It minimizes the error that measures the difference between the model and a test image. We adopt only the magnitude of the predicted change of the parameters from the traditional search algorithm; the direction of the change is decided by estimating the gradient of the error function at each iteration. Moreover, we avoid local minima of the error function at each iteration by perturbing the searched parameters. Our experiments show that the proposed robust AAMs search algorithm reduces the shape location error by 36.41% and the texture intensity error by 30.82% relative to the traditional AAMs search algorithm.
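The core search idea in this abstract, keeping a predicted update magnitude but taking its sign from an estimated gradient of the error, can be sketched on a toy error function. The quadratic error and all names below are illustrative; in AAMs the error would be the model-versus-image difference over the model parameters.

```python
# Sketch of the search idea described above: keep the magnitude of a predicted
# parameter update but take its sign from a numerically estimated gradient of
# the matching error. The quadratic error in the test is a toy stand-in for
# the AAMs model-vs-image error; all names are illustrative.
def numerical_gradient(f, params, eps=1e-5):
    """Central-difference gradient estimate of f at params."""
    grad = []
    for i in range(len(params)):
        p_hi = params[:]; p_hi[i] += eps
        p_lo = params[:]; p_lo[i] -= eps
        grad.append((f(p_hi) - f(p_lo)) / (2 * eps))
    return grad

def guided_step(f, params, predicted_update):
    """Apply |predicted_update| per coordinate, signed against the gradient."""
    grad = numerical_gradient(f, params)
    return [p - abs(u) * (1 if g > 0 else -1)
            for p, u, g in zip(params, predicted_update, grad)]
```

The perturbation step mentioned in the abstract would add small random jitter to `params` whenever an iteration fails to reduce the error.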
Sigalingging, Xanno Kharis, and Xanno Kharis Sigalingging. "A Human-Computer Interface Classification Algorithm using EMG-Based Facial Gesture Recognition." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/71232114838745205650.
National Taiwan University of Science and Technology
Department of Electrical Engineering
105
An assistive technology (AT) in the form of a novel human interface device (HID) using an electromyography (EMG)-based gesture recognition scheme is proposed in this thesis. The system uses the MYO device from Thalmic Labs with custom software as the signal capturing tool. The EMG electrodes are placed at positions on the face corresponding to the locations of the major muscles that govern certain facial gestures. The signals are then processed with a Hidden Markov Model (HMM) algorithm to classify the gesture, and a custom command can be assigned to each gesture; depending on the gestures made, countless commands are possible. The accuracy of the system is 94.4% for five-gesture classification.
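The HMM classification step can be sketched with a scaled forward algorithm that scores a discrete observation sequence under each gesture's model and picks the best; the gesture names, model parameters, and two-symbol quantization below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-symbol HMM.
    pi: (N,) initial probs, A: (N, N) transitions, B: (N, M) emissions."""
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    log_p = np.log(scale)
    alpha = alpha / scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        log_p += np.log(scale)
        alpha = alpha / scale
    return log_p

def classify(obs, models):
    # score the sequence under every gesture model; keep the most likely
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))

# illustrative 2-state models; "jaw_clench" emits symbol 0 often, "smile" symbol 1
models = {
    "jaw_clench": (np.array([0.5, 0.5]),
                   np.array([[0.9, 0.1], [0.1, 0.9]]),
                   np.array([[0.9, 0.1], [0.8, 0.2]])),
    "smile": (np.array([0.5, 0.5]),
              np.array([[0.9, 0.1], [0.1, 0.9]]),
              np.array([[0.1, 0.9], [0.2, 0.8]])),
}
gesture = classify([0, 0, 1, 0, 0, 0], models)
```

In practice each gesture's model would be trained on quantized EMG feature sequences rather than hand-set as here.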
Hu, Jheng-Hong, and 胡正宏. "Design and Implement of Facial Features Detection and Facial Expression Recognition Algorithm for Baby Watch and Care System." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/49257342967572489050.
National Chung Hsing University
Department of Electrical Engineering
97
Nowadays, because of the low birth rate and busy working parents, the safety of babies is increasingly demanded, so the development of intelligent baby-watch-and-care technology can improve baby safety in indoor and home environments. In this study, we determine whether a baby is in a safe condition from its facial expressions. The facial expression recognition covers three conditions: deadpan, smiling, and crying. First, we extract the baby's face from the image and track it. We then detect fourteen features on the face: eight for the eyes, four for the mouth, and two for the eyebrows. The distances between the features are calculated and used as input values to a neural network, so the scheme can recognize baby facial expressions. In this thesis, the feature detection part solves the problem of color temperature in different environments: light with a red, yellow, or other color temperature does not cause detection mistakes. Because it relies on the baby's features, the detection applies not only to Asian babies but also to babies of other ethnicities, so the system can be used with Caucasian or African American babies. Besides, we discuss two experimental feature detection methods in this study: an Adaptive Color Analysis Detection Method and a Fast Ellipse Template Edge Detection Method. Comparing these two methods with the Eye Filter Detection Method of this study, we report the accuracy and operation speed of the detection results and analyze their advantages and drawbacks.
Wang, Yun Yao, and 王韻堯. "Development of a facial image processing algorithm with its application into emotional recognition." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/26439000077687682089.
Chang Gung University
Department of Electronic Engineering
99
Facial expression recognition has become a popular subject because it is a form of non-verbal communication between computers and humans. We propose a two-stage recognition algorithm. It is separated into three parts: pre-processing, screening in the first stage, and recognition in the second stage. We use a real-time object detection algorithm to extract the facial image in pre-processing. In the screening part, we calculate vectors of the images and compare them with the database using long Haar-like filters. In the last part, we extract information from the mouth for positive emotions and from the eyebrows for negative emotions. Finally, we use a Linear Discriminant Function (LDF) to recognize the emotion from this information. The final recognition rates are 86.21% for happy and sad, and 75.86% for angry.
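The LDF stage can be illustrated with a two-class Fisher linear discriminant in plain numpy; the feature values below are synthetic, and the exact discriminant form used in the thesis is not specified.

```python
import numpy as np

def fit_ldf(X0, X1):
    # Fisher linear discriminant: w = Sw^{-1} (mu1 - mu0), bias at the midpoint
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2.0
    return w, b

def predict(X, w, b):
    # 1 = class of X1, 0 = class of X0
    return (X @ w + b > 0).astype(int)

rng = np.random.default_rng(1)
X_neg = rng.normal(0.0, 0.5, (40, 2))  # synthetic stand-in for eyebrow features
X_pos = rng.normal(3.0, 0.5, (40, 2))  # synthetic stand-in for mouth features
w, b = fit_ldf(X_neg, X_pos)
```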
"A matching algorithm for facial memory recall in forensic applications." 2000. http://library.cuhk.edu.hk/record=b5890306.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2000.
Includes bibliographical references (leaves 82-87).
Abstracts in English and Chinese.
List of Figures --- p.vi
List of Tables --- p.vii
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Objective of This Thesis --- p.3
Chapter 1.2 --- Organization of This Thesis --- p.3
Chapter 2 --- Literature Review --- p.4
Chapter 2.1 --- Facial Memory Recall --- p.4
Chapter 2.2 --- Facial Recognition --- p.6
Chapter 2.2.1 --- Earlier Approaches --- p.7
Chapter 2.2.2 --- Feature and Template Matching --- p.8
Chapter 2.2.3 --- Neural Network --- p.10
Chapter 2.2.4 --- Statistical Approach --- p.14
Chapter 3 --- A Forensic Application of Facial Recall --- p.19
Chapter 3.1 --- Motivation --- p.20
Chapter 3.2 --- AICAMS-FIT --- p.20
Chapter 3.2.1 --- The Facial Component Library --- p.21
Chapter 3.2.2 --- The Feature Selection Module --- p.24
Chapter 3.2.3 --- The Facial Construction Module --- p.24
Chapter 3.3 --- The Interaction Between The Three Main Components --- p.29
Chapter 3.4 --- Summary --- p.30
Chapter 4 --- Sketch-to-Sketch Matching --- p.31
Chapter 4.1 --- The Representation of A Composite Face --- p.31
Chapter 4.2 --- The Component-based Encoding Scheme --- p.32
Chapter 4.2.1 --- Local Feature Analysis --- p.34
Chapter 4.2.2 --- Similarity Matrix --- p.36
Chapter 4.3 --- Experimental Results and Evaluation --- p.41
Chapter 4.4 --- Shortcomings of the encoding scheme --- p.44
Chapter 4.4.1 --- Size Variation --- p.45
Chapter 4.5 --- Summary --- p.51
Chapter 5 --- Sketch-to-Photo/Photo-to-Sketch Matching --- p.52
Chapter 5.1 --- Principal Component Analysis --- p.53
Chapter 5.2 --- Experimental Setup --- p.56
Chapter 5.3 --- Experimental Results --- p.59
Chapter 5.3.1 --- Sketch-to-Photo Matching --- p.59
Chapter 5.3.2 --- Photo-to-Sketch Matching --- p.62
Chapter 5.4 --- Summary --- p.66
Chapter 6 --- Future Work --- p.67
Chapter 7 --- Conclusions --- p.70
Chapter A --- Image Library I --- p.72
Chapter A.1 --- The Database for Searching --- p.72
Chapter A.2 --- The Database for Testing --- p.74
Chapter B --- Image Library II --- p.75
Chapter B.1 --- The Photographic Database --- p.75
Chapter B.2 --- The Sketch Database --- p.77
Chapter C --- The Eigenfaces --- p.78
Chapter C.1 --- Eigenfaces of Photographic Database (N = 20) --- p.78
Chapter C.2 --- Eigenfaces of Photographic Database (N = 100) --- p.79
Chapter C.3 --- The Eigenfaces of Sketch Database --- p.81
Bibliography --- p.82
Candra, Henry. "Emotion recognition using facial expression and electroencephalography features with support vector machine classifier." Thesis, 2017. http://hdl.handle.net/10453/116427.
Recognizing emotions from facial expressions and from electroencephalography (EEG) emotion signals are complicated tasks that require substantial issues to be solved in order to achieve higher classification performance: facial expression recognition has to deal with features, feature dimensionality, and classification processing time, while EEG emotion recognition is concerned with features, the number of channels and sub-band frequencies, and the non-stationary behaviour of EEG signals. This thesis addresses the aforementioned challenges. First, a feature for facial expression recognition using a combination of the Viola-Jones algorithm and an improved Histogram of Oriented Gradients (HOG) descriptor, termed Edge-HOG or E-HOG, is proposed, which has the advantage of insensitivity to lighting conditions. The issues of dimensionality and classification processing time were resolved using a combination of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which successfully reduced both the dimension and the classification processing time, resulting in a new low-dimensional feature called Reduced E-HOG (RED E-HOG). For EEG emotion recognition, a method to recognize 4 discrete emotions in the arousal-valence dimensional plane using wavelet energy and entropy features was developed. The effects of EEG channel and sub-band selection were also addressed, reducing the channels from 32 to 18 and the sub-bands from 5 to 3. To deal with the non-stationary behaviour of EEG signals, an Optimal Window Selection (OWS) method was proposed as feature-agnostic pre-processing. The main objective of OWS is window segmentation with varying windows; it was applied to 7 different features to improve the classification of 4 dimensional-plane emotions, namely arousal, valence, dominance, and liking, distinguishing between the high and low states of each.
The improvement in accuracy makes the OWS method a potential solution for dealing with the non-stationary behaviour of EEG signals in emotion recognition. The implementation of OWS indicates that EEG emotions may be appropriately localized in 4-12 second time segments. In addition, a feature concatenating Wavelet Entropy and average Wavelet Approximation Coefficients was developed for EEG emotion recognition. The SVM classifier trained with this feature consistently provides higher classification results than various other features, such as the simple average, the Fast Fourier Transform (FFT), and Wavelet Energy. In all the experiments, classification was conducted using an optimized SVM with a Radial Basis Function (RBF) kernel. The RBF kernel parameters were optimized using a particle swarm ensemble clustering algorithm called Ensemble Rapid Centroid Estimation (ERCE), which estimates the number of clusters directly from the data using swarm intelligence and ensemble aggregation. The SVM is then trained using the optimized RBF kernel parameters and the Sequential Minimal Optimization (SMO) algorithm.
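The wavelet energy and entropy features can be sketched with a hand-rolled Haar decomposition. This is a simplification: the thesis does not specify this wavelet or implementation, and the synthetic sine signal merely stands in for an EEG epoch.

```python
import numpy as np

def haar_step(x):
    # one level of the Haar DWT: approximation and detail coefficients
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_energy_entropy(signal, levels=3):
    # per-level detail-band energy and Shannon entropy, concatenated
    feats = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        energy = float(np.sum(d ** 2))
        p = d ** 2 / (energy + 1e-12)        # normalized subband distribution
        entropy = float(-np.sum(p * np.log2(p + 1e-12)))
        feats.extend([energy, entropy])
    return np.array(feats)

eeg_like = np.sin(np.linspace(0.0, 8.0 * np.pi, 256))  # synthetic epoch
feats = wavelet_energy_entropy(eeg_like)
```

These per-band (energy, entropy) pairs would then be concatenated across the selected channels to form the SVM input vector.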
Tang, Chia-Wei, and 湯家瑋. "A Research of Training Neural Network Based on Particle Swarm Optimization Algorithm for Facial Expression Recognition." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/37490062119094195968.
National Kaohsiung University of Applied Sciences
Department of Electronic Engineering
98
Facial expression plays a critical role in our daily life, and in the field of human-machine interaction the facial expression recognition system has become an important issue in recent years; both human-computer and human-robot interfaces are important applications. Facial expressions usually carry important messages in everyday communication, so facial expression recognition has become a topic in which a large number of researchers are interested. Our system consists of three major parts: face detection, feature extraction, and expression recognition. In the face detection stage, after picking out the facial area using skin color detection, we locate the eyebrow, mouth, and eye areas using facial geometry proportions. In the feature extraction stage, we process each area image in two ways: first, we identify the distinct profile via Sobel edge detection and transfer the image into binary form; second, we sort the gray values of the area image and also transfer it into binary form. We then perform an AND operation on these two binary images, mark the feature points on the eyebrows, mouth, and eyes respectively, and calculate the feature information from the distances between specific points. For expression recognition, we use a neural network and train the network weights via back-propagation learning and PSO to reach an optimum. To distinguish the expressions happy, sad, surprise, anger, fear, disgust, and neutral, we applied the Japanese Female Facial Expression (JAFFE) database in our study. The experimental results show that the proposed approach reaches a recognition rate of 89%.
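The Sobel-plus-binarization step can be sketched as follows; the keep_ratio threshold standing in for the gray-value-sort binarization is an assumption, not the thesis's parameter.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d_valid(img, k):
    # minimal 'valid' 3x3 convolution, enough for this sketch
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    kf = k[::-1, ::-1]
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kf)
    return out

def edge_binary(img, keep_ratio=0.2):
    # Sobel gradient magnitude, binarized by keeping the strongest
    # keep_ratio fraction of pixels (assumed stand-in for the
    # gray-value-sort threshold in the abstract)
    gx = convolve2d_valid(img, SOBEL_X)
    gy = convolve2d_valid(img, SOBEL_Y)
    mag = np.hypot(gx, gy)
    thresh = np.quantile(mag, 1.0 - keep_ratio)
    return (mag >= thresh).astype(np.uint8)

img = np.zeros((10, 10))
img[:, 5:] = 1.0          # vertical step edge between columns 4 and 5
b = edge_binary(img)
```

In the abstract's pipeline, this edge map would then be ANDed (`b1 & b2`) with the gray-value-sort binarization before the feature points are marked.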
Yu, Sheng-Min, and 游聖民. "Design and Implementation of Facial Expression Recognition and Foreign Object Detection Algorithm for Baby Watch and Care System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/83502166059607439703.
National Chung Hsing University
Department of Electrical Engineering
98
In this study, we discuss a digital intelligent baby-watch-and-care system that can recognize a baby's expression and detect external objects. The system alerts watchers when it detects something around the mouth and nose, or a vomiting condition. In addition, we determine whether the baby is in a safe condition from its facial expressions. Thus, we can replace manpower security with the intelligent video system and reduce the watcher's burden. The intelligent baby-watch-and-care system has two subsystems: facial expression recognition and external object detection. The facial expression recognition covers three conditions: deadpan, smiling, and crying. First, we extract the baby's face features from the image; the distances between the features are calculated and used as input values to the neural network, so the scheme can recognize baby facial expressions. The external object detection focuses on detecting vomit and objects around the mouth and nose. To achieve this, we observe the color change near the mouth by comparing the current and previous frames in dynamic video sequences. The experimental results show that our algorithm detects the eye features accurately: using a quad-core (2.66 GHz) computer with C code, the accuracy of eye feature detection is up to 88% with a processing time of 45 ms, and the accuracy of facial expression recognition is about 80%. Finally, following the principle of HW/SW co-design, the baby-watch-and-care system is implemented on an embedded platform composed of an ARM926EJ-S CPU and a Xilinx FPGA. We first profile the execution time of each module of the algorithm and choose the module with the maximum computational complexity for hardware realization.
The HW/SW co-design can process 1.86 frames per second, and the pure software design with the ARM CPU can achieve 3.03 frames per second, with the ARM CPU operating at 266 MHz and the system frequency at 50 MHz.
Sales, Caio César Galdino. "Impactos sociais do reconhecimento facial: Privacidade e vigilância." Master's thesis, 2021. http://hdl.handle.net/10071/24838.
The integration of technological devices promoted by cyberculture has transformed the everyday spectrum and included the digital sphere in the universe of social interactions, making the online and offline contexts almost undifferentiated. The development of information technologies represents an advance from a technical point of view; however, the use of face recognition techniques under a commercial logic impacts the social context, as it conflicts with traditional privacy structures and with mechanisms that regulate the processing of biometric information based on consent issued by the user in the digital universe. At the same time, the use of sophisticated surveillance techniques inherent to modern societies makes face recognition an ally for security forces that enables intrusive surveillance practices, in which the influence of algorithmic bias and the automation of criminal processes raise concerns that affect different social groups disproportionately. Given this framework, a critical reflection is proposed on technological mediation associated with the contexts of privacy and security, addressing the multiplicity of harms resulting from the processing of personal information in a decontextualized way and the consequences of this process for the creation of subjectivities in the digital sphere. This framework suggests the influence of racial issues linked to historical processes that shape surveillance practices permeated by hierarchical stereotypes, which in turn are transferred to the systemic logic of algorithmic decisions, highlighting the responsibility of technology companies and security authorities, in addition to ethical issues associated with the universe of Artificial Intelligence.
Ou, Wei-Liang, and 歐威良. "Design and Embedded System Implementation of Colorless Foreign Object Detection Algorithm and Facial Expression Recognition for Baby Watch and Care System." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/40068638807786860451.
National Chung Hsing University
Department of Electrical Engineering
99
This study presents a colorless facial foreign object detection algorithm and a facial expression recognition technique for baby-care video surveillance systems, together with a complete implementation on an embedded platform. When a baby encounters foreign object invasion, spit-up in the nose and mouth, vomiting, or other dangerous situations, the system immediately issues an alert to the guardian, and it simultaneously uses facial expressions to judge whether the baby is crying. Because the video processing algorithm is colorless, a near-infrared camera alone functions properly day and night, so this video surveillance system effectively reduces the human burden. The proposed intelligent baby-watch-and-care system is divided into two subsystems: facial foreign object detection and facial expression recognition. The facial object detection focuses on detecting vomit and objects around the mouth and nose. To achieve this, we observe the color change near the mouth by comparing the current and previous frames in dynamic video sequences. The facial expression recognition considers only three conditions: deadpan, smiling, and crying. First, we extract the baby's face features from facial images; the distances between the features are calculated and used as input vectors to the neural network, so the scheme recognizes baby facial expressions effectively. Using a quad-core (2.50 GHz) computer with C code, the experimental results show that our algorithm detects the eye features accurately: the accuracy of eye feature detection is up to 91% with a processing time of 29 ms, the accuracy of nose feature detection is up to 87% with a processing time of 1 ms, and the accuracy of mouth feature detection is up to 87% with a processing time of 2.1 ms. The accuracy of facial expression recognition is about 80%.
Finally, the baby-watch-and-care system is implemented on the BOLYMIN BEGA220A embedded platform with an ARM926EJ CPU. With software-based optimizations, the computational complexity is reduced and the software computations are optimized. When the system operates at 400 MHz with the ARM CPU running Windows CE, the frame rate is about 1.5 frames per second; with near-infrared photography, the frame rate is 1.02 frames per second.
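The frame-differencing check near the mouth that this line of theses describes can be sketched minimally; the ROI coordinates and both thresholds below are illustrative assumptions, not the thesis's values.

```python
import numpy as np

def foreign_object_alert(prev, curr, roi, diff_thresh=25, area_ratio=0.05):
    """Flag a change near the mouth/nose between two grayscale frames.
    roi = (top, bottom, left, right) is an assumed mouth/nose bounding box."""
    t, b, l, r = roi
    # absolute per-pixel difference inside the ROI (int to avoid uint8 wrap)
    d = np.abs(curr[t:b, l:r].astype(int) - prev[t:b, l:r].astype(int))
    changed = (d > diff_thresh).mean()   # fraction of ROI pixels that changed
    return bool(changed > area_ratio)

prev = np.zeros((60, 80), dtype=np.uint8)   # previous frame
curr = prev.copy()
curr[30:40, 35:45] = 200                    # simulated object near the mouth
roi = (25, 45, 30, 50)                      # assumed mouth/nose region
```

A real deployment would update `prev` every frame and debounce the alert over several consecutive detections.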