Dissertations on the topic "Facial recognition algorithms"

To see other types of publications on this topic, follow the link: Facial recognition algorithms.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Browse the top 29 dissertations for your research on the topic "Facial recognition algorithms".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, when this information is available in the metadata.

Browse dissertations across a wide variety of disciplines and compile your bibliography correctly.

1

Nordén, Frans, and Reis Marlevi Filip von. "A Comparative Analysis of Machine Learning Algorithms in Binary Facial Expression Recognition." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254259.

Full text of the source
Abstract:
In this paper, an analysis is conducted regarding whether a higher classification accuracy of facial expressions is possible when the seven basic emotional states are combined into a binary classification problem. Five different machine learning algorithms are implemented: support vector machines, extreme learning machines, and three different convolutional neural networks (CNNs). The CNNs used were one conventional, one based on VGG16 and transfer learning, and one based on residual theory, known as ResNet50. The experiment was conducted on two datasets: a small one containing no contamination, JAFFE, and a large one containing contamination, FER2013. The highest accuracy was achieved with the CNNs, among which ResNet50 had the highest classification accuracy. Compared with the state-of-the-art accuracy, an improvement of around 0.09 was achieved on the FER2013 dataset. This dataset does, however, include some ambiguities regarding which facial expression is shown. It would therefore be of interest to conduct an experiment in which humans classify the facial expressions in the dataset in order to establish a benchmark.
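As a rough illustration of the binary reformulation described above, the sketch below collapses the seven basic emotional states into two classes and scores predictions. The positive/negative grouping and all data are hypothetical; the thesis does not specify its exact grouping here.

```python
# Hypothetical grouping of the seven basic emotions into a binary problem.
POSITIVE = {"happiness", "surprise"}
NEGATIVE = {"anger", "disgust", "fear", "sadness", "neutral"}

def to_binary(label: str) -> int:
    """Return 1 for assumed-positive emotions, 0 for assumed-negative ones."""
    if label in POSITIVE:
        return 1
    if label in NEGATIVE:
        return 0
    raise ValueError(f"unknown emotion: {label}")

def accuracy(y_true, y_pred):
    """Fraction of binary predictions matching the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

labels = ["happiness", "fear", "anger", "surprise"]
y_true = [to_binary(lab) for lab in labels]   # [1, 0, 0, 1]
y_pred = [1, 0, 1, 1]                         # hypothetical classifier output
print(accuracy(y_true, y_pred))               # 0.75
```

Any classifier (SVM, ELM, or CNN) can then be trained and scored against these binary labels.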
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Silva, Eduardo Machado. "Padrões mapeados localmente em multiescala aplicados ao reconhecimento de faces." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/154142.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Facial recognition is one of the most widely used biometric technologies in automated systems that must verify a person's identity for authorized access and monitoring. The wide acceptance of the face has several advantages over other biometric traits: it is natural, it does not require sophisticated equipment, data acquisition is non-invasive, and it can be done at a distance, cooperatively or not. Although many facial recognition studies have been carried out, problems with lighting variation, facial occlusion, pose, expression, and aging remain challenges, because they affect the performance of facial recognition systems and motivate the development of more reliable systems that handle these problems. This work evaluates the Multiscale Local Mapped Pattern (MSLMP) technique for facial recognition. Techniques based on genetic algorithms and image processing were applied to increase the performance of the method. The results obtained reach up to 100% accuracy on some databases. A particularly difficult case is the MUCT database, created in 2010 with the aim of providing the facial biometry literature with images showing high variation in lighting, age, pose, and ethnicity, which makes it a very hard database for automated face recognition. A new processing technique based on the average gray levels of the images in the database was also developed.
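MSLMP belongs to the family of local binary-pattern-style texture descriptors. As a hedged illustration of the underlying idea, and not of the thesis's exact multiscale mapping, the classic 3x3 local binary pattern can be sketched as:

```python
def lbp_code(patch):
    """Classic 3x3 local binary pattern: threshold the 8 neighbours
    against the centre pixel and pack the bits, clockwise from the
    top-left corner, into one byte."""
    c = patch[1][1]
    # neighbour coordinates, clockwise from top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(coords):
        if patch[i][j] >= c:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))   # 241
```

A histogram of such codes over image regions forms the face descriptor; MSLMP generalizes the hard threshold to a smooth mapping computed at several scales.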
3

Dragon, Carolyn Bradford. "Let’s Face It: The effect of orthognathic surgery on facial recognition algorithm analysis." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5778.

Abstract:
Aim: To evaluate the ability of a publicly available facial recognition application programming interface (API) to calculate similarity scores for pre- and post-surgical photographs of patients undergoing orthognathic surgery. Our primary objective was to identify which surgical procedure(s) had the greatest effect on similarity score. Methods: Standard treatment progress photographs for 25 retrospectively identified orthodontic-orthognathic patients were analyzed using the API to calculate similarity scores between the pre- and post-surgical photographs. Photographs from two pre-surgical timepoints were compared as controls. Both relaxed and smiling photographs were included to assess the added impact of facial pose on similarity score. The surgical procedure(s) performed on each patient, gender, age at time of surgery, and ethnicity were recorded for statistical analysis. Nonparametric Kruskal-Wallis rank sum tests were performed to univariately analyze the relationship between each categorical patient characteristic and each recognition score. Multiple-comparison Wilcoxon rank sum tests were then performed on the statistically significant characteristics, with p-values adjusted using the Bonferroni correction. Results: Patients who had surgery on both jaws had a lower median similarity score, when comparing relaxed expressions before and after surgery, than those who had surgery only on the mandible (p = 0.014). Patients receiving LeFort and bilateral sagittal split osteotomy (BSSO) surgeries also had a lower median similarity score than those who received only BSSO (p = 0.009). For the score comparing relaxed expressions before surgery with smiling expressions after surgery, patients receiving two-jaw surgeries had lower scores than those who had surgery only on the mandible (p = 0.028). Patients who received LeFort and BSSO surgeries likewise had lower similarity scores than patients who received only BSSO when comparing pre-surgical relaxed photographs to post-surgical smiling photographs (p = 0.036). Conclusions: Two-jaw surgeries were associated with a statistically significant decrease in similarity score compared to one-jaw procedures. Pose was also found to influence similarity scores, especially when comparing pre-surgical relaxed photographs to post-surgical smiling photographs.
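The Bonferroni correction used in the study is simple to state: each p-value is multiplied by the number of comparisons and capped at 1. A minimal sketch, using hypothetical raw p-values (the values reported in the abstract are already adjusted):

```python
def bonferroni(p_values):
    """Bonferroni adjustment for m comparisons: each p-value is
    multiplied by m and capped at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

raw = [0.0035, 0.0023, 0.0070, 0.0090]   # hypothetical raw p-values
adj = bonferroni(raw)
print(adj)   # each raw p-value multiplied by 4, capped at 1
```

A comparison is then declared significant if its adjusted p-value stays below the chosen alpha (e.g. 0.05).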
4

Garcia, Ivette Cristina Araujo, Eduardo Rodrigo Linares Salmon, Rosario Villalta Riega, and Alfredo Barrientos Padilla. "Implementation and customization of a smart mirror through a facial recognition authentication and a personalized news recommendation algorithm." Institute of Electrical and Electronics Engineers Inc, 2018. http://hdl.handle.net/10757/624657.

Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher.
In recent years, advances in information and communication technologies (ICTs) have helped to improve people's quality of life. The Internet of Things (IoT) paradigm offers innovative solutions that are changing the way people live. This work therefore proposes the implementation of a smart mirror as part of a home automation system, intended to optimize people's time as they prepare to start their day. The device is built from a reflective glass, an LCD monitor, a Raspberry Pi 3, a camera, and a cloud-computing-oriented IoT platform from which the information displayed in the mirror is obtained through the consumption of web services. The displayed information is customizable through a mobile application, which also lets the user register photos for accessing the mirror via facial recognition authentication, and supplies user information used to predict which news to show according to the user's profile. In addition, as part of the idea of providing the user with a personalized experience, the smart mirror incorporates a news recommendation algorithm, implemented as a predictive model based on the Naive Bayes algorithm.
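The abstract gives no implementation details, but a multinomial Naive Bayes news categorizer of the kind described can be sketched as follows; the toy headlines, categories, and class design are invented for illustration:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace smoothing,
    illustrating how a news category could be predicted from the
    words of headlines a user has read."""
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.priors = Counter(labels)            # class frequencies
        self.word_counts = defaultdict(Counter)  # per-class word counts
        self.totals = Counter()                  # per-class token totals
        self.vocab = set()
        for words, y in zip(docs, labels):
            for w in words:
                self.word_counts[y][w] += 1
                self.totals[y] += 1
                self.vocab.add(w)
        return self

    def predict(self, words):
        n = sum(self.priors.values())
        v = len(self.vocab)
        def log_post(y):
            # log prior + sum of smoothed log likelihoods
            lp = math.log(self.priors[y] / n)
            for w in words:
                lp += math.log((self.word_counts[y][w] + 1) /
                               (self.totals[y] + v))
            return lp
        return max(self.classes, key=log_post)

docs = [["match", "goal", "league"], ["election", "vote"],
        ["team", "goal"], ["senate", "vote", "bill"]]
labels = ["sports", "politics", "sports", "politics"]
nb = NaiveBayes().fit(docs, labels)
print(nb.predict(["goal", "match"]))   # sports
```

In the mirror's setting, the "documents" would be articles the user engaged with, and the predicted class drives which news items are surfaced.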
Peer reviewed
5

Silva, Jadiel Caparrós da [UNESP]. "Aplicação de sistemas imunológicos artificiais para biometria facial: Reconhecimento de identidade baseado nas características de padrões binários." Universidade Estadual Paulista (UNESP), 2015. http://hdl.handle.net/11449/127901.

Abstract:
This work aims to perform identity recognition with a method based on Artificial Immune Systems, namely the Negative Selection Algorithm. To this end, suitable resources and alternatives for analyzing 3D facial expressions were explored, building on the Binary Pattern technique that has been applied successfully to the 2D problem. First, the 3D facial geometry was converted into two 2D representations, the Depth Map and the Azimuthal Projection Distance Image, which were implemented with features such as Local Phase Quantisers, Gabor filters, and monogenic filters to produce descriptors for the facial expression analysis. The Negative Selection Algorithm is then applied, and the images are compared and analyzed against previously created detectors. If there is affinity with the images, the image is classified; this classification is called matching. Finally, to validate and evaluate the performance of the method, tests were performed first with images taken directly from the database and then with ten descriptors developed from the binary patterns. These tests aim to determine which descriptors and which expressions are best for identity recognition, and to validate the performance of the new identity recognition solution based on Artificial Immune Systems. The results show efficiency, robustness, and precision in facial identity recognition.
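The Negative Selection Algorithm named above generates random detectors and discards any that match "self" samples; an unseen sample is then flagged if any surviving detector matches it. A minimal binary-string sketch under that reading (the thesis operates on image descriptors, not bit strings):

```python
import random

def hamming(a, b):
    """Number of positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def train_detectors(self_set, n_detectors, length, radius, rng):
    """Negative selection: keep only random candidates that do NOT
    match any 'self' string within the given Hamming radius."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.randint(0, 1) for _ in range(length))
        if all(hamming(cand, s) > radius for s in self_set):
            detectors.append(cand)
    return detectors

def is_nonself(sample, detectors, radius):
    """A sample matches (is flagged) if any detector is within radius."""
    return any(hamming(sample, d) <= radius for d in detectors)

rng = random.Random(0)
self_set = [(0, 0, 0, 0, 0, 0), (1, 1, 1, 1, 1, 1)]
dets = train_detectors(self_set, 10, 6, 1, rng)
# 'Self' samples are never flagged: every detector was built to lie
# strictly outside the self region.
print(is_nonself(self_set[0], dets, 1))   # False
```

In the identity-recognition setting, "self" would be descriptors of the enrolled identity, and a match against a detector signals a different identity.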
6

Silva, Jadiel Caparrós da. "Aplicação de sistemas imunológicos artificiais para biometria facial: Reconhecimento de identidade baseado nas características de padrões binários /." Ilha Solteira, 2015. http://hdl.handle.net/11449/127901.

Abstract:
Advisor: Anna Diva Plasencia Lotufo
Co-advisor: Jorge Manuel M. C. Pereira Batista
Committee member: Carlos Roberto Minussi
Committee member: Ricardo Luiz Barros de Freitas
Committee member: Díbio Leandro Borges
Committee member: Gelson da Cruz Junior
Abstract: This work aims to perform identity recognition with a method based on Artificial Immune Systems, namely the Negative Selection Algorithm. To this end, suitable resources and alternatives for analyzing 3D facial expressions were explored, building on the Binary Pattern technique that has been applied successfully to the 2D problem. First, the 3D facial geometry was converted into two 2D representations, the Depth Map and the Azimuthal Projection Distance Image, which were implemented with features such as Local Phase Quantisers, Gabor filters, and monogenic filters to produce descriptors for the facial expression analysis. The Negative Selection Algorithm is then applied, and the images are compared and analyzed against previously created detectors. If there is affinity with the images, the image is classified; this classification is called matching. Finally, to validate and evaluate the performance of the method, tests were performed first with images taken directly from the database and then with ten descriptors developed from the binary patterns. These tests aim to determine which descriptors and which expressions are best for identity recognition, and to validate the performance of the new identity recognition solution based on Artificial Immune Systems. The results show efficiency, robustness, and precision in facial identity recognition.
Doctorate
7

Grossard, Charline. "Evaluation et rééducation des expressions faciales émotionnelles chez l’enfant avec TSA : le projet JEMImE Serious games to teach social interactions and emotions to individuals with autism spectrum disorders (ASD) Children facial expression production : influence of age, gender, emotion subtype, elicitation condition and culture." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS625.

Abstract:
Autism spectrum disorder (ASD) is characterized by difficulties with social skills, such as emotion recognition and production. Several studies have focused on emotional facial expression (EFE) recognition, but few have examined EFE production, either in typical children or in children with ASD. Information and communication technologies are now widely used to work on social skills in ASD, yet few studies using these technologies focus on EFE production: after a literature review, we found only four games addressing it. Our goal was to create the serious game JEMImE to train EFE production in children with ASD using automatic feedback. We first created a dataset of EFEs of typical children and children with ASD to train an EFE recognition algorithm and to study their production skills. Several factors modulate these skills, such as age, type of emotion, and culture. We observed that both human judges and the algorithm rated the quality of the EFEs of children with ASD as poorer than those of typical children, and that the EFE recognition algorithm needed more features to classify their EFEs. We then integrated the algorithm into JEMImE to give the child real-time visual feedback for correcting his or her productions. A pilot study with 23 children with ASD showed that children were able to adapt their productions thanks to the algorithm's feedback, and indicated an overall good subjective experience with JEMImE. The beta version of JEMImE shows promising potential and encourages further development of the game in order to offer longer play exposure to children with ASD and thus allow a reliable assessment of the effect of this training on their EFE production.
8

Ben, Soltana Wael. "Optimisation de stratégies de fusion pour la reconnaissance de visages 3D." Phd thesis, Ecole Centrale de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-01070638.

Abstract:
Facial recognition (FR) is a very active research field owing to its many applications in computer vision in general and in biometrics in particular. This interest is motivated by several reasons. First, the face is universal. Second, it is the most natural way for human beings to identify one another. Finally, the face as a biometric modality is non-intrusive, which distinguishes it from other biometric modalities such as the iris or the fingerprint. FR also poses significant scientific challenges: first, because all human faces have similar configurations; second, with easily acquired 2D facial images, intra-class variation, caused by factors such as changes in pose and lighting conditions, variations in facial expression, and aging, is far greater than inter-class variation. With the arrival of 3D acquisition systems able to capture the depth of objects, 3D facial recognition (3D FR) has emerged as a promising way to address the two problems left unsolved in 2D, namely pose and lighting variations. Indeed, 3D cameras generally deliver 3D face scans together with their aligned texture images. A 3D FR solution can therefore benefit from a judicious fusion of 3D shape information and 2D texture information. Since 3D face scans provide both facial surfaces for the pure 3D modality and aligned 2D texture images, the number of possible fusion schemes for optimizing the recognition rate is considerable.
Optimizing fusion strategies for better 3D FR is the main objective of the research carried out in this thesis. In the state of the art, various fusion strategies have been proposed for 3D face recognition, ranging from early fusion at the feature level to late fusion on classifier outputs, through many intermediate strategies. Among late fusion strategies we further distinguish parallel, cascaded, and multi-level combinations. An exhaustive exploration of such a space is impossible, so heuristic solutions are required; these form our basic approach in this thesis. Moreover, within the framework of biometric systems, the optimality criteria of fusion strategies remain essential questions. A fusion strategy is said to be optimized if it can integrate and exploit the different modalities and, more broadly, the different pieces of information extracted during the recognition process, whatever their level of abstraction and, consequently, of difficulty. To overcome these difficulties and propose an optimized solution, our approach relies, on the one hand, on learning, which qualifies the 2D and 3D experts on training data according to performance criteria such as the EER, and, on the other hand, on heuristic optimization strategies such as simulated annealing, which optimize the mixtures of experts to be fused. [...]
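As a hedged sketch of the heuristic optimization just described, the following applies simulated annealing to a single fusion weight combining two experts' match scores. A simple threshold error stands in for the performance criteria (such as EER) used in the thesis, and all scores are invented:

```python
import math
import random

def fused_error(w, scores_a, scores_b, labels):
    """Error rate of a weighted-sum fusion of two experts' match
    scores (fused score >= 0.5 predicts a genuine match)."""
    errs = 0
    for sa, sb, y in zip(scores_a, scores_b, labels):
        pred = 1 if w * sa + (1 - w) * sb >= 0.5 else 0
        errs += (pred != y)
    return errs / len(labels)

def anneal(scores_a, scores_b, labels, steps=2000, t0=1.0, seed=0):
    """Simulated annealing over the fusion weight w in [0, 1]."""
    rng = random.Random(seed)
    w = 0.5
    e = fused_error(w, scores_a, scores_b, labels)
    best_w, best_e = w, e
    for k in range(1, steps + 1):
        t = t0 / k                                   # cooling schedule
        cand = min(1.0, max(0.0, w + rng.uniform(-0.1, 0.1)))
        ce = fused_error(cand, scores_a, scores_b, labels)
        # accept improvements always, worsenings with Boltzmann probability
        if ce < e or rng.random() < math.exp(-(ce - e) / t):
            w, e = cand, ce
            if e < best_e:
                best_w, best_e = w, e
    return best_w, best_e

labels   = [1, 1, 1, 0, 0, 0]
scores_a = [0.9, 0.8, 0.7, 0.2, 0.1, 0.3]   # reliable expert
scores_b = [0.6, 0.2, 0.9, 0.8, 0.4, 0.9]   # noisy expert
w, err = anneal(scores_a, scores_b, labels)
print(err <= fused_error(0.5, scores_a, scores_b, labels))   # True
```

The real search space in the thesis is far richer (which experts to fuse, and how), but the accept/cool loop is the same shape.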
9

Surajpal, Dhiresh Ramchander. "An independent evaluation of subspace facial recognition algorithms." Thesis, 2008. http://hdl.handle.net/10539/5906.

Abstract:
In traversing the diverse field of biometric security and face recognition techniques, this investigation presents a rather rare comparative study of three of the most popular appearance-based face recognition projection classes: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA). Both the linear and kernel alternatives are investigated, along with the four most widely accepted similarity measures: City Block (L1), Euclidean (L2), Cosine, and Mahalanobis. Although comparisons between these classes can become fairly complex given the different task natures, algorithm architectures, and distance metrics that must be taken into account, an important aspect of this study is the completely equal working conditions provided in order to facilitate fair and proper comparative levels of evaluation. In doing so, one is able to realise an independent study that significantly contributes to prior literary findings, either by verifying previous results, offering further insight into why certain conclusions were made, or by providing a better understanding as to why certain claims should be disputed and under which conditions they may hold true. The experimental procedure examines ten algorithms in the categories of expression, illumination, occlusion, and temporal delay; the results are then evaluated based on a sequential combination of assessment tools that facilitate both intuitive and statistical decisiveness among the intra- and inter-class comparisons. In a bid to boost the overall efficiency and accuracy of the identification system, the 'best' categorical algorithms are then incorporated into a hybrid methodology, where the advantageous effects of fusion strategies are considered.
This investigation explores the weighted-sum approach, which, by fusion at the matching-score level, effectively harnesses the complementary strengths of the component algorithms and in doing so highlights the improved performance levels that hybrid implementations can provide. In the process, by first examining previous literature with respect to each other and second by relating the important findings of this paper to previous works, one is also able to meet the primary objective of providing a newcomer with a very insightful understanding of publicly available subspace techniques and their comparable application status within the environment of face recognition.
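Fusion at the matching-score level, as explored here, typically normalizes each matcher's scores to a common range and then combines them with a weighted sum. A minimal sketch, with hypothetical PCA and LDA matcher scores:

```python
def min_max(scores):
    """Rescale raw matcher scores to the range [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_sum(a, b, w):
    """Matching-score-level fusion: w * a + (1 - w) * b, element-wise."""
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

pca = min_max([310, 120, 260])      # hypothetical PCA matcher scores
lda = min_max([0.91, 0.20, 0.55])   # hypothetical LDA matcher scores
fused = weighted_sum(pca, lda, 0.6)
print(fused)
```

The weight w would in practice be tuned on validation data so the fused scores separate genuine and impostor comparisons better than either matcher alone.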
10

Watkins, Elizabeth Anne. "The Polysemia of Recognition: Facial Recognition in Algorithmic Management." Thesis, 2021. https://doi.org/10.7916/d8-6qwc-0t83.

Abstract:
Algorithmic management systems organize many different kinds of work across domains and have increasingly come under academic scrutiny. Under labels including gig work, piecemeal work, and platform labor, these systems have been richly theorized in disciplines including human-computer interaction, sociology, communications, economics, and labor law. When it comes to the relationships between such systems and their workers, current theory frames these interactions on a continuum between organizational control and worker autonomy. This has laid the groundwork for other ways of examining micro-level practices of workers under algorithmic management. As an alternative to the binary of control and autonomy, this dissertation takes its cue from feminist scholars in Science, Technology, and Society (STS) studies. Drawing on frameworks from articulation, repair, and mutual shaping, I examine workers' interpretations and interactions, asking how new subjectivities around identity and community emerge from these entanglements. To shed empirical light on these processes, this dissertation employs a mixed-methods research design examining the introduction of facial recognition into the sociotechnical systems of algorithmic management. Data include 22 in-person interviews with workers in New York City and Toronto, a survey of 100 workers in the United States who have been subjected to facial recognition, and analysis of over 2,800 comments gathered from an online workers' forum over the course of four years.
Facial recognition, like algorithmic management, suffers from a lack of empirical, on-the-ground insights into how workers communicate, negotiate, and strategize around and through it. Interviews with workers reveal that facial recognition evokes polysemia, i.e., a number of distinct yet interrelated interpretations. I find that for some workers, facial recognition means safety and security; to others it means violation of privacy and accusations of fraud. Some are impressed by the "science-fiction"-like capabilities of the system: "it's like living in the future." Others are wary, and science fiction becomes a vehicle to encapsulate their fears: "I'm in the [movie] The Minority Report." For some the technology is hyper-powerful: "It feels like I'm always being watched," yet others decry, "it's an obvious façade." Following the interviews, I build a body of research using empirical methods combined with frameworks drawn from STS and organizational theory to illuminate workers' perceptions and strategies in negotiating their algorithmic managers. I operationalize Julian Orr's studies of storytelling among Xerox technicians to analyze workers' information-sharing practices in online forums, to better understand how gig workers, devices, forums, and algorithmic management systems engage in mutual shaping processes. Analysis reveals that opposing interpretations of facial recognition, rather than dissolving into a consensus of "shared understanding," continue to persist. Rather than pursuing and relying on a shared understanding of their work to maintain relationships, workers under algorithmic management, communicating in online forums about facial recognition, elide consensus. After the forum analysis, I conduct a survey to assess workers' fairness perceptions of facial recognition targeting and verification. The goal of this research is to establish an empirical foundation for determining whether algorithmic fairness perceptions are subject to theories of bounded rationality and decision-making. Finally, for the last two articles, I turn back to the forums to analyze workers' experiences negotiating two other processes with threats or ramifications for safety, privacy, and risk. In one article, I focus on their negotiation of threats from scam attackers and the use of the forum itself as a "shared repertoire" of knowledge.
In the other, I use the forums as evidence to illuminate workers' experiences and meaning-making around algorithmic risk management under COVID-19. In the conclusion, I engage in theory-building to examine how algorithmic management and its attendant processes demand that information-sharing mechanisms serve novel ends buttressing legitimacy and authenticity, in what I call "para-organizational" work: a world of work where membership and legitimacy are liminal and uncertain. Ultimately, this body of research illuminates mutual shaping processes in which workers' practices, identity, and community are entangled with technological artifacts and organizational structures. Algorithmic systems of work, and participants' interpretations of and interactions with related structures and devices, may be creating a world where sharing information is a process wielded not as a mechanism of learning, but as one of belonging.
Стилі APA, Harvard, Vancouver, ISO та ін.
11

Gupta, Shalini 1979. "Novel algorithms for 3D human face recognition." Thesis, 2008. http://hdl.handle.net/2152/29595.

Повний текст джерела
Анотація:
Automated human face recognition is a computer vision problem of considerable practical significance. Existing two dimensional (2D) face recognition techniques perform poorly for faces with uncontrolled poses, lighting and facial expressions. Face recognition technology based on three dimensional (3D) facial models is now emerging. Geometric facial models can be easily corrected for pose variations. They are illumination invariant, and provide structural information about the facial surface. Algorithms for 3D face recognition exist; however, the area is far from being a mature technology. In this dissertation we address a number of open questions in the area of 3D human face recognition. Firstly, we make available to qualified researchers in the field, at no cost, a large Texas 3D Face Recognition Database, which was acquired as a part of this research work. This database contains 1149 2D and 3D images of 118 subjects. We also provide 25 manually located facial fiducial points on each face in this database. Our next contribution is the development of a completely automatic novel 3D face recognition algorithm, which employs discriminatory anthropometric distances between carefully selected local facial features. This algorithm neither uses general purpose pattern recognition approaches, nor does it directly extend 2D face recognition techniques to the 3D domain. Instead, it is based on an understanding of the structurally diverse characteristics of human faces, which we isolate from the scientific discipline of facial anthropometry. We demonstrate the effectiveness and superior performance of the proposed algorithm, relative to existing benchmark 3D face recognition algorithms. A related contribution is the development of highly accurate and reliable 2D+3D algorithms for automatically detecting 10 anthropometric facial fiducial points. While developing these algorithms, we identify unique structural/textural properties associated with the facial fiducial points. 
Furthermore, unlike previous algorithms for detecting facial fiducial points, we systematically evaluate our algorithms against manually located facial fiducial points on a large database of images. Our third contribution is the development of an effective algorithm for computing the structural dissimilarity of 3D facial surfaces, which uses a recently developed image similarity index called the complex-wavelet structural similarity index. This algorithm is unique in that unlike existing approaches, it does not require that the facial surfaces be finely registered before they are compared. Furthermore, it is nearly an order of magnitude more accurate than existing facial surface matching based approaches. Finally, we propose a simple method to combine the two new 3D face recognition algorithms that we developed, resulting in a 3D face recognition algorithm that is competitive with the existing state-of-the-art algorithms.
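The anthropometric-distance idea described above can be sketched in a few lines: pairwise Euclidean distances between facial fiducial points form a feature vector, and a probe face is matched to the gallery identity with the closest vector. This is a minimal illustration on invented data; the point coordinates, subject names, and the nearest-neighbor rule are assumptions for the sketch, not the dissertation's actual feature selection or classifier.

```python
import math

def anthropometric_features(fiducials):
    """Pairwise Euclidean distances between 3D facial fiducial points."""
    pts = list(fiducials)
    return [math.dist(pts[i], pts[j])
            for i in range(len(pts)) for j in range(i + 1, len(pts))]

def match(probe, gallery):
    """Return the gallery identity whose distance-feature vector is closest."""
    def l2(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    probe_f = anthropometric_features(probe)
    return min(gallery,
               key=lambda name: l2(probe_f, anthropometric_features(gallery[name])))

# Toy example: three fiducials (e.g. nose tip and two eye corners) per face.
gallery = {
    "subject_a": [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (1.5, 2.0, 1.0)],
    "subject_b": [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (2.0, 3.0, 1.5)],
}
probe = [(0.1, 0.0, 0.0), (3.1, 0.1, 0.0), (1.6, 2.0, 1.0)]
print(match(probe, gallery))
```

Because inter-point distances are invariant to rotation and translation, such features need no pose normalization, which is one stated advantage of geometric 3D models.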
Стилі APA, Harvard, Vancouver, ISO та ін.
12

Lu, Yun-Jen, and 盧韻仁. "Facial Features Detection and Expression Recognition based on Loopy Belief Propagation Algorithms." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/45945653737477152143.

Повний текст джерела
Анотація:
Master's
國立清華大學
資訊工程學系
95
In this thesis, we propose two graphical models for automatically detecting facial features and estimating optical flow on face images to extract expression flow features. To accomplish these tasks, we apply the Loopy Belief Propagation (LBP) algorithm, a common inference framework for graphical models. In the first part, we learn feature PCA models and geometric relationships to build a graphical model for facial features. In the second part, we build a Markov Random Field (MRF) model for optical flow estimation; the model structure ensures that patches of the neutral image move to the correct corresponding positions on the expression image. The local feature constraint makes the optical flow computation in the feature areas more precise. Finally, we combine these two algorithms with an SVM classifier to develop a facial expression recognition system.
Стилі APA, Harvard, Vancouver, ISO та ін.
13

YAN, YU-CHANG, and 顏毓昌. "A Study of Facial Recognition Using Deep Learning Algorithms and RGBD Images." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4nv63a.

Повний текст джерела
Анотація:
Master's
國立高雄第一科技大學
資訊管理系碩士班
106
Face recognition techniques have been developed for many years. Many applications, such as people identification, access control, and crime detection, have been widely applied in our daily lives. In addition, we can extract different features from facial images to estimate the age and gender of people. These applications can help companies obtain great benefits for different commercial purposes. Recently, machine learning technology has progressed rapidly with the GPU hardware developed by NVIDIA. Deep learning algorithms can perform more efficiently by exploiting GPUs. The iPhone X's facial recognition application has successfully attracted worldwide attention. Therefore, many enterprises began researching ways to improve facial recognition techniques using deep learning models and big data. However, some problems remain in facial recognition: lighting conditions and poor image quality will decrease recognition accuracy. In this study, we construct a new CNN model and build a small-scale facial image database. A Kinect v2 camera was used in our work to collect the RGB and depth images. In the experiments, 4962 images of 30 people were used in the training stage, split 8:2 for training and validation, and 12164 images of 10 people in 8 different environments were used in the test stage. The experimental results show an accuracy rate of 84.46% and a top-3 accuracy rate of 90.13%. In this study, we also implemented some popular CNN models such as AlexNet, GoogLeNet V3 and VGG-16 for comparison. The results show that the proposed method outperformed these CNN models. We also design an algorithm to discriminate between a real 3D face and a 2D photo of a face. 3174 3D depth images from Kinect cameras and 163 photo facial images of 10 people were applied in our experiment. Experimental results show that we can obtain 100% accuracy by computing the entropy of images. 
Finally, experiments with different types of facial image datasets, including RGB, D, and RGBD, are performed. There are 3074 images in each dataset. We divided each dataset into training, validation, and test parts with a ratio of 7:2:1. The accuracy rates are 96.75%, 99.35%, and 100%, respectively. In addition, 6 people in the dataset were invited to a real-time test in 4 different environments, and we obtained average accuracy rates of 79.51%, 88.01%, and 82.66%, respectively.
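The entropy-based 2D/3D discrimination described above plausibly rests on a simple observation: a flat photo yields a near-constant depth map (low Shannon entropy), while a real face has varied relief (higher entropy). The sketch below computes Shannon entropy over quantized depth values; the toy depth lists and the threshold-free comparison are illustrative assumptions, not the thesis's exact algorithm.

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (bits) of a flat list of quantized pixel/depth values."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A flat photo held in front of a depth camera yields near-constant depth
# values, while a real 3D face produces a wide spread of depths.
photo_depth = [100] * 60 + [101] * 4        # almost-uniform plane
real_depth = list(range(80, 144))           # varied facial relief
print(shannon_entropy(photo_depth) < shannon_entropy(real_depth))
```

A liveness check would then simply threshold the entropy of the captured depth patch; the threshold itself would have to be calibrated on real data.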
Стилі APA, Harvard, Vancouver, ISO та ін.
14

Stacy, Emily Margaret. "Human and algorithm facial recognition performance : face in a crowd." Thesis, 2017. http://hdl.handle.net/10453/116916.

Повний текст джерела
Анотація:
University of Technology Sydney. Faculty of Science.
Developing a method of identifying persons of interest (POIs) in uncontrolled environments, accurately and rapidly, is paramount in the 21st century. One such technique is to use automated facial recognition systems (FRS). To date, FRS have mainly been tested in laboratory (controlled) conditions; however, there is little publicly available research to indicate the performance levels, and therefore the feasibility, of using FRS in public, uncontrolled environments, known as face-in-a-crowd (FIAC). This research project was hence directed at determining the feasibility of FIAC technology in uncontrolled, operational environments with the aim of being able to identify POIs. This was done by processing imagery obtained from a range of environments and camera technologies through one of the latest FR algorithms to evaluate the current level of FIAC performance. The hypothesis was that FR performance with higher-resolution imagery would produce better FR results and that FIAC will be feasible in an operational environment when certain variables are controlled, such as camera type (resolution), lighting, and the number of people in the field of view. Key findings from this research revealed that although facial recognition algorithms for FIAC applications have shown improvement over the past decade, the feasibility of their deployment into uncontrolled environments remains unclear. The results support previous literature that the quality of the imagery being processed largely affects FRS performance, as imagery from high-resolution cameras produced better results than imagery from CCTV cameras. The results suggest the current FR technology can potentially be viable in a FIAC scenario if the operational environment can be modified to become better suited for optimal image acquisition. However, in areas where the environmental constraints were less controlled, the performance levels decreased significantly. 
The essential conclusion is that the data should be processed with new versions of the algorithms that can track subjects through the environment, which is expected to vastly increase performance, and that an additional trial should potentially be run in alternate locations to gain a greater understanding of the feasibility of FIAC generically.
Стилі APA, Harvard, Vancouver, ISO та ін.
15

Michalski, Dana Jaclyn. "The impact of age-related variables on facial comparisons with images of children: algorithm and practitioner performance." Thesis, 2018. http://hdl.handle.net/2440/111184.

Повний текст джерела
Анотація:
Determining the identity of children is critical for many national security agencies, for example, to aid in the fight against child exploitation, trafficking, and radicalised minors, as well as for passport control and visa issuance purposes. Facial comparison is one method that may be used to achieve this. Facial comparison can be conducted using an algorithm (within a facial recognition system), manually by a facial comparison practitioner, or by a combination of the two. Much of the previous research examining the facial comparison performance of both algorithms and practitioners has been conducted using images of adults. Due to the substantial amount of age-related facial growth that occurs in childhood, compared to adulthood, it is likely that performance will be poorer with images of children. The overarching aim of the research, therefore, was to determine the impact of age-related variables, namely chronological age and age variation (the age difference between images), on the facial comparison performance of algorithms and practitioners with images of children. Study 1 involved consultation with national security agencies and algorithm vendors to identify the key requirements to examine in this thesis. After reviewing the literature to identify research gaps, five empirical studies were conducted. To ensure the studies were as operationally relevant as possible, a large database containing several million controlled images of children and adults was sourced, and five state-of-the-art facial recognition algorithms were employed. In addition, facial comparison practitioners from a government agency participated in the practitioner studies. Study 2A compared algorithm performance with images of children to performance with images of adults. Study 2B compared practitioner performance with images of children to performance with images of adults. 
Study 3A examined algorithm performance with images of children at each chronological age in childhood (0–17 years) and age variations ranging from 0–10 years apart. Study 3B examined practitioner performance on the same age-related variables examined in Study 3A. Study 4 demonstrated how the data collected in Study 3A and 3B could be used to answer agency specific questions. This thesis concludes with a series of recommendations for both the algorithm and practitioner domains, as well as future research directions designed to improve knowledge and performance regarding facial comparisons with images of children.
Thesis (Ph.D.) -- University of Adelaide, School of Psychology, 2018
Стилі APA, Harvard, Vancouver, ISO та ін.
16

Brito, Paulo. "Facial analysis with depth maps and deep learning." Doctoral thesis, 2018. http://hdl.handle.net/10400.2/7787.

Повний текст джерела
Анотація:
Doctoral thesis in Web Science and Technology, in association with the Universidade de Trás-os-Montes e Alto Douro, presented to the Universidade Aberta
A recolha e análise sequencial de dados multimodais do rosto humano é um problema importante em visão por computador, com aplicações variadas na análise e monitorização médica, entretenimento e segurança. No entanto, devido à natureza do problema, há uma falta de sistemas acessíveis e fáceis de usar, em tempo real, com capacidade de anotações, análise 3d, capacidade de reanalisar e com uma velocidade capaz de detetar padrões faciais em ambientes de trabalho. No âmbito de um esforço contínuo, para desenvolver ferramentas de apoio à monitorização e avaliação de emoções/sinais em ambiente de trabalho, será realizada uma investigação relativa à aplicabilidade de uma abordagem de análise facial para mapear e avaliar os padrões faciais humanos. O objetivo consiste em investigar um conjunto de sistemas e técnicas que possibilitem responder à questão de como usar dados de sensores multimodais para obter um sistema de classificação para identificar padrões faciais. Com isso em mente, foi planeado desenvolver ferramentas para implementar um sistema em tempo real de forma a reconhecer padrões faciais. O desafio é interpretar esses dados de sensores multimodais para classificá-los com algoritmos de aprendizagem profunda e cumprir os seguintes requisitos: capacidade de anotações, análise 3d e capacidade de reanalisar. Além disso, o sistema tem que ser capaze de melhorar continuamente o resultado do modelo de classificação para melhorar e avaliar diferentes padrões do rosto humano. A FACE ANALYSYS, uma ferramenta desenvolvida no contexto desta tese de doutoramento, será complementada por várias aplicações para investigar as relações de vários dados de sensores com estados emocionais/sinais. Este trabalho é útil para desenvolver um sistema de análise adequado para a perceção de grandes quantidades de dados comportamentais.
Collecting and analyzing in real time multimodal sensor data of a human face is an important problem in computer vision, with applications in medical and monitoring analysis, entertainment, and security. However, due to the exigent nature of the problem, there is a lack of affordable and easy to use systems, with real time annotations capability, 3d analysis, replay capability and with a frame speed capable of detecting facial patterns in working behavior environments. In the context of an ongoing effort to develop tools to support the monitoring and evaluation of human affective state in working environments, this research will investigate the applicability of a facial analysis approach to map and evaluate human facial patterns. Our objective consists in investigating a set of systems and techniques that make it possible to answer the question regarding how to use multimodal sensor data to obtain a classification system in order to identify facial patterns. With that in mind, it will be developed tools to implement a real-time system in a way that it will be able to recognize facial patterns from 3d data. The challenge is to interpret this multi-modal sensor data to classify it with deep learning algorithms and fulfill the follow requirements: annotations capability, 3d analysis and replay capability. In addition, the system will be able to enhance continuously the output result of the system with a training process in order to improve and evaluate different patterns of the human face. FACE ANALYSYS is a tool developed in the context of this doctoral thesis, in order to research the relations of various sensor data with human facial affective state. This work is useful to develop an appropriate visualization system for better insight of a large amount of behavioral data.
Стилі APA, Harvard, Vancouver, ISO та ін.
17

Yang, Chien-wei, and 楊建葦. "Facial Expression Recognition based on AdaBoost Algorithm." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/73458390041204905263.

Повний текст джерела
Анотація:
Master's
國立中正大學
電機工程所
96
Recently, facial expression recognition has formed one of the main interests of many researchers. We develop a facial expression recognition system in this dissertation, which extracts the facial features from the input image automatically and recognizes the facial expressions via classifiers trained with the well-known AdaBoost algorithm. The facial expressions to be recognized include happy, angry, sad, disgust, surprise, fear, and neutral. We propose an AdaBoost-based facial expression recognition system with two types of sample data: single image frames and image sequences. For the former, an image texture code is extracted at each position within a face; for the latter, the optical flow sequence at each position is calculated. Both of these image features are adopted separately as the input of the classifiers to be designed. The AdaBoost algorithm is used to iteratively select the best weak classifier, i.e. the position in a face with the most discrimination, at each iteration and combine them to form a strong classifier which is capable of increasing the recognition rate. Experimental results are presented, which show that the more weak classifiers are selected by the AdaBoost algorithm, the better the recognition rate. Besides, as we apply an improved method which avoids selecting repeated weak classifiers, the performance of the recognition system is also improved.
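The AdaBoost loop the abstract describes, including the refinement of never reselecting the same weak classifier, can be sketched with decision stumps standing in for the per-position texture features. Everything here (the toy two-feature data, the stump form, the round count) is an invented illustration, not the thesis's implementation.

```python
import math

def stump_predict(x, feat, thresh, polarity):
    return polarity if x[feat] >= thresh else -polarity

def adaboost(X, y, rounds, avoid_repeats=True):
    """AdaBoost with threshold stumps; optionally never reuse a stump."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble, used = [], set()
    for _ in range(rounds):
        best = None
        for feat in range(len(X[0])):
            for thresh in sorted({x[feat] for x in X}):
                for pol in (1, -1):
                    if avoid_repeats and (feat, thresh, pol) in used:
                        continue
                    err = sum(wi for xi, yi, wi in zip(X, y, w)
                              if stump_predict(xi, feat, thresh, pol) != yi)
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, pol)
        err, feat, thresh, pol = best
        err = max(err, 1e-10)                     # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)   # weak-classifier weight
        used.add((feat, thresh, pol))
        ensemble.append((alpha, feat, thresh, pol))
        # Re-weight: boost the samples this stump got wrong.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, feat, thresh, pol))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    def strong(x):
        s = sum(a * stump_predict(x, f, t, p) for a, f, t, p in ensemble)
        return 1 if s >= 0 else -1
    return strong

# Toy 2-feature "texture code" data: label +1 = happy, -1 = neutral.
X = [(0.9, 0.2), (0.8, 0.3), (0.7, 0.9), (0.2, 0.8), (0.1, 0.7), (0.3, 0.1)]
y = [1, 1, 1, -1, -1, -1]
clf = adaboost(X, y, rounds=5)
print([clf(x) for x in X])
```

The `used` set is the sketch's analogue of the abstract's "avoid selecting repeated weak classifiers" refinement: each round must pick a stump not chosen before.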
Стилі APA, Harvard, Vancouver, ISO та ін.
18

Hong, Jung-Wei, and 洪濬尉. "A Fast Learning Algorithm for Robotic Facial Expression Recognition." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/27350443841945007606.

Повний текст джерела
Анотація:
Master's
國立交通大學
電機與控制工程系所
95
A robotic facial expression recognition system very often misclassifies data from a new face because different people may show their expressions in different ways. This thesis aims to study a facial expression recognition system that can learn new facial data and facilitate a robot to accommodate itself to various persons. The main idea of the proposed method is to adjust the parameters of the hyperplane of a support vector machine (SVM) for classifying new facial data. The concept of support vector pursuit learning (SVPL) is adopted to retrain the hyperplane in the Gaussian kernel space. To expedite the training procedure, we propose to retrain the new SVM classifier using only the samples classified incorrectly and the critical sets (CSs) from previous samples. After adjusting the hyperplane parameters, the new classifier not only recognizes new facial data but also keeps acceptable performance in classifying previous data. Further, to obtain reliable facial features, we adopt Gabor wavelets to develop a feature extraction method in the system. The proposed algorithms have been successfully implemented on an entertainment robot platform. On-line experimental results show that the proposed system learns new facial data with a recognition rate of 81.3%, increased from an original recognition rate of 58%. The proposed method also keeps a satisfactory recognition rate on old facial samples of 78.7%.
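The retraining idea, refitting the decision boundary using only newly misclassified samples plus a small "critical set" of earlier samples near the boundary, can be sketched with a perceptron standing in for the SVM hyperplane. The thesis actually uses an SVM with SVPL in a Gaussian kernel space; the perceptron, the 2D toy data, and the margin-based critical-set proxy here are assumptions made for brevity.

```python
def train_perceptron(samples, w=None, epochs=50):
    """Simple perceptron stand-in for the SVM hyperplane (w includes a bias term)."""
    w = list(w) if w else [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), label in samples:
            if label * (w[0] * x1 + w[1] * x2 + w[2]) <= 0:
                w[0] += label * x1
                w[1] += label * x2
                w[2] += label
    return w

def predict(w, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else -1

def incremental_update(w, old_samples, new_samples):
    """Retrain only on misclassified new samples plus a 'critical set' of old
    samples lying near the decision boundary (a rough support-vector proxy)."""
    wrong = [(x, y) for x, y in new_samples if predict(w, x) != y]
    margin = lambda x: abs(w[0] * x[0] + w[1] * x[1] + w[2])
    critical = sorted(old_samples, key=lambda s: margin(s[0]))[:4]
    return train_perceptron(wrong + critical, w=w)

old = [((2.0, 2.0), 1), ((3.0, 1.0), 1), ((-2.0, -1.0), -1), ((-1.0, -3.0), -1)]
w = train_perceptron(old)
new_face = [((0.5, 2.5), 1), ((-0.5, -2.5), -1), ((1.0, -0.2), -1)]
w = incremental_update(w, old, new_face)
print(all(predict(w, x) == y for x, y in old + new_face))
```

Warm-starting from the old weight vector and feeding in only errors plus boundary samples is the point of the speed-up: the update touches far fewer samples than full retraining.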
Стилі APA, Harvard, Vancouver, ISO та ін.
19

Jeng, Kuang-Yo, and 鄭匡祐. "Optimization of Facial Expression Recognition Classifier by PSO Algorithm." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/98434283982039184508.

Повний текст джерела
Анотація:
Master's
朝陽科技大學
資訊與通訊系碩士班
99
This paper proposes an image recognition method for distinguishing eight facial expressions. When the eyes and the mouth of a person are open or closed, there are eight possible facial expressions to be distinguished. The Active Basis Model is used to construct basis templates and deformation templates for the eyes and mouth. The characteristics of the basis templates are used as features for training Support Vector Machines. The eye and mouth areas of the test images are extracted and used to generate the basis and deformation templates. After the three Support Vector Machines are trained, PSO is applied to search for the Support Vector Machine parameters that optimize classification accuracy; the eight facial expressions can then be classified according to the statuses of the two eyes and the mouth. The accuracy of facial expression recognition is 90.33% when testing 100 facial images. The results show the proposed method for facial expression recognition is very effective.
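PSO-based hyperparameter search of this kind can be sketched as follows. Since training a real SVM is out of scope for a short example, a synthetic quadratic error surface stands in for cross-validation error over (log2 C, log2 gamma); the swarm parameters, bounds, and objective are all illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)

def error_surface(c, g):
    """Stand-in for cross-validation error as a function of (log2 C, log2 gamma);
    a real search would evaluate an SVM's validation error here."""
    return (c - 3.0) ** 2 + (g + 5.0) ** 2   # pretend optimum at C=2^3, gamma=2^-5

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # per-particle best positions
    pbest_val = [objective(*p) for p in pos]
    g_idx = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g_idx][:], pbest_val[g_idx]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(*pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_err = pso(error_surface, bounds=[(-5, 15), (-15, 3)])
print(best, best_err)
```

Swapping `error_surface` for a function that trains an SVM with the candidate (C, gamma) and returns its validation error turns this sketch into the kind of parameter search the abstract describes.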
Стилі APA, Harvard, Vancouver, ISO та ін.
20

Lin, Yong-Fong, and 林泳鋒. "A Robust Active Appearance Models Search Algorithm for Facial Expression Recognition." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/62228928210754309468.

Повний текст джерела
Анотація:
Master's
國立中央大學
通訊工程研究所
96
Active Appearance Models (AAMs) are an image representation method for non-rigid visual objects with both shape and texture variations. AAMs are a model-based representation that uses a mean vector and a linear combination of a set of variation modes to represent a non-rigid object. By adjusting the coefficients of the linear combination of the variation modes (the model parameters), we can synthesize any non-rigid object. With this, we can express facial expressions using a model-based approach. For facial expression recognition, an AAMs search algorithm is required to find the optimal model parameters such that the synthesized expression is similar to the facial expression in images. In this paper, we propose a novel iterative AAMs search algorithm. It minimizes the error which measures the difference between a model and a test image. We adopt only the magnitude of the predicted change of the parameters from the traditional search algorithm; the direction of the change of the parameters is decided by estimating the gradient of the error function at each iteration. Moreover, we prevent the search from falling into local minima of the error function by perturbing the searched parameters at each iteration. Our experiments show that the proposed robust AAMs search algorithm reduced the location error of shape by 36.41% and the intensity error of texture by 30.82% relative to the traditional AAMs search algorithm.
Стилі APA, Harvard, Vancouver, ISO та ін.
21

Sigalingging, Xanno Kharis, and Xanno Kharis Sigalingging. "A Human-Computer Interface Classification Algorithm using EMG-Based Facial Gesture Recognition." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/71232114838745205650.

Повний текст джерела
Анотація:
Master's
國立臺灣科技大學
電機工程系
105
An assistive technology (AT) in the form of a novel human interface device (HID) using an electromyography (EMG)-based gesture recognition scheme is proposed in this thesis. The system utilizes the MYO device from Thalmic Labs with custom software as the signal capturing tool. The electrodes of the EMG are placed at certain positions on the face, corresponding to the locations of the major muscles that govern certain facial gestures. The signals are then processed using a Hidden Markov Model (HMM) algorithm to classify the gesture, and based on the gesture a custom command can be assigned. Depending on the gesture made, there are countless possible commands. The accuracy of the system is 94.4% for 5-gesture classification.
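HMM-based gesture classification of the kind described typically scores an observation sequence under one trained model per gesture and picks the most likely. The sketch below implements the discrete forward algorithm with two toy gesture models over a 3-symbol quantized EMG alphabet; all model parameters and gesture names are invented for illustration, not taken from the thesis.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """log P(obs | HMM) via the forward algorithm (discrete observations)."""
    n_states = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans[s][s2] for s in range(n_states)) * emit[s2][o]
                 for s2 in range(n_states)]
    return math.log(sum(alpha))

# Two toy 2-state gesture models over a quantized EMG alphabet
# (0 = rest, 1 = weak activity, 2 = strong activity). Parameters are illustrative:
# "smile" tends to emit strong activity, "blink" mostly rest.
models = {
    "smile": ([0.5, 0.5],
              [[0.8, 0.2], [0.2, 0.8]],
              [[0.1, 0.3, 0.6], [0.2, 0.5, 0.3]]),
    "blink": ([0.5, 0.5],
              [[0.8, 0.2], [0.2, 0.8]],
              [[0.7, 0.2, 0.1], [0.5, 0.4, 0.1]]),
}

def classify(obs):
    """Pick the gesture whose HMM assigns the observation the highest likelihood."""
    return max(models, key=lambda m: forward_log_likelihood(obs, *models[m]))

print(classify([2, 2, 1, 2, 2]))   # a burst of strong activity
print(classify([0, 0, 1, 0, 0]))   # mostly rest
```

In a real pipeline each model's parameters would be fitted to labeled EMG sequences (e.g. with Baum-Welch), and for longer sequences the forward pass should be run in log space to avoid underflow.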
Стилі APA, Harvard, Vancouver, ISO та ін.
22

Hu, Jheng-Hong, and 胡正宏. "Design and Implement of Facial Features Detection and Facial Expression Recognition Algorithm for Baby Watch and Care System." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/49257342967572489050.

Повний текст джерела
Анотація:
Master's
國立中興大學
電機工程學系所
97
Nowadays, because of the low birth rate and busy working parents, the security of babies is increasingly demanded. The development of intelligent baby watch and care technology can therefore promote baby security in indoor and home environments. In this study, we determine whether a baby is in a safe condition or not from the baby's facial expressions. For facial expression recognition, there are three conditions: deadpan, smiling, and crying. First, we extract the baby's face from the image and track the face. We then detect fourteen features on the face: eight features for the eyes, four for the mouth, and two for the eyebrows. The distances between the features are calculated and used as input values to a neural network system, so the scheme can recognize baby facial expressions. In this thesis, the feature detection part solves the problem of color temperature in different environments: light with a red, yellow, or other color temperature will not cause detection mistakes. The detection can be used not only for Asian babies, but also for babies of other ethnicities; this system is thus capable of being used for Caucasian or African American babies. Besides, we discuss two experimental feature detection methods in this study: one is the Adaptive Color Analysis Detection Method, and the other is the Fast Ellipse Template Edge Detection Method. Comparing these two methods with the Eye Filter Detection Method in this study, we examine the accuracy and operation speed of the detection results and analyze their advantages and drawbacks.
Стилі APA, Harvard, Vancouver, ISO та ін.
23

Wang, Yun Yao, and 王韻堯. "Development of a facial image processing algorithm with its application into emotional recognition." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/26439000077687682089.

Повний текст джерела
Анотація:
Master's
長庚大學
電子工程學系
99
Facial expression recognition has become a popular subject because facial expressions are a form of non-verbal communication between humans and computers. We propose a two-stage recognition algorithm. It is separated into three parts: pre-processing, screening for the first stage, and recognition for the second stage. We use a real-time object detection algorithm to extract the facial image in pre-processing. In the screening part, we calculate vectors from the images and compare them with the database using long Haar-like filters. In the last part, we extract information from the mouth for positive emotions and from the eyebrows for negative emotions. Finally, we use a Linear Discriminant Function (LDF) to recognize the emotion from this information. The final recognition rates for happy and sad are 86.21%, and for angry 75.86%.
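A linear discriminant function of the general kind mentioned can be sketched as a nearest-class-mean rule written as a linear score g(x) = m·x - 0.5·m·m per class, which is equivalent to minimum Euclidean distance to the class mean. The mouth/eyebrow measurements and class setup below are invented for illustration and do not reproduce the thesis's features.

```python
def class_means(data):
    """Per-class mean feature vectors from labeled training data."""
    means = {}
    for label, feats in data.items():
        dim = len(feats[0])
        means[label] = [sum(f[d] for f in feats) / len(feats) for d in range(dim)]
    return means

def ldf_score(mean, x):
    # Linear discriminant g(x) = m.x - 0.5*m.m (nearest-mean rule written linearly)
    dot = sum(m * xi for m, xi in zip(mean, x))
    norm = sum(m * m for m in mean)
    return dot - 0.5 * norm

def classify(means, x):
    return max(means, key=lambda label: ldf_score(means[label], x))

# Toy measurements: (mouth-corner lift, eyebrow drop), illustrative values only.
train = {
    "happy": [(0.8, 0.1), (0.9, 0.2), (0.7, 0.0)],
    "sad":   [(0.1, 0.6), (0.2, 0.7), (0.0, 0.8)],
    "angry": [(0.3, 0.9), (0.2, 1.0), (0.4, 0.9)],
}
means = class_means(train)
print(classify(means, (0.85, 0.1)))
```

This equal-covariance special case keeps the decision boundaries linear; a full LDF would estimate (and share) a covariance matrix across classes.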
Стилі APA, Harvard, Vancouver, ISO та ін.
24

"A matching algorithm for facial memory recall in forensic applications." 2000. http://library.cuhk.edu.hk/record=b5890306.

Повний текст джерела
Анотація:
by Lau Kwok Kin.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2000.
Includes bibliographical references (leaves 82-87).
Abstracts in English and Chinese.
List of Figures --- p.vi
List of Tables --- p.vii
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Objective of This Thesis --- p.3
Chapter 1.2 --- Organization of This Thesis --- p.3
Chapter 2 --- Literature Review --- p.4
Chapter 2.1 --- Facial Memory Recall --- p.4
Chapter 2.2 --- Facial Recognition --- p.6
Chapter 2.2.1 --- Earlier Approaches --- p.7
Chapter 2.2.2 --- Feature and Template Matching --- p.8
Chapter 2.2.3 --- Neural Network --- p.10
Chapter 2.2.4 --- Statistical Approach --- p.14
Chapter 3 --- A Forensic Application of Facial Recall --- p.19
Chapter 3.1 --- Motivation --- p.20
Chapter 3.2 --- AICAMS-FIT --- p.20
Chapter 3.2.1 --- The Facial Component Library --- p.21
Chapter 3.2.2 --- The Feature Selection Module --- p.24
Chapter 3.2.3 --- The Facial Construction Module --- p.24
Chapter 3.3 --- The Interaction Between The Three Main Components --- p.29
Chapter 3.4 --- Summary --- p.30
Chapter 4 --- Sketch-to-Sketch Matching --- p.31
Chapter 4.1 --- The Representation of A Composite Face --- p.31
Chapter 4.2 --- The Component-based Encoding Scheme --- p.32
Chapter 4.2.1 --- Local Feature Analysis --- p.34
Chapter 4.2.2 --- Similarity Matrix --- p.36
Chapter 4.3 --- Experimental Results and Evaluation --- p.41
Chapter 4.4 --- Shortcomings of the encoding scheme --- p.44
Chapter 4.4.1 --- Size Variation --- p.45
Chapter 4.5 --- Summary --- p.51
Chapter 5 --- Sketch-to-Photo/Photo-to-Sketch Matching --- p.52
Chapter 5.1 --- Principal Component Analysis --- p.53
Chapter 5.2 --- Experimental Setup --- p.56
Chapter 5.3 --- Experimental Results --- p.59
Chapter 5.3.1 --- Sketch-to-Photo Matching --- p.59
Chapter 5.3.2 --- Photo-to-Sketch Matching --- p.62
Chapter 5.4 --- Summary --- p.66
Chapter 6 --- Future Work --- p.67
Chapter 7 --- Conclusions --- p.70
Chapter A --- Image Library I --- p.72
Chapter A.1 --- The Database for Searching --- p.72
Chapter A.2 --- The Database for Testing --- p.74
Chapter B --- Image Library II --- p.75
Chapter B.1 --- The Photographic Database --- p.75
Chapter B.2 --- The Sketch Database --- p.77
Chapter C --- The Eigenfaces --- p.78
Chapter C.1 --- Eigenfaces of Photographic Database (N = 20) --- p.78
Chapter C.2 --- Eigenfaces of Photographic Database (N = 100) --- p.79
Chapter C.3 --- The Eigenfaces of Sketch Database --- p.81
Bibliography --- p.82
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Candra, Henry. "Emotion recognition using facial expression and electroencephalography features with support vector machine classifier." Thesis, 2017. http://hdl.handle.net/10453/116427.

Повний текст джерела
Анотація:
University of Technology Sydney. Faculty of Engineering and Information Technology.
Recognizing emotions from facial expressions and electroencephalography (EEG) emotion signals are complicated tasks that require substantial issues to be solved in order to achieve higher classification performance: facial expression recognition has to deal with features, feature dimensionality, and classification processing time, while EEG emotion recognition is concerned with features, the number of channels and sub-band frequencies, and the non-stationary behaviour of EEG signals. This thesis addresses the aforementioned challenges. First, a feature for facial expression recognition using a combination of the Viola-Jones algorithm and an improved Histogram of Oriented Gradients (HOG) descriptor, termed Edge-HOG or E–HOG, is proposed, which has the advantage of insensitivity to lighting conditions. The issues of dimensionality and classification processing time were resolved using a combination of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which successfully reduced both the dimension and the classification processing time, resulting in a new low-dimensional feature called Reduced E–HOG (RED E–HOG). In the case of EEG emotion recognition, a method to recognize 4 discrete emotions from the arousal-valence dimensional plane using wavelet energy and entropy features was developed. The effects of EEG channel and subband selection were also addressed, reducing the channels from 32 to 18 and the subbands from 5 to 3. To deal with the non-stationary behaviour of EEG signals, an Optimal Window Selection (OWS) method was proposed as feature-agnostic pre-processing. The main objective of OWS is window segmentation with a varying window, which was applied to 7 various features to improve the classification results for 4 dimensional-plane emotions, namely arousal, valence, dominance, and liking, distinguishing between the high and low states of the aforementioned emotions. 
The resulting accuracy improvement makes OWS a potential solution to the non-stationary behaviour of EEG signals in emotion recognition, and the OWS results suggest that EEG emotions may be best localized in 4–12 second time segments. In addition, a feature concatenating wavelet entropy with averaged wavelet approximation coefficients is developed for EEG emotion recognition. An SVM classifier trained on this feature consistently yields higher classification results than several alternative features, such as the simple average, the Fast Fourier Transform (FFT), and wavelet energy. In all experiments, classification is performed with an optimized SVM using a Radial Basis Function (RBF) kernel. The RBF kernel parameters are optimized with a particle-swarm ensemble clustering algorithm called Ensemble Rapid Centroid Estimation (ERCE), which estimates the number of clusters directly from the data using swarm intelligence and ensemble aggregation. The SVM is then trained with the optimized RBF kernel parameters and the Sequential Minimal Optimization (SMO) algorithm.
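The reduction-and-classification chain described above can be sketched with scikit-learn: PCA followed by LDA to shrink the feature dimension, then an SVM with an RBF kernel. This is a minimal illustration, not the thesis pipeline: synthetic random vectors stand in for the E-HOG descriptors, and the fixed `C`/`gamma` values replace the ERCE-based kernel optimization used in the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for E-HOG descriptors: 4 expression classes,
# 40 samples each, 512-dimensional features with class-dependent means.
n_classes, n_per_class, n_dims = 4, 40, 512
X = np.vstack([rng.normal(loc=c, scale=3.0, size=(n_per_class, n_dims))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# PCA removes redundant dimensions, then LDA projects onto at most
# (n_classes - 1) discriminative axes; the SVM uses an RBF kernel whose
# C/gamma would, in the thesis, come from ERCE-based optimization.
clf = make_pipeline(
    PCA(n_components=32),
    LinearDiscriminantAnalysis(n_components=n_classes - 1),
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

On such well-separated synthetic classes the pipeline fits almost perfectly; with real descriptors the kernel parameters would need the kind of tuning the thesis delegates to ERCE.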
APA, Harvard, Vancouver, ISO, and other styles
26

Tang, Chia-Wei, and 湯家瑋. "A Research of Training Neural Network Based on Particle Swarm Optimization Algorithm for Facial Expression Recognition." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/37490062119094195968.

Full text of the source
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Department of Electronic Engineering
98
Facial expression plays a critical role in daily life, and in human-machine interaction the facial expression recognition system has become an important research issue in recent years; human-computer and human-robot interfaces are both important applications. Facial expressions carry important messages in everyday communication, so facial expression recognition has attracted a large number of researchers. The proposed system consists of three major parts: face detection, feature extraction, and expression recognition. In the face detection stage, the facial area is first located by skin-color detection, and the regions of the eyebrows, mouth, and eyes are then identified using facial geometry proportions. In the feature extraction stage, each region is processed in two ways: first, the distinct contours are identified via Sobel edge detection and the image is binarized; second, the gray values in the region are sorted and the image is again binarized. The two binary images are then combined with an AND operation, feature points are marked on the eyebrows, mouth, and eyes, and feature values are computed from the distances between specific points. For expression recognition, a neural network is used whose weights are trained with back-propagation learning and particle swarm optimization (PSO). To distinguish the expressions happy, sad, surprise, anger, fear, disgust, and neutral, the Japanese Female Facial Expression (JAFFE) database is used. The experimental results show that the proposed approach reaches a recognition rate of 89%.
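The PSO training step can be illustrated with a minimal NumPy sketch in which a particle swarm searches the weight space of a tiny one-hidden-layer network. The data, network size, and PSO coefficients below are illustrative stand-ins, not the thesis's actual configuration (which combines PSO with back-propagation on JAFFE features).

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny synthetic two-class task standing in for the JAFFE feature
# distances (the thesis distinguishes seven expressions).
X = np.vstack([rng.normal(-1, 0.5, (30, 2)), rng.normal(1, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

n_hidden = 4
n_weights = 2 * n_hidden + n_hidden + n_hidden + 1  # W1, b1, W2, b2 = 17

def loss(w):
    """Mean squared error of a 2-4-1 tanh/sigmoid network with flat weights w."""
    W1, b1 = w[:8].reshape(2, n_hidden), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

# Standard PSO update: velocity mixes inertia, a pull toward each
# particle's personal best, and a pull toward the swarm's global best.
n_particles, n_iters = 20, 60
pos = rng.normal(0, 1, (n_particles, n_weights))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
initial_best = pbest_val.min()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_weights))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

final_best = pbest_val.min()
print(f"MSE: {initial_best:.4f} -> {final_best:.4f}")
```

Because personal bests are only replaced when the loss improves, the best MSE is non-increasing over iterations; in practice PSO is often paired with a gradient method, as in the thesis, to refine the swarm's result.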
APA, Harvard, Vancouver, ISO, and other styles
27

Yu, Sheng-Min, and 游聖民. "Design and Implementation of Facial Expression Recognition and Foreign Object Detection Algorithm for Baby Watch and Care System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/83502166059607439703.

Full text of the source
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
98
This study presents a digital intelligent baby-watch-and-care system that recognizes a baby's expressions and detects external objects. The system alerts the watcher when it detects something around the mouth and nose or a vomiting condition, and it also judges from facial expressions whether the baby is in a safe state; the intelligent video system can thus replace manual monitoring and reduce the watcher's burden. The system comprises two subsystems: facial expression recognition and external object detection. Facial expression recognition covers three states: deadpan, smiling, and crying. The baby's facial features are first extracted from the image; the distances between features are computed and used as the input values of a neural network, which then recognizes the baby's facial expression. External object detection focuses on vomiting and on objects around the mouth and nose; to this end, the color change near the mouth is observed by comparing the current and previous frames of the dynamic video sequence. The experimental results show that the algorithm detects the eye features accurately: eye feature detection reaches 88% accuracy with a processing time of 45 ms, and facial expression recognition is about 80% accurate, using a quad-core (2.66 GHz) computer with C code. Finally, following the principle of HW/SW co-design, the baby-watch-and-care system is implemented on an embedded platform composed of an ARM926EJ-S CPU and a Xilinx FPGA. The execution time of each module of the algorithm is profiled, and the module with the greatest computational complexity is chosen for hardware realization.
The HW/SW co-design processes 1.86 frames per second, while a pure software design on the ARM CPU achieves 3.03 frames per second, with the ARM CPU operating at 266 MHz and the system clock at 50 MHz.
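The feature-distance-to-neural-network pipeline described above can be sketched as follows. The feature points, network weights, and hidden-layer size are hypothetical placeholders for illustration; the thesis trains the actual weights on labelled baby images.

```python
import numpy as np

rng = np.random.default_rng(2)
LABELS = ["deadpan", "smiling", "crying"]

def feature_distances(points):
    """Pairwise distances between facial feature points (brows, eyes,
    mouth corners), used as the input vector to the classifier."""
    n = len(points)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return np.array([np.linalg.norm(points[i] - points[j]) for i, j in pairs])

def classify(distances, W1, b1, W2, b2):
    """One-hidden-layer network; returns the highest-scoring expression."""
    h = np.tanh(distances @ W1 + b1)
    scores = h @ W2 + b2
    return LABELS[int(np.argmax(scores))]

# Six hypothetical feature points (two brows, two eyes, two mouth
# corners) yield 6*5/2 = 15 pairwise distances.
points = rng.uniform(0, 100, (6, 2))
d = feature_distances(points)

# Placeholder (untrained) weights, just to exercise the forward pass.
W1, b1 = rng.normal(0, 0.1, (15, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.1, (8, 3)), np.zeros(3)
print(classify(d, W1, b1, W2, b2))
```

Using distances rather than raw coordinates makes the input invariant to translation of the face in the frame, which is presumably why the thesis feeds distances, not point positions, to the network.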
APA, Harvard, Vancouver, ISO, and other styles
28

Sales, Caio César Galdino. "Impactos sociais do reconhecimento facial: Privacidade e vigilância." Master's thesis, 2021. http://hdl.handle.net/10071/24838.

Full text of the source
Abstract:
The integration of technological devices promoted by cyberculture has transformed the everyday spectrum and brought the digital sphere into the universe of social interactions, making online and offline contexts almost indistinguishable. The development of information technologies represents a technical advance; however, the use of facial recognition techniques under commercial logic impacts the social context, as it conflicts with traditional privacy structures and with the consent-based mechanisms that regulate the processing of biometric information in the digital universe. At the same time, the sophisticated surveillance techniques inherent in modern societies make facial recognition an ally of security forces, enabling intrusive surveillance practices in which algorithmic bias and the automation of criminal processes raise concerns that affect different social groups disproportionately. Against this background, a critical reflection is proposed on technological mediation in the contexts of privacy and security, addressing the multiple harms resulting from the decontextualized processing of personal information and the consequences of this process for the creation of subjectivities in the digital sphere. This framework suggests the influence of racial issues, linked to historical processes, that shape surveillance practices permeated by hierarchical stereotypes, which are in turn transferred to the systemic logic of algorithmic decisions, highlighting the responsibility of technology companies and security authorities as well as ethical issues associated with Artificial Intelligence.
APA, Harvard, Vancouver, ISO, and other styles
29

Ou, Wei-Liang, and 歐威良. "Design and Embedded System Implementation of Colorless Foreign Object Detection Algorithm and Facial Expression Recognition for Baby Watch and Care System." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/40068638807786860451.

Full text of the source
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
99
This study presents a colorless facial foreign object detection algorithm and a facial expression recognition technique for baby-care video surveillance systems, together with their implementation on an embedded platform. When a baby encounters a foreign object, spits up through the nose and mouth, vomits, or meets other dangerous situations, the system immediately alerts the guardian, and it simultaneously analyzes facial expressions to judge whether the baby is crying. Because the video processing algorithm is colorless, the system works properly day and night with only a near-infrared camera, and the video surveillance system thus effectively reduces the human burden. The proposed intelligent baby-watch-and-care system is divided into two subsystems: facial foreign object detection and facial expression recognition. Foreign object detection focuses on vomiting and on objects around the mouth and nose; to this end, the intensity change near the mouth is observed by comparing the current and previous frames of the dynamic video sequence. Facial expression recognition considers three states: deadpan, smiling, and crying. The baby's facial features are extracted from the images, the distances between features are computed, and these distances serve as the input vectors of a neural network that recognizes the baby's facial expression. Using a quad-core (2.50 GHz) computer with C code, the experimental results show that the algorithm detects facial features accurately: eye feature detection reaches 91% accuracy in 29 ms, nose feature detection reaches 87% in 1 ms, mouth feature detection reaches 87% in 2.1 ms, and facial expression recognition is about 80% accurate.
Finally, the baby-watch-and-care system is implemented on the BOLYMIN BEGA220A embedded platform with an ARM926EJ CPU. Software-based optimizations reduce the computational complexity. With the ARM CPU operating at 400 MHz under Windows CE, the frame rate is about 1.5 frames per second; with near-infrared photography it is 1.02 frames per second.
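One way to realize the colorless (intensity-only) change detection near the mouth is simple frame differencing over a region of interest, a minimal NumPy sketch of which follows; the ROI coordinates and the threshold are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def foreign_object_alert(prev_frame, curr_frame, roi, threshold=15.0):
    """Flag a change near the mouth: mean absolute intensity difference
    inside the ROI between consecutive grayscale (e.g. near-IR) frames.
    roi = (row0, row1, col0, col1); the threshold is illustrative."""
    r0, r1, c0, c1 = roi
    prev = prev_frame[r0:r1, c0:c1].astype(np.float32)
    curr = curr_frame[r0:r1, c0:c1].astype(np.float32)
    return float(np.abs(curr - prev).mean()) > threshold

# Two synthetic 120x160 grayscale frames; the second contains a bright
# blob inside the mouth region, standing in for a foreign object.
prev = np.full((120, 160), 80, dtype=np.uint8)
curr = prev.copy()
curr[70:90, 60:100] = 200   # simulated object near the mouth

roi = (60, 100, 50, 110)    # hypothetical mouth region
print(foreign_object_alert(prev, curr, roi))   # True: object appeared
print(foreign_object_alert(prev, prev, roi))   # False: no change
```

Because only intensities are compared, the same logic works on near-infrared footage, which is the point of making the algorithm colorless.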
APA, Harvard, Vancouver, ISO, and other styles