Dissertations / Theses on the topic 'Face Recognition Across Pose'

Consult the top 50 dissertations / theses for your research on the topic 'Face Recognition Across Pose.'

1. Graham, Daniel B. "Pose-varying face recognition." Thesis, University of Manchester, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488288.

2. Abi Antoun, Ramzi. "Pose-Tolerant Face Recognition." Research Showcase @ CMU, Carnegie Mellon University, 2013. http://repository.cmu.edu/dissertations/244.

Abstract:
Automatic face recognition performance has been steadily improving over years of active research; however, it remains significantly affected by a number of external factors such as illumination, pose, expression, occlusion and resolution that can severely alter the appearance of a face and negatively impact recognition scores. The focus of this thesis is the pose problem, which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image in an arbitrary pose is matched against a set of "mugshot-style" near-frontal gallery images. We argue that in this scenario a geometric approach based on 3D face modeling is essential for tackling the pose problem. For this purpose, we utilize a recent technique for efficient synthesis of 3D face models, the 3D General Elastic Model (3DGEM). It solved the pose synthesis problem from a single frontal image, but could not solve the pose correction problem because of missing face data due to self-occlusion. In this thesis, we extend the formulation of 3DGEM and cast this task as an occlusion-removal problem. We propose a sparse feature extraction approach using subspace modeling and ℓ1-minimization to find a representation of the geometrically 3D-corrected faces that we show is stable under varying pose and resolution. We then show how pose-tolerance can be achieved either in the feature space or in the reconstructed image space. We present two different algorithms that capitalize on the robustness of the sparse features extracted from the pose-corrected faces to achieve high matching rates that are minimally impacted by the variation in pose. We also demonstrate high verification rates when matching non-frontal to non-frontal faces. Furthermore, we show that our pose-correction framework lends itself very conveniently to the task of super-resolution: by building a multiresolution subspace, we apply the same sparse feature extraction technique to achieve single-image super-resolution with high magnification rates. We discuss how our layered framework can potentially solve both the pose and resolution problems in a unified and systematic approach. The modularity of our framework also keeps it flexible, upgradable and expandable to handle other external factors such as illumination or expression. We run extensive tests on the MPIE dataset to validate our findings.
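To make the sparse-coding step concrete: an ℓ1-regularised representation of the kind described above can be computed with ISTA (iterative soft-thresholding), a standard solver for this type of objective. The sketch below is schematic only; the random dictionary stands in for the subspace models actually trained in the thesis.

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 by iterative
    soft-thresholding (ISTA). D: (d, k) dictionary, y: (d,) signal."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the quadratic term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Toy usage: code a signal against a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
y = 2.0 * D[:, 3] + 0.01 * rng.standard_normal(64)
x = ista_sparse_code(D, y)
print(np.argmax(np.abs(x)))                # expect atom 3 to dominate
```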
3. Lincoln, Michael C. "Pose-independent face recognition." Thesis, University of Essex, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250063.

4. Godzich, Elliot J. "Automated Pose Correction for Face Recognition." Scholarship @ Claremont, 2012. http://scholarship.claremont.edu/cmc_theses/376.

Abstract:
This paper describes my participation, as my senior project, in a MITRE Corporation-sponsored computer science clinic project at Harvey Mudd College. The goal of the project was to implement a landmark-based pose correction system as a component of a larger, existing face recognition system. My main contribution to the project was the implementation of the Active Shape Models (ASM) algorithm; the inner workings of ASM are explained, as well as how the pose correction system makes use of it. Included is the most recent draft (as of this writing) of the final report that my teammates and I produced, highlighting the year's accomplishments. Even though there are few quantitative results to show because the clinic program is ongoing, our qualitative results are quite promising.
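For readers unfamiliar with ASM, its characteristic regularisation step, projecting suggested landmark positions onto a PCA shape model and clipping the coefficients, can be sketched as follows. The model matrices here are assumed to come from prior training; this is a generic illustration, not the clinic project's code.

```python
import numpy as np

def constrain_shape(x, mean_shape, P, eigvals, k=3.0):
    """Project landmarks x (2n,) onto the PCA shape model and clip each
    coefficient to +/- k standard deviations, as in classic ASM fitting."""
    b = P.T @ (x - mean_shape)            # shape coefficients
    limit = k * np.sqrt(eigvals)          # +/- 3 sigma box constraint
    b = np.clip(b, -limit, limit)
    return mean_shape + P @ b             # nearest plausible shape
```

In a full ASM fit, this step alternates with a local search that moves each landmark along its profile normal toward the strongest image evidence.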
5. Zhang, Xiaozheng. "Pose-invariant Face Recognition through 3D Reconstructions." Thesis, Griffith University, 2008. http://hdl.handle.net/10072/366373.

Abstract:
Pose invariance is a key requirement for face recognition to realise its advantage of being non-intrusive over other biometric techniques that require cooperative subjects, such as fingerprint recognition and iris recognition. Due to the complex 3D structure and varied surface reflectivity of human faces, however, pose variations bring serious challenges to current face recognition systems: the image variations of human faces under 3D transformations are larger than what existing face recognition techniques can tolerate. This research attempts to achieve pose-invariant face recognition through 3D reconstruction, which inversely estimates the 3D shape and texture information of human faces from 2D face images. The extracted information comprises intrinsic features that are useful for face recognition and invariant to pose changes. The proposed framework reconstructs personalised 3D face models from images of known people in a database (gallery views) and generates virtual views in possible poses for face recognition algorithms to match against the captured image (probe view). In particular, three different scenarios of gallery views have been scrutinised: 1) when multiple face images from a fixed viewpoint under different illumination conditions are used as gallery views; 2) when a police mug shot consisting of a frontal view and a side view per person is available as gallery views; and 3) when a single frontal face image per person is used as the gallery view. These three scenarios provide the system with different amounts of information and cover a wide range of situations which a face recognition system will encounter. Three novel 3D reconstruction approaches have been proposed according to these three scenarios: 1) Heterogeneous Specular and Diffuse (HSD) face modelling, 2) Multilevel Quadratic Variation Minimisation (MQVM), and 3) Automatic Facial Texture Synthesis (AFTS), respectively. Experimental results show that these three proposed approaches can effectively improve the performance of face recognition across pose...
Thesis (PhD Doctorate), School of Engineering, Science, Environment, Engineering and Technology, Griffith University.
6. Wibowo, Moh Edi. "Towards pose-robust face recognition on video." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/77836/1/Moh%20Edi_Wibowo_Thesis.pdf.

Abstract:
This thesis investigates face recognition in video under the presence of large pose variations. It proposes a solution that performs simultaneous detection of facial landmarks and head poses across large pose variations, employs discriminative modelling of feature distributions of faces with varying poses, and applies fusion of multiple classifiers to pose-mismatch recognition. Experiments on several benchmark datasets have demonstrated that improved performance is achieved using the proposed solution.
7. Kumar, Sooraj. "Face recognition with variation in pose angle using face graphs." Online version of thesis, 2009. http://hdl.handle.net/1850/9482.

8. King, Steve. "Robust face recognition under varying illumination and pose." Thesis, University of Huddersfield, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417305.

9. Beymer, David James. "Pose-invariant face recognition using real and virtual views." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/38101.

Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (p. 173-184).
10. Du, Shan. "Image-based face recognition under varying pose and illumination conditions." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2814.

Abstract:
Image-based face recognition has found wide application over the past decades in commerce and law enforcement, for example in mug shot database matching, identity authentication, and access control. Existing face recognition techniques (e.g., Eigenface, Fisherface, and Elastic Bunch Graph Matching), however, do not perform well when imaging conditions vary: because of pose and illumination changes, face images of the same person often have very different appearances. These variations make face recognition much more challenging. With this concern in mind, the objective of my research is to develop face recognition techniques that are robust against such variations. This thesis addresses the two main variation problems in face recognition, i.e., pose and illumination variations. To improve the performance of face recognition systems, the following methods are proposed: (1) a face feature extraction and representation method using non-uniformly selected Gabor convolution features; (2) an illumination normalization method using adaptive region-based image enhancement for face recognition under variable illumination conditions; (3) an eye detection method for gray-scale face images under various illumination conditions; and (4) a virtual pose generation method for pose-invariant face recognition. The details of these proposed methods are explained in this thesis. In addition, we conduct a comprehensive survey of existing face recognition methods and point out future research directions.
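As an illustration of Gabor convolution features (method 1 above), the sketch below builds a small filter bank with OpenCV and samples magnitude responses on a coarse grid. The filter parameters and the uniform grid are placeholders; the thesis's contribution is precisely a non-uniform selection of such features.

```python
import cv2
import numpy as np

def gabor_features(gray, scales=(7, 11, 15), n_orient=8):
    """Convolve a face image with a small Gabor filter bank and return
    the magnitude responses sampled on a coarse pixel grid."""
    feats = []
    for ksize in scales:
        for i in range(n_orient):
            theta = i * np.pi / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 3.0,
                                      theta=theta, lambd=ksize / 2.0,
                                      gamma=0.5, psi=0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.append(np.abs(resp)[::8, ::8].ravel())  # coarse grid sample
    return np.concatenate(feats)
```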
11. Rajwade, Ajit. "Facial pose estimation and face recognition from three-dimensional data." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82410.

Abstract:
Face recognition from 3D shape information has been proposed as a method of biometric identification in recent times. This thesis presents a 3D face recognition system capable of recognizing the identity of an individual from a 3D facial scan in any pose across the view-sphere, by suitably comparing it with a set of models stored in a database. The system makes use of 3D shape information only, ignoring textural information completely.
Firstly, the thesis proposes a generic learning strategy using support vector regression [11] to estimate the approximate pose of a 3D scan. The support vector machine (SVM) is trained on range images in several poses, belonging to a small set of individuals. The thesis also examines the relationship between the size of the range image and the accuracy of the pose prediction from the scan.
Secondly, a hierarchical two-step strategy is proposed to normalize a facial scan to a nearly frontal pose before performing recognition. The first step consists of a coarse normalization, making use of either the spatial relationships between salient facial features or the generic learning algorithm using the SVM. This is followed by an iterative technique to refine the alignment to the frontal pose, essentially an improved form of the Iterated Closest Point algorithm [17]. The latter step produces a residual error value, which can be used as a metric to gauge the similarity between two faces. Our two-step approach is experimentally shown to outperform both individual normalization methods in terms of recognition rates over a very wide range of facial poses. The strategy has been tested on a large database of 3D facial scans in which the training and test images of each individual were acquired at significantly different times, unlike several existing 3D face recognition methods.
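The refinement step is based on the Iterated Closest Point idea; a minimal point-to-point variant, assuming roughly pre-aligned scans, looks like the following. Note the returned mean residual, which, as in the abstract, can double as a face-similarity metric (the thesis's improved variant differs in its details).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iter=30):
    """Rigidly align src (n,3) to dst (m,3) by iterating nearest-neighbour
    matching and the SVD (Kabsch) solution for rotation/translation."""
    tree = cKDTree(dst)
    src = src.copy()
    for _ in range(n_iter):
        dist, idx = tree.query(src)          # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_d
    return src, float(np.mean(dist))         # residual error as similarity metric
```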
12. Arashloo, Shervin Rahimzadeh. "Pose-invariant 2D face recognition by matching using graphical models." Thesis, University of Surrey, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527013.

13. El Seuofi, Sherif M. "Performance Evaluation of Face Recognition Using Frames of Ten Pose Angles." Youngstown State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1198184813.

14. Zhao, Sanqiang. "On Sparse Point Representation for Face Localisation and Recognition." Thesis, Griffith University, 2009. http://hdl.handle.net/10072/366629.

Abstract:
Automatic face recognition has been an active research field during the last few decades. Existing face recognition systems have demonstrated acceptable recognition performance under controlled conditions. However, practical and robust face recognition that is tolerant to various interfering variations remains a difficult and unsolved problem in the research community. In the first part of this thesis, we propose to use the concept of sparse point representation to address four important challenges in face recognition: wider-range tolerance to pose variation, face misalignment, facial landmark localisation and head pose estimation. The sparse point representation can be classified into two categories. In the first category, equal numbers of feature points are predefined on different individuals. Each feature point refers to a specific physical location on a face, and all the feature points have explicit correspondences across different individuals. In the second category, a set of feature points is detected at locations with discriminative information content on a face image. Both the number and the positions of the feature points vary from person to person, such that the diverse facial characteristics of different individuals can be represented. Based on the first category of sparse point representation, we propose a new Constrained Profile Model (CPM) to form an efficient facial landmark localisation framework. We also propose a novel Elastic Energy Model (EEM) to automatically conduct head pose estimation. Based on the second category of sparse point representation, we propose a new Textural Hausdorff Distance (THD), which demonstrates a considerably wider range of tolerance against both in-depth head rotation and face misalignment. In the second part of this thesis, we focus on recently proposed micropattern-based approaches, which have been shown to outperform classical face recognition methods and have provided a new way of investigating face analysis. We first apply a new Multidirectional Binary Pattern (MBP) representation to sparse points to establish point correspondences for face recognition. We further propose an enhanced Sobel-LBP operator for face representation, which demonstrates better performance than the original Local Binary Pattern (LBP). We finally present a novel high-order Local Derivative Pattern (LDP) for face recognition, which can capture more detailed and discriminative information than the first-order local pattern used in LBP. It should be noted that the concept of LDP for face recognition was pioneered by Dr. Baochang Zhang, but we have significantly extended and elaborated it: we have extended LDP from its original use on Gabor phase features only to a much more generalised definition on gray-level images, rewritten and enlarged the original draft of his manuscript, and implemented and reported some of the experiments ourselves. In the third part of this thesis, we turn to the representation of the 'Average Face', which was recently published in Science and claimed to be capable of dramatically improving the performance of face recognition systems. To reveal its working mechanism, we conduct a comparative study of its effectiveness on holistic and local face recognition approaches. Our experimental results reveal that the process of face averaging does not necessarily improve all face recognition systems; its usefulness depends on the specific methods employed in practice.
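For reference, the basic Local Binary Pattern that Sobel-LBP and LDP build upon can be sketched in a few lines of NumPy; this is the standard radius-1, 8-neighbour version, not the thesis's extended operators. Block histograms of these codes form the usual descriptor.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour, radius-1 LBP: each pixel is coded by which of its
    neighbours are at least as bright as the centre pixel."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                       # centre pixels
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= (nb >= c).astype(np.int32) << bit
    return code                             # values in [0, 255]
```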
Thesis (PhD Doctorate), Griffith School of Engineering, Science, Environment, Engineering and Technology.
15. Lucey, Patrick Joseph. "Lipreading across multiple views." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16676/1/Patrick_Joseph_Lucey_Thesis.pdf.

Abstract:
Visual information from a speaker's mouth region is known to improve automatic speech recognition (ASR) robustness, especially in the presence of acoustic noise. Currently, the vast majority of audio-visual ASR (AVASR) studies assume frontal images of the speaker's face, which is a rather restrictive human-computer interaction (HCI) scenario. The lack of research into AVASR across multiple views has been dictated by the lack of large corpora containing varying pose/viewpoint speech data. Recently, research has concentrated on recognising human behaviours within "meeting" or "lecture" type scenarios via "smart rooms". This has resulted in the collection of audio-visual speech data which allows the recognition of visual speech from both frontal and non-frontal views. Using this data, the main focus of this thesis was to investigate and develop various methods, within the confines of a lipreading system, which can recognise visual speech across multiple views. This research constitutes the first published work in the field to look at this particular aspect of AVASR. The task of recognising visual speech from non-frontal views (i.e. profile) is in principle very similar to that of frontal views, requiring the lipreading system to initially locate and track the mouth region and subsequently extract visual features. However, this task is far more complicated than the frontal case, because the facial features required to locate and track the mouth lie in a much more limited spatial plane. Nevertheless, accurate mouth region tracking can be achieved by employing techniques similar to frontal facial feature localisation. Once the mouth region has been extracted, the same visual feature extraction process can take place as for the frontal view. A novel contribution of this thesis is to quantify the degradation in lipreading performance between the frontal and profile views. In addition, novel patch-based analysis of the various views is conducted, and as a result a novel multi-stream patch-based representation is formulated. Having a lipreading system which can recognise visual speech from both frontal and profile views is a novel contribution to the field of AVASR. However, given both the frontal and profile viewpoints, this begs the question: is there any benefit to having the additional viewpoint? Another major contribution of this thesis is the exploration of a novel multi-view lipreading system. This system shows that there does exist complementary information in the additional viewpoint (possibly that of lip protrusion), with superior performance achieved by the multi-view system compared to the frontal-only system. Even though a multi-view lipreading system which can recognise visual speech from both frontal and profile views is very beneficial, it can hardly be considered realistic, as each viewpoint is dedicated to a single pose (i.e. frontal or profile). In an effort to make the lipreading system more realistic, a unified system based on a single camera was developed which enables a lipreading system to recognise visual speech from both frontal and profile poses. This is called pose-invariant lipreading. Pose-invariant lipreading can be performed on either stationary or continuous tasks.
Methods which effectively normalise the various poses into a single pose were investigated for the stationary scenario, and in another contribution of this thesis, an algorithm based on regularised linear regression was employed to project all the visual speech features into a uniform pose. This particular method is shown to be beneficial when the lipreading system is biased towards the dominant pose (i.e. frontal). The final contribution of this thesis is the formulation of a continuous pose-invariant lipreading system which contains a pose estimator at the start of the visual front-end. This system highlights the complexity of developing such a system, as introducing more flexibility within the lipreading system invariably means the introduction of more error. All the work contained in this thesis presents novel and innovative contributions to the field of AVASR, and will hopefully aid the future deployment of AVASR systems in realistic scenarios.
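The pose-normalisation idea, projecting visual speech features from one pose into another with regularised linear regression, reduces to a ridge-regression map. A schematic sketch, with the paired training features assumed given; this is a reading of the approach, not the thesis's actual implementation.

```python
import numpy as np

def learn_pose_map(X_profile, X_frontal, alpha=1.0):
    """Ridge-regression map W so that X_profile @ W approximates X_frontal.
    Rows are paired visual-speech feature vectors of the same utterance."""
    d = X_profile.shape[1]
    A = X_profile.T @ X_profile + alpha * np.eye(d)   # regularised normal equations
    return np.linalg.solve(A, X_profile.T @ X_frontal)

# At test time, a profile feature vector x is projected into the frontal pose:
# x_frontal_hat = x @ W
```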
16. Hwang, June Youn. "Fast pose and automatic matching using hybrid method for the three dimensional face recognition." Thesis, University of Newcastle upon Tyne, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.514469.

17. Zeni, Luis Felipe de Araujo. "Reconhecimento facial tolerante à variação de pose utilizando uma câmera RGB-D de baixo custo" [Pose-tolerant face recognition using a low-cost RGB-D camera]. Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/101659.

Abstract:
Recognizing the identity of human beings from recorded digital images of their faces is important for a variety of applications, such as access control, human-computer interaction, and digital entertainment. This dissertation proposes a new automatic face recognition method that uses both the 2D and 3D information of an RGB-D (Kinect) camera. The method uses the colour information of the 2D image to locate faces in the scene; once a face is located, it is cropped and normalized to a standard size and colour. Afterwards, using the depth information, the method estimates the pose of the head relative to the camera. With the normalized faces and their respective pose information, the proposed method trains a face model that is robust to pose and expression variations, using a new automatic technique that separates different poses into different face models. With the trained model, the method is able to identify whether the people used to train the model are present or not in newly acquired images, to which the model had no access during the training phase. The experiments demonstrate that the proposed method considerably improves classification results on real images with varying pose and expression.
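One common way to obtain a head pose from RGB-D data, sketched below, is to rigidly align a reference frontal 3D landmark set to the landmarks measured from the depth image, using the SVD-based (Kabsch) solution. The landmark sets are assumed inputs; this is an illustration of the idea, not the dissertation's exact estimator.

```python
import numpy as np

def head_pose_from_landmarks(pts, ref):
    """Rigid transform (R, t) that maps reference frontal 3D landmarks
    (ref, n x 3) onto landmarks measured from the depth image (pts, n x 3)."""
    mu_p, mu_r = pts.mean(0), ref.mean(0)
    H = (ref - mu_r).T @ (pts - mu_p)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflection solutions
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_p - R @ mu_r
    return R, t   # decompose R into yaw/pitch/roll to bin faces by pose
```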
18. Derkach, Dmytro. "Spectrum analysis methods for 3D facial expression recognition and head pose estimation." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/664578.

Abstract:
Facial analysis has attracted considerable research efforts over the last decades, with a growing interest in improving the interaction and cooperation between people and computers. This makes it necessary for automatic systems to be able to react to things such as the head movements of a user or his/her emotions. Further, this should be done accurately and in unconstrained environments, which highlights the need for algorithms that can take full advantage of 3D data. Such systems could be useful in multiple domains, such as human-computer interaction, tutoring, interviewing, health care, and marketing. In this thesis, we focus on two aspects of facial analysis: expression recognition and head pose estimation. In both cases, we specifically target the use of 3D data and present contributions that aim to identify meaningful representations of the facial geometry based on spectral decomposition methods: 1. We propose a spectral representation framework for facial expression recognition using exclusively 3D geometry, which allows a complete description of the underlying surface that can be further tuned to the desired level of detail. It is based on the decomposition of local surface patches into their spatial frequency components, much like a Fourier transform, which are related to intrinsic characteristics of the surface. We propose the use of Graph Laplacian Features (GLFs), which result from the projection of local surface patches onto a common basis obtained from the graph Laplacian eigenspace. The proposed approach is tested on expression and Action Unit recognition, and the results confirm that GLFs produce state-of-the-art recognition rates. 2. We propose an approach for head pose estimation that allows modeling the underlying manifold that results from general rotations in 3D. We start by building a fully automatic system based on the combination of landmark detection and dictionary-based features, which obtained the best results in the FG2017 Head Pose Estimation Challenge. Then, we use a tensor representation and higher-order singular value decomposition to separate the subspaces that correspond to each rotation factor, and show that each of them has a clear structure that can be modeled with trigonometric functions. Such a representation provides a deep understanding of the data's behavior and can be used to further improve the estimation of the head pose angles.
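The Graph Laplacian Features can be pictured as a surface patch's "Fourier coefficients": a per-vertex signal is projected onto the low-frequency eigenvectors of the patch's graph Laplacian. A toy sketch, with the patch adjacency matrix assumed given and the thesis's exact construction left aside:

```python
import numpy as np

def graph_laplacian_features(adj, signal, k=10):
    """Project a per-vertex signal (e.g. depth) of a mesh patch onto the first
    k eigenvectors of the graph Laplacian L = D - A (low spatial frequencies)."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    eigvals, eigvecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    basis = eigvecs[:, :k]                   # smooth, Fourier-like basis
    return basis.T @ signal                  # k spectral coefficients
```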
19. Cament Riveros, Leonardo. "Enhancements by weighted feature fusion, selection and active shape model for frontal and pose variation face recognition." Thesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/132854.

Abstract:
Doctorate in Electrical Engineering.
Face recognition is one of the most active areas of research in computer vision because of its wide range of possible applications in person identification, access control, human-computer interfaces, and video search, among many others. Face identification is a one-to-n matching problem, where a captured face is compared to n samples in a database. In this work a new method for robust face recognition is proposed. The methodology is divided into two parts: the first focuses on face recognition robust to illumination, expression and small age variation, and the second focuses on pose variation. The proposed algorithm is based on Gabor features, which have been widely studied in face identification because of their good results and robustness. In the first part, a new method for face identification is proposed that combines local normalization for an illumination compensation stage, entropy-like weighted Gabor features for a feature extraction stage, and improvements in Borda count classification through a threshold that eliminates low-score Gabor jets from the voting process. The FERET, AR, and FRGC 2.0 databases were used to test the proposed method and compare its results with those previously published. Results on these databases show significant improvements relative to previously published results, reaching the best performance on the FERET and AR databases. The proposed method also showed significant robustness to slight pose variations; tested under the assumption of noisy eye detection, it proved robust to errors of up to three pixels in eye localisation. However, face identification is strongly affected when the test images are very different from those of the gallery, as is the case under varying face pose. The second part of this work proposes a new 2D Gabor-based method which modifies the grid from which the Gabor features are extracted, using a mesh to model face deformations produced by varying pose. A statistical model of the Borda count scores computed using the Gabor features further improves recognition performance across pose. The method was tested on the FERET and CMU-PIE databases, and the performance improvement provided by each block was assessed. The proposed method achieved the highest classification accuracy yet published on the FERET database with 2D face recognition methods, and its performance on the CMU-PIE database is among those obtained by the best published methods. Extensive experimental results are provided for different combinations of the proposed method, including results with two poses enrolled as gallery.
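The thresholded Borda count described above can be sketched as follows: each local Gabor jet ranks the gallery identities, jets whose best similarity falls below a threshold abstain, and the ranks are summed. The similarity matrix is an assumed input; this is a schematic reading, not the thesis's implementation.

```python
import numpy as np

def borda_count(similarities, threshold=0.0):
    """similarities: (n_jets, n_gallery) similarity of each local Gabor jet
    to each gallery identity. Jets whose best score is below the threshold
    are dropped from the vote."""
    n_gallery = similarities.shape[1]
    votes = np.zeros(n_gallery)
    for s in similarities:
        if s.max() < threshold:             # low-confidence jet: no vote
            continue
        ranks = np.argsort(np.argsort(s))   # 0 = worst, n_gallery-1 = best
        votes += ranks
    return int(np.argmax(votes))            # identity with the highest score
```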
20. Brown, Dane. "Faster upper body pose recognition and estimation using compute unified device architecture." Thesis, University of the Western Cape, 2013. http://hdl.handle.net/11394/3455.

Abstract:
Magister Scientiae (MSc).
The SASL project is in the process of developing a machine translation system that can translate fully-fledged phrases between SASL and English in real time. To date, several systems have been developed by the project, focusing on facial expression, hand shape, hand motion, hand orientation and hand location recognition and estimation. Achmed developed a highly accurate upper body pose recognition and estimation system, capable of recognizing and estimating the location of the arms from two-dimensional video captured from a monocular view at an accuracy of 88%; however, the system operates well below real-time speed. This research aims to investigate the use of optimizations and parallel processing techniques with the CUDA framework on Achmed's algorithm to achieve real-time upper body pose recognition and estimation. A detailed analysis of Achmed's algorithm identified potential improvements. A re-implementation of Achmed's algorithm on the CUDA framework, coupled with these improvements, culminated in an enhanced upper body pose recognition and estimation system that operates in real time with increased accuracy.
21. Chu, Baptiste. "Neutralisation des expressions faciales pour améliorer la reconnaissance du visage" [Neutralizing facial expressions to improve face recognition]. Thesis, École centrale de Lyon (Écully), 2015. http://www.theses.fr/2015ECDL0005/document.

Abstract:
Expression and pose variations are major challenges for reliable face recognition (FR) in 2D. In this thesis, we aim to endow state-of-the-art face recognition SDKs with robustness to simultaneous facial expression variations and pose changes by using an extended 3D Morphable Model (3DMM) which isolates identity variations from those due to facial expressions. Specifically, given a probe with expression, a novel view of the face is generated in which the pose is rectified and the expression neutralized. We present two methods of expression neutralization. The first uses prior knowledge to infer the neutral expression from an input image. The second, specifically designed for verification, is based on the transfer of the gallery face's expression to the probe. Experiments using rectified and neutralized views with a standard commercial FR SDK on two 2D face databases show significant performance improvements and demonstrate the effectiveness of the proposed approach. We then extend these methods to the emerging problem of recognizing faces in video streams. Finally, we present different methods for improving biometric performance in specific cases.
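With a morphable model that separates identity and expression subspaces, neutralisation conceptually amounts to dropping the expression component after fitting. A schematic sketch with assumed basis matrices, not the thesis's actual 3DMM or fitting pipeline:

```python
import numpy as np

def neutralize(shape_fit, mean, B_id, B_exp):
    """Given a fitted 3DMM shape (3n,), recover identity/expression
    coefficients by least squares, then rebuild the shape with the
    expression coefficients zeroed out."""
    B = np.hstack([B_id, B_exp])                       # joint basis
    coeffs, *_ = np.linalg.lstsq(B, shape_fit - mean, rcond=None)
    alpha = coeffs[:B_id.shape[1]]                     # identity part only
    return mean + B_id @ alpha                         # neutral-expression shape
```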
22. Kramer, Annika. "Model based methods for locating, enhancing and recognising low resolution objects in video." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/585.

Abstract:
Visual perception is our most important sense, enabling us to detect and recognise objects even in low-detail video scenes. While humans are able to perform such object detection and recognition tasks reliably, most computer vision algorithms struggle with wide-angle surveillance videos, where low resolution and poor object detail make automatic processing difficult. Additional problems arise from varying pose and lighting conditions as well as non-cooperative subjects. All these constraints pose problems for automatic scene interpretation of surveillance video, including object detection, tracking and object recognition. The aim of this thesis is therefore to detect, enhance and recognise objects by incorporating a priori information and by using model-based approaches. Motivated by the increasing demand for automatic methods for object detection, enhancement and recognition in video surveillance, different aspects of the video processing task are investigated, with a focus on human faces. In particular, the challenge of fully automatic face pose and shape estimation by fitting a deformable 3D generic face model under varying pose and lighting conditions is tackled. Principal Component Analysis (PCA) is utilised to build an appearance model that is then used within a particle-filter-based approach to fit the 3D face mask to the image, recovering face pose and person-specific shape information simultaneously. Experiments demonstrate its use at different resolutions and under varying pose and lighting conditions. Following that, a combined tracking and super-resolution approach enhances the quality of poor-detail video objects. A 3D object mask is subdivided such that every mask triangle is smaller than a pixel when projected into the image, and is then used for model-based tracking; the mask subdivision then allows for super-resolution of the object by combining several video frames. This approach achieves better results than traditional super-resolution methods without the use of interpolation or deblurring. Lastly, object recognition is performed in two different ways. The first recognition method is applied to characters and used for license plate recognition. A novel character model is proposed to create different appearances, which are then matched with the image of unknown characters for recognition. This allows for simultaneous character segmentation and recognition, and high recognition rates are achieved for low-resolution characters down to only five pixels in size. While this approach is only feasible for objects with a limited number of different appearances, like characters, the second recognition method is applicable to any object, including human faces. A generic 3D face model is automatically fitted to an image of a human face, and recognition is performed on the mask level rather than the image level. This approach requires neither an initial pose estimation nor the selection of feature points; the face alignment is provided implicitly by the mask-fitting process.
23. Alkazhami, Emir. "Facial Identity Embeddings for Deepfake Detection in Videos." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170587.

Abstract:
Forged videos of swapped faces, so-called deepfakes, have gained a lot of attention in recent years, and methods for automated detection of this type of manipulation are also progressing rapidly. The purpose of this thesis work is to evaluate the possibility and effectiveness of using deep embeddings from facial recognition networks as a basis for detecting such deepfakes. In addition, the thesis aims to answer whether the identity embeddings contain information that can be used for detection when analyzed over time, and whether it is suitable to include information about the person's head pose in this analysis. To answer these questions, three classifiers are created, each intended to answer one question. Their performances are compared, and it is shown that identity embeddings are suitable as a basis for deepfake detection. Temporal analysis of the embeddings also seems effective, at least for deepfake methods that only work on a frame-by-frame basis. Including information about head poses in the videos is shown not to improve such a classifier.
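The temporal-analysis idea can be illustrated with a simple consistency score: how far each frame's identity embedding drifts from the track's centroid. The embeddings (from any face recognition network) are assumed inputs, and the thesis's classifiers are learned models rather than this fixed statistic.

```python
import numpy as np

def identity_inconsistency(embeddings):
    """embeddings: (n_frames, d) identity vectors of one tracked face.
    Returns the mean cosine distance of frames to the track centroid;
    swapped faces tend to drift more than genuine ones."""
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    centroid = E.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    cos_sim = E @ centroid
    return float(np.mean(1.0 - cos_sim))   # higher = less consistent identity
```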
24. Fiche, Cécile. "Repousser les limites de l'identification faciale en contexte de vidéo-surveillance" [Pushing the limits of facial identification in video surveillance contexts]. Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENT005/document.

Abstract:
Person identification systems based on face recognition are becoming increasingly widespread and are used in very diverse applications, particularly in the field of video surveillance. In this context, the performance of facial recognition algorithms largely depends on the image acquisition conditions, especially because the pose can vary, but also because the acquisition methods themselves can introduce artifacts. The main issues are focus imprecision, which can lead to blurred images, and errors related to compression, which can introduce block artifacts. The work done during the thesis therefore focuses on facial recognition in images taken by video surveillance cameras, in cases where the images contain blur or block artifacts or show varying poses. First, we propose a new approach that significantly improves facial recognition in images with high blur levels or strong block artifacts. The method, which makes use of specific no-reference quality metrics, starts by evaluating the quality of the input image and then adapts the training database of the recognition algorithms accordingly. Second, we focus on facial pose estimation. It is generally very difficult to recognize a face in an image taken from a viewpoint other than the frontal one, and most facial identification algorithms that are considered robust to pose variation need to know the pose in order to achieve a satisfying recognition rate in a relatively short time. We have therefore developed a fast and sufficiently accurate pose estimation method based on recent recognition techniques.
25. Zhang, Yuyao. "Non-linear dimensionality reduction and sparse representation models for facial analysis." Thesis, INSA Lyon, 2014. http://www.theses.fr/2014ISAL0019/document.

Abstract:
Face analysis techniques commonly require a proper representation of images by means of dimensionality reduction, leading to embedded manifolds that aim to capture the relevant characteristics of the signals. In this thesis, we first provide a comprehensive survey of the state of the art in embedded manifold models. Then, we introduce a novel non-linear embedding method, Kernel Similarity Principal Component Analysis (KS-PCA), into Active Appearance Models, in order to model face appearances under variable illumination. The proposed algorithm successfully outperforms the traditional linear PCA transform in capturing the salient features generated by different illuminations, and reconstructs the illuminated faces with high accuracy. We also consider the problem of automatically classifying human face poses from face views with varying illumination, as well as occlusion and noise. Based on sparse representation methods, we propose two dictionary-learning frameworks for this pose classification problem. The first is the Adaptive Sparse Representation pose Classification (ASRC). It trains the dictionary via a linear model called Incremental Principal Component Analysis (Incremental PCA), tending to decrease the intra-class redundancy which may affect classification performance, while keeping the extra-class redundancy which is critical for sparse representation. The second is the Dictionary-Learning Sparse Representation model (DLSR), which learns the dictionary so as to coincide with the classification criterion; this training goal is achieved by the K-SVD algorithm. In a series of experiments, we show the performance of the two dictionary-learning methods, which are respectively based on a linear transform and a sparse representation model. Besides, we propose a novel Dictionary Learning framework for Illumination Normalization (DL-IN), based on sparse representation in terms of coupled dictionaries. The dictionary pairs are jointly optimized from normally illuminated and irregularly illuminated face image pairs. We further utilize a Gaussian Mixture Model (GMM) to enhance the framework's capability of modeling data under complex distributions: the GMM adapts each model to a part of the samples, and the models are then fused together. Experimental results demonstrate the effectiveness of sparsity as a prior for patch-based illumination normalization of face images.
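The dictionary-learning loop underlying these frameworks alternates sparse coding with a dictionary update. The sketch below uses a MOD-style least-squares update for brevity where the thesis uses K-SVD; sizes and parameters are illustrative placeholders.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dictionary_learning(X, k=64, lam=0.1, n_outer=20, n_inner=50):
    """Alternate batch sparse coding (ISTA) with a MOD-style dictionary
    update. X: (d, n) training signals, columns are samples."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((X.shape[0], k))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    C = np.zeros((k, X.shape[1]))
    for _ in range(n_outer):
        L = np.linalg.norm(D, 2) ** 2
        for _ in range(n_inner):                   # sparse-coding step
            C = soft(C - D.T @ (D @ C - X) / L, lam / L)
        D = X @ C.T @ np.linalg.pinv(C @ C.T)      # MOD dictionary update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)
    return D, C
```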
26. Peng, Hsiao-Chia (彭小佳). "3D Face Reconstruction on RGB and RGB-D Images for Recognition Across Pose." Thesis, National Taiwan University of Science and Technology, 2015. http://ndltd.ncl.edu.tw/handle/88142215912683274078.

PhD dissertation, National Taiwan University of Science and Technology, Department of Mechanical Engineering, academic year 103 (2014-15).
Abstract:
Face recognition across pose is a challenging problem in computer vision. Two scenarios are considered in this thesis. One is the common setup with a single frontal facial image of each subject in the gallery set and the images of other poses in the probe set. The other considers an RGB-D image of the frontal face for each subject in the gallery, while the probe set is the same as in the previous case, containing only RGB images of other poses. The second scenario simulates the case where an RGB-D camera is available for user registration only, and recognition must be performed on regular RGB images without the depth channel. Two approaches are proposed for handling the first scenario: one holistic and one component-based. The former is extended from a face reconstruction approach and improved with different sets of landmarks for alignment and multiple reference models considered in the reconstruction phase. The latter focuses on the reconstruction of facial components obtained via pose-invariant landmarks, and on recognition with different components considered at different poses; such component-based reconstruction for handling cross-pose recognition is rarely seen in the literature. Although the approach for handling the second scenario, i.e., RGB-D based recognition, is partially similar to the approach for the first scenario, its novelty lies in the handling of depth readings corrupted by quantization noise, which are often encountered when the face is not close enough to the RGB-D camera at registration. An approach is proposed to resurface the corrupted depth map, substantially improving recognition performance. All of the proposed approaches are evaluated on benchmark databases and shown to be comparable to state-of-the-art approaches.
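The depth-resurfacing problem can be pictured with a generic edge-preserving baseline: fill missing readings and smooth the quantised Kinect-style depth map. The sketch below (OpenCV inpainting plus a bilateral filter) is only a stand-in for the resurfacing method actually proposed in the thesis.

```python
import cv2
import numpy as np

def smooth_depth(depth_mm):
    """Edge-preserving baseline for a quantised depth map (uint16,
    millimetres): inpaint invalid zero readings, then bilateral-filter."""
    dep = depth_mm.astype(np.float32)
    holes = (depth_mm == 0).astype(np.uint8)            # missing readings
    dep = cv2.inpaint(dep, holes, inpaintRadius=3, flags=cv2.INPAINT_NS)
    return cv2.bilateralFilter(dep, d=9, sigmaColor=25.0, sigmaSpace=7.0)
```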
APA, Harvard, Vancouver, ISO, and other styles
28

Sanyal, Soubhik. "Discriminative Descriptors for Unconstrained Face and Object Recognition." Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4177.

Full text
Abstract:
Face and object recognition is a challenging problem in the field of computer vision. It deals with identifying faces or objects from an image or video. Due to its numerous applications in biometrics, security, multimedia processing, on-line shopping, psychology and neuroscience, automated vehicle parking systems, autonomous driving, and machine inspection, it has drawn attention from many researchers, who have studied different aspects of this problem. Among them, pose-robust matching is a very important problem, with applications such as recognizing faces and objects in uncontrolled scenarios where images appear in a wide variety of pose and illumination conditions, often at low resolution. In this thesis, we propose three discriminative pose-free descriptors, the Subspace Point Representation (DPF-SPR), the Layered Canonical Correlated (DPF-LCC), and the Aligned Discriminative Pose Robust (ADPR) descriptor, for matching faces and objects across pose. They are also robust for recognition at low resolution and under varying illumination. We use training examples at very few poses to generate virtual intermediate pose subspaces. An image is represented by a feature set obtained by projecting its low-level features onto these subspaces. In this way we gather more information about unseen poses by generating synthetic data, making our features more robust to unseen pose variations. We then apply a discriminative transform to make this feature set suitable for recognition, which generates two of our descriptors, DPF-SPR and DPF-LCC. In the first approach, we transform the feature set into a vector by a subspace-to-point representation technique, yielding the DPF-SPR descriptor. In the second, layered structures of canonically correlated subspaces are formed, onto which the feature set is projected, yielding the DPF-LCC descriptor. In a third approach, we first align the remaining subspaces with the frontal one before learning the discriminative metric, and concatenate the aligned discriminative projected features to generate ADPR. Experiments on recognizing faces and objects across varying pose are reported: the MultiPIE and Surveillance Cameras Face databases for face recognition, and the COIL-20 and RGB-D datasets for object recognition. We show that our approaches can even improve the recognition rate over state-of-the-art deep learning approaches. We also perform extensive analysis of our three descriptors to obtain a better qualitative understanding, and compare with the state of the art to show the effectiveness of the proposed approaches.
APA, Harvard, Vancouver, ISO, and other styles
29

Ling-ying, Lee, and 李玲瑩. "Face Recognition Across Poses Using A Single Reference Model." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/thxax6.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
Academic year 100 (2011–2012)
Given a frontal facial image as a gallery sample, a scheme is developed to generate novel views of the face for recognition across poses. The core of the scheme is a recently published 3D face reconstruction method that exploits a single reference 3D face model to build a 3D shape model for each face in the gallery set. The 3D shape model, combined with the texture of each facial image in the gallery, allows novel poses of the face to be generated. LBP features are then extracted from these generated poses to train an SVM classifier for recognition. Assuming a Lambertian surface with a reflectance function approximated by spherical harmonics, the 3D reference model is deformed so that the 2D projection of the deformed model approximates the facial image in the gallery. The problem is cast as an image irradiance equation with unknown lighting, albedo, and surface normals. Using the reference model to estimate the lighting, and providing an initial estimate of the albedo, the reflectance function becomes a function of the unknown surface normals only, and the irradiance equation becomes a partial differential equation which is then solved for depth. A 3D face from the FRGC database is used as the reference model in the experiments, and the performance is evaluated on the PIE database. The developed scheme is shown to give satisfactory performance, which can be further improved if the alignment between the reference model and the gallery image is enhanced.
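As a rough illustration of the recognition stage described above (not the thesis's code; the uniform-LBP histogram settings and the RBF-SVM parameters are generic assumptions), the rendered novel-pose views could feed a pipeline like:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1.0):
    """Uniform-LBP histogram of a grayscale face image."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2      # P+1 uniform codes plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_recognizer(view_images, subject_labels):
    """view_images: generated novel-pose views; subject_labels: gallery identities."""
    feats = np.array([lbp_histogram(img) for img in view_images])
    return SVC(kernel="rbf", C=10.0).fit(feats, subject_labels)
```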
APA, Harvard, Vancouver, ISO, and other styles
30

Beymer, David J. "Face Recognition Under Varying Pose." 1993. http://hdl.handle.net/1721.1/6621.

Full text
Abstract:
While researchers in computer vision and pattern recognition have worked on automatic techniques for recognizing faces for the last 20 years, most systems specialize in frontal views of the face. We present a face recognizer that works under varying pose, the difficult part of which is handling face rotations in depth. Building on successful template-based systems, our basic approach is to represent faces with templates from multiple model views that cover different poses on the viewing sphere. Our system has achieved a recognition rate of 98% on a database of 62 people containing 10 testing and 15 modeling views per person.
APA, Harvard, Vancouver, ISO, and other styles
31

Yang, Feng. "Face recognition under significant pose variation." Thesis, 2007. http://spectrum.library.concordia.ca/975540/1/MR28958.pdf.

Full text
Abstract:
Unlike frontal face detection, multi-pose face detection and recognition techniques still face the following challenges: large variability in environments, such as pose, illumination, and backgrounds, and unconstrained capture of facial images. We introduce a new system to deal with this problem. First, a two-step color-based approach is used to find candidate face areas in the original picture. Then a rough estimator over five poses is built using the AdaBoost technique. In order to accurately locate the candidate face, multiple statistical shape models (Active Shape Models, ASM) are proposed to estimate an accurate pose model of the input image and to extract facial features as well. In the recognition step, we use a geometrical mapping technique to deal with pose variation and face identification.
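A minimal sketch of the color-based candidate search, using the widely cited Peer et al. RGB skin rules as stand-ins for the thesis's unstated thresholds:

```python
import numpy as np

def skin_mask(rgb):
    """Coarse skin-color mask from raw R, G, B values (daylight rules)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = (np.maximum(np.maximum(r, g), b)
              - np.minimum(np.minimum(r, g), b))
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))
```

Connected components of this mask would then serve as the candidate face areas passed to the AdaBoost pose estimator.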
APA, Harvard, Vancouver, ISO, and other styles
32

Chiu, Kuo-Yu, and 邱國育. "Face recognition system and its applications by using face pose estimation and face pose synthesis." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/85747296243675764247.

Full text
Abstract:
Ph.D. dissertation
National Chiao Tung University
Institute of Electrical and Control Engineering
Academic year 99 (2010–2011)
In this dissertation, an improved face pose estimation algorithm is proposed to increase the face recognition rate. There are three main stages. The first stage is chin curve estimation using an active contour model, which is auto-initialized to approach the chin curve under various face poses according to statistical experimental results. The second stage is face pose estimation and synthesis. Using the chin contour information along with other facial features, a simulated annealing algorithm is adopted to estimate various face poses. With this pose information, an input face image with an arbitrary pose can be synthesized into a frontal view. The last stage is face recognition. The synthesized frontal face image is utilized to overcome the dramatic drop in recognition rate that occurs when non-frontal face images are presented. Experimental results show that the recognition rates of traditional algorithms are only about 40%, while the proposed method greatly improves the rate to about 80%. When the face recognition system is applied in a surveillance system, the recognition rates are 23% for the traditional algorithm and 70% for the proposed system.
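A toy version of the annealing-based pose estimation step might look as follows; the orthographic projection model, the centered and scale-normalized points, and SciPy's dual_annealing are all assumptions, not the dissertation's formulation:

```python
import numpy as np
from scipy.optimize import dual_annealing

def fit_pose(pts2d, model3d):
    """Estimate (yaw, pitch, roll) by minimizing orthographic reprojection error.

    pts2d: (N, 2) detected points (e.g., chin contour plus facial features),
    model3d: (N, 3) corresponding mean-face points; both are assumed
    centered and scale-normalized so translation and scale can be ignored.
    """
    def rot(yaw, pitch, roll):
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
        return Rz @ Rx @ Ry

    def cost(angles):
        projected = (model3d @ rot(*angles).T)[:, :2]  # orthographic projection
        return np.sum((projected - pts2d) ** 2)

    bounds = [(-np.pi / 2, np.pi / 2)] * 3
    return dual_annealing(cost, bounds, seed=0).x      # radians
```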
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Hsiang-Jung, and 王湘蓉. "Multi-pose Face Recognition Using an Enhanced 3D Face Modeling." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/36599347215052254848.

Full text
Abstract:
Master's thesis
Kainan University
Master's Program, College of Information
Academic year 104 (2015–2016)
With the development of information technology, face recognition has approached near-perfect positive identification. Recognizing side-view faces, however, remains difficult: because facial structures are similar across people and a profile view alone provides little information, side-view recognition is considerably harder, and only in recent years has it become an active research topic. When only a single frontal image is captured, point-cloud 3D modeling produces holes at large side angles due to the lack of depth information, which can reduce identification accuracy. This study improves traditional point-cloud 3D face modeling to reduce the holes that appear at large side angles, selects feature points for the 3D modeling training samples with reference to the active shape model, and then uses an SVM to classify identity and face angle. The classification accuracy for face angle is 80.64%; for the face recognition part, the false acceptance rate is 0% and the false rejection rate is 100%.
APA, Harvard, Vancouver, ISO, and other styles
34

Beymer, David. "Pose-Invariant Face Recognition Using Real and Virtual Views." 1996. http://hdl.handle.net/1721.1/6772.

Full text
Abstract:
The problem of automatic face recognition is to visually identify a person in an input image. This task is performed by matching the input face against the faces of known people in a database of faces. Most existing work in face recognition has limited the scope of the problem, however, by dealing primarily with frontal views, neutral expressions, and fixed lighting conditions. To help generalize existing face recognition systems, we look at the problem of recognizing faces under a range of viewpoints. In particular, we consider two cases of this problem: (i) many example views are available of each person, and (ii) only one view is available per person, perhaps a driver's license or passport photograph. Ideally, we would like to address these two cases using a simple view-based approach, where a person is represented in the database by using a number of views on the viewing sphere. While the view-based approach is consistent with case (i), for case (ii) we need to augment the single real view of each person with synthetic views from other viewpoints, views we call 'virtual views'. Virtual views are generated using prior knowledge of face rotation, knowledge that is 'learned' from images of prototype faces. This prior knowledge is used to effectively rotate in depth the single real view available of each person. In this thesis, I present the view-based face recognizer, techniques for synthesizing virtual views, and experimental results using real and virtual views in the recognizer.
APA, Harvard, Vancouver, ISO, and other styles
35

Ju-Chin Chen and 陳洳瑾. "Subspace Learning for Face Detection, Recognition and Pose Estimation." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/03701616334187369722.

Full text
Abstract:
Ph.D. dissertation
National Cheng Kung University
Department of Computer Science and Information Engineering (M.S./Ph.D. Program)
Academic year 98 (2009–2010)
This thesis concerns subspace learning methods for performing dimensionality reduction and extracting discriminant features for face detection, recognition, and pose estimation. We examine existing subspace learning methods, and novel subspace learning methods are then derived according to the data distribution of each application. The first application analyzes the manifold in eigenspace to develop statistic-based multi-view face detection and pose estimation. In the eigenspace, a simple cascaded rejector module can be developed to exclude 85% of non-face images and enhance overall system performance, and the manifold of the face data can also be used for coarse pose estimation. In addition, to improve tolerance to partial occlusions and lighting conditions, the five-module detection system is based on significant local facial features (subregions) rather than the entire face. To extract the low- and high-frequency feature information of each subregion of the facial image, an eigenspace and a residual independent basis space are constructed. The projection weight vectors and coefficient vectors in the PCA (principal component analysis) and ICA (independent component analysis) spaces have divergent distributions and are therefore modeled using a weighted Gaussian mixture model (GMM) with parameters estimated by the Expectation-Maximization (EM) algorithm. Face detection is then performed by a likelihood evaluation process based on the estimated joint probability of the weight and coefficient vectors and the corresponding geometric positions of the subregions; the use of subregion position information reduces the risk of false acceptances. Following the use of PCA+ICA to model face images, in the second application the kernel discriminant transformation (KDT) algorithm is proposed by extending the idea of canonical correlation analysis (CCA) for comparing facial image sets in face recognition. Recognition is made more robust by utilizing a set of test facial images characterized by arbitrary head poses, facial expressions, and lighting conditions. Since the manifolds of the image sets in the training database are highly overlapped and non-linearly distributed, each facial image set is non-linearly mapped into a high-dimensional space and a corresponding kernel subspace is constructed using kernel principal component analysis (KPCA). To extract discriminant features for recognition, a KDT matrix is proposed that maximizes the similarities of within-kernel subspaces while minimizing those of between-kernel subspaces. Since the KDT matrix cannot be computed explicitly in the high-dimensional feature space, an iterative kernel discriminant transformation algorithm is developed to solve for the matrix implicitly. The proposed face recognition system is demonstrated to outperform existing still-image-based as well as image-set-based recognition systems on the Yale Face Database B.
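Two of the building blocks, projection into a kernel subspace via KPCA and likelihood evaluation with an EM-fitted GMM, can be sketched with standard library calls (toy data; the component counts and kernel parameters are not the thesis's configuration):

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.random((200, 64))                   # 200 vectorized face subregions (toy data)

# kernel subspace via KPCA, standing in for the thesis's kernel subspaces
kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-2).fit(X)
coeffs = kpca.transform(X)                  # coefficients in the kernel subspace

# EM-fitted mixture over the coefficients, as in the weighted-GMM modeling
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(coeffs)
scores = gmm.score_samples(kpca.transform(X[:5]))  # per-sample log-likelihoods
```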
APA, Harvard, Vancouver, ISO, and other styles
36

Hsu, Heng-Wei, and 許恆瑋. "Face Recognition Using Metric Learning with Head Pose Information." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/69788s.

Full text
Abstract:
Ph.D. dissertation
National Chiao Tung University
Institute of Electronics
Academic year 107 (2018–2019)
Face recognition has gained much interest recently and is widely used in daily applications such as video surveillance, smartphone applications, and airport security. Nevertheless, recognizing faces in large profile views remains a hard problem, since important features become obscured as a person's head turns. This problem can be divided into two sub-problems: first, an accurate head pose estimation model is required to predict the angle of a given face image; second, a face recognition model that leverages the angle information is needed to discriminate different people at different angles. In this dissertation, we aim to fill this gap. Instead of estimating head pose through the commonly used two-step process, in which a set of landmarks is first detected from the face and the angles are then estimated from the detected landmarks, we propose to predict angles directly from face images by training a deep convolutional neural network model. We further provide a metric-learning-based face recognition framework that leverages the angle information to improve overall performance. Our contribution can be divided into three parts. First, we propose a novel geometric loss for face recognition that explores the area relations within quadruplets of samples, inherently considering the geometric characteristics of each sample set. A sampled quadruplet includes three positive samples and one negative sample, which form a tetrahedron in the embedding space. The area of the triangular face formed by the positive samples is minimized to reduce intra-class variations, whereas the areas of the triangular faces including the negative sample are maximized to increase inter-class distances. With our area-based objective function, the gradient of each sample considers its neighboring samples and adapts to the local geometry, which leads to improved performance. Second, we conduct an in-depth study of head pose estimation and present a multi-regression loss function, an L2 regression loss combined with an ordinal regression loss, to train a convolutional neural network (CNN) dedicated to estimating head poses from RGB images without depth information. The ordinal regression loss addresses the non-stationary property observed as facial features change with respect to head pose angle and learns robust features; the L2 regression loss leverages these features to provide precise angle predictions for input images. To avoid the ambiguity problem of the commonly used Euler angle representation, we further formulate head pose estimation in quaternions. Our quaternion-based multi-regression loss method achieves state-of-the-art performance on several public benchmark datasets. Third, we design a sophisticated face recognition training framework. We start from data cleaning, an automatic method to deal with the labeling noise from which most recent large datasets suffer. We then design a data augmentation method that randomly augments the input image under various conditions, such as adjusting the contrast, saturation, and lighting of an image; sharpening, blurring, and noise are also applied to simulate images from different camera sources. The boundary values of the parameters for each image processing operation are designed so that the resulting images remain reasonable. Experimental results demonstrate that models trained with this kind of data augmentation are robust to unseen images.
When training with large datasets, the size of the last fully connected layer for the classification loss is often large, since the datasets contain a large number of identities. This makes the training process hard to converge, as the weights are randomly initialized. We therefore propose an iterative training and fine-tuning process that makes the training loss converge smoothly. Furthermore, to leverage the angle information for improving face recognition performance, we provide a detailed analysis of a metric-learning-based method that learns to minimize the distance between a person's frontal and profile images. Qualitative and quantitative results demonstrate the effectiveness of the proposed training methodology. The following publications form the foundation of this thesis:
• Heng-Wei Hsu, Tung-Yu Wu, Sheng Wan, Wing Hung Wong, and Chen-Yi Lee, "QuatNet: Quaternion-Based Head Pose Estimation With Multiregression Loss," IEEE Transactions on Multimedia, Aug 2018.
• Heng-Wei Hsu, Tung-Yu Wu, Wing Hung Wong, and Chen-Yi Lee, "Correlation-based Face Detection for Recognizing Faces in Videos," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3101–3105, Apr 2018.
• Heng-Wei Hsu, Tung-Yu Wu, Sheng Wan, Wing Hung Wong, and Chen-Yi Lee, "Deep Metric Learning with Geometric Loss," under review.
• Sheng Wan, Tung-Yu Wu, Heng-Wei Hsu, Yi-Wei Chen, Wing H. Wong, and Chen-Yi Lee, "Model-based JPEG for Convolutional Neural Network Classifications," under review.
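A dimension-agnostic sketch of the tetrahedron-area idea in the geometric loss, with a hinge form and margin that are assumptions rather than the paper's exact formulation:

```python
import numpy as np

def triangle_area(a, b, c):
    """Area of the triangle spanned by three embedding vectors (any dimension)."""
    u, v = b - a, c - a
    gram = np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2
    return 0.5 * np.sqrt(max(gram, 0.0))

def geometric_loss(p1, p2, p3, n, margin=1.0):
    """Quadruplet 'tetrahedron' loss sketch: p1..p3 share an identity, n does not.

    Shrinks the all-positive face of the tetrahedron and grows the three
    faces that contain the negative sample.
    """
    pos_face = triangle_area(p1, p2, p3)
    neg_faces = (triangle_area(p1, p2, n) + triangle_area(p1, p3, n)
                 + triangle_area(p2, p3, n)) / 3.0
    return max(0.0, pos_face - neg_faces + margin)
```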
APA, Harvard, Vancouver, ISO, and other styles
37

Chu, Tsu-ying, and 朱姿穎. "Correlation Filter for Face Recognition Across Illumination." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/96q2k7.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
Academic year 100 (2011–2012)
Face recognition across illumination variation involves illumination normalization, feature extraction, and classification. This research compares several state-of-the-art illumination normalization methods and selects the most promising one. We also investigate the impact of different facial regions on recognition performance. Many believe that the facial region considered for face recognition is best bounded within the facial contour, to minimize degradation due to background and hair. However, we have found that including the boundary of the forehead, the contours of the cheeks, and the contour of the chin effectively improves performance. The minimum average correlation energy (MACE) filter combined with kernel class-dependence feature analysis (KCFA) is a proven effective solution, and is therefore adopted in this study with minor modification. Following the FRGC 2.0 protocol, the recognition rate improves from 72.91% to 84.83% with the recommended illumination normalization, and further to 88.17% with the recommended facial region.
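The MACE synthesis itself has a compact closed form, h = D^-1 X (X^H D^-1 X)^-1 u; below is a textbook frequency-domain sketch, not the modified filter used in the thesis:

```python
import numpy as np

def mace_filter(images):
    """Textbook MACE synthesis in the frequency domain.

    images: list of same-size training face images for one class.
    Returns the frequency-domain filter H with unit correlation-peak
    constraints at the origin for every training image.
    """
    shape = images[0].shape
    X = np.stack([np.fft.fft2(im).ravel() for im in images], axis=1)  # d x N
    D = np.mean(np.abs(X) ** 2, axis=1)                # average power spectrum
    Dinv_X = X / D[:, None]                            # D^-1 X
    u = np.ones(X.shape[1], dtype=complex)             # peak value 1 per image
    A = X.conj().T @ Dinv_X                            # X^H D^-1 X, an N x N system
    return (Dinv_X @ np.linalg.solve(A, u)).reshape(shape)

def correlation_plane(probe, H):
    """Correlation output; a sharp peak indicates a same-class probe."""
    return np.real(np.fft.ifft2(np.fft.fft2(probe) * np.conj(H)))
```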
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Ding-En, and 王鼎恩. "Features Selection and GMM Classifier for Multi-Pose Face Recognition." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/03408850317662581389.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
Academic year 103 (2014–2015)
Face recognition is widely used in security applications such as homeland security, video surveillance, law enforcement, and identity management. However, some problems remain in face recognition systems; the main ones include lighting changes, facial expression changes, pose variations, and partial occlusion. Although many face recognition approaches report satisfactory performance, their success is limited to controlled environments. In fact, pose variation has been identified as one of the most pressing problems in the real world, and many algorithms focusing on how to handle pose variation have received much attention. To address the pose variation problem, this thesis proposes a multi-pose face recognition system based on an effective classifier design using SURF features. In the training phase, the proposed method uses SURF features to calculate the similarity between two images of the same face in different poses, and a face recognition model (GMM) is trained using the robust SURF features from different poses. In the testing phase, feature vectors corresponding to the test images are input to all trained models to decide the recognized face. Experimental results show that the proposed method outperforms other existing methods.
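A minimal sketch of the train/test flow described above, with one GMM of pooled local descriptors per subject. SURF lives in opencv-contrib (cv2.xfeatures2d); the component count and Hessian threshold are assumptions, and cv2.SIFT_create() can be swapped in if the contrib module is unavailable:

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib

def descriptors(gray):
    _, desc = surf.detectAndCompute(gray, None)
    return desc                                   # (n_keypoints, 64) or None

def train_models(subject_images, n_components=8):
    """subject_images: {label: [grayscale images across poses]} -> {label: GMM}."""
    models = {}
    for subject, imgs in subject_images.items():
        pooled = [descriptors(im) for im in imgs]
        D = np.vstack([d for d in pooled if d is not None])
        models[subject] = GaussianMixture(n_components=n_components).fit(D)
    return models

def recognize(gray, models):
    D = descriptors(gray)
    # decision: model with the highest mean log-likelihood of the probe's descriptors
    return max(models, key=lambda s: models[s].score(D))
```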
APA, Harvard, Vancouver, ISO, and other styles
39

Zhuo, You-Lin, and 卓佑霖. "A Comparative Study on Face Recognition Across Illumination." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/mp2v5w.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
Academic year 99 (2010–2011)
Face recognition across illumination is one of the most challenging problems in image-based face analysis. Most research focuses on methods for illumination normalization, illumination-invariant feature extraction, or classifier design, but few studies compare the performance of different approaches. This research evaluates and compares several competitive approaches for illumination normalization and several methods for local feature extraction, aiming to determine an effective approach for face recognition across illumination. Because the other central concern of this research is the suitability of the chosen approach for a real-time system, methods with high computational cost are excluded, even though some may achieve high recognition rates. The approach recommended by this comparative study attains an 85.19% recognition rate on the FRGC 2.0 database. With its relatively low computational cost, the approach is experimentally shown to be appropriate for a real-time system.
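One of the competitive normalization chains such a comparison typically includes is the Tan & Triggs preprocessing pipeline (the abstract does not name the winning method, so this is a representative example rather than the thesis's recommendation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tan_triggs(gray, gamma=0.2, sigma0=1.0, sigma1=2.0, alpha=0.1, tau=10.0):
    """Tan & Triggs chain: gamma correction, difference of Gaussians,
    and two-stage contrast equalization."""
    I = np.power(gray.astype(np.float64) + 1.0, gamma)           # gamma correction
    I = gaussian_filter(I, sigma0) - gaussian_filter(I, sigma1)  # DoG filtering
    I = I / np.mean(np.abs(I) ** alpha) ** (1 / alpha)           # equalization, stage 1
    I = I / np.mean(np.minimum(np.abs(I), tau) ** alpha) ** (1 / alpha)  # stage 2
    return tau * np.tanh(I / tau)                                # compress extremes
```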
APA, Harvard, Vancouver, ISO, and other styles
40

Wu, Wei-Ting, and 吳韋霆. "Face Recognition Across Illumination Using Local DCT Features." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/31101436077802528442.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
Academic year 98 (2009–2010)
Based on holistic Log-DCT features, which are proven effective for face recognition across illumination conditions, this research considers the same features combined with local square patches for face recognition across illumination and under imprecise face localization. The objectives of this research are: (1) to define the performance upper bound attainable by the combination of holistic Log-DCT features and local patches, and (2) to investigate the impact on performance of the imprecise face localization introduced by a face detector. Satisfactory results are obtained in experiments on the CMU PIE database, which offers faces with almost perfect localization, revealing that the combination of Log-DCT features and local patches can be an effective solution for recognizing precisely localized faces across illumination. However, performance degrades substantially on the FRGC 2.0 database, which offers faces with imprecise localization and variations in pose and expression, reflecting the fact that a practical face recognition system cannot ignore these factors. A local alignment and masking scheme is proposed to tackle the problems caused by these factors, and is proven effective in an extensive experimental study.
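The Log-DCT idea, dropping the leading low-frequency DCT coefficients of the log image because they carry most of the illumination, can be sketched as follows; the diagonal coefficient ordering and the discard/keep counts are illustrative, and the thesis applies the features per local square patch rather than holistically as shown here:

```python
import numpy as np
from scipy.fftpack import dct

def log_dct_features(gray, n_discard=3, n_keep=60):
    """2D DCT of the log image; discard the first low-frequency coefficients
    and keep the next n_keep as an illumination-insensitive feature vector."""
    L = np.log(gray.astype(np.float64) + 1.0)
    C = dct(dct(L, norm="ortho", axis=0), norm="ortho", axis=1)
    h, w = C.shape
    # diagonal (zigzag-like) ordering from low to high frequency
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1], p[0]))
    return np.array([C[i, j] for i, j in order[n_discard:n_discard + n_keep]])
```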
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Chiunhsiun, and 林群雄. "Face Detection, Pose Classification, and Face Recognition Based on Triangle Geometry and Color Features." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/00914481948416413620.

Full text
Abstract:
Ph.D. dissertation
National Central University
Graduate Institute of Computer Science and Information Engineering
Academic year 89 (2000–2001)
In this dissertation, the problems of face detection, pose classification, and face recognition are studied and solved completely. The applications of face detection, pose classification, and face recognition extend to various topics, including computer vision, security systems, authentication for remote banking, and access-control applications. These problems have been addressed by numerous researchers in the past; experimental results reveal the practicability and competence of our proposed approaches for finding human faces, classifying pose, and recognizing faces, and the feasibility and efficiency of our approaches are confirmed experimentally. In this thesis, the relationship between the two eyes and the mouth is captured by the geometric structure of an isosceles triangle. The first proposed face detection system consists of two primary parts: the first searches for potential face regions, and the second performs face verification. The system copes with different sizes, different lighting conditions, varying pose and expression, noise, and defocus. Besides handling partial occlusion of the mouth and sunglasses, the system can also detect faces in side views. Experimental results demonstrate an approximately 98% success rate. In addition, a new method of extracting human-skin-like colors is proposed to reduce the total computation effort in complicated surroundings. In this approach, skin-color segmentation removes complex backgrounds according to the R, G, and B values directly; this partitioning saves nearly 80% of the total computation in complicated backgrounds. The third chapter presents another novel face detection algorithm for locating multiple faces in color scenery images. A binary skin color map is first obtained in the color analysis stage. Then, color regions corresponding to the facial and non-facial areas in the color map are separated with a clustering-based splitting algorithm. Thereafter, an elliptic face model is devised to crop the real human faces through a shape location procedure. Finally, a local thresholding technique and a statistic-based verification procedure confirm the human faces. The proposed detection algorithm combines both the color and shape properties of faces; the color span of the human face is expanded as widely as possible to cover different faces using the clustering-based splitting algorithm. Experimental results reveal the feasibility of our approach to the face detection problem. The fourth chapter presents a method for automatic estimation of the poses/angles of human faces. The proposed system consists of two primary parts: the first searches for potential face regions obtained from the isosceles-triangle criteria based on the rule of "the combination of two eyes and one mouth"; the second performs pose verification using a face weighting mask function, a direction weighting mask function, and a pose weighting mask function. The proposed pose/angle classification system can determine the poses of multiple faces. Experimental results demonstrate an approximately 99% success rate, with a very low false estimation rate. The fifth chapter presents a robust and efficient feature-based classification method to recognize human faces embedded in photographs. The proposed system consists of two main parts: the first detects the face regions, and the second performs the face recognition task. The system handles different sizes and different brightness conditions, and experimental results demonstrate that various brightness conditions are successfully overcome. Finally, conclusions and future work are given in Chapter 6.
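The isosceles-triangle rule for grouping two eye candidates with a mouth candidate can be sketched as a simple geometric test (the tolerances are illustrative, not the dissertation's calibrated values):

```python
import numpy as np

def is_face_triangle(eye_l, eye_r, mouth, tol=0.25):
    """True if the eye-eye-mouth triplet forms a plausible isosceles triangle.

    Points are (x, y) pixel coordinates.
    """
    eye_l, eye_r, mouth = map(np.asarray, (eye_l, eye_r, mouth))
    d_left = np.linalg.norm(mouth - eye_l)    # mouth to left eye
    d_right = np.linalg.norm(mouth - eye_r)   # mouth to right eye
    d_eyes = np.linalg.norm(eye_r - eye_l)    # eye-to-eye baseline
    sides_match = abs(d_left - d_right) / max(d_left, d_right) < tol
    plausible_shape = 0.5 < d_eyes / ((d_left + d_right) / 2) < 1.5
    return sides_match and plausible_shape
```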
APA, Harvard, Vancouver, ISO, and other styles
42

Goren, Deborah. "Quantifying facial expression recognition across viewing conditions /." 2004. http://wwwlib.umi.com/cr/yorku/fullcit?pMQ99314.

Full text
Abstract:
Thesis (M.Sc.)--York University, 2004. Graduate Programme in Biology.
Typescript. Includes bibliographical references (leaves 59-66). Also available online at http://wwwlib.umi.com/cr/yorku/fullcit?pMQ99314
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Shih Chieh, and 王仕傑. "Pose-Variant Face Recognition and Its Application to Human-Robot Interaction." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/63299410428657032826.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electrical and Control Engineering
Academic year 97 (2008–2009)
In this thesis, a pose-variant face recognition system is developed for human-robot interaction. In order to extract facial feature points under different poses, an active appearance model (AAM) is employed to find the positions of the feature points, and an improved Lucas-Kanade algorithm is used to solve the image alignment. After the feature point locations are obtained, the eigenspace of the texture model is reduced in dimension and sent to a back-propagation neural network (BPNN). Using the BPNN, the proposed system recognizes which family member the user is. The proposed pose-variant face recognition system has been implemented on the embedded image system of a pet robot. To test the method, both the UMIST database and a self-built database are used to evaluate the performance of the proposed algorithm. Experimental results show that the average recognition rates on the UMIST database and our lab's self-built database are 91% and 95.56%, respectively. The proposed pose-variant face recognition system is suitable for application to human-robot interaction.
APA, Harvard, Vancouver, ISO, and other styles
44

Hsieh, Chao-Kuei, and 謝兆魁. "Research on Robust 2D Face Recognition under Expression and Pose Variations." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/29837810055342825192.

Full text
Abstract:
Ph.D. dissertation
National Tsing Hua University
Department of Electrical Engineering
Academic year 97 (2008–2009)
Face recognition is one of the most intensively studied topics in computer vision and pattern recognition. Three essential issues must be dealt with in face recognition research, namely pose, illumination, and expression variations. The recognition rate drops considerably when the head pose or illumination variation is too large, or when there is expression on the face. Although much research has focused on overcoming these challenges, little has addressed how to robustly recognize expressive faces with a single training sample per class. In this thesis, we modify the regularization-based optical flow algorithm by imposing constraints on given point correspondences to compute precise pixel displacements and intensity variations. The constrained optical flow can be efficiently computed with a modified ICPCG algorithm. Using the optical flow computed from the input expression-variant face image with respect to a reference neutral face image, we can remove the expression from the face image by elastic image warping and recognize the subject despite the facial expression. Alternatively, the optical flow can be computed in the opposite direction, from the neutral face image to the input expression-variant face image. By combining information from the computed intra-person optical flow and the synthesized face image in a probabilistic framework, an integrated face recognition system is proposed that is robust against facial expressions even with a limited training database. Experimental validation on the Binghamton University 3D Face Expression (BU-3DFE) database shows that the proposed expression normalization algorithm significantly improves the accuracy of face recognition on expression-variant faces. A possible solution to the pose variation problem in face recognition is also presented in this thesis. The ideal solution is to reconstruct a 3D model from the input images and synthesize a virtual image with the corresponding pose, which may be too complex for a real-time application. By formulating this solution as a nonlinear pose normalization problem, we propose an algorithm that integrates a nonlinear kernel function with linear regression, making the solution resemble the ideal one. Discussions and experiments on the CMU PIE database show that the proposed method is robust against pose variations.
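A generic stand-in for the flow-based expression warping (Farneback dense flow here, rather than the thesis's constrained regularization-based flow computed with the modified ICPCG algorithm):

```python
import cv2
import numpy as np

def neutralize(expressive, neutral):
    """Warp an expressive face toward the neutral reference's geometry.

    Both inputs are aligned 8-bit grayscale images of the same size.
    """
    # flow from neutral to expressive: neutral(p) ~ expressive(p + flow(p))
    flow = cv2.calcOpticalFlowFarneback(neutral, expressive, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = neutral.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # sample the expressive image where the flow says each neutral pixel went
    return cv2.remap(expressive, xs + flow[..., 0], ys + flow[..., 1],
                     cv2.INTER_LINEAR)
```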
APA, Harvard, Vancouver, ISO, and other styles
45

Huang, Jia-ji, and 黃嘉吉. "Automatic Face Recognition based on Head Pose Estimation and SIFT Features." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/50374859509522309664.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
Academic year 97 (2008–2009)
Video-based face recognition can be divided into three categories: (1) still-to-still, (2) multiple-stills-to-still, and (3) multiple-stills-to-multiple-stills. In this thesis, we are interested in video-based recognition. Our system is divided into several parts. The first part is the face detection procedure, which detects the face region in each frame of the video. The detected face may exhibit pose variation, so we employ a head pose estimation method as a filtering procedure that selects frontal faces from the video. The third part is the matching procedure: we extract facial features using the Scale-Invariant Feature Transform (SIFT), and each feature of the frames in the probe set is matched against the gallery set. We also exploit spatial information to eliminate false matches. Performance is evaluated on the FRGC, MBGC, and IDIAP datasets.
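The SIFT matching step with Lowe's ratio test can be sketched as below; the spatial-consistency filtering mentioned above would be applied on top of these raw matches:

```python
import cv2

sift = cv2.SIFT_create()          # opencv-python >= 4.4
bf = cv2.BFMatcher()

def match_score(probe_gray, gallery_gray, ratio=0.75):
    """Number of ratio-test-surviving SIFT matches between probe and gallery."""
    _, d1 = sift.detectAndCompute(probe_gray, None)
    _, d2 = sift.detectAndCompute(gallery_gray, None)
    if d1 is None or d2 is None:
        return 0
    pairs = bf.knnMatch(d1, d2, k=2)
    # keep a match only when it is clearly better than the runner-up
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)
```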
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Hong-Wen, and 陳宏文. "Pose and Expression Invariant Face Recognition with One Training Image Per Person." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/42556758739964511010.

Full text
Abstract:
Master's thesis
Asia University
Master's Program, Department of Computer Science and Information Engineering
Academic year 96 (2007–2008)
Face recognition security systems have become important for many applications such as automatic access control and video surveillance. Most face recognition security systems today require a proper frontal view of a person, and they fail if the person to be recognized does not face the camera correctly. In this paper, we present a method for pose- and expression-invariant face recognition using only a single sample image per person. The method uses the similarities of a face image against a set of faces from a prototype set taken at the same pose and expression to establish a pose- and expression-invariant similarity vector, which can be used to compare face images of a person taken in different poses and with different facial expressions. Experimental results indicate that the proposed method achieves a high recognition rate even for large pose and expression variations.
APA, Harvard, Vancouver, ISO, and other styles
47

Lazarus, Toby Ellen. "Changes in face processing across ages : the role of experience /." 2002. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3048398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Chien, Ming-yen, and 簡名彥. "LBP-Based On-line Multi-Pose Face Model Learning and the Application in Real-time Face Tracking and Recognition." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/u9wznr.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Graduate Institute of Automation and Control
Academic year 100 (2011–2012)
Because the human face is not a rigid object, changes in facial expression or pose cause large variations in the image, and there are other disturbances such as varying illumination and partial occlusion. A robust multi-pose measurement model is therefore necessary for stable tracking. In addition, the tracked face can be lost when the occluded region is too large; to recover tracking, specific features need to be learned in advance. However, overcoming the disturbances while constructing a multi-pose specific-feature model is a challenge. Regarding on-line face recognition, the more personal information we collect, the more accurate the result; yet the collection of such information is also affected by tracking instability, so obtaining correct information for on-line face recognition is a problem. In this thesis, we propose an integrated tracking algorithm combining a generic face model and a specific face model. We use the color kernel histogram of the face to assist the integration of the two models, and use the generic face model to help construct the multi-pose specific face model. With the specific face model, even if the tracked target is lost in some situations, the model can help find it again. Because the specific face model is constructed from LBP texture features, it achieves robust tracking, including under partial occlusion, and the learned information of the specific face model can be used for face recognition. In our experiments, the proposed method achieves good tracking results under complex backgrounds, varying illumination, partial occlusion, pose changes, and so on. A target lost during tracking can be recovered correctly as well. For face recognition, the multi-pose specific face model provides sufficient information to achieve an acceptable accuracy rate; across the experiments the accuracy rate is above 70%.
APA, Harvard, Vancouver, ISO, and other styles
49

Lin, Zong-xian, and 林宗賢. "Fast Semi-3D Vertical Pose Recovery for Face Recognition and Its OMAP Embedded System Implementation." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/87878543624234482851.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Department of Electrical Engineering
Academic year 96 (2007–2008)
Face recognition is a key part of the biometric field because it is noninvasive and convenient. Most conventional face recognition systems focus on frontal faces, but face data captured by a camera often contain pose variation, whether horizontal or vertical, which decreases recognition accuracy and reliability. Based on a semi-3D face model, this thesis proposes a simple but practical preprocessing method to recover vertical pose variation from a single 2D model view. The proposed method estimates the angle of the vertical pose variation and thereby recovers the tilted face to a frontal view. Consequently, the recovered face data can be processed accurately and efficiently by the original face recognition system. In the experiments, we adopt the Gabor wavelet transform as the feature extraction core of the face recognition system. The experimental results show that the proposed fast semi-3D vertical pose recovery method significantly raises both the similarity and the precision of the face recognition system.
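A generic Gabor filter-bank feature extractor, a common stand-in for the Gabor wavelet core mentioned above (the scales, orientations, and pooling choice are illustrative):

```python
import cv2
import numpy as np

def gabor_features(gray, scales=(7, 11, 15), n_orient=8):
    """Mean filter-response magnitudes over a bank of Gabor kernels."""
    feats = []
    for ksize in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 4.0,
                                      theta=theta, lambd=ksize / 2.0,
                                      gamma=0.5, psi=0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.append(np.abs(resp).mean())   # one pooled value per filter
    return np.array(feats)
```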
APA, Harvard, Vancouver, ISO, and other styles
50

Yan, Yi-wei, and 嚴逸緯. "Integration of Human Feature Detection and Geometry Analysis for Real-time Face Pose Estimation and Gesture Recognition." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/63197581381191989404.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (M.S./Ph.D. Program)
Academic year 97 (2008–2009)
In recent years, digital products have become more accessible, and the requirement for intelligent human-machine interfaces at different levels has increased gradually, fostering more and more research on human face techniques. Face detection and face recognition technologies are applied in systems such as identification and access-control monitoring, and Human-Computer Interaction (HCI) is more and more common in daily life. Human face pose estimation, a face-related research field, is a popular topic in HCI. In this thesis, we divide human faces into several viewpoint categories according to their 3D poses and propose a system to estimate face pose based on object detection and geometric analysis. The system architecture includes two components: 1) face detection, and 2) face pose estimation. The modular structure is designed not only for performance but also for extensibility. We define 9 poses in this system based on detectors for human features such as the eyes, the head and shoulders, the frontal face, and the profile face, and we define a detector array from these detectors. Thanks to the fast object detection algorithm, the features can be detected with a good detection rate even at the low resolution of 320x240. To improve the detector array, we design a cascade detector array that examines only the regions of interest in the image and can detect the 9 face poses in real time; the cascade structure speeds up the detection system. We also propose a gesture detection system based on Viola and Jones's object detection, combined with image processing, to recognize defined gestures; two gestures are defined for appliance control. In the final chapter of this thesis, we show experimental results on the test videos we recorded, then combine the pose estimation system and the gesture detection system and apply them to appliance control in the NCKU Aspire Home. The proposed system can not only detect the position and pose of the human face effectively in the image but also command appliances. If a pre-training mode is added to the face model, the detection rate can be increased further toward a complete detection approach.
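The detector-array idea can be sketched with the stock OpenCV Haar cascades standing in for the thesis's nine trained detectors (the pose labels and cascade parameters here are illustrative):

```python
import cv2

frontal = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_profileface.xml")

def coarse_pose(gray):
    """Derive a coarse pose label from which detector fires."""
    if len(frontal.detectMultiScale(gray, 1.1, 5)) > 0:
        return "frontal"
    if len(profile.detectMultiScale(gray, 1.1, 5)) > 0:
        return "left-profile"                 # the stock model is left-facing
    if len(profile.detectMultiScale(cv2.flip(gray, 1), 1.1, 5)) > 0:
        return "right-profile"                # mirror the image for the other side
    return "no face"
```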
APA, Harvard, Vancouver, ISO, and other styles
