Dissertations / Theses on the topic 'Human face recognition'

Consult the top 50 dissertations / theses for your research on the topic 'Human face recognition.'

1

Wong, Vincent. "Human face recognition /." Online version of thesis, 1994. http://hdl.handle.net/1850/11882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ener, Emrah. "Recognition Of Human Face Expressions." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607521/index.pdf.

Full text
Abstract:
In this study, a fully automatic and scale-invariant feature extractor that requires neither manual initialization nor special equipment is proposed. Face location and size are extracted using skin segmentation and ellipse fitting. The extracted face region is scaled to a predefined size, and upper and lower facial templates are then used for feature extraction. Template localization and template parameter calculations are carried out using Principal Component Analysis. Changes in facial feature coordinates between the analyzed image and a neutral-expression image are used for expression classification. The performances of different classifiers are evaluated. The performance of the proposed feature extractor is also tested on sample video sequences. Facial features are extracted in the first frame, and a KLT tracker is used for tracking the extracted features. Lost features are detected using face-geometry rules and relocated using the feature extractor. As an alternative to the feature-based technique, an available holistic method that analyses the face without partitioning is implemented. Face images are filtered using Gabor filters tuned to different scales and orientations. The filtered images are combined to form Gabor jets, and the dimensionality of the Gabor jets is reduced using Principal Component Analysis. The performances of different classifiers on low-dimensional Gabor jets are compared. Feature-based and holistic classifier performances are compared using the JAFFE and AF facial expression databases.
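The holistic pipeline in the abstract above (Gabor filters at several scales and orientations, responses stacked into jets, then PCA for dimensionality reduction) can be sketched as follows. This is an illustrative reconstruction, not the thesis code; the kernel parameters, image size and random stand-in data are all assumptions.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma=4.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel with orientation theta and wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_jet(img, scales=(4.0, 8.0), orientations=4):
    """Concatenate filter responses over all scales/orientations into one vector."""
    responses = []
    for lam in scales:
        for k in range(orientations):
            kern = gabor_kernel(15, theta=k * np.pi / orientations, lam=lam)
            # frequency-domain convolution keeps the response the same size as img
            resp = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, img.shape)).real
            responses.append(resp.ravel())
    return np.concatenate(responses)

def pca_project(jets, n_components):
    """Centre the jets and project them onto the top principal components."""
    mean = jets.mean(axis=0)
    _, _, vt = np.linalg.svd(jets - mean, full_matrices=False)
    return (jets - mean) @ vt[:n_components].T

rng = np.random.default_rng(0)
faces = rng.standard_normal((10, 32, 32))       # 10 stand-in "face" images
jets = np.stack([gabor_jet(f) for f in faces])  # 10 jets of length 32*32*8
low = pca_project(jets, n_components=5)         # 10 x 5 after PCA
```

A classifier would then be trained on `low` rather than on the raw high-dimensional jets.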
APA, Harvard, Vancouver, ISO, and other styles
3

Batur, Aziz Umit. "Illumination-robust face recognition." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/15440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zou, Weiwen. "Face recognition from video." HKBU Institutional Repository, 2012. https://repository.hkbu.edu.hk/etd_ra/1431.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, Colin K. "Infrared face recognition." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FLee%5FColin.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2004.
Thesis advisor(s): Monique P. Fargues, Gamani Karunasiri. Includes bibliographical references (p. 135-136). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
6

Gangam, Priyanka Reddy. "Recognizing Face Sketches by Human Volunteers." Youngstown State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1297198615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tan, Teewoon. "HUMAN FACE RECOGNITION BASED ON FRACTAL IMAGE CODING." University of Sydney. Electrical and Information Engineering, 2004. http://hdl.handle.net/2123/586.

Full text
Abstract:
Human face recognition is an important area in the field of biometrics. It has been an active area of research for several decades, but still remains a challenging problem because of the complexity of the human face. In this thesis we describe fully automatic solutions that can locate faces and then perform identification and verification. We present a solution for face localisation using eye locations. We derive an efficient representation for the decision hyperplane of linear and nonlinear Support Vector Machines (SVMs). For this we introduce the novel concept of $\rho$ and $\eta$ prototypes. The standard formulation for the decision hyperplane is reformulated and expressed in terms of the two prototypes. Different kernels are treated separately to achieve further classification efficiency and to facilitate its adaptation to operate with the fast Fourier transform to achieve fast eye detection. Using the eye locations, we extract and normalise the face for size and in-plane rotations. Our method produces a more efficient representation of the SVM decision hyperplane than the well-known reduced set methods. As a result, our eye detection subsystem is faster and more accurate. The use of fractals and fractal image coding for object recognition has been proposed and used by others. Fractal codes have been used as features for recognition, but we need to take into account the distance between codes, and to ensure the continuity of the parameters of the code. We use a method based on fractal image coding for recognition, which we call the Fractal Neighbour Distance (FND). The FND relies on the Euclidean metric and the uniqueness of the attractor of a fractal code. An advantage of using the FND over fractal codes as features is that we do not have to worry about the uniqueness of, and distance between, codes. We only require the uniqueness of the attractor, which is already an implied property of a properly generated fractal code. 
Similar methods to the FND have been proposed by others, but what distinguishes our work from the rest is that we investigate the FND in greater detail and use our findings to improve the recognition rate. Our investigations reveal that the FND has some inherent invariance to translation, scale, rotation and changes to illumination. These invariances are image dependent and are affected by fractal encoding parameters. The parameters that have the greatest effect on recognition accuracy are the contrast scaling factor, luminance shift factor and the type of range block partitioning. The contrast scaling factor affects the convergence and eventual convergence rate of a fractal decoding process. We propose a novel method of controlling the convergence rate by altering the contrast scaling factor in a controlled manner, which has not been possible before. This helped us improve the recognition rate because under certain conditions better results are achievable from using a slower rate of convergence. We also investigate the effects of varying the luminance shift factor, and examine three different types of range block partitioning schemes. They are Quad-tree, HV and uniform partitioning. We performed experiments using various face datasets, and the results show that our method indeed performs better than many accepted methods such as eigenfaces. The experiments also show that the FND-based classifier increases the separation between classes. The standard FND is further improved by incorporating the use of localised weights. A local search algorithm is introduced to find the best matching local feature using this locally weighted FND. The scores from a set of these locally weighted FND operations are then combined to obtain a global score, which is used as a measure of the similarity between two face images. Each local FND operation possesses the distortion invariant properties described above. 
Combined with the search procedure, the method has the potential to be invariant to a larger class of non-linear distortions. We also present a set of locally weighted FNDs that concentrate around the upper part of the face encompassing the eyes and nose. This design was motivated by the fact that the region around the eyes carries more discriminative information. Better performance is achieved by using different sets of weights for identification and verification. For facial verification, performance is further improved by using normalised scores and client-specific thresholding. In this case, our results are competitive with current state-of-the-art methods, and in some cases outperform all those to which they were compared. For facial identification, under some conditions the weighted FND performs better than the standard FND. However, the weighted FND still has its shortcomings on some datasets, where its performance is not much better than the standard FND. To alleviate this problem we introduce a voting scheme that operates with normalised versions of the weighted FND. Although there are no improvements at lower matching ranks using this method, there are significant improvements at larger matching ranks. Our methods offer advantages over some well-accepted approaches such as eigenfaces, neural networks and those that use statistical learning theory. Some of the advantages are: new faces can be enrolled without re-training involving the whole database; faces can be removed from the database without the need for re-training; there are inherent invariances to face distortions; it is relatively simple to implement; and it is not model-based, so there are no model parameters that need to be tweaked.
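As a rough illustration of the Fractal Neighbour Distance idea described above (not the thesis implementation), the sketch below encodes a gallery image as a tiny fractal code, applies one decoding step of that code to a probe, and uses how far the probe moves as the distance; a probe near the code's attractor moves little. The block sizes and the 16x16 stand-in images are arbitrary assumptions.

```python
import numpy as np

RB = 4  # range-block size; domains are 2*RB and downsampled by 2

def downsampled_domains(img):
    """All 2RBx2RB domain blocks (on an RB grid), averaged down to RBxRB."""
    h, w = img.shape
    out = []
    for dy in range(0, h - 2 * RB + 1, RB):
        for dx in range(0, w - 2 * RB + 1, RB):
            d = img[dy:dy + 2 * RB, dx:dx + 2 * RB]
            out.append(d.reshape(RB, 2, RB, 2).mean(axis=(1, 3)))
    return out

def encode(img):
    """For each range block, pick the domain and affine map v -> s*v + o
    (fitted by least squares) that predicts it best."""
    code = []
    doms = downsampled_domains(img)
    for ry in range(0, img.shape[0], RB):
        for rx in range(0, img.shape[1], RB):
            rv = img[ry:ry + RB, rx:rx + RB].ravel()
            best = None
            for i, d in enumerate(doms):
                dv = d.ravel()
                var = np.var(dv)
                s = 0.0 if var == 0 else np.cov(dv, rv, bias=True)[0, 1] / var
                s = np.clip(s, -0.9, 0.9)  # keep the map contractive
                o = rv.mean() - s * dv.mean()
                err = np.sum((s * dv + o - rv) ** 2)
                if best is None or err < best[0]:
                    best = (err, i, s, o)
            code.append(best[1:])
    return code

def decode_step(code, img):
    """Apply the fractal code once, using img's own domain pool."""
    doms = downsampled_domains(img)
    out = np.empty_like(img)
    k = 0
    for ry in range(0, img.shape[0], RB):
        for rx in range(0, img.shape[1], RB):
            i, s, o = code[k]
            out[ry:ry + RB, rx:rx + RB] = s * doms[i] + o
            k += 1
    return out

def fnd(code, probe):
    """Fractal Neighbour Distance: how far one decoding step moves the probe."""
    return float(np.linalg.norm(probe - decode_step(code, probe)))

rng = np.random.default_rng(1)
x, y = np.mgrid[0:16, 0:16]
gallery = (x + y).astype(float)        # smooth stand-in "face"
noise = rng.standard_normal((16, 16))  # unrelated probe
code = encode(gallery)
```

Here `fnd(code, gallery)` is near zero while `fnd(code, noise)` is large, which is the separation the classifier exploits.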
APA, Harvard, Vancouver, ISO, and other styles
8

Tan, Teewoon. "HUMAN FACE RECOGNITION BASED ON FRACTAL IMAGE CODING." Thesis, The University of Sydney, 2003. http://hdl.handle.net/2123/586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tibbalds, Adam Dominic. "Three dimensional human face acquisition for recognition." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.624854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Low, Boon Kee. "Computer extraction of human faces." Thesis, De Montfort University, 1999. http://hdl.handle.net/2086/10668.

Full text
Abstract:
Due to recent advances in visual communication and face recognition technologies, automatic face detection has attracted a great deal of research interest. Being a diverse problem, face detection research has drawn contributions from researchers in various fields of science. This thesis examines the fundamentals of the face detection techniques implemented since the early 1970s. Two groups of techniques are identified based on how they apply a priori face knowledge: feature-based and image-based. One of the problems faced by current feature-based techniques is the lack of cost-effective segmentation algorithms that can deal with issues such as background and illumination variations. As a result, a novel facial feature segmentation algorithm is proposed in this thesis. The algorithm aims to combine spatial and temporal information using low-cost techniques. To achieve this, an existing motion detection technique is analysed and implemented with a novel spatial filter, which is itself shown to be robust for segmenting features under varying illumination conditions. Through spatio-temporal information fusion, the algorithm effectively addresses the background and illumination problems in several head-and-shoulder sequences. Comparisons of the algorithm with existing motion-based and spatial techniques establish the efficacy of the combined approach.
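The spatio-temporal fusion idea above can be sketched minimally: a frame-difference motion mask is combined with a spatial gradient-energy mask, so that static background and untextured regions are both suppressed. This is an assumed toy reconstruction, not the thesis algorithm; the thresholds and synthetic frames are made up.

```python
import numpy as np

def motion_mask(prev, curr, thresh=0.2):
    """Temporal cue: pixels that changed between consecutive frames."""
    return np.abs(curr - prev) > thresh

def spatial_mask(img, thresh=0.2):
    """Spatial cue: gradient magnitude as a cheap edge-energy filter."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy) > thresh

def fuse(prev, curr):
    """Keep only pixels supported by BOTH the temporal and spatial cues."""
    return motion_mask(prev, curr) & spatial_mask(curr)

# toy sequence: a bright square moves one pixel to the right
frame0 = np.zeros((20, 20)); frame0[5:10, 5:10] = 1.0
frame1 = np.zeros((20, 20)); frame1[5:10, 6:11] = 1.0
mask = fuse(frame0, frame1)
```

The fused mask fires only along the moving edges of the square; the static background stays empty.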
APA, Harvard, Vancouver, ISO, and other styles
11

Huang, Jian. "Discriminant analysis algorithms for face recognition." HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Kumar, Sooraj. "Face recognition with variation in pose angle using face graphs /." Online version of thesis, 2009. http://hdl.handle.net/1850/9482.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Kuhn, Lisa Katharina. "Emotion recognition in the human face and voice." Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11216.

Full text
Abstract:
At a perceptual level, faces and voices consist of very different sensory inputs, and therefore information processing from one modality can be independent of information processing from another modality (Adolphs & Tranel, 1999). However, there may also be a shared neural emotion network that processes stimuli independent of modality (Peelen, Atkinson, & Vuilleumier, 2010), or emotions may be processed on a more abstract cognitive level, based on meaning rather than on perceptual signals. This thesis therefore aimed to examine emotion recognition across two separate modalities in a within-subject design, including a cognitive Chapter 1 with 45 British adults, a developmental Chapter 2 with 54 British children, and a cross-cultural Chapter 3 with 98 German and British children and 78 German and British adults. Intensity ratings, choice reaction times, and correlations of confusion analyses of emotions across modalities were analysed throughout. Further, an ERP chapter investigated the time course of emotion recognition across two modalities. Highly correlated rating profiles of emotions in faces and voices were found, which suggests a similarity in emotion recognition across modalities. Emotion recognition in primary-school children improved with age for both modalities, although young children relied mainly on faces. British and German participants showed comparable patterns for rating basic emotions, but subtle differences were also noted, and German participants perceived emotions as less intense than British participants did. Overall, the behavioural results reported in the present thesis are consistent with the idea of a general, more abstract level of emotion processing which may act independently of modality. This could be based, for example, on a shared emotion brain network or some more general, higher-level cognitive processes which are activated across a range of modalities. 
Although emotion recognition abilities are already evident during childhood, this thesis argued for a contribution of ‘nurture’ to emotion mechanisms as recognition was influenced by external factors such as development and culture.
APA, Harvard, Vancouver, ISO, and other styles
14

Eriksson, Anders. "3-D face recognition." Thesis, Stellenbosch : Stellenbosch University, 1999. http://hdl.handle.net/10019.1/51090.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 1999.
ENGLISH ABSTRACT: In recent years, face recognition has been a focus of intensive research but has still not achieved its full potential, mainly due to the limited abilities of existing systems to cope with varying pose and illumination. The most popular techniques to overcome this problem are the use of 3-D models or stereo information, as this provides a system with the necessary information about the human face to ensure good recognition performance on faces with largely varying poses. In this thesis we present a novel approach to view-invariant face recognition that utilizes stereo information extracted from calibrated stereo image pairs. The method is invariant to scaling, rotation and variations in illumination. For each of the training image pairs, a number of facial feature points are located in both images using Gabor wavelets. From this, along with the camera calibration information, a sparse 3-D mesh of the face can be constructed. This mesh is then stored along with the Gabor wavelet coefficients at each feature point, resulting in a model that contains both the geometric information of the face and its texture, described by the wavelet coefficients. Recognition is then conducted by filtering the test image pair with a Gabor filter bank, projecting the stored model's feature points onto the image pairs, and comparing the Gabor coefficients from the filtered image pairs with the ones stored in the model. The fit is optimised by rotating and translating the 3-D mesh. With this method, reliable recognition results were obtained on a database with large variations in pose and illumination.
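One building block of the recognition stage above, projecting the stored 3-D mesh feature points into an image using the camera calibration, can be sketched with a pinhole model. The intrinsics, pose and mesh coordinates below are made-up assumptions for illustration.

```python
import numpy as np

def project(points3d, K, R, t):
    """Project Nx3 world points into pixel coordinates using intrinsics K,
    rotation R and translation t (pinhole camera model)."""
    cam = points3d @ R.T + t          # world frame -> camera frame
    uvw = cam @ K.T                   # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed intrinsics
R = np.eye(3)                            # frontal pose
t = np.array([0.0, 0.0, 100.0])          # mesh 100 units in front of camera
mesh = np.array([[0.0, 0.0, 0.0],        # stand-in nose tip
                 [-30.0, -20.0, -10.0],  # stand-in left-eye point
                 [30.0, -20.0, -10.0]])  # stand-in right-eye point
pixels = project(mesh, K, R, t)
```

During matching, `R` and `t` would be adjusted to optimise the agreement between the Gabor coefficients at the projected points and those stored in the model.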
AFRIKAANSE OPSOMMING (in translation): Although face recognition has been investigated intensively over the past few years, it has not yet reached its full potential. This can mainly be attributed to the fact that current systems cannot adapt to handle different lighting and subject poses. The best-known technique to compensate for this is the use of 3-D models or stereo information, which enables a system to perform accurate face recognition on faces with large pose variance. This work describes a new method for pose-independent face recognition using stereo image pairs. The method is invariant to scaling, rotation and changes in illumination. A number of facial feature points are found in each image pair of the training data using Gabor filters. These points and the camera calibration information are used to construct a 3-D framework of the face. The face model used to classify test images consists of the facial framework and the Gabor filter coefficients at each feature point. A test image pair is classified by filtering the test images with a Gabor filter bank. The stored model feature points are then projected onto the image pair, and the Gabor coefficients of the filtered images are compared with the coefficients stored in the model. The fit is optimised by rotation and translation of the 3-D framework. The study showed that this method provides accurate results for a database with large variance in pose and illumination.
APA, Harvard, Vancouver, ISO, and other styles
15

Pan, Wenbo. "Real-time human face tracking." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ55535.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Wickham, Lee H. V. "Attractiveness and distinctiveness of the human face." Thesis, Lancaster University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Tran, Thao, and Nathalie Tkauc. "Face recognition and speech recognition for access control." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-39776.

Full text
Abstract:
This project is a collaboration with the company JayWay in Halmstad. To enter the office today, employees need a tag key and guests use a doorbell. If someone rings the doorbell, someone inside has to open the door manually, which is considered a disturbance during work time. The purpose of the project is to minimize disturbances in the office; the goal is to develop a system that uses face recognition and speech-to-text to control the lock system for the entrance door. The components used for the project are two Raspberry Pis, a 7-inch LCD touch display, a Raspberry Pi Camera Module V2, an external sound card, a microphone, and a speaker. The whole project was written in Python; Amazon Web Services (AWS) was used for storage and face recognition, while speech-to-text was provided by Google. The system is divided into three functions, for employees, guests, and deliveries. The employee function has two authentication steps: face recognition and a randomly generated code that needs to be confirmed, to avoid biometric spoofing. The guest function uses the speech-to-text service so the guest can state the name of the employee they want to meet, who is then notified. The delivery function informs the people in the office responsible for deliveries by sending them a notification. Testing shows that the system always matches the right person when using face recognition, and indicates what the face recognition threshold can be set to so that only authorized people enter the office. The two-step authentication, face recognition plus the code, makes the system secure and protects it against spoofing; one downside is that the extra step takes time. The speech-to-text is set to Swedish and works quite well for Swedish speakers. However, for a multicultural company it can be hard to use the speech-to-text service. It can also be hard for the service to listen and translate if there is a lot of background noise or if several people speak at the same time.
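The two-step employee flow can be sketched as below: a face match must succeed and a randomly generated code must be confirmed before the door unlocks. The threshold and helper names are assumptions; the project's actual AWS and Google service calls are not reproduced here.

```python
import secrets

FACE_THRESHOLD = 0.8  # assumed similarity threshold for a face match

def unlock_for_employee(face_similarity, confirm_code):
    """Return True only if both authentication steps pass."""
    if face_similarity < FACE_THRESHOLD:
        return False                           # step 1: face not recognised
    code = f"{secrets.randbelow(10_000):04d}"  # step 2: random 4-digit code
    # confirm_code stands in for the code being shown and typed back
    # on the touch display
    return confirm_code(code) == code

# a cooperative "employee" who echoes the code back correctly
opened = unlock_for_employee(0.95, lambda c: c)
denied = unlock_for_employee(0.40, lambda c: c)
```

The second factor defeats biometric spoofing: a photo held to the camera can pass step 1 but cannot answer the freshly generated code.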
APA, Harvard, Vancouver, ISO, and other styles
18

Pang, Meng. "Single sample face recognition under complex environment." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/635.

Full text
Abstract:
Single sample per person face recognition (SSPP FR), i.e., recognizing a person when only a single face image in the biometric enrolment database is available for training, has many attractive real-world applications, such as criminal identification, law enforcement, access control, and video surveillance. This thesis studies two important problems in SSPP FR: 1) SSPP FR with a standard biometric enrolment database (SSPP-se FR), and 2) SSPP FR with a contaminated biometric enrolment database (SSPP-ce FR). SSPP-ce FR is more challenging than SSPP-se FR, since the enrolment samples are collected in more complex environments and can be contaminated by nuisance variations. In this thesis, we propose a patch-based method called robust heterogeneous discriminative analysis (RHDA) to tackle SSPP-se FR, and two generic learning methods, synergistic generic learning (SGL) and iterative dynamic generic learning (IDGL), to tackle SSPP-ce FR. RHDA addresses the limitations of existing patch-based methods and enhances robustness against complex facial variations for SSPP-se FR in two ways. First, for feature extraction, a new graph-based Fisher-like criterion is presented to extract the hidden discriminant information across two heterogeneous adjacency graphs, while improving the discriminative ability of the patch distribution in the underlying subspaces. Second, a joint majority voting strategy is developed that considers both patch-to-patch and patch-to-manifold distances, which generates complementary information and increases error tolerance for identification. SGL is proposed to address the SSPP-ce FR problem. 
Unlike existing generic learning methods based simply on the prototype plus variation (P+V) model, SGL presents a new "learned P + learned V" framework that enables prototype learning and variation dictionary learning to work collaboratively to identify new probe samples. Specifically, SGL learns prototypes for contaminated enrolment samples by preserving their more discriminative parts, while it learns the variation dictionary by extracting the less discriminative intra-personal variants from an auxiliary generic set, based on a linear Fisher information-based feature regrouping (FIFR). IDGL is proposed to address the limitations of SGL and thus better handle the SSPP-ce FR problem. IDGL is also based on the "learned P + learned V" framework. However, rather than using the linear FIFR to recover prototypes for contaminated enrolment samples, IDGL constructs a dynamic label-feedback network to update prototypes iteratively, so that both linear and non-linear variations can be removed. In addition, the supplementary information in the probe set is employed to improve how well the prototypes represent the enrolled persons. Furthermore, IDGL introduces a new "sample-specific" corruption strategy to learn a representative variation dictionary. Comprehensive validations and evaluations are conducted on various benchmark face datasets. The computational complexities of the proposed methods are analyzed, and empirical studies on parameter sensitivities are provided. Experimental results demonstrate the superior performance of the proposed methods for both SSPP-se FR and SSPP-ce FR.
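The prototype-plus-variation (P+V) idea underlying SGL and IDGL can be sketched generically: a probe is explained as one enrolment prototype plus a combination of generic intra-personal variations, and the identity with the smallest residual wins. The dimensions and random data below are illustrative assumptions, not the thesis's learned P and V.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_ids, n_vars = 50, 4, 6
P = rng.standard_normal((dim, n_ids))   # one enrolment prototype per person
V = rng.standard_normal((dim, n_vars))  # generic variation dictionary

def identify(probe):
    """Fit probe = P[:, i]*a + V @ b for each identity i; return the i
    with the smallest least-squares residual."""
    residuals = []
    for i in range(n_ids):
        A = np.column_stack([P[:, i], V])         # dim x (1 + n_vars)
        coef, *_ = np.linalg.lstsq(A, probe, rcond=None)
        residuals.append(np.linalg.norm(A @ coef - probe))
    return int(np.argmin(residuals))

# probe: person 2's prototype corrupted by a generic variation plus noise
probe = 0.9 * P[:, 2] + 0.5 * V[:, 1] + 0.01 * rng.standard_normal(dim)
```

With an SSPP gallery, the shared dictionary `V` is what lets a single enrolment image absorb expression, lighting, or occlusion changes in the probe.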
APA, Harvard, Vancouver, ISO, and other styles
19

Katadound, Sachin. "Face Recognition: Study and Comparison of PCA and EBGM Algorithms." TopSCHOLAR®, 2004. http://digitalcommons.wku.edu/theses/241.

Full text
Abstract:
Face recognition is a complex and difficult process due to factors such as variability of illumination, occlusion, and face-specific characteristics like hair, glasses, and beards, which affect many computer vision problems. A system that offers robust and consistent face recognition results can successfully automate applications such as identification for law enforcement, secure system access, and human-computer interaction. Different methods exist to solve the face recognition problem: principal component analysis, independent component analysis, and linear discriminant analysis are among the statistical techniques commonly used, while genetic algorithms, elastic bunch graph matching, and artificial neural networks are among the other techniques that have been proposed and implemented. The objective of this thesis is to provide insight into the different methods available for face recognition and to explore those that provide an efficient and feasible solution. Factors affecting the results of face recognition, and the preprocessing steps that eliminate such abnormalities, are also discussed briefly. Principal Component Analysis (PCA) has been the most efficient and reliable method known for at least the past eight years; elastic bunch graph matching (EBGM) is one of the promising techniques studied in this thesis. Although the EBGM method took much longer than PCA to train and to generate distance measures for the given gallery images, it produced better cumulative match score (CMS) results, and we therefore recommend a hybrid technique involving the EBGM algorithm.
Other promising techniques that could be explored separately in future work include genetic-algorithm-based methods, mixtures of principal components, and Gabor wavelet techniques.
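As an illustration of the PCA (eigenface) baseline this thesis compares EBGM against, here is a minimal sketch: flatten each gallery image to a vector, learn a subspace from the centered gallery, and match a probe to its nearest gallery neighbour. The function names and toy data are ours, not the thesis's code.

```python
import numpy as np

def pca_train(gallery, n_components):
    """Learn an eigenface subspace from a gallery matrix (one flattened image per row)."""
    mean = gallery.mean(axis=0)
    # SVD of the centered data yields the principal axes directly.
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    return mean, vt[:n_components]          # each row of the basis is one eigenface

def pca_project(images, mean, basis):
    """Project mean-centered images onto the eigenface basis."""
    return (images - mean) @ basis.T

def nearest_match(probe_feat, gallery_feats):
    """Rank-1 match: index of the closest gallery feature in Euclidean distance."""
    return int(np.argmin(np.linalg.norm(gallery_feats - probe_feat, axis=1)))
```

A probe identical to a gallery image projects to the same feature vector and therefore matches itself at rank one; CMS curves of the kind reported in the thesis are built by ranking these distances.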
APA, Harvard, Vancouver, ISO, and other styles
20

Feng, Guo Can. "Face recognition using virtual frontal-view image." HKBU Institutional Repository, 1999. http://repository.hkbu.edu.hk/etd_ra/267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Saleh, Mohamed Ibrahim. "Using Ears for Human Identification." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/33158.

Full text
Abstract:
Biometrics includes the study of automatic methods for distinguishing human beings based on physical or behavioral traits. The problem of finding good biometric features and recognition methods has been researched extensively in recent years. Our research considers the use of ears as a biometric for human recognition; researchers have not considered this biometric as much as others such as fingerprints, irises, and faces. This thesis presents a novel approach to recognizing individuals from their outer ear images through spatial segmentation, an approach that also copes well with occlusion. The study presents several feature extraction techniques based on spatial segmentation of the ear image, as well as a method for classifier fusion. Principal components analysis (PCA) is used in this research for feature extraction and dimensionality reduction, and nearest neighbor classifiers are used for classification. The research also investigates the use of ear images as a supplement to face images in a multimodal biometric system. Our base eigen-ear experiment achieved an 84% rank-one recognition rate, and the segmentation method yielded improvements of up to 94%. Face recognition by itself, using the same approach, gave a 63% rank-one recognition rate, but when complemented with ear images in a multimodal system this improved to a 94% rank-one recognition rate.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
22

Cadavid, Steven. "Human Identification Based on Three-Dimensional Ear and Face Models." Scholarly Repository, 2011. http://scholarlyrepository.miami.edu/oa_dissertations/516.

Full text
Abstract:
We propose three biometric systems for performing 1) multi-modal three-dimensional (3D) ear + two-dimensional (2D) face recognition, 2) 3D face recognition, and 3) hybrid 3D ear recognition combining local and holistic features. For the 3D ear component of the multi-modal system, uncalibrated video sequences are utilized to recover the 3D ear structure of each subject within a database. For a given subject, a series of frames is extracted from a video sequence and the Region-of-Interest (ROI) in each frame is independently reconstructed in 3D using Shape from Shading (SFS). A fidelity measure is then employed to determine the model that most accurately represents the 3D structure of the subject's ear. Shape matching between a probe and gallery ear model is performed using the Iterative Closest Point (ICP) algorithm. For the 2D face component, a set of facial landmarks is extracted from frontal facial images using the Active Shape Model (ASM) technique. Then, the responses of the facial images to a series of Gabor filters at the locations of the facial landmarks are calculated. The Gabor features are stored in the database as the face model for recognition. Match-score level fusion is employed to combine the match scores obtained from both the ear and face modalities. The aim of the proposed system is to demonstrate the superior performance that can be achieved by combining the 3D ear and 2D face modalities over either modality employed independently. For the 3D face recognition system, we employ an Adaboost algorithm to build a classifier based on geodesic distance features. First, a generic face model is finely conformed to each face model contained within a 3D face dataset. Second, the geodesic distances between anatomical point pairs are computed across each conformed generic model using the Fast Marching Method.
The Adaboost algorithm then generates a strong classifier based on a collection of geodesic distances that are most discriminative for face recognition. The identification and verification performances of three Adaboost algorithms, namely the original Adaboost algorithm proposed by Freund and Schapire and two variants, the Gentle and Modest Adaboost algorithms, are compared. For the hybrid 3D ear recognition system, we propose a method to combine local and holistic ear surface features in a computationally efficient manner. The system comprises four primary components, namely 1) ear image segmentation, 2) local feature extraction and matching, 3) holistic feature extraction and matching, and 4) a fusion framework combining local and holistic features at the match score level. For the segmentation component, we employ our method proposed in [111] to localize a rectangular region containing the ear. For the local feature extraction and representation component, we extend the Histogram of Categorized Shapes (HCS) feature descriptor, proposed in [111], to an object-centered 3D shape descriptor, termed Surface Patch Histogram of Indexed Shapes (SPHIS), for surface patch representation and matching. For the holistic matching component, we introduce a voxelization scheme for holistic ear representation from which an efficient, element-wise comparison of gallery-probe model pairs can be made. The match scores obtained from both the local and holistic matching components are fused to generate the final match scores. Experimental results conducted on the University of Notre Dame (UND) collection J2 dataset demonstrate that the proposed approach outperforms state-of-the-art 3D ear biometric systems in both accuracy and efficiency.
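The match-score level fusion the abstract describes can be sketched generically. The scheme below (min-max normalization followed by a weighted sum) is a common choice and an assumption on our part, not necessarily the dissertation's exact fusion rule; the weight `w_ear` is an illustrative parameter.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw match scores to [0, 1]; guards against a constant score vector."""
    scores = np.asarray(scores, dtype=float)
    span = scores.max() - scores.min()
    return np.zeros_like(scores) if span == 0 else (scores - scores.min()) / span

def fuse_scores(ear_scores, face_scores, w_ear=0.5):
    """Weighted-sum fusion of two similarity score vectors (one entry per gallery subject)."""
    return w_ear * min_max_normalize(ear_scores) + (1.0 - w_ear) * min_max_normalize(face_scores)
```

Normalizing before summing matters because the two modalities' matchers (ICP residuals vs. Gabor similarities) produce scores on incomparable scales.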
APA, Harvard, Vancouver, ISO, and other styles
23

Feng, Yicheng. "Template protecting algorithms for face recognition system." HKBU Institutional Repository, 2007. http://repository.hkbu.edu.hk/etd_ra/832.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Anzellotti, Stefano. "The representation of person identity in the human brain." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11397.

Full text
Abstract:
Every day we encounter a variety of people, and we need to recognize their identity to interact with them appropriately. The most common ways to recognize a person's identity include the recognition of a face and of a voice. Recognizing a face or a voice is effortless, but the neural mechanisms that enable us to do so are complex. The face of the same person can look very different depending on the viewpoint, and it can be partly occluded. Analogously, a voice can sound very different when it is saying different words. The neural mechanisms that enable us to recognize a person's identity need to abstract away from stimulus differences that are not relevant for identity recognition. Patient studies indicate that this process is executed with the contribution of multiple brain regions (Meadows, 1974; Tranel et al., 1997). However, the localization accuracy allowed by neuropsychological studies is limited by the lack of control over the location and extent of lesions. Neuroimaging studies identified a set of regions that show stronger responses to faces than to other objects (Kanwisher et al., 1997; Rajimehr et al., 2009), and to voices than to other sounds (Belin et al., 2000); these regions do not necessarily encode information about a person's identity. In this thesis, a set of regions that encode information distinguishing between different face tokens was identified, including ventral stream regions located in occipitotemporal cortex and the anterior temporal lobes, but also parietal regions: the posterior cingulate and the superior IPS. Representations of face identity with invariance across different viewpoints and across different halves of a face were found in the right ATL. However, representations of face identity and of voice identity were not found to overlap in ATL, indicating that in ATL representations of identity are organized by modality. For famous people, multimodal representations of identity were found in association cortex in the posterior STS.
Psychology
APA, Harvard, Vancouver, ISO, and other styles
25

Singh, Richa. "Mitigating the effect of covariates in face recognition." Morgantown, W. Va. : [West Virginia University Libraries], 2008. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5990.

Full text
Abstract:
Thesis (Ph. D.)--West Virginia University, 2008.
Title from document title page. Document formatted into pages; contains xv, 136 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 125-136).
APA, Harvard, Vancouver, ISO, and other styles
26

Xue, Yun. "Non-negative matrix factorization for face recognition." HKBU Institutional Repository, 2007. http://repository.hkbu.edu.hk/etd_ra/815.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Wang, Jin. "An Incremental Multilinear System for Human Face Learning and Recognition." FIU Digital Commons, 2010. http://digitalcommons.fiu.edu/etd/312.

Full text
Abstract:
This dissertation establishes a novel system for human face learning and recognition based on incremental multilinear Principal Component Analysis (PCA). Most existing face recognition systems need training data during the learning process. The system proposed in this dissertation utilizes an unsupervised or weakly supervised learning approach in which the learning phase requires a minimal amount of training data. It also overcomes the inability of traditional systems to adapt during the testing phase, where the decision process for newly acquired images continues to rely on the same old training data set; consequently, when a new training set is to be used, the traditional approach requires that the entire eigensystem be generated again. As a means of speeding up this computational process, the proposed method uses the eigensystem generated from the old training set together with the new images to generate the new eigensystem more effectively, in a so-called incremental learning process. In the empirical evaluation phase, two key factors are essential in evaluating the performance of the proposed method: (1) recognition accuracy and (2) computational complexity. In order to establish the most suitable algorithm for this research, a comparative analysis of the best performing methods was carried out first; its results advocated the initial utilization of multilinear PCA in our research. To address the computational complexity of the subspace update procedure, a novel incremental algorithm was established that combines the traditional sequential Karhunen-Loeve (SKL) algorithm with the newly developed incremental modified fast PCA algorithm. In order to utilize multilinear PCA in the incremental process, a new unfolding method was developed to affix the newly added data at the end of the previous data.
The results of the incremental process based on these two methods bear out these theoretical improvements. Some object tracking results using video images are also provided, as another challenging task, to demonstrate the soundness of this incremental multilinear learning method.
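The core idea behind sequential Karhunen-Loeve updating, merging an existing eigensystem with newly acquired images without revisiting the old images, can be sketched as follows. This is the generic merge formulation (in the style of Ross et al.'s incremental PCA for a single matrix), not the dissertation's multilinear algorithm; all names are ours.

```python
import numpy as np

def batch_pca(data, k):
    """Batch eigensystem: mean, top-k singular values, top-k principal axes."""
    mean = data.mean(axis=0)
    _, s, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, s[:k], vt[:k]

def incremental_update(mean, sing, basis, n_old, new_data, k):
    """Merge an existing eigensystem with new images without the old images:
    stack the compressed old scatter (sing * basis), the centered new data,
    and a mean-correction vector, then take one SVD of the small stack."""
    n_new = len(new_data)
    new_total_mean = (n_old * mean + new_data.sum(axis=0)) / (n_old + n_new)
    mean_shift = np.sqrt(n_old * n_new / (n_old + n_new)) * (mean - new_data.mean(axis=0))
    stacked = np.vstack([sing[:, None] * basis,
                         new_data - new_data.mean(axis=0),
                         mean_shift])
    _, s, vt = np.linalg.svd(stacked, full_matrices=False)
    return new_total_mean, s[:k], vt[:k]
```

When `k` retains the full rank of the old data, the update reproduces the batch eigensystem exactly; with truncation it becomes the usual SKL approximation, which is what makes the update cheap.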
APA, Harvard, Vancouver, ISO, and other styles
28

Dagnes, Nicole. "3D human face analysis for recognition applications and motion capture." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2542.

Full text
Abstract:
This thesis is proposed as a geometrical study of the 3D facial surface, whose aim is to provide a set of entities, drawn from the context of differential geometry, to be used as facial descriptors in face analysis applications such as face recognition and facial expression recognition. Indeed, although each face is unique, all faces are similar and their morphological characteristics are the same for all individuals. Consequently, extracting the most appropriate facial features is essential for face analysis. All the facial features proposed in this study are based solely on the geometric properties of the facial surface. The final objective of this research is to demonstrate that differential geometry is a complete tool for face analysis and that geometric features are suitable for describing and comparing faces and, in general, for extracting information relevant to facial analysis in the various fields of application. Finally, this work also focuses on the analysis of musculoskeletal disorders by proposing an objective quantification of facial movements to assist maxillofacial surgery and the rehabilitation of facial movements. This research work explores the 3D motion capture system, adopting the Technology, Sport and Health platform located in the Innovation Centre of the Université de Technologie de Compiègne, within the Biomechanics and Bioengineering Laboratory (BMBI)
This thesis is intended as a geometrical study of the three-dimensional facial surface, whose aim is to provide an application framework of entities coming from the differential geometry context to use as facial descriptors in face analysis applications, such as face recognition (FR) and facial expression recognition (FER). Indeed, although every face is unique, all faces are similar and their morphological features are the same for all mankind. Hence, extracting suitable features is primary for face analysis. All the facial features proposed in this study are based only on the geometrical properties of the facial surface. These geometrical descriptors and the related entities proposed have then been applied to the description of the facial surface in pattern recognition contexts. Indeed, the final goal of this research is to prove that differential geometry is a comprehensive tool oriented to face analysis and that geometrical features are suitable to describe and compare faces and, generally, to extract relevant information for human face analysis in different practical application fields. Finally, since in the last decades face analysis has also gained great attention for clinical applications, this work focuses on musculoskeletal disorder analysis by proposing an objective quantification of facial movements for helping maxillofacial surgery and facial motion rehabilitation. At this time, different methods are employed for evaluating facial muscle function. This research work investigates the 3D motion capture system, adopting the Technology, Sport and Health platform, located in the Innovation Centre of the University of Technology of Compiègne, in the Biomechanics and Bioengineering Laboratory (BMBI)
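One standard differential-geometry descriptor used in 3D face analysis of this kind is the shape index, computed pointwise from the two principal curvatures of the surface. The sketch below follows one common convention (+1 for a dome-like cap, -1 for a cup, 0 for a saddle, with k1 >= k2) and is offered only as a generic illustration, not as this thesis's specific feature set.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index in [-1, 1] from principal curvature arrays with k1 >= k2.
    Umbilic points (k1 == k2), where the index is undefined, are returned as NaN."""
    k1, k2 = np.asarray(k1, dtype=float), np.asarray(k2, dtype=float)
    out = np.full(k1.shape, np.nan)
    mask = k1 != k2
    out[mask] = (2.0 / np.pi) * np.arctan((k1[mask] + k2[mask]) / (k1[mask] - k2[mask]))
    return out
```

Applied over a facial mesh, a histogram of these values gives a pose-invariant summary of local surface shape, which is one way geometric quantities of this sort serve as face descriptors.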
APA, Harvard, Vancouver, ISO, and other styles
29

Chen, Shaokang. "Robust discriminative principal component analysis for face recognition /." [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18934.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Akinbola, Akintunde A. "Estimation of image quality factors for face recognition." Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4308.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2005.
Title from document title page. Document formatted into pages; contains vi, 56 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 52-56).
APA, Harvard, Vancouver, ISO, and other styles
31

Ballot, Johan Stephen Simeon. "Face recognition using Hidden Markov Models." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/2577.

Full text
Abstract:
This thesis relates to the design, implementation and evaluation of statistical face recognition techniques. In particular, the use of Hidden Markov Models in various forms is investigated as a recognition tool and critically evaluated. Current face recognition techniques are very dependent on issues like background noise, lighting and the position of key features (i.e., the eyes, lips, etc.). An approach that uses an embedded Hidden Markov Model along with spectral-domain feature extraction techniques shows that these dependencies may be lessened while high recognition rates are maintained.
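Recognition with HMMs of the kind evaluated here amounts to scoring an observation sequence under each enrolled person's model and picking the highest likelihood. Below is a minimal discrete-observation sketch (the forward algorithm with per-step scaling); the toy two-state parameters in the usage example are our own, not taken from the thesis.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm and per-step scaling for stability."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]   # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

def classify(obs, models):
    """Assign the sequence to the identity whose HMM scores it highest."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

An embedded HMM, as used in the thesis, extends this idea with a 2D super-state structure over face regions, but the likelihood-competition principle at recognition time is the same.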
APA, Harvard, Vancouver, ISO, and other styles
32

Aljarrah, Inad A. "Color face recognition by auto-regressive moving averaging." Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174410880.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Zhao, Zhenchun. "Design of a computer human face recognition system using fuzzy logic." Thesis, University of Huddersfield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Phung, Son Lam. "Automatic human face detection in color images." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2003. https://ro.ecu.edu.au/theses/1309.

Full text
Abstract:
Automatic human face detection in digital images has been an active area of research over the past decade. Among its numerous applications, face detection plays a key role in face recognition systems for biometric personal identification, face tracking for intelligent human-computer interfaces (HCI), and face segmentation for object-based video coding. Despite significant progress in the field in recent years, detecting human faces in unconstrained and complex images remains a challenging problem in computer vision, and an automatic system that possesses a capability similar to the human vision system in detecting faces is still a far-reaching goal. This thesis focuses on the problem of detecting human faces in color images. Although many early face detection algorithms were designed to work on gray-scale images, strong evidence suggests that face detection can be done more efficiently by taking into account the color characteristics of the human face. In this thesis, we present a complete and systematic face detection algorithm that combines the strengths of both analytic and holistic approaches to face detection. The algorithm is developed to detect quasi-frontal faces in complex color images. This face class, which represents typical detection scenarios in most practical applications of face detection, covers a wide range of face poses including all in-plane rotations and some out-of-plane rotations. The algorithm is organized into a number of cascading stages including skin region segmentation, face candidate selection, and face verification. In each of these stages, various visual cues are utilized to narrow the search space for faces. In this thesis, we present a comprehensive analysis of skin detection using color pixel classification, and the effects of factors such as the color space and the color classification algorithm on segmentation performance.
We also propose a novel and efficient face candidate selection technique based on color-based eye region detection and a geometric face model. This candidate selection technique eliminates the computation-intensive step of window scanning often employed in holistic face detection, and simplifies the task of detecting rotated faces. Besides various heuristic techniques for face candidate verification, we develop face/nonface classifiers based on the naive Bayesian model, and investigate three feature extraction schemes, namely intensity, projection onto a face subspace, and edge-based features. Techniques for improving face/nonface classification are also proposed, including bootstrapping, classifier combination and the use of contextual information. On a test set of face and nonface patterns, the combination of three Bayesian classifiers has a correct detection rate of 98.6% at a false positive rate of 10%. Extensive testing has shown that the proposed face detector achieves good performance in terms of both detection rate and alignment between the detected faces and the true faces. On a test set of 200 images containing 231 faces taken from the ECU face detection database, the proposed face detector has a correct detection rate of 90.04% and makes 10 false detections. We have found that the proposed face detector is more robust in detecting in-plane rotated faces than existing face detectors.
APA, Harvard, Vancouver, ISO, and other styles
35

Masip, David. "Feature extraction in face recognition on the use of internal and external features." Saarbrücken VDM Verlag Dr. Müller, 2005. http://d-nb.info/989265706/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Louw, Lloyd A. B. "Automated face detection and recognition for a login system." Thesis, Link to the online version, 2007. http://hdl.handle.net/10019/438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

El, Seuofi Sherif M. "Performance Evaluation of Face Recognition Using Frames of Ten Pose Angles." Youngstown State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1198184813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Shen, Chenyang. "L1-norm local preserving projection and its application." HKBU Institutional Repository, 2012. https://repository.hkbu.edu.hk/etd_ra/1388.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Domboulas, Dimitrios I. "Infrared imaging face recognition using nonlinear kernel-based classifiers." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Dec%5FDomboulas.pdf.

Full text
Abstract:
Thesis (Electrical Engineer and M.S. in Electrical Engineering)--Naval Postgraduate School, Dec. 2004.
Thesis Advisor(s): Monique P. Fargues. Includes bibliographical references (p. 107-109). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
40

Feng, Yicheng. "Discriminability and security of binary template in face recognition systems." HKBU Institutional Repository, 2012. https://repository.hkbu.edu.hk/etd_ra/1455.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Toure, Zikra. "Human-Machine Interface Using Facial Gesture Recognition." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1062841/.

Full text
Abstract:
This Master's thesis proposes a human-computer interface for individuals with limited hand movements that incorporates facial gestures as a means of communication. The system recognizes faces and extracts facial gestures, mapping them into Morse code that is translated into English in real time. The system is implemented on a MacBook computer using Python, the OpenCV library, and the Dlib library. The system was tested by six students. Five of the testers were not familiar with Morse code and performed the experiments in an average of 90 seconds; one tester was familiar with Morse code and performed the experiment in 53 seconds. It is concluded that errors occurred due to variations in the testers' features, lighting conditions, and unfamiliarity with the system. Implementing auto-correction and auto-prediction would decrease typing time considerably and make the system more robust.
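The gesture-to-Morse-to-text step can be sketched in a few lines. How gestures become dots and dashes (e.g. a short blink as `.`, a long one as `-`) is our assumption about the front end; only the Morse table itself is standard.

```python
# Hypothetical front-end: a short facial gesture emits '.', a long one emits '-',
# and a pause separates letters. Only the decoding of the resulting tokens is shown.
MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y", "--..": "Z",
}

def decode_gestures(gesture_symbols):
    """Translate space-separated Morse letters (built from gestures) into text."""
    return "".join(MORSE_TO_CHAR.get(tok, "?") for tok in gesture_symbols.split())
```

An auto-correction layer of the kind the thesis suggests would sit after this decoder, rescoring `?` tokens and near-miss letters against a dictionary.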
APA, Harvard, Vancouver, ISO, and other styles
42

Zone, Anthony J. "Face Composite Recognition: Multiple Artists, Large Scale Human Performance and Multivariate Analysis." Youngstown State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1279908902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Canavan, Shaun. "Face recognition by multi-frame fusion of rotating heads in videos /." Connect to resource online, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1210446052.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Mian, Ajmal Saeed. "Representations and matching techniques for 3D free-form object and face recognition /." Connect to this title, 2006. http://theses.library.uwa.edu.au/adt-WU2007.0046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Navarathna, Rajitha Dharshana Bandara. "Robust recognition of human behaviour in challenging environments." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/66235/1/Rajitha%20Dharshana%20Bandara_Navarathna_Thesis.pdf.

Full text
Abstract:
Novel techniques have been developed for the automatic recognition of human behaviour in challenging environments using information from visual and infra-red camera feeds. The techniques have been applied to two interesting scenarios: recognising drivers' speech using lip movements, and recognising audience behaviour while watching a movie using facial features and body movements. The outcomes of the research in these two areas will be useful in improving the performance of voice recognition in automobiles for voice-based control, and in obtaining accurate movie interest ratings based on live audience response analysis.
APA, Harvard, Vancouver, ISO, and other styles
46

Zhan, Ce. "Facial expression recognition for multi-player on-line games." School of Computer Science and Software Engineering, 2008. http://ro.uow.edu.au/theses/100.

Full text
Abstract:
Multi-player on-line games (MOGs) have become increasingly popular because of the opportunity they provide for collaboration, communication and interaction. However, compared with ordinary human communication, MOGs still have several limitations, especially in communication using facial expressions. Although detailed facial animation has already been achieved in a number of MOGs, players have to use text commands to control the expressions of avatars. This thesis proposes an automatic expression recognition system that can be integrated into a MOG to control the facial expressions of avatars. To meet the specific requirements of such a system, a number of algorithms are studied, tailored and extended. In particular, the Viola-Jones face detection method is modified in several aspects to detect small-scale key facial components with wide shape variations. In addition, a new coarse-to-fine method is proposed for extracting 20 facial landmarks from image sequences. The proposed system has been evaluated on a number of databases different from the training database, and achieved an 83% recognition rate for four emotional-state expressions. During real-time testing, the system achieved an average frame rate of 13 fps for 320 x 240 images on a PC with a 2.80 GHz Intel Pentium. Testing results have shown that the system has a practical range of working distances (from user to camera), and is robust against variations in lighting and backgrounds.
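A minimal way to turn landmark features of the kind extracted above into expression labels is a nearest-centroid classifier over landmark-displacement vectors (landmark positions relative to a neutral frame). This is an illustrative stand-in for mapping landmarks to avatar expressions, not the classifier actually used in the thesis.

```python
import numpy as np

def train_centroids(displacements, labels):
    """Mean landmark-displacement vector (relative to the neutral frame) per class."""
    return {c: np.mean([d for d, l in zip(displacements, labels) if l == c], axis=0)
            for c in sorted(set(labels))}

def predict_expression(displacement, centroids):
    """Label of the nearest class centroid in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(displacement - centroids[c]))
```

With 20 landmarks the displacement vectors would be 40-dimensional; the two-dimensional toy vectors below only demonstrate the mechanics.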
APA, Harvard, Vancouver, ISO, and other styles
47

Arachchige, Somi Ruwan Budhagoda. "Face recognition in low resolution video sequences using super resolution /." Online version of thesis, 2008. http://hdl.handle.net/1850/7770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Sun, Triu Chiang, and 孫自強. "HUMAN FACE RECOGNITION." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/91139786195229458584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lin, Chih-Ho, and 林志和. "Human Face Recognition System." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/04537729754713594129.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Institute of Computer and Communication Engineering
91
This study proposes a face recognition system. Face recognition systems have been developed for many years and many different methods have been proposed, yet recognition results have remained imperfect because of factors that are hard to overcome, such as facial expression, varying lighting, differing capture-device quality, and facial feature extraction. This study develops a recognition system for color scenery images. The system is divided into three parts. The first part is face detection. Because the HSI color system is not sensitive to intensity variations, the RGB values of each pixel in the input image are first transformed into HSI color space; every pixel is thereby mapped onto one point in the HSI plane, and a pixel is labeled as a skin pixel if its point lies within a specified zone, obtained from the statistical range of skin color in HSI space. ISODATA (Iterative Self-Organizing Data Analysis Technique Algorithm) is then applied to separate the skin pixels into several clusters, and the locations of facial organs are exploited to decide whether each cluster is a human face. The second part is feature segmentation. To distinguish between different faces, each face's unique features must be found, so the image is segmented to locate the invariant facial features (eyes, nose, lips). In this thesis, eigenspace projection is applied to project the eye, nose, lip and face images onto an eigenspace, yielding the feature values. The third part is the verification system, implemented with a plastic perceptron neural network. This network is especially suitable for classification and can process different classes in a parallel and distributed manner.
The network does not require complete retraining when patterns are replaced or new ones are added, which gives the plastic perceptron neural network more flexibility than the conventional back-propagation neural network.
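The first-stage RGB-to-HSI conversion and skin-zone test can be sketched as follows. The conversion is the standard geometric HSI formula; the numeric hue and saturation ranges in `in_skin_zone` are illustrative placeholders, whereas the thesis derives its zone from skin-color statistics.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert an RGB pixel (components 0-255) to HSI: H in degrees, S and I in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                       # hue lives in [0, 360); reflect the lower half-plane
        h = 360.0 - h
    return h, s, i

def in_skin_zone(h, s, i, h_range=(0.0, 50.0), s_range=(0.1, 0.6)):
    """Label a pixel as skin if its (H, S) falls inside the zone (illustrative bounds)."""
    return h_range[0] <= h <= h_range[1] and s_range[0] <= s <= s_range[1]
```

Thresholding on H and S but not on I is what makes the test tolerant of the intensity variations the abstract mentions.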
APA, Harvard, Vancouver, ISO, and other styles
50

HSUEH, Chieh-Jen, and 薛傑仁. "Biometrics on Human Face Recognition." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/76446999423396054679.

Full text
Abstract:
Master's thesis
Asia University
Master's Program, Department of Bioinformatics
98
This thesis reviews and compares the pros and cons of several popular theories and methods for face recognition systems, such as PCA, ICA, LDA, HMM, and SVM. It also presents our study "Face Recognition Base on Gini Features and K-L Transform", published at the ITIA 2010 conference. That study improves the performance of the Karhunen-Loève transform (KLT) in biometric face recognition: a measure of non-uniformity, called the Gini index, is used to extract the critical blocks of a human face so that the required computation can be reduced while maintaining satisfactory recognition accuracy. According to our experimental results, this approach can accelerate the face recognition process roughly two-fold with similar accuracy.
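The block-selection idea, scoring image blocks by a Gini-style measure of non-uniformity and keeping only the most informative ones for the subsequent K-L transform, can be sketched generically. The bin count, block size, and this exact Gini-impurity formulation are our assumptions for illustration, not the published method's parameters.

```python
import numpy as np

def gini_index(block, bins=16):
    """Gini impurity of a block's intensity histogram: 0 for a flat (single-value)
    block, approaching 1 - 1/bins for a block mixing many intensities."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    return 1.0 - float(np.sum(p ** 2))

def select_blocks(image, block=8, keep=4):
    """Return the (row, col) offsets of the top-`keep` blocks ranked by Gini index."""
    h, w = image.shape
    scores = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            scores.append((gini_index(image[r:r + block, c:c + block]), (r, c)))
    scores.sort(reverse=True)
    return [pos for _, pos in scores[:keep]]
```

Discarding low-score (near-uniform) blocks before the KLT shrinks the input dimensionality, which is the source of the roughly two-fold speed-up the abstract reports.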
APA, Harvard, Vancouver, ISO, and other styles