Dissertations / Theses on the topic 'Face recognition'


Consult the top 50 dissertations / theses for your research on the topic 'Face recognition.'


1

Hanafi, Marsyita. "Face recognition from face signatures." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/10566.

Full text
Abstract:
This thesis presents techniques for detecting and recognizing faces under various imaging conditions. In particular, it presents a system that combines several methods for face detection and recognition. Initially, the faces in the images are located using the Viola-Jones method and each detected face is represented by a subimage. Then, an eye and mouth detection method is used to identify the coordinates of the eyes and mouth, which are then used to update the subimages so that the subimages contain only the face area. After that, a method based on Bayesian estimation and a fuzzy membership function is used to identify the actual faces in both subimages (obtained from the first and second steps). Then, a face similarity measure is used to locate the oval shape of a face in both subimages. The similarity measures between the two faces are compared and the one with the highest value is selected. In the recognition task, the Trace transform method is used to extract the face signatures from the oval face region. These signatures are evaluated using the BANCA and FERET databases in authentication tasks. Here, the signatures with discriminating ability are selected and used to construct a classifier. However, the classifier was shown to be weak. This problem is tackled by constructing a boosted ensemble of classifiers developed by a Gentle AdaBoost algorithm. The proposed methodologies are evaluated using a family album database.
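The boosting step described above can be sketched in a few lines. This is a hypothetical Gentle AdaBoost illustration on toy 1-D data (regression stumps, weighted least squares, multiplicative weight updates), not the thesis implementation:

```python
# Gentle AdaBoost sketch: each round fits a regression stump to weighted data,
# adds it to the ensemble, and reweights examples by exp(-y * f(x)).
import math

def fit_stump(xs, ys, ws):
    """Weighted least-squares regression stump: piecewise constant in two regions."""
    best = None
    for t in sorted(set(xs)):
        left = [i for i, x in enumerate(xs) if x < t]
        right = [i for i, x in enumerate(xs) if x >= t]
        def wmean(idx):
            sw = sum(ws[i] for i in idx)
            return sum(ws[i] * ys[i] for i in idx) / sw if sw > 0 else 0.0
        a, b = wmean(left), wmean(right)
        err = sum(ws[i] * (ys[i] - (a if xs[i] < t else b)) ** 2
                  for i in range(len(xs)))
        if best is None or err < best[0]:
            best = (err, t, a, b)
    _, t, a, b = best
    return lambda x, t=t, a=a, b=b: a if x < t else b

def gentle_adaboost(xs, ys, rounds=5):
    n = len(xs)
    ws = [1.0 / n] * n
    stumps = []
    for _ in range(rounds):
        f = fit_stump(xs, ys, ws)
        stumps.append(f)
        ws = [w * math.exp(-y * f(x)) for w, x, y in zip(ws, xs, ys)]
        z = sum(ws)
        ws = [w / z for w in ws]
    return lambda x: sum(f(x) for f in stumps)

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [-1, -1, -1, 1, 1, 1]
F = gentle_adaboost(xs, ys)
preds = [1 if F(x) >= 0 else -1 for x in xs]
accuracy = sum(p == y for p, y in zip(preds, ys)) / len(ys)
```

On this separable toy set a single stump already classifies perfectly; the point is the weight-update loop that lets many weak signature classifiers combine into a strong one.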
APA, Harvard, Vancouver, ISO, and other styles
2

Zhou, Shaohua. "Unconstrained face recognition." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1800.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2004.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
3

Ustun, Bulend. "3D Face Recognition." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12609075/index.pdf.

Full text
Abstract:
In this thesis, the effect of the registration process is evaluated, along with several methods proposed for 3D face recognition. Input faces are in point-cloud form and contain noise due to the nature of scanner technologies. These inputs are noise-filtered and smoothed before the registration step. In order to register the faces, an average face model is obtained from all the images in the database. All the faces are registered to the average model and stored in the database. Registration is performed using a rigid registration technique called ICP (Iterative Closest Point), probably the most popular technique for registering two 3D shapes. Furthermore, some variants of ICP are implemented and evaluated in terms of accuracy, time, and number of iterations needed for convergence. At the recognition step, several recognition methods, namely Eigenface, Fisherface, NMF (Nonnegative Matrix Factorization) and ICA (Independent Component Analysis), are tested on registered and non-registered faces and their performances are evaluated.
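The rigid-ICP loop named above alternates two steps: nearest-neighbour matching, then a closed-form (Kabsch/SVD) rigid alignment. A minimal sketch, not the thesis code, using an illustrative toy point cloud:

```python
# One rigid-ICP loop: match each source point to its nearest target point,
# then solve for the best rotation R and translation t in closed form.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t mapping src onto dst (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=10):
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences (fine for a sketch)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# a well-separated toy "landmark" cloud and a slightly rotated, shifted copy
dst = np.array([[0., 0, 0], [2, 0, 0], [0, 2, 0],
                [0, 0, 2], [2, 2, 0], [2, 0, 2]])
ang = np.deg2rad(5.0)
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0, 0, 1]])
src = dst @ Rz.T + np.array([0.1, -0.2, 0.05])

aligned = icp(src, dst)
err = float(np.abs(aligned - dst).max())
```

Because the initial misalignment is small relative to point spacing, the first nearest-neighbour matching is already correct and the alignment converges essentially in one step; the ICP variants evaluated in the thesis differ mainly in how matching and rejection are done.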
4

Wong, Vincent. "Human face recognition /." Online version of thesis, 1994. http://hdl.handle.net/1850/11882.

Full text
5

Lee, Colin K. "Infrared face recognition." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FLee%5FColin.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2004.
Thesis advisor(s): Monique P. Fargues, Gamani Karunasiri. Includes bibliographical references (p. 135-136). Also available online.
6

Qu, Yawe, and Mingxi Yang. "Online Face Recognition Game." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-248.

Full text
Abstract:

The purpose of this project is to test and improve people's ability to recognise faces. Although there are tests on the internet with the same purpose, the problem is that people may get bored and give up before finishing them; consequently they may benefit neither from the testing nor from the training. To solve this problem, face recognition and an online game are combined in this project. The game is meant to provide entertainment while people play, so that more people take the test and improve their face recognition abilities.

In the game design, the game takes place in an imaginary face recognition lab. The player takes the main role in the game and is asked to solve a number of problems. Several scenarios await the player, most of which require face recognition skills. At the end, the player receives an evaluation of his or her face recognition skills.

7

Batur, Aziz Umit. "Illumination-robust face recognition." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/15440.

Full text
8

Graham, Daniel B. "Pose-varying face recognition." Thesis, University of Manchester, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488288.

Full text
9

Zhou, Mian. "Gabor-boosting face recognition." Thesis, University of Reading, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494814.

Full text
Abstract:
In the past decade, automatic face recognition has received much attention from both the commercial and public sectors as an efficient and resilient recognition technique in biometrics. This thesis describes a highly accurate appearance-based algorithm for grey-scale front-view face recognition - Gabor-boosting face recognition - drawing on computer vision, pattern recognition, image processing, and machine learning. The strong performance of the Gabor-boosting face recognition algorithm comes from combining three leading-edge techniques - the Gabor wavelet transform, AdaBoost, and the Support Vector Machine (SVM). The Gabor wavelet transform is used to extract features which describe texture variations of human faces. The AdaBoost algorithm is used to select the most significant features, which distinguish different individuals. The SVM constructs a classifier with high recognition accuracy. Within the AdaBoost algorithm, a novel weak learner - Potsu - is designed. The Potsu weak learner is fast due to its simple perceptron prototype, and accurate due to the large number of training examples available. More importantly, the Potsu weak learner is the only weak learner which satisfies the requirements of AdaBoost. The Potsu weak learners also demonstrate superior performance over other weak learners, such as FLD. The Gabor-boosting face recognition algorithm is extended into the multi-class classification domain, in which a multi-class weak learner called mPotsu is developed. The experiments show that performance is improved by applying loosely controlled face recognition in the multi-class classification.
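The Gabor wavelet features mentioned above come from filters tuned to a spatial frequency and orientation; a kernel responds strongly to texture at its own orientation and weakly to the orthogonal one. A small illustrative sketch (not the thesis code), using a synthetic grating:

```python
# Build a Gabor kernel (Gaussian envelope times an oriented cosine carrier)
# and compare its response to a matching vs. an orthogonal grating.
import numpy as np

def gabor_kernel(freq, theta, sigma=3.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

freq = 0.25                                          # 4-pixel period
# grating that varies along x (vertical stripes)
grating = np.cos(2 * np.pi * freq * np.arange(21))[None, :] * np.ones((21, 1))

k_match = gabor_kernel(freq, theta=0.0)              # tuned along the grating
k_ortho = gabor_kernel(freq, theta=np.pi / 2)        # tuned across it

resp_match = float(np.abs((k_match * grating).sum()))
resp_ortho = float(np.abs((k_ortho * grating).sum()))
```

A bank of such kernels at several frequencies and orientations yields the feature pool from which AdaBoost selects the most discriminative responses.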
10

Abi, Antoun Ramzi. "Pose-Tolerant Face Recognition." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/244.

Full text
Abstract:
Automatic face recognition performance has been steadily improving over years of active research; however, it remains significantly affected by a number of external factors such as illumination, pose, expression, occlusion and resolution that can severely alter the appearance of a face and negatively impact recognition scores. The focus of this thesis is the pose problem, which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image of a random pose is matched against a set of “mugshot-style” near-frontal gallery images. We argue that in this scenario, a 3D face-modeling geometric approach is essential in tackling the pose problem. For this purpose, we utilize a recent technique for efficient synthesis of 3D face models called 3D General Elastic Model (3DGEM). It solved the pose synthesis problem from a single frontal image, but could not solve the pose correction problem because of missing face data due to self-occlusion. In this thesis, we extend the formulation of 3DGEM and cast this task as an occlusion-removal problem. We propose a sparse feature extraction approach using subspace-modeling and ℓ1-minimization to find a representation of the geometrically 3D-corrected faces that we show is stable under varying pose and resolution. We then show how pose-tolerance can be achieved either in the feature space or in the reconstructed image space. We present two different algorithms that capitalize on the robustness of the sparse feature extracted from the pose-corrected faces to achieve high matching rates that are minimally impacted by the variation in pose. We also demonstrate high verification rates upon matching non-frontal to non-frontal faces. Furthermore, we show that our pose-correction framework lends itself very conveniently to the task of super-resolution.
By building a multiresolution subspace, we apply the same sparse feature extraction technique to achieve single-image superresolution with high magnification rates. We discuss how our layered framework can potentially solve both pose and resolution problems in a unified and systematic approach. The modularity of our framework also keeps it flexible, upgradable and expandable to handle other external factors such as illumination or expressions. We run extensive tests on the MPIE dataset to validate our findings.
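The ℓ1-minimization behind the sparse features above solves min 0.5·||Ax − b||² + λ||x||₁. A minimal ISTA sketch (hypothetical illustration, not the dissertation's solver); with an orthonormal dictionary the minimiser is simply the soft-thresholded projection, which makes the result easy to check:

```python
# ISTA for l1-regularised least squares: gradient step on the quadratic term,
# then soft-thresholding, which drives small coefficients exactly to zero.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, steps=50):
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

A = np.eye(5)                          # orthonormal "dictionary" for the sketch
b = np.array([3.0, 0.1, 0.0, -2.0, 0.05])
x = ista(A, b, lam=0.5)                # sparse code: small entries become 0
```

In the thesis setting, A would be a learned subspace dictionary and b a pose-corrected face; the sparsity of x is what makes the representation stable under pose and resolution changes.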
11

Lincoln, Michael C. "Pose-independent face recognition." Thesis, University of Essex, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250063.

Full text
12

Eriksson, Anders. "3-D face recognition." Thesis, Stellenbosch : Stellenbosch University, 1999. http://hdl.handle.net/10019.1/51090.

Full text
Abstract:
Thesis (MEng) -- Stellenbosch University, 1999.
ENGLISH ABSTRACT: In recent years face recognition has been a focus of intensive research but has still not achieved its full potential, mainly due to the limited ability of existing systems to cope with varying pose and illumination. The most popular techniques to overcome this problem are the use of 3-D models or stereo information, as this provides a system with the necessary information about the human face to ensure good recognition performance on faces with largely varying poses. In this thesis we present a novel approach to view-invariant face recognition that utilizes stereo information extracted from calibrated stereo image pairs. The method is invariant to scaling, rotation and variations in illumination. For each of the training image pairs a number of facial feature points are located in both images using Gabor wavelets. From this, along with the camera calibration information, a sparse 3-D mesh of the face can be constructed. This mesh is then stored along with the Gabor wavelet coefficients at each feature point, resulting in a model that contains both the geometric information of the face and its texture, described by the wavelet coefficients. The recognition is then conducted by filtering the test image pair with a Gabor filter bank, projecting the stored model's feature points onto the image pairs and comparing the Gabor coefficients from the filtered image pairs with the ones stored in the model. The fit is optimised by rotating and translating the 3-D mesh. With this method reliable recognition results were obtained on a database with large variations in pose and illumination.
AFRIKAANSE OPSOMMING (translated): Although face recognition has been investigated intensively in recent years, it has not yet reached its full potential. This can mainly be attributed to the fact that current systems cannot adapt to handle different lighting and subject poses. The best-known technique to compensate for this is the use of 3-D models or stereo information. This enables the system to perform accurate face recognition on faces with large positional variance. This work describes a new method for pose-independent face recognition using stereo image pairs. The method is invariant to scaling, rotation and changes in lighting. A number of facial feature points are found in each image pair of the training data using Gabor filters. These points and camera calibration information are used to construct a 3-D framework of the face. The face model used to classify test images consists of the facial framework and the Gabor filter coefficients at each feature point. Classification of a test image pair is done by filtering the test images with a Gabor filter bank. The stored model feature points are then projected onto the image pair, and the Gabor coefficients of the filtered images are compared with the coefficients stored in the model. The fit is optimised by rotation and translation of the 3-D framework. The study showed that this method provides accurate results for a database with large variance in pose and lighting.
13

Zou, Weiwen. "Face recognition from video." HKBU Institutional Repository, 2012. https://repository.hkbu.edu.hk/etd_ra/1431.

Full text
14

Pershits, Edward. "Recognition of Face Images." Thesis, University of North Texas, 1994. https://digital.library.unt.edu/ark:/67531/metadc277785/.

Full text
Abstract:
The focus of this dissertation is a methodology that enables computer systems to classify different up-front images of human faces as belonging to one of the individuals to which the system has been exposed previously. The images can present variance in size, location of the face, orientation, facial expressions, and overall illumination. The approach to the problem taken in this dissertation can be classified as analytic as the shapes of individual features of human faces are examined separately, as opposed to holistic approaches to face recognition. The outline of the features is used to construct signature functions. These functions are then magnitude-, period-, and phase-normalized to form a translation-, size-, and rotation-invariant representation of the features. Vectors of a limited number of the Fourier decomposition coefficients of these functions are taken to form the feature vectors representing the features in the corresponding vector space. With this approach no computation is necessary to enforce the translational, size, and rotational invariance at the stage of recognition thus reducing the problem of recognition to the k-dimensional clustering problem. A recognizer is specified that can reliably classify the vectors of the feature space into object classes. The recognizer made use of the following principle: a trial vector is classified into a class with the greatest number of closest vectors (in the sense of the Euclidean distance) among all vectors representing the same feature in the database of known individuals. A system based on this methodology is implemented and tried on a set of 50 pictures of 10 individuals (5 pictures per individual). The recognition rate is comparable to that of most recent results in the area of face recognition. The methodology presented in this dissertation is also applicable to any problem of pattern recognition where patterns can be represented as a collection of black shapes on the white background.
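The translation-, size-, and rotation-invariant signature idea described above can be illustrated with Fourier descriptors of a closed outline (a hypothetical sketch, not the dissertation's exact normalization): translation lives only in the zeroth coefficient, scale is removed by dividing by the first harmonic's magnitude, and rotation/starting point affect only the phases.

```python
# Fourier descriptors of a closed contour: the normalised magnitude spectrum
# is unchanged by translation, scaling, and rotation of the shape.
import numpy as np

def fourier_descriptor(points):
    """points: complex contour samples with a fixed parameterisation."""
    Z = np.fft.fft(points)
    mags = np.abs(Z)                 # dropping phase removes rotation/start point
    # skip Z[0] (translation) and normalise by |Z[1]| (scale)
    return mags[2:len(points) // 2] / mags[1]

# a square outline sampled at 16 points
side = np.linspace(0, 1, 4, endpoint=False)
square = np.concatenate([side + 0j,          # bottom edge
                         1 + 1j * side,      # right edge
                         (1 - side) + 1j,    # top edge
                         1j * (1 - side)])   # left edge

# the same square, scaled x2, rotated, and translated
transformed = 2.0 * square * np.exp(1j * np.pi / 5) + (3 + 4j)

d1 = fourier_descriptor(square)
d2 = fourier_descriptor(transformed)
```

The two descriptors match, so at recognition time no extra computation is needed to enforce the invariances, which is exactly the property the dissertation exploits before clustering in feature space.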
15

Chen, Weiping. "Face Recognition using Stringface." Thesis, Griffith University, 2012. http://hdl.handle.net/10072/365220.

Full text
Abstract:
Automatically recognizing human faces has attracted a lot of attention in the academic, commercial, and industrial communities during the last few decades due to its low intrusiveness and minimal cooperation requirements. Face recognition technology has a variety of potential applications in information security, law enforcement, surveillance, smart cards, and access control. Despite significant advances in face recognition technology, it has yet to be put to wide use in industrial or commercial communities, mainly because of high error rates in real scenarios. Existing face recognition systems have achieved promising recognition accuracy under controlled conditions. However, these systems are highly sensitive to environmental factors due to the changing appearance of the human face, such as variations in expression, illumination, pose, partial occlusion, and the time gap between training and testing data capture. A practical face recognition system should be more robust against these varying conditions. Especially in some applications, such as access control to sensitive areas, monitoring border crossings, and identifying criminals or terrorists, the system should be capable of identifying individuals who use disguise accessories to hide their identity and remain elusive from law enforcement. Furthermore, many reported face recognition techniques rely heavily on the size and representativeness of the training set, and most of them will suffer a serious performance drop or even fail to work if only one training sample per person is available to the system. Hence, face recognition from one sample per person is an important but challenging problem both in theory and for real-world applications. Fewer samples per person mean less laborious effort in collecting them, and lower costs for storing and processing them.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Griffith School of Engineering
Science, Environment, Engineering and Technology
Full Text
16

Pavani, Sri-Kaushik. "Methods for face detection and adaptive face recognition." Doctoral thesis, Universitat Pompeu Fabra, 2010. http://hdl.handle.net/10803/7567.

Full text
Abstract:
The focus of this thesis is facial biometrics, specifically the problems of face detection and face recognition. Despite intensive research over the last 20 years, the technology is not foolproof, which is why we do not see face recognition systems used in critical sectors such as banking. In this thesis, we focus on three sub-problems in these two areas of research. Firstly, we propose methods to improve the speed-accuracy trade-off of the state-of-the-art face detector. Secondly, we consider a problem that is often ignored in the literature: decreasing the training time of the detectors. We propose two techniques to this end. Thirdly, we present a detailed large-scale study on self-updating face recognition systems in an attempt to answer whether continuously changing facial appearance can be learnt automatically.
(Catalan abstract, translated:) The focus of this thesis is facial biometrics, specifically the problems of face detection and face recognition. Despite intensive research over the last 20 years, the technology is not infallible, which is why we do not see face recognition systems used in critical sectors such as banking. In this thesis, we focus on three sub-problems in these two areas of research. First, we propose methods to improve the balance between accuracy and speed of the state-of-the-art face detector. Second, we consider a problem that is often ignored in the literature: reducing the training time of the detectors. Two techniques are proposed to this end. Third, we present a detailed large-scale study on self-updating face recognition systems in an attempt to answer whether continuously changing facial appearance can be learnt automatically.
17

Le, Khanh Duc. "A Study of Face Embedding in Face Recognition." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/1989.

Full text
Abstract:
Face Recognition has been a long-standing topic in computer vision and pattern recognition field because of its wide and important applications in our daily lives such as surveillance system, access control, and so on. The current modern face recognition model, which keeps only a couple of images per person in the database, can now recognize a face with high accuracy. Moreover, the model does not need to be retrained every time a new person is added to the database. By using the face dataset from Digital Democracy, the thesis will explore the capability of this model by comparing it with the standard convolutional neural network based on pose variations and training set sizes. First, we compare different types of pose to see their effect on the accuracy of the algorithm. Second, we train the system using different number of training images per person to see how many training samples are actually needed to maintain a reasonable accuracy. Finally, to push the limit, we decide to train the model using only a single image per person with the help of a face generation technique to synthesize more faces. The performance obtained by this integration is found to be competitive with the previous results, which are trained on multiple images.
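The matching step of an embedding-based recogniser like the one studied above can be sketched simply: enrolled identities are stored as vectors, and a query is accepted when its cosine similarity to the closest enrolled embedding clears a threshold. The embeddings and threshold below are made up for illustration:

```python
# Nearest-neighbour identification in embedding space with a similarity threshold.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query, gallery, threshold=0.8):
    best_name, best_sim = None, -1.0
    for name, emb in gallery.items():
        sim = cosine(query, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return (best_name, best_sim) if best_sim >= threshold else (None, best_sim)

gallery = {
    "alice": np.array([0.9, 0.1, 0.2]),   # hypothetical enrolled embeddings
    "bob":   np.array([0.1, 0.95, 0.05]),
}
# a query embedding close to alice's
name, sim = identify(np.array([0.85, 0.15, 0.25]), gallery)
```

This is why such systems need no retraining when a new person is added: enrolling someone is just inserting one more vector into the gallery.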
18

Tran, Thao, and Nathalie Tkauc. "Face recognition and speech recognition for access control." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-39776.

Full text
Abstract:
This project is a collaboration with the company JayWay in Halmstad. To enter the office today, employees need a tag-key and guests use a doorbell. If someone rings the doorbell, someone on the inside has to open the door manually, which is considered a disturbance during work time. The purpose of the project is to minimize disturbances in the office. The goal is to develop a system that uses face recognition and speech-to-text to control the lock system for the entrance door. The components used in the project are two Raspberry Pis, a 7-inch LCD touch display, a Raspberry Pi Camera Module V2, an external sound card, a microphone, and a speaker. The whole project was written in Python; Amazon Web Services (AWS) was used for storage and the face recognition, while speech-to-text was provided by Google. The system is divided into three functions, for employees, guests, and deliveries. The employee function has two authentication steps: face recognition and a randomly generated code that needs to be confirmed, to avoid biometric spoofing. The guest function uses the speech-to-text service to state the name of the employee the guest wants to meet, and the employee is then notified. The delivery function informs the specific people in the office responsible for deliveries by sending a notification. The tests show that the system always matches the right person when using face recognition. They also show what the face recognition threshold can be set to so that only authorized people enter the office. The two-step authentication, combining face recognition and the code, makes the system secure and protects it against spoofing. One downside is that the extra step takes time. The speech-to-text is set to Swedish and works quite well for Swedish-speaking people; however, for a multicultural company the speech-to-text service can be hard to use. It can also be hard for the service to listen and translate if there is a lot of background noise or if several people speak at the same time.
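The two-step employee flow described above (a face match followed by confirmation of a freshly generated one-time code) can be sketched as follows. This is a hypothetical illustration; the function names and threshold are not from the thesis:

```python
# Two-factor door unlock: step 1 is a face-similarity check, step 2 is a
# one-time code the user must confirm, so a spoofed face alone is not enough.
import secrets

def face_match(similarity, threshold=0.6):
    """Step 1: accept the face only above a similarity threshold."""
    return similarity >= threshold

def issue_code(digits=4):
    """Step 2: random one-time code shown on the display."""
    return "".join(str(secrets.randbelow(10)) for _ in range(digits))

def unlock(similarity, entered_code, issued_code):
    return face_match(similarity) and entered_code == issued_code

code = issue_code()
granted = unlock(similarity=0.83, entered_code=code, issued_code=code)
denied_spoof = unlock(similarity=0.83, entered_code="0000", issued_code="1234")
```

Using `secrets` rather than `random` matters here: the code is a security credential, so it should come from a cryptographically strong source.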
19

Shriver, Edwin R. "Stereotypicality Moderates Face Recognition: Expectancy Violation Reverses the Cross-Race Effect in Face Recognition." Miami University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=miami1310067080.

Full text
20

Zhang, Xiaozheng. "Pose-invariant Face Recognition through 3D Reconstructions." Thesis, Griffith University, 2008. http://hdl.handle.net/10072/366373.

Full text
Abstract:
Pose invariance is a key ability for face recognition to achieve its advantage of being non-intrusive over other biometric techniques requiring cooperative subjects, such as fingerprint recognition and iris recognition. Due to the complex 3D structures and varied surface reflectivities of human faces, however, pose variations bring serious challenges to current face recognition systems. The image variations of human faces under 3D transformations are larger than existing face recognition systems can tolerate. This research attempts to achieve pose-invariant face recognition through 3D reconstruction, which inversely estimates 3D shape and texture information of human faces from 2D face images. The extracted information comprises intrinsic features useful for face recognition that are invariant to pose changes. The proposed framework reconstructs personalised 3D face models from images of known people in a database (or gallery views) and generates virtual views in possible poses for face recognition algorithms to match against the captured image (or probe view). In particular, three different scenarios of gallery views have been scrutinised: 1) when multiple face images from a fixed viewpoint under different illumination conditions are used as gallery views; 2) when a police mug shot consisting of a frontal view and a side view per person is available as gallery views; and 3) when a single frontal face image per person is used as gallery view. These three scenarios provide the system with different amounts of information and cover a wide range of situations which a face recognition system will encounter. Three novel 3D reconstruction approaches have then been proposed according to these three scenarios, which are 1) Heterogeneous Specular and Diffuse (HSD) face modelling, 2) Multilevel Quadratic Variation Minimisation (MQVM), and 3) Automatic Facial Texture Synthesis (AFTS), respectively.
Experimental results show that these three proposed approaches can effectively improve the performance of face recognition across pose...
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Engineering
Science, Environment, Engineering and Technology
Full Text
21

Brandoni, Domitilla. "Tensor decompositions for Face Recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16867/.

Full text
Abstract:
Automatic Face Recognition has become increasingly important in the past few years due to its several applications in daily life, such as in social media platforms and security services. Numerical linear algebra tools such as the SVD (Singular Value Decomposition) have been extensively used to allow machines to automatically process images in the recognition and classification contexts. On the other hand, several factors such as expression, view angle and illumination can significantly affect the image, making the processing more complex. To cope with these additional features, multilinear algebra tools, such as high-order tensors, are being explored. In this thesis we first analyze tensor calculus and tensor approximation via several different decompositions that have been recently proposed, which include HOSVD (Higher-Order Singular Value Decomposition) and Tensor-Train formats. A new algorithm is proposed to perform data recognition for the latter format.
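The HOSVD mentioned above generalises the matrix SVD: each factor matrix is obtained from the SVD of a mode unfolding, and multiplying the core by all factors reconstructs the tensor. A compact sketch (illustrative, not the thesis implementation), untruncated so the reconstruction is exact:

```python
# HOSVD of a 3-way tensor: factor matrices from the SVDs of the mode
# unfoldings; the untruncated core reconstructs the tensor exactly.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """n-mode product: multiply M into the given mode of T."""
    X = np.moveaxis(T, mode, 0)
    Y = M @ X.reshape(X.shape[0], -1)
    return np.moveaxis(Y.reshape((M.shape[0],) + X.shape[1:]), 0, mode)

def hosvd(T):
    factors = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
               for n in range(T.ndim)]
    core = T
    for n, U in enumerate(factors):
        core = mode_product(core, U.T, n)
    return core, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5))
core, factors = hosvd(T)

recon = core
for n, U in enumerate(factors):
    recon = mode_product(recon, U, n)
```

In the face recognition setting, the tensor modes would index e.g. people, expressions/illuminations, and pixels, and truncating the core gives the compressed multilinear model used for classification.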
22

Manikarnika, Achim Sanjay. "A General Face Recognition System." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10135.

Full text
Abstract:

In this project a real-time face detection and recognition system has been discussed and implemented. The main focus has been on the detection process, which is the first and most important step before starting the actual recognition. Computationally intensive methods can give good results, but at the cost of execution speed. The implemented algorithm builds upon the work of Garcia and Tziritas, but trades accuracy for faster speed. The program needs between 1 and 5 seconds on a standard workstation to analyze an image. On an image database with a lot of variety in the images, the system found 70-75% of the faces.

23

Zhu, Jian Ke. "Real-time face recognition system." Thesis, University of Macau, 2005. http://umaclib3.umac.mo/record=b1636556.

Full text
24

Ener, Emrah. "Recognition Of Human Face Expressions." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607521/index.pdf.

Full text
Abstract:
In this study a fully automatic and scale-invariant feature extractor, which does not require manual initialization or special equipment, is proposed. Face location and size are extracted using skin segmentation and ellipse fitting. The extracted face region is scaled to a predefined size; then upper and lower facial templates are used for feature extraction. Template localization and template parameter calculations are carried out using Principal Component Analysis. Changes in facial feature coordinates between the analyzed image and a neutral-expression image are used for expression classification. Performances of different classifiers are evaluated. The performance of the proposed feature extractor is also tested on sample video sequences. Facial features are extracted in the first frame and a KLT tracker is used for tracking the extracted features. Lost features are detected using face geometry rules and relocated using the feature extractor. As an alternative to the feature-based technique, an available holistic method which analyses the face without partitioning is implemented. Face images are filtered using Gabor filters tuned to different scales and orientations. Filtered images are combined to form Gabor jets. The dimensionality of the Gabor jets is decreased using Principal Component Analysis. Performances of different classifiers on low-dimensional Gabor jets are compared. Feature-based and holistic classifier performances are compared using the JAFFE and AF facial expression databases.
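The PCA dimensionality-reduction step applied to the Gabor jets above can be sketched briefly (a hypothetical illustration on synthetic data, not the thesis code): centre the feature vectors, take the top singular directions, and project.

```python
# PCA via SVD: when the data truly lie on a k-dimensional subspace,
# projecting onto the top-k principal directions loses nothing.
import numpy as np

def pca_fit(X, k):
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]               # top-k principal directions (rows)

def pca_project(X, mean, components):
    return (X - mean) @ components.T

rng = np.random.default_rng(1)
# 20 samples of 10-D "feature vectors" that really live on a 2-D subspace
latent = rng.standard_normal((20, 2))
basis = rng.standard_normal((2, 10))
X = latent @ basis + 5.0

mean, comps = pca_fit(X, k=2)
Z = pca_project(X, mean, comps)       # low-dimensional representation
X_hat = Z @ comps + mean              # reconstruction from 2 components
err = float(np.abs(X_hat - X).max())
```

Real Gabor jets are not exactly low-rank, so in practice k is chosen to retain most of the variance rather than all of it.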
25

Noyes, Eilidh. "Face recognition in challenging situations." Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/13577/.

Full text
Abstract:
A great deal of previous research has demonstrated that face recognition is unreliable for unfamiliar faces and reliable for familiar faces. However, such findings typically came from tasks that used 'cooperative' images, where there was no deliberate attempt to alter apparent identity. In applied settings, images are often far more challenging in nature. For example, multiple images of the same identity may appear to be different identities, due to either incidental changes in appearance (such as age- or style-related change, or differences in image capture) or deliberate changes (evading one's own identity through disguise). At the same time, images of different identities may look like the same person, due to either incidental changes (natural similarities in appearance) or deliberate changes (attempts to impersonate someone else, as in identity fraud). Thus, past studies may have underestimated the applied problem. In this thesis I examine face recognition performance in these challenging image scenarios and test whether the familiarity advantage extends to them. I found that face recognition was indeed even poorer for challenging images than previously found using cooperative images. Familiar viewers were still better than unfamiliar viewers, yet familiarity did not bring performance to ceiling level for challenging images as it had done in cooperative tasks in the past. I investigated several ways of improving performance, including image manipulations, exploiting perceptual constancy, crowd analysis of identity judgments, and viewing by super-recognisers. This thesis provides interesting insights into theory regarding what it is that familiar viewers learn when becoming familiar with a face. It also has important practical implications, both for improving performance in challenging situations and for understanding deliberate disguise.
APA, Harvard, Vancouver, ISO, and other styles
26

Venkata, Anjaneya Subha Chaitanya Konduri. "Face recognition with Gabor phase." Thesis, Wichita State University, 2009. http://hdl.handle.net/10057/2508.

Full text
Abstract:
Face recognition is an attractive biometric measure due to its capacity to recognize individuals without their cooperation. This thesis proposes a method to dynamically recognize a facial image with the help of its valid features. To validate a set of feature points, the skin portion of the facial image is identified by processing each pixel value. Gabor phase samples are examined at each filter output to determine whether they are positive or negative, and feature vectors are formed from these signs together with the spatial coordinates of the validated feature points. The collection of feature vectors is referred to as the feature vector set. The face recognition system has two phases: training and recognition. During the training phase, all images from the database are automatically loaded into the system and their feature vector sets are determined. When a test image arrives at the system, its feature vector set is compared with those of the database images. Feature vectors are location-specific; therefore, similarities between the feature vectors of the test image and the database images are calculated only when they come from the same spatial coordinates. Once spatial coordinates are matched using an exclusive-OR (XOR) operation, the similarity is calculated from the values of the feature vectors. Simulations using the proposed scheme have shown that precise recognition can be achieved.
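The sign-of-phase matching described above can be caricatured with a toy example: binary features are taken from the signs of complex filter responses at validated points and compared bit-by-bit with XOR. The random complex numbers below stand in for real Gabor filter outputs, and the sizes and thresholds are illustrative, not taken from the thesis:

```python
import numpy as np

def phase_sign_features(responses):
    """Binarise complex responses: one bit for the sign of the real part and
    one for the imaginary part (i.e. the phase quadrant at each point)."""
    bits = np.concatenate([(responses.real >= 0), (responses.imag >= 0)], axis=-1)
    return bits.astype(np.uint8)

def similarity(bits_a, bits_b):
    """Fraction of agreeing bits; XOR marks the disagreements."""
    return 1.0 - np.mean(bits_a ^ bits_b)

rng = np.random.default_rng(1)
# Toy complex "Gabor responses" at 40 validated feature points.
resp_train = rng.standard_normal(40) + 1j * rng.standard_normal(40)
resp_same = resp_train + 0.05 * (rng.standard_normal(40) + 1j * rng.standard_normal(40))
resp_other = rng.standard_normal(40) + 1j * rng.standard_normal(40)

f_train = phase_sign_features(resp_train)
sim_same = similarity(f_train, phase_sign_features(resp_same))    # typically near 1
sim_other = similarity(f_train, phase_sign_features(resp_other))  # typically near 0.5
```

The key property is that small perturbations rarely flip a sign bit, so images of the same face agree on almost all bits while unrelated images agree on about half.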
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
27

Majumdar, Angshul. "Compressive classification for face recognition." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/9531.

Full text
Abstract:
The problem of face recognition has been studied widely in the past two decades. Even though considerable progress has been made, e.g. in achieving better recognition rates and handling difficult environmental conditions, there has not been any widespread implementation of this technology. Most probably, the reason lies in not giving adequate consideration to practical problems such as communication costs and computational overhead. The thesis addresses the practical face recognition problem, e.g. a scenario that may arise in client recognition at Automated Teller Machines or employee authentication in large offices. In such scenarios the database of faces is updated regularly, and the face recognition system must be updated at the same pace with minimal computational or communication cost. Such a scenario cannot be handled by traditional machine learning methods, as they assume that training is offline. We develop novel methods to solve this problem from a completely new perspective. Face recognition consists of two main parts: dimensionality reduction followed by classification. This thesis employs the fastest possible dimensionality reduction technique: random projections. Most traditional classifiers do not give good classification results when the dimensionality of the data is reduced by such a method. This work proposes a new class of classifiers that are robust to data whose dimensionality has been reduced using random projections. The Group Sparse Classifier (GSC) is based on the assumption that the training samples of each class approximately form a linear basis for any new test sample belonging to the same class. At the core of the GSC is an optimization problem which, although it gives very good results, is somewhat slow. This problem is remedied in the Fast Group Sparse Classifier, where the computationally intensive optimization is replaced by a fast greedy algorithm.
The Nearest Subspace Classifier is based on the assumption that the samples from a particular class lie on a subspace specific to that class. This assumption leads to an optimization problem which can be solved very quickly. In this work the robustness of these classifiers is proved theoretically and validated by thorough experimentation.
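The two ingredients above — a Gaussian random projection for cheap dimensionality reduction, and a nearest-subspace rule that assigns a probe to the class whose training samples reconstruct it best — fit in a short sketch. The data, dimensions, and identity names are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, red, k = 100, 20, 3
proj = rng.standard_normal((dim, red)) / np.sqrt(red)    # one shared random projection

# Two toy identities, each spanning its own low-dimensional subspace of "face space".
basis = {c: rng.standard_normal((dim, k)) for c in ("alice", "bob")}

def sample(c):
    """A noisy face vector drawn from identity c's subspace."""
    return basis[c] @ rng.standard_normal(k) + 0.01 * rng.standard_normal(dim)

# Gallery: five projected training samples per identity, stored as columns.
gallery = {c: proj.T @ np.stack([sample(c) for _ in range(5)], axis=1)
           for c in ("alice", "bob")}

def nearest_subspace(gallery, x):
    """Assign x to the class whose gallery columns reconstruct it best."""
    def residual(A):
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        return np.linalg.norm(x - A @ coef)
    return min(gallery, key=lambda c: residual(gallery[c]))

probe = proj.T @ sample("alice")      # the probe is reduced with the same projection
label = nearest_subspace(gallery, probe)
```

Because random projections approximately preserve distances and subspace structure, the least-squares residual against the correct class stays small even after the reduction, which is the robustness property the thesis proves.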
APA, Harvard, Vancouver, ISO, and other styles
28

Rowe, Dale Christopher. "Face recognition using skin texture." Thesis, University of Kent, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.528278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Calder, Andrew J. "Self priming in face recognition." Thesis, Durham University, 1993. http://etheses.dur.ac.uk/5787/.

Full text
Abstract:
Burton, Bruce and Johnston (1990) have recently presented an interactive activation and competition model of face recognition. They have shown that this IAC model presents a parsimonious account of semantic and repetition priming effects with faces. In addition, a number of new predictions follow from the model's structure. One such prediction is highlighted by Burton et al. themselves: that at short prime-target stimulus onset asynchronies (SOAs) a face should prime the recognition of a target name (or vice versa), termed 'self priming'. This thesis examined this prediction and found that it held for a design in which items were repeated across prime type conditions (same, associated, neutral and unrelated). Further, cross-domain (face prime/name target) and within-domain (name prime/name target) designs were found to produce equivalent degrees of self and semantic priming (Experiments 1 and 2). Closer examination of the Burton et al. model suggested that the effect of domain equivalence for self priming should not hold for a design in which the stimulus items are not repeated across prime type conditions (i.e. subjects are presented with each item only once). This prediction was confirmed in Experiments 3, 4, 5 and 6. The time courses of self and semantic priming were investigated in two experiments in which the interstimulus interval (ISI) between prime and target, and prime presentation times, were varied. The results proved difficult to accommodate within the Burton et al. model, but it is argued that they did not provide a sufficient basis on which to reject the model. Finally, the self priming paradigm was applied to the study of distinctiveness effects. Faces judged to be distinctive in appearance were found to produce more facilitation than faces judged to be typical in appearance. Similarly, caricatured representations of faces were found to produce more facilitation than veridical or anticaricatured representations.
The results of the distinctiveness studies are discussed in terms of Valentine's (1991a; 1991b) exemplar-based coding model and Burton, Bruce and Johnston's (1990) IAC implementation. It is concluded that the results of these experiments lend support to the Burton et al. model.
APA, Harvard, Vancouver, ISO, and other styles
30

Valentine, T. R. "Encoding processes in face recognition." Thesis, University of Nottingham, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Memon, A. "Context effects in face recognition." Thesis, University of Nottingham, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.355418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Shaikh, Muhammad. "Homogeneous to heterogeneous Face Recognition." Thesis, Northumbria University, 2015. http://nrl.northumbria.ac.uk/32283/.

Full text
Abstract:
Face recognition, a very challenging research area, has been studied for more than a decade to solve the variety of problems associated with it, e.g. PIE (pose, illumination and expression), occlusion, gesture, aging, etc. Most of the time, these problems are considered in situations where images are captured by the same sensors/cameras/modalities; methods in this domain are termed homogeneous face recognition. In reality, face images are also captured from alternative modalities, e.g. near infrared (NIR), thermal, sketch, digital (high resolution) and web-cam (low resolution), which further complicates the face recognition problem. Matching faces across different modalities is therefore categorized as heterogeneous face recognition (HFR). This dissertation makes major contributions to heterogeneous face recognition as well as its homogeneous counterpart. The first contribution relates to multi-scale LBP, sequential forward search and the KCRC-RLS method. Multi-scale approaches result in high-dimensional feature vectors, which increase the computational cost of the proposed approach and the risk of overtraining. A sequential forward search is adopted to analyze the effect of multiple scales. This study reveals an interesting fact about merging the features of individual scales: it significantly reduces the variance of recognition rates across the individual scales. In the second contribution, I extend the efficacy of PLDA to heterogeneous face recognition. Due to its probabilistic nature, information from different modalities can easily be combined, and priors can be applied over the possible matchings. To the best of the author's knowledge, this is the first study that applies PLDA to inter-modality face recognition. The third contribution addresses the small sample size problem in HFR scenarios by using intensity-based features.
A bagging-based TFA method is proposed to exhaustively test face databases in a cross-validation environment with a leave-one-out strategy, so that fair and comparable results are reported. The fourth contribution is a module that can identify the modality type, something missing from current face recognition pipelines; identification of the modalities in heterogeneous face recognition is required to support automation in HFR methods. The fifth contribution is an extension of the PLDA used in my second contribution: bagging-based probabilistic linear discriminant analysis is proposed to tackle the problem of biased results that arises when overlapping train and test sets are used. Histogram of oriented gradients (HOG) descriptors are applied, and recognition rates using this method outperform all state-of-the-art methods with only HOG features.
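The multi-scale LBP descriptor mentioned in the first contribution can be sketched simply: each pixel is encoded by comparing it to its eight neighbours at a given radius, and per-scale code histograms are concatenated. This toy version uses square (integer-offset) neighbourhoods rather than interpolated circular sampling, and made-up sizes:

```python
import numpy as np

def lbp(image, radius=1):
    """8-neighbour LBP code at an integer radius, for each interior pixel."""
    r = radius
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, r), (r, r), (r, 0), (r, -r), (0, -r)]
    center = image[r:-r, r:-r]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = image[r + dy:image.shape[0] - r + dy, r + dx:image.shape[1] - r + dx]
        code |= ((neigh >= center).astype(np.uint8) << bit)   # one bit per neighbour
    return code

def multiscale_lbp_hist(image, radii=(1, 2)):
    """Concatenate per-scale 256-bin LBP histograms into one feature vector."""
    feats = []
    for r in radii:
        h, _ = np.histogram(lbp(image, r), bins=256, range=(0, 256), density=True)
        feats.append(h)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.int32)
feat = multiscale_lbp_hist(img)      # one 512-dimensional multi-scale descriptor
```

Concatenating scales is what inflates the feature dimension, which is exactly why the thesis pairs multi-scale LBP with sequential forward search to prune it.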
APA, Harvard, Vancouver, ISO, and other styles
33

Arandjelović, Ognjen. "Automatic face recognition from video." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Fu, Y. "Face recognition in uncontrolled environments." Thesis, University College London (University of London), 2015. http://discovery.ucl.ac.uk/1468901/.

Full text
Abstract:
This thesis concerns face recognition in uncontrolled environments, in which the images used for training and testing are collected from the real world instead of laboratories. Compared with controlled environments, images from uncontrolled environments contain more variation in pose, lighting, expression, occlusion, background, image quality, scale, and makeup. Therefore, face recognition in uncontrolled environments is much more challenging than in controlled conditions. Moreover, many real-world applications require good recognition performance in uncontrolled environments; examples include social networking, human-computer interaction and electronic entertainment. Therefore, researchers and companies have shifted their interest from controlled to uncontrolled environments over the past seven years. In this thesis, we divide the history of face recognition into four stages and list the main problems and algorithms at each stage. We find that face recognition in unconstrained environments is still an unsolved problem, although many face recognition algorithms have been proposed in the last decade. Existing approaches have two major limitations. First, many methods do not perform well when tested on uncontrolled databases, even when all the faces are close to frontal. Second, most current algorithms cannot handle large pose variation, which has become a bottleneck for improving performance. In this thesis, we investigate Bayesian models for face recognition. Our contributions extend Probabilistic Linear Discriminant Analysis (PLDA) [Prince and Elder 2007]. In PLDA, images are described as a sum of signal and noise components, each a weighted combination of basis functions. We first investigate the effect of the degree of localization of these basis functions and find that better performance is obtained when the signal is treated more locally and the noise more globally.
We call this new algorithm multi-scale PLDA, and our experiments show it can handle lighting variation better than PLDA but fails for pose variation. We then analyze three existing Bayesian face recognition algorithms and combine the advantages of PLDA and the Joint Bayesian Face algorithm [Chen et al. 2012] to propose Joint PLDA. We find that our new algorithm improves performance compared to existing Bayesian face recognition algorithms. Finally, we propose the Tied Joint Bayesian Face algorithm and Tied Joint PLDA to address large pose variations in the data, which drastically decrease performance in most existing face recognition algorithms. To provide sufficient training images with large pose differences, we introduce a new database called the UCL Multi-pose database. We demonstrate that our Bayesian models improve face recognition performance when the pose of the face images varies.
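PLDA's signal-plus-noise decomposition can be written as x = mu + F h + G w + eps, where the identity latent h is shared by all images of one person and the within-identity latent w varies per image. A toy generative sketch (dimensions, bases, and names are arbitrary, not those of the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
D, dF, dG = 50, 4, 8                 # observed dim, identity dim, within-identity dim
mu = rng.standard_normal(D)          # mean face
F = rng.standard_normal((D, dF))     # "signal" basis: between-identity variation
G = rng.standard_normal((D, dG))     # "noise" basis: within-identity variation

def sample_face(h):
    """Draw one image of the identity whose latent variable is h."""
    w = rng.standard_normal(dG)               # per-image latent (pose, lighting, ...)
    eps = 0.1 * rng.standard_normal(D)        # residual noise
    return mu + F @ h + G @ w + eps

h_alice = rng.standard_normal(dF)                      # one latent per identity
a1, a2 = sample_face(h_alice), sample_face(h_alice)    # two images, same identity
b1 = sample_face(rng.standard_normal(dF))              # an image of someone else
```

Recognition then amounts to asking whether two images are better explained by a shared h or by independent ones; the Joint Bayesian variants discussed in the abstract compute exactly that kind of likelihood ratio.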
APA, Harvard, Vancouver, ISO, and other styles
35

Wei, Xingjie. "Unconstrained face recognition with occlusions." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/66778/.

Full text
Abstract:
Face recognition is one of the most active research topics in the interdisciplinary areas of biometrics, pattern recognition, computer vision and machine learning. Nowadays, there has been significant progress on automatic face recognition in controlled conditions. However, the performance in unconstrained conditions is still unsatisfactory. Face recognition systems in real-world environments often have to confront uncontrollable and unpredictable conditions such as large changes in illumination, pose, expression and occlusions, which introduce more intra-class variation and degrade recognition performance. Compared with these factor-related problems, the occlusion problem is relatively less studied in the research community. The overall goal of this thesis is to design robust algorithms for face recognition with occlusions in unconstrained environments. In uncontrollable environments, occlusion preprocessing and detection are generally very difficult. In contrast to previous works, we focus on directly performing recognition in the presence of occlusions. We deal with the occlusion problem in two directions and propose three novel algorithms to handle occlusions in face images while also considering other factors. We propose a reconstruction-based method, structured sparse representation based face recognition, for when multiple gallery images are available for each subject. We point out that the non-zero entries in the occlusion coefficient vector also have a cluster structure, and we propose a structured occlusion dictionary to model them better. On the other hand, we propose a local matching based method, Dynamic Image-to-Class Warping (DICW), for when the number of gallery images per subject is limited. DICW considers the inherent structure of the face, and the experimental results confirm that the facial order is critical for recognition.
In addition, we further propose a novel method, fixations-and-saccades based classification, for when only a single gallery image is available for each subject. It is an extension of DICW and can also be applied to other problems in face recognition caused by local deformations. The proposed algorithms are evaluated on standard face databases with various types of occlusions, and the experimental results confirm their effectiveness. We also consider several important and practical problems that have received less attention (i.e., coupled factors, occlusions in gallery and/or probe sets, and the single sample per person problem) in face recognition and provide solutions to them.
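The order-preserving matching behind DICW can be illustrated with a simplified image-to-image version: faces are split into ordered strips and compared by dynamic time warping, so patches may stretch or shift (as under occlusion or deformation) but never swap their facial order. DICW proper warps a probe against a whole class of gallery images; this toy version, with made-up sizes, only shows the warping idea:

```python
import numpy as np

def patch_sequence(image, n_strips=8):
    """Split a face image into an ordered sequence of horizontal strips."""
    return np.array_split(image, n_strips, axis=0)

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping over two patch sequences: strips may align
    many-to-one, but their top-to-bottom order is preserved."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
gallery_face = rng.random((64, 64))
probe_same = gallery_face.copy()
probe_other = rng.random((64, 64))

d_same = dtw_distance(patch_sequence(probe_same), patch_sequence(gallery_face))
d_other = dtw_distance(patch_sequence(probe_other), patch_sequence(gallery_face))
```

The monotonic warping path is what encodes the "facial order is critical" finding: eyes can never be matched below the mouth, no matter how the strips shift.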
APA, Harvard, Vancouver, ISO, and other styles
36

Sena, Emanuel Dario Rodrigues. "Multilinear technics in face recognition." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=13381.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Nível Superior
In this dissertation, the face recognition problem is investigated from the standpoint of multilinear algebra, more specifically tensor decomposition, making use of Gabor wavelets. Feature extraction occurs in two stages: first, the Gabor wavelets are applied holistically in feature selection; second, facial images are modeled as a higher-order tensor according to the multimodal factors present. Then, the HOSVD is applied to separate the multimodal factors of the images. The proposed facial recognition approach exhibits a higher average success rate and greater stability when there is variation in the various multimodal factors, such as facial position, lighting condition and facial expression. We also propose a systematic way to perform cross-validation on tensor models to estimate the error rate in face recognition systems that exploit the multimodal nature of the ensemble. Through random partitioning of the data organized as a tensor, mode-n cross-validation provides folds as subtensors extracted from the desired mode, yielding a stratified method that supports repetitions of cross-validation with different partitionings.
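The HOSVD step (one factor matrix from the SVD of each mode unfolding, and a core tensor obtained by multilinear projection) can be sketched as follows. The tensor here is small and random, with modes standing in for e.g. people x poses x illuminations:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibres become the rows of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product: multiply M into the n-th mode of T."""
    rest = [T.shape[i] for i in range(T.ndim) if i != mode]
    out = (M @ unfold(T, mode)).reshape([M.shape[0]] + rest)
    return np.moveaxis(out, 0, mode)

def hosvd(T):
    """Factor matrices from the SVD of each unfolding; core by projection."""
    Us = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    core = T
    for n, U in enumerate(Us):
        core = mode_mult(core, U.T, n)   # project each mode onto its singular basis
    return core, Us

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 3))       # people x poses x illuminations (toy)
core, Us = hosvd(T)

# Multiplying the core back by every factor matrix reconstructs the tensor.
recon = core
for n, U in enumerate(Us):
    recon = mode_mult(recon, U, n)
```

Because each factor matrix here is square and orthogonal, the reconstruction is exact; truncating columns of the factor matrices gives the compressed, factor-separating representation that the abstract exploits for recognition.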
APA, Harvard, Vancouver, ISO, and other styles
37

Shoja, Ghiass Reza. "Face recognition using infrared vision." Doctoral thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/25333.

Full text
Abstract:
Over the course of the last decade, infrared (IR) and particularly thermal IR imaging based face recognition has emerged as a promising complement to conventional, visible spectrum based approaches which continue to struggle when applied in the real world. While inherently insensitive to visible spectrum illumination changes, IR images introduce specific challenges of their own, most notably sensitivity to factors which affect facial heat emission patterns, e.g., emotional state, ambient temperature, etc. In addition, facial expression and pose changes are more difficult to correct in IR images because they are less rich in high frequency details which is an important cue for fitting any deformable model. In this thesis we describe a novel method which addresses these major challenges. Specifically, to normalize for pose and facial expression changes we generate a synthetic frontal image of a face in a canonical, neutral facial expression from an image of the face in an arbitrary pose and facial expression. This is achieved by piecewise affine warping which follows active appearance model (AAM) fitting. This is the first work which explores the use of an AAM on thermal IR images; we propose a pre-processing step which enhances details in thermal images, making AAM convergence faster and more accurate. To overcome the problem of thermal IR image sensitivity to the exact pattern of facial temperature emissions we describe a representation based on reliable anatomical features. In contrast to previous approaches, our representation is not binary; rather, our method accounts for the reliability of the extracted features. This makes the proposed representation much more robust both to pose and scale changes. The effectiveness of the proposed approach is demonstrated on the largest public database of thermal IR images of faces on which it achieves satisfying recognition performance and significantly outperforms previously described methods. 
The proposed approach has also demonstrated satisfying performance on subsets of the largest video database in the world, gathered in our laboratory, which will be made publicly available free of charge in the future. The reader should note that due to the very nature of the feature extraction method in our system (i.e., its anatomically based nature), we anticipate high robustness of our system to some challenging factors such as temperature changes. However, we were not able to investigate this in depth due to the limits which exist in gathering realistic databases. Gathering the largest video database considering some challenging factors is one of the other contributions of this research.
APA, Harvard, Vancouver, ISO, and other styles
38

Xu, Xiaojing. "Face Recognition with Shape Features." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429630097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Costen, Nicholas Paul. "Spatial frequencies and face recognition." Thesis, University of Aberdeen, 1994. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU069146.

Full text
Abstract:
If face images are degraded by spatial quantisation there is a non-linear acceleration of the decline of recognition accuracy as block size increases, suggesting recognition requires a critical minimum range of object spatial frequencies. These may define the facial configuration, reflecting the structural properties allowing differentiation of faces. Experiment 1 measured speed and accuracy of recognition of six fronto-parallel faces shown with 11, 21 and 42 pixels/face, produced by quantisation, a Fourier low-pass filter and Gaussian blurring. Performance declined with image quality in a significant, non-linear manner, but faster for the quantised images. Experiment 2 found some of this additional decline was due to frequency-domain masking. Experiment 3 compared recognition for quantised, Fourier low-pass and high-pass versions; recognition was only impaired when the frequency limit exceeded the range 4.5-12.5 cycles/face. Experiment 4 found this was not due to contrast differences. Experiments 5, 6 and 7 used octave band-pass filters centred on 4.14, 9.67 and 22.15 cycles/face, varying view-point for both sequential matching and recognition. The spatial frequency effect was not found for matching, but was for recognition. Experiment 8 also measured recognition of band-passed images, presented with octave bands centred on 2.46-50.15 cycles/face and at 0-90 degrees from fronto-parallel. Spatial frequency effects were found at all angles, with best performance for semi-profile images and 11.10 cycles/face. Experiment 9 replicated this, with perceptually equal contrasts and the outer facial contour removed. Modelling showed this reflected a single spatial-frequency channel two octaves wide, centred on 9 cycles/face. Experiment 10 measured response time for successive matching of faces across a size disparity, finding an asymmetrical effect.
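The Fourier low-pass manipulation used in these experiments can be sketched as a hard cutoff in the 2-D frequency domain, with frequency measured in cycles per image (approximately cycles/face when the face fills the frame). A minimal sketch with arbitrary sizes, not the stimuli-generation code of the thesis:

```python
import numpy as np

def lowpass_cycles_per_face(image, cutoff):
    """Zero all spatial frequencies above `cutoff` cycles per image.

    With the face framed to fill the image, cycles/image ~ cycles/face."""
    F = np.fft.fftshift(np.fft.fft2(image))
    H, W = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(H)) * H      # frequencies in cycles/image
    fx = np.fft.fftshift(np.fft.fftfreq(W)) * W
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    F[r > cutoff] = 0                                # hard radial cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
blurred = lowpass_cycles_per_face(img, cutoff=12.5)  # keep <= 12.5 cycles/face
```

Band-pass versions of the stimuli follow the same pattern with two radial thresholds (keep cutoff_low < r <= cutoff_high) an octave apart.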
APA, Harvard, Vancouver, ISO, and other styles
40

Pereira, Diogo Camara. "Face recognition using infrared imaging." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Dec%5FPereira.pdf.

Full text
Abstract:
Thesis (Electrical Engineer and M.S. in Electrical Engineering)--Naval Postgraduate School, December 2002.
Thesis advisor(s): Monique P. Fargues, Gamani Karunasiri, Roberto Cristi. Includes bibliographical references (p. 93-95). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
41

Faraji, Mohammadreza. "Face Recognition Under Varying Illuminations." DigitalCommons@USU, 2015. https://digitalcommons.usu.edu/etd/4410.

Full text
Abstract:
Face recognition under varying illumination is very challenging. This dissertation proposes four effective methods to produce illumination-invariant features for images with various levels of illumination. The proposed methods are called logarithmic fractal dimension (LFD), eight local directional patterns (ELDP), adaptive homomorphic eight local directional patterns (AH-ELDP), and complete eight local directional patterns (CELDP), respectively. LFD, employing the log function and fractal analysis (FA), produces a logarithmic fractal dimension (LFD) image that is illumination-invariant. The proposed FA feature-based method is an effective edge-enhancement technique for extracting and enhancing facial features such as the eyes, eyebrows, nose, and mouth. The proposed ELDP coding scheme uses Kirsch compass masks to compute the edge responses of a pixel's neighborhood. It then uses all the directional numbers to produce an illumination-invariant image. AH-ELDP first uses adaptive homomorphic filtering to reduce the influence of illumination on an input face image. It then applies an interpolative enhancement function to stretch the filtered image. Finally, it produces eight directional edge images using Kirsch compass masks and uses all the directional information to create an illumination-insensitive representation. CELDP seamlessly combines adaptive homomorphic filtering, simplified logarithmic fractal dimension, and complete eight local directional patterns to produce illumination-invariant representations. Our extensive experiments on the Yale B, extended Yale B, CMU-PIE, and AR face databases show that the proposed methods outperform several state-of-the-art methods when using one image per subject for training. We also evaluate the ability of each method to verify and discriminate face images by plotting receiver operating characteristic (ROC) curves, which plot the true positive rate (TPR) against the false positive rate (FPR).
In addition, we conduct an experiment on the Honda UCSD video face database to simulate a real face recognition system, including the face detection, landmark localization, face normalization, and face matching steps. This experiment also verifies that our proposed methods outperform other state-of-the-art methods.
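The Kirsch-mask machinery shared by ELDP and its variants can be sketched as follows: eight compass masks give eight edge responses per pixel, and a directional code is formed from the strongest responses. The top-k encoding below follows the general local-directional-pattern idea rather than the exact ELDP definition in the dissertation:

```python
import numpy as np

def kirsch_masks():
    """The eight 3x3 Kirsch compass masks (rotations of one base mask)."""
    ring = [5, 5, 5, -3, -3, -3, -3, -3]          # clockwise ring around the centre
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for rot in range(8):
        m = np.zeros((3, 3))
        for pos, (i, j) in enumerate(idx):
            m[i, j] = ring[(pos - rot) % 8]
        masks.append(m)
    return masks

def conv3(image, mask):
    """'Valid' 3x3 correlation implemented with shifts (no SciPy needed)."""
    H, W = image.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(3):
        for j in range(3):
            out += mask[i, j] * image[i:H - 2 + i, j:W - 2 + j]
    return out

def directional_code(image, k=3):
    """Set one bit for each of the k strongest compass responses per pixel."""
    resp = np.stack([conv3(image, m) for m in kirsch_masks()])   # (8, H-2, W-2)
    order = np.argsort(-resp, axis=0)                            # strongest first
    code = np.zeros(resp.shape[1:], dtype=np.uint8)
    for r in range(k):
        code |= (1 << order[r]).astype(np.uint8)
    return code

rng = np.random.default_rng(0)
code = directional_code(rng.random((16, 16)))
```

The key illumination-robustness argument is that edge *directions* survive monotonic lighting changes far better than raw intensities do.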
APA, Harvard, Vancouver, ISO, and other styles
42

Ebrahimpour-Komleh, Hossein. "Fractal techniques for face recognition." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16289/1/Hossein_Ebrahimpour-Komleh_Thesis.pdf.

Full text
Abstract:
Fractals are popular because of their ability to create complex images using only a few simple codes. This is possible by capturing image redundancy and presenting the image in compressed form using the self-similarity feature. For many years fractals were used for image compression. In the last few years they have also been used for face recognition. In this research we present new fractal methods for recognition, especially human face recognition. This research introduces three new methods for using fractals for face recognition: the use of fractal codes directly as features, fractal image-set coding, and subfractals. In the first part, the mathematical principle behind the application of fractal image codes for recognition is investigated. An image Xf can be represented as Xf = A x Xf + B, where A and B are the fractal parameters of image Xf. Different fractal codes can be presented for any arbitrary image. With the definition of a fractal transformation, T(X) = A(X - Xf) + Xf, we can define the relationship between any image produced in the fractal decoding process starting with any arbitrary image X0 as Xn = Tn(X) = An(X - Xf) + Xf. We show that some choices for A or B lead to faster convergence to the final image. Fractal image-set coding is based on the fact that the fractal code of an arbitrary gray-scale image can be divided into two parts - geometrical parameters and luminance parameters. Because the fractal codes for an image are not unique, we can change the set of fractal parameters without significant change in the quality of the reconstructed image. Fractal image-set coding keeps the geometrical parameters the same for all images in the database. Differences between images are captured in the non-geometrical, or luminance, parameters - which are faster to compute. For recognition purposes, the fractal code of a query image is applied to all the images in the training set for one iteration.
The distance between an image and the result after one iteration is used to define a similarity measure between this image and the query image. The fractal code of an image is a set of contractive mappings, each of which transfers a domain block to its corresponding range block. The distribution of selected domain blocks for the range blocks in an image depends on the content of the image and the fractal encoding algorithm used for coding. A small variation in a part of the input image may change the contents of the range and domain blocks in the fractal encoding process, resulting in a change in the transformation parameters in the same part or even other parts of the image. A subfractal is a set of fractal codes related to the range blocks of a part of the image. These codes are calculated to be independent of the codes of the other parts of the same image. In this case the domain blocks nominated for each range block must be located in the same part of the image from which the range blocks come. The proposed fractal techniques were applied to face recognition using the MIT and XM2VTS face databases. Accuracies of 95% were obtained with up to 156 images.
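The fixed-point relation Xf = A x Xf + B and the decoding iteration Xn = An(X - Xf) + Xf described in the abstract can be illustrated with a small toy example (the 2x2 matrix and offset below are invented for illustration only; a real fractal code operates on image blocks, not tiny vectors):

```python
import numpy as np

# Toy "fractal code": a contractive linear map A (spectral radius < 1)
# and an offset B, which together define the fixed point Xf = A @ Xf + B.
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
B = np.array([1.0, 2.0])

# Exact fixed point, obtained by solving (I - A) Xf = B.
Xf = np.linalg.solve(np.eye(2) - A, B)

def decode(X0, n):
    """Iterate X <- A @ X + B, i.e. X_n = A^n (X0 - Xf) + Xf.

    Because A is contractive, the iterates converge to Xf from
    any starting "image" X0, which is the property the fractal
    recognition methods exploit.
    """
    X = np.asarray(X0, dtype=float)
    for _ in range(n):
        X = A @ X + B
    return X
```

The rate at which the error term A^n (X0 - Xf) vanishes depends on A, which is the abstract's observation that some choices of A or B lead to faster convergence.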
APA, Harvard, Vancouver, ISO, and other styles
43

Ebrahimpour-Komleh, Hossein. "Fractal techniques for face recognition." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16289/.

Full text
Abstract:
Fractals are popular because of their ability to create complex images using only a few simple codes. This is possible by capturing image redundancy and presenting the image in compressed form using the self-similarity feature. For many years fractals were used for image compression. In the last few years they have also been used for face recognition. In this research we present new fractal methods for recognition, especially human face recognition. This research introduces three new methods for using fractals for face recognition: the use of fractal codes directly as features, fractal image-set coding, and subfractals. In the first part, the mathematical principle behind the application of fractal image codes for recognition is investigated. An image Xf can be represented as Xf = A x Xf + B, where A and B are the fractal parameters of image Xf. Different fractal codes can be presented for any arbitrary image. With the definition of a fractal transformation, T(X) = A(X - Xf) + Xf, we can define the relationship between any image produced in the fractal decoding process starting with any arbitrary image X0 as Xn = Tn(X) = An(X - Xf) + Xf. We show that some choices for A or B lead to faster convergence to the final image. Fractal image-set coding is based on the fact that the fractal code of an arbitrary gray-scale image can be divided into two parts - geometrical parameters and luminance parameters. Because the fractal codes for an image are not unique, we can change the set of fractal parameters without significant change in the quality of the reconstructed image. Fractal image-set coding keeps the geometrical parameters the same for all images in the database. Differences between images are captured in the non-geometrical, or luminance, parameters - which are faster to compute. For recognition purposes, the fractal code of a query image is applied to all the images in the training set for one iteration.
The distance between an image and the result after one iteration is used to define a similarity measure between this image and the query image. The fractal code of an image is a set of contractive mappings, each of which transfers a domain block to its corresponding range block. The distribution of selected domain blocks for the range blocks in an image depends on the content of the image and the fractal encoding algorithm used for coding. A small variation in a part of the input image may change the contents of the range and domain blocks in the fractal encoding process, resulting in a change in the transformation parameters in the same part or even other parts of the image. A subfractal is a set of fractal codes related to the range blocks of a part of the image. These codes are calculated to be independent of the codes of the other parts of the same image. In this case the domain blocks nominated for each range block must be located in the same part of the image from which the range blocks come. The proposed fractal techniques were applied to face recognition using the MIT and XM2VTS face databases. Accuracies of 95% were obtained with up to 156 images.
APA, Harvard, Vancouver, ISO, and other styles
44

Costa, Bernardo Maria de Lemos Ferreira Casimiro da. "Face Recognition." Master's thesis, 2018. https://hdl.handle.net/10216/116469.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Costa, Bernardo Maria de Lemos Ferreira Casimiro da. "Face Recognition." Master's thesis, 2018. https://hdl.handle.net/10216/116469.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Tseng, Yu-shan, and 曾裕山. "Face Recognition Using 3D Face Information." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/03503210981299114008.

Full text
Abstract:
Master's thesis
I-Shou University
Department of Electrical Engineering
91 (ROC calendar, i.e. 2002)
Feature extraction methods are of central importance in a face recognition system. There are two representation approaches: template matching and geometric features. In 2D face recognition, the recognition rate depends on the illumination, face location, and viewing direction, since the gray values in a 2D face image are determined by the illumination intensity. This thesis studies face recognition using 3D faces reconstructed by the photometric stereo method. Unlike 2D gray-value images, our images contain the depth information recovered in the 3D reconstruction. Principal component analysis (PCA) is known to perform well in face recognition and has been applied in computer vision and industrial robotics. In this thesis, we present a novel approach that combines wavelets and PCA to generate face features. We then employ these features for face recognition in both the wavelet space and the PCA space, and compare the results. We also compare different classifiers, such as nearest center, nearest feature line, and linear discriminant analysis, and show that the new approach achieves superior performance in face recognition.
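A wavelet-plus-PCA pipeline of the kind described above can be sketched roughly as follows (a simplified illustration, assuming a one-level Haar approximation as the wavelet step and a nearest-center classifier; this is not the thesis's actual code):

```python
import numpy as np

def haar_approx(img):
    """One-level 2-D Haar approximation: average each 2x2 block."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def pca_fit(X, k):
    """Return the mean and top-k principal axes of the row vectors in X."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def nearest_center(feat, centers):
    """Index of the class center closest to the feature vector."""
    dists = [np.linalg.norm(feat - c) for c in centers]
    return int(np.argmin(dists))
```

A real system would use deeper wavelet decompositions and depth images from photometric stereo; the structure (decompose, project, classify by distance) is the same.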
APA, Harvard, Vancouver, ISO, and other styles
47

Sun, Triu Chiang, and 孫自強. "HUMAN FACE RECOGNITION." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/91139786195229458584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Mr, Rishabh. "Robust Face Recognition." Thesis, 2017. http://ethesis.nitrkl.ac.in/8867/1/2017_MT_Rishabh.pdf.

Full text
Abstract:
Sparse representation based classification (SRC) for faces has been developed extensively in the last decade due to its superior performance compared with other existing methods. A query sample is sparsely represented as a linear combination of all the training samples, and classification is performed by choosing the class with the minimal representation error. Most SRC schemes use an l1-norm constraint in classification, while the role of all training samples collaboratively representing a query sample has been ignored. This thesis deals with the development of a composite sparse and collaborative representation based classifier (CRC). The proposed method is robust against various illumination and pose changes of the test image. The sparsity-based collaborative representation provides excellent classification as well as very low computation, due to the composite scheme. This thesis employs a feature extraction process using the LBP and extended LBP methods. Extensive simulations are done for the assessment of the proposed methods. From the simulation results it has been found that the proposed methods yield better performance compared with that of state-of-the-art methods.
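The collaborative representation idea, coding a query over all training samples with an l2 regularizer and assigning the class whose samples give the smallest reconstruction residual, can be sketched as follows (a generic CRC sketch, not the thesis's composite scheme or its LBP features):

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """Collaborative representation classification (l2-regularized).

    D: (d, n) dictionary whose columns are training samples,
    labels: length-n list of class labels, y: query vector of length d.
    The query is coded over ALL training samples at once (closed-form
    ridge regression), then the class whose columns best reconstruct
    the query is returned.
    """
    n = D.shape[1]
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    best, best_res = None, np.inf
    for c in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        res = np.linalg.norm(y - D[:, idx] @ alpha[idx])
        if res < best_res:
            best, best_res = c, res
    return best
```

The closed-form solve is why CRC is cheap compared with l1-minimization in SRC; sparsity is traded for a dense but collaborative code.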
APA, Harvard, Vancouver, ISO, and other styles
49

Harguess, Joshua David. "Face recognition from video." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-12-4711.

Full text
Abstract:
While the area of face recognition has been extensively studied in recent years, it remains a largely open problem, despite what movie and television studios would lead you to believe. Frontal, still face recognition research has seen a lot of success in recent years from many different researchers. However, the accuracy of such systems can be greatly diminished in cases such as increasing the variability of the database, occluding the face, and varying the illumination of the face. Further varying the pose of the face (yaw, pitch, and roll) and the facial expression (smile, frown, etc.) adds even more complexity to the face recognition task, as in the case of face recognition from video. In a more realistic video surveillance setting, a face recognition system should be robust to scale, pose, resolution, and occlusion, and should successfully track the face between frames. A more advanced face recognition system should also be able to improve the recognition result by utilizing the information present in multiple video cameras. We approach the problem of face recognition from video in the following manner. We assume that the training data for the system consists of only still image data, such as passport photos or mugshots in a real-world system. We then transform the problem of face recognition from video into a still face recognition problem. Our research focuses on solutions for detecting, tracking, and extracting face information from video frames so that it may be utilized effectively in a still face recognition system. We have developed four novel methods that assist in face recognition from video and multiple cameras. The first uses a patch-based method to handle the face recognition task when only patches, or parts, of the face are seen in a video, such as when occlusion of the face happens often. The second fuses the recognition results of multiple cameras to improve the recognition accuracy.
In the third solution, we utilize multiple overlapping video cameras to improve the face tracking result, which in turn improves the face recognition accuracy of the system. We additionally implement a methodology to detect and handle occlusion so that unwanted information is not used in the tracking algorithm. Finally, we introduce the average-half-face, which is shown to improve the results of still face recognition by utilizing the symmetry of the face. To better understand the use of the average-half-face in face recognition, we present an analysis of the effect of face symmetry on face recognition results.
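The average-half-face mentioned above can be sketched as averaging a face image with its horizontal mirror and keeping one half (a simplified illustration assuming a roughly centered, vertically symmetric face; the dissertation's pipeline also handles detection and alignment before this step):

```python
import numpy as np

def average_half_face(img):
    """Average a face with its mirror image and keep the left half.

    Equivalent to averaging the left half with the flipped right half,
    this exploits facial symmetry and halves the width of the input
    fed to a still-face recognizer.
    """
    img = np.asarray(img, dtype=float)
    symmetric = (img + img[:, ::-1]) / 2.0
    return symmetric[:, : img.shape[1] // 2]
```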
APA, Harvard, Vancouver, ISO, and other styles
50

Elmahmudi, Ali A. M., and Hassan Ugail. "Experiments on deep face recognition using partial faces." 2018. http://hdl.handle.net/10454/16872.

Full text
Abstract:
Face recognition is a current subject of great interest in the area of visual computing. In the past, numerous face recognition and authentication approaches have been proposed, though the great majority of them use full frontal faces both for training machine learning algorithms and for measuring recognition rates. In this paper, we discuss some novel experiments to test the performance of machine learning, especially deep learning, using partial faces as training and recognition cues. Thus, this study sharply differs from the common approach of using the full face for recognition tasks. In particular, we study the recognition rate subject to various parts of the face, such as the eyes, mouth, nose, and forehead. We use a convolutional neural network based architecture along with the pre-trained VGG-Face model to extract features for training. We then use two classifiers, namely cosine similarity and a linear support vector machine, to test the recognition rates. We ran our experiments on the Brazilian FEI dataset consisting of 200 subjects. Our results show that the cheek has the lowest recognition rate, at 15%, while the (top, bottom, and right) half and the 3/4 face achieve recognition rates of nearly 100%.
Supported in part by the European Union's Horizon 2020 Programme H2020-MSCA-RISE-2017, under the project PDE-GIR with grant number 778035.
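The cosine-similarity classifier over extracted features can be sketched as follows (a generic sketch; in the paper above the feature vectors come from the pre-trained VGG-Face model, which is not reproduced here):

```python
import numpy as np

def cosine_classify(gallery, labels, query):
    """Assign the label of the gallery feature most similar to the query.

    gallery: (n, d) array of feature vectors (e.g. CNN embeddings),
    labels: length-n list of identities, query: (d,) feature vector.
    Similarity is the cosine of the angle between feature vectors, so
    only direction matters, not magnitude.
    """
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return labels[int(np.argmax(g @ q))]
```

The same matching step works whether the embeddings were computed from full faces or from the partial-face crops the paper studies.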
APA, Harvard, Vancouver, ISO, and other styles
