Dissertations / Theses on the topic 'FACE RECOGNITION TECHNIQUES'
Consult the top 50 dissertations / theses for your research on the topic 'FACE RECOGNITION TECHNIQUES.'
You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Ebrahimpour-Komleh, Hossein. "Fractal techniques for face recognition." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16289/1/Hossein_Ebrahimpour-Komleh_Thesis.pdf.
Ebrahimpour-Komleh, Hossein. "Fractal techniques for face recognition." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16289/.
Heseltine, Thomas David. "Face recognition : two-dimensional and three-dimensional techniques." Thesis, University of York, 2005. http://etheses.whiterose.ac.uk/9880/.
Sun, Yunlian <1986>. "Advanced Techniques for Face Recognition under Challenging Environments." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6355/1/sun_yunlian_tesi.pdf.
Sun, Yunlian <1986>. "Advanced Techniques for Face Recognition under Challenging Environments." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6355/.
Gul, Ahmet Bahtiyar. "Holistic Face Recognition By Dimension Reduction." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1056738/index.pdf.
Full texthowever, even Subspace LDA and Bayesian PCA do not perform well under changes in illumination and aging although they perform better than PCA.
Mian, Ajmal Saeed. "Representations and matching techniques for 3D free-form object and face recognition." University of Western Australia. School of Computer Science and Software Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0046.
Mian, Ajmal Saeed. "Representations and matching techniques for 3D free-form object and face recognition /." Connect to this title, 2006. http://theses.library.uwa.edu.au/adt-WU2007.0046.
Al-Qatawneh, Sokyna M. S. "3D Facial Feature Extraction and Recognition. An investigation of 3D face recognition: correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4876.
Han, Xia. "Towards the Development of an Efficient Integrated 3D Face Recognition System. Enhanced Face Recognition Based on Techniques Relating to Curvature Analysis, Gender Classification and Facial Expressions." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5347.
Phung, Son Lam. "Automatic human face detection in color images." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2003. https://ro.ecu.edu.au/theses/1309.
Bouchech, Hamdi. "Selection of optimal narrowband multispectral images for face recognition." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS030/document.
Full textFace recognition systems based on ’conventional’ images have reached a significant level of maturity with some practical successes. However, their performance may degrade under poor and/or changing illumination. Multispectral imagery represents a viable alternative to conventional imaging in the search for a robust and practical identification system. Multi- spectral imaging (MI) can be defined as a ’collection of several monochrome images of the same scene, each of them taken with additional receptors sensitive to other frequencies of the visible light or to frequencies beyond the visible light like the infrared region of electro- magnetic continuum. Each image is referred to as a band or a channel. However, one weakness of MI is that they may significantly increase the system processing time because of the huge quantity of data to be mined; in some cases, hundreds of MI are taken for each subject. In this thesis, we propose to solve this problem by developing new approaches to select the set of best visible spectral bands for face matching. For this purpose, the problem of best spectral bands selection is formulated as an optimization problem where spectral bands are constrained to maximize the recognition accuracy under challenging imaging conditions. We reduce the redundancy of both spectral and spatial information without losing valuable details needed for the object recognition, discrimination and classification. We have investigated several mathematic and optimization tools widely used in the field of image processing. One of the approaches we have proposed formulated the problem of best spectral bands selection as a pursuit problem where weights of importance were affected to each spectral band and the vector of all weights was constrained to be sparse with most of its elements are zeros. In another work, we have assigned to each spectral band a linear discriminant analysis (LDA) based weak classifier. 
Then, all weak classifiers were boosted together using an Adaboost process. From this later, each weak classifier obtained a weight that characterizes its importance and hence the quality of the corresponding spectral band. Several other techniques were also used for best spectral bands selection including but not limited to mixture of Gaussian based modeling, multilinear sparse decomposition, image quality factors, local descriptors like SURF and HGPP, likelihood ratio and so on. These different techniques enabled to build systems for best spectral bands selection that are either static with the same bands are selected for all the subjects or dynamic with each new subject get its own set of best bands. This latter category, dynamic systems, is an original component of our work that, to the best of our knowledge, has not been proposed before; all existing systems are only static. Finally, the proposed algorithms were compared to state-of-the-art algorithms developed for face recognition purposes in general and specifically for best spectral bands selection
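The per-band boosting idea described in this abstract can be sketched in a few lines. The code below is a hypothetical illustration, not the author's implementation: it trains one Fisher/LDA weak classifier per spectral band on synthetic data (the dimensions and the injected class signal are invented for the example) and treats each classifier's AdaBoost weight as its band's importance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 samples, 5 spectral bands, 8 features per band.
# Band 2 is deliberately made informative; the others are pure noise.
n, n_bands, d = 200, 5, 8
y = rng.integers(0, 2, n) * 2 - 1               # labels in {-1, +1}
X = rng.normal(size=(n, n_bands, d))
X[:, 2, :] += y[:, None] * 1.5                  # inject class signal into band 2

def fisher_direction(Xb, y):
    """Fisher/LDA projection direction for one band (two classes)."""
    m1, m2 = Xb[y == 1].mean(0), Xb[y == -1].mean(0)
    Sw = np.cov(Xb[y == 1].T) + np.cov(Xb[y == -1].T)
    return np.linalg.pinv(Sw) @ (m1 - m2)

weights = np.full(n, 1.0 / n)                   # AdaBoost sample weights
alphas = np.zeros(n_bands)                      # learned importance per band
for b in range(n_bands):
    w = fisher_direction(X[:, b, :], y)
    # Threshold at the midpoint between the projected class means.
    thr = ((X[y == 1, b, :] @ w).mean() + (X[y == -1, b, :] @ w).mean()) / 2
    pred = np.where(X[:, b, :] @ w > thr, 1, -1)
    err = np.clip(weights[pred != y].sum(), 1e-10, 1 - 1e-10)
    alphas[b] = 0.5 * np.log((1 - err) / err)   # AdaBoost classifier weight
    weights *= np.exp(-alphas[b] * y * pred)    # up-weight misclassified samples
    weights /= weights.sum()

best_band = int(np.argmax(alphas))              # the "selected" spectral band
```

Bands whose weak classifier earns a large AdaBoost weight would be kept; on this toy data the informative band (index 2) should dominate.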
Ben, Said Ahmed. "Multispectral imaging and its use for face recognition : sensory data enhancement." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS008/document.
In this thesis, we focus on multispectral images for face recognition. In such an application, the quality of the image is an important factor that affects the accuracy of the recognition. However, the sensory data are in general corrupted by noise. Thus, we propose several denoising algorithms that are able to ensure a good tradeoff between noise removal and detail preservation. Furthermore, characterizing regions and details of the face can improve recognition. In this thesis, we also focus on multispectral image segmentation, particularly clustering techniques and cluster analysis. The effectiveness of the proposed algorithms is illustrated by comparing them with state-of-the-art methods using both simulated and real multispectral data sets.
Ferguson, Eilidh Louise. "Facial identification of children : a test of automated facial recognition and manual facial comparison techniques on juvenile face images." Thesis, University of Dundee, 2015. https://discovery.dundee.ac.uk/en/studentTheses/03679266-9552-45da-9c6d-0f062c4893c8.
Cho, Gyuchoon. "Real Time Driver Safety System." TopSCHOLAR®, 2009. http://digitalcommons.wku.edu/theses/63.
Peyrard, Clément. "Single image super-resolution based on neural networks for text and face recognition." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI083/document.
This thesis is focused on super-resolution (SR) methods for improving automatic recognition systems (optical character recognition, face recognition) in realistic contexts. SR methods generate high-resolution images from low-resolution ones. Unlike upsampling methods such as interpolation, they restore spatial high frequencies and compensate for artefacts such as blur or jagged edges. In particular, example-based approaches learn and model the relationship between low- and high-resolution spaces via pairs of low- and high-resolution images. Artificial neural networks are among the most efficient systems to address this problem. This work demonstrates the interest of SR methods based on neural networks for improving automatic recognition systems. By adapting the data, it is possible to train such machine learning algorithms to produce high-resolution images. Convolutional neural networks are especially efficient, as they are trained to simultaneously extract relevant non-linear features while learning the mapping between low- and high-resolution spaces. On document text images, the proposed method improves OCR accuracy by +7.85 points compared with simple interpolation. The creation of an annotated image dataset and the organisation of an international competition (ICDAR2015) highlighted the interest and the relevance of such approaches. Moreover, if a priori knowledge is available, it can be used by a suitable network architecture. For facial images, face features are critical for automatic recognition. A two-step method is proposed in which image resolution is first improved, followed by specialised models that focus on the essential features. An off-the-shelf face verification system has its performance improved by +6.91 up to +8.15 points. Finally, to address the variability of real-world low-resolution images, deep neural networks can absorb the diversity of the blurring kernels that characterise the low-resolution images. With a single model, high-resolution images are produced with natural image statistics, without any knowledge of the actual observation model of the low-resolution image.
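The example-based principle mentioned above (learning the mapping between low- and high-resolution spaces from image pairs) can be illustrated without a neural network. The sketch below is an assumption-laden toy, not the thesis method: it learns a linear least-squares map from simulated 2x2 low-resolution patches to their 4x4 high-resolution originals.

```python
import numpy as np

rng = np.random.default_rng(1)

def downsample(hr):
    """Simple 2x2 block-averaging 'camera' producing the low-res patch."""
    return hr.reshape(hr.shape[0] // 2, 2, hr.shape[1] // 2, 2).mean((1, 3))

# Training set: random 4x4 HR patches and their simulated LR counterparts.
hr_patches = rng.random((500, 4, 4))
lr_patches = np.stack([downsample(p) for p in hr_patches])

A = lr_patches.reshape(500, -1)            # 500 x 4  (LR features)
B = hr_patches.reshape(500, -1)            # 500 x 16 (HR targets)
W, *_ = np.linalg.lstsq(A, B, rcond=None)  # least-squares map LR -> HR

def super_resolve(lr):
    """Apply the learned example-based mapping to one LR patch."""
    return (lr.reshape(1, -1) @ W).reshape(4, 4)

# Consistency check: downsampling the reconstruction should recover the
# low-res input, because the training pairs obey that observation model.
lr = lr_patches[0]
sr = super_resolve(lr)
err = np.abs(downsample(sr) - lr).max()
```

Real systems replace the linear map with a trained CNN, as the abstract describes, but the learned LR-to-HR mapping plays the same role.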
Nassar, Alaa S. N. "A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques." Thesis, University of Bradford, 2018. http://hdl.handle.net/10454/16917.
Higher Committee for Education Development in Iraq
Visweswaran, Krishnan. "Face Recognition Technique for Blurred/Unclear Images." Thesis, California State University, Long Beach, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10263528.
The purpose of this project is to invalidate the existing methods for performing facial recognition. Facial recognition must consider many aspects before its actual operations are performed, such as the angle of the camera, the aspect ratio and motion of the object with respect to the camera, and the shutter speed of the camera. Many techniques have been implemented for face recognition, but many of them have problems recognizing a blurred image. These detection problems can be eliminated by using three algorithms handling illumination, blurring, and pose. These algorithms are sequentially more effective than the existing methods and provide a definite solution for facial recognition in a blurred state.
Muller, Neil Leonard. "Image recognition using the Eigenpicture Technique (with specific applications in face recognition and optical character recognition)." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/14381.
In the first part of this dissertation, we present a detailed description of the eigenface technique first proposed by Sirovich and Kirby and subsequently developed by several groups, most notably the Media Lab at MIT. Other significant contributions have been made by Rockefeller University, whose ideas have culminated in a commercial system known as Faceit. For different techniques (i.e., not eigenfaces) and a detailed comparison of some other techniques, the reader is referred to [5]. Although we followed ideas in the open literature (we believe that there is a large body of advanced proprietary knowledge, which remains inaccessible), the implementation is our own. In addition, we believe that the method for updating the eigenfaces to deal with badly represented images, presented in section 2.7, is our own. The next stage in this section would be to develop an experimental system that can be extensively tested. At this point, however, another, nonscientific difficulty arises: that of developing an adequately large database. The basic problem is that one needs a training set representative of all faces to be encountered in future. Note that this does not mean that one can only deal with faces in the database; the whole idea is to be able to work with any facial image. However, a database is only representative if it contains images similar to anything that can be encountered in future. For this reason a representative database may be very large and is not easy to build. In addition, for testing purposes one needs multiple images of a large number of people, acquired over a period of time under different physical conditions representing the typical variations encountered in practice. Obviously this is a very slow process. Potentially the variation between the faces in the database can be large, suggesting that the representation of all these different images in terms of eigenfaces may not be particularly efficient.
One idea is to separate all the facial images into different, more or less homogeneous classes. Again, this can only be done with access to a sufficiently large database, probably consisting of several thousand faces.
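A minimal sketch of the eigenface representation discussed here, using random stand-in "images" rather than a real face database: the eigenfaces are the principal components of the centered training set, and each face is encoded by a few projection coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "face" images: 50 training images of 16x16 pixels, flattened.
faces = rng.random((50, 16 * 16))
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces are the principal components of the centered training set;
# with SVD, the rows of Vt are the eigenfaces (unit-norm, orthogonal).
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 40                                   # keep at most forty components
eigenfaces = Vt[:k]

def encode(img):
    """Project an image onto the eigenface subspace (k coefficients)."""
    return eigenfaces @ (img - mean_face)

def reconstruct(coeffs):
    """Map k coefficients back to image space."""
    return mean_face + coeffs @ eigenfaces

coeffs = encode(faces[0])
recon_err = (np.linalg.norm(reconstruct(coeffs) - faces[0])
             / np.linalg.norm(faces[0]))
```

Recognition then reduces to comparing the k-dimensional coefficient vectors instead of full images, which is what makes the representation efficient.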
Aly, Sherin Fathy Mohammed Gaber. "Techniques for Facial Expression Recognition Using the Kinect." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/89220.
PhD
Marras, Ioannis. "Robust subspace learning techniques for tracking and recognition of human faces." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/41039.
Muller, Neil. "Facial recognition, eigenfaces and synthetic discriminant functions." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51756.
ENGLISH ABSTRACT: In this thesis we examine some aspects of automatic face recognition, with specific reference to the eigenface technique. We provide a thorough theoretical analysis of this technique which allows us to explain many of the results reported in the literature. It also suggests that clustering can improve the performance of the system, and we provide experimental evidence of this. From the analysis, we also derive an efficient algorithm for updating the eigenfaces. We demonstrate the ability of an eigenface-based system to represent faces efficiently (using at most forty values in our experiments) and also demonstrate our updating algorithm. Since we are concerned with aspects of face recognition, one of the important practical problems is locating the face in an image, subject to distortions such as rotation. We review two well-known methods for locating faces based on the eigenface technique. These algorithms are computationally expensive, so we illustrate how the Synthetic Discriminant Function (SDF) can be used to reduce the cost. For our purposes, we propose the concept of a linearly interpolating SDF, and we show how this can be used not only to locate the face, but also to estimate the extent of the distortion. We derive conditions which will ensure an SDF is linearly interpolating. We show how many of the more popular SDF-type filters are related to the classic SDF and thus extend our analysis to a wide range of SDF-type filters. Our analysis suggests that by carefully choosing the training set to satisfy our condition, we can significantly reduce the size of the training set required. This is demonstrated by using the equidistributing principle to design a suitable training set for the SDF. All this is illustrated with several examples. Our results with the SDF allow us to construct a two-stage algorithm for locating faces. We use the SDF-type filters to obtain initial estimates of the location and extent of the distortion. This information is then used by one of the more accurate eigenface-based techniques to obtain the final location from a reduced search space. This significantly reduces the computational cost of the process.
AFRIKAANSE OPSOMMING: In this thesis we investigate some aspects of automatic face recognition, with specific reference to the so-called eigenface technique. A thorough theoretical analysis of this technique enables us to explain many of the results that appear in the literature. It also suggests that the performance of the system improves if the faces are grouped into different classes. From the analysis we also derive an efficient algorithm for updating the eigenfaces. We demonstrate the system's ability to describe faces efficiently (we use at most forty eigenfaces) as well as our updating algorithm, with practical examples. We further investigate the important problem of locating faces in an image, especially under rotation and scale changes. We discuss two well-known eigenface-based algorithms for locating faces. These algorithms are computationally very expensive, and we develop a cost-effective method based on the so-called "Synthetic Discriminant Functions" (SDFs). For this purpose the concept of linearly interpolating SDFs is introduced. This enables us not only to locate the face, but also to estimate its distortion. Furthermore, we derive conditions that ensure an SDF is linearly interpolating. Since we show that many of the popular SDF-type filters are related to the classic SDF, our results hold for a whole range of SDF-type filters. Our analysis also shows that a careful choice of the training data makes it possible to reduce the size of the training set considerably. This is clearly demonstrated using the so-called equidistributing principle. All these aspects of SDFs are illustrated with examples.
Our results with the SDF allow us to develop a two-step algorithm for finding a face in an image. We first use the SDF-type filters to obtain estimates of the position and distortion of the face, and then refine these estimates using one of the eigenface-based techniques. This leads to a considerable reduction in computation time.
Günther, Manuel [Verfasser]. "Statistical Gabor Graph Based Techniques for the Detection, Recognition, Classification, and Visualization of Human Faces / Manuel Günther." Aachen : Shaker, 2012. http://d-nb.info/1069046140/34.
Assefa, Anteneh. "Tracing and apportioning sources of dioxins using multivariate pattern recognition techniques." Doctoral thesis, Umeå universitet, Kemiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-102877.
EcoChange
BalticPOPs
Poinsot, Audrey. "Traitements pour la reconnaissance biométrique multimodale : algorithmes et architectures." Thesis, Dijon, 2011. http://www.theses.fr/2011DIJOS010.
Including multiple sources of information in personal identity recognition reduces the limitations of each characteristic used and provides the opportunity to greatly improve performance. This thesis presents the design work done to build an efficient general-public recognition system that can be implemented on a low-cost hardware platform. The chosen solution explores the possibilities offered by multimodality, in particular the fusion of face and palmprint. The algorithmic chain consists of processing based on Gabor filters and score fusion. A real database of 130 subjects was designed and built for the study. High performance has been obtained and confirmed on a virtual database consisting of two common public biometric databases (AR and PolyU). Thanks to a comprehensive study of the architecture of DSP components and implementations carried out on a DSP belonging to the TMS320c64x family, it has been proved that it is possible to implement the system on a single DSP with short processing times. Moreover, algorithm and architecture development work for FPGA implementation has demonstrated that these times can be significantly reduced.
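The algorithmic chain described in this abstract (Gabor filtering followed by score fusion) can be outlined schematically. The snippet below is a sketch under assumptions, not the thesis pipeline: it builds the real part of one Gabor kernel and fuses two matchers' scores with a min-max-normalised sum rule, which is one plausible reading of "score fusion".

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Real part of a 2-D Gabor filter (orientation theta, given wavelength)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def fuse(face_scores, palm_scores, w=0.5):
    """Weighted sum-rule fusion after min-max normalisation of each modality."""
    fs = (face_scores - face_scores.min()) / (np.ptp(face_scores) or 1.0)
    ps = (palm_scores - palm_scores.min()) / (np.ptp(palm_scores) or 1.0)
    return w * fs + (1 - w) * ps
```

In a full system a bank of such kernels (several orientations and wavelengths) would be convolved with the face and palmprint images, and the two matchers' scores combined as above.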
Aganj, Ehsan. "Multi-view Reconstruction and Texturing=Reconstruction multi-vues et texturation." Phd thesis, Ecole des Ponts ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00517742.
Chou, Yu-Shu, and 周煜書. "Face detection and recognition based on neural techniques." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/15101749270685630676.
Full text義守大學
資訊管理學系碩士班
96
We develop and improve an algorithm that can detect faces in complex and noisy views and recognize their identities in face images. The system of face detection and recognition is divided into three stages: face detection, face location, and face recognition. Our face detection algorithm improves the framework of neural network-based face detection (NNFD) proposed by Rowley (1998). We modify the detecting area of NNFD's hidden layer; overlapping the detecting areas has a positive effect on detecting specific features from shifted face images. In the face location stage, we use a Gaussian filter to spread out the detected areas; if the value is above a threshold, that location is classified as a face. The recognition algorithm of this study uses Gaussian parameters to extract the face features. The input patterns are clustered by the fuzzy c-means algorithm [18][19]. We feed these data to train RBF neural networks and use the RBF neural classifier to recognize faces. Experimental results show that our system has better face detection efficiency and shorter neural network training time, and can accurately recognize complex face images.
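The clustering step mentioned in this abstract (fuzzy c-means before RBF training) can be sketched as follows. The data are synthetic blobs and the implementation is a generic textbook FCM, not the authors' code.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Basic fuzzy c-means: returns cluster centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))          # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated blobs: memberships should recover the grouping.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)
```

In the thesis pipeline the soft cluster memberships (or the cluster centers) would then seed the RBF network's hidden units.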
MATHUR, PALLAVI. "STUDY OF FACE RECOGNITION TECHNIQUES USING VARIOUS MOMENTS." Thesis, 2012. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14115.
Full text"Automatic segmentation and registration techniques for 3D face recognition." Thesis, 2008. http://library.cuhk.edu.hk/record=b6074674.
We firstly propose an automatic 3D face segmentation method. This method is based on a deep understanding of the 3D face image. Concepts of the proportions of the facial and nose regions are acquired from anthropometrics for locating such regions. We evaluate this segmentation method on the FRGC dataset and obtain a success rate as high as 98.87% on nose tip detection. Compared with results reported by other researchers in the literature, our method yields the highest score.
Then we propose a fully automatic registration method that can handle facial expressions with high accuracy and robustness for 3D face image alignment. In our method, the nose region, which is relatively more rigid than other facial regions in the anatomical sense, is automatically located and analyzed for computing the precise location of a symmetry plane. Extensive experiments have been conducted using the FRGC (V1.0 and V2.0) benchmark 3D face datasets to evaluate the accuracy and robustness of our registration method. Firstly, we compare its results with those of two other registration methods. One of these methods employs manually marked points on visualized face data, and the other is based on a symmetry plane analysis of the whole face region. Secondly, we combine the registration method with other face recognition modules and apply them in both face identification and verification scenarios. Experimental results show that our approach performs better than the other two methods. For example, a 97.55% Rank-1 identification rate and a 2.25% EER score are obtained by using our method for registration and the PCA method for matching on the FRGC V1.0 dataset. All these results are the highest scores ever reported using the PCA method on similar datasets.
Tang, Xinmin.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3616.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 109-117).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
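The Rank-1 identification rate reported in the abstract above counts how often a probe's most similar gallery entry carries the correct identity. A small self-contained illustration (toy scores, not FRGC data):

```python
import numpy as np

def rank1_rate(similarity, gallery_ids, probe_ids):
    """Fraction of probes whose most similar gallery entry has the right ID."""
    best = similarity.argmax(axis=1)               # one row per probe
    return float(np.mean(gallery_ids[best] == probe_ids))

# Toy example: 4 probes scored against a gallery of 3 identities.
gallery_ids = np.array([0, 1, 2])
probe_ids = np.array([0, 1, 2, 1])
similarity = np.array([
    [0.9, 0.1, 0.0],   # correct (top match is id 0)
    [0.2, 0.8, 0.1],   # correct (id 1)
    [0.3, 0.6, 0.5],   # wrong   (picks id 1, truth is 2)
    [0.1, 0.7, 0.2],   # correct (id 1)
])
rate = rank1_rate(similarity, gallery_ids, probe_ids)  # 3 of 4 probes
```

In the thesis, the similarity matrix would come from PCA-projected 3D face features after registration; the counting step is the same.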
Yadav, Govind. "Feature Extraction and Feature Selection Techniques for Face Recognition." Thesis, 2016. http://ethesis.nitrkl.ac.in/9351/1/2016_MT_GYadav.pdf.
Full text"Learning-based descriptor for 2-D face recognition." 2010. http://library.cuhk.edu.hk/record=b5894302.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 30-34).
Abstracts in English and Chinese.
Chapter 1 --- Introduction and related work --- p.1
Chapter 2 --- Learning-based descriptor for face recognition --- p.7
Chapter 2.1 --- Overview of framework --- p.7
Chapter 2.2 --- Learning-based descriptor extraction --- p.9
Chapter 2.2.1 --- Sampling and normalization --- p.9
Chapter 2.2.2 --- Learning-based encoding and histogram representation --- p.11
Chapter 2.2.3 --- PCA dimension reduction --- p.12
Chapter 2.2.4 --- Multiple LE descriptors --- p.14
Chapter 2.3 --- Pose-adaptive matching --- p.16
Chapter 2.3.1 --- Component-level face alignment --- p.17
Chapter 2.3.2 --- Pose-adaptive matching --- p.17
Chapter 2.3.3 --- Evaluations of pose-adaptive matching --- p.19
Chapter 3 --- Experiment --- p.21
Chapter 3.1 --- Results on the LFW benchmark --- p.21
Chapter 3.2 --- Results on Multi-PIE --- p.24
Chapter 4 --- Conclusion and future work --- p.27
Chapter 4.1 --- Conclusion --- p.27
Chapter 4.2 --- Future work --- p.28
Bibliography --- p.30
LAI, YU-DIAN, and 賴育鈿. "A Mirroring and Monitoring System Using Face and Emotion Recognition Techniques." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/vh4649.
Feng Chia University
Department of Information Engineering
107
Personnel management is usually required in classrooms, small companies, and laboratories, so an entrance monitoring system may be installed at the entrance and exit to facilitate management. We use face recognition technology in the entrance surveillance system to identify each person's identity. In this system, a smart mirror is designed to display user information. Managers can know the identity and emotions of members, which helps members and managers at the same time. In this study, we propose a smart mirror and entrance monitoring system using deep learning and context-aware technology. We use the VGG-Face deep learning model; with transfer learning, the system can identify people without collecting too many training photos. We propose a Top-K method to improve the accuracy of the identification system. The smart mirror can display the user's information, such as age, gender, and emotion, and identify the person. In addition, it can send message alerts to the administrator via a LINE bot. There are two types of message alerts: regular notifications and warning notifications. If the system recognizes a member, it sends a regular notification; conversely, if the system identifies a stranger, it sends a warning to notify the administrator. This smart mirror helps managers manage better. The system is suitable for small groups such as laboratories, homes, and companies.
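The "Top-K" idea mentioned in this abstract can be illustrated generically (this is a hypothetical sketch, not the authors' method): a probe counts as correctly recognized if the true identity appears among the k highest-scoring candidates.

```python
import numpy as np

def top_k_correct(scores, true_label, k=3):
    """True if the correct identity is among the k highest-scoring candidates."""
    topk = np.argsort(scores)[::-1][:k]   # indices of the k best scores
    return bool(true_label in topk)

# A recognizer's per-identity confidence scores for one probe face.
scores = np.array([0.05, 0.30, 0.25, 0.20, 0.20])
hit_at_1 = top_k_correct(scores, true_label=2, k=1)   # id 1 scores highest
hit_at_3 = top_k_correct(scores, true_label=2, k=3)   # id 2 is in the top 3
```

Relaxing from k=1 to a larger k trades precision for a higher chance of containing the right identity, which a downstream check (or a human) can then confirm.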
"Face authentication on mobile devices: optimization techniques and applications." 2005. http://library.cuhk.edu.hk/record=b5892581.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 106-111).
Abstracts in English and Chinese.
Chapter 1. --- Introduction --- p.1
Chapter 1.1 --- Background --- p.1
Chapter 1.1.1 --- Introduction to Biometrics --- p.1
Chapter 1.1.2 --- Face Recognition in General --- p.2
Chapter 1.1.3 --- Typical Face Recognition Systems --- p.4
Chapter 1.1.4 --- Face Database and Evaluation Protocol --- p.5
Chapter 1.1.5 --- Evaluation Metrics --- p.7
Chapter 1.1.6 --- Characteristics of Mobile Devices --- p.10
Chapter 1.2 --- Motivation and Objectives --- p.12
Chapter 1.3 --- Major Contributions --- p.13
Chapter 1.3.1 --- Optimization Framework --- p.13
Chapter 1.3.2 --- Real Time Principal Component Analysis --- p.14
Chapter 1.3.3 --- Real Time Elastic Bunch Graph Matching --- p.14
Chapter 1.4 --- Thesis Organization --- p.15
Chapter 2. --- Related Work --- p.16
Chapter 2.1 --- Face Recognition for Desktop Computers --- p.16
Chapter 2.1.1 --- Global Feature Based Systems --- p.16
Chapter 2.1.2 --- Local Feature Based Systems --- p.18
Chapter 2.1.3 --- Commercial Systems --- p.20
Chapter 2.2 --- Biometrics on Mobile Devices --- p.22
Chapter 3. --- Optimization Framework --- p.24
Chapter 3.1 --- Introduction --- p.24
Chapter 3.2 --- Levels of Optimization --- p.25
Chapter 3.2.1 --- Algorithm Level --- p.25
Chapter 3.2.2 --- Code Level --- p.26
Chapter 3.2.3 --- Instruction Level --- p.27
Chapter 3.2.4 --- Architecture Level --- p.28
Chapter 3.3 --- General Optimization Workflow --- p.29
Chapter 3.4 --- Summary --- p.31
Chapter 4. --- Real Time Principal Component Analysis --- p.32
Chapter 4.1 --- Introduction --- p.32
Chapter 4.2 --- System Overview --- p.33
Chapter 4.2.1 --- Image Preprocessing --- p.33
Chapter 4.2.2 --- PCA Subspace Training --- p.34
Chapter 4.2.3 --- PCA Subspace Projection --- p.36
Chapter 4.2.4 --- Template Matching --- p.36
Chapter 4.3 --- Optimization using Fixed-point Arithmetic --- p.37
Chapter 4.3.1 --- Profiling Analysis --- p.37
Chapter 4.3.2 --- Fixed-point Representation --- p.38
Chapter 4.3.3 --- Range Estimation --- p.39
Chapter 4.3.4 --- Code Conversion --- p.42
Chapter 4.4 --- Experiments and Discussions --- p.43
Chapter 4.4.1 --- Experiment Setup --- p.43
Chapter 4.4.2 --- Execution Time --- p.44
Chapter 4.4.3 --- Space Requirement --- p.45
Chapter 4.4.4 --- Verification Accuracy --- p.45
Chapter 5. --- Real Time Elastic Bunch Graph Matching --- p.49
Chapter 5.1 --- Introduction --- p.49
Chapter 5.2 --- System Overview --- p.50
Chapter 5.2.1 --- Image Preprocessing --- p.50
Chapter 5.2.2 --- Landmark Localization --- p.51
Chapter 5.2.3 --- Feature Extraction --- p.52
Chapter 5.2.4 --- Template Matching --- p.53
Chapter 5.3 --- Optimization Overview --- p.54
Chapter 5.3.1 --- Computation Optimization --- p.55
Chapter 5.3.2 --- Memory Optimization --- p.56
Chapter 5.4 --- Optimization Strategies --- p.58
Chapter 5.4.1 --- Fixed-point Arithmetic --- p.60
Chapter 5.4.2 --- Gabor Masks and Bunch Graphs Precomputation --- p.66
Chapter 5.4.3 --- Improving Array Access Efficiency using 1D Array --- p.68
Chapter 5.4.4 --- Efficient Gabor Filter Selection --- p.75
Chapter 5.4.5 --- Fine Tuning System Cache Policy --- p.79
Chapter 5.4.6 --- Reducing Redundant Memory Access by Loop Merging --- p.80
Chapter 5.4.7 --- Maximizing Cache Reuse by Array Merging --- p.90
Chapter 5.4.8 --- Optimization of Trigonometric Functions using Table Lookup --- p.97
Chapter 5.5 --- Summary --- p.99
Chapter 6. --- Conclusions --- p.103
Chapter 7. --- Bibliography --- p.106
Chang, Chia-Kai, and 張家愷. "Face Recognition Method by Integrating the Techniques of Biometrics and Principal Component Analysis." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/56719495701817415015.
Chaoyang University of Technology
Department of Information Management
102
This study proposes a face recognition method that integrates biometric features with principal component analysis (PCA). Based on this method, we construct a multi-face recognition system built on fourteen biometric features. The system improves the detection process by using two color-space models to extract face regions from the picture. For every candidate face image, we capture the biometric features, calculate the difference of facial feature vectors (DFFV), and derive feature weights with PCA; these data are then stored in a facial database for recognition. When a new face image arrives, we capture its biometric features, calculate the DFFV, and compare it with the DFFVs in the database, progressively applying the PCA-derived weights to find the closest face. Finally, through repeated testing and adjustment of the experimental procedure, we obtain an initial recognition success rate and confirm that the proposed method combining PCA and biometrics is usable. Because the method uses only a small set of biometric features to detect faces, it saves both computation time and data storage. In future work, we plan to study further biometric research and enlarge the feature set, which we expect to raise the success rate and make face recognition both fast and accurate.
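The matching pipeline this abstract outlines (feature vectors, DFFV, PCA-derived weights) can be sketched as follows. This is a minimal illustration with synthetic data: the fourteen features, the gallery, and the eigenvalue-based weighting are assumptions made for the sketch, not details taken from the thesis.

```python
import numpy as np

# Hypothetical gallery: each row is a vector of 14 biometric distance
# features for one enrolled face.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(20, 14))

# PCA on the enrolled feature vectors: eigenvalues of the covariance
# matrix provide one plausible set of per-direction weights.
mean = gallery.mean(axis=0)
centered = gallery - mean
cov = centered.T @ centered / (len(gallery) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

def dffv_distance(probe, enrolled):
    """Weighted distance between two feature vectors in the PCA basis."""
    d = (probe - enrolled) @ eigvecs            # project the difference (DFFV)
    return float(np.sum(eigvals * d ** 2))      # eigenvalue-weighted norm

# A probe built from entry 3 plus tiny noise should recover index 3.
probe = gallery[3] + rng.normal(scale=0.01, size=14)
scores = [dffv_distance(probe, g) for g in gallery]
best = int(np.argmin(scores))
print(best)
```

A Mahalanobis-style variant would weight by inverse eigenvalues instead; the abstract does not specify which weighting the authors actually use.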
"Symmetry for face analysis." 2005. http://library.cuhk.edu.hk/record=b5892640.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 51-55).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgments --- p.iv
Table of Contents --- p.v
List of Figures --- p.vii
List of Tables --- p.ix
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Reflectional Symmetry Detection --- p.1
Chapter 1.2 --- Research Progress on Face Analysis --- p.2
Chapter 1.2.1 --- Face Detection --- p.3
Chapter 1.2.2 --- Face Alignment --- p.4
Chapter 1.2.3 --- Face Recognition --- p.6
Chapter 1.3 --- Organization of this thesis --- p.8
Chapter 2 --- Local reflectional symmetry detection --- p.9
Chapter 2.1 --- Proposed Method --- p.9
Chapter 2.1.1 --- Symmetry measurement operator --- p.9
Chapter 2.1.2 --- Potential regions selection --- p.10
Chapter 2.1.3 --- Detection of symmetry axes --- p.11
Chapter 2.2 --- Experiments --- p.13
Chapter 2.2.1 --- Parameter setting and analysis --- p.13
Chapter 2.2.2 --- Experimental Results --- p.14
Chapter 3 --- Global perspective reflectional symmetry detection --- p.16
Chapter 3.1 --- Introduction of camera models --- p.16
Chapter 3.2 --- Property of Symmetric Point-Pair --- p.18
Chapter 3.3 --- Analysis and Experiment --- p.20
Chapter 3.3.1 --- Confirmative Experiments --- p.20
Chapter 3.3.2 --- Face shape generation with PSI --- p.22
Chapter 3.3.3 --- Error Analysis --- p.24
Chapter 3.3.4 --- Experiments of Pose Estimation --- p.25
Chapter 3.4 --- Summary --- p.28
Chapter 4 --- Pre-processing of face analysis --- p.30
Chapter 4.1 --- Introduction of Hough Transform --- p.30
Chapter 4.2 --- Eye Detection --- p.31
Chapter 4.2.1 --- Coarse Detection --- p.32
Chapter 4.2.2 --- Refine the eyes positions --- p.34
Chapter 4.2.3 --- Experiments and Analysis --- p.35
Chapter 4.3 --- Face Components Detection with GHT --- p.37
Chapter 4.3.1 --- Parameter Analyses --- p.38
Chapter 4.3.2 --- R-table Construction --- p.38
Chapter 4.3.3 --- Detection Procedure and Voting Strategy --- p.39
Chapter 4.3.4 --- Experiments and Analysis --- p.41
Chapter 5 --- Pose estimation with face symmetry --- p.45
Chapter 5.1 --- Key points selection --- p.45
Chapter 5.2 --- Face Pose Estimation --- p.46
Chapter 5.2.1 --- Locating eye corners --- p.46
Chapter 5.2.2 --- Analysis and Summary --- p.47
Chapter 6 --- Conclusions and future work --- p.49
Bibliography --- p.51
"Rotation-invariant face detection in grayscale images." 2005. http://library.cuhk.edu.hk/record=b5892397.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 73-78).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.ii
List of Figures --- p.viii
List of Tables --- p.ix
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Previous work --- p.2
Chapter 1.1.1 --- Learning-based approaches --- p.3
Chapter 1.1.2 --- Feature-based approaches --- p.7
Chapter 1.2 --- Thesis objective --- p.12
Chapter 1.3 --- The proposed detector --- p.13
Chapter 1.4 --- Thesis outline --- p.14
Chapter 2 --- The Edge Merging Algorithm --- p.16
Chapter 2.1 --- Edge detection --- p.16
Chapter 2.2 --- Edge breaking --- p.18
Chapter 2.2.1 --- Cross detection --- p.20
Chapter 2.2.2 --- Corner detection --- p.20
Chapter 2.3 --- Curve merging --- p.23
Chapter 2.3.1 --- The search region --- p.25
Chapter 2.3.2 --- The merging cost function --- p.27
Chapter 2.4 --- Ellipse fitting --- p.30
Chapter 2.5 --- Discussion --- p.33
Chapter 3 --- The Face Verifier --- p.35
Chapter 3.1 --- The face box --- p.35
Chapter 3.1.1 --- Face box localization --- p.36
Chapter 3.1.2 --- Conditioning the face box --- p.42
Chapter 3.2 --- Eye-mouth triangle search --- p.45
Chapter 3.3 --- Face model matching --- p.48
Chapter 3.3.1 --- Face model construction --- p.48
Chapter 3.3.2 --- Confidence of detection --- p.51
Chapter 3.4 --- Dealing with overlapped detections --- p.51
Chapter 3.5 --- Discussion --- p.53
Chapter 4 --- Experiments --- p.55
Chapter 4.1 --- The test sets --- p.55
Chapter 4.2 --- Experimental results --- p.56
Chapter 4.2.1 --- The ROC curves --- p.56
Chapter 4.3 --- Discussions --- p.61
Chapter 5 --- Conclusions --- p.69
Chapter 5.1 --- Conclusions --- p.69
Chapter 5.2 --- Suggestions for future work --- p.70
List of Original Contributions --- p.72
Bibliography --- p.73
Wang, Kai-yi, and 王凱毅. "A Real-Time Face Tracking and Recognition System Based on Particle Filtering and AdaBoosting Techniques." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/4xrvmn.
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
95
Owing to the demand for more efficient and friendly human-computer interfaces, research on face processing has grown rapidly in recent years. Beyond providing services for human beings, one of the most important characteristics of such a system is to interact naturally with people. In this thesis, the design and experimental study of a face tracking and recognition system is presented. For face tracking, we utilize a particle filter to localize faces in image sequences. Since we take the hair color of the human head into account, the filter keeps tracking even when the person's back is to the camera. We further adopt both motion and color cues as features to keep the influence of the background as low as possible. In the face recognition phase, a new architecture is proposed to achieve fast recognition. After face detection, we capture the face region and feed its wavelet-transform features into a strong classifier trained by an AdaBoost learning algorithm. Compared with other machine learning algorithms, AdaBoost has the advantage of fast convergence, so we can update the training samples to handle a wide range of circumstances without much computational cost. Finally, we develop a bottom-up hierarchical classification structure for multi-class face recognition. Experimental results reveal that the face tracking rate is more than 95% in general situations and 88% when the face suffers temporary occlusion. As for face recognition, the accuracy is more than 90%; moreover, the system's execution efficiency is very satisfactory, reaching at least 20 frames per second.
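The particle-filter tracking loop described in this abstract can be sketched in one dimension. The Gaussian likelihood below is a stand-in for the colour/motion measurement model the thesis actually uses; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles = 500
particles = rng.uniform(0, 100, n_particles)   # candidate face positions
true_pos = 60.0                                # stand-in for the observed face

for _ in range(10):                            # a few tracking iterations
    # motion model: diffuse the particles
    particles = particles + rng.normal(0, 2.0, n_particles)
    # measurement model: weight particles by likelihood of the observation
    weights = np.exp(-0.5 * ((particles - true_pos) / 5.0) ** 2)
    weights /= weights.sum()
    # systematic resampling: keep particles in proportion to their weights
    positions = (np.arange(n_particles) + rng.uniform()) / n_particles
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                     n_particles - 1)
    particles = particles[idx]

# the particle mean clusters near the true position (60)
estimate = float(particles.mean())
print(round(estimate, 1))
```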
Wang, Chih-hsin, and 汪至信. "Real-time Multi-Face Recognition and Tracking Techniques Used for the Interaction between Humans and Robots." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/05324340872357424096.
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
97
With the promotion of "intelligent life," face recognition has recently been extended to improve human-computer interfaces, and it has been broadly applied in areas such as biometric identity authentication, entrance control, and human-computer interaction. In view of these facts, a fully automatic real-time multi-face recognition and tracking system installed on a person-following robot is presented in this thesis, comprising face detection, face recognition, and face tracking procedures. For face detection, our system uses the AdaBoost technique with a structure of cascaded classifiers. In the face recognition procedure, we capture face images and apply the two-dimensional Haar wavelet transform (2D-HWT) to acquire their low-frequency data. We modify the discriminative common vectors (DCV) algorithm to set up discriminative models of the face features obtained from different persons. We then use the minimum Euclidean distance to measure the similarity between a face image and each candidate person, and decide the most likely person by majority voting over ten successive recognition results from a face image sequence. The recognition results are grouped into two classes, "master" and "stranger." The robot tracks the master continuously; after checking the class of the targets, the system proceeds to the face tracking procedure, where a two-level improved particle filter dynamically locates multiple human faces.
According to the position of the human face in the image, we issue a series of commands (move forward, turn left, or turn right) to drive the robot's wheel motors, and judge the distance between the robot and the person with the aid of a laser range finder, issuing further commands (stop or move backward) until the robot follows at a suitable distance in front of the person. Experimental results reveal that the face tracking rate is more than 97% in general situations and exceeds 82% when face occlusion occurs. As for face recognition, the correct rate is over 93% in general situations; moreover, the system runs at no less than 7 frames per second. Such performance is very satisfactory, and we are encouraged to commercialize the robot.
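The majority-voting decision over ten successive per-frame recognition results can be sketched directly (the labels below are illustrative):

```python
from collections import Counter

# Ten hypothetical per-frame recognition results for one tracked face.
frame_results = ["master", "master", "stranger", "master", "master",
                 "master", "stranger", "master", "master", "master"]

def majority_vote(labels):
    """Return the most frequent label among the per-frame results."""
    label, count = Counter(labels).most_common(1)[0]
    return label

print(majority_vote(frame_results))   # → master
```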
Lin, Yu-Ta, and 林裕達. "Real-time Visual Face Tracking and Recognition Techniques Used for the Interaction between Humans and Robots." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/50413688003753242620.
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
96
Owing to the demand for more efficient and friendly human-computer interfaces, research on face processing has grown rapidly in recent years. Besides offering services for human beings, one of the most important characteristics of a favorable system is to interact autonomously with people. Accordingly, face recognition has been broadly applied in areas such as biometric identity authentication, entrance control, and human-computer interfaces, and with the promotion of "intelligent life" it has recently been extended to improve such interfaces further. In view of these facts, a fully automatic real-time face tracking and recognition system installed on a person-following robot is presented in this thesis, comprising face tracking and face recognition procedures. Face detection is first based on skin-color blocks and geometric properties, which eliminate skin-color regions in the HSV color space that do not belong to a face. We then find the proper regions of the two eyes and the mouth according to the positions of the pupils and the mouth center, and use the isosceles triangle formed by their relative positions to judge whether the detected skin-color regions constitute a human face. In the face tracking procedure, we employ an improved particle filter to dynamically locate the face. Since we take the hair color of the human head into account, the particle filter keeps tracking even when the person's back is to the camera. We further adopt both motion and color cues as features to keep the influence of the background as low as possible.
According to the position of the human face in the image, we issue a series of commands (move forward, turn left, or turn right) to drive the robot's wheel motors, and judge the distance between the robot and the person with the aid of three ultrasonic sensors, issuing further commands (stop or move backward) until the robot follows at a suitable distance from the person. At this point the system starts the recognition procedure, which identifies whether the person is the robot's master. After face detection and tracking, we capture a face image and apply the two-dimensional Haar wavelet transform (2D-HWT) to acquire its low-frequency data, which overcomes the drawbacks of traditional face-feature extraction. We further address the shortcomings of principal component analysis (PCA), which cannot effectively discriminate between different classes, and of linear discriminant analysis (LDA), whose scatter matrix may be non-invertible, by employing the discriminative common vectors (DCV) algorithm to set up discriminative models of the face features obtained from different persons. Finally, we use the minimum Euclidean distance to measure the similarity between a face image and each candidate person, and decide the most likely person by the majority vote of ten successive recognition results from a face image sequence. Experimental results reveal that the face tracking rate is more than 95% in general situations and over 88% when the face suffers temporary occlusion. As for face recognition, the rate is more than 93% in general situations and still reaches 80% against complicated backgrounds; moreover, the system runs at no less than 5 and 2 frames per second in the face tracking and recognition modes, respectively.
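The low-frequency face data obtained via the 2D Haar wavelet transform (2D-HWT) corresponds to the LL sub-band of the transform. A minimal single-level sketch, using averaging normalisation and a toy 4x4 "image" (the input values are illustrative):

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)

def haar2d_ll(x):
    """Return the low-low (LL) sub-band of a single-level 2-D Haar transform.

    With averaging normalisation (1/2 per axis), each LL coefficient is the
    mean of a 2x2 block of the input.
    """
    cols = (x[:, 0::2] + x[:, 1::2]) / 2.0     # low-pass along columns
    rows = (cols[0::2, :] + cols[1::2, :]) / 2.0  # low-pass along rows
    return rows

ll = haar2d_ll(img)
print(ll.tolist())   # → [[2.5, 4.5], [10.5, 12.5]]
```

Repeating this on the LL sub-band gives coarser low-frequency representations, which is how such systems shrink face images before classification.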
"3D object retrieval and recognition." 2010. http://library.cuhk.edu.hk/record=b5894304.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (p. 53-59).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- 3D Object Representation --- p.1
Chapter 1.1.1 --- Polygon Mesh --- p.2
Chapter 1.1.2 --- Voxel --- p.2
Chapter 1.1.3 --- Range Image --- p.3
Chapter 1.2 --- Content-Based 3D Object Retrieval --- p.3
Chapter 1.3 --- 3D Facial Expression Recognition --- p.4
Chapter 1.4 --- Contributions --- p.5
Chapter 2 --- 3D Object Retrieval --- p.6
Chapter 2.1 --- A Conceptual Framework for 3D Object Retrieval --- p.6
Chapter 2.1.1 --- Query Formulation and User Interface --- p.7
Chapter 2.1.2 --- Canonical Coordinate Normalization --- p.8
Chapter 2.1.3 --- Representations of 3D Objects --- p.10
Chapter 2.1.4 --- Performance Evaluation --- p.11
Chapter 2.2 --- Public Databases --- p.13
Chapter 2.2.1 --- Databases of Generic 3D Objects --- p.14
Chapter 2.2.2 --- A Database of Articulated Objects --- p.15
Chapter 2.2.3 --- Domain-Specific Databases --- p.15
Chapter 2.2.4 --- Data Sets for the SHREC Contest --- p.16
Chapter 2.3 --- Experimental Systems --- p.16
Chapter 2.4 --- Challenges in 3D Object Retrieval --- p.17
Chapter 3 --- Boosting 3D Object Retrieval by Object Flexibility --- p.19
Chapter 3.1 --- Related Work --- p.19
Chapter 3.2 --- Object Flexibility --- p.21
Chapter 3.2.1 --- Definition --- p.21
Chapter 3.2.2 --- Computation of the Flexibility --- p.22
Chapter 3.3 --- A Flexibility Descriptor for 3D Object Retrieval --- p.24
Chapter 3.4 --- Enhancing Existing Methods --- p.25
Chapter 3.5 --- Experiments --- p.26
Chapter 3.5.1 --- Retrieving Articulated Objects --- p.26
Chapter 3.5.2 --- Retrieving Generic Objects --- p.27
Chapter 3.5.3 --- Experiments on Larger Databases --- p.28
Chapter 3.5.4 --- Comparison of Times for Feature Extraction --- p.31
Chapter 3.6 --- Conclusions & Analysis --- p.31
Chapter 4 --- 3D Object Retrieval with Referent Objects --- p.32
Chapter 4.1 --- 3D Object Retrieval with Prior --- p.32
Chapter 4.2 --- 3D Object Retrieval with Referent Objects --- p.34
Chapter 4.2.1 --- Natural and Man-made 3D Object Classification --- p.35
Chapter 4.2.2 --- Inferring Priors Using 3D Object Classifier --- p.36
Chapter 4.2.3 --- Reducing False Positives --- p.37
Chapter 4.3 --- Conclusions and Future Work --- p.38
Chapter 5 --- 3D Facial Expression Recognition --- p.39
Chapter 5.1 --- Introduction --- p.39
Chapter 5.2 --- Separation of BFSC and ESC --- p.43
Chapter 5.2.1 --- 3D Face Alignment --- p.43
Chapter 5.2.2 --- Estimation of BFSC --- p.44
Chapter 5.3 --- Expressional Regions and an Expression Descriptor --- p.45
Chapter 5.4 --- Experiments --- p.47
Chapter 5.4.1 --- Testing the Ratio of Preserved Energy in the BFSC Estimation --- p.47
Chapter 5.4.2 --- Comparison with Related Work --- p.48
Chapter 5.5 --- Conclusions --- p.50
Chapter 6 --- Conclusions --- p.51
Bibliography --- p.53
"An investigation into the parameters influencing neural network based facial recognition." Thesis, 2012. http://hdl.handle.net/10210/7007.
This thesis investigates facial recognition and some variables that influence the performance of such a system. First, the influence of image variability on overall recognition performance is examined; second, the performance, and hence the suitability, of a neural-network-based system is tested. Both tests are carried out on two distinctly different databases, one more variable than the other. The results indicate that the greater the variability, the more negatively the performance of a given facial recognition system is affected. They further indicate that a neural network system succeeds over a more conventional statistical system.
Sherman, George Edward. "A model of an expert computer vision and recognition facility with applications of a proportion technique." 1985. http://hdl.handle.net/2097/27537.
Chien, Chin-Hsiang, and 簡晉翔. "3D human face reconstruction and recognition by using the techniques of multi-view synthesizing method with the aids of the depth images." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/50336494037594860799.
Tamkang University
Master's Program, Department of Electrical Engineering
101
In past years, most three-dimensional reconstruction or recognition systems have used a two-dimensional image and its depth image to calculate the three-dimensional coordinates of the image before processing, which usually incurs considerable computing cost. This research proposes another approach for three-dimensional face reconstruction and recognition: a point cloud that preserves feature vectors and color information. The conventional 2D approach keeps tracking the information of each pixel of the 2D image; in contrast, the point-cloud system directly synthesizes the 2D image and its depth image into a point-cloud model with 3D coordinates, which reduces the computational complexity significantly. It can further build a KD-tree query system over the 3D coordinates to accelerate the search for key points. Three-dimensional facial reconstruction generally relies on expensive equipment such as laser scanners; in this research we instead use the Microsoft Kinect sensor to reconstruct the 3D human face. Compared with an expensive laser scanner, the Kinect is cheap and provides both a color image and a depth image. We use the Kinect to scan the human face from multiple views over 180 degrees, and then apply the iterative closest point (ICP) algorithm to register the multi-view faces, establishing the 3D database of grouped points for the human face. The 3D SIFT (Scale Invariant Feature Transform) algorithm is applied to the point-cloud data of the three-dimensional face model to extract feature key points. We then use the Euclidean distance between the three-dimensional coordinates of the feature points, together with feature weights, to determine whether two faces belong to the same person.
The experimental results show that on the GavabDB face database our approach achieves a recognition rate of 83.6%.
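The KD-tree query system mentioned above for accelerating 3-D key-point searches can be sketched with a minimal k-d tree over a synthetic point cloud (a production system would typically use a library implementation such as scipy.spatial.cKDTree):

```python
import numpy as np

rng = np.random.default_rng(2)
cloud = rng.uniform(0, 1, size=(200, 3))   # illustrative 3-D point cloud

def build(points, depth=0):
    """Recursively build a k-d tree: (point, left, right, split axis)."""
    if len(points) == 0:
        return None
    axis = depth % 3                       # cycle through x, y, z
    pts = points[points[:, axis].argsort()]
    mid = len(pts) // 2
    return (pts[mid],
            build(pts[:mid], depth + 1),
            build(pts[mid + 1:], depth + 1),
            axis)

def nearest(node, q, best=None):
    """Standard nearest-neighbour descent with plane-distance pruning."""
    if node is None:
        return best
    point, left, right, axis = node
    if best is None or np.linalg.norm(q - point) < np.linalg.norm(q - best):
        best = point
    near, far = (left, right) if q[axis] < point[axis] else (right, left)
    best = nearest(near, q, best)
    # only descend the far side if the splitting plane is close enough
    if abs(q[axis] - point[axis]) < np.linalg.norm(q - best):
        best = nearest(far, q, best)
    return best

tree = build(cloud)
query = cloud[17] + 1e-6                   # a point almost on entry 17
found = nearest(tree, query)
print(np.allclose(found, cloud[17]))       # → True
```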
G, Rajesh Babu. "Attendance Maintenance Using Face Recognition Technique." Thesis, 2014. http://raiith.iith.ac.in/114/1/CS11M09.pdf.
Tsai, Chuan-yi, and 蔡全益. "Using Stereo Vision Technique for Face Recognition." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/8py637.
National Taipei University of Technology
Department of Industrial Engineering and Management
93
Biometric measurements have received increasing interest for security applications over the last two decades; in particular, face recognition has been an active research area. The objective of this study is to develop an effective face recognition system that extracts both 2D and 3D face features to improve recognition performance. The proposed method derives 3D face information using a purpose-built stereo face system, then retrieves 2D and 3D face features with Principal Component Analysis (PCA) and Local Autocorrelation Coefficients (LAC), respectively. The feature information is finally fused and fed into a Euclidean-distance classifier and a backpropagation neural network for recognition. An experiment was conducted with 100 subjects; for each subject, thirteen stereo face images were taken with different expressions. Expressions one to seven are used for training, and the rest are used for testing. For the Euclidean-distance classifier, combining the features derived from PCA with LAC does not improve the recognition result; however, an improvement is observed with the backpropagation neural network. In general, BP outperforms the Euclidean distance in both 2D and 3D face recognition. Furthermore, the experimental results show that the proposed method effectively improves the recognition rate by combining the 2D and 3D face information.
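The fusion and Euclidean-distance classification step this abstract describes can be sketched as follows; the feature dimensions and values are illustrative stand-ins for the PCA and LAC features, not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(3)
feat2d = rng.normal(size=(10, 8))      # e.g. 2-D PCA coefficients per subject
feat3d = rng.normal(size=(10, 6))      # e.g. 3-D local autocorrelation features
gallery = np.hstack([feat2d, feat3d])  # feature-level fusion by concatenation

# A probe built from subject 4 plus small noise; the minimum Euclidean
# distance over the fused gallery identifies the subject.
probe = gallery[4] + rng.normal(scale=0.05, size=14)
dists = np.linalg.norm(gallery - probe, axis=1)
best = int(np.argmin(dists))
print(best)
```

In practice the two feature sets are often rescaled before concatenation so neither modality dominates the distance; the abstract does not say how the authors normalise them.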
Gillan, Steven. "A technique for face recognition based on image registration." Thesis, 2010. http://hdl.handle.net/1828/2548.
(9795329), Xiaolong Fan. "A feature selection and classification technique for face recognition." Thesis, 2005. https://figshare.com/articles/thesis/A_feature_selection_and_classification_technique_for_face_recognition/13457450.
Wu, Yao-Ting, and 吳曜廷. "Face Recognition and Destiny Foreseeing by Using Fuzzy Classification Technique." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/87425209250827454844.
Feng Chia University
Graduate Institute of Automatic Control Engineering
98
This paper proposes a face recognition and destiny-foreseeing system for a specified person using a fuzzy inference method. The system uses a CCD camera to take a picture of the specified person at the best distance, and applies a skin-color detection method to find the facial area by separating the skin-color range, achieving a first positioning. After this preliminary positioning, we locate the facial contour using an ellipse-template method, find the locations of the eyes and lips among the facial features, and then obtain the complete shapes of the eyes and lips separately using image processing and morphology. In this research, we classify the sample templates into classes in advance using a fuzzy classification rule, which speeds up the real-time jobs of face recognition, 3D face modeling, and destiny foreseeing. Afterward, we apply the -Norm minimization criterion to calculate the certainty degree of the recognized face and the estimated destiny. Finally, we infer the fortune-foreseeing analysis based on the shapes of the eyes, lips, and face as well as the face-feature recognition method.
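The fuzzy classification idea above, assigning a certainty degree to each template class and selecting the best, can be sketched with triangular membership functions (the class prototypes, width, and measured feature are illustrative assumptions, not the paper's actual templates):

```python
# Hypothetical template classes for one shape feature (e.g. lip width,
# normalised to [0, 1]), each represented by a prototype value.
classes = {"narrow": 0.2, "medium": 0.5, "wide": 0.8}
width = 0.25   # half-width of each triangular membership function

def membership(x, centre):
    """Triangular membership degree of x in a class with the given prototype."""
    return max(0.0, 1.0 - abs(x - centre) / width)

feature = 0.55                                   # measured shape feature
degrees = {c: membership(feature, m) for c, m in classes.items()}
best = max(degrees, key=degrees.get)             # class with highest degree
print(best, round(degrees[best], 2))             # → medium 0.8
```

The membership degree of the winning class plays the role of the certainty degree attached to the classification.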
Tsou, Chang-Lin, and 鄒昌霖. "Face Recognition using Dual-LBP Architecture Technique under Different Illumination." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/f392u2.
Tsai, Pei-Chun, and 蔡佩君. "Face detection and recognition based on fuzzy theory and neural technique." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/50182223129448794553.
I-Shou University
Master's Program, Department of Information Management
96
We develop and improve an algorithm to detect faces and recognize their identities in daily-life images with varied backgrounds. We use lower-dimensional vectors to reduce image complexity and the interference of noise in images, increasing the ability of face detection and recognition. The system is divided into three stages: face detection, face location, and face recognition. In the first stage, we use a fuzzy Gaussian classifier and a face-feature-extracting neural network to detect faces in the image. Here we roughly divide images into face and non-face images with the fuzzy Gaussian classifier: we compute the fuzzy Gaussian parameters of the input images, then accumulate the squared errors of the Gaussian parameters against the training patterns to exclude most non-face images. Next, we feed the remaining images to the feature-extracting neural network to detect faces accurately. In the face location stage, we use a Gaussian spread method to remove some false detections from the previous stage and locate the faces in the images. In the last stage, we use fuzzy c-means and a framework of parallel neural networks to recognize the faces located in the previous stage. Fuzzy c-means classifies each input image into clusters and activates the corresponding small-scale parallel neural networks to recognize the input images. Our algorithm reduces the dimensionality of the images and eliminates a great many non-face images with the classifier, so we can decrease the training time and recognize efficiently; further, we can accurately improve the detection and recognition of complex face images.
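The fuzzy c-means (FCM) step described above, which assigns each input soft memberships in several clusters so that only the matching small-scale network is activated, can be sketched in one dimension (m = 2 fuzzifier; all data and initial centres are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
# Two well-separated 1-D "feature" groups standing in for image features.
data = np.concatenate([rng.normal(0, 0.3, 20), rng.normal(5, 0.3, 20)])
c, m = 2, 2.0
centres = np.array([1.0, 4.0])                 # deliberately off-centre start

for _ in range(30):
    # point-to-centre distances (epsilon avoids division by zero)
    d = np.abs(data[:, None] - centres[None, :]) + 1e-9
    # FCM memberships: u_ik proportional to d_ik^(-2/(m-1)), rows sum to 1
    u = 1.0 / (d ** (2.0 / (m - 1.0)))
    u /= u.sum(axis=1, keepdims=True)
    # centre update: membership^m weighted mean of the data
    centres = (u.T ** m @ data) / (u.T ** m).sum(axis=1)

print(np.round(np.sort(centres), 1))   # centres settle near the two modes, 0 and 5
```

In the system above, the cluster with the highest membership for an input would decide which small-scale parallel network is activated.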