
Dissertations / Theses on the topic 'FACE RECOGNITION TECHNIQUES'


Consult the top 50 dissertations / theses for your research on the topic 'FACE RECOGNITION TECHNIQUES.'


1

Ebrahimpour-Komleh, Hossein. "Fractal techniques for face recognition." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16289/1/Hossein_Ebrahimpour-Komleh_Thesis.pdf.

Abstract:
Fractals are popular because of their ability to create complex images using only a few simple codes. This is possible by capturing image redundancy and representing the image in compressed form using its self-similarity. For many years fractals were used for image compression; in the last few years they have also been used for face recognition. In this research we present new fractal methods for recognition, especially human face recognition. This research introduces three new methods for using fractals for face recognition: the use of fractal codes directly as features, fractal image-set coding, and subfractals. In the first part, the mathematical principle behind the application of fractal image codes for recognition is investigated. An image X_f can be represented as X_f = A X_f + B, where A and B are the fractal parameters of the image. Different fractal codes can be derived for any arbitrary image. With the definition of a fractal transformation, T(X) = A(X - X_f) + X_f, we can express any image produced in the fractal decoding process starting from an arbitrary image X_0 as X_n = T^n(X_0) = A^n(X_0 - X_f) + X_f. We show that some choices for A or B lead to faster convergence to the final image. Fractal image-set coding is based on the fact that the fractal code of an arbitrary gray-scale image can be divided into two parts: geometrical parameters and luminance parameters. Because the fractal codes for an image are not unique, we can change the set of fractal parameters without significant change in the quality of the reconstructed image. Fractal image-set coding keeps the geometrical parameters the same for all images in the database; differences between images are captured in the non-geometrical, or luminance, parameters, which are faster to compute. For recognition, the fractal code of a query image is applied to all the images in the training set for one iteration, and the distance between an image and the result after one iteration defines a similarity measure between that image and the query image. The fractal code of an image is a set of contractive mappings, each of which transfers a domain block to its corresponding range block. The distribution of the domain blocks selected for the range blocks of an image depends on the image content and on the fractal encoding algorithm used. A small variation in one part of the input image may change the contents of the range and domain blocks in the fractal encoding process, resulting in changed transformation parameters in the same part, or even other parts, of the image. A subfractal is a set of fractal codes related to the range blocks of one part of the image, computed to be independent of the codes of the other parts; in this case the domain blocks nominated for each range block must be located in the same part of the image from which the range blocks come. The proposed fractal techniques were applied to face recognition using the MIT and XM2VTS face databases. Accuracies of 95% were obtained with up to 156 images.
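To make the decoding iteration above concrete, here is a minimal numeric sketch (toy data, not the thesis code): with a contractive map A, iterating X_{n+1} = A X_n + B from an arbitrary X_0 converges to the fixed point X_f, exactly as the T^n formula states.

```python
import numpy as np

# Toy sketch of fractal decoding as a fixed-point iteration.
# A is made contractive (spectral norm < 1), so that
# X_n = T^n(X_0) = A^n (X_0 - X_f) + X_f converges to X_f.
rng = np.random.default_rng(0)
n = 64                                              # toy "image" as a flat vector
A = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)  # contractive map
B = rng.standard_normal(n)

X = rng.standard_normal(n)                          # arbitrary starting image X_0
for _ in range(100):
    X = A @ X + B                                   # one decoding iteration

X_f = np.linalg.solve(np.eye(n) - A, B)             # exact fixed point of X = A X + B
print(np.linalg.norm(X - X_f))                      # ~0: converged to the final image
```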
2

Heseltine, Thomas David. "Face recognition : two-dimensional and three-dimensional techniques." Thesis, University of York, 2005. http://etheses.whiterose.ac.uk/9880/.

3

Sun, Yunlian <1986&gt. "Advanced Techniques for Face Recognition under Challenging Environments." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6355/1/sun_yunlian_tesi.pdf.

Abstract:
Automatically recognizing faces captured in uncontrolled environments has been a challenging topic for decades. In this work, we investigate cohort score normalization, which has been widely used in biometric verification, as a means of improving the robustness of face recognition in challenging environments. In particular, we introduce cohort score normalization into the undersampled face recognition problem, and we develop an effective cohort normalization method specifically for the unconstrained face pair matching problem. Extensive experiments conducted on several well-known face databases demonstrate the effectiveness of cohort normalization in these challenging scenarios. In addition, to give a proper understanding of cohort behavior, we study the impact of the number and quality of cohort samples on normalization performance. The experimental results show that a larger cohort set gives more stable, and often better, results up to a point, before the performance saturates, and that cohort samples of different quality indeed produce different cohort normalization performance. Recognizing faces that have undergone alterations is another challenging problem for current face recognition algorithms. Face image alterations can be roughly classified into two categories: unintentional (e.g., geometric transformations introduced by the acquisition device) and intentional (e.g., plastic surgery). We study the impact of these alterations on face recognition accuracy. Our results show that state-of-the-art algorithms are able to overcome limited digital alterations but are sensitive to more substantial modifications. Further, we develop two useful descriptors for detecting those alterations that significantly affect recognition performance. Finally, we propose to use the Structural Similarity (SSIM) quality map to detect and model variations due to plastic surgery. Extensive experiments conducted on a plastic surgery face database demonstrate the potential of the SSIM map for matching face images after surgery.
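The abstract does not spell out the normalization formula; one common member of the cohort normalization family it refers to is the T-norm, sketched below with hypothetical scores (the function name and numbers are illustrative only).

```python
import numpy as np

def tnorm(raw_score, cohort_scores):
    """Sketch of T-norm style cohort score normalization: a raw match
    score is re-expressed relative to the scores the same probe obtains
    against a cohort of non-matching reference subjects."""
    mu = np.mean(cohort_scores)
    sigma = np.std(cohort_scores) + 1e-8   # guard against zero spread
    return (raw_score - mu) / sigma

# Hypothetical scores: probe vs. claimed identity, and vs. 50 cohort models.
raw = 0.71
cohort = np.random.default_rng(1).normal(0.42, 0.08, size=50)
print(tnorm(raw, cohort))   # a large positive value supports acceptance
```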
4

Gul, Ahmet Bahtiyar. "Holistic Face Recognition By Dimension Reduction." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1056738/index.pdf.

Abstract:
Face recognition is a popular research area with many different approaches in the literature. In this thesis, a holistic Principal Component Analysis (PCA) based method, namely the Eigenface method, is studied in detail and three methods based on it are compared: Bayesian PCA, where a Bayesian classifier is applied after dimension reduction with PCA; Subspace Linear Discriminant Analysis (LDA), where LDA is applied after PCA; and Eigenface, where a nearest-mean classifier is applied after PCA. All three methods are implemented on the Olivetti Research Laboratory (ORL) face database, the Face Recognition Technology (FERET) database and the CNN-TURK Speakers face database, and the results are compared with respect to the effects of changes in illumination, pose and aging. Simulation results show that Subspace LDA and Bayesian PCA perform slightly better than PCA under changes in pose; however, neither performs well under changes in illumination and aging, although both still perform better than PCA.
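For readers unfamiliar with the Eigenface baseline compared here, a minimal sketch of PCA reduction followed by a nearest-mean classifier is given below (NumPy only; the variable names and the choice k=40 are assumptions, not the thesis configuration).

```python
import numpy as np

# Sketch of the Eigenface pipeline: PCA for dimension reduction, then a
# nearest-mean classifier in face space. X is a hypothetical
# (n_images, n_pixels) training matrix, y an array of class labels.
def train_eigenfaces(X, y, k=40):
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:k].T                      # top-k eigenfaces, shape (n_pixels, k)
    proj = (X - mean) @ W             # training projections in face space
    means = {c: proj[y == c].mean(axis=0) for c in np.unique(y)}
    return mean, W, means

def classify(x, mean, W, means):
    p = (x - mean) @ W                # project the probe into face space
    return min(means, key=lambda c: np.linalg.norm(p - means[c]))
```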
5

Mian, Ajmal Saeed. "Representations and matching techniques for 3D free-form object and face recognition." University of Western Australia. School of Computer Science and Software Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0046.

Abstract:
[Truncated abstract] The aim of visual recognition is to identify objects in a scene and estimate their pose. Object recognition from 2D images is sensitive to illumination, pose, clutter and occlusions. Object recognition from range data on the other hand does not suffer from these limitations. An important paradigm of recognition is model-based whereby 3D models of objects are constructed offline and saved in a database, using a suitable representation. During online recognition, a similar representation of a scene is matched with the database for recognizing objects present in the scene . . . The tensor representation is extended to automatic and pose invariant 3D face recognition. As the face is a non-rigid object, expressions can significantly change its 3D shape. Therefore, the last part of this thesis investigates representations and matching techniques for automatic 3D face recognition which are robust to facial expressions. A number of novelties are proposed in this area along with their extensive experimental validation using the largest available 3D face database. These novelties include a region-based matching algorithm for 3D face recognition, a 2D and 3D multimodal hybrid face recognition algorithm, fully automatic 3D nose ridge detection, fully automatic normalization of 3D and 2D faces, a low cost rejection classifier based on a novel Spherical Face Representation, and finally, automatic segmentation of the expression insensitive regions of a face.
6

Al-Qatawneh, Sokyna M. S. "3D Facial Feature Extraction and Recognition. An investigation of 3D face recognition: correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4876.

Abstract:
Face recognition research using automatic or semi-automatic techniques has emerged over the last two decades. One reason for growing interest in this topic is the wide range of possible applications for face recognition systems. Another reason is the emergence of affordable hardware supporting digital photography and video, which has made the acquisition of high-quality, high-resolution 2D images much more ubiquitous. However, 2D recognition systems are sensitive to variations in subject pose and illumination; 3D face recognition, which is not directly affected by such environmental changes, could be used alone or in combination with 2D recognition. Recently, with the development of more affordable 3D acquisition systems and the availability of 3D face databases, 3D face recognition has been attracting interest as a way to tackle the performance limitations of most existing 2D systems. In this research, we introduce a robust automated 3D face recognition system that operates on 3D data of faces with different facial expressions, hair, shoulders, clothing, etc., extracts discriminative features, and uses machine learning techniques to make the final decision. A novel system for automatic processing of 3D facial data has been implemented using a multi-stage architecture: in a pre-processing and registration stage the data were standardized, spikes were removed, holes were filled and the face area was extracted. The nose region, which is relatively more rigid than other facial regions in an anatomical sense, was then automatically located and analysed by computing the precise location of the symmetry plane, and useful facial features and a set of effective 3D curves were extracted. Finally, the recognition and matching stage was implemented using cascade-correlation neural networks and support vector machines for classification, and nearest-neighbour algorithms for matching. It is worth noting that the FRGC data set is the most challenging data set available supporting research on 3D face recognition, and machine learning techniques are widely recognised as appropriate and efficient classification methods.
7

Han, Xia. "Towards the Development of an Efficient Integrated 3D Face Recognition System. Enhanced Face Recognition Based on Techniques Relating to Curvature Analysis, Gender Classification and Facial Expressions." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5347.

Abstract:
The purpose of this research was to enhance methods for the development of an efficient three-dimensional face recognition system. More specifically, one aim was to investigate how the curvature of diagonal profiles, extracted from 3D facial geometry models, can help neutral face recognition. Another aim was to use a gender classifier on 3D facial geometry in order to reduce the search space of the database on which facial recognition is performed. 3D facial geometry with facial expressions poses considerable challenges for face recognition, as the face recognition research community has identified. Thus, one aim of this study was to investigate the effects of the curvature-based method on face recognition under expression variations, and another was to develop techniques that can discriminate both expression-sensitive and expression-insensitive regions for face recognition based on non-neutral face geometry models. For neutral face recognition, we developed a gender classification method using support vector machines based on measurements of the area and volume of selected regions of the face. This method reduces the initial search range of the database for a given image and hence the computational time. Subsequently, in the characterisation of the face images, a minimum feature set of diagonal profiles, which we call T-shape profiles, containing diacritic information, was determined and extracted to characterise face models, and a method based on computing curvatures of selected facial regions was used to describe this feature set. To address data with facial expressions, the curvature-based T-shape profiles were investigated first; the feature sets of the expression-invariant and expression-variant regions were determined and described by geodesic and Euclidean distances respectively. Using regression models, the correlations between expression and neutral feature sets were identified, which enabled us to discriminate expression-variant features and gave a gain in face recognition rate. The results of the study indicate that the proposed curvature-based recognition, 3D gender classification of facial geometry and analysis of facial expressions are capable of supporting face recognition with a minimum set of features, improving efficiency and computation.
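The gender-classification stage lends itself to a short sketch: an SVM over per-region area/volume measurements, shown below with made-up feature values (scikit-learn stands in for whatever toolkit the thesis actually used).

```python
from sklearn import svm

# Sketch of the gender-classification stage described above: an SVM
# trained on measured area/volume features of selected facial regions.
# Feature vectors and labels here are hypothetical placeholders.
features = [[1020.5, 310.2], [980.1, 290.7], [1180.9, 355.0]]  # [area, volume]
labels = ['female', 'female', 'male']

clf = svm.SVC(kernel='rbf', gamma='scale')
clf.fit(features, labels)
print(clf.predict([[1150.0, 348.3]]))   # restrict the gallery to this class
```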
8

Phung, Son Lam. "Automatic human face detection in color images." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2003. https://ro.ecu.edu.au/theses/1309.

Abstract:
Automatic human face detection in digital images has been an active area of research over the past decade. Among its numerous applications, face detection plays a key role in face recognition systems for biometric personal identification, face tracking for intelligent human-computer interfaces (HCI), and face segmentation for object-based video coding. Despite significant progress in the field in recent years, detecting human faces in unconstrained and complex images remains a challenging problem in computer vision. An automatic system that possesses a capability similar to the human vision system in detecting faces is still a far-reaching goal. This thesis focuses on the problem of detecting human faces in color images. Although many early face detection algorithms were designed to work on gray-scale images, strong evidence exists to suggest that face detection can be done more efficiently by taking into account the color characteristics of the human face. In this thesis, we present a complete and systematic face detection algorithm that combines the strengths of both analytic and holistic approaches to face detection. The algorithm is developed to detect quasi-frontal faces in complex color images. This face class, which represents typical detection scenarios in most practical applications of face detection, covers a wide range of face poses including all in-plane rotations and some out-of-plane rotations. The algorithm is organized into a number of cascading stages, including skin region segmentation, face candidate selection, and face verification. In each of these stages, various visual cues are utilized to narrow the search space for faces. We present a comprehensive analysis of skin detection using color pixel classification, and of the effects of factors such as the color space and the color classification algorithm on segmentation performance. We also propose a novel and efficient face candidate selection technique based on color-based eye region detection and a geometric face model. This candidate selection technique eliminates the computation-intensive step of window scanning often employed in holistic face detection, and simplifies the task of detecting rotated faces. Besides various heuristic techniques for face candidate verification, we develop face/nonface classifiers based on the naive Bayesian model, and investigate three feature extraction schemes, namely intensity, projection onto a face subspace, and edge-based features. Techniques for improving face/nonface classification are also proposed, including bootstrapping, classifier combination and the use of contextual information. On a test set of face and nonface patterns, the combination of three Bayesian classifiers has a correct detection rate of 98.6% at a false positive rate of 10%. Extensive testing has shown that the proposed face detector achieves good performance in terms of both detection rate and alignment between the detected faces and the true faces. On a test set of 200 images containing 231 faces taken from the ECU face detection database, the proposed face detector has a correct detection rate of 90.04% and makes 10 false detections. We have found that the proposed face detector is more robust in detecting in-plane rotated faces than existing face detectors.
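The skin-classification step can be summarized as a Bayes decision rule over quantized color histograms; the sketch below assumes the class-conditional histograms have already been estimated from labelled skin/non-skin pixels, and all names are illustrative.

```python
import numpy as np

# Sketch of color-pixel skin classification with a naive Bayes rule, in
# the spirit of the analysis above. p_rgb_skin / p_rgb_nonskin are
# assumed class-conditional histograms over a quantized color space.
def classify_pixels(pixels, p_rgb_skin, p_rgb_nonskin, prior_skin=0.3):
    """pixels: (N,) integer indices into the quantized color histogram."""
    post_skin = p_rgb_skin[pixels] * prior_skin
    post_non = p_rgb_nonskin[pixels] * (1.0 - prior_skin)
    return post_skin > post_non          # boolean skin mask per pixel
```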
9

Bouchech, Hamdi. "Selection of optimal narrowband multispectral images for face recognition." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS030/document.

Abstract:
The performance of face recognition systems based on RGB images degrades rapidly when they are applied under extreme illumination conditions. The use of multispectral images is a promising alternative for solving this problem. In this thesis we are interested in the use of visible multispectral images for human face recognition. Visible multispectral images are images captured at different wavelengths of the visible spectrum, which extends from 480nm to 720nm. These images exhibit characteristics that favour the recognition of human faces under particular conditions, such as an excess of illumination incident on the photographed face. Our work exploits these characteristics at different stages: optimizing the choice of the number of spectral bands to use, optimizing the chosen wavelengths, and optimizing the techniques for fusing the information extracted from the different spectral bands so as to retain more useful information and less noise. Several new approaches have been proposed in this work, with encouraging results in terms of performance. These approaches exploited several mathematical tools to solve the problems encountered, in particular the formulation of optimal spectral band selection as optimization problems, where we used the basis pursuit algorithm to determine a sparse weight vector representing the importance of the different bands. In other optimization problems, we assigned a weak classifier to each band and then combined the weak classifiers with different weights according to their importance; the Adaboost method was used to find the optimal combination. Other techniques introduced, in an original way, the multilinear decomposition of face images to build a kind of database characterizing the spectral bands; this database was used with new (test) images to determine the bands most robust to large illumination variations. The work presented in this thesis is a small contribution to face recognition using multispectral images, a timely approach which still requires further development in order to maximize its performance.
Face recognition systems based on 'conventional' images have reached a significant level of maturity with some practical successes. However, their performance may degrade under poor and/or changing illumination. Multispectral imagery represents a viable alternative to conventional imaging in the search for a robust and practical identification system. Multispectral imaging (MI) can be defined as a collection of several monochrome images of the same scene, each taken with additional receptors sensitive to other frequencies of visible light or to frequencies beyond visible light, such as the infrared region of the electromagnetic continuum; each image is referred to as a band or a channel. One weakness of MI, however, is that it may significantly increase system processing time because of the huge quantity of data to be mined; in some cases, hundreds of images are taken for each subject. In this thesis, we propose to solve this problem by developing new approaches to select the set of best visible spectral bands for face matching. For this purpose, the problem of best spectral band selection is formulated as an optimization problem in which spectral bands are constrained to maximize recognition accuracy under challenging imaging conditions. We reduce the redundancy of both spectral and spatial information without losing valuable details needed for object recognition, discrimination and classification. We have investigated several mathematical and optimization tools widely used in the field of image processing. One of the approaches we proposed formulated best spectral band selection as a pursuit problem, in which an importance weight was assigned to each spectral band and the vector of all weights was constrained to be sparse, with most of its elements zero. In another work, we assigned to each spectral band a linear discriminant analysis (LDA) based weak classifier and boosted all weak classifiers together using an Adaboost process; from this, each weak classifier obtained a weight that characterizes its importance and hence the quality of the corresponding spectral band. Several other techniques were also used for best spectral band selection, including but not limited to mixture-of-Gaussians modeling, multilinear sparse decomposition, image quality factors, local descriptors such as SURF and HGPP, and likelihood ratios. These techniques enabled us to build systems for best spectral band selection that are either static, with the same bands selected for all subjects, or dynamic, with each new subject getting its own set of best bands. The latter category, dynamic systems, is an original component of our work that, to the best of our knowledge, has not been proposed before; all existing systems are static. Finally, the proposed algorithms were compared to state-of-the-art algorithms developed for face recognition in general and for best spectral band selection in particular.
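The boosting formulation described above can be sketched as a standard AdaBoost loop over per-band weak classifiers, where the accumulated alpha of each band serves as its importance weight; this is a generic reconstruction, not the author's code.

```python
import numpy as np

# Sketch: each spectral band has a weak classifier; AdaBoost assigns
# each a weight alpha that can be read as the band's importance.
def boost_bands(weak_preds, y, rounds=None):
    """weak_preds: (n_bands, n_samples) array of +/-1 predictions,
    y: (n_samples,) array of +/-1 labels. Returns per-band alphas."""
    n_bands, n = weak_preds.shape
    w = np.full(n, 1.0 / n)                  # sample weights
    alphas = np.zeros(n_bands)
    for _ in range(rounds or n_bands):
        errs = [(w * (weak_preds[b] != y)).sum() for b in range(n_bands)]
        b = int(np.argmin(errs))             # best band on current weights
        eps = max(errs[b], 1e-12)
        if eps >= 0.5:                       # no weak learner beats chance
            break
        alpha = 0.5 * np.log((1 - eps) / eps)
        alphas[b] += alpha
        w *= np.exp(-alpha * y * weak_preds[b])   # reweight hard samples
        w /= w.sum()
    return alphas                            # larger alpha = better band
```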
10

Ben, Said Ahmed. "Multispectral imaging and its use for face recognition : sensory data enhancement." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS008/document.

Abstract:
Biometrics research has evolved considerably in recent years, especially with the development of face decomposition methods. However, these methods are not robust, particularly in uncontrolled environments. To address this problem, multispectral imaging has emerged as a new technology that can be used in face-recognition-based biometrics. Throughout this process, image quality is a major factor in designing a reliable recognition system; it is essential to have high-quality images. It is therefore necessary to develop algorithms and methods for enhancing the sensory data. This enhancement includes several tasks such as image deconvolution, deblurring, segmentation and denoising. In this thesis we study in particular noise removal and face segmentation. In general, noise is unavoidable in any application, and its removal must preserve the integrity of the information contained in the image; this requirement is essential in the design of a denoising algorithm, and the anisotropic Gaussian filter is designed specifically to meet it. We propose to extend this filter to the vector-valued case, where the data are no longer pixel values but sets of vectors whose attributes are the reflectance at a specific wavelength. We also extend the non-local means (NLM) filter to the vector-valued case; the particular strength of this kind of filter is its robustness to Gaussian noise. The second sensory-data-enhancement task is segmentation. Clustering is one of the techniques frequently used for image segmentation and classification, and cluster analysis involves developing new algorithms, particularly partitional ones. With this approach the number of clusters must be known in advance, which is not always the case, especially for data with unknown characteristics. In this thesis we propose new cluster validity indices capable of predicting the true number of clusters even for complex data. For both tasks, experiments are carried out on colour and multispectral images, using well-known image databases to analyse the proposed approach.
In this thesis, we focus on multispectral imaging for face recognition. In such an application, the quality of the image is an important factor that affects the accuracy of the recognition. However, the sensory data are in general corrupted by noise. Thus, we propose several denoising algorithms that are able to ensure a good tradeoff between noise removal and detail preservation. Furthermore, characterizing regions and details of the face can improve recognition. We also focus in this thesis on multispectral image segmentation, particularly clustering techniques and cluster analysis. The effectiveness of the proposed algorithms is illustrated by comparing them with state-of-the-art methods using both simulated and real multispectral data sets.
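The cluster-analysis side can be illustrated with a generic cluster-count selection loop; the thesis proposes its own validity indices, so the standard silhouette index below merely stands in for them.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Generic sketch of choosing the number of clusters with a validity
# index, as in the cluster analysis described above.
def best_k(pixels, k_range=range(2, 9)):
    """pixels: (N, n_bands) array of multispectral reflectance vectors."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
        scores[k] = silhouette_score(pixels, labels)
    return max(scores, key=scores.get)   # k with the best validity score
```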
11

Ferguson, Eilidh Louise. "Facial identification of children : a test of automated facial recognition and manual facial comparison techniques on juvenile face images." Thesis, University of Dundee, 2015. https://discovery.dundee.ac.uk/en/studentTheses/03679266-9552-45da-9c6d-0f062c4893c8.

Abstract:
The accurate identification of children from facial photographs could be a great asset in the fight against child sexual exploitation, and may also aid in the detection of missing juveniles where comparative material is available. The European Commission is actively pursuing a global alliance for the identification of the victims of child sexual abuse, a task which is considered to be of the utmost importance. Images of child sexual abuse are shared, copied, and distributed online, and their origin can be difficult to trace. Current investigations attempting to identify the children within such images appear to focus on determining the places or geographical regions depicted, from which victims can subsequently be tracked down and identified. Cutting-edge technology is also used to detect duplicate images in order to decrease the workload of human operators and dedicate more time to the identification of new victims. Present investigations do not appear to focus on facial information for victim identification. Methods of facial identification already exist for adults, consisting of both automated facial recognition algorithms and manual facial comparison techniques carried out by human operators. Human operator image comparison is presently the only method considered accurate enough to verify a face identity. It is only recently that researchers involved in automated facial recognition have begun to concern themselves with identification spanning childhood. Methods focus on age simulation to match query images with the age of the target database, rather than discrimination of individual faces over age progression. As far as can be determined, this is the first attempt to assess the manual comparison of juvenile faces. This study aimed to create a database of children's faces from which identification accuracy could be tested using both automated facial recognition and manual facial comparison methods that already exist for the identification of adults. A state-of-the-art facial recognition algorithm was employed, and manual facial comparison was based on current recommendations by the Facial Identification Scientific Working Group (FISWG). It was not known whether methods based on adult faces could be successfully extrapolated to juvenile faces, particularly as facial identification is highly susceptible to errors when there is an age difference between images of an individual. In children, the face changes much more rapidly with ageing than in adults, due to the rapid growth and development of the juvenile face. The results of this study are in agreement with comparisons of automated and human performance in the identification of adult faces. Overall, the automated facial recognition algorithm superseded human ability for identification of juvenile faces; however, human performance was higher for the most difficult face pairs. The average accuracy for human image comparison was 61%. There was no significant difference in juvenile identification between individuals with prior experience of adult facial comparison and those with no prior experience. For automated facial recognition, a correct identification rate of 71% was achieved at a false acceptance rate of 9%. Despite using methods created for adult facial identification, the results of this study are promising, particularly as they are based on a set of images acquired under uncontrolled conditions, which is known to increase error rates. With further augmentation of the database and investigation into child-specific identification techniques, the ability to accurately identify children from facial images is certainly a future possibility.
12

Cho, Gyuchoon. "Real Time Driver Safety System." TopSCHOLAR®, 2009. http://digitalcommons.wku.edu/theses/63.

Abstract:
The technology for driver safety has been developed in many fields, such as the airbag system, the Anti-lock Braking System (ABS), ultrasonic warning systems, and others. Recently, some automobile companies have introduced a new kind of driver safety system that slows the car if it detects the driver's drowsy eyes. For instance, Toyota Motor Corporation announced that it has given its pre-crash safety system the ability to determine whether a driver's eyes are properly open using an eye monitor. This paper focuses on finding a driver's drowsy eyes by using face detection technology. The human face is a dynamic object with a high degree of variability, which is why face detection is considered a difficult problem in computer vision. Despite the difficulty of this problem, scientists and computer programmers have developed and improved face detection technologies. This paper also introduces some algorithms to find faces or eyes and compares the algorithms' characteristics. Once a face is found in a sequence of images, the task is to find drowsy eyes for the driver safety system, which can then slow the car or alert the user not to sleep; that is the purpose of the pre-crash safety system. This paper introduces the VeriLook SDK, which is used for finding a driver's face in the real-time driver safety system. Through several experiments, this paper also introduces a new way to find drowsy eyes using an AOI (Area of Interest). This algorithm improves the speed of finding drowsy eyes and reduces memory consumption without using any object classification methods or matching eye templates. Moreover, this system achieves a higher classification accuracy than others.
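As a rough illustration of the AOI idea (a sketch under assumed inputs, not the VeriLook-based system described above): crop a fixed sub-region of the detected face box and use a cheap statistic as an open/closed cue.

```python
import numpy as np

# Sketch of an Area-of-Interest check: instead of matching eye
# templates, crop the upper part of the detected face box and use a
# contrast statistic as an open/closed cue. The face box is assumed to
# come from a separate detector; the threshold is illustrative.
def eyes_look_closed(gray_frame, face_box, thresh=12.0):
    x, y, w, h = face_box                                    # left, top, width, height
    eye_roi = gray_frame[y + h // 5 : y + h // 2, x : x + w] # upper-face AOI
    # Open eyes produce strong dark/bright structure (pupil, sclera);
    # low contrast in the AOI is taken as a "closed" cue.
    return float(eye_roi.std()) < thresh
```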
13

Peyrard, Clément. "Single image super-resolution based on neural networks for text and face recognition." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI083/document.

Abstract:
This thesis addresses super-resolution (SR) methods for improving the performance of automatic recognition systems (OCR, face recognition). SR methods generate high-resolution (HR) images from low-resolution (LR) ones. Unlike resampling by interpolation, they restore high spatial frequencies and compensate for artefacts (blur, jagged edges). Among them, machine learning methods such as artificial neural networks can learn and model the relationship between LR and HR images from examples. This work demonstrates the value of neural-network-based SR methods for automatic recognition systems. Convolutional neural networks are particularly well suited, since they can be trained to extract relevant two-dimensional non-linear features while learning the mapping between the LR and HR spaces. On document images, the proposed method improves character recognition accuracy by +7.85 points over simple interpolation. The creation of an annotated image database and the organisation of an international competition (ICDAR2015) underlined the interest and relevance of such approaches. For face images, facial features are crucial for automatic recognition. A two-step method is proposed in which image quality is first improved globally, and specific models then focus on the essential features. The performance of a face verification system is improved by +6.91 to +8.15 points. Finally, for processing LR images under real-world conditions, deep neural networks make it possible to absorb the variability of the blur kernels characterizing the LR image and to produce HR images with natural statistics without knowledge of the exact observation model.
This thesis is focused on super-resolution (SR) methods for improving automatic recognition systems (optical character recognition, face recognition) in realistic contexts. SR methods generate high-resolution images from low-resolution ones. Unlike upsampling methods such as interpolation, they restore spatial high frequencies and compensate for artefacts such as blur or jagged edges. In particular, example-based approaches learn and model the relationship between low- and high-resolution spaces via pairs of low- and high-resolution images. Artificial neural networks are among the most efficient systems for addressing this problem. This work demonstrates the interest of SR methods based on neural networks for improved automatic recognition systems. By adapting the data, it is possible to train such machine learning algorithms to produce high-resolution images. Convolutional neural networks are especially efficient, as they are trained to simultaneously extract relevant non-linear features while learning the mapping between low- and high-resolution spaces. On document text images, the proposed method improves OCR accuracy by +7.85 points compared with simple interpolation. The creation of an annotated image dataset and the organisation of an international competition (ICDAR2015) highlighted the interest and the relevance of such approaches. Moreover, if a priori knowledge is available, it can be exploited by a suitable network architecture. For facial images, face features are critical for automatic recognition. A two-step method is proposed in which image resolution is first improved, followed by specialised models that focus on the essential features. An off-the-shelf face verification system has its performance improved by +6.91 up to +8.15 points. Finally, to address the variability of real-world low-resolution images, deep neural networks make it possible to absorb the diversity of the blurring kernels that characterise low-resolution images. With a single model, high-resolution images are produced with natural image statistics, without any knowledge of the actual observation model of the low-resolution image.
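The convolutional approach described above can be sketched with a classic three-layer SR network; the layer sizes below follow the well-known SRCNN design and are not necessarily those of the thesis.

```python
import torch.nn as nn

# Sketch of a three-layer super-resolution CNN (classic SRCNN layout):
# patch extraction, non-linear mapping, and reconstruction, applied to
# a bicubically upsampled luminance channel.
class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # HR reconstruction
        )

    def forward(self, x):                                # x: (N, 1, H, W)
        return self.body(x)
```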
14

Nassar, Alaa S. N. "A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques." Thesis, University of Bradford, 2018. http://hdl.handle.net/10454/16917.

Abstract:
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis is focused on the combination of the face with the left and right irises in a unified hybrid multimodal biometric identification system using different fusion approaches at the score and rank level. Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which is based on merging the advantages of the Curvelet transform with the Fractal dimension. Secondly, a novel framework, the Multimodal Deep Face Recognition (MDFR) framework, is proposed to address the face recognition problem in unconstrained conditions by merging the advantages of local handcrafted feature descriptors with deep learning approaches. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed, whose architecture is based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier to extract discriminative features from an iris image. Finally, the performance of the unimodal and multimodal systems has been evaluated by conducting extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1 and IITD) and the SDUMLA-HMT multimodal dataset. The results obtained demonstrate the superiority of the proposed systems over previous work, achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize a person's identity.
Higher Committee for Education Development in Iraq
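Score-level fusion, one of the fusion approaches named above, reduces to a weighted combination of per-matcher scores; the weights and scores below are hypothetical, and the scores are assumed to be pre-normalized to [0, 1].

```python
import numpy as np

# Sketch of score-level fusion for a face + two-iris system: combine the
# matchers' (normalized) scores with a weighted sum.
def fuse_scores(face, left_iris, right_iris, weights=(0.5, 0.25, 0.25)):
    scores = np.array([face, left_iris, right_iris], dtype=float)
    return float(np.dot(weights, scores))    # higher = stronger match

print(fuse_scores(0.82, 0.67, 0.71))
```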
15

Visweswaran, Krishnan. "Face Recognition Technique for Blurred/Unclear Images." Thesis, California State University, Long Beach, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10263528.

Abstract:

The purpose of this project is to invalidate the existing methods for performing facial recognition. Facial recognition considers many aspects before its actual operations are performed. Many situations must also be considered such as the angle of the camera, the aspect ratio and motion of the object with respect to the camera, and the shutter speed of the camera. There are many techniques that have been implemented for face recognition, but many of these techniques have problems recognizing a blurred image. These detection problems can be eliminated by using three algorithms: illumination, blurring, and pose. These algorithms will sequentially be more effective than the existing methods and will prove a definite solution for facial recognition in a blurred state.

16

Muller, Neil Leonard. "Image recognition using the Eigenpicture Technique (with specific applications in face recognition and optical character recognition)." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/14381.

Abstract:
Includes bibliographical references.
In the first part of this dissertation, we present a detailed description of the eigenface technique first proposed by Sirovich and Kirby and subsequently developed by several groups, most notably the Media Lab at MIT. Other significant contributions have been made by Rockefeller University, whose ideas have culminated in a commercial system known as Faceit. For different techniques (i.e. not eigenfaces) and a detailed comparison of some other techniques, the reader is referred to [5]. Although we followed ideas in the open literature (we believe that there is a large body of advanced proprietary knowledge which remains inaccessible), the implementation is our own. In addition, we believe that the method for updating the eigenfaces to deal with badly represented images, presented in section 2.7, is our own. The next stage would be to develop an experimental system that can be extensively tested. At this point, however, another, nonscientific difficulty arises: that of developing an adequately large database. The basic problem is that one needs a training set representative of all faces to be encountered in future. Note that this does not mean that one can only deal with faces in the database; the whole idea is to be able to work with any facial image. However, a database is only representative if it contains images similar to anything that can be encountered in future. For this reason a representative database may be very large and is not easy to build. In addition, for testing purposes one needs multiple images of a large number of people, acquired over a period of time under different physical conditions representing the typical variations encountered in practice. Obviously this is a very slow process. Potentially the variation between the faces in the database can be large, suggesting that the representation of all these different images in terms of eigenfaces may not be particularly efficient. One idea is to separate the facial images into different, more or less homogeneous classes. Again, this can only be done with access to a sufficiently large database, probably consisting of several thousand faces.
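The updating problem mentioned above can be illustrated by the naive baseline it improves on: flag a face whose reconstruction from the current eigenfaces is poor, then refit the basis (the dissertation derives a more efficient update than the full SVD recomputation sketched here; names and the tolerance are assumptions).

```python
import numpy as np

# Sketch: detect a badly represented image via its reconstruction error,
# and fall back to refitting the eigenface basis from scratch.
def reconstruction_error(x, mean, W):
    p = (x - mean) @ W                       # project into face space
    x_hat = mean + p @ W.T                   # reconstruct from k eigenfaces
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x - mean)

def maybe_update(X_train, x_new, mean, W, tol=0.35, k=40):
    if reconstruction_error(x_new, mean, W) <= tol:
        return X_train, mean, W              # basis represents x_new well
    X = np.vstack([X_train, x_new])          # otherwise extend and refit
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return X, mean, Vt[:k].T
```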
17

Aly, Sherin Fathy Mohammed Gaber. "Techniques for Facial Expression Recognition Using the Kinect." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/89220.

Abstract:
Facial expressions convey non-verbal cues. Humans use facial expressions to show emotions, which play an important role in interpersonal relations and can be of use in many applications involving psychology, human-computer interaction, health care, e-commerce, and many others. Although humans recognize facial expressions in a scene with little or no effort, reliable expression recognition by machine is still a challenging problem. Automatic facial expression recognition (FER) has several related problems: face detection, face representation, extraction of the facial expression information, and classification of expressions, particularly under conditions of input data variability such as illumination and pose variation. A system that performs these operations accurately and in realtime would be a major step forward in achieving a human-like interaction between man and machine. This document introduces novel approaches for the automatic recognition of the basic facial expressions, namely happiness, surprise, sadness, fear, disgust, anger, and neutral, using a relatively low-resolution noisy sensor such as the Microsoft Kinect. Such sensors are capable of fast data collection, but the low-resolution noisy data present unique challenges when identifying subtle changes in appearance. This dissertation presents the work that has been done to address these challenges and the corresponding results. The lack of Kinect-based FER datasets motivated this work to build two Kinect-based RGBD+time FER datasets that include facial expressions of adults and children. To the best of our knowledge, they are the first FER-oriented datasets that include children. Availability of children's data is important for research focused on children (e.g., psychology studies on facial expressions of children with autism), and also allows researchers to carry out deeper studies on automatic FER by analyzing possible differences between data coming from adults and children. The key contributions of this dissertation are both empirical and theoretical. The empirical contributions include the design and successful testing of three FER systems that outperform existing FER systems, either when tested on public datasets or in realtime. One proposed approach automatically tunes itself to the given 3D data by identifying the best distance metric that maximizes system accuracy. Compared to traditional approaches, where a fixed distance metric is employed for all classes, the presented adaptive approach had better recognition accuracy, especially in non-frontal poses. Another proposed system combines high-dimensional feature vectors extracted from 2D and 3D modalities via a novel fusion technique. This system achieved 80% accuracy, which outperforms the state of the art on the public VT-KFER dataset by more than 13%. The third proposed system has been designed and successfully tested to recognize the six basic expressions plus neutral in realtime using only 3D data captured by the Kinect. When tested on a public FER dataset, it achieved 67% (7% higher than other 3D-based FER systems) in multi-class mode and 89% (9% higher than the state of the art) in binary mode. When the system was tested in realtime on 20 children, it achieved over 73% on a reduced set of expressions. To the best of our knowledge, this is the first known system that has been tested on a relatively large dataset of children in realtime. The theoretical contributions include 1) the development of a novel feature selection approach that ranks features based on their class separability, and 2) the development of the Dual Kernel Discriminant Analysis (DKDA) feature fusion algorithm. The latter addresses the problem of fusing high-dimensional noisy data that are highly nonlinearly distributed.
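The class-separability ranking named in the theoretical contributions can be illustrated with the classic Fisher score (the dissertation's own criterion may differ); a per-feature sketch:

```python
import numpy as np

# Sketch: rank features by between-class scatter over within-class
# scatter, computed independently per feature (the classic Fisher score).
def fisher_scores(X, y):
    """X: (n_samples, n_features), y: (n_samples,) class labels."""
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)               # higher = more separable

# ranking = np.argsort(-fisher_scores(X, y)) puts the top features first
```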
18

Marras, Ioannis. "Robust subspace learning techniques for tracking and recognition of human faces." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/41039.

Abstract:
Computer vision, in general, aims to duplicate (or in some cases compensate for) human vision, and has traditionally been used to perform routine, repetitive tasks such as classification on massive assembly lines. Today, research on computer vision is spreading so widely that it is almost impossible to itemize all of its subtopics. Despite this, one can list several relevant applications, such as face processing (i.e. face, expression, and gesture recognition), human-computer interaction, crowd surveillance, and content-based image retrieval. In this thesis, we propose subspace learning algorithms that head toward solving two important but largely understudied problems in automated face analysis: robust 2D plus 3D face tracking and robust 2D/3D face recognition in the wild. The methods we propose for the former represent pioneering work on face tracking and recognition. After describing the unsolved problems that a computer vision method for automated facial analysis has to deal with, we propose algorithms to address them. More specifically, we propose a subspace technique for robust rigid object tracking that fuses appearance models created from different modalities. The proposed learning and fusing framework is robust, exact, computationally efficient, and does not require off-line training. By using 3D information and an appropriate 3D motion model, pose and appearance are decoupled, so learning and maintaining an updated model for appearance alone becomes feasible with efficient online subspace learning schemes, achieving robust performance in very difficult tracking scenarios, including extreme pose variations. Furthermore, we propose an efficient and robust subspace-based face recognition method built on a correlation-based approach to parametric object alignment: the algorithm registers two face images by iteratively maximizing their correlation coefficient using gradient ascent together with an appropriate motion model. We show the robustness of this algorithm for face recognition in the presence of occlusions and non-uniform illumination changes. In addition, we introduce a simple, efficient, and robust subspace-based method for learning from the azimuth angles of surface normals for 3D face recognition, and show that an efficient subspace-based data representation built on the normal azimuth angles can be used for robust face recognition from facial surfaces. We demonstrate some of the favourable properties of this framework for the application of 3D face recognition. Extensions of our scheme span a wide range of theoretical topics and applications, from statistical machine learning and clustering to 3D object recognition. An important aspect of this method is that it can achieve good face recognition/verification performance using raw 3D scans without any heavy preprocessing (e.g., model fitting or surface smoothing). Finally, we propose a methodology that jointly learns a generative deformable model with minimal human intervention, using only a simple shape model of the object and images automatically downloaded from the Internet, and that also extracts features appropriate for classification.
The proposed algorithm is tested on various classification problems such as 'in-the-wild' face recognition, as well as Internet-image-based vision applications such as gender classification and eyeglasses detection on data collected automatically by querying a web image search engine.
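
To make the correlation-maximization idea above concrete, here is a minimal Python sketch (not the thesis code) of aligning one face image to another by gradient ascent on the correlation coefficient; it assumes grayscale NumPy arrays of equal size and uses a pure-translation motion model in place of the richer parametric model developed in the thesis.

import numpy as np
from scipy.ndimage import shift

def corr(a, b):
    # normalised correlation coefficient of two equally sized images
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def align_by_gradient_ascent(ref, moving, steps=200, lr=0.5, eps=0.1):
    p = np.zeros(2)                              # translation parameters (dy, dx)
    for _ in range(steps):
        base = corr(ref, shift(moving, p))
        grad = np.zeros(2)
        for i in range(2):                       # finite-difference gradient
            dp = np.zeros(2); dp[i] = eps
            grad[i] = (corr(ref, shift(moving, p + dp)) - base) / eps
        p += lr * grad                           # ascend the correlation coefficient
    return p, corr(ref, shift(moving, p))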
22

Muller, Neil. "Facial recognition, eigenfaces and synthetic discriminant functions." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51756.

Abstract:
Thesis (PhD)--University of Stellenbosch, 2001.
ENGLISH ABSTRACT: In this thesis we examine some aspects of automatic face recognition, with specific reference to the eigenface technique. We provide a thorough theoretical analysis of this technique which allows us to explain many of the results reported in the literature. It also suggests that clustering can improve the performance of the system, and we provide experimental evidence of this. From the analysis, we also derive an efficient algorithm for updating the eigenfaces. We demonstrate the ability of an eigenface-based system to represent faces efficiently (using at most forty values in our experiments) and also demonstrate our updating algorithm. Since we are concerned with aspects of face recognition, one of the important practical problems is locating the face in an image, subject to distortions such as rotation. We review two well-known methods for locating faces based on the eigenface technique. These algorithms are computationally expensive, so we illustrate how the Synthetic Discriminant Function can be used to reduce the cost. For our purposes, we propose the concept of a linearly interpolating SDF and we show how this can be used not only to locate the face, but also to estimate the extent of the distortion. We derive conditions which will ensure an SDF is linearly interpolating. We show how many of the more popular SDF-type filters are related to the classic SDF and thus extend our analysis to a wide range of SDF-type filters. Our analysis suggests that by carefully choosing the training set to satisfy our condition, we can significantly reduce the size of the training set required. This is demonstrated by using the equidistributing principle to design a suitable training set for the SDF. All this is illustrated with several examples. Our results with the SDF allow us to construct a two-stage algorithm for locating faces. We use the SDF-type filters to obtain initial estimates of the location and extent of the distortion. This information is then used by one of the more accurate eigenface-based techniques to obtain the final location from a reduced search space. This significantly reduces the computational cost of the process.
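
As a minimal illustration of the eigenface representation discussed above, the following Python sketch (assuming a NumPy matrix X with one flattened, aligned face per row) keeps 40 components, matching the "at most forty values" representation mentioned in the abstract.

import numpy as np

def train_eigenfaces(X, k=40):
    mean = X.mean(axis=0)
    # SVD of the centred data gives the eigenfaces as right singular vectors
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, eigenfaces):
    return eigenfaces @ (face - mean)        # k-dimensional face code

def reconstruct(code, mean, eigenfaces):
    return mean + eigenfaces.T @ code        # approximate face from the code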
23

Günther, Manuel [Verfasser]. "Statistical Gabor Graph Based Techniques for the Detection, Recognition, Classification, and Visualization of Human Faces / Manuel Günther." Aachen : Shaker, 2012. http://d-nb.info/1069046140/34.

24

Assefa, Anteneh. "Tracing and apportioning sources of dioxins using multivariate pattern recognition techniques." Doctoral thesis, Umeå universitet, Kemiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-102877.

Abstract:
High levels of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) in edible fish in the Baltic Sea have raised health concerns in the Baltic region and the rest of Europe. Thus, there are urgent needs to characterize sources in order to formulate effective mitigation strategies. The aim of this thesis is to contribute to a better understanding of past and present sources of PCDD/Fs in the Baltic Sea environment by exploring chemical fingerprints in sediments, air, and biota. The spatial and temporal patterns of PCDD/F distributions in the Baltic Sea during the 20th century were studied in Swedish coastal and offshore sediment cores. The results showed that PCDD/F levels peaked in 1975 (± 7 years) in coastal and 1991 (± 5 years) in offshore areas. The time trends of PCDD/Fs in the sediment cores also showed that environmental half-lives of these pollutants have been shorter in coastal than in offshore areas (15 ± 5 and 29 ± 14 years, respectively). Consequently, there have been remarkable recoveries in coastal areas, but slower recovery in offshore areas, with 81 ± 12% and 38 ± 11% reductions from peak levels, respectively. Source-to-receptor multivariate modeling by Positive Matrix Factorization (PMF) showed that six types of PCDD/F sources are and have been important for the Baltic Sea environment: PCDD/Fs related to i) atmospheric background, ii) thermal processes, iii) manufacture and use of tetra-chlorophenol (TCP) and iv) penta-chlorophenol (PCP), v) industrial use of elementary chlorine and the chloralkali process (Chl), and vi) hexa-CDD sources. The results showed that diffuse sources (i and ii) have consistently contributed >80% of the total amounts in the Southern Baltic Sea. In the Northern Baltic Sea, where the biota is most heavily contaminated, impacts of local sources (TCP, PCP and Chl) have been higher, contributing ca. 50% of total amounts. Among the six sources, only thermal and chlorophenol sources (ii-iv) have had major impacts on biota. The impact of thermal sources has, however, been declining, as shown by source-apportioned time-trend data of PCDD/Fs in Baltic herring. In contrast, impacts of chlorophenol-associated sources generally increased, remained at steady state, or slowly decreased during 1990-2010, suggesting that these sources have substantially contributed to the persistently high levels of PCDD/Fs in Baltic biota. Atmospheric sources of PCDD/Fs for the Baltic region (Northern Europe) were also investigated, and specifically whether the inclusion of parallel measurements of metals in the analysis of air would help back-tracking of sources. PCDD/Fs and metals in high-volume air samples from a rural field station near the shore of the central Baltic Sea were measured. The study focused on the winter season and air from the S and E sectors, as these samples showed elevated levels of PCDD/Fs, particularly PCDFs. Several metals were found to correlate significantly with the PCDFs. The wide range of candidate metals as source markers for PCDD/F emissions, and the lack of an up-to-date extensive compilation of source characteristics for metal emissions from various sources, limited the use of the metals as source markers. The study was not able to pinpoint primary PCDD/F sources for Baltic air, but it demonstrated a new promising approach for source tracing of air emissions.
The best leads for back-tracking primary sources of atmospheric PCDD/Fs in Baltic air were seasonal trends and PCDD/F congener patterns, pointing at non-industrial thermal sources related to heating. The non-localized nature of these sources raises challenges for managing the emissions, and thus societal efforts are required to better control atmospheric emissions of PCDD/Fs.
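
For readers unfamiliar with source-to-receptor modeling, the following Python sketch approximates the PMF idea with scikit-learn's NMF as a stand-in (true PMF additionally weights residuals by measurement uncertainty, which plain NMF does not); the data matrix here is a random placeholder with 17 congener columns and six factors, mirroring the six source types above.

import numpy as np
from sklearn.decomposition import NMF

X = np.random.rand(50, 17)               # placeholder: 50 samples x 17 PCDD/F congeners
model = NMF(n_components=6, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)               # source contributions per sample
F = model.components_                    # source profiles (chemical fingerprints)
share = G.sum(axis=0) / G.sum()          # overall contribution of each factor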
EcoChange
BalticPOPs
25

Poinsot, Audrey. "Traitements pour la reconnaissance biométrique multimodale : algorithmes et architectures." Thesis, Dijon, 2011. http://www.theses.fr/2011DIJOS010.

Abstract:
Including multiple sources of information in personal identity recognition reduces the limitations of each characteristic used and gives the opportunity to greatly improve performance. This thesis presents the design work done in order to build an efficient general-public recognition system which can be implemented on a low-cost hardware platform. The chosen solution explores the possibilities offered by multimodality, and in particular by the fusion of face and palmprint. The algorithmic chain consists of processing based on Gabor filters and score fusion. A real contactless multimodal database of 130 subjects has been designed and built for the study. High performance has been obtained and confirmed on a virtual database consisting of two common public biometric databases (AR and PolyU). Thanks to a comprehensive study of the architecture of DSP components and implementations carried out on a DSP of the TMS320c64x family, it has been shown that it is possible to implement the system on a single DSP with short processing times. Moreover, joint development of algorithms and architectures for FPGA implementation has demonstrated that these times can be reduced significantly.
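
A minimal Python sketch of the kind of processing the abstract describes: Gabor filter responses as features and a weighted-sum score fusion of the face and palmprint modalities. The kernel parameters and the fusion weight are illustrative assumptions, not the thesis settings.

import cv2
import numpy as np

def gabor_features(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    feats = []
    for theta in thetas:
        k = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                               lambd=10.0, gamma=0.5, psi=0)
        # mean filter response per orientation as a crude descriptor
        feats.append(cv2.filter2D(img, cv2.CV_32F, k).mean())
    return np.array(feats)

def fused_score(face_score, palm_score, w=0.5):
    # weighted-sum score fusion of the two modalities
    return w * face_score + (1 - w) * palm_score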
26

Aganj, Ehsan. "Multi-view Reconstruction and Texturing=Reconstruction multi-vues et texturation." Phd thesis, Ecole des Ponts ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00517742.

Abstract:
In this thesis, we study the problems of static and dynamic multi-view reconstruction and texturing, with an emphasis on real, practical applications. We propose three reconstruction methods for estimating a representation of a static/dynamic scene from a set of images/videos. We then consider the multi-view texturing problem, focusing on the visual quality of the rendering.
27

Chou, Yu-Shu, and 周煜書. "Face detection and recognition based on neural techniques." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/15101749270685630676.

Abstract:
Master's thesis
I-Shou University
Department of Information Management (Master's Program)
ROC year 96 (2007)
We develop and improve an algorithm that can detect faces in complex and noisy views and recognize their identities in face images, improving the ability of face recognition. The system of face detection and recognition is divided into three stages: face detection, face location, and face recognition. Our face detection algorithm improves the framework of neural network-based face detection (NNFD) proposed by Rowley (1998): we modify the detecting area of NNFD's hidden layer, since overlapping the detecting areas has a positive effect on detecting specific features from shifted face images. In the face location stage, we use a Gaussian filter to spread out the detected areas; if the value at a location is above a threshold, that location is classified as a face. The recognition algorithm of this study uses Gaussian parameters to extract the face features. The input patterns are clustered by the fuzzy c-means algorithm [18][19]; we feed these data to train RBF neural networks and use the RBF neural classifier to recognize faces. Experimental results show that our system detects faces more efficiently and needs less neural network training time, and that it accurately improves the recognition of complex face images.
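
The face-location step described above can be sketched in a few lines of Python: spread the raw detector responses with a Gaussian filter and classify every location whose smoothed value exceeds a threshold as a face. The sigma and threshold values are assumptions for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter

def locate_faces(response_map, sigma=3.0, threshold=0.6):
    smoothed = gaussian_filter(response_map, sigma=sigma)
    ys, xs = np.nonzero(smoothed > threshold)    # locations classified as faces
    return list(zip(ys, xs)), smoothed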
28

MATHUR, PALLAVI. "STUDY OF FACE RECOGNITION TECHNIQUES USING VARIOUS MOMENTS." Thesis, 2012. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14115.

Abstract:
The field of face recognition has been explored extensively, and work is still ongoing. In the presented work we propose a novel approach to face recognition using moments. Four methods have been used for feature extraction: Hu moments, Zernike moments, Legendre moments, and cumulants. Hu moments are a set of seven moments derived from the conventional geometric moments; they are invariant against rotation, scaling, and translation. Legendre moments and Zernike moments have an orthogonal basis set and can be used to represent an image with a minimum amount of information redundancy. They are based on the theory of orthogonal polynomials and can be used to recover an image from moment invariants. Cumulants are sensitive to image details and are therefore suitable for representing the features of images. For feature extraction, moments of different orders are calculated to form the feature vectors. The obtained feature vectors are stored in the database and are classified using three classifiers: Minimum Distance Classifier, Support Vector Machine, and K-Nearest Neighbor. For testing the proposed approach, the ORL (Olivetti Research Laboratory) database is used. It consists of 40 subjects, each with 10 orientations.
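
As a small illustration, Hu-moment features can be computed with OpenCV and matched with a minimum-distance classifier, one of the three classifiers listed above; the log-scaling step is a common practice for taming the moments' dynamic range, not necessarily the thesis procedure.

import cv2
import numpy as np

def hu_features(gray):
    m = cv2.moments(gray)
    hu = cv2.HuMoments(m).ravel()                # the seven Hu moment invariants
    # log-scale, since the raw moments span many orders of magnitude
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def minimum_distance_classify(feature, class_means):
    # class_means: dict mapping subject label -> mean feature vector
    dists = {label: np.linalg.norm(feature - mu) for label, mu in class_means.items()}
    return min(dists, key=dists.get)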
29

"Automatic segmentation and registration techniques for 3D face recognition." Thesis, 2008. http://library.cuhk.edu.hk/record=b6074674.

Abstract:
A 3D range image acquired by 3D sensing can explicitly represent a three-dimensional object's shape regardless of viewpoint and lighting variations. This technology has great potential to resolve the face recognition problem eventually. An automatic 3D face recognition system consists of three stages: facial region segmentation, registration, and recognition. The success of each stage influences the system's ultimate decision. Lately, research efforts have mainly been devoted to the final recognition stage of 3D face recognition research. In this thesis, our study focuses on segmentation and registration techniques, with the aim of providing a more solid foundation for future 3D face recognition research.
We first propose an automatic 3D face segmentation method. This method is based on a deep understanding of the 3D face image: concepts of the proportions of the facial and nose regions are taken from anthropometrics for locating those regions. We evaluate this segmentation method on the FRGC dataset and obtain a success rate as high as 98.87% on nose tip detection. Compared with results reported by other researchers in the literature, our method yields the highest score.
We then propose a fully automatic registration method that can handle facial expressions with high accuracy and robustness for 3D face image alignment. In our method, the nose region, which is relatively more rigid than other facial regions in the anatomical sense, is automatically located and analyzed for computing the precise location of a symmetry plane. Extensive experiments have been conducted using the FRGC (V1.0 and V2.0) benchmark 3D face dataset to evaluate the accuracy and robustness of our registration method. First, we compare its results with those of two other registration methods: one employs manually marked points on visualized face data, and the other is based on a symmetry plane analysis obtained from the whole face region. Second, we combine the registration method with other face recognition modules and apply them in both face identification and verification scenarios. Experimental results show that our approach performs better than the other two methods. For example, a 97.55% Rank-1 identification rate and a 2.25% EER score are obtained by using our method for registration and the PCA method for matching on the FRGC V1.0 dataset. These are the highest scores ever reported using the PCA method applied to similar datasets.
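
The two figures quoted above, Rank-1 identification rate and equal error rate (EER), can be computed as in the following hedged Python sketch, which assumes a probe-by-gallery similarity matrix, NumPy label arrays, and arrays of genuine and impostor scores (higher score = better match).

import numpy as np

def rank1_rate(score_matrix, probe_labels, gallery_labels):
    best = np.argmax(score_matrix, axis=1)               # best gallery match per probe
    return float(np.mean(gallery_labels[best] == probe_labels))

def eer(genuine, impostor):
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])   # false accept rate
    frr = np.array([(genuine < t).mean() for t in thresholds])     # false reject rate
    i = int(np.argmin(np.abs(far - frr)))                # threshold where FAR ~= FRR
    return (far[i] + frr[i]) / 2.0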
Tang, Xinmin.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3616.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 109-117).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
30

Yadav, Govind. "Feature Extraction and Feature Selection Techniques for Face Recognition." Thesis, 2016. http://ethesis.nitrkl.ac.in/9351/1/2016_MT_GYadav.pdf.

Abstract:
Face recognition is one of the most significant applications of image processing. It is a challenge to build an automated system that matches the human ability to recognize faces. Although other biometric techniques are reliable, they require the individual to interact with the system, while face recognition is a non-intrusive technique that can be performed without such interaction. Feature extraction is the most crucial part of any pattern recognition problem. In our work we applied different techniques for facial feature extraction, such as the DWT, the LWT, and the Fast Discrete Curvelet Transform. PCA was applied for dimensionality reduction, and a linear-SVM-based classifier was used for classification. The study found that features extracted with the Curvelet transform gave better results. The extracted features were also tested with different feature selection algorithms; features selected by the Conditional Redundancy criterion outperformed the other feature selection algorithms in face recognition.
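
One pipeline variant from the abstract, DWT features followed by PCA and a linear SVM, might look like the Python sketch below; the wavelet choice ("haar") and the component count are illustrative assumptions, and all images are assumed to be equally sized grayscale arrays.

import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def dwt_features(img):
    # single-level 2D DWT: approximation plus three detail subbands
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

def build_classifier(face_images, labels):
    X = np.array([dwt_features(f) for f in face_images])
    clf = make_pipeline(PCA(n_components=50), LinearSVC())
    return clf.fit(X, labels)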
31

"Learning-based descriptor for 2-D face recognition." 2010. http://library.cuhk.edu.hk/record=b5894302.

Abstract:
Cao, Zhimin.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 30-34).
Abstracts in English and Chinese.
Chapter 1 --- Introduction and related work --- p.1
Chapter 2 --- Learning-based descriptor for face recognition --- p.7
Chapter 2.1 --- Overview of framework --- p.7
Chapter 2.2 --- Learning-based descriptor extraction --- p.9
Chapter 2.2.1 --- Sampling and normalization --- p.9
Chapter 2.2.2 --- Learning-based encoding and histogram representation --- p.11
Chapter 2.2.3 --- PCA dimension reduction --- p.12
Chapter 2.2.4 --- Multiple LE descriptors --- p.14
Chapter 2.3 --- Pose-adaptive matching --- p.16
Chapter 2.3.1 --- Component-level face alignment --- p.17
Chapter 2.3.2 --- Pose-adaptive matching --- p.17
Chapter 2.3.3 --- Evaluations of pose-adaptive matching --- p.19
Chapter 3 --- Experiment --- p.21
Chapter 3.1 --- Results on the LFW benchmark --- p.21
Chapter 3.2 --- Results on Multi-PIE --- p.24
Chapter 4 --- Conclusion and future work --- p.27
Chapter 4.1 --- Conclusion --- p.27
Chapter 4.2 --- Future work --- p.28
Bibliography --- p.30
32

LAI, YU-DIAN, and 賴育鈿. "A Mirroring and Monitoring System Using Face and Emotion Recognition Techniques." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/vh4649.

Abstract:
Master's thesis
Feng Chia University
Department of Information Engineering
ROC year 107 (2018)
Personnel management is usually required in classrooms, small companies, and laboratories, so an entrance monitoring system may be installed at the entrance and exit to facilitate management. We use face recognition technology in the entrance surveillance system to identify each person's identity. In this system, a smart mirror is designed to display user information; managers can know the identity and emotions of members, which helps members and managers at the same time. In this study, we propose a smart mirror and entrance monitoring system using deep learning and context-aware technology. We use the VGG-Face deep learning model; with transfer learning, the system can identify users without collecting many training photos. We propose a Top-K method to improve the accuracy of the identification system. The smart mirror can display the user's information, such as age, gender, and emotion, and identify the person. In addition, it can send message alerts to the administrator via a LINE bot. There are two types of message alert: regular notifications and warning notifications. If the system recognizes a member, it sends a regular notification; conversely, if the system identifies a stranger, it sends a warning to notify the administrator. This smart mirror helps managers manage better. The system is suitable for small groups such as laboratories, homes, and companies.
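
A hedged Python sketch of a Top-K identification rule of the kind the abstract mentions (the thesis details may differ): the K gallery embeddings closest to the probe vote on the identity, and a weak majority is treated as a stranger so that a warning notification can be sent.

import numpy as np
from collections import Counter

def top_k_identify(probe, gallery_embeddings, gallery_labels, k=5, reject=0.6):
    d = np.linalg.norm(gallery_embeddings - probe, axis=1)   # distances to gallery
    nearest = np.argsort(d)[:k]                              # K closest embeddings
    label, votes = Counter(gallery_labels[i] for i in nearest).most_common(1)[0]
    # weak majorities are treated as strangers (an assumption for this sketch)
    return label if votes / k >= reject else "stranger"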
33

"Face authentication on mobile devices: optimization techniques and applications." 2005. http://library.cuhk.edu.hk/record=b5892581.

Abstract:
Pun Kwok Ho.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 106-111).
Abstracts in English and Chinese.
Chapter 1. --- Introduction --- p.1
Chapter 1.1 --- Background --- p.1
Chapter 1.1.1 --- Introduction to Biometrics --- p.1
Chapter 1.1.2 --- Face Recognition in General --- p.2
Chapter 1.1.3 --- Typical Face Recognition Systems --- p.4
Chapter 1.1.4 --- Face Database and Evaluation Protocol --- p.5
Chapter 1.1.5 --- Evaluation Metrics --- p.7
Chapter 1.1.6 --- Characteristics of Mobile Devices --- p.10
Chapter 1.2 --- Motivation and Objectives --- p.12
Chapter 1.3 --- Major Contributions --- p.13
Chapter 1.3.1 --- Optimization Framework --- p.13
Chapter 1.3.2 --- Real Time Principal Component Analysis --- p.14
Chapter 1.3.3 --- Real Time Elastic Bunch Graph Matching --- p.14
Chapter 1.4 --- Thesis Organization --- p.15
Chapter 2. --- Related Work --- p.16
Chapter 2.1 --- Face Recognition for Desktop Computers --- p.16
Chapter 2.1.1 --- Global Feature Based Systems --- p.16
Chapter 2.1.2 --- Local Feature Based Systems --- p.18
Chapter 2.1.3 --- Commercial Systems --- p.20
Chapter 2.2 --- Biometrics on Mobile Devices --- p.22
Chapter 3. --- Optimization Framework --- p.24
Chapter 3.1 --- Introduction --- p.24
Chapter 3.2 --- Levels of Optimization --- p.25
Chapter 3.2.1 --- Algorithm Level --- p.25
Chapter 3.2.2 --- Code Level --- p.26
Chapter 3.2.3 --- Instruction Level --- p.27
Chapter 3.2.4 --- Architecture Level --- p.28
Chapter 3.3 --- General Optimization Workflow --- p.29
Chapter 3.4 --- Summary --- p.31
Chapter 4. --- Real Time Principal Component Analysis --- p.32
Chapter 4.1 --- Introduction --- p.32
Chapter 4.2 --- System Overview --- p.33
Chapter 4.2.1 --- Image Preprocessing --- p.33
Chapter 4.2.2 --- PCA Subspace Training --- p.34
Chapter 4.2.3 --- PCA Subspace Projection --- p.36
Chapter 4.2.4 --- Template Matching --- p.36
Chapter 4.3 --- Optimization using Fixed-point Arithmetic --- p.37
Chapter 4.3.1 --- Profiling Analysis --- p.37
Chapter 4.3.2 --- Fixed-point Representation --- p.38
Chapter 4.3.3 --- Range Estimation --- p.39
Chapter 4.3.4 --- Code Conversion --- p.42
Chapter 4.4 --- Experiments and Discussions --- p.43
Chapter 4.4.1 --- Experiment Setup --- p.43
Chapter 4.4.2 --- Execution Time --- p.44
Chapter 4.4.3 --- Space Requirement --- p.45
Chapter 4.4.4 --- Verification Accuracy --- p.45
Chapter 5. --- Real Time Elastic Bunch Graph Matching --- p.49
Chapter 5.1 --- Introduction --- p.49
Chapter 5.2 --- System Overview --- p.50
Chapter 5.2.1 --- Image Preprocessing --- p.50
Chapter 5.2.2 --- Landmark Localization --- p.51
Chapter 5.2.3 --- Feature Extraction --- p.52
Chapter 5.2.4 --- Template Matching --- p.53
Chapter 5.3 --- Optimization Overview --- p.54
Chapter 5.3.1 --- Computation Optimization --- p.55
Chapter 5.3.2 --- Memory Optimization --- p.56
Chapter 5.4 --- Optimization Strategies --- p.58
Chapter 5.4.1 --- Fixed-point Arithmetic --- p.60
Chapter 5.4.2 --- Gabor Masks and Bunch Graphs Precomputation --- p.66
Chapter 5.4.3 --- Improving Array Access Efficiency using 1D array --- p.68
Chapter 5.4.4 --- Efficient Gabor Filter Selection --- p.75
Chapter 5.4.5 --- Fine Tuning System Cache Policy --- p.79
Chapter 5.4.6 --- Reducing Redundant Memory Access by Loop Merging --- p.80
Chapter 5.4.7 --- Maximizing Cache Reuse by Array Merging --- p.90
Chapter 5.4.8 --- Optimization of Trigonometric Functions using Table Lookup --- p.97
Chapter 5.5 --- Summary --- p.99
Chapter 6. --- Conclusions --- p.103
Chapter 7. --- Bibliography --- p.106
34

Chang, Chia-Kai, and 張家愷. "Face Recognition Method by Integrating the Techniques of Biometrics and Principal Component Analysis." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/56719495701817415015.

Abstract:
Master's thesis
Chaoyang University of Technology
Department of Information Management
ROC year 102 (2013)
This study proposes a face recognition method that integrates biometrics and principal component analysis (PCA). Based on this method, we construct a multi-face recognition system built on fourteen biometric features. The detection process is improved by using two color-space models to extract face regions from the picture. We capture the biometric features from every candidate face image, calculate the difference of facial feature vectors (DFFV), and find the weights of the feature vector by PCA. These data are stored in a facial database for face recognition. When a new face image arrives, we capture its biometric features, calculate the DFFV, and compare it with the DFFVs in the database, progressively using the weights obtained from PCA to find the closest face. Finally, by repeatedly testing and tuning the experimental procedure, we obtained an initial recognition success rate and confirmed that our face recognition method combining PCA and biometrics is workable. Because this method uses only a small set of biometric features to detect faces, it saves computation time and data volume. In the future, further research on biometrics and an enlarged feature set should improve the success rate and make face recognition both fast and accurate.
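
A minimal Python sketch of the matching step described above, assuming each face is summarized by a fourteen-dimensional biometric feature vector and that the per-feature weights come from PCA; the names and the weighted-Euclidean form are illustrative assumptions.

import numpy as np

def weighted_match(probe_dffv, database_dffvs, labels, weights):
    # weighted Euclidean distance between difference-of-feature vectors
    d = np.sqrt((((database_dffvs - probe_dffv) ** 2) * weights).sum(axis=1))
    return labels[int(np.argmin(d))]         # closest face in the database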
35

"Symmetry for face analysis." 2005. http://library.cuhk.edu.hk/record=b5892640.

Abstract:
Yuan Tianqiang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 51-55).
Abstracts in English and Chinese.
abstract --- p.i
acknowledgments --- p.iv
table of contents --- p.v
list of figures --- p.vii
list of tables --- p.ix
Chapter Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Reflectional Symmetry Detection --- p.1
Chapter 1.2 --- Research Progress on Face Analysis --- p.2
Chapter 1.2.1 --- Face Detection --- p.3
Chapter 1.2.2 --- Face Alignment --- p.4
Chapter 1.2.3 --- Face Recognition --- p.6
Chapter 1.3 --- Organization of this thesis --- p.8
Chapter Chapter 2 --- Local reflectional symmetry detection --- p.9
Chapter 2.1 --- Proposed Method --- p.9
Chapter 2.1.1 --- Symmetry measurement operator --- p.9
Chapter 2.1.2 --- Potential regions selection --- p.10
Chapter 2.1.3 --- Detection of symmetry axes --- p.11
Chapter 2.2 --- Experiments --- p.13
Chapter 2.2.1 --- Parameter setting and analysis --- p.13
Chapter 2.2.2 --- Experimental Results --- p.14
Chapter Chapter 3 --- Global perspective reflectional symmetry detection --- p.16
Chapter 3.1 --- Introduction of camera models --- p.16
Chapter 3.2 --- Property of Symmetric Point-Pair --- p.18
Chapter 3.3 --- Analysis and Experiment --- p.20
Chapter 3.3.1 --- Confirmative Experiments --- p.20
Chapter 3.3.2 --- Face shape generation with PSI --- p.22
Chapter 3.3.3 --- Error Analysis --- p.24
Chapter 3.3.4 --- Experiments of Pose Estimation --- p.25
Chapter 3.4 --- Summary --- p.28
Chapter Chapter 4 --- Pre-processing of face analysis --- p.30
Chapter 4.1 --- Introduction of Hough Transform --- p.30
Chapter 4.2 --- Eye Detection --- p.31
Chapter 4.2.1 --- Coarse Detection --- p.32
Chapter 4.2.2 --- Refine the eyes positions --- p.34
Chapter 4.2.3 --- Experiments and Analysis --- p.35
Chapter 4.3 --- Face Components Detection with GHT --- p.37
Chapter 4.3.1 --- Parameter Analyses --- p.38
Chapter 4.3.2 --- R-table Construction --- p.38
Chapter 4.3.3 --- Detection Procedure and Voting Strategy --- p.39
Chapter 4.3.4 --- Experiments and Analysis --- p.41
Chapter Chapter 5 --- Pose estimation with face symmetry --- p.45
Chapter 5.1 --- Key points selection --- p.45
Chapter 5.2 --- Face Pose Estimation --- p.46
Chapter 5.2.1 --- Locating eye corners --- p.46
Chapter 5.2.2 --- Analysis and Summary --- p.47
Chapter Chapter 6 --- Conclusions and future work --- p.49
bibliography --- p.51
36

"Rotation-invariant face detection in grayscale images." 2005. http://library.cuhk.edu.hk/record=b5892397.

Abstract:
Zhang Wei.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 73-78).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.ii
List of Figures --- p.viii
List of Tables --- p.ix
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Previous work --- p.2
Chapter 1.1.1 --- Learning-based approaches --- p.3
Chapter 1.1.2 --- Feature-based approaches --- p.7
Chapter 1.2 --- Thesis objective --- p.12
Chapter 1.3 --- The proposed detector --- p.13
Chapter 1.4 --- Thesis outline --- p.14
Chapter 2 --- The Edge Merging Algorithm --- p.16
Chapter 2.1 --- Edge detection --- p.16
Chapter 2.2 --- Edge breaking --- p.18
Chapter 2.2.1 --- Cross detection --- p.20
Chapter 2.2.2 --- Corner detection --- p.20
Chapter 2.3 --- Curve merging --- p.23
Chapter 2.3.1 --- The search region --- p.25
Chapter 2.3.2 --- The merging cost function --- p.27
Chapter 2.4 --- Ellipse fitting --- p.30
Chapter 2.5 --- Discussion --- p.33
Chapter 3 --- The Face Verifier --- p.35
Chapter 3.1 --- The face box --- p.35
Chapter 3.1.1 --- Face box localization --- p.36
Chapter 3.1.2 --- Conditioning the face box --- p.42
Chapter 3.2 --- Eye-mouth triangle search --- p.45
Chapter 3.3 --- Face model matching --- p.48
Chapter 3.3.1 --- Face model construction --- p.48
Chapter 3.3.2 --- Confidence of detection --- p.51
Chapter 3.4 --- Dealing with overlapped detections --- p.51
Chapter 3.5 --- Discussion --- p.53
Chapter 4 --- Experiments --- p.55
Chapter 4.1 --- The test sets --- p.55
Chapter 4.2 --- Experimental results --- p.56
Chapter 4.2.1 --- The ROC curves --- p.56
Chapter 4.3 --- Discussions --- p.61
Chapter 5 --- Conclusions --- p.69
Chapter 5.1 --- Conclusions --- p.69
Chapter 5.2 --- Suggestions for future work --- p.70
List of Original Contributions --- p.72
Bibliography --- p.73
37

Wang, Kai-yi, and 王凱毅. "A Real-Time Face Tracking and Recognition System Based on Particle Filtering and AdaBoosting Techniques." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/4xrvmn.

Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
ROC year 95 (2006)
Owing to the demand for more efficient and friendly human-computer interfaces, research on face processing has grown rapidly in recent years. In addition to providing services for human beings, one of the most important characteristics of such a system is to interact naturally with people. In this thesis, the design and an experimental study of a face tracking and recognition system are presented. For face tracking, we utilize a particle filter to localize faces in image sequences. Since we consider the hair color information of the human head, tracking continues even when the person's back is turned to the camera. We further adopt both motion and color cues as features to keep the influence of the background as low as possible. In the face recognition phase, a new architecture is proposed to achieve fast recognition. After face detection, we capture the face region and feed its features, derived from the wavelet transform, into a strong classifier trained by an AdaBoost learning algorithm. Compared with other machine learning algorithms, AdaBoost has the advantage of fast convergence; thus we can update the training samples to deal with varied circumstances without spending much computational cost. Finally, we develop a bottom-up hierarchical classification structure for multi-class face recognition. Experimental results reveal that the face tracking rate is more than 95% in general situations and 88% when the face suffers temporary occlusion. As for face recognition, the accuracy is more than 90%; besides this, the system execution efficiency is very satisfactory, reaching at least 20 frames per second.
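
A minimal bootstrap particle filter sketch in Python for tracking a face centre in the image plane, in the spirit of the tracker described above; the likelihood (colour and motion cues in the thesis) is left as a user-supplied callable, and the motion noise level is an assumption.

import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_std=5.0):
    n = len(particles)
    # predict: diffuse particles (n x 2 positions) with a random-walk motion model
    particles = particles + np.random.normal(0, motion_std, particles.shape)
    # update: weight each particle by how face-like its location looks
    weights = weights * np.array([likelihood(p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    # resample when the effective sample size collapses
    if 1.0 / (weights ** 2).sum() < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    estimate = (particles * weights[:, None]).sum(axis=0)   # weighted mean position
    return particles, weights, estimate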
38

Wang, Chih-hsin, and 汪至信. "Real-time Multi-Face Recognition and Tracking Techniques Used for the Interaction between Humans and Robots." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/05324340872357424096.

Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
ROC year 97 (2008)
More recently, face recognition techniques have been markedly extended to improve human-computer interfaces, driven by the promotion of "intelligent life." Face recognition has been broadly applied in many areas in recent years, such as biometric identity authentication, entrance guards, and human-computer interfaces. In view of these facts, a completely automatic real-time multi-face recognition and tracking system installed on a person-following robot is presented in this thesis, comprising face detection, face recognition, and face tracking procedures. For face detection, the AdaBoost technique with a structure of cascaded classifiers is adopted to detect human faces. In the face recognition procedure, we capture face images and apply the two-dimensional Haar wavelet transform (2D-HWT) to acquire the low-frequency data of the face images. We modify the discriminative common vectors (DCV) algorithm to set up discriminative models of the face features of different persons. We then use the minimum Euclidean distance to measure the similarity between a face image and a candidate person, and decide the most likely person by majority voting over ten successive recognition results from a face image sequence; the recognition results are grouped into two classes, "master" and "stranger." The robot tracks the master unceasingly; after checking the class of the targets, the system enters the face tracking procedure. Here, we employ a two-level improved particle filter to dynamically locate multiple human faces. According to the position of a human face in the image, we issue commands (move forward, turn left, or turn right) to drive the wheel motors of the robot, and judge the distance between the robot and the person with the aid of a laser range finder to issue stop or backward commands until the robot follows at a suitable distance in front of the person. Experimental results reveal that the face tracking rate is more than 97% in general situations and exceeds 82% when face occlusion happens. As for face recognition, the correct rate is over 93% in general situations; besides this, the system runs at a rate of at least 7 frames per second. Such performance is very satisfactory, and we are encouraged to commercialize the robot.
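
The decision rule described above, majority voting over ten successive per-frame recognition results, reduces to a few lines of Python:

from collections import Counter

def majority_vote(frame_results):
    # frame_results: e.g. ten per-frame identity strings from the DCV matcher
    label, count = Counter(frame_results).most_common(1)[0]
    # no clear majority: treat as a stranger (an assumption for this sketch)
    return label if count > len(frame_results) // 2 else "stranger"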
39

Lin, Yu-Ta, and 林裕達. "Real-time Visual Face Tracking and Recognition Techniques Used for the Interaction between Humans and Robots." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/50413688003753242620.

Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
ROC year 96 (2007)
Owing to the demand for more efficient and friendly human-computer interfaces, research on face processing has grown rapidly in recent years. In addition to offering services for human beings, one of the most important characteristics of a favorable system is to interact autonomously with people. Accordingly, face recognition has been broadly applied in areas such as biometric identity authentication, entrance guards, and human-computer interfaces, and more recently it has been extended to optimizing such interfaces, driven by the promotion of "intelligent life." In view of these facts, a completely automatic real-time face tracking and recognition system installed on a person-following robot is presented in this thesis, comprising face tracking and face recognition procedures. Face detection is first based on skin-color blocks and geometric properties, which eliminate skin-color regions in the HSV color space that do not belong to a face. We then find the proper ranges of the two eyes and the mouth according to the positions of the pupils and the center of the mouth, and use the isosceles triangle formed by the relative positions of the two eyes and the mouth to judge whether a detected skin-color region is a human face. In the face tracking procedure, we employ an improved particle filter to dynamically locate a human face. Since we consider the hair color information of the human head, the particle filter keeps tracking even when the person's back is turned to the camera. We further adopt both motion and color cues as features to keep the influence of the background as low as possible. According to the position of the human face in the image, we issue commands (move forward, turn left, or turn right) to drive the wheel motors of the robot, and judge the distance between the robot and the person with the aid of three ultrasonic sensors to issue stop or backward commands until the robot follows at a suitable distance from the person. At this moment, the system starts the recognition procedure, which identifies whether the person is the master of the robot. In the face recognition procedure, after face detection and tracking we capture a face image and apply the two-dimensional Haar wavelet transform (2D-HWT) to acquire its low-frequency data; this overcomes the drawbacks of extracting face features in traditional ways. Additionally, we address the shortcomings of principal component analysis (PCA), which cannot effectively discriminate between classes, and of linear discriminant analysis (LDA), whose scatter matrix may have no inverse, by employing the discriminative common vectors (DCV) algorithm to set up discriminative models of the face features of different persons. Finally, we use the minimum Euclidean distance to measure the similarity between the face image and a candidate person, and decide the most likely person by majority vote over ten successive recognition results from a face image sequence. Experimental results reveal that the face tracking rate is more than 95% in general situations and over 88% when the face suffers temporary occlusion.
As for face recognition, the rate is more than 93% in general situations and still reaches 80% in complicated backgrounds; besides this, the system execution efficiency is very satisfactory, attaining at least 5 and 2 frames per second in the face tracking and recognition modes, respectively.
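
The HSV skin-colour step in the abstract can be sketched with OpenCV as below; the threshold ranges are common illustrative values, not the calibration used in the thesis.

import cv2
import numpy as np

def skin_mask(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # hue, saturation, value bounds
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # morphological opening drops small blobs so face-sized regions survive
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask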
40

"3D object retrieval and recognition." 2010. http://library.cuhk.edu.hk/record=b5894304.

Abstract:
Gong, Boqing.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (p. 53-59).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- 3D Object Representation --- p.1
Chapter 1.1.1 --- Polygon Mesh --- p.2
Chapter 1.1.2 --- Voxel --- p.2
Chapter 1.1.3 --- Range Image --- p.3
Chapter 1.2 --- Content-Based 3D Object Retrieval --- p.3
Chapter 1.3 --- 3D Facial Expression Recognition --- p.4
Chapter 1.4 --- Contributions --- p.5
Chapter 2 --- 3D Object Retrieval --- p.6
Chapter 2.1 --- A Conceptual Framework for 3D Object Retrieval --- p.6
Chapter 2.1.1 --- Query Formulation and User Interface --- p.7
Chapter 2.1.2 --- Canonical Coordinate Normalization --- p.8
Chapter 2.1.3 --- Representations of 3D Objects --- p.10
Chapter 2.1.4 --- Performance Evaluation --- p.11
Chapter 2.2 --- Public Databases --- p.13
Chapter 2.2.1 --- Databases of Generic 3D Objects --- p.14
Chapter 2.2.2 --- A Database of Articulated Objects --- p.15
Chapter 2.2.3 --- Domain-Specific Databases --- p.15
Chapter 2.2.4 --- Data Sets for the Shrec Contest --- p.16
Chapter 2.3 --- Experimental Systems --- p.16
Chapter 2.4 --- Challenges in 3D Object Retrieval --- p.17
Chapter 3 --- Boosting 3D Object Retrieval by Object Flexibility --- p.19
Chapter 3.1 --- Related Work --- p.19
Chapter 3.2 --- Object Flexibility --- p.21
Chapter 3.2.1 --- Definition --- p.21
Chapter 3.2.2 --- Computation of the Flexibility --- p.22
Chapter 3.3 --- A Flexibility Descriptor for 3D Object Retrieval --- p.24
Chapter 3.4 --- Enhancing Existing Methods --- p.25
Chapter 3.5 --- Experiments --- p.26
Chapter 3.5.1 --- Retrieving Articulated Objects --- p.26
Chapter 3.5.2 --- Retrieving Generic Objects --- p.27
Chapter 3.5.3 --- Experiments on Larger Databases --- p.28
Chapter 3.5.4 --- Comparison of Times for Feature Extraction --- p.31
Chapter 3.6 --- Conclusions & Analysis --- p.31
Chapter 4 --- 3D Object Retrieval with Referent Objects --- p.32
Chapter 4.1 --- 3D Object Retrieval with Prior --- p.32
Chapter 4.2 --- 3D Object Retrieval with Referent Objects --- p.34
Chapter 4.2.1 --- Natural and Man-made 3D Object Classification --- p.35
Chapter 4.2.2 --- Inferring Priors Using 3D Object Classifier --- p.36
Chapter 4.2.3 --- Reducing False Positives --- p.37
Chapter 4.3 --- Conclusions and Future Work --- p.38
Chapter 5 --- 3D Facial Expression Recognition --- p.39
Chapter 5.1 --- Introduction --- p.39
Chapter 5.2 --- Separation of BFSC and ESC --- p.43
Chapter 5.2.1 --- 3D Face Alignment --- p.43
Chapter 5.2.2 --- Estimation of BFSC --- p.44
Chapter 5.3 --- Expressional Regions and an Expression Descriptor --- p.45
Chapter 5.4 --- Experiments --- p.47
Chapter 5.4.1 --- Testing the Ratio of Preserved Energy in the BFSC Estimation --- p.47
Chapter 5.4.2 --- Comparison with Related Work --- p.48
Chapter 5.5 --- Conclusions --- p.50
Chapter 6 --- Conclusions --- p.51
Bibliography --- p.53
41

"An investigation into the parameters influencing neural network based facial recognition." Thesis, 2012. http://hdl.handle.net/10210/7007.

Abstract:
D.Ing.
This thesis deals with an investigation into facial recognition and some variables that influence the performance of such a system. First, the influence of image variability on overall recognition performance is investigated; second, the performance and subsequent suitability of a neural network based system are tested. Both tests are carried out on two distinctly different databases, one more variable than the other. The results indicate that the greater the amount of variability, the more negatively affected the performance rating of a specific facial recognition system. The results further indicate the success of the neural network based implementation over a more conventional statistical system.
42

Sherman, George Edward. "A model of an expert computer vision and recognition facility with applications of a proportion technique." 1985. http://hdl.handle.net/2097/27537.

43

Chien, Chin-Hsiang, and 簡晉翔. "3D human face reconstruction and recognition by using the techniques of multi-view synthesizing method with the aids of the depth images." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/50336494037594860799.

Abstract:
Master's thesis
Tamkang University
Department of Electrical Engineering (Master's Program)
ROC year 101 (2012)
In past years, most three-dimensional reconstruction or recognition systems used a two-dimensional image and its depth image to calculate the three-dimensional coordinates of the image for three-dimensional processing. Such operations usually incur considerable computing cost. This research proposes another approach, based on point clouds, which preserves feature vectors and color information, for three-dimensional face reconstruction and recognition. The conventional 2D approach keeps tracking the information of each pixel of the 2D image; the point cloud system, by contrast, directly synthesizes the 2D image and its depth image into a point cloud model with 3D coordinates, which reduces the computational complexity significantly. It can further construct a KD-tree query system over the 3D space coordinates to accelerate the search for key points among the 3D coordinates. Three-dimensional facial reconstruction generally relies on expensive equipment and laser scanners; in this research we instead use the Microsoft KINECT sensor to reconstruct the 3D human face. Compared with an expensive laser scanner, KINECT is cheap and provides both a color image and a depth image. KINECT is used to scan the human face from multiple views within 180 degrees. We then use the iterative closest point (ICP) algorithm to match the multi-view human faces, so that a 3D database of grouped face points can be established. The 3D SIFT (3D Scale Invariant Feature Transform) algorithm is applied to the three-dimensional face model point cloud data to extract feature key points. We then use the Euclidean distance between the three-dimensional coordinates of the feature points, together with the feature weights, to determine whether two faces belong to the same person. The experimental results show that on the GavabDB face database our approach achieves a recognition rate of 83.6%.
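
The KD-tree query structure mentioned above is available off the shelf; a minimal Python sketch with SciPy's cKDTree finds, for each 3D keypoint, its nearest neighbour in a point cloud, the kind of closest-point query that both ICP and the keypoint matcher rely on (the data here are random placeholders).

import numpy as np
from scipy.spatial import cKDTree

cloud = np.random.rand(10000, 3)         # placeholder point cloud (x, y, z)
tree = cKDTree(cloud)
query = np.random.rand(5, 3)             # e.g. 3D SIFT keypoint locations
dist, idx = tree.query(query, k=1)       # nearest neighbour per query point
matches = cloud[idx]                     # closest cloud points to each keypoint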
44

G, Rajesh Babu. "Attendance Maintenance Using Face Recognition Technique." Thesis, 2014. http://raiith.iith.ac.in/114/1/CS11M09.pdf.

Abstract:
In any classroom, attendance maintenance is a hectic task for the teacher: they need to call each roll number or name and, based on the response and identification, record the attendance for the student. An automatic attendance maintenance system is a challenging task because person identification in images is difficult. To solve the problem of identifying a person in images, statistical techniques such as Independent Component Analysis (ICA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA) are used. Biometric techniques, such as identification by iris, fingerprints, and face detection, are among the most widely used techniques in computing. Our main motive is to maintain attendance in an organization by using a face recognition technique, in which the system learns facial features and identifies the human face in images. The automation of attendance maintenance was implemented in two phases: image capturing and person identification. To capture the images we used a Fire-i camera, and for person identification we used the PCA technique.
45

tsai, Chuan-yi, and 蔡全益. "Using Stereo Vision Technique for Face Recognition." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/8py637.

Abstract:
Master's thesis
National Taipei University of Technology
Department of Industrial Engineering and Management
ROC year 93 (2004)
Biometric measurement has received increasing interest for security applications over the last two decades; in particular, face recognition has been an active research area. The objective of this study is to develop an effective face recognition system that extracts both 2D and 3D face features to improve recognition performance. The proposed method derives 3D face information using a purpose-built stereo face system. It then retrieves 2D and 3D face features with Principal Component Analysis (PCA) and Local Autocorrelation Coefficients (LAC), respectively. Finally, the feature information is fused and fed into a Euclidean-distance classifier and a backpropagation neural network for recognition. An experiment was conducted with 100 subjects; for each subject, thirteen stereo face images were taken with different expressions. Among them, the faces with expressions one to seven were used for training, and the rest for testing. For the Euclidean-distance classifier, the proposed method does not improve the recognition result by combining the features derived from PCA with LAC; however, an improvement is observed when using the backpropagation neural network. In general, BP outperforms the Euclidean-distance classifier in both 2D and 3D face recognition. Furthermore, the experimental results show that the proposed method effectively improves the recognition rate by combining 2D with 3D face information.
46

Gillan, Steven. "A technique for face recognition based on image registration." Thesis, 2010. http://hdl.handle.net/1828/2548.

Abstract:
This thesis presents a technique for face recognition that is based on image registration. The image registration technique is based on finding a set of feature points in the two images and using these feature points for registration. This is done in four steps: in the first, images are filtered with the Mexican hat wavelet to obtain the feature point locations; in the second, the Zernike moments of neighbourhoods around the feature points are calculated; in the third, these moments are compared to establish correspondence between feature points in the two images; and in the fourth, the transformation parameters between images are obtained using an iterative weighted least squares technique. The face recognition technique consists of three parts: a training part, an image registration part, and a post-processing part. During training, a set of images is chosen as the training images and the Zernike moments for the feature points of the training images are obtained and stored. In the registration part, the transformation parameters to register the training images with the images under consideration are obtained. In the post-processing, these transformation parameters are used to determine whether a valid match is found. The performance of the proposed method is evaluated using various face databases and compared with the performance of existing techniques. Results indicate that the proposed technique gives excellent results for face recognition under varying pose, illumination, background, and scale; these results are comparable to other well-known face recognition techniques.
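
A minimal Python sketch of the first step, assuming the Mexican hat response is approximated by a negated Laplacian of Gaussian (the two operators have the same shape); the scale and the point count are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def mexican_hat_points(gray, sigma=2.0, n_points=200):
    # negated LoG approximates the 2D Mexican hat wavelet response
    response = -gaussian_laplace(gray.astype(float), sigma=sigma)
    peaks = (response == maximum_filter(response, size=9)) & (response > 0)
    ys, xs = np.nonzero(peaks)
    order = np.argsort(response[ys, xs])[::-1][:n_points]   # strongest peaks first
    return list(zip(ys[order], xs[order]))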
47

(9795329), Xiaolong Fan. "A feature selection and classification technique for face recognition." Thesis, 2005. https://figshare.com/articles/thesis/A_feature_selection_and_classification_technique_for_face_recognition/13457450.

Abstract:
This project examines face recognition research and presents a novel feature selection and classification technique: Genetic Algorithms (GA) for selection and an Artificial Neural Network (ANN) for classification.
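
A hedged Python sketch of GA-driven feature selection with an ANN fitness function, in the spirit of the abstract; the population size, rates, and MLP settings are illustrative assumptions, not the project's configuration.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def ga_select(X, y, pop=20, gens=10, p_mut=0.02, rng=np.random.default_rng(0)):
    n = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n))     # one bitmask per individual

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
        # cross-validated accuracy of the ANN on the selected feature subset
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    for _ in range(gens):
        scores = np.array([fitness(m) for m in population])
        keep = population[np.argsort(scores)[::-1][:pop // 2]]   # truncation selection
        children = []
        while len(children) < pop - len(keep):
            a, b = keep[rng.integers(len(keep), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])           # one-point crossover
            flip = rng.random(n) < p_mut                         # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.vstack([keep, np.array(children)])
    best = population[np.argmax([fitness(m) for m in population])]
    return best.astype(bool)                                     # selected features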

48

Wu, Yao-Ting, and 吳曜廷. "Face Recognition and Destiny Foreseeing by Using Fuzzy Classification Technique." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/87425209250827454844.

Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Automatic Control Engineering
ROC year 98 (2009)
This paper proposes a face recognition and fortune-foreseeing system for a specified person using a fuzzy inference method. The system uses a CCD camera to take a picture of the specified person at the best distance, and uses a skin color detection method to find the facial area by separating the skin-color range, achieving an initial positioning. After the preliminary positioning, we locate the facial contour using an ellipse template method, find the locations of the eyes and lips among the five sense organs of the human face, and then obtain the complete shapes of the eyes and lips separately using image processing techniques and morphology. In this research, we classify the sample templates into classes in advance using a fuzzy classification rule; this speeds up the real-time jobs of face recognition, 3D face modeling, and destiny foreseeing. Afterward, we apply a norm-minimization criterion to calculate the certainty degree of the recognized face and the estimated destiny. Finally, we also infer the fortune-foreseeing analysis based on the shapes of the eyes, lips, and face, as well as the face feature recognition method.
49

Chang-LinTsou and 鄒昌霖. "Face Recognition using Dual-LBP Architecture Technique under Different Illumination." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/f392u2.

50

Tsai, Pei-Chun, and 蔡佩君. "Face detection and recognition based on fuzzy theory and neural technique." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/50182223129448794553.

Abstract:
Master's thesis
I-Shou University
Department of Information Management (Master's Program)
ROC year 96 (2007)
We develop and improve an algorithm to detect faces and recognize their identities in daily-life images with varied backgrounds. We use lower-dimensional vectors to reduce image complexity and the interference of noise, increasing the ability of face detection and recognition. The system of face detection and recognition is divided into three stages: face detection, face location, and face recognition. In the first stage, we use a fuzzy Gaussian classifier and a face feature extracting neural network to detect faces in the image. Here we roughly divide images into face and non-face images with the fuzzy Gaussian classifier: we compute the fuzzy Gaussian parameters of the input images, then accumulate the squared errors of the Gaussian parameters against the training patterns to exclude most non-face images. Next, we feed the passing images to the feature extracting neural network to detect faces accurately. In the face location stage, we use a Gaussian spread method to remove some false detections from the previous stage and locate the faces in the images. In the last stage, we use fuzzy c-means and a framework of parallel neural networks to recognize the faces located in the previous stage. Fuzzy c-means assigns each input image to clusters and correspondingly activates their small-scale parallel neural networks to recognize the input images. Our algorithm reduces the dimension of the images and eliminates a great number of non-face images with the classifier; therefore, we can decrease the training time and recognize faces efficiently, and further improve the detection and recognition of complex face images.
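
The fuzzy c-means clustering stage described above can be sketched in plain NumPy as follows, with the usual fuzzifier m = 2; the cluster count and iteration budget are assumptions. Each input would then activate only the small parallel network of its highest-membership cluster.

import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, iters=100, rng=np.random.default_rng(0)):
    U = rng.random((len(X), c))
    U = U / U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))            # standard FCM membership update
        U = U / U.sum(axis=1, keepdims=True)
    return centers, U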