Dissertations / Theses on the topic 'Facet extraction'

Consult the top 50 dissertations / theses for your research on the topic 'Facet extraction.'

1

PORRINI, RICCARDO. "Construction and Maintenance of Domain Specific Knowledge Graphs for Web Data Integration." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2016. http://hdl.handle.net/10281/126789.

Abstract:
A Knowledge Graph (KG) is a semantically organized, machine-readable collection of types, entities, and the relations holding between them. A KG helps mitigate semantic heterogeneity in scenarios that require the integration of data from independent sources into a so-called dataspace, realized through the establishment of mappings between the sources and the KG. Applications built on top of a dataspace provide advanced data access features to end-users based on the representation provided by the KG, obtained through the enrichment of the KG with domain specific facets. A facet is a specialized type of relation that models a salient characteristic of entities of particular domains (e.g., the vintage of wines) from an end-user perspective. In order to enrich a KG with a salient and meaningful representation of data, the domain experts in charge of maintaining the dataspace must possess extensive knowledge about disparate domains (e.g., from wines to football players). From an end-user perspective, the difficulty of defining domain specific facets for dataspaces significantly degrades the user experience of data access features and thus the ability to fulfill end-users' information needs. Remarkably, this problem has not been adequately studied in the literature, which mostly focuses on enriching the KG with a generalist, coverage oriented, and not domain specific representation of the data occurring in the dataspace. Motivated by this challenge, this dissertation introduces automatic techniques to support domain experts in enriching a KG with facets that provide a domain specific representation of data. Since facets are a specialized type of relation, the techniques proposed in this dissertation aim at extracting salient domain specific relations.
The fundamental components of a dataspace, namely the KG and the mappings between sources and KG elements, are leveraged to elicit such a domain specific representation from the specialized data sources of the dataspace, and to provide domain experts with valuable information for supervising the process. Facets are extracted by leveraging already established mappings between specialized sources and the KG. After extraction, a domain specific interpretation of facets is provided by re-using relations already defined in the KG, to ensure tight integration of data. This dissertation also introduces a framework to profile the status of the KG, to support the supervision of domain experts in the above tasks. Altogether, the contributions presented in this dissertation provide a set of automatic techniques to support domain experts in evolving the KG of a dataspace towards a domain specific, end-user oriented representation. Such techniques analyze and exploit the fundamental components of a dataspace (KG, mappings, and source data) with an effectiveness not achievable with state-of-the-art approaches, as shown by extensive evaluations conducted in both synthetic and real world scenarios.
2

Low, Boon Kee. "Computer extraction of human faces." Thesis, De Montfort University, 1999. http://hdl.handle.net/2086/10668.

Abstract:
Due to recent advances in visual communication and face recognition technologies, automatic face detection has attracted a great deal of research interest. Being a diverse problem, the development of face detection research has comprised contributions from researchers in various fields of science. This thesis examines the fundamentals of the various face detection techniques implemented since the early 1970s. Two groups of techniques are identified based on their approach to applying a priori face knowledge: feature-based and image-based. One of the problems faced by current feature-based techniques is the lack of cost-effective segmentation algorithms that are able to deal with issues such as background and illumination variations. As a result, a novel facial feature segmentation algorithm is proposed in this thesis. The algorithm aims to combine spatial and temporal information using low-cost techniques. In order to achieve this, an existing motion detection technique is analysed and implemented with a novel spatial filter, which itself proves robust for segmentation of features under varying illumination conditions. Through spatio-temporal information fusion, the algorithm effectively addresses the background and illumination problems in several head-and-shoulder sequences. Comparisons of the algorithm with existing motion and spatial techniques establish the efficacy of the combined approach.
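The spatio-temporal fusion idea in this abstract can be illustrated with a minimal sketch: a thresholded frame difference supplies the temporal cue, and a plain gradient magnitude stands in for the thesis's illumination-robust spatial filter. The function names, thresholds, and the choice of gradient operator are all illustrative assumptions, not the thesis's actual algorithm:

```python
import numpy as np

def motion_map(prev_frame, cur_frame, thresh=15):
    """Temporal cue: absolute frame difference, thresholded (assumed threshold)."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    return diff > thresh

def spatial_map(frame, thresh=20):
    """Spatial cue: gradient magnitude as a stand-in for the thesis's filter."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy) > thresh

def fuse(prev_frame, cur_frame):
    """Spatio-temporal fusion: keep pixels that are both moving and edge-like."""
    return motion_map(prev_frame, cur_frame) & spatial_map(cur_frame)
```

Pixels that pass both cues form candidate facial-feature regions; static background and smoothly lit skin areas fail one test or the other.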
3

PANA-TALPEANU, RADU-MIHAI. "Trajectory extraction for automatic face sketching." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142429.

Abstract:
This project consists of a series of algorithms employed to obtain a simplistic but realistic representation of a human face. The final goal is for the sketch to be drawn on paper by a robotic arm with a gripper holding a drawing instrument. The end application is mostly geared towards entertainment and combines the fields of human-machine interaction, machine learning and image processing. The first part focuses on manipulating an input digital image in order to obtain trajectories in a format suitable for the robot to process. Different techniques are presented, tested and compared, such as edge extraction, landmark detection, spline generation and principal component analysis. Results showed that an edge detector yields too many lines, while the generative spline method leads to overly simplistic faces. The best facial depiction was obtained by combining landmark localization with edge detection. The trajectories output by the different techniques are passed to the arm through the high-level interface provided by ROS (the Robot Operating System) and then drawn on paper.
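The observation that a raw edge detector yields too many lines for the arm to draw suggests a stroke-simplification step. The sketch below uses the Ramer-Douglas-Peucker algorithm to thin a dense edge trajectory into a few waypoints; this is an illustrative post-processing idea, not necessarily the thesis's pipeline, and the tolerance `eps` is an assumed parameter:

```python
import numpy as np

def rdp(points, eps):
    """Ramer-Douglas-Peucker: recursively drop points closer than eps
    to the chord between the segment's endpoints."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points.tolist()
    start, end = points[0], points[-1]
    line = end - start
    norm = np.hypot(line[0], line[1])
    if norm == 0.0:
        dists = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:
        # perpendicular distance of every point to the start-end chord
        dists = np.abs(line[0] * (start[1] - points[:, 1])
                       - line[1] * (start[0] - points[:, 0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > eps:
        # keep the farthest point and split the trajectory there
        left = rdp(points[:idx + 1], eps)
        right = rdp(points[idx:], eps)
        return left[:-1] + right
    return [start.tolist(), end.tolist()]
```

A dense contour of hundreds of edge pixels reduces to a handful of waypoints the arm can traverse as straight strokes.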
4

Nguyen, Huu-Tuan. "Contributions to facial feature extraction for face recognition." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT034/document.

Abstract:
Centered around feature extraction, the core task of any face recognition system, our objective is devising a facial representation robust against major challenges such as variations of illumination, pose and time-lapse, and low-resolution probe images, to name a few. Besides, fast processing speed is another crucial criterion. Towards these ends, several methods have been proposed throughout this thesis. Firstly, based on the orientation characteristics of important facial features, like the eyes and mouth, a novel variant of LBP, referred to as ELBP, is designed for encoding micro patterns using a horizontal elliptical sampling pattern. Secondly, ELBP is exploited to extract local features from oriented edge magnitude images; from this, the Elliptical Patterns of Oriented Edge Magnitudes (EPOEM) description is built. Thirdly, we propose a novel feature extraction method called Patch-based Local Phase Quantization of Monogenic components (PLPQMC). Lastly, a robust facial representation named Local Patterns of Gradients (LPOG) is developed to capture meaningful features directly from gradient images. Chief among these methods are PLPQMC and LPOG, as they are inherently illumination invariant and blur tolerant. Impressively, our methods, while offering results comparable to or higher than those of existing systems, have low computational cost and are thus feasible to deploy in real-life applications.
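The elliptical sampling behind ELBP can be sketched in a few lines: instead of classic LBP's circular neighbourhood, P points are taken on a horizontal ellipse (semi-axes a > b) and thresholded against the centre pixel. The sketch uses nearest-neighbour rounding where a real implementation would interpolate, and the axis lengths are illustrative, not the thesis's values:

```python
import numpy as np

def elbp_code(patch, a=2, b=1, p=8):
    """Elliptical LBP code at the centre of a patch: sample p points on a
    horizontal ellipse and set a bit whenever the sample >= centre pixel."""
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    centre = patch[cy, cx]
    code = 0
    for k in range(p):
        theta = 2 * np.pi * k / p
        # nearest-neighbour sampling on the ellipse (bilinear in practice)
        y = int(round(cy + b * np.sin(theta)))
        x = int(round(cx + a * np.cos(theta)))
        code |= (patch[y, x] >= centre) << k
    return code
```

Histogramming these codes over image regions would then give the texture descriptor used for matching.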
5

Gunn, Steve R. "Dual active contour models for image feature extraction." Thesis, University of Southampton, 1996. https://eprints.soton.ac.uk/250089/.

Abstract:
Active contours are now a very popular technique for shape extraction, achieved by minimising a suitably formulated energy functional. Conventional active contour formulations suffer from difficulty in the appropriate choice of an initial contour and parameter values. Recent approaches have aimed to resolve these problems, but can compromise other performance aspects. To relieve the initialisation problem, an evolutionary dual active contour has been developed, which is combined with a local shape model to improve the parameterisation. One contour expands from inside the target feature, the other contracts from the outside. The two contours are inter-linked to provide a balanced technique with an ability to reject weak local energy minima. Additionally, a dual active contour configuration using dynamic programming has been developed to locate a global energy minimum, complementing recent approaches via simulated annealing and genetic algorithms. These differ from conventional evolutionary approaches, where energy minimisation may not converge to extract the target shape, in contrast with the guaranteed convergence of a global approach. The new techniques are demonstrated to successfully extract target shapes in synthetic and real images, with superior performance to previous approaches. The new technique employing dynamic programming is deployed to extract the inner face boundary, along with a conventional normal-driven contour to extract the outer face boundary. Application to a database of 75 subjects showed that the outer contour was extracted successfully for 96% of the subjects and the inner contour for 82%. This application highlights the advantages that the new dual active contour approaches can confer on automatic shape extraction.
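The globally optimal contour that dynamic programming finds can be illustrated on a simplified open contour: each contour point chooses among m candidate offsets, an image-energy table scores each choice, and a quadratic smoothness term couples neighbouring points. This is a generic snake-by-DP sketch under assumed energies, not Gunn's exact dual formulation; the smoothness weight is arbitrary:

```python
import numpy as np

def dp_contour(energy, smooth=1.0):
    """Globally minimise sum of energy[i, j_i] + smooth*(j_i - j_{i-1})^2
    over offset choices j_i by dynamic programming (Viterbi-style)."""
    n, m = energy.shape
    cost = energy[0].astype(float).copy()
    back = np.zeros((n, m), dtype=int)
    offsets = np.arange(m)
    for i in range(1, n):
        # trans[j_prev, j_cur]: best cost so far plus the smoothness penalty
        trans = cost[:, None] + smooth * (offsets[:, None] - offsets[None, :]) ** 2
        back[i] = np.argmin(trans, axis=0)
        cost = trans[back[i], offsets] + energy[i]
    # backtrack from the cheapest final offset
    path = [int(np.argmin(cost))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```

Unlike gradient-descent snakes, this search cannot get stuck in a weak local minimum along the candidate grid, which is the property the abstract contrasts with evolutionary approaches.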
6

Urbansky, David. "WebKnox: Web Knowledge Extraction." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-23766.

Abstract:
This thesis focuses on entity and fact extraction from the web. Different knowledge representations and techniques for information extraction are discussed before the design of a knowledge extraction system, called WebKnox, is introduced. The main contributions of this thesis are the trust ranking of extracted facts with a self-supervised learning loop, and the extraction system itself with its composition of known and refined extraction algorithms. The techniques used show an improvement in precision and recall for most entity and fact extraction tasks compared to the chosen baseline approaches.
7

Gao, Jiangning. "3D face recognition using multicomponent feature extraction from the nasal region and its environs." Thesis, University of Bath, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.707585.

Abstract:
This thesis is dedicated to extracting expression-robust features for 3D face recognition. The use of 3D imaging enables the extraction of discriminative features that can significantly improve recognition performance, due to the availability of facial surface information such as depth, surface normals and curvature. Expression-robust analysis using information from both depth and surface normals is investigated by dividing the main facial region into patches of different scales. The nasal region and adjoining parts of the cheeks are utilized, as they are more consistent over different expressions and are hard to deliberately occlude. In addition, in comparison with other parts of the face, these regions have a high potential to produce discriminative features for recognition and to overcome pose variations. An overview and classification methodology of the widely used 3D face databases are first introduced to provide an appropriate reference for 3D face database selection. Using the FRGC and Bosphorus databases, a low-complexity pattern rejector for expression-robust 3D face recognition is proposed by matching curves on the nasal region and its environs, which results in a low-dimension feature set of only 60 points. To extract discriminative features more locally, a novel multi-scale and multi-component local shape descriptor is further proposed, which achieves more competitive performance under the identification and verification scenarios. In contrast with much of the existing work on 3D face recognition, which considers captures obtained with laser scanners or structured light, this thesis also investigates applications to reconstructed 3D captures from lower-cost photometric stereo imaging systems that have applications in real-world situations. To this end, the performance of the expression-robust face recognition algorithms developed for captures from laser scanners is further evaluated on the Photoface database, which contains naturalistic expression variations.
To improve the recognition performance for all types of 3D captures, a universal landmarking algorithm is proposed that makes use of different components of the surface normals. Using facial profile signatures and thresholded surface normal maps, facial roll and yaw rotations are calibrated and five main landmarks are robustly detected on the well-aligned 3D nasal region. The landmarking results show that the detected landmarks demonstrate high within-class consistency and can achieve good recognition performance under different expressions. This is also the first landmarking work specifically developed for reconstructed 3D captures from photometric stereo imaging systems.
8

Al-Qatawneh, Sokyna M. S. "3D Facial Feature Extraction and Recognition. An investigation of 3D face recognition: correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4876.

Abstract:
Face recognition research using automatic or semi-automatic techniques has emerged over the last two decades. One reason for the growing interest in this topic is the wide range of possible applications for face recognition systems. Another reason is the emergence of affordable hardware, supporting digital photography and video, which has made the acquisition of high-quality and high-resolution 2D images much more ubiquitous. However, 2D recognition systems are sensitive to subject pose and illumination variations, and 3D face recognition, which is not directly affected by such environmental changes, could be used alone or in combination with 2D recognition. Recently, with the development of more affordable 3D acquisition systems and the availability of 3D face databases, 3D face recognition has been attracting interest as a way to tackle the performance limitations of most existing 2D systems. In this research, we introduce a robust automated 3D face recognition system that processes 3D data of faces with different facial expressions, hair, shoulders, clothing, etc., extracts features for discrimination and uses machine learning techniques to make the final decision. A novel system for automatic processing of 3D facial data has been implemented using a multi-stage architecture; in a pre-processing and registration stage the data was standardized, spikes were removed, holes were filled and the face area was extracted. Then the nose region, which is relatively more rigid than other facial regions in an anatomical sense, was automatically located and analysed by computing the precise location of the symmetry plane. Useful facial features and a set of effective 3D curves were then extracted. Finally, the recognition and matching stage was implemented using cascade correlation neural networks and support vector machines for classification, and nearest neighbour algorithms for matching.
It is worth noting that the FRGC data set is the most challenging data set available supporting research on 3D face recognition, and that machine learning techniques are widely recognised as appropriate and efficient classification methods.
9

Benn, David E. "Model-based feature extraction and classification for automatic face recognition." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324811.

10

Jung, Sung Uk. "On using gait to enhance face extraction for visual surveillance." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/340358/.

Abstract:
Visual surveillance finds increasing deployment for monitoring urban environments. Operators need to be able to determine identity from surveillance images and often use face recognition for this purpose. Unfortunately, the quality of the recorded imagery can be insufficient for this task. This study describes a programme of research aimed at ameliorating this limitation. Many face biometric systems use controlled environments where subjects are viewed directly facing the camera. This is less likely to occur in surveillance environments, so it is necessary to handle pose variations of the human head, low frame rates, and low-resolution input images. We describe the first use of gait to enable face acquisition and recognition, by analysis of 3D head motion and gait trajectory, with super-resolution analysis. The face extraction procedure consists of three stages: i) head pose estimation by a 3D ellipsoidal model; ii) face region extraction using a 2D or a 3D gait trajectory; and iii) frontal face extraction and reconstruction by estimating head pose and using super-resolution techniques. The head pose is estimated by using a 3D ellipsoidal model and non-linear optimisation. Region- and distance-based feature refinement methods are used, and a direct mapping from the 2D image coordinates to the object coordinates is developed. In face region extraction, the potential face region is extracted based on the 2D gait trajectory model when a person walks towards a camera. We model a looming field and show how this field affects the image sequences of the walking human. By fitting a 2D gait trajectory model, the face region can then be tracked. For the general case of human walking, a 3D gait trajectory model and heel strike positions are used to extract the face region in 3D space. Wavelet decomposition is used to detect the gait cycle, and a new heel strike detection method is developed.
In face extraction, a high-resolution frontal face image is reconstructed from low-resolution face images by super-resolution analysis. Based on the head pose and the 3D ellipsoidal model, invalid low-resolution face images are filtered out and the frontal view face is reconstructed. By adapting existing super-resolution techniques, the high-resolution frontal face image can be synthesised, which is demonstrated to be suitable for face recognition. The contributions of this research include the construction of a 3D model for pose estimation from planar imagery and the first use of gait information to enhance the face extraction and recognition process, allowing for deployment in surveillance scenarios.
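The multi-frame reconstruction step can be reduced to a toy version: once the low-resolution face crops have been pose-filtered and aligned, upsample each by pixel replication and average them. Real super-resolution involves sub-pixel registration and deconvolution; this sketch only shows the fusion idea, and the function name and scale factor are illustrative:

```python
import numpy as np

def fuse_lowres(frames, scale=2):
    """Average pixel-replicated upsamplings of aligned low-resolution faces.
    A naive stand-in for super-resolution (registration assumed done)."""
    ups = [np.kron(f.astype(float), np.ones((scale, scale))) for f in frames]
    return np.mean(ups, axis=0)
```

Averaging many noisy aligned observations suppresses sensor noise even in this naive form, which is the intuition behind fusing several surveillance frames into one face image.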
11

Liang, Antoni. "Face Image Retrieval with Landmark Detection and Semantic Concepts Extraction." Thesis, Curtin University, 2017. http://hdl.handle.net/20.500.11937/54081.

Abstract:
This thesis proposes various novel approaches for improving the performance of an automatic facial landmark detection system based on the pictorial tree structure model. Furthermore, a robust glasses landmark detection system is also proposed, since glasses are commonly worn. These approaches are employed to develop an automatic semantic-based face image retrieval system. The experimental results demonstrate significant improvements in both accuracy and efficiency for all the proposed approaches.
12

Hond, Darryl. "Automatic extraction and recognition of faces from images with varied backgrounds." Thesis, University of Essex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242259.

13

Masip, David. "Feature extraction in face recognition on the use of internal and external features." Saarbrücken VDM Verlag Dr. Müller, 2005. http://d-nb.info/989265706/04.

14

Masip, Rodó David. "Face Classification Using Discriminative Features and Classifier Combination." Doctoral thesis, Universitat Autònoma de Barcelona, 2005. http://hdl.handle.net/10803/3051.

Abstract:
As technology evolves, new applications dealing with face classification appear. In pattern recognition, faces are usually seen as points in a high-dimensional space defined by their pixel values. This approach must deal with several problems, such as the curse of dimensionality, the presence of partial occlusions, and local changes in the illumination. Traditionally, only the internal features of face images have been used for classification purposes, where usually a feature extraction step is performed. Feature extraction techniques help reduce the influence of the problems mentioned, also reducing the noise inherent in natural images and learning invariant characteristics of face images. In the first part of this thesis some internal feature extraction methods are presented: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Non-negative Matrix Factorization (NMF), and Fisher Linear Discriminant Analysis (FLD), each of them making some kind of assumption about the data to classify. The main contribution of our work is a family of non-parametric feature extraction techniques using the Adaboost algorithm. Our method makes no assumptions about the data to classify, and incrementally builds the projection matrix taking into account the most difficult samples.
On the other hand, in the second part of this thesis we explore the role of external features in face classification, and present a method for extracting an aligned feature set from external face information that can be combined with the classic internal features, improving the global performance of the face classification task.
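As a reference point for the internal-feature baselines this abstract lists (PCA in particular), a projection matrix can be learned from vectorised face images in a few lines. This is a generic PCA sketch, not the thesis's boosted, non-parametric extraction:

```python
import numpy as np

def pca_projection(X, k):
    """Learn a k-dimensional PCA projection from face vectors (rows of X)
    via SVD of the mean-centred data matrix."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k].T          # mean vector and d x k projection matrix

def project(x, mean, W):
    """Map face vectors into the k-dimensional subspace."""
    return (x - mean) @ W
```

Classification then happens in the low-dimensional subspace, which is where occlusions and the curse of dimensionality mentioned above bite less.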
15

Wall, Helene. "Context-Based Algorithm for Face Detection." Thesis, Linköping University, Department of Science and Technology, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4171.

Abstract:

Face detection has been a research area for more than ten years. It is a complex problem due to the high variability in faces and amongst faces; therefore it is not possible to extract a general pattern to be used for detection. This is what makes the face detection problem a challenge.

This thesis gives the reader a background to the face detection problem, where the two main approaches to the problem are described. A face detection algorithm is implemented using a context-based method in combination with an evolving neural network. The algorithm consists of two major steps: detecting possible face areas and, within these areas, detecting faces. This method makes it possible to reduce the search space.

The performance of the algorithm is evaluated and analysed. Several parameters affect the performance: the feature extraction method, the classifier and the images used.

This work resulted in a face detection algorithm whose performance is evaluated and analysed. The analysis of the problems that occurred has provided a deeper understanding of the complexity of the face detection problem.

16

Ahlberg, Jörgen. "Model-based coding : extraction, coding, and evaluation of face model parameters /." Linköping : Univ, 2002. http://www.bibl.liu.se/liupubl/disp/disp2002/tek761s.pdf.

17

Ahmad, Muhammad Imran. "Feature extraction and information fusion in face and palmprint multimodal biometrics." Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2128.

Abstract:
Multimodal biometric systems that integrate biometric traits from several modalities are able to overcome the limitations of single-modal biometrics. Fusing the information at an earlier level by consolidating the features given by different traits can give a better result due to the richness of information at this stage. In this thesis, three novel methods are derived and implemented on the face and palmprint modalities, taking advantage of multimodal biometric fusion at feature level. The benefits of the proposed methods are the enhanced capability to discriminate information in the fused features and to capture all of the information required to improve the classification performance. The multimodal biometric system proposed here consists of several stages, such as feature extraction, fusion, recognition and classification. Feature extraction gathers all important information from the raw images. A new local feature extraction method has been designed to extract information from the face and palmprint images in the form of sub-block windows. Multiresolution analysis using the Gabor transform and DCT is computed for each sub-block window to produce compact local features for the face and palmprint images. Multiresolution Gabor analysis captures important information in the texture of the images, while the DCT represents the information in different frequency components. Important features with high discrimination power are then preserved by selecting several low-frequency coefficients in order to estimate the model parameters. The local features extracted are fused in a new matrix-interleaved method. The new fused feature vector is higher in dimensionality compared to the original feature vectors from both modalities; thus it carries high discriminating power and contains rich statistical information. The fused feature vector also has more data points in the feature space, which is advantageous for the training process using statistical methods.
The underlying statistical information in the fused feature vectors is captured using a GMM, where a number of model parameters are estimated from the distribution of the fused feature vector. The maximum likelihood score is used to measure a degree of certainty for recognition, while maximum likelihood score normalization is used for the classification process. The use of likelihood score normalization is found to be able to suppress an imposter's likelihood score when the background model parameters are estimated from a pool of users that includes the statistical information of an imposter. The present method achieved recognition accuracies of 97% and 99.7% when tested using the FERET-PolyU and ORL-PolyU datasets, respectively.
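The local DCT step described in this abstract can be sketched as follows: the face (or palmprint) image is tiled into sub-block windows and a few low-frequency DCT coefficients are kept per block. The block size, the number of coefficients, and the raster (rather than zig-zag) coefficient order are simplifications; the Gabor analysis, the matrix-interleaved fusion, and the GMM modelling are not shown:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis functions)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(n)
    C[1:] *= np.sqrt(2 / n)
    return C

def block_dct_features(img, block=8, keep=6):
    """Keep a few low-frequency 2-D DCT coefficients per non-overlapping
    block; returns one feature row per block."""
    C = dct_matrix(block)
    feats = []
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            B = img[y:y + block, x:x + block].astype(float)
            D = C @ B @ C.T              # 2-D DCT-II of the block
            feats.append(D.flat[:keep])  # low-frequency coefficients
    return np.array(feats)
```

Low-frequency coefficients carry most of the block's energy, which is why truncating the DCT yields compact yet discriminative local features.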
APA, Harvard, Vancouver, ISO, and other styles
18

Kičina, Pavol. "Automatická identifikace tváří v reálných podmínkách." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218980.

Full text
Abstract:
This master's thesis describes the identification of faces under real-world conditions. It includes an overview of current classifier-based face detection methods, as well as various other methods for detecting faces. The second part describes two programs designed to identify persons. The first program operates in real time under laboratory conditions, acquiring images of the user's face with a web camera; it is designed for fast recognition of persons. The second program works on static images under real-world conditions, where the main goal is successful recognition, so the emphasis is on accuracy rather than computational complexity. The programs use the PCA, LDA and kernel PCA (KPCA) methods. The first program uses only PCA, which gives good results with respect to both recognition accuracy and speed. A comparison of the methods in the second program showed that KPCA performed best.
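As background for the PCA stage mentioned in the abstract, a minimal eigenface-style sketch in NumPy (a generic PCA via SVD, not the thesis's implementation):

```python
import numpy as np

def pca_fit(X, n_components):
    """X: rows are flattened face images. Returns (mean, components),
    where the rows of `components` are the principal directions."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data; rows of Vt are orthonormal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def pca_project(X, mean, components):
    """Project (centred) images onto the principal subspace."""
    return (X - mean) @ components.T

rng = np.random.default_rng(0)
faces = rng.normal(size=(8, 16))   # 8 toy "images", 16 pixels each
mean, comps = pca_fit(faces, n_components=3)
feats = pca_project(faces, mean, comps)
print(feats.shape)  # (8, 3)
```

Recognition then amounts to nearest-neighbour matching in the low-dimensional projected space; LDA and KPCA replace the linear subspace with a class-aware or kernelized one.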
APA, Harvard, Vancouver, ISO, and other styles
19

Li, Qi. "An integration framework of feature selection and extraction for appearance-based recognition." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 8.38 Mb., 141 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3220745.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Samaria, Ferdinando Silvestro. "Face recognition using Hidden Markov Models." Thesis, University of Cambridge, 1995. https://www.repository.cam.ac.uk/handle/1810/244871.

Full text
Abstract:
This dissertation introduces work on face recognition using a novel technique based on Hidden Markov Models (HMMs). Through the integration of a priori structural knowledge with statistical information, HMMs can be used successfully to encode face features. The results reported are obtained using a database of images of 40 subjects, with 5 training images and 5 test images for each. It is shown how standard one-dimensional HMMs in the shape of top-bottom models can be parameterised, yielding successful recognition rates of up to around 85%. The insights gained from top-bottom models are extended to pseudo two-dimensional HMMs, which offer a better and more flexible model that describes some of the two-dimensional dependencies missed by the standard one-dimensional model. It is shown how pseudo two-dimensional HMMs can be implemented, yielding successful recognition rates of up to around 95%. The performance of the HMMs is compared with the Eigenface approach, and various domain and resolution experiments are also carried out. Finally, the performance of the HMM is evaluated in a fully automated system, where database images are cropped automatically.
APA, Harvard, Vancouver, ISO, and other styles
21

Zhang, Cuiping Cohen Fernand S. "3D face structure extraction from images at arbitrary poses and under arbitrary illumination conditions /." Philadelphia, Pa. : Drexel University, 2006. http://hdl.handle.net/1860/1294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Smith, R. S. "Angular feature extraction and ensemble classification method for 2D, 2.5D and 3D face recognition." Thesis, University of Surrey, 2008. http://epubs.surrey.ac.uk/843069/.

Full text
Abstract:
It has been recognised that, within the context of face recognition, angular separation between centred feature vectors is a useful measure of dissimilarity. In this thesis we explore this observation in more detail and compare and contrast angular separation with the Euclidean, Manhattan and Mahalanobis distance metrics. This is applied to 2D, 2.5D and 3D face images, and the investigation is done in conjunction with various feature extraction techniques such as local binary patterns (LBP) and linear discriminant analysis (LDA). We also employ error-correcting output code (ECOC) ensembles of support vector machines (SVMs) to project feature vectors non-linearly into a new and more discriminative feature space. It is shown that, for both face verification and face recognition tasks, angular separation is a more discerning dissimilarity measure than the others. It is also shown that the effect of applying the feature extraction algorithms described above is to considerably sharpen and enhance the ability of all metrics, but in particular angular separation, to distinguish inter-personal from extra-personal face image differences. A novel technique, known as angularisation, is introduced by which a data set that is well separated in the angular sense can be mapped into a new feature space in which other metrics are equally discriminative. This operation can be performed separately or it can be incorporated into an SVM kernel. The benefit of angularisation is that it allows strong classification methods to take advantage of angular separation without explicitly incorporating it into their construction. It is shown that the accuracy of ECOC ensembles can be improved in this way. A further aspect of the research is to compare the effectiveness of the ECOC approach to constructing ensembles of SVM base classifiers with that of binary hierarchical classifiers (BHC).
Experiments are performed which lead to the conclusion that, for face recognition problems, ECOC yields greater classification accuracy than the BHC method. This is attributed primarily to the fact that the size of the training set decreases along a path from the root node to a leaf node of the BHC tree and this leads to great difficulties in constructing accurate base classifiers at the lower nodes.
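The contrast between angular separation and the Euclidean metric on centred feature vectors can be sketched as follows (a generic illustration, not code from the thesis):

```python
import numpy as np

def angular_separation(u, v):
    """Dissimilarity between centred feature vectors, measured as
    the angle between them (in radians)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

a = np.array([1.0, 0.0])
b = np.array([0.0, 2.0])
print(angular_separation(a, b))   # orthogonal vectors: pi/2, ~1.5708
print(np.linalg.norm(a - b))      # Euclidean distance: ~2.236
```

Unlike the Euclidean metric, angular separation ignores vector magnitude: a vector and any positive multiple of it are at zero angular distance, which is one intuition behind its discerning behaviour on centred face features.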
APA, Harvard, Vancouver, ISO, and other styles
23

Ahonen, T. (Timo). "Face and texture image analysis with quantized filter response statistics." Doctoral thesis, University of Oulu, 2009. http://urn.fi/urn:isbn:9789514291821.

Full text
Abstract:
Image appearance descriptors are needed for different computer vision applications dealing with, for example, detection, recognition and classification of objects, textures, humans, etc. Typically, such descriptors should be discriminative, to allow distinctions between different classes, yet robust to intra-class variations due to imaging conditions, natural changes in appearance, noise, and other factors. The purpose of this thesis is the development and analysis of photometric descriptors for the appearance of real-life images. The two application areas included in this thesis are face recognition and texture classification. To facilitate the development and analysis of descriptors, a general framework is introduced for image description using statistics of quantized filter bank responses that model their joint distribution. Several texture and other image appearance descriptors, including the local binary pattern operator, can be presented using this model. This framework, within which the thesis is presented, enables experimental evaluation of the significance of each component of the three-part chain that forms a descriptor from an input image. The main contribution of this thesis is a face representation method using distributions of local binary patterns computed in local rectangular regions. An important aspect of this contribution is viewing feature extraction from a face image as a texture description problem. This representation is further developed into a more precise model by estimating local distributions using kernel density estimation. Furthermore, a face recognition method tolerant to image blur using local phase quantization is presented. The thesis presents three new approaches and extensions to texture analysis using quantized filter bank responses. The first two aim at increasing the robustness of the quantization process.
The soft local binary pattern operator accomplishes this by making a soft quantization to several labels, whereas Bayesian local binary patterns make use of a prior distribution of labelings, and aim for the one maximizing the a posteriori probability. Third, a novel method for computing rotation invariant statistics from histograms of local binary pattern labels using the discrete Fourier transform is introduced. All the presented methods have been experimentally validated using publicly available image datasets and the results of experiments are presented in the thesis. The face description approach proposed in this thesis has been validated in several external studies, and it has been utilized and further developed by several research groups working on face analysis.
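A basic 3x3 local binary pattern operator, the building block underlying the soft and Bayesian variants above, can be sketched as follows (a generic textbook formulation, not the thesis's code):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern operator: each interior pixel
    receives an 8-bit label formed by thresholding its eight
    neighbours against the centre value."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    centre = img[1:h-1, 1:w-1]
    # neighbour offsets in clockwise order, starting top-left
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    labels = np.zeros((h - 2, w - 2), dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        labels |= (neighbour >= centre).astype(np.int32) << bit
    return labels

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
print(lbp_image(img))  # [[120]]
```

A region descriptor is then the histogram of labels, e.g. `np.bincount(labels.ravel(), minlength=256)`; the soft variant replaces the hard `>=` threshold with a fuzzy membership over several labels.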
APA, Harvard, Vancouver, ISO, and other styles
24

Yilmazturk, Mehmet Celaleddin. "Online And Semi-automatic Annotation Of Faces In Personal Videos." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611936/index.pdf.

Full text
Abstract:
Video annotation has become an important issue due to the rapidly increasing amount of video available. For efficient video content searches, annotation has to be done beforehand, which is a time-consuming process when done manually. Automatic annotation of faces for person identification is a major challenge in the context of content-based video retrieval. This thesis focuses on the development of a semi-automatic face annotation system that benefits from online learning methods. The system creates a face database by using face detection and tracking algorithms to collect samples of the faces encountered in the video and by receiving labels from the user. A learner model is trained on this database. While the training session continues, the system starts offering labels for newly encountered faces and lets the user acknowledge or correct the suggested labels; the learner is thus updated online throughout the video. The user is free to train the learner until satisfactory results are obtained. To create the face database, a shot boundary algorithm is implemented to partition the video into semantically meaningful segments, and the user browses through the video from one shot boundary to the next. A face detector followed by a face tracker is implemented to collect face samples between two shot boundary frames. For online learning, computationally efficient feature extraction and classification methods are investigated and evaluated, and sequential variants of some robust batch classification algorithms are implemented. Combinations of feature extraction and classification methods have been tested and compared according to their face recognition accuracy and computational performance.
APA, Harvard, Vancouver, ISO, and other styles
25

Urbansky, David. "Automatic Extraction and Assessment of Entities from the Web." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-97469.

Full text
Abstract:
The search for information about entities, such as people or movies, plays an increasingly important role on the Web. This information is still scattered across many Web pages, making it time-consuming for a user to find all relevant information about an entity. This thesis describes techniques to extract entities and information about them from the Web, such as facts, opinions, questions and answers, interactive multimedia objects, and events. The findings of this thesis are that it is possible to create a large knowledge base automatically using a manually-crafted ontology. The precision of the extracted information was found to be between 75% and 90% (for facts and entities respectively) after applying assessment algorithms. The algorithms from this thesis can be used to create such a knowledge base, which can be used in various research fields, such as question answering, named entity recognition, and information retrieval.
APA, Harvard, Vancouver, ISO, and other styles
26

Mahoor, Mohammad Hossein. "A Multi-Modal Approach for Face Modeling and Recognition." Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_dissertations/32.

Full text
Abstract:
This dissertation describes a new methodology for multi-modal (2-D + 3-D) face modeling and recognition. There are advantages in using each modality for face recognition. For example, the problems of pose variation and illumination condition, which cannot be resolved easily by using the 2-D data, can be handled by using the 3-D data. However, texture, which is provided by 2-D data, is an important cue that cannot be ignored. Therefore, we use both the 2-D and 3-D modalities for face recognition and fuse the results of face recognition by each modality to boost the overall performance of the system. In this dissertation, we consider two different cases for multi-modal face modeling and recognition. In the first case, the 2-D and 3-D data are registered. In this case we develop a unified graph model called Attributed Relational Graph (ARG) for face modeling and recognition. Based on the ARG model, the 2-D and 3-D data are included in a single model. The developed ARG model consists of nodes, edges, and mutual relations. The nodes of the graph correspond to the landmark points that are extracted by an improved Active Shape Model (ASM) technique. In order to extract the facial landmarks robustly, we improve the Active Shape Model technique by using the color information. Then, at each node of the graph, we calculate the response of a set of log-Gabor filters applied to the facial image texture and shape information (depth values); these features are used to model the local structure of the face at each node of the graph. The edges of the graph are defined based on Delaunay triangulation and a set of mutual relations between the sides of the triangles are defined. The mutual relations boost the final performance of the system. The results of face matching using the 2-D and 3-D attributes and the mutual relations are fused at the score level. In the second case, the 2-D and 3-D data are not registered. 
This lack of registration could be due to different reasons such as time lapse between the data acquisitions. Therefore, the 2-D and 3-D modalities are modeled independently. For the 3-D modality, we developed a fully automated system for 3-D face modeling and recognition based on ridge images. The problem with shape matching approaches such as Iterative Closest Points (ICP) or Hausdorff distance is the computational complexity. We model the face by 3-D binary ridge images and use them for matching. In order to match the ridge points (either using the ICP or the Hausdorff distance), we extract three facial landmark points: namely, the two inner corners of the eyes and the tip of the nose, on the face surface using the Gaussian curvature. These three points are used for initial alignment of the constructed ridge images. As a result of using ridge points, which are just a fraction of the total points on the surface of the face, the computational complexity of the matching is reduced by two orders of magnitude. For the 2-D modality, we model the face using an Attributed Relational Graph. The results of the 2-D and 3-D matching are fused at the score level. There are various techniques to fuse the 2-D and 3-D modalities. In this dissertation, we fuse the matching results at the score level to enhance the overall performance of our face recognition system. We compare the Dempster-Shafer theory of evidence and the weighted sum rule for fusion. We evaluate the performance of the above techniques for multi-modal face recognition on various databases such as Gavab range database, FRGC (Face Recognition Grand Challenge) V2.0, and the University of Miami face database.
APA, Harvard, Vancouver, ISO, and other styles
27

Sharonova, Natalia Valeriyevna, Anastsiia Doroshenko, and Olga Cherednichenko. "Towards the ontology-based approach for factual information matching." Thesis, Друкарня Мадрид, 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/46351.

Full text
Abstract:
Factual information is information based on or relating to facts. The reliability of automatically extracted facts is the main problem in processing factual information. Fact retrieval systems remain among the most effective tools for identifying information for decision-making. In this work, we explore how natural language processing methods and a problem-domain ontology can help to check facts automatically for contradictions and mismatches.
APA, Harvard, Vancouver, ISO, and other styles
28

Gaspar, Thiago Lombardi. "Reconhecimento de faces humanas usando redes neurais MLP." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-27042006-231620/.

Full text
Abstract:
This research presents a facial recognition algorithm based on neural networks. The algorithm contains two main modules, one for feature extraction and another for face recognition, and is applied to digital images from three databases (PICS, ESSEX and AT&T) in which the face has been previously detected. The feature extraction method applies horizontal and vertical signatures to locate the facial components (eyes and nose) and determine their positions. Averaged over the three databases, this module achieved 86.6% accuracy. The recognition module uses the multilayer perceptron (MLP) architecture, trained with the backpropagation algorithm. The extracted facial features were applied to the inputs of the neural network, which identified whether the face belonged to the database with a 97% hit rate. Despite these satisfactory results, it was found that the MLP cannot adequately separate facial features with very close values, and it is therefore not the most efficient network for face recognition.
APA, Harvard, Vancouver, ISO, and other styles
29

Zuniga, Miguel Salas. "Extracting skull-face models form MRI datasets for use in craniofacial reconstruction." Thesis, University of Sheffield, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Tambay, Alain Alimou. "Testing Fuzzy Extractors for Face Biometrics: Generating Deep Datasets." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41429.

Full text
Abstract:
Biometrics can provide alternative methods for security beyond conventional authentication methods. There has been much research done in the field of biometrics, and efforts have been made to make biometric systems more easily usable in practice. The initial application for our work is a proof of concept for a system that would expedite some low-risk travellers' arrival into the country while preserving the user's privacy. This thesis focuses on the subset of problems related to the generation of cryptographic keys from noisy data, biometrics in our case. The thesis was built in two parts. In the first, we implemented a key-generating quantization-based fuzzy extractor scheme for facial feature biometrics based on the work by Dodis et al. and Sutcu, Li, and Memon. This scheme was modified to increase user privacy, address some implementation-based issues, and add testing-driven changes to tailor it towards its expected real-world usage. We show that our implementation does not significantly affect the scheme's performance, while providing additional protection against malicious actors that may gain access to the information stored on a server where biometric information is kept. The second part consists of the creation of a process to automate the generation of deep datasets suitable for testing similar schemes. The process led, with minimal work, to the creation of a larger dataset than those available for free online, and showed that such datasets can be expanded further with only little additional effort. This larger dataset allowed for the creation of more representative recognition challenges. We were able to show that our implementation performed similarly to other non-commercial schemes. Further refinement will be necessary if it is to be compared to commercial applications.
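The quantization-based fuzzy extractor idea can be illustrated with a toy sketch: each feature is quantized into a bin, the offsets from the bin centres are stored as public helper data, and the key is a hash of the bin indices. The step size, the hashing of bin indices, and the unprotected helper data are simplifying assumptions here; the thesis's privacy-enhanced construction differs.

```python
import hashlib

STEP = 4.0  # quantization step; tolerates per-feature noise below STEP/2

def enroll(features):
    """Quantize each feature; keep the offsets as public helper data
    and hash the bin indices into a key."""
    bins = [round(f / STEP) for f in features]
    helper = [f - b * STEP for b, f in zip(bins, features)]
    key = hashlib.sha256(str(bins).encode()).hexdigest()
    return key, helper

def reproduce(noisy_features, helper):
    """Recover the key from a noisy reading using the helper data."""
    bins = [round((f - h) / STEP) for f, h in zip(noisy_features, helper)]
    return hashlib.sha256(str(bins).encode()).hexdigest()

key, helper = enroll([10.3, -2.7, 5.1])
assert reproduce([11.0, -3.5, 4.4], helper) == key   # small noise: same key
assert reproduce([14.9, -3.5, 4.4], helper) != key   # large noise: new key
```

Note that the helper data here leaks the exact offsets, which is precisely the kind of information real constructions (and the privacy modifications described in the abstract) work to protect.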
APA, Harvard, Vancouver, ISO, and other styles
31

Venkatesan, Janani. "Video Data Collection for Continuous Identity Assurance." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6424.

Full text
Abstract:
Frequently monitoring the identity of a person connected to a secure system is an important component of a cyber-security system. Identity Assurance (IA) mechanisms, which continuously confirm and verify users' identity after the initial authentication process, ensure integrity and security. Such systems prevent unauthorized access and eliminate the need for an authorized user to present credentials repeatedly for verification. Very few cyber-security systems deploy such IA modules. These IA modules are typically based on computer vision and machine learning algorithms, which work effectively when trained with representative datasets. This thesis describes our effort at collecting a small dataset of multi-view videos of typical work sessions of several subjects, to serve as a resource for other researchers of IA algorithms to evaluate and compare the performance of their algorithms with those of others. We also present a Proof of Concept (POC) face matching algorithm and experimental results with this POC implementation for a subset of the collected dataset.
APA, Harvard, Vancouver, ISO, and other styles
32

Ener, Emrah. "Recognition Of Human Face Expressions." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607521/index.pdf.

Full text
Abstract:
In this study, a fully automatic and scale-invariant feature extractor that requires neither manual initialization nor special equipment is proposed. Face location and size are extracted using skin segmentation and ellipse fitting. The extracted face region is scaled to a predefined size, after which upper and lower facial templates are used for feature extraction. Template localization and template parameter calculations are carried out using Principal Component Analysis. Changes in facial feature coordinates between the analyzed image and a neutral-expression image are used for expression classification, and the performance of different classifiers is evaluated. The performance of the proposed feature extractor is also tested on sample video sequences: facial features are extracted in the first frame and a KLT tracker is used to track them, while lost features are detected using face geometry rules and relocated using the feature extractor. As an alternative to the feature-based technique, an available holistic method that analyses the face without partitioning is implemented. Face images are filtered using Gabor filters tuned to different scales and orientations, and the filtered images are combined to form Gabor jets, whose dimensionality is reduced using Principal Component Analysis. The performance of different classifiers on low-dimensional Gabor jets is compared. Feature-based and holistic classifier performances are compared using the JAFFE and AF facial expression databases.
APA, Harvard, Vancouver, ISO, and other styles
33

Wihlborg, Åsa. "Using an XML-driven approach to create tools for program understanding : An implementation for Configura and CET Designer." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-66414.

Full text
Abstract:
A major problem during the development and maintenance of software is the lack of quality documentation. Many programmers have trouble identifying which information is relevant for someone with no knowledge of the system, and therefore write incomplete documentation. One way around these problems would be a tool that extracts information from both comments and the actual source code and presents the structure of the program visually. This thesis aims to design an XML-driven system for the extraction and presentation of meta-information about source code for that purpose. Relevant meta-information here includes, for example, which entities (classes, methods, variables, etc.) exist in the program and how they interact with each other. The result is a prototype implemented to handle two company-developed languages. The prototype demonstrates how the system can be implemented and shows that the approach is scalable. The prototype is not suitable for commercial use due to its abstraction level, but with the help of qualified XML databases there are good prospects for building a usable system on the same techniques in the future.
APA, Harvard, Vancouver, ISO, and other styles
34

Cui, Chen. "Adaptive weighted local textural features for illumination, expression and occlusion invariant face recognition." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1374782158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Zou, Le. "3D face recognition with wireless transportation." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

SILVA, José Ivson Soares da. "Reconhecimento facial em imagens de baixa resolução." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/16367.

Full text
Abstract:
Tem crescido o uso de sistemas computacionais para reconhecimento de pessoas por meio de dados biométricos, consequentemente os métodos para realizar o reconhecimento tem evoluído. A biometria usada no reconhecimento pode ser face, voz, impressão digital ou qualquer característica física capaz de distinguir as pessoas. Mudanças causadas por cirurgias, envelhecimento ou cicatrizes, podem não causar mudanças significativas nas características faciais tornando possível o reconhecimento após essas mudanças de aparência propositais ou não. Por outro lado tais mudanças se tornam um desafio para sistemas de reconhecimento automático. Além das mudanças físicas há outros fatores na obtenção da imagem que influenciam o reconhecimento facial como resolução da imagem, posição da face em relação a câmera, iluminação do ambiente, oclusão, expressão. A distância que uma pessoa aparece na cena modifica a resolução da região da sua face, o objetivo de sistemas direcionados a esse contexto é que a influência da resolução nas taxas de reconhecimento seja minimizada. Uma pessoa mais distante da câmera tem sua face na imagem numa resolução menor que uma que esteja mais próxima. Sistemas de reconhecimento facial têm um menor desempenho ao tratar imagens faciais de baixa resolução. Uma das fases de um sistema de reconhecimento é a extração de características, que processa os dados de entrada e fornece um conjunto de informações mais representativas das imagens. Na fase de extração de características os padrões da base de dados de treinamento são recebidos numa mesma dimensão, ou seja, no caso de imagens numa mesma resolução. Caso as imagens disponíveis para o treinamento sejam de resoluções diferentes ou as imagens de teste sejam de resolução diferente do treinamento, faz-se necessário que na fase de pré-processamento haja um tratamento de resolução. O tratamento na resolução pode ser aplicando um aumento da resolução das imagens menores ou redução da resolução das imagens maiores. 
O aumento da resolução não garante um ganho de informação que possa melhorar o desempenho dos sistemas. Neste trabalho são desenvolvidos dois métodos executados na fase de extração de características realizada por Eigenface, os vetores de características são redimensionados para uma nova escala menor por meio de interpolação, semelhante ao que acontece no redimensionamento de imagens. No primeiro método, após a extração de características, os vetores de características e as imagens de treinamento são redimensionados. Então, as imagens de treinamento e teste são projetadas no espaço de características pelos vetores de dimensão reduzida. No segundo método, apenas os vetores de características são redimensionados e multiplicados por um fator de compensação. Então, as imagens de treinamento são projetadas pelos vetores originais e as imagens de teste são projetadas pelos vetores reduzidos para o mesmo espaço. Os métodos propostos foram testados em 4 bases de dados de reconhecimento facial com a presença de problemas de variação de iluminação, variação de expressão facial, presença óculos e posicionamento do rosto.
In recent decades, the use of computational systems to recognize people from biometric data has increased, and the efficacy of recognition methods has improved accordingly. The biometric trait used for recognition can be the face, voice, fingerprint, or any other physical feature that distinguishes one person from another. Facial changes caused by surgery, aging, or scars do not necessarily produce significant changes in facial features: a human can still recognize a person after such changes in appearance. These interventions, however, are a challenge for computer recognition systems. Beyond physical changes, other factors in image acquisition influence face recognition, such as image resolution, the position of the face relative to the camera, environmental lighting, occlusions, and variation in facial expression. The distance of a person at image acquisition determines the resolution of the face image: a person farther from the camera yields a lower-resolution face image than a person near the camera, and face recognition systems perform poorly on low-resolution images. The objective of systems in this context is to minimize the influence of image resolution on recognition. One step of a recognition system is feature extraction, which processes the input data to provide more representative images. In the feature-extraction step, the images from the training database are received at the same dimension; in other words, the images to be analyzed have the same resolution. If the training images have a different resolution from the test images, preprocessing is necessary to normalize the resolution, either by increasing the resolution of small images or reducing the resolution of large ones. Increasing the resolution, however, does not guarantee an information gain that improves the performance of recognition systems. 
In this work, two methods are developed for the feature-extraction step based on Eigenface. The feature vectors are resized to a smaller scale, similar to image resizing. In the first method, after feature extraction, both the feature vectors and the training images are resized; the training and test images are then projected into the feature space by the resized vectors. In the second method, only the feature vectors are resized, and they are multiplied by a compensation factor; the training images are projected by the original vectors and the test images by the resized vectors into the same space. The proposed methods were tested on four face-recognition databases exhibiting lighting variation, facial-expression variation, the presence of glasses, and varying face positions.
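The resizing of feature vectors described above can be sketched as follows. This is a minimal illustration, assuming plain linear interpolation and a hypothetical compensation factor (the square root of the length ratio); the thesis's actual factor may differ:

```python
import numpy as np

def resize_feature_vector(v, new_len):
    """Resize a 1-D Eigenface feature vector to new_len samples by linear
    interpolation, analogous to resizing an image."""
    old_grid = np.linspace(0.0, 1.0, len(v))
    new_grid = np.linspace(0.0, 1.0, new_len)
    return np.interp(new_grid, old_grid, v)

def compensated(v, new_len):
    """Second method: resize the vector and multiply by a compensation
    factor (sqrt of the length ratio, a hypothetical choice) so that
    projections keep a comparable scale."""
    return resize_feature_vector(v, new_len) * (len(v) / new_len) ** 0.5

v = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
w = compensated(v, 4)
assert w.shape == (4,)
```

Test images would then be projected with `w` while training images keep the original-length vectors, as in the second method above.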
APA, Harvard, Vancouver, ISO, and other styles
37

Mamadou, Diarra. "Extraction et fusion de points d'intérêt et textures spectraux pour l'identification, le contrôle et la sécurité." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCK031/document.

Full text
Abstract:
Biometrics is an emerging technology that offers new methods of control, identification, and security. Biometric systems are often the target of threats. Face recognition is popular, and several existing approaches use images in the visible spectrum. These traditional systems operating in the visible spectrum suffer from several limitations due to changes in lighting, pose, and facial expression. The methodology presented in this thesis is based on multispectral face recognition using infrared and visible imaging, to improve the performance of face recognition and overcome the shortcomings of the visible spectrum. The multispectral images used in this study are obtained by fusing visible and infrared images. The recognition techniques are based on the extraction of features such as texture and interest points, using the following techniques: hybrid feature extraction, binary feature extraction, and a similarity measure that takes the extracted features into account.
Biometrics is an emerging technology that proposes new methods of control, identification, and security. Biometric systems are often subject to threats. Face recognition is popular, and several existing approaches use images in the visible spectrum. These traditional systems operating in the visible spectrum suffer from several limitations due to changes in lighting, pose, and facial expression. The methodology presented in this thesis is based on multispectral face recognition using infrared and visible imaging, to improve the performance of face recognition and to overcome the deficiencies of the visible spectrum. The multispectral images used in this study are obtained by fusing visible and infrared images. The recognition techniques are based on the extraction of features such as texture and interest points, using the following techniques: hybrid feature extraction, binary feature extraction, and a similarity measure that takes the extracted features into account.
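As a rough illustration of fusing registered visible and infrared images, a pixel-wise weighted average can be used. The weight `alpha` and the pixel-level scheme are assumptions here; the thesis does not specify the fusion rule at this level of detail, and real systems often fuse in a transform domain instead:

```python
import numpy as np

def fuse_spectra(visible, infrared, alpha=0.6):
    """Pixel-wise weighted fusion of two registered grayscale images.
    alpha weights the visible band; (1 - alpha) weights the infrared band."""
    visible = np.asarray(visible, dtype=float)
    infrared = np.asarray(infrared, dtype=float)
    return alpha * visible + (1.0 - alpha) * infrared

vis = np.zeros((2, 2))   # toy visible image
ir = np.ones((2, 2))     # toy infrared image
fused = fuse_spectra(vis, ir)
assert abs(float(fused[0, 0]) - 0.4) < 1e-12
```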
APA, Harvard, Vancouver, ISO, and other styles
38

Pyun, Nam Jun. "Extraction d’une image dans une vidéo en vue de la reconnaissance du visage." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCB132/document.

Full text
Abstract:
A video is a particularly rich source of information. Among all the objects found in it, human faces are surely the most salient, the ones that most attract viewers' attention. Consider a video sequence in which each frame contains one or more moving faces, belonging to known people or to people who appear recurrently in the video. The aim of this thesis is to create a methodology for extracting one or more face images with a view to subsequently applying a face recognition algorithm. The main hypothesis of this thesis is that some samples of a face are better than others for recognition purposes. A face is a non-rigid 3D object projected onto a plane to form an image; its appearance therefore changes with the relative position of the camera and the face. Given the literature on face recognition, we can assume that the best-recognized samples of a face are the frontal ones. To extract the most frontal samples possible, we must on the one hand estimate the pose of the face; on the other hand, it is essential to track the face throughout the sequence, without which extracting representative samples of a face is meaningless. The work in this thesis has three major parts. First, when a face is detected in a sequence, we extract the position and size of the eyes, nose, and mouth; our approach is based on building local energy maps, mainly in the horizontal direction. Second, we estimate the pose of the face, notably using the relative positions of the extracted elements. A 3D face has three degrees of freedom: roll, yaw, and pitch. 
Roll is estimated by maximizing a horizontal energy function computed over the whole face. It corresponds to the rotation parallel to the image plane, so, unlike the other rotations, it can be corrected to zero. Finally, we propose a face-tracking algorithm based on tracking the eyes in a video sequence; this tracking relies on maximizing the correlation of binarized energy maps and on tracking the connected components of the binary map. Together, these three methods make it possible first to estimate the pose of a face in a given frame, then to link all the faces of the same person in a video sequence, and finally to extract several samples of that face to submit to a face recognition algorithm.
The aim of this thesis is to create a methodology for extracting one or a few representative face images from a video sequence with a view to applying a face recognition algorithm. A video is a particularly rich medium, and among all the objects present in it, human faces are surely the most salient. Let us consider a video sequence where each frame contains a face of the same person. The primary assumption of this thesis is that some samples of this face are better than others in terms of face recognition. A face is a non-rigid 3D object that is projected onto a plane to form an image. Hence, the face's appearance changes according to the relative positions of the camera and the face. Many works in the field of face recognition require faces that are as frontal as possible. To extract the most frontal face samples, on the one hand, we have to estimate the head pose; on the other hand, tracking the face is also essential. Otherwise, extracting representative face samples is meaningless. This thesis contains three main parts. First, once a face has been detected in a sequence, we extract the positions and sizes of the eyes, the nose, and the mouth. Our approach is based on local energy maps, mainly with a horizontal direction. In the second part, we estimate the head pose using the relative positions and sizes of the salient elements detected in the first part. A 3D face has three degrees of freedom: the roll, the yaw, and the pitch. The roll is estimated by maximizing a global energy function computed on the whole face. Since the roll corresponds to the rotation parallel to the image plane, it is possible to correct it to obtain a face with zero roll, contrary to the other rotations. In the last part, we propose a face-tracking algorithm based on tracking the region containing both eyes. This tracking is based on maximizing a similarity measure between two consecutive frames. 
We are therefore able to estimate the pose of the face present in a video frame and to link all the faces of the same person in a video sequence. Finally, we can extract several samples of this face in order to apply a face recognition algorithm to them.
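The correlation-based tracking of binarized energy maps can be illustrated with a brute-force search over small shifts between consecutive frames. The window size and search radius below are assumptions for the sketch:

```python
import numpy as np

def best_shift(prev_map, curr_map, max_shift=2):
    """Find the (dy, dx) shift of the current binarized energy map that
    maximizes its overlap (correlation) with the previous frame's map."""
    best, best_score = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(curr_map, dy, axis=0), dx, axis=1)
            score = int(np.sum(prev_map * shifted))
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

prev = np.zeros((9, 9), dtype=int)
prev[3:6, 3:6] = 1                        # eye region in the previous frame
curr = np.roll(prev, 1, axis=1)           # the region moved one pixel right
assert best_shift(prev, curr) == (0, -1)  # shifting back left realigns it
```

In a full tracker, the recovered shift updates the eye-region position from frame to frame.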
APA, Harvard, Vancouver, ISO, and other styles
39

Chahla, Charbel. "Non-linear feature extraction for object re-identification in cameras networks." Thesis, Troyes, 2017. http://www.theses.fr/2017TROY0023.

Full text
Abstract:
Replicating the visual system the brain uses to process information is an area of great interest. This thesis is set in the context of an automated system capable of analyzing facial features when a person is close to the cameras and of tracking that person's identity when these features are no longer traceable. The first part is devoted to face-pose estimation procedures for use in face recognition scenarios. We proposed a new method based on a sparse representation, called Sparse Label-sensitive Locality Preserving Projections. In an uncontrolled environment, person re-identification based on biometric data is not feasible; features based on people's appearance, however, can be exploited more effectively. In this context, we propose a new approach to re-identification in a network of non-overlapping cameras. To provide a similarity measure, each image is described by a vector of similarities to a collection of prototypes. The robustness of the algorithm is improved by the proposed Color Categorization procedure. In the last part of this thesis, we propose a Siamese architecture of two convolutional neural networks (CNNs), each reduced to only eleven layers. This architecture allows a machine to be fed directly with raw data for classification.
Replicating the visual system that the brain uses to process information is an area of substantial interest. This thesis is situated in the context of a fully automated system capable of analyzing facial features when the target is near the cameras and of tracking the target's identity when those facial features are no longer traceable. The first part of this thesis is devoted to face-pose estimation procedures to be used in face recognition scenarios. We proposed a new label-sensitive embedding based on a sparse representation, called Sparse Label-sensitive Locality Preserving Projections. In an uncontrolled environment observed by cameras from an unknown distance, person re-identification relying on conventional biometrics such as face recognition is not feasible. Instead, visual features based on the appearance of people can be exploited more reliably. In this context, we propose a new embedding scheme for single-shot person re-identification under non-overlapping cameras. Each person is described as a vector of kernel similarities to a collection of prototype person images. The robustness of the algorithm is improved by the proposed Color Categorization procedure. In the last part of this thesis, we propose a Siamese architecture of two Convolutional Neural Networks (CNNs), each reduced to only eleven layers. This architecture allows a machine to be fed directly with raw data and to automatically discover the representations needed for classification.
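The embedding of each person as a vector of kernel similarities to prototypes can be sketched as follows. The RBF kernel and its width `gamma` are assumptions for illustration; the thesis does not fix the kernel here:

```python
import numpy as np

def prototype_embedding(x, prototypes, gamma=0.5):
    """Describe a sample's feature vector x as a vector of RBF-kernel
    similarities to a collection of prototype feature vectors."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)  # squared distances
    return np.exp(-gamma * d2)

protos = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])  # toy prototypes
e = prototype_embedding(np.array([0.0, 0.0]), protos)
assert e.shape == (3,)
assert e[0] == 1.0  # identical to the first prototype
```

Two detections can then be compared by the distance between their embedding vectors rather than between raw features.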
APA, Harvard, Vancouver, ISO, and other styles
40

Bianchi, Marcelo Franceschi de. "Extração de características de imagens de faces humanas através de wavelets, PCA e IMPCA." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-10072006-002119/.

Full text
Abstract:
Image pattern recognition is an area of great interest in the scientific world. So-called feature-extraction methods can extract characteristics from images and also reduce data dimensionality, generating the feature vector. Given a query image, the focus of a human-face image recognition system is to search an image database for the image most similar to the query, according to a given criterion. This research was directed at generating feature vectors for an image recognition system over databases of human-face images, to support such queries. A feature vector is a numerical representation of an image, or part of one, describing its most representative details; it is an n-dimensional vector containing these values. This new representation of the image benefits the recognition process by reducing data dimensionality. An alternative approach to characterizing images for a human-face recognition system is domain transformation, whose main advantage is the effective characterization of local image properties. Wavelets differ from traditional Fourier techniques in how they localize information in the time-frequency plane; in essence, they can change from one resolution to another, which makes them especially suitable for analysis, representing the signal in different frequency bands, each with a distinct resolution corresponding to its scale. Wavelets have been successfully applied to image compression, enhancement, analysis, classification, characterization, and retrieval. 
One area where these properties have proved highly relevant is computer vision, through image representation and description. This work describes an approach to the recognition of human-face images with feature extraction based on multiresolution wavelet decomposition, using the Haar, Daubechies, Biorthogonal, Reverse Biorthogonal, Symlet, and Coiflet filters. The PCA (Principal Component Analysis) and IMPCA (Image Principal Component Analysis) techniques were tested in combination, with the best results obtained using the Biorthogonal wavelet with IMPCA.
Image pattern recognition is an area of great interest in the scientific world. Feature-extraction methods extract characteristics from images, reduce data dimensionality, and generate the feature vector. Given a query image, the goal of the system is to search the database and return the image most similar to the query according to a given criterion. Our research addresses the generation of feature vectors for a recognition system over human-face image databases. A feature vector is a numerical representation of an image, or part of it, capturing its most representative aspects; it is an n-dimensional vector organizing such values. This new image representation can be stored in a database and allows fast image retrieval. An alternative for image characterization in a human-face recognition system is the domain transform, whose principal advantage is the effective characterization of local image properties. In recent years, research in applied mathematics and signal processing has developed practical wavelet methods for the multiscale representation and analysis of signals. These new tools differ from traditional Fourier techniques in the way they localize information in the time-frequency plane; in particular, they are capable of trading one type of resolution for the other, which makes them especially suitable for the analysis of non-stationary signals. The wavelet transform is a set of basis functions that represents signals in different frequency bands, each with a resolution matching its scale. Wavelets have been successfully applied to image compression, enhancement, analysis, classification, characterization, and retrieval. One privileged area of application where these properties have been found relevant is computer vision, especially human-face imaging. 
In this work, we describe an approach to image recognition for human-face databases focused on feature extraction based on multiresolution wavelet decomposition, using the Biorthogonal, Reverse Biorthogonal, Symlet, Coiflet, Daubechies, and Haar filters. The PCA (Principal Component Analysis) and IMPCA (Image Principal Component Analysis) techniques were tested in combination with them.
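A single level of the 2-D Haar decomposition used as the first stage of such a pipeline can be sketched as below. The unnormalized averaging filters are an assumption for readability; library implementations such as PyWavelets use other normalizations:

```python
import numpy as np

def haar_2d(img):
    """One level of 2-D Haar decomposition of an even-sized grayscale
    image: returns the approximation (LL) and detail (LH, HL, HH) subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row-wise averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row-wise differences
    ll = (a[0::2, :] + a[1::2, :]) / 2.0      # average of averages
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_2d(img)
assert ll.shape == (2, 2)
```

The LL subband (a quarter-size smoothed face) would then be flattened and passed to PCA or IMPCA as the feature source.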
APA, Harvard, Vancouver, ISO, and other styles
41

Youmaran, Richard. "Algorithms to Process and Measure Biometric Information Content in Low Quality Face and Iris Images." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19729.

Full text
Abstract:
Biometric systems allow identification of human persons based on physiological or behavioral characteristics, such as voice, handprint, iris, or facial characteristics. The use of face and iris recognition as a way to authenticate users' identities has been a topic of research for years. Present iris recognition systems require that subjects stand close (<2 m) to the imaging camera and look at it for a period of about three seconds until the data are captured. This cooperative behavior is required in order to capture quality images for accurate recognition, and it restricts the number of practical applications where iris recognition can be applied, especially in uncontrolled environments where subjects cannot be expected to cooperate, such as criminals and terrorists. For this reason, this thesis develops a collection of methods that deal with low-quality face and iris images and can be applied to face and iris recognition in a non-cooperative environment. This thesis makes the following main contributions. I. For eye and face tracking in low-quality images, a new robust method is developed. The proposed system consists of three parts: face localization, eye detection, and eye tracking. This is accomplished using traditional image-based passive techniques, such as shape information of the eye, and active methods that exploit the spectral properties of the pupil under IR illumination. The developed method is also tested on underexposed images where the subject shows large head movements. II. For iris recognition, a new technique is developed for accurate iris segmentation in low-quality images where a major portion of the iris is occluded. Most existing methods perform generally quite well but tend to overestimate the occluded regions and thus lose iris information that could be used for identification. This information loss is potentially important in the covert surveillance applications we consider in this thesis. 
Once the iris region is properly segmented using the developed method, the biometric feature information is calculated for the iris region using the relative-entropy technique. Iris biometric feature information is calculated using two different feature-decomposition algorithms, based on Principal Component Analysis (PCA) and Independent Component Analysis (ICA). III. For face recognition, a new approach is developed to measure biometric feature information and the changes in biometric sample quality resulting from image degradations. A definition of biometric feature information is introduced, and an algorithm to measure it is proposed, based on a set of population and individual biometric features as measured by a biometric algorithm under test. Examples of its application are shown for two face recognition algorithms, based on PCA (Eigenface) and Fisher Linear Discriminant (FLD) feature decompositions.
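The relative-entropy measure of biometric feature information can be illustrated for Gaussian-modelled features. The diagonal-Gaussian assumption and the closed form below are ours, following the standard expression for the KL divergence between two univariate Gaussians:

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """Relative entropy D(p||q) between 1-D Gaussians, per feature: one way
    to score how much an individual's feature distribution (p) deviates
    from the population distribution (q). Summing over features gives a
    rough biometric-information measure (diagonal covariances assumed)."""
    mu_p, var_p = np.asarray(mu_p, float), np.asarray(var_p, float)
    mu_q, var_q = np.asarray(mu_q, float), np.asarray(var_q, float)
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Individual tighter and offset relative to the population prior.
info = kl_gaussian([1.0], [0.5], [0.0], [1.0])
assert info[0] > 0.0
```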
APA, Harvard, Vancouver, ISO, and other styles
42

Junior, Jozias Rolim de Araújo. "Reconhecimento multibiométrico baseado em imagens de face parcialmente ocluídas." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-24122018-011508/.

Full text
Abstract:
With the advance of technology, traditional strategies for identifying people have become more susceptible to failure. To overcome these difficulties, several approaches have been proposed in the literature, among which biometrics stands out. The field of biometrics encompasses a wide variety of technologies used to identify or verify a person's identity by measuring and analyzing physical and/or behavioral aspects of the human being. Consequently, biometrics has a wide field of application in systems that require secure identification of their users. The most popular biometric systems are based on face recognition or fingerprints; however, there are biometric systems that use the iris, retinal scans, voice, hand geometry, and facial thermograms. Significant progress has been made in automatic face recognition under controlled conditions. In real-world applications, face recognition suffers from a series of problems in uncontrolled scenarios, due mainly to facial variations that can greatly change the appearance of the face, including variations in expression and lighting, changes in pose, and partial occlusions. Compared with the large number of works in the literature on expression/lighting/pose variation, the occlusion problem is relatively neglected by the scientific community. Although little attention has been given to occlusion in the face recognition literature, the importance of this problem should be emphasized, since occlusion is very common in uncontrolled scenarios and may be associated with several security issues. Multibiometrics, in turn, is a relatively new approach to biometric knowledge representation that seeks to consolidate multiple sources of information to improve the performance of a biometric system. 
Multibiometrics is based on the concept that information obtained from different modalities, or from the same modality captured in different ways, is complementary; an appropriate combination of this information can therefore be more useful than information obtained from any single modality. To improve the performance of facial biometric systems in the presence of partial occlusion, the use of different techniques for reconstructing partial occlusions is investigated, generating different face images that are combined at the feature-extraction level and used as input to a neural classifier. The results show that the proposed approach can improve the performance of biometric systems based on partially occluded faces.
With the advancement of technology, traditional strategies for identifying people have become more susceptible to failure. To overcome these difficulties, several approaches have been proposed in the literature, among which biometrics stands out. The field of biometrics covers a wide range of technologies used to identify or verify a person's identity by measuring and analyzing physical and/or behavioral aspects of the human being. As a result, biometrics has a wide field of application in systems that require secure identification of their users. The most popular biometric systems are based on face recognition or fingerprints; however, there are biometric systems that use the iris, retinal scans, voice, hand geometry, and facial thermograms. Significant progress has been made in automatic face recognition under controlled conditions. In real-world applications, face recognition suffers from a number of problems in uncontrolled scenarios, mainly due to facial variations that can greatly change the appearance of the face, including variations in expression, illumination, and pose, as well as partial occlusions. Compared with the large number of papers in the literature on expression/illumination/pose variation, the occlusion problem is relatively neglected by the research community. Although little attention has been paid to the occlusion problem in the face recognition literature, its importance should be emphasized, since occlusion is very common in uncontrolled scenarios and may be associated with several security issues. Multibiometrics, on the other hand, is a relatively new approach to biometric knowledge representation that aims to consolidate multiple sources of information to improve the performance of a biometric system. 
Multibiometrics is based on the concept that information obtained from different modalities, or from the same modality captured in different ways, is complementary. Accordingly, a suitable combination of such information may be more useful than information obtained from any individual modality. To improve the performance of facial biometric systems in the presence of partial occlusion, the use of different partial-occlusion reconstruction techniques was investigated in order to generate different face images, which were combined at the feature-extraction level and used as input to a neural classifier. The results demonstrate that the proposed approach is capable of improving the performance of biometric systems based on partially occluded faces.
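Combining the feature vectors of the differently reconstructed face images at the feature-extraction level can be sketched as normalization plus concatenation. The z-score normalization is an assumed choice; the thesis does not fix the normalization scheme at this level of detail:

```python
import numpy as np

def fuse_features(feature_vectors):
    """Feature-level fusion: z-score normalize each source's feature vector
    and concatenate them into one input vector for a neural classifier."""
    normed = []
    for v in feature_vectors:
        v = np.asarray(v, dtype=float)
        s = v.std()
        normed.append((v - v.mean()) / s if s > 0 else v - v.mean())
    return np.concatenate(normed)

# Toy vectors standing in for features of two reconstructed face images.
fused = fuse_features([[1.0, 2.0, 3.0], [10.0, 20.0]])
assert fused.shape == (5,)
```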
APA, Harvard, Vancouver, ISO, and other styles
43

Hauser, Václav. "Rozpoznávání obličejů v obraze." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219434.

Full text
Abstract:
This master's thesis deals with the detection and recognition of faces in images. The thesis describes methods used for face detection and recognition; the method described in detail is Principal Component Analysis (PCA), which is subsequently used in the implementation of face recognition in a video sequence. In connection with the implementation, the thesis describes the OpenCV library, specifically its C++ API, which was used for the implementation. Finally, the application was tested on two different video sequences.
APA, Harvard, Vancouver, ISO, and other styles
44

Tshering, Nima. "Fact Extraction for Ruby on Rails Platform." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2542.

Full text
Abstract:
In the field of software engineering, software architecture plays an important role, particularly in the development of critical and large-scale software systems, and over the years it has evolved into an important sub-discipline within software engineering. However, software architecture is still an emerging discipline, mainly owing to the lack of a standardized way of representing architecture and of analysis methods that can determine whether the intended architecture translates into a correct implementation during development [HNS00]. Architecture compliance checking [KP07] is a technique used to resolve the latter part of the problem, and Fraunhofer SAVE (Software Architecture Visualization and Evaluation) is a compliance-checking tool that uses fact extraction. This master's thesis provides fact-extraction support in Fraunhofer SAVE for systems developed with the Ruby on Rails framework, by developing a fact extractor. The fact extractor was developed in Java as an Eclipse plug-in integrated with the SAVE platform; it consists of a parser that parses Ruby source code and generates an abstract syntax tree. The architectural facts are extracted by analyzing these abstract syntax trees with a visitor pattern, from which the architecture of the system is generated, represented using the internal model of the SAVE platform. The fact extractor was validated using two reference systems of differing sizes developed with the Ruby on Rails framework. A smaller reference system, containing all the relevant Ruby language constructs, was used to evaluate the correctness and completeness of the fact extractor; the evaluation showed a correctness of 1.0 (100%) and a completeness of 1.0 (100%). Afterwards, a larger application with a more complex architecture was used to validate the performance and robustness of the fact extractor. 
It successfully extracted, analyzed, and built the SAVE model of this large system, taking 0.05 seconds per component, without crashing. Based on these measurements, the performance of the fact extractor was judged acceptable, as it performed better than the C# fact extractor.
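The correctness and completeness figures reported above correspond to precision and recall over the set of extracted facts. A minimal sketch (the fact tuples are invented examples, not facts from the thesis's reference systems):

```python
def correctness_completeness(extracted, expected):
    """Correctness = fraction of extracted facts present in the reference
    set (precision); completeness = fraction of reference facts that were
    extracted (recall)."""
    extracted, expected = set(extracted), set(expected)
    correct = extracted & expected
    correctness = len(correct) / len(extracted) if extracted else 1.0
    completeness = len(correct) / len(expected) if expected else 1.0
    return correctness, completeness

# Hypothetical architectural facts extracted from Ruby source.
facts = {("User", "inherits", "ActiveRecord::Base"),
         ("User", "has_many", "Post")}
c, p = correctness_completeness(facts, facts)
assert (c, p) == (1.0, 1.0)
```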
APA, Harvard, Vancouver, ISO, and other styles
45

Trejo, Guerrero Sandra. "Model-Based Eye Detection and Animation." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7059.

Full text
Abstract:

In this thesis we present a system that extracts eye motion from a video stream containing a human face and applies this motion to a virtual character. By eye motion estimation we mean the information describing the location of the eyes in each frame of the video stream. By applying this estimate to a virtual character, the virtual face moves its eyes in the same way as the human face, synthesizing eye motion on the virtual character. In this study, a system capable of face tracking, eye detection and extraction, and finally iris-position extraction from a video stream containing a human face has been developed. Once an image containing a human face is extracted from the current frame of the video stream, eye detection and extraction are applied, based on edge detection. The iris center is then determined by applying image preprocessing and region segmentation using edge features on the extracted eye image.

Once the eye motion has been extracted, it is translated into the Facial Animation Parameters (FAPs) of MPEG-4 Facial Animation. We can thus improve the quality and quantity of facial-animation expressions that can be synthesized on a virtual character.

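A crude stand-in for the iris-position extraction described above is to take the centroid of the darkest pixels in the extracted eye region; the intensity threshold is an assumption, and the thesis's actual method uses edge features and region segmentation:

```python
import numpy as np

def iris_center(eye_img, threshold=60):
    """Estimate the iris centre as the (x, y) centroid of the darkest
    pixels in a grayscale eye region."""
    ys, xs = np.nonzero(eye_img < threshold)
    if len(xs) == 0:
        return None  # no sufficiently dark pixels found
    return float(xs.mean()), float(ys.mean())

eye = np.full((7, 9), 200, dtype=np.uint8)  # bright sclera/skin
eye[2:5, 3:6] = 30                          # dark 3x3 iris blob
assert iris_center(eye) == (4.0, 3.0)
```

The per-frame iris positions would then be mapped to the corresponding eyeball-rotation FAPs.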
APA, Harvard, Vancouver, ISO, and other styles
46

Дорошенко, Анастасія Юріївна. "Інформаційна технологія інтелектуального аналізу фактографічних текстових ресурсів." Thesis, Національний технічний університет "Харківський політехнічний інститут", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/40168.

Full text
Abstract:
Dissertation for the candidate of technical sciences degree, specialty 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2019. The dissertation solves the relevant scientific and practical problem of developing models and an information technology for the intelligent analysis of factual information. Based on an analysis of models and methods for processing factual data in network streams, the basic requirements for developing an information technology for the intelligent analysis of factual resources are formulated. Category theory, with its projective and predicate interpretations, is chosen as the mathematical apparatus for modeling facts. The theory of intelligence, the method of comparator identification, and the apparatus of algebra-logical equations are proposed for describing factual information. Models of thematic search and extraction of factual information are developed on the basis of an intelligent procedure for evaluating textual information. Two types of triplets are proposed for describing facts, "Subject – Predicate – Object" and "Item – Attribute – Value", which makes it possible to extract concepts from weakly structured text resources and to describe the relations between them in structured form. An approach to extracting factual data from text sources is formed, and the use of ontologies is proposed for describing the processes of integrating factual information. A new semi-automatic method is proposed for extending the basic ontology, illustrated on the subject areas "radiation safety" and "processing of patent and market information". The developed models, approaches, and information technology were validated, and the research results were implemented in real information systems. A reference architecture and the server-side software components of the system were developed, allowing data extraction based on flexible configuration and a predicate data-extraction model.
The dissertation for a candidate degree in technical sciences, specialty 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2019. The actual scientific and practical task of developing models and information technology of intellectual analysis of factual information is solved in the dissertation. On the basis of analysis of models and methods of processing factual data in network streams, the basic requirements for the development of information technology of intellectual analysis of factual resources are formulated. The theory of categories, its projective and predicate interpretations is determined as a mathematical tool for modeling facts. It is proposed to use the theory of intelligence, the method of comparative identification and the apparatus of algebra-logical equations to describe factual information. Models of thematic search and extraction of factual information on the basis of the intellectual procedure for evaluating textual information have been developed. It is proposed to describe the use of two types of triplets: "Subject – Predicate – Object" and "Item – Attribute – Value", which allows you to remove the concept of weakly structured text resources and describe the relationship between them in a structured form. An approach to extracting factual data from text sources has been formed, and the use of ontologies for the description of the processes of integration of factual information is proposed. The use of a new semi-automatic method is proposed for extending the basic ontology, on the example of the subject areas "radiation safety" and "processing of patent information". Approbation of developed models, approaches and information technology was carried out and the results of research were implemented in real information systems. 
The reference architecture, software components of the server part of the software system, which allows data extraction based on the use of flexible configuration and predicate data mining model, is developed.
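The two triplet types described in the abstract above ("Subject – Predicate – Object" and "Item – Attribute – Value") can be sketched as plain data structures. This is an illustrative sketch only; the class and field names are my own, not from the dissertation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SPOTriple:
    """A fact expressed as 'Subject - Predicate - Object', e.g. extracted from text."""
    subject: str
    predicate: str
    obj: str

@dataclass(frozen=True)
class IAVTriple:
    """A fact expressed as 'Item - Attribute - Value', describing a property of an entity."""
    item: str
    attribute: str
    value: str

def spo_to_iav(t: SPOTriple) -> IAVTriple:
    # When the predicate names a property rather than a relation to another
    # entity, an SPO triple maps directly onto the Item-Attribute-Value form.
    return IAVTriple(item=t.subject, attribute=t.predicate, value=t.obj)

fact = SPOTriple("reactor unit 3", "radiation level", "0.12 uSv/h")
print(spo_to_iav(fact))
```

Having both forms lets property-like facts and relation-like facts share one structured store, which is the integration benefit the abstract points to.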
APA, Harvard, Vancouver, ISO, and other styles
47

Přinosil, Jiří. "Analýza emocionálních stavů na základě obrazových předloh." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-233488.

Full text
Abstract:
This dissertation deals with an automatic system for recognising basic emotional facial expressions from static images. The system is divided into three linked parts. The first is automatic face detection in colour images: a face detector based on skin colour is proposed, together with methods for localising eye and lip positions in detected faces using colour maps; a modified Viola-Jones face detector is also included and was experimentally applied to eye detection. Both face detectors were tested on the Georgia Tech Face Database. The second part is the feature extraction process, which consists of two statistical methods and one method based on filtering the image with a set of Gabor filters; several combinations of features extracted by these methods were tested experimentally. The last part is the mathematical classifier, a feed-forward neural network. The system is further refined by accurate localisation of individual facial features using an active shape model. The whole system was benchmarked on recognising basic emotional facial expressions using the Japanese Female Facial Expression database.
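The Gabor-filter feature extraction mentioned in the abstract above can be sketched in a few lines of NumPy. This is a generic illustration, not the thesis's exact filter bank; the kernel parameters and the mean/std pooling are assumptions.

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter: a Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd + psi)
    return envelope * carrier

def gabor_features(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Filter the image with a small bank of orientations and pool the mean
    and standard deviation of each response into a fixed-length vector."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        # Circular convolution via frequency-domain multiplication, for brevity.
        resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k, s=image.shape)))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

face = np.random.default_rng(0).random((64, 64))
print(gabor_features(face).shape)  # (8,)
```

Each orientation responds to edges at a particular angle, which is why such banks are common front-ends for facial-expression features.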
APA, Harvard, Vancouver, ISO, and other styles
48

Elmahmudi, Ali A. M., and Hassan Ugail. "Experiments on deep face recognition using partial faces." 2018. http://hdl.handle.net/10454/16872.

Full text
Abstract:
Face recognition is a subject of great current interest in visual computing. Numerous face recognition and authentication approaches have been proposed, though the great majority use full frontal faces both for training machine learning algorithms and for measuring recognition rates. In this paper, we describe novel experiments that test the performance of machine learning, especially deep learning, using partial faces as training and recognition cues; the study thus differs sharply from the common approach of using the full face. In particular, we study the recognition rate achievable from various parts of the face, such as the eyes, mouth, nose and forehead. We use a convolutional neural network based architecture along with the pre-trained VGG-Face model to extract features for training, and two classifiers, cosine similarity and a linear support vector machine, to measure recognition rates. We ran our experiments on the Brazilian FEI dataset of 200 subjects. Our results show that the cheek has the lowest recognition rate at 15%, while the top, bottom and right halves and the 3/4 face achieve near 100% recognition rates.
Supported in part by the European Union's Horizon 2020 Programme H2020-MSCA-RISE-2017, under the project PDE-GIR with grant number 778035.
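The cosine-similarity classifier mentioned in the abstract above amounts to a nearest-neighbour rule over deep feature vectors. The sketch below stands in for the VGG-Face embedding step with random vectors; the function names and the 128-dimensional embeddings are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(query, gallery):
    """Return the label of the gallery embedding most similar to the query.

    gallery: dict mapping subject label -> enrolled embedding (in the paper,
    a VGG-Face descriptor of that subject's full or partial face image)."""
    return max(gallery, key=lambda label: cosine_similarity(query, gallery[label]))

rng = np.random.default_rng(42)
gallery = {f"subject_{i}": rng.standard_normal(128) for i in range(5)}
# A noisy view of subject_2, standing in for a partial-face crop of that person.
query = gallery["subject_2"] + 0.1 * rng.standard_normal(128)
print(classify(query, gallery))
```

Because cosine similarity ignores vector magnitude, it compares only the direction of the embeddings, which is why it pairs well with deep descriptors of varying scale.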
APA, Harvard, Vancouver, ISO, and other styles
49

李宗岳. "Dynamic Face Detection via Adaptive Face Features Extraction." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/07139451006948106318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Yu, Hui-Min, and 余惠民. "Face Extraction Based on Enhanced." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/57564058629944975144.

Full text
Abstract:
Master's thesis
I-Shou University
Department of Information Engineering
92 (ROC calendar, i.e. 2003)
This thesis proposes a novel colour-image face extraction method that can be applied to building human face databases. It introduces a new edge detection technique; with this edge detection, region growing of the human face can be achieved, after which a new facial feature detection method generates face candidates. The proposed method can therefore extract faces against complex backgrounds, even backgrounds with colours similar to skin. The technique has three steps: 1) an enhanced edge detection that yields a more complete edge map, used as the basis of face extraction; 2) a DCT (Discrete Cosine Transform) approach to detect the skin colour distribution in an image; and 3) a method for detecting and extracting human facial features. The complete procedure is complex, with the aim of obtaining more accurate face extraction. Moreover, whereas most papers on face extraction convert images from colour to grey level, the purpose of this work is to develop a technique that identifies a face in a complex colour image, supporting different applications in different areas.
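The skin-colour detection step described in the abstract above can be sketched as chrominance thresholding. Note this is a common rule-of-thumb YCbCr approach, not the thesis's DCT-domain method; the Cb/Cr bounds are widely used heuristic values.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 RGB image to YCbCr using the ITU-R BT.601 matrix."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of pixels whose chrominance falls in a typical skin range.

    Luminance (Y) is deliberately ignored, so the mask is fairly robust to
    lighting changes; the resulting regions would then feed region growing."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cb >= cb_range[0]) & (cb <= cb_range[1]) & \
           (cr >= cr_range[0]) & (cr <= cr_range[1])

# A 1 x 2 test image: one skin-toned pixel, one saturated blue pixel.
img = np.array([[[224, 172, 150], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask(img))  # [[ True False]]
```

Chrominance-only thresholding is also why backgrounds with skin-like colours defeat it, which is the failure mode the thesis's edge-based refinement addresses.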
APA, Harvard, Vancouver, ISO, and other styles
