Dissertations / Theses on the topic 'Facet extraction'
PORRINI, RICCARDO. "Construction and Maintenance of Domain Specific Knowledge Graphs for Web Data Integration." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2016. http://hdl.handle.net/10281/126789.
Low, Boon Kee. "Computer extraction of human faces." Thesis, De Montfort University, 1999. http://hdl.handle.net/2086/10668.
PANA-TALPEANU, RADU-MIHAI. "Trajectory extraction for automatic face sketching." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142429.
This project consists of a series of algorithms used to obtain a simplified but realistic rendering of a human face. The end goal is for the sketch to be drawn on paper by a robotic arm with a gripper holding a drawing instrument. The application is entertainment-oriented and combines the fields of human-machine interaction, machine learning and image processing. The first part focuses on manipulating a received digital image so that trajectories in a format suitable for the robot are obtained. Different techniques are presented, tested and compared, such as edge detection, landmark recognition, spline generation and principal component analysis. The results showed that an edge detector yields too many lines and that the spline generation method leads to overly simplified faces. The best facial depiction was obtained by combining landmark localization with edge detection. The trajectories obtained through the different techniques are transferred to the arm through a high-level interface to ROS, the Robot Operating System, and then drawn on paper.
Nguyen, Huu-Tuan. "Contributions to facial feature extraction for face recognition." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT034/document.
Centered around feature extraction, the core task of any face recognition system, our objective is to devise a facial representation robust against major challenges such as variations in illumination, pose and time-lapse, and low-resolution probe images. Fast processing speed is another crucial criterion. Towards these ends, several methods are proposed throughout this thesis. Firstly, based on the orientation characteristics of facial information and important features such as the eyes and mouth, a novel variant of LBP, referred to as ELBP, is designed to encode micro-patterns using a horizontal ellipse sample. Secondly, ELBP is exploited to extract local features from oriented edge magnitude images, yielding the Elliptical Patterns of Oriented Edge Magnitudes (EPOEM) description. Thirdly, we propose a novel feature extraction method called Patch-based Local Phase Quantization of Monogenic components (PLPQMC). Lastly, a robust facial representation named Local Patterns of Gradients (LPOG) is developed to capture meaningful features directly from gradient images. Chief among these methods are PLPQMC and LPOG, as they are inherently illumination invariant and blur tolerant. Our methods, while offering results comparable to or better than those of existing systems, have low computational cost and are thus feasible to deploy in real-life applications.
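The ELBP descriptor above is a variant of the standard Local Binary Pattern operator, which thresholds a pixel's neighbours against its centre and pools the resulting codes into a histogram. As background, here is a minimal sketch of the plain 8-neighbour LBP (a generic illustration, not the authors' elliptical-sampling variant):

```python
import numpy as np

def lbp_code(img, r, c):
    """Basic 8-neighbour Local Binary Pattern code for pixel (r, c).

    Each neighbour is thresholded against the centre pixel and the
    resulting bits are packed into one byte.
    """
    center = img[r, c]
    # clockwise 8-neighbourhood offsets, starting top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr, c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Histogram of LBP codes over the image interior (the descriptor)."""
    h, w = img.shape
    hist = np.zeros(256, dtype=int)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

ELBP differs mainly in where the neighbours are sampled: on a horizontal ellipse around the centre rather than on the unit square.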
Gunn, Steve R. "Dual active contour models for image feature extraction." Thesis, University of Southampton, 1996. https://eprints.soton.ac.uk/250089/.
Urbansky, David. "WebKnox: Web Knowledge Extraction." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-23766.
Full textGao, Jiangning. "3D face recognition using multicomponent feature extraction from the nasal region and its environs." Thesis, University of Bath, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.707585.
Full textAl-Qatawneh, Sokyna M. S. "3D Facial Feature Extraction and Recognition. An investigation of 3D face recognition: correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4876.
Full textBenn, David E. "Model-based feature extraction and classification for automatic face recognition." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324811.
Full textJung, Sung Uk. "On using gait to enhance face extraction for visual surveillance." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/340358/.
Full textLiang, Antoni. "Face Image Retrieval with Landmark Detection and Semantic Concepts Extraction." Thesis, Curtin University, 2017. http://hdl.handle.net/20.500.11937/54081.
Full textHond, Darryl. "Automatic extraction and recognition of faces from images with varied backgrounds." Thesis, University of Essex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242259.
Full textMasip, David. "Feature extraction in face recognition on the use of internal and external features." Saarbrücken VDM Verlag Dr. Müller, 2005. http://d-nb.info/989265706/04.
Full textMasip, Rodó David. "Face Classification Using Discriminative Features and Classifier Combination." Doctoral thesis, Universitat Autònoma de Barcelona, 2005. http://hdl.handle.net/10803/3051.
Full textPer altra banda, en la segon apart de la tesi explorem el rol de les característiques externes en el procés de classificació facial, i presentem un nou mètode per extreure un conjunt alineat de característiques a partir de la informació externa que poden ser combinades amb les tècniques clàssiques millorant els resultats globals de classificació.
As technology evolves, new applications dealing with face classification appear. In pattern recognition, faces are usually seen as points in a high-dimensional space defined by their pixel values. This approach must deal with several problems, such as the curse of dimensionality, the presence of partial occlusions, and local changes in illumination. Traditionally, only the internal features of face images have been used for classification purposes, usually after a feature extraction step. Feature extraction techniques make it possible to reduce the influence of the problems mentioned, also reducing the noise inherent in natural images and learning invariant characteristics from face images. In the first part of this thesis some internal feature extraction methods are presented: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Non-negative Matrix Factorization (NMF), and Fisher Linear Discriminant Analysis (FLD), each of them making some kind of assumption about the data to classify. The main contribution of our work is a non-parametric family of feature extraction techniques using the Adaboost algorithm. Our method makes no assumptions on the data to classify, and incrementally builds the projection matrix taking into account the most difficult samples.
On the other hand, in the second part of this thesis we explore the role of external features for face classification purposes, and present a method for extracting an aligned feature set from external face information that can be combined with the classic internal features, improving the global performance of the face classification task.
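Several of the internal feature extraction methods named above (PCA, FLD, NMF) amount to learning a linear projection matrix and mapping face images into a lower-dimensional feature space. A minimal PCA sketch via SVD, shown purely to illustrate that projection step (it is not the thesis' Adaboost-based contribution):

```python
import numpy as np

def pca_fit(X, k):
    """Fit a k-component PCA on row-vector samples X (n_samples x n_pixels).

    Returns the sample mean and a projection matrix W (n_pixels x k)
    whose columns are the top-k principal directions.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k].T

def pca_project(X, mean, W):
    """Project samples into the k-dimensional feature space."""
    return (X - mean) @ W
```

The Adaboost-based method described in the abstract replaces this variance criterion with an incremental construction that weights the hardest-to-classify samples.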
Wall, Helene. "Context-Based Algorithm for Face Detection." Thesis, Linköping University, Department of Science and Technology, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4171.
Face detection has been a research area for more than ten years. It is a complex problem due to the high variability within and among faces; therefore it is not possible to extract a general pattern to be used for detection. This is what makes the face detection problem a challenge.
This thesis gives the reader a background to the face detection problem and describes its two main approaches. A face detection algorithm is implemented using a context-based method in combination with an evolving neural network. The algorithm consists of two major steps: detecting possible face areas, and detecting faces within these areas. This method makes it possible to reduce the search space.
The performance of the algorithm is evaluated and analysed. Several parameters affect the performance: the feature extraction method, the classifier and the images used.
The analysis of the problems that occurred has provided a deeper understanding of the complexity of the face detection problem.
Ahlberg, Jörgen. "Model-based coding : extraction, coding, and evaluation of face model parameters /." Linköping : Univ, 2002. http://www.bibl.liu.se/liupubl/disp/disp2002/tek761s.pdf.
Full textAhmad, Muhammad Imran. "Feature extraction and information fusion in face and palmprint multimodal biometrics." Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2128.
Full textKičina, Pavol. "Automatická identifikace tváří v reálných podmínkách." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218980.
Full textLi, Qi. "An integration framework of feature selection and extraction for appearance-based recognition." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 8.38 Mb., 141 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3220745.
Full textSamaria, Ferdinando Silvestro. "Face recognition using Hidden Markov Models." Thesis, University of Cambridge, 1995. https://www.repository.cam.ac.uk/handle/1810/244871.
Full textZhang, Cuiping Cohen Fernand S. "3D face structure extraction from images at arbitrary poses and under arbitrary illumination conditions /." Philadelphia, Pa. : Drexel University, 2006. http://hdl.handle.net/1860/1294.
Full textSmith, R. S. "Angular feature extraction and ensemble classification method for 2D, 2.5D and 3D face recognition." Thesis, University of Surrey, 2008. http://epubs.surrey.ac.uk/843069/.
Full textAhonen, T. (Timo). "Face and texture image analysis with quantized filter response statistics." Doctoral thesis, University of Oulu, 2009. http://urn.fi/urn:isbn:9789514291821.
Full textYilmazturk, Mehmet Celaleddin. "Online And Semi-automatic Annotation Of Faces In Personal Videos." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611936/index.pdf.
Full textUrbansky, David. "Automatic Extraction and Assessment of Entities from the Web." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-97469.
Full textMahoor, Mohammad Hossein. "A Multi-Modal Approach for Face Modeling and Recognition." Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_dissertations/32.
Full textSharonova, Natalia Valeriyevna, Anastsiia Doroshenko, and Olga Cherednichenko. "Towards the ontology-based approach for factual information matching." Thesis, Друкарня Мадрид, 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/46351.
Full textGaspar, Thiago Lombardi. "Reconhecimento de faces humanas usando redes neurais MLP." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-27042006-231620/.
This research presents a facial recognition algorithm based on neural networks. The algorithm contains two main modules: one for feature extraction and another for face recognition. It was applied to digital images from three databases, PICS, ESSEX and AT&T, in which the face had previously been detected. The feature extraction method was based on prior knowledge of the location of the facial components (eyes and nose) and on horizontal and vertical signatures for identifying these components. The mean result obtained for this module was 86.6% across the three databases. The recognition module used the multilayer perceptron (MLP) architecture, trained with the backpropagation algorithm. The extracted facial features were fed to the input of the neural network, which identified the face as belonging or not to the database with a 97% hit rate. Despite the good results, it was verified that the MLP could not distinguish facial features with very close values; therefore the MLP is not the most efficient network for this task.
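The horizontal and vertical signatures mentioned above are simply row-wise and column-wise intensity sums; their extrema indicate the rows and columns where dark components such as the eyes and nose lie. A minimal sketch (a generic illustration, not the thesis' exact procedure):

```python
import numpy as np

def signatures(img):
    """Horizontal signature (one sum per row) and vertical signature
    (one sum per column). Over a dark-on-light face image, minima of
    these profiles hint at the rows/columns of eyes, nose and mouth."""
    return img.sum(axis=1), img.sum(axis=0)
```

Thresholding or peak-picking on the two profiles then yields candidate component positions.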
Zuniga, Miguel Salas. "Extracting skull-face models form MRI datasets for use in craniofacial reconstruction." Thesis, University of Sheffield, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527222.
Full textTambay, Alain Alimou. "Testing Fuzzy Extractors for Face Biometrics: Generating Deep Datasets." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41429.
Full textVenkatesan, Janani. "Video Data Collection for Continuous Identity Assurance." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6424.
Full textEner, Emrah. "Recognition Of Human Face Expressions." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607521/index.pdf.
Full textWihlborg, Åsa. "Using an XML-driven approach to create tools for program understanding : An implementation for Configura and CET Designer." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-66414.
A major problem in the development and maintenance of software is inadequate documentation of the source code. Many programmers find it hard to identify which information is important to someone unfamiliar with the system, and therefore write insufficient documentation. One way around these problems would be to use tools that extract information from both comments and the actual source code and present the program's structure in a clear and visual way. This thesis aims to design a system for XML-driven extraction and presentation of meta-information about source code with exactly that purpose. The meta-information in question is, for example, which entities (classes, methods, variables, etc.) exist in the source code and how they interact with each other. The result is a prototype implemented to handle two company-developed languages. The prototype demonstrates how the system can be implemented and shows that the method is scalable. In terms of abstraction the prototype is not suitable for commercial use, but with the help of capable XML databases there is great potential to build a practically useful system based on the same techniques in the future.
Cui, Chen. "Adaptive weighted local textural features for illumination, expression and occlusion invariant face recognition." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1374782158.
Full textZou, Le. "3D face recognition with wireless transportation." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1448.
Full textSILVA, José Ivson Soares da. "Reconhecimento facial em imagens de baixa resolução." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/16367.
In the last decades the use of computational systems to recognize people by biometric data has been increasing, and consequently the methods that perform recognition have improved. The biometric trait used for recognition can be the face, voice, fingerprint or any other physical feature that distinguishes people. Facial changes caused by surgery, aging or scars do not necessarily cause significant changes in facial features; a human can still recognize a person after such interventions on their appearance. On the other hand, these interventions become a challenge for automatic recognition systems. Beyond physical changes, other factors in image acquisition influence face recognition, such as image resolution, the position of the face relative to the camera, environment lighting, occlusion and variation of facial expression. The distance at which a person appears in the scene changes the resolution of the face region; the objective of systems in this context is to minimize the influence of resolution on recognition rates. A person farther from the camera has a face image at a lower resolution than one who is closer, and face recognition systems perform poorly on low-resolution images. One of the steps of a recognition system is feature extraction, which processes the input data and provides a more representative description of the images. In the feature extraction step the patterns of the training database are received at the same dimension, i.e., as images at the same resolution. If the images available for training have different resolutions, or if the test images differ in resolution from the training images, a resolution treatment is necessary in the preprocessing step: either increasing the resolution of the smaller images or reducing the resolution of the larger ones. Increasing the resolution, however, does not guarantee an information gain that improves system performance.
In this work, two methods are developed within the Eigenface-based feature extraction step. The feature vectors are resized to a smaller scale by interpolation, similarly to image resizing. In the first method, after feature extraction, both the feature vectors and the training images are resized; the training and test images are then projected into the feature space by the reduced-dimension vectors. In the second method, only the feature vectors are resized and multiplied by a compensation factor; the training images are projected by the original feature vectors and the test images are projected by the reduced vectors into the same space. The proposed methods were tested on 4 face recognition databases featuring light variation, facial expression variation, the presence of glasses and changes in face position.
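The eigenvector downscaling described above can be pictured as 1-D interpolation of each feature vector, analogous to image resizing. A minimal sketch under that reading (the thesis' second method additionally applies a compensation factor, omitted here):

```python
import numpy as np

def resize_vector(v, new_len):
    """Linearly interpolate a feature vector to a new length,
    analogous to image resizing. This is a simplified 1-D reading of
    the eigenvector downscaling, not the thesis' exact procedure."""
    old = np.linspace(0.0, 1.0, len(v))
    new = np.linspace(0.0, 1.0, new_len)
    return np.interp(new, old, v)
```

Applied column-wise to an Eigenface projection matrix, this produces the reduced-dimension vectors into which training and test images are projected.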
Mamadou, Diarra. "Extraction et fusion de points d'intérêt et textures spectraux pour l'identification, le contrôle et la sécurité." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCK031/document.
Biometrics is an emerging technology that proposes new methods of control, identification and security, yet biometric systems are often subject to risks. Face recognition is popular, and several existing approaches use images in the visible spectrum. These traditional systems operating in the visible spectrum suffer from several limitations due to changes in lighting, pose and facial expression. The methodology presented in this thesis is based on multispectral facial recognition using infrared and visible imaging, to improve the performance of facial recognition and to overcome the deficiencies of the visible spectrum. The multispectral images used in this study are obtained by fusion of visible and infrared images. The different recognition techniques are based on the extraction of features such as texture and points of interest, using the following techniques: hybrid feature extraction, binary feature extraction, and a similarity measure that takes the extracted characteristics into account.
Pyun, Nam Jun. "Extraction d’une image dans une vidéo en vue de la reconnaissance du visage." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCB132/document.
The aim of this thesis is to create a methodology for extracting one or a few representative face images from a video sequence, with a view to applying a face recognition algorithm. Video is a particularly rich medium, and among all the objects present in a video, human faces are surely the most salient. Consider a video sequence in which each frame contains the face of the same person. The primary assumption of this thesis is that some samples of this face are better than others in terms of face recognition. A face is a non-rigid 3D object that is projected onto a plane to form an image; hence the face's appearance changes according to the relative positions of the camera and the face. Many works in the field of face recognition require faces that are as frontal as possible. To extract the most frontal face samples we have to estimate the head pose, and tracking the face is also essential, since extracting representative face samples is otherwise meaningless. This thesis contains three main parts. First, once a face has been detected in a sequence, we extract the positions and sizes of the eyes, the nose and the mouth. Our approach is based on local energy maps, mainly with a horizontal direction. In the second part, we estimate the head pose using the relative positions and sizes of the salient elements detected in the first part. A 3D face has 3 degrees of freedom: roll, yaw and pitch. The roll is estimated by maximizing a global energy function computed on the whole face. Since the roll corresponds to the rotation parallel to the image plane, it is possible, unlike for the other rotations, to correct it so that the face has a null roll value. In the last part, we propose a face tracking algorithm based on tracking the region containing both eyes, built on the maximization of a similarity measure between two consecutive frames.
Therefore, we are able to estimate the pose of the face present in a video frame and to link all the faces of the same person across a video sequence. Finally, we can extract several samples of this face in order to apply a face recognition algorithm to them.
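The roll correction described above exploits the fact that roll is the in-plane rotation: it can be estimated from two detected landmarks and rotated away. A generic sketch; the two-eye interface and function names are illustrative assumptions, not the thesis' energy-maximization estimator:

```python
import numpy as np

def roll_from_eyes(left_eye, right_eye):
    """In-plane roll angle (radians) from the two eye centres (x, y)."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    return np.arctan2(y2 - y1, x2 - x1)

def derotate(points, angle, center=(0.0, 0.0)):
    """Rotate 2-D points by -angle about `center`, zeroing the roll."""
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])
    p = np.asarray(points, dtype=float) - center
    return p @ R.T + center
```

After derotation the two eye centres lie on a horizontal line, i.e. the face has a null roll value, while yaw and pitch remain unchanged.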
Chahla, Charbel. "Non-linear feature extraction for object re-identification in cameras networks." Thesis, Troyes, 2017. http://www.theses.fr/2017TROY0023.
Replicating the visual system that the brain uses to process information is an area of substantial interest. This thesis is situated in the context of a fully automated system capable of analyzing facial features when the target is near the cameras and of tracking the person's identity when the facial features are no longer traceable. The first part of this thesis is devoted to face pose estimation procedures to be used in face recognition scenarios. We propose a new label-sensitive embedding based on a sparse representation, called Sparse Label-sensitive Locality Preserving Projections. In an uncontrolled environment observed by cameras from an unknown distance, person re-identification relying upon conventional biometrics such as face recognition is not feasible; instead, visual features based on the appearance of people can be exploited more reliably. In this context, we propose a new embedding scheme for single-shot person re-identification under non-overlapping target cameras. Each person is described as a vector of kernel similarities to a collection of prototype person images, and the robustness of the algorithm is improved by a proposed Color Categorization procedure. In the last part of this thesis, we propose a Siamese architecture of two Convolutional Neural Networks (CNNs), each reduced to only eleven layers. This architecture allows a machine to be fed directly with raw data and to automatically discover the representations needed for classification.
Bianchi, Marcelo Franceschi de. "Extração de características de imagens de faces humanas através de wavelets, PCA e IMPCA." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-10072006-002119/.
Image pattern recognition is an active area of research. Feature extraction refers to the ability to extract features from images, reduce the dimensionality, and generate the feature vector. Given a query image, the goal of a retrieval system built on such features is to search the database and return the images most similar to the query according to a given criterion. Our research addresses the generation of feature vectors for a recognition system operating on human face databases. A feature vector is an n-dimensional numeric representation of an image, or of part of it, over its representative aspects. This image representation can be stored in a database and allows fast image retrieval. An alternative for image characterization in a human face recognition system is the domain transform, whose principal advantage is its effective characterization of local image properties. In the past few years, research in applied mathematics and signal processing has developed practical wavelet methods for the multiscale representation and analysis of signals. These new tools differ from the traditional Fourier techniques in the way they localize information in the time-frequency plane; in particular, they are capable of trading one type of resolution for the other, which makes them especially suitable for the analysis of non-stationary signals. The wavelet transform is a set of basis functions that represents signals in different frequency bands, each one with a resolution matching its scale. Wavelets have been successfully applied to image compression, enhancement, analysis, classification, characterization and retrieval. One privileged area of application where these properties have been found relevant is computer vision, especially human face imaging.
In this work we describe an approach to image recognition for human face databases focused on feature extraction based on multiresolution wavelet decomposition, drawing on the Biorthogonal, Reverse Biorthogonal, Symlet, Coiflet, Daubechies and Haar families. These were tried jointly with PCA (Principal Component Analysis) and IMPCA (Image Principal Component Analysis).
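One level of the Haar decomposition, the simplest of the wavelet families listed above, can be sketched in pure NumPy. This is a generic single-level illustration (sub-band naming conventions vary), not the thesis' full multiresolution pipeline:

```python
import numpy as np

def haar2d(img):
    """One level of 2-D Haar wavelet decomposition (averaging form).

    Returns the approximation sub-band and three detail sub-bands,
    each half the input size; height and width must be even.
    """
    a, b = img[0::2, :], img[1::2, :]
    lo, hi = (a + b) / 2.0, (a - b) / 2.0      # filter along rows
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0     # then along columns
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

Recursing on the approximation sub-band yields the multiresolution pyramid from which feature vectors can be built before PCA/IMPCA.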
Youmaran, Richard. "Algorithms to Process and Measure Biometric Information Content in Low Quality Face and Iris Images." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19729.
Full textJunior, Jozias Rolim de Araújo. "Reconhecimento multibiométrico baseado em imagens de face parcialmente ocluídas." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-24122018-011508/.
With the advancement of technology, traditional strategies for identifying people have become more susceptible to failure. To overcome these difficulties, several approaches have been proposed in the literature, among which biometrics stands out. The field of biometrics covers a wide range of technologies used to identify or verify a person's identity by measuring and analyzing physical and/or behavioral aspects of the human being. As a result, biometrics has a wide field of application in systems that require secure identification of their users. The most popular biometric systems are based on facial recognition or fingerprints, but there are also biometric systems that use the iris, retinal scans, voice, hand geometry, and facial thermograms. Currently, there has been significant progress in automatic face recognition under controlled conditions; in real-world applications, however, facial recognition suffers from a number of problems in uncontrolled scenarios. These problems are mainly due to facial variations that can greatly change the appearance of the face, including variations in expression, illumination and pose, as well as partial occlusions. Compared with the large number of papers in the literature on expression, illumination and pose variation, the occlusion problem is relatively neglected by the research community. Its importance should nonetheless be emphasized, since occlusion is very common in uncontrolled scenarios and may be associated with several safety issues. Multibiometrics, on the other hand, is a relatively new approach to biometric knowledge representation that aims to consolidate multiple sources of information to improve the performance of the biometric system.
Multibiometrics is based on the idea that information obtained from different modalities, or from the same modality captured in different ways, is complementary. Accordingly, a suitable combination of such information may be more useful than information obtained from any of the individual modalities alone. To improve the performance of facial biometric systems in the presence of partial occlusion, the use of different partial-occlusion reconstruction techniques was investigated in order to generate different face images, which were combined at the feature extraction level and used as input to a neural classifier. The results demonstrate that the proposed approach is capable of improving the performance of biometric systems based on partially occluded faces.
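The feature-level combination described in this abstract can be sketched in a few lines: feature vectors obtained from differently reconstructed face images are normalised and concatenated into one vector for a downstream classifier. This is an illustrative toy under assumed inputs, not the thesis's actual pipeline; all function names and vectors are made up.

```python
# Sketch of feature-level fusion: L2-normalise each feature vector,
# then concatenate them into a single fused vector.
import math

def l2_normalize(v):
    """Scale a feature vector to unit Euclidean length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm > 0 else list(v)

def fuse_features(feature_sets):
    """Concatenate normalised feature vectors from several sources."""
    fused = []
    for v in feature_sets:
        fused.extend(l2_normalize(v))
    return fused

# Features from two hypothetical occlusion-reconstruction pipelines:
f_inpainted = [3.0, 4.0]        # e.g. from an inpainted face image
f_symmetry = [1.0, 0.0, 0.0]    # e.g. from a symmetry-based reconstruction

fused = fuse_features([f_inpainted, f_symmetry])
print(fused)  # [0.6, 0.8, 1.0, 0.0, 0.0]
```

The fused vector would then serve as the input to the neural classifier mentioned in the abstract.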
Hauser, Václav. "Rozpoznávání obličejů v obraze [Face recognition in images]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219434.
Full text
Tshering, Nima. "Fact Extraction For Ruby On Rails Platform." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2542.
Full text
Trejo Guerrero, Sandra. "Model-Based Eye Detection and Animation." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7059.
Full text
In this thesis we present a system that extracts eye motion from a video stream containing a human face and applies that motion to a virtual character. By eye motion estimation we mean the information describing the location of the eyes in each frame of the video stream. Applying this estimate to a virtual character makes the virtual face move its eyes in the same way as the human face, synthesizing eye motion. In this study, a system capable of face tracking, eye detection and extraction, and finally iris position extraction from a video stream containing a human face has been developed. Once an image containing a human face is extracted from the current frame of the video stream, the eyes are detected and extracted based on edge detection. The iris centre is then determined by applying image preprocessing and region segmentation using edge features on the extracted eye image.
Once the eye motion has been extracted, it is translated, using MPEG-4 Facial Animation, into the Facial Animation Parameters (FAPs). In this way we can improve the quality and range of the facial animation expressions that can be synthesized on a virtual character.
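As a rough illustration of the iris-localisation step described above: the thesis uses edge detection and region segmentation, but a common simpler heuristic is to threshold the darkest pixels in a cropped grayscale eye region (the iris and pupil are usually the darkest area) and take their centroid. The following sketch uses that substitute heuristic on a tiny synthetic image; it is not the thesis's method.

```python
# Toy iris-centre locator: centroid of the darkest pixels in a
# cropped grayscale eye region (values 0-255, dark = low).

def iris_center(eye, threshold=60):
    """Return (row, col) centroid of pixels darker than `threshold`."""
    rows = cols = count = 0
    for r, row in enumerate(eye):
        for c, value in enumerate(row):
            if value < threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None  # no dark region found
    return (rows / count, cols / count)

# Tiny synthetic "eye": bright sclera (200) with a dark 2x2 iris blob (30).
eye = [[200] * 6 for _ in range(5)]
for r in (2, 3):
    for c in (2, 3):
        eye[r][c] = 30

print(iris_center(eye))  # (2.5, 2.5)
```

In a real system the per-frame iris positions found this way would drive the FAP values for eye movement.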
Дорошенко, Анастасія Юріївна. "Інформаційна технологія інтелектуального аналізу фактографічних текстових ресурсів [Information technology for the intelligent analysis of factual text resources]." Thesis, Національний технічний університет "Харківський політехнічний інститут", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/40168.
Full text
The dissertation for a candidate degree in technical sciences, specialty 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2019. The dissertation addresses the scientific and practical task of developing models and an information technology for the intelligent analysis of factual information. Based on an analysis of models and methods for processing factual data in network streams, the basic requirements for an information technology for the intelligent analysis of factual resources are formulated. Category theory, with its projective and predicate interpretations, is chosen as the mathematical tool for modelling facts. The theory of intelligence, the method of comparative identification and the apparatus of algebraic-logical equations are used to describe factual information. Models for thematic search and extraction of factual information are developed on the basis of an intelligent procedure for evaluating textual information. Two types of triplets are used: "Subject – Predicate – Object" and "Item – Attribute – Value", which makes it possible to handle weakly structured text resources and to describe the relations between them in structured form. An approach to extracting factual data from text sources is formulated, and the use of ontologies is proposed for describing the processes of integrating factual information. A new semi-automatic method is proposed for extending the basic ontology, illustrated on the subject areas "radiation safety" and "processing of patent information". The developed models, approaches and information technology were tested, and the research results were implemented in real information systems.
A reference architecture and the server-side software components of the system are developed, enabling data extraction based on flexible configuration and a predicate data-mining model.
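The two triplet forms named in the abstract above, "Subject – Predicate – Object" and "Item – Attribute – Value", can be modelled simply as tuples. The sketch below is an illustration with invented facts and names, not code from the dissertation.

```python
# "Subject - Predicate - Object" triplets express relations between entities;
# "Item - Attribute - Value" triplets express properties of a single entity.
# All facts below are invented for illustration.

spo_facts = [
    ("reactor", "located_in", "zone_3"),
    ("patent_X", "cites", "patent_Y"),
]

iav_facts = [
    ("reactor", "radiation_level", "0.12 mSv/h"),
    ("patent_X", "filing_year", "2015"),
]

def attributes_of(item, facts):
    """Collect the attribute/value (or predicate/object) pairs for one item."""
    return {attr: value for it, attr, value in facts if it == item}

print(attributes_of("reactor", iav_facts))  # {'radiation_level': '0.12 mSv/h'}
```

Storing extracted facts in this structured form is what allows weakly structured text resources to be queried and integrated.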
Přinosil, Jiří. "Analýza emocionálních stavů na základě obrazových předloh [Analysis of emotional states based on image templates]." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-233488.
Full text
Elmahmudi, Ali A. M., and Hassan Ugail. "Experiments on deep face recognition using partial faces." 2018. http://hdl.handle.net/10454/16872.
Full text
Face recognition is a subject of great current interest in the area of visual computing. In the past, numerous face recognition and authentication approaches have been proposed, though the great majority of them use full frontal faces both for training machine learning algorithms and for measuring recognition rates. In this paper, we discuss some novel experiments to test the performance of machine learning, especially of deep learning, using partial faces as training and recognition cues. Thus, this study differs sharply from the common approach of using the full face for recognition tasks. In particular, we study the recognition rate for various parts of the face such as the eyes, mouth, nose and forehead. We use a convolutional neural network based architecture along with the pre-trained VGG-Face model to extract features for training. We then use two classifiers, cosine similarity and a linear support vector machine, to test the recognition rates. We ran our experiments on the Brazilian FEI dataset consisting of 200 subjects. Our results show that the cheek has the lowest recognition rate, at 15%, while the top, bottom and right halves and the three-quarter face achieve recognition rates near 100%.
Supported in part by the European Union's Horizon 2020 Programme H2020-MSCA-RISE-2017, under the project PDE-GIR with grant number 778035.
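One of the two classifiers the paper tests, cosine-similarity matching over face feature vectors, can be sketched as a nearest-neighbour search. The gallery labels and three-dimensional vectors below are made-up stand-ins for the 4096-dimensional VGG-Face descriptors used in the paper.

```python
# Nearest-neighbour face matching with cosine similarity over
# (made-up) feature vectors.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match(probe, gallery):
    """Return the gallery label whose feature vector is most similar."""
    return max(gallery, key=lambda label: cosine_similarity(probe, gallery[label]))

gallery = {
    "subject_01": [0.9, 0.1, 0.0],
    "subject_02": [0.1, 0.9, 0.2],
}
probe = [0.8, 0.2, 0.1]  # e.g. features extracted from a partial (half) face

print(match(probe, gallery))  # subject_01
```

In the paper's setting, the probe features would come from a cropped face part (eyes, nose, half face, etc.) passed through the pre-trained network.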
李宗岳. "Dynamic Face Detection via Adaptive Face Features Extraction." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/07139451006948106318.
Full text
Yu, Hui-Min, and 余惠民. "Face Extraction Based on Enhanced." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/57564058629944975144.
Full text義守大學
資訊工程學系
92
The thesis proposes a novel face extraction method for color images that can be applied to build a human face database. It introduces a new edge detection technique; with this edge detection, region growing of the human face is achieved, and a new facial feature detection method is then used to generate face candidates. The proposed method can therefore extract faces against a complex background, even when the background contains colors similar to skin. The technique has three steps: 1) an enhanced edge detection that yields a more complete edge map, which serves as the basis for face extraction; 2) a DCT (Discrete Cosine Transform) approach to detect the skin color distribution in an image; and 3) a method for detecting and extracting the facial features. The complete face extraction procedure is complex, the aim being more accurate extraction results. Moreover, whereas most papers on face extraction first convert color images to gray level, the purpose of this work is to develop a technique for identifying a face directly in a complex color image, which can support applications in different areas.
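The thesis performs its skin detection in the DCT domain; as a simpler, self-contained illustration of the skin-detection step, here is the well-known per-pixel RGB rule often attributed to Peer and Kovač. It is a substitute for, not a reproduction of, the DCT method described above.

```python
# Classic per-pixel RGB skin heuristic (uniform daylight variant):
# a pixel is "skin" if red dominates, the channels are sufficiently
# spread, and all channels exceed minimum brightness thresholds.

def is_skin(r, g, b):
    """Return True if an (r, g, b) pixel looks skin-coloured."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

print(is_skin(220, 170, 140))  # True: a typical skin tone
print(is_skin(60, 120, 200))   # False: sky blue
```

In a pipeline like the one above, the skin mask produced by such a rule would be intersected with the edge map before region growing and feature detection.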