Dissertations / Theses on the topic 'Face verification'
Consult the top 50 dissertations / theses for your research on the topic 'Face verification.'
Romano, Raquel Andrea. "Real-time face verification." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36649.
Includes bibliographical references (p. 57-59).
by Raquel Andrea Romano.
M.S.
Short, J. "Illumination invariance for face verification." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/843404/.
McCool, Christopher Steven. "Hybrid 2D and 3D face verification." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16436/1/Christopher_McCool_Thesis.pdf.
McCool, Christopher Steven. "Hybrid 2D and 3D face verification." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16436/.
Bourlai, Thirimachos. "Designing a smart card face verification system." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/843504/.
Sanderson, Conrad. "Automatic Person Verification Using Speech and Face Information." Griffith University, School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030422.105519.
Sanderson, Conrad. "Automatic Person Verification Using Speech and Face Information." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/367191.
Thesis (PhD Doctorate), Doctor of Philosophy (PhD), School of Microelectronic Engineering.
Jonsson, K. T. "Robust correlation and support vector machines for face identification." Thesis, University of Surrey, 2000. http://epubs.surrey.ac.uk/799/.
Ramos Sanchez, M. Ulises. "Aspects of facial biometrics for verification of personal identity." Thesis, University of Surrey, 2000. http://epubs.surrey.ac.uk/792194/.
Tan, Teewoon. "Human Face Recognition Based on Fractal Image Coding." University of Sydney, Electrical and Information Engineering, 2004. http://hdl.handle.net/2123/586.
Tan, Teewoon. "Human Face Recognition Based on Fractal Image Coding." Thesis, The University of Sydney, 2003. http://hdl.handle.net/2123/586.
Anantharajah, Kaneswaran. "Robust face clustering for real-world data." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/89400/1/Kaneswaran_Anantharajah_Thesis.pdf.
Lopes, Daniel Pedro Ferreira. "Face verification for an access control system in unconstrained environment." Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/23395.
Full textO reconhecimento facial tem vindo a receber bastante atenção ao longo dos últimos anos não só na comunidade cientifica, como também no ramo comercial. Uma das suas várias aplicações e o seu uso num controlo de acessos onde um indivíduo tem uma ou várias fotos associadas a um documento de identificação (também conhecido como verificação de identidade). Embora atualmente o estado da arte apresente muitos estudos em que tanto apresentam novos algoritmos de reconhecimento como melhorias aos já desenvolvidos, existem mesmo assim muitos problemas ligados a ambientes não controlados, a aquisição de imagem e a escolha dos algoritmos de deteção e de reconhecimento mais eficazes. Esta tese aborda um ambiente desafiador para a verificação facial: um cenário não controlado para o acesso a infraestruturas desportivas. Uma vez que não existem condições de iluminação controladas nem plano de fundo controlado, isto torna um cenário complicado para a implementação de um sistema de verificação facial. Esta tese apresenta um estudo sobre os mais importantes algoritmos de detecção e reconhecimento facial assim como técnicas de pré-processamento tais como o alinhamento facial, a igualização de histograma, com o objetivo de melhorar a performance dos mesmos. Também em são apresentados dois métodos para a aquisição de imagens envolvendo a seleção de imagens e calibração da câmara. São apresentados resultados experimentais detalhados baseados em duas bases de dados criadas especificamente para este estudo. No uso de técnicas de pré-processamento apresentadas, foi possível presenciar melhorias até 20% do desempenho dos algoritmos de reconhecimento referentes a verificação de identidade. Com os métodos apresentados para os testes ao ar livre, foram conseguidas melhorias na ordem dos 30%.
Face Recognition has been received great attention over the last years, not only on the research community, but also on the commercial side. One of the many uses of face recognition is its use on access control systems where a person has one or several photos associated to an Identi cation Document (also known as identity veri cation). Although there are many studies nowadays, both presenting new algorithms or just improvements of the already developed ones, there are still many open problems regarding face recognition in uncontrolled environments, from the image acquisition conditions to the choice of the most e ective detection and recognition algorithms, just to name a few. This thesis addresses a challenging environment for face veri cation: an unconstrained environment for sports infrastructures access. As there are no controlled lightning conditions nor controlled background, this makes a di cult scenario to implement a face veri cation system. This thesis presents a study of some of the most important facial detection and recognition algorithms as well as some pre-processing techniques, such as face alignment and histogram equalization, with the aim to improve their performance. It also introduces some methods for a more e cient image acquisition based on image selection and camera calibration, specially designed for addressing this problem. Detailed experimental results are presented based on two new databases created speci cally for this study. Using pre-processing techniques, it was possible to improve the recognition algorithms performances up to 20% regarding veri cation results. With the methods presented for the outdoor tests, performances had improvements up to 30%
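As an illustration of the kind of pre-processing step named in the abstract above, the short sketch below equalizes the histogram of a detected face crop with OpenCV. It is only a sketch under assumptions: the cascade choice, file names and detector parameters are not taken from the thesis.

```python
import cv2

# Hypothetical pre-processing sketch: detect a face with a Haar cascade and
# apply histogram equalization to the crop (one of the steps named above).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("gate_camera_frame.jpg", cv2.IMREAD_GRAYSCALE)  # assumed input frame
for (x, y, w, h) in detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5):
    face = cv2.equalizeHist(img[y:y + h, x:x + w])  # equalized face crop
    cv2.imwrite("face_preprocessed.jpg", face)
```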
Hmani, Mohamed Amine. "Use of Biometrics for the Regeneration of Revocable Crypto-biometric Keys." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS013.
This thesis aims to regenerate crypto-biometric keys (cryptographic keys obtained with biometric data) that are resistant to quantum cryptanalysis methods. The challenge is to obtain keys with high entropy in order to reach a high level of security, knowing that the entropy contained in biometric references limits the entropy of the key. Our choice was to exploit facial biometrics. We first created a state-of-the-art face recognition system based on public frameworks and publicly available data, built on a DNN embedding extractor architecture and the triplet loss function. We participated in two H2020 projects: for the SpeechXRays project, we provided implementations of classical and cancelable face biometrics, and for the H2020 EMPATHIC project, we created a face verification REST API. We also participated in the NIST SRE19 multimedia challenge with the final version of our classical face recognition system. In order to obtain crypto-biometric keys, it is necessary to have binary biometric references. To obtain binary representations directly from face images, we proposed an original method leveraging autoencoders and the previously implemented classical face biometrics. We also exploited the binary representations to create a cancelable face verification system. Regarding our final goal of generating crypto-biometric keys, we focused on symmetric keys. Symmetric encryption is threatened by Grover's algorithm, which reduces the complexity of a brute-force attack on a symmetric key from 2^N to 2^(N/2). To mitigate the risk introduced by quantum computing, we need to increase the size of the keys; for the keys to be resistant to quantum computing, they should have double the length. To this end, we tried to make the binary representation longer and more discriminative. We succeeded in regenerating crypto-biometric keys longer than 400 bits (with low false acceptance and false rejection rates) thanks to the quality of the binary embeddings. The crypto-biometric keys have high entropy and, as they satisfy the length requirement of the PQCrypto project, are resistant to quantum cryptanalysis. The keys are regenerated using a fuzzy commitment scheme leveraging BCH codes.
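To make the key-regeneration idea concrete, here is a minimal fuzzy-commitment sketch in Python. It is an illustration only: a simple repetition code stands in for the BCH codes used in the thesis, and the key length, bit counts and names are assumptions.

```python
import hashlib
import secrets

# Minimal fuzzy-commitment sketch (illustration only, not the thesis code).
# A repetition code stands in for BCH; the lengths are arbitrary assumptions.
REP = 3  # each key bit repeated 3 times -> corrects 1 flipped bit per group

def rep_encode(bits):
    return [b for b in bits for _ in range(REP)]

def rep_decode(bits):
    return [int(sum(bits[i:i + REP]) > REP // 2) for i in range(0, len(bits), REP)]

def commit(key_bits, biometric_bits):
    codeword = rep_encode(key_bits)
    assert len(codeword) == len(biometric_bits)
    helper = [c ^ b for c, b in zip(codeword, biometric_bits)]  # public helper data
    digest = hashlib.sha256(bytes(key_bits)).hexdigest()        # commitment to the key
    return helper, digest

def regenerate(helper, digest, fresh_biometric_bits):
    noisy_codeword = [h ^ b for h, b in zip(helper, fresh_biometric_bits)]
    key_bits = rep_decode(noisy_codeword)
    return key_bits if hashlib.sha256(bytes(key_bits)).hexdigest() == digest else None

if __name__ == "__main__":
    key = [secrets.randbelow(2) for _ in range(16)]
    enrol = [secrets.randbelow(2) for _ in range(16 * REP)]  # binary face embedding (enrolment)
    probe = list(enrol); probe[5] ^= 1                       # probe with one flipped bit
    helper, digest = commit(key, enrol)
    print(regenerate(helper, digest, probe) == key)          # True: key regenerated
```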
Chen, Lihui. "Towards an efficient, unsupervised and automatic face detection system for unconstrained environments." Thesis, Loughborough University, 2006. https://dspace.lboro.ac.uk/2134/8132.
Cook, James Allen. "A decompositional investigation of 3D face recognition." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16653/1/James_Allen_Cook_Thesis.pdf.
Cook, James Allen. "A decompositional investigation of 3D face recognition." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16653/.
Ali, Arslan. "Deep learning techniques for biometric authentication and robust classification." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2910084.
Santos, Alexandre Alberto Werlang dos. "Avaliação de empresas com foco na apuração dos haveres do sócio retirante, em face da jurisprudência dos tribunais pátrios: uma abordagem multidisciplinar." Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/79101.
This study aims to demonstrate the company valuation model adopted by the Brazilian judiciary for the purpose of calculating the assets of the withdrawing partner, a model that reflects the prevailing understanding in Brazilian case law. The withdrawing or dissenting partner is one who leaves the company by choice, by exclusion by the other partners, through death, through the partner's bankruptcy, or as a result of judicial attachment of the partner's shares. Valuing a company is a difficult task, since companies represent a set of assets and liabilities, and there are numerous intangible assets and liabilities that are difficult to measure. Brazilian law provides that the assets of the withdrawing partner are calculated by a special balance sheet drawn up for this purpose, called the balance of determination. The balance of determination is equivalent to a balance sheet, along the lines of traditional accounting, drawn up on the date of the dissolution of the company with respect to the withdrawing partner. According to the case law of the national courts, the balance of determination must include intangible assets and liabilities; intangible assets are included as goodwill. There are various company valuation models that can be applied, notably those presented by economics, accounting and finance, and the model based on the discounted cash flow method is the one most used by business appraisers. Current law allows the partners to agree in the articles of association on any valuation criteria for the purpose of ascertaining the withdrawing partner's assets. By identifying the model adopted by the Brazilian judiciary, this study may help resolve corporate disputes and thus contribute to the judiciary by reducing the number of lawsuits that burden Brazilian society.
Luken, Jackson. "QED: A Fact Verification and Evidence Support System." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555074124008897.
O'Cull, Douglas C. "Telemetry Simulator Provides Pre-Mission Verification of Telemetry Receive System." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/608548.
Full textWith the increased concerns for reducing cost and improving reliability in today's telemetry systems, many users are employing simulation and automation to guarantee reliable telemetry systems operation. Pre-Mission simulation of the telemetry system will reduce the cost associated with a loss of mission data. In order to guarantee the integrity of the receive system, the user must be able to simulate several conditions of the transmitted signal. These include Doppler shift and dynamic fade simulation. Additionally, the simulator should be capable of transmitting industry standard PCM data streams to allow pre-mission bit error rate testing of the receive system. Furthermore, the simulator should provide sufficient output power to allow use as a boresite transmitter to check all aspects of the receive link. Finally, the simulator must be able to operate at several frequency bands and modulation modes to keep cost to a minimum.
Saeed, Mohammed. "Employing Transformers and Humans for Textual-Claim Verification." Electronic Thesis or Diss., Sorbonne université, 2022. https://theses.hal.science/tel-03922010.
Throughout the last years, there has been a surge in false news spreading among the public. Despite efforts made to alleviate "fake news", many hurdles remain when trying to build automated fact-checking systems, including the four we discuss in this thesis. First, it is not clear how to bridge the gap between the input textual claims to be verified and the structured data to be used for claim verification. We take a step in this direction by introducing Scrutinizer, a data-driven fact-checking system that translates textual claims to SQL queries with the aid of a human-machine interaction component. Second, we enhance the reasoning capabilities of pre-trained language models (PLMs) by introducing RuleBert, a PLM fine-tuned on data coming from logical rules. Third, PLMs store vast amounts of information, a key resource in fact-checking applications, yet it is not clear how to access them efficiently. Several works try to address this limitation by searching for optimal prompts or relying on external data, but they do not put emphasis on the expected type of the output. For this, we propose Type Embeddings (TEs), additional input embeddings that encode the desired output type when querying PLMs. We discuss how to compute a TE and provide several methods for analysis. We then show a boost in performance on the LAMA dataset and promising results for text detoxification. Finally, we analyze the BirdWatch program, a community-driven approach to fact-checking tweets. All in all, the work in this thesis aims at a better understanding of how machines and humans could aid in reinforcing and scaling manual fact-checking.
Svensson, Linus. "Checkpoint: A case study of a verification project during the 2019 Indian election." Thesis, Södertörns högskola, Journalistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-41826.
Bigot, Laurent. "L’essor du fact-checking : de l’émergence d’un genre journalistique au questionnement sur les pratiques professionnelles." Thesis, Paris 2, 2017. http://www.theses.fr/2017PA020076/document.
A growing number of newsrooms around the world have established fact-checking sections or rubrics dedicated to assessing the veracity of claims, especially those made by politicians. This practice revisits an older form of fact-checking, born in the United States in the 1920s and based on exhaustive and systematic checking of magazine content before publishing. The 'modern' version of fact-checking embodies both the willingness of online newsrooms to produce verified content, despite the structural and economic crisis of the press, and their ability to capitalize on digital tools that enhance access to information. Through some thirty semi-structured interviews with French fact-checkers and the study of a sample of 300 articles and chronicles from seven media outlets, this PhD thesis examines the extent to which fact-checking, as a journalistic genre, certainly valorizes a credible method but also, indirectly, reveals shortcomings in professional practices. Finally, it discusses how the promotion of more qualitative content, as well as media literacy, could place fact-checking at the heart of editorial strategies aimed at regaining the audience's trust.
Ha, Wonsook. "Non-isothermal fate and transport of drip-applied fumigants in plastic-mulched soil beds: model development and verification." [Gainesville, Fla.]: University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0012921.
Guillaumin, Matthieu. "Données multimodales pour l'analyse d'image." PhD thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM048.
This dissertation delves into the use of textual metadata for image understanding. We seek to exploit this additional textual information as weak supervision to improve the learning of recognition models. There is a recent and growing interest in methods that exploit such data because they can potentially alleviate the need for manual annotation, which is a costly and time-consuming process. We focus on two types of visual data with associated textual information. First, we exploit news images that come with descriptive captions to address several face-related tasks, including face verification, which is the task of deciding whether two images depict the same individual, and face naming, the problem of associating faces in a data set to their correct names. Second, we consider data consisting of images with user tags. We explore models for automatically predicting tags for new images, i.e., image auto-annotation, which can also be used for keyword-based image search. We also study a multimodal semi-supervised learning scenario for image categorisation, in which the tags are assumed to be present in both labelled and unlabelled training data, while they are absent from the test data. Our work builds on the observation that most of these tasks can be solved if perfectly adequate similarity measures are used. We therefore introduce novel approaches that involve metric learning, nearest neighbour models and graph-based methods to learn, from the visual and textual data, task-specific similarities. For faces, our similarities focus on the identities of the individuals while, for images, they address more general semantic visual concepts. Experimentally, our approaches achieve state-of-the-art results on several standard and challenging data sets. On both types of data, we clearly show that learning with additional textual information improves the performance of visual recognition systems.
Guillaumin, Matthieu. "Données multimodales pour l'analyse d'image." PhD thesis, Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00522278/en/.
Pochettino, Teresa. "La valutazione energetico-ambientale dell’ospedale per acuti in fase d’uso. Criteri, indicatori, metodologie di verifica. Energetic and environmental operational hospital buildings assessment. Criteria, indicators and verification methods." Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2497148.
Scheepers, Jill. "Analysis of cryptocurrency verification challenges faced by the South African Revenue Service and tax authorities in other BRICS countries and whether SARS’ powers to gather information relating to cryptocurrency transactions are on par with those of other BRICS countries." Master's thesis, Faculty of Commerce, 2019. http://hdl.handle.net/11427/31231.
Falade, Joannes Chiderlos. "Identification rapide d'empreintes digitales, robuste à la dissimulation d'identité." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMC231.
Biometrics are increasingly used for identification purposes due to the close relationship between the person and their identifier (such as a fingerprint). This thesis focuses on the issue of identifying individuals from their fingerprints. The fingerprint is a biometric trait widely used for its efficiency, simplicity and low cost of acquisition. Fingerprint comparison algorithms are mature, and it is possible to obtain in less than 500 ms a similarity score between a reference template (enrolled on an electronic passport or in a database) and an acquired template. However, it becomes very important to check the identity of an individual against an entire population in a very short time (a few seconds). This is a challenging issue due to the size of the biometric database (containing a set of individuals on the order of a country's population). The first part of this thesis therefore concerns the identification of individuals using fingerprints, with N on the scale of a million, representing the population of a country, for example. We use classification and indexing methods to structure the biometric database and speed up the identification process. We implemented four identification methods selected from the state of the art, proposed a comparative study and improvements on these methods, and also proposed a new fingerprint indexing solution for the identification task that improves on existing results. A second aspect of this thesis concerns security. A person may want to conceal their identity and therefore do everything possible to defeat identification. With this in mind, an individual may provide a poor-quality fingerprint (partial fingerprint, low contrast by lightly pressing the sensor, etc.) or an altered fingerprint (impression intentionally damaged, removal of the impression with acid, scarification, etc.). The second part of this thesis therefore aims to detect dead fingers and fake fingers (silicone, 3D fingerprints, latent fingerprints) used by malicious people to attack the system. In general, these methods use machine learning and deep learning techniques. We propose a new presentation attack detection solution based on statistical descriptors of the fingerprint, and we also build three presentation attack detection workflows for fake fingerprints using deep learning; two of these deep solutions come from the state of the art, while the third is an improvement that we propose. Our solutions are tested on the LivDet competition databases for presentation attack detection.
Hung, Wen Hsuan (洪文軒). "Face Verification from a Face Motion Video Clip." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/97899687858591408454.
Full text國立交通大學
多媒體工程研究所
97
The system proposed in this thesis uses a face motion video clip to perform face verification. The system consists of three parts. The first part separates the background from the face by using the frame difference technique and skin color information; the skin color model is constructed automatically from the training input video clip. The second part extracts the facial feature points based on the AAM shape and appearance models built from the training image set; this method is robust to intensity changes and geometric image variations. The final part verifies the identity of the face: it reconstructs the 3D model without camera calibration and verifies the face identity by registering the facial feature points to the gallery face image through the 2D image projection of the constructed 3D face model.
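A rough sketch of the first stage described above (frame differencing combined with a skin-colour mask to isolate the moving face) is given below. The YCrCb thresholds, motion threshold and video file name are assumptions, not the values used in the thesis.

```python
import cv2
import numpy as np

# Sketch only: combine a frame-difference mask with a YCrCb skin-colour mask.
def face_foreground_mask(prev_frame, frame,
                         cr_range=(133, 173), cb_range=(77, 127), motion_thr=20):
    motion = cv2.absdiff(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY),
                         cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    motion_mask = (motion > motion_thr).astype(np.uint8)
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, cr_range[0], cb_range[0]),
                                   (255, cr_range[1], cb_range[1])) // 255
    return motion_mask & skin_mask  # 1 where the moving, skin-coloured face is

cap = cv2.VideoCapture("face_clip.mp4")  # hypothetical input clip
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    mask = face_foreground_mask(prev, frame)
    prev = frame
```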
"Face verification in the wild." 2015. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291313.
Thesis (M.Phil.), Chinese University of Hong Kong, 2015.
Includes bibliographical references (leaves 86-98).
Abstracts also in Chinese.
Title from PDF title page (viewed on 19 September 2016).
Duan, Chih-Hsueh (段志學). "Face Verification with Local Sparse Representation." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/63144328131569334187.
Huang, Chun-Min (黃俊閔). "Face Verification Using Eigen Correlation Filter." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/79692841753957922440.
Full text國立暨南國際大學
電機工程學系
100
Face verification is a branch of face recognition that can be used as a pre-processing step or combined with other identification methods to improve recognition results. In addition, face recognition technology is used not only for identity verification but also in images and related multimedia applications. Face verification from face images has many applications and is thus an important research topic. In this thesis, a one-dimensional correlation filter based class-dependence feature analysis (1D-CFA) method is presented for face verification. Compared with the original CFA, which works in the two-dimensional (2D) image space, 1D-CFA encodes the image data as vectors. In 1D-CFA, a new correlation filter called the optimal trade-off filter (OTF), designed in the low-dimensional kernel principal component analysis (KPCA) subspace, is proposed for effective feature extraction. We also discuss a new correlation filter module called the eigen filter, which is designed in the KPCA subspace. The system structure can be divided into three parts: (1) a preprocessing module, (2) a training module and (3) a test module. The experimental results show that the best performance of 88.2% is achieved with kernel principal component analysis (KPCA) and the optimal trade-off filter (OTF).
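The following toy sketch gives the flavour of the pipeline outlined above: vectorised face images are projected into a KPCA subspace and a probe is scored against a client template by normalised correlation. The random data, kernel parameters and the plain correlation score (standing in for the OTF/eigen filter design) are all assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Toy sketch: KPCA projection followed by a normalised correlation score.
rng = np.random.default_rng(0)
train = rng.normal(size=(40, 32 * 32))  # 40 vectorised training faces (stand-in data)
kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-3).fit(train)

def correlation_score(template_img, probe_img):
    t = kpca.transform(template_img.reshape(1, -1))[0]
    p = kpca.transform(probe_img.reshape(1, -1))[0]
    t = (t - t.mean()) / (t.std() + 1e-8)
    p = (p - p.mean()) / (p.std() + 1e-8)
    return float(np.dot(t, p) / len(t))  # high score -> accept as same person

print(correlation_score(train[0], train[0] + 0.01 * rng.normal(size=train[0].shape)))
```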
"Deep learning face representation by joint identification-verification." 2015. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291589.
Thesis (Ph.D.), Chinese University of Hong Kong, 2015.
Includes bibliographical references (leaves 100-106).
Abstracts also in Chinese.
Title from PDF title page (viewed on 26 October 2016).
Wu, Pei-Hsun (吳沛勳). "Metric-Learning Face Verification Using Local Binary Pattern." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/49593027304201220818.
Huang, Jyun-We (黃駿偉). "Face Verification System Based on Generative Adversarial Network." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/432z6g.
Rahadian, Fattah Azzuhry (哈帝恩). "Compact and Low-Cost CNN for Face Verification." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/6y847b.
Full text國立中央大學
資訊工程學系
107
In recent years, face verification has been widely used to secure various transactions on the internet. The current state of the art in face verification is the convolutional neural network (CNN). Despite the performance of CNNs, deploying them on mobile and embedded devices is still challenging because the available computational resources on these devices are constrained. In this work, we propose a lightweight CNN for face verification using several methods. First, a modified version of ShuffleNet V2 called ShuffleHalf is used as the backbone network for the FaceNet algorithm. Second, the feature maps in the model are reused using two proposed methods called Reuse Later and Reuse ShuffleBlock. Reuse Later works by reusing potentially unused features by connecting them directly to the fully connected layer. Reuse ShuffleBlock works by reusing the feature maps output by the first 1x1 convolution in the basic building block of ShuffleNet V2 (ShuffleBlock); this reduces the proportion of 1x1 convolutions in the model, because the 1x1 convolution operation is computationally expensive. Third, the kernel size is increased as the number of channels increases, to obtain the same receptive field size with less computational complexity. Fourth, depthwise convolution operations are used to replace some ShuffleBlocks. Fifth, other existing state-of-the-art algorithms are combined with the proposed method to see if they can increase its performance-efficiency tradeoff. Experimental results on five testing datasets show that ShuffleHalf achieves better accuracy than all other baselines with only 48% of the FLOPs of the previous state-of-the-art algorithm, MobileFaceNet. The accuracy of ShuffleHalf is further improved by reusing the features; this method also reduces the computational complexity to only 42% of the FLOPs of MobileFaceNet. Meanwhile, both changing the kernel size and using depthwise repetition can further decrease the computational complexity to only 38% of the FLOPs of MobileFaceNet with better performance than MobileFaceNet. Combination with some existing methods does not increase the accuracy or the performance-efficiency tradeoff of the model. However, adding shortcut connections and using the Swish activation function can improve the accuracy of the model without any noticeable increase in computational complexity.
Lin, Meng-Ying, and 林孟穎. "Face Verification by Exploiting Reconstructive and Discriminative Coupled Subspaces." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/v5m6tq.
Full text淡江大學
資訊工程學系碩士班
104
Face verification has been widely studied due to its importance in surveillance and forensics applications. In practice, gallery images in the database are of high quality, while probe images are usually low-resolution or heavily occluded. In this study, we propose a regression-based approach for face verification in the low-quality scenario. We adopt a principal component analysis (PCA) approach to construct the correlation between pairwise samples, where each sample contains heterogeneous pairwise facial images captured in terms of different modalities or features (e.g., low-resolution vs. high-resolution, or occluded vs. non-occluded facial images). Three common feature spaces are reconstructed from cross-domain pairwise samples, with the goal of eliminating appearance variations and maximizing discrimination between different subjects. The derived subspaces are then used to represent the subjects of interest and achieve satisfactory verification performance. Experiments on a variety of synthesis-based verification tasks under low-resolution and occlusion cases verify the effectiveness of our proposed learning framework.
Chen, Yen-Heng, and 陳衍亨. "Identity Verification by 3-D Information from Face Images." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/44579892499561352143.
Full text國立交通大學
資訊科學系
90
Identity verification is an essential component of a security system. The traditional way to verify a person is to ask him to show some kind of document, e.g., an ID card. Compared to the traditional way, using the human face to verify a person is a more convenient approach. We all know that a human face is a 3-D entity; however, existing face recognition methods analyze face images in two dimensions and discard the 3-D information of the face. The approach proposed in this thesis uses the 3-D information of a face to perform the verification. The 3-D information is represented by a projective invariant called the relative affine structure. If the images are taken of the same person, the relative affine structures between these images remain unchanged. Based on this property, an identity verification system using human face images can be built.
Liu, Hsien-Chang, and 劉憲璋. "Personalized Face Verification System Based on Cluster-Dependent LDA Subspace." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/37515340169234844138.
Full text國立臺灣大學
資訊工程學研究所
91
Recently, person authentication has become more and more important as technology advances. How to build a safe and convenient identity verification system is a hot research topic in academia and industry. In this thesis, we introduce a personalized face verification system based on cluster-dependent LDA subspaces. The training of the system can be divided into three parts: initial training, on-site training, and on-site evaluation. In the initial training, we select some face images from our database as representative face images and cluster them using the K-means clustering method. For on-site training, the client must provide some face images, and we assign the client to the closest cluster. To separate the client from the other representative people in the cluster, we adopt the LDA method to derive the LDA subspace. Finally, we use information from the client and impostors to adjust the threshold. During system operation and online training, the user can manually input a password when the system fails to verify him, so that the system can obtain more training images to retrain the LDA subspace and threshold. We also compare three different matching scores. The experimental results show that our method outperforms the traditional LDA method.
Hung, Chien-Yu, and 洪倩玉. "Dynamic Linear Discriminant Analysis for Online Face Recognition and Verification." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/59833427175508565421.
Full text國立成功大學
資訊工程學系碩博士班
91
Linear Discriminant Analysis (LDA) is a popular linear transformation method for face recognition and verification. Using LDA, we can extract low-dimensional discriminative features for human faces. In face recognition and verification applications, it is usually necessary to enroll new persons and templates in the system, and we often need to remove out-of-date persons or templates from the system model. With the LDA model, this means the within-class and between-class scatter matrices and the transformation matrices must be recomputed, and such recomputation is very time-consuming. To overcome this weakness, a dynamic LDA algorithm is proposed in this thesis. Applying this algorithm, we can not only save a huge amount of computation time but also obtain the updated parameters with relatively small storage of model parameters. Moreover, in the face verification system, we estimate the optimal transformation matrix by combining the theories of LDA and Maximum Likelihood Linear Transformation (MLLT). We also derive that the distribution of the likelihood ratio based on MLLT follows the F distribution. Face verification is then carried out via hypothesis testing using different significance levels of the F distribution. The advantage of the new method is that the verification decision is made according to statistically meaningful significance levels, which is attractive compared to the conventional method using empirical thresholds. In the experiments, we obtain desirable performance using the IIS face database and the CSIE/NCKU car face database. An online dynamic face recognition and verification demo system has been implemented.
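The sketch below illustrates the kind of incremental update a dynamic LDA scheme relies on: when a new person is enrolled, the global mean and between-class scatter are updated from stored per-class means and counts instead of being recomputed from all raw images. The function and variable names are assumptions based on standard LDA, not the thesis algorithm.

```python
import numpy as np

# Hypothetical incremental update of LDA statistics when a new person is enrolled.
def add_class(class_means, class_counts, new_mean, new_count):
    class_means = class_means + [np.asarray(new_mean, dtype=float)]
    class_counts = class_counts + [new_count]
    n = sum(class_counts)
    mu = sum(c * m for c, m in zip(class_counts, class_means)) / n  # updated global mean
    Sb = sum(c * np.outer(m - mu, m - mu)                           # updated between-class scatter
             for c, m in zip(class_counts, class_means))
    return class_means, class_counts, mu, Sb

means, counts = [np.array([0.0, 0.0])], [10]   # one enrolled person so far
means, counts, mu, Sb = add_class(means, counts, new_mean=[1.0, 2.0], new_count=5)
print(mu, Sb, sep="\n")
```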
Deng, Peter Shaohua, and 鄧少華. "Biometric-based Pattern Recognition -- Handwritten Signature Verification and Face Recognition." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/79526579531096548003.
Full text國立中央大學
資訊工程研究所
88
In this dissertation, two biometric-based pattern recognition problems were studied: off-line handwritten signature verification and human face recognition. Biometrics, by definition, is the automated technique of measuring a physical characteristic or personal trait of an individual and comparing that characteristic or trait to a database for the purpose of recognizing or authenticating that individual. Biometrics uses physical characteristics, defined as the things we are, and personal traits, defined as the things we do, including facial thermographs, chemical composition of body odor, retina and iris, fingerprints, hand geometry, skin pores, wrist/hand veins, handwritten signature, keystrokes or typing, and voiceprint. For the first problem, off-line handwritten signature verification, wavelet theory, zero-crossings, dynamic time warping, and nonlinear integer programming form the main body of our methodology. The proposed system can automatically identify useful features which consistently exist within different signatures of the same person and, based on these features, verify whether a signature is a forgery or not. The system starts with a closed-contour tracing algorithm. The curvature data of the traced closed contours are decomposed into multiresolution signals using wavelet transforms, and the zero-crossings corresponding to the curvature data are extracted as features for matching. Moreover, a statistical measurement is devised to decide systematically which closed contours and their associated frequency data of a writer are most stable and discriminating. Based on these data, the optimal threshold value which controls the accuracy of the feature extraction process is calculated. The proposed approach can be applied to both on-line and off-line signature verification systems. The second biometric-based pattern recognition problem we deal with is human face recognition, for which we applied the minimum classification error (MCE) technique proposed by Juang and Katagiri [11]. In this technique, the classical discriminant analysis methodology is blended with the classification rule in a new functional form and is used as the design objective criterion to be optimized by a numerical search algorithm. In our work, the MCE formulation is incorporated into a three-layer neural network classifier called a multilayer perceptron (MLP). Unlike the traditional probabilistic Bayes decision technique, the proposed approach does not need to assume a probability model for each class. Besides, the classifier works well even when the size of the training set is small. Moreover, in both normal and harsh environments, the MCE-based method is superior to the minimum sum-squared error (MSE) based method commonly used in traditional neural network classifiers. Finally, by incorporating a fast face detection algorithm into the system to help extract the face-only image from a complex background, the MCE-based face recognition system is robust to images acquired in harsh environments. Experimental results confirm that our approach outperforms previous approaches.
Liang, Te-Hsiang, and 梁子祥. "Implementation of the Identity Verification Mechanism Based on Face Recognition." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/41188398179101704033.
Full text國立臺灣科技大學
自動化及控制研究所
104
This thesis discusses face recognition with application to identity verification. In order to narrow down the region of an image in which to find the face, Haar-like AdaBoost is used for face detection. The KAZE algorithm is then applied for feature extraction in face recognition. The KAZE feature algorithm was first proposed in 2012; KAZE features are detected and described in a nonlinear scale space built by nonlinear diffusion filtering. In this research, we adopt the newer KAZE algorithm instead of traditional methods such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features). Furthermore, based on this method, we analyze identity verification problems including (a) the similarity between one person's photo and their other photos, (b) the similarity between one person with and without glasses, (c) the similarity between one person and other persons of the same gender, and (d) the similarity between one person and other persons of a different gender. Simulation results indicate that the above similarities can reach (a) 90%, (b) 92%, (c) 60%, and (d) 67%. We also apply the proposed method to (a) a home access control system and (b) an identity verification mechanism. In the simulation of the latter application, we use 200 photos for comparison, obtaining different similarity values to judge whether the identity is correct. Simulation results show that the accuracy can reach more than 90%.
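A compact sketch of the pipeline described above (Haar-cascade detection followed by KAZE descriptor matching) is given below. The file names, ratio-test threshold and scoring rule are assumptions, and the sketch assumes each image contains at least one detectable face.

```python
import cv2

# Sketch only: Haar face detection + KAZE descriptors + ratio-test matching.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
kaze = cv2.KAZE_create()

def face_descriptors(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    x, y, w, h = detector.detectMultiScale(gray, 1.1, 5)[0]  # first detected face
    _, desc = kaze.detectAndCompute(gray[y:y + h, x:x + w], None)
    return desc

def similarity(desc_a, desc_b, ratio=0.75):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good) / max(len(desc_a), 1)  # fraction of good matches

print(similarity(face_descriptors("enrolled.jpg"), face_descriptors("probe.jpg")))
```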
Hsu, Ching-chia, and 許徑嘉. "Face Verification and Lip Reading Systems based on Sparse Representation." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/63219376372299366670.
Full text國立中央大學
資訊工程學系
101
Face verification has many applications, and the critical problem that many researchers are concerned with is how to apply it in the real world. To be robust to orientation, translation and scaling of face images, we extract SIFT features from the face images and use them to build the dictionary for sparse representation. We propose two methods to extend the dictionary, based on K-means and on information theory (the extended dictionary and the incremental dictionary). Experiments show that we can efficiently increase the sparseness of the sparse coefficients and also improve the verification rate and the reconstruction error via the extended dictionary. This work utilizes BCS to solve the optimization problem. Compared to the OMP algorithm, BCS can not only solve the optimization problem but also improve the dictionary via the covariance, which decreases the uncertainty of the observation vectors. Experiments show that the incremental dictionary does increase the residual of the reconstruction error. For lip reading, ASM or AAM features have been used in the past few years; we are concerned that this might lose some useful information, so we consider whole-image information by extracting SIFT features. To train an HMM model with SIFT features, we utilize a bag-of-features (BOF) representation to transform the matrices of SIFT features into vectors. We experiment with the letters A-Z, and the results show that the performance of the proposed method is better than the baseline systems.
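As a small illustration of verification by sparse representation, the sketch below codes a probe feature vector over a dictionary whose columns are gallery features and accepts the claim if reconstruction from the claimed identity's atoms is good. OMP from scikit-learn is used in place of the BCS solver discussed in the abstract, and the synthetic data, dictionary layout and threshold are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
d, per_id, n_ids = 128, 5, 10
dictionary = rng.normal(size=(d, per_id * n_ids))  # columns grouped by identity
labels = np.repeat(np.arange(n_ids), per_id)

def verify(probe, claimed_id, threshold=0.5):
    # Sparse-code the probe over the whole dictionary, then keep only the
    # atoms belonging to the claimed identity and measure the residual.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=per_id, fit_intercept=False)
    omp.fit(dictionary, probe)
    coefs = omp.coef_.copy()
    coefs[labels != claimed_id] = 0.0
    residual = np.linalg.norm(probe - dictionary @ coefs) / np.linalg.norm(probe)
    return residual < threshold

probe = dictionary[:, labels == 3] @ rng.random(per_id)  # synthetic probe from identity 3
print(verify(probe, claimed_id=3), verify(probe, claimed_id=7))  # expect True, False
```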
Lin, Tzu-Hao, and 林子皓. "A Study on Face Verification with Local Appearance-Based Methods." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/9g94rd.
Full text國立清華大學
資訊工程學系所
106
In recent years, as the applications showcased in the mass media progressively become reality, face recognition has been receiving more and more attention from people all over the world. Of the two modes of face recognition, verification is simpler and more suitable than identification in some practical applications such as authentication. To describe the information in human faces more elaborately, we prefer the local appearance-based methods among the various face recognition approaches. In this thesis, we study three local appearance-based methods: GOP-Face (Gradient Orientation Pyramid), LBP-Face (Local Binary Pattern) and DT-CWT-Face (Dual Tree-Complex Wavelet Transform), and try to give a clear overview of these three methods. Furthermore, we use face verification to examine their robustness to variations such as spatial shift, illumination changes and age progression on the ORL, Yale and FERET databases with a k-nearest-neighbor classifier. The results verify that LBP-Face and DT-CWT-Face are indeed more robust to spatial shift, and that DT-CWT-Face is surprisingly robust to age progression, even better than GOP-Face. However, the performance against illumination change is not as good as expected.
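To illustrate the LBP-Face idea named above, the sketch below describes a face by a histogram of local binary patterns and compares two faces by a chi-square histogram distance. The LBP parameters, the single whole-image histogram (rather than a block grid) and the random stand-in image are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Sketch only: LBP histogram description and chi-square comparison.
def lbp_histogram(gray_face, P=8, R=1):
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))  # small value -> likely same person

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)  # stand-in face image
print(chi_square(lbp_histogram(a), lbp_histogram(a)))      # 0.0 for identical images
```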
N, Krishna. "A study of eigenvector based face verification in static images." Thesis, 2007. http://ethesis.nitrkl.ac.in/4371/1/A_Study_of_Eigenvector_Based_Face_Verification.pdf.
Wei, Yu-Chen (魏育誠). "The Research of Identity Verification by Using Face and Hand Features." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/mtpc4u.
Full text崑山科技大學
電機工程研究所
92
With the development of modern technology, the preservation of confidential documents and the management of user identity have become more important. Applications include entrance control systems, financial management, criminal detection, and computer authentication, all of which require a strong identity verification system. Identity verification techniques will therefore play an ever more important role in the information society of the 21st century, and how to construct a safe and convenient identity verification system has become a hot research topic in academia and industry. Most current identity verification studies have focused on a single physical feature, which yields a lower identification rate compared with using multiple physiological features. The main purpose of this study is therefore to improve the identification rate by combining face and hand geometry features, and to develop an identity verification system based on multiple biometric features. In this system, basic image processing techniques including thresholding, edge detection, morphological processing and image projection are used to locate the coordinates of feature points automatically and then to compute the corresponding combined feature vectors. For comparison, since the feature vector contains the characteristic values of each identity, this study uses distance measures including the Euclidean distance and the Hamming distance to compare the degree of similarity between feature vectors and thus achieve verification. In addition to deriving a complete calculation method, this study has also demonstrated the practical effectiveness of the multi-feature identity verification system using both face and palm-shape images.
CHU, Jia Der, and 朱家德. "Application of 1-D Wavelet Transform in Speaker and Face Verification." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/59740085959243294870.
Full text義守大學
電機工程學系
91
This thesis studies two problems in biometrics: speaker and face recognition. First, for speaker recognition, we use the wavelet transform to decompose the speech signal into high- and low-frequency coefficients, and then apply different traditional methods, including PCA, LPCC, fractal features, and WTFT, to extract low- or high-frequency features, combined with a probabilistic neural network classifier to match the voiceprint. The results show that the proposed method improves the recognition rate and efficiency. For face recognition, we obtain a cumulative gray-level curve by projecting the 2D face image horizontally, and use the discrete wavelet transform to extract low-frequency coefficients as features. We carry out a set of experiments for both face identification and face matching application modes. The facial images are sampled from the ORL database. Our experiments reveal that the proposed method possesses excellent recognition performance and efficiency, and it is advantageous for realizing a facial recognition system in a hardware-friendly, resource-constrained embedded environment.
Yi-ChunLee and 李易俊. "A Gabor Feature Based Horizontal and Vertical Discriminant for Face Verification." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/27933947409060647591.
Full text國立成功大學
電腦與通信工程研究所
100
This thesis proposes three different approaches for face recognition. In the first approach, a novel feature extraction method based on the digital curvelet transform is proposed: an original image is convolved with six Gabor filters corresponding to various orientations and scales to give its Gabor representation, and the Gabor representation is then analyzed by the ridgelet transform followed by two-dimensional principal component analysis (2DPCA), which computes the eigenvectors of the ridgelet image covariance matrix. Experiments showed that the correct recognition rate of this method is up to 95.5%. In the second approach, a new method based on two-dimensional locality preserving projections (2DLPP) is proposed to extract Gabor features for face recognition. 2DPCA is first utilized for dimensionality reduction of the Gabor feature space, implemented directly on 2D image matrices; the objective of 2DLPP is to preserve the local structure of the image space by detecting the intrinsic manifold structure. As before, an original image is convolved with Gabor filters corresponding to various orientations and scales to give its Gabor representation. Experiments conducted on the ORL face database show higher recognition performance of the proposed method, with a top recognition rate of 95.5%. In the last approach, a novel discriminant analysis method for Gabor-based image feature extraction and representation is proposed and implemented. Horizontal and vertical two-dimensional principal component analysis (HV-2DPCA) is applied directly to a Gabor face to reduce redundant information while preserving bi-directional characteristics. It is followed by an enhanced Fisher linear discriminant model (EFM) generating a low-dimensional feature representation with enhanced discrimination power. With the most discriminant features, training samples from different classes are made widely separated and samples from the same class are made as compact as possible. This novel algorithm is designated the horizontal and vertical enhanced Gabor Fisher discriminant (HV-EGF). Using various feature dimensions and various numbers of training samples, our experiments indicate that the proposed HV-EGF method provides superior recognition accuracy relative to the Fisher linear discriminant (FLD), the EFM and the Gabor Fisher classifier (GFC) methods, with recognition accuracies of up to 99.0% and 97.7% reached on the ORL and Yale databases, respectively.
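As a minimal illustration of the Gabor representation these approaches start from, the sketch below convolves a face image with a small bank of Gabor kernels at several orientations and stacks the responses into a feature vector (before any 2DPCA/EFM step). The kernel parameters, the number of orientations and the random stand-in image are assumptions.

```python
import cv2
import numpy as np

# Sketch only: build a small Gabor filter bank and compute a raw Gabor feature vector.
def gabor_bank(ksize=21, sigma=4.0, lambd=10.0, gamma=0.5, n_orientations=6):
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
            for theta in np.linspace(0, np.pi, n_orientations, endpoint=False)]

face = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float32)  # stand-in face image
responses = [cv2.filter2D(face, cv2.CV_32F, k) for k in gabor_bank()]
features = np.concatenate([r.ravel() for r in responses])  # Gabor feature vector (before 2DPCA/EFM)
print(features.shape)
```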