To see the other types of publications on this topic, follow the link: Face verification.

Dissertations / Theses on the topic 'Face verification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Face verification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Romano, Raquel Andrea. "Real-time face verification." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36649.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 57-59).
by Raquel Andrea Romano.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
2

Short, J. "Illumination invariance for face verification." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/843404/.

Full text
Abstract:
The task of face verification is made more difficult when the illumination conditions of image capture are not constrained. The differences in illumination conditions between the stored images of the client and the probe image can be lessened by the application of photometric normalisation. Photometric normalisation is the method of pre-processing an image to a representation that is robust to the illumination conditions of image capture. This thesis presents experiments comparing several photometric normalisation methods. The results demonstrate that the anisotropic smoothing pre-processing algorithm of Gross and Brajovic yields the best results of the photometric normalisations tested. The thesis presents an investigation into the behaviour of the anisotropic smoothing method, showing that performance is sensitive to the selection of its parameter. A method of optimising this parameter is suggested and experimental results show that it offers an improvement in verification rates. The variation of illumination across regions of the face is smaller than across the whole face. A novel component-based approach to face verification is presented to take advantage of this fact. The approach consists of carrying out verification on a number of images containing components of the face and fusing the result. As the component images are more robust to illumination, the choice of photometric normalisation is again investigated in the component-based context. The thesis presents the useful result that the simpler normalisations offer the best results when applied to facial component images. Experiments investigating the various methods of fusing the information from the components are presented, as is the issue of score normalisation. Methods of selecting which components are most useful for verification are also tested. The method of pruning the negative components of the linear discriminant analysis weight vector has been applied to the task of selecting the best subset of face components for verification. The pruned linear discriminant analysis method does not perform as well as the well-known sequential floating forward selection method on the well-illuminated XM2VTS database; however, it achieves better generalisation when applied to the more challenging conditions of the XM2VTS dark set.
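The anisotropic smoothing normalisation referred to above belongs to the family of reflectance-estimation pre-processing methods. As a rough illustration of that family only (not the Gross and Brajovic algorithm itself), the sketch below estimates illumination with a plain Gaussian smoother and divides it out; the function name and the sigma value, which plays a role analogous to the sensitive parameter discussed in the abstract, are illustrative assumptions.

```python
# Illustrative sketch only: a generic smoothing-based photometric normalisation.
# The thesis favours the anisotropic (edge-preserving) smoothing of Gross and
# Brajovic; here a plain Gaussian filter stands in for the illumination estimate.
import numpy as np
from scipy.ndimage import gaussian_filter

def normalise_illumination(face: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    """Return an illumination-normalised face image with values roughly in [0, 1]."""
    face = face.astype(np.float64) + 1e-6           # avoid division by zero
    luminance = gaussian_filter(face, sigma=sigma)  # smooth illumination estimate
    reflectance = face / (luminance + 1e-6)         # reflectance-like representation
    reflectance -= reflectance.min()
    return reflectance / (reflectance.max() + 1e-6)
```

In this sketch, larger sigma values smooth the illumination estimate more aggressively, which is the kind of trade-off the abstract's parameter optimisation addresses.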
APA, Harvard, Vancouver, ISO, and other styles
3

McCool, Christopher Steven. "Hybrid 2D and 3D face verification." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16436/1/Christopher_McCool_Thesis.pdf.

Full text
Abstract:
Face verification is a challenging pattern recognition problem. The face is a biometric that we, as humans, know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data. But the underlying structure of the face, or three-dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years, methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance or angular-based similarity measures. However, this does not provide the most complete description of the features being compared, as the distances describe at best the covariance of the data, or the second-order statistics (for instance Mahalanobis-based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable, and so this research examines two methods for overcoming this data limitation: 1. the use of holistic difference vectors of the face, and 2. the division of the 3D face into Free-Parts. The permutations of the holistic difference vectors are formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately, many observations are obtained from each face image; this approach is referred to as the Free-Parts approach. The extra observations from both these techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling respectively. It is shown that the feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques, classifier score fusion is then examined. This thesis also examines methods for performing classifier score fusion. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms (multi-algorithm fusion) to represent the same face data, for instance the 2D face data, or by capturing the face data with different sensors (multi-modal fusion), for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts), and multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused is collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms, a consistent framework for fusion was developed.
The consistent fusion framework, developed from the multi-algorithm and multi-modal experiments, is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
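The multi-modal and hybrid fusion described above combines match scores from separate 2D and 3D classifiers. The sketch below shows only the generic weighted-sum form of such score fusion, assuming already-normalised scores; the weights, threshold and function names are illustrative assumptions rather than the thesis's trained fusion rule.

```python
# Minimal sketch of multi-modal score fusion (weighted sum), assuming the scores
# are already comparable (e.g. z-normalised). The weights and threshold here are
# illustrative, not the values used in the thesis.
def fuse_scores(score_2d: float, score_3d: float,
                w_2d: float = 0.4, w_3d: float = 0.6) -> float:
    """Weighted-sum fusion of a 2D and a 3D verification score."""
    return w_2d * score_2d + w_3d * score_3d

def verify(score_2d: float, score_3d: float, threshold: float = 0.0) -> bool:
    """Accept the identity claim if the fused score exceeds the threshold."""
    return fuse_scores(score_2d, score_3d) > threshold
```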
APA, Harvard, Vancouver, ISO, and other styles
4

McCool, Christopher Steven. "Hybrid 2D and 3D face verification." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16436/.

Full text
Abstract:
Face verification is a challenging pattern recognition problem. The face is a biometric that we, as humans, know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data. But the underlying structure of the face, or three-dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years, methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance or angular-based similarity measures. However, this does not provide the most complete description of the features being compared, as the distances describe at best the covariance of the data, or the second-order statistics (for instance Mahalanobis-based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable, and so this research examines two methods for overcoming this data limitation: 1. the use of holistic difference vectors of the face, and 2. the division of the 3D face into Free-Parts. The permutations of the holistic difference vectors are formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately, many observations are obtained from each face image; this approach is referred to as the Free-Parts approach. The extra observations from both these techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling respectively. It is shown that the feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques, classifier score fusion is then examined. This thesis also examines methods for performing classifier score fusion. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms (multi-algorithm fusion) to represent the same face data, for instance the 2D face data, or by capturing the face data with different sensors (multi-modal fusion), for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts), and multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused is collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms, a consistent framework for fusion was developed.
The consistent fusion framework, developed from the multi-algorithm and multi-modal experiments, is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
APA, Harvard, Vancouver, ISO, and other styles
5

Bourlai, Thirimachos. "Designing a smart card face verification system." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/843504/.

Full text
Abstract:
This thesis describes a face verification system that is smart-card-based. The objectives were to identify the key parameters that affect the design of such a system, to investigate the general optimisation problem and test its robustness when each key parameter is optimised. Some of these parameters have been coarsely investigated in the literature in the context of the general face recognition problem. However, the previous work only partially fulfilled the requirements of a smart-card-based system, in which the severe engineering constraints and limitations imposed by smart cards have to be taken into account in the overall design process. To address these problems on the proposed fully localised architecture of the smart card face verification system (SCFVS), the work starts with the selection of the client-specific linear discriminant analysis (CS-LDA) algorithm, suitable to be ported to the target platform on which the biometric process can run. Then the main functional parts of the system are presented: face image geometric alignment, photometric normalisation, feature extraction, and on-card verification. Each part consists of a series of basic steps, where the role of each step is fixed. However, the algorithm is systematically varied in some steps to investigate the effect on system performance, and system complexity in terms of speed and memory management. Two major problems have been considered. The first problem is the restrictions that both face verification and smart card technology impose, and the second is the extreme complexity of the system, in terms of the number of processing stages and system design parameters. In the simplified search procedure adopted, a number of parameters have been selected out of the complete parameter set involved in a generic SCFVS. This set was recommended by previous mainframe-based studies, and deemed to provide acceptable performance. System optimisation in the context of smart card implementation has been conducted starting from those parameters involved in the pre-processing stage of the system, and then those involved in the remaining stages. A joint optimisation framework of the key parameters can also be adopted, assuming that their effect is independent. Experimental results obtained on a number of publicly available face databases (used to evaluate the system performance) show the significant benefits of this design both in terms of performance and system speed. The different results achieved on different databases indicate that optimum parameters of the system are, to a certain extent, training database dependent.
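The CS-LDA algorithm mentioned above builds, for each enrolled client, a discriminant that separates that client's images from impostor images. The sketch below is a generic two-class Fisher discriminant of that kind, assuming fixed-length feature vectors; the variable names and the regularisation term are illustrative assumptions, not the thesis's exact on-card formulation.

```python
# Rough sketch of a client-specific linear discriminant (two-class Fisher LDA):
# one discriminant direction per enrolled client, separating that client's
# training images from impostor images. Names and regularisation are assumptions.
import numpy as np

def client_specific_lda(client_feats: np.ndarray, impostor_feats: np.ndarray,
                        reg: float = 1e-3) -> np.ndarray:
    """Return the Fisher discriminant direction w for one client."""
    mu_c = client_feats.mean(axis=0)
    mu_i = impostor_feats.mean(axis=0)
    # Pooled within-class scatter, regularised for numerical stability.
    s_w = np.cov(client_feats, rowvar=False) + np.cov(impostor_feats, rowvar=False)
    s_w += reg * np.eye(s_w.shape[0])
    w = np.linalg.solve(s_w, mu_c - mu_i)
    return w / np.linalg.norm(w)

def match_score(w: np.ndarray, mu_c: np.ndarray, probe: np.ndarray) -> float:
    """Higher score means the probe projects closer to the client mean along w."""
    return -abs(float(w @ (probe - mu_c)))
```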
APA, Harvard, Vancouver, ISO, and other styles
6

Sanderson, Conrad. "Automatic Person Verification Using Speech and Face Information." Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030422.105519.

Full text
Abstract:
Identity verification systems are an important part of our every day life. A typical example is the Automatic Teller Machine (ATM) which employs a simple identity verification scheme: the user is asked to enter their secret password after inserting their ATM card; if the password matches the one prescribed to the card, the user is allowed access to their bank account. This scheme suffers from a major drawback: only the validity of the combination of a certain possession (the ATM card) and certain knowledge (the password) is verified. The ATM card can be lost or stolen, and the password can be compromised. Thus new verification methods have emerged, where the password has either been replaced by, or used in addition to, biometrics such as the person’s speech, face image or fingerprints. Apart from the ATM example described above, biometrics can be applied to other areas, such as telephone & internet based banking, airline reservations & check-in, as well as forensic work and law enforcement applications. Biometric systems based on face images and/or speech signals have been shown to be quite effective. However, their performance easily degrades in the presence of a mismatch between training and testing conditions. For speech based systems this is usually in the form of channel distortion and/or ambient noise; for face based systems it can be in the form of a change in the illumination direction. A system which uses more than one biometric at the same time is known as a multi-modal verification system; it is often comprised of several modality experts and a decision stage. Since a multi-modal system uses complimentary discriminative information, lower error rates can be achieved; moreover, such a system can also be more robust, since the contribution of the modality affected by environmental conditions can be decreased. This thesis makes several contributions aimed at increasing the robustness of single- and multi-modal verification systems. Some of the major contributions are listed below. The robustness of a speech based system to ambient noise is increased by using Maximum Auto-Correlation Value (MACV) features, which utilize information from the source part of the speech signal. A new facial feature extraction technique is proposed (termed DCT-mod2), which utilizes polynomial coefficients derived from 2D Discrete Cosine Transform (DCT) coefficients of spatially neighbouring blocks. The DCT-mod2 features are shown to be robust to an illumination direction change as well as being over 80 times quicker to compute than 2D Gabor wavelet derived features. The fragility of Principal Component Analysis (PCA) derived features to an illumination direction change is solved by introducing a pre-processing step utilizing the DCT-mod2 feature extraction. We show that the enhanced PCA technique retains all the positive aspects of traditional PCA (that is, robustness to compression artefacts and white Gaussian noise) while also being robust to the illumination direction change. Several new methods, for use in fusion of speech and face information under noisy conditions, are proposed; these include a weight adjustment procedure, which explicitly measures the quality of the speech signal, and a decision stage comprised of a structurally noise resistant piece-wise linear classifier, which attempts to minimize the effects of noisy conditions via structural constraints on the decision boundary.
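As described above, DCT-mod2 replaces the low-order 2D DCT coefficients of each block with deltas computed from spatially neighbouring blocks. The sketch below follows that description only loosely: the block size, the raster (rather than zig-zag) ordering of coefficients and the number of coefficients replaced by deltas are illustrative assumptions.

```python
# Simplified sketch in the spirit of block-based DCT features with neighbour deltas,
# as DCT-mod2 is described in the abstract. Block size, coefficient ordering and the
# number of delta-replaced coefficients are illustrative assumptions.
import numpy as np
from scipy.fftpack import dct

def block_dct(block: np.ndarray) -> np.ndarray:
    """2D DCT-II of one image block (orthonormal scaling)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_delta_features(img: np.ndarray, n: int = 8, n_coeffs: int = 15) -> np.ndarray:
    """One feature vector per interior block: low-order DCT coefficients, with the
    first three replaced by vertical and horizontal neighbour deltas."""
    h, w = img.shape
    coeffs = {}
    for y in range(0, h - n + 1, n):
        for x in range(0, w - n + 1, n):
            c = block_dct(img[y:y + n, x:x + n].astype(np.float64))
            coeffs[(y, x)] = c.flatten()[:n_coeffs]
    feats = []
    for (y, x), c in coeffs.items():
        neighbours = [coeffs.get(k) for k in ((y - n, x), (y + n, x), (y, x - n), (y, x + n))]
        if any(v is None for v in neighbours):
            continue  # skip blocks on the image border
        up, down, left, right = neighbours
        deltas = np.concatenate([(up - down)[:3], (left - right)[:3]])
        feats.append(np.concatenate([deltas, c[3:]]))
    return np.array(feats)
```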
APA, Harvard, Vancouver, ISO, and other styles
7

Sanderson, Conrad. "Automatic Person Verification Using Speech and Face Information." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/367191.

Full text
Abstract:
Identity verification systems are an important part of our every day life. A typical example is the Automatic Teller Machine (ATM) which employs a simple identity verification scheme: the user is asked to enter their secret password after inserting their ATM card; if the password matches the one prescribed to the card, the user is allowed access to their bank account. This scheme suffers from a major drawback: only the validity of the combination of a certain possession (the ATM card) and certain knowledge (the password) is verified. The ATM card can be lost or stolen, and the password can be compromised. Thus new verification methods have emerged, where the password has either been replaced by, or used in addition to, biometrics such as the person’s speech, face image or fingerprints. Apart from the ATM example described above, biometrics can be applied to other areas, such as telephone & internet based banking, airline reservations & check-in, as well as forensic work and law enforcement applications. Biometric systems based on face images and/or speech signals have been shown to be quite effective. However, their performance easily degrades in the presence of a mismatch between training and testing conditions. For speech based systems this is usually in the form of channel distortion and/or ambient noise; for face based systems it can be in the form of a change in the illumination direction. A system which uses more than one biometric at the same time is known as a multi-modal verification system; it is often comprised of several modality experts and a decision stage. Since a multi-modal system uses complimentary discriminative information, lower error rates can be achieved; moreover, such a system can also be more robust, since the contribution of the modality affected by environmental conditions can be decreased. This thesis makes several contributions aimed at increasing the robustness of single- and multi-modal verification systems. Some of the major contributions are listed below. The robustness of a speech based system to ambient noise is increased by using Maximum Auto-Correlation Value (MACV) features, which utilize information from the source part of the speech signal. A new facial feature extraction technique is proposed (termed DCT-mod2), which utilizes polynomial coefficients derived from 2D Discrete Cosine Transform (DCT) coefficients of spatially neighbouring blocks. The DCT-mod2 features are shown to be robust to an illumination direction change as well as being over 80 times quicker to compute than 2D Gabor wavelet derived features. The fragility of Principal Component Analysis (PCA) derived features to an illumination direction change is solved by introducing a pre-processing step utilizing the DCT-mod2 feature extraction. We show that the enhanced PCA technique retains all the positive aspects of traditional PCA (that is, robustness to compression artefacts and white Gaussian noise) while also being robust to the illumination direction change. Several new methods, for use in fusion of speech and face information under noisy conditions, are proposed; these include a weight adjustment procedure, which explicitly measures the quality of the speech signal, and a decision stage comprised of a structurally noise resistant piece-wise linear classifier, which attempts to minimize the effects of noisy conditions via structural constraints on the decision boundary.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Microelectronic Engineering
Full Text
APA, Harvard, Vancouver, ISO, and other styles
8

Jonsson, K. T. "Robust correlation and support vector machines for face identification." Thesis, University of Surrey, 2000. http://epubs.surrey.ac.uk/799/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ramos, Sanchez M. Ulises. "Aspects of facial biometrics for verification of personal identity." Thesis, University of Surrey, 2000. http://epubs.surrey.ac.uk/792194/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tan, Teewoon. "HUMAN FACE RECOGNITION BASED ON FRACTAL IMAGE CODING." University of Sydney. Electrical and Information Engineering, 2004. http://hdl.handle.net/2123/586.

Full text
Abstract:
Human face recognition is an important area in the field of biometrics. It has been an active area of research for several decades, but still remains a challenging problem because of the complexity of the human face. In this thesis we describe fully automatic solutions that can locate faces and then perform identification and verification. We present a solution for face localisation using eye locations. We derive an efficient representation for the decision hyperplane of linear and nonlinear Support Vector Machines (SVMs). For this we introduce the novel concept of $\rho$ and $\eta$ prototypes. The standard formulation for the decision hyperplane is reformulated and expressed in terms of the two prototypes. Different kernels are treated separately to achieve further classification efficiency and to facilitate its adaptation to operate with the fast Fourier transform to achieve fast eye detection. Using the eye locations, we extract and normalise the face for size and in-plane rotations. Our method produces a more efficient representation of the SVM decision hyperplane than the well-known reduced set methods. As a result, our eye detection subsystem is faster and more accurate. The use of fractals and fractal image coding for object recognition has been proposed and used by others. Fractal codes have been used as features for recognition, but we need to take into account the distance between codes, and to ensure the continuity of the parameters of the code. We use a method based on fractal image coding for recognition, which we call the Fractal Neighbour Distance (FND). The FND relies on the Euclidean metric and the uniqueness of the attractor of a fractal code. An advantage of using the FND over fractal codes as features is that we do not have to worry about the uniqueness of, and distance between, codes. We only require the uniqueness of the attractor, which is already an implied property of a properly generated fractal code. Similar methods to the FND have been proposed by others, but what distinguishes our work from the rest is that we investigate the FND in greater detail and use our findings to improve the recognition rate. Our investigations reveal that the FND has some inherent invariance to translation, scale, rotation and changes to illumination. These invariances are image dependent and are affected by fractal encoding parameters. The parameters that have the greatest effect on recognition accuracy are the contrast scaling factor, luminance shift factor and the type of range block partitioning. The contrast scaling factor affect the convergence and eventual convergence rate of a fractal decoding process. We propose a novel method of controlling the convergence rate by altering the contrast scaling factor in a controlled manner, which has not been possible before. This helped us improve the recognition rate because under certain conditions better results are achievable from using a slower rate of convergence. We also investigate the effects of varying the luminance shift factor, and examine three different types of range block partitioning schemes. They are Quad-tree, HV and uniform partitioning. We performed experiments using various face datasets, and the results show that our method indeed performs better than many accepted methods such as eigenfaces. The experiments also show that the FND based classifier increases the separation between classes. The standard FND is further improved by incorporating the use of localised weights. 
A local search algorithm is introduced to find a best matching local feature using this locally weighted FND. The scores from a set of these locally weighted FND operations are then combined to obtain a global score, which is used as a measure of the similarity between two face images. Each local FND operation possesses the distortion invariant properties described above. Combined with the search procedure, the method has the potential to be invariant to a larger class of non-linear distortions. We also present a set of locally weighted FNDs that concentrate around the upper part of the face encompassing the eyes and nose. This design was motivated by the fact that the region around the eyes has more information for discrimination. Better performance is achieved by using different sets of weights for identification and verification. For facial verification, performance is further improved by using normalised scores and client specific thresholding. In this case, our results are competitive with current state-of-the-art methods, and in some cases outperform all those to which they were compared. For facial identification, under some conditions the weighted FND performs better than the standard FND. However, the weighted FND still has its short comings when some datasets are used, where its performance is not much better than the standard FND. To alleviate this problem we introduce a voting scheme that operates with normalised versions of the weighted FND. Although there are no improvements at lower matching ranks using this method, there are significant improvements for larger matching ranks. Our methods offer advantages over some well-accepted approaches such as eigenfaces, neural networks and those that use statistical learning theory. Some of the advantages are: new faces can be enrolled without re-training involving the whole database; faces can be removed from the database without the need for re-training; there are inherent invariances to face distortions; it is relatively simple to implement; and it is not model-based so there are no model parameters that need to be tweaked.
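The Fractal Neighbour Distance described above compares a probe image against the fractal code of a gallery image, exploiting the uniqueness of that code's attractor. One way to write this idea (an interpretation of the abstract, not a formula quoted from the thesis) is:

```latex
% Sketch of the Fractal Neighbour Distance (FND) idea, as interpreted from the
% abstract: T_g is the fractal transform (code) learned from gallery image I_g,
% applied k times to probe image I_p; the norm is Euclidean.
\mathrm{FND}(I_p, I_g) \;=\; \bigl\lVert T_g^{(k)}(I_p) - I_p \bigr\rVert_2 ,
\qquad k \ge 1 .
```

Since the gallery image is (close to) the unique attractor of its own code, applying the transform changes a probe of the same identity only slightly, so a small distance indicates a match.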
APA, Harvard, Vancouver, ISO, and other styles
11

Tan, Teewoon. "HUMAN FACE RECOGNITION BASED ON FRACTAL IMAGE CODING." Thesis, The University of Sydney, 2003. http://hdl.handle.net/2123/586.

Full text
Abstract:
Human face recognition is an important area in the field of biometrics. It has been an active area of research for several decades, but still remains a challenging problem because of the complexity of the human face. In this thesis we describe fully automatic solutions that can locate faces and then perform identification and verification. We present a solution for face localisation using eye locations. We derive an efficient representation for the decision hyperplane of linear and nonlinear Support Vector Machines (SVMs). For this we introduce the novel concept of $\rho$ and $\eta$ prototypes. The standard formulation for the decision hyperplane is reformulated and expressed in terms of the two prototypes. Different kernels are treated separately to achieve further classification efficiency and to facilitate its adaptation to operate with the fast Fourier transform to achieve fast eye detection. Using the eye locations, we extract and normalise the face for size and in-plane rotations. Our method produces a more efficient representation of the SVM decision hyperplane than the well-known reduced set methods. As a result, our eye detection subsystem is faster and more accurate. The use of fractals and fractal image coding for object recognition has been proposed and used by others. Fractal codes have been used as features for recognition, but we need to take into account the distance between codes, and to ensure the continuity of the parameters of the code. We use a method based on fractal image coding for recognition, which we call the Fractal Neighbour Distance (FND). The FND relies on the Euclidean metric and the uniqueness of the attractor of a fractal code. An advantage of using the FND over fractal codes as features is that we do not have to worry about the uniqueness of, and distance between, codes. We only require the uniqueness of the attractor, which is already an implied property of a properly generated fractal code. Similar methods to the FND have been proposed by others, but what distinguishes our work from the rest is that we investigate the FND in greater detail and use our findings to improve the recognition rate. Our investigations reveal that the FND has some inherent invariance to translation, scale, rotation and changes to illumination. These invariances are image dependent and are affected by fractal encoding parameters. The parameters that have the greatest effect on recognition accuracy are the contrast scaling factor, luminance shift factor and the type of range block partitioning. The contrast scaling factor affect the convergence and eventual convergence rate of a fractal decoding process. We propose a novel method of controlling the convergence rate by altering the contrast scaling factor in a controlled manner, which has not been possible before. This helped us improve the recognition rate because under certain conditions better results are achievable from using a slower rate of convergence. We also investigate the effects of varying the luminance shift factor, and examine three different types of range block partitioning schemes. They are Quad-tree, HV and uniform partitioning. We performed experiments using various face datasets, and the results show that our method indeed performs better than many accepted methods such as eigenfaces. The experiments also show that the FND based classifier increases the separation between classes. The standard FND is further improved by incorporating the use of localised weights. 
A local search algorithm is introduced to find a best matching local feature using this locally weighted FND. The scores from a set of these locally weighted FND operations are then combined to obtain a global score, which is used as a measure of the similarity between two face images. Each local FND operation possesses the distortion invariant properties described above. Combined with the search procedure, the method has the potential to be invariant to a larger class of non-linear distortions. We also present a set of locally weighted FNDs that concentrate around the upper part of the face encompassing the eyes and nose. This design was motivated by the fact that the region around the eyes has more information for discrimination. Better performance is achieved by using different sets of weights for identification and verification. For facial verification, performance is further improved by using normalised scores and client specific thresholding. In this case, our results are competitive with current state-of-the-art methods, and in some cases outperform all those to which they were compared. For facial identification, under some conditions the weighted FND performs better than the standard FND. However, the weighted FND still has its short comings when some datasets are used, where its performance is not much better than the standard FND. To alleviate this problem we introduce a voting scheme that operates with normalised versions of the weighted FND. Although there are no improvements at lower matching ranks using this method, there are significant improvements for larger matching ranks. Our methods offer advantages over some well-accepted approaches such as eigenfaces, neural networks and those that use statistical learning theory. Some of the advantages are: new faces can be enrolled without re-training involving the whole database; faces can be removed from the database without the need for re-training; there are inherent invariances to face distortions; it is relatively simple to implement; and it is not model-based so there are no model parameters that need to be tweaked.
APA, Harvard, Vancouver, ISO, and other styles
12

Anantharajah, Kaneswaran. "Robust face clustering for real-world data." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/89400/1/Kaneswaran_Anantharajah_Thesis.pdf.

Full text
Abstract:
This thesis has investigated how to cluster a large number of faces within a multi-media corpus in the presence of large session variation. Quality metrics are used to select the best faces to represent a sequence of faces; and session variation modelling improves clustering performance in the presence of wide variations across videos. Findings from this thesis contribute to improving the performance of both face verification systems and the fully automated clustering of faces from a large video corpus.
APA, Harvard, Vancouver, ISO, and other styles
13

Lopes, Daniel Pedro Ferreira. "Face verification for an access control system in unconstrained environment." Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/23395.

Full text
Abstract:
Mestrado em Engenharia Eletrónica e Telecomunicações
O reconhecimento facial tem vindo a receber bastante atenção ao longo dos últimos anos não só na comunidade cientifica, como também no ramo comercial. Uma das suas várias aplicações e o seu uso num controlo de acessos onde um indivíduo tem uma ou várias fotos associadas a um documento de identificação (também conhecido como verificação de identidade). Embora atualmente o estado da arte apresente muitos estudos em que tanto apresentam novos algoritmos de reconhecimento como melhorias aos já desenvolvidos, existem mesmo assim muitos problemas ligados a ambientes não controlados, a aquisição de imagem e a escolha dos algoritmos de deteção e de reconhecimento mais eficazes. Esta tese aborda um ambiente desafiador para a verificação facial: um cenário não controlado para o acesso a infraestruturas desportivas. Uma vez que não existem condições de iluminação controladas nem plano de fundo controlado, isto torna um cenário complicado para a implementação de um sistema de verificação facial. Esta tese apresenta um estudo sobre os mais importantes algoritmos de detecção e reconhecimento facial assim como técnicas de pré-processamento tais como o alinhamento facial, a igualização de histograma, com o objetivo de melhorar a performance dos mesmos. Também em são apresentados dois métodos para a aquisição de imagens envolvendo a seleção de imagens e calibração da câmara. São apresentados resultados experimentais detalhados baseados em duas bases de dados criadas especificamente para este estudo. No uso de técnicas de pré-processamento apresentadas, foi possível presenciar melhorias até 20% do desempenho dos algoritmos de reconhecimento referentes a verificação de identidade. Com os métodos apresentados para os testes ao ar livre, foram conseguidas melhorias na ordem dos 30%.
Face Recognition has received great attention over the last years, not only in the research community, but also on the commercial side. One of the many uses of face recognition is its use in access control systems where a person has one or several photos associated with an Identification Document (also known as identity verification). Although there are many studies nowadays, both presenting new algorithms and improvements of the already developed ones, there are still many open problems regarding face recognition in uncontrolled environments, from the image acquisition conditions to the choice of the most effective detection and recognition algorithms, just to name a few. This thesis addresses a challenging environment for face verification: an unconstrained environment for sports infrastructures access. As there are no controlled lighting conditions nor a controlled background, this makes it a difficult scenario in which to implement a face verification system. This thesis presents a study of some of the most important facial detection and recognition algorithms as well as some pre-processing techniques, such as face alignment and histogram equalization, with the aim of improving their performance. It also introduces some methods for more efficient image acquisition based on image selection and camera calibration, specially designed for addressing this problem. Detailed experimental results are presented based on two new databases created specifically for this study. Using pre-processing techniques, it was possible to improve the recognition algorithms' verification performance by up to 20%. With the methods presented for the outdoor tests, performance improved by up to 30%.
APA, Harvard, Vancouver, ISO, and other styles
14

Hmani, Mohamed Amine. "Use of Biometrics for the Regeneration of Revocable Crypto-biometric Keys." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS013.

Full text
Abstract:
Ce travail de thèse vise à régénérer des clés crypto-biométriques (clés cryptographiques obtenues avec des données biométriques) résistantes aux méthodes de cryptanalyse quantique. Le défi est d'obtenir des clés avec une haute entropie pour avoir un haut niveau de sécurité, sachant que l'entropie contenue dans les références biométriques limite l'entropie de la clé. Notre choix a été d'exploiter la biométrie faciale.Nous avons d'abord créé un système de reconnaissance faciale de pointe basé en utilisant des bases de données publiques. Notre architecture utilise des réseaux de neurones profonds avec une fonction de perte‘Triplet loss'. Nous avons participé à deux Projets européens H2020 pour lesquelles nous avons fournit des adapations de notres systeme de reconnaise de visage. Nous avons également participé au challenge multimédia NIST SRE19 avec la version finale de notre système classique de reconnaissance faciale qui a donnée d'excellents résultats.Pour obtenir des clés crypto-biométriques, il est nécessaire de disposer de références biométriques binaires. Pour obtenir les représentations binaires directement à partir d'images de visage, nous avons proposé une méthode novatrice tirant parti des auto-encodeurs et la biométrie faciale classique précédemment mise en œuvre. Nous avons également exploité les représentations binaires pour créer un système de vérification de visage cancelable.Concernant notre objectif final, générer des clés crypto-biométriques, nous nous sommes concentrés sur les clés symétriques. Le chiffrement symétrique est menacé par l'algorithme Groover parce qu'il réduit la complexité d'une attaque par force brute de 2(N/2).. Pour atténuer le risque introduit par l'informatique quantique, nous devons augmenter la taille des clés. Pour cela, nous avons essayé de faire la représentation binaire plus longue et plus discriminante.Nous avons réussi à régénérer des clés crypto-biométriques de plus de 400 bits grâce à la qualité des plongements binaires. Les clés crypto-biométriques ont une haute entropie et résistent à la cryptanalyse quantique selon le PQCrypto projet car ils satisfont à l'exigence de longueur. Les clés sont régénérées à l'aide d'un schéma de "fuzzy commitment" en utilisant les codes BCH
This thesis aims to regenerate crypto-biometric keys (cryptographic keys obtained with biometric data) that are resistant to quantum cryptanalysis methods. The challenge is to obtain keys with high entropy to have a high level of security, knowing that the entropy contained in biometric references limits the entropy of the key. Our choice was to exploit facial biometrics. We first created a state-of-the-art face recognition system based on public frameworks and publicly available data, using a DNN embedding extractor architecture and a triplet loss function. We participated in two H2020 projects. For the SpeechXRays project, we provided implementations of classical and cancelable face biometrics. For the H2020 EMPATHIC project, we created a face verification REST API. We also participated in the NIST SRE19 multimedia challenge with the final version of our classical face recognition system. In order to obtain crypto-biometric keys, it is necessary to have binary biometric references. To obtain the binary representations directly from face images, we proposed an original method, leveraging autoencoders and the previously implemented classical face biometrics. We also exploited the binary representations to create a cancelable face verification system. Regarding our final goal, to generate crypto-biometric keys, we focused on symmetric keys. Symmetric encryption is threatened by Grover's algorithm because it reduces the complexity of a brute-force attack on a symmetric key from 2^N to 2^(N/2). To mitigate the risk introduced by quantum computing, we need to increase the size of the keys. To this end, we tried to make the binary representation longer and more discriminative. For the keys to be resistant to quantum computing, they should have double the length. We succeeded in regenerating crypto-biometric keys longer than 400 bits (with low false acceptance and false rejection rates) thanks to the quality of the binary embeddings. The crypto-biometric keys have high entropy and are resistant to quantum cryptanalysis, according to the PQCrypto project, as they satisfy the length requirement. The keys are regenerated using a fuzzy commitment scheme leveraging BCH codes.
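The fuzzy commitment scheme mentioned above binds an error-correcting codeword of the key to the binary biometric reference, so that a sufficiently close fresh sample allows the key to be regenerated. The sketch below shows that commit/regenerate structure with a toy repetition code standing in for the BCH codes used in the thesis; all names and sizes are illustrative assumptions.

```python
# Illustrative fuzzy-commitment sketch. The thesis uses BCH codes; a 3x repetition
# code stands in here so the example stays self-contained. Not the thesis's scheme.
import hashlib
import numpy as np

REP = 3  # each key bit is repeated REP times (toy error-correcting code)

def commit(key_bits: np.ndarray, bio_bits: np.ndarray):
    """Return (hash of key, helper data); neither reveals the key on its own."""
    codeword = np.repeat(key_bits, REP)             # ECC-encode the key
    helper = codeword ^ bio_bits                    # bind codeword to biometric bits
    digest = hashlib.sha256(key_bits.tobytes()).hexdigest()
    return digest, helper

def regenerate(helper: np.ndarray, fresh_bio_bits: np.ndarray) -> np.ndarray:
    """Recover the key from a fresh (noisy) biometric sample via majority decoding."""
    noisy_codeword = helper ^ fresh_bio_bits
    votes = noisy_codeword.reshape(-1, REP).sum(axis=1)
    return (votes > REP // 2).astype(np.uint8)

# Usage sketch: the key survives a flipped bit in the fresh biometric sample.
rng = np.random.default_rng(0)
key = rng.integers(0, 2, 16, dtype=np.uint8)
bio = rng.integers(0, 2, 16 * REP, dtype=np.uint8)
digest, helper = commit(key, bio)
noisy = bio.copy()
noisy[5] ^= 1  # one bit of the fresh sample differs from enrolment
assert hashlib.sha256(regenerate(helper, noisy).tobytes()).hexdigest() == digest
```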
APA, Harvard, Vancouver, ISO, and other styles
15

Chen, Lihui. "Towards an efficient, unsupervised and automatic face detection system for unconstrained environments." Thesis, Loughborough University, 2006. https://dspace.lboro.ac.uk/2134/8132.

Full text
Abstract:
Nowadays, there is growing interest in face detection applications for unconstrained environments. The increasing need for public security and national security motivated our research on the automatic face detection system. For public security surveillance applications, the face detection system must be able to cope with unconstrained environments, which include cluttered backgrounds and complicated illuminations. Supervised approaches give very good results in constrained environments, but when it comes to unconstrained environments, even obtaining all the training samples needed is sometimes impractical. The limitation of supervised approaches impels us to turn to unsupervised approaches. In this thesis, we present an efficient and unsupervised face detection system, which is feature- and configuration-based. It combines geometric feature detection and local appearance feature extraction to increase the stability and performance of the detection process. It also contains a novel adaptive lighting compensation approach to normalize the complicated illumination in real-life environments. We aim to develop a system that has as few assumptions as possible from the very beginning, is robust, and exploits accuracy/complexity trade-offs as much as possible. Although our attempt is ambitious for such an ill-posed problem, we manage to tackle it in the end with very few assumptions.
APA, Harvard, Vancouver, ISO, and other styles
16

Cook, James Allen. "A decompositional investigation of 3D face recognition." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16653/1/James_Allen_Cook_Thesis.pdf.

Full text
Abstract:
Automated Face Recognition is the process of determining a subject's identity from digital imagery of their face without user intervention. The term in fact encompasses two distinct tasks: Face Verification is the process of verifying a subject's claimed identity, while Face Identification involves selecting the most likely identity from a database of subjects. This dissertation focuses on the task of Face Verification, which has a myriad of applications in security ranging from border control to personal banking. Recently the use of 3D facial imagery has found favour in the research community due to its inherent robustness to the pose and illumination variations which plague the 2D modality. The field of 3D face recognition is, however, yet to fully mature and there remain many unanswered research questions particular to the modality. The relative expense and specialty of 3D acquisition devices also means that the availability of databases of 3D face imagery lags significantly behind that of standard 2D face images. Human recognition of faces is rooted in an inherently 2D visual system and much is known regarding the use of 2D image information in the recognition of individuals. The corresponding knowledge of how discriminative information is distributed in the 3D modality is much less well defined. This dissertation addresses these issues through the use of decompositional techniques. Decomposition alleviates the problems associated with dimensionality explosion and the Small Sample Size (SSS) problem, and spatial decomposition is a technique which has been widely used in face recognition. The application of decomposition in the frequency domain, however, has not received the same attention in the literature. The use of decomposition techniques allows a mapping of the regions (both spatial and frequency) which contain the discriminative information that enables recognition. In this dissertation these techniques are covered in significant detail, both in terms of practical issues in the respective domains and in terms of the underlying distributions which they expose. Significant discussion is given to the manner in which the inherent information of the human face is manifested in the 2D and 3D domains and how these two modalities inter-relate. This investigation is extended to also cover the manner in which the decomposition techniques presented can be recombined into a single decision. Two new methods for learning the weighting functions for both the sum and product rules are presented, and extensive testing against established methods is reported. Knowledge acquired from these examinations is then used to create a combined technique termed Log-Gabor Templates. The proposed technique utilises both the spatial and frequency domains to extract superior performance to either in isolation. Experimentation demonstrates that the spatial and frequency domain decompositions are complementary and can be combined to give improved performance and robustness.
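The Log-Gabor Templates technique mentioned above builds on frequency-domain decompositions obtained with log-Gabor filters. The sketch below shows only the standard radial log-Gabor transfer function commonly used for such decompositions; the centre frequency and bandwidth ratio are illustrative defaults, not the thesis's settings.

```python
# Sketch of a standard radial log-Gabor transfer function (frequency domain), of the
# kind used to build log-Gabor feature decompositions; parameter values are
# illustrative defaults, not those used in the thesis.
import numpy as np

def log_gabor_radial(shape, f0: float = 0.1, sigma_ratio: float = 0.55) -> np.ndarray:
    """Radial log-Gabor filter G(f) = exp(-(ln(f/f0))^2 / (2 ln(sigma_ratio)^2))."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    g = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0                           # log-Gabor filters have no DC response
    return g

# Usage sketch: filter a face image in the frequency domain.
# response = np.fft.ifft2(np.fft.fft2(face) * log_gabor_radial(face.shape))
```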
APA, Harvard, Vancouver, ISO, and other styles
17

Cook, James Allen. "A decompositional investigation of 3D face recognition." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16653/.

Full text
Abstract:
Automated Face Recognition is the process of determining a subject's identity from digital imagery of their face without user intervention. The term in fact encompasses two distinct tasks: Face Verification is the process of verifying a subject's claimed identity, while Face Identification involves selecting the most likely identity from a database of subjects. This dissertation focuses on the task of Face Verification, which has a myriad of applications in security ranging from border control to personal banking. Recently the use of 3D facial imagery has found favour in the research community due to its inherent robustness to the pose and illumination variations which plague the 2D modality. The field of 3D face recognition is, however, yet to fully mature and there remain many unanswered research questions particular to the modality. The relative expense and specialty of 3D acquisition devices also means that the availability of databases of 3D face imagery lags significantly behind that of standard 2D face images. Human recognition of faces is rooted in an inherently 2D visual system and much is known regarding the use of 2D image information in the recognition of individuals. The corresponding knowledge of how discriminative information is distributed in the 3D modality is much less well defined. This dissertation addresses these issues through the use of decompositional techniques. Decomposition alleviates the problems associated with dimensionality explosion and the Small Sample Size (SSS) problem, and spatial decomposition is a technique which has been widely used in face recognition. The application of decomposition in the frequency domain, however, has not received the same attention in the literature. The use of decomposition techniques allows a mapping of the regions (both spatial and frequency) which contain the discriminative information that enables recognition. In this dissertation these techniques are covered in significant detail, both in terms of practical issues in the respective domains and in terms of the underlying distributions which they expose. Significant discussion is given to the manner in which the inherent information of the human face is manifested in the 2D and 3D domains and how these two modalities inter-relate. This investigation is extended to also cover the manner in which the decomposition techniques presented can be recombined into a single decision. Two new methods for learning the weighting functions for both the sum and product rules are presented, and extensive testing against established methods is reported. Knowledge acquired from these examinations is then used to create a combined technique termed Log-Gabor Templates. The proposed technique utilises both the spatial and frequency domains to extract superior performance to either in isolation. Experimentation demonstrates that the spatial and frequency domain decompositions are complementary and can be combined to give improved performance and robustness.
APA, Harvard, Vancouver, ISO, and other styles
18

ALI, ARSLAN. "Deep learning techniques for biometric authentication and robust classification." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2910084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Santos, Alexandre Alberto Werlang dos. "Avaliação de empresas com foco na apuração dos haveres do sócio retirante, em face da jurisprudência dos tribunais pátrios : uma abordagem multidisciplinar." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/79101.

Full text
Abstract:
Este estudo visa demonstrar o modelo de avaliação de empresa adotado pelo Judiciário Brasileiro, para fins de apuração dos haveres do sócio retirante, modelo este que reflita o entendimento majoritário da jurisprudência Brasileira. O sócio retirante ou dissidente é aquele que se retira da sociedade por vontade própria, por exclusão dos demais sócios, por morte, por falência do sócio, ou em decorrência da penhora judicial das quotas sociais do sócio. Avaliar uma empresa é uma tarefa difícil, eis que as empresas representam um conjunto de ativos e passivos; sendo que existem inúmeros ativos e passivos intangíveis de difícil mensuração. A lei brasileira dispõe que os haveres do sócio retirante serão apurados por um balanço especial para esse fim. Esse balanço é denominado de balanço de determinação. O balanço de determinação equivale a um balanço patrimonial, nos moldes da contabilidade tradicional, que será apurado na data da resolução da sociedade em relação ao sócio retirante. O balanço de determinação equivale a um balanço patrimonial, nos moldes da contabilidade tradicional, que será apurado na data da resolução da sociedade em relação ao sócio retirante. Segundo a jurisprudência dos Tribunais pátrios, o balanço de determinação deverá contemplar os ativos e passivos intangíveis. Os ativos intangíveis estariam contemplados no fundo de comércio, segundo a jurisprudência dos Tribunais. Existem vários modelos de avaliação de empresas a serem aplicados, sobretudo os modelos apresentados pela ciência econômica, contábil e financeira. O modelo de avaliação de empresas baseado no fluxo de caixa descontado é método mais utilizado pelos peritos avaliadores de empresas. A legislação vigente permite que os sócios possam pactuar no contrato social qualquer critério de avaliação de empresa, para fins de apurar os haveres do sócio retirante. Identificando o modelo adotado pelo Judiciário Brasileiro, quiçá o presente estudo poderá na solução dos conflitos societários e assim contribuir com o Poder Judiciário, no sentido de reduzir o número de processos que tanto oneram a sociedade Brasileira.
This study aims to demonstrate the assessment model adopted by the Brazilian judiciary company, for purposes of calculating the assets of the migrant partner, this model that reflects the prevailing understanding of Brazilian law. The migrant or dissident shareholder is one who withdraws from society by choice, by exclusion of the other partners, through death, bankruptcy partner, or as a result of judicial pledge of the shares of the partner. Evaluate a company is a difficult task, behold companies represent a set of assets and liabilities, and there are numerous intangible assets and liabilities are difficult to measure. Brazilian law provides that the assets of the migrant partner will be calculated by a special balance for this purpose. This balance is called balance determination. The balance of determination equals a balance sheet, along the lines of traditional accounting, which will be determined on the date of the resolution of the company in relation to socio retirante. O balance determination equals a balance sheet, along the lines of traditional accounting, which will be determined the date of the resolution of the company in relation to socio retirante. According to the jurisprudence of the courts patriotic, balance determination must include intangible assets and liabilities. Intangible assets were included in goodwill, according to the jurisprudence of the courts. There are various models of business valuation to be applied, especially the models presented by economics, accounting and finance. The evaluation model based companies in the discounted cash flow method is mostly used by business appraisers. Current law allows the partners can collude on any social contract evaluation criteria of business for purposes of ascertaining the assets of the partner retirante. Identifying the model adopted by the Brazilian judiciary, perhaps the present study may in resolving corporate conflicts and thus contribute to the judiciary, in order to reduce the number of processes that encumber both the Brazilian society.
APA, Harvard, Vancouver, ISO, and other styles
20

Luken, Jackson. "QED: A Fact Verification and Evidence Support System." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555074124008897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

O'Cull, Douglas C. "TELEMETRY SIMULATOR PROVIDES PRE-MISSION VERIFICATION OF TELEMETRY RECEIVE SYSTEM." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/608548.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California
With the increased concerns for reducing cost and improving reliability in today's telemetry systems, many users are employing simulation and automation to guarantee reliable telemetry systems operation. Pre-Mission simulation of the telemetry system will reduce the cost associated with a loss of mission data. In order to guarantee the integrity of the receive system, the user must be able to simulate several conditions of the transmitted signal. These include Doppler shift and dynamic fade simulation. Additionally, the simulator should be capable of transmitting industry standard PCM data streams to allow pre-mission bit error rate testing of the receive system. Furthermore, the simulator should provide sufficient output power to allow use as a boresite transmitter to check all aspects of the receive link. Finally, the simulator must be able to operate at several frequency bands and modulation modes to keep cost to a minimum.
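As an illustrative aside, the following minimal Python sketch shows the kind of Doppler-shift and fade conditions the abstract says a simulator must reproduce; the carrier frequency, sample rate and fade profile are made-up values, not parameters from the paper.

import numpy as np

C = 3.0e8  # speed of light, m/s

def doppler_shifted(f_carrier_hz, radial_velocity_mps):
    # classical narrowband approximation; positive velocity = closing range
    return f_carrier_hz * (1.0 + radial_velocity_mps / C)

fs = 1.0e6                                  # hypothetical sample rate (Hz)
t = np.arange(int(1e-3 * fs)) / fs          # 1 ms test window
f0 = 2.25e9                                 # hypothetical S-band carrier (Hz)
offset = doppler_shifted(f0, 2000.0) - f0   # Doppler offset for a 2 km/s closing speed
fade = 10 ** (-20.0 * (t / t[-1]) / 20.0)   # linear fade from 0 dB down to -20 dB
baseband = fade * np.cos(2 * np.pi * offset * t)  # faded, Doppler-offset test tone

A real simulator would of course modulate a PCM stream onto the carrier rather than a bare tone; the sketch only illustrates the two channel impairments named in the abstract.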
APA, Harvard, Vancouver, ISO, and other styles
22

Saeed, Mohammed. "Employing Transformers and Humans for Textual-Claim Verification." Electronic Thesis or Diss., Sorbonne université, 2022. https://theses.hal.science/tel-03922010.

Full text
Abstract:
Throughout the last years, there has been a surge in false news spreading across the public. Despite efforts to alleviate "fake news", many obstacles remain when trying to build automated fact-checking systems, including the four we discuss in this thesis. First, it is not clear how to bridge the gap between input textual claims, which are to be verified, and the structured data that is to be used for claim verification. We take a step in this direction by introducing Scrutinizer, a data-driven fact-checking system that translates textual claims into SQL queries with the aid of a human-machine interaction component. Second, we enhance the reasoning capabilities of pre-trained language models (PLMs) by introducing RuleBert, a PLM fine-tuned on data derived from logical rules. Third, PLMs store vast amounts of information, a key resource for fact-checking applications, yet it is not clear how to access it efficiently. Several works address this limitation by searching for optimal prompts or relying on external data, but they do not emphasize the expected type of the output. For this, we propose Type Embeddings (TEs), additional input embeddings that encode the desired output type when querying PLMs. We discuss how to compute a TE and provide several methods for analysis. We then show a boost in performance on the LAMA dataset and promising results for text detoxification. Finally, we analyze the BirdWatch program, a community-driven approach to fact-checking tweets. All in all, the work in this thesis aims at a better understanding of how machines and humans can help reinforce and scale manual fact-checking.
APA, Harvard, Vancouver, ISO, and other styles
23

Svensson, Linus. "Checkpoint : A case study of a verification project during the 2019 Indian election." Thesis, Södertörns högskola, Journalistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-41826.

Full text
Abstract:
This thesis examines the Checkpoint research project and verification initiative that was introduced to address misinformation in private messaging applications during the 2019 Indian general election. Over two months, throughout the seven phases of the election, a team of analysts verified election related misinformation spread on the closed messaging network WhatsApp. Building on new automated technology, the project introduced a WhatsApp tipline which allowed users of the application to submit content to a team of analysts that verified user-generated content in an unprecedented way. The thesis presents a detailed ethnographic account of the implementation of the verification project. Ethnographic fieldwork has been combined with a series of semi-structured interviews in which analysts are underlining the challenges they faced throughout the project. Among the challenges, this study found that India’s legal framework limited the scope of the project so that the organisers had to change approach from an editorial project to one that was research based. Another problem touched the methodology of verification. Analysts perceived the use of online verification tools as a limiting factor when verifying content, as they experienced a need for more traditional journalistic verification methods. Technology was also a limiting factor. The tipline was quickly flooded with verification requests, the majority of which were unverifiable, and the team had to sort the queries manually. Existing technology such as image match check could be further implemented to deal more efficiently with multiple queries in future projects.
APA, Harvard, Vancouver, ISO, and other styles
24

Bigot, Laurent. "L’essor du fact-checking : de l’émergence d’un genre journalistique au questionnement sur les pratiques professionnelles." Thesis, Paris 2, 2017. http://www.theses.fr/2017PA020076/document.

Full text
Abstract:
A growing number of newsrooms around the world have established fact-checking rubrics or columns dedicated to assessing the veracity of claims, especially those made by politicians. This practice revisits an older form of fact-checking, born in the United States in the 1920s and based on exhaustive, systematic checking of magazine content before publication. The 'modern' version of fact-checking embodies both the willingness of online newsrooms to restore verified content, despite the structural and economic crisis of the press, and their ability to capitalize on digital tools that improve access to information. Through some thirty semi-structured interviews with French fact-checkers and the study of a sample of 300 articles and chronicles from seven media outlets, this PhD thesis examines the extent to which fact-checking, as a journalistic genre, promotes a credible method while also, indirectly, revealing shortcomings in professional practices. Finally, it discusses how the promotion of higher-quality content, as well as media literacy, could place fact-checking at the heart of editorial strategies aimed at regaining the audience's trust.
APA, Harvard, Vancouver, ISO, and other styles
25

Ha, Wonsook. "Non-isothermal fate and transport of drip-applied fumigants in plastic-mulched soil beds model development and verification /." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0012921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Guillaumin, Matthieu. "Données multimodales pour l'analyse d'image." Phd thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM048.

Full text
Abstract:
This dissertation delves into the use of textual metadata for image understanding. We seek to exploit this additional textual information as weak supervision to improve the learning of recognition models. There is a recent and growing interest in methods that exploit such data because they can potentially alleviate the need for manual annotation, which is a costly and time-consuming process. We focus on two types of visual data with associated textual information. First, we exploit news images that come with descriptive captions to address several face-related tasks, including face verification, the task of deciding whether two images depict the same individual, and face naming, the problem of associating faces in a data set with their correct names. Second, we consider data consisting of images with user tags. We explore models for automatically predicting tags for new images, i.e. image auto-annotation, which can also be used for keyword-based image search. We also study a multimodal semi-supervised learning scenario for image categorisation, in which the tags are assumed to be present in both labelled and unlabelled training data, while they are absent from the test data. Our work builds on the observation that most of these tasks can be solved if perfectly adequate similarity measures are used. We therefore introduce novel approaches that involve metric learning, nearest neighbour models and graph-based methods to learn, from the visual and textual data, task-specific similarities. For faces, our similarities focus on the identities of the individuals while, for images, they address more general semantic visual concepts. Experimentally, our approaches achieve state-of-the-art results on several standard and challenging data sets. On both types of data, we clearly show that learning using additional textual information improves the performance of visual recognition systems.
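As a rough illustration of pair-based face verification with a learned similarity, the sketch below whitens face descriptors with PCA and thresholds a distance between the projected pair; this is only a stand-in for the metric-learning, nearest-neighbour and graph-based models the thesis actually develops, and the component count is an arbitrary choice.

import numpy as np

def fit_whitened_pca(X, n_components=64, eps=1e-6):
    # X: (n_samples, n_features) face descriptors from the training set
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:n_components] / (S[:n_components, None] / np.sqrt(len(X)) + eps)
    return mu, W

def pair_score(x1, x2, mu, W):
    p1, p2 = W @ (x1 - mu), W @ (x2 - mu)
    return -np.linalg.norm(p1 - p2)   # higher score = more likely the same person

A verification decision would accept the pair as "same identity" when the score exceeds a threshold tuned on held-out labelled pairs.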
APA, Harvard, Vancouver, ISO, and other styles
27

Guillaumin, Matthieu. "Données multimodales pour l'analyse d'image." Phd thesis, Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00522278/en/.

Full text
Abstract:
This dissertation delves into the use of textual metadata for image understanding. We seek to exploit this additional textual information as weak supervision to improve the learning of recognition models. There is a recent and growing interest in methods that exploit such data because they can potentially alleviate the need for manual annotation, which is a costly and time-consuming process. We focus on two types of visual data with associated textual information. First, we exploit news images that come with descriptive captions to address several face-related tasks, including face verification, the task of deciding whether two images depict the same individual, and face naming, the problem of associating faces in a data set with their correct names. Second, we explore models for automatically predicting the relevant labels for images, a problem known as image auto-annotation, and these models can also be used for keyword-based image search. Finally, we study a multimodal semi-supervised learning scenario for image categorisation, in which labels are assumed to be present for the training data, whether manually annotated or not, and absent from the test data. Our work builds on the observation that most of these problems can be solved if perfectly adequate similarity measures are used. We therefore propose novel approaches that combine metric learning, nearest neighbour models and graph-based methods to learn, from the visual and textual data, visual similarities specific to each problem. For faces, our similarities focus on the identities of the individuals while, for images, they concern more general semantic concepts. Experimentally, our approaches achieve state-of-the-art performance on several challenging data sets. For both types of data considered, we clearly show that learning benefits from the additional textual information, resulting in improved performance of visual recognition systems.
APA, Harvard, Vancouver, ISO, and other styles
28

POCHETTINO, TERESA. "La valutazione energetico-ambientale dell’ospedale per acuti in fase d’uso. Criteri, indicatori, metodologie di verifica.Energetic and environmental operational hospital buildings assessment. Criteria, indicators and verification methods." Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2497148.

Full text
Abstract:
ENERGETIC AND ENVIRONMENTAL OPERATIONAL HOSPITAL BUILDINGS ASSESSMENT. CRITERIA, INDICATORS AND VERIFICATION METHODS. Author: Teresa Pochettino. Tutor: Prof. Ing. C. Caldera, Co-Tutor: Prof. Ing. S. Corgnati, Prof. Arch. S. Belforte, Lighting External supervisor: Prof. Arch. C. Aghemo, Arch. V. Lo Verso. RESEARCH THEME INTRODUCTION: The acute hospital constitutes a complex organizational and functional system that it is not possible to standardize according to the typology, the architectural solutions and the plant characteristics. The functional areas the hospital consists of represent both a working place for the staff people (medical, administrative and technical employees), and a living environment, for the patients. Sometimes the different categories overlay, which results in conflicting requirements. The Italian national scene is characterized by very old health care buildings (more than 60% achieved before 1970) subject to, during the course of the years, numerous interventions (on buildings systems, components and plants) characterized by partial visions (mainly focused on the aim of "humanization") and by a sector-based planning, in which medical requirements and demands have being prevailing. Economic resources absorbed by hospital settlements represent an average of 50% of the economic expenses allocated by the different European Countries for the health care costs. The energy consumption costs, despite their amount, have in this panorama an average of 2 -2.5% of the total management expenses, which are mainly absorbed by the staff salaries. The hospital resources consumption and emissions footprints have found, over the last years, great interest in many international and national research projects which showed the opportunity of producing significant energy savings and economic and environmental benefits. The reasons for this attention are mainly due to the awareness that the huge consumptions and the significant environment impact may offer a great potential for intervention and improvement, and to the need to cope with the basic contradiction inherent the effect that the hospital produces with its energy demand and related emissions on people health. The great building-plants system complexity and the situations variability, make it difficult to operate hospital presidia evaluation that can be generalized, and requires: the definition of targeted instruments to apply for the existing buildings analysis, definition of critical aspects, evaluation of the proposed measures effectiveness and definition of action priorities, especially facing the lack of financial resources availability, within the national and regional health planning. A strategy to deal with this complexity is offered by energetic and environmental assessment methodologies which integrate energetic aspects with environmental, comfort and operational ones, whose balance and synergy are needed conditions in health care buildings. The main energetic and environmental methodologies, (BREEAM, LEED, HQE), have developed specific assessment protocols for the hospital buildings designing phase, while SBMethod proposes a reference grid, which can be used to develop criteria and indicators coherent with the relevant building practices in the various nations, for both the design and the operative phases. 
RESEARCH OBJECTIVES: The research had the goal to define a set of criteria and indicators to assess operative acute hospitals’ environmental performances, identify the most critical situations and define the intervention/financing priorities, on the basis of their potential environmental and energetic effectiveness. METHODOLOGICAL ACTIONS: To define a list of specific criteria as well as to process them, the systemic approach proposed by the buildings environmental and energetic assessment methodologies, mainly available for the design phase, adapted for the operational phase, was chosen to be used as reference through the following path: - identification of the macro-themes shared by the investigated methodologies (BREEAM, LEED, SBMethod, HQE); - comparison between the criteria classification adopted by the investigated methodologies and the alignment according to the identified macro-themes; - integration of the criteria that were missing into the SBMethod evaluation grid for the operational phase; - extrapolation of a criteria set for the acute hospital operational evaluation phase; - Identification of assessment indicators and methods for each one of the selected criteria in the operational phase and definition of the benchmark levels; - field monitoring, evaluations and measurements, to support the validation of the alternative assessment methods and proposed benchmark; - definition of assessment criteria and procedure evaluation ; Because of the specificity and the complexity of some of the addressed topics, it was necessary to cooperate with experts related to different disciplinary areas such as: - architectural technology - building physics and services (for the energetic and lighting aspects); - working places hygiene (with the Turin University, and C. T. O. Hospital, Industrial toxicology and epidemiology Technical Service) Moreover, in the context of the developed research, confronting with the responsible management operators (internal and external to the cribs) played a crucial role, both for the competence and for the support needed to the in-the-field monitoring and investigations activities. RESULTS: Forty-five criteria, each with the relative evaluation cards, were processed in the research path, divided into the five SBMethod reference macro areas (site quality, resource consumption, environmental emission, environmental indoor quality, service quality). For each of the identified criteria indicators, testing methods and benchmark levels were defined and developed, based on laws and technical standards and then supported by a process to verify their applicability to the assessment of the operational acute hospital (or to its specific areas). For 21 of these criteria it has been possible to carry out field monitoring, instrumental measurements, and in some cases even dynamic simulations, which have supported the definition of the operability of the verification process and the identified performance levels reliability. INNOVATION ASPECTS: The definition of a specific set of criteria, indicators, verification methodologies, appropriately calibrated to evaluate the hospitals during its operational phase, represents, in itself, an innovative aspect. 
It is considered of particular interest, moreover, the definition of certain criteria which were not provided within the framework of the investigated instruments, such as: the indoor air quality as a function of the VOC presence; the energy demand for electric lighting, in addition to the in-depth, adaptations, and integrations of a large number of available criteria (radon, asbestos, etc.). As for the lighting comfort aspects it has been possible to integrate the parameters of environmental physics with considerations related to the healing design concepts (on the basis of contextual subjective and objective evaluations), which lead to considerations that are particularly significant in the people healthcare contexts. As for the energy aspects, it is necessary to underline the selection of an assessment proposal based on the different functional/energetic hospital areas, and the adoption of benchmark related to the individual case study. This approach allowed to overcome the limitations of the parameterization referred at the hospital bedside or at the surface unit. FUTURE DEVELOPMENT: It is possible to identify the following possible research developments, for the short and the long-term period: - the extension of the evaluation to further hospital functional areas and to other case studies; - the weighting process of the selected criteria based on a comparison shared with the stakeholders (technicians, specialists, medical staff); On the basis of the research process product of the assessments development on the existing assets it will, then, be possible, to define: - guidelines and performance levels for the acute hospitals design phase with an high energetic and environmental quality; - an assessment protocol to evaluate the hospital design phase.
APA, Harvard, Vancouver, ISO, and other styles
29

Scheepers, Jill. "Analysis of cryptocurrency verification challenges faced by the South African Revenue Service and tax authorities in other BRICS countries and whether SARS’ powers to gather information relating to cryptocurrency transactions are on par with those of other BRICS countries." Master's thesis, Faculty of Commerce, 2019. http://hdl.handle.net/11427/31231.

Full text
Abstract:
The main objective of this study was to identify the potential difficulties that the verification of cryptocurrencies presents to SARS and determining whether these problems will also be encountered by tax authorities in Brazil, Russia, India and China (members of the BRICS group of countries). The study examined how the BRICS’ countries were addressing cryptocurrency data challenges and determining whether South Africa could learn from the solutions implemented by these countries. The information gathering powers of SARS were also examined in order to determine whether those powers are on par with those of the BRICS’ countries. The findings suggest that it is vital that tax authorities link the taxpayer’s real identity to the taxpayer’s digital identity in order to trace the taxpayer’s tax profile and verify compliance with tax legislation. The findings also suggest that certain BRICS countries did not experience significant verification difficulties. China has, however, banned the use of cryptocurrencies. Russia is in the process of passing tax legislation pertaining to cryptocurrencies and therefore, the Russian tax authorities have not yet undertaken to verify cryptocurrency transactions. India has addressed the verification challenges presented by cryptocurrencies by introducing legislation that compels clients of cryptocurrency exchanges to register with the exchange before transacting. Brazil is in the process of passing legislation which will require cryptocurrency exchanges to supply the Brazilian tax authorities with taxpayers’ identities, transaction amounts and transaction history on a monthly basis. Private altcoins, face-to-face transactions, cryptocurrency mixers and online peer-to-peer markets (which require no registration) present the largest verification challenges due to the difficulty in tracking these transactions. It was also found that the information gathering powers of SARS are on par with those of the BRICS’ countries and therefore, SARS is also able to request information from cryptocurrency exchanges as a means of collecting data for verification purposes. The study concluded with recommendations for SARS to consider in addressing the verification challenges posed by cryptocurrency transactions.
APA, Harvard, Vancouver, ISO, and other styles
30

Falade, Joannes Chiderlos. "Identification rapide d'empreintes digitales, robuste à la dissimulation d'identité." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMC231.

Full text
Abstract:
Biometrics is increasingly used for identification purposes due to the close relationship between a person and their identifier (such as a fingerprint). This thesis focuses on the problem of identifying individuals from their fingerprints. The fingerprint is a biometric trait widely used for its efficiency, simplicity and low acquisition cost. Fingerprint comparison algorithms are mature, and it is possible to obtain in less than 500 ms a similarity score between a reference template (stored on an electronic passport or in a database) and an acquired template. However, it becomes very important to determine the identity of an individual against an entire population in a very short time (a few seconds). This is a significant challenge given the size of the biometric database, which may contain the population of an entire country; for example, before issuing a new passport, an identification search must be run against the national biometric database to make sure the applicant does not already hold another passport under the same fingerprints (to avoid duplicates). The first part of this thesis therefore concerns the identification of individuals using fingerprints, with N on the order of a million, representing the population of a country. We surveyed the state of the art on indexing and classification of fingerprint databases, favouring binary fingerprint representations for indexing, evaluated the feature-extraction tools available at Imprimerie Nationale (IN Groupe), and implemented four identification methods selected from the state of the art. A comparative study and improvements were proposed for these methods, and we also proposed a new fingerprint indexing solution for the identification task that improves on existing results. The different results are validated on public medium-sized databases, and the Sfinge software is used for scaling up and fully validating the indexing strategies. A second aspect of this thesis concerns security. A person may wish to conceal their identity and therefore do everything possible to defeat identification, for example by providing a poor-quality fingerprint (only a portion of the fingerprint, low contrast from pressing lightly on the sensor) or an altered fingerprint (deliberately damaged, removed with acid, scarified). The second part of this thesis therefore aims to detect dead fingers and spoof fingers (silicone, 3D-printed, latent fingerprints) used by malicious people to attack the system. We proposed a new presentation attack detection solution based on statistical descriptors of the fingerprint, and we also built three presentation attack detection workflows for fake fingerprints using deep learning; two of these come from the state of the art and the third is an improvement that we propose. Our solutions are tested on the LivDet competition databases for presentation attack detection.
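For intuition only, here is a minimal sketch of the candidate-filtering step that binary-code indexing enables for 1:N identification; the code format and shortlist size are assumptions, not the thesis's actual indexing scheme.

import numpy as np

def hamming(a, b):
    # a, b: packed uint8 binary codes of equal length
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def shortlist(probe_code, gallery_codes, keep=100):
    # Rank the whole gallery by Hamming distance and keep the closest
    # candidates; only these are passed on to the slow minutiae matcher.
    dists = [hamming(probe_code, g) for g in gallery_codes]
    return np.argsort(dists)[:keep]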
APA, Harvard, Vancouver, ISO, and other styles
31

Hung, Wen Hsuan, and 洪文軒. "Face Verification from a Face Motion Video Clip." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/97899687858591408454.

Full text
Abstract:
碩士
國立交通大學
多媒體工程研究所
97
The system proposed in this thesis uses a face motion video clip to perform face verification. The system consists of three parts. The first part separates the face from the background by using the frame-difference technique and skin-colour information; the skin-colour model is constructed automatically from the input training video clip. The second part extracts the facial feature points based on the AAM shape and appearance models built from the training image set; this method is robust to intensity changes and geometric image variations. The final part verifies the identity of the face: it reconstructs a 3D face model without camera calibration and verifies the identity by registering the facial feature points to the gallery face image through the 2D projection of the constructed 3D face model.
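As an illustrative aside, a minimal OpenCV sketch of the first stage (frame differencing combined with a skin-colour gate); the threshold and the YCrCb skin range are generic guesses, not values from the thesis, which learns its skin-colour model from the training clip.

import cv2

def face_candidate_mask(prev_bgr, curr_bgr):
    # 1) frame difference to find moving pixels
    diff = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY))
    motion = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)[1]
    # 2) crude skin-colour gate in YCrCb space
    ycrcb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    # keep pixels that are both moving and skin-coloured
    return cv2.bitwise_and(motion, skin)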
APA, Harvard, Vancouver, ISO, and other styles
32

"Face verification in the wild." 2015. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291313.

Full text
Abstract:
Lu, Chaochao.
Thesis M.Phil. Chinese University of Hong Kong 2015.
Includes bibliographical references (leaves 86-98).
Abstracts also in Chinese.
Title from PDF title page (viewed on 19, September, 2016).
APA, Harvard, Vancouver, ISO, and other styles
33

Duan, Chih-Hsueh, and 段志學. "Face Verification with Local Sparse Representation." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/63144328131569334187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Huang, Chun-Min, and 黃俊閔. "Face Verification Using Eigen Correlation Filter." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/79692841753957922440.

Full text
Abstract:
碩士
國立暨南國際大學
電機工程學系
100
Face verification is a branch of face recognition that can be used as a pre-processing step or in combination with other identification methods to improve recognition results. Face recognition technology is used not only for identity management but also in images and related multimedia applications. Face verification from face images thus has many applications and is an important research topic. In this thesis, a one-dimensional correlation filter based class-dependence feature analysis (1D-CFA) method is presented for face verification. Compared with the original CFA, which works in the two-dimensional (2D) image space, 1D-CFA encodes the image data as vectors. In 1D-CFA, a new correlation filter called the optimal trade-off filter (OTF), designed in the low-dimensional kernel principal component analysis (KPCA) subspace, is proposed for effective feature extraction. We also discuss a new correlation filter module called the eigen filter, likewise designed in the KPCA subspace. The system structure can be divided into three parts: (1) a preprocessing module, (2) a training module and (3) a test module. The experimental results show that the best performance of 88.2% is achieved with the combination of kernel principal component analysis (KPCA) and the optimal trade-off filter (OTF).
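For orientation, the toy sketch below scores a probe with a correlation filter via the peak-to-sidelobe ratio; the filter here is just an averaged matched filter, whereas the thesis designs an optimal trade-off filter in a KPCA subspace, so this is background on the general technique rather than the thesis's method.

import numpy as np

def train_filter(face_stack):
    # face_stack: (n, H, W) aligned training faces of one client
    return np.conj(np.fft.fft2(face_stack)).mean(axis=0)

def psr_score(filter_freq, probe):
    corr = np.real(np.fft.ifft2(np.fft.fft2(probe) * filter_freq))
    peak = corr.max()
    sidelobes = corr[corr < peak]            # crude sidelobe region
    return (peak - sidelobes.mean()) / (sidelobes.std() + 1e-9)

A sharp, isolated correlation peak (high PSR) indicates the probe matches the client the filter was trained for; the claim is accepted when the PSR exceeds a tuned threshold.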
APA, Harvard, Vancouver, ISO, and other styles
35

"Deep learning face representation by joint identification-verification." 2015. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291589.

Full text
Abstract:
Sun, Yi.
Thesis Ph.D. Chinese University of Hong Kong 2015.
Includes bibliographical references (leaves 100-106).
Abstracts also in Chinese.
Title from PDF title page (viewed on 26, October, 2016).
APA, Harvard, Vancouver, ISO, and other styles
36

Pei-HsunWu and 吳沛勳. "Metric-Learning Face Verification Using Local Binary Pattern." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/49593027304201220818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

HUANG, JYUN-WE, and 黃駿偉. "Face Verification System Based on Generative Adversarial Network." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/432z6g.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Rahadian, Fattah Azzuhry, and 哈帝恩. "Compact and Low-Cost CNN for Face Verification." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/6y847b.

Full text
Abstract:
碩士
國立中央大學
資訊工程學系
107
In recent years, face verification has been widely used to secure various transactions on the internet. The current state-of-the-art in face verification is convolutional neural network (CNN). Despite the performance of CNN, deploying CNN in mobile and embedded devices is still challenging because the available computational resource on these devices is constrained. In this paper, we propose a lightweight CNN for face verification using several methods. First, a modified version of ShuffleNet V2 called ShuffleHalf is used as the backbone network for the FaceNet algorithm. Second, the feature maps in the model are reused using two proposed methods called Reuse Later and Reuse ShuffleBlock. Reuse Later works by reusing the potentially unused features by connecting the features directly to the fully connected layer. Meanwhile, Reuse ShuffleBlock works by reusing the feature maps output of the first 1x1 convolution in the basic building block of ShuffleNet V2 (ShuffleBlock). This method is used to reduce the percentage of 1x1 convolution in the model because 1x1 convolution operation is computationally expensive. Third, kernel size is increased as the number of channels increases to obtain the same receptive field size with less computational complexity. Fourth, the depthwise convolution operations are used to replace some ShuffleBlocks. Fifth, other existing previous state-of-the-art algorithms are combined with the proposed method to see if they can increase the performance-efficiency tradeoff of the proposed method. Experimental results on five testing datasets show that ShuffleHalf achieves better accuracy than all other baselines with only 48% FLOPs of the previous state-of-the-art algorithm, MobileFaceNet. The accuracy of ShuffleHalf is further improved by reusing the feature. This method can also reduce the computational complexity to only 42% FLOPs of MobileFaceNet. Meanwhile, both changing kernel size and using depthwise repetition can further decrease computational complexity to only 38% FLOPs of MobileFaceNet with better performance than MobileFaceNet. Combination with some existing methods does not increase the accuracy nor performance-efficiency tradeoff of the model. However, adding shortcut connections and using Swish activation function can improve the accuracy of the model without any noticeable increase in the computational complexity.
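As generic background (not code from the thesis), the channel-shuffle operation that ShuffleNet V2-style blocks such as the proposed ShuffleHalf build on can be written in a few lines; shapes follow the usual NCHW convention and the example sizes are arbitrary.

import numpy as np

def channel_shuffle(x, groups):
    n, c, h, w = x.shape
    assert c % groups == 0
    # split channels into groups, swap the two channel axes, then flatten back
    # so that channels from different groups are interleaved
    out = x.reshape(n, groups, c // groups, h, w)
    out = out.transpose(0, 2, 1, 3, 4)
    return out.reshape(n, c, h, w)

x = np.arange(2 * 8 * 4 * 4, dtype=np.float32).reshape(2, 8, 4, 4)
y = channel_shuffle(x, groups=2)   # same shape, channels interleaved across groups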
APA, Harvard, Vancouver, ISO, and other styles
39

Lin, Meng-Ying, and 林孟穎. "Face Verification by Exploiting Reconstructive and Discriminative Coupled Subspaces." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/v5m6tq.

Full text
Abstract:
碩士
淡江大學
資訊工程學系碩士班
104
Face verification has been widely studied due to its importance in surveillance and forensics applications. In practice, gallery images in the database are high-quality, while probe images are usually low-resolution or heavily occluded. In this study, we propose a regression-based approach for face verification in the low-quality scenario. We adopt a principal component analysis (PCA) approach to construct the correlation between pairwise samples, where each sample contains a heterogeneous pair of facial images captured with different modalities or features (e.g., low-resolution vs. high-resolution, or an occluded facial image vs. a non-occluded one). Three common feature spaces are reconstructed from the cross-domain pairwise samples, with the goal of eliminating appearance variations and maximizing discrimination between different subjects. The derived subspaces are then used to represent the subjects of interest and achieve satisfactory verification performance. Experiments on a variety of synthesis-based verification tasks under low-resolution and occlusion conditions verify the effectiveness of the proposed learning framework.
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Yen-Heng, and 陳衍亨. "Identity Verification by 3-D Information from Face Images." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/44579892499561352143.

Full text
Abstract:
碩士
國立交通大學
資訊科學系
90
Identity verification is an essential component of a security system. The traditional way to verify a person is to ask him or her to show some kind of document, e.g., an ID card. Compared to the traditional way, using the human face to verify a person is a more convenient approach. A human face is a 3-D entity; however, existing face recognition methods analyze face images in two dimensions and discard the 3-D information of the face. The approach proposed in this thesis uses 3-D information of a face to perform the verification. The 3-D information is represented by a projective invariant called the relative affine structure. If the images are taken of the same person, the relative affine structures between these images remain unchanged. Based on this property, an identity verification system using human face images can be built.
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Hsien-Chang, and 劉憲璋. "Personalized Face Verification System Based on Cluster-Dependent LDA Subspace." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/37515340169234844138.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
91
Person authentication has become more and more important as technology advances, and how to build a safe and convenient identity verification system is a hot research topic in academia and industry. In this thesis, we introduce a personalized face verification system based on a cluster-dependent LDA subspace. The training of the system can be divided into three parts: initial training, on-site training, and on-site evaluation. In the initial training, we select some face images from our database as representative face images and cluster them using the K-means clustering method. For on-site training, the client must provide some face images, and the client is assigned to the closest cluster. To separate the client from the other representative people in the cluster, we apply the LDA method to derive the LDA subspace. Finally, we use information from the client and the impostors to adjust the decision threshold. During system operation and on-line training, the user can manually input a password when the system fails to verify him or her; the system thereby obtains more training images to retrain the LDA subspace and the threshold. We also compare three different matching scores. The experimental results show that our method outperforms the traditional LDA method.
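A minimal sketch of the cluster-dependent LDA idea, assuming standard scikit-learn components; the cluster count, feature layout and acceptance threshold are illustrative choices, not values from the thesis.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def enroll(client_faces, representative_faces, n_clusters=5):
    # cluster the representative gallery, then find the client's closest cluster
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(representative_faces)
    cluster = km.predict(client_faces.mean(axis=0, keepdims=True))[0]
    in_cluster = km.labels_ == cluster
    # LDA separating the client (label 1) from the cluster's impostors (label 0)
    X = np.vstack([client_faces, representative_faces[in_cluster]])
    y = np.hstack([np.ones(len(client_faces)), np.zeros(in_cluster.sum())])
    lda = LinearDiscriminantAnalysis().fit(X, y)
    return km, cluster, lda

def verify(face, lda, threshold=0.5):
    # accept the claim when the client-class probability clears the threshold
    return lda.predict_proba(face[None, :])[0, 1] >= threshold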
APA, Harvard, Vancouver, ISO, and other styles
42

Hung, Chien-Yu, and 洪倩玉. "Dynamic Linear Discriminant Analysis for Online Face Recognition and Verification." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/59833427175508565421.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (Master's and Doctoral Program)
91
Linear Discriminant Analysis (LDA) is a popular linear transformation method for face recognition and verification. Using LDA, we can extract low-dimensional discriminative feature parameters for human faces. In face recognition and verification applications, it is usually necessary to enroll new persons and templates into the system, and we often need to remove out-of-date persons or templates from the system model. With the LDA model, this means the within-class and between-class scatter matrices and the transformation matrices must be recomputed, and such recomputation is very time-consuming. To overcome this weakness, a dynamic LDA algorithm is proposed in this thesis. Applying this algorithm, we can not only save a huge amount of computation time but also obtain the updated parameters with relatively small storage of model parameters. Moreover, for the face verification system, we estimate the optimal matrix by combining the theories of LDA and Maximum Likelihood Linear Transformation (MLLT). We also derive that the distribution of the likelihood ratio based on MLLT follows the F distribution. The face verification is then carried out via hypothesis testing using different significance levels of the F distribution. The advantage of the new method is that the verification decision is made according to statistically meaningful significance levels, which is attractive compared to conventional methods that use empirical thresholds. In the experiments, we obtain desirable performance using the IIS face database and the CSIE/NCKU car face database. An online dynamic face recognition and verification demo system is implemented.
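As a rough sketch of the bookkeeping a dynamic LDA update relies on, the class below keeps per-class sums and counts so that enrolling a new person updates the scatter matrices without revisiting old samples; this is a generic reconstruction of the idea, not the thesis's exact derivation, and removing a class would mirror add_class with subtractions.

import numpy as np

class ScatterStats:
    def __init__(self, dim):
        self.dim, self.n_total = dim, 0
        self.class_sums, self.class_counts = {}, {}
        self.Sw = np.zeros((dim, dim))          # within-class scatter

    def add_class(self, label, X):
        # X: (n_k, dim) samples of the newly enrolled person
        mean_k = X.mean(axis=0)
        self.Sw += (X - mean_k).T @ (X - mean_k)
        self.class_sums[label] = X.sum(axis=0)
        self.class_counts[label] = len(X)
        self.n_total += len(X)

    def between_scatter(self):
        # rebuilt cheaply from the stored class sums and counts
        global_mean = sum(self.class_sums.values()) / self.n_total
        Sb = np.zeros((self.dim, self.dim))
        for lbl, s in self.class_sums.items():
            n_k = self.class_counts[lbl]
            d = s / n_k - global_mean
            Sb += n_k * np.outer(d, d)
        return Sb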
APA, Harvard, Vancouver, ISO, and other styles
43

Deng, Peter Shaohua, and 鄧少華. "Biometric-based Pattern Recognition -- Handwritten Signature Verification and Face Recognition." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/79526579531096548003.

Full text
Abstract:
Doctoral thesis
National Central University
Institute of Computer Science and Information Engineering
88
In this dissertation, two biometric-based pattern recognition problems were studied, i.e., off-line handwritten signature verification and human face recognition. Biometrics, by definition, is the automated technique of measuring a physical characteristic or personal trait of an individual and comparing the characteristic or trait to a database for the purpose of recognizing or authenticating that individual. Biometrics uses physical characteristics, defined as the things we are, and personal traits, defined as the way we behave, including facial thermographs, chemical composition of body odor, retina and iris, fingerprints, hand geometry, skin pores, wrist/hand veins, handwritten signature, keystrokes or typing, and voiceprint. To deal with the first biometric-based pattern recognition problem, off-line handwritten signature verification, wavelet theory, zero-crossings, dynamic time warping, and nonlinear integer programming form the main body of our methodology. The proposed system can automatically identify useful features which consistently exist within different signatures of the same person and, based on these features, verify whether a signature is a forgery or not. The system starts with a closed-contour tracing algorithm. The curvature data of the traced closed contours are decomposed into multiresolution signals using wavelet transforms. Then the zero-crossings corresponding to the curvature data are extracted as features for matching. Moreover, a statistical measurement is devised to decide systematically which closed contours and their associated frequency data of a writer are most stable and discriminating. Based on these data, the optimal threshold value which controls the accuracy of the feature extraction process is calculated. The proposed approach can be applied to both on-line and off-line signature verification systems. The second biometric-based pattern recognition problem we deal with is human face recognition; we applied the minimum classification error (MCE) technique proposed by Juang and Katagiri [11]. In this technique, the classical discriminant analysis methodology is blended with the classification rule in a new functional form and is used as the design objective criterion to be optimized by a numerical search algorithm. In our work, the MCE formulation is incorporated into a three-layer neural network classifier called the multilayer perceptron (MLP). Unlike the traditional probabilistic Bayes decision technique, the proposed approach does not need to assume a probability model for each class. Besides, the classifier works well even when the size of the training set is small. Moreover, in both normal and harsh environments, the MCE-based method is superior to the minimum sum-squared error (MSE) based method which is commonly used in traditional neural network classifiers. Finally, by incorporating a fast face detection algorithm into the system to help extract the face-only image from a complex background, the MCE-based face recognition system is robust to images acquired in harsh environments. Experimental results confirm that our approach outperforms the previous approaches.
APA, Harvard, Vancouver, ISO, and other styles
44

Liang, Te-Hsiang, and 梁子祥. "Implementation of the Identity Verification Mechanism Based on Face Recognition." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/41188398179101704033.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Graduate Institute of Automation and Control
104
This thesis discusses face recognition applied to identity verification. To narrow down the region of an image in which to find the face, Haar-like AdaBoost is used for face detection. The KAZE algorithm is then applied for feature extraction in the face recognition stage. The KAZE feature algorithm was first proposed in 2012; KAZE features are detected and described in a nonlinear scale space by means of nonlinear diffusion filtering. In this research, we adopt the new KAZE algorithm instead of traditional methods such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded Up Robust Features). Furthermore, based on this method, we analyze several identity verification problems, including (a) the similarity between one person's photo and that person's other photos, (b) the similarity between one person with and without glasses, (c) the similarity between one person and other people of the same gender, and (d) the similarity between one person and other people of a different gender. Simulation results indicate that the above similarities can reach (a) 90%, (b) 92%, (c) 60%, and (d) 67%. We also apply the proposed method to (a) a home access control system and (b) an identity verification mechanism. In the simulation of the latter application, we use 200 photos for comparison to obtain the different similarity values and judge whether the identity is correct. Simulation results show that the accuracy can reach more than 90%.
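For illustration, a minimal OpenCV sketch of KAZE-based matching between two face crops; the ratio-test value and the similarity definition (fraction of keypoints matched) are illustrative choices, not the thesis's parameters.

import cv2

def kaze_similarity(img_a_gray, img_b_gray, ratio=0.75):
    kaze = cv2.KAZE_create()
    ka, da = kaze.detectAndCompute(img_a_gray, None)
    kb, db = kaze.detectAndCompute(img_b_gray, None)
    if da is None or db is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(da, db, k=2):
        # Lowe-style ratio test to keep only distinctive matches
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good) / max(1, min(len(ka), len(kb)))

In practice the returned fraction would be thresholded to accept or reject the claimed identity, analogous to the similarity percentages quoted above.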
APA, Harvard, Vancouver, ISO, and other styles
45

Hsu, Ching-chia, and 許徑嘉. "Face Verification and Lip Reading Systems based on Sparse Representation." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/63219376372299366670.

Full text
Abstract:
Master's thesis
National Central University
Department of Computer Science and Information Engineering
101
Face verification has many applications, and a critical problem that concerns many researchers is how to apply it in the real world. To be robust to orientation, translation and scaling of face images, we extract SIFT features from the face images and use them to build the dictionary of a sparse representation. We propose two kinds of methods to extend the dictionary, via K-means and via information theory (the extended dictionary and the incremental dictionary). Experiments show that we can efficiently increase the sparseness of the sparse coefficients, and that the extended dictionary improves both the verification rate and the reconstruction error. This thesis utilizes BCS to solve the optimization problem. Compared with the OMP algorithm, BCS not only solves the optimization problem but can also improve the dictionary through the covariance, which decreases the uncertainty of the observation vectors. Experiments show that the incremental dictionary does increase the residual of the reconstruction error. For lip reading, ASM or AAM features have been used in the past few years; since these may lose useful information, we consider the whole image by extracting SIFT features. To train an HMM model from SIFT features, we use a bag-of-features (BOF) representation to transform the matrices of SIFT features into vectors. We experiment with the letters A-Z, and the results show that the performance of the proposed method is better than the baseline systems.
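As a hedged sketch of the sparse-representation step, the snippet below codes a probe descriptor over a gallery dictionary with scikit-learn's OMP solver and reports the reconstruction residual; the thesis's BCS solver and its SIFT/BOF dictionary construction are not reproduced here, and the sparsity level is an arbitrary choice.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sparse_code(dictionary, probe, n_nonzero=10):
    # dictionary: (descriptor_dim, n_atoms), columns built from gallery descriptors
    # probe: (descriptor_dim,) descriptor of the image to verify
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(dictionary, probe)
    coeffs = omp.coef_
    residual = np.linalg.norm(probe - dictionary @ coeffs)
    return coeffs, residual   # a small residual means the gallery explains the probe well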
APA, Harvard, Vancouver, ISO, and other styles
46

Lin, Tzu-Hao, and 林子皓. "A Study on Face Verification with Local Appearance-Based Methods." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/9g94rd.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
106
In recent years, as the striking applications shown in the mass media have progressively become reality, face recognition has been attracting more and more attention from people all over the world. Of the two modes of face recognition, verification is simpler and more suitable than identification for some practical applications such as authentication. To describe the information in human faces more elaborately, we prefer local appearance-based methods among the various face recognition approaches. In this thesis, we study three local appearance-based methods: GOP-Face (Gradient Orientation Pyramid), LBP-Face (Local Binary Pattern) and DT-CWT-Face (Dual-Tree Complex Wavelet Transform), and try to give a clear overview of these three methods. Furthermore, we use face verification to test their robustness to variations such as spatial shift, illumination changes and age progression on the ORL, Yale and FERET databases with a k-nearest-neighbour classifier. The results verify that LBP-Face and DT-CWT-Face are indeed more robust to spatial shift, and that DT-CWT-Face is surprisingly robust to age progression, even better than GOP-Face. However, the performance under illumination change is not as good as expected.
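For reference, a minimal LBP-Face style descriptor (uniform LBP histograms over a grid of cells) using scikit-image; the grid size, radius and neighbour count are common defaults, not necessarily those used in the thesis.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_face_descriptor(gray_face, grid=(7, 7), P=8, R=1):
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    n_bins = P + 2                               # uniform patterns + one "non-uniform" bin
    h, w = gray_face.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                       j * w // grid[1]:(j + 1) * w // grid[1]]
            hists.append(np.histogram(cell, bins=n_bins, range=(0, n_bins))[0])
    desc = np.concatenate(hists).astype(float)
    return desc / (desc.sum() + 1e-9)

Two such descriptors would then be compared with a histogram distance (e.g. chi-square) and the result fed to the nearest-neighbour classifier mentioned above.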
APA, Harvard, Vancouver, ISO, and other styles
47

N, Krishna. "A study of eigenvector based face verification in static images." Thesis, 2007. http://ethesis.nitrkl.ac.in/4371/1/A_Study_of_Eigenvector_Based_Face_Verification.pdf.

Full text
Abstract:
As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past few years. There are at least two reasons for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. The problem of machine recognition of human faces continues to attract researchers from disciplines such as image processing, pattern recognition, neural networks, computer vision, computer graphics, and psychology. The strong need for user-friendly systems that can secure our assets and protect our privacy without losing our identity in a sea of numbers is obvious. Although very reliable methods of biometric personal identification exist, for example fingerprint analysis and retinal or iris scans, these methods depend on the cooperation of the participants, whereas a personal identification system based on analysis of frontal or profile images of the face is often effective without the participant's cooperation or knowledge. The three categories of face recognition are face detection, face identification, and face verification. Face detection means extracting the face from the total image of the person. In face identification, the input to the system is an unknown face and the system reports back the determined identity from a database of known individuals. In face verification, the system needs to confirm or reject the claimed identity of the input. This thesis addresses face verification in static images, that is, images which are not in motion. The eigenvector-based face verification algorithm produces results on face verification in static images using eigenvectors and the neural network backpropagation algorithm. Eigenvectors are used to capture the geometrical information about the faces. First we take 10 images of each person at the same angle with different expressions and apply principal component analysis. With an image dimension of 48 x 48 we obtain 48 eigenvalues; of these, we keep only the eigenvectors corresponding to the 10 highest eigenvalues. These eigenvectors are given as input to a neural network trained with the backpropagation algorithm. After training is complete, an image taken at a different angle is presented for testing. We measure the verification rate (the rate at which legitimate users are granted access) and the false acceptance rate (the rate at which impostors are granted access). The neural network takes a long time to train, so the proposed algorithm also reports results on face verification in static images using eigenvectors and a modified backpropagation algorithm, in which a momentum term is added to decrease the training time. With the modified backpropagation algorithm, the verification rate increases slightly and the false acceptance rate decreases slightly.
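A compact sketch of this eigenface-plus-backpropagation pipeline, using scikit-learn's PCA and an SGD-trained MLP whose momentum term plays the role of the modified backpropagation described above, is shown below; the data arrays, network size, and acceptance threshold are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Placeholder data: 10 people x 10 expressions, flattened 48x48 face images.
X = np.random.rand(100, 48 * 48)
y = np.repeat(np.arange(10), 10)

# Keep only the eigenvectors of the 10 largest eigenvalues (the eigenfaces).
pca = PCA(n_components=10).fit(X)
X_proj = pca.transform(X)

# Backpropagation with a momentum term, in the spirit of the modified algorithm above.
mlp = MLPClassifier(hidden_layer_sizes=(32,), solver="sgd", momentum=0.9,
                    learning_rate_init=0.01, max_iter=2000)
mlp.fit(X_proj, y)

def verify(probe, claimed_id, threshold=0.5):
    """Accept the claimed identity if its predicted probability exceeds the threshold."""
    proba = mlp.predict_proba(pca.transform(probe.reshape(1, -1)))[0]
    return bool(proba[claimed_id] >= threshold)
```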
APA, Harvard, Vancouver, ISO, and other styles
48

Wei, Yu-Chen, and 魏育誠. "The Research of Identity Verification by Using Face and Hand Features." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/mtpc4u.

Full text
Abstract:
Master's thesis
Kun Shan University
Graduate Institute of Electrical Engineering
92
With the development of modern technology, the preservation of confidential documents and the management of user identity have become more important. Applications include entrance control systems, financial management, criminal detection, and computer certification, all of which require a strong identity verification system. Identity verification techniques will therefore play an increasingly important role in the information society of the 21st century, and how to construct a safe and convenient identity verification system has become a hot research topic in both academia and industry. Most current identity verification research targets a single physical feature of the individual, which yields a much lower identification rate than the use of multiple physiological features. The main purpose of this study is therefore to improve the identification rate by combining human face and hand geometric features, and to develop an identity verification system based on multiple biometric features. In this system, basic image processing techniques, including thresholding, edge detection, morphological image processing, and image projection, are used to locate the coordinates of feature points automatically and then to compute the corresponding feature vectors. For comparison, since the feature vector contains the characteristic values of each identity, this study uses distance measures, including the Euclidean distance and the Hamming distance, to evaluate the similarity between feature vectors and achieve identification. Upon completing the theoretical derivation, this study not only produces a complete computation method but also demonstrates the practical effectiveness of a multi-feature identity verification system using both face and palm-shape images.
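The comparison stage described above reduces to distance computations between feature vectors; a small sketch with assumed (purely illustrative) measurements and thresholds follows.

```python
import numpy as np

def euclidean_distance(a, b):
    """Straight-line distance between two real-valued feature vectors."""
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

def hamming_distance(a, b):
    """Number of differing positions after binarising each vector at its own median."""
    bits_a = np.asarray(a) > np.median(a)
    bits_b = np.asarray(b) > np.median(b)
    return int(np.count_nonzero(bits_a != bits_b))

# Illustrative enrolled template and probe: concatenated face and hand-geometry measures.
template = np.array([31.0, 12.5, 48.2, 19.7, 8.3, 75.1])
probe = np.array([30.6, 12.9, 47.8, 20.1, 8.1, 74.4])

# Accept the claim only if both distances stay under (empirically chosen) thresholds.
accept = euclidean_distance(template, probe) < 2.0 and hamming_distance(template, probe) <= 1
```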
APA, Harvard, Vancouver, ISO, and other styles
49

CHU, Jia Der, and 朱家德. "Application of 1-D Wavelet Transform in Speaker and Face Verification." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/59740085959243294870.

Full text
Abstract:
Master's thesis
I-Shou University
Department of Electrical Engineering
91
This thesis studies two problems in biometrics: speaker and face recognition. First, for the speaker recognition problem, we use the wavelet transform to decompose the speech signal into high- and low-frequency coefficients, and then apply different traditional methods, including PCA, LPCC, Fractal, and WTFT, to extract low- or high-frequency features, combined with a probabilistic neural network classifier to match the voiceprint. The results show that the proposed method improves recognition rate and efficiency. Second, for face recognition, we obtain a cumulative gray-level curve by projecting the 2D face image horizontally and use the discrete wavelet transform to extract the low-frequency coefficients as features. We conduct a set of experiments in both face identification and face matching application modes, with facial images sampled from the ORL database. Our experiments reveal that the proposed method offers excellent recognition performance and efficiency, which is advantageous for realizing a facial recognition system in a hardware-friendly, resource-constrained embedded environment.
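A brief sketch of the face-feature branch (horizontal projection of the image followed by a 1-D discrete wavelet transform keeping only the low-frequency coefficients) is given below using PyWavelets; the wavelet family, decomposition level, and projection axis are assumptions.

```python
import cv2
import numpy as np
import pywt

def projection_dwt_feature(path, wavelet="db4", level=3):
    """Project the face image along rows, then keep the DWT approximation coefficients."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    curve = gray.sum(axis=1)        # cumulative gray-level curve from horizontal projection
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    return coeffs[0]                # low-frequency (approximation) part used as the feature

# Matching can then be a nearest-neighbour comparison between these 1-D feature vectors.
# feature = projection_dwt_feature("orl_subject01_1.jpg")
```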
APA, Harvard, Vancouver, ISO, and other styles
50

Lee, Yi-Chun, and 李易俊. "A Gabor Feature Based Horizontal and Vertical Discriminant for Face Verification." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/27933947409060647591.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Institute of Computer and Communication Engineering
100
This thesis proposes three different approaches to face recognition. In the first approach, a novel feature extraction method is proposed in which the digital curvelet transform is used to extract face features. An original image is convolved with 6 Gabor filters corresponding to various orientations and scales to give its Gabor representation. The Gabor representation is then analyzed by the ridgelet transform, followed by two-dimensional principal component analysis (2DPCA), which computes the eigenvectors of the ridgelet image covariance matrix. Experiments show that the correct recognition rate of this method reaches 95.5%. In the second approach, a new method based on two-dimensional locality preserving projections (2DLPP) is proposed to extract Gabor features for face recognition. 2DPCA is first utilized for dimensionality reduction of the Gabor feature space and is applied directly to the 2D image matrices. The objective of 2DLPP is to preserve the local structure of the image space by detecting its intrinsic manifold structure. Again, an original image is convolved with Gabor filters corresponding to various orientations and scales to give its Gabor representation. Experiments conducted on the ORL face database show higher recognition performance for the proposed method, with a top recognition rate of 95.5%. In the last approach, a novel discriminant analysis method for Gabor-based image feature extraction and representation is proposed and implemented. Horizontal and vertical two-dimensional principal component analysis (HV-2DPCA) is applied directly to a Gabor face to reduce redundant information while preserving a bi-directional characteristic. It is followed by an enhanced Fisher linear discriminant model (EFM), which generates a low-dimensional feature representation with enhanced discrimination power. With the most discriminant features, samples from different classes are pushed widely apart while samples from the same class are made as compact as possible. This algorithm is designated the horizontal and vertical enhanced Gabor Fisher discriminant (HV-EGF). Using various feature dimensions and various numbers of training samples, our experiments indicate that the proposed HV-EGF method provides superior recognition accuracy relative to the Fisher linear discriminant (FLD), the EFM, and the Gabor Fisher classifier (GFC). With suitable feature dimensions, recognition accuracies of up to 99.0% and 97.7% are reached on the ORL and Yale databases, respectively.
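To make the Gabor-plus-2DPCA idea concrete, a hedged sketch follows: a small Gabor filter bank built with OpenCV and a 2DPCA projection computed from the image covariance matrix. The filter parameters and the number of retained eigenvectors are assumptions, and the EFM stage is omitted.

```python
import cv2
import numpy as np

def gabor_bank(ksize=31, wavelengths=(4.0, 8.0), orientations=6):
    """A small bank of Gabor kernels over a few scales and orientations (assumed settings)."""
    kernels = []
    for lam in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma=lam / 2.0,
                                              theta=theta, lambd=lam, gamma=0.5))
    return kernels

def gabor_face(gray, kernels):
    """Stack the magnitude responses of every filter into one tall 'Gabor face' matrix."""
    resp = [np.abs(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, k)) for k in kernels]
    return np.vstack(resp)

def twodpca_projection(matrices, n_components=10):
    """2DPCA: leading eigenvectors of the image covariance matrix G = E[(A-mean)^T (A-mean)]."""
    mean = np.mean(matrices, axis=0)
    G = sum((a - mean).T @ (a - mean) for a in matrices) / len(matrices)
    eigvals, eigvecs = np.linalg.eigh(G)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order]

# Training: W = twodpca_projection([gabor_face(g, bank) for g in training_faces])
# Feature matrix for one face: Y = gabor_face(gray, bank) @ W
```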
APA, Harvard, Vancouver, ISO, and other styles
