Dissertations on the topic "3D face analysis"

Consult the top 35 dissertations for your research on the topic "3D face analysis".


1

Amin, Syed Hassan. "Analysis of 3D face reconstruction." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/6163.

Abstract:
This thesis investigates the long-standing problem of 3D reconstruction from a single 2D face image. Face reconstruction from a single 2D face image is an ill-posed problem involving estimation of the intrinsic and extrinsic camera parameters, light parameters, shape parameters and texture parameters. The proposed approach has many potential applications in law enforcement, surveillance, medicine, computer games and the entertainment industries. The problem is addressed within an analysis-by-synthesis framework by reconstructing a 3D face model from identity photographs. Identity photographs are a widely used medium for face identification and can be found on identity cards and passports. The novel contribution of this thesis is a new technique for creating 3D face models from a single 2D face image. The proposed method uses improved dense 3D correspondence obtained using rigid and non-rigid registration techniques, whereas existing reconstruction methods use the optical flow method for establishing 3D correspondence. The resulting 3D face database is used to create a statistical shape model. Existing reconstruction algorithms recover shape by optimizing over all the parameters simultaneously. The proposed algorithm simplifies the reconstruction problem by using a stepwise approach, reducing the dimension of the parameter space and simplifying the optimization problem. In the alignment step, a generic 3D face is aligned with the given 2D face image using anatomical landmarks. The texture is then warped onto the 3D model using the spatial alignment obtained previously. The 3D shape is then recovered by optimizing over the shape parameters while matching a texture-mapped model to the target image. This approach has a number of advantages. First, it simplifies the optimization requirements and makes the optimization more robust. Second, there is no need to accurately recover the illumination parameters. Third, there is no need to recover the texture parameters via a texture-synthesis approach. Fourth, quantitative analysis is used to improve the quality of reconstruction by improving the cost function. Previous methods used qualitative measures, such as visual analysis and face recognition rates, for evaluating reconstruction accuracy. The improvement in the performance of the cost function results from an improved feature space comprising the landmark and intensity features. Previously, the feature space had not been evaluated with respect to reconstruction accuracy, leading to inaccurate assumptions about its behaviour. The proposed approach simplifies the reconstruction problem by using only identity images, rather than expending effort on overcoming pose, illumination and expression (PIE) variations. This makes sense, as frontal face images under standard illumination conditions are widely available and can be used for accurate reconstruction. The reconstructed 3D models with texture can then be used to overcome the PIE variations.
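The alignment step described above, registering a generic 3D face to the 2D image via anatomical landmarks, can be sketched as a least-squares estimate of an affine camera from 2D-3D landmark correspondences. This is an illustrative sketch under an affine-camera simplification, not the thesis's implementation; the function name and toy data are invented.

```python
import numpy as np

def align_generic_face(X3d, x2d):
    """Estimate an affine camera mapping 3D anatomical landmarks of a
    generic face model onto their observed 2D image positions.
    X3d: (n, 3) model landmarks; x2d: (n, 2) image landmarks.
    Returns a 2x4 matrix P with x2d[i] ~= P @ [X3d[i], 1]."""
    n = X3d.shape[0]
    Xh = np.hstack([X3d, np.ones((n, 1))])      # homogeneous 3D points
    # Least-squares solution of Xh @ P.T = x2d
    Pt, *_ = np.linalg.lstsq(Xh, x2d, rcond=None)
    return Pt.T                                 # (2, 4)

# Toy check: landmarks generated by a known affine camera are recovered
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
P_true = np.array([[1.0, 0.0, 0.1, 5.0],
                   [0.0, 1.0, 0.2, 3.0]])
x = (P_true @ np.hstack([X, np.ones((8, 1))]).T).T
P_est = align_generic_face(X, x)
err = float(np.abs(P_est - P_true).max())
```

Once such an alignment is available, the image texture can be warped onto the model, as the abstract describes, before shape parameters are optimized.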
2

Lee, Jinho. "Synthesis and analysis of human faces using multi-view, multi-illumination image ensembles." Columbus, Ohio : Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1133366279.

3

Hu, Guosheng. "Face analysis using 3D morphable models." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/808011/.

Abstract:
Face analysis aims to extract valuable information from facial images. One effective approach to face analysis is analysis by synthesis: a new face image is synthesised by inferring semantic knowledge from input images. To perform analysis by synthesis, a generative model, which parameterises the sources of facial variations, is needed. A 3D Morphable Model (3DMM) is commonly used for this purpose. 3DMMs have been widely used for face analysis because the intrinsic properties of 3D faces provide an ideal representation that is immune to intra-personal variations such as pose and illumination. Given a single facial input image, a 3DMM can recover the 3D face (shape and texture) and scene properties (pose and illumination) via a fitting process. However, fitting the model to the input image remains a challenging problem. One contribution of this thesis is a novel fitting method: Efficient Stepwise Optimisation (ESO). ESO optimises all the parameters sequentially (pose, shape, light direction, light strength and texture parameters) in separate steps. A perspective camera and a Phong reflectance model are used to model the geometric projection and illumination respectively. Linear methods adapted to the camera and illumination models are proposed. This yields closed-form solutions for these parameters, leading to accurate and efficient fitting. Another contribution is an albedo-based 3D morphable model (AB3DMM). One difficulty of 3DMM fitting is recovering the illumination of the 2D image, because the proportions of the albedo and shading contributions in a pixel intensity are ambiguous. Unlike traditional methods, the AB3DMM removes the illumination component from the input image using illumination normalisation methods in a preprocessing step. This image can then be used as input to the AB3DMM fitting, which does not need to handle the lighting parameters. Thus, the fitting of the AB3DMM becomes easier and more accurate. Based on the AB3DMM and ESO, this study proposes a fully automatic face recognition (AFR) system. Unlike existing 3DMM methods, which assume the facial landmarks are known, our AFR automatically detects the landmarks that are used to initialise our fitting algorithms. Our AFR supports two types of feature extraction: holistic and local features. Experimental results show our AFR outperforms state-of-the-art face recognition methods.
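The stepwise idea behind ESO, refining one parameter group at a time while the others stay fixed, can be illustrated with a toy coordinate-descent loop. This is a hypothetical sketch (the function names and the quadratic toy cost are invented); the actual ESO derives closed-form linear solutions per step rather than taking gradient steps.

```python
import numpy as np

def numeric_grad(cost, params, name, eps=1e-5):
    """Forward-difference gradient of `cost` w.r.t. one parameter group."""
    g = np.zeros_like(params[name])
    for i in range(g.size):
        p = {k: v.copy() for k, v in params.items()}
        p[name].flat[i] += eps
        g.flat[i] = (cost(p) - cost(params)) / eps
    return g

def stepwise_fit(cost, params, order, n_passes=3, lr=0.1, n_iters=50):
    """Stepwise fitting in the spirit of ESO: each group in `order` is
    refined in its own step while the other groups stay fixed."""
    params = {k: v.astype(float).copy() for k, v in params.items()}
    for _ in range(n_passes):
        for name in order:            # e.g. pose -> shape -> light -> texture
            for _ in range(n_iters):
                params[name] -= lr * numeric_grad(cost, params, name)
    return params

# Toy quadratic "fitting" cost with two parameter groups
target = {"pose": np.array([1.0, -2.0]), "shape": np.array([0.5])}
cost = lambda p: sum(np.sum((p[k] - target[k]) ** 2) for k in p)
fit = stepwise_fit(cost, {"pose": np.zeros(2), "shape": np.zeros(1)},
                   order=["pose", "shape"])
```

The design point is the one the abstract makes: each step is a much lower-dimensional, better-conditioned problem than a joint optimisation over all parameters at once.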
4

Wei, Xiaozhou. "3D facial expression modeling and analysis with topographic information." Diss., Online access via UMI:, 2008.

5

Wang, Jing. "Reconstruction and Analysis of 3D Individualized Facial Expressions." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32588.

Abstract:
This thesis proposes a new way to analyze facial expressions through 3D scans of real-life people's faces. The expression analysis is based on learning the facial motion vectors that are the differences between a neutral face and a face with an expression. Several expression-analysis resources based on real-life face databases exist, such as the 2D image-based Cohn-Kanade AU-Coded Facial Expression Database and the Binghamton University 3D Facial Expression Database. To handle large pose variations and increase the general understanding of facial behavior, 2D image-based expression databases are not enough, while the Binghamton University 3D Facial Expression Database is mainly used for facial expression recognition and makes it difficult to compare, resolve, and extend problems related to detailed 3D facial expression analysis. Our work aims to find a new and intuitive way of visualizing the detailed point-by-point movements of a 3D face model for a facial expression. We have created our own detailed 3D facial expression database, in which each expression model has been processed to have the same structure, so that differences between different people can be compared for a given expression. The first step is to obtain identically structured but individually shaped face models. All the head models are recreated by deforming a generic model to adapt to a laser-scanned individualized face shape at both a coarse level and a fine level. We repeat this recreation method on different human subjects to establish a database. The second step is expression cloning. The motion vectors are obtained by subtracting two head models of the same subject, with and without the expression. The extracted facial motion vectors are then applied onto a different human subject's neutral face. Facial expression cloning proves to be robust and fast, as well as easy to use. The last step is analyzing the facial motion vectors obtained in the second step. First we transferred several human subjects' expressions onto a single neutral face. Then the analysis compares different expression pairs in two main regions: whole-face surface analysis and facial-muscle analysis. Through our work, in which smiling was chosen for the experiment, we find our face-scanning approach a good way to visualize how differently people move their facial muscles for the same expression. People smile in a similar manner, moving their mouths and cheeks in similar orientations, but each person shows her or his own unique way of moving. The difference between individual smiles is the difference in the movements they make.
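The expression-cloning step above reduces to simple vertex arithmetic once all head models share the same vertex structure (same count and order). A minimal sketch, with invented two-vertex toy data:

```python
import numpy as np

def motion_vectors(neutral, expressive):
    """Per-vertex displacement between a neutral and an expressive scan
    of the same subject; both arrays are (n_vertices, 3)."""
    return expressive - neutral

def clone_expression(neutral_target, vectors):
    """Apply extracted motion vectors to another subject's neutral face."""
    return neutral_target + vectors

subject_a_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
subject_a_smile   = np.array([[0.0, 0.1, 0.0], [1.0, 0.2, 0.1]])
subject_b_neutral = np.array([[0.1, 0.0, 0.0], [1.1, 0.0, 0.0]])

v = motion_vectors(subject_a_neutral, subject_a_smile)
subject_b_smile = clone_expression(subject_b_neutral, v)
```

This only works because of the first step the abstract describes: deforming one generic model to every scan so that vertex i means the same facial location on every head.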
6

Clement, Stephen J. "Sparse shape modelling for 3D face analysis." Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/8248/.

Abstract:
This thesis describes a new method for localising anthropometric landmark points on 3D face scans. The points are localised by fitting a sparse shape model to a set of candidate landmarks. The candidates are found using a feature detector that is designed using a data-driven methodology; this approach also informs the choice of landmarks for the shape model. The fitting procedure is developed to be robust to missing landmark data and spurious candidates. The feature detector and landmark choice are determined by the performance of different local surface descriptions on the face. A number of criteria are defined for a good landmark point and a good feature detector. These inform a framework for measuring the performance of various surface descriptions and the choice of parameter values in the surface-description generation. Two types of surface description are tested: curvature and spin images. These descriptions represent, in many ways, the two most common approaches to local surface description. Using the data-driven design process for surface description and landmark choice, a feature detector is developed using spin images. As spin images are a rich surface description, we are able to perform detection and candidate landmark labelling in a single step. A feature detector based on linear discriminant analysis (LDA) is developed and compared to a simpler detector used in the landmark and surface-description selection process. A sparse shape model is constructed using ground-truth landmark data; it contains only the landmark point locations and their relative positional variation. To localise landmarks, this model is fitted to the candidate landmarks using a RANSAC-style algorithm and a novel model-fitting algorithm. The results of landmark localisation show that the shape-model approach is beneficial over template-alignment approaches. Even with heavily contaminated candidate data, we are able to achieve good localisation for most landmarks.
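A RANSAC-style fit of a sparse model to contaminated candidates, in the spirit described above, might look like the following minimal sketch. It is a deliberate simplification (translation-only transform, invented toy data, hypothetical function name), not the thesis's fitting algorithm:

```python
import numpy as np

def ransac_fit(model_pts, candidates, n_iter=200, tol=0.1, seed=0):
    """Fit a translation aligning a sparse landmark model to labelled
    candidate points, tolerating spurious candidates.
    `candidates` maps landmark index -> list of candidate 2D positions."""
    rng = np.random.default_rng(seed)
    labels = list(candidates)
    best_t, best_inliers = None, -1
    for _ in range(n_iter):
        i = labels[rng.integers(len(labels))]            # sample one landmark
        c = candidates[i][rng.integers(len(candidates[i]))]
        t = np.asarray(c, float) - model_pts[i]          # hypothesised offset
        inliers = sum(                                   # consensus count
            any(np.linalg.norm(np.asarray(c2) - (model_pts[j] + t)) < tol
                for c2 in candidates[j])
            for j in labels)
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
cands = {0: [(5.0, 5.0), (9.0, 2.0)],                    # second is spurious
         1: [(6.0, 5.0)],
         2: [(5.0, 6.0), (-3.0, 0.0)]}                   # second is spurious
t, n_in = ransac_fit(model, cands)
```

The robustness the abstract claims comes from the consensus step: spurious candidates can seed a hypothesis, but only the correct transform gathers support from most landmarks.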
7

Zhao, Xi. "3D face analysis : landmarking, expression recognition and beyond." Phd thesis, Ecole Centrale de Lyon, 2010. http://tel.archives-ouvertes.fr/tel-00599660.

Abstract:
This Ph.D. thesis work is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Facial expression plays an important role both in verbal and non-verbal communication and in expressing emotions. Thus, automatic facial expression recognition has various purposes and applications and in particular is at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides a priori knowledge on the location of face landmarks, which is required by many face analysis methods, such as the face segmentation and feature extraction used for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, finally proposing an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model allows learning both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying the model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network with a structure describing the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum of the beliefs of all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometric property of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU-3DFE and Bosphorus datasets for facial landmarking, as well as on the BU-3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
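The recognition rule, identifying the expression-node state with maximum belief, can be illustrated with a toy posterior computation. All numbers, names and the independence assumption here are invented for illustration; the thesis's BBN structure and inference are richer than this:

```python
import numpy as np

def recognize(likelihoods, prior):
    """Pick the expression state with maximum belief.
    likelihoods: (n_states, n_features) P(observed feature | state),
    treated as independent for this toy example; prior: (n_states,)."""
    beliefs = prior * np.prod(likelihoods, axis=1)   # unnormalised posterior
    beliefs = beliefs / beliefs.sum()                # normalise to sum to 1
    return int(np.argmax(beliefs)), beliefs

states = ["happiness", "sadness", "surprise"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])
likelihoods = np.array([[0.9, 0.8],                  # features fit "happiness"
                        [0.1, 0.3],
                        [0.2, 0.1]])
idx, beliefs = recognize(likelihoods, prior)
```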
8

Szeptycki, Przemyslaw. "Processing and analysis of 2.5D face models for non-rigid mapping based face recognition using differential geometry tools." Phd thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00675988.

Abstract:
This Ph.D. thesis work is dedicated to 3D facial surface analysis and processing, as well as to a newly proposed 3D face recognition modality based on mapping techniques. Facial surface processing and analysis is one of the most important steps for 3D face recognition algorithms. Automatic anthropometric facial feature localization also plays an important role in face localization, facial expression recognition, face registration, etc.; thus its automation is a crucial step for 3D face processing algorithms. In this work we focused on precise and rotation-invariant landmark localization, where the landmarks are later used directly for face recognition. The landmarks are localized by combining local surface properties, expressed in terms of differential geometry tools, with a global generic face model used for validation. Since curvatures, which are differential-geometry properties, are sensitive to surface noise, one of the main contributions of this thesis is a modification of the curvature calculation method. The modification incorporates the surface noise into the calculation and helps to control the smoothness of the curvatures. The main facial points can therefore be reliably and precisely localized (100% nose tip localization at 8 mm precision) under the influence of rotations and surface noise. The modified curvature calculation method was also tested under different face-model resolutions, yielding stable curvature values. Finally, since curvature analysis leads to many facial landmark candidates, whose validation is time consuming, a facial landmark localization based on a learning technique was proposed. The learning technique helps to reject incorrect landmark candidates with high probability, thus accelerating landmark localization. Face recognition using 3D models is a relatively new subject, which has been proposed to overcome the shortcomings of the 2D face recognition modality. However, 3D face recognition algorithms tend to be more complicated. Additionally, since 3D face models describe facial surface geometry, they are more sensitive to facial expression changes. Our contribution is reducing the dimensionality of the input data by mapping 3D facial models onto a 2D domain using non-rigid, conformal mapping techniques. Given 2D images that represent facial models, all previously developed 2D face recognition algorithms can be used. In our work, conformal shape images of 3D facial surfaces were fed into 2D2 PCA, achieving a rank-one recognition rate of more than 86% on the FRGC data set. The effectiveness of all the methods has been evaluated using the FRGC and Bosphorus datasets.
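The sensitivity of curvature to surface noise, and the benefit of building smoothing into the curvature computation, can be demonstrated on a 1D profile. This is a simplified stand-in (Gaussian pre-smoothing of a sampled curve), not the thesis's modified curvature method, and all data here is synthetic:

```python
import numpy as np

def curvature_1d(y, dx, sigma_pts=0):
    """Curvature of a sampled profile y(x). Optional Gaussian pre-smoothing
    (sigma in samples) stands in for noise-aware curvature computation."""
    if sigma_pts:
        u = np.arange(-3 * sigma_pts, 3 * sigma_pts + 1) / sigma_pts
        k = np.exp(-0.5 * u ** 2)
        y = np.convolve(y, k / k.sum(), mode="same")
    dy = np.gradient(y, dx)
    d2y = np.gradient(dy, dx)
    return d2y / (1 + dy ** 2) ** 1.5

x = np.linspace(-1, 1, 401)
clean = 1 - x ** 2                                  # parabolic "nose" profile
noisy = clean + np.random.default_rng(1).normal(scale=0.002, size=x.size)

dx = x[1] - x[0]
interior = slice(50, -50)                           # avoid convolution edges
k_clean = curvature_1d(clean, dx)
err_raw = float(np.sqrt(np.mean((curvature_1d(noisy, dx)[interior]
                                 - k_clean[interior]) ** 2)))
err_smooth = float(np.sqrt(np.mean((curvature_1d(noisy, dx, sigma_pts=8)[interior]
                                    - k_clean[interior]) ** 2)))
```

Even millimetre-scale noise is amplified enormously by the second derivative in the curvature formula, which is why controlling curvature smoothness matters for reliable landmark detection.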
9

Hariri, Walid. "Contribution à la reconnaissance/authentification de visages 2D/3D." Thesis, Cergy-Pontoise, 2017. http://www.theses.fr/2017CERG0905/document.

Abstract:
3D face analysis, including 3D face recognition and 3D facial expression recognition, has become a very active area of research in recent years. Various methods using 2D image analysis have been presented to tackle these problems. 2D image-based methods are inherently limited by variability in imaging factors such as illumination and pose. The recent development of 3D acquisition sensors has made 3D data more and more available. Such data is relatively invariant to illumination and pose, but it is still sensitive to expression variation. The principal objective of this thesis is to propose efficient methods for 3D face recognition/verification and 3D facial expression recognition. First, a new covariance-based method for 3D face recognition is presented. Our method includes the following steps: the 3D facial surface is first preprocessed and aligned; uniform sampling is then applied to localize a set of feature points; around each point, we extract a covariance matrix as a local region descriptor. Two matching strategies are then proposed, and various distances (geodesic and non-geodesic) are applied to compare faces. The proposed method is assessed on three datasets: GAVAB, FRGCv2 and BU-3DFE. A hierarchical description using three levels of covariances is then proposed and validated. In the second part of this thesis, we present an efficient approach for 3D facial expression recognition using kernel methods with covariance matrices. In this contribution, we propose to use a Gaussian kernel which maps covariance matrices into a high-dimensional Hilbert space. This makes it possible to use conventional algorithms developed for Euclidean-valued data, such as SVM, on such non-linearly valued data. The proposed method has been assessed on two well-known datasets, BU-3DFE and Bosphorus, to recognize the six prototypical expressions.
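The covariance region descriptor, and a geodesic-style comparison between two such descriptors, can be sketched as follows. The log-Euclidean distance used here is one common choice of metric on symmetric positive-definite matrices, assumed for illustration; the thesis evaluates several geodesic and non-geodesic distances, and the three-column toy features are invented:

```python
import numpy as np

def region_covariance(features):
    """Covariance descriptor of a facial region: `features` is
    (n_points, d), e.g. coordinates, normals, curvatures per point."""
    return np.cov(features, rowvar=False)

def log_euclidean_dist(C1, C2):
    """Log-Euclidean distance between SPD matrices: Frobenius norm of
    the difference of their matrix logarithms."""
    def logm_spd(C):
        w, V = np.linalg.eigh(C)          # SPD: real eigendecomposition
        return (V * np.log(w)) @ V.T
    return float(np.linalg.norm(logm_spd(C1) - logm_spd(C2)))

rng = np.random.default_rng(2)
region_a = rng.normal(size=(200, 3))
region_b = region_a * 2.0                 # same shape, doubled spread
C_a, C_b = region_covariance(region_a), region_covariance(region_b)
d_self = log_euclidean_dist(C_a, C_a)     # identical regions -> distance 0
d_ab = log_euclidean_dist(C_a, C_b)
```

Mapping to matrix logarithms (or, as in the thesis's second part, through a Gaussian kernel into a Hilbert space) is what lets Euclidean tools like SVM operate on descriptors that do not live in a linear space.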
10

McCool, Christopher Steven. "Hybrid 2D and 3D face verification." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16436/1/Christopher_McCool_Thesis.pdf.

Abstract:
Face verification is a challenging pattern recognition problem. The face is a biometric that we, as humans, know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data. But the underlying structure of the face, or three-dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years, methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance- or angular-based similarity measures. However, this does not provide the most complete description of the features being compared, as the distances describe at best the covariance of the data, or the second-order statistics (for instance, Mahalanobis-based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable, and so this research examines two methods for overcoming the limitation: 1. the use of holistic difference vectors of the face, and 2. dividing the 3D face into Free-Parts. Permutations of the holistic difference vectors are formed so that more observations are obtained from a set of holistic features. Alternatively, by dividing the face into parts and considering each part separately, many observations are obtained from each face image; this is referred to as the Free-Parts approach. The extra observations from both these techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling respectively. It is shown that the feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques, classifier score fusion is then examined. This thesis also examines methods for performing classifier score fusion, which attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms to represent the same face data (multi-algorithm fusion), for instance the 2D face data, or by capturing the face data with different sensors (multi-modal fusion), for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts), and multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused is collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms, a consistent framework for fusion was developed. This framework is used to combine multiple algorithms across multiple modalities; the resulting method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
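The basic operation underlying the multi-algorithm, multi-modal and hybrid schemes above is combining normalised classifier scores, for example with a weighted sum. This sketch is illustrative only; the scores, normalisation statistics and equal weights are invented, not taken from the thesis:

```python
def zscore(scores, mean, std):
    """Normalise raw scores so outputs of different classifiers
    (e.g. a 2D and a 3D verifier) are comparable."""
    return [(s - mean) / std for s in scores]

def fuse(score_lists, weights):
    """Weighted-sum fusion of already-normalised score lists,
    one fused score per verification attempt."""
    return [sum(w * s for w, s in zip(weights, col))
            for col in zip(*score_lists)]

# Two verification attempts scored by a 2D and a 3D system (higher = accept)
s2d = zscore([0.80, 0.30], mean=0.5, std=0.2)    # -> [1.5, -1.0]
s3d = zscore([0.70, 0.20], mean=0.4, std=0.3)    # -> [1.0, -0.667]
fused = fuse([s2d, s3d], weights=[0.5, 0.5])
accept = [f > 0.0 for f in fused]                # threshold at 0
```

Hybrid fusion as described in the abstract applies the same idea twice over: combining algorithms within a modality, then combining modalities.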
11

McCool, Christopher Steven. "Hybrid 2D and 3D face verification." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16436/.

12

Drira, Hassen. "Statistical computing on manifolds for 3D face analysis and recognition." Thesis, Lille 1, 2011. http://www.theses.fr/2011LIL10075/document.

Abstract:
La reconnaissance de visage automatique offre de nombreux avantages par rapport aux autres technologies biométriques en raison de la nature non-intrusive. Ainsi, les techniques de reconnaissance faciale ont reçu une attention croissante au sein de la communauté de vision par ordinateur au cours des trois dernières décennies. Un atout majeur de scans 3D sur l'imagerie couleur 2D est que les variations de éclairage et mise à l'échelle ont moins d'influence sur les scans 3D. Toutefois, la numérisation des données souffrent souvent du problème de données manquantes à cause de l'auto-occultation ou des imperfections des technologies de numérisation. En outre, les variations dues aux expressions faciales rendent difficile la reconnaissance automatique des visages 3D. Pour être utiles dans des applications du monde réel, les approches de reconnaissance faciale 3D devraient être en mesure de reconnaitre les surfaces faciales 3D, même dans la présence de grandes déformations dues aux expressions et des données manquantes. La plupart des recherches récentes ont été dirigés vers des techniques invariantes aux expressions faciales. Ils ont toutefois dépensé moins d'efforts pour faire face aux problème des données manquantes. Dans cet thèse, nous présentons un framework commun pour faire face aux expressions et aux données manquantes. En outre, dans le même cadre, notre framework permet de calculer des moyennes surfaces qui permettent une organization hiérarchique des bases de données de visages 3D pour permettre des recherches efficaces. Dans cette thèse, nous nous concentrons sur la tâche fondamentale de la reconnaissance faciale en 3D, fournir une analyse comparative de plusieurs approches, et offrir des solutions originales pour chacun des problèmes analysés
Automatic face recognition has many benefits over other biometric technologies due to the natural, non-intrusive, and high-throughput nature of face data acquisition. Thus, techniques for face recognition have received growing attention within the computer vision community over the past three decades. In terms of a modality for face imaging, a major advantage of 3D scans over 2D color imaging is that variations in illumination and scaling have less influence on the 3D scans. However, scan data often suffer from the problem of missing parts due to self-occlusions or imperfections in scanning technologies. Additionally, variations in face data due to facial expressions pose a challenge to 3D face recognition. In order to be useful in real-world applications, 3D face recognition approaches should be able to successfully recognize face scans even in the presence of large expression-based deformations and missing data due to occlusions and pose variation. Most recent research has been directed towards expression-invariant techniques, with less effort spent on handling the missing-parts problem. The few approaches that do handle missing parts have not been evaluated on a full database containing real missing data; instead, they simulate missing parts. We present a common framework handling both large expressions and missing parts due to large pose variation. In addition, with the same framework, we are able to average surfaces and hierarchically organize databases to allow efficient searches. In the presence of occlusion, we propose to delete and restore occluded parts. The surface is first represented by radial curves (emanating from the nose tip of the 3D face). Then a basis is built using PCA for each curve. Hence, the missing part of a curve can be restored by projecting its existing part onto the basis. PCA is applied in the tangent space of the mean curve, as it is a linear space. Once an occlusion has been detected and removed, the occlusion challenge can be handled as a missing-data problem: we apply the restoration framework and then our radial-curve-based 3D face recognition algorithm.
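The curve-restoration idea described in this abstract (build a PCA basis per radial curve, then recover an occluded curve by least-squares projection of its visible samples) can be sketched on toy data. The sine-shaped synthetic "curves" and all function names below are illustrative assumptions, not the thesis's actual implementation, which works in the tangent space of the mean curve:

```python
import numpy as np

def build_pca_basis(curves, n_components=2):
    """Fit a PCA basis to training curves (one curve per row).

    Toy stand-in for the per-curve bases described above; real radial
    curves would be extracted from registered 3D face scans."""
    mean = curves.mean(axis=0)
    _, _, vt = np.linalg.svd(curves - mean, full_matrices=False)
    return mean, vt[:n_components]              # mean curve, basis rows

def restore_curve(visible_values, visible_idx, mean, basis):
    """Restore a full curve from its visible samples by least-squares
    projection of the visible part onto the PCA basis."""
    a = basis[:, visible_idx].T                 # visible rows of basis
    b = visible_values - mean[visible_idx]
    coeffs, *_ = np.linalg.lstsq(a, b, rcond=None)
    return mean + coeffs @ basis                # full restored curve

# Toy data: sine-shaped "radial curves" with random amplitudes.
rng = np.random.default_rng(0)
t = np.linspace(0.0, np.pi, 50)
train = np.sin(t) * rng.uniform(0.8, 1.2, size=(100, 1))
mean, basis = build_pca_basis(train)

full = 1.05 * np.sin(t)                         # ground-truth curve
visible = np.arange(30)                         # first 30 samples seen
restored = restore_curve(full[visible], visible, mean, basis)
print(np.abs(restored - full).max())            # tiny restoration error
```

Because the toy curves lie in a low-dimensional subspace, the occluded tail is recovered almost exactly; on real scans the restoration quality depends on how well the PCA basis captures the curve population.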
APA, Harvard, Vancouver, ISO, and other styles
13

Dagnes, Nicole. "3D human face analysis for recognition applications and motion capture." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2542.

Full text of the source
Abstract:
This thesis is intended as a geometrical study of the three-dimensional facial surface, whose aim is to provide an application framework of entities coming from the Differential Geometry context to use as facial descriptors in face analysis applications, such as face recognition (FR) and facial expression recognition (FER). Indeed, although every face is unique, all faces are similar and their morphological features are shared across individuals. Hence, extracting suitable features is a primary concern for face analysis. All the facial features proposed in this study are based only on the geometrical properties of the facial surface. These geometrical descriptors and the related entities have then been applied to the description of the facial surface in pattern recognition contexts. Indeed, the final goal of this research is to prove that Differential Geometry is a comprehensive tool for face analysis and that geometrical features are suitable to describe and compare faces and, more generally, to extract relevant information for human face analysis in different practical application fields. Finally, since face analysis has also gained great attention for clinical applications in recent decades, this work focuses on musculoskeletal disorder analysis by proposing an objective quantification of facial movements to assist maxillofacial surgery and facial motion rehabilitation. At present, different methods are employed for evaluating facial muscle function. This research work investigates the 3D motion capture system, adopting the Technology, Sport and Health platform, located in the Innovation Centre of the University of Technology of Compiègne, in the Biomechanics and Bioengineering Laboratory (BMBI).
APA, Harvard, Vancouver, ISO, and other styles
14

Alashkar, Taleb. "3D dynamic facial sequences analysis for face recognition and emotion detection." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10109/document.

Full text of the source
Abstract:
In this thesis, we have investigated the problems of identity recognition and emotion detection from animated facial 3D shapes (called 4D faces). In particular, we have studied the role of facial (shape) dynamics in revealing human identity and spontaneously exhibited emotion. To this end, we have adopted a comprehensive geometric framework for analyzing 3D faces and their dynamics across time. A sequence of 3D faces is first split into an indexed collection of short-term sub-sequences, each represented as a matrix (subspace); these subspaces are points on a special matrix manifold, the Grassmann manifold (the set of k-dimensional linear subspaces). The geometry of the underlying space is used to effectively compare 3D sub-sequences, compute statistical summaries (e.g., the sample mean) and densely quantify the divergence between subspaces. Two different representations have been proposed to address the problems of face recognition and emotion detection: (1) a dictionary (of subspaces) representation associated with Dictionary Learning and Sparse Coding techniques, and (2) a time-parameterized curve (trajectory) representation on the underlying space associated with a Structured-Output SVM classifier for early emotion detection. Experimental evaluations conducted on the publicly available BU-4DFE, BP4D-Spontaneous and Cam3D Kinect datasets illustrate the effectiveness of these representations and of the algorithmic solutions for identity recognition and emotion detection proposed in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
15

Aljarrah, Inad A. "Three Dimensional Face Recognition Using Two Dimensional Principal Component Analysis." Ohio University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1142453613.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
16

Gevaux, Lou. "3D-hyperspectral imaging and optical analysis of skin for the human face." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES035.

Full text of the source
Abstract:
Hyperspectral imaging (HSI), a non-invasive, in vivo imaging method that can be applied to measure skin spectral reflectance, has shown great potential for the analysis of skin optical properties on small, flat areas: by combining a skin model, a model of light-skin interaction and an optimization algorithm, an estimation of skin chromophore concentrations in each pixel of the image can be obtained, corresponding to quantities such as melanin and blood. The purpose of this work is to extend this method to large, non-flat areas, in particular the human face. The accurate measurement of complex objects such as the face must account for variations in illumination that result from the 3D geometry of the object, which we call irradiance drifts. Unless they are accounted for, irradiance drifts will lead to errors in the hyperspectral image analysis. In the first part of the work, we propose a measurement setup comprising a wide-field HSI camera (with an acquisition range of 400-700 nm, in 10 nm-wide wavebands) and a 3D measurement system using fringe projection. As short acquisition time is crucial for in vivo measurement, a trade-off between resolution and speed has been made so that the acquisition time remains under 5 seconds. To account for irradiance drifts, a correction method using the surface 3D geometry and radiometry principles is proposed. The irradiance received on the face is computed for each pixel of the image, and the resulting data are used to suppress the irradiance drifts in the measured hyperspectral image. This acts as a pre-processing step to be applied before image analysis. This method, however, failed to yield satisfactory results on those parts of the face almost perpendicular to the optical axis of the camera, such as the sides of the nose, and was therefore discarded in favor of an optimization algorithm robust to irradiance drifts in the analysis method. Skin analysis from the measured hyperspectral image is performed using optical models and an optimization method. Skin is modeled as a two-layer translucent material whose absorption and scattering properties are determined by its composition in chromophores. Light-skin interactions are modeled using a two-flux method. An inverse problem is solved by optimization to retrieve information about skin composition from the measured reflectance. The chosen optical models represent a trade-off between accuracy and acceptable computation time, which increases exponentially with the number of parameters in the model. The resulting chromophore maps can be added to the 3D mesh measured by the 3D-HSI camera for display purposes. In the spectral reflectance analysis method, skin scattering properties are assumed to be the same for everyone and on every part of the body, which is a shortcoming. In the second part of this work, the fringe projector originally intended for measuring 3D geometry is used to acquire the skin modulation transfer function (MTF), a quantity that yields information about both skin absorption and scattering coefficients. The MTF is measured using spatial frequency domain imaging (SFDI) and analyzed with an optical model relying on the diffusion equation to estimate skin scattering coefficients. On non-flat objects, retrieving such information independently of irradiance drifts is a significant challenge. The novelty of the proposed method is that it combines HSI and SFDI to obtain skin scattering coefficient maps of the face independently of its shape. We emphasize throughout this dissertation the importance of short acquisition time for in vivo measurement. The HSI analysis method, however, is extremely time-consuming, preventing real-time image analysis. A preliminary attempt to address this shortcoming is presented, using neural networks to replace optimization-based analysis. Initial results of this method have been promising and could drastically reduce calculation time from around an hour to about a second.
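As a much-simplified illustration of the per-pixel inverse problem described in this abstract, one can fit chromophore quantities to a spectrum with a linear Beer-Lambert absorbance model. This is a hedged stand-in only: the thesis solves a two-layer skin model with a two-flux light-transport model by optimization, and the extinction spectra below are crude synthetic shapes, not tabulated melanin/hemoglobin coefficients:

```python
import numpy as np

# Crude synthetic extinction spectra over 31 bands (400-700 nm); real
# analyses use tabulated melanin and hemoglobin coefficients.
wl = np.linspace(400.0, 700.0, 31)
eps_melanin = (wl / 400.0) ** -3.48                 # power-law-like decay
eps_blood = np.exp(-(((wl - 560.0) / 40.0) ** 2))   # hemoglobin-like bump
basis = np.stack([eps_melanin, eps_blood], axis=1)  # 31 x 2

def fit_chromophores(reflectance):
    """Least-squares estimate of (melanin, blood) quantities from one
    pixel's spectral reflectance under a Beer-Lambert model:
    absorbance = -log(R) = basis @ concentrations."""
    absorbance = -np.log(reflectance)
    coeffs, *_ = np.linalg.lstsq(basis, absorbance, rcond=None)
    return coeffs

# Forward-simulate one pixel, then invert it; mapping this over every
# pixel of a hyperspectral image would yield chromophore maps.
true = np.array([0.8, 0.5])
reflectance = np.exp(-(basis @ true))
estimated = fit_chromophores(reflectance)
print(estimated)                                    # close to [0.8, 0.5]
```

The real problem is non-linear in the model parameters, which is why the thesis resorts to iterative optimization (and, in its final part, neural networks) rather than a single linear solve.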
APA, Harvard, Vancouver, ISO, and other styles
17

Wang, Xianwang. "Single View Reconstruction for Human Face and Motion with Priors." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_diss/62.

Full text of the source
Abstract:
Single view reconstruction is fundamentally an under-constrained problem. We aim to develop new approaches to model the human face and motion with model priors that restrict the space of possible solutions. First, we develop a novel approach to recover the 3D shape from a single view image under challenging conditions, such as large variations in illumination and pose. The problem is addressed by employing techniques of non-linear manifold embedding and alignment. Specifically, the local image models for each patch of facial images and the local surface models for each patch of 3D shape are learned using a non-linear dimensionality reduction technique, and the correspondences between these local models are then learned by a manifold alignment method. Local models successfully remove the dependency on large training databases for human face modeling. By combining the local shapes, the global shape of a face can be reconstructed directly from a single linear system of equations via least squares. Unfortunately, this learning-based approach cannot be successfully applied to the problem of human motion modeling due to the internal and external variations in single view video-based marker-less motion capture. Therefore, we introduce a new model-based approach for capturing human motion using a stream of depth images from a single depth sensor. While a depth sensor provides metric 3D information, using a single sensor, instead of a camera array, results in a view-dependent and incomplete measurement of object motion. We develop a novel two-stage template fitting algorithm that is invariant to subject size and view-point variations, and robust to occlusions. Starting from a known pose, our algorithm first estimates a body configuration through temporal registration, which is used to search the template motion database for a best match. The best-match body configuration as well as its corresponding surface mesh model are deformed to fit the input depth map, filling in the part that is occluded from the input and compensating for differences in pose and body size between the input image and the template. Our approach does not require any markers, user interaction, or appearance-based tracking. Experiments show that our approaches achieve good modeling results for human face and motion, and are capable of dealing with a variety of challenges in single view reconstruction, e.g., occlusion.
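The step where a global shape is reconstructed from local patch estimates through a single linear least-squares system can be sketched as follows. The selection-matrix formulation, the toy 1D "shape", and all names are illustrative assumptions rather than the dissertation's implementation:

```python
import numpy as np

def stitch_patches(n, patches):
    """Recover a global shape vector of length n from overlapping local
    patch estimates by stacking selection equations S_i x = p_i into
    one least-squares system."""
    rows, rhs = [], []
    for idx, vals in patches:
        sel = np.zeros((len(idx), n))
        sel[np.arange(len(idx)), idx] = 1.0     # selection matrix S_i
        rows.append(sel)
        rhs.append(vals)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs),
                            rcond=None)
    return x

# Toy "global shape": two overlapping patches cover all 8 coordinates,
# so the stacked system has full column rank and recovery is exact.
truth = np.linspace(0.0, 1.0, 8)
patches = [(np.arange(0, 5), truth[0:5]),
           (np.arange(3, 8), truth[3:8])]
estimate = stitch_patches(8, patches)
print(np.abs(estimate - truth).max())           # ~0: exact recovery
```

With noisy, conflicting patch estimates the same solve returns the least-squares compromise in the overlap regions, which is what makes the one-shot linear formulation attractive.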
APA, Harvard, Vancouver, ISO, and other styles
18

Han, Xia. "Towards the Development of an Efficient Integrated 3D Face Recognition System. Enhanced Face Recognition Based on Techniques Relating to Curvature Analysis, Gender Classification and Facial Expressions." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5347.

Full text of the source
Abstract:
The purpose of this research was to enhance methods towards the development of an efficient three-dimensional face recognition system. More specifically, one of our aims was to investigate how the curvature of diagonal profiles, extracted from 3D facial geometry models, can help neutral face recognition. Another aim was to use a gender classifier on 3D facial geometry in order to reduce the search space of the database on which facial recognition is performed. 3D facial geometry with facial expression poses considerable challenges for face recognition, as identified by the face recognition research community. Thus, one aim of this study was to investigate the effects of the curvature-based method on face recognition under expression variations. Another aim was to develop techniques that can discriminate both expression-sensitive and expression-insensitive regions for face recognition based on non-neutral face geometry models. In the case of neutral face recognition, we developed a gender classification method using support vector machines based on measurements of the area and volume of selected regions of the face. This method reduces the search range of the database for a given image and hence reduces computational time. Subsequently, in the characterisation of the face images, a minimum feature set of diagonal profiles, which we call T-shape profiles, containing diacritic information was determined and extracted to characterise face models. We then used a method based on computing curvatures of selected facial regions to describe this feature set. In addition to neutral face recognition, to solve the problem arising from data with facial expressions, the curvature-based T-shape profiles were first employed and investigated for this purpose. Then, the feature sets of the expression-invariant and expression-variant regions were determined and described by geodesic and Euclidean distances, respectively. Using regression models, the correlations between expression and neutral feature sets were identified. This enabled us to discriminate expression-variant features, yielding a gain in face recognition rate. The results of the study indicate that the proposed curvature-based recognition, 3D gender classification of facial geometry, and analysis of facial expressions are capable of undertaking face recognition using a minimum set of features, improving efficiency and computation.
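A minimal sketch of the gender-classification idea described above (a support vector machine on per-region area and volume measurements): here a tiny hinge-loss linear classifier trained by gradient descent on synthetic two-feature data stands in for the SVM implementation and the real facial measurements used in the thesis.

```python
import numpy as np

def train_linear_svm(x, y, lam=1e-3, lr=0.1, epochs=300):
    """Tiny linear SVM: full-batch sub-gradient descent on the
    L2-regularized hinge loss. x: samples x features; y in {-1, +1}."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (x @ w + b)
        active = margins < 1.0                  # hinge-violating samples
        gw = lam * w - (y[active, None] * x[active]).sum(0) / len(x)
        gb = -y[active].sum() / len(x)
        w -= lr * gw
        b -= lr * gb
    return w, b

# Synthetic stand-ins for per-region (area, volume) measurements of two
# groups; real features would come from selected 3D facial regions.
rng = np.random.default_rng(2)
g1 = rng.normal([2.0, 5.0], 0.3, size=(40, 2))
g2 = rng.normal([3.0, 7.0], 0.3, size=(40, 2))
x = np.vstack([g1, g2])
x = x - x.mean(axis=0)                          # center the features
y = np.concatenate([-np.ones(40), np.ones(40)])
w, b = train_linear_svm(x, y)
accuracy = (np.sign(x @ w + b) == y).mean()
print(accuracy)
```

The classifier's prediction then selects which half of the database to search, which is how the thesis obtains its reduction in recognition time.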
APA, Harvard, Vancouver, ISO, and other styles
19

Strand, Robin. "Distance Functions and Image Processing on Point-Lattices : with focus on the 3D face- and body-centered cubic grids." Doctoral thesis, Uppsala universitet, Centrum för bildanalys, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9312.

Full text of the source
Abstract:
There are many imaging techniques that generate three-dimensional volume images today. With higher precision in image acquisition equipment, storing and processing these images requires an increasing amount of data processing capacity. Traditionally, three-dimensional images are represented by cubic (or cuboid) picture elements on a cubic grid. The two-dimensional hexagonal grid has some advantages over the traditionally used square grid: for example, fewer samples are needed to get the same reconstruction quality, it is less rotationally dependent, and each picture element has only one type of neighbor, which simplifies many algorithms. The corresponding three-dimensional grids are the face-centered cubic (fcc) grid and the body-centered cubic (bcc) grid. In this thesis, image representations using non-standard grids are examined. The focus is on the fcc and bcc grids and on tools for processing images on these grids, but distance functions and related algorithms (distance transforms and various representations of objects) are defined in a general framework allowing any point-lattice in any dimension. Formulas for point-to-point distance and conditions for metricity are given in the general case, and parameter optimization is presented for the fcc and bcc grids. Some image acquisition and visualization techniques for the fcc and bcc grids are also presented. More theoretical results define distance functions for grids of arbitrary dimensions. Fewer samples are needed to represent images on non-standard grids; thus, the huge amount of data generated by, for example, computerized tomography can be reduced by representing the images on non-standard grids such as the fcc or bcc grids. The thesis provides a toolbox that can be used to acquire, process, and visualize images on high-dimensional, non-standard grids.
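The fcc and bcc grids mentioned in this abstract are easy to explore numerically: encoding each lattice by its standard integer membership test and scanning a small window recovers the classic neighbor counts (12 nearest neighbors for fcc, 8 for bcc). The membership tests are standard characterizations; the function names are ours:

```python
import numpy as np
from itertools import product

# Standard integer characterizations: fcc = integer points with even
# coordinate sum; bcc = points whose coordinates share the same parity.
in_fcc = lambda p: int(p.sum()) % 2 == 0
in_bcc = lambda p: len({int(c) % 2 for c in p}) == 1

def nearest_neighbors(in_lattice, point):
    """All nearest lattice neighbors of `point`, found by scanning a
    small offset window and keeping the closest nonzero shell."""
    offs = [np.array(o) for o in product(range(-2, 3), repeat=3)
            if o != (0, 0, 0) and in_lattice(point + np.array(o))]
    d2 = [int(o @ o) for o in offs]
    return [o for o, d in zip(offs, d2) if d == min(d2)]

origin = np.zeros(3, dtype=int)
print(len(nearest_neighbors(in_fcc, origin)))   # 12
print(len(nearest_neighbors(in_bcc, origin)))   # 8
```

That every point has a single, highly symmetric neighbor shell is exactly what makes neighborhood-based operations such as distance transforms simpler on these grids than on the cubic grid with its three neighbor types.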
APA, Harvard, Vancouver, ISO, and other styles
20

Bolkart, Timo [Verfasser], and Stefanie [Akademischer Betreuer] Wuhrer. "Dynamic and groupwise statistical analysis of 3D faces / Timo Bolkart. Betreuer: Stefanie Wuhrer." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2016. http://d-nb.info/1104733293/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
21

Derkach, Dmytro. "Spectrum analysis methods for 3D facial expression recognition and head pose estimation." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/664578.

Full text of the source
Abstract:
Facial analysis has attracted considerable research efforts over the last decades, with a growing interest in improving the interaction and cooperation between people and computers. This makes it necessary that automatic systems be able to react to, for example, the head movements or the emotions of a user. Further, this should be done accurately and in unconstrained environments, which highlights the need for algorithms that can take full advantage of 3D data. These systems could be useful in multiple domains such as human-computer interaction, tutoring, interviewing, healthcare, and marketing. In this thesis, we focus on two aspects of facial analysis: expression recognition and head pose estimation. In both cases, we specifically target the use of 3D data and present contributions that aim to identify meaningful representations of the facial geometry based on spectral decomposition methods: 1. We propose a spectral representation framework for facial expression recognition using exclusively 3D geometry, which allows a complete description of the underlying surface that can be further tuned to the desired level of detail. It is based on the decomposition of local surface patches into their spatial frequency components, much like a Fourier transform, which are related to intrinsic characteristics of the surface. We propose the use of Graph Laplacian Features (GLFs), which result from the projection of local surface patches into a common basis obtained from the Graph Laplacian eigenspace. The proposed approach is tested on expression and Action Unit recognition, and the results confirm that the proposed GLFs produce state-of-the-art recognition rates. 2. We propose an approach for head pose estimation that allows modeling the underlying manifold that results from general rotations in 3D. We start by building a fully automatic system based on the combination of landmark detection and dictionary-based features, which obtained the best results in the FG2017 Head Pose Estimation Challenge. Then, we use a tensor representation and higher-order singular value decomposition to separate the subspaces that correspond to each rotation factor, and show that each of them has a clear structure that can be modeled with trigonometric functions. Such a representation provides a deep understanding of data behavior and can be used to further improve the estimation of the head pose angles.
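The Graph Laplacian Features described in this abstract can be sketched in a few lines: build the combinatorial Laplacian of a small patch graph, eigendecompose it, and project a per-vertex signal onto the low-frequency eigenvectors, in analogy with a Fourier transform. The toy path graph and all names are illustrative; the thesis projects local 3D surface patches onto a common Graph Laplacian eigenbasis:

```python
import numpy as np

def graph_laplacian(edges, n):
    """Combinatorial Laplacian L = D - A of a patch graph on n vertices."""
    a = np.zeros((n, n))
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    return np.diag(a.sum(axis=1)) - a

def glf(signal, lap, k):
    """Project a per-vertex signal (e.g. patch depth values) onto the
    first k Laplacian eigenvectors: low-frequency spectral coefficients,
    a sketch of Graph Laplacian Features."""
    _, v = np.linalg.eigh(lap)                  # eigenvalues ascending
    return v[:, :k].T @ signal

# Toy patch: a 6-vertex path graph carrying a smooth "depth" signal.
n = 6
edges = [(i, i + 1) for i in range(n - 1)]
lap = graph_laplacian(edges, n)
depth = np.linspace(0.0, 1.0, n)
feats = glf(depth, lap, 3)
print(feats.shape)                              # (3,)
```

Because all patches are projected onto a common eigenbasis, the resulting coefficient vectors are directly comparable across faces, which is what makes them usable as recognition features.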
APA, Harvard, Vancouver, ISO, and other styles
22

Fernandez-Abrevaya, Victoria. "Apprentissage à grande échelle de modèles de formes et de mouvements pour le visage 3D." Electronic Thesis or Diss., Université Grenoble Alpes, 2020. https://theses.hal.science/tel-03151303.

Full text of the source
Abstract:
Les modèles du visage 3D fondés sur des données sont une direction prometteuse pour capturer les subtilités complexes du visage humain, et une composante centrale de nombreuses applications grâce à leur capacité à simplifier des tâches complexes. La plupart des approches basées sur les données à ce jour ont été construites à partir d’un nombre limité d’échantillons ou par une augmentation par données synthétiques, principalement en raison de la difficulté à obtenir des scans 3D à grande échelle. Pourtant, il existe une quantité substantielle d’informations qui peuvent être recueillies lorsque l’on considère les sources publiquement accessibles qui ont été capturées au cours de la dernière décennie, dont la combinaison peut potentiellement apporter des modèles plus puissants.Cette thèse propose de nouvelles méthodes pour construire des modèles de la géométrie du visage 3D fondés sur des données, et examine si des performances améliorées peuvent être obtenues en apprenant à partir d’ensembles de données vastes et variés. Afin d’utiliser efficacement un grand nombre d’échantillons d’apprentissage, nous développons de nouvelles techniques d’apprentissage profond conçues pour gérer efficacement les données faciales tri-dimensionnelles. Nous nous concentrons sur plusieurs aspects qui influencent la géométrie du visage : ses composantes de forme, y compris les détails, ses composants de mouvement telles que l’expression, et l’interaction entre ces deux sous-espaces.Nous développons notamment deux approches pour construire des modèles génératifs qui découplent l’espace latent en fonction des sources naturelles de variation, e.g.identité et expression. La première approche considère une nouvelle architecture d’auto-encodeur profond qui permet d’apprendre un modèle multilinéaire sans nécessiter l’assemblage des données comme un tenseur complet. 
Nous proposons ensuite un nouveau modèle non linéaire basé sur l’apprentissage antagoniste qui améliore davantage la capacité de découplage. Ceci est rendu possible par une nouvelle architecture 3D-2D qui combine un générateur 3D avec un discriminateur 2D, où les deux domaines sont connectés par une couche de projection géométrique. En tant que besoin préalable à la construction de modèles basés sur les données, nous abordons également le problème de mise en correspondance d’un grand nombre de scans 3D de visages en mouvement. Nous proposons une approche qui peut gérer automatiquement une variété de séquences avec des hypothèses minimales sur les données d’entrée. Ceci est réalisé par l’utilisation d’un modèle spatio-temporel ainsi qu’une initialisation basée sur la régression, et nous montrons que nous pouvons obtenir des correspondances précises d’une manière efficace et évolutive. Finalement, nous abordons le problème de la récupération des normales de surface à partir d’images naturelles, dans le but d’enrichir les reconstructions 3D grossières existantes. Nous proposons une méthode qui peut exploiter toutes les images disponibles ainsi que les données normales, qu’elles soient couplées ou non, grâce à une nouvelle architecture d’apprentissage cross-modale. Notre approche repose sur un nouveau module qui permet de transférer les détails locaux de l’image vers la surface de sortie sans nuire aux performances lors de l’auto-encodage des modalités, en obtenant des résultats de pointe pour la tâche.
Data-driven models of the 3D face are a promising direction for capturing the subtle complexities of the human face, and a central component to numerous applications thanks to their ability to simplify complex tasks. Most data-driven approaches to date were built from either a relatively limited number of samples or by synthetic data augmentation, mainly because of the difficulty in obtaining large-scale and accurate 3D scans of the face. Yet, there is a substantial amount of information that can be gathered when considering publicly available sources that have been captured over the last decade, whose combination can potentially bring forward more powerful models. This thesis proposes novel methods for building data-driven models of the 3D face geometry, and investigates whether improved performances can be obtained by learning from large and varied datasets of 3D facial scans. In order to make efficient use of a large number of training samples, we develop novel deep learning techniques designed to effectively handle three-dimensional face data. We focus on several aspects that influence the geometry of the face: its shape components including fine details, its motion components such as expression, and the interaction between these two subspaces. We develop in particular two approaches for building generative models that decouple the latent space according to natural sources of variation, e.g. identity and expression. The first approach considers a novel deep autoencoder architecture that makes it possible to learn a multilinear model without requiring the training data to be assembled as a complete tensor. We next propose a novel non-linear model based on adversarial training that further improves the decoupling capacity.
This is enabled by a new 3D-2D architecture combining a 3D generator with a 2D discriminator, where both domains are bridged by a geometry mapping layer. As a necessary prerequisite for building data-driven models, we also address the problem of registering a large number of 3D facial scans in motion. We propose an approach that can efficiently and automatically handle a variety of sequences while making minimal assumptions on the input data. This is achieved by the use of a spatiotemporal model as well as a regression-based initialization, and we show that we can obtain accurate registrations in an efficient and scalable manner. Finally, we address the problem of recovering surface normals from natural images, with the goal of enriching existing coarse 3D reconstructions. We propose a method that can leverage all available image and normal data, whether paired or not, thanks to a new cross-modal learning architecture. Core to our approach is a novel module that we call deactivable skip connections, which makes it possible to transfer the local details from the image to the output surface without hurting the performance when autoencoding modalities, achieving state-of-the-art results for the task.
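As a rough illustration of how a geometry projection can bridge a 3D generator and a 2D discriminator, the sketch below drops the depth coordinate of generated vertices to produce 2D points an image-domain network could consume. It is a minimal stand-in under simplifying assumptions (orthographic camera, illustrative names), not the thesis's actual differentiable layer:

```python
import numpy as np

def orthographic_project(vertices, scale=1.0, translation=(0.0, 0.0)):
    """Project an N x 3 vertex array to N x 2 image coordinates.

    A toy orthographic projection: scale the x/y coordinates and drop
    depth. Real bridging layers are differentiable and typically use a
    full camera model; this only shows the idea of mapping the 3D
    domain into the 2D one.
    """
    v = np.asarray(vertices, dtype=float)
    return scale * v[:, :2] + np.asarray(translation, dtype=float)
```

A perspective or weak-perspective model could replace the projection without changing the overall 3D-to-2D role of the layer.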
Стилі APA, Harvard, Vancouver, ISO та ін.
23

Schurch, Brandt Roger. "Three-dimensional imaging and analysis of electrical trees." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/threedimensional-imaging-and-analysis-of-electrical-trees(73e032f6-3e6b-4ee9-9cc1-953a11f36cb3).html.

Повний текст джерела
Анотація:
Electrical trees are micrometre-size tubular channels of degradation in high voltage polymeric insulation, a precursor to failure of electrical power plant. Hence, electrical trees critically affect the reliability of power systems and the performance of new insulation designs. Imaging laboratory-grown electrical trees has been an important tool for studying how trees develop. Commonly, electrical trees prepared in transparent or translucent polymers are imaged using traditional optical methods. Consequently, most of the analysis has been based on two-dimensional (2D) images of trees, and thus valuable information may be lost. However, electrical trees are complex interconnected structures that require a three-dimensional (3D) approach for more complete analysis. This thesis investigates a method for imaging and analysis of electrical trees to characterise their 3D structure and provide a platform for further modelling. Laboratory-created electrical trees were imaged using X-ray Computed Tomography (XCT) and Serial Block-Face Scanning Electron Microscopy (SBFSEM), 3D imaging techniques that provide sub-micrometre spatial resolution. Virtual replicas of the trees, which are the 3D geometrical models representing the real electrical trees, were generated, and new indices to characterise the 3D structure of electrical trees were developed. These parameters were indicative of differences in tree growth, and thus they can be used to investigate patterns and classify the structure of electrical trees. The progression of the tree was analysed using cross-sections of the tree orthogonal to the growth: the number of tree channels and the area covered by them were measured. The fractal dimension of the tree was calculated from the 3D model and from the 2D projections, the latter being lower for all the tree-type structures studied. Parameters from the skeleton of the tree such as the number of nodes, segment length, tortuosity and branch angle were measured.
Most of the mean segment lengths ranged from 6 to 13 µm, which is in accordance with the 10 µm proposed by various tree-growth models. The capabilities of the XCT and SBFSEM imaging techniques were evaluated in their application to electrical trees. Bush and branch trees, including early-growth electrical trees (of length 20-40 µm), were analysed and compared using the comprehensive visualisation and characterisation tool developed. A two-stage tree-growth experiment was conducted to analyse the progression and development of tree branches using XCT: tree channels after the second stage of growth were wider than after the first, while the fractal dimension remained the same. The capabilities of XCT and SBFSEM were tested for imaging electrical trees in optically-opaque materials such as micro- and nano-filled epoxy compounds. The general structure of trees in epoxy filled up to 20 wt% micro-silica was observed using both techniques. The use of a virtual replica as the 3D geometrical model for the simulation of the electric field distribution using Finite Element Analysis (FEA) was explored in a preliminary study. A combination of the imaging techniques is proposed for a more complete structural analysis of trees. It is believed that a great impact towards understanding electrical treeing will be achieved using the 3D technical platform developed in this thesis.
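The comparison of fractal dimensions computed from 3D models and 2D projections rests on the classic box-counting estimator. The sketch below is a 2D toy version under simplifying assumptions (binary occupancy mask, power-of-two box sizes, illustrative names), not the thesis's 3D implementation:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary 2D mask by box counting.

    For each box size s, count the s-by-s boxes that contain at least
    one occupied pixel; the slope of log(count) versus log(1/s) is the
    dimension estimate.
    """
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        # Crop so the mask tiles exactly into s-by-s blocks.
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

A fully occupied region yields a dimension of 2, while sparse branching structures (like tree projections) fall between 1 and 2; the 3D analogue uses cubic boxes and gives values between 1 and 3.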
Стилі APA, Harvard, Vancouver, ISO та ін.
24

Akilo, Michele Arinze. "Design and analysis of a composite panel with ultra-thin glass faces and a 3D-printed polymeric core." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15351/.

Повний текст джерела
Анотація:
The development of high-strength ultra-thin glass is becoming very interesting for the building sector due to its remarkable mechanical characteristics. This lightweight and rather strong material, better known for smartphone screen applications, is currently being researched and developed for building solutions such as adaptive façade panels, high-performance windows and protective layers in interior architecture. However, ultra-thin glass is also quite flexible, which limits its use on its own as a reliable and safe building material. Therefore, one of the main challenges is to find a way to stiffen it. This thesis project aims at exploring a feasible solution for an ultra-thin sandwich panel with a 3D-printed stiffening core. Samples with different topologies are designed and tested in bending. A numerical model for all options helps evaluate the panels’ stiffness. In this way, a possible composite panel for a façade application is proposed. Finally, after the discussion of the observed results, further recommendations for future studies in this brand-new research area are given.
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Assali, Pierre. "Modélisation géostructurale 3D de parois rocheuses en milieu ferroviaire : application aux ouvrages en terre." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD009.

Повний текст джерела
Анотація:
Ce travail de thèse vise une optimisation des méthodologies de modélisation géostructurale, permettant d'aboutir à une meilleure gestion des aléas rocheux affectant le système ferroviaire. La caractérisation géométrique des massifs rocheux est entreprise grâce à une classification des modèles en sous-ensembles correspondant aux principales familles de discontinuités. En parallèle de cette caractérisation automatisée, une seconde approche dite manuelle a été examinée. Cette approche combine données tridimensionnelles (nuages de points denses) et support photographique (images 2D). Les données sur les discontinuités planaires, traditionnellement acquises manuellement en certains points nécessairement accessibles du massif, résultent désormais de l'analyse des modèles couvrant l'ensemble de l'ouvrage. Ce projet a permis le développement d'un outil de modélisation améliorant la connaissance du patrimoine rocheux sans engager la sécurité du personnel, ni la capacité de la ligne ferroviaire
This project aims at an optimization of geostructural modelling methodologies, leading to better knowledge and better management of the rock risk impacting the railway system. Acquired 3D models are exploited in order to convert 3D point clouds into geostructural analyses. Hence, we have developed a semi-automatic process that allows 3D models to be combined with the results of field surveys in order to provide more precise analyses of rock faces, for example, by classifying rock discontinuities into subsets according to their orientation. A second approach is proposed, combining both 3D point clouds (from LiDAR or image matching) and 2D digital images. Combining these high-quality data with the proposed automatic and manual processing methods greatly improves the geometrical analysis of rock faces, increases the reliability of structural interpretations, and enables reinforcement procedures to be optimized.
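Classifying discontinuities into subsets by orientation can be sketched as assigning facet normals to the nearest reference family. The function below is an illustrative simplification (the names, the fixed reference families and the 30° threshold are assumptions; the thesis's semi-automatic process is more involved):

```python
import numpy as np

def classify_by_orientation(normals, family_normals, max_angle_deg=30.0):
    """Assign each facet normal to the closest discontinuity family.

    Each unit normal is matched to the reference family normal with the
    smallest angular distance; normals farther than max_angle_deg from
    every family are labelled -1 (unclassified).
    """
    n = np.array(normals, dtype=float)
    f = np.array(family_normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    f /= np.linalg.norm(f, axis=1, keepdims=True)
    # Use |cos| so antiparallel normals belong to the same plane family.
    cos = np.abs(n @ f.T)
    labels = cos.argmax(axis=1)
    angles = np.degrees(np.arccos(cos.max(axis=1).clip(0.0, 1.0)))
    labels[angles > max_angle_deg] = -1
    return labels
```

In practice the family normals themselves would be found by clustering (e.g. k-means on the sphere) rather than given a priori.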
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Bonomi, Mattia. "Facial-based Analysis Tools: Engagement Measurements and Forensics Applications." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/271342.

Повний текст джерела
Анотація:
Recent advancements in technology have led to the easy acquisition and spreading of multi-dimensional multimedia content, e.g. videos, which in many cases depict human faces. From such videos, valuable information describing the intrinsic characteristics of the recorded user can be retrieved: the features extracted from the facial patch are relevant descriptors that allow for the measurement of the subject's emotional status or the identification of synthetic characters. One of the emerging challenges is the development of contactless approaches based on face analysis, aiming at measuring the emotional status of the subject without placing sensors that limit or bias his experience. This raises even more interest in the context of Quality of Experience (QoE) measurement, i.e. the measurement of the user's emotional status when subjected to a multimedia content, since it allows for retrieving the overall acceptability of the content as perceived by the end user. Measuring the impact of a given content on the user can have many implications from both the content producer and the end-user perspectives. For this reason, we pursue the QoE assessment of a user watching multimedia stimuli, i.e. 3D movies, through the analysis of his facial features acquired by means of contactless approaches. More specifically, the user's Heart Rate (HR) was retrieved by using computer vision techniques applied to the facial recording of the subject and then analysed in order to compute the level of engagement. We show that the proposed framework is effective for long video sequences, being robust to facial movements and illumination changes. We validate it on a dataset of 64 sequences where users observe 3D movies selected to induce variations in users' emotional status.
On the one hand, understanding the interaction between the user's perception of the content and his cognitive-emotional aspects offers many opportunities to content producers, who may influence people's emotional statuses according to needs that can be driven by political, social, or business interests. On the other hand, the end-user must be aware of the authenticity of the content being watched: advancements in computer rendering have allowed for the spreading of fake subjects in videos. Because of this, as a second challenge we target the identification of CG characters in videos by applying two different approaches. We firstly exploit the idea that fake characters do not present any pulse rate signal, while humans' pulse rate is expressed by a sinusoidal signal. The application of computer vision techniques on a facial video allows for the contactless estimation of the subject's HR, thus leading to the identification of signals that lack a strong sinusoidality, which represent virtual humans. The proposed pipeline allows for a fully automated discrimination, validated on a dataset consisting of 104 videos. Secondly, we make use of facial spatio-temporal texture dynamics that reveal the artefacts introduced by computer rendering techniques when creating a manipulation, e.g. face swapping, on videos depicting human faces. To do so, we consider multiple temporal video segments on which we estimate multi-dimensional (spatial and temporal) texture features. A binary decision on the joint analysis of such features is applied to strengthen the classification accuracy. This is achieved through the use of Local Derivative Patterns on Three Orthogonal Planes (LDP-TOP). Experimental analyses on state-of-the-art datasets of manipulated videos show the discriminative power of such descriptors in separating real and manipulated sequences and identifying the creation method used.
The main finding of this thesis is the relevance of facial features in describing intrinsic characteristics of humans. These can be used to retrieve significant information like the physiological response to multimedia stimuli or the authenticity of the human being itself. The application of the proposed approaches on benchmark datasets also returned good results, thus demonstrating real advancements in this research field. In addition, these methods can be extended to different practical applications, from autonomous driving safety checks to the identification of spoofing attacks, from medical check-ups when doing sports to the measurement of users' engagement when watching advertising. Because of this, we encourage further investigations in this direction, in order to improve the robustness of the methods, thus allowing for the application to increasingly challenging scenarios.
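The contactless HR estimation underlying both the engagement and the CG-detection pipelines can be illustrated, under strong simplifying assumptions, as picking the dominant spectral peak of a pulse trace extracted from the face. The sketch below shows only that principle (illustrative names and band limits, not the thesis's pipeline):

```python
import numpy as np

def estimate_heart_rate(signal, fs, lo=0.7, hi=4.0):
    """Estimate heart rate (bpm) from a pulse trace sampled at fs Hz.

    The rate is taken as the dominant spectral peak restricted to a
    physiologically plausible band (0.7-4 Hz, i.e. 42-240 bpm). A real
    rPPG pipeline would first extract the trace from facial skin pixels
    and filter motion and illumination artefacts.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][spectrum[band].argmax()]
```

A signal with no clear peak in this band (weak "sinusoidality") is exactly what the first CG-detection approach exploits to flag virtual humans.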
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Kim, Leejin. "Analysis and Construction of Engaging Facial Forms and Expressions: Interdisciplinary Approaches from Art, Anatomy, Engineering, Cultural Studies, and Psychology." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/567.

Повний текст джерела
Анотація:
The topic of this dissertation is the anatomical, psychological, and cultural examination of a human face in order to effectively construct an anatomy-driven 3D virtual face customization and action model. In order to gain a broad perspective of all aspects of a face, theories and methodology from the fields of art, engineering, anatomy, psychology, and cultural studies have been analyzed and implemented. The computer-generated facial customization and action model were designed based on the collected data. Using this customization system, a culturally specific attractive face in Korean popular culture, “kot-mi-nam (flower-like beautiful guy),” was modeled and analyzed as a case study. The “kot-mi-nam” phenomenon is overviewed in textual, visual, and contextual aspects, which reveals the gender- and sexuality-fluidity of its masculinity. The analysis and the actual development of the model organically co-construct each other, requiring an interwoven process. Chapter 1 introduces anatomical studies of a human face, psychological theories of face recognition and an attractive face, and state-of-the-art face construction projects in the various fields. Chapters 2 and 3 present the Bezier curve-based 3D facial customization (BCFC) and the Multi-layered Facial Action Model (MFAM), based on the analysis of human anatomy, to achieve a cost-effective yet realistic quality of facial animation without using 3D scanned data. In the experiments, results for the facial customization for gender, race, fat, and age showed that BCFC achieved enhanced performance of 25.20% compared to the existing program Facegen, and 44.12% compared to Facial Studio. The experimental results also proved the realistic quality and effectiveness of MFAM compared with the blend-shape technique, by enhancing 2.87% and 0.03% of facial area for happiness and anger expressions per second, respectively.
In Chapter 4, according to the analysis based on BCFC, the 3D face of an average kot-mi-nam is close to gender neutral (male: 50.38%, female: 49.62%), and Caucasian (66.42-66.40%). Culturally-specific images can be misinterpreted in different cultures, due to their different languages, histories, and contexts. This research demonstrates that facial images can be affected by the cultural tastes of the makers and can also be interpreted differently by viewers in different cultures.
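The curve primitive underlying a Bezier-based customization system such as BCFC is the cubic Bezier. A minimal evaluation routine (illustrative only, not the dissertation's code) is:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].

    p0 and p3 are the endpoints; p1 and p2 are control points that pull
    the curve without lying on it. Chains of such curves can describe
    facial contours whose control points become customization handles.
    """
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3
```

Morphing a face then amounts to interpolating or offsetting control points, which is far cheaper than manipulating dense scanned geometry.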
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Cacchi, Alberto. "Analisi di sensibilità per la valutazione di driver aritmici con catetere ad alta risoluzione." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22987/.

Повний текст джерела
Анотація:
Atrial Fibrillation (AF) is the most common type of cardiac arrhythmia, characterised by an irregular activation of the atria, which lose the ability to contract in a coordinated way. Despite the great efforts of the scientific community to improve the efficacy of AF therapies, these remain suboptimal because the mechanisms sustaining the pathology have not been clearly identified. In particular, the theory of rotors is presented as a principal mechanism for the initiation and maintenance of AF. In this thesis work, a system was implemented that analyses electroanatomical data acquired with the high-resolution Advisor™ HD Grid catheter, in the electrophysiology lab of the Cardiology Unit of the “Bufalini” hospital in Cesena, during transcatheter ablation procedures on patients affected by AF. The aim is to evaluate the electrical activity of the atrium and, in particular, to identify fixed and mobile rotors. They were localised with a two-step procedure: 1) preprocessing of the signals acquired with the Advisor™ HD Grid catheter and construction of 3D phase maps; 2) estimation of the phase singularities (PS, points of phase inversion from -π to π) and detection of stable rotors. Only PSs with a temporal persistence longer than twice the mean dominant period of the reference segment were selected as rotors. After a sensitivity analysis on two important parameters (“bucomin”, the temporal distance between two successive PSs, and “distmin”, the spatial distance between them) and the consequent choice of their ideal values, the signals acquired from 3 different patients were processed. Some of the detected rotors had durations as long as the entire acquisition. This result is an important starting point and, if confirmed by much broader analyses, could validate the theory of fixed and mobile rotors as the mechanism underlying the generation and maintenance of AF.
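The phase maps on which phase-singularity detection operates are commonly built from the instantaneous phase of each electrogram. The sketch below computes it via an FFT-based Hilbert transform, as an illustration of this first step only (the spatial PS detection and temporal-persistence filtering of the thesis are not shown, and the function name is illustrative):

```python
import numpy as np

def instantaneous_phase(x):
    """Instantaneous phase of a real signal via the analytic signal.

    The Hilbert transform is implemented in the frequency domain:
    negative frequencies are zeroed and positive ones doubled, so the
    inverse FFT yields the analytic signal whose angle is the phase.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    X = np.fft.fft(x - x.mean())
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    return np.angle(analytic)
```

On a grid of electrodes, a phase singularity is a point around which this phase wraps through a full -π to π cycle, which is what the rotor-detection step then tracks over time.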
Стилі APA, Harvard, Vancouver, ISO та ін.
30

"Automatic segmentation and registration techniques for 3D face recognition." Thesis, 2008. http://library.cuhk.edu.hk/record=b6074674.

Повний текст джерела
Анотація:
A 3D range image acquired by 3D sensing can explicitly represent a three-dimensional object's shape regardless of the viewpoint and lighting variations. This technology has great potential to resolve the face recognition problem eventually. An automatic 3D face recognition system consists of three stages: facial region segmentation, registration and recognition. The success of each stage influences the system's ultimate decision. Lately, research efforts are mainly devoted to the last recognition stage in 3D face recognition research. In this thesis, our study mainly focuses on segmentation and registration techniques, with the aim of providing a more solid foundation for future 3D face recognition research.
We firstly propose an automatic 3D face segmentation method. This method is based on deep understanding of 3D face image. Concepts of proportions of the facial and nose regions are acquired from anthropometrics for locating such regions. We evaluate this segmentation method on the FRGC dataset, and obtain a success rate as high as 98.87% on nose tip detection. Compared with results reported by other researchers in the literature, our method yields the highest score.
Then we propose a fully automatic registration method that can handle facial expressions with high accuracy and robustness for 3D face image alignment. In our method, the nose region, which is relatively more rigid than other facial regions in the anatomical sense, is automatically located and analyzed for computing the precise location of a symmetry plane. Extensive experiments have been conducted using the FRGC (V1.0 and V2.0) benchmark 3D face dataset to evaluate the accuracy and robustness of our registration method. Firstly, we compare its results with two other registration methods. One of these methods employs manually marked points on visualized face data and the other is based on the use of a symmetry plane analysis obtained from the whole face region. Secondly, we combine the registration method with other face recognition modules and apply them in both face identification and verification scenarios. Experimental results show that our approach performs better than the other two methods. For example, 97.55% Rank-1 identification rate and 2.25% EER score are obtained by using our method for registration and the PCA method for matching on the FRGC V1.0 dataset. All these results are the highest scores ever reported using the PCA method applied to similar datasets.
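The PCA matching module used for identification can be sketched as eigenface-style nearest-neighbour search in the principal subspace. The toy version below (illustrative names, tiny synthetic vectors instead of range images) shows the principle only:

```python
import numpy as np

def pca_identify(gallery, probe, k=2):
    """Rank-1 identification with PCA ('eigenfaces').

    Project the gallery and the probe onto the top-k principal
    components of the centred gallery, then return the index of the
    nearest gallery sample in that subspace.
    """
    G = np.asarray(gallery, dtype=float)      # rows are face vectors
    mean = G.mean(axis=0)
    Gc = G - mean
    # Principal axes come from the SVD of the centred gallery.
    _, _, vt = np.linalg.svd(Gc, full_matrices=False)
    W = vt[:k].T                              # d x k projection basis
    g_proj = Gc @ W
    p_proj = (np.asarray(probe, dtype=float) - mean) @ W
    return int(np.argmin(np.linalg.norm(g_proj - p_proj, axis=1)))
```

The quality of the registration step directly determines how comparable these projected vectors are, which is why better alignment improves the reported Rank-1 rates.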
Tang, Xinmin.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3616.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 109-117).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Barbosa, Rui Filipe de Arvins. "Accuracy Analysis of Region-Based 2D and 3D Face Recognition - Comparison of Nasal and Mask-Wearing Ocular Regions." Master's thesis, 2020. http://hdl.handle.net/10316/93944.

Повний текст джерела
Анотація:
Dissertação de Mestrado Integrado em Engenharia Electrotécnica e de Computadores apresentada à Faculdade de Ciências e Tecnologia
A evolução dos sistemas de FR que tem ocorrido recentemente foi em grande parte consequência da evolução da tecnologia disponível, permitindo incluir novas análises 3D combinadas com os métodos 2D já desenvolvidos para sistemas de FR, aliado ao desenvolvimento e melhoramento de novos algoritmos de machine learning. Apesar das últimas conquistas dos sistemas de FR, recentes mudanças de hábitos, como a generalização da utilização da máscara como consequência da pandemia de COVID-19, representam um novo desafio para os algoritmos de FR. A maioria dos métodos não foi testada nesta nova realidade, tornando um estudo atualizado sobre FR fundamental para entender se poderão ser reutilizados ou se se encontram obsoletos. Neste trabalho, um modelo clássico de deteção de features desenvolvido por Emambakhsh & Evans et al. [1], baseado em patches nasais esféricos combinados com uma NN projetada e personalizada, é aplicado com o objetivo de analisar possíveis aplicações sobre uma população multicultural e diversificada, como foi proposto pelo artigo. Para adequar a tese à nova realidade, foram realizados testes para comprovar que algoritmos focados na região ocular alcançam valores de sucesso semelhantes quando comparados com a região nasal, de forma a superar a oclusão da mesma como consequência da utilização de máscara devido à pandemia COVID. Uma segunda versão do sistema FR inicialmente implementado para demonstrar o primeiro objetivo foi projetada, tendo sido demonstrado que estes efetivamente mantêm uma precisão comparável no domínio 3D.
The evolution of Facial Recognition (FR) systems that has occurred in recent times was largely a consequence of the evolution of the available technology, allowing new 3-dimensional analyses to be included alongside the 2-dimensional methods already developed in FR systems, combined with the development of new and improved machine learning algorithms. Despite the latest achievements of FR systems, recent habit changes, such as the generalization of face covering as a consequence of the COVID-19 pandemic, present a new challenge to FR algorithms. The majority of the methods have not been tested in this new reality, making an updated survey of FR fundamental to understand whether they can be reused or are obsolete. In this work a classic feature extractor algorithm developed by Emambakhsh & Evans et al. [1], based on spherical patches and combined with a designed and personalized Neural Network (NN), is applied with the objective of demonstrating the importance of the nasal region for 3D FR algorithms, as stated in the article. In order to adapt the research to the new reality, tests were performed to prove that algorithms focused on the ocular region reach similar success rates when compared with the nasal region, in order to overcome the nose occlusion caused by face coverings due to the COVID pandemic. A second version of the FR system built for the first objective was implemented, demonstrating that these algorithms effectively have comparable accuracy in the 3D domain.
32

Jiang, Jin-an (江金安). "The Computer System Development For Tunnel Face Image Analysis and 3D Display." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/77791260701962768658.

Full text of the source
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Construction Engineering
88
The "NATM", an excavation method for rock tunnels, is popularly used at the present time. Traditionally, the geological record is made at the tunnel site by a geologist or experienced engineers. When geological conditions are recorded by the naked eye in a short time, omissions and misjudgments are unavoidable, and producing a sectional or expanded drawing is difficult and time-consuming. Furthermore, a 2D drawing conveys little of the real 3D space. For this reason, the purpose of this study is to develop a computer system named "Tunnel Face Image Analysis and 3D Display System". Continuous tunnel face images are filmed at the tunnel site and stacked using 3D display technology to show the orientation of weak planes, including strike and dip. The system was developed using Borland C++ Builder 5.0, the 3D display software "Slicer Dicer", and Microsoft Access 97. It improves the traditional geological record and aids information communication and feedback. Image preprocessing functions, including enhancement and segmentation, enhance the weak planes and extract them from the image. Images are interpolated between two excavation rounds, and 3D images are reconstructed to show the orientation of the weak planes. The 3D visual effect helps engineers understand the geological conditions more easily and clearly.
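The stacking-and-interpolation step described above can be sketched as a toy example. The simple linear grey-level blend and the 4x4 images below are assumptions for illustration, not the system's actual interpolation routine.

```python
import numpy as np

def interpolate_rounds(face_a, face_b, n_slices):
    """Linearly interpolate intermediate slices between two face images.

    A toy sketch of between-round interpolation: blend two grey-level
    tunnel-face images so the stacked result approximates the rock mass
    between consecutive excavation rounds.
    """
    weights = np.linspace(0.0, 1.0, n_slices)
    return np.stack([(1 - w) * face_a + w * face_b for w in weights])

# Two hypothetical 4x4 grey-level face images (0 = dark, 255 = bright).
round_1 = np.zeros((4, 4))
round_2 = np.full((4, 4), 255.0)

# A 5-slice volume: first slice equals round_1, last equals round_2.
volume = interpolate_rounds(round_1, round_2, n_slices=5)
```

Stacking such interpolated slices is what lets a 3D viewer trace a weak plane through the volume and read off its strike and dip.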
33

Masi, Iacopo. "From motion to faces: 3D-assisted automatic analysis of people." Doctoral thesis, 2014. http://hdl.handle.net/2158/853304.

Full text of the source
Abstract:
This work proposes new computer vision algorithms for recognizing people by exploiting the face and the imaged appearance of the body. Several computer vision problems are covered: tracking, face recognition, and person re-identification.
34

Mohammadzade, Narges Hoda. "Two- and Three-dimensional Face Recognition under Expression Variation." Thesis, 2012. http://hdl.handle.net/1807/32773.

Full text of the source
Abstract:
In this thesis, the expression variation problem in two-dimensional (2D) and three-dimensional (3D) face recognition is tackled. While discriminant analysis (DA) methods are effective solutions for recognizing expression-variant 2D face images, they are not directly applicable when only a single sample image per subject is available. This problem is addressed by introducing expression subspaces, which can be used to synthesize new expression images for subjects with only one sample image. It is proposed that by augmenting a generic training set with the gallery images and their synthesized new expressions, and then training DA methods on this new set, face recognition performance can be significantly improved. An important advantage of the proposed method is its simplicity: the expression of an image is transformed simply by projecting it into another subspace. The proposed solution can also be used in general pattern recognition applications.

The method extends to 3D face recognition, where expression variation is an even more serious issue. However, DA methods cannot be readily applied to 3D faces because of the lack of a proper alignment method for 3D faces. To solve this issue, a method is proposed for sampling the points of the face that correspond to the same facial features across all faces, denoted the closest-normal points (CNPs). It is shown that the performance of linear discriminant analysis (LDA), applied to such an aligned representation of 3D faces, is significantly better than that of state-of-the-art methods, which rely on one-by-one registration of each probe face to every gallery face. Furthermore, as an important finding, it is shown that the surface normal vectors of the face provide more discriminatory information than the coordinates of the points. In addition, the expression subspace approach is used for the recognition of 3D faces from a single sample: by constructing expression subspaces from the surface normal vectors at the CNPs, the surface normal vectors of a 3D face with a single sample can be synthesized under other expressions. As a result, by improving the estimation of the within-class scatter matrix with the synthesized samples, a significant improvement in recognition performance is achieved.
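The claim that an expression is transformed "simply by projecting it into another subspace" can be illustrated with a minimal linear-algebra sketch. The basis and face vectors below are random stand-ins; a real expression subspace would be learned (e.g. via PCA) from training faces under the target expression.

```python
import numpy as np

def project_to_subspace(x, basis, mean):
    """Project a vectorized face onto a linear subspace.

    Sketch of the expression-subspace idea: with an orthonormal basis
    (columns of `basis`) spanning faces under a target expression,
    projection maps an input face to its closest point in that subspace.
    """
    centred = x - mean
    return basis @ (basis.T @ centred) + mean

# Toy data: an orthonormal 2D subspace inside a 5-dimensional face space.
rng = np.random.default_rng(1)
basis, _ = np.linalg.qr(rng.normal(size=(5, 2)))  # 5x2, orthonormal columns
mean = rng.normal(size=5)

face = rng.normal(size=5)           # a hypothetical vectorized face
synthesized = project_to_subspace(face, basis, mean)
```

The projection is idempotent (projecting a second time changes nothing), which is the sense in which the transformed face already "lives in" the target expression subspace.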
35

Villa, C., Jo Buckberry, C. Cattaneo, B. Frohlich, and N. Lynnerup. "Quantitative analysis of the morphological changes of the pubic symphyseal face and the auricular surface and implications for age at death estimation." 2015. http://hdl.handle.net/10454/7176.

Full text of the source
Abstract:
Age estimation methods are often based on the age-related morphological changes of the auricular surface and the pubic bone. In this study, a mathematical approach to quantifying these changes was tested by analyzing the curvature variation on 3D models from CT and laser scans. The sample consisted of the 24 Suchey–Brooks (SB) pubic bone casts, 19 auricular surfaces from the Buckberry and Chamberlain (BC) "recording kit", and 98 pelvic bones from the Terry Collection (Smithsonian Institution). Strong and moderate correlations between phase and curvature were found for the SB casts (ρ 0.60–0.93) and the BC "recording kit" (ρ 0.47–0.75); moderate and weak correlations were found for the Terry Collection bones (pubic bones: ρ 0.29–0.51, auricular surfaces: ρ 0.33–0.50), though with large individual variability and overlapping curvature values between adjacent decades. The new procedure, which requires no expert judgment from the operator, achieved correlations similar to those found with the classic methods.
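The phase-versus-curvature correlations above are reported as Spearman's ρ, which can be reproduced in miniature. The phase scores and curvature values below are invented, and this numpy-only rank correlation ignores tied ranks (real analyses would use an implementation with tie handling, such as `scipy.stats.spearmanr`).

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson's r computed on ranks.

    Numpy-only sketch without tie handling, mirroring the kind of
    phase-vs-curvature correlation reported in the study.
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical data: SB phase scores and a mean curvature value per bone.
phases = np.array([1, 2, 3, 4, 5, 6])
curvature = np.array([0.10, 0.14, 0.13, 0.21, 0.25, 0.30])  # invented

rho = spearman_rho(phases, curvature)
```

A perfectly monotone relationship gives ρ = 1; the single rank swap in the toy curvature values pulls ρ just below 1, much as biological variability does in the real sample.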
We offer discounts on all premium plans to authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography