A selection of scholarly literature on the topic "3D face analysis"


Consult the lists of relevant articles, books, dissertations, theses and other scholarly sources on the topic "3D face analysis".


Journal articles on the topic "3D face analysis"

1

Venkatakrishnan Ragu, D., C. Hariram, N. Anantharaj, and A. Muthulakshmi. "3D Face Recognition with Occlusions Using Fisher Faces Projection." Applied Mechanics and Materials 573 (June 2014): 442–46. http://dx.doi.org/10.4028/www.scientific.net/amm.573.442.

Abstract:
In recent years, the 3D face has become a biometric modality for security applications. Occlusions covering the facial surface are difficult to handle: occlusion means the blocking of face images by objects such as sunglasses, kerchiefs, hands, or hair, and it can also arise from facial expressions and pose. Two problems are considered: (i) occlusion handling for surface registration, and (ii) missing-data handling for classification. For registration, an adaptively-selected-model based registration scheme is used; after registration, occlusions are detected and removed. To handle the missing data, a masking strategy called masked projection, here a Fisherfaces projection, is applied. Registration based on the adaptively selected model, together with the masked analysis, yields an occlusion-robust face recognition system.
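The masked-projection idea can be sketched in a few lines of numpy (a hypothetical illustration, not the authors' implementation: `W` stands for a precomputed Fisherfaces basis, and the occlusion mask is assumed to be already detected):

```python
import numpy as np

def masked_projection(x, W, mask):
    """Project a face vector x onto subspace basis W using only the
    non-occluded dimensions indicated by the boolean mask.

    Occluded entries of x are ignored: the projection coefficients are
    the least-squares solution restricted to the valid rows of W.
    """
    Wm = W[mask]            # keep rows for visible pixels only
    xm = x[mask]
    coeffs, *_ = np.linalg.lstsq(Wm, xm, rcond=None)
    return coeffs

# Tiny illustration with a random 2D basis in a 6-dimensional space.
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 2))
c_true = np.array([1.5, -0.5])
x = W @ c_true
mask = np.array([True, True, True, True, False, False])  # last 2 dims "occluded"
c_est = masked_projection(x, W, mask)
print(np.allclose(c_est, c_true))  # -> True
```

Because the visible rows still span the subspace, the coefficients are recovered exactly even though two dimensions were discarded.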
2

Ramasubramanian, M., and M. A. Dorai Rangaswamy. "ANALYSIS OF 3D FACE RECONSTRUCTION." International Journal on Intelligent Electronic Systems 8, no. 1 (2014): 14–21. http://dx.doi.org/10.18000/ijies.30134.

3

Ullah, Zabeeh, Imran Mumtaz, and Muhammad Sajid Khan. "Analysis of 3D Face Modeling." International Journal of Signal Processing, Image Processing and Pattern Recognition 8, no. 11 (November 30, 2015): 7–14. http://dx.doi.org/10.14257/ijsip.2015.8.11.02.

4

Abbas, Hawraa H., Bilal Z. Ahmed, and Ahmed Kamil Abbas. "3D Face Factorisation for Face Recognition Using Pattern Recognition Algorithms." Cybernetics and Information Technologies 19, no. 2 (June 1, 2019): 28–37. http://dx.doi.org/10.2478/cait-2019-0013.

Abstract:
The face is the preferred biometric for person recognition and identification applications, because identifying a person by the face is an innate human habit. In contrast to 2D face recognition, 3D face recognition is largely robust to illumination variation, facial cosmetics, and face pose changes. Traditional 3D face recognition methods describe shape variation across the whole face using holistic features. Taking into account facial regions that remain unchanged across expressions, however, can yield a high-performance 3D face recognition system. In this research, the recognition analysis is based on defining a set of coherent parts, which can be considered latent factors in the face shape space. A Non-negative Matrix Factorisation technique is used to segment the 3D faces into coherent regions. The best recognition performance is achieved when the vertices of 20 face regions are used as the feature vector for the recognition task. The region-based 3D face recognition approach achieves a 96.4% recognition rate on the FRGCv2 dataset.
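The segmentation step can be illustrated with a plain multiplicative-update NMF (a minimal numpy sketch on toy data, not the authors' code; real input would be stacked, corresponded face-vertex data):

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9):
    """Plain multiplicative-update NMF: X (m x n, non-negative) ~ W @ H."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy data: 40 samples over 10 "vertices" generated from 2 latent parts.
rng = np.random.default_rng(1)
B = np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]], float)
X = rng.random((40, 2)) @ B
W, H = nmf(X, k=2)
parts = H.argmax(axis=0)   # assign each vertex to its dominant factor
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(parts, rel_err)
```

The `argmax` over the factor loadings is one simple way to turn the factorisation into a hard segmentation of vertices into regions.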
5

Goto, Lyè, Wonsup Lee, Toon Huysmans, Johan F. M. Molenbroek, and Richard H. M. Goossens. "The Variation in 3D Face Shapes of Dutch Children for Mask Design." Applied Sciences 11, no. 15 (July 25, 2021): 6843. http://dx.doi.org/10.3390/app11156843.

Abstract:
The use of 3D anthropometric data of children's heads and faces has great potential in the development of protective gear and medical products that need to provide a close fit in order to function well. Given the lack of detailed data of this kind, the aim of this study is to map the size and shape variation of Dutch children's heads and faces and investigate possible implications for the design of a ventilation mask. In this study, a dataset of heads and faces of 303 Dutch children aged six months to seven years, consisting of traditional measurements and 3D scans, was analysed. A principal component analysis (PCA) of facial measurements was performed to map the variation of the children's face shapes. The first principal component describes the overall size, whilst the second principal component captures the more width-related variation of the face. After establishing a homology between the 3D-scanned face shapes, a second principal component analysis was done on the point coordinates, revealing the most prominent variations in 3D shape within the sample.
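The measurement-based PCA step can be sketched as follows (an illustrative numpy example on synthetic measurements, not the study's data; the dominant "size" factor mimics the reported first principal component):

```python
import numpy as np

def pca(X):
    """PCA via SVD on mean-centred data; returns scores, components,
    and the fraction of variance explained by each component."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * s                       # projections onto principal axes
    explained = s**2 / np.sum(s**2)
    return scores, Vt, explained

# Toy measurement table (head length / head width / face height) for
# 100 subjects, dominated by a shared overall "size" factor.
rng = np.random.default_rng(2)
size = rng.normal(0, 3, (100, 1))
X = size @ np.array([[1.0, 0.8, 0.9]]) + rng.normal(0, 0.3, (100, 3))
scores, components, explained = pca(X)
print(explained[0] > 0.8)   # PC1 captures the shared size variation
```

When the measurements co-vary with an overall size factor, that factor dominates the first component, matching the pattern the study reports.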
6

Colombo, Alessandro, Claudio Cusano, and Raimondo Schettini. "3D face detection using curvature analysis." Pattern Recognition 39, no. 3 (March 2006): 444–55. http://dx.doi.org/10.1016/j.patcog.2005.09.009.

7

Sheng, Dao Qing, and Hua Cheng. "3D Face Recognition in the Conception of Sparse Representation." Applied Mechanics and Materials 278-280 (January 2013): 1275–81. http://dx.doi.org/10.4028/www.scientific.net/amm.278-280.1275.

Abstract:
In this paper, a novel 3D face recognition method is proposed from the sparse representation point of view. Under the framework of sparse representation, the recognition problem is transformed into an L0-norm minimization problem. Three types of facial geometric features are extracted to describe 3D faces. Based on the extracted features, 3D face recognition is conducted by applying the ranking strategy of Fisher linear discriminant analysis. Experiments on the BJUT-3D dataset demonstrate the effectiveness of the proposed method.
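The L0-norm minimization at the core of sparse-representation recognition is commonly approximated greedily; the sketch below uses Orthogonal Matching Pursuit on a toy dictionary (an illustration of the general technique, not the paper's exact solver):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedy approximation of the
    L0 problem  min ||c||_0  s.t.  D c = y."""
    residual = y.copy()
    support = []
    c = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        support.append(j)
        sub, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sub
    c[support] = sub
    return c

# Dictionary of 8 unit-norm "training faces" (columns); the probe is
# an exact copy of atom 3, so the sparse code should pick index 3.
rng = np.random.default_rng(3)
D = rng.standard_normal((20, 8))
D /= np.linalg.norm(D, axis=0)
y = D[:, 3]
c = omp(D, y, n_nonzero=1)
print(np.argmax(np.abs(c)))   # -> 3
```

In sparse-representation classification, the probe is then assigned to the class whose training atoms yield the smallest reconstruction residual.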
8

Zheng, Siming, Rahmita Wirza OK Rahmat, Fatimah Khalid, and Nurul Amelina Nasharuddin. "3D texture-based face recognition system using fine-tuned deep residual networks." PeerJ Computer Science 5 (December 2, 2019): e236. http://dx.doi.org/10.7717/peerj-cs.236.

Abstract:
As the technology for 3D photography has developed rapidly in recent years, an enormous amount of 3D images has been produced, and face recognition is one of the resulting directions of research. Maintaining accuracy as the amount of data grows is crucial in 3D face recognition. Traditional machine learning methods can be used to recognize 3D faces, but their recognition rates decline rapidly as the number of 3D images increases; classifying large amounts of 3D image data with them is time-consuming, expensive, and inefficient. Deep learning methods have therefore become the focus of attention in 3D face recognition research. In our experiment, an end-to-end face recognition system based on 3D face texture is proposed, combining geometric invariants, histograms of oriented gradients, and fine-tuned residual neural networks. The research shows that, evaluated on the FRGC-v2 dataset, as the number of fine-tuned ResNet layers is increased, the best Top-1 accuracy reaches 98.26% and the Top-2 accuracy 99.40%. The proposed framework requires fewer iterations than traditional methods. The analysis suggests that applying the proposed recognition framework to large amounts of 3D face data could significantly improve recognition decisions in realistic 3D face scenarios.
9

Liu, Jun, Qingsong Zhang, An Liu, and Guanghui Chen. "Stability Analysis of the Horseshoe Tunnel Face in Rock Masses." Materials 15, no. 12 (June 17, 2022): 4306. http://dx.doi.org/10.3390/ma15124306.

Abstract:
Accurately estimating the stability of horseshoe tunnel faces remains a challenge, especially when excavating in rock masses. This study proposes an analytical model to analyze the stability of the horseshoe tunnel face in rock masses. Based on discretization and "point-by-point" techniques, a rotational failure model for horseshoe tunnel faces is first proposed. Using this failure model, the upper-bound limit analysis method is then adopted to determine the limit support pressure of the tunnel face under the nonlinear Hoek–Brown failure criterion, and the calculated results are validated by comparison with numerical results. Finally, the effects of the rock properties on the limit support pressure and the 3D failure surface are discussed. The results show that (1) compared with numerical simulation, the proposed method is an efficient and accurate approach to evaluating the face stability of the horseshoe tunnel; (2) the parametric analysis shows that the normalized limit support pressure of the tunnel face decreases with increasing geological strength index GSI, Hoek–Brown coefficient mi, and uniaxial compressive strength σci, and with decreasing rock disturbance coefficient Di; and (3) a larger 3D failure surface is associated with a high value of the normalized limit support pressure.
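The nonlinear Hoek–Brown criterion referred to above has the standard generalised (2002) form σ1 = σ3 + σci (mb·σ3/σci + s)^a, with mb, s and a derived from GSI, mi and the disturbance factor D. A small sketch (parameter values here are illustrative only, not taken from the paper):

```python
import math

def hoek_brown(sigma3, sigma_ci, GSI, mi, D):
    """Generalised Hoek-Brown criterion (2002 edition): major principal
    stress at failure for a given confining stress sigma3 (same units
    as the uniaxial compressive strength sigma_ci)."""
    mb = mi * math.exp((GSI - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((GSI - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-GSI / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Illustrative rock mass: GSI = 60, mi = 10, undisturbed (D = 0),
# sigma_ci = 50 MPa, confining stress 1 MPa.
print(hoek_brown(sigma3=1.0, sigma_ci=50.0, GSI=60, mi=10, D=0))
```

Consistent with the paper's parametric trends, the predicted strength grows with GSI (a weaker, more disturbed mass fails at lower stress).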
10

Gallucci, Alessio, Dmitry Znamenskiy, Yuxuan Long, Nicola Pezzotti, and Milan Petkovic. "Generating High-Resolution 3D Faces and Bodies Using VQ-VAE-2 with PixelSNAIL Networks on 2D Representations." Sensors 23, no. 3 (January 19, 2023): 1168. http://dx.doi.org/10.3390/s23031168.

Abstract:
Modeling and representing 3D shapes of the human body and face is a prominent field due to its applications in the healthcare, clothing, and movie industries. In our work, we tackled the problem of 3D face and body synthesis by reducing 3D meshes to 2D image representations. We show that the face can naturally be modeled on a 2D grid. For the more challenging 3D body geometries, we proposed a novel non-bijective 3D–2D conversion method representing the 3D body mesh as a plurality of rendered projections on the 2D grid. We then trained a state-of-the-art vector-quantized variational autoencoder (VQ-VAE-2) to learn a latent representation of 2D images and fit a PixelSNAIL autoregressive model to sample novel synthetic meshes. We evaluated our method against a classical one based on principal component analysis (PCA) by sampling from the empirical cumulative distribution of the PCA scores. We used the empirical distributions of two commonly used metrics, specificity and diversity, to quantitatively demonstrate that the synthetic faces generated with our method are statistically closer to real faces than the PCA ones. Our experiment on the 3D body geometry requires further research to match the test-set statistics but shows promising results.

Dissertations on the topic "3D face analysis"

1

Amin, Syed Hassan. "Analysis of 3D face reconstruction." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/6163.

Abstract:
This thesis investigates the long-standing problem of 3D reconstruction from a single 2D face image. Face reconstruction from a single 2D face image is an ill-posed problem involving estimation of the intrinsic and extrinsic camera parameters, light parameters, shape parameters and texture parameters. The proposed approach has many potential applications in law enforcement, surveillance, medicine, computer games and the entertainment industries. The problem is addressed using an analysis-by-synthesis framework, reconstructing a 3D face model from identity photographs. Identity photographs are a widely used medium for face identification and can be found on identity cards and passports. The novel contribution of this thesis is a new technique for creating 3D face models from a single 2D face image. The proposed method uses improved dense 3D correspondence obtained using rigid and non-rigid registration techniques, whereas existing reconstruction methods use the optical flow method for establishing 3D correspondence. The resulting 3D face database is used to create a statistical shape model. Existing reconstruction algorithms recover shape by optimizing over all the parameters simultaneously. The proposed algorithm simplifies the reconstruction problem by using a step-wise approach, thus reducing the dimension of the parameter space and simplifying the optimization problem. In the alignment step, a generic 3D face is aligned with the given 2D face image using anatomical landmarks. The texture is then warped onto the 3D model using the spatial alignment obtained previously. The 3D shape is then recovered by optimizing over the shape parameters while matching a texture-mapped model to the target image. There are a number of advantages to this approach. First, it simplifies the optimization requirements and makes the optimization more robust. Second, there is no need to accurately recover the illumination parameters. Third, there is no need to recover the texture parameters using a texture synthesis approach. Fourth, quantitative analysis is used to improve the quality of reconstruction by improving the cost function. Previous methods use qualitative measures, such as visual analysis and face recognition rates, for evaluating reconstruction accuracy. The improvement in the performance of the cost function results from improvement in the feature space comprising the landmark and intensity features. Previously, the feature space had not been evaluated with respect to reconstruction accuracy, leading to inaccurate assumptions about its behaviour. The proposed approach simplifies the reconstruction problem by using only identity images, rather than placing effort on overcoming the pose, illumination and expression (PIE) variations. This makes sense, as frontal face images under standard illumination conditions are widely available and can be utilized for accurate reconstruction. The reconstructed 3D models with texture can then be used to overcome the PIE variations.
2

Lee, Jinho. "Synthesis and analysis of human faces using multi-view, multi-illumination image ensembles." Columbus, Ohio : Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1133366279.

3

Hu, Guosheng. "Face analysis using 3D morphable models." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/808011/.

Abstract:
Face analysis aims to extract valuable information from facial images. One effective approach to face analysis is analysis by synthesis: a new face image is synthesised by inferring semantic knowledge from input images. To perform analysis by synthesis, a generative model, which parameterises the sources of facial variation, is needed. A 3D Morphable Model (3DMM) is commonly used for this purpose. 3DMMs have been widely used for face analysis because the intrinsic properties of 3D faces provide an ideal representation that is immune to intra-personal variations such as pose and illumination. Given a single facial input image, a 3DMM can recover the 3D face (shape and texture) and scene properties (pose and illumination) via a fitting process. However, fitting the model to the input image remains a challenging problem. One contribution of this thesis is a novel fitting method: Efficient Stepwise Optimisation (ESO). ESO optimises all the parameters sequentially (pose, shape, light direction, light strength and texture parameters) in separate steps. A perspective camera and a Phong reflectance model are used to model the geometric projection and illumination respectively. Linear methods adapted to the camera and illumination models are proposed. This yields closed-form solutions for these parameters, leading to an accurate and efficient fitting. Another contribution is an albedo-based 3D morphable model (AB3DMM). One difficulty of 3DMM fitting is recovering the illumination of the 2D image, because the proportions of the albedo and shading contributions in a pixel intensity are ambiguous. Unlike traditional methods, the AB3DMM removes the illumination component from the input image using illumination normalisation methods in a preprocessing step. This image can then be used as input to the AB3DMM fitting, which does not need to handle the lighting parameters. Thus, the fitting of the AB3DMM becomes easier and more accurate.
Based on AB3DMM and ESO, this study proposes a fully automatic face recognition (AFR) system. Unlike the existing 3DMM methods which assume the facial landmarks are known, our AFR automatically detects the landmarks that are used to initialise our fitting algorithms. Our AFR supports two types of feature extraction: holistic and local features. Experimental results show our AFR outperforms state-of-the-art face recognition methods.
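The Phong reflectance model used in the fitting combines ambient, diffuse and specular terms; a minimal single-point sketch (the coefficients are illustrative, not the thesis' fitted parameters):

```python
import numpy as np

def phong_intensity(normal, light_dir, view_dir,
                    ka=0.1, kd=0.6, ks=0.3, shininess=10):
    """Phong reflectance at one surface point: ambient + diffuse +
    specular terms (all direction vectors assumed unit length)."""
    n_dot_l = max(np.dot(normal, light_dir), 0.0)
    # reflection of the light direction about the surface normal
    r = 2.0 * np.dot(normal, light_dir) * normal - light_dir
    r_dot_v = max(np.dot(r, view_dir), 0.0)
    return ka + kd * n_dot_l + ks * r_dot_v ** shininess

n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 0.0, 1.0])   # light along the normal
v = np.array([0.0, 0.0, 1.0])   # viewer in the same direction
print(phong_intensity(n, l, v))  # ambient + diffuse + specular, ~1.0 here
```

Fitting then amounts to choosing lighting and albedo parameters so that intensities predicted this way match the observed pixels.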
4

Wei, Xiaozhou. "3D facial expression modeling and analysis with topographic information." Diss., Online access via UMI:, 2008.

5

Wang, Jing. "Reconstruction and Analysis of 3D Individualized Facial Expressions." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32588.

Abstract:
This thesis proposes a new way to analyze facial expressions through 3D-scanned faces of real-life people. The expression analysis is based on learning the facial motion vectors, i.e., the differences between a neutral face and a face with an expression. Several expression analyses are based on real-life face databases, such as the 2D image-based Cohn-Kanade AU-Coded Facial Expression Database and the Binghamton University 3D Facial Expression Database. To handle large pose variations and increase the general understanding of facial behavior, a 2D image-based expression database is not enough, and the Binghamton University 3D Facial Expression Database is mainly used for facial expression recognition, which makes it difficult to compare, resolve, and extend problems related to detailed 3D facial expression analysis. Our work aims to find a new and intuitive way of visualizing the detailed point-by-point movements of a 3D face model for a facial expression. We have created our own 3D facial expression database at a detailed level, in which each expression model has been processed to have the same structure, so that differences between different people can be compared for a given expression. The first step is to obtain same-structured but individually shaped face models. All head models are recreated by deforming a generic model to adapt to a laser-scanned individualized face shape at both a coarse level and a fine level. We repeat this recreation method on different human subjects to establish a database. The second step is expression cloning. The motion vectors are obtained by subtracting two head models with and without an expression, and the extracted facial motion vectors are applied onto a different human subject's neutral face. Facial expression cloning proves to be robust, fast, and easy to use. The last step is analyzing the facial motion vectors obtained in the second step. First we transferred several human subjects' expressions onto a single human neutral face.
The analysis then compares different expression pairs in two main regions: whole-face surface analysis and facial muscle analysis. In our experiments, where smiling was chosen as the test expression, we find our face-scanning approach to be a good way to visualize how differently people move their facial muscles for the same expression. People smile in a similar manner, moving their mouths and cheeks in similar orientations, but each person shows her/his own unique way of moving; the difference between individual smiles lies in the differences of the movements they make.
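Once all meshes share the same vertex structure, the expression-cloning step described above reduces to per-vertex arithmetic; a toy sketch (hypothetical 4-vertex "meshes", not the thesis' data):

```python
import numpy as np

def clone_expression(src_neutral, src_expr, tgt_neutral):
    """Transfer an expression between subjects: the motion vectors are
    the per-vertex differences between a subject's expression mesh and
    neutral mesh; adding them to another subject's neutral mesh clones
    the expression. All meshes are (n_vertices x 3) with shared structure."""
    motion = src_expr - src_neutral
    return tgt_neutral + motion

# Toy meshes with 4 vertices: subject A "smiles" by raising two
# mouth-corner vertices along z.
a_neutral = np.zeros((4, 3))
a_smile = a_neutral.copy()
a_smile[[1, 2], 2] += 0.5          # corners move 0.5 along z
b_neutral = np.ones((4, 3))        # a differently shaped neutral face
b_smile = clone_expression(a_neutral, a_smile, b_neutral)
print(b_smile[1, 2])   # -> 1.5
```

The same subtraction also yields the motion-vector fields that the thesis visualizes and compares across subjects.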
6

Clement, Stephen J. "Sparse shape modelling for 3D face analysis." Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/8248/.

Abstract:
This thesis describes a new method for localising anthropometric landmark points on 3D face scans. The points are localised by fitting a sparse shape model to a set of candidate landmarks. The candidates are found using a feature detector designed with a data-driven methodology; this approach also informs the choice of landmarks for the shape model. The fitting procedure is developed to be robust to missing landmark data and spurious candidates. The feature detector and landmark choice are determined by the performance of different local surface descriptions on the face. A number of criteria are defined for a good landmark point and a good feature detector. These inform a framework for measuring the performance of various surface descriptions and the choice of parameter values in the surface description generation. Two types of surface description are tested: curvature and spin images. These descriptions, in many ways, represent the two most common approaches to local surface description. Using the data-driven design process for surface description and landmark choice, a feature detector is developed using spin images. As spin images are a rich surface description, we are able to perform detection and candidate landmark labelling in a single step. A feature detector based on linear discriminant analysis (LDA) is developed and compared to a simpler detector used in the landmark and surface description selection process. A sparse shape model is constructed using ground-truth landmark data; it contains only the landmark point locations and relative positional variation. To localise landmarks, this model is fitted to the candidate landmarks using a RANSAC-style algorithm and a novel model fitting algorithm. The results of landmark localisation show that the shape model approach is beneficial over template alignment approaches.
Even with heavily contaminated candidate data, we are able to achieve good localisation for most landmarks.
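The two-class Fisher LDA behind such a detector can be sketched directly (a toy numpy example on synthetic descriptor clouds; the real detector operates on spin-image descriptors):

```python
import numpy as np

def lda_direction(X0, X1):
    """Two-class Fisher LDA: w = Sw^{-1} (m1 - m0), the direction that
    best separates negative (X0) and positive (X1) descriptors."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    return np.linalg.solve(Sw, m1 - m0)

# Toy descriptors: background points vs. true-landmark points,
# separated along the first feature axis.
rng = np.random.default_rng(4)
X0 = rng.normal([0, 0], 0.5, (200, 2))   # non-landmark descriptors
X1 = rng.normal([3, 0], 0.5, (200, 2))   # landmark descriptors
w = lda_direction(X0, X1)
scores0, scores1 = X0 @ w, X1 @ w
print(scores1.mean() > scores0.max())    # classes separate along w
```

Thresholding the projection `x @ w` then accepts or rejects each candidate landmark, which is the role the LDA detector plays before the shape-model fit.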
7

Zhao, Xi. "3D face analysis : landmarking, expression recognition and beyond." Phd thesis, Ecole Centrale de Lyon, 2010. http://tel.archives-ouvertes.fr/tel-00599660.

Abstract:
This Ph.D. thesis is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Facial expression plays an important role both in verbal and non-verbal communication and in expressing emotions; thus, automatic facial expression recognition has various purposes and applications and is, in particular, at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides prior knowledge of face landmark locations, which is required by many face analysis methods, such as face segmentation and the feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, and finally to propose an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model learns both the global variations in face landmark configuration and the local ones, in terms of texture and local geometry, around each landmark. Various partial face instances can be generated from SFAM by varying model parameters. Second, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Third, we have designed a Bayesian Belief Network with a structure describing the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum of the beliefs of all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, which characterizes the geometry of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as on the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
8

Szeptycki, Przemyslaw. "Processing and analysis of 2.5D face models for non-rigid mapping based face recognition using differential geometry tools." Phd thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00675988.

Abstract:
This Ph.D. thesis is dedicated to 3D facial surface analysis and processing, as well as to a newly proposed 3D face recognition modality based on mapping techniques. Facial surface processing and analysis is one of the most important steps in 3D face recognition algorithms. Automatic anthropometric facial feature localization also plays an important role in face localization, face expression recognition, face registration, etc.; thus its automation is a crucial step for 3D face processing algorithms. In this work we focused on precise and rotation-invariant landmark localization, where the landmarks are later used directly for face recognition. The landmarks are localized by combining local surface properties, expressed in terms of differential geometry tools, with a global facial generic model used for face validation. Since curvatures, which are differential geometry properties, are sensitive to surface noise, one of the main contributions of this thesis is a modification of the curvature calculation method. The modification incorporates the surface noise into the calculation and helps to control the smoothness of the curvatures. Therefore the main facial points can be reliably and precisely localized (100% nose tip localization at 8 mm precision) under the influence of rotations and surface noise. The modified curvature calculation method was also tested at different face model resolutions, resulting in stable curvature values. Finally, since curvature analysis leads to many facial landmark candidates, whose validation is time-consuming, facial landmark localization based on a learning technique was proposed. The learning technique helps to reject incorrect landmark candidates with high probability, thus accelerating landmark localization. Face recognition using 3D models is a relatively new subject, which has been proposed to overcome the shortcomings of the 2D face recognition modality. However, 3D face recognition algorithms are typically more complicated. Additionally, since 3D face models describe facial surface geometry, they are more sensitive to facial expression changes. Our contribution is reducing the dimensionality of the input data by mapping 3D facial models onto a 2D domain using non-rigid, conformal mapping techniques. Given 2D images that represent facial models, all previously developed 2D face recognition algorithms can be used. In our work, conformal shape images of 3D facial surfaces were fed into 2D² PCA, achieving a rank-one recognition rate of more than 86% on the FRGC dataset. The effectiveness of all the methods has been evaluated using the FRGC and Bosphorus datasets.
9

Hariri, Walid. "Contribution à la reconnaissance/authentification de visages 2D/3D." Thesis, Cergy-Pontoise, 2017. http://www.theses.fr/2017CERG0905/document.

Abstract:
3D face analysis, including 3D face recognition and 3D facial expression recognition, has become a very active area of research in recent years. Various methods using 2D image analysis have been presented to tackle these problems, but 2D image-based methods are inherently limited by variability in imaging factors such as illumination and pose. The recent development of 3D acquisition sensors has made 3D data more and more available. Such data is relatively invariant to illumination and pose, but it is still sensitive to expression variation. The principal objective of this thesis is to propose efficient methods for 3D face recognition/verification and 3D facial expression recognition. First, a new covariance-based method for 3D face recognition is presented. The method includes the following steps: the 3D facial surface is first preprocessed and aligned; uniform sampling is then applied to localize a set of feature points, and around each point a covariance matrix is extracted as a local region descriptor. Two matching strategies are then proposed, and various distances (geodesic and non-geodesic) are applied to compare faces. The proposed method is assessed on three datasets: GAVAB, FRGCv2 and BU-3DFE. A hierarchical description using three levels of covariances is then proposed and validated. In the second part of this thesis, an efficient approach for 3D facial expression recognition using kernel methods with covariance matrices is presented. In this contribution, a Gaussian kernel is used to map covariance matrices into a high-dimensional Hilbert space. This enables the use of conventional algorithms developed for Euclidean-valued data, such as SVM, on such non-linear-valued data. The proposed method has been assessed on the BU-3DFE and Bosphorus datasets to recognize the six prototypical facial expressions.
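The covariance-descriptor idea can be sketched in numpy; the log-Euclidean distance below is one common metric on covariance (SPD) matrices, shown for illustration and not necessarily one of the thesis' exact geodesic/non-geodesic choices:

```python
import numpy as np

def covariance_descriptor(F):
    """Covariance descriptor of a facial region: F is (n_points x d),
    each row a per-point feature vector (e.g. coordinates, normals,
    curvatures); the region is summarised by the d x d covariance."""
    return np.cov(F, rowvar=False)

def log_euclidean_distance(C1, C2, eps=1e-8):
    """Log-Euclidean distance between SPD matrices: Frobenius norm of
    the difference of their matrix logarithms."""
    def logm_spd(C):
        w, V = np.linalg.eigh(C + eps * np.eye(C.shape[0]))
        return V @ np.diag(np.log(w)) @ V.T
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2))

# Two toy "regions" with 3 features per point; region b is scaled,
# so its covariance descriptor differs from region a's.
rng = np.random.default_rng(5)
region_a = rng.standard_normal((500, 3))
region_b = rng.standard_normal((500, 3)) * 2.0
Ca, Cb = covariance_descriptor(region_a), covariance_descriptor(region_b)
print(log_euclidean_distance(Ca, Ca) < log_euclidean_distance(Ca, Cb))
```

Matching two faces then reduces to aggregating such distances between corresponding region descriptors.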
APA, Harvard, Vancouver, ISO, and other styles
10

McCool, Christopher Steven. "Hybrid 2D and 3D face verification." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16436/1/Christopher_McCool_Thesis.pdf.

Full text of the source
Abstract:
Face verification is a challenging pattern recognition problem. The face is a biometric that we, as humans, know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data. But the underlying structure of the face, or three-dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years, methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance- or angular-based similarity measures. However, this does not provide the most complete description of the features being compared, as the distances describe at best the covariance of the data, or the second-order statistics (for instance, Mahalanobis-based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable, so this research examines two methods for overcoming this data limitation: 1. the use of holistic difference vectors of the face, and 2. dividing the 3D face into Free-Parts. Permutations of the holistic difference vectors are formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately, many observations are obtained from each face image; this approach is referred to as the Free-Parts approach.
The extra observations from these two techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling, respectively. It is shown that feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques, classifier score fusion is then examined. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms (multi-algorithm fusion) to represent the same face data, for instance the 2D face data, or by capturing the face data with different sensors (multi-modal fusion), for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached by combining verification systems that use holistic features and local features (Free-Parts), while multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused are collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms, a consistent framework for fusion was developed. This consistent fusion framework is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own.
The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
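The multi-modal score fusion evaluated in this thesis can be sketched as a weighted sum of normalized match scores. The min-max normalization, weights, and acceptance threshold below are illustrative assumptions rather than the thesis's tuned configuration:

```python
import numpy as np

def minmax_normalize(scores):
    """Map raw classifier scores onto [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(score_2d, score_3d, w_2d=0.4, w_3d=0.6):
    """Weighted-sum fusion of normalized 2D and 3D verification scores."""
    return w_2d * score_2d + w_3d * score_3d

# Toy match scores for three verification trials from each modality.
s2d = minmax_normalize([0.2, 0.8, 0.5])
s3d = minmax_normalize([0.1, 0.9, 0.7])
fused = fuse_scores(s2d, s3d)
decisions = fused >= 0.5  # accept / reject at an illustrative threshold
```

Hybrid fusion, as described, would apply the same idea one level up: fusing holistic and Free-Parts scores within each modality before the 2D/3D combination.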

Books on the topic "3D face analysis"

1

Daoudi, Mohamed, Anuj Srivastava, and Remco Veltkamp, eds. 3D Face Modeling, Analysis and Recognition. Solaris South Tower, Singapore: John Wiley & Sons Singapore Pte Ltd, 2013. http://dx.doi.org/10.1002/9781118592656.

2

Huang, Thomas S., ed. 3D face processing: Modeling, analysis, and synthesis. Boston: Kluwer Academic Publishers, 2004.

Find the full text of the source
3

Srivastava, Anuj, Remco Veltkamp, and Mohamed Daoudi. 3D Face Modeling, Analysis and Recognition. Wiley & Sons, Incorporated, John, 2013.

4

Srivastava, Anuj, Remco Veltkamp, and Mohamed Daoudi. 3D Face Modeling, Analysis and Recognition. Wiley & Sons, Incorporated, John, 2013.

5

Srivastava, Anuj, Remco Veltkamp, and Mohamed Daoudi. 3D Face Modeling, Analysis and Recognition. Wiley & Sons, Incorporated, John, 2013.

6

Srivastava, Anuj, Remco Veltkamp, and Mohamed Daoudi. 3D Face Modeling, Analysis and Recognition. Wiley, 2013.

7

Srivastava, Anuj, Remco Veltkamp, and Mohamed Daoudi. 3D Face Modeling, Analysis and Recognition. Wiley & Sons, Limited, John, 2013.

8

Srivastava, Anuj, Remco Veltkamp, and Mohamed Daoudi. 3D Face Modeling, Analysis and Recognition. Wiley & Sons, Incorporated, John, 2013.

9

Huang, Thomas S. 3D Face Processing: Modeling, Analysis and Synthesis. Springer London, Limited, 2006.

10

Wen, Zhen. 3D Face Processing: "Modeling, Analysis And Synthesis". Springer, 2010.


Book chapters on the topic "3D face analysis"

1

Pears, Nick, and Ajmal Mian. "3D Face Recognition." In 3D Imaging, Analysis and Applications, 569–630. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44070-1_12.

2

Mian, Ajmal, and Nick Pears. "3D Face Recognition." In 3D Imaging, Analysis and Applications, 311–66. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4063-4_8.

3

Amor, Boulbaba Ben, Mohsen Ardabilian, and Liming Chen. "3D Face Modeling." In 3D Face Modeling, Analysis and Recognition, 1–37. Solaris South Tower, Singapore: John Wiley & Sons Singapore Pte Ltd, 2013. http://dx.doi.org/10.1002/9781118592656.ch1.

4

Stern, Guillaume, Zehua Fu, and Mohsen Ardabilian. "3D Face Analysis for Healthcare." In Biometrics under Biomedical Considerations, 147–60. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1144-4_6.

5

Al-Osaimi, Faisal Radhi M., and Mohammed Bennamoun. "3D Face Surface Analysis and Recognition Based on Facial Surface Features." In 3D Face Modeling, Analysis and Recognition, 39–76. Solaris South Tower, Singapore: John Wiley & Sons Singapore Pte Ltd, 2013. http://dx.doi.org/10.1002/9781118592656.ch2.

6

Drira, Hassen, Stefano Berretti, Boulbaba Ben Amor, Mohamed Daoudi, Anuj Srivastava, Alberto del Bimbo, and Pietro Pala. "3D Face Surface Analysis and Recognition Based on Facial Curves." In 3D Face Modeling, Analysis and Recognition, 77–118. Solaris South Tower, Singapore: John Wiley & Sons Singapore Pte Ltd, 2013. http://dx.doi.org/10.1002/9781118592656.ch3.

7

Haar, Frank B. ter, and Remco Veltkamp. "3D Morphable Models for Face Surface Analysis and Recognition." In 3D Face Modeling, Analysis and Recognition, 119–47. Solaris South Tower, Singapore: John Wiley & Sons Singapore Pte Ltd, 2013. http://dx.doi.org/10.1002/9781118592656.ch4.

8

Berretti, Stefano, Boulbaba Ben Amor, Hassen Drira, Mohamed Daoudi, Anuj Srivastava, Alberto del Bimbo, and Pietro Pala. "Applications." In 3D Face Modeling, Analysis and Recognition, 149–202. Solaris South Tower, Singapore: John Wiley & Sons Singapore Pte Ltd, 2013. http://dx.doi.org/10.1002/9781118592656.ch5.

9

Dai, Hang, Nick Pears, Patrik Huber, and William A. P. Smith. "3D Morphable Models: The Face, Ear and Head." In 3D Imaging, Analysis and Applications, 463–512. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44070-1_10.

10

Silva Mata, Francisco José, Elaine Grenot Castellanos, Alfredo Muñoz-Briseño, Isneri Talavera-Bustamante, and Stefano Berretti. "3D Face Recognition in Continuous Spaces." In Image Analysis and Processing - ICIAP 2017, 3–13. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68548-9_1.


Conference papers on the topic "3D face analysis"

1

Gupta, Shalini, Kenneth R. Castleman, Mia K. Markey, and Alan C. Bovik. "Texas 3D Face Recognition Database." In 2010 IEEE Southwest Symposium on Image Analysis & Interpretation (SSIAI). IEEE, 2010. http://dx.doi.org/10.1109/ssiai.2010.5483908.

2

Colombo, Alessandro, Claudio Cusano, and Raimondo Schettini. "Face^3 a 2D+3D Robust Face Recognition System." In 14th International Conference on Image Analysis and Processing (ICIAP 2007). IEEE, 2007. http://dx.doi.org/10.1109/iciap.2007.4362810.

3

Amin, S. Hassan, and Duncan Gillies. "Analysis of 3D Face Reconstruction." In 14th International Conference on Image Analysis and Processing (ICIAP 2007). IEEE, 2007. http://dx.doi.org/10.1109/iciap.2007.4362813.

4

Berretti, Stefano, Alberto Del Bimbo, Pietro Pala, and Francisco Jose Silva Mata. "Using Geodesic Distances for 2D-3D and 3D-3D Face Recognition." In 14th International Conference of Image Analysis and Processing - Workshops (ICIAPW 2007). IEEE, 2007. http://dx.doi.org/10.1109/iciapw.2007.44.

5

Ye Pan, Bo Dai, and Qicong Peng. "Fast and robust 3D face matching approach." In 2010 International Conference on Image Analysis and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/iasp.2010.5476132.

6

Tokola, Ryan, Aravind Mikkilineni, and Christopher Boehnen. "3D face analysis for demographic biometrics." In 2015 International Conference on Biometrics (ICB). IEEE, 2015. http://dx.doi.org/10.1109/icb.2015.7139052.

7

Xiaoguang Lu and A. K. Jain. "Deformation Analysis for 3D Face Matching." In 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05). IEEE, 2005. http://dx.doi.org/10.1109/acvmot.2005.40.

8

Lim, Seong-Jae, Bon-Woo Hwang, Seung-Uk Yoon, Jin Sung Choi, and Chang-Joon Park. "Automatic 3D face component analysis technique." In 2018 IEEE International Conference on Consumer Electronics (ICCE). IEEE, 2018. http://dx.doi.org/10.1109/icce.2018.8326087.

9

Sun, Qi, Yanlong Tang, Ping Hu, and Jingliang Peng. "Kinect-based automatic 3D high-resolution face modeling." In 2012 International Conference on Image Analysis and Signal Processing (IASP). IEEE, 2012. http://dx.doi.org/10.1109/iasp.2012.6425065.

10

Queirolo, Chaua, Mauricio P. Segundo, Olga Bellon, and Luciano Silva. "Noise versus Facial Expression on 3D Face Recognition." In 14th International Conference on Image Analysis and Processing (ICIAP 2007). IEEE, 2007. http://dx.doi.org/10.1109/iciap.2007.4362775.


Organizational reports on the topic "3D face analysis"

1

Barkatov, Igor V., Volodymyr S. Farafonov, Valeriy O. Tiurin, Serhiy S. Honcharuk, Vitaliy I. Barkatov, and Hennadiy M. Kravtsov. New effective aid for teaching technology subjects: 3D spherical panoramas joined with virtual reality. [б. в.], November 2020. http://dx.doi.org/10.31812/123456789/4407.

Abstract:
The rapid development of modern technology and its increasing complexity place high demands on the quality of training of its users. One important class of such technology is vehicles, both civil and military. In the teaching of associated subjects, the accepted hierarchy of teaching aids starts with common visual aids (posters, videos, scale models, etc.), continues with simulators of varying complexity, and finishes with real vehicles. This allows some balance between cost and efficiency to be achieved by partially replacing the more expensive and elaborate aids with less expensive ones. However, analysis of teaching experience at the Military Institute of Armored Forces of National Technical University "Kharkiv Polytechnic Institute" (the Institute) reveals that the balance is still suboptimal, and the present teaching aids are not sufficient for efficient teaching. This raises the problem of extending the range of available teaching aids for vehicle-related subjects, which is the aim of this work. Benefiting from modern information and visualization technologies, we present a new teaching aid that combines spherical (360°, or 3D) photographic panoramas with a Virtual Reality (VR) device. The nature of the aid, its potential applications, limitations, and benefits in comparison to the common aids are discussed. The proposed aid is shown to be cost-effective and proven to increase training efficiency, according to the results of a teaching experiment carried out at the Institute. For the implementation, a close collaboration was established between the Institute and the IT company "Innovative Distance Learning Systems Limited". A series of panoramas that are already available, and their planned expansion, are presented. The authors conclude that the proposed aid may significantly improve the cost-efficiency balance of teaching a range of technology subjects.