Academic literature on the topic 'Face recognition, 3D Face Reconstruction, 3D Morphable Model, Deep Learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Face recognition, 3D Face Reconstruction, 3D Morphable Model, Deep Learning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Face recognition, 3D Face Reconstruction, 3D Morphable Model, Deep Learning"

1

Han, Pingli, Xuan Li, Fei Liu, Yudong Cai, Kui Yang, Mingyu Yan, Shaojie Sun, Yanyan Liu, and Xiaopeng Shao. "Accurate Passive 3D Polarization Face Reconstruction under Complex Conditions Assisted with Deep Learning." Photonics 9, no. 12 (November 30, 2022): 924. http://dx.doi.org/10.3390/photonics9120924.

Abstract:
Accurate passive 3D face reconstruction is of great importance and has various potential applications. Three-dimensional polarization face reconstruction is a promising approach, but it is hampered by serious deformations caused by the ambiguity of the surface normal. In this study, we propose a learning-based method for passive 3D polarization face reconstruction. It first calculates the surface normal of each microfacet at the pixel level from the polarization of diffusely reflected light on the face, requiring no auxiliary equipment, including artificial illumination. Then, a CNN-based 3DMM (convolutional neural network; 3D morphable model) generates a rough depth map of the face from the directly captured polarization image. The map serves as an extra constraint to correct the ambiguous surface normal obtained from polarization. An accurate surface normal finally allows for an accurate 3D face reconstruction. Experiments in both indoor and outdoor conditions demonstrate that accurate 3D faces can be well reconstructed. Moreover, with no auxiliary equipment required, the method ensures a totally passive 3D face reconstruction.
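The ambiguity-correction step in this pipeline can be stated concretely: diffuse polarization yields the surface-normal azimuth only up to a π flip, and the coarse depth map (here standing in for the CNN/3DMM output) supplies the prior that resolves it. A minimal sketch of that step, with illustrative function and array names of my own (the paper's method operates on full normals, not this simplified azimuth map):

```python
import numpy as np

def disambiguate_azimuth(phi_pol, normals_coarse):
    """Resolve the pi-ambiguity of polarization azimuth angles.

    phi_pol        : (H, W) azimuth from polarization, ambiguous by pi
    normals_coarse : (H, W, 3) rough normals (e.g. from a 3DMM depth map)
    Returns the azimuth map with the flip that best matches the prior.
    """
    # Azimuth implied by the coarse normals (projection onto image plane).
    phi_prior = np.arctan2(normals_coarse[..., 1], normals_coarse[..., 0])
    # Two candidate azimuths per pixel: phi and phi + pi.
    cand = np.stack([phi_pol, phi_pol + np.pi], axis=0)
    # Angular distance to the prior, wrapped to [-pi, pi] via the unit circle.
    diff = np.abs(np.angle(np.exp(1j * (cand - phi_prior))))
    pick = np.argmin(diff, axis=0)  # 0 keeps phi, 1 applies the pi flip
    return np.where(pick == 0, phi_pol, phi_pol + np.pi)
```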
2

Li, Tianping, Hongxin Xu, Hua Zhang, and Honglin Wan. "Detail 3D Face Reconstruction Based on 3DMM and Displacement Map." Journal of Sensors 2021 (June 25, 2021): 1–13. http://dx.doi.org/10.1155/2021/9921101.

Abstract:
How to accurately reconstruct a 3D model of the human face is a challenging issue in computer vision. Due to the complexity of face reconstruction and the diversity of face features, most existing methods aim at reconstructing a smooth face model while ignoring face details. In this paper, a novel deep learning-based face reconstruction method is proposed. It contains two modules: initial face reconstruction and face detail synthesis. In the initial face reconstruction module, a neural network is used to detect the facial feature points and the pose angle of the face, and a 3D Morphable Model (3DMM) is used to reconstruct the rough shape of the face model. In the face detail synthesis module, a Conditional Generative Adversarial Network (CGAN) is used to synthesize a displacement map. The map provides texture features that are rendered onto the reconstructed face surface to reflect facial details. Our proposal is evaluated on the FaceScape dataset and achieves better performance than other current methods.
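The displacement-map step described above is, geometrically, an offset of the coarse 3DMM surface along its per-vertex normals. A minimal sketch under that assumption (names are illustrative, not from the paper):

```python
import numpy as np

def apply_displacement(vertices, normals, disp):
    """Offset a coarse mesh along its per-vertex normals.

    vertices : (N, 3) coarse 3DMM vertex positions
    normals  : (N, 3) unit vertex normals
    disp     : (N,)   scalar displacement per vertex (e.g. sampled from
               a CGAN-predicted displacement map via UV coordinates)
    """
    return vertices + disp[:, None] * normals
```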
3

Nguyen, Duc-Phong, Tan-Nhu Nguyen, Stéphanie Dakpé, Marie-Christine Ho Ba Tho, and Tien-Tuan Dao. "Fast 3D Face Reconstruction from a Single Image Using Different Deep Learning Approaches for Facial Palsy Patients." Bioengineering 9, no. 11 (October 27, 2022): 619. http://dx.doi.org/10.3390/bioengineering9110619.

Abstract:
The 3D reconstruction of an accurate face model is essential for delivering reliable feedback for clinical decision support. Medical imaging and specific depth sensors are accurate but not suitable for an easy-to-use and portable tool. The recent development of deep learning (DL) models opens new challenges for 3D shape reconstruction from a single image. However, the 3D face shape reconstruction of facial palsy patients is still a challenge that has not been investigated. The contribution of the present study is to apply these state-of-the-art methods to reconstruct the 3D face shape models of facial palsy patients in natural and mimic postures from a single image. Three different methods (the 3D Basel Morphable Model and two 3D deep pre-trained models) were applied to a dataset of two healthy subjects and two facial palsy patients. The reconstructed outcomes were compared to 3D shapes reconstructed using Kinect-driven and MRI-based information. The best mean error of the reconstructed face with respect to the Kinect-driven reconstructed shape is 1.5 ± 1.1 mm, and the best error range is 1.9 ± 1.4 mm when compared to the MRI-based shapes. Based on these results, several ideas for increasing the accuracy of the reconstruction can be discussed before using the procedure on patients with facial palsy or other facial disorders. This study opens new avenues for the fast reconstruction of 3D face shapes of facial palsy patients from a single image. As a perspective, the best DL method will be implemented in our computer-aided decision support system for facial disorders.
4

Soni, Neha, Enakshi Khular Sharma, and Amita Kapoor. "Novel BSSSO-Based Deep Convolutional Neural Network for Face Recognition with Multiple Disturbing Environments." Electronics 10, no. 5 (March 8, 2021): 626. http://dx.doi.org/10.3390/electronics10050626.

Abstract:
Face recognition technology presents exciting opportunities, but its performance is degraded by several factors, such as pose variation, partial occlusion, expression, illumination, and biased data. This paper proposes a novel bird search-based shuffled shepherd optimization algorithm (BSSSO), a meta-heuristic technique motivated by the intuition of animals and the social behavior of birds, for improving the performance of face recognition. The main intention behind the research is to establish an optimization-driven deep learning approach for recognizing face images under multiple disturbing environments. The developed model undergoes three main steps, namely (a) noise removal, (b) feature extraction, and (c) recognition. For the removal of noise, a type-II fuzzy system and cuckoo search optimization algorithm (T2FCS) is used. Feature extraction is carried out using a CNN, and a landmark-enabled 3D morphable model (L3DMM) is utilized to efficiently fit a 3D face from a single uncontrolled image. The obtained features are fed to a deep CNN for face recognition, whose training is performed using the novel BSSSO. Experimental findings on standard datasets (LFW, UMB-DB, Extended Yale B) demonstrate the superiority of the proposed model over existing face recognition approaches.
5

Bahri, Mehdi, Eimear O’ Sullivan, Shunwang Gong, Feng Liu, Xiaoming Liu, Michael M. Bronstein, and Stefanos Zafeiriou. "Shape My Face: Registering 3D Face Scans by Surface-to-Surface Translation." International Journal of Computer Vision 129, no. 9 (July 10, 2021): 2680–713. http://dx.doi.org/10.1007/s11263-021-01494-4.

Abstract:
Standard registration algorithms need to be independently applied to each surface to register, following careful pre-processing and hand-tuning. Recently, learning-based approaches have emerged that reduce the registration of new scans to running inference with a previously-trained model. The potential benefits are multifold: inference is typically orders of magnitude faster than solving a new instance of a difficult optimization problem, deep learning models can be made robust to noise and corruption, and the trained model may be re-used for other tasks, e.g. through transfer learning. In this paper, we cast the registration task as a surface-to-surface translation problem, and design a model to reliably capture the latent geometric information directly from raw 3D face scans. We introduce Shape-My-Face (SMF), a powerful encoder-decoder architecture based on an improved point cloud encoder, a novel visual attention mechanism, graph convolutional decoders with skip connections, and a specialized mouth model that we smoothly integrate with the mesh convolutions. Compared to the previous state-of-the-art learning algorithms for non-rigid registration of face scans, SMF only requires the raw data to be rigidly aligned (with scaling) with a pre-defined face template. Additionally, our model provides topologically-sound meshes with minimal supervision, offers faster training time, has orders of magnitude fewer trainable parameters, is more robust to noise, and can generalize to previously unseen datasets. We extensively evaluate the quality of our registrations on diverse data. We demonstrate the robustness and generalizability of our model with in-the-wild face scans across different modalities, sensor types, and resolutions. Finally, we show that, by learning to register scans, SMF produces a hybrid linear and non-linear morphable model.
Manipulation of the latent space of SMF allows for shape generation, and morphing applications such as expression transfer in-the-wild. We train SMF on a dataset of human faces comprising 9 large-scale databases on commodity hardware.
6

Mursalin, Md, Mohiuddin Ahmed, and Paul Haskell-Dowland. "Biometric Security: A Novel Ear Recognition Approach Using a 3D Morphable Ear Model." Sensors 22, no. 22 (November 20, 2022): 8988. http://dx.doi.org/10.3390/s22228988.

Abstract:
Biometrics is a critical component of cybersecurity that identifies persons by verifying their behavioral and physical traits. In biometric-based authentication, each individual can be correctly recognized based on their intrinsic behavioral or physical features, such as the face, fingerprint, iris, and ears. This work proposes a novel approach for human identification using 3D ear images. In conventional methods, the probe image is usually registered against each gallery image using computationally heavy registration algorithms, making the recognition process time-consuming and practically infeasible. Therefore, this work proposes a recognition pipeline that avoids one-to-one registration between probe and gallery. First, a deep learning-based algorithm is used for ear detection in 3D side-face images. Second, a statistical ear model known as the 3D morphable ear model (3DMEM) is constructed to serve as a feature extractor for the detected ear images. Finally, a novel recognition algorithm named you morph once (YMO) is proposed for human recognition; it reduces the computational time by eliminating one-to-one registration between probe and gallery and only calculates the distance between the parameters stored in the gallery and those of the probe. The experimental results show the suitability of the proposed method for real-time application.
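The registration-free matching idea above, comparing fitted morphable-model coefficients rather than raw surfaces, reduces at match time to a nearest-neighbour search in coefficient space. A minimal sketch (the function and parameter names are mine, not the paper's):

```python
import numpy as np

def match_probe(probe_params, gallery_params):
    """Return the index of the gallery entry whose fitted model
    coefficient vector is closest to the probe's, replacing per-pair
    surface registration with a simple distance in parameter space.

    probe_params   : (D,)   fitted coefficients of the probe scan
    gallery_params : (G, D) pre-fitted coefficients of the gallery
    """
    dists = np.linalg.norm(gallery_params - probe_params, axis=1)
    return int(np.argmin(dists)), float(dists.min())
```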
7

Lin, Jiangke, Yi Yuan, and Zhengxia Zou. "MeInGame: Create a Game Character Face from a Single Portrait." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 311–19. http://dx.doi.org/10.1609/aaai.v35i1.16106.

Abstract:
Many deep learning-based 3D face reconstruction methods have been proposed recently; however, few of them have applications in games. Current game character customization systems either require players to manually adjust a considerable number of face attributes to obtain the desired face, or have limited freedom of facial shape and texture. In this paper, we propose an automatic character face creation method that predicts both facial shape and texture from a single portrait, and it can be integrated into most existing 3D games. Although 3D Morphable Face Model (3DMM)-based methods can restore accurate 3D faces from single images, the topology of the 3DMM mesh is different from the meshes used in most games. To acquire high-fidelity texture, existing methods require a large amount of face texture data for training, while building such datasets is time-consuming and laborious. Besides, a dataset collected under laboratory conditions may not generalize well to in-the-wild situations. To tackle these problems, we propose 1) a low-cost facial texture acquisition method, 2) a shape transfer algorithm that can transform the shape of a 3DMM mesh to games, and 3) a new pipeline for training 3D game face reconstruction networks. The proposed method not only produces detailed and vivid game characters similar to the input portrait, but can also eliminate the influence of lighting and occlusions. Experiments show that our method outperforms state-of-the-art methods used in games. Code and dataset are available at https://github.com/FuxiCV/MeInGame.
8

Wang, Cheng-Wei, and Chao-Chung Peng. "3D Face Point Cloud Reconstruction and Recognition Using Depth Sensor." Sensors 21, no. 8 (April 7, 2021): 2587. http://dx.doi.org/10.3390/s21082587.

Abstract:
Facial recognition has attracted more and more attention since the rapid growth of artificial intelligence (AI) techniques in recent years. However, most related work on facial reconstruction and recognition is based on big data collection and image-oriented deep learning algorithms. These data-driven AI approaches inevitably increase the computational load on the CPU and usually rely heavily on GPU capacity. A typical issue of RGB-based facial recognition is its limited applicability in low-light or dark environments. To solve this problem, this paper presents an effective procedure for facial reconstruction as well as facial recognition using a depth sensor. For each testing candidate, the depth camera acquires a multi-view set of its 3D point clouds. The point cloud sets are stitched for 3D model reconstruction using the iterative closest point (ICP) algorithm. Then, a segmentation procedure is designed to separate the model set into a body part and a head part. Based on the segmented 3D face point clouds, facial features are extracted for recognition scoring. Taking a single shot from the depth sensor, the point cloud data is registered against the stored 3D face models to determine the best-matching candidate. By using the proposed feature-based 3D facial similarity score algorithm, which is composed of normal, curvature, and registration similarities between different point clouds, the person can be labeled correctly even in a dark environment. The proposed method is suitable for smart devices such as smartphones and smart pads equipped with a tiny depth camera. Experiments with real-world data show that the proposed method is able to reconstruct denser models and achieve point cloud-based 3D face recognition.
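The stitching step above relies on classical rigid ICP. A minimal, brute-force sketch: exhaustive nearest-neighbour correspondences plus a Kabsch/SVD alignment step per iteration. This is fine for small clouds; a real depth-sensor pipeline would use a KD-tree and outlier rejection, and nothing here is specific to the paper's implementation:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Rigidly align point cloud `src` (N, 3) to `dst` (M, 3).

    Returns the accumulated rotation R, translation t, and the
    transformed source cloud, so that aligned = src @ R.T + t.
    """
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbour in dst for every current point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # Kabsch: optimal rotation between centred correspondences.
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t, cur
```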
9

Kada, M. "3D Reconstruction of Simple Buildings from Point Clouds Using Neural Networks with Continuous Convolutions (ConvPoint)." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-4/W4-2022 (October 14, 2022): 61–66. http://dx.doi.org/10.5194/isprs-archives-xlviii-4-w4-2022-61-2022.

Abstract:
The automatic reconstruction of 3D building models from airborne laser scanning point clouds or aerial imagery data in a model-driven fashion most often consists of a recognition of standardized building primitives with typically rectangular footprints and parameterized roof shapes based on a pre-defined collection, and a parameter estimation so that the selected primitives best fit the input data. For more complex buildings that consist of multiple parts, several such primitives need to be combined. This paper focuses on the reconstruction of such simple buildings, and explores the potential of Deep Learning by presenting a neural network architecture that takes a 3D point cloud of a single building as input and outputs the geometric information needed to construct a 3D building model in half-space representation with up to four roof faces like saddleback, hip, and pyramid roof. The proposed neural network architecture consists of a roof face segmentation module implemented with continuous convolutions as used in ConvPoint, which performs feature extraction directly from a set of 3D points, and four PointNet modules that predict from sampled subsets of the feature-enriched points the presence of four roof faces and their slopes. Trained with the RoofN3D dataset, which contains roof point segmentations and geometric information for 3D reconstruction purposes for about 118,000 simple buildings, the neural network achieves a performance of about 80% intersection over union (IoU) for roof face segmentation, 1.8° mean absolute error (MAE) for roof slope angles, and 95% overall accuracy (OA) for predicting the presence of faces.
10

Wu, Hao, Jianyang Gu, Xiaojin Fan, He Li, Lidong Xie, and Jian Zhao. "3D-Guided Frontal Face Generation for Pose-Invariant Recognition." ACM Transactions on Intelligent Systems and Technology, November 21, 2022. http://dx.doi.org/10.1145/3572035.

Abstract:
Although deep learning techniques have achieved extraordinary accuracy in recognizing human faces, the pose variance of images captured in real-world scenarios still hinders reliable model deployment. To mitigate this gap, we propose to recognize faces by generating frontal face images with a 3D-Guided Deep Pose-Invariant Face Recognition Model (3D-PIM) consisting of a simulator and a refiner module. The simulator employs a 3D Morphable Model (3DMM) to fit the shape and appearance features and recover primary frontal images with less training data. The refiner further enhances the image realism of both the global facial structure and local details with adversarial training, while keeping the discriminative identity information consistent with the original images. An Adaptive Weighting (AW) metric is then adopted to leverage the complementary information from recovered frontal faces and original profile faces and to obtain credible similarity scores for recognition. Extended experiments verify the superiority of the proposed "recognition via generation" framework over the state of the art.

Dissertations / Theses on the topic "Face recognition, 3D Face Reconstruction, 3D Morphable Model, Deep Learning"

1

Ferrari, Claudio. "Representing faces: local and holistic approaches with application to recognition." Doctoral thesis, 2018. http://hdl.handle.net/2158/1120507.

Abstract:
Face analysis from 2D images and videos is a central task in many computer vision applications. Methods developed to this end perform either face recognition or facial expression recognition, and in both cases results are negatively influenced by variations in pose, illumination, and resolution of the face. Such variations have a lower impact on 3D face data, which has given way to the idea of using a 3D Morphable Model as an intermediate tool to enhance face analysis on 2D data. In the first part of this thesis, a new approach for constructing a 3D Morphable Shape Model (called DL-3DMM) is proposed. It is shown that this solution can reach the accuracy of deformation required in applications where fine details of the face are concerned. The DL-3DMM is then exploited to develop a new and effective frontalization algorithm, which can produce a frontal-facing view of unconstrained face images. The rendered frontal views are artifact-free and pixelwise aligned, so that matching consistency between local descriptors is enhanced. Results obtained with this approach are comparable with the state of the art. Recently, in contrast to approaches based on local descriptors, methods grounded in deep learning have proved dramatically effective for face recognition in the wild. It has been extensively demonstrated that methods exploiting Deep Convolutional Neural Networks (DCNNs) are powerful enough to overcome, to a great extent, many problems that negatively affected computer vision algorithms based on hand-crafted features. The DCNNs' excellent discriminative power comes from the fact that they learn low- and high-level representations directly from the raw image data. Considering this, it can be assumed that the performance of a DCNN is influenced by the characteristics of the raw image data fed to the network. In the final part of this thesis, the effects of different raw data characteristics on face recognition using well-known DCNN architectures are presented.
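The morphable model at the heart of this thesis, like most 3DMMs in the entries above, is in its classic form a linear generative model: a mean shape plus a weighted sum of deformation components, S = S̄ + Cα. A toy sketch of that equation (dimensions and names are illustrative, not taken from the DL-3DMM):

```python
import numpy as np

def sample_shape(mean_shape, components, alpha):
    """Linear 3DMM: mean shape plus weighted deformation components.

    mean_shape : (3N,)    mean face, flattened xyz coordinates
    components : (3N, K)  deformation basis (e.g. learned from scans)
    alpha      : (K,)     shape coefficients
    """
    return mean_shape + components @ alpha
```

Fitting such a model to an image amounts to searching for the coefficients `alpha` (plus pose and camera parameters) that best explain the observed face.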

Book chapters on the topic "Face recognition, 3D Face Reconstruction, 3D Morphable Model, Deep Learning"

1

Munir, Hafiz Muhammad Umair, and Waqar S. Qureshi. "3D Single Image Face Reconstruction Approaches With Deep Neural Networks." In Interactivity and the Future of the Human-Computer Interface, 262–81. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2637-8.ch014.

Abstract:
3D facial reconstruction is an emerging and interesting application in the fields of computer graphics and computer vision. Reconstructing a 3D facial model from a single photo is challenging because of arbitrary poses, non-uniform illumination, expressions, and occlusions. Detailed 3D facial models are difficult to reconstruct because every algorithm has limitations related to profile views, fine detail, accuracy, and speed. The major challenges are reconstructing textured 3D faces under large poses, handling in-the-wild and occluded faces, and the need for large training datasets. Most algorithms use convolutional neural networks and deep learning frameworks to create the facial model. 3D face reconstruction algorithms are used for applications such as 3D printing, 3D VR games, and facial recognition. Different issues and problems, along with their proposed solutions, are discussed. The facial datasets and facial 3DMMs used for reconstructing a 3D face from a single photo are explained. The recent state-of-the-art 3D facial reconstruction and 3D face learning methods developed up to 2019 are briefly reviewed.

Conference papers on the topic "Face recognition, 3D Face Reconstruction, 3D Morphable Model, Deep Learning"

1

Zhao, Jian, Lin Xiong, Yu Cheng, Yi Cheng, Jianshu Li, Li Zhou, Yan Xu, et al. "3D-Aided Deep Pose-Invariant Face Recognition." In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/165.

Abstract:
Learning from synthetic faces, though perhaps appealing for high data efficiency, may not bring satisfactory performance due to the distribution discrepancy between synthetic and real face images. To mitigate this gap, we propose a 3D-Aided Deep Pose-Invariant Face Recognition Model (3D-PIM), which automatically recovers realistic frontal faces from arbitrary poses through a 3D face model in a novel way. Specifically, 3D-PIM incorporates a simulator with the aid of a 3D Morphable Model (3DMM) to obtain shape and appearance priors for accelerating face normalization learning, requiring less training data. It further leverages a global-local Generative Adversarial Network (GAN) with multiple critical improvements as a refiner to enhance the realism of both global structures and local details of the face simulator's output using unlabelled real data only, while preserving the identity information. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks clearly demonstrate the superiority of the proposed model over the state of the art.
2

Latorre Sánchez, Consuelo, Juan Antonio Solves, Joaquín Sanchiz Navarro, Ricardo Bayona Salvador, Jose Laparra, Nicolás Palomares, and Jose Solaz. "Methodology Based on 3D Thermal Scanner and AI Integration to Model Thermal Comfort and Ergonomics." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001896.

Abstract:
The current pandemic situation due to the appearance of coronavirus-2 or SARS-CoV-2 (COVID-19) has increased the demand for, and the population's familiarity with, infrared cameras and their thermal interpretation. Infrared radiation and the technology behind it have become a necessity, not only for developing new applications for the present but also for the future. In the post-pandemic world, commercial solutions to existing problems are being developed with this technology using very efficient approaches, reducing costs and complementing many areas. The Institute of Biomechanics of Valencia (IBV) is constantly innovating in the field of infrared thermal imaging and its applications to people's well-being through research, experimentation, and user validation. 3D models have been developed that merge anthropometric data and thermal information based on scanners, 3D reconstruction, and image processing. Some of the algorithms for the monitoring and reconstruction system are based on a FLIR A35 thermal camera and an Intel RealSense D455 depth sensor, a low-cost, high-performance sensor. Artificial intelligence techniques applied to images, mainly visible or RGB datasets, have undergone significant development in recent years; however, there is a gap in their application to thermal images. Over the years, the IBV has compiled a powerful database from many users, covering clothing insulation, extreme scenarios, and different poses and face orientations. Many computer vision networks, models, and libraries have been explored, and AI techniques (machine and deep learning) have been applied to extract information from those images, although open solutions and networks do not work accurately.
The thermal database has been used to retrain these network models, and the results have been considerably better. Near real-time, low-cost 3D thermal reconstruction with embedded AI techniques has been applied to facemask evaluation, face recognition, feature and key-point extraction, segmentation, and the development of automatic thermal measurement algorithms. From feature extraction and landmark information, aspects such as thermotype, age, and sex have also been determined, as well as the effects of emotions, rotations, or artifacts like glasses, facemasks, or beards on the identification of the user. IBV has a strong background in this technology and develops innovative solutions to tackle new challenges, from determining the effect a facemask has on thermal comfort or breathing rate to helping physicians diagnose certain diseases, such as circulatory and vascular problems, and assessing the effect of therapies or cosmetic products. In this way, information on the thermoregulatory behavior of the human body is provided, allowing changes in thermal maps to be related to certain pathologies or to the effect of a treatment, skin conditions, varicose veins, or joint injuries.
