To view other types of publications on this topic, follow this link: Face super-resolution.

Journal articles on the topic "Face super-resolution"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Check out the top 50 journal articles for research on the topic "Face super-resolution."

Next to every work in the list there is an "Add to bibliography" option. Use it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read the abstract of the work online, if the relevant data are available in its metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1. Kui Jia and Shaogang Gong. "Generalized Face Super-Resolution". IEEE Transactions on Image Processing 17, no. 6 (June 2008): 873–86. http://dx.doi.org/10.1109/tip.2008.922421.

2. Xin, Jingwei, Nannan Wang, Xinrui Jiang, Jie Li, Xinbo Gao and Zhifeng Li. "Facial Attribute Capsules for Noise Face Super Resolution". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12476–83. http://dx.doi.org/10.1609/aaai.v34i07.6935.

Abstract:
Existing face super-resolution (SR) methods mainly assume the input image to be noise-free. Their performance degrades drastically when applied to real-world scenarios, where the input image is always contaminated by noise. In this paper, we propose a Facial Attribute Capsules Network (FACN) to deal with the problem of high-scale super-resolution of noisy face images. A capsule is a group of neurons whose activity vector models different properties of the same entity. Inspired by the concept of capsules, we propose an integrated representation model of facial information, which we name the Facial Attribute Capsule (FAC). In SR processing, we first generate a group of FACs from the input LR face and then reconstruct the HR face from this group of FACs. To effectively improve the robustness of FACs to noise, we generate them in semantic, probabilistic, and facial-attribute manners by means of an integrated learning strategy. Each FAC can be divided into two sub-capsules: a Semantic Capsule (SC) and a Probabilistic Capsule (PC). Together they describe an explicit facial attribute in detail from the two aspects of semantic representation and probability distribution. The group of FACs models an image as a combination of facial attribute information in the semantic and probabilistic spaces in an attribute-disentangling way. The diverse FACs can better combine face prior information to generate face images with fine-grained semantic attributes. Extensive benchmark experiments show that our method achieves superior hallucination results and outperforms the state of the art for super-resolution of very low-resolution (LR) noisy face images.
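The capsule idea underlying the FACs described above is usually paired with a "squash" non-linearity that keeps a capsule vector's direction while mapping its length into [0, 1). The sketch below illustrates that generic capsule building block only; it is not the paper's FAC implementation.

```python
import math

def squash(v, eps=1e-9):
    """Capsule 'squash' non-linearity: preserves the vector's direction
    but compresses its length into [0, 1). Generic illustration of the
    capsule concept, not the FACN code."""
    sq_norm = sum(x * x for x in v)
    scale = sq_norm / (1.0 + sq_norm) / math.sqrt(sq_norm + eps)
    return [scale * x for x in v]

v = [3.0, 4.0]                 # length 5
print(math.hypot(*squash(v)))  # length becomes 25/26 ≈ 0.9615
```

Long vectors are squashed toward unit length and short ones toward zero, which is what lets a capsule's length act as an existence probability.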
3. Kanakaraj, Sithara, V. K. Govindan and Saidalavi Kalady. "Face Super Resolution: A Survey". International Journal of Image, Graphics and Signal Processing 9, no. 5 (May 8, 2017): 54–67. http://dx.doi.org/10.5815/ijigsp.2017.05.06.

4. Liu, Zhi-Song, Wan-Chi Siu and Yui-Lam Chan. "Reference Based Face Super-Resolution". IEEE Access 7 (2019): 129112–26. http://dx.doi.org/10.1109/access.2019.2934078.

5. Chen, Jin, Jun Chen, Zheng Wang, Chao Liang and Chia-Wen Lin. "Identity-Aware Face Super-Resolution for Low-Resolution Face Recognition". IEEE Signal Processing Letters 27 (2020): 645–49. http://dx.doi.org/10.1109/lsp.2020.2986942.

6. ZHANG, Di, and Jia-Zhong HE. "Feature Space Based Face Super-resolution Reconstruction". Acta Automatica Sinica 38, no. 7 (2012): 1145. http://dx.doi.org/10.3724/sp.j.1004.2012.01145.

7. An, Le, and Bir Bhanu. "Face image super-resolution using 2D CCA". Signal Processing 103 (October 2014): 184–94. http://dx.doi.org/10.1016/j.sigpro.2013.10.004.

8. Lu, Tao, Lanlan Pan, Yingjie Guan and Kangli Zeng. "Face Super-Resolution by Deep Collaborative Representation". Journal of Computer-Aided Design & Computer Graphics 31, no. 4 (2019): 596. http://dx.doi.org/10.3724/sp.j.1089.2019.17323.

9. Gunturk, B. K., A. U. Batur, Y. Altunbasak, M. H. Hayes and R. M. Mersereau. "Eigenface-domain super-resolution for face recognition". IEEE Transactions on Image Processing 12, no. 5 (May 2003): 597–606. http://dx.doi.org/10.1109/tip.2003.811513.

10. Chen, Chaofeng, Dihong Gong, Hao Wang, Zhifeng Li and Kwan-Yee K. Wong. "Learning Spatial Attention for Face Super-Resolution". IEEE Transactions on Image Processing 30 (2021): 1219–31. http://dx.doi.org/10.1109/tip.2020.3043093.

11. Molahasani Majdabadi, Mahdiyar, and Seok-Bum Ko. "Capsule GAN for robust face super resolution". Multimedia Tools and Applications 79, no. 41-42 (August 19, 2020): 31205–18. http://dx.doi.org/10.1007/s11042-020-09489-y.

12. Lu, Tao, Kangli Zeng, Shenming Qu, Yanduo Zhang and Wei He. "Face super-resolution via nonlinear adaptive representation". Neural Computing and Applications 32, no. 15 (December 19, 2019): 11637–49. http://dx.doi.org/10.1007/s00521-019-04652-5.

13. Zeng, Kangli, Tao Lu, Xuefeng Liang, Kai Liu, Hui Chen and Yanduo Zhang. "Face super-resolution via bilayer contextual representation". Signal Processing: Image Communication 75 (July 2019): 147–57. http://dx.doi.org/10.1016/j.image.2019.03.019.

14. Yin, Yu, Joseph Robinson, Yulun Zhang and Yun Fu. "Joint Super-Resolution and Alignment of Tiny Faces". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12693–700. http://dx.doi.org/10.1609/aaai.v34i07.6962.

Abstract:
Super-resolution (SR) and landmark localization of tiny faces are highly correlated tasks. On the one hand, landmark localization achieves higher accuracy on high-resolution (HR) faces. On the other hand, face SR benefits from prior knowledge of facial attributes such as landmarks. Thus, we propose a joint alignment and SR network that simultaneously detects facial landmarks and super-resolves tiny faces. More specifically, a shared deep encoder extracts features for both tasks by leveraging their complementary information. To exploit the representative power of the hierarchical encoder, intermediate layers of the shared feature extraction module are fused to form efficient feature representations. The fused features are then fed to task-specific modules to detect landmarks and super-resolve face images in parallel. Extensive experiments demonstrate that the proposed model significantly outperforms the state of the art in both landmark localization and SR of faces. We show a large improvement for landmark localization of tiny faces (i.e., 16 × 16). Furthermore, the proposed framework yields results for landmark localization on low-resolution (LR) faces (i.e., 64 × 64) comparable to those of existing methods on HR faces (i.e., 256 × 256). As for SR, the proposed method recovers sharper edges and more details from LR face images than other state-of-the-art methods, as we demonstrate both qualitatively and quantitatively.
15. Wang, Xiaoyu. "Very Low Resolution Face Image Super-Resolution Based on DCT". Journal of Information and Computational Science 11, no. 11 (July 20, 2014): 3807–13. http://dx.doi.org/10.12733/jics20104400.

16. Zhang, Ziwei, Yangjing Shi, Xiaoshi Zhou, Hongfei Kan and Juan Wen. "Shuffle block SRGAN for face image super-resolution reconstruction". Measurement and Control 53, no. 7-8 (August 2020): 1429–39. http://dx.doi.org/10.1177/0020294020944969.

Abstract:
When low-resolution face images are used for face recognition, model accuracy decreases substantially. How to recover high-resolution face features from low-resolution images precisely and efficiently is an essential subtask in face recognition. In this study, we introduce shuffle block SRGAN, a new image super-resolution network inspired by the SRGAN structure. By replacing the residual blocks with shuffle blocks, we can achieve efficient super-resolution reconstruction. Furthermore, by considering the generated image quality in the loss function, we can obtain more realistic super-resolution images. We train and test SB-SRGAN on three public face image datasets and use a transfer-learning strategy during the training process. The experimental results show that shuffle block SRGAN can achieve desirable image super-resolution performance with respect to visual effect as well as the peak signal-to-noise ratio and structural similarity index metrics, compared with the performance attained by the other chosen deep-learning models.
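The shuffle blocks mentioned in this abstract build on the channel-shuffle operation popularized by ShuffleNet, which interleaves channels so that grouped convolutions can exchange information. A minimal sketch of that operation on a flat list of channels follows; the shapes and group count are illustrative assumptions, not the paper's code.

```python
def channel_shuffle(channels, groups):
    """Interleave a list of channel feature maps across groups, as in
    ShuffleNet-style shuffle blocks. Purely illustrative."""
    n = len(channels)
    assert n % groups == 0, "channel count must be divisible by groups"
    per_group = n // groups
    # Reading the (groups x per_group) layout column-by-column interleaves
    # one channel from each group in turn.
    return [channels[g * per_group + i]
            for i in range(per_group) for g in range(groups)]

print(channel_shuffle([0, 1, 2, 3, 4, 5], 2))  # -> [0, 3, 1, 4, 2, 5]
```

After the shuffle, each group in the next grouped convolution sees channels from every previous group, which is what makes stacked grouped convolutions expressive at low cost.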
17. Lu, Tao, Jiaming Wang, Junjun Jiang and Yanduo Zhang. "Global-local fusion network for face super-resolution". Neurocomputing 387 (April 2020): 309–20. http://dx.doi.org/10.1016/j.neucom.2020.01.015.

18. Mei, Gong, Xiaoxi He, Ke Wang and Xie Wang. "Single Constrained Face Super-Resolution via Neighbor Patches". Journal of Computational and Theoretical Nanoscience 13, no. 8 (August 1, 2016): 5478–83. http://dx.doi.org/10.1166/jctn.2016.5442.

19. Xia, Jinfeng, Zhizheng Yang, Fang Li, Yuanda Xu, Nan Ma and Chunxing Wang. "Human Face Super-Resolution Based on Hybrid Algorithm". Advances in Molecular Imaging 08, no. 04 (2018): 39–47. http://dx.doi.org/10.4236/ami.2018.84004.

20. Tang Jialin, 唐佳林, 陈泽彬 Chen Zebin, 苏秉华 Su Binghua and 李克勤 Li Keqin. "Super-Resolution Restoration of Low Quality Face Images". Laser & Optoelectronics Progress 55, no. 3 (2018): 031007. http://dx.doi.org/10.3788/lop55.031007.

21. Cao, Lin, Jiape Liu, Kangning Du, Yanan Guo and Tao Wang. "Guided Cascaded Super-Resolution Network for Face Image". IEEE Access 8 (2020): 173387–400. http://dx.doi.org/10.1109/access.2020.3025972.

22. Li, Jinning, Yichen Zhou, Jie Ding, Cen Chen and Xulei Yang. "ID Preserving Face Super-Resolution Generative Adversarial Networks". IEEE Access 8 (2020): 138373–81. http://dx.doi.org/10.1109/access.2020.3011699.

23. Zhou, F., Q. Liao and B. Wang. "Super-resolution for face image by bilateral patches". Electronics Letters 48, no. 18 (August 30, 2012): 1125–26. http://dx.doi.org/10.1049/el.2012.2369.

24. Duan, Yanfei, Yintian Liu, Ruixiang Wang, Dengguo Yao and Hang Zhang. "Progressive face super-resolution via learning prior information". Journal of Physics: Conference Series 1651 (November 2020): 012127. http://dx.doi.org/10.1088/1742-6596/1651/1/012127.

25. Winston, L. Gershom, and Jemima Jebaseeli. "Survey of Face Recognition Using Super-Resolution Techniques". International Journal of Engineering Trends and Technology 8, no. 3 (February 25, 2014): 140–43. http://dx.doi.org/10.14445/22315381/ijett-v8p226.

26. WANG, Yu, Tao LU, Feng YAO, Yuntao WU and Yanduo ZHANG. "Multi-View Texture Learning for Face Super-Resolution". IEICE Transactions on Information and Systems E104.D, no. 7 (July 1, 2021): 1028–38. http://dx.doi.org/10.1587/transinf.2020edp7223.

27. Fookes, Clinton, Frank Lin, Vinod Chandran and Sridha Sridharan. "Evaluation of image resolution and super-resolution on face recognition performance". Journal of Visual Communication and Image Representation 23, no. 1 (January 2012): 75–93. http://dx.doi.org/10.1016/j.jvcir.2011.06.004.

28. Rajput, Shyam Singh, and K. V. Arya. "A robust face super-resolution algorithm and its application in low-resolution face recognition system". Multimedia Tools and Applications 79, no. 33-34 (June 15, 2020): 23909–34. http://dx.doi.org/10.1007/s11042-020-09072-5.

29. Huang, Zhijie, Wenbo Zheng, Lan Yan and Chao Gou. "A Novel Face Super-Resolution Method Based on Parallel Imaging and OpenVINO". Mathematical Problems in Engineering 2021 (February 13, 2021): 1–9. http://dx.doi.org/10.1155/2021/6648983.

Abstract:
Face image super-resolution refers to recovering a high-resolution face image from a low-resolution one. In recent years, owing to breakthrough progress in deep representation learning for super-resolution, the study of face super-resolution has become one of the hot topics in the super-resolution field. However, the performance of these deep-learning-based approaches relies heavily on the scale of the training samples and is limited in efficiency for real-time applications. To address these issues, in this work we introduce a novel method based on parallel imaging theory and OpenVINO. In particular, inspired by the learning-by-synthesis methodology of parallel imaging, we propose to learn from a combination of virtual and real face images. In addition, we introduce a center loss function borrowed from deep models to enhance the robustness of our model, and we propose applying OpenVINO to speed up inference. To the best of our knowledge, this is the first time the problem of face super-resolution has been tackled on the basis of parallel imaging methodology and OpenVINO. Extensive experimental results and comparisons on the publicly available LFW, WebCaricature, and FERET datasets demonstrate the effectiveness and efficiency of the proposed method.
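The center loss mentioned in this abstract penalizes the distance between each deep feature and the center of its class, pulling same-identity features together. The sketch below shows the common formulation (half the mean squared distance to the class center); the shapes and values are illustrative, and this is not the paper's exact code.

```python
def center_loss(features, labels, centers):
    """Center loss: half the mean squared distance of each feature vector
    to its class center. Common formulation; illustrative only."""
    total = 0.0
    for feat, lab in zip(features, labels):
        # squared Euclidean distance from this feature to its class center
        total += sum((f - c) ** 2 for f, c in zip(feat, centers[lab]))
    return 0.5 * total / len(features)

feats = [[1.0, 0.0], [0.0, 1.0]]     # two feature vectors
labels = [0, 1]                      # their class labels
centers = [[1.0, 0.0], [0.0, 0.0]]   # per-class centers
print(center_loss(feats, labels, centers))  # -> 0.25
```

In training, this term is typically added to a classification loss with a small weight, and the centers themselves are updated from mini-batch statistics.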
30. Xin, Jingwei, Nannan Wang, Xinbo Gao and Jie Li. "Residual Attribute Attention Network for Face Image Super-Resolution". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9054–61. http://dx.doi.org/10.1609/aaai.v33i01.33019054.

Abstract:
Facial-prior-knowledge-based methods have recently achieved great success on the task of face image super-resolution (SR). Combining different types of facial knowledge, e.g., facial attribute information with texture and shape information, can be leveraged to better super-resolve face images. In this paper, we present a novel deep end-to-end network for face super-resolution, named the Residual Attribute Attention Network (RAAN), which realizes efficient feature fusion of various types of facial information. Specifically, we construct a multi-block cascaded network with dense connections. Each block has three branches: a Texture Prediction Network (TPN), a Shape Generation Network (SGN) and an Attribute Analysis Network (AAN). We divide the task of face image reconstruction into three steps: extracting pixel-level representation information from the input very low-resolution (LR) image via the TPN and SGN, extracting semantic-level representation information from the input via the AAN, and finally combining the pixel-level and semantic-level information to recover the high-resolution (HR) image. Experiments on benchmark databases illustrate that RAAN significantly outperforms the state of the art for the very low-resolution face SR problem, both quantitatively and qualitatively.
31. Xin, Jingwei, Nannan Wang, Jie Li, Xinbo Gao and Zhifeng Li. "Video Face Super-Resolution with Motion-Adaptive Feedback Cell". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12468–75. http://dx.doi.org/10.1609/aaai.v34i07.6934.

Abstract:
Video super-resolution (VSR) methods have recently achieved remarkable success thanks to the development of deep convolutional neural networks (CNNs). Current state-of-the-art CNN methods usually treat the VSR problem as a large number of separate multi-frame super-resolution tasks, in which a batch of low-resolution (LR) frames is utilized to generate a single high-resolution (HR) frame, and running a sliding window over the entire video yields a series of HR frames. However, due to the complex temporal dependency between frames, the quality of the reconstructed HR frames becomes worse as the number of LR input frames increases. The reason is that these methods lack the ability to model complex temporal dependencies and struggle to give accurate motion estimation and compensation for the VSR process, which makes performance degrade drastically when the motion between frames is complex. In this paper, we propose the Motion-Adaptive Feedback Cell (MAFC), a simple but effective block which can efficiently capture motion compensation and feed it back to the network in an adaptive way. Our approach efficiently utilizes inter-frame motion information, so the network's dependence on explicit motion estimation and compensation methods can be avoided. In addition, benefiting from this property of the MAFC, the network achieves better performance in extremely complex motion scenarios. Extensive evaluations and comparisons validate the strengths of our approach, and the experimental results demonstrate that the proposed framework outperforms the state-of-the-art methods.
32. HUANG, Dong-jun, and Song-lin HOU. "Learning-based nonlinear algorithm of face image super-resolution". Journal of Computer Applications 29, no. 5 (July 27, 2009): 1339–41. http://dx.doi.org/10.3724/sp.j.1087.2009.01339.

33. Tao Lu, Ruimin Hu, Chengdong Lan and Zhen Han. "Face Super-resolution based-on Non-negative Matrix Factorization". International Journal of Digital Content Technology and its Applications 5, no. 4 (April 30, 2011): 82–87. http://dx.doi.org/10.4156/jdcta.vol5.issue4.10.

34. SenthilSingh. "FACE RECOGNITION USING RELATIONSHIP LEARNING BASED SUPER RESOLUTION ALGORITHM". American Journal of Applied Sciences 11, no. 3 (March 1, 2014): 475–81. http://dx.doi.org/10.3844/ajassp.2014.475.481.

35. Jiang, Junjun, Ruimin Hu, Chao Liang, Zhen Han and Chunjie Zhang. "Face image super-resolution through locality-induced support regression". Signal Processing 103 (October 2014): 168–83. http://dx.doi.org/10.1016/j.sigpro.2014.02.014.

36. Hui, Zhuo, and Kin-Man Lam. "Eigentransformation-based face super-resolution in the wavelet domain". Pattern Recognition Letters 33, no. 6 (April 2012): 718–27. http://dx.doi.org/10.1016/j.patrec.2011.12.001.

37. Hikichi, Ikumi, Syogo Hara and Makoto Motoki. "Super-Resolution Method of Face Image using Capsule Network". IEEJ Transactions on Electronics, Information and Systems 140, no. 11 (November 1, 2020): 1270–77. http://dx.doi.org/10.1541/ieejeiss.140.1270.

38. Grm, Klemen, Walter J. Scheirer and Vitomir Struc. "Face Hallucination Using Cascaded Super-Resolution and Identity Priors". IEEE Transactions on Image Processing 29 (2020): 2150–65. http://dx.doi.org/10.1109/tip.2019.2945835.

39. Kim, Jonghyun, Gen Li, Inyong Yun, Cheolkon Jung and Joongkyu Kim. "Edge and identity preserving network for face super-resolution". Neurocomputing 446 (July 2021): 11–22. http://dx.doi.org/10.1016/j.neucom.2021.03.048.

40. Wang, Huan, Qian Hu, Chengdong Wu, Jianning Chi, Xiaosheng Yu and Hao Wu. "DCLNet: Dual Closed-loop Networks for face super-resolution". Knowledge-Based Systems 222 (June 2021): 106987. http://dx.doi.org/10.1016/j.knosys.2021.106987.

41. Chudasama, Vishal, Kartik Nighania, Kishor Upla, Kiran Raja, Raghavendra Ramachandra and Christoph Busch. "E-ComSupResNet: Enhanced Face Super-Resolution Through Compact Network". IEEE Transactions on Biometrics, Behavior, and Identity Science 3, no. 2 (April 2021): 166–79. http://dx.doi.org/10.1109/tbiom.2021.3059196.

42. Liu, Shuang, Chengyi Xiong, Xiaodi Shi and Zhirong Gao. "Progressive face super-resolution with cascaded recurrent convolutional network". Neurocomputing 449 (August 2021): 357–67. http://dx.doi.org/10.1016/j.neucom.2021.03.124.

43. Zhang, Fan, Junli Zhao, Liang Wang and Fuqing Duan. "3D Face Model Super-Resolution Based on Radial Curve Estimation". Applied Sciences 10, no. 3 (February 5, 2020): 1047. http://dx.doi.org/10.3390/app10031047.

Abstract:
Consumer depth cameras enable cheap and fast acquisition of 3D models. However, the precision and resolution of these consumer depth cameras cannot satisfy the requirements of some 3D face applications. In this paper, we present a super-resolution method for reconstructing a high-resolution 3D face model from a low-resolution 3D face model acquired with a consumer depth camera. We use a group of radial curves to represent a 3D face. For a given low-resolution 3D face model, we first extract its radial curves and then estimate their corresponding high-resolution counterparts by radial curve matching, for which Dynamic Time Warping (DTW) is used. Finally, a reference high-resolution 3D face model is deformed to generate the high-resolution face model, using the radial curves as the constraining feature. We evaluated our method both qualitatively and quantitatively, and the experimental results validate it.
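The Dynamic Time Warping step used for the curve matching above can be sketched in a few lines. This toy version aligns 1-D scalar sequences with an absolute-difference local cost, whereas the paper matches 3-D radial curves; it illustrates only the alignment recurrence, not the paper's pipeline.

```python
def dtw_distance(a, b):
    """Classic Dynamic Time Warping between two 1-D sequences.

    Toy illustration with an absolute-difference local cost; each cell
    D[i][j] is the cheapest alignment of a[:i] with b[:j]."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: match, repeat a[i-1], or repeat b[j-1]
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A time-stretched copy of a curve aligns at zero cost, which is exactly
# why DTW suits matching curves sampled at different resolutions.
curve = [0.0, 1.0, 2.0, 1.0, 0.0]
stretched = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0, 0.0]
print(dtw_distance(curve, stretched))  # -> 0.0
```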
44. Rajnoha, Martin, Anzhelika Mezina and Radim Burget. "Multi-Frame Labeled Faces Database: Towards Face Super-Resolution from Realistic Video Sequences". Applied Sciences 10, no. 20 (October 16, 2020): 7213. http://dx.doi.org/10.3390/app10207213.

Abstract:
Forensically trained facial reviewers are still considered one of the most accurate approaches to person identification from video records. The human brain can utilize information not just from a single image but from a sequence of images (i.e., videos), and even with low-quality records or a long distance from the camera it can accurately identify a given person. Unfortunately, in many cases a single still image is needed. An example of such a case is a police search that is about to be announced in newspapers. This paper introduces a face database collected in a real environment, comprising 17,426 sequences of images. The dataset includes persons of various races and ages as well as different environments, lighting conditions and camera device types. This paper also introduces a new multi-frame face super-resolution method and compares it with state-of-the-art single-frame and multi-frame super-resolution methods. We show that the proposed method increases the quality of face images, even for low-resolution, low-quality input images, and provides better results than single-frame approaches, which are still considered the best in this area. The quality of face images was evaluated using several objective mathematical methods as well as subjectively by several volunteers. The source code and the dataset have been released, and the experiment is fully reproducible.
45. Chen, Liang, Jinshan Pan, Junjun Jiang, Jiawei Zhang and Yi Wu. "Robust Face Super-Resolution via Position Relation Model Based on Global Face Context". IEEE Transactions on Image Processing 29 (2020): 9002–16. http://dx.doi.org/10.1109/tip.2020.3023580.

46. Wang Yanran, 王嫣然, 罗宇豪 Luo Yuhao and 尹东 Yin Dong. "A Super Resolution Technology of Face Image for Surveillance Video". Acta Optica Sinica 37, no. 3 (2017): 0318012. http://dx.doi.org/10.3788/aos201737.0318012.

47. Huang, Hua, Huiting He, Xin Fan and Junping Zhang. "Super-resolution of human face image using canonical correlation analysis". Pattern Recognition 43, no. 7 (July 2010): 2532–43. http://dx.doi.org/10.1016/j.patcog.2010.02.007.

48. Niu, Zhouzhou, Jianhong Shi, Lei Sun, Yan Zhu, Jianping Fan and Guihua Zeng. "Photon-limited face image super-resolution based on deep learning". Optics Express 26, no. 18 (August 21, 2018): 22773. http://dx.doi.org/10.1364/oe.26.022773.

49. Liu, Qing-Ming, Rui-Sheng Jia, Chao-Yue Zhao, Xiao-Ying Liu, Hong-Mei Sun and Xing-Li Zhang. "Face Super-Resolution Reconstruction Based on Self-Attention Residual Network". IEEE Access 8 (2020): 4110–21. http://dx.doi.org/10.1109/access.2019.2962790.

50. Ayan Chakrabarti, A. N. Rajagopalan and Rama Chellappa. "Super-Resolution of Face Images Using Kernel PCA-Based Prior". IEEE Transactions on Multimedia 9, no. 4 (June 2007): 888–92. http://dx.doi.org/10.1109/tmm.2007.893346.
