Journal articles on the topic "CAS-PEAL"

To see the other types of publications on this topic, follow the link: CAS-PEAL.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles.

Consult the 21 best journal articles for your research on the topic "CAS-PEAL".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1. Gao, Wen, Bo Cao, Shiguang Shan, Xilin Chen, Delong Zhou, Xiaohua Zhang, and Debin Zhao. "The CAS-PEAL Large-Scale Chinese Face Database and Baseline Evaluations." IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 38, no. 1 (January 2008): 149–61. http://dx.doi.org/10.1109/tsmca.2007.909557.

2. Zhang, Xiang De, Qing Song Tang, Hua Jin, and Yue Qiu. "Eye Location Based on Adaboost and Region Features." Applied Mechanics and Materials 143-144 (December 2011): 731–36. http://dx.doi.org/10.4028/www.scientific.net/amm.143-144.731.

Abstract:
In this paper, we propose a novel eye location method based on Adaboost and region features. First, Haar features and the Adaboost algorithm are used to extract the eye regions from a face image. Then, region features that highlight the characteristics of the eyes are used to locate them. The proposed method has been tested on the CAS-PEAL-R1 database and the CASIA NIR database separately, with accuracy rates of 98.86% and 97.68%, which demonstrates its effectiveness.
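
This two-stage pattern, boosted Haar features for coarse detection followed by finer localization, is the same mechanism exposed by OpenCV's pretrained cascades. A minimal sketch of the Haar/AdaBoost stage, not the authors' implementation; the input path and detector parameters are assumed values:

```python
import cv2

# Viola-Jones-style detectors: Haar features selected by AdaBoost,
# analogous to the paper's eye-region extraction stage.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("face.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
    # Restrict the eye search to the upper half of each detected face,
    # a common stand-in for the coarse eye-region step.
    roi = gray[y:y + h // 2, x:x + w]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 5):
        cv2.rectangle(img, (x + ex, y + ey),
                      (x + ex + ew, y + ey + eh), (0, 255, 0), 2)

cv2.imwrite("eyes_located.jpg", img)
```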

3. GÜNTHER, MANUEL, and ROLF P. WÜRTZ. "FACE DETECTION AND RECOGNITION USING MAXIMUM LIKELIHOOD CLASSIFIERS ON GABOR GRAPHS." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 03 (May 2009): 433–61. http://dx.doi.org/10.1142/s0218001409007211.

Abstract:
We present an integrated face recognition system that combines a Maximum Likelihood (ML) estimator with Gabor graphs for face detection and matching under varying scale and in-plane rotation, together with a Bayesian intrapersonal/extrapersonal classifier (BIC) on graph similarities for face recognition. We have tested a variety of similarity functions and achieved verification rates (at FAR 0.1%) of 90.5% on expression-varying and 95.8% on size-varying frontal images within the CAS-PEAL database. Performing Experiment 1 of FRGC ver2.0, the method achieved a verification rate of 72%.

4. Qi, Yong Feng, and Yuan Lian Huo. "Locality Preserving Maximum Scatter Difference Projection for Face Recognition." Applied Mechanics and Materials 411-414 (September 2013): 1179–84. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.1179.

Abstract:
Maximum Scatter Difference (MSD) aims to preserve the discriminant information of the sample space, but it fails to find the essential structure of samples with a nonlinear distribution. To overcome this problem, an efficient feature extraction method named Locality Preserving Maximum Scatter Difference (LPMSD) projection is proposed in this paper. The new algorithm is developed from locality preserving embedding and the MSD criterion. Thus, the proposed LPMSD not only preserves the discriminant information of the sample space but also captures its intrinsic submanifold. Experimental results on the ORL, Yale and CAS-PEAL face databases indicate that the LPMSD method outperforms the MSD, MMSD and LDA methods under various experimental conditions.
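
For context, the plain MSD criterion that LPMSD extends seeks projection vectors w maximizing w^T (S_b - C S_w) w, where S_b and S_w are the between-class and within-class scatter matrices and C is a balance constant. A minimal NumPy sketch of that baseline (the locality-preserving extension itself is not reproduced here):

```python
import numpy as np

def msd_projection(X, y, n_components=2, C=1.0):
    """Maximum Scatter Difference: top eigenvectors of Sb - C*Sw.

    X: (n_samples, n_features) data, y: integer class labels.
    Generic baseline sketch, not the paper's locality-preserving variant.
    """
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * diff @ diff.T   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)   # within-class scatter
    # Sb - C*Sw is symmetric, so eigh applies; keep the leading eigenvectors.
    vals, vecs = np.linalg.eigh(Sb - C * Sw)
    return vecs[:, np.argsort(vals)[::-1][:n_components]]

# Usage: W = msd_projection(X_train, y_train); features = X_train @ W
```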

5. Jing, Xiao Yuan, Li Li, Cai Ling Wang, Yong Fang Yao, and Feng Nan Yu. "Research on Image Feature Extraction Method Based on Orthogonal Projection Transformation of Multi-Task Learning Technology." Advanced Materials Research 760-762 (September 2013): 1609–14. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1609.

Abstract:
When the number of labeled training samples is very small, the sample information we can use is very limited. Because of this, the recognition rates of some traditional image recognition methods are not satisfactory. In order to use related information that often exists in other databases, which is helpful to feature extraction and can improve recognition rates, we apply multi-task learning to the feature extraction of images. Our research is based on transferring the projection transformation. Experimental results on the public AR, FERET and CAS-PEAL databases demonstrate that the proposed approaches are more effective than related general feature extraction methods in classification performance.

6. Jing, Xiao Yuan, Min Li, Yong Fang Yao, Song Hao Zhu, and Sheng Li. "A New Kernel Orthogonal Projection Analysis Approach for Face Recognition." Advanced Materials Research 760-762 (September 2013): 1627–32. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1627.

Abstract:
In the field of face recognition, how to extract effective nonlinear discriminative features is an important research topic. In this paper, we propose a new kernel orthogonal projection analysis approach. We obtain the optimal nonlinear projective vector, which can differentiate one class from its adjacent classes, by using the Fisher criterion and constructing the specific between-class and within-class scatter matrices in kernel space. In addition, to eliminate the redundancy among projective vectors, our approach makes every projective vector satisfy locally orthogonal constraints by using the corresponding class and part of its most adjacent classes. Experimental results on the public AR and CAS-PEAL face databases demonstrate that the proposed approach outperforms several representative nonlinear projection analysis methods.

7. Jing, Xiao Yuan, Xiang Long Ge, Yong Fang Yao, and Feng Nan Yu. "Feature Extraction Algorithm Based on Sample Set Reconstruction." Applied Mechanics and Materials 347-350 (August 2013): 2241–45. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2241.

Abstract:
When the number of labeled training samples is very small, the sample information available is very limited, and the recognition rates of traditional image recognition methods are not satisfactory. However, other databases often contain related information that is helpful to feature extraction. It is therefore worth taking full advantage of the data in other databases through transfer learning. In this paper, the idea of transferring samples is employed, and we further propose a feature extraction approach based on sample set reconstruction. We realize the approach by reconstructing the training sample set using the difference information among the samples of other databases. Experimental results on three widely used face databases (AR, FERET and CAS-PEAL) are presented to demonstrate the efficacy of the proposed approach in classification performance.

8. ZHANG, CHENGYUAN, QIUQI RUAN, and YI JIN. "FUSING GLOBAL AND LOCAL COMPLETE LINEAR DISCRIMINANT FEATURES BY FUZZY INTEGRAL FOR FACE RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 07 (November 2008): 1427–45. http://dx.doi.org/10.1142/s0218001408006806.

Abstract:
Face recognition becomes very difficult in a complex environment, and the combination of multiple classifiers is a good solution to this problem. A novel face recognition algorithm GLCFDA-FI is proposed in this paper, which fuses the complementary information extracted by complete linear discriminant analysis from the global and local features of a face to improve the performance. The Choquet fuzzy integral is used as the fusing tool due to its suitable properties for information aggregation. Experiments are carried out on the CAS-PEAL-R1 database, the Harvard database and the FERET database to demonstrate the effectiveness of the proposed method. Results also indicate that the proposed method GLCFDA-FI outperforms five other commonly used algorithms — namely, Fisherfaces, null space-based linear discriminant analysis (NLDA), cascaded-LDA, kernel-Fisher discriminant analysis (KFDA), and null-space based KFDA (NKFDA).

9. Ardakany, Abbas Roayaei, Mircea Nicolescu, and Monica Nicolescu. "Improving Gender Classification Using an Extended Set of Local Binary Patterns." International Journal of Multimedia Data Engineering and Management 5, no. 3 (July 2014): 47–66. http://dx.doi.org/10.4018/ijmdem.2014070103.

Abstract:
In this article, the authors designed and implemented an efficient gender recognition system with high classification accuracy. To this end, they propose a novel local binary descriptor capable of extracting more informative and discriminative local features for the purpose of gender classification. Traditional local binary patterns encode the relationship between a central pixel value and those of its neighboring pixels in a very compact manner. In the proposed method the authors incorporate more information from the neighborhood into the descriptor by using extra patterns. They have evaluated their approach on the standard FERET and CAS-PEAL databases, and the experiments show that the proposed approach offers superior results compared to techniques using state-of-the-art descriptors such as LBP, LDP and HOG. The results demonstrate the effectiveness and robustness of the proposed system, with 98.33% classification accuracy.

10. Cai, Ying, Menglong Yang, and Ziqiang Li. "Robust Head Pose Estimation Using a 3D Morphable Model." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/678973.

Abstract:
Head pose estimation from single 2D images is an important and challenging research task in computer vision. This paper presents a novel head pose estimation method which utilizes the shape model of the Basel face model and five fiducial points in faces. It adjusts shape deformation according to a Laplace distribution to accommodate shape variation across different persons. A new matching method based on the PSO (particle swarm optimization) algorithm is applied both to reduce the time cost of shape reconstruction and to achieve higher accuracy than traditional optimization methods. In order to evaluate accuracy objectively, we propose a new way to compute pose estimation errors. Experiments on the BFM-synthetic database, the BU-3DFE database, the CUbiC FacePix database, the CMU PIE face database, and the CAS-PEAL-R1 database show that the proposed method is robust, accurate, and computationally efficient.
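
The PSO matching step is generic enough to sketch. Below is a minimal particle swarm minimizer of the kind such a step could call; the toy fitness function stands in for the paper's shape-reprojection error, and the swarm size, inertia and acceleration constants are assumed values:

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Generic particle swarm optimization over a box-constrained space."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy example: recover a "shape parameter" vector by minimizing squared error.
target = np.array([0.3, -0.5, 0.1])
best, err = pso_minimize(lambda p: ((p - target) ** 2).sum(), dim=3)
```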

11. Zou, Guofeng, Yuanyuan Zhang, Kejun Wang, Shuming Jiang, Huisong Wan, and Guixia Fu. "An Improved Metric Learning Approach for Degraded Face Recognition." Mathematical Problems in Engineering 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/724978.

Abstract:
To solve the problem of matching elements from different data collections, an improved coupled metric learning approach is proposed. First, we improve the supervised locality preserving projection algorithm and add its within-class and between-class information to coupled metric learning, yielding a novel coupled metric learning method. Furthermore, we extend this algorithm to a nonlinear space, giving a kernel coupled metric learning method based on supervised locality preserving projection. In the kernel coupled approach, elements of two different collections are mapped to a unified high-dimensional feature space by a kernel function, and generalized metric learning is then performed in this space. Experiments on the Yale and CAS-PEAL-R1 face databases demonstrate that the proposed kernel coupled approach performs better in low-resolution and fuzzy face recognition and can reduce computing time; it is an effective metric method.

12. Reddy, A. Mallikarjuna, V. Venkata Krishna, and L. Sumalatha. "Face recognition based on stable uniform patterns." International Journal of Engineering & Technology 7, no. 2 (April 28, 2018): 626. http://dx.doi.org/10.14419/ijet.v7i2.9922.

Abstract:
Face recognition (FR) is one of the challenging and active research fields of image processing, computer vision and biometrics, with numerous proposed systems. We present a feature extraction method named "stable uniform local pattern" (SULP), a refined variant of the ULBP operator, for robust face recognition. The SULP is applied directly to the gradient images (in the x and y directions) of a single face image to capture significant fundamental local texture patterns and build a feature vector. The histogram sequences of the SULP images of the two gradient images are finally concatenated to form the "stable uniform local pattern gradient" (SULPG) vector for the given image. The SULPG approach is evaluated on the Yale, ATT-ORL, FERET, CAS-PEAL and LFW face databases, and the results are compared with the LBP model and various variants of the LBP descriptor. The results indicate that the present descriptor is more powerful against a wide range of challenges, such as illumination, expression and pose variations, and outperforms the state-of-the-art methods based on LBP.

13. LIAN, HUI-CHENG, and BAO-LIANG LU. "MULTI-VIEW GENDER CLASSIFICATION USING MULTI-RESOLUTION LOCAL BINARY PATTERNS AND SUPPORT VECTOR MACHINES." International Journal of Neural Systems 17, no. 06 (December 2007): 479–87. http://dx.doi.org/10.1142/s0129065707001317.

Abstract:
In this paper, we present a novel method for multi-view gender classification considering both shape and texture information to represent facial images. The face area is divided into small regions from which local binary pattern (LBP) histograms are extracted and concatenated into a single vector efficiently representing a facial image. Following the idea of local binary pattern, we propose a new feature extraction approach called multi-resolution LBP, which can retain both fine and coarse local micro-patterns and spatial information of facial images. The classification tasks in this work are performed by support vector machines (SVMs). The experiments clearly show the superiority of the proposed method over both support gray faces and support Gabor faces on the CAS-PEAL face database. A higher correct classification rate of 96.56% and a higher cross validation average accuracy of 95.78% have been obtained. In addition, the simplicity of the proposed method leads to very fast feature extraction, and the regional histograms and fine-to-coarse description of facial images allow for multi-view gender classification.
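
The multi-resolution idea, uniform-LBP histograms computed at several radii over a grid of face regions and concatenated into one vector for an SVM, can be sketched with scikit-image and scikit-learn. This is a generic illustration rather than the authors' exact pipeline; the radii, grid size and kernel are assumptions:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def multires_lbp_features(gray, radii=(1, 2, 3), grid=(4, 4)):
    """Concatenate uniform-LBP histograms over a grid of face regions,
    at several (P, R) resolutions, into one feature vector."""
    h, w = gray.shape
    gh, gw = h // grid[0], w // grid[1]
    feats = []
    for r in radii:
        p = 8 * r                                  # common neighbor count per radius
        lbp = local_binary_pattern(gray, p, r, method="uniform")
        n_bins = p + 2                             # uniform patterns + "non-uniform" bin
        for i in range(grid[0]):
            for j in range(grid[1]):
                cell = lbp[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
                hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
                feats.append(hist / max(hist.sum(), 1))  # normalized histogram
    return np.concatenate(feats)

# Hypothetical usage, with face_images a list of aligned grayscale arrays:
# X = np.stack([multires_lbp_features(img) for img in face_images])
# clf = SVC(kernel="rbf").fit(X, genders)   # genders: 0/1 labels
```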

14. Bekhet, Saddam, Abdullah M. Alghamdi, and Islam F. Taj-Eddin. "Gender recognition from unconstrained selfie images: a convolutional neural network approach." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 2 (April 1, 2022): 2066. http://dx.doi.org/10.11591/ijece.v12i2.pp2066-2078.

Abstract:
Human gender recognition is an essential demographic tool. This is reflected in forensic science, surveillance systems and targeted marketing applications. This research has traditionally been driven by standard face images and hand-crafted features. Such approaches have achieved good results; however, the reliability of the facial images has a great effect on the robustness of the extracted features, where any small change in the query facial image could change the results. Nevertheless, the performance of current techniques in unconstrained environments is still inefficient, especially when contrasted against recent breakthroughs in other computer vision research. This paper introduces a novel technique for human gender recognition from non-standard selfie images using deep learning approaches. Selfie photos are uncontrolled partial or full-frontal body images that are usually taken by people themselves in real-life environments. As far as we know, this is the first paper of its kind to identify gender from selfie photos using a deep learning approach. The experimental results on the selfie dataset emphasize the proposed technique's effectiveness in recognizing gender from such images, with 89% accuracy. The performance is further consolidated by testing on numerous benchmark datasets that are widely used in the field, namely Adience, LFW, FERET, NIVE, Caltech WebFaces and CAS-PEAL-R1.

15. Zhang, Zheng, Guozhi Song, and Jigang Wu. "A Novel Two-Stage Illumination Estimation Framework for Expression Recognition." Scientific World Journal 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/565389.

Abstract:
One of the critical issues for facial expression recognition is to eliminate the negative effect caused by variant poses and illuminations. In this paper a two-stage illumination estimation framework is proposed based on three-dimensional representative faces and clustering, which can estimate illumination directions under a series of poses. First, 256 training 3D face models are adaptively categorized into a certain number of facial structure types by k-means clustering, grouping people with similar facial appearance into clusters. Then the representative face of each cluster is generated to represent the facial appearance type of that cluster. Our training set is obtained by rotating all representative faces to a certain pose, illuminating them with a series of different illumination conditions, and then projecting them into two-dimensional images. Finally the saltire-over-cross feature is selected to train a group of SVM classifiers, and satisfactory performance is achieved on a number of test sets, including images generated from 64 3D face models kept for testing, the CAS-PEAL face database, the CMU PIE database, and a small test set created by ourselves. Compared with other related works, our method is subject independent and has a lower computational complexity of O(C×N), without 3D facial reconstruction.
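
The clustering step is standard: group the training faces with k-means and take each cluster's centroid as its representative face. A sketch under assumed shapes and cluster count, with a random array standing in for the 256 vectorized 3D face models:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the 256 training models: one flattened 3D face shape per row.
models = np.random.rand(256, 3 * 1000)

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(models)
representative_faces = km.cluster_centers_  # one representative face per cluster
cluster_of_each_model = km.labels_          # cluster assignment per training face
```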

16. Sun, Zhe, Zheng-Ping Hu, Meng Wang, Fan Bai, and Bo Sun. "Robust Facial Expression Recognition with Low-Rank Sparse Error Dictionary Based Probabilistic Collaborative Representation Classification." International Journal on Artificial Intelligence Tools 26, no. 04 (August 2017): 1750017. http://dx.doi.org/10.1142/s0218213017500178.

Abstract:
The performance of facial expression recognition (FER) can be degraded by factors such as individual differences and Gaussian random noise. Prior feature extraction methods like Local Binary Patterns (LBP) and Gabor filters require explicit expression components, which are often unavailable and difficult to obtain. To make FER more robust, we propose a novel approach based on a low-rank sparse error dictionary (LRSE) to remit the side effects caused by the problems above. The query samples can then be represented and classified by a probabilistic collaborative representation based classifier (ProCRC), which better computes the maximum likelihood that the query sample belongs to the collaborative subspace of all classes. The final classification is performed by seeking the class with the maximum probability. The proposed approach, which exploits ProCRC associated with the LRSE features (LRSE ProCRC), reaches higher average accuracies on different databases (i.e., 79.39% on the KDEF database, 89.54% on the CAS-PEAL database, 84.45% on the CK+ database, etc.). In addition, our method also leads to state-of-the-art classification results across feature extraction methods, training samples, Gaussian noise variances and classification based methods on benchmark databases.

17. Márquez-Olivera, Moisés, Antonio-Gustavo Juárez-Gracia, Viridiana Hernández-Herrera, Amadeo-José Argüelles-Cruz, and Itzamá López-Yáñez. "System for Face Recognition under Different Facial Expressions Using a New Associative Hybrid Model Amαβ-KNN for People with Visual Impairment or Prosopagnosia." Sensors 19, no. 3 (January 30, 2019): 578. http://dx.doi.org/10.3390/s19030578.

Abstract:
Face recognition is a natural skill that a child performs from the first days of life; unfortunately, there are people with visual or neurological problems that prevent them from performing the process visually. This work describes a system that integrates Artificial Intelligence and learns the faces of the people with whom the user interacts daily. We propose a new hybrid model of Alpha-Beta Associative memories (Amαβ) with Correlation Matrix (CM) and K-Nearest Neighbors (KNN), where the Amαβ-CMKNN is trained with characteristic biometric vectors generated from images of faces of people showing different facial expressions such as happiness, surprise, anger and sadness. To test the performance of the hybrid model, two experiments that differ in the selection of parameters characterizing the face are conducted. The performance of the proposed model was tested on the CK+, CAS-PEAL-R1 and Face-MECS (own) databases, which test the Amαβ-CMKNN with faces of subjects of both sexes, different races, facial expressions, poses and environmental conditions. The hybrid model was able to remember 100% of all the faces learned during training, while in the test where faces with variations from those learned are presented, the results range from 95.05% in controlled environments to 86.48% in real environments using the proposed integrated system.

18. Amanzadeh, Soodabeh, Yahya Forghani, and Javad Mahdavi Chabok. "Improvements on Learning Kernel Extended Dictionary for Face Recognition." Revue d'Intelligence Artificielle 34, no. 4 (September 30, 2020): 387–94. http://dx.doi.org/10.18280/ria.340402.

Abstract:
The kernel extended dictionary learning model (KED) is a recent type of Sparse Representation for Classification (SRC), which represents the input face image as a linear combination of a dictionary set and an extended dictionary set to determine the input face image's class label. The extended dictionary is created from the differences between occluded images and non-occluded training images. KED has four drawbacks: (1) Similar weights are assigned to the principal components of occlusion variations in the KED model, while these components should have different weights, proportional to their eigenvalues. (2) Reconstruction of an occluded image is not possible by combining only non-occluded images and the principal components (or directions) of occlusion variations; it also requires the mean of occlusion variations. (3) The importance and capability of the main dictionary and the extended dictionary in reconstructing the input face image are not necessarily the same. (4) KED's runtime is high. To address these problems, a novel mathematical model is proposed in this paper. In the proposed model, different weights are assigned to the principal components of occlusion variations; different weights are assigned to the main dictionary and the extended dictionary; an occluded image is reconstructed from non-occluded images, the principal components of occlusion variations, and also the mean of occlusion variations; and collaborative representation is used instead of sparse representation to improve the runtime. Experimental results on CAS-PEAL subsets showed that the runtime and accuracy of the proposed model are about 1% better than those of KED.

19. Dong, Jun, Xue Yuan, and Fanlun Xiong. "Lighting Equilibrium Distribution Maps and Their Application to Face Recognition Under Difficult Lighting Conditions." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 03 (February 2017): 1756003. http://dx.doi.org/10.1142/s0218001417560031.

Abstract:
In this paper, a novel facial-patch based recognition framework is proposed to deal with the problem of face recognition (FR) under severe illumination conditions. First, novel lighting equilibrium distribution maps (LEDM) for illumination normalization are proposed. In LEDM, an image is analyzed in the logarithm domain with the wavelet transform, and the approximation coefficients of the image are mapped according to a reference-illumination map in order to normalize the distribution of illumination energy due to different lighting effects. Meanwhile, the detail coefficients are enhanced to emphasize detail information. The LEDM is obtained by blurring the distances between the test image and the reference illumination map in the logarithm domain, which may express the entire distribution of illumination variations. Then, a facial-patch based framework and a credit degree based facial patch synthesizing algorithm are proposed. Each normalized face image is divided into several stacked patches, all patches are classified individually, and each patch from the test image casts a vote toward the parent image classification. A novel credit degree map is established based on the LEDM, which decides a credit degree for each facial patch. The main idea of credit degree map construction is that over- and under-illuminated regions should be assigned lower credit degrees than well-illuminated regions. Finally, results are obtained by credit degree based facial patch synthesizing. The proposed method provides state-of-the-art performance on three data sets that are widely used for testing FR under different illumination conditions: Extended Yale-B, CAS-PEAL-R1, and CMU PIE. Experimental results show that our FR framework outperforms several existing illumination compensation methods.

20. Huang, Yongwen, Dingding Chen, Haiyan Wang, and Lulu Wang. "Gender recognition of Guanyin in China based on VGGNet." Heritage Science 10, no. 1 (June 21, 2022). http://dx.doi.org/10.1186/s40494-022-00732-3.

Abstract:
The gender transformation of Guanyin (Avalokitesvara in India) in China is an intrinsically fascinating research topic. Besides the inner sources from scriptures, literature and epigraphs, iconological analysis usually serves as the external evidence for recognizing Guanyin's gender. The ambiguous gender of the Guanyin image, however, is often intentional and can be assessed objectively. Can computer vision be applied to this recognition objectively and quantitatively? In this paper, VGGNet (a very deep convolutional network for large-scale image recognition proposed by the Visual Geometry Group of Oxford University) is applied to build an automatic gender recognition system. To validate its efficiency, extensive experiments are carried out on images from the Dazu Rock Carvings, the Dunhuang Mogao Caves, and the Yungang Grottoes. The quantitative results support the following conclusions. First, the VGG-based method can be effectively applied to gender recognition on both non-Buddhist and Buddhist images. Compared with five classical feature extraction methods, the VGG-based method performs only slightly better on non-Buddhist images but is clearly superior on Buddhist images. Furthermore, experiments are carried out with three different real-world facial training datasets: CUHK (a student face database of the Chinese University of Hong Kong), IMFDB (an Indian movie face database) and CAS-PEAL (a Chinese face database created by the Chinese Academy of Sciences (CAS) with varying pose, expression, accessory, and lighting (PEAL)). The unsatisfactory results based on IMFDB indicate that Indian facial images are not a valid training set for gender recognition on Buddhist images in China: with the sinicization of Buddhism, Buddhist images in ancient China showed more Chinese than Indian characteristics. The results based on CAS-PEAL are more robust than those based on CUHK, as the former consists mainly of mature adult faces while the latter consists of young student faces; this supports the view that Buddha and Bodhisattvas (Guanyin included) were depicted as ideally mature men in original Buddhist art. Last and most meaningful, besides the time factor, the relationship between image making and the scriptures, or the intentional combination of male and female features, the geographical factor should not be ignored when discussing the gender transformation of Guanyin in ancient China. The Guanyin frescoes in the Dunhuang Mogao Caves painted in the Sui, Tang, Five Dynasties, Song and Yuan periods always show prominent male characteristics (a tadpole-like moustache), while the bodhisattvas in the Yungang Grottoes, carved in the Northern Wei Dynasty, are feminine even though they were made earlier than those in the Dunhuang Mogao Caves. This differs from the common idea that the feminization of Guanyin began in the early Tang Dynasty and was complete by the late Tang Dynasty. Both the quantitative results and image analysis indicate that there might be a common model in a specific region, so the image-making of Guanyin was affected much more by geographical than by temporal factors. In a word, the feminization of Guanyin in China is quite a complicated issue.
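
In torchvision terms, adapting a VGG network to a binary gender output amounts to replacing the final classifier layer. A sketch, not the authors' training setup; the weight initialization and fine-tuning schedule are assumptions:

```python
import torch.nn as nn
from torchvision.models import vgg16

model = vgg16(weights=None)               # or ImageNet weights for transfer learning
model.classifier[6] = nn.Linear(4096, 2)  # two outputs: male / female

# Training would proceed with a standard cross-entropy loss on face crops from
# the chosen training set (CAS-PEAL, CUHK or IMFDB, as compared in the paper).
```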

21. Shen, Yumin, and Hongyu Guo. "New Breakthroughs and Innovation Modes in English Education in Post-pandemic Era." Frontiers in Psychology 13 (February 11, 2022). http://dx.doi.org/10.3389/fpsyg.2022.839440.

Abstract:
The outbreak of COVID-19 has brought drastic changes to English teaching, shifting it from the offline mode before the pandemic to the online mode during the pandemic. In the post-pandemic era, however, there are still many problems in the effective implementation of English teaching, which hinder improvements in its quality and efficiency and the effective cultivation of students' practical language skills. In recent years, spoken English has attracted the attention of experts and scholars. This study therefore constructs an interactive English-speaking practice scene based on a virtual character. A dual-modality emotion recognition method is proposed that mainly recognizes and analyzes the facial expressions and physiological signals of students and the virtual character in each scene. The system then adjusts the difficulty of the conversation according to the current state of students, making the conversation more conducive to the students' understanding and gradually improving their English-speaking ability. The simulation compares nine facial expressions based on the eNTERFACE05 and CAS-PEAL datasets and shows that the proposed emotion recognition method can effectively recognize students' emotions in interactive English-speaking practice and greatly reduce recognition time. The recognition accuracy for the nine facial expressions was close to 90% for the dual-modality method on the eNTERFACE05 dataset, and its recognition accuracy improved significantly, by approximately 5% on average.