Academic literature on the topic 'Face Expression Recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Face Expression Recognition.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Face Expression Recognition"

1

CHOI, JAE-YOUNG, TAEG-KEUN WHANGBO, YOUNG-GYU YANG, MURLIKRISHNA VISWANATHAN, and NAK-BIN KIM. "POSE-EXPRESSION NORMALIZATION FOR FACE RECOGNITION USING CONNECTED COMPONENTS ANALYSIS." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 06 (September 2006): 869–81. http://dx.doi.org/10.1142/s0218001406005010.

Full text
Abstract:
Accurate measurement of pose and expression can increase the efficiency of recognition systems by avoiding the recognition of spurious faces. This paper presents a novel and robust pose- and expression-invariant face recognition method intended to improve on existing face recognition techniques. First, we apply the TSL color model to detect the facial region and estimate the X-Y-Z pose vector of the face using connected components analysis. Second, the input face is mapped by a deformable 3D facial model. Third, the mapped face is transformed to a frontal face suitable for recognition, using the estimated pose vector and the action units of the expression. Finally, the regions damaged during normalization are reconstructed using PCA. Several empirical tests validate the face detection model and the method for estimating facial pose and expression, and suggest that the recognition rate is greatly boosted by normalizing pose and expression.
APA, Harvard, Vancouver, ISO, and other styles
2

Ahlawat, Deepti, and Vijay Nehra. "Expression Invariant Face Recognition System." International Journal of Signal Processing, Image Processing and Pattern Recognition 10, no. 6 (June 30, 2017): 13–22. http://dx.doi.org/10.14257/ijsip.2017.10.6.02.

Full text
3

Ali, Humayra Binte, David M. W. Powers, Xibin Jia, and Yanhua Zhang. "Extended Non-negative Matrix Factorization for Face and Facial Expression Recognition." International Journal of Machine Learning and Computing 5, no. 2 (April 2015): 142–47. http://dx.doi.org/10.7763/ijmlc.2015.v5.498.

Full text
4

ter Haar, Frank B., and Remco C. Veltkamp. "Expression modeling for expression-invariant face recognition." Computers & Graphics 34, no. 3 (June 2010): 231–41. http://dx.doi.org/10.1016/j.cag.2010.03.010.

Full text
5

Chaudhari, V. J. "Face Recognition and Emotion Detection." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 4775–77. http://dx.doi.org/10.22214/ijraset.2021.35698.

Full text
Abstract:
Face recognition and facial emotion detection belong to a new era of technology; they indirectly reflect progress in intelligence, security, and the modeling of human emotional behaviour. They are widely used in market research and product testing, where companies need accurate testing methods that provide the necessary insights and support sound conclusions. Facial expression recognition technology can be developed through various methods, for example with deep learning using convolutional neural networks, or with ready-made libraries such as DeepFace. The main objective is to classify each face by the emotion shown into seven categories: Anger, Disgust, Fear, Happiness, Sadness, Surprise, and Neutrality. In this project, the objective is to read people's facial expressions and display them the product, which helps determine their interest in it. Facial expression recognition can also be used in video game testing: selected users play the game for a specified period while their expressions and behavior are monitored and analyzed. Game developers use facial expression recognition to obtain the required insights, draw conclusions, and feed that feedback into the making of the final product. This project uses a deep learning approach with convolutional neural networks (CNNs). Neural networks need to be trained with large amounts of data and require substantial computational power [8-11], so training the model takes more time.[1]
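The seven-way classification described in the abstract above ends in a step that is easy to sketch: turn a network's raw output scores into probabilities and pick the most likely emotion. A minimal, illustrative sketch in plain Python (the score values and function names are assumptions for illustration, not the paper's code):

```python
import math

# The seven emotion categories named in the abstract, in a fixed output order.
EMOTIONS = ["Anger", "Disgust", "Fear", "Happiness", "Sadness", "Surprise", "Neutrality"]

def softmax(scores):
    # Subtract the max score for numerical stability before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores):
    # Map raw per-class scores (e.g. a CNN's final layer) to a label and confidence.
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[best], probs[best]

label, confidence = classify([0.1, -1.2, 0.3, 2.5, 0.0, -0.4, 1.1])
```

Any real system differs in how the scores are produced, but this final argmax-over-softmax step is common to most seven-class expression recognizers.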
6

Lee, Hyung-Soo, and Daijin Kim. "Expression-invariant face recognition by facial expression transformations." Pattern Recognition Letters 29, no. 13 (October 2008): 1797–805. http://dx.doi.org/10.1016/j.patrec.2008.05.012.

Full text
7

Dhekane, Manasi, Ayan Seal, and Pritee Khanna. "Illumination and Expression Invariant Face Recognition." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 12 (September 17, 2017): 1756018. http://dx.doi.org/10.1142/s0218001417560183.

Full text
Abstract:
An illumination and expression invariant face recognition method based on uniform local binary patterns (uLBP) and Legendre moments is proposed in this work. The proposed method exploits uLBP texture features and Legendre moments to make a feature representation with enhanced discriminating power. The input images are preprocessed to extract the face region and normalized. From normalized image, uLBP codes are extracted to obtain texture image which overcomes the effect of monotonic temperature changes. Legendre moments are computed from this texture image to get the required feature vector. Legendre moments conserve the spatial structure information of the texture image. The resultant feature vector is classified using k-nearest neighbor classifier with [Formula: see text] norm. To evaluate the proposed method, experiments are performed on IRIS and NVIE databases. The proposed method is tested on both visible and infrared images under different illumination and expression variations and performance is compared with recently published methods in terms of recognition rate, recall, length of feature vector, and computational time. The proposed method gives better recognition rates and outperforms other recent face recognition methods.
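The uLBP codes used in the abstract above are the "uniform" subset of 8-bit local binary patterns, i.e. those with at most two circular 0/1 transitions. A small sketch of that idea, independent of the paper's implementation (helper names are illustrative):

```python
def transitions(pattern, bits=8):
    # Count 0->1 or 1->0 changes in the circular bit pattern.
    return sum(((pattern >> i) & 1) != ((pattern >> ((i + 1) % bits)) & 1)
               for i in range(bits))

def is_uniform(pattern, bits=8):
    # A pattern is "uniform" if it has at most two circular transitions.
    return transitions(pattern, bits) <= 2

def lbp_code(center, neighbors):
    # Threshold the 8 neighbors (in circular order) against the center pixel.
    code = 0
    for i, n in enumerate(neighbors):
        if n >= center:
            code |= 1 << i
    return code

# The 58 uniform patterns each get their own histogram bin; the remaining
# 198 non-uniform patterns share one bin, giving the 59-bin uLBP descriptor.
uniform_patterns = [p for p in range(256) if is_uniform(p)]
```

Sliding `lbp_code` over every pixel of a normalized face image and histogramming the results into those 59 bins yields the kind of texture feature the paper then feeds into Legendre moments.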
8

Rhodes, Gillian. "Adaptive Coding and Face Recognition." Current Directions in Psychological Science 26, no. 3 (June 2017): 218–24. http://dx.doi.org/10.1177/0963721417692786.

Full text
Abstract:
Face adaptation generates striking face aftereffects, but is this adaptation useful? The answer appears to be yes, with several lines of evidence suggesting that it contributes to our face-recognition ability. Adaptation to face identity is reduced in a variety of clinical populations with impaired face recognition. In addition, individual differences in face adaptation are linked to face-recognition ability in typical adults. People who adapt more readily to new faces are better at recognizing faces. This link between adaptation and recognition holds for both identity and expression recognition. Adaptation updates face norms, which represent the typical or average properties of the faces we experience. By using these norms to code how faces differ from average, the visual system can make explicit the distinctive information that we need to recognize faces. Thus, adaptive norm-based coding may help us to discriminate and recognize faces despite their similarity as visual patterns.
9

Minemoto, Kazusa, Yoshiyuki Ueda, and Sakiko Yoshikawa. "The aftereffect of the ensemble average of facial expressions on subsequent facial expression recognition." Attention, Perception, & Psychophysics 84, no. 3 (February 15, 2022): 815–28. http://dx.doi.org/10.3758/s13414-021-02407-w.

Full text
Abstract:
An ensemble or statistical summary can be extracted from facial expressions presented in different spatial locations simultaneously. However, how such complicated objects are represented in the mind is not clear. It is known that the aftereffect of facial expressions, in which prolonged viewing of facial expressions biases the perception of subsequent facial expressions of the same category, occurs only when a visual representation is formed. Using this methodology, we examined whether an ensemble can be represented with visualized information. Experiment 1 revealed that the presentation of multiple facial expressions biased the perception of subsequent facial expressions to less happy as much as the presentation of a single face did. Experiment 2 compared the presentation of faces comprising strong and weak intensities of emotional expressions with an individual face as the adaptation stimulus. The results indicated that the perceptual biases were found after the presentation of four faces and a strong single face, but not after the weak single face presentation. Experiment 3 employed angry expressions, a distinct category from the test expression used as an adaptation stimulus; no aftereffect was observed. Finally, Experiment 4 clearly demonstrated the perceptual bias with a higher number of faces. Altogether, these results indicate that an ensemble average extracted from multiple faces leads to the perceptual bias, and this effect is similar in terms of its properties to that of a single face. This supports the idea that an ensemble of faces is represented with visualized information as a single face.
10

Rasyid, Muhammad Furqan. "Comparison Of LBPH, Fisherface, and PCA For Facial Expression Recognition of Kindergarten Student." International Journal Education and Computer Studies (IJECS) 2, no. 1 (May 15, 2022): 19–26. http://dx.doi.org/10.35870/ijecs.v2i1.625.

Full text
Abstract:
Face recognition is a biometric personal identification method that has gained a lot of attention recently, and there is an increasing need for fast and accurate facial expression recognition systems. Facial expression recognition identifies which expression a person is displaying. In general, research on facial expression recognition focuses only on adult facial expressions. Recognizing human facial expressions is a very important field of research because it combines the study of feelings with computer applications such as human-computer interaction, data compression, face animation, and face image retrieval from video. This research recognizes facial expressions of young children, specifically kindergarten students. Before building the system, we compare three methods, PCA, Fisherface, and LBPH, on our new database containing faces of individuals with a variety of poses and expressions, which is then used for facial expression recognition. Fisherface achieved an accuracy of 94%, LBPH 100%, and PCA 48.75%.
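The PCA (eigenface) baseline compared in the entry above rests on finding the directions of greatest variance in the training faces. A toy sketch of that core computation, with faces reduced to tiny feature vectors and the leading eigenvector found by power iteration (this is an illustration of the technique, not the paper's code):

```python
def mean_center(data):
    # Subtract the per-dimension mean (the "average face" in eigenface terms).
    d = len(data[0])
    means = [sum(row[i] for row in data) / len(data) for i in range(d)]
    return [[row[i] - means[i] for i in range(d)] for row in data], means

def covariance(data):
    # Sample covariance matrix of the centered data.
    centered, _ = mean_center(data)
    n, d = len(centered), len(centered[0])
    return [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
            for i in range(d)]

def leading_eigenvector(matrix, iters=200):
    # Power iteration: repeatedly apply the matrix and renormalize; the
    # result converges to the direction of greatest variance.
    v = [1.0] * len(matrix)
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

In a real eigenface pipeline each "row" would be a flattened face image and several leading eigenvectors would be kept; projecting faces onto them gives the low-dimensional features that PCA-based recognizers classify.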

Dissertations / Theses on the topic "Face Expression Recognition"

1

Zhou, Yun. "Embedded Face Detection and Facial Expression Recognition." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/583.

Full text
Abstract:
Face detection has been applied in many fields such as surveillance, human-machine interaction, entertainment, and health care. Two main reasons for the extensive attention on this research domain are: 1) a strong need for face recognition systems due to widespread security applications, and 2) face recognition is more user-friendly and faster than other biometrics, since it requires almost nothing of the user. The system is based on an ARM Cortex-A8 development board and covers porting the Linux operating system, developing drivers, and detecting faces using Haar-like features and the Viola-Jones algorithm. The face detection system uses the AdaBoost algorithm to detect human faces in frames captured by the camera, and the thesis discusses the pros and cons of several popular image processing algorithms. The facial expression recognition system involves face detection and emotion feature interpretation, and consists of an offline training part and an online test part. Active shape models (ASM) are applied for facial feature point detection, optical flow for face tracking, and support vector machines (SVM) for classification.
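The Viola-Jones detector mentioned in the thesis above owes its speed to the integral image, which lets any rectangle sum (and hence any Haar-like feature) be evaluated in constant time. A self-contained sketch of that trick, not the thesis's implementation:

```python
def integral_image(img):
    # ii[y][x] = sum of all pixels above and to the left of (x, y), exclusive,
    # stored with a zero-padded first row and column to simplify lookups.
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    # Any rectangle sum costs four lookups, which is what makes evaluating
    # thousands of Haar features per detection window affordable.
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_vertical(ii, x, y, w, h):
    # A two-rectangle Haar-like feature: top half minus bottom half.
    half = h // 2
    return rect_sum(ii, x, y, w, half) - rect_sum(ii, x, y + half, w, half)
```

AdaBoost then selects a small cascade of such features and thresholds, which is the classifier the thesis runs on each camera frame.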
2

Ener, Emrah. "Recognition Of Human Face Expressions." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607521/index.pdf.

Full text
Abstract:
In this study a fully automatic and scale invariant feature extractor which does not require manual initialization or special equipment is proposed. Face location and size is extracted using skin segmentation and ellipse fitting. Extracted face region is scaled to a predefined size, later upper and lower facial templates are used for feature extraction. Template localization and template parameter calculations are carried out using Principal Component Analysis. Changes in facial feature coordinates between analyzed image and neutral expression image are used for expression classification. Performances of different classifiers are evaluated. Performance of proposed feature extractor is also tested on sample video sequences. Facial features are extracted in the first frame and KLT tracker is used for tracking the extracted features. Lost features are detected using face geometry rules and they are relocated using feature extractor. As an alternative to feature based technique an available holistic method which analyses face without partitioning is implemented. Face images are filtered using Gabor filters tuned to different scales and orientations. Filtered images are combined to form Gabor jets. Dimensionality of Gabor jets is decreased using Principal Component Analysis. Performances of different classifiers on low dimensional Gabor jets are compared. Feature based and holistic classifier performances are compared using JAFFE and AF facial expression databases.
3

Zhao, Xi. "3D face analysis : landmarking, expression recognition and beyond." Phd thesis, Ecole Centrale de Lyon, 2010. http://tel.archives-ouvertes.fr/tel-00599660.

Full text
Abstract:
This Ph.D thesis work is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Indeed, facial expression plays an important role both in verbal and non-verbal communication, and in expressing emotions. Thus, automatic facial expression recognition has various purposes and applications and particularly is at the heart of "intelligent" human-centered human/computer(robot) interfaces. Meanwhile, automatic landmarking provides prior knowledge on the location of face landmarks, which is required by many face analysis methods such as face segmentation and feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches for finally proposing an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model allows learning both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network with a structure describing the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum of beliefs of all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered as an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometry of a point on the 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
4

Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.

Full text
Abstract:
This thesis examines the research and development of new approaches for face and facial expression recognition within the fields of computer vision and biometrics. Expression variation is a challenging issue in current face recognition systems and current approaches are not capable of recognizing facial variations effectively within human-computer interfaces, security and access control applications. This thesis presents new contributions for performing face and expression recognition simultaneously; face recognition in the wild; and facial expression recognition in challenging environments. The research findings include the development of new factor analysis and deep learning approaches which can better handle different facial variations.
5

Minoi, Jacey-Lynn. "Geometric expression invariant 3D face recognition using statistical discriminant models." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/4648.

Full text
Abstract:
Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose and expression, producing a computer system with a similar capability has proved to be particularly difficult. Three dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting. However they are still susceptible to facial expressions. This can be seen in the decrease in the recognition results using principal component analysis when expressions are added to a data set. In order to achieve expression-invariant face recognition systems, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised in particular subject and facial expression modes. We manipulate this using singular value decomposition on sub-tensors representing one variation mode. This framework possesses the ability to deal with the shortcomings of PCA in less constrained environments and still preserves the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high intensity expressions that are not in the training datasets. We have determined, experimentally, a set of anatomical landmarks that best describe facial expression effectively. We found that the best placement of landmarks to distinguish different facial expressions are in areas around the prominent features, such as the cheeks and eyebrows. Recognition results using landmark-based face recognition could be improved with better placement. We looked into the possibility of achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions. 
We proposed a tensor-based statistical discriminant analysis method to reconstruct facial expressions and in particular to neutralise facial expressions. The results of the synthesised facial expressions are visually more realistic than facial expressions generated using conventional active shape modelling (ASM). We then used reconstructed neutral faces in the sub-tensor framework for recognition purposes. The recognition results showed slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications.
6

Zhan, Ce. "Facial expression recognition for multi-player on-line games." School of Computer Science and Software Engineering, 2008. http://ro.uow.edu.au/theses/100.

Full text
Abstract:
Multi-player on-line games (MOGs) have become increasingly popular because of the opportunity they provide for collaboration, communications and interactions. However, compared with ordinary human communication, MOG still has several limitations, especially in the communication using facial expressions. Although detailed facial animation has already been achieved in a number of MOGs, players have to use text commands to control the expressions of avatars. This thesis proposes an automatic expression recognition system that can be integrated into a MOG to control the facial expressions of avatars. To meet the specific requirements of such a system, a number of algorithms are studied, tailored and extended. In particular, Viola-Jones face detection method is modified in several aspects to detect small scale key facial components with wide shape variations. In addition a new coarse-to-fine method is proposed for extracting 20 facial landmarks from image sequences. The proposed system has been evaluated on a number of databases that are different from the training database and achieved 83% recognition rate for 4 emotional state expressions. During the real-time test, the system achieved an average frame rate of 13 fps for 320 x 240 images on a PC with 2.80 GHz Intel Pentium. Testing results have shown that the system has a practical range of working distances (from user to camera), and is robust against variations in lighting and backgrounds.
7

Bloom, Elana. "Recognition, expression, and understanding facial expressions of emotion in adolescents with nonverbal and general learning disabilities." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100323.

Full text
Abstract:
Students with learning disabilities (LD) have been found to exhibit social difficulties compared to those without LD (Wong, 2004). Recognition, expression, and understanding of facial expressions of emotions have been shown to be important for social functioning (Custrini & Feldman, 1989; Philippot & Feldman, 1990). LD subtypes have been studied (Rourke, 1999) and children with nonverbal learning disabilities (NVLD) have been observed to be worse at recognizing facial expressions compared to children with verbal learning disabilities (VLD), no learning disability (NLD; Dimitrovsky, Spector, Levy-Shiff, & Vakil, 1998; Dimitrovsky, Spector, & Levy-Shiff, 2000), and those with psychiatric difficulties without LD controls (Petti, Voelker, Shore, & Hyman-Abello, 2003). However, little has been done in this area with adolescents with NVLD. Recognition, expression and understanding facial expressions of emotion, as well as general social functioning have yet to be studied simultaneously among adolescents with NVLD, NLD, and general learning disabilities (GLD). The purpose of this study was to examine abilities of adolescents with NVLD, GLD, and without LD to recognize, express, and understand facial expressions of emotion, in addition to their general social functioning.
Adolescents aged 12 to 15 were screened for LD and NLD using the Wechsler Intelligence Scale for Children---Third Edition (WISC-III; Weschler, 1991) and the Wide Range Achievement Test---Third Edition (WRAT3; Wilkinson, 1993) and subtyped into NVLD and GLD groups based on the WRAT3. The NVLD ( n = 23), matched NLD (n = 23), and a comparable GLD (n = 23) group completed attention, mood, and neuropsychological measures. The adolescent's ability to recognize (Pictures of Facial Affect; Ekman & Friesen, 1976), express, and understand facial expressions of emotion, and their general social functioning was assessed. Results indicated that the GLD group was significantly less accurate at recognizing and understanding facial expressions of emotion compared to the NVLD and NLD groups, who did not differ from each other. No differences emerged between the NVLD, NLD, and GLD groups on the expression or social functioning tasks. The neuropsychological measures did not account for a significant portion of the variance on the emotion tasks. Implications regarding severity of LD are discussed.
8

Durrani, Sophia J. "Studies of emotion recognition from multiple communication channels." Thesis, University of St Andrews, 2005. http://hdl.handle.net/10023/13140.

Full text
Abstract:
Crucial to human interaction and development, emotions have long fascinated psychologists. Current thinking suggests that specific emotions, regardless of the channel in which they are communicated, are processed by separable neural mechanisms. Yet much research has focused only on the interpretation of facial expressions of emotion. The present research addressed this oversight by exploring recognition of emotion from facial, vocal, and gestural tasks. Happiness and disgust were best conveyed by the face, yet other emotions were equally well communicated by voices and gestures. A novel method for exploring emotion perception, by contrasting errors, is proposed. Studies often fail to consider whether the status of the perceiver affects emotion recognition abilities. Experiments presented here revealed an impact of mood, sex, and age of participants. Dysphoric mood was associated with difficulty in interpreting disgust from vocal and gestural channels. To some extent, this supports the concept that neural regions are specialised for the perception of disgust. Older participants showed decreased emotion recognition accuracy but no specific pattern of recognition difficulty. Sex of participant and of actor affected emotion recognition from voices. In order to examine neural mechanisms underlying emotion recognition, an exploration was undertaken using emotion tasks with Parkinson's patients. Patients showed no clear pattern of recognition impairment across channels of communication. In this study, the exclusion of surprise as a stimulus and response option in a facial emotion recognition task yielded results contrary to those achieved without this modification. Implications for this are discussed. Finally, this thesis gives rise to three caveats for neuropsychological research. First, the impact of the observers' status, in terms of mood, age, and sex, should not be neglected. 
Second, exploring multiple channels of communication is important for understanding emotion perception. Third, task design should be appraised before conclusions regarding impairments in emotion perception are presumed.
9

Wei, Xiaozhou. "3D facial expression modeling and analysis with topographic information." Diss., Online access via UMI, 2008.

Find full text
10

Beall, Paula M. "Automaticity and Hemispheric Specialization in Emotional Expression Recognition: Examined using a modified Stroop Task." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3267/.

Full text
Abstract:
The main focus of this investigation was to examine the automaticity of facial expression recognition through valence judgments in a modified photo-word Stroop paradigm. Positive and negative words were superimposed across male and female faces expressing positive (happy) and negative (angry, sad) emotions. Subjects categorized the valence of each stimulus. Gender biases in judgments of expressions (better recognition for male angry and female sad expressions) and the valence hypothesis of hemispheric advantages for emotions (left hemisphere: positive; right hemisphere: negative) were also examined. Four major findings emerged. First, the valence of expressions was processed automatically (robust interference effects). Second, male faces interfered with processing the valence of words. Third, no posers' gender biases were indicated. Finally, the emotionality of facial expressions and words was processed similarly by both hemispheres.

Books on the topic "Face Expression Recognition"

1

Face recognition: New research. New York: Nova Science Publishers, 2008.

Find full text
2

Bai, Xiang, Yi Fang, Yangqing Jia, Meina Kan, Shiguang Shan, Chunhua Shen, Jingdong Wang, et al., eds. Video Analytics. Face and Facial Expression Recognition. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12177-8.

Full text
3

Ji, Qiang, Thomas B. Moeslund, Gang Hua, and Kamal Nasrollahi, eds. Face and Facial Expression Recognition from Real World Videos. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13737-7.

Full text
4

Nasrollahi, Kamal, Cosimo Distante, Gang Hua, Andrea Cavallaro, Thomas B. Moeslund, Sebastiano Battiato, and Qiang Ji, eds. Video Analytics. Face and Facial Expression Recognition and Audience Measurement. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56687-0.

Full text
5

Young, Andrew W., ed. Face perception. London: Psychology Press, 2012.

Find full text
6

Diminich, Erica. Is this the face of sadness? Facial expression recognition and context. [New York, N.Y.?]: [publisher not identified], 2015.

Find full text
7

Balconi, Michela, ed. Neuropsychology and cognition of emotional face comprehension. Trivandrum, India: Research Signpost, 2006.

Find full text
8

The Oxford handbook of face perception. Oxford: Oxford University Press, 2011.

Find full text
9

Tsihrintzis, George A., ed. Visual affect recognition. Amsterdam: IOS Press, 2010.

Find full text
10

Our biometric future: Facial recognition technology and the culture of surveillance. New York: New York University Press, 2011.

Find full text

Book chapters on the topic "Face Expression Recognition"

1

Tian, Yingli, Takeo Kanade, and Jeffrey F. Cohn. "Facial Expression Recognition." In Handbook of Face Recognition, 487–519. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-932-1_19.

2

Lekshmi, V. Praseeda, M. Sasikumar, and Divya S. Vidyadharan. "Face Recognition and Expression Classification." In Lecture Notes in Electrical Engineering, 669–79. Dordrecht: Springer Netherlands, 2009. http://dx.doi.org/10.1007/978-90-481-2311-7_57.

3

Bronstein, Alexander M., Michael M. Bronstein, and Ron Kimmel. "Expression-Invariant 3D Face Recognition." In Lecture Notes in Computer Science, 62–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44887-x_8.

4

Lee, Hyung-Soo, and Daijin Kim. "Facial Expression Transformations for Expression-Invariant Face Recognition." In Advances in Visual Computing, 323–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11919476_33.

5

Dai, Jiangnan. "Facial Expression Synthesis with Synchronous Editing of Face Organs." In Biometric Recognition, 139–47. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86608-2_16.

6

Varma, Rahul, Sandesh Gupta, and Phalguni Gupta. "Face Recognition System Invariant to Expression." In Intelligent Computing Theory, 299–307. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09333-8_33.

7

Ali, Kamran, and Charles E. Hughes. "Face Reenactment Based Facial Expression Recognition." In Advances in Visual Computing, 501–13. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64556-4_39.

8

He, Lianghua, Jianzhong Zhou, Die Hu, Cairong Zou, and Li Zhao. "Boosted Independent Features for Face Expression Recognition." In Advances in Neural Networks – ISNN 2005, 137–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427445_23.

9

Mohseni, Sina, Niloofar Zarei, Ehsan Miandji, and Gholamreza Ardeshir. "Facial Expression Recognition Using Facial Graph." In Face and Facial Expression Recognition from Real World Videos, 58–66. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13737-7_6.

10

Yang, Jucheng, Meng Li, Lingchao Zhang, Shujie Han, Xiaojing Wang, and Jie Wang. "Face Expression Recognition Using Gabor Features and a Novel Weber Local Descriptor." In Biometric Recognition, 265–74. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97909-0_29.


Conference papers on the topic "Face Expression Recognition"

1

Sreelakshmi, C., and Krishnan Kutty. "Vision Based Face Expression Recognition." In SAE 2015 World Congress & Exhibition. Warrendale, PA: SAE International, 2015. http://dx.doi.org/10.4271/2015-01-0218.

2

Singh, Avinash Kumar, Arun Kumar, G. C. Nandi, and Pavan Chakroborty. "Expression invariant fragmented face recognition." In 2014 International Conference on Signal Propagation and Computer Technology (ICSPCT). IEEE, 2014. http://dx.doi.org/10.1109/icspct.2014.6884987.

3

Kolahdouzi, Mojtaba, Alireza Sepas-Moghaddam, and Ali Etemad. "Face Trees for Expression Recognition." In 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021). IEEE, 2021. http://dx.doi.org/10.1109/fg52635.2021.9666986.

4

Loderer, Marek, Jarmila Pavlovicova, Milos Oravec, and Jan Mazanec. "Face parts importance in face and expression recognition." In 2015 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2015. http://dx.doi.org/10.1109/iwssip.2015.7314208.

5

Anil, J., and L. Padma Suresh. "Literature survey on face and face expression recognition." In 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT). IEEE, 2016. http://dx.doi.org/10.1109/iccpct.2016.7530173.

6

Sovizi, Javad, Rahul Rai, and Venkat Krovi. "3D Face Recognition Under Isometric Expression Deformations." In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34449.

Abstract:
In this paper, 3D face recognition under isometric deformation (induced by facial expressions) is considered. The main objective is to employ shape descriptors that are invariant to (isometric) deformations to provide an efficient face recognition algorithm. Two correspondence methods are utilized for automatic landmark assignment to the query face: one is based on the conventional iterative closest point (ICP) method, and the other on geometrical/topological features of the human face. The shape descriptor is chosen to be the well-known geodesic distance (GD) measure. The recognition task is performed on the SHREC08 database for both correspondence methods, and the effects of feature (GD) vector size and landmark positions on recognition accuracy are discussed.
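The geodesic-distance descriptor summarized in this abstract can be sketched in a few lines of Python: pairwise geodesic distances between facial landmarks are approximately preserved under isometric (expression) deformations, so their vector serves as an expression-robust signature. This is an illustrative sketch only, not the authors' code; the mesh, landmark indices, and two-subject gallery below are toy assumptions, and Dijkstra's algorithm on a weighted graph stands in for true surface geodesics.

```python
import heapq
from itertools import combinations

def geodesic(graph, src):
    """Dijkstra shortest-path distances from src over a weighted adjacency dict."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def gd_descriptor(graph, landmarks):
    """Feature vector: pairwise geodesic distances between landmark vertices."""
    return [geodesic(graph, a)[b] for a, b in combinations(landmarks, 2)]

def match(query_vec, gallery):
    """Nearest-neighbour match by Euclidean distance between descriptors."""
    def l2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    return min(gallery, key=lambda name: l2(query_vec, gallery[name]))

# Toy "face mesh" as a weighted graph: vertex -> [(neighbour, edge_length)]
mesh = {
    0: [(1, 1.0), (2, 2.0)],
    1: [(0, 1.0), (2, 1.0), (3, 2.5)],
    2: [(0, 2.0), (1, 1.0), (3, 1.0)],
    3: [(1, 2.5), (2, 1.0)],
}
landmarks = [0, 1, 3]  # hypothetical landmark vertices (e.g. eye corners, nose tip)
query = gd_descriptor(mesh, landmarks)
gallery = {"subject_a": query, "subject_b": [9.0, 9.0, 9.0]}
print(match(query, gallery))  # subject_a
```

The paper's landmark-correspondence step (ICP or geometric/topological matching) is outside this sketch; here the landmark indices are simply given.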
7

Arca, S., P. Campadelli, R. Lanzarotti, and G. Lipori. "A face recognition system dealing with expression variant faces." In 18th International Conference on Pattern Recognition (ICPR'06). IEEE, 2006. http://dx.doi.org/10.1109/icpr.2006.60.

8

Petpairote, Chayanut, and Suthep Madarasmi. "Face recognition improvement by converting expression faces to neutral faces." In 2013 13th International Symposium on Communications and Information Technologies (ISCIT). IEEE, 2013. http://dx.doi.org/10.1109/iscit.2013.6645898.

9

Gesù, Vito Di, Bertrand Zavidovique, and Marco Elio Tabacchi. "Face Expression Recognition through Broken Symmetries." In Image Processing (ICVGIP). IEEE, 2008. http://dx.doi.org/10.1109/icvgip.2008.39.

10

Lekdioui, Khadija, Yassine Ruichek, Rochdi Messoussi, Youness Chaabi, and Raja Touahni. "Facial expression recognition using face-regions." In 2017 International Conference on Advanced Technologies for Signal and Image Processing (ATSIP). IEEE, 2017. http://dx.doi.org/10.1109/atsip.2017.8075517.


Reports on the topic "Face Expression Recognition"

1

Silapachote, Piyanuch, Deepak R. Karuppiah, and Allen R. Hanson. Feature Selection Using Adaboost for Face Expression Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada438800.

2

Meidan, Rina, and Joy Pate. Roles of Endothelin 1 and Tumor Necrosis Factor-A in Determining Responsiveness of the Bovine Corpus Luteum to Prostaglandin F2a. United States Department of Agriculture, January 2004. http://dx.doi.org/10.32747/2004.7695854.bard.

Abstract:
The corpus luteum (CL) is a transient endocrine gland that has a vital role in the regulation of the estrous cycle, fertility and the maintenance of pregnancy. In the absence of appropriate support, such as occurs during maternal recognition of pregnancy, the CL will regress. Prostaglandin F2a (PGF) was first suggested as the physiological luteolysin in ruminants several decades ago. Yet, the cellular mechanisms by which PGF causes luteal regression remain poorly defined. In recent years it became evident that the process of luteal regression requires a close cooperation between steroidogenic, endothelial and immune cells, all resident cells of this gland. Changes in the population of these cells within the CL closely consort with the functional changes occurring during various stages of CL life span. The proposal aimed to gain a better understanding of the intra-ovarian regulation of luteolysis and focuses especially on the possible reasons causing the early CL (before day 5) to be refractory to the luteolytic actions of PGF. The specific aims of this proposal were to: determine if the refractoriness of the early CL to PGF is due to its inability to synthesize or respond to endothelin-1 (ET-1), determine the cellular localization of ET, PGF and tumor necrosis factor a (TNFa) receptors in early and mid luteal phases, determine the functional relationships among ET-1 and cytokines, and characterize the effects of PGF and ET-1 on prostaglandin production by luteal cell types. We found that in contrast to the mature CL, administration of PGF2a before day 5 of the bovine cycle failed to elevate ET-1, ETA receptors or to induce luteolysis. In fact, PGF₂ₐ prevented the upregulation of the ET-1 gene by ET-1 or TNFa in cultured luteal cells from day 4 CL. In addition, we reported that ECE-1 expression was elevated during the transition of the CL from early to mid luteal phase and was accompanied by a significant rise in ET-1 peptide.
This coincides with the time point at which the CL gains its responsiveness to PGF2a, suggesting that ability to synthesize ET-1 may be a prerequisite for luteolysis. We have shown that while ET-1 mRNA was exclusively localized to endothelial cells both in young and mature CL, ECE-1 was present in the endothelial cells and steroidogenic cells alike. We also found that the gene for TNF receptor I is only moderately affected by the cytokines tested, but that the gene for TNF receptor II is upregulated by ET-1 and PGF₂ₐ. However, these cytokines both increase expression of MCP-1, although TNFa is even more effective in this regard. In addition, we found that proteins involved in the transport and metabolism of PGF (PGT, PGDH, COX-2) change as the estrous cycle progresses, and could contribute to the refractoriness of young CL. The data obtained in this work illustrate ET-1 synthesis throughout the bovine cycle and provide a better understanding of the mechanisms regulating luteal regression and unravel reasons causing the CL to be refractory to PGF2a.
3

Meidan, Rina, and Robert Milvae. Regulation of Bovine Corpus Luteum Function. United States Department of Agriculture, March 1995. http://dx.doi.org/10.32747/1995.7604935.bard.

Abstract:
The main goal of this research plan was to elucidate regulatory mechanisms controlling the development and function of the bovine corpus luteum (CL). The CL contains two different steroidogenic cell types, and therefore it was necessary to obtain pure cell populations. A system was developed in which granulosa and theca interna cells, isolated from a preovulatory follicle, acquired characteristics typical of large (LL) and small (SL) luteal cells, respectively, as judged by several biochemical and morphological criteria. Experiments were conducted to determine the effects of granulosa cell removal on subsequent CL function; the results obtained support the concept that granulosa cells make a substantial contribution to the output of progesterone by the cyclic CL but may have a limited role in determining the functional lifespan of the CL. This experimental model was also used to better understand the contribution of follicular granulosa cells to subsequent luteal SCC mRNA expression. The mitochondrial cytochrome side-chain cleavage enzyme (SCC), which converts cholesterol to pregnenolone, is the first and rate-limiting enzyme of the steroidogenic pathway. Experiments were conducted to characterize the gene expression of P450scc in bovine CL. Levels of P450scc mRNA were higher during mid-luteal phase than in either the early or late luteal phases. PGF 2a injection decreased luteal P450scc mRNA in a time-dependent manner; levels were significantly reduced by 2h after treatment. CLs obtained from heifers on day 8 of the estrous cycle which had granulosa cells removed had a 45% reduction in the levels of mRNA for SCC enzymes as well as a 78% reduction in the numbers of LL cells. To characterize SCC expression in each steroidogenic cell type we utilized pure cell populations. Upon luteinization, LL expressed 2-3 fold higher amounts of both SCC enzyme mRNAs than SL.
Moreover, eight days after stimulant removal, LL retained their P4 production capacity, expressed P450scc mRNA and contained this protein. In our attempts to establish the in vitro luteinization model, we had to select the preovulatory and pre-gonadotropin surge follicles. The ratio of estradiol:P4 which is often used was unreliable since P4 levels are high in atretic follicles and also in preovulatory post-gonadotropin follicles. We have therefore examined whether oxytocin (OT) levels in follicular fluids could enhance our ability to correctly and easily define follicular status. Based on E2 and OT concentrations in follicular fluids we could more accurately identify follicles that are preovulatory and post gonadotropin surge. Next we studied OT biosynthesis in granulosa cells; cells which were incubated with forskolin contained stores of the precursor, indicating that forskolin (which mimics gonadotropin action) is an effective stimulator of OT biosynthesis and release. While studying in vitro luteinization, we noticed that IGF-I induced effects were not identical to those induced by insulin despite the fact that megadoses of insulin were used. This was the first indication that the cells may secrete IGF binding protein(s) which recognize IGFs and not insulin. In a detailed study involving several techniques, we characterized the species of IGF binding proteins secreted by luteal cells. The effects of exogenous polyunsaturated fatty acids and arachidonic acid on the production of P4 and prostanoids by dispersed bovine luteal cells were examined. The addition of eicosapentaenoic acid and arachidonic acid resulted in a dose-dependent reduction in basal and LH-stimulated biosynthesis of P4 and PGI2 and an increase in production of PGF 2a and 5-HETE. Indomethacin, an inhibitor of arachidonic acid metabolism via the production of 5-HETE was unaffected.
Results of these experiments suggest that the inhibitory effect of arachidonic acid on the biosynthesis of luteal P4 is due to either a direct action of arachidonic acid, or its conversion to 5-HETE via the lipoxygenase pathway of metabolism. The detailed and important information gained by the two labs elucidated the mode of action of factors crucially important to the function of the bovine CL. The data indicate that follicular granulosa cells make a major contribution to numbers of large luteal cells, OT and basal P4 production, as well as the content of cytochrome P450 scc. Granulosa-derived large luteal cells have distinct features: when luteinized, the cell no longer possesses LH receptors, its cAMP response is diminished yet P4 synthesis is sustained. This may imply that maintenance of P4 (even in the absence of a luteotropic signal) during critical periods such as pregnancy recognition, is dependent on the proper luteinization and function of the large luteal cell.