Ready bibliography on the topic "Facial expression"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Choose a source type:

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Facial expression".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read the online abstract of the work, if the corresponding parameters are provided in its metadata.

Journal articles on the topic "Facial expression"

1. Chapre Lopita Choudhury, Harshada. "Emotion / Facial Expression Detection". International Journal of Science and Research (IJSR) 12, no. 5 (May 5, 2023): 1395–98. http://dx.doi.org/10.21275/sr23516180518.

2. Mehra, Shivam, Prabhat Parashar, Akshay Aggarwal and Deepika Rawat. "Facial Expression Recognition". International Journal of Advanced Research 12, no. 01 (January 31, 2024): 1109–13. http://dx.doi.org/10.21474/ijar01/18230.

Abstract:
Facial Expression Recognition is a system which provides an interface for computer and human interaction. With the advancement of technology, such systems have earned the interest of researchers in psychology, medicine, computer science and similar fields, where their applications have been identified. A facial expression recognizer is an application which uses live data coming from a camera, or existing videos, to capture the expressions of the person in the video and represent them on screen in the form of attractive emojis. Expressions form the basis of human communication and interaction, and they are used as a crucial tool in medicine and psychology to study behaviour and understand people's state of mind. The main objective in developing such a system was to classify facial expressions using a CNN algorithm, which is responsible for the expression detection, and then to provide a corresponding emoticon relevant to the detected facial expression as the output.
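
The pipeline this abstract describes (a CNN classifier whose predicted class is mapped to an emoji) can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation; the 48x48 grayscale input size and the seven-class label set are assumptions borrowed from common FER datasets such as FER2013.

```python
# Minimal sketch of a CNN facial-expression classifier that maps the
# predicted class to an emoji, in the spirit of the system described above.
# Assumptions (not from the paper): 48x48 grayscale inputs, 7 classes.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

LABELS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]
EMOJI = {0: "😠", 1: "🤢", 2: "😨", 3: "😀", 4: "😐", 5: "😢", 6: "😲"}

model = tf.keras.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(len(LABELS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def face_to_emoji(face_img: np.ndarray) -> str:
    """Classify one 48x48 grayscale face crop and return the matching emoji."""
    probs = model.predict(face_img.reshape(1, 48, 48, 1) / 255.0, verbose=0)
    return EMOJI[int(np.argmax(probs))]
```
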
3. Xiong, Lei, Nanning Zheng, Shaoyi Du and Jianyi Liu. "Facial Expression Synthesis Based on Facial Component Model". International Journal of Pattern Recognition and Artificial Intelligence 23, no. 03 (May 2009): 637–57. http://dx.doi.org/10.1142/s0218001409007235.

Abstract:
Statistical-model-based facial expression synthesis methods are robust and can easily be used in real environments, but human facial expressions are varied. How to represent and synthesize expressions that are not included in the training set is an unresolved problem in statistical-model-based research. In this paper, we propose a two-step method. First, we propose a statistical appearance model, the facial component model (FCM), to represent faces. The model divides the face into seven components, and constructs one global shape model and seven local texture models separately. The motivation for the global shape + local texture strategy is that the combination of different components can generate more types of expression than the training set contains, while the global shape guarantees a "legal" result. Then a neighbor reconstruction framework is proposed to synthesize expressions. The framework estimates the target expression vector by a linear combination of neighboring subjects' expression vectors. This paper makes three primary contributions: first, the proposed method can synthesize a wider range of expressions than the training set covers. Second, experiments demonstrate that FCM is better than the standard AAM in face representation. Third, the neighbor reconstruction framework is very flexible: it can be used in multi-sample, multi-target and single-sample, single-target applications.
4. Prkachin, Kenneth M. "Assessing Pain by Facial Expression: Facial Expression as Nexus". Pain Research and Management 14, no. 1 (2009): 53–58. http://dx.doi.org/10.1155/2009/542964.

Abstract:
The experience of pain is often represented by changes in facial expression. Evidence of pain that is available from facial expression has been the subject of considerable scientific investigation. The present paper reviews the history of pain assessment via facial expression in the context of a model of pain expression as a nexus connecting internal experience with social influence. Evidence about the structure of facial expressions of pain across the lifespan is reviewed. Applications of facial assessment in the study of adult and pediatric pain are also reviewed, focusing on how such techniques facilitate the discovery and articulation of novel phenomena. Emerging applications of facial assessment in clinical settings are also described. Alternative techniques that have the potential to overcome barriers to the application of facial assessment arising out of its resource-intensiveness are described and evaluated, including recent work on computer-based automatic assessment.
5. Ekman, Paul. "Facial Appearance and Facial Expression". Facial Plastic Surgery Clinics of North America 2, no. 3 (August 1994): 235–39. http://dx.doi.org/10.1016/s1064-7406(23)00426-1.

6. Dewangan Asha Ambhaikar, Leelkanth. "Real Time Facial Expression Analysis Using PCA". International Journal of Science and Research (IJSR) 1, no. 2 (February 5, 2012): 27–30. http://dx.doi.org/10.21275/ijsr11120203.

7. Yagi, Satoshi, Yoshihiro Nakata, Yutaka Nakamura and Hiroshi Ishiguro. "Can an android’s posture and movement discriminate against the ambiguous emotion perceived from its facial expressions?" PLOS ONE 16, no. 8 (August 10, 2021): e0254905. http://dx.doi.org/10.1371/journal.pone.0254905.

Abstract:
Expressing emotions through various modalities is a crucial function not only for humans but also for robots. The mapping from facial expressions to basic emotions is widely used in research on robot emotional expression. This approach assumes that there are specific facial muscle activation patterns for each emotional expression and that people can perceive these emotions by reading these patterns. However, recent research on human behavior reveals that some emotional expressions, such as the emotion "intense", are difficult to judge as positive or negative from the facial expression alone. It had not been investigated, though, whether robots can also express ambiguous facial expressions with no clear valence, and whether the addition of body expressions can make the facial valence clearer to humans. This paper shows that an ambiguous facial expression of an android can be perceived more clearly by viewers when body postures and movements are added. We conducted three experiments as online surveys among North American residents, with 94, 114 and 114 participants, respectively. In Experiment 1, by calculating the entropy of participants' judgments, we found that the facial expression "intense" was difficult to judge as positive or negative when participants were shown the face alone. In Experiments 2 and 3, using ANOVA, we confirmed that participants were better at judging the facial valence when they were shown the whole body of the android, even though the facial expression was the same as in Experiment 1. These results suggest that facial and body expressions of robots should be designed jointly to achieve better communication with humans. To achieve smoother cooperative human-robot interaction, for example in education by robots, emotional expression conveyed through a combination of the robot's face and body is necessary to convey the robot's intentions or desires to humans.
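
The entropy measure used in Experiment 1 to quantify how ambiguous an expression is can be reproduced directly from the distribution of participants' valence judgments. A minimal sketch, assuming binary positive/negative ratings (the counts below are illustrative, not the study's data):

```python
# Shannon entropy of valence judgments: values near 1 bit indicate an
# ambiguous expression (judgments split between positive and negative),
# values near 0 indicate a clearly valenced one.
import numpy as np

def judgment_entropy(counts):
    """Entropy (in bits) of a distribution of categorical judgments."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

print(judgment_entropy([47, 47]))    # ~1.0 bit: maximally ambiguous
print(judgment_entropy([90, 4]))     # ~0.25 bit: clearly valenced
```
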
8. Sadat, Mohammed Nashat. "Facial Emotion Recognition using Convolutional Neural Network". International Journal of Scientific Research in Engineering and Management 08, no. 05 (May 9, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33503.

Abstract:
Emotion is an expression that humans use to convey their feelings. It can be expressed through facial expression, body language and voice tone. The facial expression is a major channel for conveying emotion, since it is the most powerful, natural and universal signal of a person's emotional condition. However, facial expressions share similar patterns, and recognizing an expression with the naked eye can be confusing: for instance, afraid and surprised look very similar to one another, which leads to confusion in determining the expression. Hence, this study aims to develop an application for emotion recognition that can recognize emotion based on facial expression in real time. The deep-learning-based technique of the Convolutional Neural Network (CNN) is implemented in this study, and the MobileNet algorithm is deployed to train the recognition model. Four types of facial expression are recognized: happy, sad, surprise, and disgust. Keywords: Facial Emotion Recognition, Deep Learning, CNN, Image Processing.
9. Saha, Priya, Debotosh Bhattacharjee, Barin Kumar De and Mita Nasipuri. "Mathematical Representations of Blended Facial Expressions towards Facial Expression Modeling". Procedia Computer Science 84 (2016): 94–98. http://dx.doi.org/10.1016/j.procs.2016.04.071.

10. Ren, Zhuoyue. "Facial expression classification". Highlights in Science, Engineering and Technology 41 (March 30, 2023): 43–52. http://dx.doi.org/10.54097/hset.v41i.6741.

Abstract:
At present, emotion classification has become a hot topic in artificial intelligence pattern recognition. Facial expression recognition (FER) is indispensable for computers to understand the emotional information conveyed by expressions. In the past, extracting and classifying facial expressions with traditional features has not achieved satisfactory accuracy, so the classification of facial emotions remains a challenge. The model used in the paper is an existing one, MINI_XCEPTION, a dominant CNN framework that extracts features from images to identify and classify seven facial emotions. The model was trained on a dataset of people's facial expressions from Kaggle, and it shows a significant improvement over the previous model.

Doctoral dissertations on the topic "Facial expression"

1. Testa, Rafael Luiz. "Síntese de expressões faciais em fotografias para representação de emoções". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-31012019-165605/.

Abstract:
The ability to process and identify facial emotions is essential for social interaction. Some psychiatric disorders can limit an individual's ability to recognize emotions in facial expressions. This problem can be confronted by using computational techniques to build tools for diagnosis, evaluation, and training in identifying facial emotions. With this motivation, the objective of this work is to define, implement and evaluate a method to synthesize realistic facial expressions that represent emotions in images of real people. The main idea of the studies found in the literature is that a facial expression from one person's image can be reenacted in another person's image. This study differs from the approaches presented in the literature by proposing a technique that considers the similarity between facial images when choosing the one to be used as the source for reenactment, with the aim of increasing the realism of the synthesized images. Besides searching for the most similar facial components in the image database, the proposed approach also deforms the facial elements and maps the illumination differences onto the target image. The realism of the generated images was measured objectively and subjectively using publicly available image databases. A visual analysis showed that the images synthesized on the basis of similar faces presented an adequate degree of realism, especially when compared with images synthesized from random faces. The study contributes to the generation of images for tools supporting the diagnosis and therapy of psychiatric disorders, and it also contributes to computer science through the proposition of new techniques for facial expression synthesis.
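
The thesis' central design choice, selecting the reenactment source by facial similarity rather than at random, can be illustrated as a nearest-neighbour search. The sketch below assumes faces are represented by 68-point landmark arrays; the actual similarity measure used in the thesis may differ.

```python
# Illustrative nearest-neighbour selection of a reenactment source face:
# pick, from a database of faces already showing the target expression,
# the one whose landmark geometry is closest to the input face.
# The 68-point landmark representation is an assumption for illustration.
import numpy as np

def most_similar_face(target_landmarks: np.ndarray,
                      candidate_landmarks: np.ndarray) -> int:
    """Return the index of the candidate face closest to the target.

    target_landmarks:    (68, 2) array for the face to be edited.
    candidate_landmarks: (N, 68, 2) array for the database faces.
    """
    diffs = candidate_landmarks - target_landmarks          # (N, 68, 2)
    dists = np.linalg.norm(diffs, axis=2).mean(axis=1)      # mean point error
    return int(np.argmin(dists))

db = np.random.rand(100, 68, 2)      # stand-in for a real landmark database
query = np.random.rand(68, 2)
print(most_similar_face(query, db))
```
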
2. Neth, Donald C. "Facial configuration and the perception of facial expression". Columbus, Ohio: Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1189090729.

3. Baltrušaitis, Tadas. "Automatic facial expression analysis". Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245253.

Abstract:
Humans spend a large amount of their time interacting with computers of one type or another. However, computers are emotionally blind and indifferent to the affective states of their users. Human-computer interaction which does not consider emotions, ignores a whole channel of available information. Faces contain a large portion of our emotionally expressive behaviour. We use facial expressions to display our emotional states and to manage our interactions. Furthermore, we express and read emotions in faces effortlessly. However, automatic understanding of facial expressions is a very difficult task computationally, especially in the presence of highly variable pose, expression and illumination. My work furthers the field of automatic facial expression tracking by tackling these issues, bringing emotionally aware computing closer to reality. Firstly, I present an in-depth analysis of the Constrained Local Model (CLM) for facial expression and head pose tracking. I propose a number of extensions that make location of facial features more accurate. Secondly, I introduce a 3D Constrained Local Model (CLM-Z) which takes full advantage of depth information available from various range scanners. CLM-Z is robust to changes in illumination and shows better facial tracking performance. Thirdly, I present the Constrained Local Neural Field (CLNF), a novel instance of CLM that deals with the issues of facial tracking in complex scenes. It achieves this through the use of a novel landmark detector and a novel CLM fitting algorithm. CLNF outperforms state-of-the-art models for facial tracking in presence of difficult illumination and varying pose. Lastly, I demonstrate how tracked facial expressions can be used for emotion inference from videos. I also show how the tools developed for facial tracking can be applied to emotion inference in music.
4. Mikheeva, Olga. "Perceptual facial expression representation". Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217307.

Abstract:
Facial expressions play an important role in such areas as human communication and medical state evaluation. For machine learning tasks in those areas, it would be beneficial to have a representation of facial expressions which corresponds to human similarity perception. In this work, a data-driven approach to representation learning of facial expressions is taken. The methodology is built upon Variational Autoencoders and eliminates appearance-related features from the latent space by using neutral facial expressions as additional inputs. In order to improve the quality of the learned representation, we modify the prior distribution of the latent variable to impose a structure on the latent space that is consistent with human perception of facial expressions. We conduct experiments on two datasets together with additionally collected similarity data, show that the human-like topology in the latent representation helps to improve performance on a stereotypical emotion classification task, and demonstrate the benefits of using a probabilistic generative model in exploring the roles of latent dimensions through the generative process.
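
The core idea, factoring identity out of the expression code by feeding the neutral face alongside the expressive one, corresponds to a conditional variational autoencoder. The following is a schematic sketch under assumed flattened 64x64 grayscale inputs and arbitrary layer widths, not the thesis architecture:

```python
# Schematic conditional VAE: the encoder sees (expressive, neutral) pairs,
# so the latent z only needs to capture the expression offset, while
# appearance comes from the neutral face given to the decoder.
# Input sizes and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

D, Z = 64 * 64, 16            # flattened 64x64 faces, 16-d expression code

class ExpressionCVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * D, 512), nn.ReLU())
        self.mu = nn.Linear(512, Z)
        self.logvar = nn.Linear(512, Z)
        self.dec = nn.Sequential(nn.Linear(Z + D, 512), nn.ReLU(),
                                 nn.Linear(512, D), nn.Sigmoid())

    def forward(self, x_expr, x_neutral):
        h = self.enc(torch.cat([x_expr, x_neutral], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.dec(torch.cat([z, x_neutral], dim=1))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction term + KL divergence to the (here standard normal) prior;
    # the thesis modifies this prior to match human similarity perception.
    rec = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```
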
5. Li, Jingting. "Facial Micro-Expression Analysis". Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0007.

Abstract:
Micro-expressions (MEs) are very important nonverbal communication clues. However, due to their local and short nature, spotting them is challenging. In this thesis, we address this problem by using a dedicated local and temporal pattern (LTP) of facial movement. This pattern has a specific shape (S-pattern) when MEs are displayed. Thus, by using a classical classification algorithm (SVM), MEs are distinguished from other facial movements. We also propose a global final fusion analysis on the whole face to improve the distinction between ME (local) and head (global) movements. However, the learning of S-patterns is limited by the small number of ME databases and the low volume of ME samples. Hammerstein models (HMs) are known to be a good approximation of muscle movements. By approximating each S-pattern with an HM, we can both filter outliers and generate new similar S-patterns. In this way, we perform data augmentation for the S-pattern training set and improve the ability to differentiate MEs from other facial movements. In the first ME spotting challenge of MEGC2019, we took part in building the new result evaluation method, applied our method to spotting MEs in long videos, and provided the baseline result for the challenge. The spotting results on CASME I, CASME II, SAMM and CAS(ME)2 show that our proposed LTP outperforms the most popular spotting method in terms of F1-score. Adding the fusion process and data augmentation further improves the spotting performance.
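
Once the local temporal patterns (S-patterns) are extracted, the spotting step reduces to binary classification with an SVM, as the abstract states. A minimal sketch with scikit-learn, using synthetic feature vectors as stand-ins for real S-pattern features:

```python
# Binary SVM separating micro-expression patterns (S-patterns) from other
# facial movements, as in the spotting pipeline described above.
# The feature vectors below are synthetic placeholders, not real LTP data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_me = rng.normal(1.0, 0.3, size=(200, 30))     # pretend S-pattern features
X_other = rng.normal(0.0, 0.3, size=(200, 30))  # other facial movements
X = np.vstack([X_me, X_other])
y = np.array([1] * 200 + [0] * 200)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.predict(rng.normal(1.0, 0.3, size=(1, 30))))  # likely [1]
```
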
6. Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition". Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.

Abstract:
This thesis examines the research and development of new approaches for face and facial expression recognition within the fields of computer vision and biometrics. Expression variation is a challenging issue in current face recognition systems and current approaches are not capable of recognizing facial variations effectively within human-computer interfaces, security and access control applications. This thesis presents new contributions for performing face and expression recognition simultaneously; face recognition in the wild; and facial expression recognition in challenging environments. The research findings include the development of new factor analysis and deep learning approaches which can better handle different facial variations.
7. Miao, Yu. "A Real Time Facial Expression Recognition System Using Deep Learning". Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38488.

Abstract:
This thesis presents an image-based real-time facial expression recognition system that is capable of recognizing the basic facial expressions of several subjects simultaneously from a webcam. Our proposed methodology combines a supervised transfer learning strategy and a joint supervision method with a new supervision signal that is crucial for facial tasks. A convolutional neural network (CNN) model, MobileNet, which balances accuracy and speed, is deployed in both offline and real-time frameworks to enable fast and accurate real-time output. Evaluations of both offline and real-time experiments are provided in our work. The offline evaluation is carried out by first evaluating two publicly available datasets, JAFFE and CK+, and then presenting the results of the cross-dataset evaluation between these two datasets to verify the generalization ability of the proposed method. A comprehensive evaluation configuration for the CK+ dataset is given, providing a baseline for fair comparison. The method reaches an accuracy of 95.24% on the JAFFE dataset, and an accuracy of 96.92% on the 6-class CK+ dataset, which contains only the last frames of image sequences. The resulting average run-time cost for recognition in the real-time implementation is approximately 3.57 ms/frame on an NVIDIA Quadro K4200 GPU. The results demonstrate that our proposed CNN-based framework for facial expression recognition, which does not require a massive preprocessing module, can not only achieve state-of-the-art accuracy on these two datasets but also perform the classification task much faster than a conventional machine learning methodology, as a result of the lightweight structure of MobileNet.
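
The transfer-learning setup described here, a pretrained MobileNet backbone with a new classification head, follows a standard pattern. The sketch below is a generic illustration of that pattern; the thesis' joint-supervision signal and exact head configuration are omitted, and the 7-class output is an assumption:

```python
# Transfer learning with a MobileNet backbone for facial expression
# recognition: freeze the ImageNet features, train a small new head.
# A generic sketch, not the thesis' exact configuration.
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # keep pretrained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(7, activation="softmax"),  # assumed 7 classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```
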
8. Pierce, Meghan. "Facial Expression Intelligence Scale (FEIS): Recognizing and Interpreting Facial Expressions and Implications for Consumer Behavior". Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26786.

Abstract:
Each time we meet a new person, we draw inferences based on our impressions. The first thing we are likely to notice is a person's face. The face functions as one source of information, which we combine with the spoken word, body language, past experience, and the context of the situation to form judgments. Facial expressions serve as pieces of information we use to understand what another person is thinking, saying, or feeling. While there is strong support for the universality of emotion recognition, the ability to identify and interpret facial expressions varies by individual. Existing scales fail to include the dynamicity of the face. Five studies are proposed to examine the viability of the Facial Expression Intelligence Scale (FEIS) to measure individual ability to identify and interpret facial expressions. Consumer behavior implications are discussed.
Ph. D.
9. Carter, Jeffrey R. "Facial expression analysis in schizophrenia". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ58398.pdf.

10. Yu, Kaimin. "Towards Realistic Facial Expression Recognition". Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9459.

Abstract:
Automatic facial expression recognition has attracted significant attention over the past decades. Although substantial progress has been achieved for certain scenarios (such as frontal faces in strictly controlled laboratory settings), accurate recognition of facial expression in realistic environments remains largely unsolved. The main objective of this thesis is to investigate facial expression recognition in unconstrained environments. Since one major problem faced by the literature is the lack of realistic training and testing data, this thesis presents a web-search-based framework to collect a realistic facial expression dataset from the Web. By adopting an active-learning-based method to remove noisy images from text-based image search results, the proposed approach minimizes the human effort during dataset construction and maximizes the scalability for future research. Various novel facial expression features are then proposed to address the challenges imposed by the newly collected dataset. Finally, a spectral-embedding-based feature fusion framework is presented to combine the proposed facial expression features into a more descriptive representation. This thesis also systematically investigates how the number of frames of a facial expression sequence can affect the performance of facial expression recognition algorithms, since facial expression sequences may be captured under different frame rates in realistic scenarios. A facial expression keyframe selection method is proposed based on keypoint-based frame representation. Comprehensive experiments have been performed to demonstrate the effectiveness of the presented methods.

Books on the topic "Facial expression"

1. Young, A. W. Facial Expression Recognition. London; New York: Psychology Press, 2016. http://dx.doi.org/10.4324/9781315715933.

2. Tetsuo, Yamaori. Nihonjin no kao: Zuzō kara bunka o yomu. Tōkyō: Nihon Hōsō Shuppan Kyōkai, 1986.

3. Smallman, Steve. If the wind changes. Mankato, Minn: QEB Pub., 2012.

4. Russell, James A., and José Miguel Fernández Dols, eds. The psychology of facial expression. Cambridge: Cambridge University Press, 1997.

5. Dobbs, Darris, ed. Animating facial features and expression. Rockland, Mass: Charles River Media, 1999.

6. International Symposium on the Facial Nerve (8th, 1997, Ehime-ken, Japan). New horizons in facial nerve research and facial expression. The Hague: Kugler, 1998.

7. Peck, Stephen Rogers. Atlas of facial expression: An account of facial expression for artists, actors, and writers. Oxford: Oxf.U.P.(N.Y.), 1990.

8. Cyrulnik, Boris, ed. Le Visage: Sens et contresens. Paris: Eshel, 1988.

9. Olson, Rex. Facial animation. Burbank, CA: Desktop Images, 2003.

10. Ikeda, Susumu. Hito no kao matawa hyōjō no shikibetsu ni tsuite: Shoki no jikkenteki kenkyū o chūshin to shita shiteki tenbō. Suita-shi: Kansai Daigaku Shuppanbu, 1987.


Book chapters on the topic "Facial expression"

1. Gong, Shaogang, and Tao Xiang. "Understanding Facial Expression". In Visual Analysis of Behaviour, 69–93. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-670-2_4.

2. Kanade, Takeo. "Facial Expression Analysis". In Lecture Notes in Computer Science, 1. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564386_1.

3. Sebe, Nicu, and Michael S. Lew. "Facial Expression Recognition". In Computational Imaging and Vision, 163–97. Dordrecht: Springer Netherlands, 2003. http://dx.doi.org/10.1007/978-94-017-0295-9_7.

4. Pantic, Maja. "Facial Expression Recognition". In Encyclopedia of Biometrics, 1–8. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-3-642-27733-7_98-3.

5. Frijda, N. H. "Facial Expression Processing". In Aspects of Face Processing, 319–25. Dordrecht: Springer Netherlands, 1986. http://dx.doi.org/10.1007/978-94-009-4420-6_34.

6. Oberwelland, Eileen, Whitney Mattson, Naomi Ekas and Daniel S. Messinger. "Facial Expression Learning". In Encyclopedia of the Sciences of Learning, 1259–62. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_1925.

7. Tian, Yingli, Takeo Kanade and Jeffrey F. Cohn. "Facial Expression Recognition". In Handbook of Face Recognition, 487–519. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-932-1_19.

8. De la Torre, Fernando, and Jeffrey F. Cohn. "Facial Expression Analysis". In Visual Analysis of Humans, 377–409. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-997-0_19.

9. Pantic, Maja. "Facial Expression Recognition". In Encyclopedia of Biometrics, 400–406. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-73003-5_98.

10. Naini, Farhad B. "Facial Expression: Influence and Significance". In Facial Aesthetics, 45–53. West Sussex, UK: John Wiley & Sons, Ltd., 2013. http://dx.doi.org/10.1002/9781118786567.ch3.


Conference papers on the topic "Facial expression"

1. Mal, Hari Prasad, and P. Swarnalatha. "Facial expression detection using facial expression model". In 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS). IEEE, 2017. http://dx.doi.org/10.1109/icecds.2017.8389644.

2. Park, Sungsoo, Jongju Shin and Daijin Kim. "Facial expression analysis with facial expression deformation". In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761398.

3. Murtaza, Marryam, Muhammad Sharif, Musarrat AbdullahYasmin and Tanveer Ahmad. "Facial expression detection using Six Facial Expressions Hexagon (SFEH) model". In 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 2019. http://dx.doi.org/10.1109/ccwc.2019.8666602.

4. Reveriano, Francisco, Unal Sakoglu and Jiang Lu. "Facial Expression Recognition". In PEARC '19: Practice and Experience in Advanced Research Computing. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3332186.3333039.

5. Amano, Toshiyuki. "Coded facial expression". In SA '16: SIGGRAPH Asia 2016. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2988240.2988243.

6. Matre, G. N., and S. K. Shah. "Facial expression detection". In 2013 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). IEEE, 2013. http://dx.doi.org/10.1109/iccic.2013.6724242.

7. Kulkarni, Ketki R., and Sahebrao B. Bagal. "Facial expression recognition". In 2015 International Conference on Information Processing (ICIP). IEEE, 2015. http://dx.doi.org/10.1109/infop.2015.7489442.

8. Congyong Su and Li Huang. "Facial Expression Hallucination". In 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05). IEEE, 2005. http://dx.doi.org/10.1109/acvmot.2005.53.

9. Hongcheng Wang and Ahuja. "Facial expression decomposition". In ICCV 2003: 9th International Conference on Computer Vision. IEEE, 2003. http://dx.doi.org/10.1109/iccv.2003.1238452.

10. Kulkarni, Ketki R., and Sahebrao B. Bagal. "Facial Expression Recognition". In 2015 Annual IEEE India Conference (INDICON). IEEE, 2015. http://dx.doi.org/10.1109/indicon.2015.7443572.


Reports on the topic "Facial expression"

1. Kulhandjian, Hovannes. Detecting Driver Drowsiness with Multi-Sensor Data Fusion Combined with Machine Learning. Mineta Transportation Institute, September 2021. http://dx.doi.org/10.31979/mti.2021.2015.

Abstract:
In this research work, we develop a drowsy driver detection system through the application of visual and radar sensors combined with machine learning. The system concept was derived from the desire to achieve a high level of driver safety through the prevention of potentially fatal accidents involving drowsy drivers. According to the National Highway Traffic Safety Administration, drowsy driving resulted in 50,000 injuries across 91,000 police-reported accidents, and a death toll of nearly 800, in 2017. The objective of this research work is to provide a working prototype of an Advanced Driver Assistance System that can be installed in present-day vehicles. By integrating two modes of surveillance (a camera and a micro-Doppler radar sensor) to examine biometric expressions of drowsiness, our system achieves over 95% accuracy in its drowsy driver detection capabilities. The camera is used to monitor the driver's eyes, mouth and head movement and to recognize when a discrepancy occurs in the driver's blinking pattern, yawning incidence, and/or head drop, thereby signaling that the driver may be experiencing fatigue or drowsiness. The micro-Doppler sensor allows the driver's head movement to be captured both during the day and at night. Through data fusion and deep learning, the system can quickly analyze and classify a driver's behavior under various conditions, such as lighting, pose variation, and facial expression, in a real-time monitoring system.
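
On the camera side, blink-pattern monitoring of the kind described above is commonly implemented with the eye aspect ratio (EAR) computed from eye landmarks. Whether this report uses EAR specifically is not stated, so the sketch below is a hypothetical illustration of the blink-monitoring component only:

```python
# Eye-aspect-ratio (EAR) blink detector: EAR drops sharply when the eye
# closes, so a long run of low-EAR frames suggests drowsiness. The 6-point
# eye landmark layout and the 0.21 threshold are conventional assumptions,
# not values taken from the report.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def looks_drowsy(ear_series, thresh=0.21, min_frames=48):
    """Flag drowsiness when EAR stays below thresh for min_frames frames."""
    run = 0
    for ear in ear_series:
        run = run + 1 if ear < thresh else 0
        if run >= min_frames:
            return True
    return False
```
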
2. Ivanova, E. S. The accuracy of identification of spontaneous facial expressions of male and female faces. LJournal, 2017. http://dx.doi.org/10.18411/a-2017-010.

3. Ivanova, E. S. Performance indicators of the volume of the active vocabulary of emotions and the accuracy of recognition of facial expressions in students. LJournal, 2017. http://dx.doi.org/10.18411/a-2017-002.

4. Peschka-Daskalos, Patricia. An Intercultural Analysis of Differences in Appropriateness Ratings of Facial Expressions Between Japanese and American Subjects. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6584.

5. Makhachashvili, Rusudan K., Svetlana I. Kovpik, Anna O. Bakhtina and Ekaterina O. Shmeltser. Technology of presentation of literature on the Emoji Maker platform: pedagogical function of graphic mimesis. [s.n.], July 2020. http://dx.doi.org/10.31812/123456789/3864.

Abstract:
The article deals with the technology of visualizing fictional text (poetry) with the help of emoji symbols on the Emoji Maker platform, which not only activates students' thinking but also develops creative attention and makes it possible to reproduce the meaning of poetry in a succinct way. The application of this technology has shown that introducing a computer being, the emoji, into the study and mastering of literature is absolutely logical: an emoji, phenomenologically, logically and eidologically installed in the digital continuum, is separated from the natural language provided by (ethno)logy, and is implicitly embedded into (cosmo)logy. The technology is applied to the texts of the twentieth-century Cuban poet José Ángel Buesa. The choice of poetry was dictated by an appeal to the most important function of emoji: the expression of feelings, emotions, and mood. It has been discovered that sensuality can be reconstructed with the help of this type of metalinguistic digital continuum. It is noted that during emoji design in the Emoji Maker program, due to the technical limitations of the platform, it is possible to phenomenologize one's own essential-empirical reconstruction of the lyrical image. In creating the sign image of the lyrical protagonist, it was sensible to apply knowledge in linguistics, philosophy of language, psychology, psycholinguistics and literary criticism. In constructing the sign, a special emphasis was placed on the facial emogram, which also plays an essential role in the transmission of a wide range of emotions, moods and feelings of the lyrical protagonist. Consequently, the Emoji Maker digital platform allowed the creation of a new model of digital presentation of fiction, one that takes into account the psychophysiological characteristics of the lyrical protagonist. Thus, the interpreting reader, using a specific digital toolkit, a visual iconic sign (smile), reproduces the polylateral metalinguistic multimodality of the sign meaning in fiction. The effectiveness of this approach is verified by the poly-functional emoji ousia, tested on texts of fiction.
6. Bloch, G., and H. S. Woodard. Regulation of size-related division of labor in a key pollinator and its impact on crop pollination efficacy. Israel: United States-Israel Binational Agricultural Research and Development Fund, 2021. http://dx.doi.org/10.32747/2021.8134168.bard.

Abstract:
Despite the rapid increase in reliance on bumble bees for food production and security, there are many critical knowledge gaps in our understanding of bumble bee biology that limit their colony production, commercial management, and pollination services. Our project focuses on the social, endocrine, and molecular processes regulating body size in the two bumble bee species most important to agriculture: Bombus terrestris in Israel, and B. impatiens in the USA. Variation in body size underlies both caste (queen/worker) differentiation and division of labor among workers (foragers are typically larger than nest bees), two hallmarks of insect sociality which are also crucial for the commercial rearing and crop pollination services of bumble bees. Our project has generated several fundamental new insights into the biology of bumble bees, which can be integrated into science-based management strategies for commercial pollination. Using transcriptomic and behavioral approaches we show that, in spite of high flexibility, task performance (brood care or foraging) in bumble bee colonies is associated with physiological variation and differential brain gene expression and RNA editing patterns. We further showed that interactions between the brood, the queen, and the workers determine the developmental program of the larva. We identified two important periods. The first is a critical period during the first few days after hatching. Larvae fed by queens during this period develop over fewer days, are not likely to develop into gynes, and commonly reach a smaller ultimate body size compared to workers reared mostly or solely by workers. The facial exocrine (mandibular and hypopharyngeal) glands are involved in this queen effect on larva development. The second period is important for determining the ultimate body size, which is positively regulated by the number of tending workers. The presence of the queen during this stage has little influence, if any. We further show that stressors such as agrochemicals that interfere with foraging- or brood-care-specific processes can compromise bumble bee colony development and pollination performance. We also developed new technology (an RFID system) for automated collection of foraging trip data, for future deployment in agroecosystems. In spite of many similarities, our findings suggest important differences between the Eurasian model species (B. terrestris) and the North American model species (B. impatiens) that impact how management strategies translate across the two species. For example, there is a similar influence of the queen on offspring body size in both species, but this effect does not appear to be mediated by development time in B. impatiens as it is in B. terrestris. Taken together, our collaboration highlights the power of comparative work, showing that considerable differences exist between these two key pollinator species, and in the organization of young bumble bee nests (wherein queens provide the majority of care and then transition away from brood care) relative to later stages of nest development.
7. Norelli, John L., Moshe Flaishman, Herb Aldwinckle and David Gidoni. Regulated expression of site-specific DNA recombination for precision genetic engineering of apple. United States Department of Agriculture, March 2005. http://dx.doi.org/10.32747/2005.7587214.bard.

Abstract:
Objectives: The original objectives of this project were to: 1) evaluate inducible promoters for the expression of recombinase in apple (USDA-ARS); 2) develop alternative selectable markers for use in apple to facilitate the positive selection of gene excision by recombinase (Cornell University); 3) compare the activity of three different recombinase systems (Cre/lox, FLP/FRT, and R/RS) in apple using a rapid transient assay (ARO); and 4) evaluate the use of recombinase systems in apple using the best promoters, selectable markers and recombinase systems identified in 1, 2 and 3 above (collaboratively). Objective 2 was revised from the development of alternative selectable markers to the development of a marker-free selection system for apple. This change in approach was taken due to the inefficiency of the alternative markers initially evaluated in apple, phosphomannose-isomerase and 2-deoxyglucose-6-phosphate phosphatase, and the regulatory advantages of a marker-free system. Objective 3 was revised to focus primarily on the FLP/FRT recombinase system, due to the initial success obtained with this recombinase system. Based upon cooperation between researchers (see Achievements below), research to evaluate the use of the FLP recombinase system under light-inducible expression in apple was then conducted at the ARO (Objective 4). Background: Genomic research and genetic engineering have tremendous potential to enhance crop performance, improve food quality and increase farm profits. However, implementing the knowledge of genomics through genetically engineered fruit crops has many hurdles to be overcome before it can become a reality in the orchard. Among the most important hurdles are consumer concerns regarding the safety of transgenics and the impact this may have on marketing. The goal of this project was to develop plant transformation technologies to mitigate these concerns. Major achievements: Our results indicate activity of the FLP/FRT site-specific recombination system for the first time in apple, and additionally, we show light-inducible activation of the recombinase in trees. Initial selection of apple transformation events is conducted under dark conditions, and tissue cultures are then moved to light conditions to promote marker excision and plant development. As trees are perennial and cross-fertilization is not practical, the light-induced FLP-mediated recombination approach shown here provides an alternative to previously reported chemically induced recombinase approaches. In addition, a method was developed to transform apple without the use of herbicide or antibiotic resistance marker genes (marker-free). Both light- and chemically-inducible promoters were developed to allow controlled gene expression in fruit crops. Implications: The research supported by this grant has demonstrated the feasibility of "marker excision" and "marker free" transformation technologies in apple. The use of these safer technologies for the genetic enhancement of apple varieties and rootstocks for various traits will serve to mitigate many of the consumer and environmental concerns facing the commercialization of these improved varieties.
8. Pochtoviuk, Svitlana I., Tetiana A. Vakaliuk and Andrey V. Pikilnyak. Possibilities of application of augmented reality in different branches of education. [s.n.], February 2020. http://dx.doi.org/10.31812/123456789/3756.

Abstract:
Augmented reality has a great impact on the student in the presentation of educational material: objects of augmented reality affect the development of facial expressions and attention, stimulate thinking, and increase the level of understanding of information. Its implementation in various spheres has indisputable advantages: realism, clarity, applicability in many industries, information completeness and interactivity. That is why the study presents the possibilities of using augmented reality in the study of mathematics, anatomy, physics, chemistry and architecture, as well as in other fields. A comparison of domestic and foreign augmented reality offerings is presented. The use of augmented reality in various fields (technology, entertainment, science and medicine, education, games, etc.) should be well thought out and pedagogically appropriate. That is why further research is planned on the feasibility of using augmented reality and on developing the corresponding elements of augmented reality.
9. Sklenar, Ihor. The newspaper «Christian Voice» (Munich) in the postwar period: history, thematic range of expression, leading authors and publicists. Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11393.

Abstract:
The article considers the history, thematic range of expression and a number of authors and publicists of the newspaper «Christian Voice» (published fortnightly). It has been published in Munich by nationally conscious groups of migrants since 1949 as part of the «Ukrainian Christian Publishing House». The significance of this Ukrainian newspaper in post-Nazi Germany has been only partly explored in the works of researchers of the diaspora press. Therefore, the purpose of this article is to supplement the scholarly record of the «Christian Voice» in the postwar period; in particular, the yearbook for 1957 was chosen as the principal subject of analysis. In writing the article we used the following methods: analysis, synthesis, content analysis, generalization and others. The results of our study describe the socio-political and religious context in which the «Christian Voice» was founded. The article also gives a concise overview of the titles of Ukrainian magazines in post-Nazi Germany in the 1940s and 1950s. The thematic analysis of the 1957 publications shows the main trends of journalistic texts in the newspaper and the journalistic skills of its iconic authors and publicists (D. Buchynsky, M. Bradovych, S. Shah, etc.). The thematic range of the newspaper after 1959 was somewhat narrowed owing to the change in the status of the «Christian Voice», when it became the official newspaper of the UGCC in Germany. Two main thematic blocks of the newspaper are distinguished: social and religious. Historians will find interesting factual material in the newspaper's publications about the life of Ukrainians in the diaspora. Historians of journalism can supplement the bibliographic record of the authors' journalistic and publicistic works in the postwar period and in subsequent years of publishing. Based upon the publications of the «Christian Voice» in different years, not only 1957, journalists can study the content and form of different genres, the linguistic peculiarities of the newspaper's articles, and so on.
10. Datsyshyn, Chrystyna. Functional parameters of the anthroponym as one of the varieties of factual material in the media text. Ivan Franko National University of Lviv, March 2024. http://dx.doi.org/10.30970/vjo.2024.54-55.12169.

Abstract:
The main objective of the study is to reveal the functional parameters of anthroponyms in media texts. Methods of investigation: the method of media text monitoring, the comparative method, the method of contextual analysis, and methods of functional analysis. Results. Anthroponyms in media texts contribute to the exact reproduction of facts and the display of a certain time-space. The use of an anthroponym in the media gives its bearer greater social significance; silencing an anthroponym demonstrates a desire to remove its bearer from the public agenda. Anthroponyms can reflect a person's social connections and inform about belonging to a certain national, ethnic, age or social group. Conclusions. Anthroponyms give a media text more credibility, because they inform about a specific person in specific realities and personalize information. Anthroponyms are capable of marking time-space, so the actualization of proper names can be a means of transferring the reader to another time and of informing about forgotten historical facts and persons. Given the ability of anthroponyms, the names of famous persons, to be reduced, the journalist should take into account the possible difficulties of identifying such a person in a different time-space or under conditions of insufficient recognition. Entering the language game, anthroponyms simultaneously actualize meanings associated with different time-spaces; this ability can be used effectively to draw historical or cultural parallels and to create an expressive load. Given the ability of anthroponyms to increase or decrease social status, journalists should be responsible in the selection of proper names as part of the factual material of the media text. Marking, through anthroponyms, the connection with national, social and age groups makes these words unique identifiers of the division into "own" and "strangers" and demonstrates the attitude of the speaker towards the bearer of the name. Significance. The revealed functional parameters of anthroponyms as part of the factual material of the media text provide journalists with ample opportunities for the implementation of various communicative tasks. Key words: media text, anthroponym, factual material, language picture of the world, time-space, social communications.