Dissertations / Theses on the topic 'Multimodal perception of emotion'

Consult the top 50 dissertations / theses for your research on the topic 'Multimodal perception of emotion.'

1

Cox, A. G. "Multimodal emotion perception from facial and vocal signals." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598105.

Abstract:
The perception of emotion in other people is a fundamental part of social communication. Emotional expressions are often multimodal in nature and, as with human speech, both auditory and visual components are used for comprehension. To date, however, the majority of emotion research has focused on the perception of emotion from facial or vocal expressions in isolation. This thesis investigated the behavioural and neural consequences of perceiving emotion from facial and vocal emotional signals simultaneously. Initial experiments demonstrated that a congruent, but unattended, vocal expression produced faster emotion-categorisation decisions to facial expressions, relative to incongruent or neutral voices. Similarly, simultaneously presented facial expressions had the same effect on the categorisation of vocal expressions. Subsequent experiments showed that other pairings of emotional stimuli (vocal expressions and emotion pictures; facial expressions and emotion pictures) did not have bi-directional effects on each other, but rather asymmetric effects that were consistent with interactions between these stimuli at post-perceptual stages of processing. Facial and vocal signals are naturalistic pairings, and evidence that these signals are integrated at a 'perceptual' level was provided by a final experiment using functional magnetic resonance imaging. Congruent facial-vocal pairings produced enhanced activity in the superior temporal sulcus, a region implicated in cross-modal integration of sensory inputs. The data from this thesis suggest that facial and vocal signals of emotion are automatically integrated at a perceptual processing stage to create a single unified percept that facilitates social communication.
2

Realdon, Olivia. "Differenze culturali nella percezione multimodale delle emozioni." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/37944.

Abstract:
The research question in the present study concerns how culture shapes the way in which simultaneous facial and vocalization cues are combined in emotion perception. The question is not whether culture influences such processes: cultures supply systems of meaning that make salient different core emotional themes, different sets of emotions, their ostensible expression, and action tendencies. Research therefore concerns not whether, but how and at what level of analysis culture shapes these processes (Matsumoto, 2001). Cultural variability was tested within the methodological framework of cultural priming studies (Matsumoto & Yoo, 2006). In this methodological option, culture is viewed not as consensual, enduring, and context-general, but as fragmented, fluctuating, and context-specific (situated cognition model; Oyserman & Sorensen, 2009). Bicultural individuals who, through enduring exposure to at least two cultures, possess the meaning systems and practices of both cultures can therefore switch between these cultural orientations, alternating them depending on the cultural cues (cultural primes) available in the immediate context (cultural frame switching; Hong et al., 2000). The present research investigated cultural differences in the way visual and auditory cues of fear and disgust are combined in emotion perception by Italian-Japanese biculturals primed with Japanese or Italian cultural cues. Bicultural participants were randomly assigned to Italian or Japanese priming conditions, were shown dynamic faces and vocalizations expressing either congruent (i.e., fear-fear) or incongruent (i.e., fear-disgust) emotions, and were asked to identify the emotion expressed while ignoring one or the other modality (cross-modal bias paradigm; Bertelson & de Gelder, 2004). The effect of to-be-ignored vocalization cues was larger for participants in the Japanese priming condition, while the effect of to-be-ignored dynamic face cues was larger for participants in the Italian priming condition. This pattern of results was also investigated within current perspectives on embodied cognition, which, with regard to emotion perception, assume that perceivers subtly mimic a target's facial expression, so that contractions in the perceiver's face generate afferent muscular feedback from the face to the brain, which the perceiver uses to reproduce and thus understand the perceived expressions (Barsalou, 2009; Niedenthal, 2007). In other words, mimicry reflects internal simulation of the perceived emotion in order to facilitate its understanding. A mimicry-interfering manipulation (targeting the facial expressions of fear and disgust; Oberman, Winkielman & Ramachandran, 2007), with bicultural participants performing the same task described above, generated no cultural differences in the effect of to-be-ignored vocalizations, with the interference effect of vocalizations on faces turning out to be larger for participants in the Italian priming condition. Altogether, these results can be interpreted within the cultural syndromes highlighting the independent vs. interdependent and socially embedded nature of the self, which provide meaning systems that encourage and make available a different weighting of nonverbal cues in emotion perception, depending on whether they rely on more (or less) face exposure (meant as individual exposure) in modulating social relationships and less (or more) vocal exposure (more subtle and time-dependent than the face) in order to enhance individual standing and autonomy (vs. establish and maintain social harmony and interpersonal respect). Current perspectives sketching how human cognitive functioning works through a situated (Mesquita, Barrett, & Smith, 2010) and embodied (simulative) mind (Barsalou, 2009), and their implications for emotion perception, are briefly described as the theoretical framework guiding the research question addressed in the empirical contribution.
3

ur Réhman, Shafiq. "Expressing emotions through vibration for perception and control." Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32990.

Abstract:
This thesis addresses a challenging problem: how to let the visually impaired 'see' others' emotions. We, human beings, are heavily dependent on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved, etc. People use emotional information from facial expressions to switch between conversation topics and to determine the attitudes of individuals. Missing emotional information from facial expressions and head gestures makes it extremely difficult for the visually impaired to interact with others in social events. To enhance the social interaction ability of the visually impaired, this thesis addresses the scientific topic of 'expressing human emotions through vibrotactile patterns'. It is quite challenging to deliver human emotions through touch, since our touch channel is very limited. We first investigated how to render emotions through a single vibrator: we developed a real-time 'lipless' tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later on, we extended the system to render more general dynamic media signals, for example rendering live football games through vibration on the mobile phone to improve mobile users' communication and entertainment experience. To display more natural emotions (i.e., emotion type plus emotion intensity), we developed technology that enables the visually impaired to directly interpret human emotions, achieved by use of machine vision techniques and a vibrotactile display. The display comprises a matrix of vibration actuators mounted on the back of a chair; the actuators are sequentially activated to provide dynamic emotional information. The research focus has been on finding a global, analytical, and semantic representation for facial expressions to replace the state-of-the-art Facial Action Coding System (FACS) approach. We proposed using the manifold of facial expressions to characterize dynamic emotions: the basic emotional expressions with increasing intensity become curves on the manifold extending from the center, and blends of emotions lie between those curves, where they can be defined analytically by the positions of the main curves. The manifold is the 'Braille code' of emotions. The developed methodology and technology have been extended to build assistive wheelchair systems that aid a specific group of disabled people, cerebral palsy or stroke patients (i.e., those lacking fine motor control skills), who do not have the ability to access and control a wheelchair by conventional means such as a joystick or chin stick. The solution is to extract the manifold of head or tongue gestures for controlling the wheelchair; the manifold is rendered by a 2D vibration array to provide the wheelchair user with action information from gestures and with system status information, which is very important in enhancing the usability of such an assistive system. The current research not only provides a foundation stone for vibrotactile rendering systems based on object localization, but is also a concrete step towards a new dimension of human-machine interaction.
4

Fernández Carbonell, Marcos. "Automated Multimodal Emotion Recognition." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-282534.

Abstract:
Being able to read and interpret affective states plays a significant role in human society. However, this is difficult in some situations, especially when information is limited to either vocal or visual cues. Many researchers have investigated the so-called basic emotions in a supervised way. This thesis presents the results of a multimodal supervised and unsupervised study of a more realistic number of emotions. To that end, audio and video features are extracted from the GEMEP dataset employing openSMILE and OpenFace, respectively. The supervised approach includes the comparison of multiple solutions and shows that multimodal pipelines can outperform unimodal ones, even with a higher number of affective states. The unsupervised approach embraces a traditional and an exploratory method to find meaningful patterns in the multimodal dataset. It also contains an innovative procedure to better understand the output of clustering techniques.
5

Nguyen, Tien Dung. "Multimodal emotion recognition using deep learning techniques." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/180753/1/Tien%20Dung_Nguyen_Thesis.pdf.

Abstract:
This thesis investigates the use of deep learning techniques to address the problem of machine understanding of human affective behaviour and to improve the accuracy of both unimodal and multimodal human emotion recognition. The objective was to explore how best to configure deep learning networks to capture, individually and jointly, the key features contributing to human emotions from three modalities (speech, face, and bodily movements) in order to accurately classify the expressed human emotion. The outcome of the research should be useful for several applications, including the design of social robots.
6

Gay, R. "Morality : Emotion, perception and belief." Thesis, University of Oxford, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371649.

7

Lawrie, Louisa. "Adult ageing and emotion perception." Thesis, University of Aberdeen, 2018. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=239235.

Abstract:
Older adults are worse than young adults at perceiving emotions in others. However, it is unclear why these age-related differences in emotion perception exist. The studies presented in this thesis investigated the cognitive, emotional and motivational factors influencing age differences in emotion perception. Study 1 revealed no age differences in mood congruence effects: sad faces were rated as more sad when participants experienced negative mood. In contrast, Study 2 demonstrated that sad mood impaired recognition accuracy for sad faces. Together, these findings suggest that different methods of assessing emotion perception engage discrete processing strategies, and that mood influences on emotion perception are similar in young and older adults. Studies 3 and 4 investigated age differences in emotion perception tasks that are more realistic and contextualised than still photographs of facial expressions. Older adults were worse than young adults at recognising emotions from silent dynamic displays; however, older adults outperformed young adults in a film task that displayed emotional information in multiple modalities (Study 3). Study 4 suggested that the provision of vocal information was particularly beneficial to older adults. Furthermore, vocabulary mediated the relationship between age and performance on the contextual film task. However, age-related deficits in decoding basic emotions were established in a separate multi-modal video-based task. Age differences in the perception of neutral expressions were also examined: older adults interpreted neutral expressions as displaying positive emotions. Using a dual-task paradigm, Study 5 suggested that working memory processes are involved in decoding emotions, but age-related declines in working memory were not driving age effects in emotion perception. Neuropsychological, motivational and cognitive explanations for these results are evaluated, and implications of these findings for older adults' social functioning are discussed.
8

Abrilian, Sarkis. "Représentation de comportements emotionnels multimodaux spontanés : perception, annotation et synthèse." PhD thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00620827.

Abstract:
The objective of this thesis is to represent spontaneous emotions and the associated multimodal signs in order to contribute to the design of future interactive affective systems. Current prototypes are generally limited to the detection and generation of a few simple emotions, and are based on audio or video data acted out by actors and collected in the laboratory. In order to model the complex relations between spontaneous emotions and their expression in different modalities, an exploratory approach is necessary. The exploratory approach chosen in this thesis for the study of these spontaneous emotions consists of collecting and annotating a video corpus of television interviews. This type of corpus contains emotions more complex than the six basic emotions (anger, fear, joy, sadness, surprise, disgust); indeed, spontaneous emotional behaviour exhibits superpositions, maskings, and conflicts between positive and negative emotions. We report several experiments that enabled the definition of several levels of emotion representation and of multimodal behavioural parameters providing information relevant to the perception of these complex spontaneous emotions. Looking ahead, the tools developed during this thesis (annotation schemes, measurement programs, annotation protocols) can be used in the future to design models usable by interactive affective systems capable of detecting/synthesising multimodal expressions of spontaneous emotions.
9

Lim, Angelica. "MEI: Multimodal Emotional Intelligence." 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188869.

10

Kosti, Ronak. "Visual scene context in emotion perception." Doctoral thesis, Universitat Oberta de Catalunya, 2019. http://hdl.handle.net/10803/667808.

Abstract:
Psychological studies show that the context of a setting, in addition to facial expression and body language, lends important information that conditions our perception of people's emotions. However, the processing of context for automatic emotion recognition has not been explored in depth, partly due to the lack of sufficient data. In this thesis we present EMOTIC, a dataset of images of people in various natural scenarios annotated with their apparent emotion. The EMOTIC database combines two different types of emotion representation: (1) a set of 26 emotion categories, and (2) the continuous dimensions of valence, arousal and dominance. We also present a detailed statistical and algorithmic analysis of the dataset along with the annotators' agreement analysis. CNN models are trained using EMOTIC, combining a person's features with those of the setting (context). Our results not only show how the context of a setting contributes important information for automatically recognizing emotional states but also motivate further research in this direction.
11

Bodnar, Andor L. "Sensory and Emotion Perception of Music." Thesis, University of Louisiana at Lafayette, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10268431.

Abstract:

The purpose of this study was to examine whether isolated musical chords and chord progressions are capable of communicating basic emotions (happiness, sadness, and fear) and sensory perceptions of tension and dissonance to eighty-two university students differing in musical expertise. Participants were recruited from ULL’s psychology and music department, and were divided into three different groups based on their formal training in music. Participants listened to forty-six music excerpts and were asked to identify and rate the emotions they felt each stimulus was attempting to convey. Participants were also asked to rate how much tension and dissonance they experienced after each excerpt.

The results demonstrated that major chord progressions played in fast tempo more readily expressed happiness than minor and chromatic chord progressions. Minor chord progressions played in slow tempo were associated with sadness and were rated higher in tension and dissonance than major chord progressions. Chromatic chord progressions, regardless of tempo, expressed fear most reliably, and received higher tension and dissonance ratings than major and minor chord progressions. Furthermore, results showed that isolated major chords were perceived as the least tense, the least dissonant, and the happiest sounding. Isolated minor chords readily conveyed sadness, and were perceived as more tense and dissonant than major chords. Additionally, isolated augmented and diminished chords were the most likely to express fear and were rated highest in tension and dissonance. Contrary to previous research findings, participants' level of musical expertise influenced sensory and emotion perception ratings: participants with three to four years of formal training outperformed experts and novices. Future research directions and possible applied implications of these findings are also discussed.

12

Ver Hulst, Pamela. "Visual and auditory factors facilitating multimodal speech perception." Honors thesis, Ohio State University, 2006. http://hdl.handle.net/1811/6629.

13

Petit, Céline E. F. "Multimodal flavour perception: influence of colour and chemesthesis." Thesis, University of Nottingham, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.438309.

14

Cheng, Linda. "Ethnic and Racial Differences in Emotion Perception." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/psych_hontheses/6.

Abstract:
This study analyzed racial differences in the way African Americans and Caucasians perceive emotion from facial expressions and tone of voice. Participants were African American (n = 25) and Caucasian (n = 26) college students. The study utilized 56 images of African American and Caucasian faces, balanced for race and sex, from the NimStim stimulus set (Tottenham, 2006), as well as visual and auditory stimuli from the DANVA2. Participants were asked to judge the emotion of each stimulus in the tasks. The BFRT, the WASI, and the Seashore Rhythm Test were used as exclusionary criteria. In general, the study found few differences in the way African Americans and Caucasians perceived emotion, though racial differences emerged in interaction with other factors. The results supported the theory of universality of emotion perception and expression, though social influences that may affect emotion perception remain a possibility. Areas of future research are discussed.
15

Lim, Seung-Lark. "The role of emotion on visual perception." [Bloomington, Ind.]: Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3358931.

Thesis (Ph.D.)--Indiana University, Dept. of Psychological and Brain Sciences, 2009.
16

Recio, Guillermo. "Perception of dynamic facial expressions of emotion." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16697.

Abstract:
Behavioral studies have shown that facial expressions of emotion unfolding over time provide some type of information that benefits the recognition of emotional expressions, in comparison with static images. In line with the dynamic advantage hypothesis, neuroimaging studies have shown increased and wider activation while seeing dynamic expressions. The present dissertation aims to clarify the cognitive mechanism underlying this dynamic advantage and the specificity of this effect for six facial expressions of emotion. Study 1 compared behavioral and brain cortical responses to dynamic and static expressions, looking for psychophysiological correlates of the dynamic advantage. Study 2 dealt with methodological issues regarding the timing of the stimuli and the dynamic neutral conditions. Study 3 tested the hypothesis that increasing the amount of movement in the expressions would increase the allocation of attention, and compared effects of intensity in both emotional and non-emotional movements. Study 4 focused on the question of emotion specificity of brain activation during emotion recognition. Results confirmed a dynamic advantage in the classification of expressions, presumably due to more efficient allocation of attention that improved perceptual processing. The effect increased gradually by augmenting the amount of motion, in both emotional and neutral expressions, indicating a perceptual bias to attend facial movements. The enhancement was somewhat larger for happiness and reduced for surprise, but overall similar for all emotional expressions.
17

Soladié, Catherine. "Représentation Invariante des Expressions Faciales." PhD thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00935973.

Abstract:
More and more applications aim to automate the analysis of human behaviour in order to assist or replace the experts who currently carry out these analyses. This thesis deals with the analysis of facial expressions, which provide key information about these behaviours. The work presented concerns an innovative solution for efficiently defining a facial expression, independently of the subject's morphology. To overcome morphological differences between people, we use person-specific appearance models. We propose a solution that takes into account both the continuous nature of expression space and the mutual coherence of the different parts of the face. To this end, we propose an original approach based on the organisation of expressions. We show that the organisation of expressions, as defined, is universal and that it can be used effectively to define an expression uniquely: an expression is characterised by its intensity and by its position relative to the other expressions. The solution is compared with classical appearance-based methods and shows a significant improvement in recognition results on 14 non-basic expressions. The method has been extended to unknown subjects. The main idea is to create a plausible person-specific appearance space for the unknown subject by synthesising that subject's basic expressions from deformations learned on other subjects and applied to the unknown subject's neutral face. The solution is also put to the test in a more complete multimodal environment whose objective is the recognition of emotions in spontaneous conversations. The results show that the solution is effective on real data and that it allows the extraction of information essential to the analysis of emotions. Our method was put into practice in the international AVEC 2012 challenge (Audio/Visual Emotion Challenge), where we finished second, with recognition rates very close to those obtained by the winners. The comparison of the two methods (ours and the winners') seems to show that the extraction of relevant features is the key to such systems.
18

Lambert, Hayley M. "Emotion Discrimination in Peripheral Vision." TopSCHOLAR®, 2018. https://digitalcommons.wku.edu/theses/2087.

Abstract:
The recognition accuracy of emotion in faces varies depending on the discrete emotion being expressed and the location of the stimulus. More specifically, emotion detection performance declines as facial stimuli are presented further out in the periphery. Interestingly, this is not always true for faces depicting happy emotional expressions, which can be associated with maintained levels of detection. The current study examined neurophysiological responses to emotional face discrimination in the periphery. Two event-related potentials (ERPs) that can be sensitive to the perception of emotion in faces, P1 and N170, were examined using EEG data recorded from electrodes at occipitotemporal sites on the scalp. Participants saw a face presented at a 0°, 10°, or 20° angle of eccentricity and responded whether the face showed a specific emotion or was neutral. Results showed that emotion detection was higher when faces were presented at the center of the display than at 10° or 20° for both happy and angry expressions. Likewise, the voltage amplitude of the N170 component was greater when faces were presented at the center of the display than at 10° or 20°. Further exploration of the data revealed that high-intensity expressions were more easily detected at each location and elicited a larger-amplitude N170 than low-intensity expressions for both emotions. For a peripheral emotion discrimination task like that employed in the current study, emotion cues seem to enhance face processing at peripheral locations.
19

Gendron, Maria Therese. "Relativity in the perception of emotion across cultures." Thesis, Boston College, 2013. http://hdl.handle.net/2345/bc-ir:104063.

Abstract:
Thesis advisor: Lisa Feldman Barrett
A central question in the study of human behavior is whether or not certain categories of emotion, such as anger, fear and sadness (termed "discrete emotions"), are universally recognized in the nonverbal behaviors of others (termed the "universality of attribution hypothesis"). In this dissertation, the universality of attribution hypothesis was revisited in order to examine whether individuals from remote cultural contexts perceive the same mental states in nonverbal cues as individuals from a Western cultural context. The studies described in this dissertation removed certain features of prior universality studies that served to obscure the underlying nature of cross-cultural perceptions. In Study 1, perceptions of posed emotional vocalizations by individuals from a US cultural context were compared to those of individuals from the Himba ethnic group, who reside in remote regions of Namibia and have limited contact with individuals outside their community. In contrast to recent data claiming to support the universality hypothesis, we did not find evidence that emotions were universally perceived when participants were asked to freely label the emotion they perceived in vocalizations. In contrast, our findings did support the hypothesis that affective dimensions of valence and arousal are perceived across cultural contexts. In the second study, emotion perceptions based on facial expressions were compared between participants from US and Himba cultural contexts. Consistent with the results of Study 1, Himba individuals did not perceive the Western discrete emotion categories that their US counterparts did. Our data did support the hypothesis that Himba participants were routinely engaging in action perception, rather than mental state inference. Across both cultural contexts, when conceptual knowledge about emotions was made more accessible by presenting emotion words as part of the task, perception was impacted. In US participants, perceptions conformed even more strongly with the previously assumed "universal" model. Himba participants appeared to rely more on mental state categories when exposed to concepts, but a substantial amount of cultural variation was still observed. Finally, in Study 3, perceptions of emotion were examined in a US cultural context after the focus of participants was manipulated, either onto mental states (broadly), emotions or behaviors. Perceptions of emotion did not differ substantially across these three conditions, indicating that within a US cultural context the tendency to infer mental states from facial expressions is somewhat inflexible. Overall, the findings of this dissertation indicate that emotion perception is both culturally and linguistically relative, and that attempts to apply the Western cultural model for emotions as a universal one obscure important cultural variation.
Thesis (PhD) — Boston College, 2013
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Psychology
20

Picó Pérez, Maria. "Multimodal Neuroimaging of Emotion Regulation Strategies in Clinical and Healthy Control Samples." Doctoral thesis, Universitat de Barcelona, 2018. http://hdl.handle.net/10803/664163.

Abstract:
Emotion regulation is a critical process for dealing appropriately with emotions in our daily lives. An important part of adaptively regulating emotions is being able to maintain a balance between short-term and long-term goals. There are different processes and strategies involved in emotion regulation, which can lead to different affective, cognitive and social outcomes. More importantly, these are altered in mental health disorders, with emotion regulation deficits appearing to be a transdiagnostic feature. Several functional magnetic resonance imaging (fMRI) studies and meta-analyses have shown a consistent pattern of neural activation during emotion regulation in healthy controls (HC), specifically for cognitive reappraisal tasks. This pattern of activations consists of a network of fronto-parietal control regions which, in turn, down-regulates the limbic system (including the amygdala). On the other hand, preliminary findings from psychiatric populations indicate that the limbic system is overactive and prefrontal regulatory regions are compromised, showing both hyper- and hypoactivations depending on the specific task. Further research is needed to better characterize whether the same pattern of affected neural activations is observed across different mental health disorders or whether a specific pattern of regional alterations emerges for different disorders, considering that most of these show deficits in emotion regulation in one way or another. To this end, the overall aim of this thesis was to provide new insights into the neural correlates of emotion regulation in clinical and HC samples, using different neuroimaging modalities. Specifically, Study 1 was the first to use a dispositional approach looking into the intrinsic functional connectivity patterns associated with reappraisal and suppression use in a sample of HC. In Study 2 this was further complemented with a sample of obsessive-compulsive disorder (OCD) patients, looking also at structural connectivity. Study 3 was the first to examine the fMRI cognitive reappraisal task in a sample of excess-weight participants, and the meta-analysis of fMRI cognitive reappraisal studies performed in Study 4 was the first to include clinical samples and not just HC. From the results of these studies, we were able to draw the following conclusions:
• The dispositional use of cognitive reappraisal and expressive suppression in HC subjects shows different neural correlates, with reappraisal use related to basolateral amygdala (BLA)-insular and supplementary motor area (SMA) intrinsic functional connectivity, and suppression use associated with BLA-dorsal anterior cingulate (dACC) and centromedial amygdala (CMA)-SMA functional connectivity.
• There is an alteration of the insula, a region critically involved in emotional awareness and valuation, across OCD, mood and anxiety disorders, and excess-weight individuals.
• OCD patients show decreased functional and structural connectivity between the right amygdala and the right post-central gyrus. Moreover, the association between the amygdala's functional connectivity and habitual reappraisal and suppression use is altered in comparison with HC, with microstructural alterations in the underlying white-matter tracts connecting these regions.
• Excess-weight individuals show decreased attentional involvement during the experience of emotions, and increased emotional reactivity during emotion regulation, as indicated by decreased insula, orbitofrontal cortex, and cerebellum activity during the Maintain>Observe contrast, and increased insula activity during the Regulate>Maintain contrast.
• Other alterations in mood and anxiety disorders encompass decreased activations in the prefronto-parietal regulatory network during cognitive reappraisal, as well as increased activations in attentional and parietal compensatory regions.
• According to our findings, there are deficits in emotion regulation shared across different disorders, although there are also specific deficits for different disorder groups, emphasizing the need for subject-wise brain-based treatments targeting individual needs.
This thesis contributes to expanding the knowledge about the underlying neurobiology of emotion regulation across conditions and mental health disorders, leading to a better understanding of the commonalities and differences in patient groups with impairments in these processes, and eventually providing new targets for brain-based treatments of emotion regulation deficits.
21

Israelsson, Alexandra. "Emotion Recognition Ability, Metacognition, and Metaemotion: A Multimodal Online-Assessment of Swedish Adults." Thesis, Stockholms universitet, Psykologiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-157964.

Abstract:
Collecting data in laboratory settings is a valid but resource-demanding approach. Moreover, although aspects of both metacognition and metaemotion have been proposed to be important for socioemotional functioning, such associations have rarely been studied. This study aimed to examine the feasibility of a multimodal online assessment of emotion recognition ability, and to investigate its associations with metacognition and metaemotion. The sample consisted of 106 students from three Swedish universities. The online survey included a multimodal emotion recognition test (ERAM) with added trial-by-trial confidence judgments (to measure metacognition) and questionnaires related to metaemotion. Online data showed great consistency with previous data collected in the lab. Well-calibrated adults had higher emotion recognition accuracy than under-confident adults. Higher levels of negative metaemotions were associated with higher emotion recognition accuracy. In conclusion, online assessments of emotional abilities may be a useful approach. Further research is required to understand the relationships between metacognition, metaemotion, and emotion recognition ability more fully.
22

Ludlum, Madonna L. "A Multimodal Investigation of Renewal of Human Avoidance, Perceived Threat, and Emotion." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc801907/.

Abstract:
Many people who receive exposure-based treatments for anxiety disorders exhibit a return of fear and avoidance, often referred to as renewal or relapse. Human and nonhuman research on fear conditioning and renewal has been instrumental in helping to understand relapse in anxiety disorders. The purpose of this investigation was to examine renewal of human avoidance and assess whether avoidance may aid in sustaining renewal of fear responses. We adopted a multimodal measurement approach consisting of an approach-avoidance task along with ratings of perceived threat and fear and measures of skin conductance, a widely used physiological measure of fear. A traditional single-subject research design was used with six healthy adults. All tasks employed a discrete-trial procedure. Experimental conditions included Pavlovian fear conditioning, in which increased probability of money loss was paired with a "threat" meter in Context A, later followed by extinction in Context B. Fear and avoidance increased with higher threat levels in Context A but not Context B. Renewal testing involved presenting the threat meter on a return to Context A to determine whether it evoked fear and avoidance (i.e., relapse). As predicted, renewal testing in Context A showed that increased threat was associated with increased avoidance, ratings of perceived threat and fear, and higher skin conductance. Moreover, results showed that renewal was maintained over six blocks of trials. This is the first investigation of renewal of threat and avoidance in humans, and it highlights avoidance as a mechanism that may contribute to maintaining fear in anxiety pathology.
23

Longmore, Richard. "The understanding and perception of emotion in schizophrenia." Thesis, University of Leicester, 2002. http://hdl.handle.net/2381/31335.

Abstract:
Objectives: Research is beginning to examine the links between schizophrenia and social cognition - the processes people use to make sense of their social experience (Corrigan & Penn, 2001). Identifying emotion in other people is a vital social skill. A significant body of research shows that people with schizophrenia have problems judging facial emotions. However, an intact understanding of emotion concepts is usually assumed in such research. This thesis aims to establish (a) whether affect recognition difficulties in schizophrenia reflect problems in the understanding or perception of emotion, (b) the relationship of emotional understanding to social functioning, and (c) how far general cognition accounts for differences in affect recognition. Method: The study describes the validation of a set of vignettes that reliably imply specific emotions. These were then administered to participants with a diagnosis of schizophrenia (n = 60), nonclinical controls (n = 40) and learning-disabled controls (n = 20). There were two experimental conditions. In the first, vignettes were paired with emotion words. In the second, they were matched with previously validated photographs of emotional facial expressions. A measure of intelligence was administered to all participants, and a social functioning scale was completed for participants with schizophrenia. Results: Participants with schizophrenia and learning-disabled controls had significantly more difficulty than nonclinical controls in the understanding and perception of emotion. Once general intellectual functioning was taken into account, however, only the group with schizophrenia showed a differential deficit in affect recognition. No differential deficit was found in the perception of facial emotion in schizophrenia, although performance on some emotions was markedly low. There was no significant correlation between affect recognition ability and social functioning in schizophrenia. Conclusion: People with schizophrenia have a specific and differential deficit in social understanding which is not wholly accounted for by general cognitive functioning.
24

Zieber, Nicole R. "Infants' Perception of Emotion from Dynamic Body Movements." UKnowledge, 2012. http://uknowledge.uky.edu/psychology_etds/5.

Abstract:
In humans, the capacity to extract meaning from another person's behavior is fundamental to social competency. Adults recognize emotions conveyed by body movements with comparable accuracy to when they are portrayed in facial expressions. While infancy research has examined the development of facial and vocal emotion processing extensively, no prior study has explored infants' perception of emotion from body movements. The current studies examined the development of emotion processing from body gestures. In Experiment 1, I asked whether 6.5-month-old infants would prefer to view emotional versus neutral body movements. The results indicate that infants prefer to view a happy versus a neutral body action when the videos are presented upright, but fail to exhibit a preference when the videos are inverted. This suggests that the preference for the emotional body movement was not driven by low-level features (such as the amount or size of the movement displayed), but rather by the affective content displayed. Experiments 2A and 2B sought to extend the findings of Experiment 1 by asking whether infants are able to match affective body expressions to their corresponding vocal emotional expressions. In both experiments, infants were tested using an intermodal preference technique: infants were exposed to a happy and an angry body expression presented side by side while hearing either a happy or angry vocalization. An inverted condition was included to investigate whether matching was based solely upon some feature redundantly specified across modalities (e.g., tempo). In Experiment 2A, 6.5-month-old infants looked longer at the emotionally congruent videos when they were presented upright, but did not display a preference when the same videos were inverted. In Experiment 2B, 3.5-month-olds tested in the same manner exhibited a preference for the incongruent video in the upright condition, but did not show a preference when the stimuli were inverted. These results demonstrate that even young infants are sensitive to emotions conveyed by bodies, indicating that sophisticated emotion processing capabilities are present early in life.
25

John, C. "Subliminal perception and the cognitive processing of emotion." Thesis, University of Reading, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233155.

26

Vickhoff, Björn. "A perspective theory of music perception and emotion." Göteborg: Göteborgs Universitet, 2008. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016671611&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

27

Paterson, Helena M. "The perception and cognition of emotion from motion." Thesis (Ph.D.), University of Glasgow, 2002. http://theses.gla.ac.uk/1072/.

28

Zuberbühler, Hans-Jörg. "Quality aspects of multimodal communication: user perception and acceptance thresholds." Zürich, 2003. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=15124.

29

Browatzki, Björn [Verfasser], and Alexander [Akademischer Betreuer] Verl. "Multimodal object perception for robotics / Björn Browatzki. Betreuer: Alexander Verl." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2015. http://d-nb.info/1076775705/34.

30

Richoz, Anne-Raphaëlle. "Vers la compréhension du traitement dynamique du visage humain." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS002.

Abstract:
The human visual system is constantly stimulated by dynamic cues. Faces provide information crucial for adaptive social interactions. From an evolutionary perspective, humans have been far more extensively exposed to dynamic faces, as static face images appeared only recently with the advent of photography and the expansion of digital tools. Yet most studies investigating face perception have relied on static faces, and little is known about the mechanisms involved in dynamic face processing. To clarify this issue, this thesis used dynamic faces to investigate different aspects of face processing in different populations and age groups. In Study 1, we used dynamic faces to investigate whether the ability of infants aged 6, 9, and 12 months to match audible and visible attributes of gender is influenced by the use of adult-directed (ADS) vs. infant-directed (IDS) speech. Our results revealed that from 6 months of age, infants matched female faces and voices when presented with ADS; with IDS, this ability emerged only at 9 months of age. Altogether, these findings support the idea that the perception of multisensory gender coherence is influenced by the nature of social interactions. In Study 2, we used a novel 4D technique to reconstruct the dynamic internal representations of the six basic expressions in a pure case of acquired prosopagnosia (i.e., a brain-damaged patient severely impaired in recognizing familiar faces), in order to re-examine the debated issue of whether identity and expression are processed independently. Our results revealed that our patient used all facial features to represent basic expressions, contrasting sharply with her suboptimal use of facial information for identity recognition. These findings support the idea that different sets of representations underlie the processing of identity and expression. We then examined our patient’s ability to recognize static and dynamic expressions using her internal representations as stimuli. Our results revealed that she was selectively impaired in recognizing many of the static expressions, whereas she displayed maximum accuracy in recognizing all the dynamic emotions with the exception of fear. The latter findings support recent evidence suggesting that separate cortical pathways, originating in early visual areas and not in the inferior occipital gyrus, are responsible for the processing of static and dynamic face information. In Study 3, we investigated whether dynamic cues offer processing benefits for the recognition of facial expressions in other populations with immature or fragile face processing systems. To this aim, we conducted a large cross-sectional study with more than 400 participants aged between 5 and 96 years, investigating their ability to recognize the six basic expressions presented under different temporal conditions. Consistent with previous studies, our findings revealed the highest recognition performance for happiness, regardless of age and experimental condition, as well as marked confusions among expressions with perceptually similar facial signals (e.g., fear and surprise). Bayesian modelling further enabled us to quantify, for each expression and condition individually, the steepness of the increase and decrease in recognition performance, as well as the peak efficiency: the age at which observers’ performance reaches its maximum before declining.
Finally, our results offered new evidence for a dynamic advantage in facial expression recognition, stronger for some expressions than for others and more pronounced at specific points in development. Overall, the results highlighted in this thesis underscore the critical importance of using dynamic stimuli in face perception and expression recognition research, not only in the field of prosopagnosia, but also in other domains of developmental and clinical neuroscience.
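As an illustration of the lifespan analysis described above, the sketch below estimates the age of peak recognition efficiency and the slopes around it from accuracy-by-age data. The thesis used Bayesian modelling; a simple least-squares fit stands in here, and the curve form, parameter names, and data are invented for demonstration.

```python
# Illustrative sketch, not the thesis code: estimating the age of peak
# recognition efficiency from accuracy-by-age data. All data are invented.
import numpy as np
from scipy.optimize import curve_fit

def lifespan_curve(age, peak_age, width, amplitude, baseline):
    """Gaussian-shaped curve: performance rises, peaks, then declines."""
    return baseline + amplitude * np.exp(-((age - peak_age) ** 2) / (2 * width ** 2))

ages = np.array([5, 10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
accuracy = np.array([0.55, 0.68, 0.80, 0.83, 0.82, 0.78, 0.72, 0.66, 0.60, 0.54])

params, _ = curve_fit(lifespan_curve, ages, accuracy, p0=[30.0, 25.0, 0.3, 0.5])
peak_age, width, amplitude, baseline = params
print(f"estimated peak efficiency at ~{peak_age:.1f} years")
# The fitted curve's slope on either side of the peak quantifies the
# steepness of increase and decline for a given expression and condition.
```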
APA, Harvard, Vancouver, ISO, and other styles
32

Dalili, Michael Nader. "Investigating emotion recognition and evaluating the emotion recognition training task, a novel technique to alter emotion perception in depression." Thesis, University of Bristol, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.702458.

Full text
Abstract:
Rationale. Accurately recognising facial expressions of emotion is important in social interactions and for maintaining interpersonal relationships. While comparing evidence across studies is difficult, research suggests that depressed individuals show deficits in emotion recognition (ER). A possible explanation for these deficits is biased perception of emotional expressions. Research suggests that the emotion recognition training task, a novel cognitive bias modification (CBM) technique, shows promise in improving affect in individuals with low mood. However, further work is necessary to evaluate its training effects. Finally, research in healthy individuals has been limited, with larger studies needed to determine the effects of participant and study characteristics and negative symptoms on ER performance. Methods. Using methodologies such as meta-analysis and online recruitment and testing, the research conducted here reviews and contributes to ER research in healthy and depressed populations. This work also uses CBM paradigms, brain imaging, and a randomised controlled trial design to evaluate the emotion recognition training task. Results. This research identifies a general ER deficit in depression, present across all emotions except sadness. It also finds effects of presentation time and anxiety, but not of sociodemographic characteristics or depression, on performance in healthy individuals. This work also indicates generalisation of emotion recognition training effects across identities, but only partial generalisation across emotions. Finally, it finds increased neural activity in response to happy faces following training in individuals with low mood. Conclusions. Overall, this thesis has contributed new evidence to the understanding of ER and the factors influencing performance in healthy and depressed individuals. The work presented in this thesis found partial generalisation of emotion recognition training effects and an increase in neural activation for happy faces following a course of training, resembling antidepressant treatment effects. These findings suggest that emotion recognition training is a promising novel CBM technique that should continue to be evaluated for use in treatment alongside traditional methods such as cognitive behavioural therapy.
APA, Harvard, Vancouver, ISO, and other styles
33

Aldebot, Stephanie. "Neurocognition, Emotion Perception and Quality of Life in Schizophrenia." Scholarly Repository, 2009. http://scholarlyrepository.miami.edu/oa_theses/228.

Full text
Abstract:
Patients with schizophrenia have extremely high rates of depression and suicide (Carlborg et al., 2008); thus, a better understanding of factors associated with poor quality of life (QoL) in this population is sorely needed. A growing body of research suggests that cognitive functioning in schizophrenia may be a strong predictor of overall QoL (Green et al., 2000), but individual domains of QoL have not been examined. Indirect evidence also suggests that emotion perception may underlie the relationship between neurocognition and QoL, but this hypothesis also has yet to be tested. Using a sample of 92 clinically stable schizophrenia patients, the current study explores the relationship between neurocognition, namely attention and working memory, and the following subdomains of QoL: social, vocational, intrapsychic foundations, and environmental engagement. The current study also examines whether emotion perception mediates this relationship. In partial support of the hypotheses, patients with more deficits in working memory reported decreased Occupational QoL and, although only marginally significant, decreased Total QoL. There was also a trend for poorer working memory to be associated with poorer Intrapsychic Foundations QoL. Contrary to hypotheses, emotion perception was not found to mediate the relationship between working memory and QoL. Current findings suggest that interventions that specifically target working memory may also improve many other aspects of schizophrenia patients’ QoL.
APA, Harvard, Vancouver, ISO, and other styles
34

Obeidi, Amer. "Emotion, Perception and Strategy in Conflict Analysis and Resolution." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2828.

Full text
Abstract:
Theoretical procedures are developed to account for the effect of emotion and perception in strategic conflict. The possibility principle facilitates modeling the effects of emotions on future scenarios contemplated by decision makers; perceptual graph models and a graph model system permit the decision makers (DMs) to experience and view the conflict independently; and perceptual stability analysis, which is based on individual- and meta-stability analysis techniques, is employed in analyzing graph model systems when the DMs have inconsistent perceptions. These developments improve the methodology of the Graph Model for Conflict Resolution by reconciling emotion, perception, and strategy to make predictions consistent with the actual unfolding of events.

Current research in neuroscience suggests that emotions are a necessary component of cognitive processes such as memory, attention, and reasoning. The somatic marker hypothesis, for example, holds that feelings are necessary to reasoning, especially during social interactions (Damasio, 1994, 2003). Somatic markers are memories of past emotions: we use them to predict future outcomes. To incorporate the effect of emotion in conflict, the underlying principle of Damasio’s hypothesis is used in developing the possibility principle, which significantly expands the paradigm of the Graph Model for Conflict Resolution of Fang, Hipel, and Kilgour (1993).

State identification is a crucial step in determining future scenarios for DMs. The possibility principle is integrated into the modeling stage of the Graph Model by refining the method of determining feasible states. The possibility principle enables analysts and DMs to include emotion in a conflict model, without sacrificing the parsimonious design of the Graph Model methodology, by focusing attention on two subsets of the set of feasible states: hidden and potential states. Hidden states are logically valid, feasible states that are invisible because of the presence of negative emotions such as anger and fear; potential states are logically valid, feasible states that are invisible because of missing positive emotions. Dissipating negative emotions will make the hidden states visible, while expressing the appropriate positive emotions will make the potential states visible. The possibility principle has been applied to a number of real world conflicts. In all cases, eliminating logically valid states not envisioned by any DM simplifies a conflict model substantially, expedites the analysis, and makes it an intuitive and a realistic description of the DMs' conceptualizations of the conflict.

A fundamental principle of the Graph Model methodology is that all DMs' directed graphs must have the same set of feasible states, which are integrated into a standard graph model. The possibility principle may modify the set of feasible states perceived by each DM according to his or her emotion, making it impossible to construct a single standard graph model. When logically valid states are no longer achievable for one or more DMs due to emotions, the apprehension of conflict becomes inconsistent, and resolution may become difficult to predict. Therefore, reconciling emotion and strategy requires that different apprehensions of the underlying decision problem be permitted, which can be accomplished using a perceptual graph model for each DM. A perceptual graph model inherits its primitive ingredients from a standard graph model, but reflects a DM's emotion and perception with no assumption of complete knowledge of other DMs' perceptions.

Each DM's perceptual graph model constitutes a complete standard graph model. Hence, conclusions drawn from a perceptual graph model provide a limited view of equilibria and predicted resolutions. A graph model system, which consists of a list of DMs' perceptual graph models, is defined to reconcile perceptions while facilitating conclusions that reflect each DM's viewpoint. However, since a DM may or may not be aware that other graph models differ from his or her own, different variants of graph model systems are required to describe conflicts. Each variant of graph model system corresponds to a configuration of awareness, which is a set of ordered combinations of DMs' viewpoints.

Perceptual stability analysis is a new procedure that applies to graph model systems. Its objective is to help an outside analyst predict possible resolutions, and gauge the robustness and sustainability of these predictions. Perceptual stability analysis takes a two-phase approach. In Phase 1, the stability of each state in each perceptual graph model is assessed from the point of view of the owner of the model, for each DM in the model, using standard or perceptual solution concepts, depending on the owner's awareness of others' perceptions. (In this research, only perceptual solution concepts for the 2-decision-maker case are developed.) In Phase 2, meta-stability analysis is employed to consolidate the stability assessments of a state in all perceptual graph models and across all variants of awareness. Distinctive modes of equilibria are defined, which reflect incompatibilities in DMs' perceptions and viewpoints but nonetheless provide important insights into possible resolutions of conflict.

The possibility principle and perceptual stability analysis are integrative techniques that can be used as a basis for empathetically studying the interaction of emotion and reasoning in the context of strategic conflict. In general, these new techniques expand current modeling and analysis capabilities, thereby facilitating realistic, descriptive models without exacting too great a cost in modeling complexity. In particular, these two theoretical advances enhance the applicability of the Graph Model for Conflict Resolution to real-world disputes by integrating emotion and perception, common ingredients in almost all conflicts.

To demonstrate that the new developments are practical, two illustrative applications to real-world conflicts are presented: the US-North Korea conflict and the confrontation between Russia and Chechen Rebels. In both cases, the analysis yields new strategic insights and improved advice.
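To give a concrete feel for the kind of stability reasoning the Graph Model builds on, below is a minimal sketch of standard Nash stability analysis for a toy two-decision-maker conflict. The states, moves, and preferences are hypothetical; the thesis's perceptual solution concepts and meta-stability analysis extend well beyond this base case.

```python
# Minimal sketch of individual stability analysis in a toy graph model.
# A state is Nash-stable for a DM if no unilateral move reaches a state
# that DM prefers; states stable for all DMs are equilibria.
moves = {  # DM -> {state: states reachable by that DM's unilateral moves}
    "DM1": {0: {1}, 1: {0}, 2: {3}, 3: {2}},
    "DM2": {0: {2}, 2: {0}, 1: {3}, 3: {1}},
}
prefs = {  # DM -> states ranked from most to least preferred
    "DM1": [3, 1, 2, 0],
    "DM2": [2, 3, 0, 1],
}

def nash_stable(dm, state):
    rank = {s: i for i, s in enumerate(prefs[dm])}  # lower index = preferred
    return all(rank[s2] >= rank[state] for s2 in moves[dm].get(state, set()))

equilibria = [s for s in range(4)
              if all(nash_stable(dm, s) for dm in ("DM1", "DM2"))]
print("Nash equilibria among states:", equilibria)
```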
APA, Harvard, Vancouver, ISO, and other styles
35

Balkwill, Laura-Lee. "Perception of emotion in music: a cross-cultural investigation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0035/MQ27332.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Santorelli, Noelle Turini. "Perception of Emotion from Facial Expression and Affective Prosody." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/psych_theses/17.

Full text
Abstract:
Real-world perception of emotion results from the integration of multiple cues, most notably facial expression and affective prosody. The use of incongruent emotional stimuli presents an opportunity to study the interaction between sensory modalities. Thirty-seven participants were exposed to audio-visual stimuli (Robins & Schultz, 2004) including angry, fearful, happy, and neutral presentations. Eighty stimuli contained matching emotions and 240 contained incongruent emotional cues. Matching presentations elicited a significant number of correct responses for all four emotions. Sign tests indicated that for most incongruent conditions, participants demonstrated a bias towards the visual modality. Despite these findings, specific incongruent conditions did show evidence of blending. Future research should explore an evolutionary model of facial expression as a means for behavioral adaptation and the possibility of an “emotional McGurk effect” in particular combinations of emotions.
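For illustration, a sign test for visual dominance on incongruent trials can be run as below; the counts are invented for the example and are not the study's data.

```python
# Illustrative sign test: do responses on incongruent trials follow the
# facial (visual) emotion more often than chance? Counts are invented.
from scipy.stats import binomtest

k_visual, n_trials = 41, 60  # responses following the facial emotion
result = binomtest(k_visual, n_trials, p=0.5, alternative="greater")
print(f"visual-dominant responses: {k_visual}/{n_trials}, p = {result.pvalue:.4f}")
# A small p-value indicates a bias towards the visual modality.
```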
APA, Harvard, Vancouver, ISO, and other styles
37

Yates, Alan J. "The role of attention and awareness in emotion perception." Thesis, University of Essex, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Christy, Anita Marie. "The Effects of Attributed Gender on Adult Emotion Perception." Thesis, Boston College, 2004. http://hdl.handle.net/2345/446.

Full text
Abstract:
Thesis advisor: James Russell
Adults' gender stereotypes of emotion have been investigated with a variety of methods, but those methods do not provide a strong test of the stereotype: participants were presented only with cues to the gender or to the emotion, or, when both cues were available, gender was confounded with poser. This study examined the effects of attributed gender on adults' perception of emotion in facial expressions and stories when presented with clear versus ambiguous cues to both emotion and gender. College students (n = 90) were first asked to label the emotion of either a man (Timothy) or a woman (Sophia) with identical prototypical and “mixed” facial expressions and, separately, to freely label stories about emotions. The same students were then asked to choose from a list of ten emotion labels the one that best described the protagonist's emotion for the same stimuli. Results showed that, for ambiguous cues to emotion, participants labeled facial expressions according to gender stereotypes. However, for stimuli with clear cues to both emotion and gender of the poser, a reverse effect of gender stereotypes was observed for anger, fear, shame, and compassion, due to an expectancy violation.
Thesis (BA) — Boston College, 2004
Submitted to: Boston College. College of Arts and Sciences
Discipline: Psychology
Discipline: College Honors Program
APA, Harvard, Vancouver, ISO, and other styles
39

Mollet, Gina Alice. "Neuropsychological Effects of Hostility and Pain on Emotion Perception." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/26432.

Full text
Abstract:
Recent research on the neuropsychology of emotion and pain has indicated that emotion and pain are complex processes that may substantially influence each other. Disorders of negative emotion and pain are known to co-occur (Delgado, 2004); however, it is not clear whether negative emotional conditions lead to pain or whether increased pain experiences lead to negative emotion. Further, certain negative emotions, such as hostility or anger, may produce differential effects on the experience of pain, leading to either an increase or a decrease in pain. An increase or decrease in pain perception may in turn produce altered behavioral, cognitive, and neuropsychological effects in individuals high in hostility. To examine these relationships more clearly, the current experiment assessed auditory emotion perception before and after cold pressor pain in high and low hostile men. Additionally, quantitative electroencephalography (QEEG) was used to measure changes in cerebral activation as a result of auditory emotion perception and cold pressor pain. Results indicated that identification of emotion post-cold pressor differed as a function of hostility level and ear. The high hostile group increased identification of stimuli at the right ear after cold pressor exposure, while the low hostile group increased identification of stimuli at the left ear. Primary QEEG findings indicated increased left temporal activation after cold pressor exposure and increased reactivity to cold pressor pain in the high hostile group. Low hostile men showed a bilateral increase in high beta magnitude at the temporal lobes and a bilateral increase in delta magnitude at the frontal lobes after the cold pressor. Results suggest decreased cerebral laterality and left hemisphere activation for emotional and pain processing in high hostile men.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
40

BOSSI, FRANCESCO. "Investigating face and body perception." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/199061.

Full text
Abstract:
The human face and body convey the most important non-verbal cues for social interactions. Face and body provide numerous cues essential for recognizing other people's identity, gender, age, intentions, and emotional state. All faces and bodies are symmetrical and share a common 3D structure, yet humans are able to easily identify hundreds of different people relying only on facial and bodily information. Face and body processing have been widely studied, and several cognitive and neuroanatomical models of these processes have been proposed. Despite many critical differences, all of these models recognize different stages of processing, from early coarse stimulus encoding (occipital visual cortices) to higher-level processes aimed at identifying invariant (e.g., identity) and changeable features (e.g., gaze, emotional expressions), subserved by a broad fronto-temporo-parietal network. It has been demonstrated that these processes involve configural processing. Moreover, emotional expressions seem to influence the encoding of these stimuli: their processing occurs at very early latencies and seems to involve the activation of a subcortical pathway. The studies presented in this thesis investigate the visual perception of faces and bodies and how it can be modulated or manipulated; EEG was used in some of the studies to probe the psychophysiological processes involved. While the first Chapter presents the theoretical background of the studies reported in the thesis, the second Chapter presents the first study (composed of two experiments), which investigates how the perception of social cues can be modulated by social exclusion. The processes investigated are the perception of two different, but interacting, facial cues: emotional expression and gaze direction. In this study, we found that the identification of gaze direction was specifically impaired by social exclusion, while no impairment was found for emotional expression recognition. These results yielded important insights into the relevance of gaze as a signal of potential re-inclusion, and into how impaired processing of gaze direction may reiterate social exclusion. The third Chapter presents a meta-analytic review of the body inversion effect, a manipulation used to demonstrate configural processing of bodies. This meta-analysis investigated the consistency and size of this effect, which is fundamental in studying the structural encoding of body shapes. In the fourth Chapter, a study on the neural oscillations involved in face and body inversion effects is presented. Neural oscillations in the theta and gamma bands were measured by means of EEG, since they are a powerful measure for investigating the psychophysiological activity involved in different processes. The results of this study showed that configural processing of faces and of bodies involves different perceptual mechanisms. In the fifth Chapter, a study investigating the influence of inversion and emotional expression on the visual encoding of faces and bodies is presented. The neural correlates of these processes were investigated by means of event-related potentials (ERPs). Both inversion and emotional expressions were shown to influence the processing of these stimuli, during different stages and through different perceptual mechanisms, but the results revealed that the two manipulations did not interact.
Therefore, configural information and emotional expressions seem to be processed through independent and non-interacting perceptual processes.
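As a rough illustration of the band-power measures used in studies like the fourth Chapter, the sketch below estimates theta- and gamma-band power for a single EEG channel via a bandpass filter and the Hilbert envelope. The sampling rate, band edges, and data are placeholders, not the study's actual pipeline.

```python
# Illustrative band-power estimate for one EEG channel; data are random
# placeholders standing in for recorded EEG.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500                                  # sampling rate (Hz), assumed
t = np.arange(0, 2, 1 / fs)
eeg = np.random.randn(t.size)             # stand-in for one EEG channel

def band_power(signal, low, high, fs):
    # Bandpass the signal, then take the mean squared Hilbert envelope.
    sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfiltfilt(sos, signal)))
    return (envelope ** 2).mean()

print("theta (4-7 Hz) power:", band_power(eeg, 4, 7, fs))
print("gamma (30-45 Hz) power:", band_power(eeg, 30, 45, fs))
```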
APA, Harvard, Vancouver, ISO, and other styles
41

Taylor, Richard James. "Affective perception." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:a5fe8467-c5e5-4cda-9875-ab46d67c4a62.

Full text
Abstract:
This thesis aims to present and defend an account of affective perception. The central argument seeks to establish three claims. 1) Certain emotional bodily feelings (and not just psychic feelings) are world-directed intentional states. 2) Their intentionality is to be understood in perceptual terms: such feelings are affective perceptions of emotional properties of a certain kind. 3) These ‘emotion-proper properties’ are response-dependent in a way that entails that appropriate affective responses to their token instances qualify, ipso facto, as perceptions of those instances. The arguments for (1) and (2) appeal directly to the phenomenology of emotional experience and draw heavily from recent research by Peter Goldie and Matthew Ratcliffe. By applying Goldie’s insights into the intentional structure of psychic feelings to the case of emotional bodily feelings, it is shown that certain of the latter—particularly those pertaining to the so-called ‘standard’ emotions—exemplify world-directed intentionality analogous to the perceptual intentionality of tactile feelings. Adapting Ratcliffe’s account of the analogy between tactile feelings and what he terms ‘existential feelings’, it is argued that standard emotional bodily feelings are at the same time intrinsically intentional world-directed perceptual states (affective perceptions) through which the defining properties of emotional objects (emotion-proper properties) are apprehended. The subsequent account of these properties endorses a response-dependence thesis similar to that defended by John McDowell and David Wiggins and argues that tokening an appropriate emotional affective state in response to a token emotion-proper property is both a necessary and a sufficient condition for perception of that property (Claim (3)). The central claim is thus secured by appeal both to the nature of the relevant feelings and the nature of the relevant properties (the former being intrinsically intentional representational states and the latter being response-dependent in a way that guarantees the perceptual status of the former).
APA, Harvard, Vancouver, ISO, and other styles
42

Clark, Rebecca A. "Multimodal flavour perception : the impact of sweetness, bitterness, alcohol content and carbonation level on flavour perception." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/13432/.

Full text
Abstract:
Flavour perception of food and beverages is a complex multisensory experience involving the gustatory, olfactory, trigeminal, auditory and visual senses. Thus, investigations into multimodal flavour perception require a multidisciplinary design-of-experiments approach. This research has focussed on beer flavour perception and the fundamental interactions between the main flavour components - sweetness, bitterness (from hop acids), alcohol content and carbonation level. A model beer was developed using representative ingredients which could be manipulated to systematically vary the concentration of the main flavour components in beer, and was used in the following experiments. Using a full factorial design, the physical effect of ethanol, CO2 and hop acid addition was determined by headspace analysis and in-nose expired breath (in-vivo) measurements. Results from headspace and in-vivo methods differed and highlighted the importance of in-vivo measures when correlating to sensory experience. Ethanol and CO2 significantly increased volatile partitioning during model beverage consumption. The effects of ethanol and CO2 appeared to be independent and therefore additive, and could account for up to an 86% increase in volatile partitioning. This would increase volatile delivery to the olfactory bulb and thus potentially enhance aroma and flavour perception. This was investigated using quantitative descriptive analysis. Results showed that CO2 significantly impacted all discriminating attributes, either directly or as a result of complex interactions with other design factors. CO2 suppressed the sweetness of dextrose and interacted with hop acids to modify bitterness and tingly perception. Ethanol was the main driver of complexity of flavour and enhanced sweet perception. In a first study of its kind, the impact of CO2 on gustatory perception was further investigated using functional magnetic resonance imaging (fMRI) to understand cortical response. In addition, subjects were classified into PROP taster status groups and thermal taster status groups. Groups were tested for their sensitivity to oral stimuli using sensory techniques and, for the first time, cortical response to taste and CO2 was compared between groups using fMRI techniques and behavioural data. There was no correlation between PROP taster status and thermal taster status. PROP taster status groups varied in their cortical response to stimuli, with PROP super-tasters showing significantly higher cortical activation to samples than PROP non-tasters. The mechanism for thermal taster status is not currently known, but thermal tasters were found to have higher cortical activation in response to the samples. The difference in cortical activation between thermal taster groups was supported by behavioural data, as thermal tasters preferred the high CO2 sample least, yet were better able to discriminate it than thermal non-tasters. This research has provided an in-depth study of the importance of flavour components in beer. It advances the limited data available on the effects of CO2 on sensory perception in a carbonated beverage, providing sound data for the successful development of products with reduced ethanol or CO2 levels. The use of functional magnetic resonance imaging has revealed for the first time that oral CO2 significantly increases activation in the somatosensory cortex. However, CO2 seemed to have a limited impact on activation strength in 'taste' areas, such as the anterior insula.
Research comparing data from PROP taster status groups and thermal taster status groups has given insight into the possible mechanisms accounting for differences in oral intensity of stimuli.
APA, Harvard, Vancouver, ISO, and other styles
43

Buchanan, Joshua. "I Feel Your Pain: Social Connection and the Expression and Perception of Regret." Miami University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=miami1436928483.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Anderson, Corinne D. "Auditory and visual characteristics of individual talkers in multimodal speech perception." Connect to resource, 2007. http://hdl.handle.net/1811/28373.

Full text
Abstract:
Thesis (Honors)--Ohio State University, 2007.
Title from first page of PDF file. Document formatted into pages: contains 43 p.; also includes graphics. Includes bibliographical references (p. 29-30). Available online via Ohio State University's Knowledge Bank.
APA, Harvard, Vancouver, ISO, and other styles
45

Polk, Robert B. "A Multimodal Study on How Embodiment Relates to Perception of Complexity." Thesis, Fielding Graduate University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10602706.

Full text
Abstract:

This preamble study asks whether amplifying our embodied knowing may heighten our ability to sense the complex adaptive patterns in our daily lives. Embodied cognitivists argue that nothing that qualifies as thinking was not itself first born of our physical engagement with the natural world. In this stance, all knowledge is seen as corporeal in nature and thus generated from our intersubjective relationships with the world about us. As such, embodied perception is believed to be direct, veridical, and unmediated by the brain alone. This study also reinforces a growing consensus that the dominant locus for perceiving complex adaptive patterns is nonconscious rather than conscious processing. Consequently, this research marries the literatures of embodied cognition, nonconscious perception, and complexity to generate an original investigation into how manipulating these relationships could improve our abilities to access, sift through, and act more wisely on the patterns that matter most. While attempts to establish a clear empirical connection amongst these phenomena were less than conclusive, this inaugural study also makes useful contributions by (a) reframing the array of literature around embodiment into a single, monist conception called the Mind, Body, Environment (MBE) Continuum; (b) recording lessons learned in designing macro-level empirical research into nonconscious embodied perception; (c) providing an inaugural dataset upon which to build future inquiry into this domain; and finally (d) augmenting and testing a non-traditional research methodology called distributed ethnography, commensurate with the unique nature of this inquiry.

APA, Harvard, Vancouver, ISO, and other styles
46

Chang, Dempsey H. "A Gestalt-Taxonomy for Designing Multimodal Information Displays." University of Canberra. Arts & Design, 2007. http://erl.canberra.edu.au./public/adt-AUC20081203.123314.

Full text
Abstract:
The theory of Gestalt was proposed in the nineteenth century to explain and predict the way that people perceptually group visual elements, and it has been used to develop guidelines for designing visual computer interfaces. In this thesis we seek to extend the use of Gestalt principles to the design of haptic and visual-haptic displays. The thesis begins with a survey of Gestalt research into visual, auditory and haptic perception. From this survey the five most commonly found principles are identified as figure-ground, continuation, closure, similarity and proximity. This thesis examines the proposition that these five principles can be applied to the design of haptic interfaces. Four experiments investigate whether Gestalt principles of figure-ground, continuation, closure, similarity and proximity are applicable in the same way when people group elements either through their visual (by colour) or haptic (by texture) sense. The results indicate significant correspondence between visual and haptic grouping. A set of haptic design guidelines for haptic displays are developed from the experiments. This allows us to use the Gestalt principles to organise a Gestalt-Taxonomy of specific guidelines for designing haptic displays. The Gestalt-Taxonomy has been used to develop new haptic design guidelines for information displays.
APA, Harvard, Vancouver, ISO, and other styles
47

Duval, Céline. "Pain perception in schizophrenia, and relationships between emotion and visual organization: is emotion flattened in patients, and how does it affect cognition?" Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAJ052/document.

Full text
Abstract:
Schizophrenia is a severe mental illness affecting 1% of the population; it comprises positive symptoms (hallucinations) and negative symptoms (blunted affect), but also cognitive deficits. Here we describe two distinct studies which address the question of how emotion and cognition interact, in healthy subjects and in schizophrenia. In the first study we created a paradigm that shows how emotional stimuli distract subjects and thus interfere with the organization of visual stimuli. The effect is the same in patients and healthy controls. In our second study we explored pain perception, taking into account the different mechanisms involved, especially emotion processing. The results show that patients are more sensitive to pain than healthy controls, as they present an elevated P50, which indicates an alteration at an early stage of processing. Both studies reveal that patients are more sensitive than previously thought, which should be taken into account when caring for patients in hospitals and in everyday life.
APA, Harvard, Vancouver, ISO, and other styles
48

White, Eliah J. "The Influence of Multimodally Specified Effort on Distance Perception." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1219083136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

CARRETERO, MIGUEL RAMOS. "Expression of Emotion in Virtual Crowds: Investigating Emotion Contagion and Perception of Emotional Behaviour in Crowd Simulation." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-153966.

Full text
Abstract:
Emotional behaviour in the context of crowd simulation is a topic that is gaining particular interest in the area of artificial intelligence. Recent efforts in this domain have looked at the modelling of emotional emergence and social interaction inside a crowd of virtual agents, but further investigation is still needed in aspects such as the simulation of emotional awareness and emotion contagion. Also, many questions remain about the perception of emotional behaviour in the context of virtual crowds. This thesis investigates the current state of the art of emotional characters in virtual crowds and presents the implementation of a computational model able to generate expressive full-body motion behaviour and emotion contagion in a crowd of virtual agents. As a second part of the thesis, this project presents a perceptual study in which the perception of emotional behaviour is investigated in the context of virtual crowds. The results of this thesis reveal some interesting findings in relation to the perception and modelling of virtual crowds, including some relevant effects of emotional crowd behaviour on viewers, especially when virtual crowds are not the main focus of a particular scene. These results aim to contribute to the further development of this interdisciplinary area of computer graphics, artificial intelligence and psychology.
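To give a feel for the contagion component, below is a minimal sketch of emotion spreading through a simulated crowd, where each agent's emotional intensity drifts towards the mean of its neighbours within a radius. The agent count, radius, rate, and update rule are illustrative assumptions, not the thesis's actual model.

```python
# Minimal emotion-contagion sketch: scalar emotion per agent, pulled
# towards the local neighbourhood mean. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 10, size=(50, 2))   # 50 agents in a 10x10 area
emotion = rng.uniform(0, 1, size=50)           # initial arousal per agent
RADIUS, RATE = 2.0, 0.1                        # contagion radius and strength

for step in range(100):
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    neighbours = (dists < RADIUS) & (dists > 0)
    for i in range(len(emotion)):
        if neighbours[i].any():
            local_mean = emotion[neighbours[i]].mean()
            emotion[i] += RATE * (local_mean - emotion[i])

print(f"emotion spread: min={emotion.min():.2f}, max={emotion.max():.2f}")
```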
APA, Harvard, Vancouver, ISO, and other styles
50

Heck, Alison, Alyson Chroust, Hannah White, Rachel Jubran, and Ramesh S. Bhatt. "Development of Body Emotion Perception in Infancy: From Discrimination to Recognition." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etsu-works/2730.

Full text
Abstract:
Research suggests that infants progress from discrimination to recognition of emotions in faces during the first half year of life. It is unclear whether the perception of emotions from bodies develops in a similar manner. In the current study, when presented with happy and angry body videos and voices, 5-month-olds looked longer at the matching video when the videos were presented upright but not when they were inverted. In contrast, 3.5-month-olds failed to match even with upright videos. Thus, 5-month-olds but not 3.5-month-olds exhibited evidence of recognition of emotions from bodies by demonstrating intermodal matching. In a subsequent experiment, younger infants did discriminate between body emotion videos but failed to exhibit an inversion effect, suggesting that their discrimination may be based on low-level stimulus features. These results document a developmental change from discrimination based on non-emotional information at 3.5 months to recognition of body emotions at 5 months. This pattern of development is similar to that of face emotion knowledge and suggests that both the face and body emotion perception systems develop rapidly during the first half year of life.
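Intermodal matching of this kind is typically tested by comparing infants' proportion of looking at the emotion-matching video against the 0.5 chance level; a minimal sketch with invented proportions (not the study's data) follows.

```python
# Hypothetical looking-time analysis: does the mean proportion of looking
# at the emotion-matching video exceed the 0.5 chance level?
import numpy as np
from scipy.stats import ttest_1samp

prop_matching = np.array([0.58, 0.61, 0.55, 0.49, 0.63, 0.57, 0.52, 0.60])
result = ttest_1samp(prop_matching, popmean=0.5, alternative="greater")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```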
APA, Harvard, Vancouver, ISO, and other styles