Academic literature on the topic 'Prosodia emotiva'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Prosodia emotiva.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Prosodia emotiva"

1

Heath, Maria. "Orthography in social media: Pragmatic and prosodic interpretations of caps lock." Proceedings of the Linguistic Society of America 3, no. 1 (March 3, 2018): 55. http://dx.doi.org/10.3765/plsa.v3i1.4350.

Full text
Abstract:
Orthography in social media is largely understudied, but rich in pragmatic potential. This study examines the use of "caps lock" on Twitter, which has been claimed to function as an emotive strengthener. In a survey asking participants to rate tweets on gradient scales of emotion, I show that this claim does not account for all the data. I instead propose that caps lock should be understood as an indicator of prosody in text. I support this theory by drawing on Twitter corpus data to show how users employ single-word capitalization in positions indicative of emphatic stress and semantic focus. A prosodic interpretation of capitalization accounts for all the data in a unified way.
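The corpus side of this analysis is easy to picture in code. The following minimal sketch (not the author's method; the length threshold, stop list, and sample tweet are assumptions) flags fully capitalized words as candidate markers of emphatic stress or semantic focus:

```python
import re

# Flag fully capitalized words (3+ letters) in a tweet as candidate markers
# of emphatic stress / semantic focus. Threshold and stop list are assumed.
CAPS_WORD = re.compile(r"\b[A-Z]{3,}\b")
INITIALISMS = {"USA", "NASA", "LOL", "OMG"}  # hypothetical stop list

def caps_lock_tokens(tweet: str) -> list:
    """Return fully capitalized tokens that are not common initialisms."""
    return [w for w in CAPS_WORD.findall(tweet) if w not in INITIALISMS]

print(caps_lock_tokens("I can NOT believe this happened TODAY"))
# ['NOT', 'TODAY'] -- single-word capitalization in likely focus positions
```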
APA, Harvard, Vancouver, ISO, and other styles
2

Mitchell, Rachel L. C., and Rachel A. Kingston. "Age-Related Decline in Emotional Prosody Discrimination." Experimental Psychology 61, no. 3 (November 1, 2014): 215–23. http://dx.doi.org/10.1027/1618-3169/a000241.

Full text
Abstract:
It is now accepted that older adults have difficulty recognizing prosodic emotion cues, but it is not clear at what processing stage this ability breaks down. We manipulated the acoustic characteristics of tones in pitch, amplitude, and duration discrimination tasks to assess whether impaired basic auditory perception coexisted with our previously demonstrated age-related prosodic emotion perception impairment. It was found that pitch perception was particularly impaired in older adults, and that it displayed the strongest correlation with prosodic emotion discrimination. We conclude that an important cause of age-related impairment in prosodic emotion comprehension exists at the fundamental sensory level of processing.
APA, Harvard, Vancouver, ISO, and other styles
3

Kao, Chieh, Maria D. Sera, and Yang Zhang. "Emotional Speech Processing in 3- to 12-Month-Old Infants: Influences of Emotion Categories and Acoustic Parameters." Journal of Speech, Language, and Hearing Research 65, no. 2 (February 9, 2022): 487–500. http://dx.doi.org/10.1044/2021_jslhr-21-00234.

Full text
Abstract:
Purpose: The aim of this study was to investigate infants' listening preference for emotional prosodies in spoken words and identify their acoustic correlates. Method: Forty-six 3- to 12-month-old infants (M age = 7.6 months) completed a central fixation (or look-to-listen) paradigm in which four emotional prosodies (happy, sad, angry, and neutral) were presented. Infants' looking time to the string of words was recorded as a proxy of their listening attention. Five acoustic variables—mean fundamental frequency (F0), word duration, intensity variation, harmonics-to-noise ratio (HNR), and spectral centroid—were also analyzed to account for infants' attentiveness to each emotion. Results: Infants generally preferred affective over neutral prosody, with more listening attention to the happy and sad voices. Happy sounds with breathy voice quality (low HNR) and less brightness (low spectral centroid) maintained infants' attention more. Sad speech with shorter word duration (i.e., faster speech rate), less breathiness, and more brightness gained infants' attention more than happy speech did. Infants listened less to angry than to happy and sad prosodies, and none of the acoustic variables were associated with infants' listening interests in angry voices. Neutral words with a lower F0 attracted infants' attention more than those with a higher F0. Neither age nor sex effects were observed. Conclusions: This study provides evidence for infants' sensitivity to the prosodic patterns for the basic emotion categories in spoken words and how the acoustic properties of emotional speech may guide their attention. The results point to the need to study the interplay between early socioaffective and language development.
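For readers who want to reproduce this kind of acoustic measurement, the sketch below extracts four of the five variables named in the abstract with librosa; it is an illustration under assumed settings, not the authors' pipeline, and HNR (typically computed with Praat/parselmouth) is omitted. The file name is a placeholder.

```python
import numpy as np
import librosa

# Placeholder file name; sr=None keeps the file's native sampling rate.
y, sr = librosa.load("word.wav", sr=None)

f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=75, fmax=600, sr=sr)
mean_f0 = np.nanmean(f0)                                    # mean fundamental frequency (Hz)
duration = librosa.get_duration(y=y, sr=sr)                 # word duration (s)
intensity_var = np.std(librosa.feature.rms(y=y))            # intensity variation (std of RMS)
centroid = np.mean(librosa.feature.spectral_centroid(y=y, sr=sr))  # spectral centroid (Hz)

print(dict(mean_f0=mean_f0, duration=duration,
           intensity_var=intensity_var, centroid=centroid))
```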
APA, Harvard, Vancouver, ISO, and other styles
4

Khan, S., S. A. Ali, and J. Sallar. "Analysis of Children’s Prosodic Features Using Emotion Based Utterances in Urdu Language." Engineering, Technology & Applied Science Research 8, no. 3 (June 19, 2018): 2954–57. http://dx.doi.org/10.48084/etasr.1902.

Full text
Abstract:
Emotion plays a significant role in identifying the state of a speaker from spoken utterances. Prosodic features add sense to spoken utterances, conveying the speaker's emotions. The objective of this research is to analyze the behavior of prosodic features (individually and in combination with other prosodic features) with different learning classifiers on emotion-based utterances of children in the Urdu language. In this paper, three prosodic features (intensity, pitch, formants, and their combinations), five learning classifiers (ANN, J-48, K-star, Naïve Bayes, decision stump) and four basic emotions (happy, sad, angry, and neutral) were used to develop the experimental framework. The experiments demonstrated that, in terms of classification accuracy, artificial neural networks show significant results with both individual and combined prosodic features in comparison with the other learning classifiers.
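A comparable classifier comparison can be set up in a few lines with scikit-learn. The sketch below is a hedged illustration rather than the authors' code: the feature matrix is random placeholder data standing in for per-utterance intensity, pitch, and formant statistics, an MLP stands in for the ANN, a depth-1 tree for the decision stump, and the Weka classifiers J-48 and K-star are approximated by a decision tree and k-nearest neighbours.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))       # placeholder intensity/pitch/formant statistics
y = rng.integers(0, 4, size=200)    # happy, sad, angry, neutral

classifiers = {
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
    "J-48-like tree": DecisionTreeClassifier(),
    "K-star-like (kNN)": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision stump": DecisionTreeClassifier(max_depth=1),
}
for name, clf in classifiers.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    print(f"{name:18s} mean accuracy = {acc:.2f}")
```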
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Bo, Pan Xiao, and Xiaohong Yu. "The Influence of Prosocial and Antisocial Emotions on the Spread of Weibo Posts: A Study of the COVID-19 Pandemic." Discrete Dynamics in Nature and Society 2021 (September 11, 2021): 1–9. http://dx.doi.org/10.1155/2021/8462264.

Full text
Abstract:
This study investigates the influence of the prosocial and antisocial tendencies of Weibo users on post transmission during the COVID-19 pandemic. To overcome the deficiencies of existing research on prosocial and antisocial emotions, we employ web crawler technology to obtain post data from Weibo and identify texts with prosocial or antisocial emotions, using SnowNLP to construct semantic dictionaries and training models. Our major findings include the following. First, through correlation analysis and negative binomial regression, we find that user posts with high emotional intensity and prosocial emotion can trigger commenting or forwarding behaviour. Second, the influence of antisocial emotion on Weibo comments, likes, and retweets is insignificant. Third, the overall emotional tone of comments on prosocial Weibo posts likewise trends prosocial. Overall, a major contribution of this paper is its focus on prosocial and antisocial emotions in cyberspace, providing a new perspective on emotion communication.
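The two analysis steps named in the abstract, SnowNLP-based sentiment scoring and negative binomial regression, can be sketched as follows. This is an illustration with invented data, not the authors' code or their trained dictionaries.

```python
import numpy as np
import statsmodels.api as sm
from snownlp import SnowNLP

# Sentiment score of a made-up prosocial-sounding post; 0..1, higher = more positive.
score = SnowNLP("一起加油，互相帮助，我们一定能渡过难关").sentiments
print("sentiment score:", round(score, 3))

# Negative binomial regression of retweet counts on emotion intensity (synthetic data).
rng = np.random.default_rng(1)
intensity = rng.uniform(0, 1, 300)
retweets = rng.poisson(np.exp(0.5 + 1.2 * intensity))
X = sm.add_constant(intensity)
model = sm.GLM(retweets, X, family=sm.families.NegativeBinomial()).fit()
print(model.params)   # positive slope: higher intensity, more retweets
```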
APA, Harvard, Vancouver, ISO, and other styles
6

Hutahaean, Siska Friskica. "REGULASI EMOSI DITINJAU DARI PERILAKU PROSOSIAL PADA SISWA SMA RAKSANA DI MEDAN." Jurnal Psikologi Universitas HKBP Nommensen 6, no. 2 (February 29, 2020): 53–59. http://dx.doi.org/10.36655/psikologi.v6i2.135.

Full text
Abstract:
The purpose of this research is to examine the relation between emotion regulation and prosocial behavior. The hypothesis submitted in the research is that there is a positive relation between emotion regulation and prosocial behavior: the higher the emotion regulation, the higher the prosocial behavior, and conversely, the lower the emotion regulation, the lower the prosocial behavior. The subjects of this investigation were 177 students at Raksana High School, Medan. The data were obtained from scales measuring emotion regulation and prosocial behavior. The calculations included prerequisite (assumption) tests consisting of a test of normality of the distribution and a test of linearity of the relation. The data were analyzed using the Product Moment correlation in SPSS 19 for Windows. The analysis showed a correlation coefficient of 0.456 (p < 0.05), indicating a positive relation between emotion regulation and prosocial behavior. The study also reported a statistical contribution of the psychological capital variable toward job satisfaction of 20.8 percent, with the remaining 79.2 percent affected by other factors not examined. It can be concluded that the research hypothesis of a positive relation between emotion regulation and prosocial behavior is accepted.
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Yu Tai, Jie Han, Xiao Qing Jiang, Jing Zou, and Hui Zhao. "Study of Speech Emotion Recognition Based on Prosodic Parameters and Facial Expression Features." Applied Mechanics and Materials 241-244 (December 2012): 1677–81. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.1677.

Full text
Abstract:
The present status of speech emotion recognition is introduced in the paper. Emotional databases of Chinese speech and facial expressions were established using noise stimuli and movies to evoke the subjects' emotions. For different emotional states, we analyzed single-mode speech emotion recognition based on prosodic features and on the geometric features of facial expressions. We then discuss bimodal emotion recognition using a Gaussian Mixture Model. The experimental results show that the bimodal emotion recognition rate, which combines facial expression, is about 6% higher than the single-mode recognition rate using prosodic features alone.
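The Gaussian Mixture Model step can be illustrated with scikit-learn. The sketch below is not the authors' code: one GMM per emotion is fit on fused prosodic-plus-facial feature vectors (synthetic here), and a test sample receives the label of the most likely model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
emotions = ["happy", "sad", "angry", "neutral"]
# Synthetic fused features: 6 prosodic + 8 facial-geometry dimensions per sample.
train = {e: np.hstack([rng.normal(i, 1.0, (100, 6)),
                       rng.normal(i, 1.0, (100, 8))])
         for i, e in enumerate(emotions)}

models = {e: GaussianMixture(n_components=2, covariance_type="diag",
                             random_state=0).fit(X)
          for e, X in train.items()}

def classify(x):
    # Pick the emotion whose GMM assigns the highest log-likelihood to x.
    return max(models, key=lambda e: models[e].score_samples(x[None, :])[0])

print(classify(np.full(14, 2.0)))   # closest to the 'angry' cluster (mean 2)
```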
APA, Harvard, Vancouver, ISO, and other styles
8

Arndt, Horst, and Richard W. Janney. "Verbal, prosodic, and kinesic emotive contrasts in speech." Journal of Pragmatics 15, no. 6 (June 1991): 521–49. http://dx.doi.org/10.1016/0378-2166(91)90110-j.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lin, Yi, Xinran Fan, Yueqi Chen, Hao Zhang, Fei Chen, Hui Zhang, Hongwei Ding, and Yang Zhang. "Neurocognitive Dynamics of Prosodic Salience over Semantics during Explicit and Implicit Processing of Basic Emotions in Spoken Words." Brain Sciences 12, no. 12 (December 12, 2022): 1706. http://dx.doi.org/10.3390/brainsci12121706.

Full text
Abstract:
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or semantic channel. They were asked to judge the emotional content (explicit task) and speakers' gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100, P200 and N400 amplitudes with greater delta, theta and alpha inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosody and semantics was modulated by emotion and task, though such modulatory effects varied across different processing stages. The prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making but reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization of delta, theta and alpha bands predicted the ERP components, with higher ITPC and ERSP values significantly associated with stronger N100, P200, N400 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing, with prosodic salience tied to stage-dependent emotion- and task-specific effects, offering insights into language and emotion processing from cross-linguistic/cultural and clinical perspectives.
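Inter-trial phase coherence, one of the measures used here, has a compact definition: at each frequency and time point it is the length of the mean unit phase vector across trials. The sketch below states that formula in code as a generic illustration, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_freqs, n_times = 60, 20, 300
# Placeholder complex time-frequency coefficients (e.g., from a wavelet transform).
tf = (rng.normal(size=(n_trials, n_freqs, n_times))
      + 1j * rng.normal(size=(n_trials, n_freqs, n_times)))

phase_vectors = tf / np.abs(tf)            # keep only phase as unit vectors
itpc = np.abs(phase_vectors.mean(axis=0))  # ITPC(f, t), values in [0, 1]
print(itpc.shape, itpc.mean().round(3))    # random phases give low ITPC (~0.1 here)
```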
APA, Harvard, Vancouver, ISO, and other styles
10

Knoche, Harry “Trip”, and Ethan P. Waples. "A process model of prosocial behavior: The interaction of emotion and the need for justice." Journal of Management & Organization 22, no. 1 (June 11, 2015): 96–112. http://dx.doi.org/10.1017/jmo.2015.24.

Full text
Abstract:
Research on prosocial behavior has tended to focus on its positive consequences. This paper draws on the attribution, emotion, and justice-motive literature to expand the discussion of prosocial behavior in organizations. Specifically, an expanded definition of prosocial behavior is offered and a process-oriented model of prosocial behavior is introduced. The process model of prosocial behavior is used to discuss the idea that prosocial behavior might have negative consequences. This paper contributes to the literature on prosocial behavior in organizations by (1) accounting for the effects of emotion and the need for justice on decisions to engage in prosocial actions and (2) identifying negative consequences of specific prosocial actions.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Prosodia emotiva"

1

Ammendola, Giulia <1985>. "Prospettive sulle Emozioni e Analisi Neurolinguistica della Prosodia Emotiva." Master's Degree Thesis, Università Ca' Foscari Venezia, 2012. http://hdl.handle.net/10579/2294.

Full text
Abstract:
For many centuries, emotions were handed down as a bestial aspect of the human being that had to be mastered because it conflicted with our rational side. Only with the publication of Darwin's The Expression of the Emotions in Animals and Men (1872) did emotion regain its dignity, carving out its own space of inquiry within the broad panorama of scientific research. Since then, emotion has become one of the most interesting and controversial phenomena for science. Although many studies on the nature, role, and functioning of emotions have accumulated over the years, we have not yet arrived at a complete definition of the term 'emotion'. This impossibility stems from the fact that emotions are complex, multi-component phenomena. Indeed, one of the greatest difficulties science faces with emotions is describing, through rigid scientific instruments, a phenomenon with blurred contours. For this reason, over the years research has ended up concentrating on single or few aspects of this phenomenon in the attempt to reach a global and definitive explanation. This work opens with an analysis of the problematic aspects tied to the study of emotions (historical, linguistic, methodological, and ethical). It then describes the most relevant theories in the neurophysiological, neuropsychological, and psychological fields, attempting to outline, from a historical perspective, the path that led to current knowledge of this process. A separate chapter is devoted to the latest research on the neurophysiology of emotions. Unlike early studies, which held emotion to be a unitary process traceable to a single neural substrate, an 'emotional brain', the results of recent research show that emotion is rather a multi-component phenomenon involving not one but many interconnected brain structures, both subcortical and cortical. Finally, the last chapter focuses on a particular aspect of neurophysiological studies, concerning emotional prosody. These relatively recent studies are trying to understand the neural connections and the possible hemispheric specialization that can be recognized in emotional prosody. Since emotional prosody has received less attention than other aspects of emotion, attention to these studies is considered essential to contributing to the more general aim of this line of research: answering James's famous question, "What is an emotion?".
APA, Harvard, Vancouver, ISO, and other styles
2

Anzuino, Isabella. "Compromissione dell'elaborazione emozionale nella malattia di Parkinson: produzione e riconoscimento delle emozioni trasmesse dal volto e dalla voce." Doctoral thesis, Università Cattolica del Sacro Cuore, 2022. http://hdl.handle.net/10280/112847.

Full text
Abstract:
La classica descrizione della Malattia di Parkinson (MP) come caratterizzata esclusivamente da sintomi motori è stata superata perché i pazienti sperimentano frequentemente, tra gli altri, anche disturbi dell'elaborazione emozionale che portano a difficoltà nell’esprimere e nel riconoscere le emozioni altrui dalla prosodia e dall'espressione facciale. Nella presente tesi è stato indagato, a livello comportamentale, se la MP può indurre modificazioni nella produzione e nel riconoscimento delle emozioni trasmesse dal volto e dalla voce. A livello neurostrutturale, sono state esplorate le alterazioni della materia grigia in una sottopopolazione di pazienti con MP, testando se le prestazioni ottenute nei vari compiti ideati per studiare l'elaborazione emozionale siano associate a indici di integrità strutturale. Nel complesso, le evidenze raccolte in questa tesi confermano che la MP è caratterizzata da un deficit nella produzione e nel riconoscimento delle espressioni facciali e vocali fin dall'esordio della malattia, che potrebbe essere spiegato non solo dal coinvolgimento funzionale delle vie dopaminergiche e dei gangli della base in questi processi, ma anche dal reclutamento di un più ampio network cerebrale sottostante l'elaborazione emozionale.
The original description characterizing Parkinson’s Disease (PD) by motor symptoms has been updated because patients also frequently experience, among others, emotional‐processing impairments leading to difficulty in expressing and recognizing others’ emotions from prosody and facial expression. In the current thesis, it was investigated, at the behavioural level, if PD could induce modifications in the production and recognition of emotions conveyed by face and voice. At the neurostructural level, grey matter alterations in PD subgroup were explored, testing if the performance obtained in the various tasks devised to study the emotional processing is associated with indices of grey matter integrity in PD. Taken together, the evidence collected in this thesis confirms that PD is characterized by a deficit in the production and recognition of facial and vocal expressions from disease onset, which could be explained not only by the functional involvement of the dopaminergic pathways and basal ganglia in these processes, but also by the engagement of a more extensive network underlying emotional processing.
APA, Harvard, Vancouver, ISO, and other styles
3

Haszko, Sarah Elisabeth. "Emotion & prosody: examining infants' ability to match subtle prosodic variation with corresponding facial expressions." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8907.

Full text
Abstract:
Thesis (M.A.) -- University of Maryland, College Park, 2008.
Thesis research directed by: Dept. of Hearing and Speech Sciences. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
4

Iliev, Alexander Iliev. "Emotion Recognition Using Glottal and Prosodic Features." Scholarly Repository, 2009. http://scholarlyrepository.miami.edu/oa_dissertations/515.

Full text
Abstract:
Emotion conveys the psychological state of a person. It is expressed by a variety of physiological changes, such as changes in blood pressure, heartbeat rate and degree of sweating, and can be manifested in shaking, changes in skin coloration, facial expression, and the acoustics of speech. This research focuses on the recognition of emotion conveyed in speech. There were three main objectives of this study. One was to examine the role played by the glottal source signal in the expression of emotional speech. The second was to investigate whether it can provide improved robustness in real-world situations and in noisy environments. This was achieved through testing in clean and various noisy conditions. Finally, the performance of glottal features was compared to diverse existing and newly introduced emotional feature domains. A novel glottal symmetry feature is proposed and automatically extracted from speech. The effectiveness of several inverse filtering methods in extracting the glottal signal from speech has been examined. Beyond the glottal symmetry, two additional feature classes were tested for emotion recognition: the Tones and Break Indices (ToBI) of American English intonation, and Mel Frequency Cepstral Coefficients (MFCC) of the glottal signal. Three corpora were specifically designed for the task. The first two investigated the four emotions Happy, Angry, Sad, and Neutral, and the third added Fear and Surprise in a six-emotion recognition task. This work shows that the glottal signal carries valuable emotional information and that using it for emotion recognition has many advantages over other conventional methods. For clean speech, in a four-emotion recognition task, classical prosodic features achieved 89.67% recognition, ToBI combined with classical features reached 84.75% recognition, while glottal symmetry alone achieved 98.74%. For the six-emotion task these three methods achieved 79.62%, 90.39% and 85.37% recognition rates, respectively. Using the glottal signal also provided greater classifier robustness under noisy conditions and under distortion caused by low-pass filtering. Specifically, for additive white Gaussian noise at SNR = 10 dB in the six-emotion task, the classical features and the classical features with ToBI both failed to provide successful results; speech MFCCs achieved a recognition rate of 41.43% and glottal symmetry reached 59.29%. This work has shown that the glottal signal, and the glottal symmetry in particular, provides high class separation for both the four- and six-emotion cases, confidently surpassing the performance of all other features included in this investigation in noisy speech conditions and in most clean-signal conditions.
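A rough feel for "features of the glottal signal" can be given with LPC inverse filtering, one common family of approaches to removing the vocal-tract filter. The sketch below is only an illustrative approximation, not the dissertation's inverse-filtering method or its glottal symmetry feature: the LPC residual stands in for the glottal excitation, and MFCCs are then computed from it.

```python
import librosa
from scipy.signal import lfilter

y, sr = librosa.load("utterance.wav", sr=16000)   # placeholder file name
a = librosa.lpc(y, order=16)                      # LPC coefficients [1, a1, ..., a16]
residual = lfilter(a, [1.0], y)                   # inverse filtering: apply A(z) to the speech
glottal_mfcc = librosa.feature.mfcc(y=residual, sr=sr, n_mfcc=13)
print(glottal_mfcc.shape)                         # (13, n_frames)
```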
APA, Harvard, Vancouver, ISO, and other styles
5

Väyrynen, E. (Eero). "Emotion recognition from speech using prosodic features." Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526204048.

Full text
Abstract:
Emotion recognition, a key step of affective computing, is the process of decoding an embedded emotional message from human communication signals, e.g. visual, audio, and/or other physiological cues. It is well known that speech is the main channel for human communication and thus vital in the signalling of emotion and semantic cues for the correct interpretation of contexts. In the verbal channel, the emotional content is largely conveyed as constant paralinguistic information signals, of which prosody is the most important component. The lack of evaluation of affect and emotional states in human-machine interaction is, however, currently limiting the potential behaviour and user experience of technological devices. In this thesis, speech prosody and related acoustic features of speech are used for the recognition of emotion from spoken Finnish. More specifically, methods for emotion recognition from speech relying on long-term global prosodic parameters are developed. An information fusion method is developed for short-segment emotion recognition using local prosodic features and vocal source features. A framework for emotional speech data visualisation is presented for prosodic features. Emotion recognition in Finnish comparable to the human reference is demonstrated using a small set of basic emotional categories (neutral, sad, happy, and angry). The recognition rate for Finnish was found to be comparable with those reported for Western language groups. Increased emotion recognition is shown for short-segment emotion recognition using fusion techniques. Visualisation of emotional data congruent with the dimensional models of emotion is demonstrated utilising supervised nonlinear manifold modelling techniques. The low-dimensional visualisation of emotion is shown to retain the topological structure of the emotional categories, as well as the emotional intensity of speech samples. The thesis provides pattern recognition methods and technology for the recognition of emotion from speech using long speech samples, as well as short stressed words. The framework for the visualisation and classification of emotional speech data developed here can also be used to represent speech data from other semantic viewpoints by using alternative semantic labellings if available.
Tiivistelmä Emootiontunnistus on affektiivisen laskennan keskeinen osa-alue. Siinä pyritään ihmisen kommunikaatioon sisältyvien emotionaalisten viestien selvittämiseen, esim. visuaalisten, auditiivisten ja/tai fysiologisten vihjeiden avulla. Puhe on ihmisten tärkein tapa kommunikoida ja on siten ensiarvoisen tärkeässä roolissa viestinnän oikean semanttisen ja emotionaalisen tulkinnan kannalta. Emotionaalinen tieto välittyy puheessa paljolti jatkuvana paralingvistisenä viestintänä, jonka tärkein komponentti on prosodia. Tämän affektiivisen ja emotionaalisen tulkinnan vajaavaisuus ihminen-kone – interaktioissa rajoittaa kuitenkin vielä nykyisellään teknologisten laitteiden toimintaa ja niiden käyttökokemusta. Tässä väitöstyössä on käytetty puheen prosodisia ja akustisia piirteitä puhutun suomen emotionaalisen sisällön tunnistamiseksi. Työssä on kehitetty pitkien puhenäytteiden prosodisiin piirteisiin perustuvia emootiontunnistusmenetelmiä. Lyhyiden puheenpätkien emotionaalisen sisällön tunnistamiseksi on taas kehitetty informaatiofuusioon perustuva menetelmä käyttäen prosodian sekä äänilähteen laadullisten piirteiden yhdistelmää. Lisäksi on kehitetty teknologinen viitekehys emotionaalisen puheen visualisoimiseksi prosodisten piirteiden avulla. Tutkimuksessa saavutettiin ihmisten tunnistuskykyyn verrattava automaattisen emootiontunnistuksen taso käytettäessä suppeaa perusemootioiden joukkoa (neutraali, surullinen, iloinen ja vihainen). Emootiontunnistuksen suorituskyky puhutulle suomelle havaittiin olevan verrannollinen länsieurooppalaisten kielten kanssa. Lyhyiden puheenpätkien emotionaalisen sisällön tunnistamisessa saavutettiin taas parempi suorituskyky käytettäessä fuusiomenetelmää. Emotionaalisen puheen visualisoimiseksi kehitetyllä opetettavalla epälineaarisella manifoldimallinnustekniikalla pystyttiin tuottamaan aineistolle emootion dimensionaalisen mallin kaltainen visuaalinen rakenne. Mataladimensionaalisen kuvauksen voitiin edelleen osoittaa säilyttävän sekä tutkimusaineiston emotionaalisten luokkien että emotionaalisen intensiteetin topologisia rakenteita. Tässä väitöksessä kehitettiin hahmontunnistusmenetelmiin perustuvaa teknologiaa emotionaalisen puheen tunnistamiseksi käytettäessä sekä pitkiä että lyhyitä puhenäytteitä. Emotionaalisen aineiston visualisointiin ja luokitteluun kehitettyä teknologista kehysmenetelmää käyttäen voidaan myös esittää puheaineistoa muidenkin semanttisten rakenteiden mukaisesti
APA, Harvard, Vancouver, ISO, and other styles
6

Sethu, Vidhyasaharan. "Automatic emotion recognition: an investigation of acoustic and prosodic parameters." Awarded by University of New South Wales, Electrical Engineering & Telecommunications, 2009. http://handle.unsw.edu.au/1959.4/44620.

Full text
Abstract:
An essential step to achieving human-machine speech communication with the naturalness of communication between humans is developing a machine that is capable of recognising emotions based on speech. This thesis presents research addressing this problem by making use of acoustic and prosodic information. At a feature level, novel group delay and weighted frequency features are proposed. The group delay features are shown to emphasise information pertaining to formant bandwidths and are shown to be indicative of emotions. The weighted frequency feature, based on the recently introduced empirical mode decomposition, is proposed as a compact representation of the spectral energy distribution and is shown to outperform other estimates of energy distribution. Feature level comparisons suggest that detailed spectral measures are very indicative of emotions while exhibiting greater speaker specificity. Moreover, it is shown that all features are characteristic of the speaker and require some sort of normalisation prior to use in a multi-speaker situation. A novel technique for normalising speaker-specific variability in features is proposed, which leads to significant improvements in the performances of systems trained and tested on data from different speakers. This technique is also used to investigate the amount of speaker-specific variability in different features. A preliminary study of phonetic variability suggests that phoneme-specific traits are not modelled by the emotion models and that speaker variability is a more significant problem in the investigated setup. Finally, a novel approach to emotion modelling that takes into account temporal variations of speech parameters is analysed. An explicit model of the glottal spectrum is incorporated into the framework of the traditional source-filter model, and the parameters of this combined model are used to characterise speech signals. An automatic emotion recognition system that takes into account the shape of the contours of these parameters as they vary with time is shown to outperform a system that models only the parameter distributions. The novel approach is also empirically shown to be on par with human emotion classification performance.
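The speaker-normalisation idea can be illustrated very simply: standardise each feature within each speaker before pooling the data, so that speaker-specific offsets and scales are removed. The snippet below is a generic per-speaker z-score on synthetic data, offered as an illustration rather than the thesis's specific technique.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "speaker": np.repeat(["spk1", "spk2", "spk3"], 50),
    "pitch_mean": np.concatenate([rng.normal(m, 10, 50) for m in (120, 200, 165)]),
    "energy": np.concatenate([rng.normal(m, 2, 50) for m in (60, 70, 55)]),
})

features = ["pitch_mean", "energy"]
# Standardise each feature using each speaker's own mean and standard deviation.
df[features] = df.groupby("speaker")[features].transform(
    lambda col: (col - col.mean()) / col.std())
print(df.groupby("speaker")[features].mean().round(2))   # ~0 for every speaker
```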
APA, Harvard, Vancouver, ISO, and other styles
7

Santorelli, Noelle Turini. "Perception of Emotion from Facial Expression and Affective Prosody." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/psych_theses/17.

Full text
Abstract:
Real-world perception of emotion results from the integration of multiple cues, most notably facial expression and affective prosody. The use of incongruent emotional stimuli presents an opportunity to study the interaction between sensory modalities. Thirty-seven participants were exposed to audio-visual stimuli (Robins & Schultz, 2004) including angry, fearful, happy, and neutral presentations. Eighty stimuli contained matching emotions and 240 contained incongruent emotional cues. Matching emotions elicited a significant number of correct responses for all four emotions. Sign tests indicated that for most incongruent conditions, participants demonstrated a bias towards the visual modality. Despite these findings, specific incongruent conditions did show evidence of blending. Future research should explore an evolutionary model of facial expression as a means for behavioral adaptation and the possibility of an "emotional McGurk effect" in particular combinations of emotions.
APA, Harvard, Vancouver, ISO, and other styles
8

Selting, Margret. "Emphatic speech style: with special focus on the prosodic signalling of heightened emotive involvement in conversation." Universität Potsdam, 1994. http://opus.kobv.de/ubp/volltexte/2010/3793/.

Full text
Abstract:
After a review of previous work on the prosody of emotional involvement, data extracts from natural conversations are analyzed in order to argue for the constitution of an 'emphatic (speech) style' in which linguistic devices are used to signal heightened emotive involvement. Participants use prosodic cues, in co-occurrence with syntactic and lexical cues, to contextualize turn-constructional units as 'emphatic'. Only realizations of prosodic categories that are marked in relation to surrounding uses of these categories have the power to contextualize units as displaying 'more-than-normal involvement'. In the appropriate context, and in co-occurrence with syntactic and lexical cues and sequential position, the context-sensitive interpretation of this involvement is 'emphasis'. Prosodic marking is used in addition to various unmarked cues that signal and constitute different activity types in conversation. Emphatic style highlights and reinforces particular conversational activities, and makes certain types of recipient responses locally relevant. In particular, switches from non-emphatic to emphatic style are used to contextualize 'peaks of involvement' or 'climaxes' in story-telling. These are shown in the paper to be 'staged' by speakers and treated by recipients as marked activities calling for displays of alignment with respect to the matter at hand. Signals of emphasis are deployable as techniques for locally organizing demonstrations of shared understanding and participant reciprocity in conversational interaction.
APA, Harvard, Vancouver, ISO, and other styles
9

Gordon, Haley. "Investigating the Relation between Empathy and Prosocial Behavior: An Emotion Regulation Framework." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/78070.

Full text
Abstract:
Little is known about the complex processes leading to prosocial behavior. However, theory suggests that empathy, empathic responding, and emotion regulation abilities may all contribute to the presence or absence of prosocial behavior. While theoretical papers demonstrate relationships among these constructs, researchers to date have only focused on small aspects of this complex relationship (e.g., the relationship between sympathy and emotion regulation, the relationship between empathy and prosocial behavior). This study proposed a complex model whereby empathy was both directly related to prosocial behavior and indirectly related to prosocial behavior via sympathy or personal distress. Furthermore, this study proposed an emotion regulation framework for understanding the relation between empathy and prosocial behavior, suggesting that one's emotion regulation abilities would cause a differential presentation of empathic responses, leading to a potential increase or decrease in prosocial behavior. An adult sample was recruited. Analyses were completed using Structural Equation Modeling (SEM). Results indicate that the hypothesized model adequately fit the data. All hypothesized associations between variables were significant. However, contrary to the hypothesis, emotion regulation ability did not alter the associations between study constructs. Strengths, limitations, and implications will be discussed.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
10

Laplaza, Miras Yesika. "Síntesis del habla con emociones en el dominio de las conversaciones virtuales." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/128499.

Full text
Abstract:
Esta tesis, al centrarse en la generación de voz sintética en el dominio de las conversaciones virtuales en español, trata dos aspectos diferentes del proceso de la conversión de texto en habla: Por un lado, parte de esta investigación gira alrededor de la normalización-corrección de los mensajes que los usuarios escriben mientras chatean con amigos o conocidos, cuyo lenguaje dista considerablemente del texto estándar y normativo que emplean estos conversores. Estos textos presentan numerosas abreviaturas, emoticonos, sustituciones de grafías o repeticiones de ellas, haciendo que el texto si es procesado por los normalizadores convencionales de los CTH resultara incomprensible por el oyente. Por otro lado, el habla generada debe corresponderse con el dominio especificado. En las conversaciones virtuales, que se asemejan más a una conversación oral que a un discurso escrito, predomina un afán por comunicar situaciones, transmitir estados emocionales, opiniones, sentimientos, etc. Por lo tanto, la voz desarrollada en esta investigación pretende reflejar esta expresividad, concretamente se pretende generar enunciados en los que se transmitan emociones mediante la modelación de parámetros prosódicos. Para lograr este propósito se parte del conversor de texto a habla de la empresa escocesa Cereproc.
This thesis, focusing on the generation of synthetic speech in the domain of virtual conversations in Spanish, addresses two different aspects of the text-to-speech process. On the one hand, part of this research revolves around the normalization and correction of the messages that users type while chatting with friends or acquaintances, whose language differs considerably from the standard, normative text these systems expect. Such texts contain many abbreviations, emoticons, and substitutions or repetitions of characters, which would make the text incomprehensible to the listener if it were processed by the conventional normalizers of TTS systems. On the other hand, the generated speech should match the specified domain. Virtual conversations, which are closer to oral conversation than to written discourse, are dominated by the desire to communicate situations and to convey emotional states, opinions, feelings, and so on. The voice developed in this research is therefore intended to reflect this expressiveness; specifically, it aims to generate utterances in which emotions are conveyed by modelling prosodic parameters. The text-to-speech system of the Scottish company CereProc is used as the starting point for this purpose.
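The normalisation problem described above can be pictured with a toy pre-processor. The sketch below is purely illustrative, not the thesis's normaliser: it collapses exaggerated character repetitions and maps a couple of emoticons to hypothetical emotion tags that a TTS front end could act on.

```python
import re

EMOTICONS = {":)": "<happy>", ":(": "<sad>", "xD": "<laugh>"}   # assumed mapping

def normalize_chat(text: str) -> str:
    # Replace known emoticons with emotion tags, then collapse letter repetitions.
    for emo, tag in EMOTICONS.items():
        text = text.replace(emo, f" {tag} ")
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)   # "holaaaa" -> "holaa"
    return re.sub(r"\s+", " ", text).strip()

print(normalize_chat("holaaaa que tal :) nos vemos xD"))
# "holaa que tal <happy> nos vemos <laugh>"
```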
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Prosodia emotiva"

1

Rao, K. Sreenivasa, and Shashidhar G. Koolagudi. Robust Emotion Recognition using Spectral and Prosodic Features. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-6360-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rao, K. Sreenivasa. Robust Emotion Recognition using Spectral and Prosodic Features. New York, NY: Springer New York, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mikulincer, Mario, and Phillip R. Shaver, eds. Prosocial motives, emotions, and behavior: The better angels of our nature. Washington, DC: American Psychological Association, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rao, K. Sreenivasa, and Shashidhar G. Koolagudi. Robust Emotion Recognition using Spectral and Prosodic Features. Springer, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rao, K. Sreenivasa, and Shashidhar G. Koolagudi. Robust Emotion Recognition using Spectral and Prosodic Features. Springer, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Weng, Helen Y., Brianna Schuyler, and Richard J. Davidson. The Impact of Compassion Meditation Training on the Brain and Prosocial Behavior. Edited by Emma M. Seppälä, Emiliana Simon-Thomas, Stephanie L. Brown, Monica C. Worline, C. Daryl Cameron, and James R. Doty. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780190464684.013.11.

Full text
Abstract:
Compassion meditation is a form of mental training that cultivates compassion towards oneself and other people, and is thought to result in greater prosocial behavior in real-world settings. This framework views compassion as a quality that can be trained, rather than a stable trait, and scientists have started testing these hypotheses using neuroscientific and objective behavioral methods. How does this internal meditative practice translate to external behavioral changes? We propose an emotion-regulation model of compassion meditation, where responses to suffering may change through three processes: (1) increasing empathic responses, (2) decreasing avoidance responses, and (3) increasing compassionate responses to suffering. These altered responses to suffering may lead to behavioral transfer, where prosocial behavior is more likely to occur, even in a non-meditative state. We summarize the neuroscientific and behavioral literature that may provide early support for this model, and make recommendations for future research to further test the model.
APA, Harvard, Vancouver, ISO, and other styles
7

Winner, Ellen. Color and Form. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190863357.003.0005.

Full text
Abstract:
This chapter addresses the philosophical puzzle of how abstract arrangement of forms and colors can communicate emotions. Research shows that adults as well as children perceive emotional properties in abstract art (thus not needing to rely on representational cues like weeping people). Our tendency to perceive emotions in abstract visual art is part of a broader tendency to perceive such connotations in simple lines and shapes, and indeed is not limited to art. We see expressive properties in rocks, trees, columns, cracks, drapery, and the like. Such perception is made possible by an isomorphism between physical and mental life: we droop when we are sad; we reach upward when we are striving. Thus, while our perception of emotion in music is in part due to music’s resemblance to speech prosody, our perception of emotion in visual art grows out of our ability to see expressive properties in all visual forms.
APA, Harvard, Vancouver, ISO, and other styles
8

Winner, Ellen. Wordless Sounds. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190863357.003.0003.

Full text
Abstract:
Philosophers have worried that music cannot be sad or happy. Only sentient creatures can have emotions. However, empirical studies show that people do perceive emotions in music, including music from unfamiliar traditions. The question then becomes how music conveys emotion. Research shows that structural features in music mirror how emotions are conveyed by prosodic features of speech. When we are sad we speak slowly, softly, and in a low register; and when music is slow and soft and low, we perceive it as sad. Other emotional properties (like the link between the minor mode and sadness, the major mode and happiness) may be learned, but this matter remains in dispute. The research provides no support for the claim that music does not express emotions. The conventional wisdom that music is the language of the emotions holds up very well.
APA, Harvard, Vancouver, ISO, and other styles
9

Padilla-Walker, Laura M. Moral Development During Emerging Adulthood. Edited by Jeffrey Jensen Arnett. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199795574.013.23.

Full text
Abstract:
Morality is an extremely broad and yet nuanced topic and field of development, with a multitude of factors that are thought to contribute to moral behavior. Although an exhaustive review is outside the scope of this chapter, the author examines the broad areas of moral cognition, moral emotion, moral identity, and prosocial behavior with a particular emphasis on how these aspects of morality are characterized during emerging adulthood. The chapter asserts that moral development is alive, well, and developing during the third decade of life. Perhaps because of the salient focus on negative aspects of development, our understanding of moral development during emerging adulthood is relatively underdeveloped, especially from a developmental framework and outside of a college student population. Thus, multiple suggestions for future directions in the area of moral development during emerging adulthood are discussed throughout.
APA, Harvard, Vancouver, ISO, and other styles
10

Howe, George W., and Laura Mlynarski. Coercion, Power, and Control in Interdependent Relationships. Edited by Thomas J. Dishion and James Snyder. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199324552.013.28.

Full text
Abstract:
Children must learn to navigate the complex world of social interdependence. This chapter discusses the central characteristics of interdependent interaction, reviewing recent research from social psychology. It then explores the repertoire of skill necessary for successful navigation of interdependence, and how rigid coercive aggression might impede success. It combines a dynamic systems framework with developmental and family research on social interaction in dyads and larger groups. In this view, elements of emotion, thought, and action assemble at each moment during real-time interaction, conditioning and being conditioned by the ongoing flow of that interaction. These elements come to form coordinated ensembles at the individual, dyad, and group level, and over time self-stabilize into coherent styles, including coercive aggression and prosocial orientations. The chapter then focuses on how these styles develop, and concludes with discussion of directions for future research and intervention.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Prosodia emotiva"

1

Anne, Koteswara Rao, Swarna Kuchibhotla, and Hima Deepthi Vankayalapati. "Emotion Recognition Using Prosodic Features." In SpringerBriefs in Electrical and Computer Engineering, 7–15. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15530-2_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Krothapalli, Sreenivasa Rao, and Shashidhar G. Koolagudi. "Emotion Recognition Using Prosodic Information." In SpringerBriefs in Electrical and Computer Engineering, 79–91. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-5143-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tait, Peta. "Prosodies of affect and emotional climates." In Forms of Emotion, 195–216. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003124832-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gülich, Elisabeth, and Katrin Lindemann. "Communicating emotion in doctor-patient interaction." In Prosody in Interaction, 269–94. Amsterdam: John Benjamins Publishing Company, 2010. http://dx.doi.org/10.1075/sidag.23.23gnl.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Devillers, Laurence. "Automatic detection of emotion from real-life data." In Prosody and Iconicity, 219–32. Amsterdam: John Benjamins Publishing Company, 2013. http://dx.doi.org/10.1075/ill.13.12dev.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lacheret-Dujour, Anne. "Prosodic clustering in speech: From emotional to semantic processes." In Consciousness & Emotion Book Series, 175–90. Amsterdam: John Benjamins Publishing Company, 2015. http://dx.doi.org/10.1075/ceb.10.09lac.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fehr, Beverley. "Compassionate love as a prosocial emotion." In Prosocial motives, emotions, and behavior: The better angels of our nature., 245–65. Washington: American Psychological Association, 2010. http://dx.doi.org/10.1037/12061-013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yimngam, Sukanya, Wichian Premchaisawadi, and Worapoj Kreesuradej. "Prosody Analysis of Thai Emotion Utterances." In Natural Language Processing and Information Systems, 177–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22327-3_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

de Moraes, João Antônio, and Albert Rilliard. "Prosody and Emotion in Brazilian Portuguese." In Intonational Grammar in Ibero-Romance, 135–52. Amsterdam: John Benjamins Publishing Company, 2016. http://dx.doi.org/10.1075/ihll.6.07mor.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Schuller, Björn. "Emotion Modelling via Speech Content and Prosody: In Computer Games and Elsewhere." In Emotion in Games, 85–102. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41316-7_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Prosodia emotiva"

1

Sisson, Natalie M., Emily A. Impett, and L. H. Shu. "Can Gratitude Promote More Creative Engineering Design?" In ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/detc2021-70664.

Full text
Abstract:
Abstract Urgent societal problems, including climate change, require innovation, and can benefit from interdisciplinary solutions. A small body of research has demonstrated the potential of positive emotions (e.g., gratitude, awe) to promote creativity and prosocial behavior, which may help address these problems. This study integrates, for the first time, psychology research on a positive and prosocial emotion (i.e., gratitude) with engineering-design creativity research. In a pre-registered study design, engineering students and working engineers (pilot N = 49; full study N = 329) completed gratitude, positive-emotion control, or neutral-control inductions. Design creativity was assessed through rater scores of responses to an Alternate Uses Task (AUT) and a Wind-Turbine-Blade Repurposing Task (WRT). No significant differences among AUT scores emerged across conditions in either sample. While only the pilot-study manipulation of gratitude was successful, WRT results warrant further studies on the effect of gratitude on engineering-design creativity. The reported work may also inform other strategies to incorporate prosocial emotion to help engineers arrive at more original and effective concepts to tackle environmental sustainability, and in the future, other problems facing society.
APA, Harvard, Vancouver, ISO, and other styles
2

Luengo, Iker, Eva Navas, Inmaculada Hernáez, and Jon Sánchez. "Automatic emotion recognition using prosodic parameters." In Interspeech 2005. ISCA: ISCA, 2005. http://dx.doi.org/10.21437/interspeech.2005-324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Esau, Natascha, Lisa Kleinjohann, and Bernd Kleinjohann. "Emotional Competence in Human-Robot Communication." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49409.

Full text
Abstract:
Since emotional competence is an important factor in human communication, it will certainly also improve communication between humans and robots or other machines. Emotional competence is defined by the aspects of emotion recognition, emotion representation, emotion regulation and emotional behavior. In this paper we present how these aspects are integrated into the architecture of the robot head MEXI. MEXI is able to recognize emotions from facial expressions and the prosody of natural speech, and represents its internal state, made up of emotions and drives, through corresponding facial expressions, head movements and speech utterances. Internal and external regulation mechanisms are realized for its emotions and drives. Furthermore, this internal state and its perceptions, including the emotions recognized in its human counterpart, are used by MEXI to control its actions. Thereby MEXI can react adequately in emotional communication.
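As a purely hypothetical illustration of the kind of internal state and regulation the abstract describes (not MEXI's actual implementation), each emotion can be modelled as a value that is pushed by perceptions and pulled back toward a neutral baseline by a regulation step:

```python
from dataclasses import dataclass, field

@dataclass
class EmotionState:
    # Hypothetical emotion set and regulation rate; not MEXI's actual parameters.
    values: dict = field(default_factory=lambda: {"joy": 0.0, "anger": 0.0, "fear": 0.0})
    decay: float = 0.1

    def perceive(self, emotion: str, strength: float) -> None:
        """External input (e.g., a recognized user emotion) nudges the state."""
        self.values[emotion] = min(1.0, self.values[emotion] + strength)

    def regulate(self) -> None:
        """Internal regulation: each emotion relaxes toward the neutral baseline 0."""
        for k in self.values:
            self.values[k] *= (1.0 - self.decay)

state = EmotionState()
state.perceive("joy", 0.8)
for _ in range(5):
    state.regulate()
print({k: round(v, 2) for k, v in state.values.items()})   # joy decays toward 0
```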
APA, Harvard, Vancouver, ISO, and other styles
4

Montenegro, Chuchi S., and Elmer A. Maravillas. "Acoustic-prosodic recognition of emotion in speech." In 2015 International Conference on Humanoid, Nanotechnology, Information Technology,Communication and Control, Environment and Management (HNICEM). IEEE, 2015. http://dx.doi.org/10.1109/hnicem.2015.7393229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pavaloi, Ioan, and Elena Musca. "Experimental study in emotion recognition using prosodic features." In 2015 E-Health and Bioengineering Conference (EHB). IEEE, 2015. http://dx.doi.org/10.1109/ehb.2015.7391422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jin, Bicheng, and Gang Liu. "Speech Emotion Recognition Based on Hyper-Prosodic Features." In 2017 International Conference on Computer Technology, Electronics and Communication (ICCTEC). IEEE, 2017. http://dx.doi.org/10.1109/icctec.2017.00027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Koolagudi, Shashidhar G., Nitin Kumar, and K. Sreenivasa Rao. "Speech Emotion Recognition Using Segmental Level Prosodic Analysis." In 2011 International Conference on Devices and Communications (ICDeCom). IEEE, 2011. http://dx.doi.org/10.1109/icdecom.2011.5738536.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

John, Remya Susan, Starlet Ben Alex, M. S. Sinith, and Leena Mary. "Significance of prosodic features for automatic emotion recognition." In PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON MICROELECTRONICS, SIGNALS AND SYSTEMS 2019. AIP Publishing, 2020. http://dx.doi.org/10.1063/5.0004235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jin, Bicheng, and Gang Liu. "Speech Emotion Recognition Based on Hyper-Prosodic Features." In 2018 International Computers, Signals and Systems Conference (ICOMSSC). IEEE, 2018. http://dx.doi.org/10.1109/icomssc45026.2018.8941666.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Austermann, A., N. Esau, L. Kleinjohann, and B. Kleinjohann. "Prosody based emotion recognition for MEXI." In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2005. http://dx.doi.org/10.1109/iros.2005.1545341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
