
Journal articles on the topic 'Prosodia emotiva'


Consult the top 50 journal articles for your research on the topic 'Prosodia emotiva.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Heath, Maria. "Orthography in social media: Pragmatic and prosodic interpretations of caps lock." Proceedings of the Linguistic Society of America 3, no. 1 (March 3, 2018): 55. http://dx.doi.org/10.3765/plsa.v3i1.4350.

Abstract:
Orthography in social media is largely understudied, but rich in pragmatic potential. This study examines the use of "caps lock" on Twitter, which has been claimed to function as an emotive strengthener. In a survey asking participants to rate tweets on gradient scales of emotion, I show that this claim does not account for all the data. I instead propose that caps lock should be understood as an indicator of prosody in text. I support this theory by drawing on Twitter corpus data to show how users employ single-word capitalization in positions indicative of emphatic stress and semantic focus. A prosodic interpretation of capitalization accounts for all the data in a unified way.
2

Mitchell, Rachel L. C., and Rachel A. Kingston. "Age-Related Decline in Emotional Prosody Discrimination." Experimental Psychology 61, no. 3 (November 1, 2014): 215–23. http://dx.doi.org/10.1027/1618-3169/a000241.

Abstract:
It is now accepted that older adults have difficulty recognizing prosodic emotion cues, but it is not clear at what processing stage this ability breaks down. We manipulated the acoustic characteristics of tones in pitch, amplitude, and duration discrimination tasks to assess whether impaired basic auditory perception coexisted with our previously demonstrated age-related prosodic emotion perception impairment. It was found that pitch perception was particularly impaired in older adults, and that it displayed the strongest correlation with prosodic emotion discrimination. We conclude that an important cause of age-related impairment in prosodic emotion comprehension exists at the fundamental sensory level of processing.
3

Kao, Chieh, Maria D. Sera, and Yang Zhang. "Emotional Speech Processing in 3- to 12-Month-Old Infants: Influences of Emotion Categories and Acoustic Parameters." Journal of Speech, Language, and Hearing Research 65, no. 2 (February 9, 2022): 487–500. http://dx.doi.org/10.1044/2021_jslhr-21-00234.

Abstract:
Purpose: The aim of this study was to investigate infants' listening preference for emotional prosodies in spoken words and identify their acoustic correlates. Method: Forty-six 3- to 12-month-old infants (M age = 7.6 months) completed a central fixation (or look-to-listen) paradigm in which four emotional prosodies (happy, sad, angry, and neutral) were presented. Infants' looking time to the string of words was recorded as a proxy of their listening attention. Five acoustic variables—mean fundamental frequency (F0), word duration, intensity variation, harmonics-to-noise ratio (HNR), and spectral centroid—were also analyzed to account for infants' attentiveness to each emotion. Results: Infants generally preferred affective over neutral prosody, with more listening attention to the happy and sad voices. Happy sounds with breathy voice quality (low HNR) and less brightness (low spectral centroid) maintained infants' attention more. Sad speech with shorter word duration (i.e., faster speech rate), less breathiness, and more brightness gained infants' attention more than happy speech did. Infants listened less to angry than to happy and sad prosodies, and none of the acoustic variables were associated with infants' listening interest in angry voices. Neutral words with a lower F0 attracted infants' attention more than those with a higher F0. Neither age nor sex effects were observed. Conclusions: This study provides evidence for infants' sensitivity to the prosodic patterns of the basic emotion categories in spoken words and for how the acoustic properties of emotional speech may guide their attention. The results point to the need to study the interplay between early socioaffective and language development.
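The five acoustic variables described in this abstract are straightforward to compute from a recorded word. Below is a minimal Python sketch, assuming the librosa library and a hypothetical file path; HNR is omitted because it typically requires a Praat-style analysis (for example via the parselmouth package) rather than librosa.

```python
import librosa
import numpy as np

# Hypothetical recording of a single spoken word
y, sr = librosa.load("happy_word.wav", sr=None)

# Mean fundamental frequency (F0), estimated with the YIN algorithm
f0 = librosa.yin(y, fmin=75, fmax=600, sr=sr)
mean_f0 = np.nanmean(f0)

# Word duration in seconds
duration = librosa.get_duration(y=y, sr=sr)

# Intensity variation: standard deviation of frame-level RMS energy
rms = librosa.feature.rms(y=y)[0]
intensity_variation = rms.std()

# Spectral centroid ("brightness"), averaged over frames
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0].mean()

print(mean_f0, duration, intensity_variation, centroid)
```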
4

Khan, S., S. A. Ali, and J. Sallar. "Analysis of Children’s Prosodic Features Using Emotion Based Utterances in Urdu Language." Engineering, Technology & Applied Science Research 8, no. 3 (June 19, 2018): 2954–57. http://dx.doi.org/10.48084/etasr.1902.

Abstract:
Emotion plays a significant role in identifying the states of a speaker from spoken utterances. Prosodic features add sense to spoken utterances, conveying speaker emotions. The objective of this research is to analyze the behavior of prosodic features (individually and in combination with other prosodic features) with different learning classifiers on emotion-based utterances of children in the Urdu language. In this paper, three different prosodic features (intensity, pitch, formant, and their combinations) with five different learning classifiers (ANN, J-48, K-star, Naïve Bayes, decision stump) and four basic emotions (happy, sad, angry, and neutral) were used to develop the experimental framework. The experiments demonstrated that, in terms of classification accuracy, artificial neural networks show significant results with both individual and combined prosodic features in comparison with the other learning classifiers.
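For readers who want to reproduce this kind of comparison, the sketch below (an assumption, not the authors' code) trains several scikit-learn classifiers on a hypothetical prosodic feature matrix; MLPClassifier stands in for the ANN, DecisionTreeClassifier roughly for J-48, a depth-1 tree for the decision stump, and k-nearest neighbours loosely for K-star.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical data: rows are utterances, columns are prosodic features
# (e.g., mean pitch, mean intensity, first formant); labels are emotions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.integers(0, 4, size=200)   # 0=happy, 1=sad, 2=angry, 3=neutral

classifiers = {
    "ANN (MLP)": MLPClassifier(max_iter=2000, random_state=0),
    "Decision tree (J-48-like)": DecisionTreeClassifier(random_state=0),
    "Decision stump": DecisionTreeClassifier(max_depth=1, random_state=0),
    "Naive Bayes": GaussianNB(),
    "k-NN (K-star-like)": KNeighborsClassifier(),
}

for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```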
5

Zhang, Bo, Pan Xiao, and Xiaohong Yu. "The Influence of Prosocial and Antisocial Emotions on the Spread of Weibo Posts: A Study of the COVID-19 Pandemic." Discrete Dynamics in Nature and Society 2021 (September 11, 2021): 1–9. http://dx.doi.org/10.1155/2021/8462264.

Abstract:
This study investigates the influence of the prosocial and antisocial tendencies of Weibo users on post transmission during the COVID-19 pandemic. To overcome the deficiencies of existing research on prosocial and antisocial emotions, we employ web crawler technology to obtain post data from Weibo and identify texts with prosocial or antisocial emotions. We use SnowNLP to construct semantic dictionaries and training models. Our major findings include the following. First, through correlation analysis and negative binomial regression, we find that user posts with high intensity and prosocial emotion can trigger commenting or forwarding behaviour. Second, the influence of antisocial emotion on Weibo comments, likes, and retweets is insignificant. Third, the general emotion of prosocial comments on Weibo also shows the emotional trend of prosocial comments. Overall, a major contribution of this paper is our focus on prosocial and antisocial emotions in cyberspace, providing a new perspective on emotion communication.
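As a rough illustration of the pipeline described here (not the authors' code), the sketch below scores the sentiment of a few Weibo-style posts with SnowNLP and fits a negative binomial regression of retweet counts on that score using statsmodels; the texts, counts, and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from snownlp import SnowNLP

# Hypothetical crawled posts with retweet counts (a real analysis would use thousands)
posts = pd.DataFrame({
    "text": ["愿大家平安健康，一起加油！", "这种行为太令人愤怒了。", "今天天气不错。",
             "感谢所有一线的医护人员！", "又是普通的一天。"],
    "retweets": [120, 45, 3, 210, 7],
})

# SnowNLP returns a positivity score in [0, 1]; treat it as a crude emotion proxy
posts["sentiment"] = posts["text"].apply(lambda t: SnowNLP(t).sentiments)

# Negative binomial regression of retweet counts on the sentiment score
X = sm.add_constant(posts[["sentiment"]])
model = sm.GLM(posts["retweets"], X, family=sm.families.NegativeBinomial())
print(model.fit().summary())
```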
6

Hutahaean, Siska Friskica. "REGULASI EMOSI DITINJAU DARI PERILAKU PROSOSIAL PADA SISWA SMA RAKSANA DI MEDAN." Jurnal Psikologi Universitas HKBP Nommensen 6, no. 2 (February 29, 2020): 53–59. http://dx.doi.org/10.36655/psikologi.v6i2.135.

Abstract:
The purpose of this research is to examine the relation between emotion regulation and prosocial behavior. The hypothesis submitted in this research is that there is a positive relation between emotion regulation and prosocial behavior: the higher the emotion regulation, the higher the prosocial behavior, and the lower the emotion regulation, the lower the prosocial behavior. The subjects of this investigation were 177 students at Raksana High School, Medan. The data were obtained using scales measuring emotion regulation and prosocial behavior. The calculations began with prerequisite (assumption) tests consisting of a normality test and a linearity test. The data were then analyzed using the Product Moment correlation with SPSS 19 for Windows. The analysis showed a correlation coefficient of 0.456 (p < 0.05), indicating a positive relation between emotion regulation and prosocial behavior. The statistical contribution of emotion regulation toward prosocial behavior was 20.8 percent, with the remaining 79.2 percent affected by other factors not examined. It can be concluded that the research hypothesis of a positive relation between emotion regulation and prosocial behavior is accepted.
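The 20.8 percent figure reported above is simply the squared correlation coefficient (0.456² ≈ 0.208). A minimal Python check, assuming made-up score vectors:

```python
from scipy.stats import pearsonr

emotion_regulation = [62, 70, 55, 80, 65, 74, 58, 69]   # hypothetical scale scores
prosocial_behavior = [58, 75, 50, 82, 60, 71, 55, 73]

r, p = pearsonr(emotion_regulation, prosocial_behavior)
print(f"r = {r:.3f}, p = {p:.3f}, r^2 = {r**2:.3f}")

# With the coefficient reported in the abstract:
print(0.456 ** 2)   # 0.207936, i.e. about 20.8 percent explained variance
```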
7

Wang, Yu Tai, Jie Han, Xiao Qing Jiang, Jing Zou, and Hui Zhao. "Study of Speech Emotion Recognition Based on Prosodic Parameters and Facial Expression Features." Applied Mechanics and Materials 241-244 (December 2012): 1677–81. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.1677.

Abstract:
The present status of speech emotion recognition is introduced in the paper. Emotional databases of Chinese speech and facial expressions were established using noise stimuli and movies to evoke the subjects' emotions. For different emotional states, we analyzed single-mode speech emotion recognition based on prosodic features and on the geometric features of facial expressions. Then, we discuss bimodal emotion recognition using a Gaussian Mixture Model. The experimental results show that the bimodal emotion recognition rate, combined with facial expression, is about 6% higher than the single-mode recognition rate using prosodic features alone.
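A common way to implement the Gaussian Mixture Model fusion mentioned above is to train one GMM per emotion and per modality and then sum the log-likelihoods across modalities. The sketch below, assuming precomputed prosodic and facial feature vectors, is only illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
emotions = ["happy", "angry", "sad", "neutral"]

# Hypothetical training features per emotion: prosodic (dim 6) and facial (dim 10)
train = {e: {"prosody": rng.normal(loc=i, size=(50, 6)),
             "face": rng.normal(loc=i, size=(50, 10))}
         for i, e in enumerate(emotions)}

# One GMM per emotion and per modality
models = {e: {m: GaussianMixture(n_components=2, random_state=0).fit(train[e][m])
              for m in ("prosody", "face")}
          for e in emotions}

def classify(prosody_vec, face_vec):
    # Sum of log-likelihoods across modalities = product of likelihoods
    scores = {e: models[e]["prosody"].score_samples(prosody_vec[None, :])[0]
                 + models[e]["face"].score_samples(face_vec[None, :])[0]
              for e in emotions}
    return max(scores, key=scores.get)

print(classify(rng.normal(loc=2, size=6), rng.normal(loc=2, size=10)))
```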
8

Arndt, Horst, and Richard W. Janney. "Verbal, prosodic, and kinesic emotive contrasts in speech." Journal of Pragmatics 15, no. 6 (June 1991): 521–49. http://dx.doi.org/10.1016/0378-2166(91)90110-j.

9

Lin, Yi, Xinran Fan, Yueqi Chen, Hao Zhang, Fei Chen, Hui Zhang, Hongwei Ding, and Yang Zhang. "Neurocognitive Dynamics of Prosodic Salience over Semantics during Explicit and Implicit Processing of Basic Emotions in Spoken Words." Brain Sciences 12, no. 12 (December 12, 2022): 1706. http://dx.doi.org/10.3390/brainsci12121706.

Abstract:
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or semantic channel. They were asked to judge the emotional content (explicit task) and speakers’ gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100, P200 and N400 amplitudes with greater delta, theta and alpha inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosody and semantics was modulated by emotion and task, though such modulatory effects varied across different processing stages. The prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making but reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization of delta, theta and alpha bands predicted the ERP components, with higher ITPC and ERSP values significantly associated with stronger N100, P200, N400 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing, with prosodic salience tied to stage-dependent emotion- and task-specific effects, which can provide insights into language and emotion processing from cross-linguistic/cultural and clinical perspectives.
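Inter-trial phase coherence (ITPC) of the kind reported here can be computed from epoched EEG with MNE-Python. The snippet below is a schematic sketch on synthetic data, not the authors' pipeline.

```python
import numpy as np
import mne

# Synthetic epochs: 40 trials, 4 EEG channels, 1 s at 250 Hz
sfreq, n_trials, n_channels, n_times = 250, 40, 4, 250
rng = np.random.default_rng(2)
data = rng.normal(scale=1e-6, size=(n_trials, n_channels, n_times))
info = mne.create_info(["Fz", "Cz", "Pz", "Oz"], sfreq, ch_types="eeg")
epochs = mne.EpochsArray(data, info)

# Morlet-wavelet time-frequency transform; return_itc=True yields inter-trial coherence
freqs = np.arange(2, 30, 2)          # delta through beta
power, itc = mne.time_frequency.tfr_morlet(
    epochs, freqs=freqs, n_cycles=freqs / 2.0, return_itc=True
)
print(itc.data.shape)                # (n_channels, n_freqs, n_times)
```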
10

Knoche, Harry “Trip”, and Ethan P. Waples. "A process model of prosocial behavior: The interaction of emotion and the need for justice." Journal of Management & Organization 22, no. 1 (June 11, 2015): 96–112. http://dx.doi.org/10.1017/jmo.2015.24.

Abstract:
Research on prosocial behavior has tended to focus on the positive consequences of prosocial behavior. This paper draws on attribution, emotion and justice motive literature to expand the discussion of prosocial behavior in organizations. Specifically, an expanded definition of prosocial behavior is offered and a process-oriented model of prosocial behavior is introduced. The process model of prosocial behavior is used to discuss the idea that prosocial behavior might have negative consequences. This paper contributes to the literature on prosocial behavior in organizations by (1) accounting for the effects of emotion and the need for justice on decisions to engage in prosocial actions and (2) identifying negative consequences of specific prosocial actions.
11

Morneau-Sévigny, Flore, Joannie Pouliot, Sophie Presseau, Marie-Hélène Ratté, Marie-Pier Tremblay, Joël Macoir, and Carol Hudon. "Validation de stimuli prosodiques émotionnels chez les Franco-québécois de 50 à 80 ans." Canadian Journal on Aging / La Revue canadienne du vieillissement 33, no. 2 (April 25, 2014): 111–22. http://dx.doi.org/10.1017/s0714980814000063.

Abstract:
Few batteries of prosodic stimuli have been validated for Quebec-French people. Such validation is necessary to develop auditory-verbal tasks in this population. The objective of this study was to validate a battery of emotional prosodic stimuli for aging French-Quebec subjects. The battery, elaborated by Maurage et al. (2007), is composed of 195 prosodic stimuli and was administered to 50 healthy Quebecers aged 50 to 80 years. The percentage of correct responses was calculated for each stimulus. For each emotion, Cronbach’s alpha was calculated to evaluate the internal consistency of the stimuli. Results showed that, among the 195 stimuli, 40 were correctly recognized by at least 80 per cent of the subjects. Anger was the emotion most correctly identified by the participants, while disgust was the least well recognized. Overall, this study provides data that will guide the selection of prosodic stimuli for evaluating French-Québécois adults.
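Cronbach's alpha, used here to assess internal consistency, can be computed directly from a subjects-by-items score matrix. A small self-contained sketch (with made-up 0/1 recognition scores) follows.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = subjects, columns = items (stimuli)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical correct/incorrect responses of 6 subjects to 5 anger stimuli
anger_scores = np.array([
    [1, 1, 1, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 0],
    [1, 0, 1, 1, 1],
    [0, 0, 0, 0, 1],
])
print(round(cronbach_alpha(anger_scores), 3))
```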
12

Frick, Robert W. "Communicating emotion: The role of prosodic features." Psychological Bulletin 97, no. 3 (1985): 412–29. http://dx.doi.org/10.1037/0033-2909.97.3.412.

13

Erro, Daniel, Eva Navas, Inma Hernáez, and Ibon Saratxaga. "Emotion Conversion Based on Prosodic Unit Selection." IEEE Transactions on Audio, Speech, and Language Processing 18, no. 5 (July 2010): 974–83. http://dx.doi.org/10.1109/tasl.2009.2038658.

14

Straight, Bilinda, Belinda L. Needham, Georgiana Onicescu, Puntipa Wanitjirattikal, Todd Barkman, Cecilia Root, Jen Farman, et al. "Prosocial Emotion, Adolescence, and Warfare." Human Nature 30, no. 2 (April 2, 2019): 192–216. http://dx.doi.org/10.1007/s12110-019-09344-6.

15

Attari, Azadeh, Bahram Ali Ghanbary Hashemabady, Ali Mashhadi, and Hossein Kareshki. "Temperament and Prosocial Behavior: The Mediating Role of Prosocial Reasoning, Emotion Regulation, and Emotion Lability." Practice in Clinical Psychology 6, no. 4 (October 30, 2018): 257–64. http://dx.doi.org/10.32598/jpcp.6.4.257.

16

Septianto, Felix, and Bambang Soegianto. "Being moral and doing good to others." Marketing Intelligence & Planning 35, no. 2 (April 3, 2017): 180–91. http://dx.doi.org/10.1108/mip-06-2016-0093.

Abstract:
Purpose: Although previous research has established that moral emotion, moral judgment, and moral identity influence consumer intention to engage in prosocial behavior (e.g. donating, volunteering) under some circumstances, these factors, in reality, can concurrently influence the judgment process. Therefore, it is important to get a more nuanced understanding of how combinations of each factor can lead to a high intention to engage in prosocial behavior. The paper aims to discuss these issues. Design/methodology/approach: This research employs fuzzy-set qualitative comparative analysis to explore different configurations of moral emotion, judgment, and identity that lead to a high consumer intention to engage in prosocial behavior. Findings: Findings indicate four configurations of moral emotion, moral judgment, and moral identity that lead to a high intention to engage in prosocial behavior. Research limitations/implications: This research focuses on the case of a hospital in Indonesia; thus, it is important not to overgeneralize the findings. Nonetheless, from a methodological standpoint, an opportunity emerges to broaden the examination to other service and cultural contexts. Practical implications: The findings of this research can help the hospital develop effective combinations of advertising and marketing strategies to promote prosocial behavior among its customers. Originality/value: This paper provides the first empirical evidence of the existence of multiple pathways of moral emotion, judgment, and identity that lead to a high consumer intention to engage in prosocial behavior. The implications of this research also highlight the importance of cultural context in understanding consumer behavior.
17

Erickson, Donna, Osamu Fujimura, and Bryan Pardo. "Articulatory Correlates of Prosodic Control: Emotion and Emphasis." Language and Speech 41, no. 3-4 (July 1998): 399–417. http://dx.doi.org/10.1177/002383099804100408.

18

Piotrovskaya, Larisa A. "DIFFERENTIATED APPROACH TO STUDYING THE EMOTIVE FUNCTION OF INTONATION." Theoretical and Applied Linguistics, no. 3 (2019): 143–63. http://dx.doi.org/10.22250/2410-7190_2019_5_3_143_163.

Abstract:
The current paper has several aims: on the one hand, it is meant to determine the principles for selecting language material for experimental phonetic research on the prosodic means of expressing emotions; on the other hand, it applies the results of this study (i) to determine the place of emotive intonation patterns in the Russian intonation system and (ii) to provide sufficient grounds for identifying three aspects of the emotive function of intonation: communicative-emotive, emotive-differentiating and form-building. I argue, first, that the focus of the corresponding phonetic studies should be emotive utterances, which constitute a separate communicative type of utterance, rather than emotionally colored declarative, interrogative or imperative sentences, and second, that there are definite principles to differentiate between them. The paper provides a detailed description of the experimental material used in this study of the intonation of emotive utterances, including 134 text samples containing emotive utterances of 18 structural types pronounced by 11 native Russian speakers...
19

Gerholm, Tove. "From shrieks to “Stupid poo”: emotive language in a developmental perspective." Text & Talk 38, no. 2 (February 23, 2018): 137–65. http://dx.doi.org/10.1515/text-2017-0036.

Abstract:
The aim of this paper is to highlight and describe the forms of verbal emotive utterances that appeared in a longitudinal corpus of 11 Swedish children interacting with parents, siblings and friends. The children were aged 0;9 to 5;10 and were recorded four to six times during a two-year period. The verbal emotive expressions in the material are divided into the categories of Descriptive versus Accompanying utterances. Descriptive utterances are emotive mainly through semantic conventions, whereas Accompanying utterances are emotive due to prosodic and contextual traits. The categories are illustrated and related to conventions, language development and cognitive growth. By classifying and labeling verbal expressions as emotive in different ways, it is argued that we can gain a better understanding of how language is used when intertwined with emotions, but also that we gain a way to compare and investigate emotive language in a more thorough manner.
20

Hayashi, Hajimu, and Yuki Shiomi. "Do children understand that people selectively conceal or express emotion?" International Journal of Behavioral Development 39, no. 1 (September 2, 2014): 1–8. http://dx.doi.org/10.1177/0165025414548777.

Abstract:
This study examined whether children understand that people selectively conceal or express emotion depending upon the context. We prepared two contexts for a verbal display task for 70 first-graders, 80 third-graders, 64 fifth-graders, and 71 adults. In both contexts, protagonists had negative feelings because of the behavior of the other character. In the prosocial context, children were instructed that the protagonist wished to spare the other character’s feelings. In contrast, in the real-emotion context, children were told that the protagonist was fed up with the other character’s behavior. Participants were asked to imagine what the protagonists would say. Adults selected utterances with positive or neutral emotion in the prosocial context but chose utterances with negative emotion in the real-emotion context, whereas first-graders selected utterances with negative emotion in both contexts. In the prosocial context, the proportion of utterances with negative emotion decreased from first-graders to adults, whereas in the real-emotion context the proportion was U-shaped, decreasing from first- to third-graders and increasing from fifth-graders to adults. Further, performance on both contexts was associated with second-order false beliefs as well as second-order intention understanding. These results indicate that children begin to understand that people selectively conceal or express emotion depending upon context after 8 to 9 years. This ability is also related to second-order theory of mind.
21

Denham, Susanne A., Teresa Mason, and Elizabeth A. Couchoud. "Scaffolding Young Children's Prosocial Responsiveness: Preschoolers' Responses to Adult Sadness, Anger, and Pain." International Journal of Behavioral Development 18, no. 3 (September 1995): 489–504. http://dx.doi.org/10.1177/016502549501800306.

Abstract:
Children's responsiveness to a female adult's negative emotions was investigated in two studies. During individual play sessions with preschoolers (mean age = 44 and 50 months in each study), experimenters enacted two vignettes involving each of three emotions: anger, sadness, and pain. The children's reactions to negative emotion, as well as their reactions after it was explained, and after prosocial behaviour was requested, were rated for level of prosocial response. Overall, prosocial behaviour increased after such supportive scaffolding from the adult. Children responded most prosocially to anger, and least prosocially to pain. Requesting help affected prosocial responsiveness only for sadness and anger. The slightly older subjects in Study Two may have needed less adult scaffolding of the situation because of their more proficient understanding of emotion. The results are discussed with reference to the information young children need regarding adult emotional displays, and their need to feel competent to perform prosocial actions.
22

Wang, Zepeng, and Yansheng Mao. "No Prosody, No Emotion: Affective Prosody by Chinese EFL Learners." Journal of AsiaTEFL 19, no. 1 (March 31, 2022): 125–40. http://dx.doi.org/10.18823/asiatefl.2022.19.1.8.125.

23

Kim, Damee, Hyun-Sub Sim, and Youngmee Lee. "Age-Related Differences in the Perception of Emotion in Emotional Speech: The Effects of Semantics and Prosody." Audiology and Speech Research 18, no. 1 (January 31, 2022): 48–59. http://dx.doi.org/10.21848/asr.210019.

Abstract:
Purpose: This study aimed to identify the age-related differences in the perception of emotion in speech, focusing on the effects of semantics and prosody. Methods: Thirty-two young adults and 32 elderly adults participated in this study. We implemented the test for rating of emotions in speech. The participants were presented with spoken sentences, which consisted of four emotional categories (anger, sadness, happiness, and neutral) in terms of prosody and semantics. In the general rating tasks, the participants were asked to listen to the sentences and rate the degree of the speaker’s emotions. In the attention rating tasks, the participants were asked to focus on only one cue (prosody or semantics) and to rate how much they agreed with the speaker’s emotion. Results: The young group scored significantly higher than the elderly group on the general rating tasks and attention rating tasks. The elderly group scored higher on the semantic tasks than on the prosodic tasks, while the young group scored similarly on the semantic and prosodic tasks. Conclusion: The elderly adults have lower abilities to perceive emotion in speech than the young adults. They have difficulty in using the prosodic cues of emotional speech. In addition, the elderly adults try to use the semantic cues of emotional speech in order to compensate for their poor abilities to process the prosodic cues.
24

Pattnaik, Padmalaya. "Impact of Emotion on Prosody Analysis." IOSR Journal of Computer Engineering 5, no. 4 (2012): 10–15. http://dx.doi.org/10.9790/0661-0541015.

25

Fernández Flecha, María de los Ángeles. "La adquisición de las relaciones entre prosodia e intención comunicativa: primeras asociaciones entre forma y función." Lexis 38, no. 1 (July 31, 2014): 5–33. http://dx.doi.org/10.18800/lexis.201401.001.

Abstract:
This paper deals with the relation between communicative intention and prosody in vocalizations uttered by infants from 16 to 24 months of age. Results show that (1) communicative and non-communicative vocalizations do not differ significantly in their final pitch contour (rising, falling, or flat), (2) but do show significantly different fundamental frequency values, which are higher in communicative vocalizations; and that (3) early communicative functions (declarative, imperative, emotive, mimic, action guide, and conversational “filling”) can be differentiated by their final pitch contour and, more clearly, (4) by their fundamental frequency, which is overall higher in imperatives (and emotive vocalizations) than in declaratives. Keywords: communicative intention, communicative function, prosody, language acquisition
26

Oguni, Ryuji, and Keiko Otake. "Prosocial Repertoire Mediates the Effects of Gratitude on Prosocial Behavior." Letters on Evolutionary Behavioral Science 11, no. 2 (September 22, 2020): 37–40. http://dx.doi.org/10.5178/lebs.2020.79.

Abstract:
Gratitude promotes prosocial behavior, but little is known about the psychological mechanisms that underpin this relationship. We examined whether the prosocial repertoire mediates the effects of gratitude on prosocial behavior. Participants were assigned to either a gratitude group or a neutral group. We induced emotions by having participants recall autobiographical memories and required them to write down the prosocial repertoires they intended to perform for others. One week later, participants reported all the prosocial behaviors they had engaged in during that period. The results indicated that the numbers of prosocial repertoires and prosocial behaviors in the gratitude group were higher than in the neutral group. Importantly, our results demonstrated that prosocial repertoires mediated the effects of gratitude on prosocial behavior. Our results suggest that the prosocial repertoire is the crucial cognitive component involved in the relationship between gratitude and prosocial behavior.
27

Werner, S., and G. N. Petrenko. "Speech Emotion Recognition: Humans vs Machines." Discourse 5, no. 5 (December 18, 2019): 136–52. http://dx.doi.org/10.32603/2412-8562-2019-5-5-136-152.

Abstract:
Introduction. The study focuses on emotional speech perception and speech emotion recognition using prosodic clues alone. Theoretical problems of defining prosody, intonation and emotion, along with the challenges of emotion classification, are discussed. An overview of acoustic and perceptional correlates of emotions found in speech is provided. Technical approaches to speech emotion recognition are also considered in the light of the latest emotional speech automatic classification experiments. Methodology and sources. The typical “big six” classification commonly used in technical applications is chosen and modified to include such emotions as disgust and shame. A database of emotional speech in Russian is created under sound laboratory conditions. A perception experiment is run using Praat software’s experimental environment. Results and discussion. Cross-cultural emotion recognition possibilities are revealed, as the Finnish and international participants recognised about half of the samples correctly. Nonetheless, native speakers of Russian appear to distinguish a larger proportion of emotions correctly. The effects of foreign language knowledge, musical training and gender on performance in the experiment were insufficiently prominent. The most commonly confused pairs of emotions, such as shame and sadness, surprise and fear, anger and disgust, as well as confusions with neutral emotion, were also given due attention. Conclusion. The work can contribute to psychological studies, clarifying emotion classification and the gender aspect of emotionality; to linguistic research, providing new evidence for prosodic and comparative language studies; and to language technology, deepening the understanding of possible challenges for SER systems.
28

Tseng, Huai-Hsuan, Yu-Lien Huang, Jian-Ting Chen, Kuei-Yu Liang, Chao-Cheng Lin, and Sue-Huei Chen. "Facial and prosodic emotion recognition in social anxiety disorder." Cognitive Neuropsychiatry 22, no. 4 (May 24, 2017): 331–45. http://dx.doi.org/10.1080/13546805.2017.1330190.

29

Ripoll, Karen, Sonia Carrillo, Yvonne Gómez, and Johny Villada. "Predicting Well-Being and Life Satisfaction in Colombian Adolescents: The Role of Emotion Regulation, Proactive Coping, and Prosocial Behavior." Psykhe (Santiago) 29, no. 2 (November 2020): 1–16. http://dx.doi.org/10.7764/psykhe.29.1.1420.

Abstract:
The aim of this study was to evaluate the association between positive competences, such as emotion regulation, proactive coping and prosocial behavior, and Colombian adolescents' perception of their well-being and life satisfaction. Through a convenience sample, 930 7th and 9th grade adolescents attending 11 public and private schools in 2 main cities of Colombia responded to a set of scales that evaluate proactive coping, emotion regulation, prosocial behavior, and perceived life satisfaction and well-being. Multiple linear regression analyses were conducted to evaluate models for adolescents' well-being and life satisfaction, with the positive competences taken as predictive variables. The model that showed the best fit and accounted for the greatest amount of variance in adolescents' well-being and life satisfaction included 2 dimensions of proactive coping (positive and social), emotion regulation, and prosocial behavior. Recommendations for future research and the development of intervention programs to promote adolescents' well-being are presented.
30

Chinn, Lisa K., Irina Ovchinnikova, Anastasia A. Sukmanova, Aleksandra O. Davydova, and Elena L. Grigorenko. "Early institutionalized care disrupts the development of emotion processing in prosody." Development and Psychopathology 33, no. 2 (February 15, 2021): 421–30. http://dx.doi.org/10.1017/s0954579420002023.

Abstract:
Millions of children worldwide are raised in institutionalized settings. Unfortunately, institutionalized rearing is often characterized by psychosocial deprivation, leading to difficulties in numerous social, emotional, physical, and cognitive skills. One such skill is the ability to recognize emotional facial expressions. Children with a history of institutional rearing tend to be worse at recognizing emotions in facial expressions than their peers, and this deficit likely affects social interactions. However, emotional information is also conveyed vocally, and neither prosodic information processing nor the cross-modal integration of facial and prosodic emotional expressions has been investigated in these children to date. We recorded electroencephalograms (EEG) while 47 children under institutionalized care (IC) (n = 24) or biological family care (BFC) (n = 23) viewed angry, happy, or neutral facial expressions while listening to pseudowords with angry, happy, or neutral prosody. The results indicate that 20- to 40-month-olds living in IC have event-related potentials (ERPs) over midfrontal brain regions that are less sensitive to incongruent facial and prosodic emotions relative to children under BFC, and that their brain responses to prosody are less lateralized. Children under IC also showed midfrontal ERP differences in processing of angry prosody, indicating that institutionalized rearing may specifically affect the processing of anger.
31

Zhao, Hui, Yu Tai Wang, and Xing Hai Yang. "Emotion Detection System Based on Speech and Facial Signals." Advanced Materials Research 459 (January 2012): 483–87. http://dx.doi.org/10.4028/www.scientific.net/amr.459.483.

Abstract:
This paper introduces the present status of speech emotion detection. In order to improve the emotion recognition rate of a single mode, a bimodal fusion method based on speech and facial expression is proposed. First, we establish an emotional database that includes speech and facial expressions. For different emotions (calm, happiness, surprise, anger, sadness), we extract ten speech parameters and use the PCA method to detect speech emotion. Then we analyze bimodal emotion detection that fuses facial expression information. The experimental results show that the emotion recognition rate with bimodal fusion is about 6 percentage points higher than the recognition rate using only speech prosodic features.
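The PCA step mentioned in this abstract can be prototyped in a few lines with scikit-learn; the feature matrix and labels below are placeholders for the ten speech parameters and five emotions described.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 10))     # 300 utterances x 10 speech parameters
y = rng.integers(0, 5, size=300)   # calm, happy, surprise, anger, sad

# Reduce the ten parameters to a few principal components, then classify
model = make_pipeline(StandardScaler(), PCA(n_components=4), SVC())
print(cross_val_score(model, X, y, cv=5).mean())

# Inspect how much variance the retained components explain
pca = PCA(n_components=4).fit(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)
```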
32

Carstensen, Laura L., and Kevin Chi. "Emotion and prosocial giving in older adults." Nature Aging 1, no. 10 (October 2021): 866–67. http://dx.doi.org/10.1038/s43587-021-00126-3.

33

Jain, Vybhav, S. B. Rajeshwari, and Jagadish S. Kallimani. "Emotion Analysis from Human Voice Using Various Prosodic Features and Text Analysis." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4244–47. http://dx.doi.org/10.1166/jctn.2020.9055.

Abstract:
Emotion Analysis is a dynamic field of research that aims to provide a method to recognize the emotions of a person from their voice alone. It is more commonly known as the Speech Emotion Recognition (SER) problem. This problem has been studied for more than a decade, with results coming from either Voice Analysis or Text Analysis. Individually, both of these methods have shown good accuracy so far, but using both methods in unison has shown much better results than either one considered individually. When people of different age groups are talking, it is important to understand the emotions behind what they say, as this will in turn help us react better. To achieve this, the paper implements a model that performs Emotion Analysis based on both Tone and Text Analysis. The prosodic features of the tone are analyzed, and then the speech is converted to text. Once the text has been extracted from the speech, Sentiment Analysis is performed on the extracted text to further improve the accuracy of the Emotion Recognition.
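A bare-bones version of the tone-plus-text idea can combine prosodic statistics from the audio with a text sentiment score from the transcript. The sketch below assumes the transcript is already available (the speech-to-text step is not shown) and uses NLTK's VADER analyzer, which requires the vader_lexicon resource; the file path, transcript, and fusion rule are hypothetical.

```python
import numpy as np
import librosa
from nltk.sentiment import SentimentIntensityAnalyzer  # nltk.download('vader_lexicon')

audio_path = "utterance.wav"                      # hypothetical recording
transcript = "I am so excited about this trip!"   # hypothetical ASR output

# Tone analysis: simple prosodic statistics
y, sr = librosa.load(audio_path, sr=None)
f0 = librosa.yin(y, fmin=75, fmax=500, sr=sr)
pitch_mean, pitch_std = np.nanmean(f0), np.nanstd(f0)
energy = librosa.feature.rms(y=y)[0].mean()

# Text analysis: VADER compound sentiment in [-1, 1]
sentiment = SentimentIntensityAnalyzer().polarity_scores(transcript)["compound"]

# Naive fusion rule: positive text plus variable pitch -> "happy", otherwise "neutral"
label = "happy" if sentiment > 0.3 and pitch_std > 20 else "neutral"
print(pitch_mean, energy, sentiment, label)
```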
34

Buck, Ross. "Conceptualizing motivation and emotion." Behavioral and Brain Sciences 23, no. 2 (April 2000): 195–96. http://dx.doi.org/10.1017/s0140525x00262420.

Abstract:
Motivation and emotion are not clearly defined and differentiated in Rolls's The brain and emotion, reflecting a widespread problem in conceptualizing these phenomena. An adequate theory of emotion cannot be based upon reward and punishment alone. Basic mechanisms of arousal, agonistic, and prosocial motives-emotions exist in addition to reward-punishment systems.
35

VAN DE VELDE, Daan J., Niels O. SCHILLER, Claartje C. LEVELT, Vincent J. VAN HEUVEN, Mieke BEERS, Jeroen J. BRIAIRE, and Johan H. M. FRIJNS. "Prosody perception and production by children with cochlear implants." Journal of Child Language 46, no. 1 (October 18, 2018): 111–41. http://dx.doi.org/10.1017/s0305000918000387.

Abstract:
The perception and production of emotional and linguistic (focus) prosody were compared in children with cochlear implants (CI) and normally hearing (NH) peers. Thirteen CI and thirteen hearing-age-matched school-aged NH children were tested, as baseline, on non-verbal emotion understanding, non-word repetition, and stimulus identification and naming. Main tests were verbal emotion discrimination, verbal focus position discrimination, acted emotion production, and focus production. Productions were evaluated by NH adult Dutch listeners. All scores between groups were comparable, except a lower score for the CI group for non-word repetition. Emotional prosody perception and production scores correlated weakly for CI children but were uncorrelated for NH children. In general, hearing age weakly predicted emotion production but not perception. Non-verbal emotional (but not linguistic) understanding predicted CI children's (but not controls’) emotion perception and production. In conclusion, increasing time in sound might facilitate vocal emotional expression, possibly requiring independently maturing emotion perception skills.
36

Kurniawan, Wisnu, and Sigit Sanyata. "The Effectiveness of Rational Emotive Behaviour Therapy Approach Counselling on Students’ Prosocial Behaviour." Jurnal Pendidikan dan Pengajaran 54, no. 2 (July 17, 2021): 328. http://dx.doi.org/10.23887/jpp.v54i2.33163.

Abstract:
Prosocial behaviour needs to be developed because it makes it easier to interact and establish relationships with others. This study aims to determine the effectiveness of group counselling with the Rational Emotive Behaviour Therapy (REBT) approach in improving prosocial behaviour in class XI students. This is quantitative research with a quasi-experimental design. The subjects of the study were 132 students. The instrument used was a scale containing 51 statement items. The data were analysed with the Wilcoxon Signed Rank Test using IBM SPSS v.22. The Wilcoxon Signed Rank Test is used to test the differences and the magnitude of the pre-test and post-test results. The research went through four rounds of pre-test and post-test to obtain the desired results. The results of the fourth pre-test and post-test revealed that p = 0.018, which is smaller than 0.05 (0.018 < 0.05), indicating a high level of effectiveness and showing that there were differences in prosocial behaviour before and after group counselling using the Rational Emotive Behaviour Therapy (REBT) approach.
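The Wilcoxon signed-rank test used in this study compares paired pre-test and post-test scores without assuming normality; the same test is available in SciPy. A minimal sketch with invented scores:

```python
from scipy.stats import wilcoxon

# Hypothetical prosocial-behaviour scale scores for eight counselled students
pre_test  = [120, 135, 110, 128, 140, 118, 125, 131]
post_test = [138, 150, 126, 139, 151, 130, 137, 148]

stat, p = wilcoxon(pre_test, post_test)
print(f"W = {stat}, p = {p:.3f}")   # p < 0.05 would indicate a reliable change
```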
37

Prokofyeva, L. P., I. L. Plastun, N. V. Filippova, L. Yu Matveeva, and Na S. Plastun. "Emotion recognition by speech signal characteristics (linguistic, clinical, informative aspects)." Sibirskiy filologicheskiy zhurnal, no. 2 (2021): 325–36. http://dx.doi.org/10.17223/18137083/75/23.

Abstract:
The paper presents an experimental project of linguists, medical professionals, lawyers, and computer security specialists dealing with the discernment of emotions from basic speech signal characteristics. The software has been created, and its first testing has been carried out in the social network VKontakte. The collected recordings of speech fragments of live spontaneous proximate-intermediated dialogical speech were analyzed at several levels. First, a complex linguistic analysis revealed lexico-semantic and prosodic features of emotionality. Then, a comparison with the software results was carried out, and the data obtained were systematized. Also, conclusions on the leading role of prosody in revealing hidden types of emotional stress were made. Frequent agreement between the numerical values of prosodic elements in speech segments was found, demonstrating emotions not fixed at the lexico-semantic level. Finally, a working formulation of external and internal ways of expressing emotionality in live speech was offered.
38

Li, Zhanxing, Dong Dong, and Jun Qiao. "The Role of Social Value Orientation in Chinese Adolescents’ Moral Emotion Attribution." Behavioral Sciences 13, no. 1 (December 20, 2022): 3. http://dx.doi.org/10.3390/bs13010003.

Abstract:
Previous studies have explored the role of cognitive factors and sympathy in children’s development of moral emotion attribution, but the effect of personal dispositional factors on adolescents’ moral emotion expectancy has been neglected. In this study, we address this issue by testing adolescents’ moral emotion attribution with different social value orientation (SVO). Eight hundred and eighty Chinese adolescents were classified into proselfs, prosocials and mixed types in SVO and asked to indicate their moral emotions in four moral contexts (prosocial, antisocial, failing to act prosocially (FAP) and resisting antisocial impulse (RAI)). The findings revealed an obvious contextual effect in adolescents’ moral emotion attribution and the effect depends on SVO. Prosocials evaluated more positively than proselfs and mixed types in the prosocial and RAI contexts, but proselfs evaluated more positively than prosocials and mixed types in the antisocial and FAP contexts. The findings indicate that individual differences of adolescents’ moral emotion attribution have roots in their social value orientation, and suggest the role of dispositional factors in the processing of moral emotion.
39

Samad, A., A. U. Rehman, and S. A. Ali. "Performance Evaluation of Learning Classifiers of Children Emotions using Feature Combinations in the Presence of Noise." Engineering, Technology & Applied Science Research 9, no. 6 (December 1, 2019): 5088–92. http://dx.doi.org/10.48084/etasr.3193.

Abstract:
Recognition of emotion-based utterances from speech has been carried out in a number of languages and utilized in various applications. This paper makes use of a corpus of spoken utterances recorded in Urdu with different emotions from normal and special children. The performance of learning classifiers is evaluated with prosodic and spectral features, and their combinations, treating children with autism spectrum disorder (ASD) as noise, are also discussed in terms of classification accuracy. The experimental results reveal that the prosodic features show significant classification accuracy in comparison with the spectral features for ASD children with different classifiers, whereas combinations of prosodic features show substantial accuracy for ASD children with the J48 and rotation forest classifiers. Pitch and formants yield considerable classification accuracy with MFCC and LPCC for special (ASD) children with different classifiers.
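The spectral features contrasted with prosody in this abstract are easy to extract for experimentation; the sketch below pulls MFCCs and LPC coefficients with librosa (true LPCC values require a further conversion from LPC, which is not shown) and stacks them with simple prosodic statistics. The file path is hypothetical.

```python
import numpy as np
import librosa

y, sr = librosa.load("child_utterance.wav", sr=None)   # hypothetical recording

# Spectral features: 13 MFCCs averaged over frames, plus 12th-order LPC coefficients
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
lpc = librosa.lpc(y, order=12)

# Prosodic features: pitch and energy statistics
f0 = librosa.yin(y, fmin=100, fmax=600, sr=sr)
prosodic = np.array([np.nanmean(f0), np.nanstd(f0),
                     librosa.feature.rms(y=y)[0].mean()])

# Feature combination for a downstream classifier
feature_vector = np.concatenate([prosodic, mfcc, lpc])
print(feature_vector.shape)
```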
40

Monnot, Marilee, Robert Foley, and Elliott Ross. "Affective prosody: Whence motherese." Behavioral and Brain Sciences 27, no. 4 (August 2004): 518–19. http://dx.doi.org/10.1017/s0140525x04390114.

Abstract:
Motherese is a form of affective prosody injected automatically into speech during caregiving solicitude. Affective prosody is the aspect of language that conveys emotion by changes in tone, rhythm, and emphasis during speech. It is a neocortical function that allows graded, highly varied vocal emotional expression. Other mammals have only rigid, species-specific, limbic vocalizations. Thus, encephalization with corticalization is necessary for the evolution of progressively complex vocal emotional displays.
41

Ben-David, Boaz M., Namita Multani, Vered Shakuf, Frank Rudzicz, and Pascal H. H. M. van Lieshout. "Prosody and Semantics Are Separate but Not Separable Channels in the Perception of Emotional Speech: Test for Rating of Emotions in Speech." Journal of Speech, Language, and Hearing Research 59, no. 1 (February 2016): 72–89. http://dx.doi.org/10.1044/2015_jslhr-h-14-0323.

Abstract:
Purpose Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. Method We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5 discrete emotions (anger, fear, happiness, sadness, and neutral) presented in prosody and semantics. Listeners were asked to rate the sentence as a whole, integrating both speech channels, or to focus on one channel only (prosody or semantics). Results We observed supremacy of congruency, failure of selective attention, and prosodic dominance. Supremacy of congruency means that a sentence that presents the same emotion in both speech channels was rated highest; failure of selective attention means that listeners were unable to selectively attend to one channel when instructed; and prosodic dominance means that prosodic information plays a larger role than semantics in processing emotional speech. Conclusions Emotional prosody and semantics are separate but not separable channels, and it is difficult to perceive one without the influence of the other. Our findings indicate that the Test for Rating of Emotions in Speech can reveal specific aspects in the processing of emotional speech and may in the future prove useful for understanding emotion-processing deficits in individuals with pathologies.
42

TADA, Kazuhiko, Yoshikazu YANO, Shinji DOKI, and Shigeru OKUMA. "Emotion Transition Distinction by Detecting Remarkable Changes of Prosodic Features." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 22, no. 1 (2010): 90–101. http://dx.doi.org/10.3156/jsoft.22.90.

43

Ventura, Maria I., Kathleen Baynes, Karen A. Sigvardt, April M. Unruh, Sarah S. Acklin, Heidi E. Kirsch, and Elizabeth A. Disbrow. "Hemispheric asymmetries and prosodic emotion recognition deficits in Parkinson's disease." Neuropsychologia 50, no. 8 (July 2012): 1936–45. http://dx.doi.org/10.1016/j.neuropsychologia.2012.04.018.

44

Huang, Yu-Lien, Sue-Huei Chen, and Huai-Hsuan Tseng. "Attachment avoidance and fearful prosodic emotion recognition predict depression maintenance." Psychiatry Research 272 (February 2019): 649–54. http://dx.doi.org/10.1016/j.psychres.2018.12.119.

45

Koolagudi, Shashidhar G., and K. Sreenivasa Rao. "Emotion recognition from speech using source, system, and prosodic features." International Journal of Speech Technology 15, no. 2 (March 20, 2012): 265–89. http://dx.doi.org/10.1007/s10772-012-9139-3.

46

Rao, K. Sreenivasa, Shashidhar G. Koolagudi, and Ramu Reddy Vempada. "Emotion recognition from speech using global and local prosodic features." International Journal of Speech Technology 16, no. 2 (August 4, 2012): 143–60. http://dx.doi.org/10.1007/s10772-012-9172-2.

47

Sheppard, Shannon M., Lynsey M. Keator, Bonnie L. Breining, Amy E. Wright, Sadhvi Saxena, Donna C. Tippett, and Argye E. Hillis. "Right hemisphere ventral stream for emotional prosody identification." Neurology 94, no. 10 (December 31, 2019): e1013-e1020. http://dx.doi.org/10.1212/wnl.0000000000008870.

Abstract:
Objective: To determine whether right ventral stream and limbic structures (including posterior superior temporal gyrus [STG], STG, temporal pole, inferior frontal gyrus pars orbitalis, orbitofrontal cortex, amygdala, anterior cingulate gyrus, and the sagittal stratum) are implicated in emotional prosody identification. Methods: Patients with MRI scans within 48 hours of unilateral right hemisphere ischemic stroke were enrolled. Participants were presented with 24 sentences with neutral semantic content spoken with happy, sad, angry, afraid, surprised, or bored prosody and chose which emotion the speaker was feeling based on tone of voice. Multivariable linear regression was used to identify individual predictors of emotional prosody identification accuracy from a model including percent damage to proposed right hemisphere structures, age, education, and lesion volume, across all emotions (overall emotion identification) and 6 individual emotions. Patterns of recovery were also examined at the chronic stage. Results: The overall emotion identification model was significant (adjusted r2 = 0.52; p = 0.043); greater damage to right posterior STG (p = 0.038) and older age (p = 0.009) were individual predictors of impairment. The model for recognition of fear was also significant (adjusted r2 = 0.77; p = 0.002), with greater damage to right amygdala (p = 0.047), older age (p < 0.001), and less education (p = 0.005) as individual predictors. Over half of patients with chronic stroke had residual impairments. Conclusions: Right posterior STG in the right hemisphere ventral stream is critical for emotion identification in speech. Patients with stroke with damage to this area should be assessed for emotion identification impairment.
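The multivariable regression described in the Methods can be mirrored with statsmodels; the data frame below is entirely hypothetical and only illustrates the model structure (accuracy regressed on percent damage to right posterior STG, age, education, and lesion volume).

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical patient-level data
df = pd.DataFrame({
    "accuracy":        [0.71, 0.55, 0.80, 0.62, 0.48, 0.77, 0.66, 0.59],
    "pct_pSTG_damage": [5.0, 40.0, 0.0, 22.0, 55.0, 3.0, 18.0, 35.0],
    "age":             [61, 74, 58, 69, 78, 55, 66, 72],
    "education":       [16, 12, 18, 14, 10, 16, 12, 14],
    "lesion_volume":   [12.0, 48.0, 8.0, 30.0, 60.0, 10.0, 25.0, 44.0],
})

X = sm.add_constant(df[["pct_pSTG_damage", "age", "education", "lesion_volume"]])
fit = sm.OLS(df["accuracy"], X).fit()
print(fit.summary())
```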
48

Myers, Brett, Miriam Lense, and Reyna Gordon. "Pushing the Envelope: Developments in Neural Entrainment to Speech and the Biological Underpinnings of Prosody Perception." Brain Sciences 9, no. 3 (March 22, 2019): 70. http://dx.doi.org/10.3390/brainsci9030070.

Abstract:
Prosodic cues in speech are indispensable for comprehending a speaker’s message, recognizing emphasis and emotion, parsing segmental units, and disambiguating syntactic structures. While it is commonly accepted that prosody provides a fundamental service to higher-level features of speech, the neural underpinnings of prosody processing are not clearly defined in the cognitive neuroscience literature. Many recent electrophysiological studies have examined speech comprehension by measuring neural entrainment to the speech amplitude envelope, using a variety of methods including phase-locking algorithms and stimulus reconstruction. Here we review recent evidence for neural tracking of the speech envelope and demonstrate the importance of prosodic contributions to the neural tracking of speech. Prosodic cues may offer a foundation for supporting neural synchronization to the speech envelope, which scaffolds linguistic processing. We argue that prosody has an inherent role in speech perception, and future research should fill the gap in our knowledge of how prosody contributes to speech envelope entrainment.
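The amplitude envelope that entrainment studies track is typically obtained from the Hilbert transform of the speech signal and then compared, after filtering, with the neural recording. A toy sketch of that envelope-tracking idea on synthetic signals (not any specific study's pipeline):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt
from scipy.stats import pearsonr

fs = 1000                                   # Hz, shared sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)

# Synthetic "speech": noise modulated at a syllable-like rate (~4 Hz)
modulation = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
speech = modulation * rng.normal(size=t.size)

# Amplitude envelope via the Hilbert transform, low-pass filtered below 10 Hz
envelope = np.abs(hilbert(speech))
b, a = butter(4, 10 / (fs / 2), btype="low")
envelope = filtfilt(b, a, envelope)

# Synthetic "neural" signal that partially follows the envelope
neural = 0.6 * envelope + 0.4 * rng.normal(size=t.size)

r, p = pearsonr(envelope, neural)
print(f"envelope tracking r = {r:.2f}")     # crude proxy for entrainment strength
```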
49

PETERS, SARA, KATHRYN WILSON, TIMOTHY W. BOITEAU, CARLOS GELORMINI-LEZAMA, and AMIT ALMOR. "Do you hear it now? A native advantage for sarcasm processing." Bilingualism: Language and Cognition 19, no. 2 (March 10, 2015): 400–414. http://dx.doi.org/10.1017/s1366728915000048.

Abstract:
Context and prosody are the main cues native-English speakers rely on to detect and interpret sarcastic irony within spoken discourse. The importance of each type of cue for detecting sarcasm has not been fully investigated in native speakers and has not been examined at all in adult English learners. Here, we compare the extent to which native-English speakers and Arabic-speaking English learners rely on contextual and prosodic cues to identify sarcasm in spoken English, situating these findings within current cross-linguistic effects literature. We show Arabic speakers utilize the cues to a different extent than native speakers: they tend not to utilize prosodic information, focusing on contextual semantic information. These results help clarify the relative weight of contextual and prosodic cues in native-English speakers and support theories that suggest that prosody and emotion could transfer separately in second language learning such that one could transfer while the other does not.
50

Brethel-Haurwitz, Kristin M., Maria Stoianova, and Abigail A. Marsh. "Empathic emotion regulation in prosocial behaviour and altruism." Cognition and Emotion 34, no. 8 (June 23, 2020): 1532–48. http://dx.doi.org/10.1080/02699931.2020.1783517.
