Academic literature on the topic 'Prosodie visuelle'
Journal articles on the topic "Prosodie visuelle"
Vampé, Anne, and Véronique Aubergé. "Prosodie expressive audio-visuelle de l'interaction personne-machine. Etats mentaux, attitudes, intentions et affects (Feeling of Thinking) en dehors du tour de parole." Techniques et sciences informatiques 29, no. 7 (September 20, 2010): 807–32. http://dx.doi.org/10.3166/tsi.29.807-832.
Dufays, Jean-Louis. "EFFEUILLER LA CHANSON." Revue de recherches en littératie médiatique multimodale 1 (June 8, 2018). http://dx.doi.org/10.7202/1047793ar.
Dissertations / Theses on the topic "Prosodie visuelle"
Barbulescu, Adela. "Génération de la prosodie audio-visuelle pour les acteurs virtuels expressifs." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM071/document.
The work presented in this thesis addresses the problem of generating audio-visual expressive performances for virtual actors. A virtual actor is represented by a 3D talking head, and an audio-visual performance refers to facial expressions, head movements, gaze direction, and the speech signal. While a substantial amount of work has been dedicated to emotions, we explore here expressive verbal behaviors that signal mental states, i.e., "how speakers feel about what they say". We explore the characteristics of these so-called dramatic attitudes and the way they are encoded with speaker-specific prosodic signatures, i.e., mental state-specific patterns of trajectories of audio-visual prosodic parameters.
Lu, Yan. "Etude contrastive de la prosodie audio-visuelle des affects sociaux en chinois mandarin vs. français : vers une application pour l'apprentissage de la langue étrangère ou seconde." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAL001/document.
In human face-to-face interaction, social affects should be distinguished from emotional expressions, which are triggered by the speaker's innate and involuntary controls: social affects are voluntarily controlled, expressed through audio-visual prosody, and play an important role in the realization of speech acts. They also convey information about the social context and the social relationship between the interlocutors. Prosody is a main vector of social affects, and its cross-language variability is a challenge for language description as well as for foreign language teaching. Cultural and linguistic specificities of socio-affective prosody in oral communication can thus be a difficulty, and even a risk of misunderstanding, for foreign and second language learners. This thesis is dedicated to intra- and intercultural studies on the perception of the prosody of 19 social affects in Mandarin Chinese and in French, on their cognitive representations, and on the learning of Chinese and French socio-affective prosody by foreign and second language learners. The first task of this thesis concerns the construction of a large audio-visual corpus of Chinese social affects: 152 sentences, varying in length, tone location, and syntactic structure, were produced with the 19 social affects. This corpus is used to examine the identification and perceptual confusion of these Chinese social affects by native and non-native listeners, as well as the tonal effect on non-native subjects' identification. Experimental results reveal that the majority of social affects are perceived similarly by native and non-native subjects, although some differences are also observed. Lexical tones lead to certain perceptual problems both for Vietnamese listeners (speakers of a tonal language) and for French listeners (speakers of a non-tonal language). In parallel, an acoustic analysis investigates the production side of prosodic socio-affects in Mandarin Chinese, highlighting the most prominent patterns of acoustic variation and supporting the perceptual results obtained on the same expressions. Then, a study on conceptual and psycho-acoustic distances between social affects is carried out with Chinese and French subjects. The main results indicate that all subjects share, to a very large extent, the knowledge about these 19 social affects, regardless of their mother tongue, gender, or how the social affects are presented (as concepts or as acoustic realizations). Finally, the last chapter of the thesis is dedicated to the differences in the perception of 11 Chinese social affects expressed in different modalities (audio only, video only, and audio-visual) between French learners and native subjects, as well as in the perception of the same French socio-affects between Chinese learners and native subjects. According to the results, the identification of affective expressions depends more on their affective values and presentation modality than on the subjects' learning level (beginner or intermediate), which does not have a significant effect on identification.
Fares, Mireille. "Multimodal Expressive Gesturing With Style." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS017.
The generation of expressive gestures allows Embodied Conversational Agents (ECAs) to articulate speech intent and content in a human-like fashion. The central theme of the manuscript is to leverage and control the ECAs' behavioral expressivity by modelling the complex multimodal behavior that humans employ during communication. The driving forces of the thesis are twofold: (1) to exploit speech prosody, visual prosody, and language with the aim of synthesizing expressive and human-like behaviors for ECAs; (2) to control the style of the synthesized gestures such that we can generate them in the style of any speaker. With these motivations in mind, we first propose a semantically aware, speech-driven facial and head gesture synthesis model trained on the TEDx Corpus, which we collected. We then propose ZS-MSTM 1.0, an approach to synthesize stylized upper-body gestures driven by the content of a source speaker's speech and corresponding to the style of any target speaker, seen or unseen by our model. It is trained on the PATS Corpus, which includes multimodal data from speakers with different behavioral styles. ZS-MSTM 1.0 is not limited to PATS speakers and can generate gestures in the style of any new speaker without further training or fine-tuning, rendering our approach zero-shot. Behavioral style is modelled from multimodal speaker data (language, body gestures, and speech) and is independent of the speaker's identity ("ID"). We additionally propose ZS-MSTM 2.0 to generate stylized facial gestures in addition to the upper-body gestures. We train ZS-MSTM 2.0 on the PATS Corpus, which we extended to include dialog acts and 2D facial landmarks.