Dissertations on the topic "Multimodality in interactions"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for research on the topic "Multimodality in interactions".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically according to the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are present in its metadata.
Browse dissertations from a wide range of disciplines and put together your bibliography correctly.
Filho, Valdinar Custódio. „Multiple factors, different interactions: scrutinizing the heterogeneous nature of referentiation“. Universidade Federal do Ceará, 2011. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=6377.
This work aims to describe the integration of multiple factors in the construction of reference. We consider that referentiation is built through the socio-cognitive work undertaken by individuals in order to establish objects of discourse. Thus, we argue that such action is carried out through complex strategies, which are not limited to the presence or value of the nominal expressions that appear on the textual surface. The fundamental theoretical background is the socio-cognitive paradigm, from which we propose a new view of the analyses carried out by Text Linguistics researchers. On the one hand, we maintain that other semiotic modes, besides the verbal one, can accomplish the same strategies normally described with an exclusive focus on linguistic constructions, since they are part of textual materiality. On the other hand, we suggest that the observation of texts different from those usually analyzed can yield new proposals for the description of referential processes. In this thesis we analyze a complete short tale and four episodes of a TV series, in order to investigate how material elements, linked to context, promote the introduction and reformulation of the objects built in texts. For this analysis, we adopt three main assumptions: 1) the verbal content that takes part in referential processes is not limited to anaphoric relations between referential expressions; 2) image, when it is part of the text, must be considered as textual materiality to be analyzed; 3) the transformation of a referent is a process more discursive than formal, and thus inherently non-linear. Through the application of these principles to the analysis of our sample, we define four general steps of referentiation: presentation, addition, correction and confirmation.
Jhaj, Sunjum. „Interactions with Culturally Relevant Children's Literature: A Punjabi Perspective“. Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40563.
Hugol-Gential, Clémentine. „Le service au restaurant : analyse linguistique et multimodale des interactions entre personnel de service et clients“. Thesis, Lyon 2, 2012. http://www.theses.fr/2012LYO20011.
Drawing on a rich array of verbal and multimodal resources, service is crucial to the organization of a restaurant meal. In this study, we are particularly interested in the interactions taking place between service staff and customers. On the basis of a corpus of video recordings made in natural settings in several restaurants, the empirical analyses have been carried out within a praxeological and interactional perspective. Several interactional patterns within professional service practices have been identified. These phenomena allow us to underline the importance and complexity of the various multimodal resources deployed by the participants in organizing and coordinating their activities. The study examines, first, the practices by which service staff regularly open the interaction with customers, then the various uses of the menu, and finally the organization of choice and the use of ad hoc categories during the taking of orders for dishes and wines. The aim is to understand the detailed organization of the interactions between service staff and customers and thus to underline their fundamental and structuring character for the dining experience.
De Koning, Marieke. „La multimodalité comme ressource en interprétation de dialogue : une étude de simulations d'interactions médiées par interprète en (cours de) formation“. Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALH015.
This PhD research focuses on multimodal interactional competences in interpreter education. Previous research has shown that non-verbal semiotic resources such as gaze, gesture and body positioning play an important role in communication outcomes during interpreter-mediated interactions (Wadensjö 1998, 2001). The co-construction of coordinating actions (Baraldi & Gavioli, 2012) and the central position of the interpreter in these plurilingual triadic encounters require, in addition to translation skills, specific interactional skills, which include the use of multimodality as a resource. However, this resource is rarely taken into account in interpreter training and education (Krystallidou, 2014). Consequently, we may ask whether and how interpreting students acquire these skills. To this end, a qualitative study was carried out with a group of 10 interpreting students at the University of Bologna. Their performances during role-play sessions in a learning context were filmed and analysed in order to answer the following research questions: what non-verbal semiotic resources do dialogue-interpreting students deploy when simulating interpreter-mediated interactions in a learning context? What purpose do they serve? How do they vary? After transcription and annotation with the ELAN software, a descriptive analysis was carried out on the students' use of multimodal resources during the role plays. This allowed a selection of excerpts to be analysed following multimodal conversation analysis (Mondada, 2018, 2019). This fine-grained analysis sheds light on a series of salient situations and the different ways in which embodied and situated actions affect their outcome. In addition, the students took part in semi-structured self-reflection interviews, which were also recorded. These were designed to give us access to the students' criteria and their level of multimodal interactional awareness.
The overall results show relatively little use of multimodality as a resource. However, the analysis highlights numerous individual differences and allows the identification of issues that should be considered in order to optimise role-play activities in dialogue-interpreter training, with the inclusion of multimodality as a resource.
Martin, Laurence. „S'entraîner à expliquer une procédure instrumentale : ethnographie multisituée d'un projet filmique mené avec des aides à domicile engagées dans une formation en français langue étrangère“. Thesis, Montpellier 3, 2020. http://www.theses.fr/2020MON30023.
This research examines the highly multimodal activities constituting a video project conducted in collaboration with home helpers, in the context of adult language training. The activities under study focus on explaining procedures carried out in various situations, weakly or heavily instrumented (spontaneous group talk, simulation, camera-facing shooting), and participate in the process of appropriating the foreign language by placing the learner in an expert role. Participant observation within an ethnographic approach, supported by audiovisual recordings of these activities, yielded a corpus that we organized into two main collections. The analyses are based primarily on these data. They deal with the dynamics of multi-sensory, situated actions and interactions (verbal actions, gestures, object manipulations, body placements, movements) as these were developed by the participants. They also draw on the Goffmanian notions of frame, position, engagement and reiteration. This ecological approach to human activity, relatively recent in the language sciences, highlights the multimodality of the resources deployed in these educational situations and shows their connection with the environment in its social and material dimensions.
Vincent, Caroline. „Interactions pédagogiques "fortement multimodales" en ligne : le cas de tuteurs en formation“. Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00765986.
Chen, Wei-Ching. „Les interactions verbales au cours du repas : analyse de la co-construction des activités de "manger et parler"“. Thesis, Lyon 2, 2015. http://www.theses.fr/2015LYO20045.
My dissertation, situated within the framework of interactional linguistics, examines interaction during meals among friends at dinner invitations in France. Based on audio and video data recorded in naturally occurring situations, this empirical study aims to show the modalities by which the participants co-construct the two main activities observed at the table: eating and talking. Concerning eating, the study describes in detail the interactions from the moment the diners sit down at the table until they finish the meal. On the basis of verbal and multimodal analysis, this study brings out the linguistic and multimodal resources used by hosts and guests to ensure the smooth progress of the meal. As for talking, the dissertation focuses on assessments of the dishes served. The analysis shows that through these assessments the speakers express their personal observations about the food and also perform various actions such as complimenting, criticizing, self-complimenting and self-deprecating. The key aim of this study is to shed light on the seen-but-unnoticed principles by which French speakers construct interaction at the table.
Drissi, Samira. „Apprendre à enseigner par visioconférence : étude d'interactions pédagogiques entre futurs enseignants et apprenants de FLE“. Thesis, Lyon, École normale supérieure, 2011. http://www.theses.fr/2011ENSL0678.
The aim of this thesis is to describe and analyse desktop videoconferencing pedagogical interactions that took place between learners of French at Berkeley (UCB) at one end and trainee instructors of French as a foreign language in Lyon (University Lyon 2) at the other. Our work focuses in particular on the communicational strategies mobilized by the trainees to conduct distance language teaching/learning sessions. Our approach draws on theoretical work in education studies (online education, the concept of pedagogical presence within the community of inquiry framework), second language acquisition, and the language sciences (pedagogical interaction analysis) (conceptual framework, Part 1). Instances of educational exchanges were collected through dynamic screen capture and provide material for studying emerging pedagogical and communicational strategies (methodological framework, Part 2). As part of these exchanges, tutors produce utterances and actions that shape the learning activities. We describe these regulations through the concept of pedagogical presence, broken down into three aspects (teaching presence, social presence and cognitive presence), which allows us to identify the pedagogical practices observed during desktop videoconferencing interactions (analysis of pedagogical presence, Part 3). This research, conducted with multimodal transcripts of the recorded data, allows us to uncover some aspects of online synchronous teaching. It also allows us to propose criteria for evaluating the pedagogical practices identified, described and analysed in this thesis, in order to improve certain facets of online language tutoring.
Vidal, Julie. „Etude des séquences de rétroaction corrective dans un dispositif en ligne d'enseignement/apprentissage du français langue étrangère : une approche multimodale de l'oral“. Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE2125.
This work analyzes six weeks of videoconferenced pedagogical interaction between trainee teachers enrolled in a master's degree in teaching French as a foreign language (FLE) at a French university (Lyon 2) and learners of French at a foreign university (Dublin City University). Corrective feedback is an important issue in foreign language pedagogy, renewed by the use of technology. However, there has been little research on how teachers provide corrective feedback on learners' oral production in online interactions. Our qualitative study is based on the analysis of ecological data, organized into a complex corpus of video interactions transcribed and annotated using the ELAN software. We observed multimodal assessments made by the teachers, as well as participants' comments conveying their perception of the corrective feedback. We analyzed these data from a multimodal perspective, according to which all semiotic resources contribute to making meaning, without automatically prioritizing one mode over another. In sum, this work aims to understand how teachers and learners co-construct corrective feedback sequences. We also aim to bring to light the effects of multimodality on these interactions, in order to make pedagogical proposals for the training of future teachers of French as a foreign language.
Zhang, Zhuoming. „Improving mediated touch interaction with multimodality“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT017.
As one of the most important non-verbal communication channels, touch is widely used for different purposes. It is a powerful force in human physical and psychological development, shaping social structures as well as communicating emotions. However, even though current information and communication technology (ICT) systems enable the use of various non-verbal languages, support for communicating through the sense of touch is still insufficient. Inspired by the cross-modal interaction of human perception, the approach I present in this dissertation is to use multimodality to improve mediated touch interaction. Following this approach, I present three devices that provide empirical contributions to multimodal touch interaction: VisualTouch, SansTouch, and In-Flat. To understand whether multimodal stimuli can improve the emotional perception of touch, I present the VisualTouch device and quantitatively evaluate the cross-modal interaction between the visual and tactile modalities. To investigate the use of different modalities in real touch communication, I present the SansTouch device, which provides empirical insights into multimodal interaction and skin-like touch generation in the context of face-to-face communication. Going one step further in the use of multimodal stimuli in touch interaction, I present the In-Flat device, an input/output touch overlay for smartphones. In-Flat not only provides further insights into skin-like touch generation, but also a better understanding of the role that mediated touch plays in more general contexts. In summary, this dissertation strives to bridge the gap between touch communication and HCI by contributing to the design and understanding of multimodal stimuli in mediated touch interaction.
Clark, Jessica. „The Sensory Mechanisms of Crayfish (Orconectes rusticus) Used in Detecting Predatory Threats“. Bowling Green State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1490027671892276.
Ibnelkaïd, Samira. „Identité et altérité par écran : modalités de l’intersubjectivité en interaction numérique“. Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE2069.
Though our research is firmly anchored within the field of linguistics, it also constitutes an interdisciplinary approach, aiming to establish a dialogue between interaction analysis and phenomenology. This research examines the complex notion of identity by defining it as a verbal, technical, and intersubjective phenomenon. Bodily, sensory, relational and social human existence is henceforth engaged in digital interaction devices, inducing unprecedented modalities of intersubjectivity. We therefore propose to analyze the novel features of intersubjectivity involved in digital interactions. In the first part of our dissertation, the theoretical exploration, we seek to apprehend the nature of identity co-construction, the stakes of the interindividual encounter understood as an intersubjective phenomenon, and the spatio-temporal characteristics of digital interactions. First, through a phenomenological approach, we define the encounter as a meaningful event and explore the phenomenotechnical properties of digital intersubjectivity. Second, through an interactionist approach, we focus on language and its role in identity co-construction, and more specifically on sequence organization and embodiment within physical and digital interactions. In the second part of our dissertation, these theorizations are put to the test in a data analysis. This empirical exploration consists in studying online encounters between geographically distant participants. The study allows us to draw up a topography of the spatio-temporal framework of phygital interaction, a typology of the acts of enacting existence on screen, and a description of the ontological process of identity co-construction.
Barchunova, Alexandra [Verfasser]. „Manual interaction: multimodality, decomposition, recognition / Alexandra Barchunova. Technische Fakultät“. Bielefeld : Universitätsbibliothek Bielefeld, Hochschulschriften, 2013. http://d-nb.info/1031505245/34.
Nicolaev, Viorica. „L'apprentissage du FLE dans un dispositif vidéographique synchrone : étude des séquences métalinguistiques“. Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00793185.
Saubesty, Jorane. „Analyses multimodales de l'interaction patient-médecin en situation de formation à l'annonce d'un événement indésirable grave : modélisation en vue d'implémenter un outil de formation par la réalité virtuelle“. Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0010.
The ACORFORMed ANR project, of which this PhD project is part, aims at the creation (by computer scientists) of an animated conversational "patient" agent, a tool for training doctors to break bad news through simulation in a virtual environment. Using the methodology derived from gesture studies and the contributions of the literature on the organisation of interactions, we try to answer the following question: what is the overall structural organisation of the doctor/patient interaction when the doctor is training to break the news of harm associated with care? The analyses carried out in this thesis allow us to describe the doctor/patient interaction during such training by proposing the different phases that make up the interaction, as well as details of their division and articulation. They are an indispensable and usable basis on which computer scientists can design and implement a credible "patient" conversational agent that can be used in physician training. Located at the heart of an interdisciplinary project, this thesis in linguistics makes it possible to transpose the interactional practices of physicians with a view to the implementation of a virtual agent by computer scientists.
Rodriguez, Bertha Helena. „Modèle SOA sémantique pour la multimodalité et son support pour la découverte et l'enregistrement de services d'assistance“. Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0006/document.
Unimodal inputs and outputs in current systems have become very mature, with touch applications or distributed services for geo-localization and for speech, audio and image recognition. However, the integration and instantiation of all these modalities lack intelligent management of the acquisition and restitution context based on highly formalized notions reflecting common sense. This requires more dynamic system behavior, with a more appropriate approach to managing the user environment. However, the technology required to achieve such a goal is not yet available in a standardized manner, either in terms of the functional description of unimodal services or in terms of their semantic description. This is also the case for multimodal architectures, where semantic management is produced by each project without a common agreement in the field to ensure interoperability, and is often limited to the processing of inputs and outputs or to fusion/fission mechanisms. To fill this gap, we propose a generic semantic service-oriented architecture for multimodal systems. This proposal aims to improve the description and discovery of modality components for assistance services: this is the SOA2m architecture. The architecture is fully focused on multimodality and is enriched with semantic technologies, because we believe that this approach will enhance the autonomous behavior of multimodal applications, provide a robust perception of user-system exchanges, and help control the semantic integration of human-computer interaction. As a result, the challenge of discovery is addressed using the tools provided by the field of semantic web services.
Nigay, Laurence. „Modalité d'interaction et multimodalité“. Habilitation à diriger des recherches, Université de la Méditerranée - Aix-Marseille II, 2001. http://tel.archives-ouvertes.fr/tel-00004696.
Zhang, Leticia Tian. „Understanding danmu: interaction, learning and multimodality in fan video comments“. Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/669267.
Der volle Inhalt der QuelleCuando ven la televisión, muchas personas usan las redes sociales para compartir opiniones y emociones en tiempo real. La covisualización mediada se estudia ampliamente bajo la denominación de “segunda pantalla” o “televisión social”. En Japón y China, una reciente tecnología permite incrustar las redes sociales (la sección de comentarios) dinámicamente en las secuencias del video, creando una forma de participación sin precedentes llamada danmu o danmaku (“barrera de fuego”). Este trabajo se propone describir las características de este género discurso emergente. Utilizando el análisis del contenido y del discurso, analizamos danmu de: 1) una serie de televisión, y 2) un hilo de “danmu graciosos” de sitios populares de repositorios de videos. Nuestros resultados revelan que los usuarios tienen diversos intereses (trama, lenguaje, cultura), se apropian de recursos multimodales (color, posición, símbolos) para hacer humor y construyen significados usando estrategias discursivas originales. Este estudio muestra el potencial del danmu como un espacio para el aprendizaje informal, la creatividad semiótica y la interacción (para)social, además de motivar futuras investigaciones sobre las prácticas de compartir videos más allá de YouTube.
Jourde, Frédéric. „Collecticiel et multimodalité : spécification de l'interaction la notation COMM et l'éditeur e-COMM“. Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENM018/document.
Multi-user multimodal interactive systems involve multiple users who can use multiple interaction modalities. Although multi-user multimodal systems are becoming more prevalent (especially those involving multi-touch surfaces), their design is still ad hoc, without proper tracking of the design process. Addressing this lack of design tools, our doctoral research is dedicated to the specification of multi-user multimodal interaction. Its contributions include the COMM (Collaborative and MultiModal) notation and its online editor for specifying multi-user multimodal interactive systems. Extending the CTT notation, the salient features of the COMM notation include the concepts of interactive role and modal task, as well as a refinement of the temporal operators applied to tasks using the Allen relationships. The COMM notation and its online editor e-COMM (http://iihm.imag.fr/demo/editeur/) have been successfully applied to a large-scale project dedicated to a multimodal military command post for the control of unmanned aerial vehicles (UAVs) by two operators.
Gonseth, Chloe. „Multimodalité de la communication langagière humaine : interaction geste/parole et encodage de distance dans le pointage“. Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENS011/document.
Designating an object for the benefit of another person is one of the most basic processes in linguistic communication. It is most of the time performed through the combined use of vocal and manual productions. The goal of this work is to understand and characterize the interactions between speech and manual gesture during pointing tasks, in order to determine how much linguistic information is carried by each of these two systems, and eventually to test the main models of speech and gesture production.
The first part of the study is about the production of vocal and manual pointing. The original aspect of this work is to look for distance encoding parameters in the lexical, acoustic, articulatory and kinematic properties of multimodal pointing, and to show that these different characteristics can be related with each other, and underlain by a similar basic motor behaviour: designating a distant object induces larger gestures, be they vocal or manual. This motor pattern can be related with the phonological pattern that is used for distance encoding in the world's languages. The experimental design that is used in this study contrasts bimodal vs. vocal monomodal vs. monomodal manual pointings, and a comparison between these conditions reveals that the vocal and manual modalities act in bidirectional cooperation for deixis, sharing the informational load when used together.
The second part of the study explores the development of multimodal pointing. The properties of multimodal pointing are assessed in 6-12 year-old children, in an experimental task similar to that of the adults. This second experiment attests a progressive evolution of speech/gesture interactions in the development of spatial deixis. It reveals that distance is preferentially encoded in manual gestures in children, rather than in vocal gestures (and especially so in younger children). It also shows that the cooperative use of speech and manual gesture in deixis is already at play in children, though with more influence of gesture on speech than the reversed pattern.
The third part of the study looks at sensorimotor interactions in the perception of spatial deixis. This experimental study, based on an intermodal priming paradigm, reveals that manual gesture plays a role in the production/perception mechanism associated with the semantic processing of language. These results can be related with those of studies on the sensorimotor nature of representations in the processing of linguistic sound units.
Altogether, these studies provide strong evidence for an integrated representation of speech and manual gestures in the human linguistic brain, even at a relatively early age in its development. They also show that distance encoding is a robust feature, which is present in all aspects of multimodal pointing.
Fuchs, Yann. „Les quotatifs en interaction. Approche synchronique d'un paradigme en mouvement, dans un corpus d'anglais oral britannique et irlandais“. Thesis, Paris 3, 2012. http://www.theses.fr/2012PA030139.
This thesis gives a synchronic account of the quotative paradigm in oral English, following the arrival of the new introducers GO and BE LIKE less than five decades ago. It stands in the wake of earlier studies that have aimed at analysing the quotative system of English since the earliest phases of this recently attested change in progress. The study was carried out on a corpus of original data, the Cambridge Student Corpus, which contains semi-guided dyadic conversations between British and Irish native speakers. This thesis examines, from an empirical point of view, the various pragmatic, interactional and discourse functions of quotatives in order to shed light on their complementary distribution in oral interaction. The chosen approach is polysystemic, multi-theoretical and multimodal. Language is a complex system within which several sub-systems interact to build spoken interaction. In order to account for this complexity, it is necessary to apply several methods of analysis and various linguistic theories simultaneously. This thesis also takes into account the multimodal aspects of oral interaction. It gives a qualitative and quantitative account of quotatives with respect to their functions of representation, reiteration of prior events and multimodal performance. It also examines the various narrative strategies that these markers may implement as they participate together in the elaboration of sequences of dialogue. This work illustrates the notion that only through a combination of different methods can the analyst reduce the number of unexplained events that occur in oral interaction.
Helm, Francesca. „I'm not disagreeing, I'm just curious: Exploring identities through multimodal interaction in virtual exchange“. Doctoral thesis, Universitat Autònoma de Barcelona, 2017. http://hdl.handle.net/10803/400763.
This thesis explores the emergence of identity in multimodal interaction within the situated context of a virtual exchange, that is, a "technology-enabled, sustained, people-to-people education program". It takes the view that identity is constructed through discourse and reconstructed every time we engage in interaction, and that the situated contexts of an interaction can both limit and enhance individuals' opportunities to engage in language use and identity construction. Understanding how learners construct and enact identities in particular contexts can help us, in education, to design and implement language-learning processes. The thesis begins by presenting a framework for the analysis of identity in online contexts, based on ethnographic and discourse-centred approaches to identity in interaction, research on Computer-Mediated Communication (CMC) and multimodality. This framework allows me to explore identity in the virtual exchange on several levels. First of all, I look at the ideological, political and theoretical underpinnings of the virtual exchange, a programme designed to address the relationship between "Western societies" and "predominantly Muslim societies". I then explore the mediating role and the affordances of the asynchronous and synchronous multimodal technologies adopted for interaction and identity construction in the virtual exchange. Finally, I analyse the patterns of interaction and identity positionings of one particular group of young people connecting from Palestine, Tunisia, Egypt, Jordan, Qatar and the United States to engage in "facilitated dialogue" in the exchange over a period of 7 weeks.
My findings indicate that identities in this context are fluid and emergent, as are the power dynamics within the interactions. The power relations between facilitators and participants are asymmetric, and this is reflected in certain exchanges, but the facilitators also manage the power dynamics by creating a safe space, taking control of the interactions and supporting understanding through transcription. Consequently, the power dynamics shifted from the first sessions, when participants initiated few interactions, to later sessions, when they began to question, challenge and build knowledge on the basis of one another's perspectives. In seeking to understand the "other", participants made relevant their own identities and their local, lived identities, for example being a young person in Tunisia or Egypt at a historic moment when people had taken to the streets to demand change, or being a young American who disagrees with or knows little about his government's foreign policy. They orient to these identities within the context of a dialogue group with the shared goal of gaining a greater understanding of the other. The context allowed many of the participants to find their own voice, despite asymmetries in language competence, in power dynamics and in knowledge of geopolitics.
This thesis explores the emergence of identity in multimodal interaction within the situated context of a virtual exchange, that is a "technology-enabled, sustained, people-to-people education program". It takes the view that identity is discursively constructed and reconstituted every time we engage in interaction, and that the situated contexts of interaction can both limit and enhance opportunities for individuals to engage in language use and identity construction. Understanding how learners construct and enact identities in certain contexts can thus help us in the design and implementation of online language learning in education. This thesis begins by presenting a framework for the analysis of identity in online contexts, based on ethnographic and discourse-centred approaches to identity in interaction, research on CMC and multimodality. This framework allows me to explore identity in virtual exchange on various levels. I first of all look at the ideological, political and theoretical underpinnings of a virtual exchange which was designed to address the relationship between ‘Western societies’ and ‘predominantly Muslim societies’. I then explore the mediating role and the affordances of the multimodal, asynchronous and synchronous technologies adopted for interaction and identity construction in the exchange. Finally, I analyse the patterns of interaction and identity positionings of one particular group of young people connecting from Palestine, Tunisia, Egypt, Jordan, Qatar and the United States to engage in “facilitated dialogue” in the virtual exchange over a period of 7 weeks. Findings indicate that identities in this context are fluid and emergent, as are the power dynamics within interactions.
The power relations between facilitators and participants are asymmetric and this is reflected in certain exchanges, but the facilitators also address power dynamics by supporting understanding through transcription, and by supporting the participants in creating a safe space and taking control of the interactions. Power dynamics thus shifted as participants initiated interactions, questioned, challenged and built upon one another’s perspectives. In seeking to gain understanding of the “other”, interactants made relevant their own and each other’s local and lived identities, for example being a young person in Tunisia or Egypt at a historic time when people have been taking to the streets to demand change, or being a young American who disagrees with or knows little about his government’s foreign policy. Orienting to these identities in the context of a facilitated dialogue group with the shared goal of acquiring greater understanding of the “other” allowed many of the participants to find a voice, despite the asymmetries in terms of language competence, quality of connection and knowledge of geopolitics. Research in this area is relevant, particularly in the current time, which is marked by refugee crises, increased nationalisms and populisms, terror alerts and fear of the “other”. Policy makers are calling on educators to have students engage positively with difference and develop digital literacies and critical awareness, and there is an urgent need for greater understanding as to how this can be done. I seek to highlight how we can apply some of the findings of this study to the design of learning in multimodal online contexts which offer learners a range of identity positionings.
Orso, Valeria. „Toward multimodality: gesture and vibrotactile feedback in natural human computer interaction“. Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424451.
Computational systems have long since left the static scenario of the desktop and today increasingly pervade everyday life; in other words, they permeate our lives. In the context of pervasive or ubiquitous computing, interaction between user and machine depends less and less on specific input devices (for example, mouse and keyboard) and increasingly exploits natural control modalities for operating devices (for example, gestures or voice recognition). There have been numerous attempts to substantially transform the design of computers and of interaction modalities, including systems for recognizing gestural commands, wearable devices and augmented reality. In such contexts, the methods traditionally used to study the human-machine relationship prove ineffective, and an adequate revision of these methods is needed in order to properly investigate the characteristics of the new systems. In the present work, user interaction with several innovative systems was analysed, each characterized by a different type of interface. Different contexts of use were also considered. The methods employed were conceived to address the different characteristics of the interfaces under examination, and a series of recommendations for developers was derived from the experimental results. The first application domain investigated is the domestic one. In particular, the design of a gestural interface for controlling a lighting system integrated into a kitchen cabinet was examined. A representative group of users was observed while interacting with a virtual simulation of the prototype. Based on the analysis of the users' spontaneous behaviour, we were able to observe a series of regularities in the participants' actions.
The second application domain concerns the exploration of an urban environment while on the move. In a comparative experiment, an audio-haptic interface and an audio-visual interface were compared for guiding users towards points of interest and providing them with related information. The results indicate that both systems are equally efficient, and both received positive evaluations from users. In a navigation task, two tactile displays were compared, each integrated into a different wearable device, namely a glove and a vest. Despite differences in shape and size, both systems effectively led the user towards the target. The strengths and weaknesses of the two systems were highlighted by the users. In a similar context, two devices supporting Augmented Reality were compared, namely a pair of smartglasses and a smartphone. The experiment allowed us to identify the circumstances that favour the use of one or the other device. Considering the results of the experiments as a whole, we can outline a series of recommendations for developers of innovative systems. First of all, the importance of adequately involving users in order to identify intuitive modes of interaction with gestural interfaces is highlighted. Furthermore, it emerges that it is important to give the user the possibility of choosing the interaction modality that best suits the characteristics of the context, together with the possibility of personalizing the properties of each interaction modality to their own needs. Finally, the potential of wearable devices for interaction on the move is brought to light, together with the importance of finding the right balance between the amount of information the device is able to convey and its size.
Ursi, Biagio. „Le refus en interaction : une approche syntaxique et séquentielle de la négation“. Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE2184.
Our research focuses on rejection in conversation, from an interactional linguistic perspective. Rejection is sequentially characterized as a second pair part. Our analysis is based on a collection of instances from naturally occurring video data (ordinary conversations, dinner conversations, interactions in commercial settings, meal preparations, guided tours). We propose a fine-grained transcription of conversational excerpts, taking into account multimodal and verbal resources. From a perspective relying on interactional linguistics and Aix macrosyntax, we carry out a mixed analysis in order to study both sequential and syntactic characterizations of initiative and reactive actions in sequences involving rejection. The first part of our study focuses on the rejection of concrete offers dealing with objects; the second part concerns rejections of candidate answers dealing with confirmation requests. Our research is grounded in talk-in-interaction and we mobilize two approaches that operate in this field. The multimodal and interactional analysis allows us to highlight sequential patterns, which can also be characterized in macrosyntactic terms. In our data, negation is closely connected to the realization of rejection: it is considered both through the verbal resources that enable it to be expressed and in its physical manifestations (head shakes, hand gestures, facial expressions).
Rodriguez, Bertha Helena. „Modèle SOA sémantique pour la multimodalité et son support pour la découverte et l'enregistrement de services d'assistance“. Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0006.
Unimodal inputs and outputs in current systems have become very mature, with touch applications or distributed services for geo-localization or speech, audio and image recognition. However, the integration and instantiation of all these modalities lack an intelligent management of the acquisition and restitution context based on highly formalized notions reflecting common sense. This requires more dynamic system behavior and a more appropriate approach to managing the user environment. However, the technology required to achieve such a goal is not yet available in a standardized manner, both in terms of the functional description of unimodal services and in terms of their semantic description. This is also the case for multimodal architectures, where semantic management is produced by each project without a common agreement in the field to ensure interoperability, and is often limited to the processing of inputs and outputs or to fusion/fission mechanisms. To fill this gap, we propose a semantic service-oriented generic architecture for multimodal systems. This proposal aims to improve the description and discovery of modality components for assistance services: this is the SOA2m architecture. This architecture is fully focused on multimodality and is enriched with semantic technologies, because we believe that this approach will enhance the autonomous behavior of multimodal applications, provide a robust perception of user-system exchanges, and help control the semantic integration of human-computer interaction. As a result, the challenge of discovery is addressed using the tools provided by the field of semantic web services.
Pelurson, Sébastien. „Navigation multimodale dans une vue bifocale sur dispositifs mobiles“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM035/document.
Mobile devices are now ubiquitous in everyday computing. Technological advances and increasing mobile network performance allow users to manipulate more and more information on their mobile devices, changing the use they make of these devices, which are gradually replacing desktop computers. However, mobile devices are not used in the same way as desktops and face specific constraints. In particular, smaller screens fail to display as much information as a computer screen. In addition, these screens, mostly tactile, are used as both input and output devices, leading to occlusion of a portion of the screen during touch interaction. These findings and limitations give rise to the problem of interactive visualization of large amounts of information on mobile devices. We addressed this problem by considering two related research axes: on the one hand, information visualization, and on the other hand, interaction on mobile devices. For the first axis, we focused on visualization techniques that provide both an overview of the information space and a detailed subset of it. Indeed, a view of only a subset of the information space makes it difficult to understand because of the lack of context. Conversely, visualizing the complete information space on the screen of a mobile device makes it unreadable. For the second axis, we studied interaction techniques for navigating an information space. Given the variety of sensors available in today's mobile devices, there is a vast set of possibilities in terms of interaction modalities. We provide two types of contribution: conceptual and practical. First, we present a design space of navigation techniques on mobile devices: this design space enables us to describe, compare and design interaction modalities for the task of navigating an information space.
Second, we propose a conceptual model of multimodal navigation for navigating a multiscale information space. Based on a state of the art of visualization techniques on mobile devices, we designed, developed and experimentally tested a bifocal view on a mobile device. By relying on our design space and by operationalizing our conceptual model of navigation, we designed, developed and experimentally compared several multimodal interaction techniques for navigating a multiscale information space.
Notér, Hooshidar Annika. „Dansundervisning som förkroppsligad multimodal praktik : en studie om kommunikation och interaktion i dansundervisning“. Licentiate thesis, Stockholms universitet, Institutionen för pedagogik och didaktik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-107135.
Debras, Camille. „L'expression multimodale du positionnement interactionnel (multimodal stance-taking) : étude d'un corpus oral vidéo de discussions sur l'environnement en anglais britannique“. Thesis, Paris 3, 2013. http://www.theses.fr/2013PA030155.
In this research, we propose a multimodal analysis of stance-taking based on a collection of semi-guided discussions between pairs of friends who discuss environmental issues (2 h 20 min). All 16 speakers are university students who are native speakers of British English. We filmed, transcribed and annotated this video corpus in three compatible software tools, CLAN, PRAAT and ELAN. In this research, we defend a broad understanding of “language”, defined as encompassing all verbal and non-verbal semiotic resources involved in the dynamic and intersubjective co-construction of meaning during spoken interaction. We show that speakers integrate a wide range of verbal resources (segments, utterances) as well as vocal (intonation) and visual ones (gestures, postures and facial expressions), and synchronize these resources simultaneously and sequentially so as to take stances with respect to their interlocutors. On a theoretical level, our multi-level, multimodal approach brings together French utterer-centred approaches to language (Benveniste, 1966, Morel and Danon-Boileau, 1998), discursive-functional theories of stance-taking (Kärkkäinen, 2006, Du Bois, 2007), multimodal conversation analysis (C. Goodwin and M.H. Goodwin, 1992, Mondada, 2007), linguistic anthropology (Ochs, 1996) and gesture studies (Kendon, 2004, Müller, 2004, Streeck, 2009); our methodology combines qualitative analysis with systematic coding. This thesis starts by laying the theoretical and methodological bases for a multimodal study of stance-taking (Part 1); it then proposes that some gestures and facial expressions can be used as intersubjective visual stance markers (Part 2), before showing how speakers integrate words and syntax, voice, facial expressions, gestures and physical posture to take stances in interaction (Part 3).
Jourde, Frederic. „Collecticiel et Multimodalité : spécification de l'interaction la notation COMM et l'éditeur e-COMM“. Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00618919.
Pietrzak, Thomas. „Contributions à la dissémination d'informations haptiques dans un environnement multimodal“. Phd thesis, Université de Metz, 2008. http://tel.archives-ouvertes.fr/tel-00390057.
Touileb, Djaid Nadia. „Contribution à la mise en œuvre d’une architecture ambiante d’interaction homme-robot-environnement. Dans le cadre de la robotique d’aide à la personne dépendante“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV037/document.
The subject of this thesis is to provide an ambient architecture for human-robot-environment interaction, within the framework of robotic assistance for dependent persons. This architecture will enable the robot to take the changing context into account and continually provide a service to the user. The architecture uses the concept of ontology for the description of the environment. We have chosen to use the open-source tool PROTEGE because it allows the definition of the ontology and of the fusion and fission engines. Indeed, multimodal inputs will be merged, subdivided into elementary tasks, and sent to control the wheelchair with its manipulator arm. This architecture will be validated by specifications and simulations via temporal and stochastic Petri nets.
Tevissen, Yannis. „Diarisation multimodale : vers des modèles robustes et justes en contexte réel“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS014.
Speaker diarization, or the task of automatically determining "who spoke, when?" in an audio or video recording, is one of the pillars of modern conversation analysis systems. On television, the content broadcast is very diverse and covers nearly every type of conversation, from calm discussions between two people to impassioned debates and wartime interviews. The archiving and indexing of this content, carried out by the Newsbridge company, requires robust and fair processing methods. In this work, we present two new methods for improving systems' robustness via fusion approaches. The first method focuses on voice activity detection, a necessary pre-processing step for every diarization system. The second is a multimodal approach that takes advantage of the latest advances in natural language processing. We also show that recent advances in diarization systems make the use of speaker diarization realistic, even in critical sectors such as the analysis of large audiovisual archives or the home care of the elderly. Finally, this work presents a new method for evaluating the algorithmic fairness of speaker diarization, with the objective of making its use more responsible.
Wassrin, Maria. „Musicking : Kreativ improvisation i förskolan“. Licentiate thesis, Stockholms universitet, Barn- och ungdomsvetenskapliga institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-88733.
Forskarskolan: Globalisering, literacy och utforskande lärprocesser: Förskolebarns språk, läsande, skrivande och matematiserande (GUL).
Mohand, Oussaïd Linda. „Conception et vérification formelles des interfaces homme-machine multimodales : applications à la multimodalité en sortie“. Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2014. http://www.theses.fr/2014ESMA0022/document.
Multimodal Human-Computer Interfaces (HCI) offer users the possibility of combining interaction modalities in order to increase user interface robustness and usability. Specifically, output multimodal HCI allow a system to return to the user the information generated by the functional core, combining semantically different modalities. In order to design such interfaces for critical systems, we proposed a formal model for the design of output multimodal interfaces. The proposed model consists of two models: the semantic fission model describes the decomposition of the information to be returned into elementary information, and the allocation model specifies the allocation of the elementary information to modalities and media. We have also developed a detailed Event B formalization for the two models, semantic fission and allocation. This formalization has been instantiated on case studies and generalized in an Event B development process framework including the semantic fission and allocation models. It allows the verification of safety, liveness and usability properties to be carried out.
Gherman, Tatiana I. „Spoken and embodied interaction in facilitated computer-supported workplace meetings“. Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/36164.
Jacquet, Christophe. „Présentation opportuniste et multimodale d'informations dans le cadre de l'intelligence ambiante“. Paris 11, 2006. http://www.theses.fr/2006PA112267.
This research work takes place in the domain of human-computer interaction, and particularly multimodal interaction and ambient intelligence. It aims at specifying a theoretical model and a platform for the design and implementation of mobile user assistance systems. We introduce the KUP model, in which the system's functional core, the users and the presentation devices (screens, loudspeakers, etc.) are represented by logical entities. This model is original because it imposes neither spatial nor temporal coupling between the provision of information by the functional core to the user entity on the one hand, and the presentation of that information by a suitable device on the other hand. Both phases are opportunistic: they happen fortuitously, as (physical) users move around. When a user is located in the proximity of a number of presentation devices, the system must determine which device and which modality shall be used to convey information. First, an incremental algorithm is responsible for choosing a device while abiding by three ergonomic constraints: completeness, stability and display space optimization. Second, a tree-based algorithm selects and instantiates a modality while satisfying users' preferences. The KUP model and the algorithms have been implemented in the PRIAM platform (PResentation of Information in AMbient Intelligence), which has enabled us to carry out evaluations in mock-up environments. The evaluations have shown that dynamic display systems enable users to look up their information far more quickly than static displays.
Gonseth, Chloé. „Multimodalité de la communication langagière humaine : interaction geste/parole et encodage de distance dans le pointage“. Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00949090.
Salazar Gómez, Antonio José. „Développement d'une station d'imagerie médicale multimodalité et multimédia pour la téléradiologie interactive“. Compiègne, 1996. http://www.theses.fr/1996COMPD907.
Der volle Inhalt der QuelleRollet, Nicolas. „Analyse conversationnelle des pratiques dans les appels au Samu-Centre 15 : Vers une approche praxéologique d’une forme située « d’accord »“. Thesis, Paris 3, 2012. http://www.theses.fr/2012PA030097/document.
In the context of an ethnographically oriented Conversation Analysis approach, my work deals with coordination in the telephonic interaction of calls made to the French medical emergency number («15»). Two aspects of this coordination are explored: (1) the organization of questioning in the ternary sequential format «Question-Answer-Acknowledgement»; (2) the coordination between the production of these ternary sequential formats and interaction with the computerized system. This research is based on audio-visual data gathered at the Center of Reception and Regulation of Calls (in French: CRRA) of the SAMU of Versailles (France). The first aspect of the coordination (1) is addressed through an analysis of the various actions accomplished through an «OK» («d'accord», or its equivalent), itself following the answer to a question put to a caller (fireman, ambulance staff or a private individual) by the CRRA call takers («permanencière» in French). This response after an answer presents a wealth of prospective and retrospective features, in terms of the work performed by the participants to obtain and gather information about a medical problem, to ensure coordination in order to advance step by step, to investigate further, to infer, and to establish transitions in the interaction. The second aspect of the coordination (2) illustrates the complexity of the activities of the CRRA staff, who must, in a synchronized manner, be engaged in an exchange of a conversational nature and at the same time organize the gathering of information on the medical problem, while using objects such as a computer mouse, a keyboard and notebooks.
Clay, Alexis. „La branche émotion, un modèle conceptuel pour l’intégration de la reconnaissance multimodale d’émotions dans des applications interactives : application au mouvement et à la danse augmentée“. Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13935/document.
Computer-based emotion recognition is a growing field which creates new needs in terms of software modeling and the integration of existing models. This thesis describes a conceptual framework for designing emotionally-aware interactive software. Our approach is based upon conceptual results from the field of multimodal interaction: we redefine the concepts of modality and multimodality within the frame of passive emotion recognition. We then describe a component-based conceptual model relying on this redefinition. The emotion branch facilitates the design, development and maintenance of emotionally-aware systems. A multimodal, interactive, gesture-based emotion recognition software system based on the emotion branch was developed. This system was integrated within an augmented reality system to augment a ballet dance show according to the dancer's expressed emotions.
Pietrzak, Thomas. „Contributions à la dissémination d'informations haptiques dans un environnement multimodal“. Electronic Thesis or Diss., Metz, 2008. http://www.theses.fr/2008METZ017S.
Most computer interfaces rely primarily on vision to transmit information to users. However, some situations require interaction techniques that display information in a non-visual manner. We studied solutions using the sense of touch. After an overview of the interface design domain, in particular the notion of multimodality, we focused on a particular modality: haptics, which uses the sense of touch. Our contribution to this domain begins with the design and evaluation of tactile icons that encode information with pin matrices. They have been used in a guidance system that helps users explore geometric shapes through the sense of touch. We also designed and evaluated force-feedback icons that use a robot arm. These two icon systems have been used in an electric circuit exploration application. This application allows visually impaired users and sighted users to explore electric schematics together in collaboration. They are provided with visual and haptic information to understand circuits' shapes and components. This application uses a software architecture that we designed and detailed. This architecture provides building blocks to ease the design and development of multimodal applications, and especially applications using haptic feedback.
Appert, Damien. „Conception et évaluation de techniques d'interaction non visuelle optimisées pour de la transmission d'information“. Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30095/document.
Der volle Inhalt der QuelleIn situations where visual perception is strongly constrained or deficient, information must be made perceptible in a non-visual form while taking human sensory and memory capacities into account. For example, a blind person who wants to learn an itinerary must access it in a non-visual form and memorize it. Beyond the hardware aspect, however, the implementation of non-visual alternatives is still limited by the cognitive abilities of the user (comprehension, memorization, integration of various pieces of information, etc.). The purpose of this thesis is to contribute to the design of interaction techniques that optimize the non-visual transmission of information. To this end, I explored multimodality as a means of optimization that makes it possible to go beyond the limits of memorization. I focused on interaction techniques based on auditory and tactile modalities, minimizing the use of speech, in order to develop techniques suitable for different environments (flexibility), optimize the use of perceptual channels (exploiting the properties of sound in audio messages to transmit more information, for example), avoid limiting my techniques by barriers of language or comprehension and, finally, explore alternatives to synthesized speech alone. The work of this thesis led to the design, implementation and evaluation of non-visual, multi-form interaction techniques for different contexts, in particular the transmission of information such as pairs of coordinates and sequences of direction-distance pairs. To design my interaction techniques, I conducted a literature review in order to extract the main design factors for interaction techniques dedicated to the non-visual transmission of information. I then organized these factors in an analytical framework on which I relied to design each of my techniques.
Three separate experiments were conducted to evaluate the influence of the design factors on the effectiveness of the interactions and on user satisfaction. Among these factors are the involvement of users (active or passive), the presence of explicit help, the parallel transmission of several pieces of information, the main modality used, and the type of coding in which the information is encoded.
Pruvost, Gaëtan. „Modélisation et conception d’une plateforme pour l’interaction multimodale distribuée en intelligence ambiante“. Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112017/document.
Der volle Inhalt der QuelleThis thesis deals with ambient intelligence and the design of Human-Computer Interaction (HCI). It studies the automatic generation of user interfaces that are adapted to the interaction context in ambient environments. This problem raises design issues that are specific to ambient HCI, particularly regarding the reuse of multimodal and multi-device interaction techniques. The present work falls into three parts. The first part is an analysis of state-of-the-art software architectures designed to solve those issues. This analysis outlines the limits of current approaches and enables us to propose, in the second part, a new approach for the design of ambient HCI called DAME. This approach relies on the automatic and dynamic association of software components that build a user interface. We propose and define two complementary models that describe the ergonomic and architectural properties of the software components. The design of such components is organized in a layered architecture that identifies reusable levels of abstraction of an interaction language. A third model, called the behavioural model, allows the specification of recommendations about the runtime instantiation of components. We propose an algorithm that generates context-adapted user interfaces and evaluates their quality according to the recommendations derived from the behavioural model. In the third part, we detail a platform that implements the DAME approach. This platform is used in a qualitative experiment involving end users. Encouraging preliminary results have been obtained and open new perspectives on multi-device and multimodal HCI in ambient computing.
Fricke, Ellen, und Jana Bressem. „Gesten - gestern, heute, übermorgen. Vom Forschungsprojekt zur Ausstellung“. Universitätsverlag Chemnitz, 2020. https://monarch.qucosa.de/id/qucosa%3A33959.
Der volle Inhalt der QuelleThe book „Gestures – past, present, future: From a research project to an exhibition“, edited by Ellen Fricke and Jana Bressem, is a publication between catalogue, art and science. Drawing on different disciplines, it presents a tour through the exhibition „Gestures – past, present, future“, which was and is shown in the Saxon Museum of Industry in Chemnitz and the Museums for Communication in Berlin and Frankfurt am Main. Articles, interviews and photo documentations put the exhibits into the context of science as well as aesthetic reflection, and aim to initiate a societal discourse about the world of tomorrow and the day after tomorrow. Contents: I Gestures – past, present, future; II Hands and objects in language, culture and technology; III How gestures and hands change: evolution, anthropology, technology; IV Hands, things and gestures reflected aesthetically: interviews and texts; V Industrial culture in transition: from the hand grip to gesture control; VI Documentation
de Roock, Robert Santiago. „Literacy as an Interactional Achievement: The Material Semiotics of Making Meaning Through Technology“. Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/578718.
Der volle Inhalt der Quelle
Song, Le. „Multimodal Interactional Practices in Live Streams on Twitter“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT019.
Der volle Inhalt der QuelleAs an emerging form of mediated interaction, live streaming has become a rapidly growing practice that combines the technical and interactional features of video-mediated interaction and multi-party chat. Live streaming with mobile devices on multiple platforms is thus a practice in which streamers and viewers interact through highly asymmetric means: the streamer's video display and the viewers' written text. This doctoral dissertation examines live streams as interactional phenomena from a sequential perspective. Drawing on video-recorded data from ordinary users' naturally unfolding activities in daily-life-oriented live streams on Twitter (now ‘X’), and taking ethnomethodology and conversation analysis (EMCA) as its theoretical and methodological perspective, the thesis explores how streamers and viewers use multiple (e.g., spoken, written and embodied) resources, and manipulate the affordances of their devices, to establish the participation framework of live-streaming interactions and to achieve different joint actions step by step. The dissertation consists of four main research articles, each focusing on a typical interactional phenomenon in live streaming. All of the articles have been published or are under review. Article I investigates the openings of live streams. Unlike phone conversations, which have a canonical opening sequence, live-stream openings appear more variable, with laminated participation frames, although there is usually a recognizable ‘installation’ phase in which the stream activity begins. We also identified interactional concerns in the opening: the streamers' wait for an adequate audience, their collective and individual management of viewers within a guest/host relationship, and the participants' concern for the immediate intelligibility of the stream. Article II discusses how streamers and viewers manage attention and engagement through noticing-based actions.
It looks at how streamers and viewers produce noticing sequences and noticing-based sequences, and how the orientation towards noticing may lead to a distinctive form of ‘noticing effervescence’. Article III inspects the activity of tasting in live streams, re-examining tasting in this particular ecology as an interactive process that combines individual sensory experience with a public, witnessable and intersubjective dimension. Article IV investigates the organization of closing sequences in live streams. It shows that while participants orient to the sequential organization of closings in ordinary conversation, they do so in a way that is particularly sensitive to the affordances of live video streams. The thesis thus provides a systematic analysis of the most characteristic interactional properties of live streaming.
Hudson, Nancie. „Practical Theology in an Interpretive Community: An Ethnography of Talk, Texts and Video in a Mediated Women's Bible Study“. Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6713.
Der volle Inhalt der Quelle
Wallentin, Rebecca. „Lärares ledarskap och kommunikation : En interaktionsanalytisk studie av tillämpade ledarstrategier i klassrummet med fokus på multimodalitet“. Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-96404.
Der volle Inhalt der QuelleReboud, Alison. „Towards automatic understanding of narrative audiovisual content“. Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS398.pdf.
Der volle Inhalt der QuelleModern storytelling is digital and video-based, yet understanding the stories contained in videos remains a challenge for automatic systems. With multimodality as a transversal theme, this research thesis breaks the "understanding" task down into the following challenges: predicting memorability, and summarising and modelling stories from audiovisual content.