Theses on the topic "Interaction humain et machine"
Below are the top 50 dissertations (graduate and doctoral theses) on the research topic "Interaction humain et machine".
Buseyne, Julien. "Jeu vidéo et traduction, étude d’une relation humain-machine". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLE001/document.
Since it came into being in the early 1970s, the video games industry has made its way to the top of the entertainment industry. The translation of its products played an important part in this evolution, allowing some platform holders, developers and publishers to become worldwide actors. Video games are interactive technical artifacts. Their fabrication relies both on the coding of software acting as a playable structure and on artistic assets used to build its representation. Translators operating in such a context are confronted with a mesh of challenges that they need to overcome in order to successfully complete their task. This PhD thesis is a comprehensive study of the interaction between video games and translation, considered as a human-machine relationship. Unfortunately, the rules enforced by the industry regarding the protection of intellectual property, the complexity of the technical artifacts involved and the size of the productions are an impediment to any analysis. In order to remedy this situation, this study endeavors to analyze translation as a technical operation performed on a type of artifact, video games, while considering the digital technical system as an environment. Case studies complete this work. Its conclusion gives several leads for research and development, to be investigated in partnership with actors in the localization or video games industry.
Coutrix, Céline. "Interfaces de réalité mixte : conception et prototypage". Grenoble 1, 2009. http://www.theses.fr/2009GRE10064.
Research in Human-Computer Interaction has explored Mixed Reality systems for several decades. Mixed interfaces seek to smoothly merge the physical and digital (data processing) environments. Facing a lack of capitalized knowledge and design support for such systems, our work aims at defining a unifying model of interaction with mixed reality systems that capitalizes on existing approaches in this domain. This model, called the Mixed Interaction Model, allows a designer to describe, characterize and explore interaction techniques. In order to do so, we focus on the physical-digital objects taking part in the interaction with the user. In order to operationalize this interaction model during design and allow designers to realize interactive sketches simultaneously, we developed a prototyping tool. This tool capitalizes on the concepts of the Mixed Interaction Model. We conducted conceptual and experimental evaluations, by considering existing validation frameworks, and we designed several systems in collaboration with design actors.
Horchani, Meriam. "Vers une communication humain-machine naturelle : stratégies de dialogue et de présentation multimodales". PhD thesis, Grenoble 1, 2007. http://www.theses.fr/2007GRE10290.
This thesis focuses on multimodal human-computer communication for public service systems. In this context, the naturalness of the communication relies on the sensory-motor, cognitive and rhetorical accessibility of the interactive system by the user. Towards this goal of natural communication, we identify the key role of dialogic and presentation strategies: 1) a dialogic strategy of a cooperative information system defines the behavior of the system and the content to convey; examples include relaxation, statement and restriction; 2) a presentation strategy defines a multimodal presentation specification, specifying the allocated modalities and how these modalities are combined. Highlighting the intertwined relation of content and presentation, we identify a new software component, namely the dialogic strategy component, as a mediator between the dialogue controller and the concrete presentation components within the reference Arch software architecture. The content selection and the presentation allocation managed by the dialogic strategy component are based on various constraints, including the inherent characteristics of modalities, the availability of modalities, and explicit choices or preferences of the user. In addition to the software development of this new component in two systems, we developed a design tool which allows ergonomists and non-programmers to define and configure the dialogic strategies and to generate the dialogic strategy component of a particular system, as part of an iterative user-centered design process. Moreover, this tool is integrated with a simulation framework for Wizard of Oz experiments.
Horchani, Meriam. "Vers une communication humain-machine naturelle : stratégies de dialogue et de présentation multimodales". PhD thesis, Université Joseph Fourier (Grenoble), 2007. http://tel.archives-ouvertes.fr/tel-00258072.
Callupe, Luna Jhedmar Jhonathan. "Étude du fauteuil roulant Volting : interaction, commande et assistance". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG050.
A significant portion of mobility solutions for wheelchairs focuses on assisting the user's movement. Most wheelchairs aim to provide a tailored solution that ensures appropriate driving and stability for the user. The Volting project, developed at LISV since 2020, introduces a new research direction: the development of a wheelchair capable of offering enhanced postural mobility for the user based on lateral tilting. The main objective pursued in the Volting project is to propose a reappropriation of the user's body with the imbalance caused by the tilting. In this regard, an initial proposition was made, consisting of a wheelchair capable of tilting laterally in proportion to the user's inclination. This additional freedom of movement provides the user with a greater capacity for bodily gestures, and it is in this context that wheelchair dancing was chosen as the application domain. In this thesis, the Volting prototype was studied from three perspectives: interaction, control, and assistance. Firstly, the interaction between the user and Volting refers to the relationship between the user's trunk inclination and the tilt of the Volting wheelchair. It was analyzed to find an appropriate behavior. Consequently, a model analysis was conducted to study the parameters involved in this interaction. Experiments demonstrated that the use of our model provided the appropriate parameters for correct kinematics. The second focus of this study is the user's control of the Volting wheelchair. For this purpose, the use of a multi-sensor device called WISP was proposed. This device takes the user's trunk posture or hand position as input commands to control Volting. The implemented control logic allows the user to control the wheelchair with greater freedom in executing gestures. WISP was tested by a professional and an amateur wheelchair dancer, showing their quick adaptation to the device's use and their motivation towards dance practice. The third area of investigation concerns the use of the incline assistant, named "Glissiere", in Volting. This device relies on shifting the user's seat to promote or counteract the lateral tilt of the Volting wheelchair and, consequently, the user's body tilt. To pre-evaluate this solution, experiments showed that users with different morphologies, who were unable to use Volting to its full capacity, could use Volting with the assistance of "Glissiere".
Avril, Eugénie. "Automatisation de l'information et fiabilité : effets sur le comportement humain". Electronic Thesis or Diss., Toulouse 2, 2020. http://www.theses.fr/2020TOU20073.
In our daily life, it is easy to notice the overwhelming increase of automated systems. Studying the consequences of the reliability problems of these systems on human behavior has become a necessity for different fields, especially for human factors. The purpose of this thesis is to study the effects on human behavior of automated systems whose reliability is imperfect. More specifically, it focuses on automated systems that support the information function as presented in the model of Parasuraman, Sheridan and Wickens (2000). These effects are studied across two main sectors: the aeronautical sector and the road freight transport sector. Using these two sectors allows the implementation of complementary methods. More precisely, two types of support of the information function with reliability problems have been investigated: one through an aircraft piloting simulation task and the second through a planning scenario evaluation task. These studies allowed, on the one hand, (1) to deepen the knowledge on the reliability of automated systems according to the functions supported and, on the other hand, (2) to bring new knowledge on the implementation of new automated systems supporting the planning activity, where studies on the reliability of a system supporting information are non-existent.
Bailly, Gilles. "Techniques de menus : caractérisation, conception et évaluation". Grenoble 1, 2009. http://www.theses.fr/2009GRE10062.
Menus are used for exploring and selecting commands in interactive applications. They are widespread in current applications and used by a large variety of users. As a consequence, menus are at the heart of Human-Computer Interaction (HCI) and motivate many studies in the field. Facing the large variety of menu techniques that have been designed, it is however difficult to have a clear understanding of the design possibilities, to understand the advances, and to compare existing menu techniques. In this context, this thesis in HCI proposes a design space of menu techniques called MenUA. MenUA is based on a list of criteria that define a coherent framework of design issues for menus. MenUA also helps application designers to make informed design choices and is a support for exploring design alternatives. Stemming from MenUA, we designed, developed and evaluated four menu techniques: Wave menus, Flower menus, Leaf menus and Multi-Touch Menus. Wave menus improve the novice mode of Marking menus by making the navigation within the hierarchy of commands easier. Flower menus increase the menu breadth of Marking menus while supporting good learning performance of the expert mode. Leaf menus are linear menus enriched with stroke shortcuts to facilitate the selection of commands on small handheld touch-screen devices. Finally, Multi-Touch Menus exploit the recent capabilities of multi-touch surfaces in order to allow users to explore and select commands using the five fingers of the hand.
Clodic, Aurélie. "Supervision pour un robot interactif : action et interaction pour un robot autonome en environnement humain". Toulouse 3, 2007. http://www.theses.fr/2007TOU30248.
Human-robot collaborative task achievement requires specific task supervision and execution. In order to close the loop with their human partners, robots must maintain an interaction stream in order to communicate their own intentions and beliefs and to monitor the activity of their human partner. In this work we introduce SHARY, a supervisor dedicated to collaborative task achievement in the human-robot interaction context. The system deals on the one hand with task refinement and on the other hand with the communication needed in the human-robot interaction context. To this end, each task is defined at a communication level and at an execution level. This system has been developed on the robot Rackham for a tour-guide demonstration and has then been used on the robot Jido for a fetch-and-carry task to demonstrate the genericity of the system.
He, Ruan. "Architecture et mécanismes de sécurité pour l'auto-protection des systèmes pervasifs". Paris, Télécom ParisTech, 2010. https://pastel.hal.science/pastel-00579773.
In this thesis, we propose:
- A three-layer abstract architecture: a three-layer self-protection architecture is applied to the framework. A lower execution space provides the running environment for applications, a control plane controls the execution space, and an autonomic plane guides the control behavior of the control plane, taking into account system status, context evolution, administrator strategy and user preferences.
- An attribute-based access control model: the proposed model, Generic Attribute-Based Access Control (G-ABAC), is an attribute-based access control model which improves both policy neutrality, in order to specify other access control policies, and flexibility, in order to enable fine-grained manipulation of a policy.
- A policy-based framework for authorization integrating autonomic computing: the policy-based approach has shown its advantages when handling complex and dynamic systems. By integrating autonomic functions into this approach, an Autonomic Security Policy Framework provides a consistent and decentralized solution to administer G-ABAC policies in large-scale distributed pervasive systems. Moreover, the integration of autonomic functions enhances user-friendliness and context-awareness.
- A terminal-side access control enforcement OS: the distributed authorization policies are then enforced by an OS-level authorization architecture. It is an efficient OS kernel which controls resource access in a dynamic manner to reduce authorization overhead. This dynamic mechanism also improves the integrability of different authorization policies.
- A Domain Specific Language (DSL) for adaptation policy specification.
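To make the attribute-based access control idea concrete, here is a minimal sketch of an ABAC permit check in Python. It illustrates the general principle behind policy models such as G-ABAC, not the thesis's actual model; the attribute names and the sample rule are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Attributes = Dict[str, str]

@dataclass
class Rule:
    """A single ABAC rule: a predicate over subject, resource and context attributes."""
    description: str
    predicate: Callable[[Attributes, Attributes, Attributes], bool]

@dataclass
class Policy:
    rules: List[Rule] = field(default_factory=list)

    def permits(self, subject: Attributes, resource: Attributes, context: Attributes) -> bool:
        # Access is granted if at least one rule evaluates to True (permit-overrides).
        return any(r.predicate(subject, resource, context) for r in self.rules)

# Illustrative policy: a nurse may read medical records only while on duty.
policy = Policy(rules=[
    Rule(
        description="nurses read records during their shift",
        predicate=lambda s, r, c: (
            s.get("role") == "nurse"
            and r.get("type") == "medical_record"
            and c.get("on_duty") == "yes"
        ),
    )
])

print(policy.permits({"role": "nurse"}, {"type": "medical_record"}, {"on_duty": "yes"}))  # True
print(policy.permits({"role": "nurse"}, {"type": "medical_record"}, {"on_duty": "no"}))   # False
```

The point of the attribute-based formulation is that the decision depends only on attribute values, so the same enforcement code can host different policies, which is the policy-neutrality property the abstract refers to.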
Vo, Dong-Bach. "Conception et évaluation de nouvelles techniques d'interaction dans le contexte de la télévision interactive". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0053/document.
Television has never stopped growing in popularity and offering new services to viewers. These interactive services make viewers more engaged in television activities. Unlike with a computer, viewers interact with a distant screen using a remote control from their sofa, a setting which is not convenient for a keyboard and a mouse. The remote control and the current interaction techniques associated with it are struggling to meet viewers' expectations. To address this problem, the work of this thesis explores the possibilities offered by the gestural modality to design new interaction techniques for interactive television, taking into account its context of use. More specifically, in a first step, we present the specific context of television usage. Then, we propose a literature review of research trying to improve the remote control. Finally, we focus on gestural interaction. To guide the design of interaction techniques based on the gestural modality, we introduce a taxonomy that attempts to unify gesture interaction constrained by a surface and free-hand gesture interaction. We then propose various techniques for gestural interaction in two scopes of research: instrumented gestural interaction techniques, which improve the expressiveness of the traditional remote control, and free-hand gestural interaction, exploring the possibility of performing gestures on the surface of the belly to control the television set.
Marache-Francisco, Cathie. "Gamification des interactions humain-technologie : représentation, conception et évaluation d’un guide pour la gamification des interfaces". Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0365.
Gamification is the process of using game elements in professional digital systems, elements which are tailored to users' profiles in order to increase their motivation and commitment, with an emphasis on appealing, even amusing, interactions. This digital interface design modality raises questions for ergonomics, especially when applied to professional contexts. Gamification needs to be defined, designed for and evaluated. Furthermore, the mechanism behind user commitment and the meaning of gamification need to be carefully considered. The first experiment analyzes the perception of gamification by designers: they have to identify the ludic parts of gamified interfaces and then to categorize them. Two dimensions are identified: cosmetic and involvement. The second experiment analyzes the perception of gamification by end users through a comparison between gamified and non-gamified versions of an interface. Analyses reveal that gamification has a primary impact, which positively affects the interaction, and an unwanted secondary impact. Then, a design guide is created, which consists of a description of the design process and a toolbox (main principles, decision trees, design grid). It is validated through a third experiment: subjects, split into two groups (with / without the guide), are asked to gamify a system. The results show that the guide favors fluidity, flexibility, novelty and detail. To conclude, this work provides conceptual details as well as a discussion about the ideological side of gamification and the psychological techniques used to engage and motivate as well as to persuade, change behavior and control. Finally, next steps for research are suggested.
Flavigné, David. "Planification de mouvement interactive : coopération humain-machine pour la recherche de trajectoire et l'animation". Toulouse 3, 2010. http://thesesups.ups-tlse.fr/983/.
This thesis presents a motion planning method that integrates user input into the planning loop to find a path and animate a virtual character. After a general introduction, a general presentation of the motion planning problem is made and the best-known algorithms are presented in the second chapter. Then, two works on which this chapter is based are presented in detail. A new motion planning method designed for planning in human environments is described. Several tests and a study are carried out on different environments and show the advantages of the approach. The third chapter introduces a new interactive motion planning method that allows a human operator and an algorithm to cooperate in a single interaction loop. This method is based on a pseudo-force exchange between the user and the algorithm through a virtual scene, using interactive devices such as a space mouse or a haptic arm. Several examples illustrate the approach and an automotive case is presented. An analysis shows the influence of several parameters on performance. The fourth chapter presents an intuitive and interactive character animation method. Using a previous work based on motion-capture interpolation for animation and the algorithm of the previous chapter, this method introduces interactivity to create animated trajectories for virtual characters that follow the user's intentions. Some examples illustrate the method and show its advantages.
Rosso, Juan. "Surfaces malléables pour l'interaction mobile et tangible à distance". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM078/document.
Sliders are one of the most used widgets to control continuous parameters such as brightness, sound volume, or the temperature of a smart house. On mobile phones, sliders are represented graphically, requiring the user's visual attention. They are mostly operated with a single thumb. While large sliders offer better performance, they present areas that are difficult for the thumb to reach. This work explores different tangible slider designs to offer eyes-free and efficient interaction with the thumb. The novel designs that we explored are based on a design space encompassing graphical solutions and the unexplored tangible solutions. To evaluate our designs, we built prototypes and tested them in three experiments. In our first experiment, we analyzed the impact of the tangible slider's length on performance: either within the thumb's comfortable area or not. In our second experiment, we analyzed the performance of an extensible tangible design that allows operation within the comfortable area of the thumb. In our third experiment, we analyzed the performance of a bi-modal deformable tangible design that allows operation within the comfortable area of the thumb and, beyond this area, with the index finger on the back of the device. This work contributes to the literature by: first, providing a design space for one-handed interaction with deformable tangible elements; second, analyzing the impact on performance when manipulating tangible sliders outside the thumb's comfortable area; and third, analyzing the impact that deformation has during manipulation.
Vo, Dong-Bach. "Conception et évaluation de nouvelles techniques d'interaction dans le contexte de la télévision interactive". Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0053.
Television has never stopped growing in popularity and offering new services to viewers. These interactive services make viewers more engaged in television activities. Unlike with a computer, viewers interact with a distant screen using a remote control from their sofa, a setting which is not convenient for a keyboard and a mouse. The remote control and the current interaction techniques associated with it are struggling to meet viewers' expectations. To address this problem, the work of this thesis explores the possibilities offered by the gestural modality to design new interaction techniques for interactive television, taking into account its context of use. More specifically, in a first step, we present the specific context of television usage. Then, we propose a literature review of research trying to improve the remote control. Finally, we focus on gestural interaction. To guide the design of interaction techniques based on the gestural modality, we introduce a taxonomy that attempts to unify gesture interaction constrained by a surface and free-hand gesture interaction. We then propose various techniques for gestural interaction in two scopes of research: instrumented gestural interaction techniques, which improve the expressiveness of the traditional remote control, and free-hand gestural interaction, exploring the possibility of performing gestures on the surface of the belly to control the television set.
Fekete, Jean-Daniel. "Nouvelle génération d'Interfaces Homme-Machine pour mieux agir et mieux comprendre". Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2005. http://tel.archives-ouvertes.fr/tel-00876183.
Bouzekri, Elodie. "Notation et processus outillé pour la description, l'analyse et la compréhension de l'automatisation dans les systèmes de commande et contrôle". Thesis, Toulouse 3, 2021. http://www.theses.fr/2021TOU30003.
Automation enables systems to execute some functions without outside control and to adapt the functions they execute to new contexts and goals. Systems with automation are used more and more to help humans in everyday tasks, for example the dishwasher. Systems with automation are also used to help humans in their professional life. For example, in the field of aeronautics, automation has gradually reduced flight crews from 4 pilots to 2. Automation was first considered as a way to increase performance and reduce effort by migrating tasks previously allocated to humans to systems, under the hypothesis that systems would be better than humans at performing certain tasks and vice versa. Paul Fitts proposed MABA-MABA (Machines Are Better At - Men Are Better At), a task and function allocation method based on this hypothesis. In line with this hypothesis, various descriptions of levels of automation have been proposed. The 10 Levels of Automation (LoA) of Parasuraman, Sheridan and Wickens describe different task and function allocations between the human and the system. The higher the level of automation, the more tasks migrate from human to system. These approaches have been the subject of criticism. "MABA-MABA or Abracadabra? Progress on Human-Automation Coordination" by Dekker and Woods highlights that automation leads to new tasks allocated to humans to manage this automation. Moreover, they recall that these approaches hide the cooperative aspect of the human-system couple. To characterize human-system cooperation, the importance of considering, at design time, the allocation of authority, responsibility and control, as well as the initiative to modify these allocations during the activity, has been demonstrated. However, existing approaches describe a high-level design of automation and cooperation early in the design and development process. These approaches do not provide support for reasoning about the allocation of resources, control transitions, responsibility and authority throughout the design and development process. The purpose of this thesis is to demonstrate the possibility of analyzing and describing, at a low level, tasks and functions as well as the cooperation between humans and the system with automation. This analysis and this description make it possible to characterize tasks, functions and the cooperation in terms of authority, responsibility, resource sharing and control transition initiation. The aim of this work is to provide a framework and a model-based, tool-supported process to analyze and understand automation. In order to show the feasibility of this approach, this thesis presents the results of the application of the proposed process to an industrial case study in the field of aeronautics.
George, Sébastien. "Interactions et communications contextuelles dans les environnements informatiques pour l'apprentissage humain". Habilitation à diriger des recherches, INSA de Lyon, 2010. http://tel.archives-ouvertes.fr/tel-00557182.
Grizou, Jonathan. "Apprentissage simultané d'une tâche nouvelle et de l'interprétation de signaux sociaux d'un humain en robotique". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0146/document.
This thesis investigates how a machine can be taught a new task from unlabeled human instructions, that is, without knowing beforehand how to associate the human communicative signals with their meanings. The theoretical and empirical work presented in this thesis provides means to create calibration-free interactive systems, which allow humans to interact with machines, from scratch, using their own preferred teaching signals. It therefore removes the need for an expert to tune the system for each specific user, which constitutes an important step towards flexible personalized teaching interfaces, a key for the future of personal robotics. Our approach assumes the robot has access to a limited set of task hypotheses, which include the task the user wants to solve. Our method consists of generating interpretation hypotheses of the teaching signals with respect to each hypothetic task. By building a set of hypothetic interpretations, i.e. a set of signal-label pairs for each task, the task the user wants to solve is the one that best explains the history of interaction. We consider different scenarios, including a pick-and-place robotics experiment with speech as the modality of interaction, and a navigation task in a brain-computer interaction scenario. In these scenarios, a teacher instructs a robot to perform a new task using initially unclassified signals, whose associated meaning can be a feedback (correct/incorrect) or a guidance (go left, right, up, ...). Our results show that a) it is possible to learn the meaning of unlabeled and noisy teaching signals as well as a new task at the same time, and b) it is possible to reuse the acquired knowledge about the teaching signals for learning new tasks faster. We further introduce a planning strategy that exploits uncertainty from the task and the signals' meanings to allow more efficient learning sessions. We present a study where several real human subjects successfully control a virtual device using their brain and without relying on a calibration phase. Our system identifies, from scratch, the target intended by the user as well as the decoder of brain signals. Based on this work, but from another perspective, we introduce a new experimental setup to study how humans behave in asymmetric collaborative tasks. In this setup, two humans have to collaborate to solve a task, but the channels of communication they can use are constrained and force them to invent and agree on a shared interaction protocol in order to solve the task. These constraints allow analyzing how a communication protocol is progressively established through the interplay and history of individual actions.
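To make the central idea of this entry concrete (label the signal history under each task hypothesis and keep the hypothesis that best explains it), here is a small self-contained sketch under strongly simplified assumptions: a 1-D world, Gaussian feedback signals and a simple likelihood score. It illustrates the principle only and is not the thesis's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: positions 0..4; the hidden target the user wants the agent to reach
# is one of them (the task hypotheses). After each random move the user emits a
# noisy 1-D "feedback" signal whose meaning (correct/incorrect move) is unknown
# to the agent.
positions = np.arange(5)
true_target = 3
signal_means = {"correct": 1.0, "incorrect": -1.0}  # unknown to the agent

history = []  # (previous_state, new_state, signal_value)
state = 0
for _ in range(40):
    move = rng.choice([-1, 1])
    new_state = int(np.clip(state + move, 0, 4))
    meaning = "correct" if abs(new_state - true_target) < abs(state - true_target) else "incorrect"
    history.append((state, new_state, rng.normal(signal_means[meaning], 0.5)))
    state = new_state

def log_likelihood(target):
    """Score a task hypothesis: label each past signal under this hypothesis,
    fit one Gaussian per hypothesized label, and sum the log-likelihoods."""
    labels = np.array([abs(n - target) < abs(p - target) for p, n, _ in history])
    signals = np.array([s for _, _, s in history])
    ll = 0.0
    for value in (True, False):
        group = signals[labels == value]
        if len(group) < 2:
            return -np.inf  # hypothesis cannot be scored reliably
        mu, sigma = group.mean(), group.std() + 1e-6
        ll += np.sum(-0.5 * ((group - mu) / sigma) ** 2 - np.log(sigma))
    return ll

scores = {t: log_likelihood(t) for t in positions}
print("estimated target:", max(scores, key=scores.get), "| true target:", true_target)
```

Under the correct hypothesis the hypothesized labels split the signals into two tight clusters, so its likelihood is usually highest; wrong hypotheses mix the two signal populations and are penalized.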
Flavigne, David. "Planification de mouvement interactive: coopération humain-machine pour la recherche de trajectoire et l'animation". PhD thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00538807.
Hoarau, Raphaël. "Interaction et visualisation avec des liens de dépendances". Phd thesis, Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2776/.
Although many interactive tools allow users to manipulate individual objects effectively by applying principles such as direct manipulation, interaction with multiple objects has been little studied so far. Our thesis is that it is important to design interactions for the synchronous manipulation of multiple objects that are efficient while fostering exploratory design. From contextual inquiries, we describe an analysis of needs which established a set of requirements for interactions on sets of objects. We present four interactive tools to illustrate concepts of interaction with structures: delegation links, which are based on the delegation mechanism of prototype-based languages and allow establishing dependencies between objects (and creating clones) and extending the scope of interactions by propagating them; ManySpector, a new type of property inspector that reveals an implicit structuring of a scene and allows building graphical queries and selections with instrumental interactions; IHR, a tool that can automatically create a hierarchy of delegated properties; and Histoglass, a movable lens that allows the user to locally manipulate properties and past user actions. We also present the results of evaluations conducted with users in order to measure the interest of the concepts provided by the tools developed. All of this work has led us to the draft of a new interaction paradigm, "structural interaction", which extends the paradigms of direct manipulation and instrumental interaction.
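Since the delegation links described above borrow from the delegation mechanism of prototype-based languages, a tiny sketch of prototype-style delegation may help; the Node class and its properties are invented for illustration and are not the thesis's implementation.

```python
class Node:
    """Minimal prototype-style delegation: a property not set locally is looked
    up on the prototype, so changing the prototype propagates to its clones."""
    def __init__(self, prototype=None, **props):
        self.prototype = prototype
        self.props = dict(props)

    def get(self, name):
        if name in self.props:
            return self.props[name]
        if self.prototype is not None:
            return self.prototype.get(name)
        raise KeyError(name)

    def set(self, name, value):
        self.props[name] = value

master = Node(fill="blue", width=2)
clone = Node(prototype=master)                 # delegates everything to master
print(clone.get("fill"))                       # blue (delegated)
master.set("fill", "red")                      # editing the master propagates...
print(clone.get("fill"))                       # red
clone.set("fill", "green")                     # ...until the clone overrides locally
print(clone.get("fill"), master.get("fill"))   # green red
```

This is the mechanism that lets an edit on one "master" object propagate to many dependent objects at once while still allowing local overrides.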
Bouyer, Antoine. "Plateformes et services multimodaux basés sur des interfaces plastiques". Caen, 2010. http://www.theses.fr/2010CAEN2048.
This thesis brings together two research domains, namely multimodality and plasticity of interfaces. Multimodality allows a system to be accessed simultaneously through different modes of communication. The realization of such an infrastructure is difficult because of the technical and functional diversity of the devices that may be connected. Plasticity defines abstract interfaces that are then converted to different formats. Although these two domains have been studied extensively in the past, no one has, to date, used plasticity to facilitate the design and development of multimodal services. The work carried out during this thesis was to think through, design, and implement such a platform. This thesis is divided into three parts. The first part describes the technology existing at the beginning of the thesis; more precisely, we present the state of the art of the two domains. We also discuss the architecture of the previously developed PMX platform, from which we reuse some concepts. The second part deals with our reasoning throughout this study: we developed two successive multimodal platforms, and we explain in detail the reasons and the issues that pushed us to improve upon the existing technology. The final section presents the validation and implementation of our work through the different services we have developed and experimented with. The market opportunities currently being transferred are also presented.
Hoarau, Raphaël. "Interaction et Visualisation avec des liens de dépendances". PhD thesis, Université Paul Sabatier - Toulouse III, 2013. http://tel.archives-ouvertes.fr/tel-00976601.
Rodriguez, Bertha Helena. "Modèle SOA sémantique pour la multimodalité et son support pour la découverte et l'enregistrement de services d'assistance". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0006/document.
Unimodal inputs and outputs in current systems have become very mature, with touch applications or distributed services for geo-localization or speech, audio and image recognition. However, the integration and instantiation of all these modalities lack an intelligent management of the acquisition and restitution context, based on highly formalized notions reflecting common sense. This requires more dynamic system behavior, with a more appropriate approach to managing the user environment. However, the technology required to achieve such a goal is not yet available in a standardized manner, both in terms of the functional description of unimodal services and in terms of their semantic description. This is also the case for multimodal architectures, where semantic management is produced by each project without a common agreement in the field to ensure interoperability, and is often limited to the processing of inputs and outputs or to fusion/fission mechanisms. To fill this gap, we propose a semantic service-oriented generic architecture for multimodal systems. This proposal aims to improve the description and the discovery of modality components for assistance services: this is the SOA2m architecture. This architecture is fully focused on multimodality and is enriched with semantic technologies, because we believe that this approach will enhance the autonomous behavior of multimodal applications, provide a robust perception of user-system exchanges, and help control the semantic integration of the human-computer interaction. As a result, the challenge of discovery is addressed using the tools provided by the field of semantic web services.
Hurter, Christophe. "Caractérisation de visualisations et exploration interactive de grandes quantités de données multidimensionnelles". PhD thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00610623.
Velo, Jérôme. "Interactions entre un utilisateur et un système informatique : application à un système d'e-administration". Rouen, 2009. http://www.theses.fr/2009ROUES056.
We present here a mediation system allowing interoperability between data that are heterogeneous both in their nature and in their source. The purpose of this system is to enable interactions between a user and a computerized system dedicated to an e-administration platform in Madagascar. After a state of the art concerning possibilities of access to digital devices in comparison with three other African countries (South Africa, Senegal and Cape Verde), we describe the SOA method with which we have implemented an interactive MVC model for building a prototype platform based on Java EE. The computing choices are validated by single-workstation experiments. The interaction tool, which we adapted to Malagasy realities, remains to be tested at full scale, not only from a technical point of view, but also from a social and economic point of view.
Mazuel, Laurent. "Traitement de l'hétérogénéité sémantique dans les interactions humain-agent et agent-agent". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://tel.archives-ouvertes.fr/tel-00413004.
Most approaches segment this processing according to the issuer of the request (human or agent). We believe, on the contrary, that it is possible to propose an interaction model common to both situations. We thus first present an algorithm for the semantic interpretation of a command that is independent of the type of interaction (human-agent or agent-agent). This algorithm considers the relationship between "what is understood" from the command and "what is possible" for the machine. This relationship feeds into a response selection system based on a measure of the degree of semantic relatedness. We then propose such a measure, designed to take into account more information than most current measures.
We then study the implementations we carried out in the human-agent and agent-agent settings. For the human-agent implementation, one specificity is the use of natural language, which requires the use of language modeling tools. For the agent-agent implementation, we propose an adaptation of our architecture relying on agent interaction protocols.
Nemery, Alexandra. "Elaboration, validation et application de la grille de critères de persuasion interactive". Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0372.
Interface designers have a constant need to influence users. In any field, interfaces are increasingly smart, adaptive and interactive in order to encourage people to change. While ergonomic inspection and usability have long been considered part of the evaluation phase of the product design process, persuasive technology has not yet been taken into account. Faced with the lack of validated tools in this area, a set of criteria was elaborated. Following the review of 164 articles in the captology field, a grid of eight criteria was proposed. After various pre-tests and its stabilization, this list of criteria was tested with 30 ergonomics experts for validation. The experiment was based on a task of identifying so-called persuasive elements in interfaces; 15 interfaces were selected. The use of the grid showed that it helped the experts identify 78.8% of the persuasive elements. With a Kappa score of 0.76, strong inter-judge agreement was demonstrated. Following this validation in the laboratory, a real-world test to prove its effectiveness in the field was also conducted: applied to a company's annual online survey, the use of the criteria increased the number of respondents from 25% to 41% in a population of 897 employees. Finally, the world of professional software presents specific features in this regard.
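For readers unfamiliar with the agreement statistic quoted above, here is a minimal sketch computing Cohen's kappa for two raters with scikit-learn; the judgments are invented for the example and are not the thesis data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary judgments ("persuasive element present or not") from two
# experts over the same ten interface elements; these values are illustrative only.
judge_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
judge_b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]

kappa = cohen_kappa_score(judge_a, judge_b)
print(f"Cohen's kappa: {kappa:.2f}")  # kappa corrects raw agreement for chance agreement
```

Kappa values above roughly 0.6 are conventionally read as substantial agreement, which is why the 0.76 reported in the abstract supports the validity of the grid.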
Cronel, Martin. "Une approche pour l'ingénierie des systèmes interactifs critiques multimodaux et multi-utilisateurs : application à la prochaine génération de cockpit d'aéronefs". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30247/document.
The work of this thesis aims at contributing to the field of the engineering of interactive critical systems. We aim at easing the introduction of new input and output devices (such as touch screens or mid-air gesture recognition) allowing multi-user and multimodal interactions in the next generation of aircraft cockpits. Currently, the development process in the aeronautical field is not compatible with the complexity of multimodal interaction; on the other side, the development processes of mainstream systems cannot meet the requirements of critical systems. We introduce a generic software and hardware architecture model called MIODMIT (Multiple Input Output devices Multiple Interaction Techniques), which aims at the introduction of dynamically instantiated devices, allowing multimodal interaction in critical systems. It describes the organization of information flows in a complete and non-ambiguous way. It covers the entire spectrum of multimodal interactive systems, from input devices and their drivers to the specification of interaction techniques and the core of the application. It also covers the rendering of every software component, dealing with fission and fusion of information. Furthermore, this architecture model ensures the modifiability of the system configuration (i.e. adding or removing a device at design time or in operation). Moreover, modeling a system reveals that an important part of its interactive behavior is autonomous (i.e. not driven by the user). This kind of behavior is very difficult for users to understand and to foresee, causing errors called automation surprises. We introduce a model-based process for the evaluation of interaction techniques which significantly decreases this kind of error. Lastly, we exploited the ICO (Interactive Cooperative Objects) formalism to describe completely and unambiguously each of the software components of MIODMIT. This language is available in an IDE (integrated development environment) called Petshop, which can globally execute the interactive application (from input/output devices to the application core). We completed this IDE with an execution platform named ARISSIM (ARINC 653 Standard SIMulator), adding safety mechanisms. More precisely, ARISSIM allows spatial segregation of processes (memory allocation to each executing partition to ensure the confinement of potential errors) and temporal segregation (sequential use of the processor). These additions significantly increase system reliability during execution. Our work is a basis for multidisciplinary teams (more specifically ergonomists, HMI specialists and developers) who will design future human-machine interaction in the next generation of aircraft cockpits.
Rodriguez, Bertha Helena. "Modèle SOA sémantique pour la multimodalité et son support pour la découverte et l'enregistrement de services d'assistance". Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0006.
Unimodal inputs and outputs in current systems have become very mature, with touch applications or distributed services for geo-localization or speech, audio and image recognition. However, the integration and instantiation of all these modalities lack an intelligent management of the acquisition and restitution context, based on highly formalized notions reflecting common sense. This requires more dynamic system behavior, with a more appropriate approach to managing the user environment. However, the technology required to achieve such a goal is not yet available in a standardized manner, both in terms of the functional description of unimodal services and in terms of their semantic description. This is also the case for multimodal architectures, where semantic management is produced by each project without a common agreement in the field to ensure interoperability, and is often limited to the processing of inputs and outputs or to fusion/fission mechanisms. To fill this gap, we propose a semantic service-oriented generic architecture for multimodal systems. This proposal aims to improve the description and the discovery of modality components for assistance services: this is the SOA2m architecture. This architecture is fully focused on multimodality and is enriched with semantic technologies, because we believe that this approach will enhance the autonomous behavior of multimodal applications, provide a robust perception of user-system exchanges, and help control the semantic integration of the human-computer interaction. As a result, the challenge of discovery is addressed using the tools provided by the field of semantic web services.
Lanrezac, André. "Interprétation de données expérimentales par simulation et visualisation moléculaire interactive". Electronic Thesis or Diss., Université Paris Cité, 2023. http://www.theses.fr/2023UNIP7133.
The goal of Interactive Molecular Simulations (IMS) is to observe the conformational dynamics of a molecular simulation in real time. Instant visual feedback enables informative monitoring and observation of structural changes imposed by the user's manipulation of the IMS. I conducted an in-depth review to gather and synthesize the research that has developed IMS. Interactive Molecular Dynamics (IMD) is one of the first IMS protocols and laid the foundation for the development of this approach. My thesis laboratory was inspired by IMD to develop the BioSpring simulation engine based on the elastic network model. This model allows for the simulation of the flexibility of large biomolecular ensembles, potentially revealing long-timescale changes that would not be easily captured by molecular dynamics. This simulation engine, along with the UnityMol visualization software, developed with the Unity3D game engine, and linked by the MDDriver communication interface, has been extended to converge towards a complete software suite. The goal is to provide an experimenter, whether expert or novice, with a complete toolbox for modeling, displaying, and interactively controlling all parameters of a simulation. The particular implementation of such a protocol, based on formalized and extensible communication between the different components, was designed to easily integrate new possibilities for interactive manipulation and sets of experimental data to be added to the restraints imposed on the simulation. The user can therefore manipulate the molecule of interest under the control of biophysical properties integrated into the simulated model, while also having the ability to dynamically adjust simulation parameters. Furthermore, one of the initial objectives of this thesis was to integrate the management of ambiguous interaction restraints from the HADDOCK biomolecular docking software directly into UnityMol, making it possible to use these same restraints with a variety of simulation engines. A primary focus of this research was to develop a fast and interactive algorithm for positioning proteins in implicit membranes using a model called IMPALA (Integrative Membrane Protein and Lipid Association Method), developed by Robert Brasseur's team in 1998. The first step was to conduct an in-depth search of the conditions under which the original experiments were performed, in order to verify the method and validate our own implementation; we will see that this opens up interesting questions about how scientific experiments can be reproduced. The final step that concluded this thesis was the development of a new universal lipid-protein interaction method, UNILIPID, an interactive model for incorporating proteins into implicit membranes. It is independent of the representation scale and can be applied at the all-atom, coarse-grained, or grain-by-grain level. The latest Martini 3 representation, as well as a Monte Carlo sampling method and rigid-body dynamics simulation, have been integrated into the method, in addition to various system preparation tools. Furthermore, UNILIPID is a versatile approach that precisely reproduces experimental hydrophobicity terms for each amino acid. In addition to simple implicit membranes, I describe an analytical implementation of double membranes as well as a generalization to arbitrarily shaped membranes, both of which rely on novel applications.
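To make the elastic network model mentioned for BioSpring concrete, here is a minimal didactic sketch in Python: particles closer than a cutoff in the reference structure are linked by harmonic springs, and the resulting restoring forces are computed. The coordinates, cutoff and spring constant are illustrative assumptions, and this is not BioSpring's actual code.

```python
import numpy as np

def elastic_network_forces(positions, cutoff=10.0, k=1.0):
    """Minimal elastic network model: every pair of particles closer than `cutoff`
    in the reference structure is linked by a harmonic spring of stiffness `k`.
    The returned function computes forces pulling a deformed configuration back
    toward the reference distances."""
    n = len(positions)
    ref = positions.copy()  # reference (equilibrium) structure
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if np.linalg.norm(ref[i] - ref[j]) < cutoff]
    ref_dist = {p: np.linalg.norm(ref[p[0]] - ref[p[1]]) for p in pairs}

    def forces(current):
        f = np.zeros_like(current)
        for (i, j) in pairs:
            d = current[j] - current[i]
            r = np.linalg.norm(d)
            # Harmonic restoring force along the bond direction
            f_pair = k * (r - ref_dist[(i, j)]) * d / r
            f[i] += f_pair
            f[j] -= f_pair
        return f

    return forces

# Usage on a tiny random "molecule" of 20 particles
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 15.0, size=(20, 3))
compute_forces = elastic_network_forces(coords)
displaced = coords + rng.normal(scale=0.5, size=coords.shape)  # user-imposed deformation
print(compute_forces(displaced).shape)  # (20, 3)
```

Because the network is built once from the reference structure and each force evaluation is cheap, this kind of model is well suited to the real-time, user-in-the-loop manipulation that the abstract describes.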
Thierry, Benjamin G. "Donner à voir, permettre d’agir. L’invention de l’interactivité graphique et du concept d’utilisateur en informatique et en télécommunications en France (1961-1990)". Electronic Thesis or Diss., Paris 4, 2013. http://www.theses.fr/2013PA040152.
In the late 1950s, the computer had no user. Computing had customers, designers and operators, but its use had not resulted in the establishment of a direct relationship between the individual and the machine. It was only during the 1960s that the need appeared to equip the first professionals to make use of the processing power of the computer. This development is historically situated in the civil aviation administration and gave rise to the first ergonomic reflections on the role of interfaces in the understanding and use of the device by the user. The computer then found the opportunity to accelerate its diffusion. From office automation to microcomputing via telematics, which represents a French way to interactivity, this doctoral thesis aims to explain the birth and the role of graphical user interfaces and of the user concept in the development of interactive devices in French computing and telecommunications.
Hamon-Keromen, Arnaud. "Définition d'un langage et d'une méthode pour la description et la spécification d'IHM post-W. I. M. P. pour les cockpits interactifs". Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2590/.
With the advent of new technologies such as the iPad, general-public software features richer and more innovative interfaces. These innovations concern both the input layer (e.g. multi-touch screens) and the output layer (e.g. displays). These interfaces are categorized as post-W.I.M.P. and increase the bandwidth between the user and the system being manipulated. Specifically, they allow the user to deliver commands to the system more quickly, and the system to present more information to the user, enabling the management of increasingly complex systems. The wide adoption by the general public and the level of maturity of these technologies make it possible to consider their integration into critical systems (such as cockpits or, more generally, command and control systems). However, the software issues related to these technologies are far from being resolved, judging by the many problems encountered by users. While these may be tolerated for gaming and entertainment applications, they are not acceptable in the field of critical systems described above. The problem addressed by this thesis thus concerns specifically the development of methods, languages, techniques and tools for the design and development of innovative and reliable interactive systems. The contribution of this thesis is the extension of a formal notation, ICO (Interactive Cooperative Objects), to describe multi-touch interaction techniques in a complete and unambiguous way; it is applied in the context of multi-touch applications for civilian aircraft. In addition to this notation, we provide a method for the design and validation of interactive systems featuring multi-touch interactions. The mechanisms of these interactive systems are based on a generic architecture structuring models from the hardware part of the input devices up to the application part for the control and monitoring of these systems. This set of contributions is applied to a set of case studies, the most significant being a weather management application for civilian aircraft.
Dillen, Arnau. "Interface neuronale directe pour applications réelles". Electronic Thesis or Diss., CY Cergy Paris Université, 2024. http://www.theses.fr/2024CYUN1295.
Since the inception of digital computers, creating intuitive user interfaces has been crucial. Effective and efficient user interfaces ensure usability, which is significantly influenced by the deployment environment and target demographic. Diverse interaction modalities are essential for inclusive device usability. Brain-computer interfaces (BCIs) enable interaction with devices through neural signals, promising enhanced interaction for individuals with paralysis and improving their autonomy and quality of life. This research project develops a proof-of-concept software system using off-the-shelf hardware to control a robotic arm with a BCI. The BCI system decodes user intentions from EEG signals to execute commands, focusing on the optimal design of a BCI control system for practical human-robot collaboration. The research established the following key objectives: developing a real-time motor imagery (MI) decoding strategy with fast decoding, minimal computational cost, and low calibration time; designing a control system that addresses low MI decoding accuracy while enhancing user experience; and developing an evaluation procedure to quantify system performance and inform improvements. The literature review identified issues such as the prevalence of offline decoding and the lack of standardized evaluation procedures for BCIs, and highlighted the limitations of using deep learning for MI decoding. This prompted a focus on off-the-shelf machine learning methods for EEG decoding. Initial development benchmarked various EEG decoding pipelines for neuroprosthesis control, finding that standard common spatial patterns and linear discriminant analysis were practical, even though user-specific customization yielded optimal results. Another investigation reduced the number of sensors for MI decoding, using a 64-channel EEG device and demonstrating that reliable MI decoding can be achieved with just eight well-placed sensors. The feasibility of using low-density EEG devices with fewer than 32 electrodes reduces costs. A comprehensive evaluation framework for BCI control systems was developed, ensuring iterative software improvements and participant training. An augmented reality (AR) control system design was also described, integrating visual feedback with real-world overlays via a shared-control approach using eye tracking for object selection and computer vision for spatial awareness. A user study compared the developed BCI control system to an eye-tracking-only control system. While eye tracking outperformed the BCI system, the study confirmed the feasibility of the BCI design for real-world applications with potential enhancements. Key findings include:
- Eight well-placed EEG sensors are sufficient for adequate decoding performance, with a non-significant decrease in accuracy from 0.67 to 0.65 when reducing from 64 sensors to 8.
- A shared-control design informed by real-world contexts simplifies BCI decoding, and AR integration enhances the user interface.
- Only 2 MI classes are needed to achieve a success rate of 0.83 on evaluation tasks.
- Despite eye tracking outperforming current BCI systems, BCIs are feasible for real-world use, with the eye-tracking system showing significantly higher efficiency in task completion time.
- Consumer-grade EEG devices are viable for EEG acquisition in BCI control systems: all participants using the commercial EEG device successfully completed the evaluation tasks, indicating further cost reductions beyond sensor reduction.
Future research should integrate advanced EEG decoding methods such as deep learning, transfer learning, and continual learning. Gamifying the calibration procedure may yield better training data and make the control system more attractive to users. Closer hardware-software integration through embedded decoding and built-in sensors in AR headsets should lead to a consumer-ready BCI control system for diverse applications.
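As an illustration of the kind of decoding pipeline named in this abstract (common spatial patterns followed by linear discriminant analysis), here is a minimal sketch using MNE and scikit-learn on synthetic EEG epochs; the data shapes, labels and accuracy are invented for the example and are not the thesis dataset or results.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for band-pass-filtered motor-imagery epochs:
# 80 trials, 8 EEG channels, 250 samples per trial, two MI classes.
rng = np.random.default_rng(42)
X = rng.standard_normal((80, 8, 250))
y = rng.integers(0, 2, size=80)

# CSP learns spatial filters that maximize variance differences between the two
# classes; LDA then classifies the resulting log-variance features.
clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())

scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} (chance level ~0.50 on random data)")
```

On real motor-imagery recordings the same pipeline, applied to a handful of well-placed channels, is the kind of lightweight decoder the abstract contrasts with heavier deep learning approaches.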
Jourde, Frederic. "Collecticiel et Multimodalité : spécification de l'interaction la notation COMM et l'éditeur e-COMM". PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00618919.
Pook, Stuart. "Interaction et contexte dans les interfaces zoomables". PhD thesis, Télécom ParisTech, 2001. http://tel.archives-ouvertes.fr/tel-00005779.
Testo completoL'accès interactif aux bases de données et la navigation dans les espaces d'information de grande taille constituent des tâches primordiales pour de nombreuses applications. Or, les systèmes de visualisation posent souvent des problèmes de désorientation, les utilisateurs ayant fréquemment des difficultés à se localiser précisément dans l'espace d'information et à trouver les données recherchées. Cette thèse propose plusieurs techniques d'aide contextuelle pour remédier à ces problèmes dans le cadre des interfaces basées sur le concept de zoom sémantique (ou interfaces zoomables).
Le premier type d'aide, qui offre une vue " en profondeur " de l'espace d'information via une représentation hiérarchique, permet non seulement de faciliter la localisation de la position courante et des informations recherchées mais aussi d'accélérer la navigation. La seconde technique est basée sur la génération dynamique de vues transparentes et temporaires que les utilisateurs peuvent créer et contrôler en un seul geste. Ces vues interactives se superposent à la vue courante en y rajoutant des informations contextuelles ou historiques qui aident l'utilisateur à comprendre ce qui entoure le point de focus ou quel chemin a été effectué pour arriver à ce dernier. Ces aides contextuelles nécessitent un couplage étroit entre interaction et présentation, qui est obtenu en utilisant des Control Menus.
Dupuy-Chessa, Sophie. "Modélisation en Interaction Homme-Machine et en Système d'Information : A la croisée des chemins". Habilitation à diriger des recherches, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00764845.
Testo completoMerlin, Bruno. "Méthodologie et instrumentalisation pour la conception et l'évaluation des claviers logiciels". Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1323/.
Testo completoThe expansion of mobile devices has made text input performance a major challenge for Human-Machine Interaction. We observed that, even though traditional QWERTY soft keyboards and telephone-based soft keyboards have been evaluated as poorly efficient, and even though several alternatives evaluated as more efficient have been proposed by researchers, these new alternatives are rarely used. Based on this observation, we argue that soft keyboard evaluation focuses on long-term performance and does not take into account the prospect, for a user, of adopting the keyboard in everyday use. Consequently, we propose a complementary evaluation strategy based on a heuristic evaluation methodology. In order to ease the evaluation and design of new soft keyboards, we propose a new version (E-Assist II) of the E-Assiste platform. This platform aims, first, to facilitate the design and running of experiments and, more generally, to guide theoretical, experimental and heuristic evaluations. A compact version (TinyEAssist) makes it possible to run experiments in mobile environments such as mobile phones. Second, based on a study of soft keyboard structure, we propose a keyboard specification language able to generate complex keyboards (including soft keyboards interacting with prediction systems). The generated soft keyboards can be used in the experimentation platform or in interaction with the operating system. Finally, based on the criteria highlighted by the heuristic evaluation, we propose four new soft keyboard paradigms. Two of them showed interesting prospects: the first, the multilayer keyboard, consists in accompanying the user from a standard QWERTY layout to an optimized layout over a transition period; the second consists in accelerating access to characters such as accents, upper case, punctuation, etc., which are frequently ignored in keyboard optimizations
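The "theoretical evaluations" of soft keyboards mentioned here typically rest on a Fitts'-law/digraph-frequency prediction of a layout's upper-bound entry speed. A minimal sketch of that kind of computation follows; the layout encoding, digraph table and Fitts coefficients are illustrative assumptions, and this is neither the E-Assiste II platform nor Merlin's specification language:

```python
# Hypothetical Fitts'-law / digraph-frequency estimate of a soft keyboard's
# theoretical expert entry speed. The Fitts coefficients, the layout encoding
# and the digraph table are illustrative assumptions.
import math

FITTS_A = 0.083  # assumed intercept (seconds)
FITTS_B = 0.127  # assumed slope (seconds per bit)

def movement_time(distance: float, target_width: float) -> float:
    """Fitts' law, Shannon formulation: MT = a + b * log2(D / W + 1)."""
    return FITTS_A + FITTS_B * math.log2(distance / target_width + 1.0)

def mean_chars_per_second(
    keys: dict[str, tuple[float, float, float]],    # char -> (x, y, key width)
    digraphs: dict[tuple[str, str], float],         # (prev, next) -> relative frequency
) -> float:
    """Frequency-weighted mean key-to-key movement time, inverted to chars/s."""
    mean_mt = 0.0
    for (prev, nxt), freq in digraphs.items():
        x1, y1, _ = keys[prev]
        x2, y2, width = keys[nxt]
        mean_mt += freq * movement_time(math.hypot(x2 - x1, y2 - y1), width)
    return 1.0 / mean_mt if mean_mt > 0 else float("inf")
```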
Touraine, Damien. "Interaction "naturelle" en environnements immersifs - Démonstrateur multimodal et validation sur des applications scientifiques". Phd thesis, Université Paris Sud - Paris XI, 2003. http://tel.archives-ouvertes.fr/tel-00004056.
Testo completoChalon, René. "Réalité mixte et travail collaboratif : IRVO, un modèle de l'interaction homme-machine". Phd thesis, Ecole Centrale de Lyon, 2004. http://tel.archives-ouvertes.fr/tel-00152230.
Testo completoIn our state of the art, we analyse the many applications and technologies that have been developed over the last decade in this field, generally in an independent and autonomous way. It now seems possible to propose unifying models and associated design methodologies.
As we show, the traditional models of Human-Computer Interaction (HCI) and Computer-Supported Cooperative Work (CSCW) are not suitable, because they were designed for modelling software and take little or no account of real artifacts.
We therefore propose the IRVO model (Interacting with Real and Virtual Objects), which aims to formalize the interaction between users and real and virtual objects. To do so, IRVO identifies the entities involved in the system (users, tools, task objects and the computer application) and their relationships. Its expressive power allows us to model the applications found in the literature in order to understand, compare and classify them.
We also show that IRVO supports the design of new applications: it can be used either as a starting point or at the end of the analysis to help the designer check a number of criteria (completeness, consistency, continuity, etc.). We study the link between IRVO and the classical models of HCI and CSCW, and more specifically the AMF-C and Arch models. In a Model-Based Approach (MBA), IRVO naturally takes its place alongside task and dialogue models.
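As an illustration only, and not Chalon's formalization, the entities IRVO enumerates can be sketched as a small data model in which relations are references between users, tools, task objects and the application; every name and field below is a hypothetical assumption:

```python
# Hypothetical data-model sketch of the entities listed in the IRVO abstract:
# users, tools, task objects (real or virtual) and the computer application,
# linked by interaction relations. Names and fields are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class World(Enum):
    REAL = "real"        # physical artifact
    VIRTUAL = "virtual"  # digital artifact

@dataclass
class Entity:
    name: str
    world: World

@dataclass
class User(Entity):
    pass

@dataclass
class Tool(Entity):
    operated_by: "User | None" = None  # relation: user manipulates tool

@dataclass
class TaskObject(Entity):
    acted_on_by: list[Tool] = field(default_factory=list)  # relation: tool acts on object

@dataclass
class Application:
    name: str
    mediates: list[TaskObject] = field(default_factory=list)  # objects handled by software

# Mixed-reality example: a real tracked pen (tool) annotating a virtual document.
alice = User("Alice", World.REAL)
pen = Tool("tracked pen", World.REAL, operated_by=alice)
doc = TaskObject("shared document", World.VIRTUAL, acted_on_by=[pen])
app = Application("annotation software", mediates=[doc])
```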
Pietriga, Emmanuel. "Langages et techniques d'interaction pour la visualisation et la manipulation de masses de données". Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00709533.
Testo completoRoussel, Nicolas. "Nouvelles formes de communication et nouvelles interactions homme-machine pour enrichir et simplifier le quotidien". Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00280550.
Testo completoThe first axis concerns the design of interactive systems for coordination, communication and collaboration between individuals. I am particularly interested in how video can be used to enable exchanges that are more subtle (i.e. lightweight, nuanced, implicit) and more informal (i.e. spontaneous, opportunistic) than those allowed by current systems.
The second axis concerns the design of new metaphors and techniques intended to enrich and simplify everyday interaction with computer systems. I am particularly interested in ways of evolving the desktop metaphor that underlies the management of data and applications in most current systems.
This document presents the research questions related to these two axes, the work related to them in which I have participated since September 2001, and some perspectives opened up by this work.
Guidini Gonçalves, Taisa. "Intégration des questions d'ingénierie de l'interaction homme-machine dans les modèles d'aptitude et maturité de processus logiciel". Thesis, Valenciennes, 2017. http://www.theses.fr/2017VALE0040/document.
Testo completoSoftware process capability maturity (SPCM) models are currently widely used in industry, and software engineering approaches are applied to perform the practices defined in these models. A large body of methods, techniques, patterns, and standards has also been defined for the analysis, design, implementation, and evaluation of interactive systems focusing on Human-Computer Interaction (HCI) issues. Nevertheless, it is well known that HCI approaches are not widely used in industry. In order to take advantage of the widespread use of SPCM models, this thesis proposes to integrate HCI issues (concepts of design, implementation, and evaluation of interactive systems) into the best-known international SPCM model (CMMI-DEV – Capability Maturity Model Integration for Development) and into the Brazilian SPCM model (MR-MPS-SW – MPS for Software reference model). To that end, we worked on (i) the identification of appropriate HCI approaches for each engineering practice advocated by these models, (ii) the evaluation and improvement of the identified HCI approaches with HCI experts, (iii) the validation of the proposition in an academic environment, and (iv) two empirical studies on the perceived knowledge and use of HCI approaches in industry. As a result, we obtained 14 categories of HCI approaches, with examples of methods, techniques, patterns, and standards suitable for performing each practice of the engineering activities of both models when developing interactive systems. Moreover, the empirical study in Brazilian industry confirmed statistically that consultants for these SPCM models neither know nor use HCI approaches as well as they know and use software engineering approaches
Campeau-Lecours, Alexandre. "Développement d'algorithmes de commande et d'interfaces mécatroniques pour l'interaction physique humain-robot". Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29608/29608.pdf.
Testo completoFor a long time, systems both simple and advanced, such as robots, have been helping humans accomplish a variety of tasks. In some cases the system simply replaces the operator, while in others it cooperates with him or her; in the latter case, the system is rather a tool used to increase performance or to avoid unpleasant tasks. The principal advantage of this human augmentation is that it leaves the operator some latitude in the task decision process. The specific strengths of humans and robots are then combined to obtain a synergy, that is, a system that is more than the sum of its parts. However, achieving complex tasks in a way that is intuitive to the human represents a huge challenge. Whereas robots were previously segregated from humans and designed and programmed accordingly, the new generation of robots must be able to perceive their environment and the human's intentions and to respond to them safely, adequately, intuitively and ergonomically. This opens up opportunities in a wide range of fields such as materials handling, assembly, physical rehabilitation, surgery, learning through haptic simulations, assistance to people with disabilities, and others. This thesis comprises three parts. The first deals with the control of physically interactive robots: it presents an approach to intuitive control, good practices, an interaction algorithm that adapts to human intentions, and the adaptation of a computed-torque control scheme for human-robot interaction. The second part presents hands-on-payload systems that are more intuitive for the operator to use; their development includes both mechanical and advanced-control innovations. The third part introduces safety features: first the development of a vibration observer/controller algorithm is presented, and then the development of a sensor detecting human proximity is reported. This thesis aims to provide contributions both in a scientific spirit and for industrial applications requiring immediate solutions.
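The computed-torque scheme mentioned above has a classical closed form, tau = M(q)(qdd_des + Kd*edot + Kp*e) + C(q, qd)*qd + g(q). A minimal sketch under assumed generic dynamic-model callables and illustrative gains (not the human-robot interaction controller developed in the thesis):

```python
# Classical computed-torque (inverse-dynamics) control law, sketched with
# generic model callables; the gains and model functions are assumptions,
# not the controller developed in the thesis.
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des,
                    mass_matrix, coriolis, gravity,
                    kp: float = 100.0, kd: float = 20.0) -> np.ndarray:
    """Return tau = M(q)(qdd_des + kd*edot + kp*e) + C(q, qd) qd + g(q).

    q, qd                  : current joint positions and velocities, shape (n,)
    q_des, qd_des, qdd_des : desired position, velocity and acceleration, shape (n,)
    mass_matrix(q)         : n x n inertia matrix M(q)
    coriolis(q, qd)        : n x n Coriolis/centrifugal matrix C(q, qd)
    gravity(q)             : gravity torque vector g(q), shape (n,)
    """
    e = np.asarray(q_des) - np.asarray(q)
    edot = np.asarray(qd_des) - np.asarray(qd)
    v = np.asarray(qdd_des) + kd * edot + kp * e  # stabilized reference acceleration
    return mass_matrix(q) @ v + coriolis(q, qd) @ np.asarray(qd) + gravity(q)
```

In practice the gains trade tracking stiffness against compliance, which is why such a scheme must be adapted, as the abstract notes, before a human physically interacts with the robot.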
Fenicio, Alessandro. "Ingénierie de l'Interaction Homme-Machine et Persuasion Technologique : étude du concept de Chemin Persuasif". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM035/document.
Testo completoSocietal challenges are an international concern. Daily awareness campaigns try to catch people's attention in order to make them change: "Smoking kills", "Drinking or driving, choose", "Eating five fruits and vegetables a day", etc. However, these campaigns have limited effect. Persuasive technologies have been explored for fifteen years to direct technology at the difficulty of changing behavior. Monitoring devices such as physical-activity bracelets and watches, and their companion applications, are multiplying and achieving commercial success. However, despite technology's potential for delivering personalized strategies, the incentive to change remains limited. The difficulty lies in the multidisciplinarity of the field: designing persuasive interactive systems requires mastering the fundamental concepts of, and advances in, cognitive and social psychology, which makes persuasive practice extremely ambitious. This thesis contributes to the engineering of persuasive interactive systems. It deals with the process of behavior change and proposes the concept of persuasive path to stimulate users in their behavior change. A persuasive path is a succession of events designed to pave the user's progression toward change within the set of possible behaviors. This set is modeled with state machines describing all the possible transitions between behaviors; a transition is triggered when the determinants of the corresponding behavior are satisfied in the user's current context. A persuasive architecture is proposed to orchestrate the state machines and the persuasive paths. The state-machine formalism also allows change processes from the literature to be characterized and compared. An incremental design method is proposed to design, step by step, the state machine and the persuasive path. The steps successively commit design choices that make the system progressively more dependent: problem-dependent, domain-dependent, task-dependent and context-dependent. This structured, progressive design allows design choices to be revised according to the observed performance of the persuasion. The conceptual contributions (concepts and design method) are exercised on two applications: CRegrette, an application aimed at stopping a behavior (smoking), and Mhikes, an application aimed at reinforcing a behavior (walking). A complete implementation of Mhikes (concepts and architecture) is made available to show the technical feasibility of the approach. The technological maturity of this approach allows deployment of the application at real scale and an experimental evaluation of the contributions. The evaluation results confirm the relevance of the models and of the architecture, allowing the introduction of software probes (1) to identify the roles adopted by users, (2) to follow possible changes and (3) to produce personalized notifications. The notifications proved more effective than the communication campaigns run by Mhikes. However, role change remains complex, with extra-transitions being more difficult to trigger than intra-transitions. In conclusion, the thesis delivers a complete set of methods, models and tools for the engineering of persuasive interactive systems. More broadly, this set can be used by other communities to progress in the understanding of human interaction
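To make the state-machine idea concrete, here is a deliberately small sketch in which behaviours are states and a transition fires once every determinant of the target behaviour holds in the current context; the behaviour names, determinants and thresholds are hypothetical, and this is not Fenicio's CRegrette or Mhikes code:

```python
# Illustrative behaviour state machine in the spirit of the persuasive-path idea:
# states are behaviours, and a transition fires when every determinant of the
# target behaviour is satisfied in the current user context.
from dataclasses import dataclass, field
from typing import Callable

Context = dict[str, object]

@dataclass
class Transition:
    target: str
    determinants: list[Callable[[Context], bool]]  # all must hold to fire

@dataclass
class BehaviourMachine:
    state: str
    transitions: dict[str, list[Transition]] = field(default_factory=dict)

    def step(self, context: Context) -> str:
        """Fire the first enabled transition out of the current behaviour."""
        for t in self.transitions.get(self.state, []):
            if all(d(context) for d in t.determinants):
                self.state = t.target
                break
        return self.state

# Hypothetical walking example: an occasional walker becomes a regular walker
# once the weekly step count stays above a threshold.
machine = BehaviourMachine(
    state="occasional_walker",
    transitions={"occasional_walker": [
        Transition("regular_walker", [lambda c: c.get("weekly_steps", 0) >= 30000]),
    ]},
)
print(machine.step({"weekly_steps": 42000}))  # -> regular_walker
```

A persuasive path would then be a planned sequence of events (notifications, challenges) chosen so that the context eventually satisfies the determinants of the next transition.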
Jourde, Frédéric. "Collecticiel et multimodalité : spécification de l'interaction la notation COMM et l'éditeur e-COMM". Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENM018/document.
Testo completoMulti-user multimodal interactive systems involve multiple users who can use multiple interaction modalities. Although such systems are becoming more prevalent (especially those involving multi-touch surfaces), their design is still ad hoc, without proper support for keeping track of the design process. Addressing this lack of design tools, our doctoral research is dedicated to the specification of multi-user multimodal interaction. Its contributions include the COMM (Collaborative and MultiModal) notation and its on-line editor for specifying multi-user multimodal interactive systems. COMM extends the CTT notation; its salient features include the concepts of interactive role and modal task, as well as a refinement of the temporal operators applied to tasks based on the Allen relationships. The COMM notation and its on-line editor e-COMM (http://iihm.imag.fr/demo/editeur/) have been successfully applied to a large-scale project dedicated to a multimodal military command post for the control of unmanned aerial vehicles (UAVs) by two operators
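The Allen relationships used to refine COMM's temporal operators are standard interval algebra; below is a small generic sketch of classifying the relation between two task-execution intervals (illustrative only, neither the COMM notation nor the e-COMM editor):

```python
# Generic classification of the Allen relation between two proper intervals
# (start < end), as used to refine temporal operators between tasks; this is
# plain interval algebra, not the COMM notation itself.
def allen_relation(a: tuple[float, float], b: tuple[float, float]) -> str:
    """Return the Allen relation of interval a with respect to interval b."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"
    if a2 == b1:
        return "meets"
    if a1 == b1 and a2 == b2:
        return "equals"
    if a1 == b1 and a2 < b2:
        return "starts"
    if a2 == b2 and a1 > b1:
        return "finishes"
    if b1 < a1 and a2 < b2:
        return "during"
    if a1 < b1 < a2 < b2:
        return "overlaps"
    # Remaining cases are the inverses of the ones above (after, met-by, ...).
    return "inverse of " + allen_relation(b, a)

# Example: task A runs during task B.
print(allen_relation((2.0, 4.0), (1.0, 6.0)))  # -> during
```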
Rey, Stéphanie. "Apports des interactions tangibles pour la création, le choix et le suivi de parcours de visite personnalisés dans les musées". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0049.
Testo completoMuseums are turning to personalization to accommodate the diversity of visitors and their visiting practices. To support the spread of personalized visits, we question the contribution of tangible interaction, not only for visitors but also for museum professionals. How can we help cultural mediators design personalized visits that reflect the diversity of visitor profiles? How can we help visitors choose and follow the visit that suits their wishes and needs? We applied a user-centered design process with partner museums to design, implement and evaluate tangible tools that answer these questions. The analysis of user needs (cultural mediators and visitors) allowed us to define six main characteristics to consider for the personalization of visits and to identify the concept of multi-criteria personalization. For cultural mediators, we propose an interface concept combining the choice of visitor characteristics that constitute a profile with dynamic monitoring of the progress of personalized-visit creation for each combination. We instantiated this concept with two interaction modalities, tangible and tactile, which we compared in an experimental study with museum mediators. For visitors, we iteratively designed prototypes to help them choose personalized visits and conducted a pilot study in a partner museum. The prototypes designed during this thesis implement the token+constraint interaction paradigm. We provide a systematic literature review of the token+constraint interaction paradigm, as well as a heuristic grid of 24 properties divided into five categories that summarize, synthesize and illustrate the concepts of the seminal article. We applied this grid to the prototypes designed during the thesis
Nemery, Alexandra. "Elaboration, validation et application de la grille de critères de persuasion interactive". Phd thesis, Université de Metz, 2012. http://tel.archives-ouvertes.fr/tel-00735714.
Testo completoSerrano, Marcos. "Interaction multimodale en entrée : conception et prototypage". Phd thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM033.
Testo completoIn this work, we focus on the design of interactive systems. We are specifically interested in multimodal systems, i.e. systems that use several interaction modalities. Our work aims at defining methods and tools for the design of such interactive systems, and targets in particular the prototyping phase of the design of multimodal interfaces. We present a component-based approach for designing and prototyping input multimodal interfaces, whether functional or simulated. This approach is based on our conceptual component model. Our contribution is thus both a conceptual model and a software tool. The conceptual model is based on a characterization space for software components; this space helps interaction designers describe component assemblies and consider different design alternatives. The software tool is a component-based prototyping tool that implements the characterization space (our conceptual model). We have implemented several multimodal prototypes using this tool, both functional and simulated by a human using a Wizard of Oz approach
Serrano, Marcos. "Interaction multimodale en entrée : Conception et Prototypage". Phd thesis, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-01017242.
Testo completoAbdel, Wahab Shaimaa. "Le multimédia en maternelle : tâches, activités et apprentissage du langage". Electronic Thesis or Diss., Paris 8, 2016. http://www.theses.fr/2016PA080018.
Testo completoThe purpose of this research is to study the impact of multimedia-assisted learning on vocabulary development and comprehension among preschool children, compared to traditional learning. It also aims to study the impact of different modes of interaction in computerized environments on language development and story comprehension among preschool children. Language learning is a major challenge for the future academic success of kindergarten students. This doctoral research studies the impact of introducing computerized environments in the final year of kindergarten (KG2, 5- to 6-year-olds) on the acquisition of certain language skills. The study focuses particularly on children's acquisition of language skills in vocabulary and in the reception and comprehension of narratives. This work takes stock of existing research and analyses software (electronic stories) in French. It then uses a specific piece of software (Un Prince à l’école) in the Paris region and studies its effectiveness for vocabulary development (pre/post-test) and story comprehension (post-test) for these children. We studied (i) the impact of interaction with the e-story vs. the story on paper, and (ii) the impact of the mode of interaction (individual vs. collaborative) with the e-story on vocabulary development and story comprehension