
Dissertations / Theses on the topic 'Human machine interaction'


Consult the top 50 dissertations / theses for your research on the topic 'Human machine interaction.'


1

Gouvrit, Montaño Florence. "Empathy and Human-Machine Interaction." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1313442553.

2

MANURI, FEDERICO. "Visualization and Human-Machine Interaction." Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2673784.

Abstract:
The digital age offers many challenges in the field of visualization. Visual imagery has been used effectively to communicate messages through the ages, to express both abstract and concrete ideas. Today, visualization has ever-expanding applications in science, engineering, education, medicine, entertainment and many other areas. Different areas of research contribute to innovation in the field of interactive visualization, such as data science, visual technology, the Internet of Things and many more. Among them, two areas of renowned importance are Augmented Reality and Visual Analytics. This thesis presents my research in the fields of visualization and human-machine interaction. The purpose of the proposed work is to investigate existing solutions in the area of Augmented Reality (AR) for maintenance. A smaller section of this thesis presents a minor research project on an equally important theme, Visual Analytics. Overall, the main goal is to identify the most important existing problems and then design and develop innovative solutions to address them. The maintenance application domain has been chosen since it is historically one of the first fields of application for Augmented Reality and it presents the most common and important challenges that arise in AR, as described in chapter 2. Since one of the main problems in AR application deployment is the reconfigurability of the application, a framework has been designed and developed that allows the user to create, deploy and update AR applications in real time. Furthermore, the research focused on the problems related to hands-free interaction, investigating the area of speech-recognition interfaces and designing innovative solutions to address the problems of intuitiveness and robustness of the interface. On the other hand, the area of Visual Analytics has been investigated: among the different areas of research, multidimensional data visualization, similarly to AR, poses specific problems related to the interaction between the user and the machine. An analysis of the existing solutions has been carried out in order to identify their limitations and to point out possible improvements. Since this analysis identifies the scatterplot as a renowned visualization tool worthy of further research, different techniques for adapting its usage to multidimensional data are analyzed. A multidimensional scatterplot has been designed and developed in order to perform a comparison with another multidimensional visualization tool, ScatterDice. The first chapters of my thesis describe my investigations in the area of Augmented Reality for maintenance. Chapter 1 provides definitions for the most important terms and an introduction to AR. The second chapter focuses on maintenance, presenting the motivations that led to choosing this application domain. Moreover, the analysis concerning open problems and related works is described along with the methodology adopted to design and develop the proposed solutions. The third chapter illustrates how the adopted methodology has been applied in order to address the problems described in the previous one. Chapter 4 describes the methodology adopted to carry out the tests and outlines the experimental results, whereas the fifth chapter presents the conclusions and points out possible future developments. Chapter 6 describes the analysis and research work performed in the field of Visual Analytics, more specifically on multidimensional data visualization.
Overall, this thesis illustrates how the proposed solutions address common problems of visualization and human-machine interaction, such as interface design, robustness of the interface and acceptance of new technology, whereas other problems are related to the specific research domain, such as pose tracking and reconfigurability of the procedure in the AR domain.
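To make the multidimensional-scatterplot discussion above concrete, here is a minimal Python sketch of one standard adaptation, a scatterplot matrix of pairwise projections. It is not the implementation compared against ScatterDice in the thesis; the data and layout are invented for illustration.

```python
# A minimal scatterplot-matrix sketch: every off-diagonal cell shows one
# pairwise 2-D projection of the multidimensional data, the diagonal shows
# each dimension's distribution. Data and labels are made up.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 4))          # 200 samples, 4 dimensions
labels = [f"dim {i}" for i in range(4)]

n = data.shape[1]
fig, axes = plt.subplots(n, n, figsize=(8, 8))
for i in range(n):
    for j in range(n):
        ax = axes[i, j]
        if i == j:
            ax.hist(data[:, i], bins=15)   # diagonal: 1-D distribution
        else:
            ax.scatter(data[:, j], data[:, i], s=4)
        if i == n - 1:
            ax.set_xlabel(labels[j])
        if j == 0:
            ax.set_ylabel(labels[i])
plt.tight_layout()
plt.show()
```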
3

Ogrinc, Matjaž. "Information acquisition in physical human-machine interaction." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/58935.

Abstract:
Exploration is an active, closed-loop process, where actions are coordinated to maximise sensory information gain through perception. Exploratory actions provide complementary and redundant sensory information, which our brain efficiently combines to reduce the uncertainty about the natural environment. As humans increasingly interact with machines, there is a growing need for human-machine interfaces to support natural interactions and efficient information display. The integration of sensory cues allows humans to resolve ambiguities in everyday natural interactions. Here, this mechanism is exploited to enhance the information transfer of abstract tactile cues. This thesis develops a feedback method based on vibrotactile apparent motion, where an array of stimulators is excited in a particular spatio-temporal pattern to induce an illusion of motion across the skin. In the proposed approach, the speed of motion is coupled with additional cues to ease the discrimination between similar speeds. The increased throughput of information promises an efficient and convenient way to substitute auditory or visual navigation cues. Sensory loss, dysfunctions and cognitive disorders, such as blindness, tactile hypersensitivity and autism, often severely constrain one's ability to function. Assistive technology can greatly improve life in such cases, as with tactile sensory substitution devices for the visually and hearing impaired. However, as sensory impairments sometimes lead to cognitive dysfunctions, it is crucial to consider these relationships when designing assistive devices. Here, a case study investigated the use of vibrotactile cues to communicate with a deafblind autistic individual during equestrian therapy. The approach was validated by evaluating the individual's sensory perception and motor behaviour. The human ability to acquire and act upon sensory information through touch is made possible by the simultaneous control of arm motion, force and impedance. This capability remains absent in human-machine interactions, such as in VR and telerobotics, due to the complexity of arm impedance estimation. A novel approach is demonstrated here where impedance control is achieved by simplifying the model of human arm use. The benefits are demonstrated in virtual object manipulation. The improved control of contact dynamics promises more efficient exploration of virtual and remote environments. This thesis presents methods for efficient information transfer through tactile perception by both sensory feedback and motor actions. The capabilities and limitations of the human sensorimotor system are carefully considered and employed to design wearable interfaces applied to sensory substitution and telerobotics.
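To illustrate the vibrotactile apparent-motion principle described above, the sketch below schedules onset times for an actuator array. The burst/SOA relationship used is the classic Sherrick-Rogers estimate for smooth tactile apparent motion, included as a plausible assumption rather than the thesis's actual parameters.

```python
# A minimal sketch of scheduling a vibrotactile apparent-motion pattern:
# each actuator is switched on after a stimulus onset asynchrony (SOA)
# relative to its neighbour. The 0.32*d + 47.3 ms rule is the classic
# Sherrick-Rogers estimate, used here purely for illustration.

def apparent_motion_schedule(n_actuators: int, burst_ms: float):
    """Return (actuator_index, onset_ms, duration_ms) triples."""
    soa_ms = 0.32 * burst_ms + 47.3   # onset spacing between neighbours
    return [(i, i * soa_ms, burst_ms) for i in range(n_actuators)]

for idx, onset, dur in apparent_motion_schedule(4, burst_ms=100):
    print(f"actuator {idx}: on at {onset:6.1f} ms for {dur:.0f} ms")
```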
4

Gnjatović, Milan. "Adaptive dialogue management in human-machine interaction." München Verl. Dr. Hut, 2009. http://d-nb.info/997723475/04.

5

Westerberg, Simon. "Semi-Automating Forestry Machines : Motion Planning, System Integration, and Human-Machine Interaction." Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-89067.

Abstract:
The process of forest harvesting is highly mechanized in most industrialized countries, with felling and processing of trees performed by technologically advanced forestry machines. However, the maneuvering of the vehicles through the forest as well as the control of the on-board hydraulic boom crane is currently performed through continuous manual operation. This complicates the introduction of further incremental productivity improvements to the machines, as the operator becomes a bottleneck in the process. A suggested solution strategy is to enhance the production capacity by increasing the level of automation. At the same time, the working environment for the operator can be improved by a reduced workload, provided that the human-machine interaction is adapted to the new automated functionality. The objectives of this thesis are 1) to describe and analyze the current logging process and to locate areas of improvement that can be implemented in current machines, and 2) to investigate future methods and concepts that possibly require changes in work methods as well as in machine design and technology. The thesis describes the development and integration of several algorithmic methods and the implementation of corresponding software solutions, adapted to the forestry machine context. Following data recording and analysis of the current work tasks of machine operators, trajectory planning and execution for a specific category of forwarder crane motions was identified as an important first step for short-term automation. Using the method of path-constrained trajectory planning, automated crane motions were demonstrated to potentially provide a substantial improvement over motions performed by experienced human operators. An extension of this method was developed to automate selected motions even for existing sensorless machines. Evaluation suggests that this method is feasible for a reasonable deviation of initial conditions. Another important aspect of partial automation is the human-machine interaction. For this specific application, a simple and intuitive interaction method for accessing automated crane motions was suggested, based on head tracking of the operator. A preliminary interaction model derived from user experiments yielded promising results for forming the basis of a target selection method, particularly when combined with a traded control strategy. Further, a modular software platform was implemented, integrating several important components into a framework for designing and testing future interaction concepts. Specifically, this system was used to investigate concepts of teleoperation and virtual environment feedback. Results from user tests show that visual information provided by a virtual environment can be advantageous compared to traditional video feedback with regard to both objective and subjective evaluation criteria.
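As a toy illustration of path-constrained trajectory planning, the sketch below fixes the geometric path and optimises only the timing along it, using a trapezoidal speed profile under velocity and acceleration limits. The thesis's method is more general; the limits and function names here are assumptions for illustration.

```python
# A minimal sketch of timing a rest-to-rest motion along a fixed path:
# accelerate at a_max, cruise at v_max if the path is long enough,
# then decelerate symmetrically.
import math

def trapezoidal_timing(path_length: float, v_max: float, a_max: float):
    """Return total traversal time for a rest-to-rest motion."""
    t_ramp = v_max / a_max
    d_ramp = 0.5 * a_max * t_ramp**2
    if 2 * d_ramp >= path_length:                 # triangular profile:
        t_ramp = math.sqrt(path_length / a_max)   # never reaches v_max
        return 2 * t_ramp
    d_cruise = path_length - 2 * d_ramp
    return 2 * t_ramp + d_cruise / v_max

print(f"{trapezoidal_timing(6.0, v_max=1.5, a_max=0.8):.2f} s")
```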
6

DE, PACE FRANCESCO. "Natural and multimodal interfaces for human-machine and human-robot interaction." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2918004.

7

Georgiev, Nikolay. "Assisting physiotherapists by designing a system utilising Interactive Machine Learning." Thesis, Uppsala universitet, Institutionen för informatik och media, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447489.

Abstract:
Millions of people throughout the world suffer from physical injuries and impairments and require physiotherapy to successfully recover. There are numerous obstacles in the way of access to the necessary care: high costs, a shortage of medical personnel and the need to travel to the appropriate medical facilities, something even more challenging during the Covid-19 pandemic. One approach to addressing this issue is to incorporate technology into the practice of physiotherapists, allowing them to help more patients. Using research through design, this thesis explores how interactive machine learning can be utilised in a system designed to aid physiotherapists. To this end, after a literature review, an informal case study was conducted. In order to explore what functionality the suggested system would need, an interface prototype was iteratively developed and subsequently evaluated through formative testing by three physiotherapists. All participants found value in the proposed system and were interested in how such a system could be implemented and potentially used in practice; they particularly valued the system's ability to monitor the correct execution of exercises by the patient, and the increased engagement during rehabilitative training brought by the sonification. Several suggestions for future developments on the topic are also presented at the end of this work.
8

Nguyen, Van Toi. "Visual interpretation of hand postures for human-machine interaction." Thesis, La Rochelle, 2015. http://www.theses.fr/2015LAROS035/document.

Abstract:
Nowadays, people want to interact with machines more naturally. One of the most powerful communication channels is hand gesture. The vision-based approach has attracted many researchers because it does not require any extra device. One of the key problems to solve is hand posture recognition in RGB images, since it can be used directly or integrated into a multi-cue hand gesture recognition system. The main challenges of this problem are illumination differences, cluttered backgrounds, background changes, high intra-class variation and high inter-class similarity. This thesis proposes a hand posture recognition system consisting of two phases: hand detection and hand posture recognition. In the hand detection step, we employ a Viola-Jones detector with the proposed concept of internal Haar-like features. The proposed hand detector works in real time on frames captured in complex real environments and avoids unwanted background effects, outperforming the original Viola-Jones detector based on traditional Haar-like features. In the hand posture recognition step, we propose a new hand representation based on a good generic descriptor, the kernel descriptor (KDES). When applying KDES to hand posture recognition, we propose three improvements to make it more robust: adaptive patches, normalization of gradient orientation in patches, and a hand pyramid structure. These improvements make KDES invariant to scale change, make patch-level features invariant to rotation, and adapt the final hand representation to the hand's structure. Based on these improvements, the proposed method obtains better results than the original KDES and a state-of-the-art method. The integration of both methods into an application demonstrates, in real conditions, the effectiveness, usefulness and feasibility of deploying such a system for human-robot interaction using hand gestures.
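To make the Viola-Jones machinery concrete, the sketch below evaluates a standard two-rectangle Haar-like feature in constant time with an integral image. It shows the classic feature type the detector builds on, not the thesis's proposed internal Haar-like features; the feature layout and image are made up.

```python
# A minimal sketch of a two-rectangle Haar-like feature: the integral image
# lets any rectangle sum be computed from four lookups.
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii: np.ndarray, r0, c0, r1, c1) -> float:
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return float(total)

img = np.random.default_rng(1).random((24, 24))
ii = integral_image(img)
# two-rectangle feature: left half minus right half of a 12x12 window
feature = rect_sum(ii, 0, 0, 12, 6) - rect_sum(ii, 0, 6, 12, 12)
print(feature)
```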
9

Spall, Roger Paul. "A human-machine interaction tool set for Smalltalk 80." Thesis, Sheffield Hallam University, 1990. http://shura.shu.ac.uk/20389/.

Abstract:
This research represents an investigation into user acceptance of computer systems. It starts with the premise that existing systems do not fully meet user requirements, and are therefore rejected as 'difficult to use'. Various problems and influences affecting user acceptance are identified, and improvements are suggested. Although a broad range of factors affecting user acceptance are discussed, emphasis is given to the impact of actual computer software. Initially, both general and specific user interface software influences are examined, and it is shown how these needs can be met using new software technology. A new Intelligent Interface architecture model is presented, and comparisons are made to existing interface design approaches. Secondly, the role of empirical work within the field of Human Computer Interaction is highlighted. An investigation into the usability and user acceptance of a large working library database system is described, and the results discussed. The role of Systems Analysis and Design and its effect upon user acceptance is also explored. It is argued that despite improvements in interface technology and related software engineering techniques, a software application is also a product of the Systems Analysis and Design process. Traditional Systems Design approaches are examined, and suitable improvements suggested based upon experience with emerging separable software architectures. Thirdly, the research proceeds to examine the potential of Quantitative User Modelling, and describes the implementation of an example object oriented Quantitative User Model. This is then evaluated in order to determine new knowledge concerning the major issues surrounding the potential application of user modelling to interface design. Finally, attention is given to the concept of interface and application separation. An object oriented User Interface Management System is presented, and its implementation in the Smalltalk 80 programming language discussed. The proposed User Interface Management System utilises a new software architecture which provides explicit user interface separation, using the concept of a Pluggable View Controller. It also incorporates an integrated design Tool-set for Direct Manipulation interfaces. The proposed User Interface Management System and software architecture represents the major contribution of this project to the growing body of Human Computer Interaction research. In particular, the importance of explicit interface separation is established, and the proposed software architecture is critically evaluated to determine new knowledge concerning the requirements, constraints, and potential of proper user interface separation. The implementation of an object oriented Part Hierarchy mechanism is also presented. This mechanism is related to the proposed User Interface Management System, and is critically evaluated in order to add to the body of knowledge concerning object oriented systems.
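As a rough illustration of the explicit interface separation behind a Pluggable View Controller, the Python sketch below keeps the application model ignorant of its views and lets differently rendered views be plugged in. It is a loose analogy in Python, not the Smalltalk-80 implementation; all class names are invented.

```python
# A minimal sketch of interface/application separation: the model notifies
# anonymous dependents, so views can be swapped without touching the model.

class Model:
    def __init__(self, value=0):
        self._value = value
        self._observers = []

    def attach(self, view):
        self._observers.append(view)

    def set_value(self, value):
        self._value = value
        for view in self._observers:       # dependents are notified,
            view.update(self._value)       # never referenced by name

class TextView:
    def update(self, value):
        print(f"value is now {value}")

class BarView:
    def update(self, value):
        print("#" * value)

m = Model()
m.attach(TextView())                       # views are "pluggable"
m.attach(BarView())
m.set_value(5)
```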
10

Wood, David K. "Learning from Gross Motion Observations of Human-Machine Interaction." Thesis, The University of Sydney, 2011. https://hdl.handle.net/2123/29223.

Abstract:
This thesis discusses the problems inherent in the modelling and classification of human interactions with robots using gross motion observations. Contributions to this field are one approach by which robots can be made socially aware, at a low enough cost for the commercialisation of such systems to be viable. In general, it is cheaper and simpler, both in terms of sensing requirements and computational power, to determine the position of a person participating in an interaction than to attempt to perform more advanced operations such as face detection and recognition, gaze tracking or gesture recognition. Being able to perform classification and modelling of human behaviour from gross motion observations is a useful ability for the designers of such HRI systems to have at their disposal. Two contributions are made to the problem of gross motion modelling and classification. The first is an approach to measuring error levels implicit to the models learned in a generative classification scenario. By comparing the results from these model-based error measures to the results obtained from more traditional data-based error measures, an assessment can be made about how well the internal models within the classifier represent the true state of the world. A method is also presented to summarise these comparisons using the symmetric Kullback-Leibler divergence, enabling the rapid analysis of the large numbers of classifiers produced with the application of cross-validation techniques. The second contribution is a taxonomy of feature representations and a set of design rules derived from this taxonomy for the representation of human-robot interaction modelling features. These rules are focussed on gross motion features, but can be extended to cover almost any human-robot interaction modelling or classification task. These two contributions are then demonstrated on interaction data gathered from the Fish-Bird new media artwork. This is a challenging problem due to the interaction parameters being modelled; however, the use of a rigorous design approach and the application of the divergence measures derived earlier in the thesis enable targeted analysis and useful conclusions to be drawn. Results are shown to demonstrate these applications.
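The symmetric Kullback-Leibler divergence mentioned above has a simple form for discrete distributions, D_sym(P, Q) = KL(P||Q) + KL(Q||P). A minimal Python sketch, with made-up distributions (and the assumption that Q is nonzero wherever P is):

```python
# Symmetric KL divergence between two discrete distributions.
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    mask = p > 0                      # terms with p_i = 0 contribute 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def symmetric_kl(p, q) -> float:
    p, q = np.asarray(p, float), np.asarray(q, float)
    return kl(p, q) + kl(q, p)

print(symmetric_kl([0.7, 0.2, 0.1], [0.5, 0.3, 0.2]))
```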
11

Tarpin-Bernard, Franck. "INTERACTION HOMME-MACHINE ADAPTATIVE." Habilitation à diriger des recherches, INSA de Lyon, 2006. http://tel.archives-ouvertes.fr/tel-00164110.

Abstract:
My work lies at the boundary between two disciplinary fields: Human-Computer Interaction on the one hand and software engineering on the other. The general objective is to provide models, methods and tools for building and implementing, at low cost, interactive applications that are adaptive and/or easily adaptable to various contexts of use, within a quality-driven approach. Indeed, the variability of interaction devices, of the users themselves and of the environment in which interactions take place now requires supporting several levels of adaptation. The work carried out to date has produced significant results concerning the definition of classifications and analysis grids, design and construction processes, the tooling of the associated methods, and specific adaptation techniques, notably those based on users' cognitive profiles, as well as methodologies for analysing usage and the effectiveness and relevance of adaptations.
12

Vo, Dong-Bach. "Conception et évaluation de nouvelles techniques d'interaction dans le contexte de la télévision interactive." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0053/document.

Abstract:
Television has never stopped growing in popularity and offering new services to viewers. These increasingly interactive services make viewers more engaged in television activities. Unlike the use of a computer, they interact with applications on a remote screen, from a sofa that is not convenient for using a keyboard and a mouse. The remote control and the current interaction techniques associated with it struggle to meet viewers' expectations. To address this problem, this thesis explores the possibilities offered by the gestural modality to design new interaction techniques for interactive television, taking its context of use into account. We first present the specific context of television usage. We then propose a literature review of research aiming to improve the remote control, before focusing on gestural interaction. To guide the design of gesture-based interaction techniques, we introduce a taxonomy that attempts to unify surface-constrained and hands-free gestural interaction, whether instrumented or not. We then propose and evaluate various gestural interaction techniques along two lines of research: instrumented techniques, which improve the expressiveness of the traditional remote control, and hands-free techniques, exploring the possibility of performing gestures on the surface of the belly to control the television set.
13

Huot, Stéphane. "'Designeering Interaction': un chaînon manquant dans l'évolution de l'Interaction Homme-Machine." Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00823763.

Abstract:
Human Computer Interaction (HCI) is a fascinating research field because of its multidisciplinary nature, combining such diverse research domains as design, human factors and computer science as well as a variety of methods including empirical and theoretical research. HCI is also fascinating because it is still young and so much is left to discover, invent and understand. The evolution of computers, and more generally of interactive systems, is not frozen, and neither are the ways in which we interact with them. From desktop computers to mobile devices, to large displays or multi-surface environments, technology extends the possibilities, needs initiate technologies, and HCI is thus a constantly moving field. The variety of challenges to address, as well as their underlying combinations of sub-domains (design, computer science, experimental psychology, sociology, etc.), imply that we should also adapt, question and sometimes reinvent our research methods and processes, pushing the limits of HCI research further. Since I entered the field 12 years ago, my research activities have essentially revolved around two main themes: the design, implementation and evaluation of novel interaction techniques (on desktop computers, mobile devices and multi-surface environments) and the engineering of interactive systems (models and toolkits for advanced input and interaction). Over time, I realized that I had entered a loop between these two concerns, going back and forth between designing and evaluating new interaction techniques, and defining and implementing new software architectures or toolkits. I observed that they strongly influence each other: the design of interaction techniques informs on the capabilities and limitations of the platform and the software being used, and new architectures and software tools open the way to new designs and possibilities. Through the discussion of several of my research contributions in these fields, this document investigates how interaction design challenges technology, and how technology, or the engineering of interactive systems, could support and unleash interaction design. These observations will lead to a first definition of the "Designeering Interaction" conceptual framework that encompasses the specificities of these two fields and builds a bridge between them, paving the way to new research perspectives. In particular, I will discuss which types of tools, from the system level to the end user, should be designed, implemented and studied in order to better support interaction design along the evolution of interactive systems. At a more general level, Designeering Interaction is also a contribution that, I hope, will help better "understand how HCI works with technology".
14

Sanchez, Téo. "Interactive Machine Teaching with and for Novices." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG055.

Abstract:
Machine Learning (ML) algorithms in society or interactive technology generally provide users with little or no agency with respect to how models are optimized from data. Only experts design, analyze and optimize ML algorithms. At the intersection of HCI and ML, the field of Interactive Machine Learning (IML) aims at incorporating ML workflows within existing users' practices. Interactive Machine Teaching (IMT), in particular, focuses on involving non-expert users as "machine teachers" and empowering them in the process of building ML models. Non-experts could take advantage of building ML models to process and automate tasks on their own data, leading to more robust and less biased models for specialized problems. This thesis takes an empirical approach to IMT by focusing on how people develop strategies and understand interactive ML systems through the act of teaching. This research provides two user studies involving participants as teachers of image-based classifiers using transfer-learned artificial neural networks. These studies focus on what users understand of the ML model's behavior and what strategies they may use to "make it work." The second study focuses on machine teachers' understanding and use of two types of uncertainty: aleatoric uncertainty, which conveys ambiguity, and epistemic uncertainty, which conveys novelty. I discuss the use of uncertainty and active learning as tools for IMT. Finally, I report on artistic collaborations and adopt an auto-ethnographic approach to the challenges and opportunities of developing IMT with artists. I argue that people develop different teaching strategies that can evolve with insights obtained throughout the interaction. People's teaching strategies structure the composition of the data they curate and affect their ability to understand and predict the algorithm's behavior. Besides empowering people to build ML models, IMT can foster investigative behaviors, strengthening people's literacy in ML and artificial intelligence.
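The two uncertainty types discussed above have a standard estimate from an ensemble of softmax predictions: aleatoric uncertainty as the expected entropy of individual predictions, and epistemic uncertainty as the mutual information left over. The sketch below is a generic illustration of that decomposition, not the thesis's code; the ensemble values are made up.

```python
# Decomposing predictive uncertainty from an ensemble:
# total (predictive entropy) = aleatoric (expected entropy) + epistemic.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

# 3 ensemble members, 4 classes: they disagree, suggesting novelty
preds = np.array([[0.9, 0.05, 0.03, 0.02],
                  [0.1, 0.80, 0.05, 0.05],
                  [0.2, 0.10, 0.60, 0.10]])

total = entropy(preds.mean(axis=0))        # predictive entropy
aleatoric = entropy(preds).mean()          # expected entropy (ambiguity)
epistemic = total - aleatoric              # mutual information (novelty)
print(f"total={total:.3f}  aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}")
```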
15

Renna, I. "Upper body tracking and Gesture recognition for Human-Machine Interaction." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00717443.

Abstract:
Robots are artificial agents that can act in the human world thanks to perception capacities. In a human-robot interaction context, humans and robots share the same communication space. Indeed, companion robots are expected to communicate with humans in a natural and intuitive way, and one of the most natural ways is based on gestures and reactive body motions. To make this interaction as friendly as possible, a companion robot must therefore be endowed with one or more capabilities allowing it to perceive, recognize and react to human gestures. This thesis focuses on the design and development of a gesture recognition system for a human-robot interaction context. The system includes a tracking algorithm that determines the body position during movements and a higher-level module that recognizes gestures performed by human users. New contributions are made on both topics. First, a new approach is proposed for visual tracking of upper-body limbs. Analysing human body motion is challenging due to the large number of degrees of freedom of the articulated object modelling the upper body. To circumvent the computational complexity, each limb is tracked with an annealed particle filter and the different filters interact through belief propagation. The 3D human body is thus described as a graphical model in which the relationships between body parts are represented by conditional probability distributions. The pose estimation problem is formulated as probabilistic inference over a graphical model, where the random variables correspond to the individual limb parameters (position and orientation) and belief propagation messages ensure coherence between limbs. Secondly, we propose a framework for emblematic gesture detection and recognition. The most challenging issue in gesture recognition is to find good features with discriminant power (to distinguish between different gestures) and good robustness to the intrinsic variability of gestures (the context in which gestures are expressed, the morphology of the person, the point of view, etc.). In this work, we propose a new arm-kinematics normalization scheme reflecting both the muscular activity and the arm's appearance when a gesture is performed. The obtained signals are first segmented and then analysed by two machine learning techniques: Hidden Markov Models and Support Vector Machines. The two methods are compared on a five-class emblematic gesture recognition task. Both systems show good performance with a minimalistic training database, regardless of the performer's anthropometry, gender, age or pose with respect to the sensing system. The work presented here was carried out within the framework of a PhD thesis under joint supervision between the Pierre et Marie Curie University (ISIR laboratory, Paris) and the University of Genova (IIT - Tera department) and was labelled by the French-Italian University.
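As a toy illustration of the annealed particle filtering described above, the sketch below runs one particle set through an annealing schedule that progressively sharpens a made-up one-dimensional observation likelihood before resampling. The real tracker works per limb on image likelihoods and couples filters through belief propagation.

```python
# A minimal sketch of one annealed particle filter run: reweight particles
# by a likelihood raised to an annealing exponent, resample, then diffuse.
import numpy as np

rng = np.random.default_rng(2)

def likelihood(x):                 # placeholder observation model
    return np.exp(-0.5 * (x - 1.0) ** 2)

particles = rng.normal(0.0, 2.0, size=500)     # limb-pose hypotheses
for beta in (0.25, 0.5, 1.0):                  # annealing schedule
    w = likelihood(particles) ** beta
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx] + rng.normal(0, 0.1, len(particles))

print(f"estimate: {particles.mean():.2f} (true peak at 1.0)")
```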
16

Collins, Micah Thomas 1975. "Modeling human-machine interaction in production systems for equipment design." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80503.

17

Perez, Jorge (Jorge I. ). "Designing interaction for human-machine collaboration in multi-agent scheduling." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106007.

Abstract:
In the field of multi-agent task scheduling, there are many algorithms that are capable of minimizing objective functions when the user is able to specify them. However, there is a need for systems and algorithms that are able to include user preferences or domain knowledge in the final solution. This would increase the usability of algorithms that are highly optimal mathematically but would otherwise not include characteristics desired by the end user. We hypothesized that allowing subjects to iterate over solutions while adding allocation and temporal constraints would allow them to take advantage of the computational power to solve the temporal problem while including their preferences. No statistically significant results were found supporting that such an algorithm is preferred by participants over manually solving the problem; however, there are trends that support the hypothesis. We found statistically significant evidence (p=0.0027) that subjects reported higher workload when working with Manual Mode and Modification Mode rather than Iteration Mode and Feedback Iteration Mode. We propose changes to the system that can provide guidance for the future design of interaction for scheduling problems.
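As a sketch of the interaction pattern studied here, the Python below has a toy scheduler propose an allocation, then re-solve after the user pins a task to an agent. The greedy makespan heuristic and task data are illustrative stand-ins for the actual multi-agent scheduling algorithm.

```python
# A minimal iterate-and-constrain loop: solve, let the user add an
# allocation constraint, and solve again.

def schedule(tasks, agents, pinned):
    """Greedy allocation; `pinned` maps task -> required agent."""
    load = {a: 0.0 for a in agents}
    plan = {}
    for task, dur in sorted(tasks.items(), key=lambda t: -t[1]):
        agent = pinned.get(task) or min(load, key=load.get)
        plan[task] = agent
        load[agent] += dur
    return plan, max(load.values())

tasks = {"t1": 4, "t2": 3, "t3": 3, "t4": 2}
agents = ["robot", "human"]

plan, makespan = schedule(tasks, agents, pinned={})
print(plan, makespan)                       # machine's first proposal
plan, makespan = schedule(tasks, agents, pinned={"t2": "human"})
print(plan, makespan)                       # after a user preference
```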
18

Renna, Ilaria. "Upper body tracking and Gesture recognition for Human-Machine Interaction." Paris 6, 2012. http://www.theses.fr/2012PA066119.

Abstract:
Robots are artificial agents that can act in the human world thanks to perception, action and reasoning capacities. In particular, companion robots are designed to share with humans the same physical and communication spaces while performing daily-life collaborative tasks and providing assistance. In such a context, interactions between humans and robots are expected to be as natural and intuitive as possible. One of the most natural ways is based on gestures and reactive body motions. To make this friendly interaction possible, a companion robot has to be endowed with one or more capabilities allowing it to perceive, recognize and react to human gestures. This PhD thesis focuses on the design and development of a gesture recognition system that can be exploited in a human-robot interaction context. The system includes (1) a limb-tracking algorithm that determines the human body position during movements and (2) a higher-level module that recognizes gestures performed by human users. New contributions are made on both topics. First, a new approach is proposed for visual tracking of upper-body limbs. Analysing human body motion is challenging due to the large number of degrees of freedom of the articulated object modelling the upper body. To circumvent the computational complexity, each limb is tracked with an annealed particle filter and the different filters interact through belief propagation. The 3D human body is described as a graphical model in which the relationships between body parts are represented by conditional probability distributions. The pose estimation problem is thus formulated as probabilistic inference over a graphical model, where the random variables correspond to the individual limb parameters (position and orientation) and belief propagation messages ensure coherence between limbs. Secondly, we propose a framework for emblematic gesture detection and recognition. The most challenging issue in gesture recognition is to find good features with discriminant power (to distinguish between different gestures) and good robustness to the intrinsic variability of gestures (the context in which gestures are expressed, the morphology of the person, the point of view, etc.). In this work, we propose a new arm-kinematics normalization scheme reflecting both the muscular activity and the arm's appearance when a gesture is performed. The obtained signals are first segmented and then analysed by two machine learning techniques: Hidden Markov Models and Support Vector Machines. The two methods are compared on a five-class emblematic gesture recognition task. Both systems show good performance with a minimalistic training database, regardless of the performer's anthropometry, gender, age or pose with respect to the sensing system. The work presented here was carried out within the framework of a PhD thesis under joint supervision between the Pierre et Marie Curie University (ISIR laboratory, Paris) and the University of Genova (IIT - Tera department) and was labelled by the French-Italian University.
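To illustrate the recognition stage compared in this thesis, the sketch below classifies fixed-length feature vectors from segmented gesture signals with a scikit-learn SVM. The synthetic features stand in for the thesis's normalised arm-kinematics features, and the Hidden Markov Model alternative is omitted.

```python
# A minimal sketch of SVM-based gesture classification on synthetic
# five-class feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_classes, per_class, dim = 5, 40, 12
X = np.vstack([rng.normal(c, 1.0, size=(per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")
```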
19

Ding, Sihao. "Multi-Perspective Image and Video Processing for Human-Machine Interaction." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488462115943949.

20

Norstedt, Emil, and Timmy Sahlberg. "Human Interaction with Autonomous machines: Visual Communication to Encourage Trust." Thesis, Blekinge Tekniska Högskola, Institutionen för maskinteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19706.

Abstract:
Ongoing development is happening within the construction industry: machines are being transformed from being operated by humans to being autonomous. This project was a collaboration with Volvo Construction Equipment (Volvo CE) and their new autonomous wheel loader. The autonomous machine is supposed to operate in the same environment as people; therefore, a well-developed safety system is required to eliminate accidents. The purpose has been to develop a system that increases safety for the workers and encourages trust in the autonomous machine. The system is based on visual communication to achieve trust between the machine and the people around it. An iterative process, with a focus on testing, prototyping and analysing, was used to accomplish a successful result. By creating models with a variety of functions, a better understanding was developed of how to design a human-machine interface that encourages trust. The iterative process resulted in a concept that communicates through eyes. Eye contact is an essential factor for creating trust in unfamiliar and exposed situations. The solution mediates different expressions by changing the colour and shape of the eyes to create awareness and to inform people moving around in the same environment. Specific information can be conveyed in various situations by adapting the colour and shape of the eyes. Using this way of communicating, trust in the autonomous machine can be encouraged.
21

Battut, Alexandre. "Interaction substrates and instruments for interaction histories." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG026.

Abstract:
In the digital world, as in the physical world, our interactions with objects leave traces that tell the story of the actions that shaped these objects over time. This historical data can be accessed by end users to help them better understand the steps that led to the current state of their system. These traces can also be reused for activities such as re-documenting their own history to arrange it in a way users find more understandable. Users may also be led to share these data in collaborative environments, to better coordinate and synchronize their work. While previous work has attempted to show the benefits of cross-application histories, current implementations of interaction histories in interactive systems tend to tie history data to their source application. This prevents users from cross-referencing historical data to review and correlate events that occurred in different applications. In this thesis, I argue that designing interaction histories that can be shared among applications and users would support browsing, understanding and reusing historical data. I first ground my work in the use case of collaborative writing to explore relatable yet complex trace ecologies and interaction history use. I identify recurring practices and issues with the use of history data by interviewing knowledge workers and conducting several design activities based on these observations. I describe a first proof-of-concept system integrating two history instruments resulting from these design activities, and the first iteration of a unifying structure for historical data to be shared among applications and users. The results of user studies show that users indeed express a need for unified and customizable interaction histories. Compiling the data gathered during these research activities, and building on previous work on "Dynamic Shareable Media" and the Interaction Substrates and Instruments model, I describe a framework to help create more flexible interaction histories. The goal is to describe how to design interaction history systems that help users take control of their historical data. I introduce Steps, a structure for unifying historical data that includes descriptive core attributes to preserve the integrity of a trace across applications, and extensible contextual attributes that let users reshape their histories to suit their needs. I then introduce OneTrace, a proof-of-concept prototype based on Steps that follows my descriptive framework for cross-application histories and defines interaction histories as digital material to be shaped through dedicated tools. I discuss the opportunities offered by this approach to support a paradigm shift in how we design and interact with interaction histories.
APA, Harvard, Vancouver, ISO, and other styles
22

Holmberg, Lars. "Human In Command Machine Learning." Licentiate thesis, Malmö universitet, Malmö högskola, Institutionen för datavetenskap och medieteknik (DVMT), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-42576.

Full text
Abstract:
Machine Learning (ML) and Artificial Intelligence (AI) impact many aspects of human life, from recommending a significant other to assisting the search for extraterrestrial life. The field develops rapidly, and exciting unexplored design spaces are constantly laid bare. The focus of this work is one of these areas: ML systems in which decisions concerning ML model training, usage and selection of the target domain lie in the hands of domain experts. This work thus concerns ML systems that function as tools that augment and/or enhance human capabilities. The approach presented is denoted Human In Command ML (HIC-ML) systems. To enquire into this research domain, design experiments of varying fidelity were used. Two of these experiments focus on augmenting human capabilities and target the domains of commuting and battery sorting. One experiment focuses on enhancing human capabilities by identifying similar hand-painted plates. The experiments are used as illustrative examples to explore settings where domain experts can potentially train an ML model independently and, in an iterative fashion, interact with it and interpret and understand its decisions. HIC-ML should be seen as a governance principle that focuses on adding value and meaning for users. In this work, concrete application areas are presented and discussed. To open up the design of ML-based products in this area, an abstract model for HIC-ML is constructed and design guidelines are proposed. In addition, terminology and abstractions useful when designing for explicability are presented, imposing structure and rigidity derived from scientific explanations. Together, this opens up a contextual shift in ML and makes new application areas probable: areas that naturally couple the use of AI technology to human virtues and that can potentially, as a consequence, result in a democratisation of the use of, and knowledge about, this powerful technology.
APA, Harvard, Vancouver, ISO, and other styles
23

BAZZANO, FEDERICA. "Human-Machine Interfaces for Service Robotics." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2734314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Rydström, Annie. "The effect of haptic feedback in visual-manual human-machine interaction." Licentiate thesis, Luleå tekniska universitet, Arbetsvetenskap, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-25910.

Full text
Abstract:
Humans use all their senses when they explore and interact with the environment. In human-machine interaction (HMI) vision is the dominant sense, followed by audition. Haptic information - information concerning the sense of touch - is not commonly available. The overall aim of this thesis was to investigate how haptic feedback affects visual-manual HMI. In the experiment presented in paper I the spatial haptic properties shape and location were compared. Shape encoding often relies on users sequentially exploring differently shaped knobs, levers or buttons. The experiment revealed that physical shapes available through a shape-changing device can be as efficient as adjacently located push-buttons to encode functions in an interface. The experiment presented in paper II investigated the extent to which interface information can be transferred between the haptic and visual modalities. The feedback - rendered textures - was displayed haptically through a rotary device and visually through a computer monitor. There was a cross-modal transfer between the modalities, although not effortless, and the transfer from haptics to vision seemed to be easier than the transfer from vision to haptics. The asymmetry of the cross-modal transfer and the enhanced visual performance might be a result of the visual information being more useful for the task at hand. Paper III presents an experiment carried out in a car simulator. The experiment was conducted to investigate how haptic feedback in an in-car interface affects driver behaviour. Visual feedback was provided on a screen at the centre panel of the simulator. Haptic feedback was provided through the interaction device - a rotary device. The results revealed that, although driving performance degradation did not differ between the different haptic and visual feedback conditions, all conditions caused a degradation in driving performance. Visual behaviour did not differ between conditions including visual feedback. It is therefore apparent that the haptic feedback was not actively used when visual interface information was provided. Using haptic feedback only was shown to be more time-consuming. In addition it was revealed that tasks with only haptic feedback induce a cognitive load on the driver. It was apparent in studies II and III that the haptic information is not actively used if the visual information is more easily achieved.


APA, Harvard, Vancouver, ISO, and other styles
25

Rydström, Annie. "The effect of haptic feedback in visual-manual human-machine interaction /." Luleå : Luleå University of Technology, 2007. http://epubl.ltu.se/1402-1757/2007/41/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Degani, Asaf. "Modeling human-machine systems : on modes, error, and patterns of interaction." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/25983.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Riviere, Jean-Philippe. "Capturing traces of the dance learning process." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG054.

Full text
Abstract:
This thesis focuses on designing interactive tools to understand and support dance learning from videos. Dancers' learning practice is a rich source of information for researchers interested in designing systems that support motor learning: expert dancers embody a wide range of skills that they reuse when learning new dance sequences. However, these skills are partly the result of embodied, implicit knowledge that is difficult for an individual to express and verbalize. In this thesis, I argue that we can capture and save traces of dancers' embodied knowledge and use them to design interactive tools that support dance learning. My approach is to study real-life dance learning tasks in both individual and collaborative settings. Based on the findings from these studies, I contribute to a better understanding of the implicit processes that underlie dance learning in individual and collective contexts. My thesis highlights that although dancers' learning processes are diverse, similar strategies emerge to structure their learning, and that these strategies can be documented by saving traces of the learning process. Finally, I discuss the opportunity that capturing this embodied knowledge represents, and I bring new perspectives to the design of video-based movement learning tools.
APA, Harvard, Vancouver, ISO, and other styles
28

Strickland, Ted John Jr. "Dynamic management of multichannel interfaces for human interaction with computer-based intelligent assistants." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184793.

Full text
Abstract:
For complex man-machine tasks where multi-media interaction with computer-based assistants is appropriate, a portion of the assistant's intelligence must be devoted to managing its communication processes with the user. Since people often serve the role of assistants, the conventions of human communication provide a basis for designing the communication processes of the computer-based assistant. Human decision making for communication requires knowledge of the user's style, the task demands, and communication practices, and knowledge of the current situation. Decisions necessary for effective communication, when, how, and what to communicate, can be expressed using these knowledge sources. A system based on human communication rules was developed to manage the communication decisions of an intelligent assistant. The Dynamic Communication Management (DCM) system consists of four components, three models and a manager. The model of the user describes the user's communication preferences for different task situations. The model of the task is used to establish the user's current activity and to describe how communication should be conducted for this activity. The communication model provides the rules needed to make decisions: when to communicate the message, how to present the message to the user, and what information should be communicated. The Communication Manager controls and coordinates these models to conduct all communication with the user. Performance with DCM as the interface to a simulated Flexible Manufacturing System (FMS) control task was established to learn about the potential benefits of the concept. An initial comparison showed no improvement over a keyboard and monitor interface, but provided performance data which exposed the differences in information needed for decision making using auditory and visual communication. This knowledge and related performance data were used to redesign features of the DCM. The redesigned DCM significantly improved all aspects of system performance compared to the keyboard and monitor interface. The FMS performance measures and performance on a secondary task improved, user communication behavior was changed favorably, and users preferred the advanced features of DCM. These types of benefits can potentially accrue for a variety of tasks where multi-media communication with computer-based intelligent assistants is managed with DCM.
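
The when/how/what decisions described above can be pictured as rules over the three models. The Python sketch below is a hedged illustration with invented model fields and thresholds; the dissertation's actual models and rules are much richer.

    from dataclasses import dataclass

    @dataclass
    class UserModel:
        prefers_audio_when_busy: bool = True   # communication preference

    @dataclass
    class TaskModel:
        activity: str = "monitoring"           # user's current activity
        visual_load: float = 0.8               # 0..1 load on the visual channel

    def decide_channel(user: UserModel, task: TaskModel, urgency: float) -> str:
        """Assume WHEN has already been decided; choose HOW to present the message."""
        if urgency > 0.9:
            return "audio+visual"              # redundant coding for alarms
        if task.visual_load > 0.7 and user.prefers_audio_when_busy:
            return "audio"                     # spare the loaded visual channel
        return "visual"

    print(decide_channel(UserModel(), TaskModel(), urgency=0.5))  # -> audio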
APA, Harvard, Vancouver, ISO, and other styles
29

Vo, Dong-Bach. "Conception et évaluation de nouvelles techniques d'interaction dans le contexte de la télévision interactive." Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0053.

Full text
Abstract:
Television has never stopped growing in popularity and offering new services to viewers. These increasingly interactive services make viewers more engaged in television activities. Unlike computer users, viewers interact with a remote screen, using a remote control and applications, from their sofa, a setting that is not convenient for a keyboard and a mouse. The remote control and the interaction techniques currently associated with it struggle to meet viewers' expectations. To address this problem, the work of this thesis explores the possibilities offered by the gestural modality to design new interaction techniques for interactive television, taking its context of use into account. We first present the specific context of television usage. We then propose a design space characterizing existing research on improving the remote control, before focusing on gestural interaction. To guide the design of new techniques, we introduce a taxonomy that attempts to unify gestural interaction constrained by a surface and hand-free gestural interaction, whether instrumented or not. We then design and evaluate several gestural interaction techniques along two lines of research: instrumented gestural interaction techniques, which improve the expressiveness of the traditional remote control, and hand-free gestural interaction techniques, exploring the possibility of performing gestures on the surface of the belly to control the television.
APA, Harvard, Vancouver, ISO, and other styles
30

Bushman, James B. "Identification of an operator's associate model for cooperative supervisory control situations." Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/30992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Cheng, Kelvin. "Direct interaction with large displays through monocular computer vision." Connect to full text, 2008. http://ses.library.usyd.edu.au/handle/2123/5331.

Full text
Abstract:
Thesis (Ph. D.)--University of Sydney, 2009.
Title from title screen (viewed November 5, 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Information Technologies in the Faculty of Engineering & Information Technologies. Degree awarded 2009; thesis submitted 2008. Includes bibliographical references. Also available in print form.
APA, Harvard, Vancouver, ISO, and other styles
32

Eklund, Robert. "Disfluency in Swedish human-human and human-machine travel booking dialogues /." Doctoral thesis, Linköping : Univ, 2004. http://www.ep.liu.se/diss/science_technology/08/82/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Fekete, Jean-Daniel. "Nouvelle génération d'Interfaces Homme-Machine pour mieux agir et mieux comprendre." Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2005. http://tel.archives-ouvertes.fr/tel-00876183.

Full text
Abstract:
For the past decade or so, Human-Computer Interaction (HCI) has been undergoing a major transformation, driven in particular by the diversification of devices and uses and by the change in scale of the data that transits through or resides on our computers. This diversity is now a major source of difficulty for research and industry: architectural models and the tools for designing and building interactive systems can no longer manage it properly. The amount of data that passes through our computers and that we manipulate has grown exponentially over the past ten years, and the field of Human-Computer Interaction has so far lagged behind this transformation. Operating systems are beginning to offer automatic indexing mechanisms that make searching easier. We argue that this does not replace an overview and a structuring of the data. Finding a piece of information quickly is useful, but knowing how that information is organised enables understanding, and this understanding is a powerful asset for manipulating and managing large amounts of data. Understanding requires an overview, a visible and comprehensible structure, and this is what we aim to provide through information visualization. We present our work in these areas: a unified view of the software architecture of interactive systems that eases the construction of highly interactive and deeply adaptable graphical applications; new visualization and interaction techniques; and methods that allow the field of visualization to evolve more rigorously.
APA, Harvard, Vancouver, ISO, and other styles
34

Courtoux, Emmanuel. "Tangible Interaction for Wall Displays." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG028.

Full text
Abstract:
Wall displays immerse users in large, high-resolution information spaces. They are well suited to data analysis, as users can physically move around to explore the information space displayed on the wall, and they facilitate collaboration because their large size can accommodate multiple users at once. However, designing effective ways of interacting with wall displays is challenging. Traditional input devices, such as mice and keyboards, quickly show their limitations in an environment where multiple users interact and move freely. The HCI literature offers interesting alternatives to traditional input techniques. In particular, Tangible User Interfaces (TUIs), in which users rely on physical objects to interact with the virtual scene, have proved efficient with displays ranging from smartphones to tabletops. Tangible controllers have natural advantages, such as the haptic feedback they provide, which enables eyes-free manipulation. Their shape also affords specific grasps and manipulations, guiding users on what they can do with them. Empirical studies comparing tangibles with other forms of input report quantitative gains in manipulation speed and precision across different hardware setups. However, designing tangible controllers for wall displays is difficult. First, the large size and vertical orientation of walls must be taken into account to design tangibles with a suitable form factor. Second, users move in space: they step back to get a wider view, move closer to see details, or adjust their position relative to other users and objects in the room. Tangible controllers must therefore be usable regardless of the user's position, which affects both design and engineering. Finally, a wall display is often located in an environment that features other devices and displays; designing tangible controllers then requires considering the whole multi-display environment, which further constrains the tangibles' form factor and underlying technologies. My thesis makes three contributions towards enabling tangible interaction with wall displays. The first project, WallTokens, contributes low-cost, passive tangibles that users manipulate directly on the wall surface. WallTokens feature a mechanism that lets users easily attach them to, and detach them from, the wall, so that users can leave them in place and free their hands when they are done interacting. Two studies assessing their usability show that WallTokens are more precise and comfortable than bare-hand gestures for performing low-level manipulations on walls. The second project, SurfAirs, contributes tangibles that support both on-surface interaction, when users need precision and detail, and distant mid-air interaction, when they need a wide viewing angle, with smooth transitions between the two. Two studies comparing SurfAir prototypes with bare-hand gestures on low-level manipulation tasks show that SurfAirs outperform bare-hand gestures in accuracy, speed and user preference. The third project contributes a survey of the use of physical controllers to interact with displays. Each surveyed project is described along twelve dimensions that capture the design of the controller, the properties of the display and how the two communicate. We contribute a Web page for exploring this list of references along these dimensions, and use it to discuss the challenges underlying the design of tangible controllers in multi-display environments.
APA, Harvard, Vancouver, ISO, and other styles
35

Neiberg, Daniel. "Modelling Paralinguistic Conversational Interaction : Towards social awareness in spoken human-machine dialogue." Doctoral thesis, KTH, Tal-kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102335.

Full text
Abstract:
Parallel with the orthographic streams of words in conversation are multiple layered epiphenomena, short in duration and with a communicative purpose. These paralinguistic events regulate the interaction flow via gaze, gestures and intonation. This thesis focuses on how to compute, model, discover and analyze prosody and its applications for spoken dialog systems. Specifically, it addresses automatic classification and analysis of conversational cues related to turn-taking, brief feedback and affective expressions, their cross-relationships, as well as their cognitive and neurological basis. Techniques are proposed for instantaneous and suprasegmental parameterization of scalar- and vector-valued representations of fundamental frequency, but also of intensity and voice quality. Examples are given of how to engineer supervised learned automata for off-line processing of conversational corpora, as well as for incremental on-line processing under low-latency constraints, suitable as detector modules in a responsive social interface. Specific attention is given to the communicative functions of vocal feedback like "mhm", "okay" and "yeah, that's right", as postulated by theories of grounding and emotion and by a survey of laymen's opinions. The potential functions and their prosodic cues are investigated via automatic decoding, data mining, exploratory visualization and descriptive measurements.
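
As a simplified picture of suprasegmental parameterization, the sketch below reduces a fundamental-frequency contour to a few scalar functionals of the kind a turn-taking or feedback classifier could consume. It is an illustration with invented conventions (unvoiced frames coded as 0), not the thesis's actual feature set.

    import numpy as np

    def f0_functionals(f0: np.ndarray) -> dict:
        voiced = f0[f0 > 0]                    # keep voiced frames only
        if voiced.size < 2:
            return {"mean": 0.0, "std": 0.0, "slope": 0.0}
        t = np.arange(voiced.size)
        slope = np.polyfit(t, voiced, 1)[0]    # linear F0 trend over the segment
        return {"mean": float(voiced.mean()),
                "std": float(voiced.std()),
                "slope": float(slope)}

    # A falling contour, as in many acknowledging "mhm" tokens, gives a negative slope.
    print(f0_functionals(np.array([180, 175, 170, 0, 160, 150.0])))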


APA, Harvard, Vancouver, ISO, and other styles
36

Tsonis, Christos George. "An analysis of information complexity in air traffic control human machine interaction." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35560.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2006.
Includes bibliographical references (p. 121-126).
This thesis proposes, develops and validates a methodology to quantify the complexity of air traffic control (ATC) human-machine interaction (HMI). Within this context, complexity is defined as the minimum amount of information required to describe the human machine interaction process in some fixed description language and chosen level of detail. The methodology elicits human information processing via cognitive task analysis (CTA) and expresses the HMI process algorithmically as a cognitive interaction algorithm (CIA). The CIA is comprised of multiple functions which formally describe each of the interaction processes required to complete a nominal set of tasks using a certain machine interface. Complexities of competing interface and task configurations are estimated by weighted summations of the compressed information content of the associated CIA functions. This information compression removes descriptive redundancy and approximates the minimum description length (MDL) of the CIA. The methodology is applied to a representative en-route ATC task and interface, and the complexity measures are compared to performance results obtained experimentally by human-in-the-loop simulations.
It is found that the proposed complexity analysis methodology and resulting complexity metrics are able to predict trends in operator performance and workload. This methodology would allow designers and evaluators of human supervisory control (HSC) interfaces the ability to conduct complexity analyses and use complexity measures to more objectively select between competing interface and task configurations. Such a method could complement subjective interface evaluations, and reduce the amount of costly experimental testing.
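
The complexity measure sketched above, a weighted sum of the compressed information content of CIA functions, can be approximated in a few lines by letting an off-the-shelf compressor stand in for the minimum description length. The function texts and weights below are invented for illustration.

    import zlib

    def mdl_complexity(cia_functions: dict, weights: dict) -> float:
        """Weighted sum of compressed description lengths, in bits."""
        total = 0.0
        for name, description in cia_functions.items():
            compressed_bits = 8 * len(zlib.compress(description.encode()))
            total += weights.get(name, 1.0) * compressed_bits
        return total

    cia = {"accept_handoff": "scan strip; read callsign; click accept; confirm",
           "issue_clearance": "select aircraft; choose altitude; transmit; log"}
    print(mdl_complexity(cia, {"accept_handoff": 2.0, "issue_clearance": 1.0}))

Compression removes descriptive redundancy, so two interface designs can be compared by the number of bits their interaction descriptions require at the same level of detail.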
APA, Harvard, Vancouver, ISO, and other styles
37

Tidball, Brian Esley. "Designing Computer Agents with Facial Personality to Improve Human-Machine Collaboration." Wright State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=wright1146857305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Toure, Zikra. "Human-Machine Interface Using Facial Gesture Recognition." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1062841/.

Full text
Abstract:
This Master's thesis proposes a human-computer interface for individuals with limited hand movement that incorporates facial gestures as a means of communication. The system recognizes faces and extracts facial gestures, mapping them to Morse code that is translated into English in real time. The system is implemented on a MacBook computer using Python, the OpenCV library and the Dlib library. The system was tested by six students. Five of the testers were not familiar with Morse code; they performed the experiments in an average of 90 seconds. One tester who was familiar with Morse code performed the experiment in 53 seconds. It is concluded that errors occurred due to variations in the testers' features, lighting conditions, and unfamiliarity with the system. Implementing auto-correction and auto-prediction would decrease typing time considerably and make the system more robust.
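
The Morse-decoding stage of such a pipeline is easy to sketch. The fragment below assumes the OpenCV/Dlib front end has already turned gestures into dots and dashes; the table is standard International Morse (truncated here) and the function is illustrative, not the thesis's code.

    MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
             "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J"}
    # ...table continues for K-Z and digits

    def decode(symbols: str) -> str:
        """Letters are separated by spaces, words by ' / '."""
        words = symbols.split(" / ")
        return " ".join("".join(MORSE.get(c, "?") for c in w.split())
                        for w in words)

    print(decode(".... .. / -.. .- -.."))  # -> "HI DAD"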
APA, Harvard, Vancouver, ISO, and other styles
39

Wagy, Mark David. "Enabling Machine Science through Distributed Human Computing." ScholarWorks @ UVM, 2016. http://scholarworks.uvm.edu/graddis/618.

Full text
Abstract:
Distributed human computing techniques have been shown to be effective ways of accessing the problem-solving capabilities of a large group of anonymous individuals over the World Wide Web. They have been successfully applied to such diverse domains as computer security, biology and astronomy. The success of distributed human computing in various domains suggests that it can be utilized for complex collaborative problem solving. Thus it could be used for "machine science": utilizing machines to facilitate the vetting of disparate human hypotheses for solving scientific and engineering problems. In this thesis, we show that machine science is possible through distributed human computing methods for some tasks. By enabling anonymous individuals to collaborate in a way that parallels the scientific method -- suggesting hypotheses, testing and then communicating them for vetting by other participants -- we demonstrate that a crowd can together define robot control strategies, design robot morphologies capable of fast-forward locomotion and contribute features to machine learning models for residential electric energy usage. We also introduce a new methodology for empowering a fully automated robot design system by seeding it with intuitions distilled from the crowd. Our findings suggest that increasingly large, diverse and complex collaborations that combine people and machines in the right way may enable problem solving in a wide range of fields.
APA, Harvard, Vancouver, ISO, and other styles
40

Brasier, Eugénie. "Using Augmented Reality in Everyday Life." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG110.

Full text
Abstract:
This manuscript describes the research I conducted during my Ph.D. on the uses of Augmented Reality (AR) in everyday life. I tackle the following three research questions: RQ1 - How can we reconsider interaction with wearable AR to better support long-lasting tasks? RQ2 - Can we provide guidelines on how a wearable AR device can enhance a handheld device? RQ3 - How can users remain in control of what Augmented Reality shows them? The manuscript is divided into three parts, each addressing one of these research questions. To answer the first question (RQ1), we present the ARPads project. An ARPad has no physical embodiment but represents a floating interaction plane on which users can move their hand to control a cursor displayed in an AR window. Such indirect input allows users to keep a comfortable posture while interacting with AR, as opposed to direct input, which forces users to keep their arms raised towards the content displayed in front of them. After exploring a design space of ARPads regarding their position and orientation relative to the user, we implement and empirically evaluate some promising combinations. Our results show that indirect input can achieve the same performance as direct input while limiting users' fatigue, and we derive guidelines for future implementations. Regarding the second question (RQ2), we focus on the association of wearable AR with mobile devices. We adopt a user-centered approach, starting with a workshop organized with users of mobile devices. Based on the feedback from this workshop, we define a design space of smartphone-centric applications that could benefit from this association, identifying two main dimensions: the function and the location of the AR content relative to the main content displayed on the mobile device's screen. This first contribution highlights creative ways to enhance mobile devices with AR; however, little can be asserted about such enhancements without actual measures of their performance. After prototyping some use cases to demonstrate their feasibility, we evaluate several distributions of UI components between a phone's screen and AR. Finally, we answer the last question (RQ3) by anticipating a future where users' fields of view are frequently augmented with AR content. We introduce the concept of AR de-augmentation as a means to prevent AR content from interfering with users' perception of the real world, giving users agency over that content. We first define a taxonomy of de-augmentation operations along three aspects: their scope, their trigger and their rendering. We then illustrate the concept with three scenarios demonstrating its usefulness in specific use cases. Finally, we implement a working prototype detailing interactions that let users define a de-augmented area. This last project is more theoretical and projective, opening questions and perspectives for future research directions.
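
The indirect mapping at the heart of ARPads can be pictured as projecting the hand position onto the pad plane and applying a control-display gain. Everything below (frames, axes, gain value) is an assumption made for the sketch, not the thesis's implementation.

    import numpy as np

    def pad_to_window(hand_pos: np.ndarray, pad_origin: np.ndarray,
                      pad_u: np.ndarray, pad_v: np.ndarray,
                      gain: float = 1.5) -> np.ndarray:
        """Project the hand onto the pad's (u, v) axes and scale by the CD gain."""
        d = hand_pos - pad_origin
        return gain * np.array([d @ pad_u, d @ pad_v])  # cursor (x, y) in window units

    pad_origin = np.array([0.2, -0.1, 1.0])             # pad centre, in metres
    pad_u, pad_v = np.array([1.0, 0, 0]), np.array([0, 0, 1.0])
    print(pad_to_window(np.array([0.3, -0.1, 1.05]), pad_origin, pad_u, pad_v))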
APA, Harvard, Vancouver, ISO, and other styles
41

Benkaouar Johal, Wafa. "Companion Robots Behaving with Style : Towards Plasticity in Social Human-Robot Interaction." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM082/document.

Full text
Abstract:
Companion robots are becoming technologically and functionally more and more capable; their capacities and usefulness are nowadays a reality. These robots are, however, not yet accepted in home environments, as the worth of having such a robot and of its companionship has not been established. Classically, social robots displayed generic social behaviours that did not take inter-individual differences into account. More and more work in Human-Robot Interaction goes towards personalisation of the companion. Personalisation and control of the companion could lead to a better understanding of the robot's behaviour, and proposing several ways of expression for companion robots playing a social role would allow users to customize their companion to their social preferences. In this work, we propose a plasticity framework for Human-Robot Interaction. We used a Scenario-Based Design method to elicit social roles for companion robots. Then, based on the literature in several disciplines, we propose to depict variations in the behaviour of the companion robot with behavioural styles. Behavioural styles are defined according to the social role, using non-verbal expressive parameters. These expressive parameters (static, dynamic and decorators) allow neutral motions to be transformed into styled motions. We conducted a perceptual study through a video-based survey showing two robots displaying styles, allowing us to evaluate the expressibility of two parenting behavioural styles by two kinds of robots. We found that participants were indeed able to discriminate between the styles in terms of dominance and authoritativeness, in line with the psychological theory on these styles. Most importantly, we found that the styles parents preferred for their children were not correlated with their own parental practice. Consequently, behavioural styles are relevant cues for social personalisation of the companion robot by parents. A second experimental study in a natural environment, involving child-robot interaction with 16 children, showed that parents and children expect a versatile robot able to play several social roles. This study also showed that behavioural styles influence the child's bodily attitude during the interaction. Dimensions classically studied in non-verbal communication allowed us to develop measures for child-robot interaction, based on data captured with a Kinect 2 sensor. In this thesis, we also propose a modularisation of a previously proposed affective and cognitive architecture, resulting in the new Cognitive, Affective Interaction Oriented (CAIO) architecture. This architecture has been implemented in the ROS framework, allowing its use on social robots. We also propose instantiations of the Stimulus Evaluation Checks of [Scherer, 2009] for two robotic platforms, allowing dynamic expression of emotions. Both the behavioural style framework and the CAIO architecture can be useful for socialising companion robots and improving their acceptability.
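
How expressive parameters might turn a neutral motion into a styled one can be pictured with a toy transform: a static amplitude scale and a dynamic speed factor applied to a joint trajectory. The parameter set below is invented and far simpler than the thesis's (static, dynamic, decorator) parameters.

    import numpy as np

    def stylize(neutral: np.ndarray, amplitude: float, speed: float) -> np.ndarray:
        """neutral: joint angles sampled at a fixed rate."""
        t_old = np.linspace(0, 1, neutral.size)
        t_new = np.linspace(0, 1, max(2, int(neutral.size / speed)))  # time-warp
        resampled = np.interp(t_new, t_old, neutral)
        return resampled.mean() + amplitude * (resampled - resampled.mean())

    neutral_motion = np.sin(np.linspace(0, np.pi, 50))   # a neutral gesture
    styled = stylize(neutral_motion, amplitude=1.4, speed=1.3)  # larger and faster
    print(styled.shape)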
APA, Harvard, Vancouver, ISO, and other styles
42

Farneland, Christian, and Magnus Harrysson. "Developing a Human-Machine-Interface with high usability." Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188499.

Full text
Abstract:
When developing a Human-Machine Interface (HMI), it is important to make sure that it is easy to learn and use, that is, that it has high usability. If it does not, the operator of the machine suffers unnecessarily and the machine becomes harder for its producer to sell; the effectiveness and efficiency of the machine drop when it is hard to operate. To make it easier for future developers to reach high usability when developing an HMI, this thesis aimed to define a carefully prepared process to follow. The result was a process that was tried out on an HMI prototype for waterjet cutting machines. This prototype was then tested in different use cases by both experienced operators and beginners. The testing produced positive feedback on the prototype, indicating that the process that had been followed was successful.
APA, Harvard, Vancouver, ISO, and other styles
43

Yang, Liu. "Modelling interruptions in human-agent interaction." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS611.pdf.

Full text
Abstract:
Interruptions play a significant role in shaping human communication and occur frequently in everyday conversations. They serve to regulate conversation flow, convey social cues, and promote shared understanding among speakers. Human communication involves a range of multimodal signals beyond speech alone: verbal and non-verbal modes of communication are intricately intertwined, conveying semantic and pragmatic content while shaping the communication process. The vocal mode incorporates acoustic features, such as prosody, while the visual mode encompasses facial expressions, hand gestures and body language. The rise of virtual and online communication has made it necessary to develop expressive communication for human-like embodied agents, including Embodied Conversational Agents (ECAs) and social robots. To foster seamless and natural interactions between humans and virtual agents, it is crucial to equip virtual agents with the ability to handle interruptions during interactions. This manuscript focuses on studying interruptions in human-human interactions and on enabling ECAs to interrupt human users during conversations. The primary objectives of this research are twofold: (1) in human-human interaction, analysing acoustic and visual signals to categorise interruption types and detect when interruptions occur; (2) endowing ECAs with the capability to predict when to interrupt and to generate their multimodal behaviour. To achieve these goals, we propose an annotation schema for identifying and classifying smooth turn exchanges, backchannels and different interruption types. We manually annotate exchanges in two corpora, a part of the AMI corpus and the French section of the NoXi corpus. After analysing multimodal non-verbal signals, we introduce MIC, an approach to classify the interruption type based on selected non-verbal signals (facial expression, prosody, head and hand motion) from both interlocutors (the interruptee and the interrupter). We also introduce One-PredIT, which uses a one-class classifier to identify potential interruption points by monitoring the real-time non-verbal behaviour of the current speaker (the interruptee only). Additionally, we propose AI-BGM, a generative model to compute the facial expressions and head rotations of an ECA while it is interrupting. Given the limited amount of data at our disposal, we employ transfer learning to train our interruption behaviour generation model using the well-trained Augmented Self-Attention Pruning neural network model.
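
A hedged sketch of the One-PredIT idea, with invented features: fit a one-class model on the speaker's usual non-verbal behaviour and flag outlier frames as candidate interruption points. The thesis's actual features and classifier configuration may differ.

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    # Rows are per-frame features, e.g. [F0 mean, intensity, head-motion energy].
    normal_frames = rng.normal(0.0, 1.0, size=(500, 3))

    detector = OneClassSVM(nu=0.05, kernel="rbf").fit(normal_frames)

    incoming = np.array([[0.1, -0.2, 0.3],   # typical behaviour
                         [4.0, 3.5, 5.0]])   # atypical: candidate interruption point
    print(detector.predict(incoming))        # 1 = inlier, -1 = outlier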
APA, Harvard, Vancouver, ISO, and other styles
44

Chevet, Clotilde. ""L'interaction homme-machine" : un système d'écritures qui fait monde." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUL170.

Full text
Abstract:
This thesis studies "voice assistants", those "textual beings" split between computer code and natural language, between writing and orality. In light of Apple's promise to its users, "Talk to Siri as you would to a person", we seek to understand and describe the writing system within which human-machine communicational mimesis takes shape. We study the different facets of this system: its enunciations (human and machinic), its gestures (mobilising the hand as much as the voice) and its actors (backstage as well as in the spotlight). At the crossroads of information and communication sciences and anthropology, this thesis combines several approaches: epistemological investigation, media archaeology and online ethnography. We explore three issues of writing in "human-machine interaction": the relationship to oneself, since writing allows personal expression; the relationship to the Other, since it enables communication; and the relationship to the world, since it allows the world to be described and organised.
APA, Harvard, Vancouver, ISO, and other styles
45

Marín, Urías Luis Felipe. "Reasoning about space for human-robot interaction." Toulouse 3, 2009. http://thesesups.ups-tlse.fr/1195/.

Full text
Abstract:
Human-robot interaction is a research field that has grown exponentially in recent years, bringing new challenges for the robot's geometric reasoning and for space sharing. To accomplish a task, the robot must not only reason about its own capacities but also take human perception into account; that is, "the robot must put itself in the human's point of view". In humans, the ability of visual perspective taking begins to appear around the 24th month of age. This ability is used to determine whether another person can see an object or not. Implementing this kind of social ability will improve the robot's cognitive capabilities and help the robot interact better with humans. In this work, we present a geometric spatial reasoning mechanism that uses the psychological concepts of "perspective taking" and "mental rotation" in two general frameworks: (i) motion planning for human-robot interaction, where the robot uses "egocentric perspective taking" to evaluate several configurations in which it can perform different interaction tasks; and (ii) face-to-face human-robot interaction, where the robot uses the human's point of view as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
Human-robot interaction is a research area that has grown exponentially in recent years. This brings new challenges to the robot's geometric reasoning and space-sharing abilities. The robot should not only reason about its own capacities but also consider the actual situation by looking through the human's eyes, thus "putting itself into the human's perspective". In humans, the "visual perspective taking" ability begins to appear by 24 months of age and is used to determine whether another person can see an object or not. Implementing this kind of social ability will improve the robot's cognitive capabilities and help the robot interact better with human beings. In this work, we present a geometric spatial reasoning mechanism that employs the psychological concepts of "perspective taking" and "mental rotation" in two general frameworks: (i) motion planning for human-robot interaction, where the robot uses "egocentric perspective taking" to evaluate several configurations in which it is able to perform different interaction tasks; and (ii) face-to-face human-robot interaction, where the robot uses perspective taking of the human as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
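As a minimal illustration of the geometric flavour of such reasoning, the sketch below tests whether an object falls within a human's field of view; the field-of-view angle is an assumed parameter and occlusion handling is omitted, so this is far simpler than the perspective-taking mechanism the thesis describes.

```python
# Toy "visual perspective taking" test: would a human looking along
# gaze_dir be able to see an object at obj_pos? Occlusion is ignored.
import numpy as np

def in_field_of_view(human_pos, gaze_dir, obj_pos, fov_deg=120.0):
    """True if obj_pos lies within the human's horizontal field of view."""
    to_obj = np.asarray(obj_pos, dtype=float) - np.asarray(human_pos, dtype=float)
    to_obj /= np.linalg.norm(to_obj)
    gaze = np.asarray(gaze_dir, dtype=float) / np.linalg.norm(gaze_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(gaze, to_obj), -1.0, 1.0)))
    return angle <= fov_deg / 2.0

# A person at the origin looking along +x can see an object at (2, 1):
print(in_field_of_view([0.0, 0.0], [1.0, 0.0], [2.0, 1.0]))  # True
```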
APA, Harvard, Vancouver, ISO, and other styles
46

Poltavchenko, Irina. "De l'analyse d'opinions à la détection des problèmes d'interactions humain-machine : application à la gestion de la relation client." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0030.

Full text
Abstract:
Motivated by the growing popularity of chatbots acting as advisors on corporate websites, this thesis tackles the detection of interaction problems between a virtual advisor and its users from the angle of opinion and emotion analysis in text. The thesis took place within a concrete application for the company EDF and relied on the EDF chatbot corpus. This corpus gathers spontaneous, rich expressions collected in ecological (sometimes called "in-the-wild") conditions, which are difficult to analyse automatically and still little studied. We propose a typology of interaction problems and have part of the corpus annotated according to it; part of this annotation is used to evaluate the system. The Automatic Detection of Interaction Problems (DAPI) system developed during this thesis is a hybrid system combining a symbolic approach with unsupervised learning of semantic representations through word embeddings. DAPI is intended to be connected directly to the chatbot and to detect interaction problems online, as soon as a user utterance is received. The originality of the proposed method rests on: (i) taking the dialogue history into account; (ii) modelling interaction problems as expressions of the user's spontaneous opinions, and opinion-related phenomena, towards the interaction; (iii) integrating the specificities of web and "in-the-wild" language as linguistic cues for the linguistic rules; (iv) using word embeddings (word2vec) learned on the large unlabelled chatbot corpus to model semantic similarities. The results obtained are very encouraging given the complexity of the data: F-score = 74.3%.
This PhD thesis is motivated by the growing popularity of chatbots acting as advisors on corporate websites. The research addresses the detection of interaction problems between a virtual advisor and its users from the angle of opinion and emotion analysis in text. The study takes place in the concrete application context of the French energy supplier EDF, using the EDF chatbot corpus. This corpus gathers spontaneous and rich expressions, collected in "in-the-wild" conditions, that are difficult to analyze automatically and still little studied. We propose a typology of interaction problems and annotate a part of the corpus according to this typology; a part of the created annotation is used to evaluate the system. The system named DAPI (automatic detection of interaction problems) developed during this thesis is a hybrid system that combines a symbolic approach with unsupervised learning of semantic representations (word embeddings). The purpose of the DAPI system is to be directly connected to the chatbot and to detect interaction problems online, as soon as a user statement is received. The originality of the proposed method rests on: (i) taking the history of the dialogue into account; (ii) modeling interaction problems as expressions of the user's spontaneous opinions or emotions towards the interaction; (iii) integrating the specificities of web-chat and in-the-wild language as linguistic cues for the linguistic rules; (iv) using lexical word embeddings (word2vec) learned on the large untagged chatbot corpus to model semantic similarities. The results obtained are very encouraging considering the complexity of the data: F-score = 74.3%.
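To give a concrete, if deliberately simplified, picture of such a hybrid detector, here is a sketch combining a handful of symbolic cue words with embedding-based similarity; the cue list, seed terms, embedding model, and threshold are all invented for illustration, and the thesis's rules also exploit the dialogue history.

```python
# Hedged sketch of a DAPI-like hybrid: symbolic cues OR embedding
# similarity to seed terms trigger an "interaction problem" flag.
# The pretrained GloVe model stands in for the word2vec model that
# the thesis trains on the untagged chatbot corpus.
import gensim.downloader

vectors = gensim.downloader.load("glove-wiki-gigaword-50")

PROBLEM_CUES = {"useless", "wrong", "stupid", "nonsense"}  # symbolic rules
SEED_TERMS = ["incomprehensible", "frustrating"]           # embedding seeds

def is_interaction_problem(utterance: str, threshold: float = 0.6) -> bool:
    tokens = utterance.lower().split()
    if PROBLEM_CUES & set(tokens):  # rule-based path
        return True
    for tok in tokens:              # embedding-based path
        if tok in vectors and any(
            vectors.similarity(tok, seed) > threshold
            for seed in SEED_TERMS if seed in vectors
        ):
            return True
    return False

print(is_interaction_problem("your answer is useless"))  # True
```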
APA, Harvard, Vancouver, ISO, and other styles
47

Évain, Andéol. "Optimizing the use of SSVEP-based brain-computer interfaces for human-computer interaction." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S083/document.

Full text
Abstract:
This thesis deals with the design and evaluation of interactive systems using brain-computer interfaces (BCIs). This type of interface has developed in recent years, first in the field of disability, to give severely disabled people means of interaction and communication, and more recently in other fields such as video games. Nevertheless, most work has focused on identifying the brain signals likely to carry useful information and on the processing needed to extract that information. Little work has addressed usability and the consideration of human factors in the interactive system as a whole. This thesis concentrates on systems based on steady-state visually evoked potentials (SSVEP) and sets out to study the whole brain-computer interactive system according to the criteria of human-computer interaction (HCI). More precisely, the points studied concern cognitive demand, user frustration, calibration conditions, and hybrid BCIs.
This PhD thesis deals with the design and evaluation of interactive systems based on brain-computer interfaces (BCIs). This type of interface has developed in recent years, first in the field of disability, in order to provide disabled people with means of interaction and communication, and more recently in other fields such as video games. However, most of the research so far has focused on the identification of cerebral patterns carrying useful information and on signal processing for the detection of these patterns. Less attention has been given to usability aspects. This PhD focuses on interactive systems based on steady-state visually evoked potentials (SSVEP) and aims at considering the interactive system as a whole, using the concepts of human-computer interaction. More precisely, a focus is placed on cognitive demand, user frustration, calibration conditions, and hybrid BCIs.
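For background, a common way to detect which flickering SSVEP stimulus a user attends to is canonical correlation analysis (CCA) against sinusoidal reference signals. The sketch below shows that standard technique under assumed sampling rate, channel count, and stimulus frequencies; the thesis centres on usability rather than signal processing, so this is context, not its pipeline.

```python
# Standard CCA-based SSVEP frequency detection (textbook technique,
# not necessarily the thesis's pipeline). Parameters are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                      # sampling rate in Hz (assumed)
FREQS = [10.0, 12.0, 15.0]    # candidate stimulus frequencies (assumed)

def reference_signals(freq, n_samples, n_harmonics=2):
    """Sine/cosine references at the stimulus frequency and harmonics."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def detect_frequency(eeg):
    """eeg: (n_samples, n_channels); returns the best-matching frequency."""
    scores = []
    for f in FREQS:
        x_c, y_c = CCA(n_components=1).fit_transform(
            eeg, reference_signals(f, len(eeg)))
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return FREQS[int(np.argmax(scores))]

# Synthetic check: 2 s of noisy 12 Hz activity on 4 channels.
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * 12.0 * t)[:, None] + 0.5 * np.random.randn(len(t), 4)
print(detect_frequency(eeg))  # expected: 12.0
```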
APA, Harvard, Vancouver, ISO, and other styles
48

Ravenel, John Bishop. "Applying human-machine interaction design principles to retrofit existing automated freight planning systems." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122253.

Full text
Abstract:
Thesis: M. Eng. in Supply Chain Management, Massachusetts Institute of Technology, Supply Chain Management Program, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 66-70).
With the increased application of cognitive computing across industries, companies strive to ready their people and machines for future system change. Given resource constraints, business needs, and the speed of change, many companies may opt for system augmentation rather than the adoption of entirely new systems. At the same time, technology is changing at a pace never before realized. Against this backdrop, human actors and machines are working together interactively in new and expanding ways. Further, recent business model innovations, particularly in the retail space, have cast logistics execution as a potential major competitive advantage. In this context, we considered the conceptual question of how best to iteratively improve a logistics planning system, composed of both human and machine actors, to reduce transportation and labor costs and to increase the organization's ability to think and act strategically.
To confront these current technological realities - the need to stage for agent-based systems and cognitive computing, the likelihood of system retrofit over rebuild, the ever-increasing rate of change, and the rapid intertwining of human and machine roles - we proposed using human-machine interaction (HMI) design paradigms to retrofit an existing loosely coupled human-machine planning system. While HMI principles are normally applied to tightly coupled systems such as jet airplanes, the HMI architectural design, applied in this novel context, showed significant applicability to an existing loosely coupled planning system. In addition to meeting the realities of today's competitive landscape, the developed HMI framework is tailored to a retrofit situation and also meets resiliency considerations. This novel application of HMI frameworks to an existing loosely coupled joint cognitive planning system shows tremendous promise in addressing these imminent realities.
With regard to the particular freight planning system considered, 71% of manual interventions were caused by the wrong sourcing facility being assigned to supply pallets to a customer; the remaining interventions were caused by carrier changes (18%), customer restrictions (9%), and one change prompted by a data discrepancy. Further, at a conceptual level, the application of HMI frameworks to an existing freight planning system was effective at isolating data and alignment incongruences, displayed lower communication costs than recurrent system rework processes, and aligned well with system resiliency factors.
by John Bishop Ravenel.
M. Eng. in Supply Chain Management
APA, Harvard, Vancouver, ISO, and other styles
49

Eid, Fatma Elzahraa Sobhy. "Predicting the Interactions of Viral and Human Proteins." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/77581.

Full text
Abstract:
The world has proven unprepared for deadly viral outbreaks. Designing antiviral drugs and strategies requires a firm understanding of the interactions taking place between the proteins of the virus and human proteins. Current computational models for predicting these interactions consider only single viruses for which extensive prior knowledge is available. The two prediction frameworks in this dissertation, DeNovo and DeNovo-Human, make it possible for the first time to predict the interactions between any viral protein and human proteins. They further helped to answer critical questions about the Zika virus. DeNovo utilizes concepts from virology, bioinformatics, and machine learning to make predictions for novel viruses possible. It pools protein-protein interactions (PPIs) from different viruses sharing the same host. It further introduces taxonomic partitioning to make the reported performance reflect the situation of predicting for a novel virus. DeNovo avoids the expected low accuracy of such a prediction by introducing a negative sampling scheme based on sequence similarity. DeNovo achieved accuracy of up to 81% and 86% when predicting for a new viral species and a new viral family, respectively. This result is comparable to the best previously achieved in single virus-host and intra-species PPI prediction cases. DeNovo predicts PPIs of a novel virus without requiring known PPIs for it, but with a limitation on the number of human proteins it can make predictions against. The second framework, DeNovo-Human, relaxes this limitation by forcing in-network prediction and random sampling while keeping the pooling technique of DeNovo. The accuracy and AUC are both promising (>85% and >91%, respectively). DeNovo-Human facilitates predicting the virus-human PPI network. To demonstrate how the two frameworks can enrich our knowledge about virus behavior, I use them to answer interesting questions about the Zika virus. The research questions examine how the Zika virus enters human cells, fights the innate immune system, and causes microcephaly. The answers obtained are well supported by recently published Zika virus studies.
Ph. D.
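As a toy illustration of the kind of pipeline such frameworks build on, the sketch below featurizes virus-human protein pairs by amino-acid composition and trains a standard classifier; the placeholder sequences, featurization, and model are simplifying assumptions, and DeNovo's actual contributions (sequence-similarity negative sampling, taxonomic partitioning) are not reproduced here.

```python
# Toy virus-human PPI classifier: amino-acid composition features for
# each protein in a pair, fed to a standard classifier. Everything here
# is a simplified stand-in for the DeNovo setup described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> np.ndarray:
    """20-dim amino-acid frequency vector for one protein sequence."""
    counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(seq), 1)

def pair_features(viral_seq: str, human_seq: str) -> np.ndarray:
    return np.concatenate([composition(viral_seq), composition(human_seq)])

# Placeholder pairs: (viral sequence, human sequence, interacts?)
pairs = [("MKVLAT", "MAHTRE", 1), ("MQLINS", "MTSEKG", 0)]
X = np.vstack([pair_features(v, h) for v, h, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(pair_features("MKVLAT", "MTSEKG").reshape(1, -1)))
```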
APA, Harvard, Vancouver, ISO, and other styles
50

Fruchard, Bruno. "Techniques d'interaction exploitant la mémoire pour faciliter l'activation de commandes." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT010/document.

Full text
Abstract:
To control an interactive system, a user usually has to select commands by browsing lists and hierarchical menus. To select them faster, they can perform gestural shortcuts. However, to be effective, they must memorise these shortcuts, a difficult task when a large number of commands must be activated. In the first part, we study the advantages of positional (pointing) and directional (Marking menus) gestures for command memorisation, as well as the use of the user's body as an interaction surface and the impact of two types of semantic aids (stories, images) on memorisation effectiveness. We show that positional gestures make learning faster and easier, and that suggesting that users create stories linked to the commands considerably improves their recall rates. In the second part, we present bi-positional gestures that allow a large number of commands to be activated. We show their effectiveness in two interaction contexts: the touchpad of a laptop (MarkPad) and a smartwatch (SCM).
To control an interactive system, users usually have to select commands by browsing lists and hierarchical menus. To go faster, they can perform gestural shortcuts. However, to be effective, they must memorize these shortcuts, which is a difficult task when a large number of commands must be activated. In the first part, we study the advantages of positional (pointing) and directional (Marking menus) gestures for command memorization, as well as the use of the user's body as an interaction surface and the impact of two types of semantic aids (stories, images) on memorization effectiveness. We show that positional gestures make learning faster and easier, and that suggesting that users create stories related to commands significantly improves their recall rates. In the second part, we present bi-positional gestures that allow the activation of a large number of commands. We demonstrate their effectiveness in two interaction contexts: the touchpad of a laptop (MarkPad) and a smartwatch (SCM).
APA, Harvard, Vancouver, ISO, and other styles
