Dissertations / Theses on the topic 'Human machine interaction'
Consult the top 50 dissertations / theses for your research on the topic 'Human machine interaction.'
Gouvrit, Montaño Florence. "Empathy and Human-Machine Interaction." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1313442553.
Manuri, Federico. "Visualization and Human-Machine Interaction." Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2673784.
Ogrinc, Matjaž. "Information acquisition in physical human-machine interaction." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/58935.
Gnjatović, Milan. "Adaptive dialogue management in human-machine interaction." München: Verl. Dr. Hut, 2009. http://d-nb.info/997723475/04.
Westerberg, Simon. "Semi-Automating Forestry Machines : Motion Planning, System Integration, and Human-Machine Interaction." Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-89067.
De Pace, Francesco. "Natural and multimodal interfaces for human-machine and human-robot interaction." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2918004.
Georgiev, Nikolay. "Assisting physiotherapists by designing a system utilising Interactive Machine Learning." Thesis, Uppsala universitet, Institutionen för informatik och media, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447489.
Nguyen, Van Toi. "Visual interpretation of hand postures for human-machine interaction." Thesis, La Rochelle, 2015. http://www.theses.fr/2015LAROS035/document.
Full textNowadays, people want to interact with machines more naturally. One of the powerful communication channels is hand gesture. Vision-based approach has involved many researchers because this approach does not require any extra device. One of the key problems we need to resolve is hand posture recognition on RGB images because it can be used directly or integrated into a multi-cues hand gesture recognition. The main challenges of this problem are illumination differences, cluttered background, background changes, high intra-class variation, and high inter-class similarity. This thesis proposes a hand posture recognition system consists two phases that are hand detection and hand posture recognition. In hand detection step, we employed Viola-Jones detector with proposed concept Internal Haar-like feature. The proposed hand detection works in real-time within frames captured from real complex environments and avoids unexpected effects of background. The proposed detector outperforms original Viola-Jones detector using traditional Haar-like feature. In hand posture recognition step, we proposed a new hand representation based on a good generic descriptor that is kernel descriptor (KDES). When applying KDES into hand posture recognition, we proposed three improvements to make it more robust that are adaptive patch, normalization of gradient orientation in patches, and hand pyramid structure. The improvements make KDES invariant to scale change, patch-level feature invariant to rotation, and final hand representation suitable to hand structure. Based on these improvements, the proposed method obtains better results than original KDES and a state of the art method
Spall, Roger Paul. "A human-machine interaction tool set for Smalltalk 80." Thesis, Sheffield Hallam University, 1990. http://shura.shu.ac.uk/20389/.
Wood, David K. "Learning from Gross Motion Observations of Human-Machine Interaction." Thesis, The University of Sydney, 2011. https://hdl.handle.net/2123/29223.
Tarpin-Bernard, Franck. "Interaction homme-machine adaptative." Habilitation à diriger des recherches, INSA de Lyon, 2006. http://tel.archives-ouvertes.fr/tel-00164110.
Vo, Dong-Bach. "Conception et évaluation de nouvelles techniques d'interaction dans le contexte de la télévision interactive." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0053/document.
Full textTelevision has never stopped being popularized and offering new services to the viewers. These interactive services make viewers more engaged in television activities. Unlike the use of a computer, they interact on a remote screen with a remote control from their sofa which is not convenient for using a keyboard and a mouse. The remote control and the current interaction techniques associated with it are struggling to meet viewers’ expectations. To address this problem, the work of this thesis explores the possibilities offered by the gestural modality to design new interaction techniques for interactive television, taking into account its context of use. More specifically, in a first step, we present the specific context of the television usage. Then, we propose a litterature review of research trying to improve the remote control. Finally we focus on gestural interaction. To guide the design of interaction techniques based on gestural modality, we introduce a taxonomy that attempts to unify gesture interaction constrained by a surface and hand-free gesture interaction. Therefore, we propose various techniques for gestural interaction in two scopes of research : gestural instrumented interaction techniques, which improves the traditional remote control expressiveness, and hand-free gestural interaction by exploring the possibility o performing gestures on the surface of the belly to control the television set
Huot, Stéphane. "'Designeering Interaction': un chaînon manquant dans l'évolution de l'Interaction Homme-Machine." Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00823763.
Sanchez, Téo. "Interactive Machine Teaching with and for Novices." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG055.
Full textMachine Learning algorithms in society or interactive technology generally provide users with little or no agency with respect to how models are optimized from data. Only experts design, analyze, and optimize ML algorithms. At the intersection of HCI and ML, the field of Interactive Machine Learning (IML) aims at incorporating ML workflows within existing users' practices. Interactive Machine Teaching (IMT), in particular, focuses on involving non-expert users as "machine teachers" and empowering them in the process of building ML models. Non-experts could take advantage of building ML models to process and automate tasks on their data, leading to more robust and less biased models for specialized problems. This thesis takes an empirical approach to IMT by focusing on how people develop strategies and understand interactive ML systems through the act of teaching. This research provides two user studies involving participants as teachers of image-based classifiers using transfer-learned artificial neural networks. These studies focus on what users understand from the ML model's behavior and what strategy they may use to "make it work." The second study focuses on machine teachers' understanding and use of two types of uncertainty: aleatoric uncertainty, which conveys ambiguity, and epistemic uncertainty, which conveys novelty. I discuss the use of uncertainty and active learning in IMT. Finally, I report artistic collaborations and adopt an auto-ethnographic approach to challenges and opportunities for developing IMT with artists. I argue that people develop different teaching strategies that can evolve with insights obtained throughout the interaction. People's teaching strategies structure the composition of the data they curated and affect their ability to understand and predict the algorithm behavior. Besides empowering people to build ML models, IMT can foster investigative behaviors, leveraging peoples' literacy in ML and artificial intelligence
Renna, I. "Upper body tracking and Gesture recognition for Human-Machine Interaction." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00717443.
Collins, Micah Thomas. "Modeling human-machine interaction in production systems for equipment design." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80503.
Perez, Jorge (Jorge I.). "Designing interaction for human-machine collaboration in multi-agent scheduling." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106007.
Full textThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 57-58).
In the field of multi-agent task scheduling, many algorithms are capable of minimizing objective functions when the user is able to specify them. However, there is a need for systems and algorithms that can include user preferences or domain knowledge in the final solution. This would increase the usability of algorithms that are mathematically optimal but would otherwise omit characteristics desired by the end user. We hypothesized that allowing subjects to iterate over solutions while adding allocation and temporal constraints would let them take advantage of the computational power to solve the temporal problem while including their preferences. No statistically significant results were found supporting that such an algorithm is preferred over manually solving the problem among the participants, although trends support the hypothesis. We did find statistically significant evidence (p=0.0027) that subjects reported higher workload when working with Manual Mode and Modification Mode rather than Iteration Mode and Feedback Iteration Mode. We propose changes to the system that can guide future design of interaction for scheduling problems.
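The core loop described here, where a user adds temporal constraints and the system re-checks feasibility before re-solving, can be grounded in a classical formalism. The sketch below is a hedged illustration under our own assumptions, not the thesis's system: user constraints are encoded as a Simple Temporal Network, whose consistency is checked by looking for negative cycles in the distance graph (Floyd-Warshall).

```python
# Hedged sketch, not the thesis system: a Simple Temporal Network feasibility
# check. Constraints "t_j - t_i <= w" become edges i->j with weight w; the
# network is consistent iff the distance graph has no negative cycle.
import math

def consistent(n, edges):
    d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, w in edges:              # t_j - t_i <= w
        d[i][j] = min(d[i][j], w)
    for k in range(n):                 # Floyd-Warshall shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))  # negative self-loop = conflict

# The user insists task 1 starts at least 5 after task 0 (t0 - t1 <= -5)
# while also requiring task 1 to start within 3 of task 0 (t1 - t0 <= 3).
print(consistent(2, [(1, 0, -5), (0, 1, 3)]))  # False: constraints conflict
```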
Renna, Ilaria. "Upper body tracking and Gesture recognition for Human-Machine Interaction." Paris 6, 2012. http://www.theses.fr/2012PA066119.
Robots are artificial agents that can act in the human world thanks to perception, action and reasoning capacities. In particular, companion robots are designed to share with humans the same physical and communication spaces while performing daily-life collaborative tasks and aids. In such a context, interactions between humans and robots are expected to be as natural and intuitive as possible, and one of the most natural ways is based on gestures and reactive body motions. To make this friendly interaction possible, a companion robot has to be endowed with capabilities allowing it to perceive, recognize and react to human gestures. This PhD thesis focused on the design and development of a gesture recognition system that can be exploited in a human-robot interaction context. This system includes (1) a limb-tracking algorithm that determines human body position during movements and (2) a higher-level module that recognizes gestures performed by human users. New contributions were made on both topics. First, a new approach is proposed for visual tracking of upper-body limbs. Analysing human body motion is challenging because of the large number of degrees of freedom of the articulated object modelling the upper body. To circumvent the computational complexity, each limb is tracked with an Annealed Particle Filter and the different filters interact through Belief Propagation. The 3D human body is described as a graphical model in which the relationships between body parts are represented by conditional probability distributions. The pose-estimation problem is thus formulated as probabilistic inference over a graphical model, where the random variables correspond to individual limb parameters (position and orientation) and Belief Propagation messages ensure coherence between limbs. Secondly, we propose a framework for emblematic gesture detection and recognition. The most challenging issue in gesture recognition is to find good features with discriminant power (to distinguish between different gestures) and robustness to intrinsic gesture variability (the context in which gestures are expressed, the morphology of the person, the point of view, etc.). In this work, we propose a new arm-kinematics normalization scheme reflecting both the muscular activity and the arm's appearance when a gesture is performed. The obtained signals are first segmented and then analysed by two machine-learning techniques: Hidden Markov Models and Support Vector Machines. The two methods are compared on a 5-class emblematic gesture recognition task. Both systems show good performance with a minimalistic training database, regardless of the performer's anthropometry, gender, age or pose with respect to the sensing system. The work presented here was done within the framework of a PhD thesis under joint supervision between the Pierre et Marie Curie University (ISIR laboratory, Paris) and the University of Genova (IIT-Tera department) and was labelled by the French-Italian University.
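The final comparison (Hidden Markov Models versus Support Vector Machines on segmented kinematic signals) follows a common recipe. Here is a hedged, self-contained illustration on synthetic data, not the thesis's implementation: one Gaussian HMM per gesture class scored by sequence log-likelihood, against a multi-class SVM on flattened fixed-length features.

```python
# Hedged illustration on synthetic data; real input would be the normalized
# arm-kinematics signals described above.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC

rng = np.random.default_rng(0)
T, D, n_classes = 20, 4, 5                      # frames, features, gestures
train = {c: [rng.normal(c, 1.0, (T, D)) for _ in range(10)]
         for c in range(n_classes)}

# HMM route: fit one model per class, classify by highest sequence likelihood.
hmms = {}
for c, seqs in train.items():
    m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    m.fit(np.vstack(seqs), lengths=[T] * len(seqs))
    hmms[c] = m

def hmm_predict(seq):
    return max(hmms, key=lambda c: hmms[c].score(seq))

# SVM route: flatten each sequence into one fixed-length vector.
X = np.array([s.ravel() for seqs in train.values() for s in seqs])
y = np.array([c for c, seqs in train.items() for _ in seqs])
svm = SVC(kernel="rbf").fit(X, y)

test = rng.normal(2, 1.0, (T, D))               # a sample drawn like class 2
print(hmm_predict(test), svm.predict([test.ravel()])[0])
```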
Ding, Sihao. "Multi-Perspective Image and Video Processing for Human-Machine Interaction." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488462115943949.
Norstedt, Emil, and Timmy Sahlberg. "Human Interaction with Autonomous machines: Visual Communication to Encourage Trust." Thesis, Blekinge Tekniska Högskola, Institutionen för maskinteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19706.
Ongoing development is happening within the construction industry: machines are being transformed from human-operated to autonomous. This project was a collaboration with Volvo Construction Equipment (Volvo CE) and their new autonomous wheel loader. The autonomous machine is supposed to operate in the same environment as people, so a well-developed safety system is required to eliminate accidents. The purpose has been to develop a system that increases safety for workers and encourages trust in the autonomous machine. The system is based on visual communication to achieve trust between the machine and the people around it. An iterative process focused on testing, prototyping and analysing was used to accomplish a successful result. By creating models with a variety of functions, a better understanding was developed of how to design a human-machine interface that encourages trust. The iterative process resulted in a concept that communicates through eyes. Eye contact is an essential factor for creating trust in unfamiliar and exposed situations. The solution mediates different expressions by changing the colour and shape of the eyes, creating awareness and informing people moving around in the same environment. Specific information can be mediated in various situations by adapting the colour and shape of the eyes. Communicating this way can encourage trust in the autonomous machine.
Battut, Alexandre. "Interaction substrates and instruments for interaction histories." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG026.
In the digital world, as in the physical world, our interactions with objects leave traces that tell the story of the actions that shaped these objects over time. This historical data can be accessed by end users to help them better understand the steps that led to the current state of their system. These traces can also be reused for activities such as re-documenting one's own history to arrange it in a way one finds more understandable. Users may also be led to share these data in collaborative environments, to better coordinate and synchronize their work. While previous work has attempted to show the benefits of cross-application histories, current implementations of interaction histories in interactive systems tend to tie history data to their source application. This prevents users from cross-referencing historical data to review and correlate events that occurred in different applications. In this thesis, I argue that designing interaction histories that can be shared among applications and users would support browsing, understanding and reusing historical data. I first ground my work in the use case of collaborative writing to explore relatable yet complex trace ecologies and interaction-history use. I identify recurring practices and issues with the use of history data by interviewing knowledge workers and conducting several design activities based on these observations. I describe a first proof-of-concept system integrating two history instruments resulting from these design activities, and the first iteration of a unifying structure for historical data to be shared among applications and users. The results of user studies show that users indeed express a need for unified and customizable interaction histories. Compiling the data gathered during these research activities, and building on previous work about "Dynamic Shareable Media" and the Interaction Substrates and Instruments model, I describe a framework to help create more flexible interaction histories. The goal is to describe how to design interaction-history systems that help users take control of their historical data. I introduce Steps, a structure for unifying historical data that includes descriptive core attributes to preserve the integrity of a trace across applications, and extensible contextual attributes that let users reshape their histories to suit their needs. I then introduce OneTrace, a proof-of-concept prototype based on Steps that follows my descriptive framework for cross-application histories and defines interaction histories as digital material to be shaped by digital tool use. I discuss the opportunities offered by this approach to support a paradigm shift in how we design and interact with interaction histories.
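The Steps structure, with its split between descriptive core attributes and extensible contextual attributes, lends itself to a compact data-model sketch. The following is a hedged illustration; beyond the core/contextual split named in the abstract, every field name is our own assumption, not the thesis's actual schema.

```python
# Hedged sketch of the Steps idea: a fixed descriptive core that keeps a trace
# intact across applications, plus free-form contextual attributes that users
# can extend. Field names beyond "core vs contextual" are assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any

@dataclass(frozen=True)
class StepCore:
    timestamp: datetime          # when the action happened
    application: str             # source application
    actor: str                   # which user performed it
    action: str                  # what was done

@dataclass
class Step:
    core: StepCore                                          # immutable, shared
    context: dict[str, Any] = field(default_factory=dict)   # user-extensible

step = Step(StepCore(datetime.now(), "editor", "alice", "insert-paragraph"))
step.context["label"] = "draft of intro"     # user reshapes the history
step.context["shared-with"] = ["bob"]
```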
Holmberg, Lars. "Human In Command Machine Learning." Licentiate thesis, Malmö universitet, Malmö högskola, Institutionen för datavetenskap och medieteknik (DVMT), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-42576.
Bazzano, Federica. "Human-Machine Interfaces for Service Robotics." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2734314.
Rydström, Annie. "The effect of haptic feedback in visual-manual human-machine interaction." Licentiate thesis, Luleå tekniska universitet, Arbetsvetenskap, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-25910.
Full textGodkänd; 2007; 20070919 (biem)
Rydström, Annie. "The effect of haptic feedback in visual-manual human-machine interaction." Luleå: Luleå University of Technology, 2007. http://epubl.ltu.se/1402-1757/2007/41/.
Degani, Asaf. "Modeling human-machine systems : on modes, error, and patterns of interaction." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/25983.
Full textRiviere, Jean-Philippe. "Capturing traces of the dance learning process." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG054.
This thesis focuses on designing interactive tools to understand and support dance learning from videos. Dancers' learning practice represents a rich source of information for researchers interested in designing systems that support motor learning. Indeed, dancers embody a wide range of skills that they reuse when learning new dance sequences. However, these skills are in part the result of embodied implicit knowledge. In this thesis, I argue that we can capture and save traces of dancers' embodied knowledge and use them to design interactive tools that support dance learning. My approach is to study real-life dance learning tasks in individual and collaborative settings. Based on the findings from all the studies, I discuss the challenge of capturing embodied knowledge to support dancers' learning practice. My thesis highlights that although dancers' learning processes are diverse, similar strategies emerge to structure them. Finally, I bring and discuss new perspectives for the design of movement-based learning tools.
Strickland, Ted John Jr. "Dynamic management of multichannel interfaces for human interaction with computer-based intelligent assistants." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184793.
Full textVo, Dong-Bach. "Conception et évaluation de nouvelles techniques d'interaction dans le contexte de la télévision interactive." Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0053.
Full textTelevision has never stopped being popularized and offering new services to the viewers. These interactive services make viewers more engaged in television activities. Unlike the use of a computer, they interact on a remote screen with a remote control from their sofa which is not convenient for using a keyboard and a mouse. The remote control and the current interaction techniques associated with it are struggling to meet viewers’ expectations. To address this problem, the work of this thesis explores the possibilities offered by the gestural modality to design new interaction techniques for interactive television, taking into account its context of use. More specifically, in a first step, we present the specific context of the television usage. Then, we propose a litterature review of research trying to improve the remote control. Finally we focus on gestural interaction. To guide the design of interaction techniques based on gestural modality, we introduce a taxonomy that attempts to unify gesture interaction constrained by a surface and hand-free gesture interaction. Therefore, we propose various techniques for gestural interaction in two scopes of research : gestural instrumented interaction techniques, which improves the traditional remote control expressiveness, and hand-free gestural interaction by exploring the possibility o performing gestures on the surface of the belly to control the television set
Bushman, James B. "Identification of an operator's associate model for cooperative supervisory control situations." Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/30992.
Full textCheng, Kelvin. "0Direct interaction with large displays through monocular computer vision." Connect to full text, 2008. http://ses.library.usyd.edu.au/handle/2123/5331.
Full textTitle from title screen (viewed November 5, 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Information Technologies in the the Faculty of Engineering & Information Technologies. Degree awarded 2009; thesis submitted 2008. Includes bibliographical references. Also available in print form.
Eklund, Robert. "Disfluency in Swedish human-human and human-machine travel booking dialogues." Doctoral thesis, Linköping University, 2004. http://www.ep.liu.se/diss/science_technology/08/82/index.html.
Fekete, Jean-Daniel. "Nouvelle génération d'Interfaces Homme-Machine pour mieux agir et mieux comprendre." Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2005. http://tel.archives-ouvertes.fr/tel-00876183.
Full textCourtoux, Emmanuel. "Tangible Interaction for Wall Displays." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG028.
Wall displays immerse users in large, high-resolution information spaces. They are well suited for data analysis, as users only need to move around the physical space to explore the virtual information space displayed on the wall. They also facilitate collaboration, as their large physical size can accommodate multiple users. However, designing effective ways of interacting with wall displays is challenging. Traditional input devices, such as mice and keyboards, quickly show their limitations in an environment where multiple users can interact and move freely. The HCI literature offers interesting alternatives to traditional input techniques. In particular, Tangible User Interfaces (TUIs), where users rely on custom tangible objects to interact with the virtual scene, have proved efficient with displays ranging from smartphones to tabletops. Tangible controllers have natural advantages, such as the haptic feedback they provide, which enables eyes-free manipulation. They also afford specific grasps and manipulations, guiding users on what they can do with them. Empirical studies that compare tangibles to other forms of input also report quantitative gains in manipulation speed and precision in different hardware setups. However, designing tangible controllers for wall displays is difficult. First, the large size and vertical orientation of walls must be taken into account to design tangibles with a suitable form factor. Second, users move in space: they move away to get a wider view, move closer to see details, or adjust their physical position based on other users and objects in the room. This means that tangible controllers must be usable regardless of the user's position in the room, which has implications for design and engineering. Finally, a wall display is often located in an environment that features other devices and displays. In such cases, designing tangible controllers for a wall display requires considering the whole multi-display environment, which constrains the tangibles' form factor and underlying technologies even further. My thesis work makes three contributions towards enabling tangible interaction with wall displays. The first project, WallTokens, contributes tangibles for on-surface interaction with wall displays. WallTokens are low-cost, passive controllers that users can manipulate directly on the wall's surface. A mechanism lets users easily attach and detach them from the wall, so that when users are done interacting, they can leave the tokens in place and free their hands for other purposes. We report on two studies assessing WallTokens' usability, showing that they are more precise and comfortable than bare-hand gestures for low-level manipulations on walls. The second project, SurfAirs, contributes tangibles that support not only on-surface but also distant interaction with wall displays. We present two possible designs for versatile tangible controllers that can be used both on the wall surface, when users need precision and detail, and in the air, when they need a wide viewing angle. SurfAirs support both types of input, as well as smooth transitions between the two. We report on two studies that compare SurfAir prototypes with bare-hand gestures on low-level manipulation tasks: SurfAirs outperform bare-hand gestures in accuracy, speed and user preference. The third project contributes a survey of the use of physical controllers to interact with a physical display. Each surveyed project is described along twelve dimensions that capture the design aspects of the controller, the properties of the display, and how they communicate with each other. We contribute a Web page to explore this list of references along the different dimensions, and use it to discuss the challenges that underlie the design of tangible controllers in a multi-display environment.
Neiberg, Daniel. "Modelling Paralinguistic Conversational Interaction : Towards social awareness in spoken human-machine dialogue." Doctoral thesis, KTH, Tal-kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102335.
Tsonis, Christos George. "An analysis of information complexity in air traffic control human machine interaction." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35560.
This thesis proposes, develops and validates a methodology to quantify the complexity of air traffic control (ATC) human-machine interaction (HMI). Within this context, complexity is defined as the minimum amount of information required to describe the human-machine interaction process in some fixed description language and chosen level of detail. The methodology elicits human information processing via cognitive task analysis (CTA) and expresses the HMI process algorithmically as a cognitive interaction algorithm (CIA). The CIA comprises multiple functions which formally describe each of the interaction processes required to complete a nominal set of tasks using a certain machine interface. Complexities of competing interface and task configurations are estimated by weighted summations of the compressed information content of the associated CIA functions. This information compression removes descriptive redundancy and approximates the minimum description length (MDL) of the CIA. The methodology is applied to a representative en-route ATC task and interface, and the complexity measures are compared to performance results obtained experimentally by human-in-the-loop simulations. It is found that the proposed complexity analysis methodology and resulting complexity metrics are able to predict trends in operator performance and workload. This methodology would allow designers and evaluators of human supervisory control (HSC) interfaces to conduct complexity analyses and use complexity measures to select more objectively between competing interface and task configurations. Such a method could complement subjective interface evaluations and reduce the amount of costly experimental testing.
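The compression step, approximating minimum description length by compressing each CIA function's description and summing weighted sizes, can be illustrated in a few lines. This is a hedged sketch under our own assumptions; the function descriptions and weights below are invented placeholders, not the thesis's CIA.

```python
# Hedged sketch of MDL-by-compression: the complexity of each CIA function is
# taken as the size of its compressed textual description, and interface
# complexity is their weighted sum. All descriptions/weights are placeholders.
import zlib

def mdl(description: str) -> int:
    return len(zlib.compress(description.encode("utf-8"), level=9))

cia_functions = {  # hypothetical fragments of a cognitive interaction algorithm
    "scan_sector": ("for each aircraft: read altitude, heading, speed", 0.5),
    "detect_conflict": ("for each pair: project trajectories; flag if "
                        "separation < threshold", 0.3),
    "issue_clearance": ("select aircraft; compose clearance; transmit", 0.2),
}
complexity = sum(w * mdl(desc) for desc, w in cia_functions.values())
print(f"interface complexity ~ {complexity:.1f}")
```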
Tidball, Brian Esley. "Designing Computer Agents with Facial Personality to Improve Human-Machine Collaboration." Wright State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=wright1146857305.
Toure, Zikra. "Human-Machine Interface Using Facial Gesture Recognition." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1062841/.
Wagy, Mark David. "Enabling Machine Science through Distributed Human Computing." ScholarWorks @ UVM, 2016. http://scholarworks.uvm.edu/graddis/618.
Brasier, Eugénie. "Using Augmented Reality in Everyday Life." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG110.
This manuscript describes the research topics I explored during my Ph.D. regarding the uses of Augmented Reality (AR) in everyday life. I tackle the following three research questions: RQ1, how can interaction with wearable AR be reconsidered to better support long-lasting tasks? RQ2, can we provide guidelines on how a wearable AR device can enhance a handheld device? RQ3, how can users be in control of what Augmented Reality tells them? The manuscript is divided into three parts, each addressing one of these research questions. To answer the first question (RQ1), we present the ARPads project. An ARPad has no tangibility but represents a floating interaction plane on which users can move their hands to control a cursor displayed in an AR window. Such indirect input allows users to keep a comfortable posture while interacting with AR, as opposed to direct input, which forces users to keep their arms raised towards the content displayed in front of them. After exploring a design space of ARPads regarding their position and orientation relative to the user, we implement and empirically evaluate some promising combinations. Our results show that indirect input can achieve the same performance as direct input while limiting users' fatigue. From these results, we derive guidelines for future implementations. Regarding the second question (RQ2), we focus on the association of wearable AR and mobile devices. We adopt a user-centered approach, starting with a workshop organized with users of mobile devices. Based on the feedback from this workshop, we define a design space of smartphone-centric applications that could benefit from this association. In particular, we identify two main dimensions: the function and the location of the AR content relative to the main content displayed on the mobile device's screen. This first contribution highlights creative ways of enhancing mobile devices with AR. However, not much can be asserted about such enhancements without actual measurements of their performance; after prototyping some use cases to show their feasibility, we therefore evaluate some distributions of UI components between a phone's screen and AR. Finally, we answer the last question (RQ3) by anticipating a future where users' field of view is frequently augmented with AR content. We introduce the concept of AR de-augmentation as a means to prevent this AR content from interfering with users' perception of the real world. AR de-augmentation gives users agency over the AR content. We first define a taxonomy of such de-augmentation operations along three aspects: scope, trigger, and rendering. Then we illustrate the concept with three scenarios demonstrating its usefulness in specific use cases. Finally, we implement a working prototype in which we detail interactions that allow users to define a de-augmented area. This last project is more theoretical and projective, opening questions and perspectives for future research directions.
Benkaouar Johal, Wafa. "Companion Robots Behaving with Style : Towards Plasticity in Social Human-Robot Interaction." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM082/document.
Companion robots are becoming technologically and functionally more and more capable; their capacities and usefulness are nowadays a reality. Yet these robots are not accepted in home environments, as the worth of having such a robot and its companionship has not been established. Classically, social robots displayed generic social behaviours without taking inter-individual differences into account. More and more work in Human-Robot Interaction goes towards personalisation of the companion. Personalisation and control of the companion could lead to a better understanding of the robot's behaviour, and proposing several ways of expression for companion robots playing a role would allow users to customize their companion to their social preferences. In this work, we propose a plasticity framework for Human-Robot Interaction. We used a scenario-based design method to elicit social roles for companion robots. Then, based on the literature in several disciplines, we propose to depict variations of the companion robot's behaviour with behavioural styles. Behavioural styles are defined according to the social role, using non-verbal expressive parameters. The expressive parameters (static, dynamic and decorators) allow neutral motions to be transformed into styled motions. We conducted a perceptual study, through a video-based survey showing two robots displaying styles, that allowed us to evaluate the expressibility of two parenting behavioural styles by two kinds of robots. We found that participants were indeed able to discriminate between the styles in terms of dominance and authoritativeness, which is in line with the psychological theory on these styles. Most importantly, we found that the styles parents preferred for their children were not correlated with their own parental practice. Consequently, behavioural styles are relevant cues for social personalisation of the companion robot by parents. A second experimental study in a natural environment, involving child-robot interaction with 16 children, showed that parents and children expected a versatile robot able to play several social roles. This study also showed that behavioural styles influenced the children's bodily attitudes during the interaction. Common dimensions studied in non-verbal communication allowed us to develop measures for child-robot interaction, based on data captured with a Kinect2 sensor. In this thesis, we also propose a modularisation of a previously proposed affective and cognitive architecture, resulting in the new Cognitive, Affective Interaction Oriented (CAIO) architecture. This architecture has been implemented in the ROS framework, allowing its use on social robots. We also propose instantiations of the Stimulus Evaluation Checks of [Scherer, 2009] for two robotic platforms, allowing dynamic expression of emotions. Both the behavioural-style framework and the CAIO architecture can be useful in socialising companion robots and improving their acceptability.
Farneland, Christian, and Magnus Harrysson. "Developing a Human-Machine-Interface with high usability." Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188499.
When developing a Human-Machine Interface (HMI), it is important to make sure it is easy to learn and use, i.e. that it has high usability. If it does not, it needlessly worsens the operators' situation and makes it harder for producers to sell the product. Production efficiency decreases if the machine is difficult to handle. To make it easier for future developers to reach high usability when developing an HMI, this thesis aimed to find a well-thought-out process to follow in such a situation. The result was a process that was tested through an HMI prototype for water-jet cutting machines. This prototype was then tested in different use cases by both experienced operators and novices. The tests yielded positive feedback, which showed that the process followed up to that point worked.
Yang, Liu. "Modelling interruptions in human-agent interaction." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS611.pdf.
Interruptions play a significant role in shaping human communication, occurring frequently in everyday conversations. They serve to regulate conversation flow, convey social cues, and promote shared understanding among speakers. Human communication involves a range of multimodal signals beyond just speech. Verbal and non-verbal modes of communication are intricately intertwined, conveying semantic and pragmatic content while tailoring the communication process. The vocal mode incorporates acoustic features, such as prosody, while the visual mode encompasses facial expressions, hand gestures, and body language. The rise of virtual and online communication has necessitated the development of expressive communication for human-like embodied agents, including Embodied Conversational Agents (ECAs) and social robots. To foster seamless and natural interactions between humans and virtual agents, it is crucial to equip virtual agents with the ability to handle interruptions during interactions. This manuscript focuses on studying interruptions in human-human interactions and enabling ECAs to interrupt human users during conversations. The primary objectives of this research are twofold: (1) in human-human interaction, analysing acoustic and visual signals to categorise the interruption type and detect when interruptions occur; (2) endowing ECAs with the capability to predict when to interrupt and to generate their multimodal behaviour. To achieve these goals, we propose an annotation schema for identifying and classifying smooth turn exchanges, backchannels, and different interruption types. We manually annotate exchanges in two corpora, a part of the AMI corpus and the French section of the NoXi corpus. After analysing multimodal non-verbal signals, we introduce MIC, an approach to classify the interruption type based on selected non-verbal signals (facial expression, prosody, head and hand motion) from both interlocutors (the interruptee and the interrupter). We also introduce One-PredIT, which utilises a one-class classifier to identify potential interruption points by monitoring the real-time non-verbal behaviour of the current speaker (only the interruptee). Additionally, we propose AI-BGM, a generative model to compute the facial expressions and head rotations of an ECA while it is interrupting. Given the limited amount of data at our disposal, we employ transfer learning to train our interruption-behaviour generation model using the well-trained Augmented Self-Attention Pruning neural network model.
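One-PredIT's core idea, a one-class classifier watching only the current speaker's non-verbal behaviour, maps onto standard novelty-detection tooling. The sketch below is a hedged illustration with placeholder features, not the thesis's model: an sklearn OneClassSVM is fitted on behaviour windows where interruptions occurred, then queried on a live stream to flag candidate interruption points.

```python
# Hedged sketch: one-class classification of interruption points. Feature
# extraction (prosody, head/hand motion) is abstracted into fixed-length
# vectors; all data here are synthetic placeholders.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
interruption_windows = rng.normal(1.0, 0.3, (200, 8))  # placeholder features
model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(interruption_windows)

stream = rng.normal(0.0, 0.3, (5, 8))   # live non-verbal behaviour windows
stream[2] += 1.0                        # one window resembles the training data
for t, window in enumerate(stream):
    if model.predict(window.reshape(1, -1))[0] == 1:  # +1 = inside the class
        print(f"t={t}: candidate moment for the agent to interrupt")
```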
Chevet, Clotilde. ""L'interaction homme-machine" : un système d'écritures qui fait monde." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUL170.
This thesis studies "personal assistants", those "textual beings" split between computer code and natural language, between writing and orality. In light of Apple's promise to its users, "Talk to Siri as a person", our research explores the writing system within which human-machine communicational mimesis takes shape. We study different facets of this system: its (human and machinic) enunciations, its gestures (mobilising both the hand and the voice) and its actors (backstage as well as in the spotlight). At the crossroads of information and communication sciences and anthropology, this thesis combines various approaches: epistemological investigation, media archaeology and online ethnography. We explore three issues of writing in "human-machine interaction": the relationship to oneself, as writing allows personal expression; the relationship to the Other, since it enables communication; and the relationship to the world, since it enables the world to be said and organized.
Marín, Urías Luis Felipe. "Reasoning about space for human-robot interaction." Toulouse 3, 2009. http://thesesups.ups-tlse.fr/1195/.
Human-Robot Interaction is a research area that has grown exponentially in recent years. This fact brings new challenges to the robot's geometric reasoning and space-sharing abilities. The robot should not only reason about its own capacities but also consider the actual situation by looking through the human's eyes, thus "putting itself into the human's perspective". In humans, the "visual perspective taking" ability begins to appear at around 24 months of age and is used to determine whether another person can see an object or not. Implementing this kind of social ability will improve the robot's cognitive capabilities and help the robot interact better with human beings. In this work, we present a geometric spatial reasoning mechanism that employs the psychological concepts of "perspective taking" and "mental rotation" in two general frameworks: motion planning for human-robot interaction, where the robot uses "egocentric perspective taking" to evaluate several configurations in which it is able to perform different interaction tasks; and face-to-face human-robot interaction, where the robot uses the human's perspective as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
Poltavchenko, Irina. "De l'analyse d'opinions à la détection des problèmes d'interactions humain-machine : application à la gestion de la relation client." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0030.
This PhD thesis is motivated by the growing popularity of chatbots acting as advisors on corporate websites. The research addresses the detection of interaction problems between a virtual advisor and its users from the angle of opinion and emotion analysis in text. The study takes place in the concrete application context of the French energy supplier EDF, using the EDF chatbot corpus. This corpus gathers spontaneous and rich expressions, collected in "in-the-wild" conditions, that are difficult to analyze automatically and still little studied. We propose a typology of interaction problems and annotate a part of the corpus according to this typology; a part of the resulting annotation is used to evaluate the system. The system developed during this thesis, named DAPI (automatic detection of interaction problems), is a hybrid system that combines a symbolic approach with unsupervised learning of semantic representations (word embeddings). DAPI is intended to be connected directly to the chatbot and to detect interaction problems online, as soon as a user statement is received. The originality of the proposed method rests on: i) taking into account the history of the dialogue; ii) modeling interaction problems as expressions of the user's spontaneous opinion or emotion towards the interaction; iii) integrating the specificities of web-chat and in-the-wild language as linguistic cues for linguistic rules; iv) using lexical word embeddings (word2vec) learned on the large untagged chatbot corpus to model semantic similarities. The results obtained are very encouraging considering the complexity of the data: F-score = 74.3%.
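The hybrid design, hand-written linguistic rules plus word2vec similarities learned from untagged chat logs, suggests one concrete mechanism: using embedding neighbours to extend a rule lexicon. The sketch below is a hedged illustration with an invented toy corpus and seed word, not the DAPI system itself.

```python
# Hedged sketch: symbolic rules fire on a lexicon of problem-signalling words,
# and a word2vec model trained on untagged chat logs extends that lexicon with
# in-domain neighbours. Corpus and seed words are invented placeholders.
from gensim.models import Word2Vec

corpus = [
    ["le", "chatbot", "ne", "comprend", "rien"],
    ["réponse", "inutile", "je", "répète", "ma", "question"],
    ["merci", "pour", "votre", "aide"],
] * 100  # placeholder for the large in-the-wild chatbot corpus

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=10)

seed_lexicon = {"inutile"}                      # hand-written rule trigger
for word, sim in model.wv.most_similar("inutile", topn=3):
    if sim > 0.5:                               # expand rules with close neighbours
        seed_lexicon.add(word)
print(seed_lexicon)
```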
Évain, Andéol. "Optimizing the use of SSVEP-based brain-computer interfaces for human-computer interaction." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S083/document.
This PhD deals with the conception and evaluation of interactive systems based on Brain-Computer Interfaces (BCI). This type of interface has developed in recent years, first in the domain of disability, to provide disabled people with means of interaction and communication, and more recently in other fields such as video games. However, most research so far has focused on the identification of cerebral patterns carrying useful information and on signal processing for the detection of these patterns; less attention has been given to usability aspects. This PhD focuses on interactive systems based on Steady-State Visually Evoked Potentials (SSVEP) and aims at considering the interactive system as a whole, using the concepts of Human-Computer Interaction. More precisely, a focus is made on cognitive demand, user frustration, calibration conditions, and hybrid BCIs.
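SSVEP-based BCIs commonly detect the attended stimulus by correlating multi-channel EEG against sinusoidal reference signals with canonical correlation analysis. The sketch below illustrates that standard scheme on synthetic data; it is a hedged, generic baseline, not the thesis's pipeline.

```python
# Hedged sketch: CCA-based SSVEP detection. The stimulation frequency whose
# sine/cosine references correlate best with the EEG window is taken as the
# attended target. All signals here are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, n_sec = 250, 2.0                      # sampling rate (Hz), window length
t = np.arange(int(fs * n_sec)) / fs
stim_freqs = [8.0, 10.0, 12.0]            # candidate flicker frequencies

rng = np.random.default_rng(2)
eeg = rng.normal(0, 1, (t.size, 8))       # 8-channel EEG window (placeholder)
eeg += np.outer(np.sin(2 * np.pi * 10.0 * t), np.ones(8))  # hidden 10 Hz SSVEP

def reference(f):                         # fundamental + 2nd harmonic
    return np.column_stack([np.sin(2 * np.pi * h * f * t) for h in (1, 2)] +
                           [np.cos(2 * np.pi * h * f * t) for h in (1, 2)])

def cca_corr(X, Y):
    u, v = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

scores = {f: cca_corr(eeg, reference(f)) for f in stim_freqs}
print(max(scores, key=scores.get), scores)   # expected winner: 10.0 Hz
```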
Ravenel, John Bishop. "Applying human-machine interaction design principles to retrofit existing automated freight planning systems." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122253.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (pages 66-70).
With the increased application of cognitive computing across industries, companies strive to ready their people and machines for future system change. Given resource constraints, business needs, and the speed of change, many companies may opt for system augmentation rather than the adoption of entirely new systems. At the same time, technology is changing at a pace never before realized. Against this backdrop, human actors and machines are working together interactively in new and increasing ways. Further, recent business model innovations, particularly in the retail space, have cast focus on logistics execution as a potential major competitive advantage. In this context, we considered the conceptual question of how best to iteratively improve a logistics planning system, composed of both human and machine actors, to reduce transportation and labor costs and increase the organization's ability to think and act strategically.
To confront these current technological realities (the need to stage for agent-based systems and cognitive computing, the likelihood of system retrofit over rebuild, the ever-increasing rate of change, and the rapid intertwining of human and machine roles), we proposed using human-machine interaction (HMI) design paradigms to retrofit an existing loosely coupled human-machine planning system. While HMI principles are normally applied to tightly coupled systems such as jet airplanes, the HMI architectural design, applied here in a novel way, proved highly applicable to an existing loosely coupled planning system. In addition to meeting the realities of today's competitive landscape, the developed HMI framework is tailored to a retrofit situation and also meets resiliency considerations. This novel application of HMI frameworks to an existing loosely coupled joint cognitive planning system shows great promise in addressing these imminent realities.
With regard to the particular freight planning system considered, 71% of manual interventions were caused by the wrong sourcing facility being assigned to supply pallets to a customer; the remaining causes were carrier changes (18%), customer restrictions (9%), and one change prompted by a data discrepancy. Further, at a conceptual level, the application of HMI frameworks to an existing freight planning system was effective at isolating data and alignment incongruences, displayed lower communication costs than recurrent system rework processes, and tethered well with system resiliency factors.
Eid, Fatma Elzahraa Sobhy. "Predicting the Interactions of Viral and Human Proteins." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/77581.
Fruchard, Bruno. "Techniques d'interaction exploitant la mémoire pour faciliter l'activation de commandes." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT010/document.
To control an interactive system, users usually have to select commands by browsing lists and hierarchical menus. To go faster, they can perform gestural shortcuts. However, to be effective, they must memorize these shortcuts, which is a difficult task when activating a large number of commands. In a first part, we study the advantages of positional (pointing) and directional (Marking menus) gestures for command memorization, as well as the use of the user's body as an interaction surface and the impact of two types of semantic aids (stories, images) on memorization effectiveness. We show that positional gestures make learning faster and easier, and that suggesting that users create stories related to commands significantly improves their recall rate. In the second part, we present bi-positional gestures that allow the activation of a large number of commands, and we demonstrate their effectiveness in two interaction contexts: the touchpad of a laptop (MarkPad) and a smartwatch (SCM).
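To make the bi-positional principle concrete: two successive positions jointly select one command, so an n-cell grid over the input surface exposes n² bindings. The toy sketch below is our own hedged illustration of that combinatorial idea, not MarkPad's or SCM's actual implementation; the grid size and the example binding are invented.

```python
# Hedged toy sketch: two successive taps on a grid jointly select a command.
# A 3x3 grid of start cells times a 3x3 grid of end cells yields 81 commands.
GRID = 3

def cell(x: float, y: float) -> int:
    """Map a normalized touch position (0..1) to a grid cell index."""
    col = min(int(x * GRID), GRID - 1)
    row = min(int(y * GRID), GRID - 1)
    return row * GRID + col

def command_id(start: tuple[float, float], end: tuple[float, float]) -> int:
    return cell(*start) * GRID * GRID + cell(*end)

bindings = {command_id((0.1, 0.1), (0.9, 0.9)): "open-browser"}  # example binding
print(bindings.get(command_id((0.05, 0.12), (0.95, 0.88)), "unbound"))
```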