
Doctoral dissertations on the topic "Robust Human Detection"

Create accurate citations in APA, MLA, Chicago, Harvard, and many other styles


Browse the 48 best doctoral dissertations on the topic "Robust Human Detection".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online whenever the relevant parameters are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile the bibliography you need.

1

Li, Ying. "Efficient and Robust Video Understanding for Human-robot Interaction and Detection". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
2

Leu, Adrian [Verfasser]. "Robust Real-time Vision-based Human Detection and Tracking / Adrian Leu". Aachen : Shaker, 2014. http://d-nb.info/1060622432/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
3

Leu, Adrian [Verfasser], Axel [Akademischer Betreuer] Gräser and Udo [Akademischer Betreuer] Frese. "Robust Real-time Vision-based Human Detection and Tracking / Adrian Leu. Gutachter: Udo Frese. Betreuer: Axel Gräser". Bremen : Staats- und Universitätsbibliothek Bremen, 2014. http://d-nb.info/1072226340/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
4

Terzi, Matteo. "Learning interpretable representations for classification, anomaly detection, human gesture and action recognition". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.

Full text of the source
Abstract:
The goal of this thesis is to provide algorithms and models for classification, gesture recognition and anomaly detection, with a partial focus on human activity. In applications where humans are involved, it is of paramount importance to provide robust and understandable algorithms and models. One way to meet this requirement is to use relatively simple and robust approaches, especially when devices are resource-constrained. The second approach, when a large amount of data is available, is to adopt complex algorithms and models and make them robust and interpretable from a human point of view. This motivates our thesis, which is divided in two parts. The first part is devoted to the development of parsimonious algorithms for action/gesture recognition in human-centric applications, such as sports, and anomaly detection for the artificial pancreas. The data sources employed for the validation of our approaches consist of collections of time series coming from sensors such as accelerometers or glycemic monitors. The main challenge in this context is to discard (i.e., be invariant to) the many nuisance factors that make the recognition task difficult, especially when many different users are involved. Moreover, in some cases data cannot be easily labelled, making supervised approaches not viable. We therefore present the mathematical tools and background with a focus on the recognition problems, and then derive novel methods for: (i) gesture/action recognition using sparse representations for a sport application; (ii) gesture/action recognition using a symbolic representation and its extension to the multivariate case; (iii) model-free, unsupervised anomaly detection for detecting faults in the artificial pancreas. These algorithms are well suited to deployment on resource-constrained devices, such as wearables. In the second part, we investigate the feasibility of deep learning frameworks where human interpretation is crucial. Standard deep learning models are not robust and, unfortunately, the approaches in the literature that ensure robustness are typically detrimental to accuracy. However, real-world applications often require a minimum amount of accuracy. In view of this, after reviewing some results in the recent literature, we formulate a new algorithm able to semantically trade off accuracy and robustness, given a cost-sensitive classification problem and a required accuracy threshold. In addition, we provide a link between robustness to input perturbations and interpretability, guided by a physical minimum-energy principle: leveraging optimal transport tools, we show that robust training is connected to the optimal transport problem. Thanks to these theoretical insights, we develop a new algorithm that provides robust, interpretable and more transferable representations.
APA, Harvard, Vancouver, ISO, and other styles.
5

Zhu, Youding. "Model-Based Human Pose Estimation with Spatio-Temporal Inferencing". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1242752509.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
6

Tasaki, Tsuyoshi. "People Detection based on Points Tracked by an Omnidirectional Camera and Interaction Distance for Service Robots System". 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/180473.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
7

Yi, Fei. "Robust eye coding mechanisms in humans during face detection". Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/31011/.

Full text of the source
Abstract:
We can detect faces more rapidly and efficiently than non-face object categories (Bell et al., 2008; Crouzet, 2011), even when only partial information is visible (Tang et al., 2014). Face inversion impairs our ability to recognise faces. The key to understanding this effect is to determine which face features are processed and how the coding of these features is affected by face inversion. Previous studies from our lab showed coding of the contralateral eye in an upright face detection task, which was maximal around the N170 recorded at posterior-lateral electrodes (Ince et al., 2016b; Rousselet et al., 2014). In chapter 2, we used the Bubbles technique to determine whether brain responses also reflect the processing of eyes in inverted faces, and how they do so, in a simple face detection task. The results suggest that in upright and inverted faces alike the N170 reflects coding of the contralateral eye, but that face inversion quantitatively weakens the early processing of the contralateral eye, specifically in the transition between the P1 and the N170, and delays this local feature coding. Group and individual results support this claim. First, regardless of face orientation, the N170 coded the eye contralateral to the posterior-lateral electrodes, which was the case in all participants. Second, face inversion delayed coding of contralateral eye information. Third, time-course analysis of contralateral eye coding revealed weaker coding for inverted compared to upright faces in the transition between the P1 and the N170. Fourth, single-trial EEG responses were driven by the corresponding single-trial visibility of the left eye: the N170 amplitude was larger, and its latency shorter, as left-eye visibility increased in upright and upside-down faces, for the majority of participants. However, in face images, eye position and face orientation are confounded: in upright faces the upper visual field usually contains the eyes, whereas in upside-down faces the lower visual field contains them. Thus, the impaired processing of the contralateral eye under inversion might simply reflect face inversion moving the eyes out of the upper visual field. In chapter 3, we manipulated the vertical location of the images so that the eyes appeared in the upper, centre or lower visual field relative to the fixation cross (the centre of the screen), allowing the eyes of both upright and inverted faces to shift from the upper to the lower visual field. We used a technique similar to that of chapter 2 during a face detection task. First, we found that regardless of face orientation and position, the modulations of ERPs recorded at the posterior-lateral electrodes were associated with the contralateral eye, suggesting that coding of the contralateral eye underlies the N170. Second, face inversion delayed processing of the contralateral eye when the eyes were presented at the same position, above, below or at the centre of the screen. Also, in the early N170, most of our participants showed contralateral eye sensitivity weakened by inversion of faces whose eyes appeared at the same position. These results suggest that inversion-related changes in the processing of the contralateral eye cannot simply be attributed to differences in eye position. The scan-paths traced by human eye movements are similar to the low-level saliency maps produced by contrast-based computer vision algorithms (Itti et al., 1998). This evidence raises the question of whether the eye coding function is driven by the saliency of the eye regions. Chapter 4 aims to answer this question. We introduced two altered versions of the original faces in a simple face detection task: contrast-normalised faces, removing eye saliency (Simoncelli and Olshausen, 2001), and contrast-reversed faces, reversing face contrast polarity (Gilad et al., 2009). In each face condition, ERPs recorded at contralateral posterior-lateral electrodes were sensitive to the eye regions. Both contrast manipulations delayed and reduced eye sensitivity during the rising part of the N170, roughly 120–160 ms post-stimulus onset, and there were no such differences between the two contrast-manipulated faces. These results were observed in the majority of participants. They suggest that the processing of the contralateral eye is partially driven by low-level factors and may reflect feature processing in the early N170.
APA, Harvard, Vancouver, ISO, and other styles.
8

Alanenpää, Madelene. "Gaze detection in human-robot interaction". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.

Full text of the source
Abstract:
The aim of this thesis is to track gaze direction in a human-robot interaction scenario. The human-robot interaction consisted of a participant playing a geographic game with three important objects on which participants could focus: a tablet, a shared touchscreen, and a robot (called Furhat). During the game, the participant was equipped with eye-tracking glasses. These collected a first-person view video as well as annotations consisting of the participant's center of gaze. In this thesis, I aim to use this data to detect the three important objects described above in the first-person video stream and to discriminate whether the gaze of the person fell on one of the objects of importance, and for how long. To achieve this, I trained an accurate and fast state-of-the-art object detector called YOLOv4. To ascertain that this was the correct object detector for this thesis, I compared YOLOv4 with its previous version, YOLOv3, in terms of accuracy and run time. YOLOv4 was trained with a data set of 337 images consisting of various pictures of tablets, television screens and the Furhat robot. The trained program was used to extract the relevant objects from each frame of the eye-tracking video, and a parser was used to discriminate whether the gaze of the participant fell on the relevant objects and for how long. The result is a system that could determine, with an accuracy of 90.03%, which object the participant is looking at and for how long.
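The post-processing step described above — deciding whether the per-frame gaze point falls inside a detected object's bounding box and accumulating dwell time — can be sketched in a few lines. This is an illustrative reconstruction, not the author's code; the detection tuple format and the assumed 25 fps frame rate are hypothetical.

```python
# Hypothetical sketch: map per-frame gaze points onto detected objects
# and accumulate how long each object was looked at.

FRAME_DT = 1 / 25.0  # assumed video frame rate

def gaze_target(gaze_xy, detections):
    """Return the label of the detection whose box contains the gaze point.

    detections: list of (label, x_min, y_min, x_max, y_max) tuples,
    e.g. produced by a YOLO-style detector for one video frame.
    """
    gx, gy = gaze_xy
    for label, x0, y0, x1, y1 in detections:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return label
    return None  # gaze fell on no object of interest

def dwell_times(gaze_points, detections_per_frame):
    """Accumulate per-object gaze durations over a whole recording."""
    totals = {}
    for gaze_xy, detections in zip(gaze_points, detections_per_frame):
        target = gaze_target(gaze_xy, detections)
        if target is not None:
            totals[target] = totals.get(target, 0.0) + FRAME_DT
    return totals
```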
APA, Harvard, Vancouver, ISO, and other styles.
9

Antonucci, Alessandro. "Socially aware robot navigation". Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/356142.

Full text of the source
Abstract:
A growing number of applications involving autonomous mobile robots will require navigation in environments shared with humans. In those situations, the robot's actions are socially acceptable if they reflect the behaviours that humans would adopt in similar conditions. The robot must therefore perceive the people in the environment and react correctly based on their actions and their relevance to its mission. To push human-robot interaction forward, the proposed research focuses on efficient robot motion algorithms, covering all the tasks needed in the whole process, such as obstacle detection, human motion tracking and prediction, and socially aware navigation. The final framework presented in this thesis is a robust and efficient solution enabling the robot to correctly understand human intentions and consequently perform safe, legible, and socially compliant actions. The thesis retraces in its structure all the steps of the framework, presenting the algorithms and models developed and the experimental evaluations carried out both in simulation and on real robotic platforms, and showing the performance obtained in real time in complex scenarios where humans are present and play a prominent role in the robot's decisions. The proposed implementations are all based on insightful combinations of traditional model-based techniques and machine learning algorithms, fused to effectively solve human-aware navigation. The synergy of the two methodologies gives greater flexibility and generalization than the navigation approaches proposed so far, while maintaining an accuracy and reliability not always displayed by learning methods.
APA, Harvard, Vancouver, ISO, and other styles.
10

Briquet-Kerestedjian, Nolwenn. "Impact detection and classification for safe physical Human-Robot Interaction under uncertainties". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC038/document.

Full text of the source
Abstract:
The present thesis aims to develop an efficient strategy for impact detection and classification in the presence of modeling uncertainties of the robot and its environment, using a minimum number of sensors, in particular in the absence of a force/torque sensor. The first part of the thesis deals with the detection of an impact that can occur at any location along the robot arm and at any moment during the robot trajectory. Impact detection methods are commonly based on a dynamic model of the system, making them subject to a trade-off between detection sensitivity and robustness to modeling uncertainties. In this respect, a quantitative methodology has first been developed to make explicit the contribution of the errors induced by model uncertainties. This methodology has been applied to various detection strategies, based either on a direct estimate of the external torque or on disturbance observers, in the perfectly rigid case and in the elastic-joint case. A comparison of the type and structure of the errors involved, and of their consequences for impact detection, has been deduced. In a second step, novel impact detection strategies have been designed: the dynamic effects of the impacts are isolated by determining the maximal error range due to modeling uncertainties using a stochastic approach. Once the impact has been detected, and in order to trigger the most appropriate post-impact robot reaction, the second part of the thesis focuses on the classification step. In particular, the distinction between an intentional contact (the human operator intentionally interacts with the robot, for example to reconfigure the task) and an undesired contact (a human subject accidentally runs into the robot), as well as the localization of the contact on the robot, is investigated using supervised learning techniques, more specifically feedforward neural networks. The challenge of generalizing to several human subjects and robot trajectories has also been investigated.
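As a minimal illustration of the detection principle described above — comparing an external-torque residual against an uncertainty-dependent margin — consider the following sketch. It is not the thesis's algorithm; the residual signal, the margin width, and the noise levels are hypothetical stand-ins.

```python
import numpy as np

def detect_impact(residual, sigma_model, k=3.0):
    """Flag samples where the external-torque residual exceeds an
    uncertainty margin.

    residual:    (T, n_joints) array of estimated external torques,
                 e.g. from a momentum-based disturbance observer.
    sigma_model: (n_joints,) per-joint standard deviation of the
                 model-uncertainty-induced error (assumed known,
                 e.g. from a stochastic error analysis).
    k:           margin width in standard deviations; larger k trades
                 detection sensitivity for robustness to model errors.
    """
    threshold = k * sigma_model           # per-joint detection threshold
    exceeded = np.abs(residual) > threshold
    return np.any(exceeded, axis=1)       # impact flag per time sample

# Usage with synthetic data: quiet motion plus an impact at sample 300.
rng = np.random.default_rng(0)
res = rng.normal(0.0, 0.1, size=(500, 6))
res[300:305, 2] += 2.0                    # simulated collision on joint 3
flags = detect_impact(res, sigma_model=np.full(6, 0.1))
print("first detection at sample", int(np.argmax(flags)))
```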
APA, Harvard, Vancouver, ISO, and other styles.
11

Linder, Timm [Verfasser], and Kai O. [Akademischer Betreuer] Arras. "Multi-modal human detection, tracking and analysis for robots in crowded environments". Freiburg : Universität, 2020. http://d-nb.info/1228786798/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
12

Zhang, Yan. "Low-Cost, Real-Time Face Detection, Tracking and Recognition for Human-Robot Interactions". Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1307548707.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
13

Banerjee, Nandan. "Human Supervised Semi-Autonomous Approach for the DARPA Robotics Challenge Door Task". Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/584.

Full text of the source
Abstract:
As the field of autonomous robots continues to advance, there is still tremendous benefit in researching human-supervised robot systems for fielding them in practical applications. The DRC, inspired by the Fukushima nuclear power plant disaster, has been a major research and development program over the past three years, advancing the field of human-supervised control of robots for responding to natural and man-made disasters. The overall goal of the research presented in this thesis is to realise a new approach for semi-autonomous control of the Atlas humanoid robot under discrete commands from the human operator. A combination of autonomous and semi-autonomous perception and manipulation techniques for detecting, opening and walking through a door is presented. The methods are validated in various scenarios relevant to the DRC door task.
APA, Harvard, Vancouver, ISO, and other styles.
14

Mazhar, Osama. "Vision-based human gestures recognition for human-robot interaction". Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS044.

Full text of the source
Abstract:
In the light of the factories of the future, to ensure productive, safe and effective interaction between robots and human coworkers, it is imperative that the robot extract the essential information about its coworker. To address this, deep learning solutions are explored and a reliable human gesture detection framework is developed in this work. Our framework is able to robustly detect static hand gestures as well as upper-body dynamic gestures. For static hand gesture detection, openpose is integrated with the Kinect V2 to obtain a pseudo-3D human skeleton. With the help of 10 volunteers, we recorded an image dataset, opensign, that contains Kinect V2 RGB and depth images of 10 alphanumeric static hand gestures taken from American Sign Language. The "Inception V3" neural network is adapted and trained to detect static hand gestures in real time. Subsequently, we extend our gesture detection framework to recognize upper-body dynamic gestures. A spatial-attention-based dynamic gesture detection strategy is proposed that employs a multi-modal "Convolutional Neural Network - Long Short-Term Memory" deep network to extract spatio-temporal dependencies in pure RGB video sequences. The exploited convolutional neural network blocks are pre-trained on our static hand gestures dataset opensign, which allows efficient extraction of hand features. Our spatial attention module focuses on large-scale movements of the upper limbs plus on hand images for subtle hand/finger movements, to efficiently distinguish gesture classes. This module additionally exploits the 2D upper-body pose to estimate the distance of the user from the sensor for scale normalization, and to determine the parameters of the hand bounding boxes without the need for a depth sensor. The information typically extracted from a depth camera in similar strategies is learned from the opensign dataset, so the proposed gesture recognition strategy can be implemented on any system with a monocular camera. Afterwards, we briefly explore 3D human pose estimation strategies for monocular cameras. To estimate 3D human pose, a hybrid strategy is proposed which combines the merits of discriminative 2D pose estimators with those of model-based generative approaches. Our method optimizes an objective function that minimizes the discrepancy between the position- and scale-normalized 2D pose obtained from openpose and a virtual 2D projection of a kinematic human model. For real-time human-robot interaction, an asynchronous distributed system is developed to integrate our static hand gesture detector module with an open-source physical human-robot interaction library, OpenPHRI. We validate the performance of the proposed framework through a teach-by-demonstration experiment with a robotic manipulator.
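The CNN-LSTM pattern the abstract describes — a per-frame convolutional encoder followed by a recurrent layer that aggregates features over time — can be sketched as below. This is a minimal PyTorch illustration under stated assumptions: the layer sizes, the tiny stand-in CNN, and the clip shape are invented, not taken from the thesis.

```python
import torch
import torch.nn as nn

class CnnLstmGestureNet(nn.Module):
    """Minimal CNN-LSTM for clip-level gesture classification.

    A small per-frame CNN (standing in for the pre-trained blocks the
    thesis re-uses) encodes each RGB frame; an LSTM then aggregates the
    per-frame features over time.
    """

    def __init__(self, n_classes: int, feat_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)            # last hidden state
        return self.head(h_n[-1])                 # (batch, n_classes) logits

# Smoke test on a random batch of two 8-frame, 64x64 clips.
logits = CnnLstmGestureNet(n_classes=10)(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```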
APA, Harvard, Vancouver, ISO, and other styles.
15

Sahindal, Boran. "Detecting Conversational Failures in Task-Oriented Human-Robot Interactions". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272135.

Full text of the source
Abstract:
In conversations between humans, not only the content of the utterances but also our social signals provide information on the state of the communication. Similarly, in the field of human-robot interaction, we want robots to be capable of interpreting the social signals given by human users. Such social signals can be operationalised in order to detect unexpected robot behaviours. This thesis work compares machine-learning-based methods to investigate robots' recognition of their own unexpected behaviours based on human social signals. We trained support vector machine, random forest and logistic regression classifiers on a guided-task human-robot interaction corpus that includes planned robot failures. We created features based on gaze, motion and facial expressions, defined data points of different window lengths, and compared the effects of different robot embodiments. The results show that there is promising potential in this field, and that the accuracy of this classification task depends on several variables that require careful tuning.
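A hedged sketch of the classifier comparison the abstract outlines — training SVM, random forest and logistic regression models on windowed social-signal features — using scikit-learn. The feature matrix and labels here are synthetic stand-ins, not the thesis's corpus.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: one row per time window of gaze/motion/face features,
# label 1 = window overlaps a planned robot failure.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=400) > 0).astype(int)

models = {
    "svm": make_pipeline(StandardScaler(), SVC()),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression()),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```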
APA, Harvard, Vancouver, ISO, and other styles.
16

Birchmore, Frederick Christopher. "A holistic approach to human presence detection on man-portable military ground robots". Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p1464660.

Full text of the source
Abstract:
Thesis (M.S.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed July 2, 2009). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 85-90).
APA, Harvard, Vancouver, ISO, and other styles.
17

Alhusin, Alkhdur Abdullah. "Toward a Sustainable Human-Robot Collaborative Production Environment". Doctoral thesis, KTH, Industriell produktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-202388.

Full text of the source
Abstract:
This PhD study aimed to address the sustainability issues of robotic systems from the environmental and social perspectives. During the research, three approaches were developed. The first is an online, programming-free, model-driven system that utilises a web-based distributed human-robot collaboration architecture to perform distant assembly operations. It uses a robot-mounted camera to capture the silhouettes of the components from different angles; the system then analyses those silhouettes and constructs the corresponding 3D models. Using the 3D models together with a model of the robotic assembly cell, the system guides a distant human operator in assembling the real components in the actual robotic cell. To address the safety aspect of human-robot collaboration, a second approach was developed for effective online collision avoidance in an augmented environment, where virtual three-dimensional (3D) models of robots and real images of human operators from depth cameras are used for monitoring and collision detection. A prototype system was developed and linked to industrial robot controllers for adaptive robot control, without the need for programming by the operators. Collision detection enables four safety strategies: the system can alert an operator, stop a robot, move the robot away, or modify the robot's trajectory away from an approaching operator. These strategies can be activated based on the operator's location with respect to the robot. The case study of the research further discusses the possibility of implementing the developed method in realistic applications, for example collaboration between robots and humans on an assembly line. To tackle the energy aspect of sustainability in the human-robot production environment, a third approach was developed which aims to minimise the robot's energy consumption during assembly. Given a trajectory, and based on the inverse kinematics and dynamics of the robot, a set of attainable configurations can be determined, followed by calculating the corresponding forces and torques on the joints and links of the robot. The energy consumption is then calculated for each configuration along the assigned trajectory, and the ones with the lowest energy consumption are selected.
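The energy-minimisation step described last can be illustrated with a small sketch: given candidate joint trajectories that all realise the same tool path, estimate each one's energy as the time integral of joint torque times joint velocity and keep the cheapest. The torque and velocity data here are random placeholders, not the thesis's robot dynamics.

```python
import numpy as np

def trajectory_energy(tau, qdot, dt):
    """Approximate mechanical energy of one candidate trajectory.

    tau, qdot: (T, n_joints) joint torques and velocities along the path.
    dt:        sample period in seconds.
    Energy ~ integral of sum_j |tau_j * qdot_j| dt (regeneration ignored).
    """
    return float(np.sum(np.abs(tau * qdot)) * dt)

def select_min_energy(candidates, dt=0.01):
    """candidates: list of (tau, qdot) pairs, one per attainable
    configuration sequence realising the same assembly trajectory."""
    energies = [trajectory_energy(tau, qdot, dt) for tau, qdot in candidates]
    return int(np.argmin(energies)), energies

# Toy usage: three random candidates for a 6-joint robot.
rng = np.random.default_rng(1)
cands = [(rng.normal(size=(200, 6)), rng.normal(size=(200, 6)))
         for _ in range(3)]
idx, energies = select_min_energy(cands)
print("selected candidate", idx, "energies:",
      [round(e, 1) for e in energies])
```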


APA, Harvard, Vancouver, ISO, and other styles.
18

Lirussi, Igor. "Human-Robot interaction with low computational-power humanoids". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19120/.

Full text of the source
Abstract:
This thesis investigates the possibilities of human-humanoid interaction with robots whose computational power is limited. The project was carried out during a year of work at the Computer and Robot Vision Laboratory (VisLab), part of the Institute for Systems and Robotics in Lisbon, Portugal. Communication, the basis of interaction, is simultaneously visual, verbal, and gestural. The robot's algorithm provides users with natural language communication, being able to catch and understand the person's needs and feelings. The design of the system should, consequently, give it the capability to dialogue with people in a way that makes the understanding of their needs possible. The whole experience, to be natural, is independent of the GUI, which is used only as an auxiliary instrument. Furthermore, the humanoid can communicate with gestures, touch, and visual perception and feedback. This creates a totally new type of interaction where the robot is not just a machine to use, but a figure to interact and talk with: a social robot.
APA, Harvard, Vancouver, ISO, and other styles.
19

Taqi, Sarah M. A. M. "Reproduction of Observed Trajectories Using a Two-Link Robot". The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308031627.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
20

Kit, Julian Chua Ying. "The human-machine interface (HMI) and re-bar detection aspects of a non-destructive testing (NDT) robot". Thesis, City University London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245862.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
21

ANGELONI, Fabio. "Collision Detection for Industrial Applications". Doctoral thesis, Università degli studi di Bergamo, 2017. http://hdl.handle.net/10446/77107.

Full text of the source
Abstract:
In the manufacturing industry, the demand for complex products and decreasing production times have led to more and more sophisticated CNC-controlled multi-axis machines. Their setup process is often affected by mistakes made by the persons responsible, leading to collisions inside the working area. These collisions often damage the tool and the workpiece. The thesis deals with this problem, providing new insights for fast and robust collision detection. Starting from scratch, through a dynamic analysis of the impact in a mechanical transmission, we identified the sensors that provide the optimal trade-off between the quality of the measured impact information, feasibility, and cost. We then propose two new collision detection algorithms able to identify the unwanted event as fast as possible, with the goal of reducing the impact force and containing the damage. Furthermore, their performance is compared with the most successful algorithm found in the literature on two different mechanical systems: a heavy automatic access gate and the laboratory's robotic arm.
APA, Harvard, Vancouver, ISO, and other styles.
22

Carraro, Marco. "Real-time RGB-Depth perception of humans for robots and camera networks". Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3426800.

Full text of the source
Abstract:
This thesis deals with robot and camera-network perception using RGB-Depth data. The goal is to provide efficient and robust algorithms for interacting with humans. For this reason, special care has been devoted to designing algorithms which can run in real time on consumer computers and embedded cards. The main contribution of this thesis is 3D pose estimation of the human body. We propose two novel algorithms which take advantage of the data stream of an RGB-D camera network, outperforming the state of the art in both single-view and multi-view tests. While the first algorithm works on point-cloud data, which is feasible even without external light, the second performs better, since it handles multiple persons with negligible overhead and does not rely on synchronization between the different cameras in the network. The second contribution regards long-term people re-identification in camera networks. This is particularly challenging since we cannot rely on appearance cues if we want to re-identify people across different days. We address this problem by proposing a face-recognition framework based on a Convolutional Neural Network and a Bayesian inference system to re-assign the correct ID and person name to each new track. The third contribution is about Ambient Assisted Living. We propose a prototype of an assistive robot which periodically patrols a known environment, reporting unusual events such as people fallen on the ground. To this end, we developed a fast and robust approach which also works in dim scenes and is validated using a new publicly available RGB-D dataset recorded on board our open-source robot prototype. As a further contribution of this work, in order to boost research on these topics and to provide the best benefit to the robotics and computer vision community, we released most of the software implementations of the novel algorithms described in this work under open-source licenses.
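The re-identification step the abstract describes — matching a new face track against a gallery of known identities via CNN embeddings — reduces, in its simplest form, to nearest-neighbour search in embedding space. The sketch below assumes embeddings are already produced by some face CNN; the similarity threshold and embedding dimension are hypothetical choices, and the full Bayesian re-assignment of the thesis is not reproduced here.

```python
import numpy as np

def reidentify(track_embedding, gallery, threshold=0.6):
    """Assign a track to a known person by cosine similarity.

    track_embedding: (d,) L2-normalised face embedding of the new track,
                     assumed to come from a pre-trained face CNN.
    gallery:         dict name -> (d,) normalised reference embedding.
    Returns the matched name, or None for an unknown person.
    """
    best_name, best_sim = None, -1.0
    for name, ref in gallery.items():
        sim = float(np.dot(track_embedding, ref))  # cosine (unit vectors)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

# Toy usage with random unit vectors standing in for CNN embeddings.
rng = np.random.default_rng(2)
unit = lambda v: v / np.linalg.norm(v)
alice = unit(rng.normal(size=128))
gallery = {"alice": alice, "bob": unit(rng.normal(size=128))}
probe = unit(alice + 0.05 * rng.normal(size=128))  # noisy view of alice
print(reidentify(probe, gallery))                   # -> "alice"
```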
APA, Harvard, Vancouver, ISO, and other styles.
23

Kim, Ui-Hyun. "Improvement of Sound Source Localization for a Binaural Robot of Spherical Head with Pinnae". 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/180475.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
24

Dumora, Julie. "Contribution à l’interaction physique homme-robot : application à la comanipulation d’objets de grandes dimensions". Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20030/document.

Full text of the source
Abstract:
Collaborative robotics aims at physically assisting humans in their daily tasks. The system comprises two partners with complementary strengths: physical for the robot versus cognitive for the operator. This combination enables new scenarios of application, such as the accomplishment of difficult-to-automate tasks. In this thesis, we are interested in assisting the human operator in manipulating bulky parts while the robot has no prior knowledge of the environment or the task. Handling such parts is a daily activity in many areas, and a complex and critical issue. We propose a new strategy of assistances to tackle the problem of simultaneously controlling both the grasping point of the operator and that of the robot. Task responsibilities are allocated to the robot and the operator according to their relative strengths. While the operator decides the plan and applies the driving force, the robot detects the operator's intention of motion and constrains the degrees of freedom that are useless for performing the intended motion. This way, the operator does not have to control all the degrees of freedom simultaneously. The scientific issues we deal with are split into three main parts: assistive control, haptic channel analysis, and learning during the interaction. The strategy is based on a unified framework for assistance specification, robot control and intention detection. It is a modular approach that can be applied with any low-level robot control architecture. We highlight its interest through manifold tasks completed with two robotic platforms: an industrial arm manipulator and a biped humanoid robot.
APA, Harvard, Vancouver, ISO, and other styles.
25

Reynaga, Barba Valeria. "Detecting Changes During the Manipulation of an Object Jointly Held by Humans and RobotsDetektera skillnader under manipulationen av ett objekt som gemensamt hålls av människor och robotar". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174027.

Full text of the source
Abstract:
In the last decades, research and development in the field of robotics have grown rapidly. This growth has resulted in the emergence of service robots that need to be able to physically interact with humans in different applications. One of these applications involves robots and humans cooperating in handling an object together. In such cases, there is usually an initial arrangement of how the robot and the humans hold the object, and the arrangement stays the same throughout the manipulation task. Real-world scenarios often require that the initial arrangement change throughout the task; it is therefore important that the robot be able to recognize these changes and act accordingly. We consider a setting where a robot holds a large flat object with one or two humans. The aim of this research project is to detect the change in the number of agents grasping the object using only force and torque information measured at the robot's wrist. The proposed solution involves defining a transition sequence of four steps that the humans should perform to go from the initial scenario to the final one. The force and torque information is used to estimate the grasping point of the agents with a Kalman filter. While the humans are going from one scenario to the other, the estimated point changes according to the step of the transition the humans are in. These changes are used to track the steps in the sequence using a hidden Markov model (HMM). Tracking the steps in the sequence amounts to knowing how many agents are grasping the object. To evaluate the method, humans who were not involved in the training of the HMM were asked to perform two tasks: a) perform the previously defined sequence as is, and b) perform a deviation from the sequence. The results show that it is possible to detect the change between one human and two humans holding the object using only force and torque information.
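As a hedged illustration of the estimation step described above, the sketch below runs a constant-position Kalman filter over noisy grasp-point observations (which, in the thesis's setting, would be derived from wrist force/torque measurements; here they are synthetic). All matrices and noise levels are illustrative assumptions.

```python
import numpy as np

def kalman_grasp_point(observations, q=1e-4, r=1e-2):
    """Track a slowly varying 2D grasp-point estimate.

    observations: (T, 2) noisy grasp-point measurements, e.g. computed
    from force/torque readings at the robot's wrist (assumed given here).
    q, r: process and measurement noise variances (illustrative values).
    """
    x = observations[0].astype(float)      # state: grasp point (2,)
    P = np.eye(2)                          # state covariance
    Q, R = q * np.eye(2), r * np.eye(2)    # noise covariances
    estimates = [x.copy()]
    for z in observations[1:]:
        P = P + Q                          # predict (identity dynamics)
        K = P @ np.linalg.inv(P + R)       # Kalman gain
        x = x + K @ (z - x)                # update with measurement z
        P = (np.eye(2) - K) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Toy run: the grasp point jumps when a second person grabs the object;
# the filtered estimate shifts with a lag, and a downstream HMM can use
# such shifts to track the step of the handover sequence.
rng = np.random.default_rng(3)
true = np.vstack([np.tile([0.2, 0.0], (100, 1)),
                  np.tile([0.5, 0.1], (100, 1))])
z = true + rng.normal(scale=0.05, size=true.shape)
est = kalman_grasp_point(z)
print(est[95].round(2), est[195].round(2))
```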
APA, Harvard, Vancouver, ISO, and other styles.
26

Wåhlin, Peter. "Enhancing the Human-Team Awareness of a Robot". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-16371.

Full text of the source
Abstract:
The use of autonomous robots in our society is increasing every day, and a robot is no longer seen as a tool but as a team member. Robots now work side by side with us and provide assistance during dangerous operations where humans would otherwise be at risk. This development has in turn increased the need for robots with more human-awareness. This master thesis therefore aims to contribute to the enhancement of human-aware robotics. Specifically, we investigate the possibilities of equipping autonomous robots with the capability of assessing and detecting activities in human teams. This capability could, for instance, be used in the robot's reasoning and planning components to create better plans that would ultimately result in improved human-robot teamwork performance. We propose to improve existing teamwork activity recognizers by adding intangible features, such as stress, motivation and focus, originating from human behavior models. Hidden Markov models have previously proven very efficient for activity recognition and have therefore been utilized in this work as a method for classifying behaviors. For a robot to provide effective assistance to a human team, it must consider not only the spatio-temporal parameters of the team members but also the psychological ones. To assess psychological parameters, this master thesis suggests using the body signals of team members, such as heart rate and skin conductance. Combined with the body signals, we investigate the possibility of using System Dynamics models to interpret the current psychological states of the human team members, thus enhancing the human-awareness of a robot.
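A common way to use hidden Markov models for activity recognition, as the abstract describes, is to train one HMM per activity class and label a new observation sequence with the class whose model gives the highest likelihood. The sketch below uses the hmmlearn package on synthetic feature sequences; the feature layout, class names and model sizes are invented for illustration, not taken from the thesis.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)

def make_sequences(mean, n_seq=20, length=50, dim=4):
    """Synthetic stand-in for windowed team features (motion, heart
    rate, skin conductance, ...) for one activity class."""
    return [mean + rng.normal(scale=0.5, size=(length, dim))
            for _ in range(n_seq)]

train = {
    "coordinated_work": make_sequences(np.zeros(4)),
    "stressed_work": make_sequences(np.full(4, 1.5)),
}

# One GaussianHMM per activity class, trained on that class's sequences.
models = {}
for label, seqs in train.items():
    X, lengths = np.vstack(seqs), [len(s) for s in seqs]
    m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[label] = m

def classify(sequence):
    """Pick the class whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda lab: models[lab].score(sequence))

test = np.full(4, 1.5) + rng.normal(scale=0.5, size=(50, 4))
print(classify(test))  # expected: "stressed_work"
```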

The thesis work was conducted in Stockholm, Kista, at the Department of Informatics and Aero System at the Swedish Defence Research Agency.

APA, Harvard, Vancouver, ISO, and other styles.
27

Iacob, David-Octavian. "Détection du mensonge dans le cadre des interactions homme-robot à l'aide de capteurs et dispositifs non invasifs et mini invasifs". Thesis, Institut polytechnique de Paris, 2019. http://www.theses.fr/2019IPPAE004.

Full text of the source
Abstract:
Social Robotics focuses on improving the ability of robots to interact with humans, including the capacity to understand their human interlocutors. When endowed with such capabilities, social robots can be useful to their users in a large variety of contexts: as guides, play partners, home assistants, or, most importantly, for therapeutic purposes. Socially Assistive Robots (SARs) aim to improve the quality of life of their users by means of social interactions. Vulnerable populations of users, like people requiring rehabilitation, therapy or permanent assistance, benefit the most from the aid of SARs. One of the responsibilities of such robots is to make sure their users respect their therapeutic and medical recommendations, and human users are not always cooperative. As has been observed in previous studies, humans sometimes deceive their robot caretakers in order to avoid following their recommendations. The former therefore end up deteriorating their medical condition and render the latter incapable of fulfilling their duties. SARs, and especially their users, would therefore benefit if robots were able to detect deception in Human-Robot Interaction (HRI). This thesis explores the physiological and behavioural manifestations and cues associated with deception in HRI, based on previous research on inter-human interactions. As we consider it highly important not to impair the quality of the interaction in any way, our work focuses on the evaluation of these manifestations by means of non-invasive and minimally invasive devices, such as RGB, RGB-D and thermal cameras, as well as wearable sensors. To this end, we designed a series of in-the-wild interaction scenarios during which participants are enticed to lie. During these experiments, we monitored the participants' heart rate, respiratory rate, skin temperature, skin conductance, eye openness, head position and orientation, and their response time to questions, using non-invasive and minimally invasive devices and sensors. We attempted to correlate the variations of the aforementioned parameters with the veracity of the participants' answers and statements. Moreover, we studied the impact of the nature of the interlocutor (human or robot) on the participants' manifestations. We believe that this thesis and our results represent a major step towards the development of robots that are able to establish the honesty and trustworthiness of their interlocutors, thus improving the quality of HRI and the ability of SARs to perform their duties and to improve the quality of life of their users.
APA, Harvard, Vancouver, ISO, and other styles.
28

Malik, Muhammad Usman. "Learning multimodal interaction models in mixed societies. A novel focus encoding scheme for addressee detection in multiparty interaction using machine learning algorithms". Thesis, Normandie, 2020. http://www.theses.fr/2020NORMIR18.

Full text of the source
Abstract:
Human -Agent Interaction and Machine learning are two different research domains. Human-agent interaction refers to techniques and concepts involved in developing smart agents, such as robots or virtual agents, capable of seamless interaction with humans, to achieve a common goal. Machine learning, on the other hand, exploits statistical algorithms to learn data patterns. The proposed research work lies at the crossroad of these two research areas. Human interactions involve multiple modalities, which can be verbal such as speech and text, as well as non-verbal i.e. facial expressions, gaze, head and hand gestures, etc. To mimic real-time human-human interaction within human-agent interaction,multiple interaction modalities can be exploited. With the availability of multimodal human-human and human-agent interaction corpora, machine learning techniques can be used to develop various interrelated human-agent interaction models. In this regard, our research work proposes original models for addressee detection, turn change and next speaker prediction, and finally visual focus of attention behaviour generation, in multiparty interaction. Our addressee detection model predicts the addressee of an utterance during interaction involving more than two participants. The addressee detection problem has been tackled as a supervised multiclass machine learning problem. Various machine learning algorithms have been trained to develop addressee detection models. The results achieved show that the proposed addressee detection algorithms outperform a baseline. The second model we propose concerns the turn change and next speaker prediction in multiparty interaction. Turn change prediction is modeled as a binary classification problem whereas the next speaker prediction model is considered as a multiclass classification problem. Machine learning algorithms are trained to solve these two interrelated problems. The results depict that the proposed models outperform baselines. Finally, the third proposed model concerns the visual focus of attention (VFOA) behaviour generation problem for both speakers and listeners in multiparty interaction. This model is divided into various sub-models that are trained via machine learning as well as heuristic techniques. The results testify that our proposed systems yield better performance than the baseline models developed via random and rule-based approaches. The proposed VFOA behavior generation model is currently implemented as a series of four modules to create different interaction scenarios between multiple virtual agents. For the purpose of evaluation, recorded videos for VFOA generation models for speakers and listeners, are presented to users who evaluate the baseline, real VFOA behaviour and proposed VFOA models on the various naturalness criteria. The results show that the VFOA behaviour generated via the proposed VFOA model is perceived more natural than the baselines and as equally natural as real VFOA behaviour
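A rough sketch of the supervised multiclass formulation this abstract describes; the features, their encoding and the classifier choice below are illustrative assumptions, not details from the thesis:

```python
# Addressee detection as supervised multiclass classification; the
# features, their encoding and the classifier are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# One row per utterance: assumed multimodal features (e.g. speaker gaze
# and head-orientation statistics, prosody); labels are addressee ids.
X = rng.normal(size=(500, 12))
y = rng.integers(0, 3, size=500)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("mean CV accuracy (chance = 1/3):",
      cross_val_score(clf, X, y, cv=5).mean())
```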
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Lohan, Katrin Solveig [Verfasser]. "A model of contingency detection to spot tutoring behavior and respond to ostensive cues in human-robot-interaction / Katrin Solveig Lohan. Technische Fakultät". Bielefeld : Universitätsbibliothek Bielefeld, Hochschulschriften, 2013. http://d-nb.info/1032453990/34.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Brèthes, Ludovic. "Suivi visuel par filtrage particulaire : application à l'interaction Homme-robot". Toulouse 3, 2005. http://www.theses.fr/2005TOU30282.

Full text source
Abstract:
This thesis focuses on the detection and tracking of people, as well as the recognition of elementary gestures, from the video stream of a colour camera embedded on a robot. Particle filtering, well suited to this context, enables a straightforward combination/fusion of several measurement cues. We propose various filtering strategies in which visual information such as shape, colour and motion is taken into account in the importance function and the measurement model. We compare and evaluate these filtering strategies to show which combinations of visual cues and particle filtering algorithms are best suited to the interaction modalities considered for our museum tour-guide robot. Our last contribution concerns the recognition of symbolic gestures for communicating with the robot. An efficient particle filtering strategy is proposed to track the hand and simultaneously recognize its configuration and gesture dynamics in the video stream.
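A minimal sketch of the SIR particle filtering loop underlying such trackers; the Gaussian likelihood stands in for the fused shape/colour/motion likelihoods of the thesis, and all constants are invented:

```python
# Minimal SIR particle filter for 2-D target tracking. The Gaussian
# likelihood stands in for the fused shape/colour/motion likelihoods;
# the dynamics are a plain random walk and all constants are invented.
import numpy as np

rng = np.random.default_rng(1)
N = 1000
particles = rng.normal([160.0, 120.0], 20.0, size=(N, 2))
weights = np.full(N, 1.0 / N)

def sir_step(particles, weights, z, motion_std=5.0, meas_std=8.0):
    # predict: diffuse particles with the motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # update: reweight by the measurement likelihood
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights = weights / weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights

particles, weights = sir_step(particles, weights, np.array([170.0, 115.0]))
print("posterior mean:", np.average(particles, axis=0, weights=weights))
```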
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Souroulla, Timotheos. "Distributed Intelligence for Multi-Robot Environment : Model Compression for Mobile Devices with Constrained Computing Resources". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302151.

Full text source
Abstract:
Human-Robot Collaboration (HRC), where humans and robots work simultaneously in the same environment, is an emerging field that has grown massively during the past decade. For this collaboration to be feasible and safe, robots need to perform a proper safety analysis to avoid hazardous situations. This safety analysis involves complex computer vision tasks that require substantial processing power. Robots with constrained computing resources therefore cannot execute these tasks without delays, and instead rely on edge infrastructure, such as remote computational resources accessible over wireless communication. In some cases, however, the edge may be unavailable or unreachable. In such cases, robots still have to navigate the environment while maintaining high levels of safety. This thesis project focuses on reducing the complexity and the total number of parameters of pre-trained computer vision models by using model compression techniques, such as pruning and knowledge distillation. These techniques have strong theoretical and practical foundations, but work on their combination is limited and is therefore investigated here. The results show that, in the test cases, up to 90% of a computer vision model's parameters can be removed without any considerable reduction in the model's accuracy.
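A minimal PyTorch sketch of the two compression ingredients the thesis combines, global magnitude pruning and knowledge distillation; the model sizes, 90% sparsity, temperature and mixing weight are placeholders, not the thesis's settings:

```python
# Two compression ingredients combined: global magnitude pruning and a
# knowledge-distillation loss. Model sizes, 90% sparsity, temperature
# and the mixing weight are placeholders, not the thesis's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

# Remove 90% of the student's weights by global magnitude ranking.
params = [(m, "weight") for m in student if isinstance(m, nn.Linear)]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                          amount=0.9)

def kd_loss(s_logits, t_logits, labels, T=4.0, alpha=0.7):
    # soft targets from the teacher plus the usual hard-label loss
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    return alpha * soft + (1 - alpha) * F.cross_entropy(s_logits, labels)

x = torch.randn(8, 64)
labels = torch.randint(0, 10, (8,))
loss = kd_loss(student(x), teacher(x).detach(), labels)
loss.backward()
print("distillation loss:", float(loss))
```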
Styles: APA, Harvard, Vancouver, ISO, etc.
32

Velor, Tosan. "A Low-Cost Social Companion Robot for Children with Autism Spectrum Disorder". Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41428.

Full text source
Abstract:
Robot-assisted therapy is becoming increasingly popular. Research has shown it can benefit persons dealing with a variety of disorders, such as Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD), and it can also provide a source of emotional support, e.g. to persons living in seniors' residences. Advances in technology and decreasing costs of consumer electronics, computing and communication have enabled the development of more advanced social robots at lower cost, bringing us closer to tools affordable to lower-income individuals and families. Currently, in several cases, sufficiently intensive treatment for patients with certain disorders is practically impossible through the public health system due to resource limitations and a large backlog. Pursuing treatment through the private sector is expensive and unattainable for those with lower incomes, placing them at a disadvantage. Designing and effectively integrating technology such as social robots into treatment reduces the cost considerably, potentially making it financially accessible to lower-income individuals and families in need. The objective of the research reported in this manuscript is to design and implement a social robot that meets the low-cost criterion while providing the functions required to support children with ASD. The design draws on knowledge acquired in past research involving the use of various types of technology for the treatment of mental and/or emotional disabilities.
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Reseco, Bato Miguel. "Nouvelle méthodologie générique permettant d’obtenir la probabilité de détection (POD) robuste en service avec couplage expérimental et numérique du contrôle non destructif (CND)". Thesis, Toulouse, ISAE, 2019. http://www.theses.fr/2019ESAE0014/document.

Full text source
Abstract:
The performance assessment of non-destructive testing (NDT) procedures in aeronautics is a key step in preparing the aircraft certification document. Such a demonstration of performance is done by establishing Probability of Detection (POD) curves that integrate all sources of uncertainty inherent in the implementation of the procedure. These uncertainties are due to human and environmental factors in in-service maintenance tasks. To establish these POD curves experimentally, data are needed covering a wide range of operator skills, defect types and locations, material types, test protocols, etc. Obtaining these data entails high costs and significant delays for the aircraft manufacturer. The scope of this thesis is to define a robust methodology for building POD curves from numerical modelling. POD robustness is ensured by integrating the uncertainties through statistical distributions derived from experimental data or engineering judgement. Applications are provided on beta titanium using the high-frequency eddy current (HFEC) NDT technique. First, an experimental database is created in three environments: the laboratory, an A321 aircraft and an A400M aircraft. A representative sample of operators with different NDT certification levels is employed, and multiple inspection scenarios are carried out to analyse the human and environmental factors; the study also accounts for the impact of using different pieces of equipment in the HFEC test. This database is subsequently used to build statistical distributions, which are the input data of the inspection simulation models implemented with the CIVA software. A POD module based on the Monte Carlo method is integrated into this software and is applied to address human and ergonomic influences on POD, as well as to better understand the impact of equipment on POD curves. Finally, the POD model is compared with and validated against the experimental results.
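A toy version of the simulation-based POD chain described here: sample uncertain parameters, simulate a response per defect size, threshold it, and fit a hit/miss POD curve. The signal model, thresholds and the a90 read-out below are invented for illustration and are not CIVA outputs:

```python
# Toy Monte Carlo POD: sample uncertain inspection parameters, simulate
# a response per defect size, threshold it, and fit a hit/miss POD
# curve. The signal model and thresholds are invented, not CIVA output.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
a = rng.uniform(0.1, 2.0, n)             # defect size [mm]
lift_off = rng.normal(0.3, 0.1, n)       # uncertain probe lift-off [mm]
gain = rng.normal(1.0, 0.15, n)          # operator/equipment gain spread
signal = gain * a * np.exp(-lift_off) + rng.normal(0.0, 0.1, n)
hit = (signal > 0.6).astype(int)         # detection threshold

pod = LogisticRegression().fit(np.log(a).reshape(-1, 1), hit)
# a90: size detected with 90% probability (logit(0.9) = ln 9)
a90 = np.exp((np.log(9.0) - pod.intercept_[0]) / pod.coef_[0, 0])
print("a90 = %.2f mm" % a90)
```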
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Choudhury, Subradeb. "Robust Human Detection for Surveillance". Thesis, 2015. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14385.

Full text source
Abstract:
Video surveillance has received tremendous attention in recent years. It has a wide range of applications: it can be used in the border areas of a country, in market areas, and in restricted areas for monitoring objects. Human detection is a field of video surveillance in which humans are monitored, i.e. the human is detected first and its trajectory is then estimated. In this project, a robust human detection method is proposed. The detection system consists of two stages. The first stage involves image pre-processing, where the motion region is extracted and image segmentation is applied to it. The second stage classifies the segmented image as human or non-human based on the aspect ratio of the human body; the motion region is thus combined with the aspect-ratio feature to obtain a robust human detection method. A dataset is constructed in which the background colour matches the human skin colour, a situation in which tracking is very difficult, and the proposed system can track humans under such conditions. The system is also tested on the PETS database, and an overall detection rate of 85% is reported. However, the detection rate drops drastically when the human is occluded in the scene.
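A minimal OpenCV sketch of the two-stage idea (motion-region extraction, then an aspect-ratio test); the input file name and the area and ratio thresholds are assumptions:

```python
# Sketch of the two-stage pipeline: extract the motion region with
# background subtraction, then keep blobs whose height/width ratio is
# plausible for a standing human. File name and thresholds assumed.
import cv2

cap = cv2.VideoCapture("surveillance.avi")        # placeholder input
subtractor = cv2.createBackgroundSubtractorMOG2()
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                 # stage 1: motion region
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:                             # stage 2: aspect-ratio test
        x, y, w, h = cv2.boundingRect(c)
        if cv2.contourArea(c) > 500 and 1.5 < h / float(w) < 4.0:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("human detection", frame)
    if cv2.waitKey(30) == 27:                      # Esc to quit
        break
cap.release()
```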
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Lin, You-Rong, and 林佑融. "A Robust Fall Detection Scheme Using Human Shadow and SVM Classifiers". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/12696932794927637870.

Full text source
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Electronic Engineering, academic year 100 (2011-2012).
We present a novel real-time video-based human fall detection system in this thesis. Because the system is based on a combination of shadow-based features and various human postures, it can distinguish between fall-down and fall-like incidents with a high degree of accuracy. To support effective operation from different viewpoints, we propose a new feature called virtual height that estimates body height without 3D model reconstruction; as a result, the model has low computational complexity. Our experimental results demonstrate that the proposed system achieves a high detection rate and a low false-alarm rate.
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Zeng, Hong-Bo, and 曾泓博. "Robust Vision-based Multiple Fingertip Detection and Human Computer Interface Application". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/05658811497165632062.

Full text source
Abstract:
Master's thesis, National Central University, Institute of Computer Science and Information Engineering, academic year 100 (2011-2012).
Intuitive and easy-to-use interfaces are very important to a successful product. With the development of technology, gesture-based machine interfaces have gradually become a trend to replace traditional input devices. For such interfaces, multiple fingertip detection is a crucial step. Studies of multiple fingertip detection fall into two main categories, wearable and markerless. In the former, users wear additional equipment to facilitate fingertip detection. Considering the inconvenience and hygiene problems of wearable equipment, the latter requires no additional equipment to obtain the hand regions or fingertip positions. This thesis presents a markerless method: we use only a single camera to capture images and locate fingertips accurately in them. Many markerless approaches limit their experimental environment or gesture definitions, and some use contour and distance-to-centroid information to locate fingertips; most assume that only hand regions are in the scene and do not consider the problems that arise when the arms are also visible. In this thesis, we propose a multiple fingertip detection algorithm based on the likelihood value of contour and curvature information, augmented with width data, which is more robust and flexible. Finally, we implement a human-computer interface system using predefined gestures.
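A hedged sketch of one common contour-curvature fingertip test of the kind this abstract builds on (a k-curvature angle test; the thesis's likelihood combination of contour, curvature and width is not reproduced here):

```python
# k-curvature fingertip test: a contour point is a candidate tip when
# the angle between vectors to its k-th neighbours is sharp. A further
# convexity test would be needed to reject the valleys between fingers.
import cv2
import numpy as np

def fingertips(hand_mask, k=20, max_angle_deg=60.0):
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return []
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    n = len(pts)
    if n <= 2 * k:
        return []
    tips = []
    for i in range(n):
        p, a, b = pts[i], pts[i - k], pts[(i + k) % n]
        v1, v2 = a - p, b - p
        cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        if np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) < max_angle_deg:
            tips.append(tuple(p))
    return tips
```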
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Liu, Yuan-Ming, and 劉原銘. "A Robust Image Descriptor for Human Detection Based on HoG and Weber's Law". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/71816414716758896469.

Full text source
Abstract:
Master's thesis, National Dong Hwa University, Department of Computer Science and Information Engineering, academic year 99 (2010-2011).
Human detection is essential for many applications such as surveillance and smart cars. However, detecting humans in images or videos is challenging because of variable appearance and background clutter, factors that significantly affect human shape. In recent years, researchers have therefore sought more discriminative descriptors to improve the performance of human detection. In this thesis, a robust descriptor based on HoG and Weber's Law is proposed: it concatenates U-HoG with a histogram of Weber's constant. Weber's constant is robust to noise and detects edges well. Because a large number of weak edges in cluttered backgrounds affect the detection result, the proposed method uses Weber's constant to discard weak edges: if a pixel lies on a weak edge, it is ignored when computing the feature. The proposed descriptor thus inherits the advantages of Weber's Law. Simulation results show that it outperforms the comparative methods and is more robust to Gaussian white noise than U-HoG.
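A small sketch of the descriptor idea, assuming the usual form of Weber's differential excitation over a 3x3 neighbourhood; the weak-edge threshold and bin count are placeholders:

```python
# Descriptor sketch: Weber differential excitation flags weak edges,
# which are then excluded from an orientation histogram (HoG-style).
# The weak-edge threshold and bin count are placeholders.
import numpy as np
from scipy import ndimage

def weber_masked_hog_cell(cell, weak_thresh=0.05, bins=9):
    cell = cell.astype(float) + 1e-6
    # Weber excitation: arctan(sum over 8 neighbours of (x_i - x_c) / x_c)
    diff_sum = ndimage.uniform_filter(cell, 3) * 9.0 - 9.0 * cell
    excitation = np.arctan(diff_sum / cell)
    gy, gx = np.gradient(cell)
    mag = np.hypot(gx, gy)
    ori = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)
    keep = np.abs(excitation) > weak_thresh        # drop weak-edge pixels
    hist, _ = np.histogram(ori[keep], bins=bins, range=(0, 180),
                           weights=mag[keep])
    return hist / (np.linalg.norm(hist) + 1e-9)

print(weber_masked_hog_cell(np.random.default_rng(5).integers(0, 255, (8, 8))))
```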
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Carvalho, Daniel Chichorro de. "Development of a Facial Feature Detection and Tracking Framework for Robust Gaze Estimation". Master's thesis, 2018. http://hdl.handle.net/10316/86556.

Full text source
Abstract:
Integrated Master's dissertation in Electrical and Computer Engineering presented to the Faculty of Sciences and Technology.
Remote gaze estimation is the process of attempting to find the gaze direction, and ultimately the point of fixation, of a human subject from remotely captured image information, much as humans do to engage in communication. This is achieved using information from the head pose and the geometry of facial feature landmarks, and is useful for a number of applications, namely human-machine interfacing. However, facial feature detection and tracking remains one of the most important foci of computer vision, and no general-purpose facial feature detector and tracker exhibits satisfactory robustness under in-the-wild conditions. Most approaches rely on high-resolution scenarios and near-frontal head poses, and no work was found accounting for both near-frontal and profile views during tracking. This work proposes a proof of concept that attempts to achieve better performance on distant subjects and incorporates tracking of facial features in profile views. The designed method compares the shape geometry obtained from independent feature detectors with different shape models of the human face capturing different head poses. Tracking is performed with the KLT algorithm, and partial or full re-detections are triggered by a confidence-based design. A proof of concept shows that the proposed solution satisfactorily addresses the main functional requirements; future work is suggested to further improve performance and address remaining issues.
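A minimal sketch of the confidence-triggered KLT loop described here; detect_landmarks() is a hypothetical placeholder for any independent facial feature detector, and the survival-ratio threshold is assumed:

```python
# Confidence-triggered KLT loop: propagate facial feature points with
# pyramidal Lucas-Kanade and re-run a full detector when too few points
# survive. detect_landmarks() is a hypothetical detector callback.
import cv2
import numpy as np

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           30, 0.01))

def track_step(prev_gray, gray, points, detect_landmarks, min_ratio=0.6):
    pts = points.reshape(-1, 1, 2).astype(np.float32)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts,
                                                 None, **lk_params)
    good = nxt[status.ravel() == 1].reshape(-1, 2)
    if len(good) < min_ratio * len(pts):       # confidence collapsed
        return detect_landmarks(gray)          # trigger full re-detection
    return good
```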
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Ong, Kai-Siang, and 王祈翔. "Sensor Fusion Based Human Detection and Tracking System for Human-Robot Interaction". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/14286966706841584842.

Full text source
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Electrical Engineering, academic year 100 (2011-2012).
Service robots have received enormous attention with the rapid development of technology in recent years, and they are endowed with capabilities for interacting with people and performing human-robot interaction (HRI). For this purpose, the Sampling Importance Resampling (SIR) particle filter is adopted to implement a laser- and vision-based human tracking system for HRI in real-world environments. The sequence of images and the geometric measurements are provided by the vision sensor and the laser range finder (LRF), respectively. We construct a sensor-fusion system that integrates the information from both sensors using a data-association approach, Covariance Intersection (CI), to increase the robustness and reliability of human tracking in real-world environments. In this thesis, we propose a behaviour system that analyses human features and classifies behaviour using the crucial information from the sensor-fusion system. The system is used to infer human behavioural intentions and allows the robot to interact more naturally and intelligently. We apply a spatial model based on proxemics rules to our robot and design a behavioural-intention inference strategy; the robot then reacts according to the identified intention. The work concludes with several experiments with a robot in an indoor environment, in which promising performance was observed.
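Covariance Intersection itself is standard; a compact sketch of the fusion rule, with the mixing weight chosen to minimise the determinant of the fused covariance:

```python
# Covariance Intersection (CI): fuse two estimates with unknown
# cross-correlation; omega is chosen to minimise det(P_fused).
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(xa, Pa, xb, Pb):
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)

    def fused(omega):
        P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
        x = P @ (omega * Pa_inv @ xa + (1 - omega) * Pb_inv @ xb)
        return x, P

    res = minimize_scalar(lambda w: np.linalg.det(fused(w)[1]),
                          bounds=(0.0, 1.0), method="bounded")
    return fused(res.x)

# Example: laser (accurate) and camera (coarse) position estimates.
x, P = covariance_intersection(np.array([1.0, 2.0]), np.diag([0.1, 0.4]),
                               np.array([1.2, 1.9]), np.diag([0.5, 0.2]))
print(x, np.diag(P))
```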
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Schroeder, Kyle Anthony. "Requirements for effective collision detection on industrial serial manipulators". 2013. http://hdl.handle.net/2152/21585.

Full text source
Abstract:
Human-robot interaction (HRI) is the future of robotics. It is essential in the expanding markets, such as surgical, medical, and therapy robots. However, existing industrial systems can also benefit from safe and effective HRI. Many robots are now being fitted with joint torque sensors to enable effective human-robot collision detection. Many existing and off-the-shelf industrial robotic systems are not equipped with these sensors. This work presents and demonstrates a method for effective collision detection on a system with motor current feedback instead of joint torque sensors. The effectiveness of this system is also evaluated by simulating collisions with human hands and arms. Joint torques are estimated from the input motor currents. The joint friction and hysteresis losses are estimated for each joint of an SIA5D 7 Degree of Freedom (DOF) manipulator. The estimated joint torques are validated by comparing to joint torques predicted by the recursive application of Newton-Euler equations. During a pick and place motion, the estimation error in joint 2 is less than 10 Newton meters. Acceleration increased the estimation uncertainty resulting in estimation errors of 20 Newton meters over the entire workspace. When the manipulator makes contact with the environment or a human, the same technique can be used to estimate contact torques from motor current. Current-estimated contact torque is validated against the calculated torque due to a measured force. The error in contact force is less than 10 Newtons. Collision detection is demonstrated on the SIA5D using estimated joint torques. The effectiveness of the collision detection is explored through simulated collisions with the human hands and arms. Simulated collisions are performed both for a typical pick and place motion as well as trajectories that transverse the entire workspace. The simulated forces and pressures are compared to acceptable maximums for human hands and arms. During pick and place motions with vertical and lateral end effector motions at 10mm/s and 25mm/s, the maximum forces and pressures remained below acceptable levels. At and near singular configurations some collisions can be difficult to detect. Fortunately, these configurations are generally avoided for kinematic reasons.
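A hedged sketch of the current-based collision test this abstract describes: estimate joint torque from motor current, subtract modelled friction and the expected dynamic torque, and compare the residual to a threshold. All constants are placeholders; only the roughly 10 Nm residual scale echoes the abstract:

```python
# Current-based collision test: estimate joint torque from motor
# current, subtract modelled friction and the expected dynamic torque,
# and flag a collision when the residual exceeds a per-joint threshold.
# All constants are placeholders.
import numpy as np

K_T = 0.08           # motor torque constant [Nm/A], assumed
GEAR = 120.0         # gear ratio, assumed
FC, FV = 2.0, 0.5    # Coulomb / viscous friction coefficients, assumed
THRESHOLD = 10.0     # residual threshold [Nm]

def collision(current_amp, qdot, tau_expected):
    tau_motor = K_T * GEAR * current_amp          # joint-side torque estimate
    tau_friction = FC * np.sign(qdot) + FV * qdot
    residual = tau_motor - tau_friction - tau_expected
    return abs(residual) > THRESHOLD

print(collision(1.2, 0.3, 1.0))   # residual is about 8.4 Nm -> False
```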
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Lin, Chun Yi, and 林峻翊. "A Service Robot with Human Face Detection, Tracking and Gender Recognition". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/32699358169175211690.

Full text source
Abstract:
Master's thesis, Chang Gung University, Department of Electrical Engineering, academic year 101 (2012-2013).
In this study, an automated guided vehicle (AGV) service robot system with a gender recognition function is proposed. The system can detect and track a pedestrian in front of the AGV; moreover, the gender of the target can be recognized, so the AGV can provide different information according to the target's gender. The proposed system is divided into two parts: pedestrian detection and tracking by the AGV robot, and gender recognition. In the detection and tracking part, the AdaBoost algorithm is used to detect the target, and the Kanade-Lucas-Tomasi feature tracker is then applied to track the target's position. A Microsoft Kinect sensor is used as an auxiliary device to measure the distance to the target and to determine whether the AGV should stop. In the gender recognition part, the system combines multi-scale Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) for feature extraction, with Particle Swarm Optimization with Support Vector Machines (PSO-SVM) as the classification algorithm. After feature extraction, a t-test is used to compute a p-value for each feature to determine whether it differs significantly between the male and female categories, and the SVM model is then trained on the significant features selected by p-value. Experimental results show a classification accuracy of up to 92.6% while using only 30% of the original features.
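A small sketch of the t-test feature selection step followed by SVM training; the features and labels are synthetic stand-ins for the LBP+HOG vectors, and the PSO hyperparameter search is omitted:

```python
# Feature selection via two-sample t-test, then an SVM on the kept
# features. X and y are synthetic stand-ins for the LBP+HOG vectors;
# the PSO search over SVM hyperparameters is omitted.
import numpy as np
from scipy import stats
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 50))           # placeholder feature matrix
y = rng.integers(0, 2, size=200)         # 0 = male, 1 = female (placeholder)
X[:, :3] += 1.5 * y[:, None]             # inject signal so selection succeeds

_, p = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
keep = p < 0.05                           # keep significant features only
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X[:, keep], y)
print("kept %d of %d features" % (keep.sum(), X.shape[1]))
```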
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Tsou, Tai-Yu, and 鄒岱佑. "Multisensor Fusion Based Large Range Human Detection and Tracking for Intelligent Wheelchair Robots". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/70919194795178363367.

Full text source
Abstract:
Master's thesis, National Chiao Tung University, Institute of Electrical and Control Engineering, academic year 102 (2013-2014).
Recently, several robotic wheelchairs with autonomous functions have been proposed. In designing such wheelchairs, it is important to reduce the accompanist's load; for this task, the mobile robot needs to recognize and track people. In this work, we propose to utilize multisensory data fusion to track a target accompanist. First, simultaneous localization and map building is achieved recursively with an extended Kalman filter using the laser range finder (LRF) and inertial sensors. To track the target person robustly, the accompanist is tracked by fusing laser and vision data: human objects are detected by the LRF, and the identity of the accompanist is recognized by a PTZ camera from a pre-defined signature using the speeded-up robust features (SURF) algorithm. The proposed system adaptively searches for the visual signature and tracks the accompanist by dynamically zooming the PTZ camera based on the LRF detection results, enlarging the range of human following. Experimental results verify and demonstrate the performance of the proposed system.
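A sketch of the appearance-signature check; note the thesis uses SURF, which sits in opencv-contrib and is often disabled in stock builds, so ORB serves here as a freely available stand-in, and the match thresholds are assumptions:

```python
# Appearance-signature check between a stored accompanist template and
# the current PTZ frame. The thesis uses SURF (opencv-contrib, often
# disabled); ORB is a patent-free stand-in. Thresholds are assumed.
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def is_accompanist(template_gray, frame_gray, min_matches=25):
    _k1, d1 = orb.detectAndCompute(template_gray, None)
    _k2, d2 = orb.detectAndCompute(frame_gray, None)
    if d1 is None or d2 is None:
        return False
    matches = matcher.match(d1, d2)
    good = [m for m in matches if m.distance < 40]   # distance gate assumed
    return len(good) >= min_matches
```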
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Wang, Wei-Hang, and 王偉航. "Human Detection and Tracking for Embedded Mobile Robots by Integrating Laser-range-finding and Monocular Imaging". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/27759179454186109767.

Full text source
Abstract:
Master's thesis, National Tsing Hua University, Department of Electrical Engineering, academic year 100 (2011-2012).
A fundamental issue for modern service robots is human-robot interaction. To perform such a task, these robots need to detect and track people in their surroundings; robust target tracking in particular is an indispensable capability of autonomous mobile robots, making robust human detection and tracking an important research area in robotics. In this thesis, we present a system that detects and tracks people efficiently by integrating laser measurements and monocular camera images on a mobile platform. A laser-based leg detector, trained by cascaded AdaBoost on a set of geometrical features of scan segments, is used to detect human legs. A visual human detector, Range C4, is also proposed; modified from the C4 human detector by adding laser range information, it achieves a lower false positive rate than the original C4 detector. The detected legs or persons are fused and tracked via global nearest neighbour (GNN) data association and sequential Kalman filtering with a constant-velocity model: measurements are assigned to tracks by GNN, which maximizes the similarity sum, and track states are updated sequentially using the corresponding measurements. Several experiments demonstrate the robustness and efficiency of our system.
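A compact sketch of GNN association plus constant-velocity Kalman updates; maximizing the similarity sum is implemented here as Hungarian minimization of distances, and the gate, time step and noise matrices are placeholders:

```python
# GNN association plus constant-velocity Kalman updates. Maximizing the
# similarity sum is implemented as Hungarian minimization of distances;
# gate, dt and noise matrices are placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])
Q, R = 0.01 * np.eye(4), 0.05 * np.eye(2)

def associate_and_update(tracks, detections, gate=1.0):
    # predict every track (state x, covariance P) with the CV model
    tracks = [(F @ x, F @ P @ F.T + Q) for x, P in tracks]
    cost = np.array([[np.linalg.norm(H @ x - z) for z in detections]
                     for x, _ in tracks])
    for r, c in zip(*linear_sum_assignment(cost)):  # global-optimal pairing
        if cost[r, c] > gate:                        # reject implausible pairs
            continue
        x, P = tracks[r]
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        tracks[r] = (x + K @ (detections[c] - H @ x), (np.eye(4) - K @ H) @ P)
    return tracks

tracks = [(np.array([0.0, 0.0, 1.0, 0.0]), np.eye(4))]
print(associate_and_update(tracks, np.array([[0.12, 0.01]]))[0][0])
```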
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Chia, Po Chun, and 賈博鈞. "Applying Biped Humanoid Robot Technologies to Fall Down Scenario Designs and Detections for Human". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/78868669441203945307.

Full text source
Abstract:
Master's thesis, Chang Gung University, Graduate Institute of Medical Mechatronics, academic year 98 (2009-2010).
Preventing falls is an important issue for the aging society, so fall detection is crucial for healthcare systems. Fall signals are generally collected from a 3-axis accelerometer placed on the chest, and the collected signals are then analysed to develop fall detection algorithms. Nevertheless, it is not easy to collect realistic fall signals, because fall experiments may cause serious injuries; most collected fall signals are therefore conservative and cannot fully represent actual falls, since a volunteer may perform a slow fall when the signal is recorded. In particular, it is hard to collect fall signals from high-risk situations such as falls from stairs. Research on biped humanoid robots has grown rapidly in recent years, because their torso structure is similar to that of human beings. This thesis proposes a biped-humanoid-robot-based fall scenario simulation system, which constructs gait pattern libraries for different fall scenarios resembling human falls. A 3-axis accelerometer is placed on the chest of the robot to measure the fall signals. To verify the proposed approach, a motion capture system is employed to measure the fall motions, and the fall motions from the motion capture system and the fall signals from the accelerometer are recorded synchronously to verify the signal correlations between the robot and human beings. Experimental results show clear signal correlations between the biped humanoid robot and human beings for typical forward, side and backward falls. Based on this correlation, high-risk fall signals such as falls from stairs and slip falls are collected from the robot only. The proposed system may thus effectively collect fall signals for further fall detection algorithm studies.
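A minimal sketch of the correlation check between robot and human fall signatures, on synthetic accelerometer-magnitude signals (the study itself used recorded chest accelerometer data):

```python
# Correlation check between robot and human fall signatures, using
# synthetic accelerometer-magnitude signals with a similar impact peak.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 2.0, 200)
human = np.exp(-(t - 1.00) ** 2 / 0.010) + 0.05 * rng.normal(size=t.size)
robot = np.exp(-(t - 1.05) ** 2 / 0.012)   # slightly shifted impact peak

print("fall-signal correlation: %.2f" % np.corrcoef(human, robot)[0, 1])
```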
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Amaro, Bruno Filipe Viana. "Behavioural attentiveness patterns analysis – detecting distraction behaviours". Master's thesis, 2018. http://hdl.handle.net/1822/66009.

Full text source
Abstract:
The capacity to remain focused on a task can be crucial in some circumstances. In general, this ability is intrinsic to human social interaction and is naturally used in any social context. Nevertheless, some individuals have difficulty remaining concentrated on an activity, resulting in a short attention span. Children with Autism Spectrum Disorder (ASD) are a special example of such individuals. ASD is a group of complex developmental disorders of the brain; affected individuals are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already proved to encourage the development of social interaction skills lacking in children with ASD. However, most of these systems are controlled remotely and cannot adapt automatically to the situation, and even the more autonomous ones still cannot perceive whether the user is paying attention to the robot's instructions and actions. Following this trend, this dissertation is part of a research project that has been under development for some years, in which the robot ZECA (Zeno Engaging Children with Autism) from Hanson Robotics is used to promote interaction with children with ASD, helping them recognize emotions and acquire new knowledge in order to promote social interaction and communication with others. The main purpose of this dissertation is to determine whether the user is distracted during an activity; the future objective is to interface this system with ZECA so that it adapts its behaviour to the individual affective state during an emotion imitation activity. To recognize human distraction behaviours and capture the user's attention, several distraction patterns, as well as systems to detect them automatically, have been developed; one of the most used detection methods is based on measuring head pose and eye gaze. This dissertation proposes a system based on a Red Green Blue (RGB) camera, capable of detecting the distraction patterns, head pose, eye gaze, blink frequency, and the user's position relative to the camera during an activity, and then classifying the user's state with a machine learning algorithm. Finally, the proposed system is evaluated in a controlled laboratory environment to verify that it can detect the distraction patterns. The results of these preliminary tests revealed some system constraints and validated its adequacy for later use in an intervention setting.
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Lee, Ching-Wei, and 李靜微. "The Development of a Rehabilitation and Robot Imitative System Based on the Detection of Human Posture and Behavior". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/89250838527567836514.

Full text source
Abstract:
Master's thesis, Tamkang University, Department of Electrical Engineering (Master's Program), academic year 103 (2014-2015).
This thesis discusses the design and development of a rehabilitation and robot imitation system based on the detection of human posture and behaviour. The rehabilitation system uses the Kinect's identification of limb kinetics and joint movement to guide the rehabilitation process, while the robot imitation system uses the Kinect's human posture and behaviour functions to build an imitation system that lets the robot perform actions as close as possible to those of a human. For the rehabilitation system, we set up a functional block diagram for testing the rehabilitation process in hip joint movement at six angles, shoulder joint movement, knee movement, hip muscle movement and elbow movement, and then evaluate these tests. For the robot imitation system, we discuss the structure of the imitation robot, hand and foot movement control, and the design of the human-robot interface and human-computer interaction.
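A small sketch of the joint-angle computation that typically underlies such Kinect-based rehabilitation checks, from three tracked 3-D joint positions; the coordinates below are made up:

```python
# Joint angle at the middle joint between two limb segments, computed
# from three tracked 3-D joint positions. Coordinates are made up.
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    p_prox, p_joint, p_dist = map(np.asarray, (p_prox, p_joint, p_dist))
    v1, v2 = p_prox - p_joint, p_dist - p_joint
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Elbow angle from shoulder, elbow and wrist positions (metres).
print(joint_angle([0.0, 0.4, 2.0], [0.0, 0.1, 2.0], [0.25, 0.1, 1.95]))
```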
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Nair, Ashwin Sasidharan. "A HUB-CI Model for Networked Telerobotics in Collaborative Monitoring of Agricultural Greenhouses". Thesis, 2019.

Find full text source
Abstract:
Networked telerobots are operated by humans through remote interactions and have found applications in unstructured environments such as outer space, underwater, telesurgery and manufacturing. In precision agricultural robotics, target monitoring, recognition and detection is a complex task requiring expertise, and is hence performed more efficiently by collaborative human-robot systems. A HUB is an online portal, a platform to create and share scientific and advanced computing tools. HUB-CI is a similar tool, developed by the PRISM Center at Purdue University to enable cyber-augmented collaborative interactions over cyber-supported complex systems. Unlike previous HUBs, HUB-CI enables both physical and virtual collaboration between several groups of human users and relevant cyber-physical agents. This research, sponsored in part by the Binational Agricultural Research and Development Fund (BARD), implements the HUB-CI model to improve the Collaborative Intelligence (CI) of an agricultural telerobotic system for early detection of anomalies in pepper plants grown in greenhouses. Specific CI tools developed for this purpose include: (1) spectral image segmentation for detecting and mapping anomalies in growing pepper plants; (2) workflow/task administration protocols for managing and coordinating interactions between the software, hardware and human agents engaged in monitoring and detection, leading reliably to precise, responsive mitigation. These CI tools aim to minimize interaction conflicts and errors that may impede detection effectiveness and thus reduce crop quality. Simulated experiments show that planned and optimized collaborative interactions with HUB-CI (as opposed to ad-hoc interactions) yield significantly fewer errors and better detection, improving system efficiency by 210% to 255%. The anomaly detection method was tested on the available spectral image data in terms of the number of anomalous pixels for healthy plants and plants with stresses, providing statistically significant differences between the plant-health classes under ANOVA tests (p-value = 0). It thereby improves system productivity by leveraging collaboration and learning-based tools for precise monitoring of healthy pepper plant growth in greenhouses.
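A minimal sketch of the reported ANOVA check on anomalous pixel counts across plant-health classes; the numbers below are invented, not the study's data:

```python
# One-way ANOVA on counts of anomalous pixels per plant-health class.
# The sample values are invented for illustration.
from scipy import stats

healthy  = [12, 8, 15, 10, 9, 11]
stressed = [44, 51, 39, 47, 55, 42]
diseased = [88, 95, 79, 91, 84, 99]

f, p = stats.f_oneway(healthy, stressed, diseased)
print("F = %.1f, p = %.3g" % (f, p))
```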
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Melo, Davide Alexandre Lima. "Seguimento de pessoas e navegação segura para um robô de serviço". Master's thesis, 2015. http://hdl.handle.net/1822/46732.

Full text source
Abstract:
Integrated Master's dissertation in Industrial Electronics and Computer Engineering (specialization in Microtechnologies and Automation, Control and Robotics).
The aim of this work is to develop a person-following system that also allows safe navigation for a service robot designed for unknown environments. The person-following system detects people based on laser range finder data. Detection of the target person begins with a segmentation technique; the segments obtained are classified based on their geometric characteristics, and each segment is represented by a characteristic point that defines its position in the real world. Tracking of the target person is done using heuristics based on a region of interest. A following algorithm was implemented in which the robot preserves socially acceptable distances while navigating safely. Finally, the proposed person-following system was verified through simulations in various conditions and proved robust in following people, even in the case of momentary occlusions.
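A minimal sketch of the scan-segmentation pipeline this abstract describes: split the laser scan at range discontinuities, classify segments by a leg-plausible width, and keep one characteristic point per segment. The jump and width thresholds are assumptions:

```python
# LRF leg detection sketch: split the scan at range discontinuities,
# keep segments with a leg-plausible chord width, and return one
# characteristic point per segment. Thresholds are assumptions.
import numpy as np

def detect_legs(ranges, angles, jump=0.15, min_w=0.05, max_w=0.25):
    xy = np.c_[ranges * np.cos(angles), ranges * np.sin(angles)]
    breaks = np.where(np.abs(np.diff(ranges)) > jump)[0] + 1
    legs = []
    for seg in np.split(xy, breaks):
        if len(seg) < 3:
            continue
        width = np.linalg.norm(seg[-1] - seg[0])   # chord as segment width
        if min_w < width < max_w:
            legs.append(seg.mean(axis=0))          # characteristic point
    return legs

angles = np.linspace(-1.0, 1.0, 400)
ranges = np.full(400, 3.0)
ranges[180:200] = 2.0                              # a leg-like blob
print(detect_legs(ranges, angles))
```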
Styles: APA, Harvard, Vancouver, ISO, etc.
