Dissertations / Theses on the topic 'Robust Human Detection'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 48 dissertations / theses for your research on the topic 'Robust Human Detection.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Li, Ying. "Efficient and Robust Video Understanding for Human-robot Interaction and Detection." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654.
Leu, Adrian [Verfasser]. "Robust Real-time Vision-based Human Detection and Tracking / Adrian Leu." Aachen : Shaker, 2014. http://d-nb.info/1060622432/34.
Leu, Adrian [Verfasser], Axel [Akademischer Betreuer] Gräser, and Udo [Akademischer Betreuer] Frese. "Robust Real-time Vision-based Human Detection and Tracking / Adrian Leu. Gutachter: Udo Frese. Betreuer: Axel Gräser." Bremen : Staats- und Universitätsbibliothek Bremen, 2014. http://d-nb.info/1072226340/34.
Terzi, Matteo. "Learning interpretable representations for classification, anomaly detection, human gesture and action recognition." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.
Zhu, Youding. "Model-Based Human Pose Estimation with Spatio-Temporal Inferencing." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1242752509.
Tasaki, Tsuyoshi. "People Detection based on Points Tracked by an Omnidirectional Camera and Interaction Distance for Service Robots System." 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/180473.
Yi, Fei. "Robust eye coding mechanisms in humans during face detection." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/31011/.
Alanenpää, Madelene. "Gaze detection in human-robot interaction." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.
Antonucci, Alessandro. "Socially aware robot navigation." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/356142.
Briquet-Kerestedjian, Nolwenn. "Impact detection and classification for safe physical Human-Robot Interaction under uncertainties." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC038/document.
The present thesis aims to develop an efficient strategy for impact detection and classification in the presence of modeling uncertainties of the robot and its environment, using a minimum number of sensors and in particular in the absence of a force/torque sensor. The first part of the thesis deals with the detection of an impact that can occur at any location along the robot arm and at any moment during the robot trajectory. Impact detection methods are commonly based on a dynamic model of the system, making them subject to a trade-off between detection sensitivity and robustness to modeling uncertainties. In this respect, a quantitative methodology has first been developed to make explicit the contribution of the errors induced by model uncertainties. This methodology has been applied to various detection strategies, based either on a direct estimate of the external torque or on disturbance observers, in the perfectly rigid case and in the elastic-joint case. A comparison of the type and structure of the errors involved and their consequences on impact detection has been deduced. In a second step, novel impact detection strategies have been designed: the dynamic effects of the impacts are isolated by determining the maximal error range due to modeling uncertainties using a stochastic approach. Once the impact has been detected, and in order to trigger the most appropriate post-impact robot reaction, the second part of the thesis focuses on the classification step. In particular, the distinction between an intentional contact (the human operator intentionally interacts with the robot, for example to reconfigure the task) and an undesired contact (a human subject accidentally runs into the robot), as well as the localization of the contact on the robot, is investigated using supervised learning techniques, more specifically feedforward neural networks. The challenge of generalizing to several human subjects and robot trajectories has also been investigated.
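For readers wanting a concrete picture of the observer-based residual idea this abstract refers to, here is a minimal sketch of a discrete-time generalized-momentum observer with a fixed detection threshold. The gain, time step, and threshold are illustrative assumptions, not the thesis's tuned values, and the uncertainty-derived threshold of the thesis is replaced by a constant.

```python
import numpy as np

class MomentumObserver:
    """Discrete-time generalized-momentum residual; r approximates the external torque."""
    def __init__(self, n_joints, gain=20.0, dt=0.001):
        self.K = gain * np.eye(n_joints)   # observer gain (illustrative)
        self.dt = dt
        self.integral = np.zeros(n_joints)
        self.r = np.zeros(n_joints)

    def step(self, M, C, g, q_dot, tau_cmd):
        # p = M(q) q_dot ;  r = K * (p - integral of (tau_cmd + C^T q_dot - g + r) dt)
        p = M @ q_dot
        self.integral += (tau_cmd + C.T @ q_dot - g + self.r) * self.dt
        self.r = self.K @ (p - self.integral)
        return self.r

def impact_detected(residual, threshold=2.0):
    # In the thesis the threshold comes from the modeling-uncertainty analysis; fixed here.
    return bool(np.any(np.abs(residual) > threshold))
```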
Linder, Timm [Verfasser], and Kai O. [Akademischer Betreuer] Arras. "Multi-modal human detection, tracking and analysis for robots in crowded environments." Freiburg : Universität, 2020. http://d-nb.info/1228786798/34.
Zhang, Yan. "Low-Cost, Real-Time Face Detection, Tracking and Recognition for Human-Robot Interactions." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1307548707.
Banerjee, Nandan. "Human Supervised Semi-Autonomous Approach for the DARPA Robotics Challenge Door Task." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/584.
Mazhar, Osama. "Vision-based human gestures recognition for human-robot interaction." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS044.
In the light of the factories of the future, to ensure productive, safe, and effective interaction between robots and human coworkers, it is imperative that the robot extracts the essential information about its coworker. To address this, deep learning solutions are explored and a reliable human gesture detection framework is developed in this work. Our framework is able to robustly detect static hand gestures as well as upper-body dynamic gestures. For static hand gesture detection, OpenPose is integrated with the Kinect V2 to obtain a pseudo-3D human skeleton. With the help of 10 volunteers, we recorded an image dataset, opensign, that contains Kinect V2 RGB and depth images of 10 alphanumeric static hand gestures taken from American Sign Language. The Inception V3 neural network is adapted and trained to detect static hand gestures in real time. Subsequently, we extend our gesture detection framework to recognize upper-body dynamic gestures. A spatial-attention-based dynamic gesture detection strategy is proposed that employs a multi-modal Convolutional Neural Network - Long Short-Term Memory deep network to extract spatio-temporal dependencies from pure RGB video sequences. The exploited convolutional neural network blocks are pre-trained on our static hand gestures dataset opensign, which allows efficient extraction of hand features. Our spatial attention module focuses on large-scale movements of the upper limbs as well as on hand images for subtle hand/finger movements, to efficiently distinguish gesture classes. This module additionally exploits the 2D upper-body pose to estimate the distance of the user from the sensor for scale normalization and to determine the parameters of the hand bounding boxes without the need for a depth sensor. The information typically extracted from a depth camera in similar strategies is learned from the opensign dataset, so the proposed gesture recognition strategy can be implemented on any system with a monocular camera. Afterwards, we briefly explore 3D human pose estimation strategies for monocular cameras. To estimate the 3D human pose, a hybrid strategy is proposed that combines the merits of discriminative 2D pose estimators with those of model-based generative approaches. Our method optimizes an objective function that minimizes the discrepancy between the position- and scale-normalized 2D pose obtained from OpenPose and a virtual 2D projection of a kinematic human model. For real-time human-robot interaction, an asynchronous distributed system is developed to integrate our static hand gesture detector module with the open-source physical human-robot interaction library OpenPHRI. We validate the performance of the proposed framework through a teach-by-demonstration experiment with a robotic manipulator.
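To make the CNN-LSTM pattern named in this abstract concrete, here is a minimal PyTorch sketch of a per-frame CNN feeding an LSTM for clip-level gesture classification. The layer sizes are arbitrary assumptions, and the spatial attention module and multi-modal inputs of the thesis are omitted.

```python
import torch
import torch.nn as nn

class CNNLSTMGesture(nn.Module):
    """Per-frame CNN features fed to an LSTM; returns clip-level gesture logits."""
    def __init__(self, n_classes=10, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                          # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        f = self.cnn(clips.flatten(0, 1)).flatten(1)   # (B*T, 64) frame features
        f = self.proj(f).view(b, t, -1)                # (B, T, feat_dim)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])                   # classify from the last time step

logits = CNNLSTMGesture()(torch.randn(2, 16, 3, 64, 64))   # (2, n_classes)
```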
Sahindal, Boran. "Detecting Conversational Failures in Task-Oriented Human-Robot Interactions." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272135.
In conversations between humans, it is not only the content of utterances but also our social signals that contribute to the state of the communication. In the field of human-robot interaction, we similarly want robots to be able to interpret social signals given by humans. Such social signals can be exploited to detect unexpected robot behaviours. This thesis aims to compare machine-learning-based methods for investigating robots' recognition of their own unexpected behaviours based on human social signals. We trained SVM, Random Forest, and Logistic Regression classifiers on a controlled human-robot interaction corpus that includes scripted robot failures. We created features based on gaze, motion, and facial expressions. We defined data points with different window lengths and compared the effects of different robot embodiments. The results show that there is promising potential in this field and that the accuracy of this classification task depends on several variables that require careful tuning.
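A minimal scikit-learn sketch of the three-classifier comparison described above, with random placeholder arrays standing in for the windowed gaze/motion/facial-expression features (the corpus itself is not available here):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))       # placeholder windowed social-signal features
y = rng.integers(0, 2, size=200)     # 1 = window overlapping a scripted robot failure

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```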
Birchmore, Frederick Christopher. "A holistic approach to human presence detection on man-portable military ground robots." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p1464660.
Title from first page of PDF file (viewed July 2, 2009). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 85-90).
Alhusin, Alkhdur Abdullah. "Toward a Sustainable Human-Robot Collaborative Production Environment." Doctoral thesis, KTH, Industriell produktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-202388.
Lirussi, Igor. "Human-Robot interaction with low computational-power humanoids." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19120/.
Taqi, Sarah M. A. M. "Reproduction of Observed Trajectories Using a Two-Link Robot." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308031627.
Kit, Julian Chua Ying. "The human-machine interface (HMI) and re-bar detection aspects of a non-destructive testing (NDT) robot." Thesis, City University London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245862.
Angeloni, Fabio. "Collision Detection for Industrial Applications." Doctoral thesis, Università degli studi di Bergamo, 2017. http://hdl.handle.net/10446/77107.
Carraro, Marco. "Real-time RGB-Depth perception of humans for robots and camera networks." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3426800.
This thesis deals with perception for autonomous robots and camera networks from RGB-Depth data. The goal is to provide robust and efficient algorithms for interacting with people; for this reason, particular attention has been devoted to efficient solutions that can run in real time on consumer computers and graphics cards. The main contribution of this work concerns the automatic estimation of the 3D body pose of the people in a scene. Two algorithms are proposed that exploit the RGB-Depth data stream from a camera network, improving the state of the art both when considering single-camera data and when using all available cameras. The second algorithm achieves better results, as it estimates the pose of all people in the scene with negligible overhead and does not require synchronization between the network nodes; however, the first method uses only point clouds, which are also available in poorly lit environments where the second algorithm would not achieve the same results. The second contribution concerns long-term person re-identification in camera networks. This problem is particularly difficult, since features based on colour or clothing cannot be relied upon if recognition is to work days later. A framework is proposed that exploits face recognition using a Convolutional Neural Network and a Bayesian classification system: whenever the people-tracking system generates a new track, the person's face is analysed and, in case of a match, the old ID is reassigned. The third contribution concerns Ambient Assisted Living: we proposed and implemented an assistance robot whose task is to periodically patrol a known environment, reporting unusual events such as people lying on the ground. To this end, we developed a fast and robust approach that also works in the dark and was validated using a new RGB-Depth dataset recorded on board the robot. With the aim of advancing research in these fields and providing the greatest possible benefit to the robotics and computer vision communities, as an additional contribution of this work we released most of the software implementations of the algorithms described here under open-source licences.
Kim, Ui-Hyun. "Improvement of Sound Source Localization for a Binaural Robot of Spherical Head with Pinnae." 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/180475.
Dumora, Julie. "Contribution à l’interaction physique homme-robot : application à la comanipulation d’objets de grandes dimensions." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20030/document.
Collaborative robotics aims at physically assisting humans in their daily tasks. The system comprises two partners with complementary strengths: physical for the robot and cognitive for the operator. This combination enables new scenarios of application, such as the accomplishment of difficult-to-automate tasks. In this thesis, we are interested in assisting the human operator in manipulating bulky parts while the robot has no prior knowledge of the environment or the task. Handling such parts is a daily activity in many areas and is a complex and critical issue. We propose a new assistance strategy to tackle the problem of simultaneously controlling both the grasping point of the operator and that of the robot. Task responsibilities are allocated to the robot and the operator according to their relative strengths: while the operator decides the plan and applies the driving force, the robot detects the operator's intention of motion and constrains the degrees of freedom that are useless for performing the intended motion. This way, the operator does not have to control all the degrees of freedom simultaneously. The scientific issues we deal with are split into three main parts: assistive control, haptic channel analysis, and learning during the interaction. The strategy is based on a unified framework for assistance specification, robot control, and intention detection. It is a modular approach that can be applied with any low-level robot control architecture. We highlight its interest through a variety of tasks completed with two robotic platforms: an industrial manipulator arm and a biped humanoid robot.
Reynaga, Barba Valeria. "Detecting Changes During the Manipulation of an Object Jointly Held by Humans and RobotsDetektera skillnader under manipulationen av ett objekt som gemensamt hålls av människor och robotar." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174027.
Wåhlin, Peter. "Enhancing the Human-Team Awareness of a Robot." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-16371.
The use of autonomous robots in our society is increasing every day, and a robot is no longer seen as a tool but as a team member. Robots now work side by side with us and support us during dangerous work where humans would otherwise be at risk. This development has in turn increased the need for robots with greater human awareness. The goal of this master's thesis is therefore to contribute to enhancing the human awareness of robots. Specifically, we investigate the possibilities of equipping autonomous robots with the ability to assess and detect different behaviours in human teams. This ability could, for example, be used in the robot's reasoning and planning to make decisions and in turn improve human-robot collaboration. We propose improving existing activity recognizers by adding the ability to interpret intangible human behaviours such as stress, motivation, and focus. Being able to distinguish team activities within a human team is fundamental for a robot that is to support that team. Hidden Markov models have previously proven very effective for activity recognition and have therefore been used in this work. For a robot to provide effective support to a human team, it must take into account not only the spatial parameters of the team members but also the psychological ones. To interpret psychological parameters in humans, this master's thesis advocates the use of human physiological signals, such as heart rate and skin conductance. Combined with these body signals, we demonstrate the possibility of using system dynamics models to interpret intangible behaviours, which in turn can strengthen the human awareness of a robot.
The thesis work was conducted in Kista, Stockholm, at the Department of Informatics and Aero Systems of the Swedish Defence Research Agency.
Iacob, David-Octavian. "Détection du mensonge dans le cadre des interactions homme-robot à l'aide de capteurs et dispositifs non invasifs et mini invasifs." Thesis, Institut polytechnique de Paris, 2019. http://www.theses.fr/2019IPPAE004.
Social Robotics focuses on improving the ability of robots to interact with humans, including the capacity to understand their human interlocutors. When endowed with such capabilities, social robots can be useful to their users in a large variety of contexts: as guides, play partners, home assistants, or, most importantly, when used for therapeutic purposes. Socially Assistive Robots (SARs) aim to improve the quality of life of their users by means of social interactions. Vulnerable populations of users, such as people requiring rehabilitation, therapy, or permanent assistance, benefit the most from the aid of SARs. One of the responsibilities of such robots is to make sure their users respect their therapeutic and medical recommendations, and human users are not always cooperative. As has been observed in previous studies, humans sometimes deceive their robot caretakers in order to avoid following their recommendations; the former thereby end up deteriorating their medical condition and render the latter incapable of fulfilling their duties. Therefore, SARs and especially their users would benefit if robots were able to detect deception in Human-Robot Interaction (HRI). This thesis explores the physiological and behavioural manifestations and cues associated with deception in HRI, based on previous research on inter-human interactions. As we consider it highly important not to impair the quality of the interaction in any way, our work focuses on the evaluation of these manifestations by means of noninvasive and minimally invasive devices, such as RGB, RGB-D, and thermal cameras as well as wearable sensors. To this end, we designed a series of in-the-wild interaction scenarios during which participants are enticed to lie. During these experiments, we monitored the participants' heart rate, respiratory rate, skin temperature, skin conductance, eye openness, head position and orientation, and response time to questions using noninvasive and minimally invasive devices and sensors. We attempted to correlate the variations of these parameters with the veracity of the participants' answers and statements. Moreover, we studied the impact of the nature of the interlocutor (human or robot) on the participants' manifestations. We believe that this thesis and our results represent a major step towards the development of robots that are able to establish the honesty and trustworthiness of their interlocutors, thus improving the quality of HRI and the ability of SARs to perform their duties and improve the quality of life of their users.
Malik, Muhammad Usman. "Learning multimodal interaction models in mixed societies: A novel focus encoding scheme for addressee detection in multiparty interaction using machine learning algorithms." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMIR18.
Human-Agent Interaction and Machine Learning are two different research domains. Human-agent interaction refers to techniques and concepts involved in developing smart agents, such as robots or virtual agents, capable of seamless interaction with humans to achieve a common goal. Machine learning, on the other hand, exploits statistical algorithms to learn data patterns. The proposed research work lies at the crossroads of these two research areas. Human interactions involve multiple modalities, which can be verbal, such as speech and text, as well as non-verbal, i.e. facial expressions, gaze, head and hand gestures, etc. To mimic real-time human-human interaction within human-agent interaction, multiple interaction modalities can be exploited. With the availability of multimodal human-human and human-agent interaction corpora, machine learning techniques can be used to develop various interrelated human-agent interaction models. In this regard, our research work proposes original models for addressee detection, turn change and next-speaker prediction, and visual focus of attention behaviour generation in multiparty interaction. Our addressee detection model predicts the addressee of an utterance during interactions involving more than two participants. Addressee detection is tackled as a supervised multiclass machine learning problem, and various machine learning algorithms are trained to develop detection models; the results show that the proposed algorithms outperform a baseline. The second model concerns turn change and next-speaker prediction in multiparty interaction. Turn change prediction is modelled as a binary classification problem, whereas next-speaker prediction is treated as a multiclass classification problem. Machine learning algorithms are trained to solve these two interrelated problems, and the results show that the proposed models outperform the baselines. Finally, the third proposed model concerns the visual focus of attention (VFOA) behaviour generation problem for both speakers and listeners in multiparty interaction. This model is divided into several sub-models that are trained via machine learning as well as heuristic techniques, and the results show that our proposed systems yield better performance than baseline models developed via random and rule-based approaches. The proposed VFOA behaviour generation model is currently implemented as a series of four modules to create different interaction scenarios between multiple virtual agents. For evaluation, recorded videos of the VFOA generation models for speakers and listeners are presented to users, who rate the baseline, the real VFOA behaviour, and the proposed VFOA models on various naturalness criteria. The results show that the VFOA behaviour generated by the proposed model is perceived as more natural than the baselines and as equally natural as real VFOA behaviour.
Lohan, Katrin Solveig [Verfasser]. "A model of contingency detection to spot tutoring behavior and respond to ostensive cues in human-robot-interaction / Katrin Solveig Lohan. Technische Fakultät." Bielefeld : Universitätsbibliothek Bielefeld, Hochschulschriften, 2013. http://d-nb.info/1032453990/34.
Brèthes, Ludovic. "Suivi visuel par filtrage particulaire : application à l'interaction Homme-robot." Toulouse 3, 2005. http://www.theses.fr/2005TOU30282.
This thesis focuses on the detection and tracking of people, as well as the recognition of elementary gestures, from the video stream of a colour camera embedded on the robot. Particle filtering, well suited to this context, enables a straightforward combination/fusion of several measurement cues. We propose various filtering strategies in which visual information such as shape, colour, and motion is taken into account in the importance function and the measurement model. We compare and evaluate these filtering strategies in order to show which combination of visual cues and particle filtering algorithm is most suitable for the interaction modalities that we consider for our tour robot. Our last contribution relates to the recognition of the symbolic gestures used to communicate with the robot. An efficient particle filtering strategy is proposed to track the hand and simultaneously recognize its configuration and gesture dynamics in the video stream.
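As a toy illustration of the multi-cue SIR idea (not the thesis's implementation), the sketch below fuses two likelihoods multiplicatively inside one predict-weight-resample step. The random-walk dynamics and the Gaussian cue functions are placeholder assumptions.

```python
import numpy as np

def sir_step(particles, observe_color, observe_motion, sigma_dyn=5.0, rng=np.random.default_rng()):
    """One SIR iteration on 2D image positions: diffuse, weight by fused cues, resample."""
    n = len(particles)
    particles = particles + rng.normal(0.0, sigma_dyn, particles.shape)  # random-walk dynamics
    w = observe_color(particles) * observe_motion(particles)             # multiplicative cue fusion
    w /= w.sum() + 1e-12
    return particles[rng.choice(n, n, p=w)]                              # resampled, equal weights

# Toy cues: Gaussian likelihoods around a "true" target at (50, 50).
target = np.array([50.0, 50.0])
like = lambda p: np.exp(-np.sum((p - target) ** 2, axis=1) / (2 * 15.0 ** 2))
pts = np.random.default_rng(1).uniform(0, 100, (500, 2))
for _ in range(10):
    pts = sir_step(pts, like, like)
print(pts.mean(axis=0))   # converges near (50, 50)
```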
Souroulla, Timotheos. "Distributed Intelligence for Multi-Robot Environment : Model Compression for Mobile Devices with Constrained Computing Resources." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302151.
Human-Robot Collaboration (HRC), where both humans and robots work simultaneously in the same environment, is a growing research area that has expanded dramatically over the last decade. For this collaboration to be possible and safe, robots need to undergo a proper safety analysis so that dangerous situations can be avoided. This safety analysis includes complex computer vision tasks that require substantial processing power. Robots with limited computing resources therefore cannot execute these computations without delay and must instead rely on external infrastructure to execute them. On some occasions, however, this external infrastructure may not be in place or may be hard to connect to. Even then, robots must still be able to navigate through a site while maintaining a high degree of safety. This project focuses on reducing the complexity and the total number of parameters of pre-trained computer vision models by using model compression techniques such as pruning and knowledge distillation. These techniques have strong theoretical foundations and practical evidence, but work on their combined effect is limited, and that combination is therefore investigated in this work. The results of this project show that up to 90% of the total number of parameters of a computer vision model can be removed without any noticeable degradation of the model's accuracy.
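For concreteness, here is a minimal PyTorch sketch of the soft-target knowledge-distillation loss commonly combined with pruning; the temperature and mixing weight are illustrative, and the thesis's exact recipe may differ. Pruning itself could be applied beforehand with, e.g., torch.nn.utils.prune, followed by fine-tuning with this loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend KL divergence on temperature-softened outputs with cross-entropy on hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # T^2 rescaling keeps gradients comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(8, 10), torch.randn(8, 10), torch.randint(0, 10, (8,)))
```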
Velor, Tosan. "A Low-Cost Social Companion Robot for Children with Autism Spectrum Disorder." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41428.
Full textReseco, Bato Miguel. "Nouvelle méthodologie générique permettant d’obtenir la probabilité de détection (POD) robuste en service avec couplage expérimental et numérique du contrôle non destructif (CND)." Thesis, Toulouse, ISAE, 2019. http://www.theses.fr/2019ESAE0014/document.
The performance assessment of non-destructive testing (NDT) procedures in aeronautics is a key step in the preparation of an aircraft's certification document. Such a demonstration of performance is carried out by establishing Probability of Detection (POD) curves integrating all sources of uncertainty inherent in the implementation of the procedure. These uncertainties are due to human and environmental factors in in-service maintenance tasks. To establish these POD curves experimentally, it is necessary to have data covering a wide range of operator skills, defect types and locations, material types, test protocols, etc. Obtaining these data entails high costs and significant delays for the aircraft manufacturer. The scope of this thesis is to define a robust methodology for building POD curves from numerical modelling. POD robustness is ensured by integrating the uncertainties through statistical distributions derived from experimental data or engineering judgment. Applications are provided on beta titanium using the high-frequency eddy current (HFEC) NDT technique. First, an experimental database is created from three environments: the laboratory, an A321 aircraft, and an A400M aircraft. A representative sample of operators with different certification levels in the NDT technique is employed, and multiple inspection scenarios are carried out to analyse the human and environmental factors. The study also takes into account the impact of using different equipment in the HFEC test. This database is subsequently used to build statistical distributions, which serve as the input data of the inspection simulation models. These simulations are implemented with the CIVA software, into which a POD module based on the Monte Carlo method is integrated. This module is applied to address human and ergonomic influences on POD and helps us better understand the impact of equipment on POD curves. Finally, the POD model is compared with and validated against the experimental results.
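A minimal numerical sketch of a Monte Carlo POD computation under the classic log-linear signal-response ("â versus a") model; all parameter values below are invented for illustration and are unrelated to the CIVA study.

```python
import numpy as np
from scipy.stats import norm

b0, b1, sigma = 0.5, 1.1, 0.35       # ln(ahat) = b0 + b1*ln(a) + eps, values invented
ln_th = 0.0                           # detection when the response exceeds the threshold
a = np.linspace(0.1, 5.0, 200)        # flaw size
eps = np.random.default_rng(0).normal(0.0, sigma, (100_000, 1))
pod_mc = (b0 + b1 * np.log(a) + eps > ln_th).mean(axis=0)     # Monte Carlo POD(a)
pod_cf = norm.cdf((b0 + b1 * np.log(a) - ln_th) / sigma)      # closed form, sanity check
a90 = a[np.argmax(pod_mc >= 0.90)]                            # smallest size with POD >= 90%
print(f"a90 ~= {a90:.2f}; max |MC - closed form| = {np.abs(pod_mc - pod_cf).max():.4f}")
```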
Choudhury, Subradeb. "Robust Human Detection for Surveillance." Thesis, 2015. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14385.
Full textLin, You-Rong, and 林佑融. "A Robust Fall Detection Scheme Using Human Shadow and SVM Classifiers." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/12696932794927637870.
National Taiwan University of Science and Technology, Department of Electronic Engineering, ROC academic year 100 (2011-12).
We present a novel real-time video-based human fall detection system in this thesis. Because the system is based on a combination of shadow-based features and various human postures, it can distinguish between fall-down and fall-like incidents with a high degree of accuracy. To support effective operation from different viewpoints, we propose a new feature called virtual height that estimates body height without 3D model reconstruction; as a result, the model has low computational complexity. Our experimental results demonstrate that the proposed system achieves a high detection rate and a low false-alarm rate.
Zeng, Hong-Bo (曾泓博). "Robust Vision-based Multiple Fingertip Detection and Human Computer Interface Application." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/05658811497165632062.
National Central University, Institute of Computer Science and Information Engineering, ROC academic year 100 (2011-12).
Intuitive and easy-to-use interfaces are very important to a successful product. With the development of technology, gesture-based machine interfaces have gradually become a trend to replace traditional input devices, and for such interfaces multiple fingertip detection is a crucial step. Studies of multiple fingertip detection fall into two main categories: wearable and markerless. In the former, users need to wear additional equipment to facilitate fingertip detection. Considering the inconvenience and hygiene problems of wearable equipment, the latter requires no additional equipment to obtain the hand regions or fingertip positions. This thesis presents a markerless method: we use only a single camera to capture images and locate the fingertips accurately in them. Many markerless approaches restrict their experimental environment or gesture definitions; in addition, some use contour and distance-to-centroid information to locate fingertips. Most of these methods assume that only hand regions are in the scene and do not consider the problems that arise when the arms are also visible. In this thesis, we propose a multiple fingertip detection algorithm based on the likelihood value of contour and curvature information, augmented with width data, which is more robust and flexible. Finally, we implement a human-computer interface system using predefined gestures.
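As a rough illustration of contour-and-curvature fingertip detection (not this thesis's likelihood formulation), here is a minimal OpenCV sketch using the common k-curvature test plus a centroid-based convexity check; the neighbourhood size and angle threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def fingertips_from_mask(mask, k=25, angle_deg=60):
    """k-curvature fingertip candidates on the largest contour of a binary hand mask."""
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not cnts:
        return []
    c = max(cnts, key=cv2.contourArea).squeeze(1).astype(float)   # (N, 2) contour points
    centroid = c.mean(axis=0)
    tips = []
    for i in range(len(c)):
        p, a, b = c[i], c[i - k], c[(i + k) % len(c)]
        v1, v2 = a - p, b - p
        cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        sharp = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) < angle_deg
        convex = np.linalg.norm(p - centroid) > np.linalg.norm((a + b) / 2 - centroid)
        if sharp and convex:    # sharp AND pointing away from the palm (rejects finger valleys)
            tips.append(tuple(p.astype(int)))
    return tips
```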
Liu, Yuan-Ming (劉原銘). "A Robust Image Descriptor for Human Detection Based on HoG and Weber's Law." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/71816414716758896469.
National Dong Hwa University, Department of Computer Science and Information Engineering, ROC academic year 99 (2010-11).
Human detection is essential for many applications, such as surveillance and smart cars. However, detecting humans in images or videos is a challenging task because of variable appearance and background clutter, factors that significantly affect human shape. In recent years, researchers have therefore sought more discriminative descriptors to improve the performance of human detection. In this thesis, a robust descriptor based on HoG and Weber's Law is proposed: it concatenates U-HoG with a histogram of Weber's constant. Weber's constant offers robustness to noise and good edge detection. Because the many weak edges in a cluttered background degrade the detection result, the proposed method uses Weber's constant to remove them: if a pixel lies on a weak edge, it is ignored when computing the feature. The proposed descriptor thus inherits the advantages of Weber's Law. Simulation results show that it outperforms the comparative methods and is more robust to Gaussian white noise than U-HoG.
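For readers unfamiliar with Weber's-Law features, here is a minimal sketch of the standard WLD differential excitation on a 3x3 neighbourhood, with a simple magnitude threshold standing in for the weak-edge rejection idea; the threshold value is an illustrative assumption, not the thesis's.

```python
import numpy as np
from scipy.ndimage import convolve

def weber_excitation(gray):
    """WLD differential excitation: xi = arctan( sum_i (x_i - x_c) / x_c ) over a 3x3 window."""
    x = gray.astype(np.float64) + 1e-6                       # avoid division by zero
    k = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], float)  # neighbours minus 8x centre
    return np.arctan(convolve(x, k) / x)

gray = np.random.default_rng(0).integers(0, 256, (128, 64)).astype(np.uint8)
xi = weber_excitation(gray)
weak_edge = np.abs(xi) < 0.2   # pixels the descriptor would ignore (illustrative threshold)
```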
Carvalho, Daniel Chichorro de. "Development of a Facial Feature Detection and Tracking Framework for Robust Gaze Estimation." Master's thesis, 2018. http://hdl.handle.net/10316/86556.
Remote gaze estimation is the process of attempting to find the gaze direction, or ultimately the point of fixation, of a human subject from remotely captured image information, much like humans do to engage in communication. This is achieved by using information from the head pose and the geometry of facial feature landmarks, and it is useful for a number of applications, notably human-machine interfacing. However, facial feature detection and tracking remains one of the most important foci of Computer Vision, and no general-purpose facial feature detector and tracker exhibits satisfactory robustness under in-the-wild conditions. Most approaches towards this goal rely on high-resolution scenarios and essentially near-frontal head poses, and no work was found accounting for both near-frontal and profile views during tracking. This work proposes a proof of concept that attempts to achieve better performance on distant subjects while incorporating tracking of facial features in profile views. The designed method is based on comparing the shape geometry obtained from independent feature detectors to different shape models of the human face, capturing different head poses. Tracking is performed with the KLT algorithm, and partial or full re-detections are triggered by a confidence-based design. A proof of concept is presented showing that the proposed solution satisfactorily addresses the main functional requirements. Future work is suggested to further improve performance and address remaining issues.
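A minimal OpenCV sketch of the KLT-tracking-with-fallback pattern described above; the confidence rule (fraction of surviving points) and the re-detection via goodFeaturesToTrack are simplifying assumptions, not the thesis's detector or model comparison.

```python
import cv2
import numpy as np

def track_klt(prev_gray, gray, pts, min_survival=0.6):
    """Propagate points with pyramidal Lucas-Kanade; re-detect when too few survive."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, winSize=(21, 21))
    ok = status.ravel() == 1
    if ok.mean() < min_survival:   # confidence too low: fall back to detection
        nxt = cv2.goodFeaturesToTrack(gray, maxCorners=68, qualityLevel=0.01, minDistance=5)
        return nxt, True
    return nxt[ok].reshape(-1, 1, 2), False

# Toy usage on two synthetic frames with a shifted blob.
prev = np.zeros((120, 160), np.uint8); cv2.circle(prev, (60, 60), 20, 255, -1)
curr = np.zeros((120, 160), np.uint8); cv2.circle(curr, (64, 62), 20, 255, -1)
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=30, qualityLevel=0.01, minDistance=5)
p1, redetected = track_klt(prev, curr, p0)
```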
Ong, Kai-Siang (王祈翔). "Sensor Fusion Based Human Detection and Tracking System for Human-Robot Interaction." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/14286966706841584842.
National Taiwan University, Graduate Institute of Electrical Engineering, ROC academic year 100 (2011-12).
Service robots have received enormous attention with the rapid development of technology in recent years, and they are endowed with the capability of interacting with people through human-robot interaction (HRI). For this purpose, the Sampling Importance Resampling (SIR) particle filter is adopted to implement a laser- and vision-based human tracking system for HRI in real-world environments. The sequence of images and the geometric information are provided by the vision sensor and the laser range finder (LRF), respectively. We construct a sensor fusion system that integrates the information from both sensors using a data association approach, Covariance Intersection (CI), which increases the robustness and reliability of human tracking in the real-world environment. In this thesis, we propose a behaviour system that analyses human features and classifies behaviour based on the information from the sensor fusion system. The system is used to infer human behavioural intentions and allows the robot to interact more naturally and intelligently. We apply a spatial model based on proxemics rules to our robot and design a behavioural intention inference strategy; the robot then reacts according to the identified behavioural intention. The work concludes with several experiments with a robot in an indoor environment, in which promising performance was observed.
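Covariance Intersection has a compact standard form, so a minimal sketch may help: the fused information matrix is a convex combination of the two inverse covariances, with the weight chosen here to minimise the determinant of the fused covariance. The optimisation criterion and example values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(xa, Pa, xb, Pb):
    """CI fusion of two consistent estimates with unknown cross-correlation."""
    Ia, Ib = np.linalg.inv(Pa), np.linalg.inv(Pb)
    det_fused = lambda w: np.linalg.det(np.linalg.inv(w * Ia + (1 - w) * Ib))
    w = minimize_scalar(det_fused, bounds=(1e-6, 1 - 1e-6), method="bounded").x
    P = np.linalg.inv(w * Ia + (1 - w) * Ib)
    x = P @ (w * Ia @ xa + (1 - w) * Ib @ xb)
    return x, P

# Example: laser estimate (good in x) fused with vision estimate (good in y).
x, P = covariance_intersection(
    np.array([1.0, 2.0]), np.diag([0.5, 2.0]),
    np.array([1.2, 1.8]), np.diag([2.0, 0.5]),
)
```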
Schroeder, Kyle Anthony. "Requirements for effective collision detection on industrial serial manipulators." 2013. http://hdl.handle.net/2152/21585.
Lin, Chun Yi (林峻翊). "A service robot with Human Face Detection, Tracking and Gender Recognition." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/32699358169175211690.
Chang Gung University, Department of Electrical Engineering, ROC academic year 101 (2012-13).
In this study, an automated guided vehicle (AGV) service robot system with a gender recognition function is proposed. The system can detect and track a pedestrian in front of the AGV; moreover, the gender of the target can be recognized, so the AGV can provide different information according to the target's gender. The proposed system is divided into two parts: pedestrian detection and tracking by the AGV robot, and gender recognition. In the pedestrian detection and tracking part, the AdaBoost algorithm is used to detect the target, and the Kanade-Lucas-Tomasi feature tracker is then applied to track the target's position. A Microsoft Kinect sensor is also utilized as an auxiliary device to measure the distance to the target and to determine whether the AGV should stop. In the gender recognition part, the proposed system combines multi-scale Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) for feature extraction, and Particle Swarm Optimization with Support Vector Machines (PSO-SVM) is chosen as the classification algorithm. After feature extraction, a t-test is used to calculate a p-value for each feature to determine whether it differs significantly between the male and female categories. We then train the SVM model with the significant features selected by the p-values. Experimental results show that the classification accuracy reaches 92.6% while only 30% of the original features are used.
Tsou, Tai-Yu (鄒岱佑). "Multisensor Fusion Based Large Range Human Detection and Tracking for Intelligent Wheelchair Robots." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/70919194795178363367.
National Chiao Tung University, Institute of Electrical and Control Engineering, ROC academic year 102 (2013-14).
Recently, several robotic wheelchairs with autonomous functions have been proposed. In designing such wheelchairs, it is important to reduce the load on the accompanist, and to perform this task the mobile robot needs to recognize and track people. In this thesis, we propose to utilize multisensor data fusion to track a target accompanist. First, simultaneous localization and map building is achieved recursively with an extended Kalman filter using the laser range finder (LRF) and inertial sensors. To track the target person robustly, the accompanist is tracked by fusing laser and vision data: human candidates are detected by the LRF, and the identity of the accompanist is recognized using a PTZ camera and a pre-defined visual signature with the speeded-up robust features (SURF) algorithm. The proposed system can adaptively search for the visual signature and track the accompanist by dynamically zooming the PTZ camera based on the LRF detection results, enlarging the range of human following. Experimental results verify and demonstrate the performance of the proposed system.
Wang, Wei-Hang (王偉航). "Human Detection and Tracking for Embedded Mobile Robots by Integrating Laser-range-finding and Monocular Imaging." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/27759179454186109767.
National Tsing Hua University, Department of Electrical Engineering, ROC academic year 100 (2011-12).
A fundamental issue for modern service robots is human-robot interaction. In order to perform this task, these robots need to detect and track people in their surroundings; in particular, tracking targets robustly is an indispensable capability of autonomous mobile robots, making robust human detection and tracking an important research area in robotics. In this thesis, we present a system that detects and tracks people efficiently by integrating laser measurements and monocular camera images on a mobile platform. A laser-based leg detector, trained by cascaded AdaBoost on a set of geometrical features of scan segments, is used to detect human legs. A visual human detector, Range C4, is also proposed; it is modified from the C4 human detector by adding laser range information and achieves a lower false-positive rate than the original C4 detector. The detected legs or persons are fused and tracked by global nearest neighbor (GNN) data association and sequential Kalman filtering with a constant-velocity model: measurements are assigned to tracks by GNN, which maximizes the similarity sum, and track states are updated sequentially using the corresponding measurements. Several experiments demonstrate the robustness and efficiency of our system.
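As a small illustration of the GNN-plus-Kalman pipeline named above (a sketch, not this thesis's code), the snippet below solves the one-to-one assignment with the Hungarian algorithm and sets up the constant-velocity model matrices; the gate and time step are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_assign(tracks, detections, gate=1.0):
    """Global nearest neighbour: jointly optimal one-to-one matches, then gating."""
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

# Constant-velocity Kalman model (state = [x, y, vx, vy]) used to update matched tracks.
dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])  # motion model
H = np.hstack([np.eye(2), np.zeros((2, 2))])                                # position observed

matches = gnn_assign(np.array([[0.0, 0.0], [3.0, 1.0]]),
                     np.array([[0.1, -0.1], [2.9, 1.2]]))
print(matches)   # [(0, 0), (1, 1)]
```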
Chia, Po Chun (賈博鈞). "Applying Biped Humanoid Robot Technologies to Fall Down Scenario Designs and Detections for Human." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/78868669441203945307.
Chang Gung University, Graduate Institute of Medical Mechatronics Engineering, ROC academic year 98 (2009-10).
Preventing falls is an important issue for an aging society, so fall detection is crucial for healthcare systems. Fall signals are generally collected from a 3-axis accelerometer placed on the person's chest, and the collected signals are analysed to develop fall detection algorithms. Nevertheless, it is not easy to collect realistic fall signals, because fall experiments may cause serious injuries; most collected fall signals are therefore conservative and cannot completely represent actual falls, since a volunteer may perform a slow fall when the signal is recorded. In particular, it is hard to collect fall signals from high-risk situations such as falls from stairs. Research on biped humanoid robots has grown rapidly in recent years, because their torso structure is similar to that of human beings. This thesis proposes a fall scenario simulation system based on a biped humanoid robot. The proposed system constructs gait pattern libraries for different fall scenarios that resemble human falls, and a 3-axis accelerometer is placed on the robot's chest to measure the fall signals. To verify the proposed approach, a motion capture system is employed to measure the fall motions, and the fall motions from the motion capture system and the fall signals from the accelerometer are recorded synchronously to verify the signal correlations between the biped humanoid robot and human beings. Experimental results show such correlations for typical forward, side, and backward falls. Based on this correlation, high-risk fall signals, such as falls from stairs and slip falls, are collected from the biped humanoid robot only. The proposed system may therefore effectively collect fall signals for further fall detection algorithm studies.
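For intuition about what such chest-mounted accelerometer signals are used for downstream, here is a minimal sketch of a common free-fall-then-impact detection rule on the acceleration magnitude; the thresholds, sampling rate, and window are illustrative assumptions, not values from this thesis.

```python
import numpy as np

def detect_fall(acc_g, fs=100, low_g=0.5, high_g=2.5, window_s=0.5):
    """Flag a free-fall dip (|a| < low_g) followed within window_s by an impact (|a| > high_g)."""
    mag = np.linalg.norm(acc_g, axis=1)
    dips = np.flatnonzero(mag < low_g)
    spikes = np.flatnonzero(mag > high_g)
    w = int(window_s * fs)
    return any(((spikes > d) & (spikes <= d + w)).any() for d in dips)

# Synthetic trace: 1 g standing, a free-fall dip, then an impact spike.
trace = np.tile([0.0, 0.0, 1.0], (300, 1))
trace[150:160] = [0.0, 0.0, 0.2]   # free fall
trace[165:168] = [0.0, 0.0, 3.0]   # impact
print(detect_fall(trace))           # True
```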
Amaro, Bruno Filipe Viana. "Behavioural attentiveness patterns analysis – detecting distraction behaviours." Master's thesis, 2018. http://hdl.handle.net/1822/66009.
The ability to remain focused on a task can be crucial in some circumstances. In general, this ability is intrinsic to human social interaction and is naturally used in any social context. However, some individuals have difficulty remaining focused on an activity, resulting in a short attention span. Children with Autism Spectrum Disorder (ASD) are a particular example of such individuals. ASD is a group of complex developmental brain disorders; affected individuals are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already proven to encourage the promotion of social interaction and to help develop deficient skills in children with ASD. However, most of these systems are remotely controlled and cannot adapt automatically to the situation, and even the more autonomous ones still cannot perceive whether or not the user is paying attention to the robot's instructions and actions. Following this trend, this dissertation is part of a research project that has been under development for some years, in which the robot ZECA (Zeno Engaging Children with Autism) from Hanson Robotics is used to promote interaction with children with ASD, helping them to recognize emotions and acquire new knowledge in order to promote social interaction and communication with peers. The main objective of this dissertation is to determine whether the user is distracted during an activity. In the future, the goal is to interface this system with ZECA so that it can adapt its behaviour according to the user's affective state during an emotion-imitation activity. In order to recognize human distraction behaviours and capture the user's attention, several distraction patterns, as well as systems to detect them automatically, have been developed. One of the most widely used methods for detecting distraction patterns is based on measuring head orientation and gaze direction. This dissertation proposes a system based on a Red Green Blue (RGB) camera that detects the distraction patterns, head orientation, gaze direction, blink frequency, and the user's position in front of the camera during an activity, and then classifies the user's state using a machine learning algorithm. Finally, the proposed system is evaluated in a laboratory environment in order to verify whether it is able to detect the distraction patterns. The results of these preliminary tests made it possible to identify some limitations of the system, as well as to validate its suitability for later use in an intervention setting.
Lee, Ching-Wei (李靜微). "The Development of a Rehabilitation and Robot Imitative System Based on the Detection of Human Posture and Behavior." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/89250838527567836514.
Tamkang University, Department of Electrical Engineering (Master's Program), ROC academic year 103 (2014-15).
This thesis discusses the design and development of a rehabilitation and robot-imitation system based on the detection of human posture and behavior. The rehabilitation system uses the Kinect's skeleton identification of limb kinetics and joint movement to guide the rehabilitation process, while the robot-imitation system extracts the Kinect's human posture and behavior data to build an imitation system that makes the robot perform actions as close as possible to those of a human. For the rehabilitation system, we set up a functional block diagram for tests of hip joint movement at six angles, shoulder joint movement, knee movement, hip muscle movement, and elbow movement, and then evaluate these tests. For the robot-imitation system, we discuss the structure of the imitative robot, the hand and foot movement control, and the design of the human-robot interface and human-computer interaction.
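Joint-movement assessment from a Kinect skeleton typically reduces to angles between adjacent joints, so a minimal sketch may help; the joint coordinates below are invented for illustration.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) given 3D positions of three adjacent joints,
    e.g. shoulder-elbow-wrist from a Kinect skeleton."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(joint_angle([0.1, 0.4, 2.0], [0.1, 0.1, 2.0], [0.3, 0.1, 2.0]))  # ~90 degrees
```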
Nair, Ashwin Sasidharan. "A HUB-CI Model for Networked Telerobotics in Collaborative Monitoring of Agricultural Greenhouses." Thesis, 2019.
Melo, Davide Alexandre Lima. "Seguimento de pessoas e navegação segura para um robô de serviço." Master's thesis, 2015. http://hdl.handle.net/1822/46732.
The aim of this work is to develop a person-following system that also allows safe navigation for a service robot designed for unknown environments. The person-following system detects people based on Laser Range Finder data. The detection of the target person begins with a segmentation technique; the segments obtained are classified based on their geometric characteristics, and each segment is represented by a characteristic point that defines its position in the real world. The tracking of the target person is done using heuristics based on a region of interest. A following algorithm was implemented in which the robot preserves socially acceptable distances while navigating safely. Finally, the proposed person-following system was verified through simulations under various conditions and proved to be robust in following people, even in the case of momentary occlusions.
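A minimal sketch of the scan segmentation and characteristic-point step described above: the scan is split wherever consecutive points jump apart, and each segment is summarised by its centroid. The jump threshold and minimum segment size are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

def segment_scan(ranges, angles, jump=0.12, min_pts=3):
    """Split an LRF scan where consecutive points are more than `jump` metres apart;
    each segment is summarised by its centroid (the 'characteristic point')."""
    pts = np.column_stack([ranges * np.cos(angles), ranges * np.sin(angles)])
    breaks = np.flatnonzero(np.linalg.norm(np.diff(pts, axis=0), axis=1) > jump) + 1
    return [(seg, seg.mean(axis=0)) for seg in np.split(pts, breaks) if len(seg) >= min_pts]

angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.full(181, 4.0)
ranges[80:100] = 1.5                                  # a leg-like blob in front of the robot
print([c for _, c in segment_scan(ranges, angles)])   # centroids of wall and blob segments
```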