Academic literature on the topic 'Robust Human Detection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Robust Human Detection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Dissertations / Theses on the topic "Robust Human Detection"

1. Li, Ying. "Efficient and Robust Video Understanding for Human-robot Interaction and Detection." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654.

2. Leu, Adrian. "Robust Real-time Vision-based Human Detection and Tracking." Aachen: Shaker, 2014. http://d-nb.info/1060622432/34.

3. Leu, Adrian. "Robust Real-time Vision-based Human Detection and Tracking." Supervisor: Axel Gräser; reviewer: Udo Frese. Bremen: Staats- und Universitätsbibliothek Bremen, 2014. http://d-nb.info/1072226340/34.

4. Terzi, Matteo. "Learning interpretable representations for classification, anomaly detection, human gesture and action recognition." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.

Abstract:
The goal of this thesis is to provide algorithms and models for classification, gesture recognition and anomaly detection, with a partial focus on human activity. In applications where humans are involved, it is of paramount importance to provide robust and understandable algorithms and models. One way to meet this requirement is to use relatively simple and robust approaches, especially when devices are resource-constrained. A second way, when a large amount of data is available, is to adopt complex algorithms and models and make them robust and interpretable from a human-like point of view. This motivates our thesis, which is divided into two parts.

The first part of this thesis is devoted to the development of parsimonious algorithms for action/gesture recognition in human-centric applications, such as sports, and anomaly detection for the artificial pancreas. The data sources employed for the validation of our approaches consist of collections of time-series data coming from sensors, such as accelerometers or glycemic sensors. The main challenge in this context is to discard (i.e. be invariant to) the many nuisance factors that make the recognition task difficult, especially when many different users are involved. Moreover, in some cases data cannot be easily labelled, making supervised approaches not viable. Thus, we present the mathematical tools and the background with a focus on the recognition problems, and then derive novel methods for: (i) gesture/action recognition using sparse representations for a sport application; (ii) gesture/action recognition using a symbolic representation and its extension to the multivariate case; (iii) model-free and unsupervised anomaly detection for detecting faults on the artificial pancreas. These algorithms are well suited to deployment on resource-constrained devices, such as wearables.

In the second part, we investigate the feasibility of deep learning frameworks where human interpretation is crucial. Standard deep learning models are not robust and, unfortunately, literature approaches that ensure robustness are typically detrimental to accuracy. However, real-world applications often require a minimum amount of accuracy. In view of this, after reviewing some results from the recent literature, we formulate a new algorithm able to semantically trade off between accuracy and robustness, where a cost-sensitive classification problem is given and a threshold on accuracy is required. In addition, we provide a link between robustness to input perturbations and interpretability, guided by a physical minimum-energy principle: leveraging optimal transport tools, we show that robust training is connected to the optimal transport problem. Thanks to these theoretical insights, we develop a new algorithm that provides robust, interpretable and more transferable representations.
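Point (ii) above refers to a symbolic representation of time series. As a rough illustration of the general idea (not the thesis's actual method), the sketch below implements a SAX-style discretization in Python; the segment count, alphabet size, and Gaussian breakpoint scheme are illustrative assumptions.

```python
# A minimal SAX-style symbolic discretization of a univariate time series:
# z-normalize, compress with Piecewise Aggregate Approximation (PAA),
# then map segment means to letters via Gaussian breakpoints.
import numpy as np
from scipy.stats import norm

def paa(signal: np.ndarray, n_segments: int) -> np.ndarray:
    """Mean of each of n_segments roughly equal-length chunks."""
    return np.array([chunk.mean() for chunk in np.array_split(signal, n_segments)])

def sax_word(signal: np.ndarray, n_segments: int = 8, alphabet: str = "abcd") -> str:
    """Convert a raw time series into a short symbolic word."""
    z = (signal - signal.mean()) / (signal.std() + 1e-12)
    # Breakpoints splitting a standard normal into equiprobable bins.
    breakpoints = norm.ppf(np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa(z, n_segments)))

# Two noisy repetitions of a gesture-like trace typically map to the same
# word, so a downstream classifier can compare gestures as cheap string matches.
t = np.linspace(0, 1, 200)
print(sax_word(np.sin(2 * np.pi * t)))
print(sax_word(np.sin(2 * np.pi * t) + 0.05 * np.random.randn(200)))
```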
5. Zhu, Youding. "Model-Based Human Pose Estimation with Spatio-Temporal Inferencing." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1242752509.

6. Tasaki, Tsuyoshi. "People Detection based on Points Tracked by an Omnidirectional Camera and Interaction Distance for Service Robots System." Kyoto University, 2013. http://hdl.handle.net/2433/180473.

7. Yi, Fei. "Robust eye coding mechanisms in humans during face detection." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/31011/.

Abstract:
We can detect faces more rapidly and efficiently than non-face object categories (Bell et al., 2008; Crouzet, 2011), even when only partial information is visible (Tang et al., 2014). Face inversion impairs our ability to recognise faces. The key to understanding this effect is to determine what special face features are processed and how the coding of these features is affected by face inversion. Previous studies from our lab showed coding of the contralateral eye in an upright face detection task, which was maximal around the N170 recorded at posterior-lateral electrodes (Ince et al., 2016b; Rousselet et al., 2014).

In chapter 2, we used the Bubbles technique to determine whether, and how, brain responses also reflect the processing of eyes in inverted faces in a simple face detection task. The results suggest that in upright and inverted faces alike the N170 reflects coding of the contralateral eye, but that face inversion quantitatively weakens the early processing of the contralateral eye, specifically in the transition between the P1 and the N170, and delays this local feature coding. Group and individual results support this claim. First, regardless of face orientation, the N170 coded the eye contralateral to the posterior-lateral electrodes, which was the case in all participants. Second, face inversion delayed coding of contralateral eye information. Third, time-course analysis of contralateral eye coding revealed weaker coding for inverted compared to upright faces in the transition between the P1 and the N170. Fourth, single-trial EEG responses were driven by the corresponding single-trial visibility of the left eye: the N170 amplitude was larger and its latency shorter as left-eye visibility increased, in upright and upside-down faces, for the majority of participants. However, for images of faces, eye position and face orientation were confounded: the upper visual field usually contains the eyes in upright faces, whereas in upside-down faces the lower visual field contains the eyes. Thus, the impaired processing of the contralateral eye under inversion might simply be attributed to face inversion moving the eyes out of the upper visual field.

In chapter 3, we therefore manipulated the vertical location of the images so that the eyes appeared in the upper, centre or lower visual field relative to the fixation cross (the centre of the screen), in both upright and inverted faces, using a technique similar to that of chapter 2 during a face detection task. First, we found that regardless of face orientation and position, the modulations of ERPs recorded at the posterior-lateral electrodes were associated with the contralateral eye, suggesting that coding of the contralateral eye underlies the N170. Second, face inversion delayed processing of the contralateral eye when the eyes were presented at the same position, Above, Below or at the Centre of the screen. Also, in the early N170, most of our participants showed weakened contralateral eye sensitivity for inverted faces whose eyes appeared in the same position. These results suggest that inversion-related changes in the processing of the contralateral eye cannot simply be attributed to differences in eye position.

The scan-paths traced by human eye movements are similar to the low-level saliency maps produced by contrast-based computer vision algorithms (Itti et al., 1998). This evidence raises the question of whether the coding of the eyes is driven by the saliency of the eye regions, which chapter 4 aims to answer. We introduced two altered versions of the original faces in a simple face detection task: contrast-normalised faces, removing eye saliency (Simoncelli and Olshausen, 2001), and contrast-reversed faces (Gilad et al., 2009). In each face condition, ERPs recorded at contralateral posterior-lateral electrodes were sensitive to the eye regions. Both contrast manipulations delayed and reduced eye sensitivity during the rising part of the N170, roughly 120-160 ms post-stimulus onset, and there were no such differences between the two contrast-manipulated conditions. These results were observed in the majority of participants. They suggest that the processing of the contralateral eye is partially due to low-level factors and may reflect feature processing in the early N170.
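As a rough illustration of the Bubbles technique named above (not the thesis's actual stimulus code), the sketch below generates a random-aperture mask and computes the per-trial visibility of a region of interest; the image size, bubble count, and aperture width are illustrative assumptions.

```python
# A minimal sketch of "Bubbles" sampling: on each trial the face is viewed
# through a few randomly placed Gaussian apertures, and feature importance
# is later estimated by relating per-trial visibility (e.g. of the left eye)
# to the EEG response.
import numpy as np

def bubble_mask(height: int, width: int, n_bubbles: int = 10,
                sigma: float = 15.0, rng=None) -> np.ndarray:
    """Sum of random Gaussian apertures, clipped to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    ys, xs = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width))
    for cy, cx in zip(rng.uniform(0, height, n_bubbles),
                      rng.uniform(0, width, n_bubbles)):
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

# Reveal a random subset of an image; per-trial visibility of a region
# (here a hypothetical eye box) is the mean mask value inside that region.
image = np.ones((256, 256))            # stand-in for a face image
mask = bubble_mask(256, 256)
stimulus = image * mask
left_eye_visibility = mask[90:120, 60:100].mean()
print(round(left_eye_visibility, 3))
```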
8. Alanenpää, Madelene. "Gaze detection in human-robot interaction." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.

Abstract:
The aim of this thesis is to track gaze direction in a human-robot interaction scenario. The human-robot interaction consisted of a participant playing a geographic game with three important objects on which participants could focus: a tablet, a shared touchscreen, and a robot (called Furhat). During the game, the participant was equipped with eye-tracking glasses. These collected a first-person view video as well as annotations consisting of the participant's center of gaze. In this thesis, I aim to use this data to detect the three important objects described above from the first-person video stream and discriminate whether the gaze of the person fell on one of the objects of importance and for how long. To achieve this, I trained an accurate and fast state-of-the-art object detector called YOLOv4. To ascertain that this was the correct object detector for this thesis, I compared YOLOv4 with its previous version, YOLOv3, in terms of accuracy and run time. YOLOv4 was trained with a data set of 337 images consisting of various pictures of tablets, television screens and the Furhat robot. The trained program was used to extract the relevant objects for each frame of the eye-tracking video, and a parser was used to discriminate whether the gaze of the participant fell on the relevant objects and for how long. The result is a system that could determine, with an accuracy of 90.03%, what object the participant is looking at and for how long the participant is looking at that object.
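As a rough illustration of the parsing step described above (not the thesis's actual code), the sketch below decides which detected object the gaze point falls on and accumulates dwell time per object; the class names, box format, and 25 fps frame rate are illustrative assumptions, and the detector itself is assumed to run elsewhere.

```python
# Given per-frame bounding boxes (e.g. from a YOLOv4 model, not reproduced
# here) and the eye tracker's gaze point, decide which object is fixated
# and accumulate seconds of gaze per object label.
from collections import defaultdict
from typing import Dict, List, Tuple

Box = Tuple[str, float, float, float, float]  # (label, x1, y1, x2, y2)

def gaze_target(gaze_xy: Tuple[float, float], boxes: List[Box]) -> str:
    """Label of the smallest detection containing the gaze point, else 'none'."""
    gx, gy = gaze_xy
    hits = [((x2 - x1) * (y2 - y1), label)
            for label, x1, y1, x2, y2 in boxes
            if x1 <= gx <= x2 and y1 <= gy <= y2]
    return min(hits)[1] if hits else "none"

def dwell_times(frames: List[Tuple[Tuple[float, float], List[Box]]],
                fps: float = 25.0) -> Dict[str, float]:
    """Seconds of gaze accumulated per object over a frame sequence."""
    seconds: Dict[str, float] = defaultdict(float)
    for gaze_xy, boxes in frames:
        seconds[gaze_target(gaze_xy, boxes)] += 1.0 / fps
    return dict(seconds)

# 50 frames with the gaze inside the tablet's box -> about 2 s at 25 fps.
frame = ((310.0, 240.0), [("tablet", 200.0, 100.0, 400.0, 380.0),
                          ("furhat", 500.0, 50.0, 620.0, 300.0)])
print(dwell_times([frame] * 50))  # ~ {'tablet': 2.0}
```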
9. Antonucci, Alessandro. "Socially aware robot navigation." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/356142.

Abstract:
A growing number of applications involving autonomous mobile robots will require navigation across environments shared with humans. In those situations, the robot's actions are socially acceptable if they reflect the behaviours that humans would generate in similar conditions. The robot must therefore perceive people in the environment and react correctly based on their actions and relevance to its mission. To push human-robot interaction forward, the proposed research focuses on efficient robot motion algorithms, covering all the tasks needed in the whole process, such as obstacle detection, human motion tracking and prediction, and socially aware navigation. The final framework presented in this thesis is a robust and efficient solution enabling the robot to correctly understand human intentions and consequently perform safe, legible, and socially compliant actions. The thesis retraces in its structure all the steps of the framework, presenting the algorithms and models developed and the experimental evaluations carried out both in simulation and on real robotic platforms, and showing the performance obtained in real time in complex scenarios where humans are present and play a prominent role in the robot's decisions. The proposed implementations are all based on combinations of traditional model-based techniques and machine learning algorithms, fused to effectively solve human-aware navigation. This synergy of the two methodologies gives greater flexibility and generalization than the navigation approaches proposed so far, while maintaining an accuracy and reliability not always displayed by learning methods.
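As a rough illustration of one classic model-based ingredient such frameworks often build on (not the planner developed in the thesis), the sketch below computes a social-force-style repulsion, in the spirit of Helbing and Molnár's model, that biases a velocity command away from tracked people; the gains and the blending with goal attraction are illustrative assumptions.

```python
# Social-force-style repulsion: each tracked person exerts an exponentially
# decaying push on the robot, summed and blended with goal attraction.
import numpy as np

def social_force(robot_xy: np.ndarray, people_xy: np.ndarray,
                 A: float = 2.0, B: float = 0.8) -> np.ndarray:
    """Sum of exponentially decaying repulsive forces from each person."""
    force = np.zeros(2)
    for p in people_xy:
        diff = robot_xy - p
        dist = np.linalg.norm(diff)
        if dist > 1e-6:
            force += A * np.exp(-dist / B) * diff / dist
    return force

# Blend the repulsion with a goal-attraction term to bias a local plan.
robot = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
people = np.array([[2.0, 0.3]])
velocity = 0.8 * (goal - robot) / np.linalg.norm(goal - robot) \
    + social_force(robot, people)
print(velocity)  # ~ [0.64, -0.02]: toward the goal, nudged away from the person
```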
10. Briquet-Kerestedjian, Nolwenn. "Impact detection and classification for safe physical Human-Robot Interaction under uncertainties." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC038/document.

Abstract:
The present thesis aims to develop an efficient strategy for impact detection and classification in the presence of modeling uncertainties of the robot and its environment, using a minimum number of sensors, in particular in the absence of a force/torque sensor. The first part of the thesis deals with the detection of an impact that can occur at any location along the robot arm and at any moment during the robot trajectory. Impact detection methods are commonly based on a dynamic model of the system, making them subject to the trade-off between sensitivity of detection and robustness to modeling uncertainties. In this respect, a quantitative methodology has first been developed to make explicit the contribution of the errors induced by model uncertainties. This methodology has been applied to various detection strategies, based either on a direct estimate of the external torque or on disturbance observers, in the perfectly rigid case or in the elastic-joint case. A comparison of the type and structure of the errors involved and of their consequences for impact detection has been deduced.

In a second step, novel impact detection strategies have been designed: the dynamic effects of the impacts are isolated by determining the maximal error range due to modeling uncertainties using a stochastic approach. Once the impact has been detected, and in order to trigger the most appropriate post-impact robot reaction, the second part of the thesis focuses on the classification step. In particular, the distinction between an intentional contact (the human operator intentionally interacts with the robot, for example to reconfigure the task) and an undesired contact (a human subject accidentally runs into the robot), as well as the localization of the contact on the robot, is investigated using supervised learning techniques, more specifically feedforward neural networks. The challenge of generalizing to several human subjects and robot trajectories has also been investigated.
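As a rough illustration of the disturbance-observer style of detection mentioned above (not the thesis's actual method, which derives the detection margin from a stochastic model of the uncertainties), the sketch below implements a generalized-momentum residual for a single joint and thresholds it with a fixed margin; the 1-DoF dynamics, observer gain, and margin value are illustrative assumptions.

```python
# Momentum-residual impact detection for one joint: the residual r tracks
# the external torque without a force/torque sensor, and an impact is
# flagged when |r| exceeds an uncertainty margin.
import numpy as np

def momentum_residual(tau_cmd, qdot, inertia, dt, K_O=20.0):
    """First-order momentum observer: r converges to the external torque."""
    r = np.zeros_like(tau_cmd)
    p_hat = 0.0                                  # estimated momentum
    for k in range(1, len(tau_cmd)):
        p = inertia * qdot[k]                    # measured momentum
        p_hat += (tau_cmd[k - 1] + r[k - 1]) * dt
        r[k] = K_O * (p - p_hat)
    return r

dt, n = 0.001, 2000
tau_cmd = np.zeros(n)                            # joint at rest, no command
tau_ext = np.where(np.arange(n) > 1000, 3.0, 0.0)  # impact at t = 1 s
qdot = np.cumsum((tau_cmd + tau_ext) / 1.5) * dt   # integrate I*qddot = tau
r = momentum_residual(tau_cmd, qdot, inertia=1.5, dt=dt)
margin = 0.5                 # stand-in for the worst-case model-error bound
print("impact detected:", np.any(np.abs(r) > margin))  # True, shortly after 1 s
```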
