Academic literature on the topic 'Robust Human Detection'

Consult the lists of relevant articles, books, theses, book chapters, and conference papers on the topic 'Robust Human Detection.' Where available in the metadata, each entry includes its abstract, and the full text of the publication can be downloaded as a PDF.

Journal articles on the topic "Robust Human Detection"

1

Guan, F., L. Y. Li, S. S. Ge, and A. P. Loh. "Robust Human Detection and Identification by Using Stereo and Thermal Images in Human Robot Interaction." International Journal of Information Acquisition 4, no. 2 (June 2007): 161–83. http://dx.doi.org/10.1142/s0219878907001241.

Abstract:
In this paper, robust human detection is investigated by fusing stereo and infrared thermal images for effective interaction between humans and socially interactive robots. A scale-adaptive filter is first designed for the stereo vision system to detect human candidates. To overcome the vision system's difficulty in distinguishing human beings from human-like objects, the infrared thermal image is used to resolve the ambiguity and reduce illumination effects. Experimental results show that the fusion of these two types of images gives an improved vision system for robust human detection and identification, an essential component of human-robot interaction.
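The fusion idea in this abstract can be sketched as a thermal plausibility check on depth-derived candidates. This is a minimal illustration under assumed conventions (boxes as (x0, y0, x1, y1) pixel tuples, thermal values in degrees Celsius, illustrative thresholds), not the paper's actual filter:

```python
import numpy as np

def confirm_candidates(boxes, thermal, t_min=30.0, t_max=40.0, min_warm=0.3):
    """Reject human-like objects (e.g. mannequins) whose bounding box
    does not contain enough body-temperature pixels in the thermal image.
    Thresholds are illustrative assumptions, not values from the paper."""
    kept = []
    for x0, y0, x1, y1 in boxes:
        patch = thermal[y0:y1, x0:x1]
        # Fraction of pixels in the candidate region at human skin temperature
        warm_fraction = ((patch >= t_min) & (patch <= t_max)).mean()
        if warm_fraction >= min_warm:
            kept.append((x0, y0, x1, y1))
    return kept
```

For example, a candidate box over a region at ambient temperature would be discarded, while one over a 36 °C region would be kept.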
2

Iwata, Kenji, Yutaka Satoh, Ikushi Yoda, and Katsuhiko Sakaue. "Hybrid Camera Surveillance System Using Robust Human Detection." IEEJ Transactions on Electronics, Information and Systems 127, no. 6 (2007): 837–43. http://dx.doi.org/10.1541/ieejeiss.127.837.

3

Al-Hazaimeh, Obaida M., Malek Al-Nawashi, and Mohamad Saraee. "Geometrical-based approach for robust human image detection." Multimedia Tools and Applications 78, no. 6 (August 4, 2018): 7029–53. http://dx.doi.org/10.1007/s11042-018-6401-y.

4

Chowdhury, Mozammel, Junbin Gao, and Rafiqul Islam. "Robust human detection and localization in security applications." Concurrency and Computation: Practice and Experience 29, no. 23 (October 22, 2016): e3977. http://dx.doi.org/10.1002/cpe.3977.

5

Iwata, Kenji, Yutaka Satoh, Ikushi Yoda, and Katsuhiko Sakaue. "Hybrid camera surveillance system using robust human detection." Electronics and Communications in Japan 91, no. 11 (November 2008): 11–18. http://dx.doi.org/10.1002/ecj.10006.

6

Störring, Moritz, Hans J. Andersen, and Erik Granum. "A multispectral approach to robust human skin detection." Conference on Colour in Graphics, Imaging, and Vision 2, no. 1 (January 1, 2004): 110–15. http://dx.doi.org/10.2352/cgiv.2004.2.1.art00024.

7

Jang, Seok-Woo, and Siwoo Byun. "Facial region detection robust to changing backgrounds." International Journal of Engineering & Technology 7, no. 2.12 (April 3, 2018): 25. http://dx.doi.org/10.14419/ijet.v7i2.12.11028.

Abstract:
Background/Objectives: Many studies have recently been conducted on intelligent robots capable of providing human-friendly service. Natural interaction between humans and robots requires mobile-robot-based technology for robustly detecting human facial regions against dynamically changing real backgrounds. Methods/Statistical analysis: This paper proposes a method for detecting facial regions adaptively through mobile-robot-based monitoring of backgrounds in a dynamic real environment. The camera-object distance and color changes in the object background are monitored, and the skin color extraction algorithm best suited to the measured distance and color is applied. In the face detection step, if the searched range is valid, the most suitable skin color detection method is selected to detect facial regions. Findings: To sum up the experimental results, the algorithms differ in performance depending on distance and background color. Overall, the algorithms using a neural network showed stable results. The algorithm using Kismet had a good perception rate for the ground-truth part of an original image; its skin color detection rate was strongly influenced by pink and yellow background colors similar to skin tone, so its rate of incorrectly perceiving background was considerably high. Regarding performance versus distance, as the camera-object distance approached 320 cm, the rate of incorrectly perceiving background increased sharply. To analyze the performance of each skin color detection algorithm applied to face detection, we examined how much of the skin color of an original image each algorithm detected. The skin color detection rate was computed by establishing the ground truth for the skin of an original image and then counting the number of skin-color pixels detected by each algorithm.
Here, the ground truth means the range of skin color of an original image to be detected. Improvements/Applications: We expect the proposed approach to detecting facial regions in a dynamic real environment to be used in a variety of application areas related to computer vision and image processing.
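As a minimal baseline for the kind of skin color extraction the abstract describes, a fixed-range HSV classifier can be sketched as follows. The thresholds are illustrative assumptions (OpenCV-style ranges, H in [0, 179], S and V in [0, 255]), whereas the paper selects among detectors adaptively per distance and background color:

```python
import numpy as np

def skin_mask_hsv(hsv, h_max=25, s_min=40, s_max=255, v_min=60):
    """Binary mask of likely skin pixels from an HSV image.
    A fixed-range sketch; the thresholds are assumptions for
    illustration, not the adaptive method of the paper."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h <= h_max) & (s >= s_min) & (s <= s_max) & (v >= v_min)
```

Such fixed thresholds are exactly what degrade under pink or yellow backgrounds close to skin tone, which motivates the adaptive selection studied in the paper.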
8

Zhong, Xubin, Changxing Ding, Xian Qu, and Dacheng Tao. "Polysemy Deciphering Network for Robust Human–Object Interaction Detection." International Journal of Computer Vision 129, no. 6 (April 19, 2021): 1910–29. http://dx.doi.org/10.1007/s11263-021-01458-8.

9

Cho, Sang-Ho, Taewan Kim, and Daijin Kim. "Pose Robust Human Detection in Depth Images Using Multiply-Oriented 2D Elliptical Filters." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 5 (August 2010): 691–717. http://dx.doi.org/10.1142/s0218001410008135.

Abstract:
This paper proposes a pose robust human detection and identification method for sequences of stereo images using multiply-oriented 2D elliptical filters (MO2DEFs), which can detect and identify humans regardless of scale and pose. Four 2D elliptical filters with specific orientations are applied to a 2D spatial-depth histogram, and threshold values are used to detect humans. The human pose is then determined by finding the filter whose convolution result was maximal. Candidates are verified by either detecting the face or matching head-shoulder shapes. Human identification employs the human detection method for a sequence of input stereo images and identifies them as a registered human or a new human using the Bhattacharyya distance of the color histogram. Experimental results show that (1) the accuracy of pose angle estimation is about 88%, (2) human detection using the proposed method outperforms that of using the existing Object Oriented Scale Adaptive Filter (OOSAF) by 15–20%, especially in the case of posed humans, and (3) the human identification method has a nearly perfect accuracy.
10

Srisuk, Sanun, Werasak Kurutach, and Kongsak Limpitikeat. "A Novel Approach for Robust, Fast and Accurate Face Detection." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 9, no. 6 (December 2001): 769–79. http://dx.doi.org/10.1142/s0218488501001228.

Abstract:
In this paper, we propose a novel approach for detecting human faces in a complex background scene. The method is robust and based on our enhanced Hausdorff distance. A major aim of this research is to achieve a highly efficient method of face detection that can be used in real-time applications. In addition, our approach produces very reliable and accurate results. The whole algorithm is composed of three main modules: robust skin detection using Fuzzy HSCC, face similarity measurement using RAMHD, and facial feature detection using SVM. Moreover, a technique for automatically updating the size of an elliptical model is also introduced. Results are shown on real images.
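The classical symmetric Hausdorff distance that the authors' enhanced variant builds on can be sketched for two 2-D point sets (e.g. edge pixels of a face template and of an image region); this is the textbook definition, not the paper's modified measure:

```python
import numpy as np

def directed_hausdorff(a, b):
    """h(A, B) = max over points of A of the distance to the
    nearest point of B. a and b are (n, 2) arrays of points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

Matching a face template then amounts to sliding it over candidate regions and keeping locations where this distance is small.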

Dissertations / Theses on the topic "Robust Human Detection"

1

Li, Ying. "Efficient and Robust Video Understanding for Human-robot Interaction and Detection." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654.

2

Leu, Adrian [author]. "Robust Real-time Vision-based Human Detection and Tracking / Adrian Leu." Aachen: Shaker, 2014. http://d-nb.info/1060622432/34.

3

Leu, Adrian [author], Axel [academic supervisor] Gräser, and Udo [academic supervisor] Frese. "Robust Real-time Vision-based Human Detection and Tracking / Adrian Leu. Reviewer: Udo Frese. Supervisor: Axel Gräser." Bremen: Staats- und Universitätsbibliothek Bremen, 2014. http://d-nb.info/1072226340/34.

4

Terzi, Matteo. "Learning interpretable representations for classification, anomaly detection, human gesture and action recognition." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.

Abstract:
The goal of this thesis is to provide algorithms and models for classification, gesture recognition and anomaly detection, with a partial focus on human activity. In applications where humans are involved, it is of paramount importance to provide robust and understandable algorithms and models. One way to meet this requirement is to use relatively simple and robust approaches, especially when devices are resource-constrained. The second approach, when a large amount of data is available, is to adopt complex algorithms and models and make them robust and interpretable from a human-like point of view. This motivates our thesis, which is divided in two parts. The first part of this thesis is devoted to the development of parsimonious algorithms for action/gesture recognition in human-centric applications such as sports, and anomaly detection for the artificial pancreas. The data sources employed for the validation of our approaches consist of collections of time-series data coming from sensors such as accelerometers or glucose sensors. The main challenge in this context is to discard (i.e. be invariant to) the many nuisance factors that make the recognition task difficult, especially when many different users are involved. Moreover, in some cases, data cannot be easily labelled, making supervised approaches not viable. Thus, we present the mathematical tools and the background with a focus on the recognition problems, and then derive novel methods for: (i) gesture/action recognition using sparse representations for a sport application; (ii) gesture/action recognition using a symbolic representation and its extension to the multivariate case; (iii) model-free and unsupervised anomaly detection for detecting faults in the artificial pancreas. These algorithms are well suited to deployment on resource-constrained devices, such as wearables. In the second part, we investigate the feasibility of deep learning frameworks where human interpretation is crucial.
Standard deep learning models are not robust and, unfortunately, literature approaches that ensure robustness are typically detrimental to accuracy. However, real-world applications often require a minimum amount of accuracy in order to be deployed. In view of this, after reviewing some results present in the recent literature, we formulate a new algorithm able to semantically trade off between accuracy and robustness, given a cost-sensitive classification problem and a required accuracy threshold. In addition, we provide a link between robustness to input perturbations and interpretability, guided by a physical minimum-energy principle: in fact, leveraging optimal transport tools, we show that robust training is connected to the optimal transport problem. Thanks to these theoretical insights, we develop a new algorithm that provides robust, interpretable and more transferable representations.
5

Zhu, Youding. "Model-Based Human Pose Estimation with Spatio-Temporal Inferencing." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1242752509.

6

Tasaki, Tsuyoshi. "People Detection based on Points Tracked by an Omnidirectional Camera and Interaction Distance for Service Robots System." 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/180473.

7

Yi, Fei. "Robust eye coding mechanisms in humans during face detection." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/31011/.

Abstract:
We can detect faces more rapidly and efficiently than non-face object categories (Bell et al., 2008; Crouzet, 2011), even when only partial information is visible (Tang et al., 2014). Face inversion impairs our ability to recognise faces. The key to understanding this effect is to determine which special face features are processed and how the coding of these features is affected by face inversion. Previous studies from our lab showed coding of the contralateral eye in an upright face detection task, which was maximal around the N170 recorded at posterior-lateral electrodes (Ince et al., 2016b; Rousselet et al., 2014). In chapter 2, we used the Bubble technique to determine whether brain responses also reflect the processing of eyes in inverted faces, and how they do so, in a simple face detection task. The results suggest that in upright and inverted faces alike the N170 reflects coding of the contralateral eye, but face inversion quantitatively weakens the early processing of the contralateral eye, specifically in the transition between the P1 and the N170, and delays this local feature coding. Group and individual results support this claim. First, regardless of face orientation, the N170 coded the eye contralateral to the posterior-lateral electrodes, which was the case in all participants. Second, face inversion delayed coding of contralateral eye information. Third, time-course analysis of contralateral eye coding revealed weaker contralateral eye coding for inverted compared to upright faces in the transition between the P1 and the N170. Fourth, single-trial EEG responses were driven by the corresponding single-trial visibility of the left eye. The N170 amplitude was larger and its latency shorter as left-eye visibility increased, in upright and upside-down faces, for the majority of participants.
However, in images of faces, eye position and face orientation were confounded: the upper visual field usually contains the eyes in upright faces, whereas in upside-down faces the lower visual field contains the eyes. Thus, the impaired processing of the contralateral eye under inversion might simply be attributed to face inversion moving the eyes out of the upper visual field. In chapter 3, we manipulated the vertical location of the images so that the eyes were presented in the upper, centre or lower visual field relative to the fixation cross (the centre of the screen), allowing the eyes to shift from the upper to the lower visual field in both upright and inverted faces. We used a similar technique to that of chapter 2 during a face detection task. First, we found that regardless of face orientation and position, the modulations of ERPs recorded at the posterior-lateral electrodes were associated with the contralateral eye, suggesting that coding of the contralateral eye underlies the N170. Second, face inversion delayed processing of the contralateral eye when the eyes of faces were presented in the same position: above, below or at the centre of the screen. Also, in the early N170, most of our participants showed contralateral eye sensitivity weakened by inversion of faces whose eyes appeared in the same position. The results suggest that face-inversion-related changes in the processing of the contralateral eye cannot simply be considered the result of differences in eye position. The scan paths traced by human eye movements are similar to the low-level computational saliency maps produced by contrast-based computer vision algorithms (Itti et al., 1998). This evidence raises the question of whether the eyes are encoded because of the saliency of the eye regions. Chapter 4 aims to answer this question.
We introduced two altered versions of the original faces in a simple face detection task: normalised-contrast faces, removing eye saliency (Simoncelli and Olshausen, 2001), and reversed-contrast faces, reversing face contrast polarity (Gilad et al., 2009). In each face condition, ERPs recorded at contralateral posterior-lateral electrodes were sensitive to the eye regions. Both contrast manipulations delayed and reduced eye sensitivity during the rising part of the N170, roughly 120–160 ms post-stimulus onset, and there were no such differences between the two contrast-manipulated faces. These results were observed in the majority of participants. They suggest that the processing of the contralateral eye is due partially to low-level factors and may reflect feature processing in the early N170.
8

Alanenpää, Madelene. "Gaze detection in human-robot interaction." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.

Abstract:
The aim of this thesis is to track gaze direction in a human-robot interaction scenario. The human-robot interaction consisted of a participant playing a geographic game with three important objects on which participants could focus: a tablet, a shared touchscreen, and a robot (called Furhat). During the game, the participant was equipped with eye-tracking glasses. These collected a first-person view video as well as annotations consisting of the participant's centre of gaze. In this thesis, I aim to use these data to detect the three important objects described above in the first-person video stream and to discriminate whether the gaze of the person fell on one of the objects of importance, and for how long. To achieve this, I trained an accurate and fast state-of-the-art object detector called YOLOv4. To ascertain that this was the correct object detector for this thesis, I compared YOLOv4 with its previous version, YOLOv3, in terms of accuracy and run time. YOLOv4 was trained with a data set of 337 images consisting of various pictures of tablets, television screens and the Furhat robot. The trained detector was used to extract the relevant objects from each frame of the eye-tracking video, and a parser was used to discriminate whether the gaze of the participant fell on the relevant objects and for how long. The result is a system that could determine, with an accuracy of 90.03%, what object the participant is looking at and for how long the participant is looking at that object.
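The parser step, attributing each frame's gaze point to a detected object and accumulating dwell time, can be sketched as follows; the data layout and label names are illustrative assumptions, not the thesis code:

```python
def gaze_dwell_times(frames, fps=30.0):
    """Accumulate how long the gaze point falls inside each detected
    object's bounding box. `frames` is a list of (gaze_xy, detections)
    pairs, where detections maps a label (e.g. 'tablet', 'screen',
    'furhat') to an (x0, y0, x1, y1) box in pixel coordinates.
    Returns a dict of label -> seconds of accumulated gaze."""
    dwell = {}
    for (gx, gy), detections in frames:
        for label, (x0, y0, x1, y1) in detections.items():
            # Gaze counts toward an object when it lies inside its box
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                dwell[label] = dwell.get(label, 0.0) + 1.0 / fps
    return dwell
```

Each video frame contributes 1/fps seconds to whichever detected object contains the gaze point at that moment.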
9

Antonucci, Alessandro. "Socially aware robot navigation." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/356142.

Abstract:
A growing number of applications involving autonomous mobile robots will require navigation across environments in which spaces are shared with humans. In those situations, the robot's actions are socially acceptable if they reflect the behaviours that humans would generate in similar conditions. Therefore, the robot must perceive people in the environment and correctly react based on their actions and relevance to its mission. To push human-robot interaction forward, the proposed research focuses on efficient robot motion algorithms, covering all the tasks needed in the whole process, such as obstacle detection, human motion tracking and prediction, and socially aware navigation. The final framework presented in this thesis is a robust and efficient solution enabling the robot to correctly understand human intentions and consequently perform safe, legible, and socially compliant actions. In its structure, the thesis retraces all the steps of the framework through the presentation of the algorithms and models developed, and the experimental evaluations carried out both in simulation and on real robotic platforms, showing the performance obtained in real time in complex scenarios where humans are present and play a prominent role in the robot's decisions. The proposed implementations are all based on insightful combinations of traditional model-based techniques and machine learning algorithms, fused to effectively solve human-aware navigation. The specific synergy of the two methodologies gives us greater flexibility and generalization than the navigation approaches proposed so far, while maintaining the accuracy and reliability that learning methods do not always display.
10

Briquet-Kerestedjian, Nolwenn. "Impact detection and classification for safe physical Human-Robot Interaction under uncertainties." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC038/document.

Abstract:
The present thesis aims to develop an efficient strategy for impact detection and classification in the presence of modeling uncertainties of the robot and its environment and using a minimum number of sensors, in particular in the absence of a force/torque sensor. The first part of the thesis deals with the detection of an impact that can occur at any location along the robot arm and at any moment during the robot trajectory. Impact detection methods are commonly based on a dynamic model of the system, making them subject to the trade-off between sensitivity of detection and robustness to modeling uncertainties. In this respect, a quantitative methodology has first been developed to make explicit the contribution of the errors induced by model uncertainties. This methodology has been applied to various detection strategies, based either on a direct estimate of the external torque or using disturbance observers, in the perfectly rigid case or in the elastic-joint case. A comparison of the type and structure of the errors involved and their consequences on the impact detection has been deduced. In a second step, novel impact detection strategies have been designed: the dynamic effects of the impacts are isolated by determining the maximal error range due to modeling uncertainties using a stochastic approach. Once the impact has been detected and in order to trigger the most appropriate post-impact robot reaction, the second part of the thesis focuses on the classification step. In particular, the distinction between an intentional contact (the human operator intentionally interacts with the robot, for example to reconfigure the task) and an undesired contact (a human subject accidentally runs into the robot), as well as the localization of the contact on the robot, is investigated using supervised learning techniques and more specifically feedforward neural networks. The challenge of generalizing to several human subjects and robot trajectories has been investigated.
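The residual-thresholding idea behind such detection strategies, flagging an impact when the external-torque estimate leaves an uncertainty-derived error band, can be sketched as follows; this is a generic illustration under assumed inputs, not the thesis method:

```python
import numpy as np

def detect_impact(tau_measured, tau_model, sigma_model, k=3.0):
    """Flag an impact when the per-joint residual between measured
    torque and model-predicted torque exceeds k times the modeling
    uncertainty band sigma_model on any joint. Larger k means fewer
    false alarms but lower detection sensitivity; all names and the
    default k are illustrative assumptions."""
    residual = np.abs(np.asarray(tau_measured) - np.asarray(tau_model))
    return bool(np.any(residual > k * np.asarray(sigma_model)))
```

The choice of k makes the sensitivity-versus-robustness trade-off described in the abstract explicit: the error band absorbs model uncertainty, and only dynamics exceeding it are attributed to an impact.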

Books on the topic "Robust Human Detection"

1

Brown, Dan. The Lost Symbol. 2nd ed. London: Corgi Books, 2013.

2

Brown, Dan. The Lost Symbol: A novel. New York, USA: Doubleday, 2009.

3

Brown, Dan. El símbolo perdido. Barcelona: Planeta, 2017.

4

Brown, Dan. Le symbole perdu: Roman. Paris: JC Lattes, 2009.

5

Brown, Dan. The Lost Symbol. New York, USA: Random House Large Print, 2009.

6

Brown, Dan. Rosuto shinboru. Tōkyō: Kadokawa Shoten, 2010.

7

Brown, Dan. The Lost Symbol. London: Bantam Press, 2009.

8

Brown, Dan. Il simbolo perduto. Milano: Mondadori, 2009.

9

Brown, Dan. Le Symbole Perdu. Paris: JC Lattès, 2009.

10

Brown, Dan. Utrachennyĭ simvol: Roman. Moskva: AST, 2010.


Book chapters on the topic "Robust Human Detection"

1

Liu, Pengfei, Xue Zhou, and Shibin Cai. "Omega-Shape Feature Learning for Robust Human Detection." In Communications in Computer and Information Science, 290–303. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3002-4_25.

2

Li, Tianshuo, Yanwei Pang, Jing Pan, and Changshu Liu. "Weighted Deformable Part Model for Robust Human Detection." In Intelligent Computing Theory, 764–75. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09333-8_83.

3

Li, Haojie, Fuming Sun, and Yue Guan. "Robust Detection and Localization of Human Action in Video." In Lecture Notes in Computer Science, 263–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35728-2_25.

4

Schwartz, William Robson, Raghuraman Gopalan, Rama Chellappa, and Larry S. Davis. "Robust Human Detection under Occlusion by Integrating Face and Person Detectors." In Advances in Biometrics, 970–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01793-3_98.

5

Mikolajczyk, Krystian, Cordelia Schmid, and Andrew Zisserman. "Human Detection Based on a Probabilistic Assembly of Robust Part Detectors." In Lecture Notes in Computer Science, 69–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24670-1_6.

6

Bhalerao, Shailesh Vitthalrao, and Ram Bilas Pachori. "Automatic Detection of Motor Imagery EEG Signals Using Swarm Decomposition for Robust BCI Systems." In Human-Machine Interface Technology Advancements and Applications, 35–64. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003326830-3.

7

Pham-Ngoc, Phuong-Trinh, Tae-Ho Kim, and Kang-Hyun Jo. "Robust Human Face Detection for Moving Pictures Based on Cascade-Typed Hybrid Classifier." In Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence, 1110–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74205-0_115.

8

Iwata, Kenji, Yutaka Satoh, Ikushi Yoda, and Katsuhiko Sakaue. "Hybrid Camera Surveillance System by Using Stereo Omni-directional System and Robust Human Detection." In Advances in Image and Video Technology, 611–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11949534_61.

9

Yan Shan, Ang. "DNA Split Proximity Circuit for Visualizing Cell Surface Receptor Clustering—A Case Study Using Human Epidermal Growth Factor Receptor Family." In Engineering a Robust DNA Circuit for the Direct Detection of Biomolecular Interactions, 143–56. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2188-7_8.

10

Saad, Alia, Jonathan Liebers, Stefan Schneegass, and Uwe Gruenefeld. "“They see me scrollin”—Lessons Learned from Investigating Shoulder Surfing Behavior and Attack Mitigation Strategies." In Human Factors in Privacy Research, 199–218. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-28643-8_10.

Abstract:
Mobile computing devices have become ubiquitous; however, they are prone to observation and reconstruction attacks. In particular, shoulder surfing, where an adversary observes another user's interaction without prior consent, remains a significant unresolved problem. In the past, researchers have primarily focused their research on making authentication more robust against shoulder surfing, with less emphasis on understanding the attacker or their behavior. Nonetheless, understanding these attacks is crucial for protecting smartphone users' privacy. This chapter aims to bring more attention to research that promotes a deeper understanding of shoulder surfing attacks. While shoulder surfing attacks are difficult to study under natural conditions, researchers have proposed different approaches to overcome this challenge. We compare and discuss these approaches and extract lessons learned. Furthermore, we discuss different mitigation strategies of shoulder surfing attacks and cover algorithmic detection of attacks and proposed threat models as well. Finally, we conclude with an outlook of potential next steps for shoulder surfing research.

Conference papers on the topic "Robust Human Detection"

1. Hidai, Ken-ichi, T. Kanamori, Hiroshi Mizoguchi, Kazuyuki Hiraoka, Masaru Tanaka, Takaomi Shigehara, and Taketoshi Mishima. "Robust face detection for human interactive mobile robot." In Intelligent Systems and Smart Manufacturing, edited by Howie M. Choset, Douglas W. Gage, and Matthew R. Stein. SPIE, 2001. http://dx.doi.org/10.1117/12.417299.

2. Raviteja, Thaluru, Srikrishna Karanam, and Dinesh Reddy V. Yeduguru. "A robust human face detection algorithm." In Fourth International Conference on Machine Vision (ICMV 11), edited by Zhu Zeng and Yuting Li. SPIE, 2012. http://dx.doi.org/10.1117/12.920068.

3. Yoon, Hosub, Dohyung Kim, Suyoung Chi, and Youngjo Cho. "A robust human head detection method for human tracking." In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2006. http://dx.doi.org/10.1109/iros.2006.282159.

4. Martinez-Martin, Ester, and Angel P. del Pobil. "Robust Motion Detection and Tracking for Human-Robot Interaction." In HRI '17: ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3029798.3029799.

5. Zhang, Li, and Xiangxu Meng. "Enhanced Robust Vortex Detection." In 2012 4th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). IEEE, 2012. http://dx.doi.org/10.1109/ihmsc.2012.149.

6. Bell, Amy E. "Robust feature vector for efficient human detection." In 2013 IEEE Applied Imagery Pattern Recognition Workshop: Sensing for Control and Augmentation (AIPR 2013). IEEE, 2013. http://dx.doi.org/10.1109/aipr.2013.6749310.

7. Niu, Jianwei, Xiaoke Zhao, Muhammad Ali Abdul Aziz, Jiangwei Li, Kongqiao Wang, and Aimin Hao. "Human hand detection using robust local descriptors." In 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW). IEEE, 2013. http://dx.doi.org/10.1109/icmew.2013.6618239.

8. "ROBUST HUMAN SKIN DETECTION IN COMPLEX ENVIRONMENTS." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and Technology Publications, 2006. http://dx.doi.org/10.5220/0001376300270034.

9. Li, Liyuan, Jerry Kah Eng Hoe, Shuicheng Yan, and Xinguo Yu. "ML-fusion based multi-model human detection and tracking for robust human-robot interfaces." In 2009 Workshop on Applications of Computer Vision (WACV). IEEE, 2009. http://dx.doi.org/10.1109/wacv.2009.5403083.

10. Wang, Yijing, Lei Zhang, Zhiqiang Zuo, and Xiaoqiang Cheng. "Head-Body Correlation for Robust Crowd Human Detection." In 2021 40th Chinese Control Conference (CCC). IEEE, 2021. http://dx.doi.org/10.23919/ccc52363.2021.9550747.


Reports on the topic "Robust Human Detection"

1. Asari, Vijayan, Paheding Sidike, Binu Nair, Saibabu Arigela, Varun Santhaseelan, and Chen Cui. PR-433-133700-R01 Pipeline Right-of-Way Automated Threat Detection by Advanced Image Analysis. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), December 2015. http://dx.doi.org/10.55274/r0010891.

Abstract:
A novel algorithmic framework for the robust detection and classification of machinery threats and other potentially harmful objects intruding onto a pipeline right-of-way (ROW) is designed from three perspectives: visibility improvement, context-based segmentation, and object recognition/classification. In the first part of the framework, an adaptive image enhancement algorithm is utilized to improve the visibility of aerial imagery to aid in threat detection. In this technique, a nonlinear transfer function is developed to enhance the processing of aerial imagery with extremely non-uniform lighting conditions. In the second part of the framework, the context-based segmentation is developed to eliminate regions from imagery that are not considered to be a threat to the pipeline. Context-based segmentation makes use of a cascade of pre-trained classifiers to search for regions that are not threats. The context-based segmentation algorithm accelerates threat identification and improves object detection rates. The last phase of the framework is an efficient object detection model. Efficient object detection follows a three-stage approach which includes extraction of the local phase in the image and the use of local phase characteristics to locate machinery threats. The local phase is an image feature extraction technique which partially removes the lighting variance and preserves the edge information of the object. Multiple orientations of the same object are matched and the correct orientation is selected using feature matching by histogram of local phase in a multi-scale framework. The classifier outputs locations of threats to pipeline. The advanced automatic image analysis system is intended to be capable of detecting construction equipment along the ROW of pipelines with a very high degree of accuracy in comparison with manual threat identification by a human analyst.
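The report's exact transfer function is not reproduced in the abstract. As a minimal sketch of the general idea (a pixel-wise nonlinear curve that lifts shadows more than highlights, to cope with non-uniform lighting), assuming a grayscale image normalized to [0, 1] and a hypothetical `strength` parameter:

```python
import numpy as np

def enhance_nonuniform(img, strength=0.6):
    """Illustrative nonlinear transfer function for non-uniform lighting.

    img: float array in [0, 1]. Each pixel gets its own gamma exponent:
    darker pixels receive a smaller exponent (a stronger brightening
    curve), while bright pixels are left almost unchanged.
    """
    gamma = 1.0 - strength * (1.0 - img)   # dark pixel -> small gamma
    return np.clip(img ** gamma, 0.0, 1.0)
```

A dark pixel at 0.1 is lifted to roughly 0.35, while a bright pixel at 0.9 barely moves; the actual PRCI algorithm is adaptive and more sophisticated than this single-parameter curve.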
2. Douglas, Thomas A., Christopher A. Hiemstra, Stephanie P. Saari, Kevin L. Bjella, Seth W. Campbell, M. Torre Jorgenson, Dana R. N. Brown, and Anna K. Liljedahl. Degrading Permafrost Mapped with Electrical Resistivity Tomography, Airborne Imagery and LiDAR, and Seasonal Thaw Measurements. U.S. Army Engineer Research and Development Center, July 2021. http://dx.doi.org/10.21079/11681/41185.

Abstract:
Accurate identification of the relationships between permafrost extent and landscape patterns helps develop airborne geophysical or remote sensing tools to map permafrost in remote locations or across large areas. These tools are particularly applicable in discontinuous permafrost where climate warming or disturbances such as human development or fire can lead to rapid permafrost degradation. We linked field-based geophysical, point-scale, and imagery surveying measurements to map permafrost at five fire scars on the Tanana Flats in central Alaska. Ground-based elevation surveys, seasonal thaw-depth profiles, and electrical resistivity tomography (ERT) measurements were combined with airborne imagery and light detection and ranging (LiDAR) to identify relationships between permafrost geomorphology and elapsed time since fire disturbance. ERT was a robust technique for mapping the presence or absence of permafrost because of the marked difference in resistivity values for frozen versus unfrozen material. There was no clear relationship between elapsed time since fire and permafrost extent at our sites. The transition zone boundaries between permafrost soils and unfrozen soils in the collapse-scar bogs at our sites had complex and unpredictable morphologies, suggesting attempts to quantify the presence or absence of permafrost using aerial measurements alone could lead to incomplete results. The results from our study indicated limitations in being able to apply airborne surveying measurements at the landscape scale toward accurately estimating permafrost extent.
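The marked resistivity contrast between frozen and unfrozen material that made ERT robust here can be illustrated as a simple threshold classifier over an inverted resistivity section. The cutoff value below is a hypothetical placeholder, not one taken from the study:

```python
import numpy as np

# Hypothetical cutoff (ohm-m): frozen ground is markedly more resistive
# than unfrozen ground, so a single threshold can separate the two.
FROZEN_THRESHOLD = 1000.0

def map_permafrost(resistivity):
    """resistivity: 2-D array (depth x distance) of inverted ERT values
    in ohm-m. Returns a boolean mask, True = interpreted as frozen."""
    return np.asarray(resistivity) >= FROZEN_THRESHOLD

section = np.array([[200.0, 5000.0],
                    [150.0, 2500.0]])
mask = map_permafrost(section)  # right column flagged as permafrost
```

In practice an interpreter would also weigh geometry and survey context, which is why the authors caution against aerial measurements alone.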
3. D’Agostino, Martin, Nigel Cook, Liam O’Connor, Annette Sansom, Dima Semaan, Anne Wood, Sue Keenan, and Linda Scobie. Optimising extraction and RT-qPCR-based detection of hepatitis E virus (HEV) from pork meat and products. Food Standards Agency, July 2023. http://dx.doi.org/10.46756/sci.fsa.ylv958.

Abstract:
Hepatitis E is an infection of the liver caused by the hepatitis E virus (HEV). HEV infection usually produces a mild disease, hepatitis E. However, disease symptoms can vary from no apparent symptoms to liver failure. There are 4 main types (genotypes) of the virus that cause concern in humans. Genotypes 1 and 2 infections are mainly restricted to humans but 3 and 4 can be identified in numerous other animal species including pigs. Transmission routes of HEV genotypes 3 and 4 have been identified to include the consumption of food products derived from infected animals and shellfish, and via transfusion of infected blood products. Hepatitis E infection is still an emerging issue in the UK and there is evidence to suggest an association of this virus with undercooked pork and pork products. Currently, there is no standardised method for evaluating the stability of HEV that may be present in food during cooking processes. There is also a lack of a suitable method that can detect only infectious HEV. The proposed project aimed to address a key gap in resources for methodology related to the detection of HEV in pork and pork products. Currently the lack of a standardised method for the detection of HEV has resulted in individual laboratories either utilising their own methods or adapting methods from previously published work. This leads to a high degree of variability in the interpretation of results and does nothing to progress or provide benefit to the food industry. By interrogating the existing published methods, the project sought to refine and optimise elements of existing protocols in order to enhance the performance characteristics of the method and to simplify the methodology wherever possible. The aim was to produce a validated method, both robust and repeatable, which can be easily integrated into food laboratories capable of performing virus-related work.
Overall, the final method chosen was devoid of hazardous reagents and utilised easily accessible equipment. To verify the robustness of the method, an international collaborative trial was performed, with 4 UK and 3 European participant laboratories. The participating laboratories conducted analyses of pork liver samples artificially contaminated with various levels of HEV (including uncontaminated samples). The trial showed that the HEV DETECT method was just as reproducible between laboratories as it was repeatable within a laboratory. It is envisaged that the developed system will be put forward as a suitable candidate for ISO certification as a standard method. The establishment of these methods in UK laboratories could result in the availability of independent testing services for both domestic and imported pork/pork-based products. The availability of this method is, in essence, an innovation. This work is essential to industry to help support further research to ensure that public health safety and confidence in pork and other "HEV risk" food products is maintained and improved.
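As a rough illustration of how RT-qPCR results like these are typically interpreted (the cycle-threshold cutoff and control logic below are generic assumptions, not values taken from the HEV DETECT protocol):

```python
def interpret_rt_qpcr(ct_sample, ct_positive_control, ct_cutoff=40.0):
    """Classify a single RT-qPCR reaction.

    ct_sample: Ct of the test well, or None if no amplification.
    ct_positive_control: Ct of the run's positive control, or None.
    A run is only valid if the positive control amplified below the
    cutoff; within a valid run, a sample Ct below the cutoff is
    called detected.
    """
    if ct_positive_control is None or ct_positive_control >= ct_cutoff:
        return "invalid run"          # control failed: repeat the assay
    if ct_sample is None or ct_sample >= ct_cutoff:
        return "HEV RNA not detected"
    return "HEV RNA detected"
```

Note that, as the abstract points out, a positive RT-qPCR call indicates the presence of viral RNA, not necessarily infectious virus.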