To view the other types of publications on this topic, follow the link: Active stereo vision.

Dissertations on the topic "Active stereo vision"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the top 19 dissertations for your research on the topic "Active stereo vision".

Next to every work in the bibliography you will find an "Add to bibliography" option. Use it, and a citation of the selected work in the required style (APA, MLA, Harvard, Chicago, Vancouver, etc.) will be generated automatically.

You can also download the full text of the publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse dissertations from many different disciplines and assemble your bibliography correctly.

1

Li, Fuxing. „Active stereo for AGV navigation“. Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338984.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Fung, Chun Him. „A biomimetic active stereo head with torsional control /“. View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?ECED%202006%20FUNG.

3

Wong, Yuk Lam. „Optical tracking for medical diagnosis based on active stereo vision /“. View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202006%20WONGY.

4

Chan, Balwin Man Hong. „A miniaturized 3-D endoscopic system using active stereo-vision /“. View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20CHANB.

Annotation:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 106-108). Also available in electronic version. Access restricted to campus users.
5

Kihlström, Helena. „Active Stereo Reconstruction using Deep Learning“. Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158276.

Annotation:
Depth estimation using stereo images is an important task in many computer vision applications. A stereo camera contains two image sensors that observe the scene from slightly different viewpoints, making it possible to find the depth of the scene. An active stereo camera also uses a laser projector that projects a pattern into the scene. The advantage of the laser pattern is the additional texture that gives better depth estimations in dark and textureless areas.  Recently, deep learning methods have provided new solutions producing state-of-the-art performance in stereo reconstruction. The aim of this project was to investigate the behavior of a deep learning model for active stereo reconstruction, when using data from different cameras. The model is self-supervised, which solves the problem of having enough ground truth data for training the model. It instead uses the known relationship between the left and right images to let the model learn the best estimation. The model was separately trained on datasets from three different active stereo cameras. The three trained models were then compared using evaluation images from all three cameras. The results showed that the model did not always perform better on images from the camera that was used for collecting the training data. However, when comparing the results of different models using the same test images, the model that was trained on images from the camera used for testing gave better results in most cases.
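The left-right reconstruction idea this annotation describes can be illustrated with a minimal one-row numpy sketch. The function names and the convention that left pixel x matches right pixel x − d are our own assumptions, not the thesis code; real self-supervised models add occlusion handling and smoothness terms.

```python
import numpy as np

def warp_right_to_left(right_row, disparity):
    """Reconstruct the left image row by sampling the right row at x - d(x),
    linearly interpolating at fractional coordinates."""
    x = np.arange(right_row.size)
    src = np.clip(x - disparity, 0.0, right_row.size - 1.0)
    x0 = np.floor(src).astype(int)
    x1 = np.minimum(x0 + 1, right_row.size - 1)
    w = src - x0
    return (1.0 - w) * right_row[x0] + w * right_row[x1]

def photometric_loss(left_row, right_row, disparity):
    """Mean absolute error between the left row and the warped right row;
    a self-supervised model adjusts the disparity to minimize this."""
    return float(np.mean(np.abs(left_row - warp_right_to_left(right_row, disparity))))
```

With the correct disparity the reconstruction matches the left row and the loss drops to zero, which is exactly the signal that replaces ground-truth depth during training.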
6

Urquhart, Colin W. „The active stereo probe : the design and implementation of an active videometrics system“. Thesis, University of Glasgow, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312498.

7

Björkman, Mårten. „Real-Time Motion and Stereo Cues for Active Visual Observers“. Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3382.

8

Ulusoy, Ilkay. „Active Stereo Vision: Depth Perception For Navigation, Environmental Map Formation And Object Recognition“. Phd thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/12604737/index.pdf.

Annotation:
Stereo-vision-based navigation and mapping is used in very few mobile robotic applications, because dealing with stereo images is hard and time consuming. Despite all the problems, stereo vision remains one of the most important resources through which a mobile robot can know the world, because imaging provides much more information than most other sensors. Real robotic applications are very complicated: besides the problem of finding how the robot should behave to complete the task at hand, the problems faced while controlling the robot's internal parameters bring a high computational load. Thus, it is preferable to find the strategy to be followed in a simulated world and then apply it on a real robot for real applications. In this study, we describe an algorithm for object recognition and cognitive map formation using stereo image data in a 3D virtual world in which 3D objects and a robot with an active stereo imaging system are simulated. The stereo imaging system is simulated so that the relevant properties of the human visual system are parameterized. Only the stereo images obtained from this world are supplied to the virtual robot. By applying our disparity algorithm, a depth map for the current stereo view is extracted. Using the depth information for the current view, a cognitive map of the environment is updated gradually while the virtual agent explores the environment. The agent explores its environment in an intelligent way, using the current view and the environmental map built up so far. During exploration, if a new object is observed, the robot turns around it, obtains stereo images from different directions, and extracts a 3D model of the object. Using the available set of possible objects, it recognizes the object.
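The depth-map extraction step mentioned above rests on the standard pinhole stereo relation Z = f·B/d. The annotation does not give the disparity algorithm itself; the sketch below only shows the geometric conversion, with illustrative parameter values of our own choosing.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m, min_disp=1e-6):
    """Convert a disparity map (pixels) into metric depth via Z = f * B / d.
    Disparities at or below min_disp are clamped, mapping them to 'very far'."""
    d = np.maximum(np.asarray(disparity_px, dtype=float), min_disp)
    return focal_px * baseline_m / d

# With a 500 px focal length and a 10 cm baseline, a 50 px disparity
# corresponds to a point 1 m from the cameras.
depth = disparity_to_depth([50.0], focal_px=500.0, baseline_m=0.1)
```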
9

Huster, Andrew Christian. „Design and Validation of an Active Stereo Vision System for the OSU EcoCAR 3“. The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1499251870670736.

10

Mohammadi, Vahid. „Design, Development and Evaluation of a System for the Detection of Aerial Parts and Measurement of Growth Indices of Bell Pepper Plant Based on Stereo and Multispectral Imaging“. Electronic Thesis or Diss., Bourgogne Franche-Comté, 2022. http://www.theses.fr/2022UBFCK109.

Annotation:
During the growth of plants, monitoring them brings many benefits to producers. This monitoring includes the measurement of physical properties, counting plant leaves, and detecting plants and separating them from weeds. All of this can be done with different techniques; non-destructive techniques are preferable, however, because a plant is very sensitive and any manipulation can disturb its growth or cause the loss of leaves or branches. Imaging techniques are among the best solutions for plant growth monitoring and geometric measurements. In this project, therefore, the use of stereo imaging and multispectral data was studied. Active and passive stereo imaging were employed to estimate physical properties and count leaves, and multispectral data were used to separate crop from weed. Bell pepper plants were used for imaging measurements over a period of 30 days, and for crop/weed separation the spectral responses of bell pepper and five weeds were measured. Nine physical properties of pepper leaves (i.e. main leaf diameters, leaf area, leaf perimeter, etc.) were measured using a scanner and served as a database and as ground truth for comparing the estimated values to the actual values. The stereo system consisted of two LogiTech cameras and a video projector. First, the stereo system was calibrated using sample images of a standard checkerboard in different positions and angles. The system was controlled from the computer to turn a light line on, record videos from both cameras while the light was swept over the plant, and then stop the light. The frames were extracted and processed. The processing algorithm first filtered the images to remove noise and then thresholded away the unwanted pixels of the environment. Then, using the center-of-mass peak detection method, the main, central part of the light line was extracted. The images were then rectified using the calibration information.
The corresponding pixels were then detected and used to build the 3D model. The obtained point cloud was transformed into a meshed surface and used to measure the physical properties. Passive stereo imaging was used for leaf detection and counting; six different matching algorithms and three cost functions were used and compared for passive stereo matching. For the spectral responses, the plants were moved fresh to the laboratory, and the leaves were detached and placed on a blurred dark background. Type A lights were used for illumination, and the spectral measurements were carried out with a spectroradiometer from 380 nm to 1000 nm. To reduce the dimensionality of the data, PCA and the wavelet transform were used. The results of this study showed that stereo imaging can provide a cheap and non-destructive tool for agriculture. An important advantage of active stereo imaging is that it is light-independent and can be used during the night. The use of active stereo in the primary stage of growth provides acceptable results, but after that stage the system is unable to detect and reconstruct all leaves and plant parts. Using ASI, R2 values of 0.978 and 0.967 were obtained for the estimation of leaf area and perimeter, respectively. The results of separating crop and weeds using spectral data were very promising, and the classifier, which was based on deep learning, could completely separate pepper from the five other weeds.
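The center-of-mass peak detection used above to localise the laser line can be sketched per image column as follows. This is a minimal illustration with our own function name and a synthetic intensity column, not the thesis code.

```python
import numpy as np

def stripe_center_of_mass(column_intensity):
    """Sub-pixel row of the laser stripe in one image column, taken as the
    intensity-weighted mean of the row indices (center-of-mass peak detection)."""
    rows = np.arange(column_intensity.size)
    total = column_intensity.sum()
    if total == 0:
        return float("nan")  # no stripe visible in this column
    return float((rows * column_intensity).sum() / total)

# A symmetric intensity bump centred on row 2 yields exactly 2.0.
center = stripe_center_of_mass(np.array([0.0, 1.0, 2.0, 1.0, 0.0]))
```

Running this over every column of a frame traces the stripe with sub-pixel accuracy, which is what makes the later triangulation precise.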
11

Sanchez-Riera, Jordi. „Capacités audiovisuelles en robot humanoïde NAO“. Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00953260.

Annotation:
In this thesis we investigate the complementarity of auditory and visual sensory data for building a high-level interpretation of a scene. The audiovisual (AV) input received by the robot is a function both of the external environment and of the robot's actual location, which is closely tied to its actions. Current research in AV scene analysis has tended to focus on fixed observers. However, psychophysical evidence suggests that humans use small head and body movements to optimise the position of their ears with respect to the source. Similarly, by walking or turning, the robot may be able to improve the incoming visual data. For example, in binocular perception it is desirable to reduce the viewing distance to an object of interest, which allows the 3D structure of the object to be analysed at a higher depth resolution.
12

(8071319), Michael Clark Feller. „Active Stereo Vision for Precise Autonomous Vehicle Hitching“. Thesis, 2019.

Annotation:

This thesis describes the development of a low-cost, low-power, accurate sensor designed for precise, feedback control of an autonomous vehicle to a hitch. Few studies have been completed on the hitching problem, yet it is an important challenge to be solved for vehicles in the agricultural and transportation industries. Existing sensor solutions are high cost, high power, and require modification to the hitch in order to work. Other potential sensor solutions such as LiDAR and Digital Fringe Projection suffer from these same fundamental problems.

The solution that has been developed uses an active stereo vision system, combining classical stereo vision with a laser speckle projection system, which solves the correspondence problem experienced by classic stereo vision sensors. A third camera is added to the sensor for texture mapping. As a whole, the system cost is $188, with a power usage of 2.3 W.

To test the system, a model test of the hitching problem was developed using an RC car and a target to represent a hitch. In the application, both the stereo system and the texture camera are used for measurement of the hitch, and a control system is implemented to precisely control the vehicle to the hitch. The system can successfully control the vehicle from within 35° of perpendicular to the hitch, to a final position with an overall standard deviation of 3.0 mm of lateral error and 1.5° of angular error. Ultimately, this is believed to be the first low-power, low-cost hitching system that does not require modification of the hitch in order to sense it.

13

„ACTIVE STEREO VISION: DEPTH PERCEPTION FOR NAVIGATION, ENVIRONMENTAL MAP FORMATION AND OBJECT RECOGNITION“. Phd thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/12604737/index.pdf.

14

Wei, Hsu-chiang, und 魏緒強. „Using Stereo Vision with Active Laser for the Feature Recognition of Mold Surface“. Thesis, 1998. http://ndltd.ncl.edu.tw/handle/9hxwxr.

Annotation:
Master's thesis
National Cheng Kung University
Department of Mechanical Engineering
86
This study uses image processing techniques with an active laser beam for geometrical recognition of dies and molds. The mold images were grabbed by two CCD cameras. A gradient filter was applied with a proper threshold value to obtain binary images. Three images, obtained under different lighting positions, were overlapped to recover the boundary information. Morphology operations were used to fill empty holes and thin the boundary, yielding the edges of each surface on the mold. A mark-area method then divides the surfaces and finds the centroid and principal axis, which serve as the projection direction of the laser beam. Finally, the depth data of each surface on the mold are computed with the active laser beam, and the feature of the surface is judged from its depth data and the variation of the slope. Because mold surfaces are very complex, we restrict recognition to planes and surfaces of revolution such as cones, cylinders, and spheres, where each axis of revolution is either vertical or parallel to the planar parting surface. To reduce the interference resulting from variation in the depth data points, a regression line is computed over every 10 points. The features of the mold surface are then recognized from the depth data and its variation; planes, spherical surfaces, and cone surfaces each have distinctive features. For example, for a plane the variation of depth and slope along the principal-axis direction is zero. Finally, a test mold is used to validate the approach.
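The every-10-points regression used above to suppress noise in the depth profile can be sketched as follows. Function name and the fallback for short trailing blocks are our own; the thesis describes only the idea of fitting a line to each group of 10 samples.

```python
import numpy as np

def regress_depth_blocks(depths, window=10):
    """Fit a straight line to every block of `window` consecutive depth samples
    and return the fitted values, suppressing point-to-point noise."""
    depths = np.asarray(depths, dtype=float)
    out = np.empty(depths.size)
    for start in range(0, depths.size, window):
        block = depths[start:start + window]
        if block.size < 2:          # too few points to fit a line
            out[start:start + window] = block
            continue
        x = np.arange(block.size, dtype=float)
        slope, intercept = np.polyfit(x, block, 1)
        out[start:start + window] = slope * x + intercept
    return out
```

For a planar surface the fitted slope is constant and its variation is zero, which is exactly the cue the study uses to classify a surface as a plane.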
15

Saleiro, Mário Alexandre Nobre. „Active vision in robot cognition“. Doctoral thesis, 2016. http://hdl.handle.net/10400.1/8985.

Annotation:
Doctoral thesis, Informatics Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2016
As technology and our understanding of the human brain evolve, the idea of creating robots that behave and learn like humans seems to receive more and more attention. Although our knowledge and computational power are constantly growing, we still have much to learn before we can create such machines. Nonetheless, that does not mean we cannot try to validate our knowledge by creating biologically inspired models that mimic some of our brain processes and use them in robotics applications. In this thesis several biologically inspired models for vision are presented: a keypoint descriptor based on cortical cell responses, which allows the creation of binary codes that can represent specific image regions; and a stereo vision model based on cortical cell responses, together with visual saliency based on color, disparity and motion. Active vision is achieved by combining these vision modules with an attractor-dynamics approach for head pan control. Although biologically inspired models are usually very heavy in terms of processing power, these models were designed to be lightweight so that they can be tested for real-time robot navigation, object recognition and vision steering. The developed vision modules were tested on a child-sized robot, which uses only visual information to navigate, to detect obstacles and to recognize objects in real time. The biologically inspired visual system is integrated with a cognitive architecture, which combines vision with short- and long-term memory for simultaneous localization and mapping (SLAM). Motor control for navigation is also done using attractor dynamics.
16

Morris, Julie Anne. „Design of an active stereo vision 3D scene reconstruction system based on the linear position sensor module“. 2006. http://etd.utk.edu/2006/MorrisJulie.pdf.

17

Chapdelaine-Couture, Vincent. „Le cinéma omnistéréo ou l'art d'avoir des yeux tout le tour de la tête“. Thèse, 2011. http://hdl.handle.net/1866/8382.

Annotation:
This thesis deals with aspects of the shooting, projection and perception of stereo panoramic cinema, also called omnistereo cinema. It falls largely within the field of computer vision, but it also touches the areas of computer graphics and human visual perception. Omnistereo cinema uses immersive screens to project videos that provide depth information about a scene all around the spectators. Many challenges remain in omnistereo cinema, in particular shooting omnistereo videos of dynamic scenes; polarized projection on highly reflective screens, which makes it difficult to recover their shape by active reconstruction; and the perception of depth distortions introduced by omnistereo images. Our thesis addressed these challenges by making three major contributions. First, we developed the first mosaicing method for omnistereo videos of stochastic and localized motions. We devised a psychophysical experiment that shows the effectiveness of the method for scenes without isolated structure, such as water flows. We also propose a shooting method that adds less constrained foreground motions, such as a moving actor, to these videos. Second, we introduced new light patterns that allow a camera and a projector to recover the shape of objects likely to produce interreflections. These patterns are general enough to recover not only the shape of omnistereo screens, but also very complex objects that have depth discontinuities from the viewpoint of the camera. Third, we showed that omnistereo distortions are negligible for a viewer located at the center of a cylindrical screen, as they fall in the periphery of the visual field, where the human visual system is less accurate.
18

Farahmand, Fazel. „Development of a novel stereo vision method and its application to a six degrees of action robot arm as an assistive aid technology“. 2005. http://hdl.handle.net/1993/18044.

19

Cunhal, Miguel João Alves. „Sistema de visão para a interação e colaboração humano-robô: reconhecimento de objetos, gestos e expressões faciais“. Master's thesis, 2014. http://hdl.handle.net/1822/42023.

Annotation:
Integrated master's dissertation in Industrial Electronics and Computer Engineering
The objective of this dissertation was the design, implementation and validation of a vision system to be applied on the anthropomorphic robot ARoS (Anthropomorphic Robotic System), enabling the autonomous execution of interaction and cooperation tasks with human partners. Three aspects essential to efficient and natural interaction between robot and human were explored: object, gesture and facial expression recognition. Object recognition, because the robot should be aware of the surrounding environment so that it can interact with the objects in the scene and with the human partner. A hybrid recognition system was constructed based on two different approaches: global features and local features. For the approach based on global features, Hu's moment invariants were used to classify the object. For the approach based on local features, several methods of local feature detection and description were explored, and SURF (Speeded Up Robust Features) was selected for the final implementation. The system also returns the object's spatial location through a stereo vision system. Gesture recognition, because gestures can provide information about the human's intentions, so that the robot can act on them. Color detection was used to detect the hand, and Hu's moment invariants were used for feature extraction. Classification is performed by checking Hu's moment invariants alongside an analysis of the convex-hull segmentation result. Finally, facial expression recognition, because it can indicate the human's emotional state. As with gestures, facial expression recognition and the resulting emotional-state classification allow the robot to act accordingly, possibly even changing the course of the task it was carrying out. A robustness-analysis tool was developed to evaluate the previously created system (FaceCoder) and, based on its results, some relevant changes were introduced into FaceCoder.