Dissertations on the topic "Active stereo vision"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 19 dissertations for research on the topic "Active stereo vision".
Next to every work in the list, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its online abstract, whenever these are available in the metadata.
Browse dissertations from a wide range of disciplines and assemble your bibliography correctly.
Li, Fuxing. "Active stereo for AGV navigation". Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338984.
Fung, Chun Him. "A biomimetic active stereo head with torsional control". View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?ECED%202006%20FUNG.
Wong, Yuk Lam. "Optical tracking for medical diagnosis based on active stereo vision". View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202006%20WONGY.
Chan, Balwin Man Hong. "A miniaturized 3-D endoscopic system using active stereo-vision". View abstract or full-text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20CHANB.
Includes bibliographical references (leaves 106-108). Also available in electronic version. Access restricted to campus users.
Kihlström, Helena. "Active Stereo Reconstruction using Deep Learning". Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158276.
Urquhart, Colin W. "The active stereo probe: the design and implementation of an active videometrics system". Thesis, University of Glasgow, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312498.
Björkman, Mårten. "Real-Time Motion and Stereo Cues for Active Visual Observers". Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3382.
Ulusoy, Ilkay. "Active Stereo Vision: Depth Perception For Navigation, Environmental Map Formation And Object Recognition". PhD thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/12604737/index.pdf.
[…] internal parameters bring a high computational load. Thus, it is preferable to find the strategy to be followed in a simulated world and then apply it on a real robot in real applications. In this study, we describe an algorithm for object recognition and cognitive map formation using stereo image data in a 3D virtual world in which 3D objects and a robot with an active stereo imaging system are simulated. The stereo imaging system is simulated so that the relevant properties of the human visual system are parameterized. Only the stereo images obtained from this world are supplied to the virtual robot. By applying our disparity algorithm, a depth map for the current stereo view is extracted. Using the depth information for the current view, a cognitive map of the environment is updated gradually while the virtual agent explores the environment. The agent explores its environment in an intelligent way, using the current view and the environmental map built up to that point. If a new object is observed during exploration, the robot turns around it, obtains stereo images from different directions, and extracts a 3D model of the object. Using the available set of possible objects, it then recognizes the object.
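The depth-from-disparity step that abstracts like this one rely on is the classical pinhole stereo relation Z = f·B/d. A minimal sketch of that relation (not the author's code; the focal length, baseline, and disparity values below are made up for illustration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: depth Z = f * B / d.

    disparity_px: horizontal pixel offset between left and right views
    focal_px:     focal length in pixels
    baseline_m:   distance between the two camera centers in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Nearby objects produce large disparities, distant ones small disparities.
near = depth_from_disparity(40.0, focal_px=800.0, baseline_m=0.06)  # 1.2 m
far = depth_from_disparity(4.0, focal_px=800.0, baseline_m=0.06)    # 12.0 m
```

Running the per-pixel disparity algorithm over the whole view and applying this relation pixel by pixel is what turns a disparity map into the depth map used for the cognitive map.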
Huster, Andrew Christian. "Design and Validation of an Active Stereo Vision System for the OSU EcoCAR 3". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1499251870670736.
Mohammadi, Vahid. "Design, Development and Evaluation of a System for the Detection of Aerial Parts and Measurement of Growth Indices of Bell Pepper Plant Based on Stereo and Multispectral Imaging". Electronic thesis or dissertation, Bourgogne Franche-Comté, 2022. http://www.theses.fr/2022UBFCK109.
Monitoring plants during their growth brings many benefits to producers. This monitoring includes measuring physical properties, counting leaves, detecting plants, and separating them from weeds. All of this can be done with different techniques, but non-destructive techniques are preferable, because plants are very sensitive: any manipulation can disturb their growth or cause the loss of leaves or branches. Imaging techniques are among the best solutions for monitoring plant growth and making geometric measurements. In this project, the use of stereo imaging and multispectral data was therefore studied. Active and passive stereo imaging were employed to estimate physical properties and count leaves, and multispectral data were used to separate crop from weed. A bell pepper plant was imaged over a period of 30 days, and for crop/weed separation the spectral responses of bell pepper and five weeds were measured. Nine physical properties of pepper leaves (the main leaf diameters, leaf area, leaf perimeter, etc.) were measured with a scanner and used both as a database and as ground truth for comparing estimated values to actual values. The stereo system consisted of two Logitech cameras and a video projector. First, the stereo system was calibrated using sample images of a standard checkerboard in different positions and at different angles. The system was controlled from the computer: a light line was turned on, videos from both cameras were recorded while the line was swept across the plant, and the light was then stopped. The frames were extracted and processed. The processing algorithm first filtered the images to remove noise and then thresholded away the unwanted background pixels. Using the Center of Mass peak-detection method, the central part of the light line was extracted. The images were then rectified using the calibration information.
The corresponding pixels were then detected and used to build the 3D model. The resulting point cloud was transformed into a meshed surface and used to measure the physical properties. Passive stereo imaging was used for leaf detection and counting; six different matching algorithms and three cost functions were compared. For the spectral measurements, the plants were brought fresh to the laboratory, the leaves were detached and placed on a dark background, type A lamps were used for illumination, and the spectra were recorded with a spectroradiometer from 380 nm to 1000 nm. PCA and the wavelet transform were used to reduce the dimensionality of the data. The results of this study show that stereo imaging can provide a cheap, non-destructive tool for agriculture. An important advantage of active stereo imaging is that it is independent of ambient light and can be used at night. Active stereo gives acceptable results during the primary stage of growth, but after that stage the system is unable to detect and reconstruct all leaves and plant parts. Using active stereo imaging, R² values of 0.978 and 0.967 were obtained for the estimation of leaf area and perimeter, respectively. The separation of crop and weeds using spectral data was very promising: the classifier, which was based on deep learning, could completely separate pepper from the other five weeds.
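The Center of Mass peak detection used above to localize the projected light line can be sketched in a few lines: within one image column, the sub-pixel position of the stripe is the intensity-weighted mean of the pixels above a background threshold. A rough sketch under those assumptions (hypothetical values, not the thesis code):

```python
def stripe_center(column, threshold):
    """Sub-pixel center of a laser stripe in one image column.

    column:    list of pixel intensities along the column
    threshold: intensities at or below this are treated as background
    """
    weighted = [(i, v - threshold) for i, v in enumerate(column) if v > threshold]
    if not weighted:
        return None  # no stripe visible in this column
    total = sum(w for _, w in weighted)
    return sum(i * w for i, w in weighted) / total

# A symmetric intensity bump centered on pixel index 3:
center = stripe_center([0, 0, 10, 30, 10, 0, 0], threshold=5)  # -> 3.0
```

Subtracting the threshold before weighting keeps dim background pixels from biasing the estimate, which matters when the stripe is swept over curved leaf surfaces.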
Sanchez-Riera, Jordi. "Capacités audiovisuelles en robot humanoïde NAO". PhD thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00953260.
Feller, Michael Clark. "Active Stereo Vision for Precise Autonomous Vehicle Hitching". Thesis, 2019.
This thesis describes the development of a low-cost, low-power, accurate sensor designed for precise feedback control of an autonomous vehicle to a hitch. Few studies have addressed the hitching problem, yet it is an important challenge for vehicles in the agricultural and transportation industries. Existing sensor solutions are high-cost, high-power, and require modification of the hitch in order to work. Other potential sensor solutions, such as LiDAR and digital fringe projection, suffer from the same fundamental problems.
The solution that has been developed uses an active stereo vision system, combining classical stereo vision with a laser speckle projection system, which solves the correspondence problem experienced by classic stereo vision sensors. A third camera is added to the sensor for texture mapping. As a whole, the system cost is $188, with a power usage of 2.3 W.
To test the system, a model test of the hitching problem was developed using an RC car and a target representing a hitch. In this application, both the stereo system and the texture camera are used to measure the hitch, and a control system precisely steers the vehicle to the hitch. The system can successfully control the vehicle from within 35° of perpendicular to the hitch, to a final position with an overall standard deviation of 3.0 mm of lateral error and 1.5° of angular error. Ultimately, this is believed to be the first low-power, low-cost hitching system that does not require modification of the hitch in order to sense it.
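The correspondence search that the laser speckle projection makes reliable can be illustrated with a one-dimensional sum-of-absolute-differences (SAD) match along a rectified scanline: the projected texture gives every window a distinctive signature, so the minimum-cost shift is unambiguous. This is only a sketch of the general technique, not the sensor's actual implementation:

```python
def best_disparity(left_row, right_row, x, half_window, max_disp):
    """Find the disparity d minimizing SAD between the window around
    left_row[x] and the window around right_row[x - d]."""
    def sad(d):
        return sum(abs(left_row[x + k] - right_row[x - d + k])
                   for k in range(-half_window, half_window + 1))
    candidates = [d for d in range(max_disp + 1) if x - d - half_window >= 0]
    return min(candidates, key=sad)

# Right view of a speckle-like pattern, and a left view shifted by 2 pixels:
right = [0, 0, 9, 1, 7, 3, 0, 0, 0, 0]
left = [0, 0] + right[:-2]          # left[x] == right[x - 2]
d = best_disparity(left, right, x=5, half_window=1, max_disp=4)  # -> 2
```

On an untextured surface many shifts would tie at near-zero cost; the speckle pattern is what breaks those ties.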
Wei, Hsu-chiang, and 魏緒強. "Using Stereo Vision with Active Laser for the Feature Recognition of Mold Surface". Thesis, 1998. http://ndltd.ncl.edu.tw/handle/9hxwxr.
Der volle Inhalt der Quelle國立成功大學
機械工程學系
86
This study uses image processing techniques with an active laser beam for the geometrical recognition of dies and molds. The mold images were grabbed from two CCD cameras. A gradient filter with a proper threshold value was employed to obtain binary images. Three images, obtained with different lighting positions, are overlapped to recover the boundary information. Morphological operations are used to fill empty holes and thin the boundary to obtain the edges of each surface of the mold. A mark-area method then segments each surface and finds its centroid and principal axis, which define the projection direction of the laser beam. Finally, the depth data of each surface are computed with the active laser beam, and the feature of the surface is judged from the depth data and the variation of the slope. Because mold surfaces can be very complex, we restrict recognition to planes and surfaces of revolution such as cones, cylinders, and spheres, where each axis of revolution is either vertical or parallel to the planar parting surface. To reduce the interference caused by variation in the depth data points, a regression line is computed every 10 points. The feature of the mold surface is recognized from its depth data and the variation of the depth: planes, spheres, and cones each have characteristic signatures; for example, for a plane the variation of the depth and slope along the principal axis is zero. Finally, a test mold is used to validate the approach.
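The smoothing step described above, fitting a regression line to every 10 consecutive depth samples and classifying surfaces by how the slope varies, can be sketched as follows (illustrative only; the chunk size of 10 follows the abstract, everything else is made up):

```python
def chunk_slopes(depths, chunk=10):
    """Least-squares slope of each consecutive group of `chunk` samples."""
    slopes = []
    for start in range(0, len(depths) - chunk + 1, chunk):
        ys = depths[start:start + chunk]
        xs = range(chunk)
        mx = sum(xs) / chunk
        my = sum(ys) / chunk
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        slopes.append(num / den)
    return slopes

# A planar surface yields a constant slope in every chunk:
plane = [0.5 * i for i in range(30)]
slopes = chunk_slopes(plane)        # -> [0.5, 0.5, 0.5]
```

A plane gives near-zero slope variation between chunks, while a sphere or cone shows a characteristic drift in the slope, which is the signature the abstract describes.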
Saleiro, Mário Alexandre Nobre. "Active vision in robot cognition". Doctoral thesis, 2016. http://hdl.handle.net/10400.1/8985.
As technology and our understanding of the human brain evolve, the idea of creating robots that behave and learn like humans attracts more and more attention. Although our knowledge and the available computational power are constantly growing, we still have much to learn before we can create such machines. Nonetheless, we can try to validate our knowledge by creating biologically inspired models that mimic some of our brain processes and use them in robotics applications. In this thesis, several biologically inspired models for vision are presented: a keypoint descriptor based on cortical cell responses, which produces binary codes that can represent specific image regions; a stereo vision model based on cortical cell responses; and visual saliency based on color, disparity, and motion. Active vision is achieved by combining these vision modules with an attractor-dynamics approach for head pan control. Although biologically inspired models are usually very demanding in terms of processing power, these models were designed to be lightweight so that they could be tested in real time for robot navigation, object recognition, and vision steering. The developed vision modules were tested on a child-sized robot that uses only visual information to navigate, detect obstacles, and recognize objects in real time. The biologically inspired visual system is integrated with a cognitive architecture that combines vision with short- and long-term memory for simultaneous localization and mapping (SLAM). Motor control for navigation is also done using attractor dynamics.
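A binary keypoint code of the kind described, thresholded filter responses packed into bits and compared by Hamming distance, can be sketched as follows (the response values are hypothetical; this is not the thesis model):

```python
def binarize(responses, threshold=0.0):
    """Pack real-valued filter responses into an integer bit code."""
    code = 0
    for r in responses:
        code = (code << 1) | (1 if r > threshold else 0)
    return code

def hamming(a, b):
    """Number of differing bits between two codes (XOR popcount)."""
    return bin(a ^ b).count("1")

a = binarize([0.3, -0.1, 0.8, -0.5])   # bits 1010 -> 10
b = binarize([0.3, -0.1, -0.2, -0.5])  # bits 1000 -> 8
dist = hamming(a, b)                   # -> 1
```

Binary codes like this are attractive on lightweight robot hardware because matching reduces to XOR and a popcount rather than floating-point distance computations.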
Morris, Julie Anne. "Design of an active stereo vision 3D scene reconstruction system based on the linear position sensor module". 2006. http://etd.utk.edu/2006/MorrisJulie.pdf.
Chapdelaine-Couture, Vincent. "Le cinéma omnistéréo ou l'art d'avoir des yeux tout le tour de la tête". Thesis, 2011. http://hdl.handle.net/1866/8382.
This thesis deals with aspects of shooting, projection, and perception of stereo panoramic cinema, also called omnistereo cinema. It falls largely in the field of computer vision, but it also touches on computer graphics and human visual perception. Omnistereo cinema uses immersive screens to project videos that provide depth information of a scene all around the spectators. Many challenges remain in omnistereo cinema, in particular shooting omnistereo videos of dynamic scenes, polarized projection on highly reflective screens, which makes it difficult to recover their shape by active reconstruction, and the perception of depth distortions introduced by omnistereo images. Our thesis addresses these challenges through three major contributions. First, we developed the first mosaicing method for omnistereo videos with stochastic, localized motion. A psychophysical experiment shows the effectiveness of the method for scenes without isolated structure, such as water flows. We also propose a shooting method that adds less constrained foreground motion, such as a moving actor, to these videos. Second, we introduced new light patterns that allow a camera and a projector to recover the shape of objects likely to produce interreflections. These patterns are general enough to recover not only the shape of omnistereo screens, but also very complex objects with depth discontinuities from the viewpoint of the camera. Third, we showed that omnistereo distortions are negligible for a viewer located at the center of a cylindrical screen, as they occur in the periphery of the visual field, where the human visual system becomes less accurate.
Farahmand, Fazel. "Development of a novel stereo vision method and its application to a six degrees of action robot arm as an assistive aid technology". 2005. http://hdl.handle.net/1993/18044.
Cunhal, Miguel João Alves. "Sistema de visão para a interação e colaboração humano-robô: reconhecimento de objetos, gestos e expressões faciais". Master's thesis, 2014. http://hdl.handle.net/1822/42023.
The objective of this dissertation was the design, implementation, and validation of a vision system for the anthropomorphic robot ARoS (Anthropomorphic Robotic System), enabling the autonomous execution of interaction and cooperation tasks with human partners. Three aspects essential to efficient, natural interaction between robot and human were explored: object, gesture, and facial expression recognition. Object recognition, because the robot should be aware of its surroundings so that it can interact with the objects in the scene and with the human partner: a hybrid recognition system was built on two different approaches, global features and local features. For the global-feature approach, Hu's moment invariants were used to classify the object. For the local-feature approach, several local feature detection and description methods were explored, and SURF (Speeded Up Robust Features) was selected for the final implementation. The system also returns the object's spatial location through a stereo vision system. Gesture recognition, because gestures provide information about the human's intentions, so that the robot can act according to them: color detection was used to detect the hand, and Hu's moment invariants were used for feature extraction; classification is performed by checking the Hu moment invariants together with an analysis of the convex-hull segmentation result. Finally, facial expression recognition, because it can indicate the human's emotional state: as with gestures, recognizing facial expressions and inferring the emotional state allows the robot to act accordingly, and even to change the course of the task it was performing. Robustness-analysis software was developed to evaluate the previously created system (FaceCoder) and, based on those results, some relevant changes were introduced into FaceCoder.
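The Hu moment invariants used here for both object and gesture classification derive from normalized central moments; the first invariant, φ1 = η20 + η02, is already invariant to translation and (in the continuous case) to scale. A minimal sketch on a binary point set (not the thesis implementation, which works on full images):

```python
def phi1(points):
    """First Hu invariant of a set of foreground pixel coordinates."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points)  # central moment mu_20
    mu02 = sum((y - cy) ** 2 for _, y in points)  # central moment mu_02
    mu00 = float(n)  # for a binary shape, mu00 is just the pixel count
    # normalized central moments: eta_pq = mu_pq / mu00 ** (1 + (p + q) / 2)
    return (mu20 + mu02) / mu00 ** 2

# Translation invariance: shifting the shape leaves phi1 unchanged.
shape = [(x, y) for x in range(4) for y in range(2)]
shifted = [(x + 7, y + 3) for x, y in shape]
```

In practice one would compute all seven invariants (for instance with OpenCV's `cv2.HuMoments`) and feed them to a classifier; φ1 alone already distinguishes elongated from compact shapes.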