Contents
A selection of scientific literature on the topic "Active stereo vision"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Active stereo vision".
Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read an online annotation of the work, provided the relevant parameters are present in the metadata.
Journal articles on the topic "Active stereo vision"
Grosso, E., and M. Tistarelli. "Active/dynamic stereo vision." IEEE Transactions on Pattern Analysis and Machine Intelligence 17, no. 9 (1995): 868–79. http://dx.doi.org/10.1109/34.406652.
Jang, Mingyu, Hyunse Yoon, Seongmin Lee, Jiwoo Kang, and Sanghoon Lee. "A Comparison and Evaluation of Stereo Matching on Active Stereo Images." Sensors 22, no. 9 (April 26, 2022): 3332. http://dx.doi.org/10.3390/s22093332.
Gasteratos, Antonios. "Tele-Autonomous Active Stereo-Vision Head." International Journal of Optomechatronics 2, no. 2 (June 13, 2008): 144–61. http://dx.doi.org/10.1080/15599610802081753.
Wang, Yexin, Fuqiang Zhou, and Yi Cui. "Single-camera active stereo vision system using fiber bundles." Chinese Optics Letters 12, no. 10 (2014): 101301–4. http://dx.doi.org/10.3788/col201412.101301.
Samson, Eric, Denis Laurendeau, Marc Parizeau, Sylvain Comtois, Jean-François Allan, and Clément Gosselin. "The Agile Stereo Pair for active vision." Machine Vision and Applications 17, no. 1 (February 23, 2006): 32–50. http://dx.doi.org/10.1007/s00138-006-0013-7.
Feller, Michael, Jae-Sang Hyun, and Song Zhang. "Active Stereo Vision for Precise Autonomous Vehicle Control." Electronic Imaging 2020, no. 16 (January 26, 2020): 258-1. http://dx.doi.org/10.2352/issn.2470-1173.2020.16.avm-257.
Ko, Jung-Hwan. "Active Object Tracking System based on Stereo Vision." Journal of the Institute of Electronics and Information Engineers 53, no. 4 (April 25, 2016): 159–66. http://dx.doi.org/10.5573/ieie.2016.53.4.159.
Porta, J. M., J. J. Verbeek, and B. J. A. Kröse. "Active Appearance-Based Robot Localization Using Stereo Vision." Autonomous Robots 18, no. 1 (January 2005): 59–80. http://dx.doi.org/10.1023/b:auro.0000047287.00119.b6.
Wang, Yongchang, Kai Liu, Qi Hao, Xianwang Wang, D. L. Lau, and L. G. Hassebrook. "Robust Active Stereo Vision Using Kullback-Leibler Divergence." IEEE Transactions on Pattern Analysis and Machine Intelligence 34, no. 3 (March 2012): 548–63. http://dx.doi.org/10.1109/tpami.2011.162.
Mohamed, Abdulla, Phil F. Culverhouse, Ricardo De Azambuja, Angelo Cangelosi, and Chenguang Yang. "Automating Active Stereo Vision Calibration Process with Cobots." IFAC-PapersOnLine 50, no. 2 (December 2017): 163–68. http://dx.doi.org/10.1016/j.ifacol.2017.12.030.
Dissertations on the topic "Active stereo vision"
Li, Fuxing. "Active stereo for AGV navigation." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338984.
Fung, Chun Him. "A biomimetic active stereo head with torsional control." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?ECED%202006%20FUNG.
Wong, Yuk Lam. "Optical tracking for medical diagnosis based on active stereo vision." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202006%20WONGY.
Chan, Balwin Man Hong. "A miniaturized 3-D endoscopic system using active stereo-vision." View abstract or full-text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20CHANB.
Includes bibliographical references (leaves 106-108). Also available in electronic version. Access restricted to campus users.
Kihlström, Helena. "Active Stereo Reconstruction using Deep Learning." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158276.
Urquhart, Colin W. "The active stereo probe: the design and implementation of an active videometrics system." Thesis, University of Glasgow, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312498.
Björkman, Mårten. "Real-Time Motion and Stereo Cues for Active Visual Observers." Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3382.
Ulusoy, Ilkay. "Active Stereo Vision: Depth Perception for Navigation, Environmental Map Formation and Object Recognition." PhD thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/12604737/index.pdf.
… internal parameters bring a high computational load. It is therefore preferable to work out the strategy to be followed in a simulated world and then to apply it on a real robot for real applications. In this study, we describe an algorithm for object recognition and cognitive map formation using stereo image data in a 3D virtual world in which 3D objects and a robot with an active stereo imaging system are simulated. The stereo imaging system is simulated so that the relevant properties of the human visual system are parameterized. Only the stereo images obtained from this world are supplied to the virtual robot. By applying our disparity algorithm, a depth map for the current stereo view is extracted. Using the depth information of the current view, a cognitive map of the environment is updated gradually while the virtual agent explores the environment. The agent explores its environment intelligently, using the current view together with the environmental map built up so far. If a new object is observed during exploration, the robot moves around it, obtains stereo images from different directions, and extracts a 3D model of the object. Using the available set of possible objects, it then recognizes the object.
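The abstract above outlines a pipeline in which a disparity algorithm yields a depth map for the current stereo view, which is then fused into a gradually updated map of the environment. Purely as an illustration of that disparity-to-depth step, and not as the thesis's own algorithm, a minimal Python sketch using OpenCV semi-global block matching could look as follows; the focal length, baseline, and image file names are placeholder assumptions.

```python
# Illustrative sketch only: disparity from a rectified stereo pair, converted to
# metric depth with Z = f * B / d. Focal length, baseline, and file names are
# placeholder assumptions, not values from the cited thesis.
import cv2
import numpy as np

FOCAL_PX = 700.0    # assumed focal length in pixels
BASELINE_M = 0.12   # assumed stereo baseline in metres

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input images
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are generic defaults, not tuned values.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # disparity search range, must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # smoothness penalties scaled with the block size
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

valid = disparity > 0                      # ignore unmatched pixels
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]  # depth map in metres
```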
Huster, Andrew Christian. "Design and Validation of an Active Stereo Vision System for the OSU EcoCAR 3." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1499251870670736.
Mohammadi, Vahid. "Design, Development and Evaluation of a System for the Detection of Aerial Parts and Measurement of Growth Indices of Bell Pepper Plant Based on Stereo and Multispectral Imaging." Electronic Thesis or Dissertation, Bourgogne Franche-Comté, 2022. http://www.theses.fr/2022UBFCK109.
Der volle Inhalt der QuelleDuring the growth of plants, monitoring them brings much benefits to the producers. This monitoring includes the measurement of physical properties, counting plants leaves, detection of plants and separation of them from weeds. All these can be done different techniques, however, the techniques are favorable that are non-destructive because plant is a very sensitive creature that any manipulation can put disorder in its growth or lead to losing leaves or branches. Imaging techniques are of the best solutions for plants growth monitoring and geometric measurements. In this regard, in this project the use of stereo imaging and multispectral data was studied. Active and passive stereo imaging were employed for the estimation of physical properties and counting leaves and multispectral data was utilized for the separation of crop and weed. Bell pepper plant was used for imaging measurements for a period of 30 days and for crop/weed separation, the spectral responses of bell pepper and five weeds were measured. Nine physical properties of pepper leaves (i.e. main leaf diameters, leaf area, leaf perimeter etc.) were measured using a scanner and was used as a database and also for comparing the estimated values to the actual values. The stereo system consisted of two LogiTech cameras and a video projector. First the stereo system was calibrated using sample images of a standard checkerboard in different position and angles. The system was controlled using the computer for turning a light line on, recording videos of both cameras while light is being swept on the plant and then stopping the light. The frames were extracted and processed. The processing algorithm first filtered the images for removing noise and then thresholded the unwanted pixels of environment. Then, using the peak detection method of Center of Mass the main and central part of the light line was extracted. After, the images were rectified by using the calibration information. Then the correspondent pixels were detected and used for the 3D model development. The obtained point cloud was transformed to a meshed surface and used for physical properties measurement. Passive stereo imaging was used for leaf detection and counting. For passive stereo matching six different matching algorithms and three cost functions were used and compared. For spectral responses of plants, they were freshly moved to the laboratory, leaves were detached from the plants and placed on a blur dark background. Type A lights were used for illumination and the spectral measurements were carried out using a spectroradiometer from 380 nm to 1000 nm. To reduce the dimensionality of the data, PCA and wavelet transform were used. Results of this study showed that the use of stereo imaging can propose a cheap and non-destructive tool for agriculture. An important advantage of active stereo imaging is that it is light-independent and can be used during the night. However, the use of active stereo for the primary stage of growth provides acceptable results but after that stage, the system will be unable to detect and reconstruct all leaves and plant's parts. Using ASI the R2 values of 0.978 and 0.967 were obtained for the estimation leaf area and perimeter, respectively. The results of separation of crop and weeds using spectral data were very promising and the classifier—which was based on deep learning—could completely separate pepper from other five weeds
Books on the topic "Active stereo vision"
Krotkov, Eric Paul. Active computer vision by cooperative focus and stereo. New York: Springer-Verlag, 1989.
Krotkov, Eric Paul. Active Computer Vision by Cooperative Focus and Stereo. New York, NY: Springer New York, 1989. http://dx.doi.org/10.1007/978-1-4613-9663-5.
Krotkov, Eric Paul. Active Computer Vision by Cooperative Focus and Stereo. New York, NY: Springer New York, 1989.
Kumazawa, Takao, Lawrence Kruger, and Kazue Mizumura, eds. The polymodal receptor: A gateway to pathological pain. Amsterdam: Elsevier, 1996.
Johansen, Bruce, and Adebowale Akande, eds. Nationalism: Past as Prologue. Nova Science Publishers, Inc., 2021. http://dx.doi.org/10.52305/aief3847.
Kumazawa, T., L. Kruger, and K. Mizumura, eds. The Polymodal Receptor: A Gateway to Pathological Pain (Progress in Brain Research). Elsevier Science, 1996.
Book chapters on the topic "Active stereo vision"
Hogue, Andrew, and Michael Jenkin. "Active Stereo Vision." In Computer Vision, 1–6. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_282-1.
Hogue, Andrew, and Michael R. M. Jenkin. "Active Stereo Vision." In Computer Vision, 8–12. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_282.
Hogue, Andrew, and Michael Jenkin. "Active Stereo Vision." In Computer Vision, 27–32. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_282.
Wang, Ce, Zhanyi Hu, and Song De Ma. "Active vision based stereo vision." In Recent Developments in Computer Vision, 229–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-60793-5_78.
Grosso, Enrico, Massimo Tistarelli, and Giulio Sandini. "Active/dynamic stereo for navigation." In Computer Vision – ECCV'92, 516–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55426-2_57.
Ikeuchi, Katsushi, Yasuyuki Matsushita, Ryusuke Sagawa, Hiroshi Kawasaki, Yasuhiro Mukaigawa, Ryo Furukawa, and Daisuke Miyazaki. "Photometric Stereo." In Active Lighting and Its Application for Computer Vision, 107–23. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-56577-0_5.
Skifstad, Kurt, and Ramesh Jain. "A New Paradigm for Computational Stereo." In Active Perception and Robot Vision, 465–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77225-2_24.
Krotkov, Eric Paul. "Stereo with Verging Cameras." In Active Computer Vision by Cooperative Focus and Stereo, 43–62. New York, NY: Springer New York, 1989. http://dx.doi.org/10.1007/978-1-4613-9663-5_4.
Krotkov, Eric Paul. "An Agile Stereo Camera System." In Active Computer Vision by Cooperative Focus and Stereo, 7–18. New York, NY: Springer New York, 1989. http://dx.doi.org/10.1007/978-1-4613-9663-5_2.
Kawasaki, Hiroshi, Yutaka Ohsawa, Ryo Furukawa, and Yasuaki Nakamura. "Dense 3D Reconstruction with an Uncalibrated Active Stereo System." In Computer Vision – ACCV 2006, 882–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11612704_88.
Conference papers on the topic "Active stereo vision"
Urquhart, C. W., J. P. Siebert, J. P. McDonald, and R. J. Fryer. "Active Animate Stereo Vision." In British Machine Vision Conference 1993. British Machine Vision Association, 1993. http://dx.doi.org/10.5244/c.7.8.
Jenkin and Tsotsos. "Active stereo vision and cyclotorsion." In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. IEEE Comput. Soc. Press, 1994. http://dx.doi.org/10.1109/cvpr.1994.323903.
Viitanen, Jouko O. "Active stereo for mobile robot vision." In Photonics East (ISAM, VVDC, IEMB), edited by David P. Casasent. SPIE, 1998. http://dx.doi.org/10.1117/12.325795.
Medioni, Gérard, and Jean-Luc Jezouin. "An Implementation of an Active Stereo Range Finder." In Machine Vision. Washington, D.C.: Optica Publishing Group, 1987. http://dx.doi.org/10.1364/mv.1987.tha3.
Sumanasena, M. G. B., J. G. Samarawickrama, and A. A. Pasqual. "Mobile Stereo Camera Platform for Active Vision." In 2006 International Conference on Industrial and Information Systems. IEEE, 2006. http://dx.doi.org/10.1109/iciinfs.2006.347164.
Antonisse, Hendrick J. "Active stereo vision routines using PRISM-3." In Applications in Optical Science and Engineering, edited by David P. Casasent. SPIE, 1992. http://dx.doi.org/10.1117/12.131578.
Clark, James J., Michael J. Weisman, and Alan L. Yuille. "Using viewpoint consistency in active stereo vision." In Applications in Optical Science and Engineering, edited by David P. Casasent. SPIE, 1992. http://dx.doi.org/10.1117/12.131573.
Sumanasena, M. G. B., J. G. Samarawickrama, and A. A. Pasqual. "Mobile Stereo Camera Platform for Active Vision." In First International Conference on Industrial and Information Systems. IEEE, 2006. http://dx.doi.org/10.1109/iciis.2006.365738.
Bartolomei, Luca, Matteo Poggi, Fabio Tosi, Andrea Conti, and Stefano Mattoccia. "Active Stereo Without Pattern Projector." In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.01693.
Siebert, J. P., C. W. Urquhart, D. F. Wilson, J. P. McDonald, P. H. Mowforth, and R. J. Fryer. "The Active Stereo Probe: Dynamic Video Feedback." In British Machine Vision Conference 1991. Springer-Verlag London Limited, 1991. http://dx.doi.org/10.5244/c.5.56.