Contents
A selection of scientific literature on the topic "Camera guidance for robot"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, reports, and other scientific sources on the topic "Camera guidance for robot".
Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read an online annotation of the work, provided the relevant parameters are present in the metadata.
Journal articles on the topic "Camera guidance for robot"
Sai, Hesin, and Yoshikuni Okawa. "Structured Sign for Guidance of Mobile Robot". Journal of Robotics and Mechatronics 3, no. 5 (October 20, 1991): 379–86. http://dx.doi.org/10.20965/jrm.1991.p0379.
Yang, Long, and Nan Feng Xiao. "Robot Stereo Vision Guidance System Based on Attention Mechanism". Applied Mechanics and Materials 385–386 (August 2013): 708–11. http://dx.doi.org/10.4028/www.scientific.net/amm.385-386.708.
Golkowski, Alexander Julian, Marcus Handte, Peter Roch, and Pedro J. Marrón. "An Experimental Analysis of the Effects of Different Hardware Setups on Stereo Camera Systems". International Journal of Semantic Computing 15, no. 3 (September 2021): 337–57. http://dx.doi.org/10.1142/s1793351x21400080.
Blais, François, Marc Rioux, and Jacques Domey. "Compact three-dimensional camera for robot and vehicle guidance". Optics and Lasers in Engineering 10, no. 3–4 (January 1989): 227–39. http://dx.doi.org/10.1016/0143-8166(89)90039-0.
Imasato, Akimitsu, and Noriaki Maru. "Guidance and Control of Nursing Care Robot Using Gaze Point Detector and Linear Visual Servoing". International Journal of Automation Technology 5, no. 3 (May 5, 2011): 452–57. http://dx.doi.org/10.20965/ijat.2011.p0452.
Yang, Chun Hui, and Fu Dong Wang. "Trajectory Recognition and Navigation Control in the Mobile Robot". Key Engineering Materials 464 (January 2011): 11–14. http://dx.doi.org/10.4028/www.scientific.net/kem.464.11.
Belmonte, Álvaro, José Ramón, Jorge Pomares, Gabriel Garcia, and Carlos Jara. "Optimal Image-Based Guidance of Mobile Manipulators using Direct Visual Servoing". Electronics 8, no. 4 (March 27, 2019): 374. http://dx.doi.org/10.3390/electronics8040374.
Achour, K., and A. O. Djekoune. "Localization and guidance with an embarked camera on a mobile robot". Advanced Robotics 16, no. 1 (January 2002): 87–102. http://dx.doi.org/10.1163/156855302317413754.
Bazeille, Stephane, Emmanuel Battesti, and David Filliat. "A Light Visual Mapping and Navigation Framework for Low-Cost Robots". Journal of Intelligent Systems 24, no. 4 (December 1, 2015): 505–24. http://dx.doi.org/10.1515/jisys-2014-0116.
Xue, Jin Lin, and Tony E. Grift. "Agricultural Robot Turning in the Headland of Corn Fields". Applied Mechanics and Materials 63–64 (June 2011): 780–84. http://dx.doi.org/10.4028/www.scientific.net/amm.63-64.780.
Der volle Inhalt der QuelleDissertationen zum Thema "Camera guidance for robot"
Pearson, Christopher Mark. "Linear array cameras for mobile robot guidance". Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318875.
Grepl, Pavel. "Strojové vidění pro navádění robotu" [Machine vision for robot guidance]. Master's thesis, Vysoké učení technické v Brně, Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443727.
Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces". Thesis, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.
Maier, Daniel. "Camera-based humanoid robot navigation". Doctoral thesis, supervised by Maren Bennewitz. Freiburg: Universität, 2015. http://d-nb.info/1119452082/34.
Gu, Lifang. "Visual guidance of robot motion". Thesis, University of Western Australia, Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.
Arthur, Richard B. "Vision-Based Human Directed Robot Guidance". Diss., 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.
Stark, Per. "Machine vision camera calibration and robot communication". Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.
Der volle Inhalt der QuelleThis thesis is a part of a larger project included in the European project, AFFIX. The reason for the project is to try to develop a new method to assemble an aircraft engine part so that the weight and manufacturing costs are reduced. The proposal is to weld sheet metal parts instead of using cast parts. A machine vision system is suggested to be used in order to detect the joints for the weld assembly operation of the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates for the curve are calculated by the machine vision system and sent to a robot. The robot should create and follow a path by using the coordinates. The accuracy for locating the curve to perform an approved weld joint must be within +/- 0.5 mm. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also brushes the importance of good lightning when obtaining images for a vision system and the development for a robot program that receives these coordinates and transform them into robot movements are included. The camera calibration is done in a toolbox for MatLab and it extracts the intrinsic camera parameters such as the distance between the centre of the lens and the optical detector in the camera: f, lens distortion parameters and principle point. It also returns the location of the camera and orientation at each obtained image during the calibration, the extrinsic parameters. The intrinsic parameters are used when translating between image coordinates and camera coordinates and the extrinsic parameters are used when translating between camera coordinates and world coordinates. The results of this project are a transformation matrix that translates the robots position into the cameras position. It also contains a robot program that can receive a large number of coordinates, store them and create a path to move along for the weld application.
Snailum, Nicholas. "Mobile robot navigation using single camera vision". Thesis, University of East London, 2001. http://roar.uel.ac.uk/3565/.
Quine, Ben. "Spacecraft guidance systems: attitude determination using star camera data". Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360417.
Burman, Gustav, and Simon Erlandsson. "ACM 9000: Automated Camera Man". Thesis, KTH, Maskinkonstruktion (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230253.
Der volle Inhalt der QuelleI dagens digitala samhälle är sättet som undervisning skerpå under ständig förändring. Undervisningen håller på attdigitaliseras genom användningen av nätbaserade kurseroch digitala föreläsningar. Detta kandidatexamensarbetesöker en lösning på frågan om hur man kan filma en föreläsningutan en kameraoperatör, med en automatiserad kameraman,för lättare produktion av högkvalitativt videomaterial.Genom en modulariserad designprocess, praktiska testeroch vetenskapliga studier, designades ett sådant system.Det automatiska kamerastativet kan placeras längst bak ien föreläsningssal, på vilket en kamera kan placeras för attspela in eller strömma filmmaterial medan stativet riktar insig mot föreläsarens position, med hjälp av bildbehandling.
Books on the topic "Camera guidance for robot"
Roth, Zvi S., ed. Camera-aided robot calibration. Boca Raton: CRC Press, 1996.
Snailum, Nicholas. Mobile robot navigation using single camera vision. London: University of East London, 2001.
Horn, Geoffrey M. Camera operator. Pleasantville, NY: Gareth Stevens Pub., 2009.
Manatt, Kathleen G. Robot scientist. Ann Arbor: Cherry Lake Pub., 2007.
Pomerleau, Dean A. Neural network perception for mobile robot guidance. Boston: Kluwer Academic Publishers, 1993.
Pomerleau, Dean A. Neural Network Perception for Mobile Robot Guidance. Boston, MA: Springer US, 1993.
Pomerleau, Dean A. Neural Network Perception for Mobile Robot Guidance. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3192-0.
Steer, Barry. Navigation for the guidance of a mobile robot. [S.l.]: typescript, 1985.
Link, William, ed. Off camera: Conversations with the makers of prime-time television. New York, NY: New American Library, 1986.
Arndt, David. Make money with your camera. Buffalo, NY: Amherst Media, 1999.
Den vollen Inhalt der Quelle findenBuchteile zum Thema "Camera guidance for robot"
Bihlmaier, Andreas. "Endoscope Robots and Automated Camera Guidance". In Learning Dynamic Spatial Relations, 23–102. Wiesbaden: Springer Fachmedien Wiesbaden, 2016. http://dx.doi.org/10.1007/978-3-658-14914-7_2.
Baltes, Jacky. "Camera Calibration Using Rectangular Textures". In Robot Vision, 245–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_30.
Martínez, Antonio B., and Albert Larré. "Fast Mobile Robot Guidance". In Traditional and Non-Traditional Robotic Sensors, 423–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-75984-0_26.
Scheibe, Karsten, Hartmut Korsitzky, Ralf Reulke, Martin Scheele, and Michael Solbrig. "EYESCAN - A High Resolution Digital Panoramic Camera". In Robot Vision, 77–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_10.
Barnes, Nick, and Zhi-Qiang Liu. "Object Recognition Mobile Robot Guidance". In Knowledge-Based Vision-Guided Robots, 63–86. Heidelberg: Physica-Verlag HD, 2002. http://dx.doi.org/10.1007/978-3-7908-1780-5_4.
Rojtberg, Pavel. "User Guidance for Interactive Camera Calibration". In Virtual, Augmented and Mixed Reality. Multimodal Interaction, 268–76. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21607-8_21.
Bihlmaier, Andreas. "Intraoperative Robot-Based Camera Assistance". In Learning Dynamic Spatial Relations, 185–208. Wiesbaden: Springer Fachmedien Wiesbaden, 2016. http://dx.doi.org/10.1007/978-3-658-14914-7_6.
Der volle Inhalt der QuelleAnko, Börner, Hirschmüller Heiko, Scheibe Karsten, Suppa Michael und Wohlfeil Jürgen. „MFC - A Modular Line Camera for 3D World Modulling“. In Robot Vision, 319–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-78157-8_24.
Der volle Inhalt der QuelleBobadilla, Leonardo, Katrina Gossman und Steven M. LaValle. „Manipulating Ergodic Bodies through Gentle Guidance“. In Robot Motion and Control 2011, 273–82. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-2343-9_23.
Der volle Inhalt der QuellePomerleau, Dean A. „Other Vision-based Robot Guidance Methods“. In Neural Network Perception for Mobile Robot Guidance, 161–71. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3192-0_11.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Camera guidance for robot"
Kameoka, Kanako, Shigeru Uchikado, and Sun Lili. "Visual Guidance for a Mobile Robot with a Camera". In TENCON 2006 - 2006 IEEE Region 10 Conference. IEEE, 2006. http://dx.doi.org/10.1109/tencon.2006.344018.
"Guidance of Robot Arms using Depth Data from RGB-D Camera". In 10th International Conference on Informatics in Control, Automation and Robotics. SciTePress - Science and Technology Publications, 2013. http://dx.doi.org/10.5220/0004481903150321.
Li, Zhiyuan, and Demao Ye. "Research on visual guidance algorithm of forking robot based on monocular camera". In Conference on Optics Ultra Precision Manufacturing and Testing, edited by Dawei Zhang, Lingbao Kong, and Xichun Luo. SPIE, 2020. http://dx.doi.org/10.1117/12.2575675.
Martinez-Rey, Miguel, Felipe Espinosa, Alfredo Gardel, Carlos Santos, and Enrique Santiso. "Mobile robot guidance using adaptive event-based pose estimation and camera sensor". In 2016 Second International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP). IEEE, 2016. http://dx.doi.org/10.1109/ebccsp.2016.7605089.
Samu, Tayib, Nikhal Kelkar, David Perdue, Michael A. Ruthemeyer, Bradley O. Matthews, and Ernest L. Hall. "Line following using a two camera guidance system for a mobile robot". In Photonics East '96, edited by David P. Casasent. SPIE, 1996. http://dx.doi.org/10.1117/12.256287.
Zhang, Zhexiao, Zhiqian Cheng, Geng Wang, and Jimin Xu. "A VSLAM Fusing Visible Images and Infrared Images from RGB-D camera for Indoor Mobile Robot". In 2018 IEEE CSAA Guidance, Navigation and Control Conference (GNCC). IEEE, 2018. http://dx.doi.org/10.1109/gncc42960.2018.9019128.
Sai, Heshin, and Yoshikuni Okawa. "A Structured Sign for the Guidance of Autonomous Mobile Robots". In ASME 1993 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/cie1993-0019.
Shah, Syed, Suresh Kannan, and Eric Johnson. "Motion Estimation for Obstacle Detection and Avoidance Using a Single Camera for UAVs/Robots". In AIAA Guidance, Navigation, and Control Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2010. http://dx.doi.org/10.2514/6.2010-7569.
Sarkar, Saurabh, and Manish Kumar. "Fast Swarm Based Lane Detection System for Mobile Robot Navigation on Urban Roads". In ASME 2009 Dynamic Systems and Control Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/dscc2009-2702.
Zhao, Haimei, Wei Bian, Bo Yuan, and Dacheng Tao. "Collaborative Learning of Depth Estimation, Visual Odometry and Camera Relocalization from Monocular Videos". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/68.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Camera guidance for robot"
Chen, J., W. E. Dixon, D. M. Dawson, and V. K. Chitrakaran. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada465705.
Haas, Gary, and Philip R. Osteen. Wall Sensing for an Autonomous Robot With a Three-Dimensional Time-of-Flight (3-D TOF) Camera. Fort Belvoir, VA: Defense Technical Information Center, February 2011. http://dx.doi.org/10.21236/ada539897.