Selection of scientific literature on the topic "Virtual visual servoing"
Journal articles on the topic "Virtual visual servoing"
Andreff, Nicolas, and Brahim Tamadazte. "Laser steering using virtual trifocal visual servoing". International Journal of Robotics Research 35, no. 6 (July 24, 2015): 672–94. http://dx.doi.org/10.1177/0278364915585585.
Assa, Akbar, and Farrokh Janabi-Sharifi. "Virtual Visual Servoing for Multicamera Pose Estimation". IEEE/ASME Transactions on Mechatronics 20, no. 2 (April 2015): 789–98. http://dx.doi.org/10.1109/tmech.2014.2305916.
Cao, Chenguang. "Research on a Visual Servoing Control Method Based on Perspective Transformation under Spatial Constraint". Machines 10, no. 11 (November 18, 2022): 1090. http://dx.doi.org/10.3390/machines10111090.
Yu, Qiuda, Wu Wei, Dongliang Wang, Yanjie Li, and Yong Gao. "A Framework for IBVS Using Virtual Work". Actuators 13, no. 5 (May 10, 2024): 181. http://dx.doi.org/10.3390/act13050181.
Kawamura, Akihiro, Kenji Tahara, Ryo Kurazume, and Tsutomu Hasegawa. "Robust Visual Servoing for Object Manipulation Against Temporary Loss of Sensory Information Using a Multi-Fingered Hand-Arm". Journal of Robotics and Mechatronics 25, no. 1 (February 20, 2013): 125–35. http://dx.doi.org/10.20965/jrm.2013.p0125.
Xie, Hui, Geoff Fink, Alan F. Lynch, and Martin Jagersand. "Adaptive visual servoing of UAVs using a virtual camera". IEEE Transactions on Aerospace and Electronic Systems 52, no. 5 (October 2016): 2529–38. http://dx.doi.org/10.1109/taes.2016.15-0155.
Gratal, Xavi, Javier Romero, and Danica Kragic. "Virtual Visual Servoing for Real-Time Robot Pose Estimation". IFAC Proceedings Volumes 44, no. 1 (January 2011): 9017–22. http://dx.doi.org/10.3182/20110828-6-it-1002.02970.
Wallin, Erik, Viktor Wiberg, and Martin Servin. "Multi-Log Grasping Using Reinforcement Learning and Virtual Visual Servoing". Robotics 13, no. 1 (December 21, 2023): 3. http://dx.doi.org/10.3390/robotics13010003.
Marchand, Éric, and François Chaumette. "Virtual Visual Servoing: a framework for real-time augmented reality". Computer Graphics Forum 21, no. 3 (September 2002): 289–97. http://dx.doi.org/10.1111/1467-8659.t01-1-00588.
Dissertations on the topic "Virtual visual servoing"
Guerbas, Seif Eddine. „Modélisation adaptée des images omnidirectionnelles pour agrandir le domaine de convergence de l'asservissement visuel virtuel direct“. Electronic Thesis or Diss., Amiens, 2022. http://www.theses.fr/2022AMIE0026.
Omnidirectional vision captures a scene in all directions in real time, with a wider field of view than a conventional camera. Linking the visual features in the camera images to the camera's motion is a central issue in visual servoing, yet direct approaches are characterized by a limited convergence domain. The main objective of this dissertation is to significantly extend the convergence domain of direct virtual visual servoing by representing the omnidirectional image as a Photometric Gaussian Mixture (PGM). This approach is then extended to registration and direct 3D-model-based tracking in omnidirectional images, which makes it possible to study the localization of a mobile robot equipped with a panoramic camera in a 3D urban model. The results show a significant enlargement of the convergence domain and high robustness to large inter-frame motion, as evidenced by experiments in virtual environments and with real images captured with a mobile robot and a vehicle.
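Several entries in this bibliography (e.g. Marchand and Chaumette; Gratal, Romero and Kragic) share the same core idea behind virtual visual servoing: estimate a camera pose by iteratively moving a *virtual* camera so that the features it renders align with the observed ones, using the classical image Jacobian. The sketch below illustrates that principle in its simplest point-feature form; it is not the photometric Gaussian-mixture formulation of the thesis above, and the model points, gain, and iteration count are illustrative choices only:

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rot_exp(w):
    """Rodrigues formula: rotation matrix exp([w]x)."""
    theta = np.linalg.norm(w)
    K = skew(w)
    if theta < 1e-12:
        return np.eye(3) + K
    return (np.eye(3)
            + np.sin(theta) / theta * K
            + (1.0 - np.cos(theta)) / theta ** 2 * K @ K)

def project(R, t, P):
    """Perspective projection of Nx3 model points; returns Nx2 normalized coords and depths."""
    Pc = P @ R.T + t
    return Pc[:, :2] / Pc[:, 2:3], Pc[:, 2]

def virtual_visual_servoing(P, s_star, R, t, lam=0.5, iters=300):
    """Iteratively move a virtual camera so rendered features match observed ones s_star."""
    n = len(P)
    for _ in range(iters):
        s, Z = project(R, t, P)
        e = (s - s_star).ravel()
        L = np.zeros((2 * n, 6))                 # stacked interaction matrix
        for i in range(n):
            x, y, z = s[i, 0], s[i, 1], Z[i]
            L[2 * i]     = [-1 / z, 0, x / z, x * y, -(1 + x * x), y]
            L[2 * i + 1] = [0, -1 / z, y / z, 1 + y * y, -x * y, -x]
        v = -lam * np.linalg.pinv(L) @ e         # camera twist (nu, omega)
        Rw = rot_exp(-v[3:])
        R, t = Rw @ R, Rw @ t - v[:3]            # integrate the twist over a unit step
    return R, t
```

For a sufficiently close initial guess and six or more non-coplanar points, the loop drives the reprojection error to zero and the virtual camera pose converges to the true one, which is exactly how model-based VVS recovers pose for augmented reality or robot localization.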
Chang, Ting-Yu (張庭育). "Study on Virtual Visual Servoing Estimator and Dynamic Visual Servoing Scheme". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/739fwn.
Lai, Chang-Yu (賴長佑). "System Integration and Development for Soccer Robots on Virtual Reality and Visual Servoing". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/67981247713025562888.
National Taiwan University (國立臺灣大學), Graduate Institute of Mechanical Engineering, 2002 (ROC year 90).
In this thesis, a visual servoing system is implemented on a soccer robot system. We propose a method combining the RGB (Red, Green, Blue) and HSI (Hue, Saturation, Intensity) color models to define the colors of interest. When searching for blobs, geometric characteristics are used to identify them, and a prediction method locates the tracking windows. For robot number identification, patterns of different shapes are adopted as recognition features; a robust algorithm recognizes these shapes and determines the correct identity, which is essential for the strategy system. This method is invariant to scale and orientation. In addition, we propose a virtual soccer robot platform that serves two functions. The first is remote monitoring: events on the real soccer robots can be displayed on a distant computer monitor. The second is strategy development: the platform can demonstrate the result of a strategy design without real soccer robots, so development is no longer limited by space, time, or cost.
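The combined RGB/HSI color gating described in the abstract above can be sketched roughly as follows. The conversion uses the standard HSI formulas; the specific thresholds and the "orange ball" target class are hypothetical placeholders for illustration, not values taken from the thesis:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert RGB components in [0, 1] to (hue in degrees, saturation, intensity)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den < 1e-12:
        h = 0.0                       # gray pixel: hue is undefined, pick 0
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i

def is_ball_pixel(r, g, b):
    """Hypothetical orange-ball test: an RGB gate AND an HSI gate must both pass."""
    h, s, i = rgb_to_hsi(r, g, b)
    rgb_ok = r > 0.6 and g < 0.7 and b < 0.3      # coarse RGB box
    hsi_ok = h <= 40.0 and s > 0.5 and i > 0.2    # orange hue, saturated, bright enough
    return rgb_ok and hsi_ok
```

Requiring both gates to agree is what makes such a classifier more robust than either color space alone: the RGB box rejects gross mismatches cheaply, while the hue/saturation test is comparatively stable under illumination changes.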
Conference papers on the topic "Virtual visual servoing"
Nammoto, Takashi, Koichi Hashimoto, Shingo Kagami, and Kazuhiro Kosuge. "High speed/accuracy visual servoing based on virtual visual servoing with stereo cameras". In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013). IEEE, 2013. http://dx.doi.org/10.1109/iros.2013.6696330.
Alex, Joseph, Barmeshwar Vikramaditya, and Bradley J. Nelson. "A Virtual Reality Teleoperator Interface for Assembly of Hybrid MEMS Prototypes". In ASME 1998 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/detc98/mech-5836.
Pressigout, M., and E. Marchand. "Model-free augmented reality by virtual visual servoing". In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1334401.
Le Bras, Florent, Tarek Hamel, and Robert Mahony. "Visual servoing of a VTOL vehicle using virtual states". In 2007 46th IEEE Conference on Decision and Control. IEEE, 2007. http://dx.doi.org/10.1109/cdc.2007.4434113.
Kingkan, Cherdsak, Shogo Ito, Shogo Arai, Takashi Nammoto, and Koichi Hashimoto. "Model-based virtual visual servoing with point cloud data". In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016. http://dx.doi.org/10.1109/iros.2016.7759816.
Defterli, Sinem Gozde, and Yunjun Xu. "Virtual Motion Camouflage Based Visual Servo Control of a Leaf Picking Mechanism". In ASME 2018 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/dscc2018-9042.
Cai, Caixia, Nikhil Somani, Suraj Nair, Dario Mendoza, and Alois Knoll. "Uncalibrated stereo visual servoing for manipulators using virtual impedance control". In 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV). IEEE, 2014. http://dx.doi.org/10.1109/icarcv.2014.7064604.
Li, Weiguang, Guoqiang Ye, Hao Wan, Shaohua Zheng, and Zhijiang Lu. "Decoupled control for visual servoing with SVM-based virtual moments". In 2015 IEEE International Conference on Information and Automation (ICIA). IEEE, 2015. http://dx.doi.org/10.1109/icinfa.2015.7279638.
Nair, S., T. Roder, G. Panin, and A. Knoll. "Visual servoing of presenters in augmented virtual reality TV studios". In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5648809.
Al-Shanoon, Abdulrahman, Aaron Hao Tan, Haoxiang Lang, and Ying Wang. "Mobile Robot Regulation with Position Based Visual Servoing". In 2018 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA). IEEE, 2018. http://dx.doi.org/10.1109/civemsa.2018.8439978.