Scientific literature on the topic "Virtual visual servoing"

Create a correct reference in APA, MLA, Chicago, Harvard, and various other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Virtual visual servoing".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Virtual visual servoing"

1

Andreff, Nicolas, and Brahim Tamadazte. "Laser steering using virtual trifocal visual servoing." International Journal of Robotics Research 35, no. 6 (July 24, 2015): 672–94. http://dx.doi.org/10.1177/0278364915585585.

2

Assa, Akbar, and Farrokh Janabi-Sharifi. "Virtual Visual Servoing for Multicamera Pose Estimation." IEEE/ASME Transactions on Mechatronics 20, no. 2 (April 2015): 789–98. http://dx.doi.org/10.1109/tmech.2014.2305916.

3

Cao, Chenguang. "Research on a Visual Servoing Control Method Based on Perspective Transformation under Spatial Constraint." Machines 10, no. 11 (November 18, 2022): 1090. http://dx.doi.org/10.3390/machines10111090.

Abstract:
Visual servoing is widely employed in robotic control to increase the flexibility and precision of a robotic arm. When the end-effector must be moved to a spatial point for which no coordinates are available, conventional visual servoing control methods have difficulty performing the task. The present work characterizes these spatial-constraint challenges by introducing an assembly node, and then presents a two-stage visual servoing control approach based on perspective transformation. First, a virtual image plane is constructed using a calibration-derived homography matrix, and the assembly node, together with the other objects, is projected into that plane. Second, the controller drives the robotic arm by tracking the projections in the virtual image plane and adjusting the position and attitude of the workpiece accordingly. Three simple image features are combined into a composite image feature, and an active disturbance rejection controller (ADRC) is established to improve the robotic arm's motion sensitivity. Real-time simulations and experiments employing a robotic vision system with an eye-to-hand configuration are used to validate the effectiveness of the presented method. The results show that the robotic arm can move the workpiece to the desired position without using coordinates.
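The virtual-image-plane construction described in this abstract amounts to mapping pixel coordinates through a 3×3 homography and dehomogenising the result. A minimal sketch in Python (the matrix `H` below is an illustrative stand-in for a calibration-derived homography, not a value from the paper):

```python
import numpy as np

def to_virtual_plane(H, pts):
    """Project pixel coordinates through a 3x3 homography into a
    virtual image plane (homogeneous transform + dehomogenisation)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T                              # apply H to each point
    return mapped[:, :2] / mapped[:, 2:3]             # divide by scale w

# Identity-plus-translation homography: shifts every point by (10, 5).
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
print(to_virtual_plane(H, np.array([[100.0, 200.0]])))  # → [[110. 205.]]
```

A real homography would also encode rotation and perspective; the projection code is unchanged in that case.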
4

Yu, Qiuda, Wu Wei, Dongliang Wang, Yanjie Li, and Yong Gao. "A Framework for IBVS Using Virtual Work." Actuators 13, no. 5 (May 10, 2024): 181. http://dx.doi.org/10.3390/act13050181.

Abstract:
The visual servoing of manipulators is challenged by two main problems: the singularity of the inverse Jacobian and the physical constraints of the manipulator. To overcome the singularity issue, this paper presents a novel approach for image-based visual servoing (IBVS) that converts the propagation of errors in the image plane into the conduction of virtual forces using the principle of virtual work. This approach eliminates the need to invert the Jacobian and thus prevents matrix-inversion singularities. To tackle the physical constraints, reverse thinking is adopted to derive upper and lower bounds on the joint velocity as functions of the joint angle, which allows the proposed method to configure the robot's physical constraints in a more intuitive manner. To validate the effectiveness of the proposed method, an eye-in-hand system based on a UR5 in V-REP, as well as a physical robot, was established.
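The virtual-work idea of avoiding Jacobian inversion can be illustrated with a Jacobian-transpose update: the image-plane error acts as a virtual force that is mapped to joint velocities through Jᵀ, so no matrix is inverted and no inversion singularity can occur. A toy sketch (the Jacobian and gain are hypothetical numbers, not from the paper):

```python
import numpy as np

def virtual_work_step(J, image_error, gain=0.5):
    """Jacobian-transpose update: the image error acts as a virtual force
    that J.T maps to joint velocities -- no inversion, hence no singularity."""
    return gain * J.T @ image_error

J = np.array([[1.0, 0.0],
              [0.0, 2.0]])          # hypothetical image Jacobian
e = np.array([0.2, -0.1])           # image-feature error
print(virtual_work_step(J, e))      # → [ 0.1 -0.1]
```

The transpose update trades the exact error dynamics of `J⁻¹e` for guaranteed well-definedness near singular configurations, which is the trade-off the abstract alludes to.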
5

Kawamura, Akihiro, Kenji Tahara, Ryo Kurazume, and Tsutomu Hasegawa. "Robust Visual Servoing for Object Manipulation Against Temporary Loss of Sensory Information Using a Multi-Fingered Hand-Arm." Journal of Robotics and Mechatronics 25, no. 1 (February 20, 2013): 125–35. http://dx.doi.org/10.20965/jrm.2013.p0125.

Abstract:
This paper proposes a robust visual servoing method for object manipulation that tolerates temporary loss of sensory information. Visual information is known to be useful for reliable object grasping and precise manipulation. It becomes unavailable, however, when occlusion occurs or the grasped object leaves the field of view during manipulation, and the behavior of the visual servoing system then becomes unstable. The proposed method enables an object to be grasped and manipulated stably even if visual information is temporarily unavailable. It is based on the dynamic stable object grasping and manipulation proposed in the authors' previous work and on the concept of virtual object information. A dynamic model of the overall system is first formulated, and a new controller using both actual and virtual object information is then proposed. The usefulness of this method is verified through both numerical simulation and experiments using a triple-fingered mechanical hand.
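The "virtual object information" concept described above can be reduced to a fallback rule: use the visual measurement while it is available, and substitute a model-predicted (virtual) state during occlusion. A minimal sketch with hypothetical names, using a constant-velocity prediction as a stand-in for the paper's dynamic model:

```python
def object_state(measurement, last_state, velocity, dt=1.0):
    """Return the measured state when vision is available; otherwise
    propagate the last estimate with a constant-velocity (virtual) model."""
    if measurement is not None:
        return measurement
    return tuple(x + v * dt for x, v in zip(last_state, velocity))

# Vision lost (None): fall back to the virtual prediction.
print(object_state(None, (1.0, 2.0), (0.5, -0.5)))          # → (1.5, 1.5)
# Vision available: the measurement wins.
print(object_state((1.1, 1.9), (1.0, 2.0), (0.5, -0.5)))    # → (1.1, 1.9)
```

Keeping the controller fed with a continuous (real or virtual) state is what prevents the instability the abstract describes when measurements drop out.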
6

Xie, Hui, Geoff Fink, Alan F. Lynch, and Martin Jagersand. "Adaptive visual servoing of UAVs using a virtual camera." IEEE Transactions on Aerospace and Electronic Systems 52, no. 5 (October 2016): 2529–38. http://dx.doi.org/10.1109/taes.2016.15-0155.

7

Gratal, Xavi, Javier Romero, and Danica Kragic. "Virtual Visual Servoing for Real-Time Robot Pose Estimation." IFAC Proceedings Volumes 44, no. 1 (January 2011): 9017–22. http://dx.doi.org/10.3182/20110828-6-it-1002.02970.

8

Wallin, Erik, Viktor Wiberg, and Martin Servin. "Multi-Log Grasping Using Reinforcement Learning and Virtual Visual Servoing." Robotics 13, no. 1 (December 21, 2023): 3. http://dx.doi.org/10.3390/robotics13010003.

Abstract:
We explore multi-log grasping using reinforcement learning and virtual visual servoing for automated forwarding in a simulated environment. Automation of forest processes is a major challenge, and many techniques regarding robot control pose different challenges due to the unstructured and harsh outdoor environment. Grasping multiple logs involves various problems of dynamics and path planning, where understanding the interaction between the grapple, logs, terrain, and obstacles requires visual information. To address these challenges, we separate image segmentation from crane control and utilise a virtual camera to provide an image stream from reconstructed 3D data. We use Cartesian control to simplify domain transfer to real-world applications. Because log piles are static, visual servoing using a 3D reconstruction of the pile and its surroundings is equivalent to using real camera data until the point of grasping. This relaxes the limits on computational resources and time for the challenge of image segmentation, and allows for data collection in situations where the log piles are not occluded. The disadvantage is the lack of information during grasping. We demonstrate that this problem is manageable and present an agent that is 95% successful in picking one or several logs from challenging piles of 2–5 logs.
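Producing an image stream from reconstructed 3D data, as described above, comes down to projecting the point cloud through a virtual pinhole camera. A minimal sketch, assuming hypothetical intrinsics (focal length `f`, principal point `(cx, cy)`) and points already expressed in the camera frame with Z > 0:

```python
import numpy as np

def virtual_camera_projection(points_3d, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of reconstructed 3D points (camera frame) into
    a virtual image -- a stand-in for rendering from static 3D data."""
    X, Y, Z = points_3d.T
    return np.stack([f * X / Z + cx, f * Y / Z + cy], axis=1)

pts = np.array([[0.0, 0.0, 2.0],      # on the optical axis
                [0.5, -0.25, 2.5]])   # off-axis point
print(virtual_camera_projection(pts))
```

Because the log piles are static, re-rendering this projection from any virtual viewpoint is equivalent to moving a real camera, which is the equivalence the abstract relies on.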
9

Marchand, Eric, and Francois Chaumette. "Virtual Visual Servoing: a framework for real-time augmented reality." Computer Graphics Forum 21, no. 3 (September 2002): 289–98. http://dx.doi.org/10.1111/1467-8659.00588.

10

Marchand, Éric, and François Chaumette. "Virtual Visual Servoing: a framework for real-time augmented reality." Computer Graphics Forum 21, no. 3 (September 2002): 289–97. http://dx.doi.org/10.1111/1467-8659.t01-1-00588.


Dissertations and theses on the topic "Virtual visual servoing"

1

Guerbas, Seif Eddine. "Modélisation adaptée des images omnidirectionnelles pour agrandir le domaine de convergence de l'asservissement visuel virtuel direct." Electronic Thesis or Diss., Amiens, 2022. http://www.theses.fr/2022AMIE0026.

Abstract:
Omnidirectional vision captures a scene in real time in all directions, with a wider field of view than a conventional camera. Linking the visual features contained in the camera images to the camera's motion within the environment is a central issue for visual servoing, yet direct approaches are characterized by a limited convergence domain. The first objective of this dissertation is to significantly extend that domain in the context of virtual visual servoing by representing the omnidirectional image as a Photometric Gaussian Mixture (PGM). The approach is then extended to registration and direct 3D model-based tracking in omnidirectional images, which makes it possible to study the localization of a mobile robot equipped with a panoramic camera in a 3D urban model. Experiments conducted in virtual environments and with real images captured from a mobile robot and a vehicle show a significant enlargement of the convergence domain and, consequently, strong robustness to large inter-frame movements.
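The Photometric Gaussian Mixture (PGM) representation mentioned in the abstract replaces the discrete image with a smooth, differentiable function: each pixel contributes an isotropic Gaussian weighted by its intensity. A minimal sketch of evaluating such a mixture at a point (the pixel data and spread σ are illustrative assumptions):

```python
import numpy as np

def pgm(x, pixel_xy, intensities, sigma):
    """Evaluate a photometric Gaussian mixture at point x: the sum of
    intensity-weighted isotropic Gaussians centred on the pixels."""
    d2 = np.sum((pixel_xy - x) ** 2, axis=1)       # squared distances to centres
    return float(np.sum(intensities * np.exp(-d2 / (2.0 * sigma ** 2))))

pixels = np.array([[0.0, 0.0], [2.0, 0.0]])        # two pixel centres
vals = np.array([1.0, 0.5])                        # their intensities
print(pgm(np.array([0.0, 0.0]), pixels, vals, sigma=1.0))
```

Enlarging σ widens each Gaussian and smooths the cost landscape, which is how such representations extend the convergence domain of direct registration.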
2

Chang, Ting-Yu (張庭育). "Study on Virtual Visual Servoing Estimator and Dynamic Visual Servoing Scheme." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/739fwn.

3

Lai, Chang-Yu (賴長佑). "System Integration and Development for Soccer Robots on Virtual Reality and Visual Servoing." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/67981247713025562888.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Mechanical Engineering, academic year 90 (2001–2002).
In this thesis, a visual servoing system is implemented for a soccer robot system. We propose a method combining the RGB (red, green, blue) and HSI (hue, saturation, intensity) color models to define the colors of interest. When searching for blobs, geometric characteristics are used to identify them, and a prediction method locates the tracking windows. For robot number identification, differently shaped patterns are adopted as recognition features, and a robust, scale- and orientation-invariant algorithm recognizes these shapes and determines the correct identity, which is important for the strategy system. In addition, we propose a virtual soccer robot platform that serves two functions. The first is remote monitoring: events happening on the real soccer robots can be displayed on a distant computer monitor. The second is strategy development: the platform can demonstrate the result of a strategy design without real soccer robots, freeing development from the limitations of space, time, and cost.
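The combined RGB/HSI color test described in this abstract can be sketched as the conjunction of two checks: an RGB box test and a hue-range test. A minimal Python sketch (the thresholds are hypothetical, and the standard-library HSV conversion stands in for HSI, whose hue component is computed similarly):

```python
import colorsys

def is_color_of_interest(r, g, b,
                         rgb_lo=(180, 60, 0), rgb_hi=(255, 160, 80),
                         hue_range=(0.0, 0.12)):
    """Combine an RGB box test with a hue-range test (HSV hue used as a
    stand-in for HSI hue) to classify a pixel."""
    in_rgb = all(lo <= c <= hi for c, lo, hi in zip((r, g, b), rgb_lo, rgb_hi))
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return in_rgb and hue_range[0] <= h <= hue_range[1]

print(is_color_of_interest(220, 100, 30))  # orange-ish pixel → True
print(is_color_of_interest(0, 200, 0))     # green pixel → False
```

Requiring both tests to pass reduces false positives under the uneven field lighting that makes a single color model unreliable.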

Conference papers on the topic "Virtual visual servoing"

1

Nammoto, Takashi, Koichi Hashimoto, Shingo Kagami, and Kazuhiro Kosuge. "High speed/accuracy visual servoing based on virtual visual servoing with stereo cameras." In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013). IEEE, 2013. http://dx.doi.org/10.1109/iros.2013.6696330.

2

Alex, Joseph, Barmeshwar Vikramaditya, and Bradley J. Nelson. "A Virtual Reality Teleoperator Interface for Assembly of Hybrid MEMS Prototypes." In ASME 1998 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/detc98/mech-5836.

Abstract:
In this paper we describe a teleoperated microassembly workcell that integrates a VRML-based virtual microworld with visual servoing micromanipulation strategies. Java is used to program the VRML-based supervisory interface and to communicate with the microassembly workcell, which provides platform independence and allows remote teleoperation over the Internet. A key aspect of our approach is the integration of teleoperation and visual servoing strategies: a supervisor can guide the task remotely while visual servoing strategies compensate for the imprecisely calibrated microworld. Results are presented that demonstrate system performance when a supervisor manipulates a microobject remotely. Though Internet delays affect the dynamic performance of the system, teleoperated relative parts placement with submicron precision is successfully demonstrated.
3

Pressigout, M., and E. Marchand. "Model-free augmented reality by virtual visual servoing." In Proceedings of the 17th International Conference on Pattern Recognition, 2004 (ICPR 2004). IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1334401.

4

Le Bras, Florent, Tarek Hamel, and Robert Mahony. "Visual servoing of a VTOL vehicle using virtual states." In 2007 46th IEEE Conference on Decision and Control. IEEE, 2007. http://dx.doi.org/10.1109/cdc.2007.4434113.

5

Kingkan, Cherdsak, Shogo Ito, Shogo Arai, Takashi Nammoto, and Koichi Hashimoto. "Model-based virtual visual servoing with point cloud data." In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016. http://dx.doi.org/10.1109/iros.2016.7759816.

6

Defterli, Sinem Gozde, and Yunjun Xu. "Virtual Motion Camouflage Based Visual Servo Control of a Leaf Picking Mechanism." In ASME 2018 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/dscc2018-9042.

Abstract:
For a recently constructed disease-detection field robot, the removal of unhealthy leaves from strawberry plants is a major task. In field operations, the picking mechanism is actuated via three previously derived inverse kinematics algorithms, whose performances are compared. Because of the high risk of rapid, unexpected deviation from the target position under field conditions, some compensation is necessary: an image-based visual servoing method with a camera-in-hand configuration is activated once the end-effector is near the target leaf, after the inverse kinematics algorithms have run. In this study, a bio-inspired trajectory optimization method is proposed for the visual servoing. The method is based on a prey-predator relationship observed in nature ("motion camouflage"), a biological phenomenon in which the predator constrains its path to a certain subspace while catching the prey. The proposed algorithm is tested both in simulations and in hardware experiments.
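The motion camouflage constraint described above, where the pursuer keeps itself on the line joining a fixed reference point and the moving prey, can be written as p(t) = r + s(t)·(prey(t) − r). A minimal sketch with made-up coordinates:

```python
def camouflage_position(ref, prey, s):
    """Motion camouflage constraint: the pursuer stays on the line through
    the fixed reference point `ref` and the prey, at path parameter s."""
    return tuple(r + s * (p - r) for r, p in zip(ref, prey))

# Halfway along the reference-to-prey line:
print(camouflage_position((0.0, 0.0), (4.0, 2.0), 0.5))  # → (2.0, 1.0)
```

Optimizing over the scalar path parameter s(t) instead of the full Cartesian trajectory is what reduces the dimensionality of the trajectory optimization in such bio-inspired schemes.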
7

Cai, Caixia, Nikhil Somani, Suraj Nair, Dario Mendoza, and Alois Knoll. "Uncalibrated stereo visual servoing for manipulators using virtual impedance control." In 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV). IEEE, 2014. http://dx.doi.org/10.1109/icarcv.2014.7064604.

8

Li, Weiguang, Guoqiang Ye, Hao Wan, Shaohua Zheng, and Zhijiang Lu. "Decoupled control for visual servoing with SVM-based virtual moments." In 2015 IEEE International Conference on Information and Automation (ICIA). IEEE, 2015. http://dx.doi.org/10.1109/icinfa.2015.7279638.

9

Nair, S., T. Roder, G. Panin, and A. Knoll. "Visual servoing of presenters in augmented virtual reality TV studios." In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5648809.

10

Al-Shanoon, Abdulrahman, Aaron Hao Tan, Haoxiang Lang, and Ying Wang. "Mobile Robot Regulation with Position Based Visual Servoing." In 2018 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA). IEEE, 2018. http://dx.doi.org/10.1109/civemsa.2018.8439978.
