Selected scientific literature on the topic "Virtual visual servoing"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Virtual visual servoing".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read its abstract online, if one is available in the metadata.

Journal articles on the topic "Virtual visual servoing":

1

Andreff, Nicolas, and Brahim Tamadazte. "Laser steering using virtual trifocal visual servoing". International Journal of Robotics Research 35, no. 6 (July 24, 2015): 672–94. http://dx.doi.org/10.1177/0278364915585585.

2

Assa, Akbar, and Farrokh Janabi-Sharifi. "Virtual Visual Servoing for Multicamera Pose Estimation". IEEE/ASME Transactions on Mechatronics 20, no. 2 (April 2015): 789–98. http://dx.doi.org/10.1109/tmech.2014.2305916.

3

Cao, Chenguang. "Research on a Visual Servoing Control Method Based on Perspective Transformation under Spatial Constraint". Machines 10, no. 11 (November 18, 2022): 1090. http://dx.doi.org/10.3390/machines10111090.

Abstract:
Visual servoing has been widely employed in robotic control to increase the flexibility and precision of a robotic arm. When the end-effector of the robotic arm needs to be moved to a spatial point without a coordinate, the conventional visual servoing control method has difficulty performing the task. The present work describes space-constraint challenges in a visual servoing system by introducing an assembly node and then presents a two-stage visual servoing control approach based on perspective transformation. First, a virtual image plane is constructed using a calibration-derived homography matrix, and the assembly node, along with other objects, is projected into this plane. Second, the controller drives the robotic arm by tracking the projections in the virtual image plane and adjusting the position and attitude of the workpiece accordingly. Three simple image features are combined into a composite image feature, and an active disturbance rejection controller (ADRC) is established to improve the robotic arm's motion sensitivity. Real-time simulations and experiments employing a robotic vision system with an eye-to-hand configuration are used to validate the effectiveness of the presented method. The results show that the robotic arm can move the workpiece to the desired position without using coordinates.
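The virtual-image-plane construction described above comes down to mapping pixel coordinates through a 3x3 homography. A minimal sketch, assuming a calibration-derived matrix (the values and function name below are illustrative, not from the paper):

```python
import numpy as np

def project_to_virtual_plane(points, H):
    """Map pixel coordinates (N, 2) into a virtual image plane via a 3x3 homography H."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coordinates
    mapped = (H @ pts_h.T).T                                # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]                   # perspective normalization

# Hypothetical calibration-derived homography (values are made up for illustration).
H = np.array([[1.02,  0.01,  5.0],
              [-0.02, 0.98, -3.0],
              [1e-5,  2e-5,  1.0]])

features = np.array([[100.0, 120.0], [220.0, 80.0]])
virtual_features = project_to_virtual_plane(features, H)  # tracked in the virtual plane
```

The controller then tracks `virtual_features` rather than the raw image measurements, which is what lets the method drive the workpiece toward a node that has no measured coordinate.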
4

Yu, Qiuda, Wu Wei, Dongliang Wang, Yanjie Li, and Yong Gao. "A Framework for IBVS Using Virtual Work". Actuators 13, no. 5 (May 10, 2024): 181. http://dx.doi.org/10.3390/act13050181.

Abstract:
The visual servoing of manipulators is challenged by two main problems: the singularity of the inverse Jacobian and the physical constraints of a manipulator. In order to overcome the singularity issue, this paper presents a novel approach for image-based visual servoing (IBVS), which converts the propagation of errors in the image plane into the conduction of virtual forces using the principle of virtual work. This approach eliminates the need for Jacobian inversion computations and prevents matrix inversion singularity. To tackle physical constraints, reverse thinking is adopted to derive the function of the upper and lower bounds of the joint velocity on the joint angle. This enables the proposed method to configure the physical constraints of the robot in a more intuitive manner. To validate the effectiveness of the proposed method, an eye-in-hand system based on UR5 in VREP, as well as a physical robot, were established.
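The inversion-free update described above is in the spirit of the classical Jacobian-transpose law: the image error acts as a virtual force that the transpose maps to joint space. A toy sketch under that assumption (not the paper's actual controller):

```python
import numpy as np

def virtual_force_step(q, s, s_star, J, gain=0.5, dt=0.05):
    """One IBVS step without Jacobian inversion: the image-plane error is treated
    as a virtual force that J^T conducts to the joints (virtual-work principle),
    so no matrix-inversion singularity can occur. A sketch of the idea only."""
    e = s - s_star                # feature error in the image plane
    q_dot = -gain * (J.T @ e)    # virtual force mapped to joint velocities
    return q + dt * q_dot        # integrate one Euler step

# Toy example: with J = I the features coincide with the joints,
# so the error must shrink after each step.
J = np.eye(2)
s_star = np.array([0.0, 0.0])
q0 = np.array([1.0, -1.0])
q1 = virtual_force_step(q0, q0, s_star, J)
```

Because only `J.T` appears, the step stays well defined even where `J` loses rank, at the cost of convergence speed compared to an inverse-based law.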
5

Kawamura, Akihiro, Kenji Tahara, Ryo Kurazume, and Tsutomu Hasegawa. "Robust Visual Servoing for Object Manipulation Against Temporary Loss of Sensory Information Using a Multi-Fingered Hand-Arm". Journal of Robotics and Mechatronics 25, no. 1 (February 20, 2013): 125–35. http://dx.doi.org/10.20965/jrm.2013.p0125.

Abstract:
This paper proposes a robust visual servoing method for object manipulation against temporary loss of sensory information. It is known that visual information is useful for reliable object grasping and precise manipulation. Visual information becomes unavailable, however, when occlusion occurs or a grasped object disappears during manipulation; in that case, the behavior of the visual servoing system becomes unstable. Our proposed method enables an object to be grasped and manipulated stably even if visual information is temporarily unavailable during manipulation. This method is based on the dynamic stable object grasping and manipulation proposed in our previous work and on the concept of virtual object information. A dynamic model of the overall system is first formulated. A new controller using both actual and virtual object information is proposed next. The usefulness of this method is finally verified through both numerical simulation and experiments using a triple-fingered mechanical hand.
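The "virtual object information" idea above can be illustrated with a simple stand-in estimator: while the measurement is available it is passed through, and during occlusion a predicted feature keeps the servo loop fed. A constant-velocity sketch of the concept (the class and its update rule are illustrative, not the authors' controller):

```python
import numpy as np

class VirtualObjectFeature:
    """Stand-in for the grasped object's visual feature: when the measurement is
    lost (occlusion), a constant-velocity virtual estimate keeps the control loop
    supplied. A sketch of the concept only."""

    def __init__(self, feature):
        self.feature = np.asarray(feature, dtype=float)
        self.velocity = np.zeros_like(self.feature)

    def update(self, measurement, dt):
        if measurement is None:                      # temporary sensory loss
            self.feature = self.feature + dt * self.velocity
        else:                                        # vision available: resynchronize
            m = np.asarray(measurement, dtype=float)
            self.velocity = (m - self.feature) / dt
            self.feature = m
        return self.feature

obj = VirtualObjectFeature([0.0, 0.0])
obj.update([1.0, 0.0], dt=1.0)       # measured frame
predicted = obj.update(None, dt=1.0)  # occluded frame: virtual information used
```

Feeding `predicted` to the controller avoids the instability that would follow from simply freezing or dropping the feature during occlusion.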
6

Xie, Hui, Geoff Fink, Alan F. Lynch, and Martin Jagersand. "Adaptive visual servoing of UAVs using a virtual camera". IEEE Transactions on Aerospace and Electronic Systems 52, no. 5 (October 2016): 2529–38. http://dx.doi.org/10.1109/taes.2016.15-0155.

7

Gratal, Xavi, Javier Romero, and Danica Kragic. "Virtual Visual Servoing for Real-Time Robot Pose Estimation". IFAC Proceedings Volumes 44, no. 1 (January 2011): 9017–22. http://dx.doi.org/10.3182/20110828-6-it-1002.02970.

8

Wallin, Erik, Viktor Wiberg, and Martin Servin. "Multi-Log Grasping Using Reinforcement Learning and Virtual Visual Servoing". Robotics 13, no. 1 (December 21, 2023): 3. http://dx.doi.org/10.3390/robotics13010003.

Abstract:
We explore multi-log grasping using reinforcement learning and virtual visual servoing for automated forwarding in a simulated environment. Automation of forest processes is a major challenge, and many techniques regarding robot control pose different challenges due to the unstructured and harsh outdoor environment. Grasping multiple logs involves various problems of dynamics and path planning, where understanding the interaction between the grapple, logs, terrain, and obstacles requires visual information. To address these challenges, we separate image segmentation from crane control and utilise a virtual camera to provide an image stream from reconstructed 3D data. We use Cartesian control to simplify domain transfer to real-world applications. Because log piles are static, visual servoing using a 3D reconstruction of the pile and its surroundings is equivalent to using real camera data until the point of grasping. This relaxes the limits on computational resources and time for the challenge of image segmentation, and allows for data collection in situations where the log piles are not occluded. The disadvantage is the lack of information during grasping. We demonstrate that this problem is manageable and present an agent that is 95% successful in picking one or several logs from challenging piles of 2–5 logs.
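The virtual camera described above synthesizes an image stream from reconstructed 3D data. Its core operation is an ordinary pinhole projection of reconstructed points through the camera's current pose; a minimal sketch (the intrinsics and function name are assumptions for illustration, and the paper's pipeline renders full segmented images rather than sparse points):

```python
import numpy as np

def virtual_camera_image(points_world, K, R, t):
    """Project reconstructed 3D points (N, 3) into a virtual pinhole camera with
    intrinsics K and pose (R, t). A sketch of the virtual-camera idea."""
    cam = (R @ points_world.T).T + t   # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]           # keep only points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]      # pixel coordinates

K = np.array([[500.0,   0.0, 320.0],  # hypothetical intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]])  # points on a reconstructed pile
pixels = virtual_camera_image(pts, K, np.eye(3), np.zeros(3))
```

Because the log pile is static, re-rendering the reconstruction from any crane pose yields the same information a real camera would, until the grapple disturbs the pile.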
9

Marchand, Éric, and François Chaumette. "Virtual Visual Servoing: a framework for real-time augmented reality". Computer Graphics Forum 21, no. 3 (September 2002): 289–98. http://dx.doi.org/10.1111/1467-8659.00588.


Theses on the topic "Virtual visual servoing":

1

Guerbas, Seif Eddine. "Modélisation adaptée des images omnidirectionnelles pour agrandir le domaine de convergence de l'asservissement visuel virtuel direct". Electronic Thesis or Diss., Amiens, 2022. http://www.theses.fr/2022AMIE0026.

Abstract:
Omnidirectional vision captures a scene in real time in all directions, with a wider field of view than a conventional camera offers. Within the environment, linking the visual features contained in the camera images to the camera's movements is a central issue for visual servoing. Direct approaches, however, are characterized by a limited domain of convergence. The main objective of this dissertation is to significantly extend that domain in the context of virtual visual servoing by representing the omnidirectional image as a Photometric Gaussian Mixture (PGM). In a second step, this approach is extended to registration and direct 3D-model-based tracking in omnidirectional images, which allows the localization of a mobile robot equipped with a panoramic camera in a 3D urban model to be studied. The results show a significant enlargement of the convergence domain and high robustness to large inter-frame movements, as evidenced by experiments in virtual environments and with real images captured with a mobile robot and a vehicle.
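A Photometric Gaussian Mixture replaces raw pixel intensities with a smooth function in which every pixel contributes a Gaussian weighted by its intensity; the smoothing is what widens the basin of convergence of direct alignment. A toy sketch of the representation (not the thesis implementation, which adapts it to omnidirectional geometry):

```python
import numpy as np

def pgm_value(image, x, y, sigma):
    """Evaluate a photometric Gaussian mixture at (x, y): each pixel contributes
    a Gaussian of width sigma weighted by its intensity. A toy sketch only."""
    h, w = image.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinate grids
    g = np.exp(-((u - x) ** 2 + (v - y) ** 2) / (2.0 * sigma ** 2))
    return float((image * g).sum())

# A single bright pixel produces a smooth bump peaked at that pixel.
img = np.zeros((5, 5))
img[2, 2] = 1.0
```

Larger values of `sigma` make the cost landscape smoother, trading alignment precision for a larger convergence domain; coarse-to-fine schedules over `sigma` are a natural fit.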
2

Chang, Ting-Yu (張庭育). "Study on Virtual Visual Servoing Estimator and Dynamic Visual Servoing Scheme". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/739fwn.

3

Lai, Chang-Yu (賴長佑). "System Integration and Development for Soccer Robots on Virtual Reality and Visual Servoing". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/67981247713025562888.

Abstract:
Master's thesis, National Taiwan University, Institute of Mechanical Engineering (academic year 90).
In this thesis, a visual servoing system is implemented on a soccer robot system. We propose a method combining the RGB (red, green, blue) and HSI (hue, saturation, intensity) color models to define the colors of interest. When searching for blobs, geometric characteristics are used to identify them, and a prediction method locates the tracking windows. For robot number identification, patterns of different shapes are adopted as recognition features; a robust, scale- and orientation-invariant algorithm recognizes these shapes and determines the correct identity, which is important for the strategy system. In addition, we propose a virtual soccer robot platform that serves two functions. The first is remote monitoring: events on the real soccer robots can be displayed on a distant computer monitor. The second is strategy development: the platform can demonstrate the result of a strategy design without real soccer robots, freeing development from limitations of space, time, and cost.
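The combined RGB/HSI color definition above can be sketched with the standard HSI saturation and intensity formulas gated by an RGB channel threshold. All threshold values below are made up for illustration; the thesis's actual color definitions are not given here:

```python
import numpy as np

def rgb_to_saturation_intensity(rgb):
    """Saturation and intensity of the HSI model for an (N, 3) RGB array in [0, 1]."""
    s = rgb.sum(axis=1)
    intensity = s / 3.0
    # S = 1 - 3*min(R,G,B)/(R+G+B); guard against division by zero for black pixels.
    saturation = np.where(s > 0.0, 1.0 - 3.0 * rgb.min(axis=1) / np.maximum(s, 1e-12), 0.0)
    return saturation, intensity

def is_red_of_interest(rgb, r_min=0.5, s_min=0.3, i_min=0.2):
    """Combine an RGB channel threshold with HSI thresholds, in the spirit of the
    thesis's mixed RGB/HSI color definition (threshold values are illustrative)."""
    sat, inten = rgb_to_saturation_intensity(rgb)
    return (rgb[:, 0] >= r_min) & (sat >= s_min) & (inten >= i_min)

pixels = np.array([[1.0, 0.0, 0.0],    # saturated red: of interest
                   [0.5, 0.5, 0.5]])   # gray: saturation 0, rejected
mask = is_red_of_interest(pixels)
```

Mixing the two models is useful because RGB thresholds alone are brightness-sensitive, while the HSI saturation term rejects washed-out or gray regions regardless of brightness.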

Conference papers on the topic "Virtual visual servoing":

1

Nammoto, Takashi, Koichi Hashimoto, Shingo Kagami, and Kazuhiro Kosuge. "High speed/accuracy visual servoing based on virtual visual servoing with stereo cameras". In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013). IEEE, 2013. http://dx.doi.org/10.1109/iros.2013.6696330.

2

Alex, Joseph, Barmeshwar Vikramaditya, and Bradley J. Nelson. "A Virtual Reality Teleoperator Interface for Assembly of Hybrid MEMS Prototypes". In ASME 1998 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/detc98/mech-5836.

Abstract:
In this paper we describe a teleoperated microassembly workcell that integrates a VRML-based virtual microworld with visual servoing micromanipulation strategies. Java is used to program the VRML-based supervisory interface and to communicate with the microassembly workcell. This provides platform independence and allows remote teleoperation over the Internet. A key aspect of our approach entails the integration of teleoperation and visual servoing strategies. This allows a supervisor to guide the task remotely, while visual servoing strategies compensate for the imprecisely calibrated microworld. Results are presented that demonstrate system performance when a supervisor manipulates a microobject remotely. Though Internet delays impact the dynamic performance of the system, teleoperated relative parts placement with submicron precision is successfully demonstrated.
3

Pressigout, M., and E. Marchand. "Model-free augmented reality by virtual visual servoing". In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1334401.

4

Le Bras, Florent, Tarek Hamel, and Robert Mahony. "Visual servoing of a VTOL vehicle using virtual states". In 2007 46th IEEE Conference on Decision and Control. IEEE, 2007. http://dx.doi.org/10.1109/cdc.2007.4434113.

5

Kingkan, Cherdsak, Shogo Ito, Shogo Arai, Takashi Nammoto, and Koichi Hashimoto. "Model-based virtual visual servoing with point cloud data". In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016. http://dx.doi.org/10.1109/iros.2016.7759816.

6

Defterli, Sinem Gozde, and Yunjun Xu. "Virtual Motion Camouflage Based Visual Servo Control of a Leaf Picking Mechanism". In ASME 2018 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/dscc2018-9042.

Abstract:
For a recently constructed disease-detection field robot, removing unhealthy leaves from strawberry plants is a major task. In field operations, the picking mechanism is actuated via three previously derived inverse kinematics algorithms, whose performances are compared. Because of the high risk of rapid, unexpected deviation from the target position under field conditions, compensation is necessary. For this purpose, an image-based visual servoing method with a camera-in-hand configuration is activated once the end-effector, having executed the inverse kinematics algorithms, is near the target leaf. In this study, a bio-inspired trajectory optimization method is proposed for visual servoing, constructed from a prey-predator relationship observed in nature ("motion camouflage"), in which the predator constrains its path to a certain subspace while catching the prey. The proposed algorithm is tested both in simulations and in hardware experiments.
7

Cai, Caixia, Nikhil Somani, Suraj Nair, Dario Mendoza, and Alois Knoll. "Uncalibrated stereo visual servoing for manipulators using virtual impedance control". In 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV). IEEE, 2014. http://dx.doi.org/10.1109/icarcv.2014.7064604.

8

Li, Weiguang, Guoqiang Ye, Hao Wan, Shaohua Zheng, and Zhijiang Lu. "Decoupled control for visual servoing with SVM-based virtual moments". In 2015 IEEE International Conference on Information and Automation (ICIA). IEEE, 2015. http://dx.doi.org/10.1109/icinfa.2015.7279638.

9

Nair, S., T. Roder, G. Panin, and A. Knoll. "Visual servoing of presenters in augmented virtual reality TV studios". In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5648809.

10

Al-Shanoon, Abdulrahman, Aaron Hao Tan, Haoxiang Lang, and Ying Wang. "Mobile Robot Regulation with Position Based Visual Servoing". In 2018 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA). IEEE, 2018. http://dx.doi.org/10.1109/civemsa.2018.8439978.

