Journal articles on the topic "Virtual visual servoing"


Cite a source in APA, MLA, Chicago, Harvard, and many other styles


See the top 40 journal articles for research on the topic "Virtual visual servoing".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read the abstract of the work online, if it is available in the metadata.

Browse journal articles from many scientific areas and compile an accurate bibliography.

1

Andreff, Nicolas, and Brahim Tamadazte. "Laser steering using virtual trifocal visual servoing". International Journal of Robotics Research 35, no. 6 (24 July 2015): 672–94. http://dx.doi.org/10.1177/0278364915585585.

2

Assa, Akbar, and Farrokh Janabi-Sharifi. "Virtual Visual Servoing for Multicamera Pose Estimation". IEEE/ASME Transactions on Mechatronics 20, no. 2 (April 2015): 789–98. http://dx.doi.org/10.1109/tmech.2014.2305916.

3

Cao, Chenguang. "Research on a Visual Servoing Control Method Based on Perspective Transformation under Spatial Constraint". Machines 10, no. 11 (18 November 2022): 1090. http://dx.doi.org/10.3390/machines10111090.

Abstract:
Visual servoing has been widely employed in robotic control to increase the flexibility and precision of a robotic arm. When the end-effector of the robotic arm needs to be moved to a spatial point without a coordinate, the conventional visual servoing control method has difficulty performing the task. The present work describes space-constraint challenges in a visual servoing system by introducing an assembly node and then presents a two-stage visual servoing control approach based on perspective transformation. First, a virtual image plane is constructed using a calibration-derived homography matrix, and the assembly node, as well as other objects, is projected into this plane. Second, the controller drives the robotic arm by tracking the projections in the virtual image plane and adjusting the position and attitude of the workpiece accordingly. Three simple image features are combined into a composite image feature, and an active disturbance rejection controller (ADRC) is established to improve the robotic arm’s motion sensitivity. Real-time simulations and experiments employing a robotic vision system with an eye-to-hand configuration are used to validate the effectiveness of the presented method. The results show that the robotic arm can move the workpiece to the desired position without using coordinates.
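The virtual-image-plane step described in the abstract above amounts to mapping pixel points through a homography. A minimal sketch of that projection, with a hypothetical homography matrix rather than one derived from any particular calibration:

```python
import numpy as np

def project_through_homography(H, points):
    """Map 2D pixel points into a virtual image plane via homography H."""
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coords
    mapped = homog @ H.T                                   # apply homography
    return mapped[:, :2] / mapped[:, 2:3]                  # back to Euclidean

# Identity homography leaves points unchanged (sanity check).
H = np.eye(3)
print(project_through_homography(H, [[100.0, 50.0], [320.0, 240.0]]))
```

The same routine would be called with the calibration-derived matrix to project the assembly node and tracked objects into the virtual plane before servoing.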
4

Yu, Qiuda, Wu Wei, Dongliang Wang, Yanjie Li, and Yong Gao. "A Framework for IBVS Using Virtual Work". Actuators 13, no. 5 (10 May 2024): 181. http://dx.doi.org/10.3390/act13050181.

Abstract:
The visual servoing of manipulators is challenged by two main problems: the singularity of the inverse Jacobian and the physical constraints of the manipulator. To overcome the singularity issue, this paper presents a novel approach for image-based visual servoing (IBVS), which converts the propagation of errors in the image plane into the conduction of virtual forces using the principle of virtual work. This approach eliminates the need for Jacobian inversion and thereby prevents matrix-inversion singularities. To tackle physical constraints, the upper and lower bounds of the joint velocity are derived as functions of the joint angle, which allows the physical constraints of the robot to be configured in a more intuitive manner. To validate the effectiveness of the proposed method, an eye-in-hand system based on a UR5 in VREP, as well as a physical robot, was established.
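The transpose-Jacobian idea behind a virtual-work formulation can be caricatured in a few lines: the image error is treated as a virtual force and mapped to joint space through Jᵀ, so no matrix is inverted and an inversion singularity cannot occur. The gain and Jacobian values below are illustrative, not taken from the paper:

```python
import numpy as np

def virtual_work_step(J, error, gain=0.5):
    """Map an image-space error to a joint-velocity command via the
    Jacobian transpose, treating -gain*error as a virtual force."""
    force = -gain * np.asarray(error, dtype=float)  # virtual force in image space
    return J.T @ force                              # no J inverse involved

# Even a rank-deficient Jacobian yields a finite command,
# whereas a classical inverse-Jacobian law would be undefined here.
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])   # singular: inverse does not exist
print(virtual_work_step(J, [2.0, 1.0]))
```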
5

Kawamura, Akihiro, Kenji Tahara, Ryo Kurazume, and Tsutomu Hasegawa. "Robust Visual Servoing for Object Manipulation Against Temporary Loss of Sensory Information Using a Multi-Fingered Hand-Arm". Journal of Robotics and Mechatronics 25, no. 1 (20 February 2013): 125–35. http://dx.doi.org/10.20965/jrm.2013.p0125.

Abstract:
This paper proposes a robust visual servoing method for object manipulation that tolerates temporary loss of sensory information. Visual information is known to be useful for reliable object grasping and precise manipulation; it becomes unavailable, however, when occlusion occurs or the grasped object disappears from view during manipulation, in which case the behavior of the visual servoing system becomes unstable. The proposed method enables an object to be grasped and manipulated stably even if visual information is temporarily unavailable. It builds on the dynamic stable object grasping and manipulation proposed in our previous work and on the concept of virtual object information. A dynamic model of the overall system is first formulated, and a new controller using both actual and virtual object information is then proposed. The usefulness of this method is verified through both numerical simulation and experiments using a triple-fingered mechanical hand.
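The virtual-object idea above (keep controlling on a predicted object state while vision is lost) can be sketched as a simple fallback estimator. This constant-velocity predictor is an illustrative stand-in, not the controller from the paper:

```python
def track_with_fallback(measurements, dt=1.0):
    """Return a pose estimate per time step; when a measurement is None
    (occlusion), propagate a virtual object state at the last seen velocity."""
    estimates, last, velocity = [], None, 0.0
    for z in measurements:
        if z is not None:                  # vision available: use it
            if last is not None:
                velocity = (z - last) / dt  # refresh velocity estimate
            last = z
        elif last is not None:
            last = last + velocity * dt     # vision lost: predict forward
        estimates.append(last)
    return estimates

# Object moving at +1.0/step, occluded for two steps, then reacquired.
print(track_with_fallback([0.0, 1.0, None, None, 4.0]))  # → [0.0, 1.0, 2.0, 3.0, 4.0]
```

The controller then servos on the virtual estimate exactly as it would on a measured one, so the loop never sees a discontinuity.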
6

Xie, Hui, Geoff Fink, Alan F. Lynch, and Martin Jagersand. "Adaptive visual servoing of UAVs using a virtual camera". IEEE Transactions on Aerospace and Electronic Systems 52, no. 5 (October 2016): 2529–38. http://dx.doi.org/10.1109/taes.2016.15-0155.

7

Gratal, Xavi, Javier Romero, and Danica Kragic. "Virtual Visual Servoing for Real-Time Robot Pose Estimation". IFAC Proceedings Volumes 44, no. 1 (January 2011): 9017–22. http://dx.doi.org/10.3182/20110828-6-it-1002.02970.

8

Wallin, Erik, Viktor Wiberg, and Martin Servin. "Multi-Log Grasping Using Reinforcement Learning and Virtual Visual Servoing". Robotics 13, no. 1 (21 December 2023): 3. http://dx.doi.org/10.3390/robotics13010003.

Abstract:
We explore multi-log grasping using reinforcement learning and virtual visual servoing for automated forwarding in a simulated environment. Automation of forest processes is a major challenge, and many techniques regarding robot control pose different challenges due to the unstructured and harsh outdoor environment. Grasping multiple logs involves various problems of dynamics and path planning, where understanding the interaction between the grapple, logs, terrain, and obstacles requires visual information. To address these challenges, we separate image segmentation from crane control and utilise a virtual camera to provide an image stream from reconstructed 3D data. We use Cartesian control to simplify domain transfer to real-world applications. Because log piles are static, visual servoing using a 3D reconstruction of the pile and its surroundings is equivalent to using real camera data until the point of grasping. This relaxes the limits on computational resources and time for the challenge of image segmentation, and allows for data collection in situations where the log piles are not occluded. The disadvantage is the lack of information during grasping. We demonstrate that this problem is manageable and present an agent that is 95% successful in picking one or several logs from challenging piles of 2–5 logs.
9

Marchand, Eric, and Francois Chaumette. "Virtual Visual Servoing: a framework for real-time augmented reality". Computer Graphics Forum 21, no. 3 (September 2002): 289–98. http://dx.doi.org/10.1111/1467-8659.00588.

10

Marchand, Éric, and François Chaumette. "Virtual Visual Servoing: a framework for real-time augmented reality". Computer Graphics Forum 21, no. 3 (September 2002): 289–97. http://dx.doi.org/10.1111/1467-8659.t01-1-00588.

11

Fink, Geoff, Hui Xie, Alan F. Lynch, and Martin Jagersand. "Dynamic Visual Servoing for a Quadrotor Using a Virtual Camera". Unmanned Systems 05, no. 01 (January 2017): 1–17. http://dx.doi.org/10.1142/s2301385017500017.

Abstract:
This paper presents a dynamic image-based visual servoing (IBVS) control law for a quadrotor unmanned aerial vehicle (UAV) equipped with a single fixed on-board camera. The motion control problem is to regulate the relative position and yaw of the vehicle to a moving planar target located within the camera’s field of view. The control law is termed dynamic as it is based on the dynamics of the vehicle. To simplify the kinematics and dynamics, the control law relies on the notion of a virtual camera and image moments as visual features. The convergence of the closed loop is proven to be globally asymptotically stable for a horizontal target. In the case of nonhorizontal targets, we modify the control using a homography decomposition. Experimental and simulation results demonstrate the control law’s performance.
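The virtual-camera notion used in several of the quadrotor papers listed here reprojects features from the tilted on-board camera into a frame that always points straight down. A minimal sketch with normalized image coordinates; the rotation conventions (roll about x, pitch about y) are an assumption for illustration:

```python
import numpy as np

def to_virtual_camera(feature, roll, pitch):
    """Rotate a normalized image point from the body-fixed camera into a
    virtual camera with zero roll/pitch (still aligned in yaw)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    ray = np.array([feature[0], feature[1], 1.0])           # viewing ray
    v = Ry @ Rx @ ray                                       # undo attitude
    return v[:2] / v[2]                                     # reproject

# With zero roll and pitch the virtual and real cameras coincide.
print(to_virtual_camera((0.1, -0.2), 0.0, 0.0))
```

Servoing on the reprojected features decouples the image error from the roll and pitch the vehicle needs for translation, which is what makes the underactuated dynamics tractable.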
12

Zhang, Kaixiang, François Chaumette, and Jian Chen. "Trifocal tensor-based 6-DOF visual servoing". International Journal of Robotics Research 38, no. 10-11 (27 August 2019): 1208–28. http://dx.doi.org/10.1177/0278364919872544.

Abstract:
This paper proposes a trifocal tensor-based approach for six-degree-of-freedom visual servoing. The trifocal tensor model among the current, desired, and reference views is constructed to describe the geometric relationship of the system. More precisely, to ensure the computation consistency of trifocal tensor, a virtual reference view is introduced by exploiting the transfer relationships between the initial and desired images. Instead of resorting to explicit estimation of the camera pose, a set of visual features with satisfactory decoupling properties are constructed from the tensor elements. Based on the selected features, a visual controller is developed to regulate the camera to a desired pose, and an adaptive update law is used to compensate for the unknown distance scale factor. Furthermore, the system stability is analyzed via Lyapunov-based techniques, showing that the proposed controller can achieve almost global asymptotic stability. Both simulation and experimental results are provided to demonstrate the effectiveness and robustness of our approach under different conditions and case studies.
13

Shen, Yue, Qingxuan Jia, Ruiquan Wang, Zeyuan Huang, and Gang Chen. "Learning-Based Visual Servoing for High-Precision Peg-in-Hole Assembly". Actuators 12, no. 4 (27 March 2023): 144. http://dx.doi.org/10.3390/act12040144.

Abstract:
Visual servoing is widely used in the peg-in-hole assembly due to the uncertainty of pose. Humans can easily align the peg with the hole according to key visual points/edges. By imitating human behavior, we propose P2HNet, a learning-based neural network that can directly extract desired landmarks for visual servoing. To avoid collecting and annotating a large number of real images for training, we built a virtual assembly scene to generate many synthetic data for transfer learning. A multi-modal peg-in-hole strategy is then introduced to combine image-based search with force-based insertion. P2HNet-based visual servoing and spiral search are used to align the peg with the hole from coarse to fine. Force control is then used to complete the insertion. The strategy exploits the flexibility of neural networks and the stability of traditional methods. The effectiveness of the method was experimentally verified in the D-sub connector assembly with sub-millimeter clearance. The results show that the proposed method can achieve a higher success rate and efficiency than the baseline method in the high-precision peg-in-hole assembly.
14

Xi, Wenming. "STUDY OF VISUAL SERVOING FOR VIRTUAL MICROASSEMBLY BASED ON SOLID MODEL". Chinese Journal of Mechanical Engineering 41, no. 03 (2005): 59. http://dx.doi.org/10.3901/jme.2005.03.059.

15

Bassi, Danilo, Adrian Silva, Cristian Marinkovic, and Gonzalo Acuña. "Visual Servoing of Robotic Manipulator in a Virtual Learned Articular Space". IFAC Proceedings Volumes 33, no. 17 (July 2000): 1235–40. http://dx.doi.org/10.1016/s1474-6670(17)39582-4.

16

Zheng, Dongliang, Hesheng Wang, Jingchuan Wang, Siheng Chen, Weidong Chen, and Xinwu Liang. "Image-Based Visual Servoing of a Quadrotor Using Virtual Camera Approach". IEEE/ASME Transactions on Mechatronics 22, no. 2 (April 2017): 972–82. http://dx.doi.org/10.1109/tmech.2016.2639531.

17

Comport, A. I., E. Marchand, M. Pressigout, and F. Chaumette. "Real-time markerless tracking for augmented reality: the virtual visual servoing framework". IEEE Transactions on Visualization and Computer Graphics 12, no. 4 (July 2006): 615–28. http://dx.doi.org/10.1109/tvcg.2006.78.

18

Carelli, Ricardo, Eduardo Oliva, Carlos Soria, and Oscar Nasisi. "Combined force and visual control of an industrial robot". Robotica 22, no. 2 (March 2004): 163–71. http://dx.doi.org/10.1017/s0263574703005423.

Abstract:
This work proposes control structures that efficiently combine force control with visual servo control of robot manipulators. Impedance controllers are considered that are based both on visual servoing and on physical or fictitious force feedback, with the force and visual information combined in the image space. Force and visual servo controllers included in extended hybrid control structures are also considered. The combination of force- and vision-based control allows the task range of the robot to be extended to partially structured environments. The proposed controllers, implemented on an industrial SCARA-type robot, are tested in tasks involving physical and virtual contact with the environment.
19

Chen, Chin‐Sheng, Ming‐Shium Hsieh, Yu‐Wen Chiu, Chia‐Hou Tsai, Shih‐Ming Liu, Chun‐Chang Lu, and Ping‐Lang Yen. "An unconstrained virtual bone clamper for a knee surgical robot using visual servoing technique". Journal of the Chinese Institute of Engineers 33, no. 3 (April 2010): 379–86. http://dx.doi.org/10.1080/02533839.2010.9671626.

20

Ji, Peng, Hong Zeng, Aiguo Song, Ping Yi, PengWen Xiong, and Huijun Li. "Virtual exoskeleton-driven uncalibrated visual servoing control for mobile robotic manipulators based on human–robot–robot cooperation". Transactions of the Institute of Measurement and Control 40, no. 14 (8 January 2018): 4046–62. http://dx.doi.org/10.1177/0142331217741538.

Abstract:
This paper presents an uncalibrated visual servoing control system based on human–robot–robot cooperation (HRRC). In case of malfunctions of the joint sensors of a robotic manipulator, the proposed system enables the mobile robot to continue operating the manipulator to complete tasks that require careful handling. With the aid of a virtual exoskeleton, an operator may use a human–computer interaction (HCI) device to guide the malfunctioning manipulator; during the guiding process, the virtual exoskeleton serves as a connector between the HCI device and the manipulator. However, non-uniform guiding through the HCI device can cause a large-residual problem at any time. To solve this problem, a residual switching algorithm (RSA) is proposed that identifies, from the motion characteristics of the manual guiding, whether the residual should be calculated, reducing the computational cost and ensuring tracking stability. To enhance the virtual exoskeleton’s ability to drive the manipulator, a multi-joint fuzzy driving controller is proposed, which drives the corresponding joint of the manipulator in accordance with an offset vector between the virtual exoskeleton and the manipulator. Lastly, guiding experiments have verified that the proposed RSA has better tracking performance than the contrast algorithm, and a peg-in-hole assembly experiment has shown that the proposed control system can assist the operator in efficiently controlling a robotic manipulator with malfunctioning joint sensors.
21

Li, Pengcheng, Ahmad Ghasemi, Wenfang Xie, and Wei Tian. "Visual Closed-Loop Dynamic Model Identification of Parallel Robots Based on Optical CMM Sensor". Electronics 8, no. 8 (26 July 2019): 836. http://dx.doi.org/10.3390/electronics8080836.

Abstract:
Parallel robots present outstanding advantages compared with their serial counterparts; they have both a higher force-to-weight ratio and better stiffness. However, the closed-chain mechanism makes it difficult to design control systems for practical applications because of its highly coupled dynamics. This paper focuses on dynamic model identification of 6-DOF parallel robots for advanced model-based visual servoing control design. A visual closed-loop output-error identification method based on an optical coordinate-measuring-machine (CMM) sensor for parallel robots is proposed. The main advantage over the conventional identification method is that joint torque measurements and exact knowledge of the built-in robot controllers are not needed. The time-consuming forward kinematics calculation employed in the conventional identification method for parallel robots can also be avoided, thanks to the adoption of the optical CMM sensor for real-time pose estimation. A case study on a 6-DOF RSS parallel robot is carried out. The dynamic model of the parallel robot is derived based on the virtual work principle, and the built model is verified through Matlab/SimMechanics. By using an outer-loop visual servoing controller to stabilize both the parallel robot and the simulated model, a visual closed-loop output-error identification method is applied and the model parameters are identified using a nonlinear optimization technique. The effectiveness of the proposed identification algorithm is validated by experimental tests.
22

Borshchova, Iryna, and Siu O’Young. "Visual servoing for autonomous landing of a multi-rotor UAS on a moving platform". Journal of Unmanned Vehicle Systems 5, no. 1 (1 March 2017): 13–26. http://dx.doi.org/10.1139/juvs-2015-0044.

Abstract:
In this paper, a method to control a small multi-rotor unmanned aerial system (UAS) while landing on a moving platform using image-based visual servoing is described. The landing scheme is based on positioning visual markers on a landing platform in the form of a detectable pattern. When the onboard camera detects the object pattern, the flight control algorithm will send visual-based servo-commands to align the multi-rotor with the targets. The main contribution is that the proposed method is less computationally expensive, as it uses color-based object detection applied to a geometric pattern instead of feature tracking algorithms. This method has the advantage that it does not require calculating the distance to the objects (depth). The proposed method was tested in simulation using a quadcopter model in V-REP (virtual robotics experimental platform) working in parallel with robot operating system (ROS). Finally, this method was validated in a series of real-time experiments with a quadcopter.
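The color-based detection step described above amounts to thresholding the image and steering toward the blob centroid. A toy version on a synthetic binary mask; the gain value and proportional law are hypothetical, chosen only to illustrate the idea:

```python
import numpy as np

def centroid_servo_command(target_mask, gain=0.01):
    """Compute the pixel centroid of a binary marker mask and return a
    proportional velocity command that drives the target to image center."""
    ys, xs = np.nonzero(target_mask)
    if xs.size == 0:
        return None                     # marker not visible: no command
    cy, cx = ys.mean(), xs.mean()
    h, w = target_mask.shape
    err = np.array([cx - (w - 1) / 2.0, cy - (h - 1) / 2.0])  # pixel error
    return -gain * err                  # steer toward the image center

# A 9x9 image with the "marker" already centered: zero commanded velocity.
img = np.zeros((9, 9), dtype=bool)
img[4, 4] = True
print(centroid_servo_command(img))
```

No depth estimate appears anywhere, which is the property the abstract highlights.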
23

Zoppi, Matteo, and Rezia Molfino. "ArmillEye: Flexible Platform for Underwater Stereo Vision". Journal of Mechanical Design 129, no. 8 (8 August 2006): 808–15. http://dx.doi.org/10.1115/1.2735338.

Abstract:
The paper describes ArmillEye, a 3-degree of freedom (DOF) flexible hybrid platform designed for agile underwater stereoptic vision. Effective telecontrol systems of remote operated vehicles require active and dexterous camera support in order to allow the operator to easily and promptly change the point of view, also improving the virtual reconstruction of the environment in difficult operative conditions (dirtiness, turbulence, and partial occlusion). The same concepts hold for visual servoing of autonomous underwater vehicles. ArmillEye was designed for this specific application; it is based on the concept of using a parallel-hybrid mechanism architecture that, in principle, allows us to minimize the ad hoc waterproof boxes (generally only for cameras) while the actuators, fixed to the base of the mechanism, can be placed in the main body of the underwater vehicle. This concept proved effective and was previously proposed for underwater arms. The synthesis of ArmillEye followed the specific aims of visual telecontrol and servoing, specifying vision workspace, dexterity, and dynamics parameters. Two versions of ArmillEye are proposed: the first with two cameras to obtain stereoptic vision using two viewpoints (two rotational freedoms with a fixed tilt or pan axis, plus vergence); the second with one camera operated to obtain stereoptic vision using one viewpoint (two rotational freedoms with a fixed tilt or pan axis, plus extrusion).
24

Ozawa, Ryuta, and François Chaumette. "Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach". Advanced Robotics 27, no. 9 (June 2013): 683–96. http://dx.doi.org/10.1080/01691864.2013.776967.

25

Xie, Hui, Alan F. Lynch, and Martin Jagersand. "Dynamic IBVS of a rotary wing UAV using line features". Robotica 34, no. 9 (9 December 2014): 2009–26. http://dx.doi.org/10.1017/s0263574714002707.

Abstract:
In this paper we propose a dynamic image-based visual servoing (IBVS) control for a rotary wing unmanned aerial vehicle (UAV) which directly accounts for the vehicle's underactuated dynamic model. The motion control objective is to follow parallel lines and is motivated by power line inspection tasks where the UAV's relative position and orientation to the lines are controlled. The design is based on a virtual camera whose motion follows the onboard physical camera but which is constrained to point downwards independent of the vehicle's roll and pitch angles. A set of image features is proposed for the lines projected into the virtual camera frame. These features are chosen to simplify the interaction matrix which in turn leads to a simpler IBVS control design which is globally asymptotically stable. The proposed scheme is adaptive and therefore does not require depth estimation. Simulation results are presented to illustrate the performance of the proposed control and its robustness to calibration parameter error.
26

IWASAKI, Takuya, and Kimitoshi YAMAZAKI. "Visual Servoing Corresponding to Various Obstacle Placements and Target Object Shapes Based on Learning in Virtual Environments". Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2021 (2021): 2P2-H12. http://dx.doi.org/10.1299/jsmermd.2021.2p2-h12.

27

Van, Mien, Shuzhi Sam Ge, and Dariusz Ceglarek. "Fault Estimation and Accommodation For Virtual Sensor Bias Fault in Image-Based Visual Servoing Using Particle Filter". IEEE Transactions on Industrial Informatics 14, no. 4 (April 2018): 1312–22. http://dx.doi.org/10.1109/tii.2017.2723930.

28

MATSUURA, Shoutaro, and Noriaki MARU. "Position and Attitude Control of Eye-In-Hand Robot by Dynamic Visual Servoing Based on Virtual Spring-Dumper Hypothesis Using Binocular Visual Space Error". TRANSACTIONS OF THE JAPAN SOCIETY OF MECHANICAL ENGINEERS Series C 77, no. 776 (2011): 1366–75. http://dx.doi.org/10.1299/kikaic.77.1366.

29

Almaghout, K., and A. Klimchik. "Vision-Based Robotic Comanipulation for Deforming Cables". Nelineinaya Dinamika 18, no. 5 (2022): 0. http://dx.doi.org/10.20537/nd221213.

Abstract:
Although deformable linear objects (DLOs), such as cables, are widely used across many fields and activities, robotic manipulation of these objects is considerably more complex than rigid-body manipulation and remains an open challenge. In this paper, we introduce a new framework in which two robotic arms cooperatively manipulate a DLO from an initial shape to a desired one. Based on visual servoing and computer vision techniques, a perception approach is proposed to detect and sample the DLO as a set of virtual feature points. A manipulation planning approach then maps the motion of the manipulators' end effectors to the DLO points through a Jacobian matrix. To avoid excessive stretching of the DLO, the planning approach generates a path for each DLO point, forming profiles between the initial and desired shapes. It is guaranteed that all these intershape profiles are reachable and maintain the cable length constraint. The framework and the aforementioned approaches are validated in real-life experiments.
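The sampling step above, turning a detected cable into a fixed set of virtual feature points, can be sketched as arc-length resampling of the detected polyline. This helper is an illustrative reconstruction, not code from the paper:

```python
import numpy as np

def sample_feature_points(polyline, n):
    """Resample a detected cable polyline into n virtual feature points
    spaced uniformly by arc length."""
    pts = np.asarray(polyline, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)                 # equal arc spacing
    x = np.interp(targets, s, pts[:, 0])
    y = np.interp(targets, s, pts[:, 1])
    return np.stack([x, y], axis=1)

# A straight 2-point "cable" resampled into 3 equally spaced points.
print(sample_feature_points([[0.0, 0.0], [2.0, 0.0]], 3))
```

With a fixed number of feature points, the error between current and desired shapes has constant dimension, which is what makes a Jacobian mapping to end-effector motion well defined.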
30

Mincă, Eugenia, Adrian Filipescu, Daniela Cernega, Răzvan Șolea, Adriana Filipescu, Dan Ionescu, and Georgian Simion. "Digital Twin for a Multifunctional Technology of Flexible Assembly on a Mechatronics Line with Integrated Robotic Systems and Mobile Visual Sensor—Challenges towards Industry 5.0". Sensors 22, no. 21 (25 October 2022): 8153. http://dx.doi.org/10.3390/s22218153.

Abstract:
A digital twin for a multifunctional technology for flexible manufacturing on an assembly, disassembly, and repair mechatronics line (A/D/RML), assisted by a complex autonomous system (CAS), is presented in this paper. The hardware architecture consists of the A/D/RML and a six-workstation (WS) mechatronics line (ML) connected to a flexible cell (FC) and equipped with a six-degree-of-freedom (DOF) industrial robotic manipulator (IRM). The CAS has in its structure a wheeled mobile robot (WMR) with two driving wheels and one free wheel (2DW/1FW), equipped with a 7-DOF robotic manipulator (RM). On the end effector of the RM, a mobile visual servoing system (eye-in-hand MVSS) is mounted. The multifunctionality is provided by the three actions, assembly, disassembly, and repair, while the flexibility is due to the assembly of different products. After disassembly or repair, the CAS picks up the disassembled components and transports them to the appropriate storage depots for reuse. Disassembling or repairing starts after assembly if the final assembled product fails the quality test. The virtual world that serves as the digital counterpart consists of task assignment, planning, and synchronization of the A/D/RML with the integrated robotic systems, IRM, and CAS. Additionally, the virtual world includes hybrid modeling with synchronized hybrid Petri nets (SHPN), simulation of the SHPN models, modeling of the MVSS, and simulation of the trajectory-tracking sliding-mode control (TTSMC) of the CAS. The real world, as counterpart of the digital twin, consists of communication, synchronization, and control of the A/D/RML and CAS. In addition, the real world includes control of the MVSS, the inverse kinematic control (IKC) of the RM, and a graphical user interface (GUI) for monitoring and real-time control of the whole system.
The "Digital twin" approach has been designed to meet all the requirements and attributes of Industry 4.0 and beyond towards Industry 5.0, the target being a closer collaboration between the human operator and the production line.
31

Shi, Lintao, Baoquan Li, Wuxi Shi, and Xuebo Zhang. "Visual servoing of quadrotor UAVs for slant targets with autonomous object search". Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 13 January 2023, 095965182211444. http://dx.doi.org/10.1177/09596518221144490.

Abstract:
In this paper, an enhanced visual servoing method is designed for a quadrotor unmanned aerial vehicle (UAV) based on virtual-plane image moments, under the underactuation and tight-coupling constraints of UAV kinematics. Moreover, in order to make the UAV search for visual targets autonomously in the target vicinity during flight, a flexible flight system is developed with stages of take-off, target searching, and image-based visual servoing (IBVS). With a dual-camera sensor configuration, the UAV system searches for targets from given directions while performing localization. A virtual image plane is constructed and image moments are adopted to decouple UAV lateral movement. For a non-horizontal target, homography is utilized to construct the target plane and transform it into a horizontal plane. Backstepping techniques are used to derive the nonlinear controller that realizes the IBVS strategy. Stability analysis proves global asymptotic performance of the closed-loop system. Experimental verification shows the feasibility of the overall flight system and the effectiveness of the visual servoing controller.
32

Qian, Zhenyu, Yuanshuai Dong, Yun Hou, Hong Zhang, ShuangWen Fan, and Hang Zhong. "A geometric approach for homography-based visual servo control of underactuated UAVs". Measurement and Control, 23 April 2024. http://dx.doi.org/10.1177/00202940241238918.

Abstract:
This paper proposes a new geometric control method for homography-based visual servoing (HBVS) of underactuated UAVs. To address the difficulties of applying geometric control in HBVS and to explore a visual servo control technology applicable to aerial detection operations, this paper integrates geometric control into the visual servoing framework and designs a new homography-based geometric visual servoing controller. The outer loop uses the virtual homography matrix between the two images as feedback information, while the inner loop controls the orientation of the UAV through geometric control. The stability of the proposed controller is proved based on Lyapunov’s theory. The proposed method has better transient and dynamic performance than the conventional visual servo method, as demonstrated by a large number of experiments. In addition, the application of the controller on an unmanned aerial manipulator is demonstrated.
33

Wang, Runhua, Xuebo Zhang, Yongchun Fang, and Baoquan Li. "Virtual-Goal-Guided RRT for Visual Servoing of Mobile Robots With FOV Constraint". IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2021, 1–11. http://dx.doi.org/10.1109/tsmc.2020.3044347.

34

Zhong, Shangkun, and Pakpong Chirarattananon. "Virtual Camera-based Visual Servoing for Rotorcraft using Monocular Camera and Gyroscopic Feedback". Journal of the Franklin Institute, August 2022. http://dx.doi.org/10.1016/j.jfranklin.2022.08.005.

35

Zhong, Shangkun, and Pakpong Chirarattananon. "Virtual Camera-based Visual Servoing for Rotorcraft using Monocular Camera and Gyroscopic Feedback". Journal of the Franklin Institute, September 2022. http://dx.doi.org/10.1016/j.jfranklin.2022.08.045.

37

Liyanage, Migara H., and Nicholas Krouglicof. "An Embedded System for a High-Speed Manipulator With Single Time Scale Visual Servoing". Journal of Dynamic Systems, Measurement, and Control 139, no. 7 (10 May 2017). http://dx.doi.org/10.1115/1.4035740.

Abstract:
This study presents the development of an embedded system for controlling a high-speed robotic manipulator. Three types of controllers are considered: hardware proportional-derivative (PD), software PD, and single-time-scale visual servoing. Field-programmable gate array (FPGA) technology was used to implement the embedded system for faster execution speeds and parallelism. It comprises dedicated hardware and software modules for obtaining sensor feedback, estimating the control signal, and providing the control signal to the servovalves. A NIOS II soft processor system was configured in the FPGA to implement functions that are computationally expensive and difficult to realize in hardware. Quadrature decoding, serial peripheral interface (SPI) input and output, and, in some cases, control-signal estimation were carried out by the dedicated hardware modules. The experiments show that the proposed controller achieved satisfactory control of the end-effector position. It performed single-time-scale visual servoing with control-signal updates at 330 Hz to control the end-effector trajectory at speeds of up to 0.8 m/s. The FPGA technology also provided a compact single-chip implementation of the controller.
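The discrete PD law behind both the hardware and software controllers described above can be sketched as follows. This is a generic illustration, not the paper's FPGA design; only the 330 Hz update rate comes from the abstract, and the gain values are placeholders.

```python
class DiscretePD:
    """Discrete-time PD controller:
    u[k] = Kp * e[k] + Kd * (e[k] - e[k-1]) / dt."""

    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return self.kp * error + self.kd * derivative

# 330 Hz control-signal update rate reported in the study; gains illustrative.
pd = DiscretePD(kp=2.0, kd=0.05, dt=1 / 330)
```

In the FPGA implementation, this arithmetic runs in dedicated logic (or on the soft processor), which is what allows the vision loop and the servo loop to share a single time scale.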
38

Shi, Lintao, Baoquan Li and Wuxi Shi. "Vision-based UAV adaptive tracking control for moving targets with velocity observation". Transactions of the Institute of Measurement and Control, 12 February 2024. http://dx.doi.org/10.1177/01423312241228886.

Abstract:
An adaptive image-based visual servoing (IBVS) controller is designed for a quadrotor unmanned aerial vehicle (UAV) to achieve robust tracking of moving targets under the underactuation and tight coupling constraints of UAV kinematics. Specifically, image features are selected from the perspective image moments of a planar target to obtain virtual feature dynamics with respect to the UAV kinematics and dynamics. By constructing an auxiliary variable, a translational-velocity observer for the moving target is built from the virtual image features. An IBVS tracking controller is then designed, without the target's geometric information, by combining the UAV and visual feature dynamics. The designed controller and observer make the UAV robustly reach the desired height and track the moving target despite uncertainty in the target's movement. The controller achieves asymptotic convergence, and the target velocity is observed, according to Lyapunov stability analysis. Simulation and experimental results show that the proposed method delivers smoother and more accurate motion tracking and target-velocity prediction under system uncertainty.
39

"A Study on Robot OLP Compensation Based on Image Based Visual Servoing in the Virtual Environment". Journal of Control, Automation and Systems Engineering 12, no. 3 (1 March 2006): 248–54. http://dx.doi.org/10.5302/j.icros.2006.12.3.248.

40

He, Wei, and Liang Yuan. "Fixed-time controller of visual quadrotor for tracking a moving target". Journal of Vibration and Control, 27 September 2023. http://dx.doi.org/10.1177/10775463231200914.

Abstract:
To eliminate the dependence of finite-time control on the initial state of the system in image-based visual servoing (IBVS) of a quadrotor UAV (QUAV) tracking a moving target, we propose a control scheme based on fixed-time stability. We select the image moments on the virtual camera plane as features and, based on them, establish an image-moment dynamics model that includes the target's motion parameters. We then use the backstepping method to design a fixed-time controller for the system. The controller design consists of two main parts. First, we design a fixed-time-stable linear-velocity observer to address the QUAV's unmeasurable linear velocity and the unknown depth of the monocular camera. Then, using a high-order tracking differentiator to estimate the target's linear velocity as a feedforward term, combined with the QUAV linear velocity estimated by the fixed-time observer, we design a fixed-time controller for the system via backstepping. We prove the fixed-time stability of the controller using Lyapunov theory. The effectiveness and robustness of the proposed method are demonstrated by numerical simulation, and comparative simulations show that the method guarantees the system's convergence rate with high control accuracy.
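As background, the fixed-time stability property invoked here is commonly established through a Lyapunov inequality of the following standard (Polyakov-type) form; this is the textbook condition, not the paper's specific proof:

```latex
% Fixed-time Lyapunov condition:
\dot V(x) \le -\alpha V^{p}(x) - \beta V^{q}(x),
\qquad \alpha,\beta > 0,\quad 0 < p < 1 < q,
% which bounds the settling time independently of the initial state:
T(x_0) \le \frac{1}{\alpha\,(1-p)} + \frac{1}{\beta\,(q-1)}.
```

The uniform settling-time bound, independent of \(x_0\), is exactly what removes the finite-time scheme's dependence on the initial state that the abstract mentions.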
