Journal articles on the topic "Camera guidance for robot"

Consult the top 50 journal articles for your research on the topic "Camera guidance for robot."

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and a bibliographic reference for the chosen source will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Sai, Hesin, and Yoshikuni Okawa. "Structured Sign for Guidance of Mobile Robot." Journal of Robotics and Mechatronics 3, no. 5 (October 20, 1991): 379–86. http://dx.doi.org/10.20965/jrm.1991.p0379.

Annotation:
As part of a guidance system for mobile robots operating on a wide, flat floor, such as an ordinary factory or a gymnasium, we have proposed a special-purpose sign. It consists of a cylinder with four slits and a fluorescent light placed on the axis of the cylinder. Two of the slits are parallel to each other, and the other two are slanted. A robot obtains an image of the sign with a TV camera. After thresholding, we have four bright sets of pixels corresponding to the four slits of the cylinder. By measuring the relative distances between these four points, the distance and the angle to the direction of the sign can be computed using simple geometrical equations. Using a personal computer with image processing capability, we have investigated the accuracy of the proposed position identification method and compared the experimental results against a theoretical analysis of the measurement error. The data show good agreement between the analysis and the experiments. Finally, we have built a mobile robot, which has three microprocessors and a TV camera, and performed several control experiments for trajectory following.
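The detection step described above maps naturally onto standard OpenCV primitives. The sketch below thresholds a frame, keeps the four largest bright components as the slit images, and estimates range with a plain pinhole model; the slit spacing and focal length are hypothetical stand-ins, and the paper's bearing computation from the slanted slits is not reproduced.

```python
import cv2
import numpy as np

def locate_slit_centroids(gray, thresh=200):
    """Threshold the camera image and return the centroids of the four
    bright pixel sets corresponding to the cylinder's slits."""
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Skip background label 0; keep the four largest components.
    order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1] + 1
    pts = centroids[order[:4]]
    return pts[np.argsort(pts[:, 0])]      # sorted left to right

def sign_distance(pts, slit_gap_m=0.10, focal_px=600.0):
    """Pinhole range estimate from the pixel separation of the two outer
    (parallel) slits; slit_gap_m and focal_px are assumed values. The
    paper also recovers the bearing from the slanted slits' offsets,
    which is omitted here."""
    d_px = np.linalg.norm(pts[-1] - pts[0])
    return focal_px * slit_gap_m / d_px
```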
2

Yang, Long, and Nan Feng Xiao. "Robot Stereo Vision Guidance System Based on Attention Mechanism." Applied Mechanics and Materials 385-386 (August 2013): 708–11. http://dx.doi.org/10.4028/www.scientific.net/amm.385-386.708.

Annotation:
An attention mechanism is added to a traditional robot stereo vision system, so that candidate workpiece positions are obtained quickly from a saliency image, greatly accelerating the computation. First, stereo camera calibration is performed to obtain the camera intrinsic and extrinsic matrices. These parameter matrices are then used to rectify the newly captured images, from which a disparity map is computed with the OpenCV library; in parallel, a saliency image is computed with the Itti algorithm. The workpiece's spatial pose in the left camera coordinate frame is obtained by the triangulation measurement principle, and after a series of coordinate transformations its pose in world coordinates follows. With the robot's inverse kinematics, the joint rotation angles are computed to drive the robot to work. Finally, experimental results show the effectiveness of the method.
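Because the pipeline is explicitly built on the OpenCV library, its geometric core (rectification from calibration results, block-matching disparity, reprojection to 3D) can be sketched as follows. The calibration matrices, baseline, and input frames are placeholders; a real system would take them from a prior cv2.stereoCalibrate run and live cameras. The saliency stage (Itti algorithm) and the inverse kinematics are omitted.

```python
import cv2
import numpy as np

# Placeholder calibration results (intrinsics K, distortion d,
# extrinsics R, T); real values come from cv2.stereoCalibrate.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
d = np.zeros(5)
R, T = np.eye(3), np.array([-0.06, 0.0, 0.0])   # assumed 6 cm baseline
size = (640, 480)

# 1. Rectification maps from the calibration parameters.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, d, K, d, size, R, T)
map_l = cv2.initUndistortRectifyMap(K, d, R1, P1, size, cv2.CV_32FC1)
map_r = cv2.initUndistortRectifyMap(K, d, R2, P2, size, cv2.CV_32FC1)

# 2. Rectify a newly captured pair (random stand-in frames here).
rng = np.random.default_rng(0)
frame_l = rng.integers(0, 255, (480, 640), dtype=np.uint8)
frame_r = rng.integers(0, 255, (480, 640), dtype=np.uint8)
rect_l = cv2.remap(frame_l, *map_l, cv2.INTER_LINEAR)
rect_r = cv2.remap(frame_r, *map_r, cv2.INTER_LINEAR)

# 3. Disparity map, then triangulation to 3D in the left-camera frame.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disp = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0
xyz = cv2.reprojectImageTo3D(disp, Q)
```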
3

Golkowski, Alexander Julian, Marcus Handte, Peter Roch, and Pedro J. Marrón. "An Experimental Analysis of the Effects of Different Hardware Setups on Stereo Camera Systems." International Journal of Semantic Computing 15, no. 03 (September 2021): 337–57. http://dx.doi.org/10.1142/s1793351x21400080.

Annotation:
For many application areas such as autonomous navigation, the ability to accurately perceive the environment is essential. For this purpose, a wide variety of well-researched sensor systems are available that can be used to detect obstacles or navigation targets. Stereo cameras have emerged as a very versatile sensing technology in this regard due to their low hardware cost and high fidelity. Consequently, much work has been done to integrate them into mobile robots. However, the existing literature focuses on presenting the concepts and algorithms used to implement the desired robot functions on top of a given camera setup. As a result, the rationale and impact of choosing this camera setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is not much general guidance beyond isolated setups that worked for a specific robot. To close the gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. To do this, we present the results of an experimental analysis in which we use a given software setup to estimate the distance to an object while systematically changing the camera setup. Thereby, we vary the three main parameters of the physical camera setup, namely the angle and distance between the cameras as well as the field of view and a rather soft parameter, the resolution. Based on the results, we derive several guidelines on how to choose the parameters for an application.
4

Blais, François, Marc Rioux, and Jacques Domey. "Compact three-dimensional camera for robot and vehicle guidance." Optics and Lasers in Engineering 10, no. 3-4 (January 1989): 227–39. http://dx.doi.org/10.1016/0143-8166(89)90039-0.

5

Imasato, Akimitsu, and Noriaki Maru. "Guidance and Control of Nursing Care Robot Using Gaze Point Detector and Linear Visual Servoing." International Journal of Automation Technology 5, no. 3 (May 5, 2011): 452–57. http://dx.doi.org/10.20965/ijat.2011.p0452.

Annotation:
The gaze guidance and control we propose for a nursing robot uses a gaze point detector (GPD) and linear visual servoing (LVS). The robot captures stereo camera images, presents them to the user via a head-mounted display (HMD), calculates the user's gaze point tracked by the camera, and moves toward that gaze point using LVS. Since, in the proposal, persons requiring nursing share the robot's field of view via the GPD, the closer they get to the target, the more accurate the control becomes. The GPD, worn on the user's head, consists of an HMD and a CCD camera.
6

Yang, Chun Hui, and Fu Dong Wang. "Trajectory Recognition and Navigation Control in the Mobile Robot." Key Engineering Materials 464 (January 2011): 11–14. http://dx.doi.org/10.4028/www.scientific.net/kem.464.11.

Annotation:
Fast and accurate acquisition of navigation information is the key prerequisite for robot guidance. In this paper, a robot trajectory guidance system composed of a camera, a Digital Signal Controller, and a mobile platform driven by stepper motors is presented. First, the JPEG (Joint Photographic Experts Group) image taken by the camera is decoded into the corresponding pixel image, which is then converted to a binary image by thresholding. A fast line extraction algorithm is presented based on the Column Elementary Line Segment method. The trajectory direction deviation and distance deviation parameters are then calculated. In this way the robot is controlled to follow the given track accurately at higher speed.
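A minimal sketch of the deviation computation, assuming the frame has already been binarized: take the track's centre column in each image row (a simplified stand-in for the Column Elementary Line Segment method), fit a line, and read off the direction and distance deviations.

```python
import numpy as np

def track_deviation(binary):
    """binary: 2D array, nonzero where the guide track is visible.
    Returns (heading_deg, lateral_px): the track's angle relative to
    the image vertical and its offset from the image centre column
    at the bottom row (nearest to the robot)."""
    rows, cols = np.nonzero(binary)
    r_u = np.unique(rows)
    c_mean = np.array([cols[rows == r].mean() for r in r_u])
    a, b = np.polyfit(r_u, c_mean, 1)      # col ~ a * row + b
    heading = np.degrees(np.arctan(a))     # direction deviation
    bottom = binary.shape[0] - 1
    lateral = (a * bottom + b) - binary.shape[1] / 2.0
    return heading, lateral
```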
7

Belmonte, Álvaro, José Ramón, Jorge Pomares, Gabriel Garcia, and Carlos Jara. "Optimal Image-Based Guidance of Mobile Manipulators using Direct Visual Servoing." Electronics 8, no. 4 (March 27, 2019): 374. http://dx.doi.org/10.3390/electronics8040374.

Annotation:
This paper presents a direct image-based controller to perform the guidance of a mobile manipulator. An eye-in-hand camera is employed to guide a differential-drive mobile platform fitted with a seven-degree-of-freedom robot arm. The presented approach is based on an optimal control framework and is employed to control mobile manipulators during the tracking of image trajectories, taking robot dynamics into account. The direct approach allows both the manipulator and base dynamics to be considered. The proposed image-based controllers optimize the motor signals sent to the mobile manipulator during the tracking of image trajectories by minimizing the control force and torque. As the results show, the proposed direct visual servoing system uses the eye-in-hand camera images to control both the base platform and the robot arm concurrently. The optimal framework allows different visual controllers with different dynamical behaviors to be derived for the tracking of image trajectories.
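For background, the classical kinematic image-based visual servoing law that such controllers generalize can be stated compactly. The sketch below is the textbook rule v = -λ L⁺ (s - s*) with the standard point-feature interaction matrix, not the paper's optimal, dynamics-aware controller.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point at depth Z
    (standard form from the visual servoing literature)."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
        [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Kinematic IBVS step: camera velocity v = -lam * pinv(L) @ (s - s*).
    The paper's controllers additionally account for base/arm dynamics
    and optimize motor signals; this is only the underlying baseline."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```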
8

Achour, K., and A. O. Djekoune. "Localization and guidance with an embarked camera on a mobile robot." Advanced Robotics 16, no. 1 (January 2002): 87–102. http://dx.doi.org/10.1163/156855302317413754.

9

Bazeille, Stephane, Emmanuel Battesti, and David Filliat. "A Light Visual Mapping and Navigation Framework for Low-Cost Robots." Journal of Intelligent Systems 24, no. 4 (December 1, 2015): 505–24. http://dx.doi.org/10.1515/jisys-2014-0116.

Annotation:
We address the problems of localization, mapping, and guidance for robots with limited computational resources by combining vision with the metrical information given by the robot odometry. We propose in this article a novel light and robust topometric simultaneous localization and mapping framework using appearance-based visual loop-closure detection enhanced with the odometry. The main advantage of this combination is that the odometry makes the loop-closure detection more accurate and reactive, while the loop-closure detection enables the long-term use of odometry for guidance by correcting the drift. The guidance approach is based on qualitative localization using vision and odometry, and is robust to visual sensor occlusions or changes in the scene. The resulting framework is incremental, real-time, and based on cheap sensors provided on many robots (a camera and odometry encoders). This approach is, moreover, particularly well suited for low-power robots as it is not dependent on the image processing frequency and latency, and thus it can be applied using remote processing. The algorithm has been validated on a Pioneer P3DX mobile robot in indoor environments, and its robustness is demonstrated experimentally for a large range of odometry noise levels.
10

Xue, Jin Lin, and Tony E. Grift. "Agricultural Robot Turning in the Headland of Corn Fields." Applied Mechanics and Materials 63-64 (June 2011): 780–84. http://dx.doi.org/10.4028/www.scientific.net/amm.63-64.780.

Annotation:
This article discusses the development of a variable field of view (FOV) camera to realize headland turning of an agricultural robot in corn fields. The variable FOV was implemented by changing the camera's viewing direction with two DC motors rotating separately in the vertical and horizontal planes. Headland turning is executed in six steps: end-of-row detection and guidance, driving blind for a distance, a first 90˚ turn, position calculation, backing control, and a second 90˚ turn. Mathematical morphology operations were chosen to segment crops, and fuzzy logic control was applied to guide the robot. Tests with three repetitions were conducted to perform the headland turning. A maximum error of 17.4 mm when using the lateral view and good headland turning operation were observed. The variable FOV successfully enabled headland turning of the agricultural robot in corn fields.
11

Villagran, Carlos R. Tercero, Seiichi Ikeda, Toshio Fukuda, Kosuke Sekiyama, Yuta Okada, Tomomi Uchiyama, Makoto Negoro, and Ikuo Takahashi. "Robot Manipulation and Guidance Using Magnetic Motion Capture Sensor and a Rule-Based Controller." Journal of Robotics and Mechatronics 20, no. 1 (February 20, 2008): 151–58. http://dx.doi.org/10.20965/jrm.2008.p0151.

Annotation:
Magnetic motion capture sensors (MMCS) are not commonly used for robot control due to the need for complex, resource-consuming calibration to correct the error introduced by the magnetic sensor. We propose avoiding such calibration by using a rule-based controller that only uses spatial coordinates from the magnetic sensor. This controller uses a sparse look-up table of spatial coordinates and actions conducted by the robot, and reacts to the presence of the sensor near reference points. The control method was applied to manipulate a robotic camera to track a catheter-shaped sensor inside silicone vessel models. A second evaluation was done guiding a mechanism to reconstruct catheter insertion in silicone models of major vasculature. The robotic camera tracked the catheter by reacting to the sensor within 10 mm of each reference point. The catheter insertion mechanism reconstructed the catheter trajectory by reacting to the sensor within 6 mm of each reference point. We found that the proposed method allowed robot control in a bounded space without having to correct for the magnetic tracker's output distortion.
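The controller itself is simple enough to sketch directly: a sparse table of reference coordinates and actions, with an action fired only when the tracked sensor comes within the reaction radius (10 mm in the camera experiment). The table entries and action names below are illustrative stand-ins, not the paper's values.

```python
import numpy as np

# Sparse look-up table: reference coordinate (metres) -> robot action.
# Entries are hypothetical examples of the paper's coordinate/action pairs.
LOOKUP = [
    (np.array([0.00, 0.00, 0.00]), "pan_left"),
    (np.array([0.05, 0.02, 0.00]), "tilt_down"),
    (np.array([0.09, 0.04, 0.01]), "advance"),
]

def rule_based_action(sensor_xyz, radius=0.010):
    """Fire the action of the nearest reference point if the magnetic
    sensor is within `radius` (10 mm) of it; otherwise do nothing."""
    dists = [np.linalg.norm(sensor_xyz - p) for p, _ in LOOKUP]
    i = int(np.argmin(dists))
    return LOOKUP[i][1] if dists[i] <= radius else None
```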
12

HAN, LONG, XINYU WU, YONGSHENG OU, YEN-LUN CHEN, CHUNJIE CHEN, and YANGSHENG XU. "HOUSEHOLD SERVICE ROBOT WITH CELLPHONE INTERFACE." International Journal of Information Acquisition 09, no. 02 (June 2013): 1350009. http://dx.doi.org/10.1142/s0219878913500095.

Annotation:
In this paper, an efficient and low-cost cellphone-commandable mobile manipulation system is described. Aimed at household and elderly care, this system can easily be commanded over a common cellphone network to grasp objects in a household environment, using several low-cost off-the-shelf devices. Unlike visual servoing approaches built on expensive high-quality vision systems, a household service robot cannot afford such hardware, so low-cost devices are essential; however, it is extremely challenging to use such vision for precise localization and motion control. To tackle this challenge, we developed a real-time vision system and a reliable grasping algorithm combining machine vision, robot kinematics, and motor control. After the target is captured by the arm camera, the arm camera keeps tracking the target while the arm keeps stretching until the end effector reaches the target. If the target is not captured by the arm camera, the arm moves to help the arm camera capture the target under the guidance of the head camera. This algorithm is implemented on two robot systems: one with a fixed base and another with a mobile base. The results demonstrate the feasibility and efficiency of the algorithm and system, and the study is of significance for developing service robots for modern household environments.
13

Lei, Wentai, Mengdi Xu, Feifei Hou, Wensi Jiang, Chiyu Wang, Ye Zhao, Tiankun Xu, Yan Li, Yumei Zhao, and Wenjun Li. "Calibration Venus: An Interactive Camera Calibration Method Based on Search Algorithm and Pose Decomposition." Electronics 9, no. 12 (December 17, 2020): 2170. http://dx.doi.org/10.3390/electronics9122170.

Annotation:
Cameras are widely used in many scenes such as robot positioning and unmanned driving, and camera calibration is a major task in this field. Interactive camera calibration methods based on a planar board are becoming popular due to their stability and ease of use. However, most methods choose suggested poses subjectively from a fixed pose dataset, which is error-prone and limited for different camera models. In addition, these methods do not provide clear guidelines on how to place the board in the specified pose. This paper proposes a new interactive calibration method, named 'Calibration Venus', comprising two main parts: pose search and pose decomposition. First, a pose search algorithm based on simulated annealing (SA) is proposed to select the optimal pose in the entire pose space. Second, an intuitive and easy-to-use user guidance method is designed to decompose the optimal pose into four sub-poses: a translation and one rotation about each of the X-, Y-, and Z-axes. The user can thus follow the guide step by step to accurately place the calibration board. Experimental results on simulated and real datasets show that the proposed method reduces the difficulty of calibration, improves its accuracy, and provides better guidance.
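The pose-search part can be sketched as a generic simulated-annealing loop; the scoring of candidate board poses is the paper's actual contribution and is left as a caller-supplied function here, so `score` and `neighbor` below are assumptions, not the authors' criteria.

```python
import math
import random

def anneal_pose(score, neighbor, pose0, t0=1.0, cooling=0.95, steps=500):
    """Generic simulated-annealing search over calibration-board poses.
    score(pose) returns a quality value (higher is better); neighbor(pose)
    proposes a nearby pose. Both are caller-supplied stand-ins for the
    paper's pose-evaluation criterion."""
    pose, best = pose0, pose0
    t = t0
    for _ in range(steps):
        cand = neighbor(pose)
        delta = score(cand) - score(pose)
        # Accept improvements always, worse poses with Boltzmann probability.
        if delta > 0 or random.random() < math.exp(delta / t):
            pose = cand
            if score(pose) > score(best):
                best = pose
        t *= cooling                        # geometric cooling schedule
    return best
```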
14

Jinlin, Xue. "Guidance of an agricultural robot with variable angle-of-view camera arrangement in cornfield." African Journal of Agricultural Research 9, no. 18 (May 5, 2014): 1378–85. http://dx.doi.org/10.5897/ajar2013.7670.

15

Sreenivasan, S. V., and K. J. Waldron. "A Drift-Free Navigation System for a Mobile Robot Operating on Unstructured Terrain." Journal of Mechanical Design 116, no. 3 (September 1, 1994): 894–900. http://dx.doi.org/10.1115/1.2919466.

Annotation:
The orientation and the angular rates of the body of a robotic vehicle are required for the guidance and control of the vehicle. In the current robotic systems these quantities are obtained by the use of inertial sensing systems. Inertial sensing systems involve drift errors which can be significant even after the vehicle has traversed only short distances on the terrain. A different approach is suggested here which guarantees accurate, drift-free sensing of the angular position and rates of the vehicle body. A camera system consisting of two cameras in fixed relationship to one another is made to continuously track two stationary objects (stars or the sun). The camera system is mounted on the vehicle body through an actuated three-degree-of-freedom joint. The angular positions and rates of these joints can be used to evaluate the angular positions and rates of the vehicle body. An estimate of the absolute position of the vehicle on the terrain can also be obtained from this sensing system. This can serve as the primary system for estimating the position of a vehicle on a planet, or as an inexpensive alternative/backup to a more accurate Global Positioning System (GPS) for estimating the position of a vehicle on earth.
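The underlying idea, recovering full attitude from two sighted, non-parallel reference directions, is what the classical TRIAD construction computes. The paper obtains this information from the angles of the actuated tracking joints, so the sketch below is an illustrative analogue rather than the authors' formulation.

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """Attitude from two observed unit directions (TRIAD construction).
    b1, b2: directions to the two stationary objects in the body frame;
    r1, r2: the same directions in the reference (inertial) frame.
    Returns the body-to-reference rotation matrix."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 /= np.linalg.norm(t2)
        return np.column_stack((t1, t2, np.cross(t1, t2)))
    return frame(r1, r2) @ frame(b1, b2).T
```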
16

Ravankar, Abhijeet, Ankit Ravankar, Yukinori Kobayashi, and Takanori Emaru. "Intelligent Robot Guidance in Fixed External Camera Network for Navigation in Crowded and Narrow Passages." Proceedings 1, no. 2 (November 15, 2016): 37. http://dx.doi.org/10.3390/ecsa-3-d008.

17

Ponnambalam, Vignesh Raja, Marianne Bakken, Richard J. D. Moore, Jon Glenn Omholt Gjevestad, and Pål Johan From. "Autonomous Crop Row Guidance Using Adaptive Multi-ROI in Strawberry Fields." Sensors 20, no. 18 (September 14, 2020): 5249. http://dx.doi.org/10.3390/s20185249.

Annotation:
Automated robotic platforms are an important part of precision agriculture solutions for sustainable food production. Agri-robots require robust and accurate guidance systems in order to navigate between crops and to and from their base station. Onboard sensors such as machine vision cameras offer a flexible guidance alternative to more expensive solutions for structured environments such as scanning lidar or RTK-GNSS. The main challenges for visual crop row guidance are the dramatic differences in appearance of crops between farms and throughout the season and the variations in crop spacing and contours of the crop rows. Here we present a visual guidance pipeline for an agri-robot operating in strawberry fields in Norway that is based on semantic segmentation with a convolution neural network (CNN) to segment input RGB images into crop and not-crop (i.e., drivable terrain) regions. To handle the uneven contours of crop rows in Norway’s hilly agricultural regions, we develop a new adaptive multi-ROI method for fitting trajectories to the drivable regions. We test our approach in open-loop trials with a real agri-robot operating in the field and show that our approach compares favourably to other traditional guidance approaches.
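The multi-ROI fitting step can be illustrated with a small sketch, assuming the CNN has already produced a binary drivable-terrain mask: split the mask into horizontal bands, take the drivable centroid of each band, and string the centroids into a trajectory. The paper's adaptive band sizing for uneven row contours is not reproduced.

```python
import numpy as np

def multi_roi_trajectory(drivable, n_rois=6):
    """Split a binary drivable-terrain mask into horizontal ROI bands,
    take the drivable centroid of each band, and return the resulting
    waypoints (row, col) from the bottom of the image to the top.
    A simplified stand-in for the paper's adaptive multi-ROI fitting."""
    h, _ = drivable.shape
    waypoints = []
    for band in np.array_split(np.arange(h)[::-1], n_rois):
        rows, cols = np.nonzero(drivable[band])
        if cols.size:                  # skip bands with no drivable pixels
            waypoints.append((band[rows].mean(), cols.mean()))
    return waypoints
```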
18

Kent, Ernest W., Thomas Wheatley, and Marilyn Nashman. "Real-time cooperative interaction between structured-light and reflectance ranging for robot guidance." Robotica 3, no. 1 (January 1985): 7–11. http://dx.doi.org/10.1017/s0263574700001417.

Annotation:
When applied to rapidly moving objects with complex trajectories, the information-rate limitation imposed by video-camera frame rates impairs the effectiveness of structured-light techniques in real-time robot servoing. To improve the performance of such systems, the use of fast infra-red proximity detectors to augment visual guidance in the final phase of target acquisition was explored. It was found that this approach was limited by the necessity of employing a different range/intensity calibration curve for the proximity detectors for every object and for every angle of approach to complex objects. Consideration of the physics of the detector process suggested that a single log-linear parametric family could describe all such calibration curves, and this was confirmed by experiment. From this result, a technique was devised for cooperative interaction between modalities, in which the vision sense provided on-the-fly determination of calibration parameters for the proximity detectors, for every approach to a target, before passing control of the system to the other modality. This technique provided a three hundred percent increase in useful manipulator velocity, and improved performance during the transition of control from one modality to the other.
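A log-linear family of the form I(r) = a·exp(−b·r) (one plausible reading of the result, assumed here) can be fitted on the fly from a handful of vision-ranged samples and then inverted to range the proximity detector:

```python
import numpy as np

def fit_log_linear(ranges, intensities):
    """Fit I(r) = a * exp(-b * r), i.e. log I = log a - b * r, from
    (range, intensity) samples supplied by the vision system during
    the approach. Returns the curve parameters (a, b)."""
    slope, intercept = np.polyfit(ranges, np.log(intensities), 1)
    return np.exp(intercept), -slope

def range_from_intensity(i, a, b):
    """Invert the freshly calibrated curve to range with the detector."""
    return -np.log(i / a) / b
```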
19

PRETLOVE, J. R. G., and G. A. PARKER. "THE SURREY ATTENTIVE ROBOT VISION SYSTEM." International Journal of Pattern Recognition and Artificial Intelligence 07, no. 01 (February 1993): 89–107. http://dx.doi.org/10.1142/s0218001493000066.

Annotation:
This paper presents the design and development of a real-time eye-in-hand stereo-vision system to aid robot guidance in a manufacturing environment. The stereo vision head comprises a novel camera arrangement with servo-vergence, focus, and aperture that continuously provides high-quality images to a dedicated image processing system and parallel processing array. The stereo head has four degrees of freedom but it relies on the robot end-effector for all remaining movement. This provides the robot with exploratory sensing abilities allowing it to undertake a wider variety of less constrained tasks. Unlike other stereo vision research heads, the overriding factor in the Surrey head has been a truly integrated engineering approach in an attempt to solve an extremely complex problem. The head is low cost, low weight, employs state-of-the-art motor technology, is highly controllable and occupies a small-sized envelope. Its intended applications include high-accuracy metrology, 3-D path following, object recognition and tracking, parts manipulation and component inspection for the manufacturing industry.
20

Hamdy ElGohary, Sherif, Yomna Sabah Mohamed, Mennatallah Hany Elkhodary, Omnya Ahmed, and Mennatallah Hesham. "Photodynamic Therapy Using Endoscopy Capsule Robot." Academic Journal of Life Sciences, no. 67 (September 10, 2020): 93–100. http://dx.doi.org/10.32861/ajls.67.93.100.

Annotation:
Among the photosensitizers used in the photodynamic therapy (PDT) technique for cancer treatment, methylene blue and glycoconjugated chlorins have been found to be the best for this purpose. In this paper, it is suggested to use an active wireless capsule endoscopy robot instead of the traditional endoscope. The capsule has many valuable features. It uses LEDs as the light source for PDT to kill colon cancer cells: the doctor can apply the LED light locally at the tumor, which was previously injected with the photosensitizers; the light activates the photosensitizers, and a photochemical reaction starts that kills the colon cancer cells. Light with an effective wavelength, power density, energy level, and controlled LED intensity is applied. An active-locomotion capsule with an electromagnetic actuation system can achieve 3-D locomotion and guidance within the digestive system. The paper also discusses how to manage the power required by all parts of the capsule: LEDs, camera, transceiver, and locomotion.
21

Li, Hang, Andrey V. Savkin, and Branka Vucetic. "Autonomous Area Exploration and Mapping in Underground Mine Environments by Unmanned Aerial Vehicles." Robotica 38, no. 3 (June 17, 2019): 442–56. http://dx.doi.org/10.1017/s0263574719000754.

Annotation:
In this paper, we propose a method of using an autonomous flying robot to explore an underground tunnel environment and build a 3D map. The robot model we use is an extension of a 2D non-holonomic robot. The measurements and sensors considered in the presented method are simple and valid in practical unmanned aerial vehicle (UAV) engineering. The proposed safe exploration algorithm belongs to a class of probabilistic area search, and its performance is analysed with a mathematical proof. Based on the algorithm, we also propose a sliding control law to apply it to a real quadcopter in experiments. In the presented experiment, we use a DJI Guidance sensing system and an Intel depth camera for localization, obstacle detection, and 3D environment information capture. Furthermore, simulations show that the algorithm can be implemented in sloping tunnels and with multiple UAVs.
22

Bayro-Corrochano, Eduardo. "Editorial." Robotica 26, no. 4 (July 2008): 415–16. http://dx.doi.org/10.1017/s0263574708004785.

Annotation:
Robotic sensing is a relatively new field of activity compared with the design and control of robot mechanisms. In both areas the role of geometry is natural and necessary for the development of devices, their control, and their use in challenging environments. At the very beginning, odometry, tactile, and touch sensors dominated robot sensing. More recently, due to the fall in the price of laser devices, they have become more attractive to the community. On the other hand, progress in photogrammetry, particularly during the nineties as n-view projective geometry matured, bootstrapped the use of computer vision as an extra powerful sensing technique for robot guidance. Cameras were used in monocular or stereoscopic fashion; catadioptric systems for omnidirectional vision, fish-eye cameras, and camera networks made the use of computer vision even more diverse. Researchers started to combine sensors for 2D and 3D sensing by fusing sensor data in a projective framework. Thanks to continuous progress in mechatronics, the low prices of fast computers, and the increasing accuracy of sensor systems, one can build a robot to perceive its surroundings, reconstruct, plan, and ultimately act intelligently. In these perception-action systems there is, of course, an urgent need for a geometric stochastic framework to deal with uncertainty in sensing, planning, and action in a robust manner. Here geometry can play a central role for representation and computing in higher dimensions, using projective geometry and differential geometry on Lie group manifolds with a pseudo-Euclidean metric. Let us review briefly the developments towards modern geometry that have often been overlooked by robotics researchers and practitioners.
23

Rivas-Blanco, Irene, Carlos Perez-del-Pulgar, Carmen López-Casado, Enrique Bauzano, and Víctor Muñoz. "Transferring Know-How for an Autonomous Camera Robotic Assistant." Electronics 8, no. 2 (February 18, 2019): 224. http://dx.doi.org/10.3390/electronics8020224.

Annotation:
Robotic platforms are taking their place in the operating room because they provide more stability and accuracy during surgery. Although most of these platforms are teleoperated, a lot of research is currently being carried out to design collaborative platforms. The objective is to reduce the surgeon's workload through the automation of secondary or auxiliary tasks, which would benefit both surgeons and patients by facilitating the surgery and reducing the operation time. One of the most important secondary tasks is endoscopic camera guidance, whose automation would allow the surgeon to concentrate on handling the surgical instruments. This paper proposes a novel autonomous camera guidance approach for laparoscopic surgery. It is based on learning from demonstration (LfD), which has demonstrated its feasibility for transferring knowledge from humans to robots by means of multiple expert demonstrations. The proposed approach has been validated using an experimental surgical robotic platform to perform peg transfer, a typical task used to train human skills in laparoscopic surgery. The results show that camera guidance can easily be trained by a surgeon for a particular task and later reproduced autonomously in a way similar to that of a human. Therefore, the results demonstrate that learning from demonstration is a suitable method for autonomous camera guidance in collaborative surgical robotic platforms.
24

Ferland, François, Aurélien Reveleau, Francis Leconte, Dominic Létourneau, and François Michaud. "Coordination mechanism for integrated design of Human-Robot Interaction scenarios." Paladyn, Journal of Behavioral Robotics 8, no. 1 (December 20, 2017): 100–111. http://dx.doi.org/10.1515/pjbr-2017-0006.

Annotation:
The ultimate long-term goal in Human-Robot Interaction (HRI) is to design robots that can act as a natural extension to humans. This requires the design of robot control architectures that provide structure for the integration of the necessary components into HRI. This paper describes how HBBA, a Hybrid Behavior-Based Architecture, can be used as a unifying framework for the integrated design of HRI scenarios. More specifically, we focus here on HBBA's generic coordination mechanism for behavior-producing modules, which makes it possible to address a wide range of cognitive capabilities, from assisted teleoperation to selective attention and episodic memory. Using IRL-1, a humanoid robot equipped with compliant actuators for motion and manipulation, proximity sensors, cameras, and a microphone array, three interaction scenarios are implemented: multi-modal teleoperation with physical guidance interaction, fetching-and-delivering, and tour-guiding.
25

Choi, Keun Ha, Sang Kwon Han, Kwang-Ho Park, Kyung-Soo Kim, and Soohyun Kim. "Guidance Line Extraction Algorithm using Central Region Data of Crop for Vision Camera based Autonomous Robot in Paddy Field." Journal of Korea Robotics Society 11, no. 1 (March 30, 2016): 1–8. http://dx.doi.org/10.7746/jkros.2016.11.1.001.

26

Vaida, C., D. Pisla, and N. Plitea. "Graphical simulation of a new concept of low sized surgical parallel robot for camera guidance in minimally invasive surgery." PAMM 7, no. 1 (December 2007): 2090005–6. http://dx.doi.org/10.1002/pamm.200700132.

27

Li, Xingdong, Hewei Gao, Fusheng Zha, Jian Li, Yangwei Wang, Yanling Guo, and Xin Wang. "Learning the Cost Function for Foothold Selection in a Quadruped Robot." Sensors 19, no. 6 (March 14, 2019): 1292. http://dx.doi.org/10.3390/s19061292.

Annotation:
This paper focuses on designing a cost function for selecting footholds for a physical quadruped robot walking on rough terrain. The quadruped robot is modeled with Denavit–Hartenberg (DH) parameters, and a default foothold is defined based on the model. A Time-of-Flight (TOF) camera is used to perceive terrain information and construct a 2.5D elevation map, on which terrain features are detected. The cost function is defined as the weighted sum of several elements, including terrain features and features of the relative pose between the default foothold and other candidates. It is nearly impossible to hand-code the weight vector of the function, so the weights are learned using Support Vector Machine (SVM) techniques, with a training data set generated from the 2.5D elevation map of a real terrain under the guidance of experts. Four candidate footholds around the default foothold are randomly sampled, and the expert ranks them, rotating and scaling the view to see the terrain clearly. Finally, the learned cost function is used to select a suitable foothold and drive the quadruped robot to walk autonomously across rough terrain with wooden steps. Compared to the approach with the original standard static gait, the proposed cost function shows better performance.
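A minimal sketch of the learning step with scikit-learn, assuming binary expert labels over candidate-foothold feature vectors. The paper's expert supplies rankings of four sampled candidates, which is simplified here to a binary preference; the feature count, weights, and data are random stand-ins.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Each row: features of a candidate foothold relative to the default one
# (terrain features plus relative-pose features); labels encode whether
# the expert preferred the candidate. All values are synthetic stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))            # 6 hypothetical features
true_w = np.array([0.8, -0.5, 0.3, 0.1, -0.2, 0.4])
y = (X @ true_w > 0).astype(int)

svm = LinearSVC(C=1.0).fit(X, y)         # learn the weight vector
w = svm.coef_.ravel()                    # weights of the cost function

def foothold_cost(features):
    """Weighted-sum cost; a higher SVM score means a better foothold."""
    return -float(features @ w)

candidates = rng.normal(size=(4, 6))     # four sampled candidates
best = candidates[np.argmin([foothold_cost(f) for f in candidates])]
```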
28

Bárdosi, Zoltán, Christian Plattner, Yusuf Özbek, Thomas Hofmann, Srdjan Milosavljevic, Volker Schartinger, and Wolfgang Freysinger. "CIGuide: in situ augmented reality laser guidance." International Journal of Computer Assisted Radiology and Surgery 15, no. 1 (September 11, 2019): 49–57. http://dx.doi.org/10.1007/s11548-019-02066-1.

Annotation:
Purpose: A robotic intraoperative laser guidance system with hybrid optic-magnetic tracking for skull base surgery is presented. It provides in situ augmented reality guidance for microscopic interventions at the lateral skull base with minimal mental and workload overhead, allowing surgeons to work without a monitor or dedicated pointing tools. Methods: Three components were developed: a registration tool (Rhinospider), a hybrid magneto-optic-tracked robotic feedback control scheme, and a modified robotic end-effector. Rhinospider optimizes registration of patient and preoperative CT data by excluding user errors in fiducial localization with magnetic tracking. The hybrid controller uses an integrated microscope HD camera for robotic control, with a guidance beam shining on a dual-plate setup to avoid magnetic field distortions. A robotic needle insertion platform (iSYS Medizintechnik GmbH, Austria) was modified to position a laser beam with high precision in a surgical scene compatible with microscopic surgery. Results: System accuracy was evaluated quantitatively at various target positions on a phantom and found to be 1.2 mm ± 0.5 mm, with errors primarily due to magnetic tracking. This application accuracy seems suitable for most surgical procedures at the lateral skull base. The system was also evaluated during a mastoidectomy of an anatomic head specimen and was judged useful by the surgeon. Conclusion: A hybrid robotic laser guidance system with direct visual feedback is proposed for navigated drilling and intraoperative structure localization. The system provides visual cues directly on/in the patient anatomy, reducing the standard limitations of AR visualizations such as depth perception. The custom-built end-effector for the iSYS robot is transparent to the use of surgical microscopes and compatible with magnetic tracking. The cadaver experiment showed that guidance was accurate and that the end-effector is unobtrusive. This laser guidance has the potential to aid the surgeon in finding the optimal mastoidectomy trajectory in more difficult interventions.
29

Liu, Yi, Ming Cong, Hang Dong, and Dong Liu. "Human skill integrated motion planning of assembly manipulation for 6R industrial robot." Industrial Robot: the international journal of robotics research and application 46, no. 1 (January 21, 2019): 171–80. http://dx.doi.org/10.1108/ir-09-2018-0189.

Annotation:
Purpose: This paper proposes a new method based on three-dimensional (3D) vision technologies and human-skill-integrated deep learning to solve assembly positioning tasks such as peg-in-hole. Design/methodology/approach: A hybrid camera configuration was used to provide global and local views. Eye-in-hand mode guided the peg into contact with the hole plate using 3D vision in the global view; when the peg was in contact with the workpiece surface, eye-to-hand mode provided the local view to accomplish peg-hole positioning based on a trained CNN. Findings: The assembly positioning experiments proved that the proposed method successfully distinguished the target hole from other holes of the same size using the CNN. The robot planned its motion according to the depth images and the human-skill guideline, and the final positioning precision was good enough for the robot to carry out force-controlled assembly. Practical implications: The developed framework can have an important impact on the robotic assembly positioning process; combined with existing force-guided assembly technology, it forms a complete autonomous assembly solution. Originality/value: This paper proposes a new approach to robotic assembly positioning based on 3D vision technologies and human-skill-integrated deep learning. A dual-camera swapping mode provides visual feedback for the entire assembly motion planning process. The proposed workpiece positioning method offers effective disturbance rejection, autonomous motion planning, and increased overall performance with depth-image feedback, while the peg-hole positioning method with integrated human skill avoids perceptual aliasing of the target and supports successive motion decisions for the robotic assembly manipulation.
30

Zhang, Le, Rui Li, Zhiqiang Li, Yuyao Meng, Jinxin Liang, Leiyang Fu, Xiu Jin, and Shaowen Li. "A Quadratic Traversal Algorithm of Shortest Weeding Path Planning for Agricultural Mobile Robots in Cornfield." Journal of Robotics 2021 (February 19, 2021): 1–19. http://dx.doi.org/10.1155/2021/6633139.

Annotation:
To improve weeding efficiency and protect farm crops, accurate and fast weed-removal guidance for agricultural mobile robots is a most important topic. Motivated by this, we propose a time-efficient quadratic traversal algorithm for guiding the removal of weeds around recognized corn plants in the field. To recognize the weeds and corn, a Faster R-CNN neural network is implemented for real-time recognition. Then, an excess-green (EXG) feature is used for grayscale image processing, and an improved OTSU (IOTSU) algorithm is proposed to accurately generate and optimize the binary image. Compared to the traditional OTSU algorithm, the improved version effectively shortens the search and reduces the processing time by compressing the searched grayscale range. Finally, based on the contours of the target plants and the Canny edge detection operator, the shortest weeding path can be calculated by the proposed quadratic traversal algorithm. The experimental results showed that our search success rate reaches 90.0% on the test data, which ensures accurate selection of the target 2D coordinates in the pixel coordinate system. Transforming the target 2D point in the pixel coordinate system into a 3D point in the camera coordinate system, together with a depth camera, enables multi-target depth ranging and planning of an optimized weeding path.
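The greyscale and binarization stages can be sketched with standard tools, assuming EXG is the usual excess-green index 2G − R − B; plain cv2 Otsu thresholding stands in for the paper's IOTSU variant, whose compressed grayscale search range is not reproduced here.

```python
import cv2
import numpy as np

def plant_mask(bgr):
    """Excess-green (ExG = 2G - R - B) greyscale, Otsu binarization, and
    Canny edges on the binary image. Plain cv2 Otsu replaces the paper's
    improved (search-range-compressed) IOTSU variant."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2 * g - r - b
    exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(exg, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)   # plant contours for path planning
    return binary, edges
```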
31

Fernández, Ignacio, Manuel Mazo, José L. Lázaro, Daniel Pizarro, Enrique Santiso, Pedro Martín, and Cristina Losada. "Guidance of a mobile robot using an array of static cameras located in the environment." Autonomous Robots 23, no. 4 (September 8, 2007): 305–24. http://dx.doi.org/10.1007/s10514-007-9049-4.

32

Golparvar, Ata Jedari, and Murat Kaya Yapici. "Toward graphene textiles in wearable eye tracking systems for human–machine interaction." Beilstein Journal of Nanotechnology 12 (February 11, 2021): 180–89. http://dx.doi.org/10.3762/bjnano.12.14.

Annotation:
The study of eye movements and the measurement of the resulting biopotential, referred to as electrooculography (EOG), may find increasing use in applications within the domain of activity recognition, context awareness, mobile human–computer and human–machine interaction (HCI/HMI), and personal medical devices, provided that seamless sensing of eye activity and processing thereof are achieved by a truly wearable, low-cost, and accessible technology. The present study demonstrates an alternative to bulky and expensive camera-based eye tracking systems and reports the development of a graphene textile-based personal assistive device for the first time. This self-contained wearable prototype comprises a headband with soft graphene textile electrodes that overcome the limitations of conventional "wet" electrodes, along with miniaturized, portable readout electronics with real-time signal processing capability that can stream data to a remote device over Bluetooth. The potential of graphene textiles in wearable eye tracking and eye-operated remote object interaction is demonstrated by controlling a mouse cursor on screen for typing with a virtual keyboard and by navigating a four-wheeled robot through a maze, all using five different eye motions acquired from a single-channel EOG. Typing speeds of up to six characters per minute without prediction algorithms and guidance of the robot through a maze with four 180° turns were achieved, with pattern detection accuracies of 100% and 98%, respectively.
33

Wang, Xiaoguang, Yunbo Hu, and Qi Lin. "Workspace analysis and verification of cable-driven parallel mechanism for wind tunnel test." Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 231, no. 6 (May 12, 2016): 1012–21. http://dx.doi.org/10.1177/0954410016646601.

Annotation:
Cable-driven parallel mechanisms are a special kind of parallel robot in which traditional rigid links are replaced by actuated cables. This provides a new suspension method for wind tunnel tests, in which an aircraft model is driven by a number of parallel cables to achieve 6-DOF motion. The workspace of such a cable robot is limited by geometrical and unilateral force constraints, and its investigation is important for applications requiring a large flight space. This paper focuses on the workspace analysis and verification of a redundantly constrained 6-DOF cable-driven parallel suspension system. Based on the system's motion and dynamic equations, the geometrical interference conditions (intersection between two cables, or between a cable and the aircraft) and cable tension constraints are constructed and analyzed. A hyperplane vector projection strategy is used to solve the aircraft's orientation and position workspace. Moreover, the software ADAMS is used to check the workspace, and experiments are conducted on a prototype that uses a camera to monitor the actual motion space. In addition, the system is designed with a built-in six-component balance to measure the aerodynamic force. The simulation and test results show good consistency, which means that the constraint conditions and workspace solution strategy are valid and can provide guidance for applying cable-driven parallel suspension systems in wind tunnel tests.
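The tension-feasibility side of such a workspace check reduces, pose by pose, to asking whether bounded positive cable tensions can balance the applied wrench, which is a small linear program. The structure matrix, wrench, and tension bounds below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

def pose_feasible(J, wrench, t_min=10.0, t_max=500.0):
    """Check one aircraft-model pose: does a tension vector t with
    t_min <= t <= t_max satisfy J @ t = wrench? J (6 x n_cables) maps
    cable tensions to the net wrench at the model; the bounds are
    illustrative values in newtons."""
    n = J.shape[1]
    res = linprog(c=np.zeros(n), A_eq=J, b_eq=wrench,
                  bounds=[(t_min, t_max)] * n, method="highs")
    return res.success

# Sweeping pose_feasible over sampled positions/orientations (with the
# geometric-interference test) traces out the usable workspace.
```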
34

Zhang, Jincheng, Prashant Ganesh, Kyle Volle, Andrew Willis, and Kevin Brink. "Low-Bandwidth and Compute-Bound RGB-D Planar Semantic SLAM." Sensors 21, no. 16 (August 10, 2021): 5400. http://dx.doi.org/10.3390/s21165400.

Annotation:
Visual simultaneous localization and mapping (SLAM) using RGB-D cameras has become a necessary capability for intelligent mobile robots. However, when using point-cloud map representations, as most RGB-D SLAM systems do, limitations in onboard compute resources and especially communication bandwidth can significantly limit the quantity of data processed and shared. This article proposes techniques that help address these challenges by mapping point clouds to parametric models in order to reduce the computation and bandwidth load on agents. This contribution is coupled with a convolutional neural network (CNN) that extracts semantic information. Semantics provide guidance in object modeling, which can reduce the geometric complexity of the environment. Pairing a parametric model with a semantic label allows agents to share their knowledge of the world with much less complexity, opening a door for multi-agent systems to perform complex tasking and for human-robot cooperation. This article takes a first step towards a generalized parametric model by limiting the geometric primitives to planar surfaces and providing semantic labels where appropriate. Two novel compression algorithms for depth data and a method to independently fit planes to RGB-D data are provided, so that plane data can be used for real-time odometry estimation and mapping. Additionally, we extend maps with semantic information predicted from sparse geometries (planes) by a CNN. In experiments, the advantages of our approach in terms of computational and bandwidth savings are demonstrated and compared with other state-of-the-art SLAM systems.
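The plane-fitting step that replaces raw point clouds with a few parameters can be sketched as an ordinary SVD least-squares fit per point cluster; the paper's own fitting and compression algorithms are not reproduced.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 point cluster via SVD.
    Returns (unit normal n, offset d) with n . p + d = 0 -- four
    parameters replacing the whole point set, which is where the
    bandwidth saving comes from."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # direction of least variance
    return normal, -normal @ centroid
```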
35

Nebylov, A. V., V. V. Perliouk, and T. S. Leontieva. "Investigation of the technology of mutual navigation and orientation of small space vehicles flying in formation." VESTNIK of Samara University. Aerospace and Mechanical Engineering 18, no. 1 (April 16, 2019): 88–93. http://dx.doi.org/10.18287/2541-7533-2019-18-1-88-93.

Annotation:
The paper addresses the problem of supporting the flight of a group of small spacecraft (microsatellites), taking into account the small mutual distances between them. The purpose of the specified orbital constellation is to create a radio communication system for controlling remote objects, such as unmanned aerial vehicles and ground robots located in hard-to-reach areas of the Earth, from a central ground station. To reduce the cost of microsatellite design, it was decided to rigidly fix the receiving and transmitting antennas on the satellite bodies and to use the spatial orientation of the entire spacecraft for antenna pointing. This seriously complicates the navigation and orientation of microsatellites in a formation and requires a new method for determining the orientation of a single microsatellite. The essence of the method is to process, with computer vision techniques, the image obtained by a video camera mounted on a nearby microsatellite. Results of mathematical simulation, as well as of a full-scale bench experiment, confirming the efficiency of the proposed method are presented.
36

Hunter, Mark, Kevin Kremer, and Kristen Wymore. "A first report of robotic-assisted sentinel lymph node mapping and inguinal lymph node dissection, using near-infared fluorescence, in vulvar cancer." Journal of Clinical Oncology 35, no. 15_suppl (May 20, 2017): e17027-e17027. http://dx.doi.org/10.1200/jco.2017.35.15_suppl.e17027.

Annotation:
Background: Vulvar carcinoma is a rare gynecologic malignancy whose surgical techniques have evolved considerably over the last several decades. However, the morbidity associated with inguinal lymph node dissection remains significant: the majority of patients undergoing full lymphadenectomy will have some complication, with wound breakdown being the most common. In males, robotic inguinal lymph node dissection has been described for penile cancer. This report represents a first use of near-infrared fluorescence for sentinel inguinal lymph node mapping, and the first description of complete robotic inguinal lymph node dissection, in patients with vulvar malignancies. Methods: Bilateral robotic-assisted inguinal lymph node mapping and lymphadenectomy were performed using the daVinci Xi system with near-infrared fluorescence. Results: The patient presented at 81 years old with a 3 cm lesion on the left labia. In the operating room, the vulvar lesion was injected circumferentially with indocyanine green. A 1 cm incision was made in the skin over the apex of the left femoral triangle and carried past the underlying Camper's fascia. The tissue plane overlying the femoral triangle was developed using a tissue expander balloon. Under visual guidance, an 8 mm camera port and two 8 mm instrument ports were placed and the robot was docked. The ipsilateral sentinel lymph nodes were identified using near-infrared fluorescence and resected. We then performed a complete left superficial inguinal lymph node dissection; the right side was then done in identical fashion, with no sentinel node identified on the right. Radical hemivulvectomy was then performed without difficulty. All 11 lymph nodes were negative for disease. The patient was returned to the OR once for replacement of her JP drains; her postoperative course was otherwise unremarkable, and she is currently 15 months postoperative without complications or recurrence. Conclusions: Sentinel lymph node mapping and superficial inguinal lymph node dissection using robotic-assisted techniques and near-infrared fluorescence are feasible and warrant further investigation.
37

Ito, Minoru. "Robot vision modelling-camera modelling and camera calibration." Advanced Robotics 5, no. 3 (January 1990): 321–35. http://dx.doi.org/10.1163/156855391x00232.

38

Tobita, Kazuteru, Katsuyuki Sagayama, Mayuko Mori, Ayako Tabuchi, Yosuke Fukushima, Seiichi Teshigawara, and Hironori Ogawa. "Guidance Robot LIGHBOT and Robot Town Sagami." Journal of the Robotics Society of Japan 38, no. 7 (2020): 604–10. http://dx.doi.org/10.7210/jrsj.38.604.

39

Wallster, Ola. "Optimaster robot guidance system." Sensor Review 15, no. 2 (June 1995): 23–26. http://dx.doi.org/10.1108/eum0000000004263.

40

TAKAHASHI, Hironobu, and Fumiaki TOMITA. "Camera Calibration for Robot Vision." Journal of the Robotics Society of Japan 10, no. 2 (1992): 177–84. http://dx.doi.org/10.7210/jrsj.10.177.

41

Lubis, Abdul Jabbar, Yuyun Dwi Lestari, Haida Dafitri, and Azanuddin. "ROBOT TRACER WITH VISUAL CAMERA." Journal of Physics: Conference Series 930 (December 2017): 012017. http://dx.doi.org/10.1088/1742-6596/930/1/012017.

42

Webb, P., I. Gibson, and C. Wykes. "Robot guidance using ultrasonic arrays." Journal of Robotic Systems 11, no. 8 (1994): 681–92. http://dx.doi.org/10.1002/rob.4620110802.

43

Shyi, C. N., J. Y. Lee, and C. H. Chen. "Robot guidance using standard mark." Electronics Letters 24, no. 21 (1988): 1326. http://dx.doi.org/10.1049/el:19880901.

44

Deveza, Reimundo, David Thiel, Andrew Russell, and Alan Mackay-Sim. "Odor Sensing for Robot Guidance." International Journal of Robotics Research 13, no. 3 (June 1994): 232–39. http://dx.doi.org/10.1177/027836499401300305.

45

Kumar, Arun V. Rejus, and A. Sagai Francis Britto. "Robot Controlled Six Degree Freedom Camera." International Journal of Psychosocial Rehabilitation 23, no. 4 (July 20, 2019): 243–53. http://dx.doi.org/10.37200/ijpr/v23i4/pr190183.

46

CHRISTENSEN, HENRIK I. "A LOW-COST ROBOT CAMERA HEAD." International Journal of Pattern Recognition and Artificial Intelligence 07, no. 01 (February 1993): 69–87. http://dx.doi.org/10.1142/s0218001493000054.

Annotation:
Active vision involving the exploitation of controllable cameras and camera heads is an area which has received increased attention over the last few years. At LIA/AUC a binocular robot camera head has been constructed for use in geometric modelling and interpretation. In this manuscript the basic design of the head is outlined and a first prototype is described in some detail. Detailed specifications for the components used are provided together with a section on lessons learned from construction and initial use of this prototype.
47

Pour Yousefian Barfeh, D., and E. Ramos. "Color Detection in Autonomous Robot-Camera." Journal of Physics: Conference Series 1169 (February 2019): 012048. http://dx.doi.org/10.1088/1742-6596/1169/1/012048.

48

Zhuang, Hanqi, Zvi S. Roth, and Kuanchih Wang. "Robot calibration by mobile camera systems." Journal of Robotic Systems 11, no. 3 (1994): 155–67. http://dx.doi.org/10.1002/rob.4620110303.

49

Wang, Long, Ying Sheng, Yuan Ying Qiu, and Guang Da Chen. "Design and Implementation of the Control Device for a Cable-Driven Camera Robot." Applied Mechanics and Materials 373-375 (August 2013): 225–30. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.225.

Annotation:
Due to their strong bearing capacity, compact structure, and large workspace, cable-driven camera robots are being used increasingly. This paper focuses on the design and implementation of the control device for a four-cable-driven camera robot. First, a kinematic model is built and the variations of the cable lengths are analyzed as the camera robot moves; then, the console and actuator of the camera robot are designed; next, PC software is developed that transfers command codes from the camera robot console to the actuator and displays the camera robot's motion state; finally, the stability and rationality of the control device are verified by experiment.
50

Herpe, G., M. Chevreton, J. C. Robin, P. Prat, B. Servan, and F. Gex. "An intensified C.C.D. camera for telescope guidance." Experimental Astronomy 2, no. 3 (1991): 163–77. http://dx.doi.org/10.1007/bf00566684.
