Selection of scholarly literature on the topic "Camera guidance for robot"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Camera guidance for robot".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Camera guidance for robot"

1

Sai, Hesin, and Yoshikuni Okawa. "Structured Sign for Guidance of Mobile Robot". Journal of Robotics and Mechatronics 3, no. 5 (October 20, 1991): 379–86. http://dx.doi.org/10.20965/jrm.1991.p0379.

Full text of the source
Annotation:
As part of a guidance system for mobile robots operating on a wide, flat floor, such as an ordinary factory or a gymnasium, we have proposed a special-purpose sign. It consists of a cylinder with four slits and a fluorescent light placed on the axis of the cylinder. Two of the slits are parallel to each other, and the other two are angled. A robot obtains an image of the sign with a TV camera. After thresholding, we have four bright sets of pixels which correspond to the four slits of the cylinder. By measuring the relative distances between these four points, the distance and the angle to the direction of the sign can be computed using simple geometrical equations. Using a personal computer with image processing capability, we have investigated the accuracy of the proposed position identification method and compared the experimental results against a theoretical analysis of the measurement error. The data show good agreement between the analysis and the experiments. Finally, we have built a mobile robot with three microprocessors and a TV camera and performed several control experiments for trajectory following.
APA, Harvard, Vancouver, ISO, and other citation styles
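
The annotation above describes an image-processing front end that thresholds the camera image and locates the four bright slit images before the geometric range/bearing equations are applied. The fragment below is a minimal sketch of that first stage only, written with OpenCV; the file name, threshold value, and area filter are illustrative assumptions, and the authors' actual geometric equations are not reproduced.

# Sketch: locate the four bright slit blobs of a guidance sign (assumed setup).
import cv2
import numpy as np

img = cv2.imread("sign_view.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
_, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)  # threshold value is an assumption

# Connected components give one bright region per slit.
num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
blobs = [c for i, c in enumerate(centroids[1:], start=1)
         if stats[i, cv2.CC_STAT_AREA] > 20]           # drop small noise regions
blobs = sorted(blobs, key=lambda c: c[0])               # order blobs left to right

if len(blobs) == 4:
    # Relative horizontal spacings between the four blob centroids; the paper's
    # geometric equations would convert these spacings into range and bearing.
    spacings = np.diff([c[0] for c in blobs])
    print("blob centroids:", blobs)
    print("relative spacings (pixels):", spacings)
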
2

Yang, Long, and Nan Feng Xiao. "Robot Stereo Vision Guidance System Based on Attention Mechanism". Applied Mechanics and Materials 385-386 (August 2013): 708–11. http://dx.doi.org/10.4028/www.scientific.net/amm.385-386.708.

Full text of the source
Annotation:
An attention mechanism is added to a traditional robot stereo vision system so that candidate workpiece positions are obtained quickly from a saliency image, which greatly accelerates the computation. First, stereo calibration is performed to obtain the camera intrinsic and extrinsic matrices. These parameter matrices are then used to rectify newly captured images; a disparity map is computed using the OpenCV library, while a saliency image is computed with the Itti algorithm. The spatial pose of the workpiece in the left-camera coordinate frame is obtained by the triangulation principle, and a series of coordinate transformations yields its pose in world coordinates. Using the robot's inverse kinematics, the joint angles are computed to drive the robot to the workpiece. Finally, experimental results show the effectiveness of the method.
APA, Harvard, Vancouver, ISO, and other citation styles
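
The pipeline in the annotation above (calibrate, rectify, compute a disparity map, then triangulate) maps closely onto standard OpenCV calls. The fragment below is a minimal sketch of only the disparity and 3D-reprojection steps, assuming already-rectified left/right images and a reprojection matrix Q saved from a prior stereo calibration; the matcher parameters and file names are placeholders, and the Itti saliency stage is not included.

# Sketch: disparity map and 3D reprojection with OpenCV (rectified inputs assumed).
import cv2
import numpy as np

left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are illustrative, not tuned values from the paper.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point

# Q is the 4x4 reprojection matrix produced by cv2.stereoRectify during calibration.
Q = np.load("Q.npy")                                         # assumed to be saved beforehand
points_3d = cv2.reprojectImageTo3D(disparity, Q)             # XYZ in the left-camera frame
print("depth at image centre:", points_3d[left.shape[0] // 2, left.shape[1] // 2, 2])
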
3

Golkowski, Alexander Julian, Marcus Handte, Peter Roch, and Pedro J. Marrón. "An Experimental Analysis of the Effects of Different Hardware Setups on Stereo Camera Systems". International Journal of Semantic Computing 15, no. 03 (September 2021): 337–57. http://dx.doi.org/10.1142/s1793351x21400080.

Full text of the source
Annotation:
For many application areas such as autonomous navigation, the ability to accurately perceive the environment is essential. For this purpose, a wide variety of well-researched sensor systems are available that can be used to detect obstacles or navigation targets. Stereo cameras have emerged as a very versatile sensing technology in this regard due to their low hardware cost and high fidelity. Consequently, much work has been done to integrate them into mobile robots. However, the existing literature focuses on presenting the concepts and algorithms used to implement the desired robot functions on top of a given camera setup. As a result, the rationale and impact of choosing this camera setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is not much general guidance beyond isolated setups that worked for a specific robot. To close the gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. To do this, we present the results of an experimental analysis in which we use a given software setup to estimate the distance to an object while systematically changing the camera setup. Thereby, we vary the three main parameters of the physical camera setup, namely the angle and distance between the cameras as well as the field of view and a rather soft parameter, the resolution. Based on the results, we derive several guidelines on how to choose the parameters for an application.
APA, Harvard, Vancouver, ISO, and other citation styles
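
The parameters varied in the study above (baseline, field of view/focal length, and resolution) all enter the standard stereo range relation Z = f·B/d, which makes their influence on distance error easy to illustrate. The helper below is a small, generic sketch of that relationship; the numeric values are placeholders, not results from the paper.

# Sketch: how baseline, focal length and disparity quantisation affect stereo range.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

f_px, baseline = 700.0, 0.12          # placeholder focal length (pixels) and baseline (metres)
d = 20.0                              # measured disparity in pixels
z = depth_from_disparity(f_px, baseline, d)
# A one-pixel disparity error translates into this range error at distance z:
z_err = abs(depth_from_disparity(f_px, baseline, d - 1.0) - z)
print(f"range {z:.2f} m, sensitivity to 1 px disparity error: {z_err:.2f} m")
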
4

Blais, François, Marc Rioux, and Jacques Domey. "Compact three-dimensional camera for robot and vehicle guidance". Optics and Lasers in Engineering 10, no. 3-4 (January 1989): 227–39. http://dx.doi.org/10.1016/0143-8166(89)90039-0.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Imasato, Akimitsu, and Noriaki Maru. "Guidance and Control of Nursing Care Robot Using Gaze Point Detector and Linear Visual Servoing". International Journal of Automation Technology 5, no. 3 (May 5, 2011): 452–57. http://dx.doi.org/10.20965/ijat.2011.p0452.

Full text of the source
Annotation:
The gaze guidance and control we propose for a nursing care robot use a gaze point detector (GPD) and linear visual servoing (LVS). The robot captures stereo camera images, presents them to the user via a head-mounted display (HMD), calculates the user's gaze point as tracked by the camera, and moves toward that gaze point using LVS. Since, in this proposal, persons requiring nursing care share the robot's field of view via the GPD, the closer the robot gets to the target, the more accurate the control becomes. The GPD, worn on the user's head, consists of an HMD and a CCD camera.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Yang, Chun Hui, and Fu Dong Wang. "Trajectory Recognition and Navigation Control in the Mobile Robot". Key Engineering Materials 464 (January 2011): 11–14. http://dx.doi.org/10.4028/www.scientific.net/kem.464.11.

Full text of the source
Annotation:
Fast and accurate acquisition of navigation information is the key prerequisite for robot guidance. In this paper, a robot trajectory guidance system composed of a camera, a Digital Signal Controller, and a mobile platform driven by stepper motors is presented. First, the JPEG (Joint Photographic Expert Group) image taken by the camera is decoded into the corresponding pixel image. The image is then converted to a binary image by a binarization process. A fast line extraction algorithm based on the Column Elementary Line Segment method is presented. The trajectory direction deviation and distance deviation parameters are then calculated. In this way the robot is controlled to follow the given track accurately and at higher speed.
APA, Harvard, Vancouver, ISO, and other citation styles
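
The guidance chain in the annotation above (decode the camera image, binarise it, extract the track line, then derive direction and distance deviations) can be illustrated in a few lines of OpenCV/NumPy. The sketch below uses a plain least-squares line fit instead of the paper's Column Elementary Line Segment method, so it should be read as a generic stand-in; the file name and Otsu thresholding are assumptions.

# Sketch: binarise a track image and estimate heading and lateral deviation.
import cv2
import numpy as np

img = cv2.imread("track.jpg", cv2.IMREAD_GRAYSCALE)          # JPEG frame from the robot camera
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

ys, xs = np.nonzero(binary)                                  # pixels belonging to the dark track
slope, intercept = np.polyfit(ys, xs, 1)                     # model the track as x = slope*y + intercept

h, w = img.shape
heading_dev = np.degrees(np.arctan(slope))                   # angular deviation from the image vertical
lateral_dev = (slope * (h - 1) + intercept) - w / 2.0        # track offset from centre at the bottom row
print(f"heading deviation {heading_dev:.1f} deg, lateral deviation {lateral_dev:.1f} px")
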
7

Belmonte, Álvaro, José Ramón, Jorge Pomares, Gabriel Garcia, and Carlos Jara. "Optimal Image-Based Guidance of Mobile Manipulators using Direct Visual Servoing". Electronics 8, no. 4 (March 27, 2019): 374. http://dx.doi.org/10.3390/electronics8040374.

Full text of the source
Annotation:
This paper presents a direct image-based controller to perform the guidance of a mobile manipulator using image-based control. An eye-in-hand camera is employed to perform the guidance of a mobile differential platform with a seven degrees-of-freedom robot arm. The presented approach is based on an optimal control framework and it is employed to control mobile manipulators during the tracking of image trajectories taking into account robot dynamics. The direct approach allows us to take both the manipulator and base dynamics into account. The proposed image-based controllers consider the optimization of the motor signals sent to the mobile manipulator during the tracking of image trajectories by minimizing the control force and torque. As the results show, the proposed direct visual servoing system uses the eye-in-hand camera images for concurrently controlling both the base platform and robot arm. The use of the optimal framework allows us to derive different visual controllers with different dynamical behaviors during the tracking of image trajectories.
APA, Harvard, Vancouver, ISO, and other citation styles
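
The controllers described above belong to the image-based visual servoing family, in which an error measured directly in the image drives the robot velocity. As background, a minimal sketch of the classical (non-optimal) IBVS law v = -λ L⁺ e for point features is given below; it is a generic textbook formulation, not the optimal dynamic controller proposed in the paper, and the feature values are made up for the example.

# Sketch: classical image-based visual servoing step for point features.
import numpy as np

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """2x6 interaction (image Jacobian) matrix of a normalised image point at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) from current and desired point features."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error

# Example with four tracked points (normalised coordinates and rough depth estimates).
v = ibvs_velocity([(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)],
                  [(0.12, 0.01), (-0.08, 0.0), (0.0, 0.12), (0.0, -0.09)],
                  [1.0, 1.0, 1.0, 1.0])
print("commanded camera twist:", v)
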
8

Achour, K., and A. O. Djekoune. "Localization and guidance with an embarked camera on a mobile robot". Advanced Robotics 16, no. 1 (January 2002): 87–102. http://dx.doi.org/10.1163/156855302317413754.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Bazeille, Stephane, Emmanuel Battesti, and David Filliat. "A Light Visual Mapping and Navigation Framework for Low-Cost Robots". Journal of Intelligent Systems 24, no. 4 (December 1, 2015): 505–24. http://dx.doi.org/10.1515/jisys-2014-0116.

Full text of the source
Annotation:
We address the problems of localization, mapping, and guidance for robots with limited computational resources by combining vision with the metrical information given by the robot odometry. We propose in this article a novel lightweight and robust topometric simultaneous localization and mapping framework using appearance-based visual loop-closure detection enhanced with the odometry. The main advantage of this combination is that the odometry makes the loop-closure detection more accurate and reactive, while the loop-closure detection enables the long-term use of odometry for guidance by correcting the drift. The guidance approach is based on qualitative localization using vision and odometry, and is robust to visual sensor occlusions or changes in the scene. The resulting framework is incremental, real-time, and based on cheap sensors provided on many robots (a camera and odometry encoders). This approach is, moreover, particularly well suited for low-power robots as it is not dependent on the image processing frequency and latency, and thus it can be applied using remote processing. The algorithm has been validated on a Pioneer P3DX mobile robot in indoor environments, and its robustness is demonstrated experimentally for a large range of odometry noise levels.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Xue, Jin Lin, and Tony E. Grift. "Agricultural Robot Turning in the Headland of Corn Fields". Applied Mechanics and Materials 63-64 (June 2011): 780–84. http://dx.doi.org/10.4028/www.scientific.net/amm.63-64.780.

Full text of the source
Annotation:
This article discusses the development of a variable field of view (FOV) camera to realize headland turning of an agricultural robot in corn fields. The variable FOV was implemented by changing the camera's viewing direction with two DC motors rotating in the vertical and horizontal planes, respectively. Headland turning is executed in six steps: end-of-row detection and guidance, driving blind for a distance, a first 90˚ turn, position calculation, backing control, and a second 90˚ turn. Mathematical morphological operations were chosen to segment the crops, and fuzzy logic control was applied to guide the robot. Three repeated tests were conducted to perform the headland turning. A maximum error of 17.4 mm when using the lateral view and good headland turning operation were observed. The variable FOV successfully enabled headland turning of the agricultural robot in corn fields.
APA, Harvard, Vancouver, ISO, and other citation styles
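
Crop segmentation by mathematical morphology, mentioned in the annotation above, is commonly performed on an excess-green vegetation mask followed by morphological opening and closing; the fuzzy-logic steering stage is omitted here. The sketch below is one plausible version of that segmentation step, with the excess-green rule, threshold, and kernel size chosen for illustration rather than taken from the paper.

# Sketch: excess-green crop mask cleaned up with morphological opening and closing.
import cv2
import numpy as np

bgr = cv2.imread("corn_row.png").astype(np.float32)          # hypothetical field image
b, g, r = cv2.split(bgr)
exg = 2.0 * g - r - b                                        # excess-green vegetation index
mask = (exg > 20.0).astype(np.uint8) * 255                   # threshold is an assumption

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)        # remove isolated noise pixels
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)       # fill small gaps in the crop rows
cv2.imwrite("crop_mask.png", mask)
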

Dissertations on the topic "Camera guidance for robot"

1

Pearson, Christopher Mark. "Linear array cameras for mobile robot guidance". Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318875.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Grepl, Pavel. "Strojové vidění pro navádění robotu" [Machine vision for robot guidance]. Master's thesis, Vysoké učení technické v Brně, Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443727.

Full text of the source
Annotation:
This master's thesis deals with the design, assembly, and testing of a camera system for the localization of randomly placed and oriented objects on a conveyor belt, with the purpose of guiding a robot to those objects. The theoretical part surveys the individual components that make up a camera system and the field of 2D and 3D object localization. The practical part covers two possible arrangements of the camera system, the solution of the chosen arrangement, the creation of test images, the programming of the image-processing algorithm, the creation of the HMI, and the testing of the complete system.
APA, Harvard, Vancouver, ISO, and other citation styles
3

Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces". Thesis, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Full text of the source
Annotation:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect’s depth sensor capabilities and performance. The study comprises examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of color and reflectance characteristics of an object are also analyzed. The study examines two versions of Kinect sensors, one dedicated to operate with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces by the linkage of multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurements accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of the sensor in between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is also extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines the comprehensive calibration of reference Kinect with the robot. The latter can then be used to interact under visual guidance with large objects, such as vehicles, that are positioned within a significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct a 180 degrees coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, that is an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
APA, Harvard, Vancouver, ISO, and other citation styles
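
The calibration chain described above registers every Kinect to a reference Kinect, and the reference Kinect to the robot base, so a point measured by any sensor can be expressed in the robot frame by composing homogeneous transforms. The snippet below is a generic illustration of that composition; the 4x4 matrices are placeholders standing in for actual calibration results.

# Sketch: expressing a point seen by Kinect k in the robot base frame via the reference Kinect.
import numpy as np

T_base_ref = np.eye(4)                       # placeholder: reference Kinect pose in the robot base frame
T_ref_k = np.eye(4)                          # placeholder: Kinect k pose in the reference Kinect frame
T_ref_k[:3, 3] = [1.2, 0.0, 0.0]             # e.g. mounted 1.2 m to the side of the reference unit

p_k = np.array([0.3, -0.1, 2.0, 1.0])        # homogeneous point measured by Kinect k
p_base = T_base_ref @ T_ref_k @ p_k          # same point in the robot base frame
print("point in robot base frame:", p_base[:3])
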
4

Maier, Daniel [author], and Maren [academic supervisor] Bennewitz. "Camera-based humanoid robot navigation". Freiburg: Universität, 2015. http://d-nb.info/1119452082/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Gu, Lifang. "Visual guidance of robot motion". University of Western Australia, Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.

Full text of the source
Annotation:
Future robots are expected to cooperate with humans in daily activities. Efficient cooperation requires new techniques for transferring human skills to robots. This thesis presents an approach on how a robot can extract and replicate a motion by observing how a human instructor conducts it. In this way, the robot can be taught without any explicit instructions and the human instructor does not need any expertise in robot programming. A system has been implemented which consists of two main parts. The first part is data acquisition and motion extraction. Vision is the most important sensor with which a human can interact with the surrounding world. Therefore two cameras are used to capture the image sequences of a moving rigid object. In order to compress the incoming images from the cameras and extract 3D motion information of the rigid object, feature detection and tracking are applied to the images. Corners are chosen as the main features because they are more stable under perspective projection and during motion. A reliable corner detector is implemented and a new corner tracking algorithm is proposed based on smooth motion constraints. With both spatial and temporal constraints, 3D trajectories of a set of points on the object can be obtained and the 3D motion parameters of the object can be reliably calculated by the algorithm proposed in this thesis. Once the 3D motion parameters are available through the vision system, the robot should be programmed to replicate this motion. Since we are interested in smooth motion and the similarity between two motions, the task of the second part of our system is therefore to extract motion characteristics and to transfer these to the robot. It can be proven that the characteristics of a parametric cubic B-spline curve are completely determined by its control points, which can be obtained by the least-squares fitting method, given some data points on the curve. Therefore a parametric cubic B–spline curve is fitted to the motion data and its control points are calculated. Given the robot configuration the obtained control points can be scaled, translated, and rotated so that a motion trajectory can be generated for the robot to replicate the given motion in its own workspace with the required smoothness and similarity, although the absolute motion trajectories of the robot and the instructor can be different. All the above modules have been integrated and results of an experiment with the whole system show that the approach proposed in this thesis can extract motion characteristics and transfer these to a robot. A robot arm has successfully replicated a human arm movement with similar shape characteristics by our approach. In conclusion, such a system collects human skills and intelligence through vision and transfers them to the robot. Therefore, a robot with such a system can interact with its environment and learn by observation.
APA, Harvard, Vancouver, ISO, and other citation styles
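
The motion-transfer step above relies on fitting a parametric cubic B-spline to the observed 3D trajectory by least squares and reusing its control points. A compact sketch of such a fit with SciPy is shown below; it uses synthetic data and a placeholder smoothing factor, so it illustrates the technique rather than the thesis's implementation.

# Sketch: least-squares cubic B-spline fit to a 3D motion trajectory.
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic stand-in for the 3D points recovered by the stereo vision system.
t = np.linspace(0.0, 1.0, 50)
xyz = np.vstack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), t])

# Parametric cubic spline; the smoothing factor s trades fidelity against smoothness.
tck, u = splprep(xyz, k=3, s=1e-4)
knots, control_points, degree = tck           # the control points characterise the motion

# Resample the fitted curve; scaling/rotating the control points would retarget it to the robot.
resampled = np.array(splev(np.linspace(0.0, 1.0, 200), tck))
print("number of control points per axis:", len(control_points[0]))
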
6

Arthur, Richard B. "Vision-Based Human Directed Robot Guidance". Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Stark, Per. "Machine vision camera calibration and robot communication". Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.

Full text of the source
Annotation:
This thesis is part of a larger project included in the European project AFFIX. The aim of the project is to develop a new method of assembling an aircraft engine part so that weight and manufacturing costs are reduced. The proposal is to weld sheet-metal parts instead of using cast parts. A machine vision system is suggested for detecting the joints for the weld assembly operation of the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates of the curve are calculated by the machine vision system and sent to a robot, which should create and follow a path using those coordinates. The accuracy required for locating the curve to produce an approved weld joint is within +/- 0.5 mm. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also touches on the importance of good lighting when obtaining images for a vision system, and it covers the development of a robot program that receives the coordinates and transforms them into robot movements. The camera calibration is done in a toolbox for MatLab and extracts the intrinsic camera parameters, such as the distance between the centre of the lens and the optical detector in the camera (f), the lens distortion parameters, and the principal point. It also returns the location and orientation of the camera for each image obtained during the calibration, the extrinsic parameters. The intrinsic parameters are used when translating between image coordinates and camera coordinates, and the extrinsic parameters are used when translating between camera coordinates and world coordinates. The results of this project are a transformation matrix that translates the robot's position into the camera's position, and a robot program that can receive a large number of coordinates, store them, and create a path to move along for the welding application.
APA, Harvard, Vancouver, ISO, and other citation styles
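
The thesis performs the calibration in a MATLAB toolbox; the same intrinsic/extrinsic estimation is available in OpenCV, and the sketch below shows the generic chessboard-based form of it. The file pattern, board size, and square size are placeholders, and the step that relates the camera frame to the robot frame is not included.

# Sketch: estimating camera intrinsics and per-view extrinsics from chessboard images.
import glob
import cv2
import numpy as np

board = (9, 6)                                   # inner corners of a placeholder chessboard
square = 0.025                                   # square size in metres (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_*.png"):            # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K holds the intrinsics (focal length, principal point); rvecs/tvecs are the per-image extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("reprojection RMS error:", rms)
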
8

Snailum, Nicholas. "Mobile robot navigation using single camera vision". Thesis, University of East London, 2001. http://roar.uel.ac.uk/3565/.

Full text of the source
Annotation:
This thesis describes the research carried out in overcoming the problems encountered during the development of an autonomous mobile robot (AMR) which uses a single television camera for navigation in environments with visible edges, such as corridors and hallways. The objective was to determine the minimal sensing and signal processing requirements for a real AMR that could achieve self-steering, navigation and obstacle avoidance in real unmodified environments. A goal was to design algorithms that could meet the objective while being able to run on a laptop personal computer (PC). This constraint confined the research to computationally efficient algorithms and memory management techniques. The methods by which the objective was successfully achieved are described. A number of noise reduction and feature extraction algorithms have been tested to determine their suitability in this type of environment, and where possible these have been modified to improve their performance. The current methods of locating lines of perspective and vanishing points in images are described, and a novel algorithm has been devised for this application which is more efficient in both its memory usage and execution time. A novel obstacle avoidance mechanism is described which is shown to provide the low level piloting capability necessary to deal with unexpected situations. The difficulties of using a single camera are described, and it is shown that a second camera is required in order to provide robust performance. A prototype AMR was built and used to demonstrate reliable navigation and obstacle avoidance in real time in real corridors. Test results indicate that the prototype could be developed into a competitively priced commercial service robot.
APA, Harvard, Vancouver, ISO, and other citation styles
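
Finding lines of perspective and their vanishing point, as described in the annotation above, is often illustrated with a Hough transform followed by a least-squares intersection of the detected lines. The fragment below is such a generic sketch, not the thesis's more memory-efficient algorithm; the Canny and Hough parameters and the near-horizontal filter are arbitrary choices for illustration.

# Sketch: estimate a vanishing point as the least-squares intersection of Hough line segments.
import cv2
import numpy as np

gray = cv2.imread("corridor.png", cv2.IMREAD_GRAYSCALE)      # hypothetical corridor view
edges = cv2.Canny(gray, 50, 150)
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=60, maxLineGap=10)

A, c = [], []
for x1, y1, x2, y2 in segments[:, 0]:
    if abs(x2 - x1) < 1e-6 or abs((y2 - y1) / (x2 - x1)) < 0.05:
        continue                                             # skip vertical and near-horizontal clutter
    n = np.array([y1 - y2, x2 - x1], dtype=float)            # normal of the segment's supporting line
    n /= np.linalg.norm(n)
    A.append(n)
    c.append(n @ np.array([x1, y1], dtype=float))

# Least-squares solution of n_i . p = c_i over all oblique segments gives the vanishing point p.
vp, *_ = np.linalg.lstsq(np.array(A), np.array(c), rcond=None)
print("vanishing point estimate (pixels):", vp)
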
9

Quine, Ben. "Spacecraft guidance systems: attitude determination using star camera data". Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360417.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Burman, Gustav, and Simon Erlandsson. "ACM 9000: Automated Camera Man". Thesis, KTH, Maskinkonstruktion (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230253.

Full text of the source
Annotation:
Today's digital society is drastically changing the way we learn and educate. Education is being digitalized through the use of online courses and digital lectures. This bachelor thesis addresses the problem of recording a lecture without a camera operator, using an Automated Camera Man (ACM), for easier production of high-quality educational material. This was achieved with a modularized design process, practical testing, and a scientific approach. The Automated Camera Man can be placed at the rear of the lecture hall to record or stream the content while it actively adjusts itself and its direction towards the lecturer using image processing and analysis.
APA, Harvard, Vancouver, ISO, and other citation styles

Books on the topic "Camera guidance for robot"

1

Roth, Zvi S., ed. Camera-aided robot calibration. Boca Raton: CRC Press, 1996.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Snailum, Nicholas. Mobile robot navigation using single camera vision. London: University of East London, 2001.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Horn, Geoffrey M. Camera operator. Pleasantville, NY: Gareth Stevens Pub., 2009.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Manatt, Kathleen G. Robot scientist. Ann Arbor: Cherry Lake Pub., 2007.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Pomerleau, Dean A. Neural network perception for mobile robot guidance. Boston: Kluwer Academic Publishers, 1993.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Pomerleau, Dean A. Neural Network Perception for Mobile Robot Guidance. Boston, MA: Springer US, 1993.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Pomerleau, Dean A. Neural Network Perception for Mobile Robot Guidance. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3192-0.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Steer, Barry. Navigation for the guidance of a mobile robot. [s.l.]: typescript, 1985.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Link, William, ed. Off camera: Conversations with the makers of prime-time television. New York, N.Y: New American Library, 1986.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Arndt, David. Make money with your camera. Buffalo, NY: Amherst Media, 1999.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Book chapters on the topic "Camera guidance for robot"

1

Bihlmaier, Andreas. "Endoscope Robots and Automated Camera Guidance". In Learning Dynamic Spatial Relations, 23–102. Wiesbaden: Springer Fachmedien Wiesbaden, 2016. http://dx.doi.org/10.1007/978-3-658-14914-7_2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Baltes, Jacky. "Camera Calibration Using Rectangular Textures". In Robot Vision, 245–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_30.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Martínez, Antonio B., and Albert Larré. "Fast Mobile Robot Guidance". In Traditional and Non-Traditional Robotic Sensors, 423–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-75984-0_26.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Scheibe, Karsten, Hartmut Korsitzky, Ralf Reulke, Martin Scheele, and Michael Solbrig. "EYESCAN - A High Resolution Digital Panoramic Camera". In Robot Vision, 77–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_10.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Barnes, Nick, and Zhi-Qiang Liu. "Object Recognition Mobile Robot Guidance". In Knowledge-Based Vision-Guided Robots, 63–86. Heidelberg: Physica-Verlag HD, 2002. http://dx.doi.org/10.1007/978-3-7908-1780-5_4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Rojtberg, Pavel. "User Guidance for Interactive Camera Calibration". In Virtual, Augmented and Mixed Reality. Multimodal Interaction, 268–76. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21607-8_21.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Bihlmaier, Andreas. "Intraoperative Robot-Based Camera Assistance". In Learning Dynamic Spatial Relations, 185–208. Wiesbaden: Springer Fachmedien Wiesbaden, 2016. http://dx.doi.org/10.1007/978-3-658-14914-7_6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Börner, Anko, Heiko Hirschmüller, Karsten Scheibe, Michael Suppa, and Jürgen Wohlfeil. "MFC - A Modular Line Camera for 3D World Modulling". In Robot Vision, 319–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-78157-8_24.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Bobadilla, Leonardo, Katrina Gossman, and Steven M. LaValle. "Manipulating Ergodic Bodies through Gentle Guidance". In Robot Motion and Control 2011, 273–82. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-2343-9_23.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Pomerleau, Dean A. "Other Vision-based Robot Guidance Methods". In Neural Network Perception for Mobile Robot Guidance, 161–71. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3192-0_11.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Conference papers on the topic "Camera guidance for robot"

1

Kameoka, Kanako, Shigeru Uchikado, and Sun Lili. "Visual Guidance for a Mobile Robot with a Camera". In TENCON 2006 - 2006 IEEE Region 10 Conference. IEEE, 2006. http://dx.doi.org/10.1109/tencon.2006.344018.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

"Guidance of Robot Arms using Depth Data from RGB-D Camera". In 10th International Conference on Informatics in Control, Automation and Robotics. SciTePress - Science and Technology Publications, 2013. http://dx.doi.org/10.5220/0004481903150321.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Li, Zhiyuan, and Demao Ye. "Research on visual guidance algorithm of forking robot based on monocular camera". In Conference on Optics Ultra Precision Manufacturing and Testing, edited by Dawei Zhang, Lingbao Kong, and Xichun Luo. SPIE, 2020. http://dx.doi.org/10.1117/12.2575675.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Martinez-Rey, Miguel, Felipe Espinosa, Alfredo Gardel, Carlos Santos, and Enrique Santiso. "Mobile robot guidance using adaptive event-based pose estimation and camera sensor". In 2016 Second International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP). IEEE, 2016. http://dx.doi.org/10.1109/ebccsp.2016.7605089.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Samu, Tayib, Nikhal Kelkar, David Perdue, Michael A. Ruthemeyer, Bradley O. Matthews, and Ernest L. Hall. "Line following using a two camera guidance system for a mobile robot". In Photonics East '96, edited by David P. Casasent. SPIE, 1996. http://dx.doi.org/10.1117/12.256287.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Zhang, Zhexiao, Zhiqian Cheng, Geng Wang, and Jimin Xu. "A VSLAM Fusing Visible Images and Infrared Images from RGB-D camera for Indoor Mobile Robot". In 2018 IEEE CSAA Guidance, Navigation and Control Conference (GNCC). IEEE, 2018. http://dx.doi.org/10.1109/gncc42960.2018.9019128.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Sai, Heshin, and Yoshikuni Okawa. "A Structured Sign for the Guidance of Autonomous Mobile Robots". In ASME 1993 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/cie1993-0019.

Full text of the source
Annotation:
For the explicit purpose of guiding autonomous mobile robots on wide, flat planes, we propose a specially designed guiding sign. It consists of a cylinder with four slits on its surface and a fluorescent light on the center axis of the cylinder. The two outer slits are parallel to each other, and the two inner slits are angled. A robot takes an image of the sign with a TV camera. After a threshold operation, it has four bright sets of pixels, each of which corresponds to one of the four slits of the cylinder. By measuring the relative distances between those four bright blobs in the image plane, the robot can compute its distance and angle to the direction of the sign using two simple geometrical equations. We have built a mobile robot and a sign. Using a personal computer with an image processor, we have evaluated the accuracy of the proposed method of position and angle measurement and compared it with the theoretical analysis. The data show good agreement with the analytical results. We have also written a simulation program that models the characteristics of the measurement. First, the parameters in the simulation program are adjusted; then several trajectory-following experiments are carried out with it. The simulation program provides results that could not be obtained in the actual experiment.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Shah, Syed, Suresh Kannan, and Eric Johnson. "Motion Estimation for Obstacle Detection and Avoidance Using a Single Camera for UAVs/Robots". In AIAA Guidance, Navigation, and Control Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2010. http://dx.doi.org/10.2514/6.2010-7569.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Sarkar, Saurabh, and Manish Kumar. "Fast Swarm Based Lane Detection System for Mobile Robot Navigation on Urban Roads". In ASME 2009 Dynamic Systems and Control Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/dscc2009-2702.

Full text of the source
Annotation:
The paper presents a swarm-based lane detection system that uses cooperating agents acting on different regions of the images obtained from an onboard robot camera. These agents act on the image and communicate with each other to find the possible location of the lane in the image. The swarm agents finalize their locations based on a set of rules that includes each other's relative positions and their previous locations. The swarm agents place themselves on the lane and generate a guidance path for the robot. The proposed lane detection method is fast and robust to noise in the image. It is faster than the regression methods commonly used and can overcome the problem of noisy images to a good extent.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Zhao, Haimei, Wei Bian, Bo Yuan, and Dacheng Tao. "Collaborative Learning of Depth Estimation, Visual Odometry and Camera Relocalization from Monocular Videos". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/68.

Full text of the source
Annotation:
Scene perceiving and understanding tasks including depth estimation, visual odometry (VO) and camera relocalization are fundamental for applications such as autonomous driving, robots and drones. Driven by the power of deep learning, significant progress has been achieved on individual tasks but the rich correlations among the three tasks are largely neglected. In previous studies, VO is generally accurate in local scope yet suffers from drift in long distances. By contrast, camera relocalization performs well in the global sense but lacks local precision. We argue that these two tasks should be strategically combined to leverage the complementary advantages, and be further improved by exploiting the 3D geometric information from depth data, which is also beneficial for depth estimation in turn. Therefore, we present a collaborative learning framework, consisting of DepthNet, LocalPoseNet and GlobalPoseNet with a joint optimization loss to estimate depth, VO and camera localization unitedly. Moreover, the Geometric Attention Guidance Model is introduced to exploit the geometric relevance among three branches during learning. Extensive experiments demonstrate that the joint learning scheme is useful for all tasks and our method outperforms current state-of-the-art techniques in depth estimation and camera relocalization with highly competitive performance in VO.
APA, Harvard, Vancouver, ISO, and other citation styles

Reports of organizations on the topic "Camera guidance for robot"

1

Chen, J., W. E. Dixon, D. M. Dawson, and V. K. Chitrakaran. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada465705.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Haas, Gary, and Philip R. Osteen. Wall Sensing for an Autonomous Robot With a Three-Dimensional Time-of-Flight (3-D TOF) Camera. Fort Belvoir, VA: Defense Technical Information Center, February 2011. http://dx.doi.org/10.21236/ada539897.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles