Academic literature on the topic 'Robot vision systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Robot vision systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Robot vision systems"

1

Monta, Mitsuji, Naoshi Kondo, Seiichi Arima, and Kazuhiko Namba. "Robotic Vision for Bioproduction Systems." Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 341–48. http://dx.doi.org/10.20965/jrm.2003.p0341.

Full text
Abstract:
The vision system is one of the most important external sensors for an agricultural robot because the robot has to find its target among various objects against a complicated background. Optical and morphological properties should therefore be investigated first, to recognize the target object properly when a visual sensor for an agricultural robot is developed. A TV camera is widely used as a vision sensor for agricultural robots. A target image can easily be obtained using color component images from a TV camera when the target color differs from the colors of the other objects and the background. When the target has a color similar to its background, it is often possible to discriminate objects with a monochrome TV camera whose sensitivity ranges from the visible to the infrared region. However, it is not easy to measure target depth with TV cameras because many objects may overlap in the field of view. In this paper, robotic vision using TV cameras for tomato and cucumber harvesting robots and a depth measurement system using a laser scanner are introduced.
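The color-test idea in this abstract is simple enough to sketch. Below is a minimal, hypothetical illustration (Python with NumPy) of discriminating a red target such as a ripe tomato from green foliage by comparing color components; the function name and margin value are assumptions, not from the paper.

```python
# Hypothetical sketch of color-component discrimination: a red target is
# separated from a green background by comparing the R and G channels.
import numpy as np

def segment_red_target(rgb, margin=30):
    """Boolean mask of pixels whose red component dominates green.

    rgb: H x W x 3 uint8 array; margin: minimum R-G difference (assumed).
    """
    r = rgb[..., 0].astype(np.int16)   # widen to avoid uint8 wraparound
    g = rgb[..., 1].astype(np.int16)
    return (r - g) > margin

# Tiny synthetic image with one "red" pixel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 40, 30)              # red target pixel
print(segment_red_target(img))         # [[ True False] [False False]]
```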
APA, Harvard, Vancouver, ISO, and other styles
2

Senoo, Taku, Yuji Yamakawa, Yoshihiro Watanabe, Hiromasa Oku, and Masatoshi Ishikawa. "High-Speed Vision and its Application Systems." Journal of Robotics and Mechatronics 26, no. 3 (June 20, 2014): 287–301. http://dx.doi.org/10.20965/jrm.2014.p0287.

Full text
Abstract:
[Figure: batting/throwing robots] This paper introduces the high-speed vision the authors developed, together with its applications. Architecture and development examples of high-speed vision are shown first in the sections that follow, then target tracking using active vision is explained. High-speed vision applied to robot control, design guidelines, and the development system for a high-speed robot are then introduced as examples. High-speed robot tasks, including dynamic manipulation and handling of soft objects, are explained, and then book flipping scanning – an image analysis application – is explained, followed by 1 ms auto pan/tilt and micro visual feedback, which are optical applications.
APA, Harvard, Vancouver, ISO, and other styles
3

C, Abhishek. "Development of Hexapod Robot with Computer Vision." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 1796–805. http://dx.doi.org/10.22214/ijraset.2021.37455.

Full text
Abstract:
Nowadays many robotic systems are developed with a great deal of innovation, seeking the flexibility and efficiency of biological systems. The hexapod robot is the best example of such robots: it is a six-legged robot whose walking movements imitate the movements of insects, using two alternating sets of three legs to walk, which provides stability, flexibility and the mobility to travel on irregular surfaces. With these attributes, hexapod robots can be used to explore irregular surfaces, inhospitable places, or places which are difficult for humans to access. This paper involves the development of a hexapod robot with digital image processing implemented on a Raspberry Pi, to study robotic systems with legged locomotion and robotic vision. The work integrates a robotic system and an embedded system for digital image processing, programmed in a high-level language using Python. The robot is equipped with a camera to capture real-time video and uses a distance sensor that allows it to detect obstacles. The robot is self-stabilizing and can detect corners. It has 3 degrees of freedom in each of its six legs, giving an 18 DOF robotic movement. The use of multiple degrees of freedom at the joints of the legs allows legged robots to change their movement direction without slippage. Additionally, it is possible to change the height from the ground, introducing a damping and a decoupling between the terrain irregularities and the body of the robot. Keywords: Hexapod, Raspberry Pi, Computer vision, Object detection, YOLO, Servo motor, OpenCV.
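As a small illustration of the corner-detection capability mentioned above, here is a sketch using OpenCV's Shi-Tomasi detector, a standard choice rather than necessarily the paper's method; the synthetic frame and all parameter values are assumptions.

```python
# Detect corners in a (synthetic) camera frame with Shi-Tomasi features.
import cv2
import numpy as np

frame = np.zeros((120, 160, 3), dtype=np.uint8)       # stand-in camera frame
cv2.rectangle(frame, (40, 30), (120, 90), (255, 255, 255), -1)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners = cv2.goodFeaturesToTrack(gray, maxCorners=10,
                                  qualityLevel=0.1, minDistance=10)
print(0 if corners is None else len(corners))          # expect 4 corners
```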
APA, Harvard, Vancouver, ISO, and other styles
4

Fujita, Toyomi, Takayuki Tanaka, Satoru Takahashi, Hidenori Takauji, and Shun’ichi Kaneko. "Special Issue on Vision and Motion Control." Journal of Robotics and Mechatronics 27, no. 2 (April 20, 2015): 121. http://dx.doi.org/10.20965/jrm.2015.p0121.

Full text
Abstract:
Robot vision is an important robotics and mechatronics technology for realizing intelligent robot systems that work in the real world. Recent improvements in computer processing enable the environment to be recognized and robots to be controlled based on dynamic, high-speed, highly accurate image information. In industrial applications, target objects are detected much more robustly and reliably through high-speed processing. In intelligent-systems applications, security systems that detect human beings have recently become an active area of computer vision. Another attractive application is recognizing actions and gestures by detecting humans – an application that would enable human beings and robots to interact and cooperate more smoothly as robots observe and assist human partners. This key technology could be used to aid the elderly and handicapped in practical environments such as hospitals and homes. This special issue covers topics on robot vision and motion control, including dynamic image processing. These articles are certain to be both informative and interesting to robotics and mechatronics researchers. We thank the authors for submitting their work and for assisting during the review process. We also thank the reviewers for their dedicated time and effort.
APA, Harvard, Vancouver, ISO, and other styles
5

Zeng, Rui, Yuhui Wen, Wang Zhao, and Yong-Jin Liu. "View planning in robot active vision: A survey of systems, algorithms, and applications." Computational Visual Media 6, no. 3 (August 1, 2020): 225–45. http://dx.doi.org/10.1007/s41095-020-0179-3.

Full text
Abstract:
Rapid development of artificial intelligence motivates researchers to expand the capabilities of intelligent and autonomous robots. In many robotic applications, robots are required to make planning decisions based on perceptual information to achieve diverse goals in an efficient and effective way. The planning problem has been investigated in active robot vision, in which a robot analyzes its environment and its own state in order to move sensors to obtain more useful information under certain constraints. View planning, which aims to find the best view sequence for a sensor, is one of the most challenging issues in active robot vision. The quality and efficiency of view planning are critical for many robot systems and are influenced by the nature of their tasks, hardware conditions, scanning states, and planning strategies. In this paper, we first summarize some basic concepts of active robot vision, and then review representative work on systems, algorithms and applications from four perspectives: object reconstruction, scene reconstruction, object recognition, and pose estimation. Finally, some potential directions are outlined for future work.
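A toy example makes the view-planning problem concrete. The sketch below shows a greedy planner, one of the simplest strategies within the survey's framing: repeatedly pick the candidate view that covers the most still-unseen surface patches. All names and data here are illustrative.

```python
# Greedy next-best-view selection over toy coverage sets.
def plan_views(candidates, needed):
    """candidates: {view_name: set of surface patches that view observes}."""
    plan, remaining = [], set(needed)
    while remaining:
        best = max(candidates, key=lambda v: len(candidates[v] & remaining))
        gain = candidates[best] & remaining
        if not gain:
            break                       # no candidate view helps any further
        plan.append(best)
        remaining -= gain
    return plan, remaining              # remaining: patches left uncovered

views = {"front": {1, 2, 3}, "left": {3, 4}, "top": {4, 5, 6}}
print(plan_views(views, {1, 2, 3, 4, 5, 6}))   # (['front', 'top'], set())
```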
APA, Harvard, Vancouver, ISO, and other styles
6

Senoo, Taku, Yuji Yamakawa, Shouren Huang, Keisuke Koyama, Makoto Shimojo, Yoshihiro Watanabe, Leo Miyashita, Masahiro Hirano, Tomohiro Sueishi, and Masatoshi Ishikawa. "Dynamic Intelligent Systems Based on High-Speed Vision." Journal of Robotics and Mechatronics 31, no. 1 (February 20, 2019): 45–56. http://dx.doi.org/10.20965/jrm.2019.p0045.

Full text
Abstract:
This paper presents an overview of the high-speed vision system that the authors have been developing, and its applications. First, examples of high-speed vision are presented, and image-related technologies are described. Next, we describe the use of vision systems to track flying objects at sonic speed. Finally, we present high-speed robotic systems that use high-speed vision for robotic control. Descriptions of the tasks that employ high-speed robots center on manipulation, bipedal running, and human-robot cooperation.
APA, Harvard, Vancouver, ISO, and other styles
7

Kühnlenz, Kolja, and Martin Buss. "Multi-Focal Vision and Gaze Control Improve Navigation Performance." International Journal of Advanced Robotic Systems 9, no. 1 (January 1, 2012): 25. http://dx.doi.org/10.5772/50920.

Full text
Abstract:
Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is done considering a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.
APA, Harvard, Vancouver, ISO, and other styles
8

Menegatti, Emanuele, and Tomas Pajdla. "Omnidirectional robot vision." Robotics and Autonomous Systems 58, no. 6 (June 2010): 745–46. http://dx.doi.org/10.1016/j.robot.2010.02.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chioreanu, Adrian, Stelian Brad, and Cosmin Ioanes. "Vision on Intelligent Management of Industrial Robotics Systems." Applied Mechanics and Materials 162 (March 2012): 368–77. http://dx.doi.org/10.4028/www.scientific.net/amm.162.368.

Full text
Abstract:
Based on the Future Internet and ITIL, cutting-edge concepts and approaches related to software service systems in distributed architectures for managing information and processes in industrial robot platforms are introduced. A new approach to defining the business relations of entities that have various interests related to industrial robots, as well as tools that support the new business approach, is also identified in this paper. Further, the architecture of a prototype platform designed around these concepts is presented.
APA, Harvard, Vancouver, ISO, and other styles
10

Troscianko, T., B. Vincent, I. D. Gilchrist, R. Knight, and O. Holland. "A robot with active vision." Journal of Vision 6, no. 6 (March 19, 2010): 456. http://dx.doi.org/10.1167/6.6.456.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Robot vision systems"

1

Öfjäll, Kristoffer. "Online Learning for Robot Vision." Licentiate thesis, Linköpings universitet, Datorseende, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110892.

Full text
Abstract:
In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems, however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and extracting the relevant information, something humans and animals perform strikingly well. On the other hand, humans have great difficulty expressing what they are actually looking for on a low level, suitable for direct implementation on a machine. For instance, objects tend to be already detected when the visual information reaches the conscious mind, with almost no clues remaining regarding how the object was identified in the first place. This became apparent already when Seymour Papert gathered a group of summer workers to solve the computer vision problem 48 years ago [35]. Artificial learning systems can overcome this gap between the level of human visual reasoning and low-level machine vision processing. If a human teacher can provide examples of what is to be extracted and if the learning system is able to extract the gist of these examples, the gap is bridged. There are, however, some special demands on a learning system for it to perform successfully in a visual context. First, low-level visual input is often of high dimensionality, such that the learning system needs to handle large inputs. Second, visual information is often ambiguous, such that the learning system needs to be able to handle multi-modal outputs, i.e. multiple hypotheses. Typically, the relations to be learned are non-linear and there is an advantage if data can be processed at video rate, even after presenting many examples to the learning system. In general, there seems to be a lack of such methods. This thesis presents systems for learning perception-action mappings for robotic systems with visual input. A range of problems are discussed, such as vision-based autonomous driving, inverse kinematics of a robotic manipulator and controlling a dynamical system. Operational systems demonstrating solutions to these problems are presented. Two different approaches for providing training data are explored, learning from demonstration (supervised learning) and explorative learning (self-supervised learning). A novel learning method fulfilling the stated demands is presented. The method, qHebb, is based on associative Hebbian learning on data in channel representation. Properties of the method are demonstrated on a vision-based autonomously driving vehicle, where the system learns to directly map low-level image features to control signals. After an initial training period, the system seamlessly continues autonomously. In a quantitative evaluation, the proposed online learning method performed comparably with state-of-the-art batch learning methods.
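To make "associative Hebbian learning on data in channel representation" concrete, here is a minimal toy sketch; it illustrates the general principle only and is not the thesis's qHebb implementation. A scalar is encoded into overlapping channel activations, associations accumulate as outer products, and the output is decoded as a weighted average.

```python
import numpy as np

centers = np.linspace(0.0, 1.0, 10)              # channel centers over [0, 1]

def encode(v, width=0.15):
    """Sparse, overlapping channel encoding of a scalar (illustrative)."""
    a = np.maximum(0.0, 1.0 - np.abs(v - centers) / width)
    return a / a.sum()

def train(C, x_val, y_val):
    C += np.outer(encode(y_val), encode(x_val))  # Hebbian outer-product update

def predict(C, x_val):
    act = C @ encode(x_val)                      # output channel activations
    return float(act @ centers / act.sum())     # decode by weighted average

C = np.zeros((len(centers), len(centers)))
for v in np.linspace(0.0, 1.0, 50):
    train(C, v, 1.0 - v)                         # learn the mapping y = 1 - x
print(round(predict(C, 0.2), 2))                 # approximately 0.8
```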
APA, Harvard, Vancouver, ISO, and other styles
2

Pudney, Christopher John. "Surface modelling and surface following for robots equipped with range sensors." University of Western Australia. Dept. of Computer Science, 1994. http://theses.library.uwa.edu.au/adt-WU2003.0002.

Full text
Abstract:
The construction of surface models from sensor data is an important part of perceptive robotics. When the sensor data are obtained from fixed sensors, the problem of occlusion arises. To overcome occlusion, sensors may be mounted on a robot that moves the sensors over the surface. In this thesis the sensors are single-point range finders. The range finders provide a set of sensor points, that is, the surface points detected by the sensors. The sets of sensor points obtained during the robot's motion are used to construct a surface model. The surface model is used in turn in the computation of the robot's motion, so surface modelling is performed on-line, that is, the surface model is constructed incrementally from the sensor points as they are obtained. A planar polyhedral surface model is used that is amenable to incremental surface modelling. The surface model consists of a set of model segments, where a neighbour relation allows model segments to share edges. Also, sets of adjacent shared edges may form corner vertices. Techniques are presented for incrementally updating the surface model using sets of sensor points. Various model segment operations are employed to do this: model segments may be merged, fissures in model segment perimeters are filled, and shared edges and corner vertices may be formed. Details of these model segment operations are presented. The robot's control point is moved over the surface model at a fixed distance. This keeps the sensors around the control point within sensing range of the surface, and keeps the control point from colliding with the surface. The remainder of the robot body is kept from colliding with the surface by using redundant degrees of freedom. The goal of surface modelling and surface following is to model as much of the surface as possible. The incomplete parts of the surface model (non-shared edges) indicate where sections of surface that have not been exposed to the robot's sensors lie. The direction of the robot's motion is chosen such that the robot's control point is directed to non-shared edges, and then over the unexposed surface near the edge. These techniques have been implemented and results are presented for a variety of simulated robots combined with real range sensor data.
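The exploration rule in the final paragraph, steering the control point toward non-shared edges, can be sketched in a few lines. This is a hypothetical simplification: it simply picks the nearest open-edge midpoint as the next goal, and all data are invented.

```python
# Pick the nearest "non-shared" (open) edge as the next exploration goal,
# since unexposed surface lies beyond it.
import numpy as np

def next_goal(control_point, open_edge_midpoints):
    d = np.linalg.norm(open_edge_midpoints - control_point, axis=1)
    return open_edge_midpoints[int(np.argmin(d))]

edges = np.array([[0.5, 0.0, 0.2], [2.0, 1.0, 0.0], [0.1, 0.3, 0.1]])
print(next_goal(np.zeros(3), edges))   # [0.1 0.3 0.1]
```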
APA, Harvard, Vancouver, ISO, and other styles
3

Karr, Roger W. "The assembly of a microcomputer controlled low cost vision-robot system and the design of software." Ohio : Ohio University, 1985. http://www.ohiolink.edu/etd/view.cgi?ohiou1184010908.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sridaran, S. "Off-line robot vision system programming using a computer aided design system." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54373.

Full text
Abstract:
Robots with vision capability have been taught to recognize unknown objects by comparing their shape features with those of known objects, which are stored in the vision system as a knowledge base. Traditionally, this knowledge base is created by showing the robot the set of objects that it is likely to come across. This is done with the vision system to be used and must be done in an online mode. An approach to teach the robot in an off-line mode, by integrating the robot vision system with an off-line graphic system, has been developed in this research. Instead of showing the objects that the robot is likely to come across, graphic models of the objects were created in an off-line graphic system, and a FORTRAN program that processes the models to extract their shape parameters was developed. These shape parameters were passed to the vision system. A program to process an unknown object placed in front of the vision system was developed to extract its shape parameters. A program that compares the parameters of the unknown object with those of the known models was also developed. The vision system was calibrated to measure pixel dimensions in inches. In the vision system, shape parameters of the objects were found to vary with orientation. The range of variation for each parameter was established, and this was taken into consideration in the parameter comparison program.
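The comparison step lends itself to a short sketch. The following is a hypothetical illustration of matching an unknown object against stored per-parameter ranges, as the abstract describes; the model names, parameters, and values are invented.

```python
# Match an unknown object to known models by per-parameter tolerance ranges
# (min, max), established over the orientations each model was observed in.
MODELS = {
    "bracket": {"area_in2": (1.8, 2.2), "perimeter_in": (6.0, 7.0)},
    "washer":  {"area_in2": (0.7, 0.9), "perimeter_in": (3.0, 3.4)},
}

def classify(measured):
    for name, ranges in MODELS.items():
        if all(lo <= measured[p] <= hi for p, (lo, hi) in ranges.items()):
            return name
    return None                         # no known model matches

print(classify({"area_in2": 2.0, "perimeter_in": 6.4}))   # bracket
```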
APA, Harvard, Vancouver, ISO, and other styles
5

Damweber, Michael Frank. "Model independent offset tracking with virtual feature points." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/17651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ma, Mo. "Navigation using one camera in structured environment /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20MA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cipolla, Roberto. "Active visual inference of surface shape." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jansen, van Nieuwenhuizen Rudolph Johannes. "Development of an automated robot vision component handling system." Thesis, Bloemfontein : Central University of Technology, Free State, 2013. http://hdl.handle.net/11462/213.

Full text
Abstract:
Thesis (M. Tech. (Engineering: Electrical)) -- Central University of Technology, Free State, 2013
In industry, automation is used to optimize production, improve product quality and increase profitability. By properly implementing automation systems, the risk of injury to workers can be minimized. Robots are used in many low-level tasks to perform repetitive, undesirable or dangerous work. Robots can perform a task with higher precision and accuracy, lowering errors and the waste of material. Machine Vision makes use of cameras, lighting and software to do the visual inspections that a human would normally do. Machine Vision is useful in applications where repeatability, high speed and accuracy are important. This study concentrates on the development of a dedicated robot vision system to automatically place components exiting from a conveyor system onto Automatic Guided Vehicles (AGVs). A personal computer (PC) controls the automated system. Software modules were developed to do image processing for the Machine Vision system, as well as software to control a Cartesian robot. These modules were integrated to work in a real-time system. The vision system is used to determine each part's position and orientation. The orientation data are used to rotate a gripper and the position data are used by the Cartesian robot to position the gripper over the part. Hardware for the control of the gripper, pneumatics and safety systems was developed. The automated system's hardware was integrated by the use of different communication protocols, namely DeviceNet (Cartesian robot), RS-232 (gripper) and FireWire (camera).
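As a sketch of the position-and-orientation measurement such a vision module performs, the following uses OpenCV (version 4 API) to threshold a synthetic part image and read the center and angle from a minimum-area rectangle; this is a generic assumed pipeline, not the thesis's actual code.

```python
# Measure a part's position and orientation for gripper alignment.
import cv2
import numpy as np

frame = np.zeros((200, 200), dtype=np.uint8)       # stand-in camera image
cv2.ellipse(frame, (100, 100), (60, 20), 30, 0, 360, 255, -1)

_, binary = cv2.threshold(frame, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
part = max(contours, key=cv2.contourArea)          # largest blob = the part
(cx, cy), (w, h), angle = cv2.minAreaRect(part)
print(round(cx), round(cy), round(angle))          # inputs for the gripper pose
```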
APA, Harvard, Vancouver, ISO, and other styles
9

Ukidve, Chinmay S. "Quantifying optimum fault tolerance of manipulators and robotic vision systems." Laramie, Wyo. : University of Wyoming, 2008. http://proquest.umi.com/pqdweb?did=1605147571&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hallenberg, Johan. "Robot Tool Center Point Calibration using Computer Vision." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9520.

Full text
Abstract:

Today, tool center point calibration is mostly done by a manual procedure. The method is very time-consuming, and the result may vary depending on how skilled the operators are.

This thesis proposes a new automated iterative method for tool center point calibration of industrial robots, by making use of computer vision and image processing techniques. The new method has several advantages over the manual calibration method. Experimental verifications have shown that the proposed method is much faster, still delivering a comparable or even better accuracy. The setup of the proposed method is very easy, only one USB camera connected to a laptop computer is needed and no contact with the robot tool is necessary during the calibration procedure.

The method can be split into three different parts. First, the transformation between the robot wrist and the tool is determined by solving a closed loop of homogeneous transformations. Second, an image segmentation procedure is described for finding point correspondences on a rotation-symmetric robot tool. The image segmentation part is necessary for performing a measurement with six degrees of freedom of the camera-to-tool transformation. The last part of the proposed method is an iterative procedure which automates an ordinary four-point tool center point calibration algorithm. The iterative procedure ensures that the accuracy of the tool center point calibration depends only on the accuracy of the camera when registering a movement between two positions.
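The four-point idea the thesis automates can be written down compactly. In the sketch below, a minimal version under stated assumptions rather than the thesis's algorithm, the tool tip touches one fixed point q with several flange orientations, so R_i t + p_i = q for every pose; stacking pairwise differences gives a least-squares problem for the tool center point t.

```python
import numpy as np

def tcp_from_poses(Rs, ps):
    """Solve (R_i - R_0) t = p_0 - p_i for the tool center point t."""
    A = np.vstack([Rs[i] - Rs[0] for i in range(1, len(Rs))])
    b = np.hstack([ps[0] - ps[i] for i in range(1, len(ps))])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t                               # TCP offset in the flange frame

# Synthetic check with an assumed true offset and touch point.
rng = np.random.default_rng(1)
t_true, q = np.array([0.0, 0.05, 0.20]), np.array([0.5, 0.1, 0.3])
Rs = []
for _ in range(4):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
    Rs.append(Q * np.sign(np.linalg.det(Q)))       # force a proper rotation
ps = [q - R @ t_true for R in Rs]                  # flange positions
print(np.round(tcp_from_poses(Rs, ps), 3))         # ~ [0.    0.05  0.2 ]
```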

APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Robot vision systems"

1

Liu, Z. Q., ed. Knowledge-based vision-guided robots. Heidelberg: Physica-Verlag, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Computer vision for robotic systems: An introduction. New York: Prentice Hall, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Durrant-Whyte, Hugh F. Integration, Coordination and Control of Multi-Sensor Robot Systems. Boston, MA: Springer US, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dudek, Gregory. Robotic exploration as graph construction. Toronto: University of Toronto, Dept. of Computer Science, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ruoff, Wolfgang. Optische Sensorsysteme zur On-line-Führung von Industrierobotern [Optical sensor systems for the on-line guidance of industrial robots]. Berlin: Springer-Verlag, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mendes, Mateus. Vision-based robot navigation: Quest for intelligent approaches using a sparse distributed memory. Boca Raton: Universal Publishers, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cipolla, Roberto. Active visual inference of surface shape. Berlin: Springer-Verlag, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

International Conference on Robot Vision and Sensory Controls (7th 1988 Zurich, Switzerland). Advanced sensor technology: Proceedings of the 7th International Conference on Robot Vision and Sensory Controls, 2-4 February 1988, Zürich, Switzerland. Bedford, UK: IFS Publications, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bräunl, Thomas. Embedded Robotics: Mobile Robot Design and Applications with Embedded Systems. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gridin, V. N. Adaptivnye sistemy tekhnicheskogo zreniya [Adaptive machine vision systems]. Sankt-Peterburg: Nauka, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Robot vision systems"

1

McIvor, Alan, Qi Zang, and Reinhard Klette. "The Background Subtraction Problem for Video Surveillance Systems." In Robot Vision, 176–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Browne, Arthur, and Leonard Norton-Wayne. "Robot Systems." In Vision and Information Processing for Automation, 257–99. Boston, MA: Springer US, 1986. http://dx.doi.org/10.1007/978-1-4899-2028-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Leclercq, Philippe, and Thomas Bräunl. "A Color Segmentation Algorithm for Real-Time Object Localization on Small Embedded Systems." In Robot Vision, 69–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Browne, Arthur, and Leonard Norton-Wayne. "Robot Vision Systems and Applications." In Vision and Information Processing for Automation, 339–64. Boston, MA: Springer US, 1986. http://dx.doi.org/10.1007/978-1-4899-2028-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tsotsos, John K. "Motion Understanding Systems." In Active Perception and Robot Vision, 1–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77225-2_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kropatsch, Walter G. "Hierarchical Methods for Robot Vision." In Expert Systems and Robotics, 63–109. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-642-76465-3_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Little, James J., Jesse Hoey, and Pantelis Elinas. "Visual Capabilities in an Interactive Autonomous Robot." In Cognitive Vision Systems, 295–312. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11414353_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Balsi, Marco, and Xavier Vilasís-Cardona. "Robot Vision Using Cellular Neural Networks." In Autonomous Robotic Systems, 431–50. Heidelberg: Physica-Verlag HD, 2003. http://dx.doi.org/10.1007/978-3-7908-1767-6_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

D’Hollander, Erik H. "Neural Networks and Robot Vision." In Microprocessors in Robotic and Manufacturing Systems, 237–57. Dordrecht: Springer Netherlands, 1991. http://dx.doi.org/10.1007/978-94-011-3812-3_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Purnell, G., and K. Khodabandehloo. "Vision for Robot Guidance in Automated Butchery." In Robotic Systems, 619–26. Dordrecht: Springer Netherlands, 1992. http://dx.doi.org/10.1007/978-94-011-2526-0_71.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Robot vision systems"

1

Liu, Shuai, Chunlin Chen, Lihua Xie, and Yeong-Hwa Chang. "Formation control of multi-robot systems." In 2010 11th International Conference on Control Automation Robotics & Vision (ICARCV 2010). IEEE, 2010. http://dx.doi.org/10.1109/icarcv.2010.5707964.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Xing, and Mark H. Lee. "A Developmental Robot Vision System." In 2006 IEEE International Conference on Systems, Man and Cybernetics. IEEE, 2006. http://dx.doi.org/10.1109/icsmc.2006.385028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sitte, Joaquin, and Petra Winzer. "Methodic Design of Robot Vision Systems." In 2007 International Conference on Mechatronics and Automation. IEEE, 2007. http://dx.doi.org/10.1109/icma.2007.4303816.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lejun, Shao, Richard A. Volz, Lynn Conway, and Michael W. Walker. "Incorporating robot vision in teleautonomous systems." In Robotics (Proceedings of SPIE), edited by William E. Stoney. SPIE, 1992. http://dx.doi.org/10.1117/12.56769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Barnard, S., R. Bolles, D. Marimont, and A. Pentland. "Multiple Representations for Mobile Robot Vision." In Cambridge Symposium on Intelligent Robotics Systems, edited by Nelson Marquina and William J. Wolfe. SPIE, 1987. http://dx.doi.org/10.1117/12.937793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ganapathy, Velappa, and Ng Oon-Ee. "Stereo Vision Based Robot Controller." In 2008 IEEE International Conference on Systems, Man and Cybernetics (SMC). IEEE, 2008. http://dx.doi.org/10.1109/icsmc.2008.4811558.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chittajallu, Siva K., and Michael A. Penna. "Incorporating Ultrasound Into Robot Vision." In 1989 Symposium on Visual Communications, Image Processing, and Intelligent Robotics Systems, edited by Bruce G. Batchelor. SPIE, 1990. http://dx.doi.org/10.1117/12.969822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shi, Weijie, Lei Zhang, Yang Yao, Junqiu Zuo, and Xingtian Yao. "Linear Calibration for Robot Vision." In 2016 8th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). IEEE, 2016. http://dx.doi.org/10.1109/ihmsc.2016.223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Reina, Giulio, Annalisa Milella, and Mario Foglia. "Vision-Based Methods for Mobile Robot Localization and Wheel Sinkage Estimation." In ASME 2008 Dynamic Systems and Control Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/dscc2008-2188.

Full text
Abstract:
External perception based on vision plays a critical role in developing improved and robust localization algorithms for mobile robots, as well as in gaining important information about the vehicle and the traversed terrain. This paper presents two novel methods to improve mobility on rough terrains by using visual input. The first method consists of a stereovision algorithm for 6-DoF ego-motion estimation, which integrates image intensity information and 3D stereo data using an Iterative Closest Point (ICP) approach. The second method aims at estimating the wheel sinkage of a mobile robot on deformable soil, based on the visual input from an onboard monocular camera, and an edge detection strategy. Both methods were implemented and experimentally validated on an all-terrain mobile robot, showing that the proposed techniques can be successfully employed to improve the performance of ground vehicles operating in uncharted environments.
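Since the first method builds on an Iterative Closest Point alignment, a minimal point-to-point ICP sketch may help; it omits the paper's image-intensity term, uses brute-force matching, and every name and value in it is illustrative.

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = P @ R_total.T + t_total
        # Brute-force nearest neighbours; fine for toy-sized clouds.
        idx = np.argmin(((moved[:, None] - Q[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid(moved, Q[idx])
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

rng = np.random.default_rng(2)
P = rng.normal(size=(50, 3))
t_true = np.array([0.1, -0.2, 0.05])       # toy motion: translation only
Q = P + t_true
print(np.round(icp(P, Q)[1], 3))           # recovers ~[ 0.1  -0.2   0.05]
```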
APA, Harvard, Vancouver, ISO, and other styles
10

Blake, Andrew. "Probabilistic inference in machine vision systems." In 2008 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2008. http://dx.doi.org/10.1109/robot.2008.4543173.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Robot vision systems"

1

Metta, Giorgio. An Attentional System for a Humanoid Robot Exploiting Space Variant Vision. Fort Belvoir, VA: Defense Technical Information Center, January 2001. http://dx.doi.org/10.21236/ada434729.

Full text
APA, Harvard, Vancouver, ISO, and other styles