Academic literature on the topic 'Robot vision research'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Robot vision research.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Robot vision research"

1

Yan, Hui, Xue Bo Zhang, Yu Wang, and Wei Jie Han. "Research on the Vision Processing of Space Robot's Tracking Camera." Advanced Materials Research 748 (August 2013): 713–17. http://dx.doi.org/10.4028/www.scientific.net/amr.748.713.

Abstract:
The tracking camera is very important to the whole test task of the space robot. Aiming at the vision processing problems of the space robot's tracking camera, a new vision processing method based on LabVIEW+DLL is presented in this article. Based on this method, a vision processing system for the space robot's tracking camera is researched and developed. The system can better meet the index requirements of space robot vision processing and precisely measure the position and attitude of the target satellite relative to the space robot's body coordinate system during the ground air-float test of the space robot, guaranteeing the smooth completion of the ground test mission.
2

IKEUCHI, Katsushi. "Robot Vision Research in U.S.A." Journal of the Robotics Society of Japan 10, no. 2 (1992): 146–52. http://dx.doi.org/10.7210/jrsj.10.146.

3

Li, Guang Hui, Zhi Jian Jiang, and Bin Pan. "Research for Vision-Based Mobile Robot Self-Localization Strategy." Applied Mechanics and Materials 130-134 (October 2011): 2153–59. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.2153.

Abstract:
Mobile robots are a very dynamic part of the robotics domain, and self-localization is one of their basic functions; in complex environments, accurate localization is the key factor for a mobile robot to navigate precisely. In this paper, on the basis that the mobile robot can correctly identify preset artificial landmarks through its vision sensors, the robot's position and moving direction in the world coordinate system are calculated from the observed landmarks using a perspective localization method. Finally, a large number of experiments are carried out; the results show that, in a structured environment, the mobile robot can achieve self-localization with this method, and the localization accuracy and sampling frequency meet practical requirements.
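A quick way to prototype this kind of landmark-based localization is sketched below in Python (a generic closed-form pose fit, not the paper's own perspective formulation): given two or more artificial landmarks with known world coordinates and their positions measured in the robot frame, it recovers the robot's planar position and heading. The landmark values are illustrative.

```python
import numpy as np

def localize_from_landmarks(world_pts, robot_pts):
    """Estimate the robot pose (x, y, heading) in the world frame from landmarks
    whose world coordinates are known and whose positions were measured in the
    robot frame by the vision sensor. Solves world = R @ robot + t (Kabsch fit)."""
    w = np.asarray(world_pts, dtype=float)
    r = np.asarray(robot_pts, dtype=float)
    w_c, r_c = w.mean(axis=0), r.mean(axis=0)
    H = (r - r_c).T @ (w - w_c)          # covariance of the centred point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = w_c - R @ r_c                    # robot origin expressed in the world frame
    return t[0], t[1], np.arctan2(R[1, 0], R[0, 0])

# Hypothetical example: two landmarks at known world positions, as seen by the robot.
world = [(2.0, 0.0), (2.0, 1.0)]
robot = [(1.0, -0.5), (1.0, 0.5)]        # measured in the robot frame
print(localize_from_landmarks(world, robot))   # -> (1.0, 0.5, 0.0)
```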
4

Umeda, Kazunori. "Special Issue on Robot Vision." Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 253. http://dx.doi.org/10.20965/jrm.2003.p0253.

Abstract:
Robot vision is an essential key technology in robotics and mechatronics. Studies on robot vision are wide-ranging, and the topic remains a vital, highly active target of research. This special issue reviews recent advances in this exciting field, following up two earlier special issues, Vol. 11 No. 2 and Vol. 13 No. 6, which attracted more papers than expected. This indicates the high degree of research activity in the field. I am most pleased to report that this issue presents 12 excellent papers covering robot vision, including basic algorithms based on precise optical models, pattern and gesture recognition, and active vision. Several papers treat range imaging, and others present interesting applications to agriculture, quadruped robots, and new devices. This issue also presents two news briefs, one on a practical range sensor suited to mobile robots and the other on vision devices that improve on the well-known IP-5000 series. I am convinced that this special issue will help make research on robot vision even more exciting. I would like to close by thanking all of the researchers who submitted their studies, and to give special thanks to the reviewers and editors, especially Prof. M. Kaneko, Dr. K. Yokoi, and Prof. Y. Nakauchi.
5

Yu, Hui Jun, Cai Biao Chen, Wan Wu, and Zhi Wei Zhou. "Research of Application on Robot Vision with SQI Algorithms Based on Retinex." Applied Mechanics and Materials 675-677 (October 2014): 1358–62. http://dx.doi.org/10.4028/www.scientific.net/amm.675-677.1358.

Abstract:
Troubleshooting and safety monitoring in underground mines by manual operation often carry safety risks, so the trend of replacing manual operation with robots is growing. In robot vision navigation, image enhancement is the most critical pretreatment stage of automatic target recognition when extracting and matching image features. The multiscale Retinex algorithm gives very good contour enhancement, which plays a key role in the robot's visual scene images, but it suffers from poor real-time performance and is seriously affected by light interference. Aiming at these drawbacks when an underground mine mobile robot identifies its path in machine vision navigation, this paper proposes an application of robot vision with SQI algorithms based on Retinex.
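As a rough illustration of this enhancement step, the Python sketch below computes a single-scale self-quotient image (SQI) and a multiscale Retinex; the scales, weights, and the exact way the paper combines the two are not reproduced, so all parameters are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient_image(img, sigma=15.0, eps=1e-6):
    """Self-quotient image: divide the image by a smoothed version of itself to
    suppress slowly varying illumination while keeping reflectance-like detail."""
    img = img.astype(np.float64) + eps
    q = np.log(img / (gaussian_filter(img, sigma) + eps))
    q = (q - q.min()) / (q.max() - q.min() + eps)      # rescale for display
    return (q * 255).astype(np.uint8)

def multiscale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """Multiscale Retinex: average of log(I) - log(G_sigma * I) over several scales."""
    img = img.astype(np.float64) + eps
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, s) + eps)
    return out / len(sigmas)
```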
6

Pan, Zhi Guo. "Research on Automatic Cleaning Robot Based on Machine Vision." Applied Mechanics and Materials 539 (July 2014): 648–52. http://dx.doi.org/10.4028/www.scientific.net/amm.539.648.

Abstract:
The development and application of machine vision technology is greatly liberating human labor, improving the level of production automation and the conditions of human life, and it has very broad application prospects. The intelligent empty-bottle inspection robot studied in this paper is a typical application of machine vision in industrial inspection. This paper mainly introduces the concept of machine vision, some important technologies related to automatic cleaning robots, and applications of machine vision in production across all areas of life.
7

Lin, Ssu Ting, Jun Hu, Chia Hung Shih, Chiou Jye Huang, and Ping Huan Kuo. "The Development of Supervised Motion Learning and Vision System for Humanoid Robot." Applied Mechanics and Materials 886 (January 2019): 188–93. http://dx.doi.org/10.4028/www.scientific.net/amm.886.188.

Abstract:
With the development of the concept of Industry 4.0, research relating to robots is receiving more and more attention, and the humanoid robot is a very important research topic. A humanoid robot is a robot with a bipedal mechanism. Thanks to this physical mechanism, humanoid robots can maneuver more easily in complex terrains, such as going up and down stairs. However, humanoid robots often fall from imbalance, and whether the robot can stand up on its own after a fall is a key research issue. The commonly used approach of hand-tuning the motions that let the robot stand up on its own is very inefficient. In order to solve these problems, this paper proposes an automatic learning system based on Particle Swarm Optimization (PSO). The system allows the robot to learn how to achieve the rebalancing motion after a fall. To give the robot the capability of object recognition, this paper also applies a Convolutional Neural Network (CNN) to let the robot perform image recognition and successfully distinguish between 10 types of objects. The effectiveness and feasibility of the motion learning algorithm and the CNN-based image classification for the vision system proposed in this paper have been confirmed by the experimental results.
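To make the learning mechanism concrete, here is a minimal particle swarm optimization loop in Python of the kind the paper applies to motion learning; the fitness function, parameter dimensionality, and hyperparameters are illustrative stand-ins for the robot-specific evaluation used in the paper.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        bounds=(-1.0, 1.0), seed=0):
    """Maximize 'fitness' over a dim-dimensional vector of motion parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))    # candidate motion parameters
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()

# Toy stand-in: in the paper the score would reflect how well the robot rebalances.
best_params, best_score = pso(lambda p: -np.sum((p - 0.3) ** 2), dim=8)
```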
8

Baasandorj, Bayanjargal, Aamir Reyaz, Batmunkh Battulga, Deok Jin Lee, and Kil To Chong. "Formation of Multiple-Robots Using Vision Based Approach." Applied Mechanics and Materials 419 (October 2013): 768–73. http://dx.doi.org/10.4028/www.scientific.net/amm.419.768.

Abstract:
Research on multi-robot systems has grown enormously, with a large variety of topics being addressed; it is an important research area within robotics and artificial intelligence. Using a vision-based approach, this paper deals with the formation of multiple robots. Three NXT robots were used in the experiment, and all three robots work together as one virtual mobile robot. In addition, TCP/IP sockets, ArToolKit, Bluetooth communication, and C++ for programming were used. The results achieved from the experiment were highly successful.
9

Zhang, Zhi Li, Ying Ying Song, Wei Dong Zhang, and Shuo Yin. "Research Humanoid Robot Walking Based on Vision-Guided." Applied Mechanics and Materials 496-500 (January 2014): 1426–29. http://dx.doi.org/10.4028/www.scientific.net/amm.496-500.1426.

Abstract:
Walking is a basic function of a humanoid robot, and this paper presents the key ideas of stereo-vision-based humanoid walking. Image processing and pattern recognition techniques are employed for obstacle detection and object recognition, and a data fitting technique is used to plan the path of the humanoid robot. High-precision visual feedback is provided by the combination of real-time high-precision feature detection and a highly accurate object detection method. The proposed stereo-vision-based approach and robot guidance system were evaluated partly by experiments and partly by simulation with the humanoid robot.
10

Tang, Jian Bing, and Ya Bing Zha. "Research on the Vision Control System for Modular Robot." Applied Mechanics and Materials 667 (October 2014): 421–24. http://dx.doi.org/10.4028/www.scientific.net/amm.667.421.

Abstract:
Modular robots are an approach to building robots for various complex tasks, with the promise of great versatility, robustness, and lower cost. By changing its shape, a modular robot can be used extensively to meet the demands of different tasks or different working environments: it can travel over or through obstacles, pass through small pipes, and even walk somewhat like a person on crutches. A key technology is how a modular robot with vision can capture exact image information, extract the feature parameters of components in real time, recognize the component types, and judge the position and posture of components. The number of motion patterns determines the adaptability of the modular robot, so a dynamic vision control system is very important for a modular robot: it can improve the robot's mobility and degree of intelligence. With the help of the dynamic vision control system, the robot can accomplish different tasks in different working environments by itself, such as deciding routes and avoiding obstacles. In this paper, in order to improve the vision of the modular robot, a dynamic vision control system is analyzed and investigated thoroughly, and two motion patterns for the modular robot are put forward.

Dissertations / Theses on the topic "Robot vision research"

1

Reid, Ian D. "Recognizing parameterized objects from range data." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302872.

2

Morilla Cabello, David. "Vision Based Control for Industrial Robots : Research and implementation." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17583.

Abstract:
The automation revolution already helps in many tasks that are now performed by robots. Increases in the complexity of problems regarding robot manipulators require new approaches or alternatives in order to solve them. This project comprises research into the different software available for implementing easy and fast visual servoing tasks that control a robot manipulator, focusing on out-of-the-box solutions. The tools found are then applied to implement a solution for controlling an arm from Universal Robots; the task is to follow a moving object on a plane with the robot manipulator. The research compares the most popular software and state-of-the-art alternatives, especially for computer vision and robot control. The implementation aims to be a proof of concept of a system divided by functionality (computer vision, path generation, and robot control) in order to allow software modularity and exchangeability. The results show various options for each subsystem to take into consideration. The implementation is successfully completed, showing the efficiency of the alternatives examined. The chosen software is MATLAB and Simulink for computer vision and trajectory calculation, interfacing with the Robot Operating System (ROS). ROS is used for controlling a UR3 arm using the ros_control and ur_modern_driver packages. Both the research and the implementation present a first approach for further applications and an understanding of the current technologies for visual servoing tasks. These alternatives offer easy, fast, and flexible methods to confront complex computer vision and robot control problems.
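As a toy illustration of the plane-following task, the sketch below turns the pixel error of the tracked object into a bounded Cartesian velocity command, i.e. a simple proportional image-based servoing step. It is an assumption-laden stand-in rather than the thesis' MATLAB/Simulink + ROS pipeline; the pixel-to-metre scale, gain, and speed limit are invented for the example.

```python
import numpy as np

def planar_servo_command(target_px, current_px, pixels_per_metre=1500.0,
                         gain=0.8, v_max=0.1):
    """Map the image-plane error of the tracked object to a saturated planar
    velocity command (m/s) for the manipulator's end effector."""
    err_px = np.asarray(target_px, dtype=float) - np.asarray(current_px, dtype=float)
    v = gain * err_px / pixels_per_metre          # proportional control, metres/second
    speed = np.linalg.norm(v)
    if speed > v_max:                             # saturate for safety
        v *= v_max / speed
    return v

print(planar_servo_command((320, 240), (350, 260)))
```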
3

Chiang, Shun Fan. "The development of a low-cost robotic visual tracking system : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering, Mechatronics at Massey University, Albany, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/996.

Abstract:
This thesis describes a system which is able to track and imitate human motion. The system is divided into two major parts: a computer vision system and a robot arm motion control system. Using two real-time video cameras, the computer vision system identifies the moving object based on its colour features; when the object's colour is matched within the colour range in the current image frame, a method that employs two vectors is used to calculate the coordinates of the object. After the object is detected, the tracked coordinates are saved to a pre-established database for further data processing, and a mathematical algorithm is applied to the data in order to give better robotic motion control. The robot arm manipulator responds with a move within its workspace which corresponds to the human-type motion. Experimental outcomes have shown that the system is reliable and can successfully imitate a human hand motion in most cases.
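The colour-feature matching step can be sketched with OpenCV as below: threshold the current frame in HSV and take the centroid of the largest matching blob as the object's image coordinates. The HSV range shown is an arbitrary example rather than the thesis' calibrated values, and the two-vector coordinate calculation is not reproduced.

```python
import cv2
import numpy as np

def track_coloured_object(frame_bgr, hsv_low=(100, 120, 70), hsv_high=(130, 255, 255)):
    """Return the (u, v) pixel centroid of the largest blob within the HSV range,
    or None if no matching object is visible in the current frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```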
4

Saleh, Diana. "Interaction Design for Remote Control of Military Unmanned Ground Vehicles." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-174074.

Abstract:
The fast technology development for military unmanned ground vehicles (UGVs) has led to a considerable demand to explore the soldier's role in an interactive UGV system. This thesis explores how to design interactive systems for UGVs for infantry soldiers in the Swedish Armed Forces. This was done through a user-centered design approach in three steps: (1) identifying the design drivers of the targeted military context through qualitative observations and user interviews, (2) using the design drivers to investigate concepts for controlling the UGV, and (3) creating and evaluating a prototype of an interactive UGV system design. Results from interviews indicated that design drivers depend on the physical and psychological context of the intended soldiers. In addition, exploring the different concepts showed that early conceptual designs helped the users express their needs of a non-existing system. Furthermore, the results indicate that an interactive UGV system does not necessarily need to be at the highest level of autonomy in order to be useful for the soldiers on the field. The final prototype of an interactive UGV system was evaluated using a demonstration video, a Technology Acceptance Model (TAM), and semi-structured user interviews. Results from this evaluation suggested that the soldiers see the potential usefulness of an interactive UGV system but are not entirely convinced. In conclusion, this thesis argues that in order to design an interactive UGV system, the most critical aspect is the soldiers' acceptance of the new system. Moreover, for soldiers to accept the concept of military UGVs, it is necessary to understand the context of use and the needs of the soldiers. This is done by involving the soldiers already in the conceptual design process and then throughout the development phases.
5

Wang, Zhao-Wei (王昭惟). "Research on Climbing Stairs for an Autonomous Vision-Guided Robot Wheelchair." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/98974575421850431149.

Abstract:
Master's thesis
Da-Yeh University
Master's Program, Graduate Institute of Mechanical Engineering
Academic year 95 (2006–2007)
In this thesis, an autonomous vision-guided technology is applied to our newly developed robot wheelchair for stair climbing. In order for the robot wheelchair to climb a flight of stairs automatically, a CCD camera mounted on the wheelchair is used to detect the stairs. Since stair environments differ widely, the stair boundaries are assumed to be straight lines that are perpendicular to each other. After detecting the stair boundaries and processing the image, the captured image coordinates are transformed into real-world coordinates. The corresponding rotation angles for each arm, i.e. the motor commands, can then be calculated using kinematics and a one-step-ahead motion planning algorithm, so that autonomous vision-guided stair climbing can be implemented. Finally, the implemented experiments show that the robot wheelchair can climb stairs autonomously using the vision-guided technology.
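The image-to-world coordinate transformation mentioned in the abstract can be prototyped with a plane homography, as in the hedged Python sketch below; the four calibration correspondences and the assumption that the detected stair edge lies on a single plane are illustrative, not values taken from the thesis.

```python
import numpy as np
import cv2

# Hypothetical calibration: four image points with known real-world positions
# (in metres) on the stair plane.
img_pts = np.array([[120, 400], [520, 400], [480, 250], [160, 250]], dtype=np.float32)
world_pts = np.array([[0.0, 0.0], [0.6, 0.0], [0.6, 0.5], [0.0, 0.5]], dtype=np.float32)

H, _ = cv2.findHomography(img_pts, world_pts)

def image_to_world(u, v):
    """Map a detected stair-edge pixel (u, v) to plane coordinates in metres."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

print(image_to_world(300, 320))
```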
6

Agunbiade, Olusanya Yinka. "Road region detection system using filters and concurrency technique." 2014. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001932.

Abstract:
M. Tech. Computer System Engineering
Autonomous robots are widely used equipment in industry and in our daily lives; they assist in manufacturing and production and are used for exploration in dangerous or unknown environments. For successful exploration, manufacturing, and production, however, navigation plays an important role. Road detection is a vital factor that assists autonomous robots in accurate navigation. Different methods using camera-vision techniques have been developed by various researchers with outstanding results, but their systems are still vulnerable to environmental risks. The frequent weather changes in countries such as South Africa, Nigeria, and Zimbabwe, where shadow, varying light intensity, and other environmental noise occur on a daily basis, can cause an autonomous robot to fail in navigation. Therefore, the main research question is: how can the road region detection system be enhanced to enable effective and efficient manoeuvring of the robot in any weather condition?
7

Li, Chien-Lin (李建霖). "Research on Computer Vision Target Navigation of Two-Wheeled Balancing Robots." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/9dn2sc.

Abstract:
Master's thesis
Hwa Hsia University of Technology
Graduate Institute of Intelligent Robotics
Academic year 107 (2018–2019)
In this study, an NI myRIO controller with a USB webcam and a remote computer is used to build a two-wheeled self-balancing robot with computer vision, which can balance at a fixed point and on the move. Through the myRIO controller, computer vision is realized by combining the webcam and the computer: the robot transmits the images captured by the webcam to the remote PC over a Wi-Fi local area network, and the human-machine interface of the LabVIEW software on the PC can interact with the two-wheeled self-balancing robot. The interactions of the human-machine interface are: (1) manual control of the robot (forward, backward, left turn, right turn, balance, and stop); (2) automatic identification of two self-made black tracks to drive the robot forward along them; (3) identification of red and green colour signs to stop the robot or drive it forward. Automation in today's society is increasingly widespread, and two-wheeled self-balancing personal transporters such as the Segway have become more popular in recent years, but large self-balancing vehicles are subject to many restrictions. For these reasons, this study uses a small two-wheeled vehicle as the robot prototype development platform, on which two-wheeled robot control and computer-vision target navigation are discussed. The first part of this study focuses on using a PID controller to achieve balance control of the two-wheeled robot carrying the webcam. The second part is the robot's computer-vision target navigation: the robot's webcam captures images and performs image recognition to identify the designed track and target information, and the robot then navigates to the target based on the track and target information while keeping its balance on the move. Keywords: myRIO controller, two-wheeled self-balancing robot, Wi-Fi, human-machine interface, computer-vision target navigation.
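For reference, a textbook PID loop of the kind used for the balance control in the first part of the study might look like the sketch below; the gains and loop period are placeholders rather than the thesis' tuned values, and the tilt measurement source is assumed.

```python
class PID:
    """Discrete PID controller: the tilt-angle error drives the wheel motor command."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Balance loop sketch: keep the measured tilt angle at 0 degrees (gains are illustrative).
pid = PID(kp=30.0, ki=1.5, kd=0.8, dt=0.01)
# motor_cmd = pid.update(0.0, tilt_angle_from_imu)
```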
8

Silva, Tomé Pereira da. "Desenvolvimento de plataforma móvel para futebol robótico." Master's thesis, 2010. http://hdl.handle.net/1822/65406.

Abstract:
Master's dissertation in the Integrated Master's programme leading to the degree of Master in Industrial Electronics and Computers Engineering
Nowadays, robotics has numerous practical applications, from helping humans to situations where accuracy and repeatability make it a great working tool in many different areas. In some cases, when the agent works in an uncontrolled environment, it has to adapt itself to that environment to complete its particular task. This is more complex, but it is also the main goal of this thesis: the construction of an autonomous robot able to play football in compliance with the RoboCup rules. The work presented here covers both the design and the construction of a robot football player prototype, with software that can autonomously control the robot, as well as software to support the competitions in which it operates. The robot is analysed with regard to structure, shape, component arrangement, and the materials used; the software was developed from scratch within a new organizational structure; finally, but equally important, software modules were implemented for hardware communication, network communication, image processing, and the other functions necessary for the robot to work properly. In the last chapters, some critical aspects for improving this work are presented and discussed, as well as future solutions to the problems encountered.

Books on the topic "Robot vision research"

1

Pal, Rajarshi. Innovative research in attention modeling and computer vision applications. Hershey, PA: Information Science Reference, 2016.

2

Taisho, Matsuda, ed. Robot vision: New research. New York: Nova Science Publishers, 2008.

3

Archibald, Colin, and Paul Kwok, eds. Research in computer and robot vision. Singapore: World Scientific, 1995.

4

Archibald, Colin, and Paul Kwok. Research in Computer and Robot Vision. World Scientific, 1995. http://dx.doi.org/10.1142/2630.

5

Metta, Giorgio. Humans and humanoids. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0047.

Abstract:
This chapter outlines a number of research lines that, starting from the observation of nature, attempt to mimic human behavior in humanoid robots. Humanoid robotics is one of the most exciting proving grounds for the development of biologically inspired hardware and software—machines that try to recreate billions of years of evolution with some of the abilities and characteristics of living beings. Humanoids could be especially useful for their ability to “live” in human-populated environments, occupying the same physical space as people and using tools that have been designed for people. Natural human–robot interaction is also an important facet of humanoid research. Finally, learning and adapting from experience, the hallmark of human intelligence, may require some approximation to the human body in order to attain similar capacities to humans. This chapter focuses particularly on compliant actuation, soft robotics, biomimetic robot vision, robot touch, and brain-inspired motor control in the context of the iCub humanoid robot.
6

The Robotic Touch: How Robots Change Architecture. Gramazio Kohler Research, ETH Zurich 2005–2013. Park Books, 2014.

7

Scott, Gregory J., and Consultative Group on International Agricultural Research, Committee on Inter-Centre Root and Tuber Crops Research, eds. Roots and tubers in the global food system: A vision statement to the year 2020. Lima: International Potato Center, 2000.

8

Research and Education in Robotics: EUROBOT 2008. International Conference, Heidelberg, Germany, May 22–24, 2008, Revised Selected Papers. Springer, 2009.


Book chapters on the topic "Robot vision research"

1

Triggs, Bill, and Christian Laugier. "Automatic Task Planning for Robot Vision." In Robotics Research, 428–39. London: Springer London, 1996. http://dx.doi.org/10.1007/978-1-4471-1021-7_46.

2

Asada, Minoru, Takayuki Nakamura, and Koh Hosoda. "Behavior Acquisition via Vision-Based Robot Learning." In Robotics Research, 279–86. London: Springer London, 1996. http://dx.doi.org/10.1007/978-1-4471-1021-7_31.

3

Asada, M., K. Hosoda, and S. Suzuki. "Vision-based Behavior Learning and Development for Emergence of Robot Intelligence." In Robotics Research, 327–38. London: Springer London, 1998. http://dx.doi.org/10.1007/978-1-4471-1580-9_31.

4

Milighetti, Giulio, Moritz Ritter, and Helge-Björn Kuntze. "Vision Controlled Grasping by Means of an Intelligent Robot Hand." In Advances in Robotics Research, 215–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01213-6_20.

5

Muse, David, Cornelius Weber, and Stefan Wermter. "Robot Docking Based on Omnidirectional Vision and Reinforcement Learning." In Research and Development in Intelligent Systems XXII, 23–36. London: Springer London, 2006. http://dx.doi.org/10.1007/978-1-84628-226-3_3.

6

Eiben, Agoston E., Emma Hart, Jon Timmis, Andy M. Tyrrell, and Alan F. Winfield. "Towards Autonomous Robot Evolution." In Software Engineering for Robotics, 29–51. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66494-7_2.

Abstract:
We outline a perspective on the future of evolutionary robotics and discuss a long-term vision regarding robots that evolve in the real world. We argue that such systems offer significant potential for advancing both science and engineering. For science, evolving robots can be used to investigate fundamental issues about evolution and the emergence of embodied intelligence. For engineering, artificial evolution can be used as a tool that produces good designs in difficult applications in complex unstructured environments with (partially) unknown and possibly changing conditions. This implies a new paradigm, second-order software engineering, where instead of directly developing a system for a given application, we develop an evolutionary system that will develop the target system for us. Importantly, this also holds for the hardware; with a complete evolutionary robot system, both the software and the hardware are evolved. In this chapter, we discuss the long-term vision, elaborate on the main challenges, and present the initial results of an ongoing research project concerned with the first tangible implementation of such a robot system.
7

Gao, Hongwei, Fuguo Chen, Dong Li, and Yang Yu. "Movement Simulation for Wheeled Mobile Robot Based on Stereo Vision." In Advanced Research on Electronic Commerce, Web Application, and Communication, 396–401. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20367-1_64.

8

Hafiz, Abdul Rahman, and Kazuyuki Murase. "iRov: A Robot Platform for Active Vision Research and as Education Tool." In Advances in Autonomous Mini Robots, 173–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27482-4_18.

9

Inaba, Masayuki. "Extended Vision with the Robot Sensor Suit: A Primary Sensor Image Approach to Interfacing Body to Brain." In Robotics Research, 499–508. London: Springer London, 1996. http://dx.doi.org/10.1007/978-1-4471-1021-7_55.

10

Wu, Longhui, Shigang Cui, Li Zhao, and Zhigang Bing. "Research on Vision System for Service Robot Based on Virtual Space Environment." In Advances in Intelligent and Soft Computing, 665–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29637-6_89.


Conference papers on the topic "Robot vision research"

1

Han, Chin Yun, S. Parasuraman, I. Elamvazhuthi, C. Deisy, S. Padmavathy, and M. K. A. Ahamed Khan. "Vision Guided Soccer Robot." In 2017 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). IEEE, 2017. http://dx.doi.org/10.1109/iccic.2017.8524422.

2

Yao, Qiaobing, Hualong Yu, Wankou Yang, and Changyin Sun. "Research on panoramic vision system of mobile robot." In 2015 27th Chinese Control and Decision Conference (CCDC). IEEE, 2015. http://dx.doi.org/10.1109/ccdc.2015.7162619.

3

Gao, Hongwei, Yang Yu, and Li Bin. "Research on planet rover robot arm vision localization." In 2010 8th World Congress on Intelligent Control and Automation (WCICA 2010). IEEE, 2010. http://dx.doi.org/10.1109/wcica.2010.5554636.

4

Zhu, Yifeng, and Ziwei Zhao. "Research on Face Recognition Algorithm for Robot Vision." In 2019 Chinese Control And Decision Conference (CCDC). IEEE, 2019. http://dx.doi.org/10.1109/ccdc.2019.8833263.

5

Liu, Zhao, and Jianyi Kong. "Research on vision simulation platform for robot soccer." In 2010 International Conference on Computer Application and System Modeling (ICCASM 2010). IEEE, 2010. http://dx.doi.org/10.1109/iccasm.2010.5620166.

6

Jia, Ning. "Research on Indoor Robot Based on Monocular Vision." In 2017 9th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). IEEE, 2017. http://dx.doi.org/10.1109/ihmsc.2017.83.

7

Jusoh, Rizal Mat. "Application of Vision Target Localization for Mobile Robot." In 2006 4th Student Conference on Research and Development. IEEE, 2006. http://dx.doi.org/10.1109/scored.2006.4339327.

8

Georgiou, Evangelos, Jian S. Dai, and Michael Luck. "The KCLBOT: The Challenges of Stereo Vision for a Small Autonomous Mobile Robot." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-70503.

Abstract:
In small mobile robot research, autonomous platforms are severely constrained in navigation environments by the limitations of accurate sensory data needed to perform critical path planning, obstacle avoidance, and self-localization tasks. The motivation for this work is to equip small autonomous mobile robots with a local stereo vision system that will provide an accurate reconstruction of the navigation environment for critical navigation tasks. This paper presents the KCLBOT, a small autonomous mobile robot with a stereo vision system developed in King's College London's Centre for Robotics Research.
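As background for the stereo reconstruction discussed here, depth follows from a rectified pair via Z = f * B / d (focal length times baseline over disparity); the short sketch below uses OpenCV's block matcher to illustrate the idea, with camera parameters that are assumptions rather than the KCLBOT's actual calibration.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0      # focal length in pixels (assumed)
BASELINE_M = 0.06     # distance between the two cameras in metres (assumed)

def depth_map(left_gray, right_gray):
    """Dense depth map in metres from a rectified grayscale stereo pair,
    using block matching and Z = f * B / disparity."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth
```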
9

Zhang, Xiaoling, Yinsheng Luo, Yuchi Lin, and Lei Zhu. "Research on robot navigation vision sensor based on grating projection stereo vision." In Eighth International Symposium on Advanced Optical Manufacturing and Testing Technology (AOMATT2016), edited by Wenhan Jiang, Li Yang, Oltmann Riemer, Shengyi Li, and Yongjian Wan. SPIE, 2016. http://dx.doi.org/10.1117/12.2242569.

10

Starzyk, Wiktor, Adam Domurad, and Faisal Z. Qureshi. "A Virtual Vision Simulator for Camera Networks Research." In 2012 Canadian Conference on Computer and Robot Vision (CRV). IEEE, 2012. http://dx.doi.org/10.1109/crv.2012.47.
