Journal articles on the topic 'Robot vision research'

Consult the top 50 journal articles for your research on the topic 'Robot vision research.'

1

Yan, Hui, Xue Bo Zhang, Yu Wang, and Wei Jie Han. "Research on the Vision Processing of Space Robot's Tracking Camera." Advanced Materials Research 748 (August 2013): 713–17. http://dx.doi.org/10.4028/www.scientific.net/amr.748.713.

Abstract:
The tracking camera is essential to the test tasks of a space robot. Addressing the vision processing problems of the space robot's tracking camera, this article presents a new vision processing method based on LabVIEW combined with a DLL. Using this method, a vision processing system for the space robot's tracking camera is designed and developed. The system meets the performance requirements of space robot vision processing and precisely measures the position and attitude of the target satellite relative to the space robot's body coordinate system during the ground air-flotation test, ensuring the smooth completion of the ground test mission.
2

IKEUCHI, Katsushi. "Robot Vision Research in U.S.A." Journal of the Robotics Society of Japan 10, no. 2 (1992): 146–52. http://dx.doi.org/10.7210/jrsj.10.146.

3

Li, Guang Hui, Zhi Jian Jiang, and Bin Pan. "Research for Vision-Based Mobile Robot Self-Localization Strategy." Applied Mechanics and Materials 130-134 (October 2011): 2153–59. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.2153.

Abstract:
Mobile robots are a very dynamic part of the robotics domain, and self-localization is one of their basic functions; in complex environments, precise localization is the key to accurate navigation. In this paper, the mobile robot first identifies pre-set artificial landmarks through its vision sensors; then, based on the observed landmarks and a perspective localization method, the robot's position and heading are calculated in the world coordinate system. Finally, extensive experiments are carried out. The results show that, in a structured environment, the mobile robot can self-localize reliably with this method, and that the localization accuracy and sampling frequency meet practical requirements.
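To illustrate the kind of perspective localization from known artificial landmarks that this abstract describes, a generic sketch using OpenCV's PnP solver is shown below. It is not the authors' implementation; the landmark coordinates, pixel detections, and camera intrinsics are all assumed values.

```python
import cv2
import numpy as np

# Assumed world coordinates (metres) of the four corners of one artificial landmark.
object_pts = np.array([[0.0, 0.0, 0.0],
                       [0.5, 0.0, 0.0],
                       [0.5, 0.5, 0.0],
                       [0.0, 0.5, 0.0]], dtype=np.float32)
# Assumed pixel coordinates of those corners as detected by the vision sensor.
image_pts = np.array([[320, 240], [420, 242], [418, 340], [322, 338]], dtype=np.float32)
# Assumed pinhole intrinsics; lens distortion is neglected in this sketch.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
cam_pos = -R.T @ tvec                                 # robot position in the landmark/world frame
heading = np.degrees(np.arctan2(R[1, 0], R[0, 0]))    # yaw under one common convention
print("robot position:", cam_pos.ravel(), "heading (deg):", heading)
```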
4

Umeda, Kazunori. "Special Issue on Robot Vision." Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 253. http://dx.doi.org/10.20965/jrm.2003.p0253.

Abstract:
Robot vision is an essential key technology in robotics and mechatronics. Studies on robot vision are wide-ranging, and the topic remains a highly active one. This special issue reviews recent advances in this exciting field, following up two earlier special issues, Vol. 11 No. 2 and Vol. 13 No. 6, and it attracted more papers than expected; this indicates the high degree of research activity in the field. I am most pleased to report that this issue presents 12 excellent papers covering robot vision, including basic algorithms based on precise optical models, pattern and gesture recognition, and active vision. Several papers treat range imaging, and others present interesting applications to agriculture, quadruped robots, and new devices. This issue also presents two news briefs, one on a practical range sensor suited to mobile robots and the other on vision devices that improve on the well-known IP-5000 series. I am convinced that this special issue will make research on robot vision even more exciting. I would like to close by thanking all of the researchers who submitted their studies, and to give special thanks to the reviewers and editors, especially Prof. M. Kaneko, Dr. K. Yokoi, and Prof. Y. Nakauchi.
5

Yu, Hui Jun, Cai Biao Chen, Wan Wu, and Zhi Wei Zhou. "Research of Application on Robot Vision with SQI Algorithms Based on Retinex." Applied Mechanics and Materials 675-677 (October 2014): 1358–62. http://dx.doi.org/10.4028/www.scientific.net/amm.675-677.1358.

Abstract:
Manual troubleshooting and safety monitoring in underground mines often involve safety risks, so the trend toward replacing manual operation with robots is growing. In robot vision navigation, image enhancement is the most critical preprocessing stage of automatic target recognition, prior to image feature extraction and matching. The multiscale Retinex algorithm provides very good contour enhancement and therefore plays a key role in processing the robot's visual scene images. For path identification by an underground mine mobile robot under machine vision navigation, this paper proposes an application of robot vision using SQI algorithms based on Retinex, addressing the poor real-time performance and serious light interference of existing approaches.
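As a rough illustration of the multiscale Retinex enhancement mentioned here (a generic sketch, not the SQI method of the paper; the image file name, scales, and normalization are assumptions):

```python
import cv2
import numpy as np

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """Average of single-scale Retinex outputs: log(image) - log(Gaussian-blurred image)."""
    img = img.astype(np.float64) + 1.0              # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blur = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += np.log(img) - np.log(blur)
    msr /= len(sigmas)
    # Stretch the result back to a displayable 8-bit range.
    msr = cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX)
    return msr.astype(np.uint8)

frame = cv2.imread("tunnel.png", cv2.IMREAD_GRAYSCALE)   # placeholder input image
enhanced = multiscale_retinex(frame)
```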
6

Pan, Zhi Guo. "Research on Automatic Cleaning Robot Based on Machine Vision." Applied Mechanics and Materials 539 (July 2014): 648–52. http://dx.doi.org/10.4028/www.scientific.net/amm.539.648.

Abstract:
The development and application of machine vision technology has greatly reduced the need for human labor and improved both the level of production automation and the quality of human life, and it has very broad application prospects. The intelligent empty-bottle inspection robot studied in this paper is a typical application of machine vision in industrial inspection. This paper mainly introduces the concept of machine vision, several important technologies related to automatic cleaning robots, and applications of machine vision in production across many areas of life.
7

Lin, Ssu Ting, Jun Hu, Chia Hung Shih, Chiou Jye Huang, and Ping Huan Kuo. "The Development of Supervised Motion Learning and Vision System for Humanoid Robot." Applied Mechanics and Materials 886 (January 2019): 188–93. http://dx.doi.org/10.4028/www.scientific.net/amm.886.188.

Abstract:
With the development of the Industry 4.0 concept, research relating to robots is receiving more and more attention, and the humanoid robot is a very important research topic among them. A humanoid robot is a robot with a bipedal mechanism. Owing to this physical mechanism, humanoid robots can maneuver more easily in complex terrain, such as going up and down stairs. However, humanoid robots often fall because of imbalance. Whether the robot can stand up on its own after a fall is a key research issue, yet the commonly used approach of hand-tuning motions so that robots can stand up by themselves is very inefficient. To solve these problems, this paper proposes an automatic learning system based on Particle Swarm Optimization (PSO). The system allows the robot to learn how to perform the motion of rebalancing after a fall. To give the robot the capability of object recognition, this paper also applies a Convolutional Neural Network (CNN) so that the robot can perform image recognition and successfully distinguish between 10 types of objects. The effectiveness and feasibility of the motion learning algorithm and the CNN-based image classification for the vision system proposed in this paper have been confirmed by the experimental results.
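A bare-bones particle swarm optimization loop of the general form used for such motion learning is sketched below. It is only illustrative; the cost function, bounds, and hyperparameters are placeholders, not those of the paper.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise cost(x) over R^dim with a standard PSO update rule."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))        # e.g. joint-trajectory parameters
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Placeholder cost: in the paper this would score a simulated stand-up motion.
best_params = pso(lambda p: np.sum(p ** 2), dim=12)
```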
8

Baasandorj, Bayanjargal, Aamir Reyaz, Batmunkh Battulga, Deok Jin Lee, and Kil To Chong. "Formation of Multiple-Robots Using Vision Based Approach." Applied Mechanics and Materials 419 (October 2013): 768–73. http://dx.doi.org/10.4028/www.scientific.net/amm.419.768.

Abstract:
Research on multi-robot systems has grown enormously, with a large variety of topics being addressed; it is an important research area within robotics and artificial intelligence. This paper deals with the formation of multiple robots using a vision-based approach. Three NXT robots were used in the experiment, and all three work together as one virtual mobile robot. The system also uses TCP/IP sockets, ArToolKit, and Bluetooth communication, with the programming done in C++. The results achieved in the experiment were highly successful.
9

Zhang, Zhi Li, Ying Ying Song, Wei Dong Zhang, and Shuo Yin. "Research Humanoid Robot Walking Based on Vision-Guided." Applied Mechanics and Materials 496-500 (January 2014): 1426–29. http://dx.doi.org/10.4028/www.scientific.net/amm.496-500.1426.

Abstract:
Walking is a basic function of a humanoid robot, and this paper presents key ideas of stereo-vision-based humanoid walking. Image processing and pattern recognition techniques are employed for obstacle detection and object recognition, and data fitting is used to plan the path of the humanoid robot. High-precision visual feedback is provided by combining real-time, high-precision feature detection with a high-accuracy object detection method. The proposed stereo-vision-based approach and robot guidance system were evaluated partly by experiments and partly by simulation with the humanoid robot.
10

Tang, Jian Bing, and Ya Bing Zha. "Research on the Vision Control System for Modular Robot." Applied Mechanics and Materials 667 (October 2014): 421–24. http://dx.doi.org/10.4028/www.scientific.net/amm.667.421.

Abstract:
The modular robot is an approach to building robots for various complex tasks, with the promise of great versatility, robustness, and lower cost. By changing its shape, it can be used widely to meet the demands of different tasks or different working environments: modular robots can travel over or through obstacles, pass through small pipes, and even walk somewhat like a person on crutches. A key technical problem is how a modular robot with vision can capture accurate image information, extract the feature parameters of components in real time, recognize component types, and judge the position and posture of each component. The number of available motion patterns determines the adaptability of a modular robot, so a dynamic vision control system is very important: it can improve the robot's mobility and level of intelligence. With the help of such a system, the robot can accomplish different tasks in different working environments by itself, such as deciding routes and avoiding obstacles. In this paper, in order to improve the vision of the modular robot, a dynamic vision control system is analyzed and investigated comprehensively, and two motion patterns for the modular robot are put forward.
11

Guo, Ling, and Chao Sun. "Research on Mobile Robot Vision Navigation Algorithm." Journal of Physics: Conference Series 2010, no. 1 (September 1, 2021): 012007. http://dx.doi.org/10.1088/1742-6596/2010/1/012007.

12

Ehrenman, Gayle. "Eyes on the Line." Mechanical Engineering 127, no. 08 (August 1, 2005): 25–27. http://dx.doi.org/10.1115/1.2005-aug-2.

Abstract:
This article discusses vision-enabled robots that are helping factories to keep the production lines rolling, even when the parts are out of place. The automotive industry was one of the earliest to adopt industrial robots, and continues to be one of its biggest users, but now industrial robots are turning up in more unusual factory settings, including pharmaceutical production and packaging, consumer electronics assembly, machine tooling, and food packaging. No current market research is available that breaks down vision-enabled versus blind robot usage. However, all the major industrial robot manufacturers are turning out models that are vision-enabled; one manufacturer said that its entire current line of robots are vision enabled. All it takes to change over the robot system is some fairly basic tooling changes to the robot's end-effector, and some programming changes in the software. The combination of speed, relatively low cost , flexibility, and ease of use that vision-enabled robots offer is making an increasing number of factories consider putting another set of eyes on their lines.
13

Wang, C. Z., Xiao Dong Zhang, and Xue Zhi Wu. "Research on Robot Vision Technology Based on ARM and DSP for Age and Disabled Helping Robot." Key Engineering Materials 455 (December 2010): 42–46. http://dx.doi.org/10.4028/www.scientific.net/kem.455.42.

Abstract:
Robot vision technology designed to help elderly and disabled people is studied on a mobile robot in this paper, and a robot vision system based on ARM and DSP is designed for it. After the hardware construction and software design, experiments are carried out to verify the feasibility of applying this robot vision technology to the aged- and disabled-assistance robot.
14

Zhang, Dandan, Junhong Chen, Wei Li, Daniel Bautista Salinas, and Guang-Zhong Yang. "A microsurgical robot research platform for robot-assisted microsurgery research and training." International Journal of Computer Assisted Radiology and Surgery 15, no. 1 (October 11, 2019): 15–25. http://dx.doi.org/10.1007/s11548-019-02074-1.

Abstract:
Abstract Purpose Ocular surgery, ear, nose and throat surgery and neurosurgery are typical types of microsurgery. A versatile training platform can assist microsurgical skills development and accelerate the uptake of robot-assisted microsurgery (RAMS). However, the currently available platforms are mainly designed for macro-scale minimally invasive surgery. There is a need to develop a dedicated microsurgical robot research platform for both research and clinical training. Methods A microsurgical robot research platform (MRRP) is introduced in this paper. The hardware system includes a slave robot with bimanual manipulators, two master controllers and a vision system. It is flexible to support multiple microsurgical tools. The software architecture is developed based on the robot operating system, which is extensible at high-level control. The selection of master–slave mapping strategy was explored, while comparisons were made between different interfaces. Results Experimental verification was conducted based on two microsurgical tasks for training evaluation, i.e. trajectory following and targeting. User study results indicated that the proposed hybrid interface is more effective than the traditional approach in terms of frequency of clutching, task completion time and ease of control. Conclusion Results indicated that the MRRP can be utilized for microsurgical skills training, since motion kinematic data and vision data can provide objective means of verification and scoring. The proposed system can further be used for verifying high-level control algorithms and task automation for RAMS research.
15

Chen, Ke Yin, Xiang Jun Zou, and Li Juan Chen. "The On-Line Calibration Research of the Picking Robot Binocular Vision System." Key Engineering Materials 522 (August 2012): 634–37. http://dx.doi.org/10.4028/www.scientific.net/kem.522.634.

Abstract:
In research on picking-robot binocular vision systems, camera calibration is usually an indispensable step and the basis for locating the target object and reconstructing its three-dimensional structure from robot stereo vision in follow-up work, so finding a highly accurate yet simple camera calibration algorithm is of great significance. However, most existing calibration algorithms require a reference object (a calibration target) to be placed in front of the camera, and posing such a target is inconvenient or almost impossible in some cases. Therefore, an online calibration algorithm for the picking robot, based on the visual scene, is proposed by studying the working-environment characteristics of the picking robot's binocular vision system and invariants of projective geometry. The experimental results show that the algorithm's calibration accuracy and precision meet the camera calibration requirements of the robot binocular vision system in complex environments.
16

Wang, Jian Qiang, Li Song Chen, Jiu You Zhu, Zhao Long Niu, and Dong Hua Tang. "Research on the Calibration Technology of Robot Vision System." Advanced Materials Research 482-484 (February 2012): 506–11. http://dx.doi.org/10.4028/www.scientific.net/amr.482-484.506.

Abstract:
This paper introduces calibration technique for robot vision system, aiming to enhance the robot’s ability to adapt to various environments. In accordance with a calibration plate and car body, we figured out the relationship among the coordinates of car body, camera, and robot by image acquisition, processing, analysis, and calculation. An algorithm was developed to deduce the coordinate transformation between robot frame and calibration plate frame with 4 points. Its feasibility was validated by the experiment.
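One standard way to recover the transformation between a calibration-plate frame and the robot frame from four corresponding points is an SVD-based rigid-transform fit, sketched below with made-up point values; this is an illustrative sketch, not necessarily the algorithm developed in the paper.

```python
import numpy as np

def rigid_transform(P, Q):
    """Find R, t such that Q ~ R @ P + t for 3xN corresponding point sets (Kabsch method)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical coordinates of four plate points in the plate frame and the robot frame (metres).
plate_pts = np.array([[0.0, 0.1, 0.1, 0.0], [0.0, 0.0, 0.1, 0.1], [0.0, 0.0, 0.0, 0.0]])
robot_pts = np.array([[0.5, 0.6, 0.6, 0.5], [0.2, 0.2, 0.3, 0.3], [0.0, 0.0, 0.0, 0.0]])
R, t = rigid_transform(plate_pts, robot_pts)
```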
17

Adachi, Yoshinobu, and Masayoshi Kakikura. "Research on the Sheepdog Problem Using Cellular Automata." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 9 (November 20, 2007): 1099–106. http://dx.doi.org/10.20965/jaciii.2007.p1099.

Abstract:
The simulation framework we propose for complex path planning problems with multi-agent systems focuses on the sheepdog problem for handling distributed autonomous robot systems, an extension of the pursuit problem, which handles one prey robot and multiple predator robots. The sheepdog problem involves a more complex situation in which multiple dog robots chase and herd multiple sheep robots. We use the Boids model and cellular automata to model sheep flocking and the chasing and herding behavior of the dog robots. We conduct experiments using a sheepdog problem simulator and study the resulting cooperative behavior.
18

Songhao, Piao, Liu Yaqi, Zhong Qiubo, and Byen Zhengyong. "Research of Mobile Robot Navigation Based on Vision." Advanced Science Letters 7, no. 1 (March 30, 2012): 187–91. http://dx.doi.org/10.1166/asl.2012.2062.

19

Ho, Tang Swee, Goh Chok How, Yeong Che Fai, Tey Wei Kang, and Eileen Su Lee Ming. "Vision Recognition on Line Pattern Following Behavior on POB-Bot." Applied Mechanics and Materials 432 (September 2013): 453–57. http://dx.doi.org/10.4028/www.scientific.net/amm.432.453.

Abstract:
Robust development in industry has led to the fast development of robots and sensors. Mobile robots are able to replace human labor in many tasks owing to their low cost and high accuracy. Navigation methods for mobile robots have been researched for decades; the simplest and most straightforward is line following, which lets the robot move along a predefined line. This paper proposes a novel line-following method that gives the mobile robot more functions: 12 patterns are defined to trigger different behaviors. A POB-Bot is used to recognize these patterns with its POB-Eye camera. The results show that the recognition accuracy of the POB-Eye is promisingly high for simple patterns.
20

Yin, Jiang Yan. "The Application Research of Robot Vision Target Positioning Based on Static Camera Calibration." Advanced Materials Research 712-715 (June 2013): 2378–84. http://dx.doi.org/10.4028/www.scientific.net/amr.712-715.2378.

Abstract:
Precise target positioning by the vision system is one of the key techniques in a robot vision system. When positioning and selecting targets with robot vision, the camera lens distortion must be calibrated. In this paper, a calibration method based on segment slope is used to calibrate the camera and obtain the radial lens distortion coefficient. The distortion coefficient is used when calculating the target position coordinates, and the robot end-effector is guided to the target using these coordinates. The experimental results show the effectiveness of the approach. Keywords: robot vision; camera calibration; radial distortion; target positioning
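For context, the standard OpenCV chessboard calibration below recovers the radial distortion coefficients that such a method must account for. This is the conventional routine, not the segment-slope method of the paper, and the board size and image folder are assumptions.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                      # inner corners of an assumed chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):                 # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("radial distortion k1, k2:", dist.ravel()[0], dist.ravel()[1])
```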
21

Shen, Dan, Haibin Ling, Khanh Pham, Erik Blasch, and Genshe Chen. "Computer vision and pursuit–evasion game theoretical controls for ground robots." Advances in Mechanical Engineering 11, no. 8 (August 2019): 168781401987291. http://dx.doi.org/10.1177/1687814019872911.

Abstract:
A hardware-in-loop control framework with robot dynamic models, pursuit–evasion game models, sensor and information solutions, and entity tracking algorithms is designed and developed to demonstrate discrete-time robotic pursuit–evasion games for real-world conditions. A parameter estimator is implemented to learn the unknown parameters in the robot dynamics. For visual tracking and fusion, several markers are designed and selected with the best balance of robot tracking accuracy and robustness. The target robots are detected after background modeling, and the robot poses are estimated from the local gradient patterns. Based on the robot dynamic model, a two-player discrete-time game model with limited action space and limited look-ahead horizons is created. The robot controls are based on the game-theoretic (mixed) Nash solutions. Supportive results are obtained from the robot control framework to enable future research of the robot applications in sensor fusion, target tracking and detection, and decision making.
22

Wang, Jun, Tao Mei, Bin Kong, and Xiang Dong. "Research on Object Recognition of Intelligent Robot Base on Binocular Vision." Applied Mechanics and Materials 127 (October 2011): 300–304. http://dx.doi.org/10.4028/www.scientific.net/amm.127.300.

Abstract:
Because intelligent robots need reliable object recognition and precise orientation in complex environments, this paper presents a method that uses binocular vision for object recognition. A least-squares fitting method is used to accurately determine the coordinates of one matching point and four boundary points; the three-dimensional coordinates of the matching point are calculated via binocular vision theory; and the three-dimensional information, including the size and height of the object, is computed via projection theory and the geometric constraints between the four boundary points and the depth of the matching point, which improves the reliability of object recognition and the precision of orientation. The experimental results show that this method achieves reliable object recognition and precise orientation and meets the needs of robot path planning and of grasping objects with a gripper.
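The core binocular-vision step, computing the 3-D coordinate of a matched point pair from two calibrated views, can be sketched with OpenCV triangulation as follows; the intrinsics, baseline, and pixel coordinates are illustrative values, not the authors' data.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
baseline = 0.12                                       # assumed distance between the cameras (m)
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])

pt_left = np.array([[350.0], [260.0]])                # matched pixel in the left image
pt_right = np.array([[322.0], [260.0]])               # matched pixel in the right image

X_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
X = (X_h[:3] / X_h[3]).ravel()                        # homogeneous -> Euclidean
print("3-D point in the left-camera frame:", X)
```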
23

Ma, Xin, Rong Guang Sun, and Yong Feng Dong. "Research and Design on Vision System of the Mobile Robot." Applied Mechanics and Materials 341-342 (July 2013): 797–800. http://dx.doi.org/10.4028/www.scientific.net/amm.341-342.797.

Abstract:
This paper presents the design of the vision system of a mobile robot and describes methods of object recognition based on color images in the mobile robot system. To adapt to different lighting conditions, the HSI color space is used. The system meets the mobile robot's demands for speed and accuracy. Experimental results show that the proposed technique can accomplish object recognition under changing illumination conditions.
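A minimal color-segmentation pipeline of the kind described might look like the sketch below (OpenCV exposes HSV rather than HSI, which serves the same purpose of separating hue from intensity). The input image and the color range are assumptions, not the paper's values.

```python
import cv2
import numpy as np

frame = cv2.imread("scene.png")                            # placeholder input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)               # hue is less sensitive to lighting
mask = cv2.inRange(hsv, (5, 120, 70), (20, 255, 255))      # assumed "orange target" range
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    c = max(contours, key=cv2.contourArea)                 # largest blob taken as the object
    x, y, w, h = cv2.boundingRect(c)
    print("object centre:", (x + w // 2, y + h // 2))
```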
24

Xiang, Wei. "Research of CamShift Algorithm Used in Object Tracking of Selfvision Underwater Robot." Advanced Materials Research 662 (February 2013): 971–74. http://dx.doi.org/10.4028/www.scientific.net/amr.662.971.

Abstract:
It is difficult for a self-vision underwater robot to track an object: the tracking process is frequently inaccurate or unstable, and the target may even be lost. To solve these problems, the Continuously Adaptive Mean Shift algorithm (CamShift) is applied to object tracking for a self-vision underwater robot in this paper. We built a software experimental platform with VC++ 6.0 and OpenCV 1.0, using an external camera to capture video, and then applied the CamShift algorithm in an environment whose background color is not similar to the object, to achieve real-time tracking. The experimental results show the effectiveness of the algorithm for the self-vision underwater robot.
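For reference, a CamShift tracking loop in modern OpenCV looks roughly as follows; it is the same general technique, though the paper used VC++ 6.0 and OpenCV 1.0, and the video file and initial window here are placeholders.

```python
import cv2

cap = cv2.VideoCapture("underwater.avi")                   # placeholder video source
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 60                              # assumed initial target window
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180]) # hue histogram of the target
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

track_window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift adapts the window size and orientation as the target scale changes.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term)
```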
25

Tan, Bin. "Soccer-Assisted Training Robot Based on Image Recognition Omnidirectional Movement." Wireless Communications and Mobile Computing 2021 (August 16, 2021): 1–10. http://dx.doi.org/10.1155/2021/5532210.

Abstract:
With the continuous emergence and innovation of computer technology, mobile robots have become a relatively hot topic in the field of artificial intelligence and an important research area for more and more scholars. The core requirement for a mobile robot is real-time perception of the surrounding environment and self-positioning, with navigation based on this information; this is the key to autonomous movement and has strategic research significance. Among these capabilities, the goal recognition ability of the soccer robot vision system is the basis of robot path planning, motion control, and collaborative task completion, and the main recognition task in the vision system falls to the omnidirectional vision system. Therefore, how to improve the target recognition accuracy and light adaptability of the robot's omnidirectional vision system is the key issue of this paper. We completed the system construction and program debugging of the omnidirectional mobile robot platform and tested its omnidirectional movement, its positioning and map-building capabilities in corridor and indoor environments, its global navigation function indoors, and its local obstacle avoidance function. Making fuller use of the robot's local visual information so that the robot's "eyes" can be greatly improved through image recognition technology, allowing the robot to obtain more accurate environmental information by itself, has long been a common goal of scholars at home and abroad. The study shows that the standard error level between the experimental group's shooting and dribbling test scores before and after training is 0.004, which is less than 0.05, supporting the effectiveness of training with the soccer-assisted training robot. On the one hand, we tested the positioning and navigation functions of the omnidirectional mobile robot; on the other hand, we verified the feasibility of the positioning and navigation algorithms and the multi-sensor fusion algorithm.
26

Yao, Ruting, Yili Zheng, Fengjun Chen, Jian Wu, and Hui Wang. "Research on Vision System Calibration Method of Forestry Mobile Robots." International Journal of Circuits, Systems and Signal Processing 14 (January 12, 2021): 1107–14. http://dx.doi.org/10.46300/9106.2020.14.139.

Abstract:
Forestry mobile robots can effectively solve the problems of low efficiency and poor safety in the forestry operation process. To realize the autonomous navigation of forestry mobile robots, a vision system consisting of a monocular camera and two-dimensional LiDAR and its calibration method are investigated. First, the adaptive algorithm is used to synchronize the data captured by the two in time. Second, a calibration board with a convex checkerboard is designed for the spatial calibration of the devices. The nonlinear least squares algorithm is employed to solve and optimize the external parameters. The experimental results show that the time synchronization precision of this calibration method is 0.0082s, the communication rate is 23Hz, and the gradient tolerance of spatial calibration is 8.55e−07. The calibration results satisfy the requirements of real-time operation and accuracy of the forestry mobile robot vision system. Furthermore, the engineering applications of the vision system are discussed herein. This study lays the foundation for further forestry mobile robots research, which is relevant to intelligent forest machines.
27

Martyshkin, Alexey I. "Motion Planning Algorithm for a Mobile Robot with a Smart Machine Vision System." Nexo Revista Científica 33, no. 02 (December 31, 2020): 651–71. http://dx.doi.org/10.5377/nexo.v33i02.10800.

Abstract:
This study is devoted to the challenges of motion planning for mobile robots with smart machine vision systems. Motion planning for mobile robots in the environment with obstacles is a problem to deal with when creating robots suitable for operation in real-world conditions. The solutions found today are predominantly private, and are highly specialized, which prevents judging of how successful they are in solving the problem of effective motion planning. Solutions with a narrow application field already exist and are being already developed for a long time, however, no major breakthrough has been observed yet. Only a systematic improvement in the characteristics of such systems can be noted. The purpose of this study: develop and investigate a motion planning algorithm for a mobile robot with a smart machine vision system. The research subject for this article is a motion planning algorithm for a mobile robot with a smart machine vision system. This study provides a review of domestic and foreign mobile robots that solve the motion planning problem in a known environment with unknown obstacles. The following navigation methods are considered for mobile robots: local, global, individual. In the course of work and research, a mobile robot prototype has been built, capable of recognizing obstacles of regular geometric shapes, as well as plan and correct the movement path. Environment objects are identified and classified as obstacles by means of digital image processing methods and algorithms. Distance to the obstacle and relative angle are calculated by photogrammetry methods, image quality is improved by linear contrast enhancement and optimal linear filtering using the Wiener-Hopf equation. Virtual tools, related to mobile robot motion algorithm testing, have been reviewed, which led us to selecting Webots software package for prototype testing. Testing results allowed us to make the following conclusions. The mobile robot has successfully identified the obstacle, planned a path in accordance with the obstacle avoidance algorithm, and continued moving to the destination. Conclusions have been drawn regarding the concluded research.
28

Bogue, Robert. "Robots poised to revolutionise agriculture." Industrial Robot: An International Journal 43, no. 5 (August 15, 2016): 450–56. http://dx.doi.org/10.1108/ir-05-2016-0142.

Abstract:
Purpose This paper aims to provide details of a number of recent and significant agricultural robot research and development activities. Design/methodology/approach Following an introduction, this first provides a brief overview of agricultural robot research. It then discusses a number of specific activities involving robots for precision weed control and fertiliser application. A selection of harvesting robots and allied technological developments is then considered and is followed by concluding comments. Findings Agricultural robots are the topic of an extensive research and development effort. Several autonomous robots aimed at precision weed control and fertiliser application have reached the pre-production stage. Equally, harvesting robots are at an advanced stage of development. Both classes exploit state-of-the-art machine vision and image processing technologies which are the topic of a major research effort. These developments will contribute to the forecasted rapid growth in the agricultural robot markets during the next decade. Originality/value Robots are expected to play a significant role in meeting the ever increasing demand for food, and this paper provides details of some recent agricultural robot research and development activities.
29

Xie, Yong Gang, Zhong Min Wang, and Shi Tao Su. "The Research of Binocular Ranging System on Independent Mobile Robot." Advanced Materials Research 462 (February 2012): 603–8. http://dx.doi.org/10.4028/www.scientific.net/amr.462.603.

Abstract:
Timeliness and accuracy are the key problems to be solved in robot binocular measurement. In this paper, a complete robot vision projection model is established. The principle of binocular ranging is analyzed in three respects, which makes the calculation concise and easy to understand and expands the effective measuring range. For binocular image processing, we propose a gray-scale computation that first generates a characteristic area, then performs template matching within that area, and finally extracts feature points and matches them against the templates. This ensures a degree of robustness to noise and largely avoids mismatches. The experiments show that the robot vision system achieves good accuracy with low time complexity, so the robot can react in real time.
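The underlying binocular ranging relation is depth Z = f·B/d for focal length f, baseline B, and disparity d. A compact sketch using OpenCV block matching on a rectified pair is given below; the images and rig parameters are illustrative, and this is not the gray-scale template-matching scheme of the paper.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)        # assumed rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0   # fixed-point -> pixels

f_px, baseline_m = 700.0, 0.12                              # assumed focal length and baseline
valid = disp > 0
depth = np.zeros_like(disp)
depth[valid] = f_px * baseline_m / disp[valid]              # metres for every valid pixel
```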
30

Hayashi, Koichiro, Yasuyoshi Yokokohji, and Tsuneo Yoshikawa. "Tele-Existence Vision System with Image Stabilization for Rescue Robots." Journal of Robotics and Mechatronics 17, no. 2 (April 20, 2005): 181–88. http://dx.doi.org/10.20965/jrm.2005.p0181.

Abstract:
The purpose of this research is to develop an intuitive interface to control rescue robots. We propose a new image stabilization system for operating rescue robots easily. The use of teleoperated rescue robots is promising in searching for victims in rubble. In the rescue activities with such robots, operators control the robots remotely through images captured by cameras mounted on the robots. Since the orientation of the robots change rapidly while they move in rubble, image stabilization is necessary so the operators can search for victims without suffering from fatigue or motion sickness. However, robot orientation changes so much that conventional image stabilizing methods does not work. In this paper, we propose a new image stabilization system which cancels camera motion caused by such rapid changes of robot orientation on an uneven terrain. After a preliminary experiment, a 3-DOF camera system was designed based on the newly proposed mechanism. To verify the performance of the camera system, we conducted two experiments. The results of the experiments confirmed that the proposed mechanism shows good image stabilization and good tracking of commanded head motion.
31

Eiammanussakul, Trinnachoke, Jirawut Taoprayoon, and Viboon Sangveraphunsiri. "Weld Bead Tracking Control of a Magnetic Wheel Wall Climbing Robot Using a Laser-Vision System." Applied Mechanics and Materials 619 (August 2014): 219–23. http://dx.doi.org/10.4028/www.scientific.net/amm.619.219.

Abstract:
Magnetic-wheel wall-climbing robots are used to inspect weld beads on large steel tanks. This research aims to develop a robot and a tracking control algorithm better suited to the inspection process. A laser-vision system was added to the robot to provide accurate weld bead detection. A simulation of the robot tracking along a weld bead using a nonlinear controller demonstrates the effectiveness of the tracking control system.
32

Lei, Jin-zhou, Ling-bin Zeng, and Nan Ye. "Research on industrial robot alignment technique with monocular vision." Optics and Precision Engineering 26, no. 3 (2018): 733–41. http://dx.doi.org/10.3788/ope.20182603.0733.

33

Yin, Ruijiao, and Jie Yang. "Research on Robot Control Technology Based on Vision Localization." Journal on Artificial Intelligence 1, no. 1 (2019): 37–44. http://dx.doi.org/10.32604/jai.2019.05815.

34

Liao, Yifan, Shipeng Xiong, and Yangyang Huang. "Research on fire inspection robot based on computer vision." IOP Conference Series: Earth and Environmental Science 632 (January 14, 2021): 052066. http://dx.doi.org/10.1088/1755-1315/632/5/052066.

35

Lu, Sheng Rong, and Huan Long Guo. "Research of Machine Vision System about Robot Soccer Based on the HSV." Advanced Materials Research 121-122 (June 2010): 807–12. http://dx.doi.org/10.4028/www.scientific.net/amr.121-122.807.

Abstract:
This paper introduces image processing and discusses the factors that influence it. After acquired RGB images are converted to HSV, the image recognition process for the robot is designed so that the robot can capture an entity of a specific color, giving an accurate basis for judging the robot's subsequent movement. First, the relevant background knowledge of images is presented. Second, images and image segmentation are briefly introduced. Third, the influencing factors and image processing techniques are discussed. Fourth, the conversion from the RGB model to the HSV model is presented. Finally, the robot's image acquisition and recognition procedures are designed in a modular way.
36

Zhang, Ming, Li Wang, Hai Hua Shi, and Wei Xiang. "The Target Tracking Algorithm Research of Independent Vision Robot Fish." Advanced Materials Research 753-755 (August 2013): 2015–19. http://dx.doi.org/10.4028/www.scientific.net/amr.753-755.2015.

Abstract:
In independent-vision robot fish competitions, the interference of water waves often causes tracking inaccuracy and target tracking failure. To solve these problems, the Meanshift algorithm and a combination of the Meanshift algorithm with a Kalman filter are studied for target tracking by an independent-vision robot fish in this paper. Comparing the two algorithms, the results show that the former is not ideal for tracking and easily loses the target, while the combined Meanshift and Kalman filter algorithm can effectively improve single-target tracking performance in a complex environment and achieve continuous, accurate tracking.
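A common way to combine mean-shift with a Kalman filter, in the spirit described here, is to let a constant-velocity filter predict the search window and feed the mean-shift result back as the measurement. The sketch below is illustrative only, not the authors' implementation; the noise covariances and window size are assumptions.

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                        # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(back_proj, window):
    """One fused update: Kalman predict, mean-shift around the prediction, Kalman correct."""
    pred = kf.predict()
    x, y, w, h = window
    window = (int(pred[0, 0]) - w // 2, int(pred[1, 0]) - h // 2, w, h)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.meanShift(back_proj, window, term)
    cx, cy = window[0] + w // 2, window[1] + h // 2
    kf.correct(np.array([[cx], [cy]], np.float32))
    return window
```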
37

Piao, Song Hao, Qiu Bo Zhong, Shu Ai Wang, and Xian Feng Wang. "Research of Object Recognition Algorithm Based on Variable Illumination." Advanced Materials Research 255-260 (May 2011): 2096–100. http://dx.doi.org/10.4028/www.scientific.net/amr.255-260.2096.

Abstract:
The vision system is a critical component of a soccer robot: in a football competition, the robot perceives most of its information through vision. Because of variable illumination conditions, traditional image segmentation based on color information alone is not satisfactory. Based on both the color and shape information of the object, this paper proposes an object recognition algorithm that combines color image segmentation with edge detection. The algorithm segments the image using color information in the HSV color space to obtain the object's pixels and then applies edge detection to those pixels to recognize the object. Experiments show that the algorithm can recognize the object accurately under different illumination conditions and satisfies the requirements of the competition.
38

Rahmadian, Reza, and Mahendra Widyartono. "Machine Vision and Global Positioning System for Autonomous Robotic Navigation in Agriculture: A Review." Journal of Information Engineering and Educational Technology 1, no. 1 (March 13, 2017): 46. http://dx.doi.org/10.26740/jieet.v1n1.p46-54.

Abstract:
Interest on robotic agriculture system has led to the development of agricultural robots that helps to improve the farming operation and increase the agriculture productivity. Much research has been conducted to increase the capability of the robot to assist agricultural operation, which leads to development of autonomous robot. This development provides a means of reducing agriculture’s dependency on operators, workers, also reducing the inaccuracy caused by human errors. There are two important development components for autonomous navigation. The first component is Machine vision for guiding through the crops and the second component is GPS technology to guide the robot through the agricultural fields.
39

Yang, Lei, Shiliang Wu, Zhenlong Lv, and Feng Lu. "Research on manipulator grasping method based on vision." MATEC Web of Conferences 309 (2020): 04004. http://dx.doi.org/10.1051/matecconf/202030904004.

Abstract:
Aiming at the problem that a manipulator cannot grasp an object accurately when the object's position and pose change during static grasping, a vision-based grasping method for the manipulator is proposed. First, a model of the camera detection and robot grasping system is built. Second, the coordinate systems of the grasping system are defined, and the transformation relations between them, the matrix model, and the quantities that need to be calibrated are introduced in detail. Third, the mapping from the image coordinate system to the manipulator coordinates is obtained through a calibration experiment using the direct linear method. Finally, an experimental platform for camera detection and manipulator grasping based on xavis is built, and its grasping success rate is tested. The experimental results show that the grasping error rate of the platform is within the acceptable range; therefore, the vision-based manipulator grasping method is of reference value for engineering applications.
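In the spirit of the direct-linear calibration step described, a least-squares fit of a planar affine map from image pixels to manipulator coordinates can be sketched as follows; the point pairs are fabricated for illustration and are not the paper's calibration data.

```python
import numpy as np

# Calibration pairs: (u, v) pixel coordinates and the corresponding (x, y) robot coordinates (mm).
uv = np.array([[100, 120], [400, 118], [402, 380], [98, 383]], dtype=float)
xy = np.array([[50.0, 30.0], [200.0, 31.0], [201.0, 160.0], [49.0, 161.0]])

A = np.hstack([uv, np.ones((len(uv), 1))])         # [u, v, 1] design matrix
M, *_ = np.linalg.lstsq(A, xy, rcond=None)         # least-squares affine map (3x2)

def pixel_to_robot(u, v):
    """Map an image pixel to planar robot coordinates using the fitted transform."""
    return np.array([u, v, 1.0]) @ M

print(pixel_to_robot(250, 250))
```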
40

Xu, Linfeng, Gang Li, Peiheng Song, and Weixiang Shao. "Vision-Based Intelligent Perceiving and Planning System of a 7-DoF Collaborative Robot." Computational Intelligence and Neuroscience 2021 (September 14, 2021): 1–25. http://dx.doi.org/10.1155/2021/5810371.

Abstract:
In this paper, an intelligent perceiving and planning system based on deep learning is proposed for a collaborative robot consisting of a 7-DoF (7-degree-of-freedom) manipulator, a three-finger robot hand, and a vision system, known as IPPS (intelligent perceiving and planning system). The lack of intelligence has been limiting the application of collaborative robots for a long time. A system to realize “eye-brain-hand” process is crucial for the true intelligence of robots. In this research, a more stable and accurate perceiving process was proposed. A well-designed camera system as the vision system and a new hand tracking method were proposed for operation perceiving and recording set establishment to improve the applicability. A visual process was designed to improve the accuracy of environment perceiving. Besides, a faster and more precise planning process was proposed. Deep learning based on a new CNN (convolution neural network) was designed to realize intelligent grasping planning for robot hand. A new trajectory planning method of the manipulator was proposed to improve efficiency. The performance of the IPPS was tested with simulations and experiments in a real environment. The results show that IPPS could effectively realize intelligent perceiving and planning for the robot, which could realize higher intelligence and great applicability for collaborative robots.
41

Yi, Ji Ming, and Min Han. "Welding Robot Welding Process Research and Development of Intelligent Technology." Applied Mechanics and Materials 419 (October 2013): 774–77. http://dx.doi.org/10.4028/www.scientific.net/amm.419.774.

Abstract:
Addressing the development direction of welding robots and their existing problems, in particular the difficulty of realizing automatic robotic welding of grooved plates, this paper presents a method that combines a laser sensor with a binocular vision system to extract image and depth information of the plate groove and achieve accurate 3D reconstruction.
42

Rahmani, Budi, Agfianto Eko Putra, Agus Harjoko, and Tri Kuntoro Priyambodo. "Review of Vision-Based Robot Navigation Method." IAES International Journal of Robotics and Automation (IJRA) 4, no. 4 (December 1, 2015): 254. http://dx.doi.org/10.11591/ijra.v4i4.pp254-261.

Abstract:
Vision-based robot navigation is a research theme that continues to be developed by researchers in the field of robotics. Innumerable methods and algorithms have been developed, and this paper reviews them. The methods are distinguished by whether the robot is equipped with a navigation map (map-based), whether the map is built incrementally as the robot observes the environment (map-building), or whether the robot navigates with no map at all (mapless). The paper describes navigation methods in the map-based, map-building, and mapless categories.
43

Geng, Mingyang, Shuqi Liu, and Zhaoxia Wu. "Sensor Fusion-Based Cooperative Trail Following for Autonomous Multi-Robot System." Sensors 19, no. 4 (February 17, 2019): 823. http://dx.doi.org/10.3390/s19040823.

Abstract:
Autonomously following a man-made trail in the wild is a challenging problem for robotic systems. Recently, deep learning-based approaches have cast the trail following problem as an image classification task and have achieved great success in the vision-based trail-following problem. However, the existing research only focuses on the trail-following task with a single-robot system. In contrast, many robotic tasks in reality, such as search and rescue, are conducted by a group of robots. While these robots are grouped to move in the wild, they can cooperate to lead to a more robust performance and perform the trail-following task in a better manner. Concretely, each robot can periodically exchange the vision data with other robots and make decisions based both on its local view and the information from others. This paper proposes a sensor fusion-based cooperative trail-following method, which enables a group of robots to implement the trail-following task by fusing the sensor data of each robot. Our method allows each robot to face the same direction from different altitudes to fuse the vision data feature on the collective level and then take action respectively. Besides, considering the quality of service requirement of the robotic software, our method limits the condition to implementing the sensor data fusion process by using the “threshold” mechanism. Qualitative and quantitative experiments on the real-world dataset have shown that our method can significantly promote the recognition accuracy and lead to a more robust performance compared with the single-robot system.
44

Tanaka, Ryosuke, Jinseok Woo, and Naoyuki Kubota. "Nonverbal Communication Based on Instructed Learning for Socially Embedded Robot Partners." Journal of Advanced Computational Intelligence and Intelligent Informatics 23, no. 3 (May 20, 2019): 584–91. http://dx.doi.org/10.20965/jaciii.2019.p0584.

Abstract:
The research and development of robot partners have been actively conducted to support human daily life. Human-robot interaction is one of the important research field, in which verbal and nonverbal communication are essential elements for improving the interactions between humans and robots. Thus, the purpose of this research was to establish a method to adapt a human-robot interaction mechanism for robot partners to various situations. In the proposed system, the robot needs to analyze the gestures of humans to interact with them. Humans have the ability to interact according to dynamically changing environmental conditions. Therefore, when robots interact with a human, it is necessary for robots to interact appropriately by correctly judging the situation according to human gestures to carry out natural human-robot interaction. In this paper, we propose a constructive methodology on a system that enables nonverbal communication elements for human-robot interaction. The proposed method was validated through a series of experiments.
45

Huang, Wensheng, and Hongli Xu. "Development of six-DOF welding robot with machine vision." Modern Physics Letters B 32, no. 34n36 (December 30, 2018): 1840079. http://dx.doi.org/10.1142/s0217984918400791.

Abstract:
The application of machine vision to industrial robots is a hot topic in robot research nowadays. A welding robot with machine vision has been developed: its six-degrees-of-freedom (DOF) manipulator reaches the welding point conveniently and flexibly, singularities along its motion trajectory are avoided, and the stability of the mechanism is fully guaranteed. A precise industrial camera captures the optical features of the workpiece on its CCD sensor, and the workpiece is identified and located through visual pattern recognition algorithms based on gray-scale processing, on the gradient direction of edge pixels, or on geometric elements, so that high-speed visual acquisition, image preprocessing, feature extraction and recognition, and target location are integrated and the hardware processing power is improved. Another task is to plan the control strategy of the control system and program the upper-computer software so that the multi-axis motion trajectory is optimized and servo control is accomplished. Finally, a prototype was developed, and validation experiments show that the welding robot achieves high stability, high efficiency, and high precision, even when the welding joints are placed randomly and the workpiece contour is irregular.
46

Cheng, Sheng. "Research on Ping-Pong Balls Collecting Robot Based on Embedded Vision Processing System." Applied Mechanics and Materials 511-512 (February 2014): 838–41. http://dx.doi.org/10.4028/www.scientific.net/amm.511-512.838.

Abstract:
Existing designs for ping-pong ball collecting robots cannot combine autonomous control with remote control, and existing collecting devices are not efficient enough for the ball-collecting job. A design is proposed for a ping-pong ball collecting robot system with both autonomous and remote control, based on embedded vision processing. The design also includes a wheeled pick-up device. The research shows that the robot can collect ping-pong balls efficiently.
47

Pisar, Žiga, and Primož Podržaj. "THE DESIGN OF A MODULAR MOBILE ROBOT FOR VISION BASED HUMAN ROBOT INTERACTION RESEARCH." Electrical and Electronics Engineering: An International Journal 06, no. 04 (November 30, 2017): 01–10. http://dx.doi.org/10.14810/elelij.2017.6401.

48

Xing, Si Ming, and Zhi Yong Luo. "Research on Wire-Plugging Robot System Based on Machine Vision." Applied Mechanics and Materials 275-277 (January 2013): 2459–66. http://dx.doi.org/10.4028/www.scientific.net/amm.275-277.2459.

Abstract:
ADSL line testing in the telecommunications field is labor-intensive work, and the current testing method has low efficiency and cannot be automated. In this paper, a wire-plugging test robot system based on machine vision is designed to realize remote testing and automatic wire plugging and to improve work efficiency. A dual-positioning method based on color-coded block recognition and visual locating is used in the system. The color-coded blocks are recognized for coarse positioning of the socket, and the stepper motors in the X and Y directions are driven to move quickly to the neighborhood of the socket. Video positioning is then used to pinpoint the socket. After the fine positioning, the X- and Y-axis stepper motors drive the plug to align with the socket, and the Z-axis motor is driven to perform the wire-plugging action. The plug is reset to a safe place after plugging is completed. Performance tests have shown that this wire-plugging test robot system can carry out the plug-testing task quickly and accurately, making it a stable wire-plugging device.
49

Li, Wei. "Research on Mechanical Industry with the Patrol Robot Study Based on the Embedded System." Advanced Materials Research 675 (March 2013): 82–85. http://dx.doi.org/10.4028/www.scientific.net/amr.675.82.

Abstract:
Combining machine vision and embedded systems for robot motion control in the machinery industry is a hot topic. In this paper, an embedded robot visual servo tracking platform is built that can process and analyze video images and audio data; the analysis results are then transferred to the drive motors in the robot body, realizing autonomous tracking by the mechanical robot.
50

Su, Shiyuan, Zhijie Xu, and Yihui Yang. "Research on Robot Vision Servo Based on image big data." Journal of Physics: Conference Series 1650 (October 2020): 032132. http://dx.doi.org/10.1088/1742-6596/1650/3/032132.
