Journal articles on the topic "Robot vision"

Follow this link to see other types of publications on this topic: Robot vision.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles.

Check out the 50 best journal articles for your research on the topic "Robot vision".

Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever the corresponding parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

MASRIL, MUHAMMAD ABRAR, and DEOSA PUTRA CANIAGO. "Optimasi Teknologi Computer Vision pada Robot Industri Sebagai Pemindah Objek Berdasarkan Warna". ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika 11, no. 1 (January 24, 2023): 46. http://dx.doi.org/10.26760/elkomika.v11i1.46.

Abstract:
Computer vision is a technology that can detect objects in its surroundings. This study discusses the optimization of computer vision technology on a robot that transfers objects based on color. The system on the robot consists of recognizing colored balls and moving them according to the detected color. The computer vision technology of the Pixy2 camera can detect colored objects using a real-time detection method, with a highly optimized detection time of 0.2 seconds per colored object. The colored-object recognition test was carried out three times on each colored object, with an accuracy rate of 100%. Computer vision optimization can help robots recognize colored objects. Keywords: Computer Vision, Color Object Detection, Pixy2 Camera, Real-Time
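The Pixy2 performs this kind of color-blob detection on board. As a rough illustration of the same idea in a general-purpose library, the following is a minimal sketch of color-based object detection with OpenCV in Python; the HSV thresholds, kernel size, and red-ball target are illustrative assumptions, not values from the paper or the Pixy2 firmware.

import cv2
import numpy as np

LOWER_RED = np.array([0, 120, 70])     # assumed HSV range for a red object
UPPER_RED = np.array([10, 255, 255])

def detect_colored_object(frame_bgr):
    # Threshold in HSV, clean up noise, and return the largest blob's centroid.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])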
2

BARNES, NICK, and ZHI-QIANG LIU. "VISION GUIDED CIRCUMNAVIGATING AUTONOMOUS ROBOTS". International Journal of Pattern Recognition and Artificial Intelligence 14, no. 06 (September 2000): 689–714. http://dx.doi.org/10.1142/s0218001400000489.

Abstract:
We present a system for vision guided autonomous circumnavigation, allowing a mobile robot to navigate safely around objects of arbitrary pose, and avoid obstacles. The system performs model-based object recognition from an intensity image. By enabling robots to recognize and navigate with respect to particular objects, this system empowers robots to perform deterministic actions on specific objects, rather than general exploration and navigation as emphasized in much of the current literature. This paper describes a fully integrated system, and, in particular, introduces canonical-views. Further, we derive a direct algebraic method for finding object pose and position for the four-dimensional case of a ground-based robot with uncalibrated vertical movement of its camera. Vision for mobile robots can be treated as a very different problem to traditional computer vision, as mobile robots have a characteristic perspective, and there is a causal relation between robot actions and view changes. Canonical-views are a novel, active object representation designed specifically to take advantage of the constraints of the robot navigation problem to allow efficient recognition and navigation.
3

Martinez-Martin, Ester, and Angel del Pobil. "Vision for Robust Robot Manipulation". Sensors 19, no. 7 (April 6, 2019): 1648. http://dx.doi.org/10.3390/s19071648.

Abstract:
Advances in Robotics are leading to a new generation of assistant robots working in ordinary, domestic settings. This evolution raises new challenges in the tasks to be accomplished by the robots. This is the case for object manipulation where the detect-approach-grasp loop requires a robust recovery stage, especially when the held object slides. Several proprioceptive sensors have been developed in the last decades, such as tactile sensors or contact switches, that can be used for that purpose; nevertheless, their implementation may considerably restrict the gripper’s flexibility and functionality, increasing their cost and complexity. Alternatively, vision can be used since it is an undoubtedly rich source of information, and in particular, depth vision sensors. We present an approach based on depth cameras to robustly evaluate the manipulation success, continuously reporting about any object loss and, consequently, allowing it to robustly recover from this situation. For that, a Lab-colour segmentation allows the robot to identify potential robot manipulators in the image. Then, the depth information is used to detect any edge resulting from two-object contact. The combination of those techniques allows the robot to accurately detect the presence or absence of contact points between the robot manipulator and a held object. An experimental evaluation in realistic indoor environments supports our approach.
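As a hedged sketch of the detect-and-verify idea in this abstract, the fragment below combines a Lab-space color mask for the manipulator with depth-gradient edges to flag candidate contact pixels; the Lab range and thresholds are assumptions for illustration, not the paper's values.

import cv2
import numpy as np

def contact_pixels(bgr, depth, lab_lower=(0, 110, 110),
                   lab_upper=(255, 145, 145), edge_thresh=0.02):
    # Candidate manipulator pixels from a Lab-colour segmentation.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    gripper = cv2.inRange(lab, np.array(lab_lower, np.uint8),
                          np.array(lab_upper, np.uint8))
    # Depth discontinuities mark edges where two surfaces meet.
    d = depth.astype(np.float32)
    gx = cv2.Sobel(d, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(d, cv2.CV_32F, 0, 1)
    edges = (np.hypot(gx, gy) > edge_thresh).astype(np.uint8) * 255
    # Contact candidates: depth edges adjacent to the manipulator region.
    near_gripper = cv2.dilate(gripper, np.ones((7, 7), np.uint8))
    return cv2.findNonZero(cv2.bitwise_and(edges, near_gripper))  # None if no contact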
4

Umeda, Kazunori. "Special Issue on Robot Vision". Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 253. http://dx.doi.org/10.20965/jrm.2003.p0253.

Abstract:
Robot vision is an essential key technology in robotics and mechatronics. Studies on robot vision are wide-ranging, and the topic remains a vital, active target. This special issue reviews recent advances in this exciting field, following up two earlier special issues, Vol. 11 No. 2 and Vol. 13 No. 6, which attracted more papers than expected; this indicates the high degree of research activity in the field. I am most pleased to report that this issue presents 12 excellent papers covering robot vision, including basic algorithms based on precise optical models, pattern and gesture recognition, and active vision. Several papers treat range imaging, and others present interesting applications to agriculture, quadruped robots, and new devices. This issue also presents two news briefs, one on a practical range sensor suited to mobile robots and the other on vision devices that improve on the well-known IP-5000 series. I am convinced that this special issue will make research on robot vision even more exciting. I would like to close by thanking all of the researchers who submitted their studies, and to give special thanks to the reviewers and editors, especially Prof. M. Kaneko, Dr. K. Yokoi, and Prof. Y. Nakauchi.
5

Zhang, Hongxin, and Suan Lee. "Robot Bionic Vision Technologies: A Review". Applied Sciences 12, no. 16 (August 9, 2022): 7970. http://dx.doi.org/10.3390/app12167970.

Abstract:
The visual organ is important for animals to obtain information and understand the outside world; robots, likewise, cannot do without a visual system. At present, artificial intelligence vision technology has achieved automation and relatively simple intelligence; however, bionic vision equipment is not yet as dexterous and intelligent as the human eye, nor can robots yet function as smartly as human beings, and existing reviews of robot bionic vision are still limited. Robot bionic vision has been explored in view of humans' and animals' visual principles and motion characteristics. In this study, the development history of robot bionic vision equipment and related technologies is discussed; the most representative binocular bionic and multi-eye compound-eye bionic vision technologies are selected; the existing technologies are reviewed; and their prospects are discussed from the perspective of visual bionic control. This comprehensive study will serve as an up-to-date source of information regarding developments in the field of robot bionic vision technology.
6

YACHIDA, Masahiko. "Robot Vision." Journal of the Robotics Society of Japan 10, no. 2 (1992): 140–45. http://dx.doi.org/10.7210/jrsj.10.140.
7

Haralick, Robert M. "Robot vision". Computer Vision, Graphics, and Image Processing 34, no. 1 (April 1986): 118–19. http://dx.doi.org/10.1016/0734-189x(86)90060-5.
8

Forrest, A. K. "Robot vision". Physics in Technology 17, no. 1 (January 1986): 5–9. http://dx.doi.org/10.1088/0305-4624/17/1/301.
9

Shirai, Y. "Robot vision". Robotics 2, no. 3 (September 1986): 175–203. http://dx.doi.org/10.1016/0167-8493(86)90028-8.
10

Shirai, Y. "Robot vision". Future Generation Computer Systems 1, no. 5 (September 1985): 325–52. http://dx.doi.org/10.1016/0167-739x(85)90005-6.
11

Xing, Guansheng, and Weichuan Meng. "Design of Robot Vision Servo Control System Based on Image". Journal of Physics: Conference Series 2136, no. 1 (December 1, 2021): 012049. http://dx.doi.org/10.1088/1742-6596/2136/1/012049.

Abstract:
Visual servoing is closed-loop robot control that takes the image information obtained by a vision sensor as feedback. Visual servoing plays an important role in robot control: it is one of the main research directions in the field and plays a decisive role in the development of intelligent robots. To make a robot competent for more complex tasks and able to work more intelligently, autonomously, and reliably, it is necessary not only to improve the robot's control system but also to obtain more and better information about the robot's working environment. This paper introduces the principle and basic implementation of image-based robot visual servoing, and it discusses the problems, and their solutions, in image feature extraction and visual servo controller design. To further expand the application field of robots and improve their operating performance, robots must have a higher level of intelligence and stronger adaptability to the environment, so that intelligent robots capable of replacing human labor can be manufactured.
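For readers unfamiliar with image-based visual servoing, the classical point-feature control law the abstract refers to computes a camera velocity v = -λ L⁺ e from the feature error e and the interaction matrix L. A minimal sketch, with the feature depths and the gain as assumptions:

import numpy as np

def interaction_matrix(x, y, Z):
    # 2x6 image Jacobian for one normalized image point at depth Z.
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
        [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    # Camera twist [vx, vy, vz, wx, wy, wz] that drives features toward desired.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error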
12

Yang, Tianwu, Changjiu Zhou and Mohan Rajesh. "A Fast Vision System for Soccer Robot". Applied Bionics and Biomechanics 9, no. 4 (2012): 399–407. http://dx.doi.org/10.1155/2012/480718.

Abstract:
This paper proposes fast colour-based object recognition and localization for soccer robots. The traditional HSL colour model is modified for better colour segmentation and edge detection in a colour-coded environment. Object recognition is based only on edge pixels, to speed up the computation. The edge pixels are detected by intelligently scanning a small part of the image pixels, distributed over the whole image. A fast method for line and circle-centre detection is also discussed. For object localization, 26 key points are defined on the soccer field. When two or more key points are visible from the robot camera view, the three rotation angles are adjusted to achieve a precise localization of robots and other objects. If no key point is detected, the robot position is estimated according to the history of robot movement and the feedback from the motors and sensors. Experiments on NAO and RoboErectus teen-size humanoid robots show that the proposed vision system is robust and accurate under different lighting conditions and can effectively and precisely locate robots and other objects.
13

C, Abhishek. "Development of Hexapod Robot with Computer Vision". International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 1796–805. http://dx.doi.org/10.22214/ijraset.2021.37455.

Abstract:
Nowadays many robotic systems are developed with a great deal of innovation, seeking the flexibility and efficiency of biological systems. The hexapod robot is a good example: a six-legged robot whose walking movements imitate those of insects, walking on two alternating sets of three legs, which provides stability, flexibility, and the mobility to travel on irregular surfaces. With these attributes, hexapod robots can be used to explore irregular surfaces, inhospitable places, or places that are difficult for humans to access. This paper covers the development of a hexapod robot with digital image processing implemented on a Raspberry Pi, as a study of robotic systems with legged locomotion and robotic vision. It integrates a robotic system with an embedded digital image processing system, programmed in a high-level language using Python. The robot is equipped with a camera to capture real-time video and a distance sensor that allows it to detect obstacles; it is self-stabilizing and can detect corners. With 3 degrees of freedom in each of its six legs, the robot achieves 18-DOF movement. The multiple degrees of freedom at the leg joints allow legged robots to change their movement direction without slippage. Additionally, the height above the ground can be changed, introducing damping and a decoupling between terrain irregularities and the body of the robot. Keywords: Hexapod, Raspberry Pi, Computer vision, Object detection, Yolo, Servo Motor, OpenCV.
14

Frese, Udo, and Heiko Hirschmüller. "Special issue on robot vision: what is robot vision?" Journal of Real-Time Image Processing 10, no. 4 (October 22, 2015): 597–98. http://dx.doi.org/10.1007/s11554-015-0541-3.
15

Yan, Hui, Xue Bo Zhang, Yu Wang and Wei Jie Han. "Research on the Vision Processing of Space Robot's Tracking Camera". Advanced Materials Research 748 (August 2013): 713–17. http://dx.doi.org/10.4028/www.scientific.net/amr.748.713.

Abstract:
The tracking camera is very important to the overall test tasks of a space robot. Aiming at the vision processing problems of a space robot's tracking camera, this article presents a new vision processing method based on LabVIEW+DLL. Based on this method, a vision processing system for the space robot's tracking camera was researched and developed. The system meets the index requirements of space robot vision processing and precisely measures the position and attitude of the target satellite relative to the space robot's body coordinate system during the ground air-flotation test, guaranteeing the smooth completion of the ground test mission.
16

Tang, Ran, Jun Lu, Lei Liang, Jiakai Peng, Min Zhao and Wei Zhang. "A Fast Automatic Charging Method for Robots Based on Optic Position and Navigation". Journal of Physics: Conference Series 2405, no. 1 (December 1, 2022): 012029. http://dx.doi.org/10.1088/1742-6596/2405/1/012029.

Abstract:
Automatic charging is one of the important functions of a robot. Traditional automatic charging is generally based on multiple sensors working with each other. With the development of vision technology, the application of robots' first-person and third-person vision is becoming more and more widespread. In this paper, an automatic robot charging system based on optical position and navigation, built on third-person vision, is presented. The system design is simplified by positioning the robot and the charging stand optically. Experimental tests show that the robot and the charging stand dock with high accuracy and that the docking completes well.
17

Ehrenman, Gayle. "Eyes on the Line". Mechanical Engineering 127, no. 08 (August 1, 2005): 25–27. http://dx.doi.org/10.1115/1.2005-aug-2.

Abstract:
This article discusses vision-enabled robots that are helping factories keep production lines rolling, even when parts are out of place. The automotive industry was one of the earliest adopters of industrial robots and continues to be one of their biggest users, but industrial robots are now turning up in more unusual factory settings, including pharmaceutical production and packaging, consumer electronics assembly, machine tooling, and food packaging. No current market research is available that breaks down vision-enabled versus blind robot usage. However, all the major industrial robot manufacturers are turning out models that are vision-enabled; one manufacturer said that its entire current line of robots is vision-enabled. All it takes to change over the robot system is some fairly basic tooling changes to the robot's end-effector and some programming changes in the software. The combination of speed, relatively low cost, flexibility, and ease of use that vision-enabled robots offer is making an increasing number of factories consider putting another set of eyes on their lines.
18

Baasandorj, Bayanjargal, Aamir Reyaz, Batmunkh Battulga, Deok Jin Lee and Kil To Chong. "Formation of Multiple-Robots Using Vision Based Approach". Applied Mechanics and Materials 419 (October 2013): 768–73. http://dx.doi.org/10.4028/www.scientific.net/amm.419.768.

Abstract:
Multi-robot systems have grown enormously, with a large variety of topics being addressed; they are an important research area within robotics and artificial intelligence. Using a vision-based approach, this paper deals with the formation of multiple robots. Three NXT robots were used in the experiment, and all three work together as one virtual mobile robot. The system also used TCP/IP sockets, ARToolKit, Bluetooth communication devices, and C++ for programming. The results achieved in the experiment were highly successful.
19

Monta, Mitsuji, Naoshi Kondo, Seiichi Arima and Kazuhiko Namba. "Robotic Vision for Bioproduction Systems". Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 341–48. http://dx.doi.org/10.20965/jrm.2003.p0341.

Abstract:
The vision system is one of the most important external sensors for an agricultural robot, because the robot has to find its target among various objects against a complicated background. Optical and morphological properties should therefore be investigated first, so that the target object can be recognized properly when a visual sensor for an agricultural robot is developed. A TV camera is widely used as a vision sensor for agricultural robots. A target image can easily be obtained using color component images from a TV camera when the target color differs from the colors of the other objects and the background. When the target has a color similar to its background, it is often still possible to discriminate objects with a monochrome TV camera whose sensitivity extends from the visible to the infrared region. However, it is not easy to measure target depth with TV cameras, because many objects may overlap in the field of view. In this paper, robotic vision using TV cameras for tomato and cucumber harvesting robots, and a depth measurement system using a laser scanner, are introduced.
20

Park, Jaehong, Wonsang Hwang, Hyunil Kwon, Kwangsoo Kim and Dong-il “Dan” Cho. "A novel line of sight control system for a robot vision tracking system, using vision feedback and motion-disturbance feedforward compensation". Robotica 31, no. 1 (April 12, 2012): 99–112. http://dx.doi.org/10.1017/s0263574712000124.

Abstract:
This paper presents a novel line of sight control system for a robot vision tracking system, which uses a position feedforward controller to preposition a camera, and a vision feedback controller to compensate for the positioning error. Continuous target tracking is an important function for service robots, surveillance robots, and cooperating robot systems. However, it is difficult to track a specific target using only vision information, while a robot is in motion. This is especially true when a robot is moving fast or rotating fast. The proposed system controls the camera line of sight, using a feedforward controller based on estimated robot position and motion information. Specifically, the camera is rotated in the direction opposite to the motion of the robot. To implement the system, a disturbance compensator is developed to determine the current position of the robot, even when the robot wheels slip. The disturbance compensator is comprised of two extended Kalman filters (EKFs) and a slip detector. The inputs of the disturbance compensator are data from an accelerometer, a gyroscope, and two wheel-encoders. The vision feedback information, which is the targeting error, is used as the measurement update for the two EKFs. Using output of the disturbance compensator, an actuation module pans the camera to locate a target at the center of an image plane. This line of sight control methodology improves the recognition performance of the vision tracking system, by keeping a target image at the center of an image frame. The proposed system is implemented on a two-wheeled robot. Experiments are performed for various robot motion scenarios in dynamic situations to evaluate the tracking and recognition performance. Experimental results showed the proposed system achieves high tracking and recognition performances with a small targeting error.
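The feedforward-plus-feedback idea can be condensed to a few lines: rotate the camera against the measured body yaw rate, and let the vision loop trim the residual pixel error. A minimal sketch; the gain, units, and sign convention are assumptions rather than the paper's tuned values.

def pan_rate_command(gyro_yaw_rate, target_pixel_error_x, k_fb=0.002):
    # Feedforward: counter-rotate the camera against the robot body motion.
    feedforward = -gyro_yaw_rate
    # Feedback: small correction that re-centers the target in the image.
    feedback = -k_fb * target_pixel_error_x
    return feedforward + feedback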
21

Jiménez Moreno, Robinson, Oscar Aviles and Ruben Darío Hernández Beleño. "Humanoid Robot Cooperative System by Machine Vision". International Journal of Online Engineering (iJOE) 13, no. 12 (December 11, 2017): 162. http://dx.doi.org/10.3991/ijoe.v13i12.7594.

Abstract:
This article presents a supervised position control system, based on image processing and oriented toward cooperative work between two humanoid robots operating autonomously. The first robot picks up an object and carries it to the second robot, which then places it at an endpoint; this is achieved through straight-line trajectories and 180-degree turns. A Microsoft Kinect finds the exact spatial position of each robot and of the reference object through color space conversion and filtering of the RGB camera data, combined with the information transmitted by the depth sensor, to obtain the final location of each. Algorithms developed in C# command each robot so that the two work together to transport the reference object from an initial point, hand it from one robot to the other, and deposit it at an endpoint. The experiment was repeated over the same trajectory, under uniform light conditions, achieving successful delivery of the object each time.
22

HIROTA, Kaoru. "Fuzzy robot vision." Journal of the Robotics Society of Japan 6, no. 6 (1988): 557–62. http://dx.doi.org/10.7210/jrsj.6.6_557.
23

ITO, Minoru. "Robot vision modeling." Journal of the Robotics Society of Japan 7, no. 2 (1989): 215–20. http://dx.doi.org/10.7210/jrsj.7.215.
24

Suzuki, Mototaka, and Dario Floreano. "Enactive Robot Vision". Adaptive Behavior 16, no. 2-3 (April 2008): 122–28. http://dx.doi.org/10.1177/1059712308089183.
25

Menegatti, Emanuele, and Tomas Pajdla. "Omnidirectional robot vision". Robotics and Autonomous Systems 58, no. 6 (June 2010): 745–46. http://dx.doi.org/10.1016/j.robot.2010.02.006.
26

Lin, Ssu Ting, Jun Hu, Chia Hung Shih, Chiou Jye Huang and Ping Huan Kuo. "The Development of Supervised Motion Learning and Vision System for Humanoid Robot". Applied Mechanics and Materials 886 (January 2019): 188–93. http://dx.doi.org/10.4028/www.scientific.net/amm.886.188.

Abstract:
With the development of the concept of Industry 4.0, research relating to robots is receiving more and more attention, and the humanoid robot is a very important research topic. A humanoid robot is a robot with a bipedal mechanism. Thanks to this mechanism, humanoid robots can maneuver more easily in complex terrains, such as going up and down stairs. However, humanoid robots often fall from imbalance, and whether a robot can stand up on its own after a fall is a key research issue. The commonly used method of hand-tuning motions so that robots can stand on their own is very inefficient. To solve these problems, this paper proposes an automatic learning system based on Particle Swarm Optimization (PSO). The system allows the robot to learn the motion of rebalancing after a fall. To give the robot object recognition capability, this paper also applies a Convolutional Neural Network (CNN), letting the robot perform image recognition and successfully distinguish between 10 types of objects. The effectiveness and feasibility of the proposed motion learning algorithm and the CNN-based image classification for the vision system are confirmed in the experimental results.
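As a sketch of the learning component, here is a generic particle swarm optimization loop of the kind the abstract describes; the fitness function (for example, a simulated balance score for a candidate stand-up motion) is an assumed placeholder.

import numpy as np

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = np.random.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, dim)
        # Pull each particle toward its personal best and the global best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([fitness(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest  # best motion parameter vector found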
27

Ravankar, Abhijeet, Ankit Ravankar, Michiko Watanabe and Yohei Hoshino. "An Efficient Algorithm for Cleaning Robots Using Vision Sensors". Proceedings 42, no. 1 (November 14, 2019): 45. http://dx.doi.org/10.3390/ecsa-6-06578.

Abstract:
In recent years, cleaning robots like the Roomba have gained popularity. These cleaning robots have limited battery power, so efficient cleaning is important, and efforts are being undertaken to improve their efficiency. Most previous works have used on-robot cameras, developed dirt-detection sensors mounted on the cleaning robot, or built a map of the environment to clean periodically. However, a critical limitation of all the previous works is that robots cannot know whether the floor is clean unless they actually visit that place; timely information about whether the room needs cleaning is therefore not available. To overcome such limitations, we propose a novel approach that uses external cameras which can communicate with the robots. The external cameras are fixed in the room and detect through image processing whether the floor is untidy, along with the exact areas and coordinates of the portions of the floor that must be cleaned. This information is communicated to the cleaning robot through a wireless network. Thus, cleaning robots have access to a 'bird's-eye view' of the environment for efficient cleaning. In this paper, we demonstrate dirt detection using an external camera, and communication with the robot, in actual scenarios.
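A minimal sketch of the external-camera idea, assuming a stored clean-floor reference image and a JSON message format (both assumptions, not the authors' protocol): dirty regions are found by frame differencing, and their bounding boxes are serialized for the robot.

import cv2
import json

def dirty_regions_message(clean_bgr, current_bgr, thresh=40, min_area=200):
    diff = cv2.absdiff(cv2.cvtColor(clean_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    # (x, y, w, h) boxes in image coordinates, e.g. sent to the robot over a socket.
    return json.dumps({"dirty": boxes})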
28

Fujita, Toyomi, Takayuki Tanaka, Satoru Takahashi, Hidenori Takauji and Shun’ichi Kaneko. "Special Issue on Vision and Motion Control". Journal of Robotics and Mechatronics 27, no. 2 (April 20, 2015): 121. http://dx.doi.org/10.20965/jrm.2015.p0121.

Abstract:
Robot vision is an important robotics and mechatronics technology for realizing intelligent robot systems that work in the real world. Recent improvements in computer processing enable the environment to be recognized and robots to be controlled based on dynamic, high-speed, highly accurate image information. In industrial applications, target objects are detected much more robustly and reliably through high-speed processing. In intelligent systems applications, security systems that detect human beings have recently been applied widely in computer vision. Another attractive application is recognizing actions and gestures by detecting humans, an application that would enable human beings and robots to interact and cooperate more smoothly, with robots observing and assisting human partners. This key technology could be used for aiding the elderly and handicapped in practical environments such as hospitals and homes. This special issue covers topics on robot vision and motion control, including dynamic image processing. These articles are certain to be both informative and interesting to robotics and mechatronics researchers. We thank the authors for submitting their work and for assisting during the review process. We also thank the reviewers for their dedicated time and effort.
29

Bao, Chenxi, Yuanfan Hu and Zhuoqi Yu. "Current study on multi-robot collaborative vision SLAM". Applied and Computational Engineering 35, no. 1 (January 22, 2024): 80–88. http://dx.doi.org/10.54254/2755-2721/35/20230367.

Abstract:
Simultaneous Localization and Mapping (SLAM) stands as a vital technology for automatic control of robots. The significance of vision-based multi-robot collaborative SLAM technology is noteworthy in this domain, because visual SLAM uses cameras as the main sensor, which offers the benefits of easy access to environmental information and convenient installation. And the multi-robot system has the advantages of high efficiency, high fault tolerance, and high precision, so the multi-robot system can work in a complex environment and ensure its mapping efficiency, these may be a challenge for a single robot. This paper introduces the principles and common methods of visual SLAM, as well as the main algorithms of multi-robot collaborative SLAM. This paper analyzed the main problems existing in the current multi-robot collaborative visual SLAM technology: multi-robot SLAM task allocation, map fusion and back-end optimization. Then this paper listed different solutions, and analyzed their advantages and disadvantages. In addition, this paper also introduces some future research prospects of multi-robot collaborative visual SLAM technology, aiming to provide a reference direction for subsequent research in related fields.
30

Idesawa, Masanori, Yasushi Mae and Junji Oaki. "Special Issue on Robot Vision - Vision for Action -". Journal of Robotics and Mechatronics 21, no. 6 (December 20, 2009): 671. http://dx.doi.org/10.20965/jrm.2009.p0671.

Abstract:
Robot vision is a key technology in robotics and mechatronics for realizing intelligent robot systems that work in the real world. The time and effort that robot vision algorithms once required in real-world applications delayed their dissemination, until recent rapid improvements in computer speed made new forms possible. Now the day is coming when robot vision may surpass human vision in many applications. This special issue presents 13 papers on the latest robot vision achievements and their applications. The first two propose ways of measuring and modeling 3D objects in everyday environments. Four more detail object detection and tracking, including visual servoing. Three propose advances in hand feature extraction and pose calculation, and one treats video coding for visual sensor networks. Two papers discuss robot vision applications based on human visual physiology, and the last clarifies an application in optical force sensors. We thank the authors for their invaluable contributions to this issue and the reviewers for their generous time and effort. Last, we thank the Editorial Board of JRM for making this issue possible.
31

Wahrini, Retyana. "Development of Color-Based Object Follower Robot Using Pixy 2 Camera and Arduino to Support Robotics Practice Learning". Jurnal Edukasi Elektro 7, no. 2 (November 30, 2023): 161–68. http://dx.doi.org/10.21831/jee.v7i2.64413.

Abstract:
The purpose of this study is to determine how a robot vision system works, the level of feasibility of the system as learning media in the robotics practice course, and student responses in the Mechatronics Vocational Education study program. This research is R&D research with a 4D development model (Define, Design, Development, and Disseminate). The resulting vision robot follows objects based on color using a Pixy 2 camera programmed in the PixyMon application; the robot follows the more dominant object. The feasibility of the vision robot as learning media was determined by validation from media experts and material experts. Based on the media expert validation, the robot vision system overall was declared very feasible, with a percentage of 86.7%. The material expert validation showed that, overall, the robot vision companion guidebook falls into the very feasible category, with a percentage of 87.5%. From the assessment by students across all aspects, it can be concluded that the robot vision system is in the very feasible category, with a percentage of 90.8%.
32

HASHIMOTO, Manabu. "Robot Vision for Natural Robot Motion". Journal of the Japan Society for Precision Engineering 87, no. 1 (January 5, 2021): 30–33. http://dx.doi.org/10.2493/jjspe.87.30.
33

Wang, Yi, Fengrui Qu, Xijun Wang, Songtao Zeng, Qizhen Sun, Mengyang Li and Jiafei Ge. "Research on vision control methods based on multi-sensor fusion". Journal of Physics: Conference Series 2708, no. 1 (February 1, 2024): 012011. http://dx.doi.org/10.1088/1742-6596/2708/1/012011.

Abstract:
This paper focuses on the application scenarios of electric power live line working robots, addressing tasks such as live wire stripping and live wire connection/disconnection. It utilizes Mixed Reality (MR) technology for rapid environment modeling and visual signal acquisition. Through the research on robot contact force sensing, real-time force feedback, and obstacle avoidance through the fusion of robot motion paths and environmental data, it establishes a novel electric power live line working robot system centered around dexterous dual arms at the operation end, force feedback teleoperation controllers, multi-sensor fusion acquisition systems, and robot motion planning and control technology. Through prototyping, a demonstration application is realized, showcasing a semi-autonomous electric power live line working robot system that employs remote operation combined with sensor-based MR for task execution. This teleoperation and Mixed Reality-based semi-autonomous electrified operation approach is poised to play a significant role in advancing and demonstrating the use of robots in the field of electrical grid operations.
34

Boopalan, S., G. Narendra Prasad, C. Swaraj Paul, Kumar Narayanan and R. Anandan. "Fire fighting robot with vision camera and gas sensors". International Journal of Engineering & Technology 7, no. 2.21 (April 20, 2018): 348. http://dx.doi.org/10.14419/ijet.v7i2.21.12404.

Abstract:
More than 150 deaths occur yearly in firefighting operations, most of them due to poisonous gases present in the firefighting areas. If we can inform firefighters about the presence of such gases and hazardous situations, we can save their lives. Nowadays robots play a vital part in industry and in the everyday life of human beings; robots are assemblies of different parts made by humans in order to complete work with greater accuracy. This paper presents a fire-fighting robot with gas sensors. The design is low-cost and versatile, and it can be used for industrial and domestic purposes. The important parts involved in the construction of the robot are gas sensors, a water tank, a wireless remote, and a wireless Android device with a Wi-Fi-enabled camera. A wireless robot works effectively, making it possible to control the robot from a remote location. Keeping all the above factors in mind, a robot capable of being remotely controlled, with live video buffering through a multimedia interface, was conceived and developed.
35

Floreano, Dario, Mototaka Suzuki and Claudio Mattiussi. "Active Vision and Receptive Field Development in Evolutionary Robots". Evolutionary Computation 13, no. 4 (December 2005): 527–44. http://dx.doi.org/10.1162/106365605774666912.

Abstract:
In this paper, we describe the artificial evolution of adaptive neural controllers for an outdoor mobile robot equipped with a mobile camera. The robot can dynamically select the gazing direction by moving the body and/or the camera. The neural control system, which maps visual information to motor commands, is evolved online by means of a genetic algorithm, but the synaptic connections (receptive fields) from visual photoreceptors to internal neurons can also be modified by Hebbian plasticity while the robot moves in the environment. We show that robots evolved in physics-based simulations with Hebbian visual plasticity display more robust adaptive behavior when transferred to real outdoor environments as compared to robots evolved without visual plasticity. We also show that the formation of visual receptive fields is significantly and consistently affected by active vision as compared to the formation of receptive fields with grid sample images in the environment of the robot. Finally, we show that the interplay between active vision and receptive field formation amounts to the selection and exploitation of a small and constant subset of visual features available to the robot.
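The Hebbian plasticity mentioned here is commonly implemented with a normalized rule such as Oja's. A one-function sketch under that assumption (the paper's exact rule, activation, and learning rate may differ):

import numpy as np

def hebbian_update(weights, photoreceptors, eta=0.01):
    # Oja-style rule: correlated activity grows a synapse; the -y*w term
    # keeps the receptive field (weight vector) bounded.
    y = np.tanh(weights @ photoreceptors)  # internal neuron activation
    return weights + eta * y * (photoreceptors - y * weights)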
36

Albustanji, Rand N., Shorouq Elmanaseer and Ahmad A. A. Alkhatib. "Robotics: Five Senses plus One—An Overview". Robotics 12, no. 3 (May 4, 2023): 68. http://dx.doi.org/10.3390/robotics12030068.

Abstract:
Robots can be equipped with a range of senses to allow them to perceive and interact with the world in a more natural and intuitive way. These senses can include vision, hearing, touch, smell, and taste. Vision allows the robot to see and recognize objects and navigate its environment. Hearing enables the robot to recognize sounds and respond to vocal commands. Touch allows the robot to perceive information about the texture, shape, and temperature of objects through the sense of touch. Smell enables the robot to recognize and classify different odors. Taste enables the robot to identify the chemical composition of materials. The specific senses used in a robot will depend on the needs of the application, and many robots use a combination of different senses to perceive and interact with the environment. This paper reviews the five senses used in robots, their types, how they work, and other related information, while also discussing the possibility of a Sixth Sense.
37

Kühnlenz, Kolja, and Martin Buss. "Multi-Focal Vision and Gaze Control Improve Navigation Performance". International Journal of Advanced Robotic Systems 9, no. 1 (January 1, 2012): 25. http://dx.doi.org/10.5772/50920.

Abstract:
Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is done considering a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.
38

Kim, Bong Keun, and Yasushi Sumi. "Vision-Based Safety-Related Sensors in Low Visibility by Fog". Sensors 20, no. 10 (May 15, 2020): 2812. http://dx.doi.org/10.3390/s20102812.

Abstract:
Mobile service robots are expanding their use to outdoor areas affected by various weather conditions, but the outdoor environment directly affects the functional safety of robots implemented with vision-based safety-related sensors (SRSs). This paper therefore takes fog as the robot's environmental condition and examines the relationship between the quantified value of that condition and the functional safety performance of the robot. To this end, the safety functions of a robot built using SRSs, and the requirements for the outdoor environment affecting them, are described first. A method of controlling visibility for evaluating the safety function of SRSs is described, through the measurement and control of visibility, a quantitative means of expressing the concentration of fog, and through wavelength analysis of various SRS light sources. Finally, object recognition experiments using vision-based SRSs for robots are conducted at low visibility. These experiments verify that the proposed method is a specific and effective way of verifying the functional safety of a robot using vision-based SRSs under low-visibility environmental requirements.
39

Wang, Yingxu. "On Theoretical Foundations of Human and Robot Vision". Journal of Physics: Conference Series 2278, no. 1 (May 1, 2022): 012001. http://dx.doi.org/10.1088/1742-6596/2278/1/012001.

Abstract:
A set of cognitive, neurological, and mathematical theories for human and robot vision has been recognized that encompasses David Hubel’s hypercolumn vision theory (The Nobel Prize in Physiology or Medicine 1981 [1]) and Dennis Gabor’s wavelet filter theory (The Nobel Prize in Physics 1971 [2]). This keynote lecture presents a theoretical framework of the Cognitive Vision Theory (CVT) [3-6] and its neurological and mathematical foundations. A set of Intelligent Mathematics (IM) [7-13] and formal vision theories developed in my laboratory is introduced encompassing Image Frame Algebra (IFA) [3], Visual Semantic Algebra (VSA) [4], and the Spike Frequency Modulation (SFM) theory [5]. IM is created for enabling cognitive robots to gain autonomous vision cognition capability supported by Visual Knowledge Bases (VKBs). Paradigms and case studies of robot vision powered by CVTs and IM will be demonstrated. The basic research on CVTs has led to new perspectives to human and robot vision for developing novel image processing applications in AI, neural networks, image recognitions, sequence learning, computational intelligence, self-driving vehicles, unmanned systems, and robot navigations.
40

Yu, Hui Jun, Cai Biao Chen, Wan Wu and Zhi Wei Zhou. "Research of Application on Robot Vision with SQI Algorithms Based on Retinex". Applied Mechanics and Materials 675-677 (October 2014): 1358–62. http://dx.doi.org/10.4028/www.scientific.net/amm.675-677.1358.

Abstract:
Troubleshooting and safety monitoring in underground mines by manual operation often carry security risks, so the trend of replacing manual operation with robots is increasing. In robot vision navigation, image enhancement is the most critical pretreatment stage of automatic target recognition when extracting and matching image features. The contour enhancement effect of the multiscale Retinex algorithm is very good, and it plays a key role in processing the robot's visual scene images. For path identification by underground mine mobile robots in machine vision navigation, this paper proposes an application of robot vision with SQI algorithms based on Retinex, addressing the poor real-time performance of the original approach and its susceptibility to light interference.
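For reference, multiscale Retinex, the enhancement technique this paper builds on, subtracts a log-domain Gaussian estimate of the illumination at several scales. A compact sketch with commonly used default scales, not the paper's parameters:

import cv2
import numpy as np

def multiscale_retinex(gray, sigmas=(15, 80, 250)):
    img = gray.astype(np.float32) + 1.0      # avoid log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        blur = cv2.GaussianBlur(img, (0, 0), sigma)  # illumination estimate
        out += np.log(img) - np.log(blur)            # reflectance at this scale
    out /= len(sigmas)
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)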
41

Bingol, Mustafa Can, and Omur Aydogmus. "Practical application of a safe human-robot interaction software". Industrial Robot: the international journal of robotics research and application 47, no. 3 (January 16, 2020): 359–68. http://dx.doi.org/10.1108/ir-09-2019-0180.

Abstract:
Purpose: Because of the increased use of robots in industry, it has become inevitable for humans and robots to work together, and human safety has become the primary, non-negotiable factor in joint human-robot operations. For this reason, the purpose of this study was to develop safe human-robot interaction software based on vision and touch. Design/methodology/approach: The software consists of three modules. First, the vision module has two tasks: to determine whether there is a human presence and to measure the distance between the robot and the human within the robot's working space, using convolutional neural networks (CNNs) and depth sensors. Second, the touch-detection module perceives whether or not a human physically touches the robot within the same work environment, using robot axis torques, a wavelet packet decomposition algorithm, and a CNN. Last, the robot's operating speed is adjusted by the robot's control module according to the hazard levels coming from the vision and touch modules. Findings: The developed software was tested with an industrial robot manipulator, and successful results were obtained with minimal error. Practical implications: The success of the developed algorithm was demonstrated in the current study, and the algorithm can be used in other industrial robots for safety. Originality/value: In this study, a new and practical safety algorithm is proposed that safeguards the health of people working with industrial robots.
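The touch-detection front end can be sketched as wavelet packet band energies computed from a window of joint-torque samples (here with the PyWavelets library); the wavelet family, decomposition depth, and energy features are assumptions, and the paper feeds such features to a CNN classifier.

import numpy as np
import pywt

def torque_band_energies(torque_window, wavelet="db4", level=3):
    # Decompose one axis-torque window into 2**level frequency bands.
    wp = pywt.WaveletPacket(data=torque_window, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(np.square(n.data)) for n in nodes])  # per-band energy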
42

Umeda, Takayuki, Kosuke Sekiyama and Toshio Fukuda. "Vision-Based Object Tracking by Multi-Robots". Journal of Robotics and Mechatronics 24, no. 3 (June 20, 2012): 531–39. http://dx.doi.org/10.20965/jrm.2012.p0531.

Abstract:
This paper proposes cooperative visual object tracking by a multi-robot system, in which robust cognitive sharing between robots is essential. Robots identify the object of interest using various types of information from the image recognition field. However, the most effective information for recognizing an object accurately is the difference between the object and its surrounding environment. We therefore propose two evaluation criteria, called ambiguity and stationarity, in order to select the best information. Although robots attempt to select the best available feature for recognition, recognition will fail if the background scene contains features very similar to the object of concern. To solve this problem, we introduce a scheme in which robots share the relation between landmarks and the object of interest, where the landmark information is generated autonomously. The experimental results show the effectiveness of the proposed multi-robot cognitive sharing.
43

Li, Zhaolu, Ning Xu, Xiaoli Zhang, Xiafu Peng and Yumin Song. "Motion Control Method of Bionic Robot Dog Based on Vision and Navigation Information". Applied Sciences 13, no. 6 (March 13, 2023): 3664. http://dx.doi.org/10.3390/app13063664.

Abstract:
With the progress of AI technology and industrial automation, AI robot dogs are widely used in engineering practice to replace human beings in high-precision and tedious industrial operations. Bionic robots easily produce control errors due to spatial disturbance factors in the process of pose determination, so robots must be calibrated accurately to improve their positioning control accuracy. Therefore, a robust control algorithm for bionic robots based on binocular vision navigation is proposed. An optical CCD binocular vision dynamic tracking system is used to measure the end position and pose parameters of the bionic robot, and a kinematics model of the controlled object is established. Taking the degree-of-freedom parameters of the robot's rotating joints as control constraints, a hierarchical subdimensional-space motion planning model of the robot is established. The binocular vision tracking method realizes adaptive correction of the position and posture of the bionic robot to achieve robust control. The simulation results show that the fitting error of the robot's end position and pose parameters is low and that the dynamic tracking performance is good when the method is used for positioning control of the bionic robot.
44

Rioux, Antoine, Claudia Esteves, Jean-Bernard Hayet and Wael Suleiman. "Cooperative Vision-Based Object Transportation by Two Humanoid Robots in a Cluttered Environment". International Journal of Humanoid Robotics 14, no. 03 (August 25, 2017): 1750018. http://dx.doi.org/10.1142/s0219843617500189.

Abstract:
Although in recent years, there have been quite a few studies aimed at the navigation of robots in cluttered environments, few of these have addressed the problem of robots navigating while moving a large or heavy object. Such a functionality is especially useful when transporting objects of different shapes and weights without having to modify the robot hardware. In this work, we tackle the problem of making two humanoid robots navigate in a cluttered environment while transporting a very large object that simply could not be moved by a single robot. We present a complete navigation scheme, from the incremental construction of a map of the environment and the computation of collision-free trajectories to the design of the control to execute those trajectories. We present experiments made on real NAO robots, equipped with RGB-D sensors mounted on their heads, moving an object around obstacles. Our experiments show that a significantly large object can be transported without modifying the robot main hardware, and therefore that our scheme enhances the humanoid robots capacities in real-life situations. Our contributions are: (1) a low-dimension multi-robot motion planning algorithm that finds an obstacle-free trajectory, by using the constructed map of the environment as an input, (2) a framework that produces continuous and consistent odometry data, by fusing the visual and the robot odometry information, (3) a synchronization system that uses the projection of the robots based on their hands positions coupled with the visual feedback error computed from a frontal camera, (4) an efficient real-time whole-body control scheme that controls the motions of the closed-loop robot–object–robot system.
45

Zeng, Rui, Yuhui Wen, Wang Zhao and Yong-Jin Liu. "View planning in robot active vision: A survey of systems, algorithms, and applications". Computational Visual Media 6, no. 3 (August 1, 2020): 225–45. http://dx.doi.org/10.1007/s41095-020-0179-3.

Abstract:
Rapid development of artificial intelligence motivates researchers to expand the capabilities of intelligent and autonomous robots. In many robotic applications, robots are required to make planning decisions based on perceptual information to achieve diverse goals in an efficient and effective way. The planning problem has been investigated in active robot vision, in which a robot analyzes its environment and its own state in order to move sensors to obtain more useful information under certain constraints. View planning, which aims to find the best view sequence for a sensor, is one of the most challenging issues in active robot vision. The quality and efficiency of view planning are critical for many robot systems and are influenced by the nature of their tasks, hardware conditions, scanning states, and planning strategies. In this paper, we first summarize some basic concepts of active robot vision, and then review representative work on systems, algorithms and applications from four perspectives: object reconstruction, scene reconstruction, object recognition, and pose estimation. Finally, some potential directions are outlined for future work.
46

Liu, Zhanming, Mingyuan Xu and Yuanpei Zhang. "Perspective Of Vision, Motion Planning, And Motion Control for Quadruped Robots". Highlights in Science, Engineering and Technology 38 (March 16, 2023): 902–16. http://dx.doi.org/10.54097/hset.v38i.5976.

Abstract:
In recent years, robot technology has made great progress, especially for quadruped robots. A quadruped robot is a bionic robot that imitates the movement of four-legged animals. For quadruped machines in complex environments, our group first investigated the control of quadruped robots in complex situations. We found that current quadruped robots can achieve good motion control under various complex conditions, but limitations remain. The structure of such a robot includes the trunk and four legs located at the front and rear of the trunk; each leg has the same structure, including the thigh, calf, and foot. This paper summarizes research on foot structure design and foot-tip trajectory optimization for quadruped robots. Machine vision, a rapidly developing branch of artificial intelligence, uses machines instead of human eyes to make measurements and judgments. For the machine vision part of quadruped robots, we discuss two main kinds of algorithms, comparing them analytically with others and explaining why these two were chosen. By summarizing and analyzing the state of research, this paper proposes some challenging and valuable future research directions.
47

Akmal, Revanza Akmal Pradipta, Ryan Yudha Adhitya, Zindhu Maulana Ahmad Putra, Ii'Munadhif, Mohammad Abu Jami’in and Muhammad Khoirul Hasin. "Identifikasi Warna Bola dan SILO Menggunakan Metode You Only Look Once Pada Robot Pengambil Bola". Jurnal Elektronika dan Otomasi Industri 11, no. 1 (May 31, 2024): 210–17. http://dx.doi.org/10.33795/elkolind.v11i1.5151.

Abstract:
Robots are a modern technology that is developing very rapidly. Robots come in several types, and one of them is the vision robot, which uses image processing and computer vision to carry out its tasks, including recognizing objects and controlling robot movements. The concept is taken from the 2024 ABU (Asia-Pacific Broadcasting Union) robot contest, in which the robot is required to pick up red and blue balls and then place them in a container called a SILO. The detection method used in this study is YOLO (You Only Look Once), version YOLOv5, chosen because it is more accurate than previous versions and lighter to run than newer ones. The results show that the confidence value for each object is quite high, at 0.90. In addition, precision against recall for all classes reaches 0.991, or 99.1%; for the ball and SILO classes, the precision against recall is 0.993, or 99.3%. It can be concluded that this method performs well in terms of detection and accuracy.
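A minimal YOLOv5 inference sketch matching the setup described above, using the public ultralytics/yolov5 torch.hub entry point; the custom weights file for the ball/SILO classes is a placeholder assumption.

import torch

# Load custom ball/SILO weights (hypothetical file trained separately).
model = torch.hub.load("ultralytics/yolov5", "custom", path="ball_silo.pt")
model.conf = 0.90  # confidence threshold in line with the reported value

results = model("frame.jpg")             # accepts a file, URL, or numpy image
detections = results.pandas().xyxy[0]    # one row per box: class, confidence
print(detections[["name", "confidence"]])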
48

Xu, Jin. "RESEARCH ON TARGET RECOGNITION OF COMBAT ROBOT BASED ON OPENCV". EPH - International Journal of Science And Engineering 7, no. 3 (September 27, 2021): 12–21. http://dx.doi.org/10.53555/ephijse.v7i3.187.

Abstract:
With the wide application of robots in various fields of human life, research on improving the intelligence level of robots is of great concern to scholars at home and abroad. As an important part of the intelligent robot, robot vision has received more and more attention in recent years. Computer vision technology uses image processing to give a camera visual functions analogous to those of the human eye: identification, judgment, feature detection, tracking, and so on. A computer vision system is a complete artificial intelligence system that can extract the needed information from plane images or three-dimensional image data. OpenCV, a cross-platform, open-source computer vision library, plays an important role in this field. The main purpose of this study is to give the camera of a combat robot vision through an OpenCV-based detection system that detects and tracks moving objects in the line of sight, based on feature recognition and matching of the object.
49

Dong, Qin. "Path Planning Algorithm Based on Visual Image Feature Extraction for Mobile Robots". Mobile Information Systems 2022 (July 1, 2022): 1–9. http://dx.doi.org/10.1155/2022/4094472.

Abstract:
When autonomous mobile robots plan their own movements, they first need to perceive the surrounding environment and then make comprehensive decisions based on that information; this is path planning. Vision can provide abundant and complete environmental information for robots, and introducing it into the path planning of autonomous robots can significantly improve the planning results. In this article, we take the autonomous mobile robot AS-R as the research object and use the multisensors attached to the robot, such as a gimbal camera and an ultrasonic sensor, to study the navigation line information perceived by machine vision, the obstacle information sensed by the range sensor, and the fusion of multisensor information. This addresses the image processing, path recognition, information fusion, decision control, and other related problems of the mobile robot, realizing autonomous mobility.
50

Song, Rui, Fengming Li, Tianyu Fu and Jie Zhao. "A Robotic Automatic Assembly System Based on Vision". Applied Sciences 10, no. 3 (February 8, 2020): 1157. http://dx.doi.org/10.3390/app10031157.

Abstract:
At present, the production lines of mobile phones are mainly manual and semi-automatic. Robots are the most important tools used to improve the intelligence level of industry production. The design of an intelligent robotic assembly system is presented in this paper, which takes the assembly of screen components and shell components as examples. There are major factors restricting the application of robots, such as the calibration of diversified objects, the moving workpiece with incomplete information, and diverse assembly types. A method was proposed to solve these technological difficulties. The multi-module calibration method is used to transform the pose relationship between the robot, the conveyor belt, and the two cameras in the robot assembly system. Aiming at a workpiece moving with incomplete information, the minimum matrix method is proposed to detect the position. Then dynamic fetching is realized based on pose estimation of the moving workpiece. At last, the template matching method is used to identify the assembly area of the workpiece type. The proposed method was verified on a real platform with a LINKHOU LR4-R5 60 robot. Results showed that the assembly success rate was above 98%, and the intelligent assembly system of the robot can realize the assembly of mobile phone screens and back shells without any staff.
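The template matching step mentioned for identifying the assembly area is a standard OpenCV operation; a short sketch with grayscale inputs and an assumed score threshold:

import cv2

def locate_assembly_area(scene_gray, template_gray, min_score=0.8):
    # Normalized cross-correlation; the best match is the maximum response.
    result = cv2.matchTemplate(scene_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    if score < min_score:
        return None
    h, w = template_gray.shape
    return top_left[0], top_left[1], w, h  # x, y, width, height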