To view the other types of publications on this topic, follow the link: Machine vision for robot guidance.

Journal articles on the topic "Machine vision for robot guidance"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Machine vision for robot guidance."

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in PDF format and read its online annotation, where these are available in the work's metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Pérez, Luis, Íñigo Rodríguez, Nuria Rodríguez, Rubén Usamentiaga, and Daniel García. "Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review." Sensors 16, no. 3 (March 5, 2016): 335. http://dx.doi.org/10.3390/s16030335.

2

Xue, Jinlin, Lei Zhang, and Tony E. Grift. "Variable field-of-view machine vision based row guidance of an agricultural robot." Computers and Electronics in Agriculture 84 (June 2012): 85–91. http://dx.doi.org/10.1016/j.compag.2012.02.009.

3

Ponnambalam, Vignesh Raja, Marianne Bakken, Richard J. D. Moore, Jon Glenn Omholt Gjevestad, and Pål Johan From. "Autonomous Crop Row Guidance Using Adaptive Multi-ROI in Strawberry Fields." Sensors 20, no. 18 (September 14, 2020): 5249. http://dx.doi.org/10.3390/s20185249.

Annotation:
Automated robotic platforms are an important part of precision agriculture solutions for sustainable food production. Agri-robots require robust and accurate guidance systems in order to navigate between crops and to and from their base station. Onboard sensors such as machine vision cameras offer a flexible guidance alternative to more expensive solutions for structured environments such as scanning lidar or RTK-GNSS. The main challenges for visual crop row guidance are the dramatic differences in appearance of crops between farms and throughout the season and the variations in crop spacing and contours of the crop rows. Here we present a visual guidance pipeline for an agri-robot operating in strawberry fields in Norway that is based on semantic segmentation with a convolution neural network (CNN) to segment input RGB images into crop and not-crop (i.e., drivable terrain) regions. To handle the uneven contours of crop rows in Norway’s hilly agricultural regions, we develop a new adaptive multi-ROI method for fitting trajectories to the drivable regions. We test our approach in open-loop trials with a real agri-robot operating in the field and show that our approach compares favourably to other traditional guidance approaches.
4

Jin, Xiao Jun, Yong Chen, Ying Qing Guo, Yan Xia Sun, and Jun Chen. "Tea Flushes Identification Based on Machine Vision for High-Quality Tea at Harvest." Applied Mechanics and Materials 288 (February 2013): 214–18. http://dx.doi.org/10.4028/www.scientific.net/amm.288.214.

Annotation:
Identifying tea flushes against their natural background is the first key step for an intelligent tea-picking robot. This paper focuses on algorithms for identifying tea flushes through color image analysis. A tea-flush identification system was developed to guide a robotic manipulator in picking high-quality tea. First, several color indices, including y-c, y-m, (y-c)/(y+c) and (y-m)/(y+m) in the CMY color space, the S channel in the HSI color space, and the U channel in the YUV color space, were studied and tested. These color indices enhanced and highlighted the tea flushes against their background. The grey-level image was then converted to a binary image using Otsu's method, and an area filter was employed to eliminate small noise regions. The algorithm and identification system have been tested extensively and proven to be well adapted to the complexity of a natural environment. Experiments show that these indices are particularly effective for tea-flush identification and could be used in future tea-picking robot development.
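As a rough, hypothetical sketch of the thresholding step described in the abstract above (not the authors' code), Otsu's method on a grey-level index image followed by a small-region area filter can be implemented with NumPy alone:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of an 8-bit grey-level image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                    # pixel count at or below t
    cum_m = np.cumsum(hist * np.arange(256))   # intensity mass at or below t
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t] / w0
        m1 = (cum_m[-1] - cum_m[t]) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def area_filter(binary, min_area):
    """Remove 4-connected foreground regions smaller than min_area pixels."""
    out = binary.copy()
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                stack, comp = [(i, j)], []
                seen[i, j] = True
                while stack:                    # flood fill one component
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_area:        # too small: treat as noise
                    for y, x in comp:
                        out[y, x] = False
    return out
```

On a synthetic index image with a bright 4×4 region plus a single noise pixel, `otsu_threshold` separates the two intensity modes and `area_filter` drops the isolated pixel while keeping the region.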
5

Han, Long, Xinyu Wu, Yongsheng Ou, Yen-Lun Chen, Chunjie Chen, and Yangsheng Xu. "Household Service Robot with Cellphone Interface." International Journal of Information Acquisition 09, no. 02 (June 2013): 1350009. http://dx.doi.org/10.1142/s0219878913500095.

Annotation:
In this paper, an efficient, low-cost, cellphone-commandable mobile manipulation system is described. Aimed at home and elderly care, the system can be commanded over an ordinary cellphone network to grasp objects in a household environment, using several low-cost off-the-shelf devices. Unlike visual servoing that relies on expensive high-quality vision systems, a household service robot cannot afford such a setup, so low-cost devices are essential. However, it is extremely challenging to use such vision for precise localization as well as motion control. To tackle this challenge, we developed a real-time vision system and present a reliable grasping algorithm that combines machine vision, robot kinematics, and motor control. After the target is captured by the arm camera, the arm camera keeps tracking the target while the arm stretches until the end effector reaches it. If the target is not captured by the arm camera, the arm moves to help the arm camera capture the target under the guidance of the head camera. The algorithm is implemented on two robot systems, one with a fixed base and one with a mobile base. The results demonstrate the feasibility and efficiency of the algorithm and system, and the study is of significance for developing service robots for the modern household.
6

Mavridou, Efthimia, Eleni Vrochidou, George A. Papakostas, Theodore Pachidis, and Vassilis G. Kaburlasos. "Machine Vision Systems in Precision Agriculture for Crop Farming." Journal of Imaging 5, no. 12 (December 7, 2019): 89. http://dx.doi.org/10.3390/jimaging5120089.

Annotation:
Machine vision for precision agriculture has attracted considerable research interest in recent years. The aim of this paper is to review the most recent work in the application of machine vision to agriculture, mainly for crop farming. This study can serve as a research guide for the researcher and practitioner alike in applying cognitive technology to agriculture. Studies of different agricultural activities that support crop harvesting are reviewed, such as fruit grading, fruit counting, and yield estimation. Moreover, plant health monitoring approaches are addressed, including weed, insect, and disease detection. Finally, recent research efforts considering vehicle guidance systems and agricultural harvesting robots are also reviewed.
7

Kanagasingham, Sabeethan, Mongkol Ekpanyapong, and Rachan Chaihan. "Integrating machine vision-based row guidance with GPS and compass-based routing to achieve autonomous navigation for a rice field weeding robot." Precision Agriculture 21, no. 4 (November 16, 2019): 831–55. http://dx.doi.org/10.1007/s11119-019-09697-z.

8

Zhang, Yibo, Jianjun Tang, and Hui Huang. "Motion Capture and Intelligent Correction Method of Badminton Movement Based on Machine Vision." Mobile Information Systems 2021 (July 30, 2021): 1–10. http://dx.doi.org/10.1155/2021/3256924.

Annotation:
In recent years, badminton has become increasingly popular in national fitness programs. Amateur badminton clubs have been established across the country, and amateur events at all levels have increased significantly. Due to a lack of proper medical supervision and health guidance, many players sustain injuries of varying degrees during play. It is therefore important to study machine-vision-based motion capture and intelligent correction for badminton, so as to provide safe and effective exercise plans for amateur enthusiasts. This article studies methods for capturing and intelligently correcting badminton movements. To address the weakness of the mean-shift algorithm, which easily loses the target under occlusion or background interference, this paper combines mean shift with a Kalman filter and proposes an improvement to the combined algorithm. The improved algorithm adds a calculation of the target's average velocity, which serves as the assumed target velocity during occlusion to predict the region where the target may appear at the next moment, and also acts as a criterion for judging whether the target is disturbed by the background. By incorporating the target's macroscopic motion information, the improved algorithm overcomes target loss under occlusion and background interference and improves tracking robustness. The system software of the tracking robot was written in the LabVIEW development environment, and experiments verified the soundness and correctness of the improved tracking algorithm and motion control method, which meet the real-time requirements of moving-target tracking. Experimental results show that 83% of amateur badminton players have problems with functional asymmetry and weak links. Based on machine vision technology, the approach can provide a reliable baseline reference for making training plans, effectively improve the quality and efficiency of movements, and promote the development of competitive performance.
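The occlusion-handling idea described in the abstract above, predicting the target's next position from its estimated velocity when the measurement drops out, can be sketched as a constant-velocity Kalman filter. This is an illustrative minimal version, not the authors' exact formulation:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over (x, y) image coordinates."""

    def __init__(self, dt=1.0):
        self.x = np.zeros(4)                    # state: [px, py, vx, vy]
        self.P = np.eye(4) * 500.0              # large initial uncertainty
        self.F = np.eye(4)                      # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))               # we only measure position
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01               # process noise
        self.R = np.eye(2) * 1.0                # measurement noise

    def predict(self):
        """Propagate the state; during occlusion this is the only step run."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                       # predicted (px, py)

    def update(self, z):
        """Fuse a position measurement z = (px, py) from the tracker."""
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

While the target is visible, each frame runs `predict()` then `update(z)` with the mean-shift position; when the target is occluded, `predict()` alone extrapolates the search region from the estimated velocity.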
9

Marshall, S. "Machine vision: Automated visual inspection and robot vision." Automatica 30, no. 4 (April 1994): 731–32. http://dx.doi.org/10.1016/0005-1098(94)90163-5.

10

Rovira-Más, F., Q. Zhang, J. F. Reid, and J. D. Will. "Machine Vision Based Automated Tractor Guidance." International Journal of Smart Engineering System Design 5, no. 4 (October 2003): 467–80. http://dx.doi.org/10.1080/10255810390445300.

11

Wágner, Petr. "Machine Vision for Robot-Soccer Application." IFAC Proceedings Volumes 42, no. 1 (2009): 161–66. http://dx.doi.org/10.3182/20090210-3-cz-4002.00034.

12

Yang Xichen, 杨洗陈, 张海明 Zhang Haiming, 刘立峰 Liu Lifeng, 方艳 Fang Yan, 董玲 Dong Ling, 高贵 Gao Gui, 刘美丽 Liu Meili, et al. "Machine Vision in Laser Remanufacturing Robot." Chinese Journal of Lasers 38, no. 6 (2011): 0601008. http://dx.doi.org/10.3788/cjl201138.0601008.

13

Jiménez Moreno, Robinson, Oscar Aviles, and Ruben Darío Hernández Beleño. "Humanoid Robot Cooperative System by Machine Vision." International Journal of Online Engineering (iJOE) 13, no. 12 (December 11, 2017): 162. http://dx.doi.org/10.3991/ijoe.v13i12.7594.

Annotation:
This article presents a supervised position-control system, based on image processing and oriented toward cooperative work between two humanoid robots operating autonomously. The first robot picks up an object and carries it to the second robot, which then places it at an endpoint; this is achieved through straight-line trajectories and 180-degree turns. A Microsoft Kinect is used to find the exact spatial position of each robot and of the reference object, through color-space conversion and filtering of the RGB camera data combined with information from the depth sensor. Algorithms developed in C# command each robot so that the two work together to transport the reference object from an initial point, handing it from one robot to the other and depositing it at the endpoint. The experiment was repeated over the same trajectory under uniform lighting conditions, each time achieving successful delivery of the object.
14

Oh, Je-Keun, Giho Jang, Semin Oh, Jeong Ho Lee, Byung-Ju Yi, Young Shik Moon, Jong Seh Lee, and Youngjin Choi. "Bridge inspection robot system with machine vision." Automation in Construction 18, no. 7 (November 2009): 929–41. http://dx.doi.org/10.1016/j.autcon.2009.04.003.

15

Ho, Chao Ching, Ming Chen Chen, and Chih Hao Lien. "Machine Vision-Based Intelligent Fire Fighting Robot." Key Engineering Materials 450 (November 2010): 312–15. http://dx.doi.org/10.4028/www.scientific.net/kem.450.312.

Annotation:
Designing a visual monitoring system to detect fire flames is a complex task because a large amount of video data must be transmitted and processed in real time. In this work, an intelligent fire detection and fighting system is proposed that uses machine vision to locate flame positions and to steer a mobile robot toward the fire source. This real-time fire monitoring system uses a motion-history detection algorithm to register possible fire positions in the transmitted video data and then analyzes the spectral, spatial, and temporal characteristics of the fire regions in the image sequences. The system is based on a visual-servoing feedback framework with portable components, off-the-shelf commercial hardware, and embedded programming. Experimental results show that the proposed intelligent system reliably detects the fire flame and extinguishes the fire source.
16

Mizuochi, Y., and M. Dohi. "Machine Vision for Vegetable Seedling Production Robot." Proceedings of JSME Annual Conference on Robotics and Mechatronics (Robomec) 2002 (2002): 86. http://dx.doi.org/10.1299/jsmermd.2002.86_4.

17

Yang, Long, and Nan Feng Xiao. "Robot Stereo Vision Guidance System Based on Attention Mechanism." Applied Mechanics and Materials 385-386 (August 2013): 708–11. http://dx.doi.org/10.4028/www.scientific.net/amm.385-386.708.

Annotation:
An attention mechanism is added to a traditional robot stereo vision system so that candidate workpiece positions are obtained quickly from a saliency image, greatly accelerating the computation. First, stereo calibration is performed to obtain the camera's intrinsic and extrinsic matrices. These parameter matrices are then used to rectify newly captured images; a disparity map is computed with the OpenCV library, while the saliency image is computed with the Itti algorithm. The workpiece's spatial pose in the left-camera coordinate frame is obtained by the triangulation measurement principle, and after a series of coordinate transformations its pose in world coordinates is obtained. Using the robot's inverse-kinematics solution, the joint rotation angles are computed to drive the robot. Finally, experimental results show the effectiveness of the method.
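The triangulation step mentioned in the abstract above, recovering a workpiece's position in the left-camera frame from a disparity value, follows the standard rectified-stereo relations. A minimal sketch (the calibration numbers in the example are made up, not taken from the paper):

```python
import numpy as np

def triangulate_point(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project pixel (u, v) with disparity d into left-camera 3D coordinates.

    Assumes a rectified stereo pair: depth Z = fx * B / d, then the usual
    pinhole back-projection X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy.
    """
    z = fx * baseline / disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

For instance, with fx = fy = 500 px, principal point (320, 240), a 0.1 m baseline, and a disparity of 50 px at pixel (420, 240), the point lies at (0.2, 0.0, 1.0) m in the left-camera frame; a further rigid transform would map it into world coordinates.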
18

Taha, Zahari, Jouh Yeong Chew, and Hwa Jen Yap. "Omnidirectional Vision for Mobile Robot Navigation." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 1 (January 20, 2010): 55–62. http://dx.doi.org/10.20965/jaciii.2010.p0055.

Annotation:
Machine vision has been widely studied, leading to the discovery of many image-processing and identification techniques. Together with this, rapid advances in computer processing speed have triggered a growing need for vision sensor data and faster robot response. In considering omnidirectional camera use in machine vision, we have studied omnidirectional image features in depth to determine correlation between parameters and ways to flatten 3-dimensional images into 2 dimensions. We also discuss ways to process omnidirectional images based on their individual features.
19

Chen, Yu Min, Xiao Li Wang, Chen Zhang, and Xian Min Meng. "On Stereoscopic Machine Vision with Limited Horizons." Applied Mechanics and Materials 538 (April 2014): 383–86. http://dx.doi.org/10.4028/www.scientific.net/amm.538.383.

Annotation:
This paper presents a path-planning algorithm based on a model built from 3D vision data. Using this model and a six-legged platform, we propose that the limited field of view should be considered in path planning for a robot with 3D vision. We also describe a machine learning method for analyzing a robot's obstacle-crossing capacity and form a vector to measure it. Based on the model, we designed an algorithm that allows the robot to navigate in a 3D environment. Observation of its behavior shows that our algorithm and model allow a robot to pass through random 3D terrain.
20

Ahn, H. S. "Vision-based magnetic heading sensor for mobile robot guidance." Electronics Letters 45, no. 16 (2009): 819. http://dx.doi.org/10.1049/el.2009.1477.

21

Widodo, Nuryono Satya, and Anggit Pamungkas. "Machine Vision-based Obstacle Avoidance for Mobile Robot." Jurnal Ilmiah Teknik Elektro Komputer dan Informatika 5, no. 2 (February 10, 2020): 77. http://dx.doi.org/10.26555/jiteki.v5i2.14767.

22

Blok, Pieter M., Ruud Barth, and Wim van den Berg. "Machine vision for a selective broccoli harvesting robot." IFAC-PapersOnLine 49, no. 16 (2016): 66–71. http://dx.doi.org/10.1016/j.ifacol.2016.10.013.

23

Tian, Subo, M. A. Ashraf, N. Kondo, T. Shiigi, and M. A. Momin. "Optimization of Machine Vision for Tomato Grafting Robot." Sensor Letters 11, no. 6 (June 1, 2013): 1190–94. http://dx.doi.org/10.1166/sl.2013.2899.

24

Zhang, Yanjun, Jianxin Zhao, and Heyong Han. "A 3D Machine Vision-Enabled Intelligent Robot Architecture." Mobile Information Systems 2021 (March 4, 2021): 1–11. http://dx.doi.org/10.1155/2021/6617286.

Annotation:
In this paper, the principle of camera imaging is studied, and the transformation model of camera calibration is analyzed. Based on Zhang Zhengyou’s camera calibration method, an automatic calibration method for monocular and binocular cameras is developed on a multichannel vision platform. The automatic calibration of camera parameters using human-machine interface of the host computer is realized. Based on the principle of binocular vision, a feasible three-dimensional positioning method for binocular target points is proposed and evaluated to provide binocular three-dimensional positioning of target in simple environment. Based on the designed multichannel vision platform, image acquisition, preprocessing, image display, monocular and binocular automatic calibration, and binocular three-dimensional positioning experiments are conducted. Moreover, the positioning error is analyzed, and the effectiveness of the binocular vision module is verified to justify the robustness of our approach.
25

Patil, Rupali, Adhish Velingkar, Mohammad Nomaan Parmar, Shubham Khandhar, and Bhavin Prajapati. "Machine Vision Enabled Bot for Object Tracking." JINAV: Journal of Information and Visualization 1, no. 1 (October 1, 2020): 15–26. http://dx.doi.org/10.35877/454ri.jinav155.

Annotation:
Object detection and tracking are essential and challenging tasks in many computer vision applications. To detect an object, the first step is to gather data. In this design, the robot can detect an object and track it, turning left and right and moving forward and backward depending on the object's motion, while maintaining a constant separation between the object and itself. We have designed a webpage that displays a live feed from the camera, which the user can control efficiently. Machine learning is used for detection, together with OpenCV and cloud storage. A pan-tilt mechanism, attached to our three-wheel chassis robot through servo motors, is used for camera control. This idea can be used for surveillance, monitoring local objects, and human-machine interaction.
26

Stein, Procópio Silveira, and Vítor Santos. "Visual Guidance of an Autonomous Robot Using Machine Learning." IFAC Proceedings Volumes 43, no. 16 (2010): 55–60. http://dx.doi.org/10.3182/20100906-3-it-2019.00012.

27

Pan, Zhi Guo. "Research on Automatic Cleaning Robot Based on Machine Vision." Applied Mechanics and Materials 539 (July 2014): 648–52. http://dx.doi.org/10.4028/www.scientific.net/amm.539.648.

Annotation:
The development and application of machine vision technology is greatly relieving human labor, raising the level of production automation, and improving everyday life, and it has very broad application prospects. The intelligent empty-bottle inspection robot studied in this paper is a typical application of machine vision in industrial inspection. The paper introduces the concept of machine vision, key technologies related to automatic cleaning robots, and applications of machine vision in production across many areas of life.
28

Heilbrun, M. Peter, Paul McDonald, Clay Wiker, Spencer Koehler, and William Peters. "Stereotactic Localization and Guidance Using a Machine Vision Technique." Stereotactic and Functional Neurosurgery 58, no. 1-4 (1992): 94–98. http://dx.doi.org/10.1159/000098979.

29

Padhen, Mithilesh P. "Human Detecting Robot based on Computer Vision - Machine Learning." International Journal for Research in Applied Science and Engineering Technology 8, no. 9 (September 30, 2020): 646–56. http://dx.doi.org/10.22214/ijraset.2020.31545.

30

Ashraf, M. A., S. Tian, N. Kondo, and T. Shigi. "Machine Vision to Inspect Tomato Seedlings for Grafting Robot." Acta Horticulturae, no. 1054 (October 2014): 309–16. http://dx.doi.org/10.17660/actahortic.2014.1054.37.

31

Wang, Zhongrui, and Zhongcheng Wang. "An Agricultural Spraying Robot Based on the Machine Vision." Applied Science and Innovative Research 1, no. 2 (June 6, 2017): 80. http://dx.doi.org/10.22158/asir.v1n2p80.

Annotation:
Accurate target spraying is a key technology in modern, intelligent agriculture. To address the problems of pesticide waste and poisoning during spraying, a spraying robot based on binocular machine vision is proposed in this paper. A digital signal processor is used to identify and locate tomatoes as well as to control the nozzle spray. A stereoscopic vision model is established, and the color normalization 2G-R-B is adopted for background segmentation between plants and soil. Depth information and circularity determine the nozzle's target, and the plant's shape area determines the amount of pesticide. Experiments show that the recognition rate of the spraying robot for tomatoes is up to 92.5%.
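The 2G-R-B color normalization named in the abstract above is the standard excess-green index for separating vegetation from soil. A minimal NumPy sketch (the threshold value here is illustrative, not taken from the paper):

```python
import numpy as np

def excess_green_mask(rgb, threshold=20):
    """Plant/soil segmentation with the 2G-R-B excess-green index.

    rgb: H x W x 3 uint8 array. Returns a boolean mask that is True
    where 2*G - R - B exceeds the threshold, i.e. green-dominant pixels.
    """
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))  # avoid uint8 wraparound
    return (2 * g - r - b) > threshold
```

A green-dominant pixel such as (R, G, B) = (40, 120, 30) scores 2·120 − 40 − 30 = 170 and is kept, while a brownish soil pixel like (120, 100, 80) scores 0 and is rejected.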
32

Kondo, Naoshi, Kazuya Yamamoto, Hiroshi Shimizu, Koki Yata, Mitsutaka Kurita, Tomoo Shiigi, Mitsuji Monta, and Takahisa Nishizu. "A Machine Vision System for Tomato Cluster Harvesting Robot." Engineering in Agriculture, Environment and Food 2, no. 2 (2009): 60–65. http://dx.doi.org/10.1016/s1881-8366(09)80017-7.

33

Huang, Wensheng, and Hongli Xu. "Development of six-DOF welding robot with machine vision." Modern Physics Letters B 32, no. 34n36 (December 30, 2018): 1840079. http://dx.doi.org/10.1142/s0217984918400791.

Annotation:
The application of machine vision to industrial robots is a hot topic in current robotics research. A welding robot with machine vision was developed whose six-degrees-of-freedom (DOF) manipulator reaches the welding point conveniently and flexibly, while singularities in its motion trail are prevented and the stability of the mechanism is fully guaranteed. A precision industrial camera captures the optical features of the workpiece on its CCD sensor, and the workpiece is identified and located through a visual pattern-recognition algorithm based on grey-scale processing, on the gradient direction of edge pixels, or on geometric elements, so that high-speed visual acquisition, image preprocessing, feature extraction and recognition, and target location are integrated and hardware processing power is improved. A further task is to plan the control strategy of the control system, and the host-computer software is programmed so that the multi-axis motion trajectory is optimized and servo control is accomplished. Finally, a prototype was developed, and validation experiments show that the welding robot achieves high stability, efficiency, and precision, even when welding joints are randomly placed and the workpiece contour is irregular.
34

Yin, Fangrui. "Inspection Robot for Submarine Pipeline Based on Machine Vision." Journal of Physics: Conference Series 1952, no. 2 (June 1, 2021): 022034. http://dx.doi.org/10.1088/1742-6596/1952/2/022034.

35

Gong, Fan, and Yu Mu Zhang. "Design of intelligent throwing robot based on machine vision." Journal of Physics: Conference Series 1939, no. 1 (May 1, 2021): 012004. http://dx.doi.org/10.1088/1742-6596/1939/1/012004.

36

Xia, Wen Tao, Yan Ying Wang, Zhi Gang Huang, Hao Guan, and Ping Cai Li. "Trajectory Control of Museum Commentary Robot Based on Machine Vision." Applied Mechanics and Materials 615 (August 2014): 145–48. http://dx.doi.org/10.4028/www.scientific.net/amm.615.145.

Annotation:
The design aims to make a museum robot move along a desired trajectory. A commentary robot in a museum can not only arouse visitors' curiosity but also save human resources. Furthermore, the robot's software system can be changed and upgraded according to the museum's operating situation to accomplish different trajectories in different spaces. A machine-vision tracking method is applied to the museum robot, which mainly uses a camera to seek marked objects in the proper order and to complete the designed trajectory movement.
37

Pretlove, J. R. G., and G. A. Parker. "The Surrey Attentive Robot Vision System." International Journal of Pattern Recognition and Artificial Intelligence 07, no. 01 (February 1993): 89–107. http://dx.doi.org/10.1142/s0218001493000066.

Annotation:
This paper presents the design and development of a real-time eye-in-hand stereo-vision system to aid robot guidance in a manufacturing environment. The stereo vision head comprises a novel camera arrangement with servo-vergence, focus, and aperture that continuously provides high-quality images to a dedicated image processing system and parallel processing array. The stereo head has four degrees of freedom but it relies on the robot end-effector for all remaining movement. This provides the robot with exploratory sensing abilities allowing it to undertake a wider variety of less constrained tasks. Unlike other stereo vision research heads, the overriding factor in the Surrey head has been a truly integrated engineering approach in an attempt to solve an extremely complex problem. The head is low cost, low weight, employs state-of-the-art motor technology, is highly controllable and occupies a small-sized envelope. Its intended applications include high-accuracy metrology, 3-D path following, object recognition and tracking, parts manipulation and component inspection for the manufacturing industry.
38

Zhang, Zhi Li, Ying Ying Song, Wei Dong Zhang, and Shuo Yin. "Research Humanoid Robot Walking Based on Vision-Guided." Applied Mechanics and Materials 496-500 (January 2014): 1426–29. http://dx.doi.org/10.4028/www.scientific.net/amm.496-500.1426.

Annotation:
Walking is a basic function of a humanoid robot, and this paper presents key ideas for stereo-vision-based humanoid walking. Image processing and pattern recognition techniques are employed for obstacle detection and object recognition, and data fitting is used to plan the robot's path. High-precision visual feedback is provided by combining real-time high-precision feature detection with a high-accuracy object detection method. The proposed stereo-vision approach and robot guidance system were evaluated partly by experiments and partly by simulation with the humanoid robot.
39

Cai, Lin. "Development and Design of Smart-Robot Image Transmission and Processing System Based on On-Line Control." Applied Mechanics and Materials 602-605 (August 2014): 813–16. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.813.

Annotation:
With the rapid development of network, communication, multimedia, and robot technology, networked robot control systems have gradually become a main direction of current robot research. A network-based robot is one that is operated remotely by the public through a public network: the idea is to integrate network technology with robot technology and to control the robot over the network. In networked robots, machine vision plays an increasingly important role. When controlling a robot in an unfamiliar environment, images serve an observational function: machine vision can not only recognize the robot's path from image features, but can also provide a visual understanding of the observed space, allowing an unfamiliar environment to be understood and the robot to be controlled. Image transmission and processing in networked robot control therefore essentially belongs to the field of robot vision. A robot's vision system is a machine vision system: it uses a computer to realize the human visual function, that is, to understand the objective three-dimensional world. This three-dimensional understanding refers to understanding the shape, size, texture, motion characteristics, and distance of observed objects, informing the design of the robot.
APA, Harvard, Vancouver, ISO and other citation styles
40

Inagaki, Shinkichi, Tatsuya Suzuki, Takahiro Ito and Wu Shidan. "Design of Autonomous/Man-Machine-Cooperative Mobile Robot". Journal of Robotics and Mechatronics 21, No. 2 (20.04.2009): 252–59. http://dx.doi.org/10.20965/jrm.2009.p0252.

Full text of the source
Annotation:
The control methodology we propose for a nonholonomic electric two-wheeled vehicle combines autonomous control and man-machine-cooperative control. “Autonomous control” is designed using time-state control to reduce vehicle deviation from a guidance line. “Man-machine-cooperative control” is designed using impedance control to generate power to assist user maneuvers. Experiments demonstrate the usefulness of our proposed design.
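The "man-machine-cooperative control" described above uses an impedance model to turn the user's applied force into assistive motion. A minimal sketch, with a hypothetical virtual mass M and damping B rather than the authors' tuned parameters:

```python
# Minimal sketch of impedance-based power assist: the virtual dynamics
# M*dv/dt + B*v = f_user generate an assist velocity from the user's
# force. M, B, and dt are illustrative values, not from the paper.

def impedance_step(v: float, f_user: float, M: float = 5.0, B: float = 2.0,
                   dt: float = 0.01) -> float:
    """One Euler step of M*dv/dt + B*v = f_user; returns the new velocity."""
    a = (f_user - B * v) / M
    return v + a * dt

# A constant 10 N push from rest converges toward f_user / B = 5 m/s
v = 0.0
for _ in range(5000):
    v = impedance_step(v, 10.0)
```

The steady-state velocity f_user / B shows why the damping term sets how strongly the vehicle "resists" the user, which is the tuning knob in this kind of assist design.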
APA, Harvard, Vancouver, ISO and other citation styles
41

S. I. Cho and N. H. Ki. "AUTONOMOUS SPEED SPRAYER GUIDANCE USING MACHINE VISION AND FUZZY LOGIC". Transactions of the ASAE 42, No. 4 (1999): 1137–43. http://dx.doi.org/10.13031/2013.20130.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
42

Ge, Jimin, Zhaohui Deng, Zhongyang Li, Wei Li, Lishu Lv and Tao Liu. "Robot welding seam online grinding system based on laser vision guidance". International Journal of Advanced Manufacturing Technology 116, No. 5-6 (03.07.2021): 1737–49. http://dx.doi.org/10.1007/s00170-021-07433-4.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
43

Lenz, Reiner. "Lie methods for color robot vision". Robotica 26, No. 4 (July 2008): 453–64. http://dx.doi.org/10.1017/s0263574707003906.

Full text of the source
Annotation:
SUMMARY: We describe how Lie-theoretical methods can be used to analyze color related problems in machine vision. The basic observation is that the nonnegative nature of spectral color signals restricts these functions to be members of a limited, conical section of the larger Hilbert space of square-integrable functions. From this observation, we conclude that the space of color signals can be equipped with a coordinate system consisting of a half-axis and a unit ball with the Lorentz groups as natural transformation group. We introduce the theory of the Lorentz group SU(1, 1) as a natural tool for analyzing color image processing problems and derive some descriptions and algorithms that are useful in the investigation of dynamical color changes. We illustrate the usage of these results by describing how to compress, interpolate, extrapolate, and compensate image sequences generated by dynamical color changes.
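As a minimal illustration of the group action mentioned above (with an illustrative group element, not one derived from the paper), an SU(1, 1) element with |a|² − |b|² = 1 acts on the unit-disk part of the color coordinate system by a Möbius map:

```python
# Minimal sketch: the action of an SU(1,1) element on the unit disk via
# the Moebius map z -> (a*z + b) / (conj(b)*z + conj(a)), with the
# constraint |a|^2 - |b|^2 = 1. The particular (a, b) are illustrative.

import cmath

def su11_act(a: complex, b: complex, z: complex) -> complex:
    assert abs(abs(a) ** 2 - abs(b) ** 2 - 1.0) < 1e-9, "not in SU(1,1)"
    return (a * z + b) / (b.conjugate() * z + a.conjugate())

# A "boost" along the real axis: a = cosh(t/2), b = sinh(t/2)
t = 0.8
a, b = cmath.cosh(t / 2), cmath.sinh(t / 2)
w = su11_act(a, b, 0.3 + 0.2j)
# The map sends the unit disk to itself, so |w| < 1; the origin is
# mapped to tanh(t/2), a real point inside the disk.
```

This disk-preserving property is what makes SU(1, 1) a natural transformation group for the unit-ball coordinate of the color-signal cone.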
APA, Harvard, Vancouver, ISO and other citation styles
44

Zhang, Chen, Xuewu Xu, Chen Fan and Guoping Wang. "Literature Review of Machine Vision in Application Field". E3S Web of Conferences 236 (2021): 04027. http://dx.doi.org/10.1051/e3sconf/202123604027.

Full text of the source
Annotation:
Aiming at the application and research of machine vision, its two main application areas, visual inspection and robot vision, are elaborated comprehensively and in detail. The composition, characteristics, and application advantages of machine vision systems are introduced, and, based on an analysis of the current state of research at home and abroad, the application trends of machine vision are discussed.
APA, Harvard, Vancouver, ISO and other citation styles
45

Wang, Jiwu, Huazhe Dou, Shunkai Zheng and Masanori Sugisaka. "Target Recognition based on Machine Vision for Industrial Sorting Robot". Journal of Robotics, Networking and Artificial Life 2, No. 2 (2015): 100. http://dx.doi.org/10.2991/jrnal.2015.2.2.7.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
46

Tsai, Du-Ming. "A three-dimensional machine-vision approach for automatic robot programming". Journal of Intelligent & Robotic Systems 12, No. 1 (March 1995): 23–48. http://dx.doi.org/10.1007/bf01258306.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
47

Xing, Si Ming, and Zhi Yong Luo. "Research on Wire-Plugging Robot System Based on Machine Vision". Applied Mechanics and Materials 275-277 (January 2013): 2459–66. http://dx.doi.org/10.4028/www.scientific.net/amm.275-277.2459.

Full text of the source
Annotation:
ADSL line testing in the telecommunications field is high-intensity work, yet current testing methods have low efficiency and cannot be automated. In this paper, a wire-plugging test robot system based on machine vision is designed to enable remote testing and automatic wire plugging, and to improve work efficiency. The system uses a dual-positioning method based on color-coded block recognition and visual locating. Color-coded blocks are recognized for coarse positioning of the socket, and the X-axis and Y-axis stepper motors are driven to move quickly to its vicinity. Video-based positioning is then used to locate the socket precisely. Once the socket is pinpointed, the X-axis and Y-axis motors align the plug with the socket, and the Z-axis motor is driven to perform the plugging action. After plugging is completed, the plug is reset to a safe position. Performance tests have shown that this wire-plugging test robot system can perform plug-testing tasks quickly and accurately, making it a stable wire-plugging device.
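The coarse-positioning step above can be sketched as color thresholding followed by a centroid computation. The NumPy snippet below uses hypothetical threshold values, not the system's actual color calibration:

```python
# Minimal sketch of coarse positioning: find the centroid of a
# color-coded block in an RGB image by thresholding each channel.
# The threshold bounds are illustrative, not the system's calibration.

import numpy as np

def color_block_centroid(img: np.ndarray, lo, hi):
    """Return the (row, col) centroid of pixels whose RGB lies in [lo, hi]."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    if not mask.any():
        return None  # no color-coded block found in view
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Toy image: a red 2x2 block at rows 4-5, cols 6-7
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[4:6, 6:8] = (200, 30, 30)
c = color_block_centroid(img, (150, 0, 0), (255, 80, 80))  # -> (4.5, 6.5)
```

The returned centroid would drive the X/Y steppers toward the socket's neighborhood before the finer video-based positioning takes over.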
APA, Harvard, Vancouver, ISO and other citation styles
48

Opiyo, Samwel, Cedric Okinda, Jun Zhou, Emmy Mwangi and Nelson Makange. "Medial axis-based machine-vision system for orchard robot navigation". Computers and Electronics in Agriculture 185 (June 2021): 106153. http://dx.doi.org/10.1016/j.compag.2021.106153.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
49

Noguchi, Noboru. "Agricultural Vehicle Robot". Journal of Robotics and Mechatronics 30, No. 2 (20.04.2018): 165–72. http://dx.doi.org/10.20965/jrm.2018.p0165.

Full text of the source
Annotation:
With the intensive application of techniques in global positioning, machine vision, image processing, sensor integration, and computing-based algorithms, vehicle automation is one of the most pragmatic branches of precision agriculture and has evolved from a concept into worldwide practice. This paper addresses the application of robot vehicles in agriculture using new technologies.
APA, Harvard, Vancouver, ISO and other citation styles
50

Ho, Chao Ching, You Min Chen, Tien Yun Chi and Tzu Hsin Kuo. "Machine Vision-Based Automatic Placement System for Solenoid Housing". Key Engineering Materials 649 (June 2015): 9–13. http://dx.doi.org/10.4028/www.scientific.net/kem.649.9.

Full text of the source
Annotation:
This paper proposes a machine vision-based, servo-controlled delta robotic system for solenoid housing placement. The system consists of a charge-coupled device camera and a delta robot. To begin the placement process, the solenoid housing targets inside the camera field were identified and used to guide the delta robot to the grabbing zone according to the calibrated homography transformation. To determine the angle of solenoid housing, image preprocessing was then implemented in order to rotate the target object to assemble with the solenoid coil. Finally, the solenoid housing was grabbed automatically and placed in the collecting box. The experimental results demonstrate that the proposed system can help to reduce operator fatigue and to achieve high-quality placements.
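The guidance step above, mapping a detected target from image coordinates into the robot's grabbing zone through a calibrated homography, can be sketched as follows; the matrix H is an illustrative stand-in for a real calibration result:

```python
# Minimal sketch of homography-based guidance: project a target's pixel
# coordinates (u, v) onto the robot's planar workspace. H below is an
# illustrative 3x3 calibration matrix, not the paper's.

import numpy as np

def pixel_to_plane(H: np.ndarray, u: float, v: float):
    """Apply homography H to pixel (u, v); return planar (x, y)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w  # divide out the homogeneous scale

# Toy calibration: uniform 0.5 mm/px scale plus a translation offset
H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0, 1.0]])
xy = pixel_to_plane(H, 100.0, 40.0)  # -> (60.0, 40.0)
```

In a real setup H would come from matching known calibration points between the camera image and the robot plane; the perspective division makes the same code valid even when the bottom row of H is not (0, 0, 1).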
APA, Harvard, Vancouver, ISO and other citation styles