Selection of scholarly literature on the topic "Machine vision for robot guidance"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Machine vision for robot guidance".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Machine vision for robot guidance"

1. Pérez, Luis, Íñigo Rodríguez, Nuria Rodríguez, Rubén Usamentiaga, and Daniel García. "Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review." Sensors 16, no. 3 (March 5, 2016): 335. http://dx.doi.org/10.3390/s16030335.

2. Xue, Jinlin, Lei Zhang, and Tony E. Grift. "Variable field-of-view machine vision based row guidance of an agricultural robot." Computers and Electronics in Agriculture 84 (June 2012): 85–91. http://dx.doi.org/10.1016/j.compag.2012.02.009.

3. Ponnambalam, Vignesh Raja, Marianne Bakken, Richard J. D. Moore, Jon Glenn Omholt Gjevestad, and Pål Johan From. "Autonomous Crop Row Guidance Using Adaptive Multi-ROI in Strawberry Fields." Sensors 20, no. 18 (September 14, 2020): 5249. http://dx.doi.org/10.3390/s20185249.

Abstract:
Automated robotic platforms are an important part of precision agriculture solutions for sustainable food production. Agri-robots require robust and accurate guidance systems in order to navigate between crops and to and from their base station. Onboard sensors such as machine vision cameras offer a flexible guidance alternative to more expensive solutions for structured environments such as scanning lidar or RTK-GNSS. The main challenges for visual crop row guidance are the dramatic differences in appearance of crops between farms and throughout the season and the variations in crop spacing and contours of the crop rows. Here we present a visual guidance pipeline for an agri-robot operating in strawberry fields in Norway that is based on semantic segmentation with a convolutional neural network (CNN) to segment input RGB images into crop and not-crop (i.e., drivable terrain) regions. To handle the uneven contours of crop rows in Norway's hilly agricultural regions, we develop a new adaptive multi-ROI method for fitting trajectories to the drivable regions. We test our approach in open-loop trials with a real agri-robot operating in the field and show that our approach compares favourably to other traditional guidance approaches.
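The adaptive multi-ROI step summarized above lends itself to a compact illustration: given the binary drivable-terrain mask produced by the segmentation network, split the image into horizontal ROIs, take the centroid of the drivable pixels in each ROI, and fit a guidance line through those centroids. A minimal NumPy sketch under those assumptions (the ROI count, the skip rule, and the straight-line fit are illustrative choices, not the authors' exact method):

```python
import numpy as np

def guidance_line_from_mask(drivable_mask: np.ndarray, n_rois: int = 8):
    """Fit a guidance line x = a*y + b through per-ROI centroids of the
    drivable (not-crop) region of a segmentation mask."""
    h = drivable_mask.shape[0]
    ys, xs = [], []
    for band in np.array_split(np.arange(h), n_rois):   # horizontal ROIs
        rows, cols = np.nonzero(drivable_mask[band[0]:band[-1] + 1])
        if cols.size == 0:            # ROI with no drivable pixels:
            continue                  # skip it (the "adaptive" part)
        ys.append(band[0] + rows.mean())
        xs.append(cols.mean())        # centroid of drivable pixels in this ROI
    if len(ys) < 2:
        raise ValueError("too few ROIs with drivable terrain to fit a line")
    a, b = np.polyfit(ys, xs, deg=1)  # least-squares line through the centroids
    return a, b
```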
4. Jin, Xiao Jun, Yong Chen, Ying Qing Guo, Yan Xia Sun, and Jun Chen. "Tea Flushes Identification Based on Machine Vision for High-Quality Tea at Harvest." Applied Mechanics and Materials 288 (February 2013): 214–18. http://dx.doi.org/10.4028/www.scientific.net/amm.288.214.

Abstract:
Identifying tea flushes against their natural background is the first key step for an intelligent tea-picking robot. This paper focuses on algorithms for identifying tea flushes based on color image analysis. A tea-flush identification system was developed as a means of guidance for a robotic manipulator picking high-quality tea. First, several color indices were studied and tested, including y-c, y-m, (y-c)/(y+c), and (y-m)/(y+m) in CMY color space, the S channel in HSI color space, and the U channel in YUV color space. These color indices enhanced and highlighted the tea flushes against their background. Afterwards, the grey-level image was transformed into a binary image using the Otsu method, and an area filter was then employed to eliminate small noise regions. The algorithm and identification system have been tested extensively and proven to be well adapted to the complexity of a natural environment. Experiments show that these indices are particularly effective for tea-flush identification and could be used in future tea-picking robot development.
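The processing chain in this abstract, a colour index followed by Otsu binarisation and an area filter, maps directly onto standard OpenCV calls. A sketch assuming a BGR input image and the "y - c" index; the minimum-area value is a placeholder:

```python
import cv2
import numpy as np

def segment_flushes(bgr: np.ndarray, min_area: int = 50) -> np.ndarray:
    """Colour index -> Otsu binarisation -> area filter, as in the abstract."""
    b, g, r = cv2.split(bgr.astype(np.int16))
    c, y = 255 - r, 255 - b                 # CMY is the complement of RGB
    index = y - c                           # the "y - c" colour index
    grey = cv2.normalize(index, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Area filter: drop connected components smaller than min_area pixels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    keep = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return np.where(np.isin(labels, keep), 255, 0).astype(np.uint8)
```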
5. Han, Long, Xinyu Wu, Yongsheng Ou, Yen-Lun Chen, Chunjie Chen, and Yangsheng Xu. "Household Service Robot with Cellphone Interface." International Journal of Information Acquisition 9, no. 2 (June 2013): 1350009. http://dx.doi.org/10.1142/s0219878913500095.

Abstract:
In this paper, an efficient and low-cost cellphone-commandable mobile manipulation system is described. Aimed at home and elderly care, the system can be commanded over a common cellphone network to grasp objects efficiently in a household environment, using several low-cost off-the-shelf devices. Unlike visual servoing built on a costly high-quality vision system, a household service robot cannot afford such a setup, so low-cost devices are essential; however, it is extremely challenging to use such vision for precise localization as well as motion control. To tackle this challenge, we developed a real-time vision system with a reliable grasping algorithm that combines machine vision, robot kinematics, and motor control. After the target is captured by the arm camera, the arm camera keeps tracking the target while the arm keeps stretching until the end effector reaches it. If the target is not captured by the arm camera, the arm moves to help the arm camera capture the target under the guidance of the head camera. The algorithm is implemented on two robot systems, one with a fixed base and one with a mobile base. The results demonstrate the feasibility and efficiency of the algorithm and system, and the study is of significance for developing service robots for modern household environments.
6. Mavridou, Efthimia, Eleni Vrochidou, George A. Papakostas, Theodore Pachidis, and Vassilis G. Kaburlasos. "Machine Vision Systems in Precision Agriculture for Crop Farming." Journal of Imaging 5, no. 12 (December 7, 2019): 89. http://dx.doi.org/10.3390/jimaging5120089.

Abstract:
Machine vision for precision agriculture has attracted considerable research interest in recent years. The aim of this paper is to review the most recent work in the application of machine vision to agriculture, mainly for crop farming. This study can serve as a research guide for the researcher and practitioner alike in applying cognitive technology to agriculture. Studies of different agricultural activities that support crop harvesting are reviewed, such as fruit grading, fruit counting, and yield estimation. Moreover, plant health monitoring approaches are addressed, including weed, insect, and disease detection. Finally, recent research efforts considering vehicle guidance systems and agricultural harvesting robots are also reviewed.
7. Kanagasingham, Sabeethan, Mongkol Ekpanyapong, and Rachan Chaihan. "Integrating machine vision-based row guidance with GPS and compass-based routing to achieve autonomous navigation for a rice field weeding robot." Precision Agriculture 21, no. 4 (November 16, 2019): 831–55. http://dx.doi.org/10.1007/s11119-019-09697-z.

8. Zhang, Yibo, Jianjun Tang, and Hui Huang. "Motion Capture and Intelligent Correction Method of Badminton Movement Based on Machine Vision." Mobile Information Systems 2021 (July 30, 2021): 1–10. http://dx.doi.org/10.1155/2021/3256924.

Abstract:
In recent years, badminton has become more and more popular in national fitness programs. Amateur badminton clubs have been established all over the country, and amateur badminton events at all levels have increased significantly. Due to the lack of proper medical supervision and health guidance, many people sustain injuries of varying degrees during sports. It is therefore important to study machine-vision-based motion capture and intelligent correction for badminton, so as to provide safe and effective exercise plans for amateur players. This article studies methods for capturing and intelligently correcting badminton movements. To address the mean shift algorithm's tendency to lose the target when it is occluded or the background interferes, this paper combines the mean shift algorithm with the Kalman filter algorithm and proposes an improvement to the combined algorithm. The improved algorithm adds a calculation of the target's average velocity, which can stand in for the target's velocity during occlusion to predict the area where the target may appear at the next moment, and which also serves as a criterion for judging whether the target is disturbed by the background. By incorporating the target's macroscopic motion information, the improved algorithm overcomes target loss under occlusion and background interference and improves the robustness of target tracking. The system software of the tracking robot was written in the LabVIEW development environment, and experiments verified the soundness and correctness of the improved target tracking algorithm and motion control method, which meets the real-time requirements of moving-target tracking. Experimental results show that 83% of amateur badminton players have problems with asymmetric function and weak links. Machine vision technology can provide a reliable baseline reference for making training plans, effectively improve the quality and efficiency of movements, and promote the development of competitive performance.
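The combination described here, mean shift for localisation plus a Kalman filter to carry the track through occlusion, can be expressed with OpenCV's building blocks. A generic sketch rather than the paper's improved algorithm; the occlusion flag, noise covariances, and window handling are placeholder assumptions:

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter on state (x, y, vx, vy); measurement is (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2    # placeholder tuning
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(back_proj, window, occluded):
    """One frame: mean shift while the target is visible, prediction when not."""
    pred = kf.predict()                                    # predicted state
    if occluded:
        # Target lost: re-centre the search window on the predicted position.
        x, y, w, h = window
        window = (int(pred[0, 0] - w / 2), int(pred[1, 0] - h / 2), w, h)
    else:
        crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
        _, window = cv2.meanShift(back_proj, window, crit)
        x, y, w, h = window
        kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
    return window
```

Here back_proj is the histogram back-projection of the current frame and window an (x, y, w, h) search box; the paper's average-velocity heuristic would replace the simple occluded flag.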
9. Marshall, S. "Machine vision: Automated visual inspection and robot vision." Automatica 30, no. 4 (April 1994): 731–32. http://dx.doi.org/10.1016/0005-1098(94)90163-5.

10. Rovira-Más, F., Q. Zhang, J. F. Reid, and J. D. Will. "Machine Vision Based Automated Tractor Guidance." International Journal of Smart Engineering System Design 5, no. 4 (October 2003): 467–80. http://dx.doi.org/10.1080/10255810390445300.


Dissertations on the topic "Machine vision for robot guidance"

1. Arthur, Richard B. "Vision-Based Human Directed Robot Guidance." Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.

2. Pearson, Christopher Mark. "Linear array cameras for mobile robot guidance." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318875.

3. Pretlove, John. "Stereoscopic eye-in-hand active machine vision for real-time adaptive robot arm guidance." Thesis, University of Surrey, 1993. http://epubs.surrey.ac.uk/843230/.

Abstract:
This thesis describes the design, development and implementation of a robot mounted active stereo vision system for adaptive robot arm guidance. This provides a very flexible and intelligent system that is able to react to uncertainty in a manufacturing environment. It is capable of tracking and determining the 3D position of an object so that the robot can move towards, and intercept, it. Such a system has particular applications in remotely controlled robot arms, typically working in hostile environments. The stereo vision system is designed on mechatronic principles and is modular, light-weight and uses state-of-the-art dc servo-motor technology. Based on visual information, it controls camera vergence and focus independently while making use of the flexibility of the robot for positioning. Calibration and modelling techniques have been developed to determine the geometry of the stereo vision system so that the 3D position of objects can be estimated from the 2D camera information. 3D position estimates are obtained by stereo triangulation. A method for obtaining a quantitative measure of the confidence of the 3D position estimate is presented which is a useful built-in error checking mechanism to reject false or poor 3D matches. A predictive gaze controller has been incorporated into the stereo head control system. This anticipates the relative 3D motion of the object to alleviate the effect of computational delays and ensures a smooth trajectory. Validation experiments have been undertaken with a Puma 562 industrial robot to show the functional integration of the camera system with the robot controller. The vision system is capable of tracking moving objects and the information this provides is used to update command information to the controller. The vision system has been shown to be in full control of the robot during a tracking and intercept duty cycle.
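The two numerical steps named in the abstract, stereo triangulation and a confidence measure for rejecting poor matches, can be sketched with OpenCV. Reprojection error is used below as a stand-in for the thesis's confidence measure, whose exact form the abstract does not give:

```python
import cv2
import numpy as np

def triangulate_with_confidence(P_left, P_right, x_left, x_right):
    """3D point from one matched pixel pair, plus a reprojection-error check.

    P_left, P_right: 3x4 projection matrices of the calibrated stereo head.
    x_left, x_right: matching (u, v) pixel coordinates in the two images.
    """
    xl = np.asarray(x_left, np.float64).reshape(2, 1)
    xr = np.asarray(x_right, np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, xl, xr)  # homogeneous 4x1
    X = (X_h[:3] / X_h[3]).ravel()                        # Euclidean 3D point

    def reproj_error(P, x):
        p = P @ np.append(X, 1.0)                         # project back to pixels
        return float(np.linalg.norm(p[:2] / p[2] - np.asarray(x, np.float64)))

    # A large error in either view suggests a false or poor stereo match.
    err = max(reproj_error(P_left, x_left), reproj_error(P_right, x_right))
    return X, err
```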
4. Grepl, Pavel. "Strojové vidění pro navádění robotu." Master's thesis, Vysoké učení technické v Brně, Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443727.

Abstract:
This master's thesis deals with the design, assembly, and testing of a camera system for localizing randomly placed and oriented objects on a conveyor belt, with the purpose of guiding a robot to those objects. The theoretical part surveys the individual components that make up a camera system and the field of 2D and 3D object localization. The practical part consists of two possible arrangements of the camera system, the realization of the chosen arrangement, the creation of test images, the programming of the image-processing algorithm, the creation of the HMI, and the testing of the complete system.
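For the task described, recovering the pose of arbitrarily placed parts on a conveyor and handing it to a robot, a common 2D recipe is contour analysis plus a pixel-to-robot homography. A sketch along those lines; the calibration points are placeholder values, and the thesis's actual algorithm may differ:

```python
import cv2
import numpy as np

# Pixel -> conveyor-plane homography from reference points whose robot-frame
# coordinates are known (all values below are placeholders).
px_pts = np.array([[102, 88], [531, 92], [526, 402], [98, 398]], np.float32)
rob_pts = np.array([[0.20, 0.10], [0.60, 0.10], [0.60, 0.40], [0.20, 0.40]], np.float32)
H, _ = cv2.findHomography(px_pts, rob_pts)

def locate_parts(binary: np.ndarray, min_area: float = 200.0):
    """Return (x, y, angle) per part: robot-frame centroid plus orientation."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    poses = []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_area:
            continue                                      # reject noise blobs
        (cx, cy), _, angle = cv2.minAreaRect(cnt)         # pixel centre + angle
        pt = cv2.perspectiveTransform(np.array([[[cx, cy]]], np.float32), H)[0, 0]
        poses.append((float(pt[0]), float(pt[1]), float(angle)))
    return poses
```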
5. Bohora, Anil R. "Visual robot guidance in time-varying environment using quadtree data structure and parallel processing." Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182282896.

6. Gu, Lifang. "Visual guidance of robot motion." University of Western Australia, Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.

Abstract:
Future robots are expected to cooperate with humans in daily activities. Efficient cooperation requires new techniques for transferring human skills to robots. This thesis presents an approach on how a robot can extract and replicate a motion by observing how a human instructor conducts it. In this way, the robot can be taught without any explicit instructions and the human instructor does not need any expertise in robot programming. A system has been implemented which consists of two main parts. The first part is data acquisition and motion extraction. Vision is the most important sensor with which a human can interact with the surrounding world. Therefore two cameras are used to capture the image sequences of a moving rigid object. In order to compress the incoming images from the cameras and extract 3D motion information of the rigid object, feature detection and tracking are applied to the images. Corners are chosen as the main features because they are more stable under perspective projection and during motion. A reliable corner detector is implemented and a new corner tracking algorithm is proposed based on smooth motion constraints. With both spatial and temporal constraints, 3D trajectories of a set of points on the object can be obtained and the 3D motion parameters of the object can be reliably calculated by the algorithm proposed in this thesis. Once the 3D motion parameters are available through the vision system, the robot should be programmed to replicate this motion. Since we are interested in smooth motion and the similarity between two motions, the task of the second part of our system is therefore to extract motion characteristics and to transfer these to the robot. It can be proven that the characteristics of a parametric cubic B-spline curve are completely determined by its control points, which can be obtained by the least-squares fitting method, given some data points on the curve. Therefore a parametric cubic B–spline curve is fitted to the motion data and its control points are calculated. Given the robot configuration the obtained control points can be scaled, translated, and rotated so that a motion trajectory can be generated for the robot to replicate the given motion in its own workspace with the required smoothness and similarity, although the absolute motion trajectories of the robot and the instructor can be different. All the above modules have been integrated and results of an experiment with the whole system show that the approach proposed in this thesis can extract motion characteristics and transfer these to a robot. A robot arm has successfully replicated a human arm movement with similar shape characteristics by our approach. In conclusion, such a system collects human skills and intelligence through vision and transfers them to the robot. Therefore, a robot with such a system can interact with its environment and learn by observation.
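The abstract's central numerical step, a least-squares cubic B-spline fit whose control points capture the motion's shape, can be reproduced with SciPy. A sketch assuming uniform parameterisation and a clamped knot vector; the control-point count is a free choice:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def fit_cubic_bspline(points: np.ndarray, n_ctrl: int = 8):
    """Least-squares cubic B-spline fit to an (N, 3) array of 3D positions.

    Returns the spline and its control points; the control points alone
    determine the curve's shape characteristics."""
    k = 3                                        # cubic
    u = np.linspace(0.0, 1.0, len(points))       # uniform parameterisation
    # Clamped knot vector with n_ctrl + k + 1 knots in total.
    t = np.concatenate([np.zeros(k + 1),
                        np.linspace(0.0, 1.0, n_ctrl - k + 1)[1:-1],
                        np.ones(k + 1)])
    spline = make_lsq_spline(u, points, t, k=k)  # fits all 3 coordinates at once
    return spline, spline.c                      # spline.c has shape (n_ctrl, 3)
```

Scaling, rotating, or translating the returned control points transforms the whole trajectory while preserving its smoothness, which is how the observed motion can be replayed in the robot's own workspace.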
7. Sonmez, Ahmet Coskun. "Robot guidance using image features and fuzzy logic." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259476.

8. Stark, Per. "Machine vision camera calibration and robot communication." Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.

Abstract:
This thesis is part of a larger project included in the European project AFFIX. The aim of the project is to develop a new method of assembling an aircraft engine part so that weight and manufacturing costs are reduced. The proposal is to weld sheet-metal parts instead of using cast parts. A machine vision system is suggested for detecting the joints for the weld assembly operation on the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates of the curve are calculated by the machine vision system and sent to a robot, which should create and follow a path using the coordinates. The accuracy of locating the curve must be within +/- 0.5 mm to produce an approved weld joint. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also touches on the importance of good lighting when obtaining images for a vision system, and it covers the development of a robot program that receives these coordinates and transforms them into robot movements. The camera calibration is done in a MATLAB toolbox, which extracts the intrinsic camera parameters, such as the distance between the centre of the lens and the optical detector in the camera (f), the lens distortion parameters, and the principal point. It also returns the location and orientation of the camera for each image obtained during the calibration, the extrinsic parameters. The intrinsic parameters are used when translating between image coordinates and camera coordinates, and the extrinsic parameters are used when translating between camera coordinates and world coordinates. The results of this project are a transformation matrix that translates the robot's position into the camera's position, and a robot program that can receive a large number of coordinates, store them, and create a path to move along for the weld application.
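The intrinsic/extrinsic split described here corresponds one-to-one with what OpenCV's calibration, the counterpart of the MATLAB toolbox used in the thesis, returns. A sketch assuming chessboard calibration images; the pattern and square sizes are placeholders:

```python
import cv2
import numpy as np

def calibrate(images, pattern=(9, 6), square=0.025):
    """Intrinsics (f, distortion, principal point) and per-view extrinsics
    from chessboard images."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    # K holds f and the principal point; dist the lens distortion parameters;
    # (rvecs, tvecs) give the camera pose for each calibration view.
    return K, dist, rvecs, tvecs
```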

9. Foster, D. J. "Pipelining: an approach for machine vision." Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:1258e292-2603-4941-87db-d2a56b8856a2.

Abstract:
Much effort has been spent over the last decade on producing so-called "machine vision" systems for use in robotics, automated inspection, assembly, and numerous other fields. Because of the large amount of data involved in an image (typically ¼ MByte) and the complexity of many algorithms used, the processing times required have been far in excess of real time on a VAX-class serial processor. We review a number of image understanding algorithms that compute a globally defined "state" and show that they may be computed using simple local operations that are suited to parallel implementation. In recent years, many massively parallel machines have been designed to apply local operations rapidly across an image. We review several vision machines. We develop an algebraic analysis of the performance of a vision machine and show that, contrary to the commonly held belief, the time taken to relay images between serial streams can far exceed the time spent processing. We proceed to investigate the roles that a variety of pipelining techniques might play. We then present three pipelined designs for vision, one of which has been built: a parallel pipelined bit-slice convolution processor capable of operating at video rates. This design is examined in detail, and its performance analysed in relation to the theoretical framework of the preceding chapters. The construction and debugging of the device, whose hardware is now operational, are detailed.
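The thesis's core argument, that relaying data between serial stages can dominate unless the stages are overlapped, is easy to demonstrate in software, even though the design presented is a bit-slice hardware convolution processor. A purely illustrative thread-and-queue pipeline:

```python
import queue
import threading

def pipeline(frames, stages):
    """Run each stage in its own thread, linked by bounded queues, so stage n
    processes frame k while stage n+1 processes frame k-1."""
    qs = [queue.Queue(maxsize=2) for _ in range(len(stages) + 1)]

    def worker(stage, q_in, q_out):
        while (item := q_in.get()) is not None:
            q_out.put(stage(item))
        q_out.put(None)             # forward the shutdown signal downstream

    def feeder():                   # feed in a thread so bounded queues never deadlock
        for f in frames:
            qs[0].put(f)
        qs[0].put(None)

    threads = [threading.Thread(target=feeder, daemon=True)]
    threads += [threading.Thread(target=worker, args=(s, qi, qo), daemon=True)
                for s, qi, qo in zip(stages, qs, qs[1:])]
    for t in threads:
        t.start()
    while (out := qs[-1].get()) is not None:
        yield out

# e.g. list(pipeline(images, [deblur, detect_edges, threshold])) overlaps all three stages.
```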
10. Leidenkrantz, Axel, and Erik Westbrandt. "Implementation of machine vision on a collaborative robot." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17039.

Abstract:
This project was developed with the University of Skövde and Volvo GTO. Its purpose is to complement and facilitate quality assurance when gluing the engine frame. Quality defects are a major concern in today's industry because of how costly they are to fix. With competition rising and quality demands increasing, companies are looking for new and more efficient ways to ensure quality. Collaborative robots are a rising and largely unexplored technology in most industries: an upcoming field with great flexibility that could solve many issues and assist in processes that are difficult to automate. The project investigates whether it is possible and beneficial to implement a vision system on a collaborative robot to ensure quality, and whether the collaborative robot could take on other tasks as well. The project also includes training an artificial neural network on CAD-generated models and real-life prototypes. There were many challenges, both in training the AI and in how the robot would communicate with it. The final results showed that a collaborative robot, specifically a UR10e, can work with machine vision. The solution was based on a camera compatible with the built-in robot software, although other types of cameras could be used for such functions as well. Machine vision based on artificial intelligence is a valid solution but requires further development and training before a software function is ready for industrial use. Collaborative robots could change industry for the better in many ways: they could ease the work of operators by helping with heavy lifting and repetitive tasks, and combining a collaborative robot with a vision system could increase productivity and bring economic benefits.

Books on the topic "Machine vision for robot guidance"

1. Vernon, David. Machine vision: Automated visual inspection and robot vision. London: Prentice Hall, 1991.

2. Machine vision: Automated visual inspection and robot vision. New York: Prentice Hall, 1991.

3. Kanatani, Kenʼichi. Geometric computation for machine vision. Oxford: Clarendon Press, 1993.

4. Pomerleau, Dean A. Neural Network Perception for Mobile Robot Guidance. Boston, MA: Springer US, 1993.

5. Miller, Richard Kendall. Color machine vision: A market forecast and applications assessment. Madison, GA: SEAI Technical Publications, 1986.

6. Heytler, Peter. Machine vision: A Delphi forecast to 1990. Ann Arbor, MI: Automated Vision Association of RIA, 1986.

7. Heikkilä, Tapio Arturri. A model-based approach to high-level robot control with visual guidance. Espoo, [Finland]: Technical Research Centre of Finland, 1990.

8. Bajcsy, Ruzena. Assembly via disassembly: A case in machine perceptual development. Philadelphia, PA: Dept. of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, 1989.

9. Langley Research Center, ed. A guidance scheme for automated tetrahedral truss structure assembly based on machine vision. Hampton, VA: National Aeronautics and Space Administration, Langley Research Center, 1996.

10. Kanade, Takeo, ed. Three-dimensional machine vision. Boston: Kluwer Academic Publishers, 1987.


Book chapters on the topic "Machine vision for robot guidance"

1. Sánchez, J., F. Vázquez, and E. Paz. "Machine Vision Guidance System for a Modular Climbing Robot used in Shipbuilding." In Climbing and Walking Robots, 893–900. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-26415-9_107.

2. Mishra, Atul, I. A. Sainul, Sudipta Bhuyan, Sankha Deb, Debashis Sen, and A. K. Deb. "Development of a Flexible Assembly System Using Industrial Robot with Machine Vision Guidance and Dexterous Multi-finger Gripper." In Lecture Notes on Multidisciplinary Industrial Engineering, 31–71. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8767-7_2.

3. Hummel, John W., and Kenneth E. Von Qualen. "Machine Vision Swath Guidance." In Proceedings of Soil Specific Crop Management, 359. Madison, WI: American Society of Agronomy, Crop Science Society of America, Soil Science Society of America, 2015. http://dx.doi.org/10.2134/1993.soilspecificcrop.c35.

4. Porat, Moshe. "Localized Video Compression for Machine Vision." In Robot Vision, 278–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_34.

5. Rosenfeld, Azriel. "Robot Vision." In Machine Intelligence and Knowledge Engineering for Robotic Applications, 1–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987. http://dx.doi.org/10.1007/978-3-642-87387-4_1.

6. Miller, Richard K. "Fundamentals of Machine Vision." In Industrial Robot Handbook, 47–60. Boston, MA: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4684-6608-9_4.

7. Barnes, Nick, and Zhi-Qiang Liu. "Object Recognition Mobile Robot Guidance." In Knowledge-Based Vision-Guided Robots, 63–86. Heidelberg: Physica-Verlag HD, 2002. http://dx.doi.org/10.1007/978-3-7908-1780-5_4.

8. Batchelor, Bruce G. "Appendix E: Robot Vision: Calibration." In Machine Vision Handbook, 2053–61. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-84996-169-1_46.

9. Pomerleau, Dean A. "Other Vision-based Robot Guidance Methods." In Neural Network Perception for Mobile Robot Guidance, 161–71. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3192-0_11.

10. Sood, Arun, and Gwo-jyh Tseng. "Motion Parameter Estimation for Robot Application." In Issues on Machine Vision, 293–309. Vienna: Springer Vienna, 1989. http://dx.doi.org/10.1007/978-3-7091-2830-5_20.


Conference papers on the topic "Machine vision for robot guidance"

1. Meng, Qingkuan, Xiayi Hao, Yingmei Zhang, and Genghuang Yang. "Guidance Line Identification for Agricultural Mobile Robot Based on Machine Vision." In 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). IEEE, 2018. http://dx.doi.org/10.1109/iaeac.2018.8577651.

2. Jia, Bao-Zhi, and Ming Zhu. "Study on a human guidance method for autonomous cruise of indoor robot." In Fourth International Conference on Machine Vision (ICMV 11), edited by Zhu Zeng and Yuting Li. SPIE, 2011. http://dx.doi.org/10.1117/12.920408.

3. Shan, Shangqiu, Zhongxi Hou, and Yue Li. "Optimized online guidance algorithm for the fixed-wing flying robot." In 2016 23rd International Conference on Mechatronics and Machine Vision in Practice (M2VIP). IEEE, 2016. http://dx.doi.org/10.1109/m2vip.2016.7827271.

4. Hsue, Albert Wen-Jeng, and Chih-Fan Tsai. "Torque Controlled Mini-Screwdriver Station with A SCARA Robot and A Machine-Vision Guidance." In 2020 International Symposium on Computer, Consumer and Control (IS3C). IEEE, 2020. http://dx.doi.org/10.1109/is3c50286.2020.00127.

5. Wang, Cong, Chung-Yen Lin, and Masayoshi Tomizuka. "Visual Servoing for Robot Manipulators Considering Sensing and Dynamics Limitations." In ASME 2013 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/dscc2013-3833.

Abstract:
This paper presents a control scheme for visual servoing. Real-time vision guidance is necessary in many desirable applications of industrial manipulators. The challenge comes from the limitations of visual sensing and robot dynamics. Typical industrial machine vision systems have a low sampling rate and large latency. In addition, due to the large inertia of industrial manipulators, proper consideration of robot dynamics is important; in particular, actuator saturation may cause undesirable response. In this paper, an adaptive tracking filter is used for sensing compensation. Based on the compensated vision feedback, a two-layer controller is formulated using multi-surface sliding control. System kinematics and dynamics are decoupled and handled by the two layers of the controller, respectively. Further, a constrained optimal control approach is adopted to avoid actuator saturation. Validation is conducted using a SCARA robot.
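The role of the paper's adaptive tracking filter, turning slow, delayed camera measurements into estimates usable at the servo rate, can be illustrated with a much simpler constant-velocity extrapolator (a stand-in, not the paper's filter):

```python
class VisionCompensator:
    """Extrapolate slow, delayed vision measurements to the fast servo rate."""

    def __init__(self, latency: float):
        self.latency = latency      # vision pipeline delay in seconds (assumed known)
        self.pos = None             # last measured target position
        self.vel = 0.0
        self.t_meas = 0.0

    def update(self, z: float, t_now: float) -> None:
        """Call at the slow camera rate; z was actually captured at t_now - latency."""
        t_capture = t_now - self.latency
        if self.pos is not None and t_capture > self.t_meas:
            self.vel = (z - self.pos) / (t_capture - self.t_meas)
        self.pos, self.t_meas = z, t_capture

    def estimate(self, t_now: float) -> float:
        """Call at the fast control rate: extrapolate to the current time."""
        return self.pos + self.vel * (t_now - self.t_meas)
```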
6. Nagchaudhuri, Abhijit, Shinivas Saishyam, John Wood, and Anthony Stockus. "Mechatronics Laboratory at UMES: A Platform to Promote Synergy in Education and Research Across Disciplinary Boundaries." In ASME 2003 International Mechanical Engineering Congress and Exposition. ASMEDC, 2003. http://dx.doi.org/10.1115/imece2003-42883.

Abstract:
Mechatronics is the synergistic integration of mechanics, instrumentation and control, software engineering, and information technology. As such, it not only integrates well with the modern evolution of mechanical engineering curricula but also has a wide and growing manifestation in the new generation of industrial products as well as children's toys. The present set-up of the laboratory consists of an industrial SCARA (Selective Compliance Articulated Robot Arm) robot equipped with machine vision capability for guidance, inspection, and recognition associated with robotic manipulation of parts. An open-loop-stable vibration control platform, an open-loop-unstable inverted pendulum, and a dual water-tank system interfaced with appropriate sensors and actuators provide capabilities for learning both analog and digital control of systems belonging to the solid mechanics and fluid mechanics fields. Modern software tools, including graphical programming with Simulink and compilation via Real-Time Windows Target and Real-Time Workshop (all from MathWorks) and Visual C++ (Microsoft), allow a variety of control algorithms to be developed and executed on these systems. Capabilities for remote operation of these systems over the internet have also been implemented. The laboratory facilities provide education and research capability at the interfaces of traditional disciplinary boundaries. The laboratory is also equipped with LEGO MINDSTORMS and LEGO DACTA products as well as the MIT Handyboard for exploration of mechatronics and robotics activities by prospective engineers and K-12 students.
7. Powell, N. B., S. R. Spencer, and M. D. Boyette. "Machine Vision for Autonomous Machine Guidance." In 2005 Tampa, FL, July 17–20, 2005. St. Joseph, MI: American Society of Agricultural and Biological Engineers, 2005. http://dx.doi.org/10.13031/2013.19089.

8. Kim, Soomin, Taeyoung Kim, Min H. Kim, and Sung-Eui Yoon. "Image Completion with Intrinsic Reflectance Guidance." In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.75.

9. Ho, Joo Ho, Seung-Hwan Baek, and Min H. Kim. "Urban Image Stitching using Planar Perspective Guidance." In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.50.

10. Granlund, Goesta H. "Issues in Robot Vision." In British Machine Vision Conference 1993. British Machine Vision Association, 1993. http://dx.doi.org/10.5244/c.7.1.
