Selected scientific literature on the topic "Robot vision"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles.


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Robot vision".

Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, whenever one is available in the metadata.

Journal articles on the topic "Robot vision"

1

MASRIL, MUHAMMAD ABRAR, and DEOSA PUTRA CANIAGO. "Optimasi Teknologi Computer Vision pada Robot Industri Sebagai Pemindah Objek Berdasarkan Warna". ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika 11, no. 1 (January 24, 2023): 46. http://dx.doi.org/10.26760/elkomika.v11i1.46.

Abstract:
Computer vision is a technology that can detect objects in its surroundings. This study discusses the optimization of computer vision technology on an industrial robot that moves objects based on their color. The robot's system consists of recognizing colored balls and moving them according to the detected color. The computer vision technology of the Pixy2 camera can detect colored objects using a real-time detection method, achieving a highly optimized detection time of 0.2 seconds per colored object. The colored-object recognition test was carried out three times on each colored object, with an accuracy rate of 100%. Computer vision optimization can help robots recognize colored objects. Keywords: Computer Vision, Color Object Detection, Pixy2 Camera, Real-Time
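The color-recognition loop this abstract describes can be illustrated with a short OpenCV sketch. This is not the authors' Pixy2 pipeline; it is a minimal, hypothetical example of detecting a colored object by HSV thresholding, with the color bounds invented for a red ball.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for a red ball; a real system calibrates these.
LOWER_RED = np.array([0, 120, 80])
UPPER_RED = np.array([10, 255, 255])

def detect_colored_object(frame_bgr):
    """Return the pixel centroid of the largest red region, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    # Remove speckle noise before looking for connected regions.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```

The detected centroid would then drive the pick-and-place decision; running a loop like this per frame is what gives the real-time behaviour the paper measures.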
2

BARNES, NICK, and ZHI-QIANG LIU. "VISION GUIDED CIRCUMNAVIGATING AUTONOMOUS ROBOTS". International Journal of Pattern Recognition and Artificial Intelligence 14, no. 06 (September 2000): 689–714. http://dx.doi.org/10.1142/s0218001400000489.

Abstract:
We present a system for vision guided autonomous circumnavigation, allowing a mobile robot to navigate safely around objects of arbitrary pose, and avoid obstacles. The system performs model-based object recognition from an intensity image. By enabling robots to recognize and navigate with respect to particular objects, this system empowers robots to perform deterministic actions on specific objects, rather than general exploration and navigation as emphasized in much of the current literature. This paper describes a fully integrated system, and, in particular, introduces canonical-views. Further, we derive a direct algebraic method for finding object pose and position for the four-dimensional case of a ground-based robot with uncalibrated vertical movement of its camera. Vision for mobile robots can be treated as a very different problem to traditional computer vision, as mobile robots have a characteristic perspective, and there is a causal relation between robot actions and view changes. Canonical-views are a novel, active object representation designed specifically to take advantage of the constraints of the robot navigation problem to allow efficient recognition and navigation.
3

Martinez-Martin, Ester, and Angel del Pobil. "Vision for Robust Robot Manipulation". Sensors 19, no. 7 (April 6, 2019): 1648. http://dx.doi.org/10.3390/s19071648.

Abstract:
Advances in robotics are leading to a new generation of assistant robots working in ordinary, domestic settings. This evolution raises new challenges in the tasks to be accomplished by the robots. This is the case for object manipulation, where the detect-approach-grasp loop requires a robust recovery stage, especially when the held object slides. Several proprioceptive sensors have been developed in recent decades, such as tactile sensors or contact switches, that can be used for that purpose; nevertheless, their implementation may considerably restrict the gripper's flexibility and functionality, increasing cost and complexity. Alternatively, vision can be used, since it is an undoubtedly rich source of information, and depth vision sensors in particular. We present an approach based on depth cameras to robustly evaluate manipulation success, continuously reporting any object loss and, consequently, allowing the robot to recover from this situation. For that, Lab-colour segmentation allows the robot to identify potential robot manipulators in the image. Then, the depth information is used to detect any edge resulting from two-object contact. The combination of these techniques allows the robot to accurately detect the presence or absence of contact points between the robot manipulator and a held object. An experimental evaluation in realistic indoor environments supports our approach.
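The combination the abstract outlines, Lab-colour segmentation plus depth-edge detection, can be sketched in a few lines of OpenCV. This is a loose illustration, not the authors' implementation; the Lab bounds and the depth-jump threshold are invented placeholders.

```python
import cv2
import numpy as np

def contact_candidates(bgr, depth_mm, lab_lo, lab_hi, jump_mm=15.0):
    """Flag pixels where the color-segmented manipulator meets a depth edge.

    lab_lo / lab_hi: Lab-space bounds for the manipulator's color (assumed known).
    depth_mm: per-pixel depth image in millimetres from the depth camera.
    """
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    manipulator = cv2.inRange(lab, lab_lo, lab_hi) > 0
    # Large depth gradients suggest a boundary between two objects in contact.
    depth = depth_mm.astype(np.float32)
    gx = cv2.Sobel(depth, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(depth, cv2.CV_32F, 0, 1)
    depth_edge = np.hypot(gx, gy) > jump_mm
    return manipulator & depth_edge
```

If a mask like this goes empty while the gripper is closed, the held object has likely been lost and a recovery behaviour can be triggered, which is the failure signal the paper is after.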
4

Umeda, Kazunori. "Special Issue on Robot Vision". Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 253. http://dx.doi.org/10.20965/jrm.2003.p0253.

Abstract:
Robot vision is an essential key technology in robotics and mechatronics. Studies on robot vision are wide-ranging, and the topic remains a vital, active target. This special issue reviews recent advances in this exciting field, following up two earlier special issues, Vol. 11 No. 2 and Vol. 13 No. 6, which attracted more papers than expected. This indicates the high degree of research activity in the field. I am most pleased to report that this issue presents 12 excellent papers covering robot vision, including basic algorithms based on precise optical models, pattern and gesture recognition, and active vision. Several papers treat range imaging, and others present interesting applications to agriculture, quadruped robots, and new devices. This issue also presents two news briefs, one on a practical range sensor suited to mobile robots and the other on vision devices that improve on the well-known IP-5000 series. I am convinced that this special issue will make research on robot vision even more exciting. I would like to close by thanking all of the researchers who submitted their studies, and to give special thanks to the reviewers and editors, especially Prof. M. Kaneko, Dr. K. Yokoi, and Prof. Y. Nakauchi.
5

Zhang, Hongxin, and Suan Lee. "Robot Bionic Vision Technologies: A Review". Applied Sciences 12, no. 16 (August 9, 2022): 7970. http://dx.doi.org/10.3390/app12167970.

Abstract:
The visual organ is important for animals to obtain information and understand the outside world; likewise, robots cannot do so without a visual system. At present, artificial-intelligence vision technology has achieved automation and relatively simple intelligence; however, bionic vision equipment is not as dexterous and intelligent as the human eye. Although robots are becoming increasingly capable, existing reviews of robot bionic vision are still limited. Robot bionic vision has been explored in view of the visual principles and motion characteristics of humans and animals. In this study, the development history of robot bionic vision equipment and related technologies is discussed, the most representative binocular and multi-eye compound-eye bionic vision technologies are selected, and the existing technologies are reviewed; their prospects are discussed from the perspective of visual bionic control. This comprehensive study will serve as an up-to-date source of information regarding developments in the field of robot bionic vision technology.
6

YACHIDA, Masahiko. "Robot Vision". Journal of the Robotics Society of Japan 10, no. 2 (1992): 140–45. http://dx.doi.org/10.7210/jrsj.10.140.

7

Haralick, Robert M. "Robot vision". Computer Vision, Graphics, and Image Processing 34, no. 1 (April 1986): 118–19. http://dx.doi.org/10.1016/0734-189x(86)90060-5.

8

Forrest, A. K. "Robot vision". Physics in Technology 17, no. 1 (January 1986): 5–9. http://dx.doi.org/10.1088/0305-4624/17/1/301.

9

Shirai, Y. "Robot vision". Robotics 2, no. 3 (September 1986): 175–203. http://dx.doi.org/10.1016/0167-8493(86)90028-8.

10

Shirai, Y. "Robot vision". Future Generation Computer Systems 1, no. 5 (September 1985): 325–52. http://dx.doi.org/10.1016/0167-739x(85)90005-6.


Theses on the topic "Robot vision"

1

Grech, Raphael. "Multi-robot vision". Thesis, Kingston University, 2013. http://eprints.kingston.ac.uk/27790/.

Abstract:
It is expected nowadays that robots are able to work in real-life environments, possibly also sharing the same space with humans. These environments are generally considered to be cluttered and hard to train for. The work presented in this thesis focuses on developing an online, real-time, biologically inspired model for teams of robots to collectively learn and memorise their visual environment in a very concise and compact manner, whilst sharing their experience with their peers (robots and possibly also humans). This work forms part of a larger project to develop a multi-robot platform capable of performing security patrol checks whilst also assisting people with physical and cognitive impairments, to be used in public places such as museums and airports. The main contribution of this thesis is the development of a model which makes robots capable of handling visual information, retaining information that is relevant to whatever task is at hand and eliminating superfluous information, trying to mimic human performance. This leads towards the great milestone of having a fully autonomous team of robots capable of collectively surveying, learning and sharing salient visual information about the environment even without any prior information. Solutions to endow a distributed team of robots with object detection and environment understanding capabilities are also provided. The ways in which humans process, interpret and store visual information are studied, and their visual processes are emulated by a team of robots. In an ideal scenario, robots are deployed in a totally unknown environment and incrementally learn and adapt to operate within that environment. Each robot is an expert of its own area; however, all robots possess enough knowledge about other areas to be able to guide users sufficiently until another, more knowledgeable robot takes over. Although not a strict limitation, it is assumed that, once deployed, each robot operates in its own environment for most of its lifetime, and the longer the robots remain in the area, the more refined their memory will become. Robots should be able to automatically recognise previously learnt features, such as faces and known objects, whilst also learning other new information. Salient information extracted from the incoming video streams can be used to select keyframes to be fed into a visual memory, thus allowing the robot to learn new interesting areas within its environment. The cooperating robots are to operate successfully within their environment, automatically gather visual information and store it in a compact yet meaningful representation. The storage has to be dynamic, as the visual information extracted by the robot team might change. Due to the initial lack of knowledge, small sets of visual memory classes need to evolve as the robots acquire visual information. Keeping memory size within limits whilst at the same time maximising the information content is one of the main factors to consider.
2

Li, Wan-chiu. "Localization of a mobile robot by monocular vision". Hong Kong: University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23765896.

3

Baba, Akihiko. "Robot navigation using ultrasonic feedback". Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=677.

Abstract:
Thesis (M.S.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains viii, 122 p. : ill. Includes abstract. Includes bibliographical references (p. 57-59).
4

Roth, Daniel R. (Daniel Risner) 1979. "Vision based robot navigation". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17978.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 53-54).
In this thesis we propose a vision-based robot navigation system that constructs a high level topological representation of the world. A robot using this system learns to recognize rooms and spaces by building a hidden Markov model of the environment. Motion planning is performed by doing bidirectional heuristic search with a discrete set of actions that account for the robot's nonholonomic constraints. The intent of this project is to create a system that allows a robot to be able to explore and to navigate in a wide variety of environments in a way that facilitates goal-oriented tasks.
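The abstract's core idea, recognizing rooms with a hidden Markov model over a topological map, can be illustrated with a generic discrete Bayes filter step. The thesis's actual model and features are not reproduced here; the states, transition matrix, and observation likelihoods below are invented toy values.

```python
import numpy as np

def hmm_forward_step(belief, transition, likelihood):
    """One filtering step over discrete places (rooms).

    belief: P(room at t-1 | observations so far), shape (N,)
    transition: transition[i, j] = P(room j at t | room i at t-1), shape (N, N)
    likelihood: P(current image features | room j), shape (N,)
    """
    predicted = transition.T @ belief   # motion update
    posterior = likelihood * predicted  # measurement update
    return posterior / posterior.sum()

# Toy example: three rooms along a corridor; the robot tends to stay put.
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
belief = np.array([1.0, 0.0, 0.0])
belief = hmm_forward_step(belief, T, likelihood=np.array([0.1, 0.7, 0.2]))
print(belief)  # mass shifts toward room 1, which best explains the observation
```

The most probable room under this belief becomes the node of the topological map that planning then operates over.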
5

Skinner, John R. "Simulation for robot vision". Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227404/1/John_Skinner_Thesis.pdf.

Abstract:
This thesis examined the effectiveness of using computer graphics technologies to create simulated data for vision-enabled robots. The research demonstrated that while the behaviour of a robot is greatly affected by what it is looking at, simulated scenes can produce position estimates similar to specific real locations. The findings show that robots need to be tested in a very wide range of different contexts to understand their performance, and simulation provides a cost-effective route to that evaluation.
6

李宏釗, and Wan-chiu Li. "Localization of a mobile robot by monocular vision". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226371.

7

Luh, Cheng-Jye 1960. "Hierarchical modelling of mobile, seeing robots". Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/276998.

Abstract:
This thesis describes the implementation of a hierarchical robot simulation environment which supports the design of robots with vision and mobility. A seeing robot model applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.
8

Chen, Haoyao. "Towards multi-robot formations: study on vision-based localization system". Ph.D. thesis, City University of Hong Kong, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-meem-b3008295xf.pdf.

Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2009.
"Submitted to Department of Manufacturing Engineering and Engineering Management in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves 87-100)
9

Öfjäll, Kristoffer. "Online Learning for Robot Vision". Licentiate thesis, Linköpings universitet, Datorseende, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110892.

Abstract:
In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems, however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and extracting the relevant information, something humans and animals perform strikingly well. On the other hand, humans have great difficulty expressing what they are actually looking for on a low level, suitable for direct implementation on a machine. For instance, objects tend to be already detected when the visual information reaches the conscious mind, with almost no clues remaining regarding how the object was identified in the first place. This became apparent already when Seymour Papert gathered a group of summer workers to solve the computer vision problem 48 years ago [35]. Artificial learning systems can overcome this gap between the level of human visual reasoning and low-level machine vision processing. If a human teacher can provide examples of what is to be extracted, and if the learning system is able to extract the gist of these examples, the gap is bridged. There are, however, some special demands on a learning system for it to perform successfully in a visual context. First, low-level visual input is often of high dimensionality, such that the learning system needs to handle large inputs. Second, visual information is often ambiguous, such that the learning system needs to be able to handle multimodal outputs, i.e. multiple hypotheses. Typically, the relations to be learned are non-linear, and there is an advantage if data can be processed at video rate, even after presenting many examples to the learning system. In general, there seems to be a lack of such methods. This thesis presents systems for learning perception-action mappings for robotic systems with visual input. A range of problems is discussed, such as vision-based autonomous driving, inverse kinematics of a robotic manipulator and controlling a dynamical system. Operational systems demonstrating solutions to these problems are presented. Two different approaches for providing training data are explored: learning from demonstration (supervised learning) and explorative learning (self-supervised learning). A novel learning method fulfilling the stated demands is presented. The method, qHebb, is based on associative Hebbian learning on data in channel representation. Properties of the method are demonstrated on a vision-based autonomously driving vehicle, where the system learns to directly map low-level image features to control signals. After an initial training period, the system seamlessly continues autonomously. In a quantitative evaluation, the proposed online learning method performed comparably with state-of-the-art batch learning methods.
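As a rough illustration of the associative Hebbian family that qHebb belongs to, the sketch below implements a plain outer-product Hebbian associator between an input feature vector and an output vector. It omits the channel representation and the specifics of qHebb; all names and the learning rate are illustrative only.

```python
import numpy as np

class HebbianAssociator:
    """Minimal online associative memory: predicts y ~ W x, trained Hebbian-style."""

    def __init__(self, n_in, n_out, lr=0.1):
        self.W = np.zeros((n_out, n_in))
        self.lr = lr

    def train_step(self, x, y):
        # Outer-product Hebbian update: strengthen weights between co-active units.
        self.W += self.lr * np.outer(y, x)

    def predict(self, x):
        return self.W @ x

# Toy usage: associate an image-feature vector with a steering command.
assoc = HebbianAssociator(n_in=4, n_out=1)
for _ in range(50):
    assoc.train_step(x=np.array([1.0, 0.0, 0.5, 0.0]), y=np.array([0.3]))
print(assoc.predict(np.array([1.0, 0.0, 0.5, 0.0])))
```

Each `train_step` is an incremental online update, matching the thesis's emphasis on learning while the system runs rather than from a pre-collected batch.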
10

Devillard, François. "Vision du robot mobile Mithra". Grenoble INPG, 1993. http://www.theses.fr/1993INPG0112.

Abstract:
We propose an on-board stereoscopic vision system intended for the navigation of a mobile robot on industrial sites. In mobile robotics, vision systems are subject to severe operating constraints (real-time processing, volume, power consumption, etc.). For 3D modelling of the environment, the vision system must use visual cues that allow a compact, precise and robust encoding of the observed scene. To best meet the speed constraints, we focused on extracting from the images the information that is most significant from a topological point of view. For missions on industrial sites, the surroundings typically present orthogonal geometries, such as intersections of partitions, doors, windows, furniture, and so on. Detecting near-vertical geometry provides a sufficient description of the environment while reducing the redundancy of the visual information to a satisfactory degree. The cues used are vertical line segments extracted from two stereoscopic images. We propose algorithmic solutions for edge detection and polygonal approximation suited to a real-time implementation. We then present the vision system that was built. It consists of two VME boards. The first board is a hard-wired systolic operator implementing image acquisition and edge detection. The second is built around a digital signal processor and performs the polygonal approximation. The design and realization of this vision system were carried out within the mobile robotics project EUREKA EU 110 (Mithra).

Books on the topic "Robot vision"

1

Klette, Reinhard, Shmuel Peleg, and Gerald Sommer, eds. Robot Vision. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7.

2

Sommer, Gerald, and Reinhard Klette, eds. Robot Vision. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-78157-8.

3

International Resource Development, inc., ed. Robot vision systems. Norwalk, Conn., U.S.A. (6 Prowitt St., Norwalk 06855): International Resource Development, 1985.

4

Tian, Jiandong. All Weather Robot Vision. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6429-8.

5

Pauli, Josef. Learning-Based Robot Vision. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45124-2.

6

Taisho, Matsuda, ed. Robot vision: New research. New York: Nova Science Publishers, 2008.

7

Shapiro, Linda G., ed. Computer and robot vision. Reading, Mass.: Addison-Wesley Pub. Co., 1992.

8

Sood, Arun K., and Harry Wechsler, eds. Active Perception and Robot Vision. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77225-2.

9

Chatterjee, Amitava, Anjan Rakshit, and N. Nirmal Singh. Vision Based Autonomous Robot Navigation. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-33965-3.

10

Sun, Yu, Aman Behal, and Chi-Kit Ronald Chung, eds. New Development in Robot Vision. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-43859-6.


Book chapters on the topic "Robot vision"

1

Bräunl, Thomas. "Robot Vision". In Robot Adventures in Python and C, 125–41. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38897-3_11.

2

Rosenfeld, Azriel. "Robot Vision". In Machine Intelligence and Knowledge Engineering for Robotic Applications, 1–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987. http://dx.doi.org/10.1007/978-3-642-87387-4_1.

3

Mihelj, Matjaž, Tadej Bajd, Aleš Ude, Jadran Lenarčič, Aleš Stanovnik, Marko Munih, Jure Rejc, and Sebastjan Šlajpah. "Robot Vision". In Robotics, 107–22. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-72911-4_8.

4

Bräunl, Thomas. "Robot Vision". In Mobile Robot Programming, 133–49. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-32797-1_11.

5

Björkman, Mårten, and Jan-Olof Eklundh. "Visual Cues for a Fixating Active Agent". In Robot Vision, 1–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_1.

6

Scheibe, Karsten, Hartmut Korsitzky, Ralf Reulke, Martin Scheele, and Michael Solbrig. "EYESCAN - A High Resolution Digital Panoramic Camera". In Robot Vision, 77–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_10.

7

Wei, Tiangong, and Reinhard Klette. "A Wavelet-Based Algorithm for Height from Gradients". In Robot Vision, 84–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_11.

8

Wuerz, Alexander, Stefan K. Gehrig, and Fridtjof J. Stein. "Enhanced Stereo Vision Using Free-Form Surface Mirrors". In Robot Vision, 91–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_12.

9

Baltes, Jacky. "RoboCup-99: A Student’s Perspective". In Robot Vision, 99–106. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_13.

10

Baltes, Jacky. "Horus: Object Orientation and Id without Additional Markers". In Robot Vision, 107–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_14.


Conference papers on the topic "Robot vision"

1

Stancil, Brian, Hsiang-Wen Hsieh, Tsuhan Chen e Hung-Hsiu Yu. "A Distributed Vision Infrastructure for Multi-Robot Localization". In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35700.

Abstract:
Localization is one of the critical issues in the field of multi-robot navigation. With an accurate estimate of the robot pose, robots are able to navigate in their environment autonomously with the aid of flexible path planning. In this paper, the infrastructure of a Distributed Vision System (DVS) for multi-robot localization is presented. The main difference between traditional DVSs and the proposed one is that multiple overhead cameras can simultaneously localize a network of robots. The proposed infrastructure comprises a Base Process and a Coordinate Transform Process. The Base Process receives images from various cameras mounted in the environment and then utilizes this information to localize multiple robots. The Coordinate Transform Process transforms from the image reference plane to the world coordinate system. ID tags are used to locate each robot within the overhead image, and the cameras' intrinsic and extrinsic parameters are used to estimate a global pose for each robot. The presented infrastructure was recently implemented on a network of small robot platforms with several overhead cameras mounted in the environment. The results show that the proposed infrastructure can simultaneously localize multiple robots in a global world coordinate system with localization errors within 0.1 meters.
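The image-to-world step the abstract mentions can be sketched with standard pinhole-camera geometry. This is not the paper's actual Coordinate Transform Process, just a common way to map an ID tag's pixel position to world coordinates when the cameras are calibrated and the floor is assumed flat at z = 0.

```python
import numpy as np

def pixel_to_world_ground(u, v, K, R, t):
    """Back-project pixel (u, v) from a calibrated overhead camera onto z = 0.

    K: 3x3 intrinsic matrix; R, t: extrinsics mapping world to camera
    coordinates (x_cam = R @ x_world + t). Returns (x, y) on the floor plane.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_world = R.T @ ray_cam                           # same ray, world frame
    cam_center = -R.T @ t                               # camera position in world
    s = -cam_center[2] / ray_world[2]                   # intersect ray with z = 0
    p = cam_center + s * ray_world
    return p[0], p[1]
```

With one such transform per camera, detections of the same tag from different overhead views land in a shared world frame, which is what lets several cameras localize the whole robot network simultaneously.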
2

Hall, E. L., and J. H. Nurre. "Robot Vision Overview". In 1985 Los Angeles Technical Symposium, edited by Andrew G. Tescher. SPIE, 1985. http://dx.doi.org/10.1117/12.946421.

3

Kuts, Vladimir, Tauno Otto, Toivo Tähemaa, Khuldoon Bukhari, and Tengiz Pataraia. "Adaptive Industrial Robots Using Machine Vision". In ASME 2018 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/imece2018-86720.

Abstract:
The use of industrial robots in modern manufacturing scenarios is a rising trend in the engineering industry. Currently, industrial robots are able to perform pre-programmed tasks very efficiently irrespective of time and complexity. However, often robots encounter unknown scenarios and to solve those, they need to cooperate with humans, leading to unnecessary downtime of the machine and the need for human intervention. The main aim of this study is to propose a method to develop adaptive industrial robots using Machine Learning (ML)/Machine Vision (MV) tools. The proposed method aims to reduce the effort of re-programming and enable self-learning in industrial robots. The elaborated online programming method can lead to fully automated industrial robotic cells in accordance with the human-robot collaboration standard and provide multiple usage options of this approach in the manufacturing industry. Machine Vision (MV) tools used for online programming allow industrial robots to make autonomous decisions during sorting or assembling operations based on the color and/or shape of the test object. The test setup consisted of an industrial robot cell, cameras and LIDAR connected to MATLAB through a Robot Operation System (ROS). The online programming tests and simulations were performed using Virtual/Augmented Reality (VR/AR) toolkits together with a Digital Twin (DT) concept, to test the industrial robot program on a digital object before executing it on the real object, thus creating a safe and secure test environment.
4

Ruther, Matthias, Martin Lenz, and Horst Bischof. "The narcissistic robot: Robot calibration using a mirror". In 11th International Conference on Control Automation Robotics & Vision (ICARCV 2010). IEEE, 2010. http://dx.doi.org/10.1109/icarcv.2010.5707268.

5

Granlund, Goesta H. "Issues in Robot Vision". In British Machine Vision Conference 1993. British Machine Vision Association, 1993. http://dx.doi.org/10.5244/c.7.1.

6

Drishya, K. A., and Anjaly Krishnan. "Vision-Controlled Flying Robot". In International Conference on Emerging Trends in Engineering & Technology (ICETET-2015). Singapore: Research Publishing Services, 2015. http://dx.doi.org/10.3850/978-981-09-5346-1_eee-516.

7

Han, Chin Yun, S. Parasuraman, I. Elamvazhuthi, C. Deisy, S. Padmavathy, and M. K. A. Ahamed Khan. "Vision Guided Soccer Robot". In 2017 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). IEEE, 2017. http://dx.doi.org/10.1109/iccic.2017.8524422.

8

Solvang, Bjorn, Gabor Sziebig, and Peter Korondi. "Vision Based Robot Programming". In 2008 IEEE International Conference on Networking, Sensing and Control (ICNSC). IEEE, 2008. http://dx.doi.org/10.1109/icnsc.2008.4525353.

9

Sileo, Monica, Michelangelo Nigro, Domenico D. Bloisi, and Francesco Pierri. "Vision based robot-to-robot object handover". In 2021 20th International Conference on Advanced Robotics (ICAR). IEEE, 2021. http://dx.doi.org/10.1109/icar53236.2021.9659446.

10

Georgiou, Evangelos, Jian S. Dai, and Michael Luck. "The KCLBOT: The Challenges of Stereo Vision for a Small Autonomous Mobile Robot". In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-70503.

Abstract:
In small mobile robot research, autonomous platforms are severely constrained in navigation environments by the limited availability of accurate sensory data with which to perform critical path planning, obstacle avoidance and self-localization tasks. The motivation for this work is to equip small autonomous mobile robots with a local stereo vision system that provides an accurate reconstruction of a navigation environment for critical navigation tasks. This paper presents the KCLBOT, a small autonomous mobile robot with a stereo vision system, developed in King's College London's Centre for Robotic Research.
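In the standard pinhole model, the stereo reconstruction the abstract refers to rests on the disparity-depth relation Z = f·B/d. The sketch below is a generic illustration with assumed calibration values, not code or parameters from the KCLBOT.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: depth Z = f * B / d.

    disparity_px: horizontal pixel offset of a feature between left/right images.
    focal_px: focal length in pixels; baseline_m: camera separation in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# With an assumed f = 700 px and B = 0.06 m, a one-pixel disparity error near
# d = 7 px shifts the depth estimate from 6.0 m to 5.25 m.
print(disparity_to_depth(7.0, 700.0, 0.06))  # 6.0 metres
```

This sensitivity to the baseline, which is necessarily short on a small platform, is part of what makes stereo vision challenging for robots of this size.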

Reports by organizations on the topic "Robot vision"

1

Blackburn, Michael R., and Hoa G. Nguyen. Vision Based Autonomous Robot Navigation: Motion Segmentation. Fort Belvoir, VA: Defense Technical Information Center, April 1996. http://dx.doi.org/10.21236/ada308472.

2

Alkhulayfi, Khalid. Vision-Based Motion for a Humanoid Robot. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.3173.

3

Franaszek, Marek, Geraldine S. Cheok, Karl van Wyk, and Jeremy A. Marvel. Improving 3D vision-robot registration for assembly tasks. Gaithersburg, MD: National Institute of Standards and Technology, April 2020. http://dx.doi.org/10.6028/nist.ir.8300.

4

Metta, Giorgio. An Attentional System for a Humanoid Robot Exploiting Space Variant Vision. Fort Belvoir, VA: Defense Technical Information Center, January 2001. http://dx.doi.org/10.21236/ada434729.

5

Chen, Jessie Y., Razia V. Oden, Caitlin Kenny, and John O. Merritt. Effectiveness of Stereoscopic Displays for Indirect-Vision Driving and Robot Teleoperation. Fort Belvoir, VA: Defense Technical Information Center, August 2010. http://dx.doi.org/10.21236/ada526325.

6

Bowyer, Kevin. Development of the Aspect Graph Representation for Use in Robot Vision. Fort Belvoir, VA: Defense Technical Information Center, October 1991. http://dx.doi.org/10.21236/ada247109.

7

Velázquez López, Noé. Working Paper PUEAA No. 7. Development of a farm robot (Voltan). Universidad Nacional Autónoma de México, Programa Universitario de Estudios sobre Asia y África, 2022. http://dx.doi.org/10.22201/pueaa.005r.2022.

Abstract:
Over the last century, agriculture has evolved from a labor-intensive industry to one that uses mechanized, high-powered production systems. The introduction of robotic technology in agriculture could be a new step towards labor productivity. By mimicking or extending human skills, robots overcome critical human limitations, including the ability to operate in harsh agricultural environments. In this context, in 2014 the development of the first agricultural robot in Mexico (“Voltan”) began at Chapingo Autonomous University. The research’s objective was to develop an autonomous multitasking vehicle for agricultural work. As a result of this development, a novel suspension system was created. In addition, autonomous navigation between crop rows was achieved through computer vision, allowing crop monitoring, fertilizer application and, in general, pest and disease control.
8

Snider, Wesley, and Griff Bilbro. Teleoperation of a Team of Robots with Vision. Fort Belvoir, VA: Defense Technical Information Center, November 2010. http://dx.doi.org/10.21236/ada546999.

9

Kannan, Hariprasad, Vilas K. Chitrakaran, Darren M. Dawson, and Timothy Burg. Vision-Based Leader/Follower Tracking for Nonholonomic Mobile Robots. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada462604.

10

Lee, W. S., Victor Alchanatis, and Asher Levi. Innovative yield mapping system using hyperspectral and thermal imaging for precision tree crop management. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598158.bard.

Abstract:
Original objectives and revisions – The original overall objective was to develop, test and validate a prototype yield mapping system for unit area to increase yield and profit for tree crops. Specific objectives were: (1) to develop a yield mapping system for a static situation, using hyperspectral and thermal imaging independently; (2) to integrate hyperspectral and thermal imaging for improved yield estimation by combining thermal images with hyperspectral images to improve fruit detection; and (3) to expand the system to a mobile platform for a stop-measure-and-go situation. There were no major revisions to the overall objective; however, several revisions were made to the specific objectives. The revised specific objectives were: (1) to develop a yield mapping system for a static situation, using color and thermal imaging independently; (2) to integrate color and thermal imaging for improved yield estimation by combining thermal images with color images to improve fruit detection; and (3) to expand the system to an autonomous mobile platform for a continuous-measure situation.

Background, major conclusions, solutions and achievements – Yield mapping is considered an initial step for applying precision agriculture technologies. Although many yield mapping systems have been developed for agronomic crops, yield mapping remains a difficult task for tree crops. In this project, an autonomous immature fruit yield mapping system was developed. The system could detect and count the number of fruit at early growth stages of citrus so that farmers could apply site-specific management based on the maps. There were two sub-systems, a navigation system and an imaging system. Robot Operating System (ROS) was the backbone for developing the navigation system using an unmanned ground vehicle (UGV). An inertial measurement unit (IMU), wheel encoders and a GPS were integrated using an extended Kalman filter to provide reliable and accurate localization information. A LiDAR was added to support simultaneous localization and mapping (SLAM) algorithms. The color camera on a Microsoft Kinect was used to detect citrus trees, and a new machine vision algorithm was developed to enable autonomous navigation in the citrus grove. A multimodal imaging system, which consisted of two color cameras and a thermal camera, was carried by the vehicle for video acquisition. A novel image registration method was developed for combining color and thermal images and matching fruit in both images, which achieved pixel-level accuracy. A new Color-Thermal Combined Probability (CTCP) algorithm was created to effectively fuse information from the color and thermal images to classify potential image regions into fruit and non-fruit classes. Algorithms were also developed to integrate image registration, information fusion and fruit classification and detection into a single step for real-time processing. The imaging system achieved a precision rate of 95.5% and a recall rate of 90.4% on immature green citrus fruit detection, a great improvement compared to previous studies.

Implications – The development of the immature green fruit yield mapping system will help farmers make early decisions for planning operations and marketing, so that high yield and profit can be achieved.
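The color-thermal fusion step described above can be illustrated with a generic probabilistic combination of two cues. The CTCP algorithm's actual formulation is not given in the abstract, so the sketch below uses standard independent-cue Bayesian fusion; the function and variable names are illustrative only.

```python
def fuse_color_thermal(p_color, p_thermal, prior=0.5):
    """Fuse per-region fruit probabilities from two modalities.

    p_color, p_thermal: P(fruit | color cue) and P(fruit | thermal cue),
    assumed conditionally independent given the class. Returns P(fruit | both).
    """
    prior_odds = prior / (1.0 - prior)
    odds = (p_color / (1.0 - p_color)) * (p_thermal / (1.0 - p_thermal)) / prior_odds
    return odds / (1.0 + odds)

# A region that looks moderately fruit-like in color but clearly warm in thermal:
print(fuse_color_thermal(0.6, 0.9))  # ~0.93, stronger than either cue alone
```

A fused probability above a chosen threshold would mark the region as fruit, feeding the per-tree counts behind the yield map.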