
Journal articles on the topic 'Robot vision systems'



Consult the top 50 journal articles for your research on the topic 'Robot vision systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Monta, Mitsuji, Naoshi Kondo, Seiichi Arima, and Kazuhiko Namba. "Robotic Vision for Bioproduction Systems." Journal of Robotics and Mechatronics 15, no. 3 (2003): 341–48. http://dx.doi.org/10.20965/jrm.2003.p0341.

Abstract:
The vision system is one of the most important external sensors for an agricultural robot, because the robot has to find its target among various objects against a complicated background. Optical and morphological properties should therefore be investigated first so that the target object can be recognized properly when a visual sensor for an agricultural robot is developed. A TV camera is widely used as a vision sensor for agricultural robots. A target image can easily be obtained using color component images from the TV camera when the target color differs from the colors of the other objects and its background …
2

Senoo, Taku, Yuji Yamakawa, Yoshihiro Watanabe, Hiromasa Oku, and Masatoshi Ishikawa. "High-Speed Vision and its Application Systems." Journal of Robotics and Mechatronics 26, no. 3 (2014): 287–301. http://dx.doi.org/10.20965/jrm.2014.p0287.

Abstract:
[Image: batting/throwing robots] This paper introduces the high-speed vision systems the authors developed, together with their applications. Architecture and development examples of high-speed vision are shown first in the sections that follow; then target tracking using active vision is explained. High-speed vision applied to robot control, design guidelines, and the development system for a high-speed robot are then introduced as examples. High-speed robot tasks, including …
3

C, Abhishek. "Development of Hexapod Robot with Computer Vision." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (2021): 1796–805. http://dx.doi.org/10.22214/ijraset.2021.37455.

Abstract:
Nowadays many robotic systems are developed with a great deal of innovation, seeking the flexibility and efficiency of biological systems. The hexapod robot is a good example of such robots: a six-legged robot whose walking movements imitate those of insects. It walks on two alternating sets of three legs, which provides stability, flexibility, and the mobility to travel on irregular surfaces. With these attributes, hexapod robots can be used to explore irregular surfaces, inhospitable places, or places that are difficult for humans to access. …
4

Fujita, Toyomi, Takayuki Tanaka, Satoru Takahashi, Hidenori Takauji, and Shun’ichi Kaneko. "Special Issue on Vision and Motion Control." Journal of Robotics and Mechatronics 27, no. 2 (2015): 121. http://dx.doi.org/10.20965/jrm.2015.p0121.

Abstract:
Robot vision is an important robotics and mechatronics technology for realizing intelligent robot systems that work in the real world. Recent improvements in computer processing are enabling the environment to be recognized and the robot to be controlled based on dynamic, high-speed, highly accurate image information. In industrial applications, target objects are detected much more robustly and reliably through high-speed processing. In intelligent systems applications, security systems that detect human beings have recently been applied actively in computer vision. Another attractive application is …
5

Zeng, Rui, Yuhui Wen, Wang Zhao, and Yong-Jin Liu. "View planning in robot active vision: A survey of systems, algorithms, and applications." Computational Visual Media 6, no. 3 (2020): 225–45. http://dx.doi.org/10.1007/s41095-020-0179-3.

Abstract:
Rapid development of artificial intelligence motivates researchers to expand the capabilities of intelligent and autonomous robots. In many robotic applications, robots are required to make planning decisions based on perceptual information to achieve diverse goals in an efficient and effective way. The planning problem has been investigated in active robot vision, in which a robot analyzes its environment and its own state in order to move sensors to obtain more useful information under certain constraints. View planning, which aims to find the best view sequence for a sensor, is one …
6

Senoo, Taku, Yuji Yamakawa, Shouren Huang, et al. "Dynamic Intelligent Systems Based on High-Speed Vision." Journal of Robotics and Mechatronics 31, no. 1 (2019): 45–56. http://dx.doi.org/10.20965/jrm.2019.p0045.

Abstract:
This paper presents an overview of the high-speed vision system that the authors have been developing, and its applications. First, examples of high-speed vision are presented, and image-related technologies are described. Next, we describe the use of vision systems to track flying objects at sonic speed. Finally, we present high-speed robotic systems that use high-speed vision for robotic control. Descriptions of the tasks that employ high-speed robots center on manipulation, bipedal running, and human-robot cooperation.
7

Kühnlenz, Kolja, and Martin Buss. "Multi-Focal Vision and Gaze Control Improve Navigation Performance." International Journal of Advanced Robotic Systems 9, no. 1 (2012): 25. http://dx.doi.org/10.5772/50920.

Abstract:
Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is presented for a humanoid robot navigation scenario in which the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated, and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies …
8

Menegatti, Emanuele, and Tomas Pajdla. "Omnidirectional robot vision." Robotics and Autonomous Systems 58, no. 6 (2010): 745–46. http://dx.doi.org/10.1016/j.robot.2010.02.006.

9

Chioreanu, Adrian, Stelian Brad, and Cosmin Ioanes. "Vision on Intelligent Management of Industrial Robotics Systems." Applied Mechanics and Materials 162 (March 2012): 368–77. http://dx.doi.org/10.4028/www.scientific.net/amm.162.368.

Abstract:
Based on the Future Internet and ITIL, cutting-edge concepts and approaches related to software service systems in distributed architectures for managing information and processes in industrial robot platforms are introduced. A new approach to defining the business relation for entities that have various interests related to industrial robots, as well as tools that support the new business approach, are also identified in this paper. Further, the architecture of a prototype platform designed around those concepts is presented.
10

Troscianko, T., B. Vincent, I. D. Gilchrist, R. Knight, and O. Holland. "A robot with active vision." Journal of Vision 6, no. 6 (2010): 456. http://dx.doi.org/10.1167/6.6.456.

11

Idesawa, Masanori, Yasushi Mae, and Junji Oaki. "Special Issue on Robot Vision - Vision for Action -." Journal of Robotics and Mechatronics 21, no. 6 (2009): 671. http://dx.doi.org/10.20965/jrm.2009.p0671.

Abstract:
Robot vision is a key technology in robotics and mechatronics for realizing intelligent robot systems that work in the real world. The fact that robot vision algorithms required much time and effort to apply in real-world applications delayed their dissemination until new forms were made possible by recent rapid improvements in computer speed. Now the day is coming when robot vision may surpass human vision in many applications. This special issue presents 13 papers on the latest robot vision achievements and their applications. The first two propose ways of measuring and modeling 3D objects in …
12

Zakhama, Afef, Lotfi Charrabi, and Khaled Jelassi. "Intelligent Selective Compliance Articulated Robot Arm robot with object recognition in a multi-agent manufacturing system." International Journal of Advanced Robotic Systems 16, no. 2 (2019): 172988141984114. http://dx.doi.org/10.1177/1729881419841145.

Abstract:
Nowadays, industry tends to adopt the smart factory concept in production. Technology intelligence is applied to use all resources efficiently. Robots and vision systems are central to this kind of industry. However, information transfer between the robot controller and the vision system poses a great challenge: data exchange between these two systems must be secure, and the transfer must have a very high level of accuracy. In this article, a multi-platform software application using a vision system is developed to control a Selective Compliance Articulated Robot Arm robot. …
13

Park, Jaehong, Wonsang Hwang, Hyunil Kwon, Kwangsoo Kim, and Dong-il “Dan” Cho. "A novel line of sight control system for a robot vision tracking system, using vision feedback and motion-disturbance feedforward compensation." Robotica 31, no. 1 (2012): 99–112. http://dx.doi.org/10.1017/s0263574712000124.

Abstract:
Summary: This paper presents a novel line-of-sight control system for a robot vision tracking system, which uses a position feedforward controller to preposition a camera and a vision feedback controller to compensate for the positioning error. Continuous target tracking is an important function for service robots, surveillance robots, and cooperating robot systems. However, it is difficult to track a specific target using only vision information while a robot is in motion. This is especially true when a robot is moving or rotating fast. The proposed system controls the camera line of sight …
14

Kragic, Danica, and Henrik I. Christensen. "Advances in robot vision." Robotics and Autonomous Systems 52, no. 1 (2005): 1–3. http://dx.doi.org/10.1016/j.robot.2005.03.007.

15

Bingol, Mustafa Can, and Omur Aydogmus. "Practical application of a safe human-robot interaction software." Industrial Robot: The International Journal of Robotics Research and Application 47, no. 3 (2020): 359–68. http://dx.doi.org/10.1108/ir-09-2019-0180.

Abstract:
Purpose: Because of the increased use of robots in industry, it has become inevitable for humans and robots to work together, and human safety has become the primary, non-negotiable factor in joint human-robot operations. For this reason, the purpose of this study was to develop safe human-robot interaction software based on vision and touch. Design/methodology/approach: The software consists of three modules. Firstly, the vision module has two tasks: to determine whether there is a human presence and to measure the distance between the robot and the human within the …
16

Chen, Haoyao, Hailin Huang, Ye Qin, Yanjie Li, and Yunhui Liu. "Vision and laser fused SLAM in indoor environments with multi-robot system." Assembly Automation 39, no. 2 (2019): 297–307. http://dx.doi.org/10.1108/aa-04-2018-065.

Abstract:
Purpose: Multi-robot laser-based simultaneous localization and mapping (SLAM) in large-scale environments is an essential but challenging issue in mobile robotics, especially in situations where no prior knowledge is shared between robots. Moreover, the cumulative errors of each individual robot exert a serious negative effect on loop detection and map fusion. To address these problems, this paper proposes an efficient approach that combines laser and vision measurements. Design/methodology/approach: A multi-robot visual laser-SLAM is developed to realize robust and efficient SLAM in …
17

Marshall, S. "Machine vision: Automated visual inspection and robot vision." Automatica 30, no. 4 (1994): 731–32. http://dx.doi.org/10.1016/0005-1098(94)90163-5.

18

Palma-Amestoy, Rodrigo, Javier Ruiz-del-Solar, José Miguel Yáñez, and Pablo Guerrero. "Spatiotemporal Context Integration in Robot Vision." International Journal of Humanoid Robotics 7, no. 3 (2010): 357–77. http://dx.doi.org/10.1142/s0219843610002192.

Abstract:
Robust vision in dynamic environments using limited processing power is one of the main challenges in robot vision. This is especially true for biped humanoids that use low-end computers. Techniques such as active vision, context-based vision, and multi-resolution are currently used to deal with these highly demanding requirements. Thus, with the main motivation of developing robust, high-performing robot vision systems that can operate in dynamic environments with limited computational resources, we propose a spatiotemporal context integration framework that improves …
19

Bourbakis, Nikolaos G. "A quadtree multimicroprocessor architecture for robot vision systems." Microprocessing and Microprogramming 16, no. 4-5 (1985): 267–71. http://dx.doi.org/10.1016/0165-6074(85)90014-6.

20

Liang, P., Y. L. Chang, and S. Hackwood. "Adaptive self-calibration of vision-based robot systems." IEEE Transactions on Systems, Man, and Cybernetics 19, no. 4 (1989): 811–24. http://dx.doi.org/10.1109/21.35344.

21

Desnoyer, J. L., O. Dessoude, A. Lanusse, and X. Merlo. "Robot Perception Systems: Acoustics Vision Set-Up Control." IFAC Proceedings Volumes 22, no. 6 (1989): 435–39. http://dx.doi.org/10.1016/s1474-6670(17)54415-8.

22

Kinnell, Peter, Tom Rymer, John Hodgson, Laura Justham, and Mike Jackson. "Autonomous metrology for robot mounted 3D vision systems." CIRP Annals 66, no. 1 (2017): 483–86. http://dx.doi.org/10.1016/j.cirp.2017.04.069.

23

Kuo, Chung-Hsien, Hung-Chyun Chou, Shou-Wei Chi, and Yu-De Lien. "Vision-Based Obstacle Avoidance Navigation with Autonomous Humanoid Robots for Structured Competition Problems." International Journal of Humanoid Robotics 10, no. 3 (2013): 1350021. http://dx.doi.org/10.1142/s0219843613500217.

Abstract:
Biped humanoid robots have been developed to successfully perform human-like locomotion. Building on well-developed locomotion control systems, humanoid robots are further expected to achieve high-level intelligence, such as vision-based obstacle avoidance navigation. To provide standard obstacle avoidance navigation problems for autonomous humanoid robot research, the HuroCup League of the Federation of International Robot-Soccer Association (FIRA) and the RoboCup Humanoid League defined conditions and rules for competitions to evaluate performance. In this paper, the vision-based …
24

Sulistijono, Indra Adji, Son Kuswadi, One Setiaji, Inzar Salfikar, and Naoyuki Kubota. "A Study on Fuzzy Control of Humanoid Soccer Robot EFuRIO for Vision Control System and Walking Movement." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 3 (2012): 444–52. http://dx.doi.org/10.20965/jaciii.2012.p0444.

Abstract:
Instability is one of the major defects of humanoid robots. Recently, various methods for the stability and reliability of humanoid robots have been studied actively. We propose a new fuzzy-logic control scheme for vision systems that enables a robot to search for and kick a ball towards an opponent's goal. In this paper, a stabilization algorithm is proposed using the balance condition of the robot, which is measured using accelerometer sensors during standing and walking; turning movements are estimated from these data. From this information the robot selects the appropriate motion …
25

Sitte, Joaquin, and Petra Winzer. "Systematic Design of Complex Artefacts: Robot Vision." International Journal of Information Acquisition 5, no. 1 (2008): 51–63. http://dx.doi.org/10.1142/s0219878908001491.

Abstract:
In this paper we use the design of an innovative on-board vision system for a small commercial minirobot to demonstrate the application of the demand compliant design (DeCoDe) method. Vision systems are amongst the most complex sensor systems, both in nature and in engineering, and thus provide an excellent arena for testing design methods. A review of current design methods for mechatronic systems shows that there are no methods that support or require a complete description of the product system. The DeCoDe method is a step towards overcoming this deficiency. The minirobot design is carried …
26

Maeder, Andreas, Hannes Bistry, and Jianwei Zhang. "Intelligent Vision Systems for Robotic Applications." International Journal of Information Acquisition 5, no. 3 (2008): 259–67. http://dx.doi.org/10.1142/s0219878908001648.

Abstract:
Vision-based sensors are a key component of robot systems, where many tasks depend on image data. Real-time control constraints bind a lot of processing power for only a single sensor modality. Dedicated and distributed processing resources are the "natural" solution to overcome this limitation. This paper presents experiments using embedded processors as well as dedicated hardware to execute various image (pre)processing tasks. From these, architectural concepts and requirements for intelligent vision systems are derived.
27

Yu, Xiaojun, Zeming Fan, Hao Wan, et al. "Positioning, Navigation, and Book Accessing/Returning in an Autonomous Library Robot using Integrated Binocular Vision and QR Code Identification Systems." Sensors 19, no. 4 (2019): 783. http://dx.doi.org/10.3390/s19040783.

Abstract:
With rapid advancements in artificial intelligence and mobile robots, some of the tedious yet simple jobs in modern libraries, like the book accessing and returning (BAR) operations that had previously been performed manually, could be undertaken by robots. Due to the limited accuracy of existing positioning and navigation (P&N) technologies and the operational errors accumulated within the robot P&N process, however, most current robots are not able to perform such high-precision operations. To address these practical issues, we propose, for the first time (to the best of our knowledge), …
28

Martyshkin, Alexey I. "Motion Planning Algorithm for a Mobile Robot with a Smart Machine Vision System." Nexo Revista Científica 33, no. 02 (2020): 651–71. http://dx.doi.org/10.5377/nexo.v33i02.10800.

Abstract:
This study is devoted to the challenges of motion planning for mobile robots with smart machine vision systems. Motion planning for mobile robots in environments with obstacles is a problem that must be dealt with when creating robots suitable for operation in real-world conditions. The solutions found to date are predominantly private and highly specialized, which prevents judging how successful they are at solving the problem of effective motion planning. Solutions with a narrow application field already exist and have long been in development; however, no major breakthrough has …
29

Li, Liyuan, Qianli Xu, Gang S. Wang, Xinguo Yu, Yeow Kee Tan, and Haizhou Li. "Visual Perception Based Engagement Awareness for Multiparty Human–Robot Interaction." International Journal of Humanoid Robotics 12, no. 04 (2015): 1550019. http://dx.doi.org/10.1142/s021984361550019x.

Abstract:
Computational systems for human–robot interaction (HRI) could benefit from visual perception of the social cues commonly employed in human–human interactions. However, existing systems focus on one or two cues for attention or intention estimation. This research investigates how social robots may exploit a wide spectrum of visual cues for multiparty interactions. It is proposed that the vision system for social cue perception should be supported by two dimensions of functionality, namely, vision functionality and cognitive functionality. A vision-based system is proposed for a robot …
30

Tan, Bin. "Soccer-Assisted Training Robot Based on Image Recognition Omnidirectional Movement." Wireless Communications and Mobile Computing 2021 (August 16, 2021): 1–10. http://dx.doi.org/10.1155/2021/5532210.

Abstract:
With the continuous emergence and innovation of computer technology, mobile robots are a relatively hot topic in the field of artificial intelligence and an important research area for more and more scholars. The core capability of a mobile robot is real-time perception of the surrounding environment, self-positioning, and self-navigation based on this information. It is the key to the robot's autonomous movement and has strategic research significance. Among them, the goal recognition ability of the soccer robot vision system is the basis of robot path planning and motion …
31

Tasevski, Jovica, Milutin Nikolic, and Dragisa Miskovic. "Integration of an industrial robot with the systems for image and voice recognition." Serbian Journal of Electrical Engineering 10, no. 1 (2013): 219–30. http://dx.doi.org/10.2298/sjee1301219t.

Abstract:
The paper reports a solution for the integration of the industrial robot ABB IRB140 with a system for automatic speech recognition (ASR) and a system for computer vision. The robot has the task of manipulating objects placed randomly on a pad lying on a table, and the computer vision system has to recognize their characteristics (shape, dimensions, color, position, and orientation). The ASR system has the task of recognizing human speech and using it as a command to the robot, so that the robot can manipulate the objects.
32

Preising, B., and T. C. Hisa. "Robot performance measurement and calibration using a 3D computer vision system." Robotica 13, no. 4 (1995): 327–37. http://dx.doi.org/10.1017/s0263574700018762.

Abstract:
Summary: Present-day robot systems are manufactured to perform within industry-accepted tolerances. However, to use such systems for tasks requiring high precision, various methods of robot calibration are generally required. These procedures can improve the accuracy of a robot within a small volume of the robot's workspace. The objective of this paper is to demonstrate the use of a single-camera 3D computer vision system as a position sensor in order to perform robot calibration. A vision feedback scheme, termed Vision-guided Robot Control (VRC), is described which can improve the accuracy of a …
33

Nehmzow, Ulrich. "Vision processing for robot learning." Industrial Robot: An International Journal 26, no. 2 (1999): 121–30. http://dx.doi.org/10.1108/01439919910260204.

34

Zhang, Hui, Xieyuanli Chen, Huimin Lu, and Junhao Xiao. "Distributed and collaborative monocular simultaneous localization and mapping for multi-robot systems in large-scale environments." International Journal of Advanced Robotic Systems 15, no. 3 (2018): 172988141878017. http://dx.doi.org/10.1177/1729881418780178.

Abstract:
In this article, we propose a distributed and collaborative monocular simultaneous localization and mapping system for multi-robot systems in large-scale environments, where monocular vision is the only exteroceptive sensor. Each robot estimates its pose and reconstructs the environment simultaneously using the same monocular simultaneous localization and mapping algorithm. Meanwhile, the robots share their incremental maps by streaming keyframes through robot operating system messages and the wireless network. Subsequently, each robot in the group can obtain the global map with …
35

Rahmaniar, Wahyu, Wen-June Wang, Wahyu Caesarendra, et al. "Distance Measurement of Unmanned Aerial Vehicles Using Vision-Based Systems in Unknown Environments." Electronics 10, no. 14 (2021): 1647. http://dx.doi.org/10.3390/electronics10141647.

Abstract:
Localization for indoor aerial robots remains a challenging issue because global positioning system (GPS) signals often cannot reach inside buildings. In previous studies, navigation of mobile robots without GPS required building maps to be registered beforehand. This paper proposes a novel framework for addressing indoor positioning for unmanned aerial vehicles (UAVs) in unknown environments using a camera. First, the UAV attitude is estimated to determine whether the robot is moving forward. Then, the camera position is estimated based on optical flow and the Kalman filter. Semantic …
36

Geng, Mingyang, Shuqi Liu, and Zhaoxia Wu. "Sensor Fusion-Based Cooperative Trail Following for Autonomous Multi-Robot System." Sensors 19, no. 4 (2019): 823. http://dx.doi.org/10.3390/s19040823.

Abstract:
Autonomously following a man-made trail in the wild is a challenging problem for robotic systems. Recently, deep learning-based approaches have cast the trail-following problem as an image classification task and have achieved great success on the vision-based trail-following problem. However, existing research focuses only on trail following with a single-robot system. In contrast, many robotic tasks in reality, such as search and rescue, are conducted by a group of robots. When these robots are grouped to move in the wild, they can cooperate to achieve more robust performance …
37

Cannata, Giorgio, and Enrico Grosso. "On perceptual advantages of active robot vision." Journal of Robotic Systems 16, no. 3 (1999): 163–83. http://dx.doi.org/10.1002/(sici)1097-4563(199903)16:3<163::aid-rob3>3.0.co;2-y.

38

Weber, Cornelius, Stefan Wermter, and Alexandros Zochios. "Robot docking with neural vision and reinforcement." Knowledge-Based Systems 17, no. 2-4 (2004): 165–72. http://dx.doi.org/10.1016/j.knosys.2004.03.012.

39

Wirbel, Emilie, Silvère Bonnabel, Arnaud de La Fortelle, and Fabien Moutarde. "Humanoid Robot Navigation: Getting Localization Information from Vision." Journal of Intelligent Systems 23, no. 2 (2014): 113–32. http://dx.doi.org/10.1515/jisys-2013-0079.

Abstract:
In this article, we present our work to provide a navigation and localization system on a constrained humanoid platform, the NAO robot, without modifying the robot's sensors. First, we tried to implement a simple and light version of classic monocular Simultaneous Localization and Mapping (SLAM) algorithms, while adapting to the CPU and camera quality, which turned out to be insufficient on the platform for the moment. From our work on keypoint tracking, we identified that some keypoints can still be accurately tracked at little cost, and used them to build a visual compass. This compass …
40

Ikeuchi, Katsushi, Khotaro Ohba, and Yoichi Sato. "Special Issue on 'Robot Vision'." Advanced Robotics 12, no. 3 (1997): 309–11. http://dx.doi.org/10.1163/156855398x00208.

41

Hannan, M. W., and I. D. Walker. "Real-time shape estimation for continuum robots using vision." Robotica 23, no. 5 (2005): 645–51. http://dx.doi.org/10.1017/s0263574704001018.

Abstract:
This paper describes external camera-based shape estimation for continuum robots. Continuum robots have a continuous backbone made of sections which bend to produce changes of configuration. A major difficulty with continuum robots is determining the robot's shape, as there are no discrete joints. This paper presents a method for shape determination based on machine vision. Using an engineered environment and image processing from a high-speed camera, shape determination of a continuum robot is achieved. Experimental results showing the effectiveness of the technique on our Elephant's …
42

Adachi, Yoshinobu, and Masayoshi Kakikura. "Research on the Sheepdog Problem Using Cellular Automata." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 9 (2007): 1099–106. http://dx.doi.org/10.20965/jaciii.2007.p1099.

Abstract:
The simulation framework we propose for complex path planning problems with multiagent systems focuses on the sheepdog problem for handling distributed autonomous robot systems – an extension of the pursuit problem to one prey robot and multiple predator robots. The sheepdog problem involves a more complex situation in which multiple dog robots chase and herd multiple sheep robots. We use the Boids model and cellular automata to model sheep flocking and the chase-and-herd behavior of the dog robots. We conduct experiments using a sheepdog problem simulator and study cooperative behavior.
43

Sahoo, Saumya R., and Shital S. Chiddarwar. "Flatness-based control scheme for hardware-in-the-loop simulations of omnidirectional mobile robot." SIMULATION 96, no. 2 (2019): 169–83. http://dx.doi.org/10.1177/0037549719859064.

Abstract:
Omnidirectional robots offer better maneuverability and a greater degree of freedom than conventional wheeled mobile robots. However, the design of their control system remains a challenge. In this study, a real-time simulation system is used to design and develop a hardware-in-the-loop (HIL) simulation platform for an omnidirectional mobile robot using bond graphs and a flatness-based controller. The control input from the simulation model is transferred to the robot hardware through an Arduino microcontroller input board. For feedback to the simulation model, a Kinect-based vision system is used …
44

Feng, Haibo, Yanwu Zhai, and Yili Fu. "Development of master-slave magnetic anchoring vision robotic system for single-port laparoscopy (SPL) surgery." Industrial Robot: An International Journal 45, no. 4 (2018): 458–68. http://dx.doi.org/10.1108/ir-01-2018-0016.

Full text
Abstract:
Purpose: Surgical robot systems have been used in single-port laparoscopy (SPL) surgery to improve patient outcomes. This study aims to develop a vision robot system for SPL surgery that effectively improves the visualization of surgical robot systems during relatively complex surgical procedures. Design/methodology/approach: In this paper, a new master–slave magnetic anchoring vision robotic system for SPL surgery is proposed. A lighting distribution analysis for the imaging unit of the vision robot was carried out to guarantee illumination uniformity in the workspace during SPL surgery. Moreover, c
APA, Harvard, Vancouver, ISO, and other styles
45

Koutaki, Gou, and Keiichi Uchimura. "Fast and Robust Vision System for Shogi Robot." Journal of Robotics and Mechatronics 27, no. 2 (2015): 182–90. http://dx.doi.org/10.20965/jrm.2015.p0182.

Full text
Abstract:
[Image: Developed shogi robot system] The authors developed a low-cost, safe shogi robot system. A Web camera installed on the lower frame recognizes the pieces and their positions on the board, after which the game program plays. A robot arm moves the selected piece into position when playing against a human player. A fast, robust image-processing algorithm is needed because a low-cost wide-angle Web camera and robot are used. The authors describe image processing and
APA, Harvard, Vancouver, ISO, and other styles
46

Lenz, Reiner. "Lie methods for color robot vision." Robotica 26, no. 4 (2008): 453–64. http://dx.doi.org/10.1017/s0263574707003906.

Full text
Abstract:
SUMMARY: We describe how Lie-theoretical methods can be used to analyze color-related problems in machine vision. The basic observation is that the nonnegative nature of spectral color signals restricts these functions to be members of a limited, conical section of the larger Hilbert space of square-integrable functions. From this observation, we conclude that the space of color signals can be equipped with a coordinate system consisting of a half-axis and a unit ball with the Lorentz groups as natural transformation group. We introduce the theory of the Lorentz group SU(1, 1) as a natural tool
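For reference, the Lorentz-type group SU(1, 1) invoked in the summary has a standard explicit form; this is textbook group theory, not material specific to the paper:

```latex
SU(1,1) = \left\{ \begin{pmatrix} a & b \\ \bar{b} & \bar{a} \end{pmatrix} : a, b \in \mathbb{C},\; |a|^2 - |b|^2 = 1 \right\}
```

These matrices act on the open unit disk by Möbius transformations $z \mapsto (az + b)/(\bar{b}z + \bar{a})$, which is what makes the group a natural symmetry for the half-axis (intensity) plus unit-ball (chromaticity) coordinates described in the abstract.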
APA, Harvard, Vancouver, ISO, and other styles
47

BANDERA, J. P., J. A. RODRÍGUEZ, L. MOLINA-TANCO, and A. BANDERA. "A SURVEY OF VISION-BASED ARCHITECTURES FOR ROBOT LEARNING BY IMITATION." International Journal of Humanoid Robotics 09, no. 01 (2012): 1250006. http://dx.doi.org/10.1142/s0219843612500065.

Full text
Abstract:
Learning by imitation is a natural and intuitive way to teach social robots new behaviors. While these learning systems can use different sensory inputs, vision is often their main or even their only source of input data. However, while many vision-based robot learning by imitation (RLbI) architectures have been proposed in the last decade, they may be difficult to compare due to the absence of a common, structured description. The first contribution of this survey is the definition of a set of standard components that can be used to describe any RLbI architecture. Once these components have b
APA, Harvard, Vancouver, ISO, and other styles
48

Kyrki, Ville, and Danica Kragic. "Computer and Robot Vision [TC Spotlight]." IEEE Robotics & Automation Magazine 18, no. 2 (2011): 121–22. http://dx.doi.org/10.1109/mra.2011.941638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Siagian, C., and L. Itti. "Biologically Inspired Mobile Robot Vision Localization." IEEE Transactions on Robotics 25, no. 4 (2009): 861–73. http://dx.doi.org/10.1109/tro.2009.2022424.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Braggins, Don. "A critical look at robot vision." Industrial Robot: An International Journal 22, no. 6 (1995): 9–12. http://dx.doi.org/10.1108/01439919510105093.

Full text
APA, Harvard, Vancouver, ISO, and other styles