
Journal articles on the topic 'Robot vision systems'


Consult the top 50 journal articles for your research on the topic 'Robot vision systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Monta, Mitsuji, Naoshi Kondo, Seiichi Arima, and Kazuhiko Namba. "Robotic Vision for Bioproduction Systems." Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 341–48. http://dx.doi.org/10.20965/jrm.2003.p0341.

Abstract:
The vision system is one of the most important external sensors for an agricultural robot, because the robot has to find its target among various objects against a complicated background. Optical and morphological properties should therefore be investigated first, so that the target object can be recognized properly when a visual sensor for an agricultural robot is developed. A TV camera is widely used as a vision sensor for agricultural robots. A target image can easily be obtained from the camera's color component images when the target color differs from the colors of the other objects and the background. When the target has a color similar to its background, it is often possible to discriminate objects with a monochrome TV camera whose sensitivity extends from the visible to the infrared region. However, it is not easy to measure target depth with TV cameras, because many objects may overlap in the field of view. In this paper, robotic vision using TV cameras for tomato and cucumber harvesting robots and a depth measurement system using a laser scanner are introduced.
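For illustration, the color-component idea this abstract describes can be sketched in a few lines: when a tomato's red differs clearly from the green canopy, thresholding a red-minus-green channel isolates candidate fruit. This is a minimal sketch under assumed conditions, not the authors' implementation; the filename and the threshold value are illustrative.

```python
# Minimal sketch of color-component target segmentation (assumes OpenCV/NumPy).
# "tomato.jpg" and the threshold of 40 are illustrative assumptions.
import cv2
import numpy as np

img = cv2.imread("tomato.jpg").astype(np.int16)       # BGR camera frame
diff = np.clip(img[:, :, 2] - img[:, :, 1], 0, 255)   # red minus green component
mask = (diff > 40).astype(np.uint8) * 255             # scene-dependent threshold

# Keep the largest blob as the candidate fruit region.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    print("candidate target center:", (x + w // 2, y + h // 2))
```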
2

Senoo, Taku, Yuji Yamakawa, Yoshihiro Watanabe, Hiromasa Oku, and Masatoshi Ishikawa. "High-Speed Vision and its Application Systems." Journal of Robotics and Mechatronics 26, no. 3 (June 20, 2014): 287–301. http://dx.doi.org/10.20965/jrm.2014.p0287.

Abstract:
[Figure: Batting/throwing robots] This paper introduces the high-speed vision systems the authors developed, together with their applications. Architecture and development examples of high-speed vision are shown first, then target tracking using active vision is explained. High-speed vision applied to robot control, design guidelines, and the development system for a high-speed robot are then introduced as examples. High-speed robot tasks, including dynamic manipulation and the handling of soft objects, are explained, followed by book flipping scanning – an image analysis application – and by 1 ms auto pan/tilt and micro visual feedback, which are optical applications.
3

C, Abhishek. "Development of Hexapod Robot with Computer Vision." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 1796–805. http://dx.doi.org/10.22214/ijraset.2021.37455.

Abstract:
Nowadays many robotic systems are developed with a great deal of innovation, seeking the flexibility and efficiency of biological systems. The hexapod robot is a prime example: a six-legged robot whose walking movements imitate those of insects. It walks on two alternating sets of three legs, which provides the stability, flexibility, and mobility needed to travel on irregular surfaces. With these attributes, hexapod robots can be used to explore irregular surfaces, inhospitable places, or places that are difficult for humans to access. This paper covers the development of a hexapod robot with digital image processing implemented on a Raspberry Pi, as a study of robotic systems with legged locomotion and robotic vision. The work integrates a robotic system and an embedded digital image processing system, programmed in a high-level language using Python. The robot is equipped with a camera to capture real-time video and uses a distance sensor that allows it to detect obstacles. It is self-stabilizing and can detect corners. Each of the six legs has 3 degrees of freedom, giving 18-DOF robotic movement. The use of multiple degrees of freedom at the leg joints allows legged robots to change their direction of movement without slippage. Additionally, it is possible to change the height from the ground, introducing damping and a decoupling between the terrain irregularities and the body of the robot. Keywords: Hexapod, Raspberry Pi, Computer vision, Object detection, YOLO, Servo motor, OpenCV.
4

Fujita, Toyomi, Takayuki Tanaka, Satoru Takahashi, Hidenori Takauji, and Shun’ichi Kaneko. "Special Issue on Vision and Motion Control." Journal of Robotics and Mechatronics 27, no. 2 (April 20, 2015): 121. http://dx.doi.org/10.20965/jrm.2015.p0121.

Abstract:
Robot vision is an important robotics and mechatronics technology for realizing intelligent robot systems that work in the real world. Recent improvements in computer processing are enabling the environment to be recognized and robots to be controlled based on dynamic, high-speed, highly accurate image information. In industrial applications, target objects are detected much more robustly and reliably through high-speed processing. In intelligent systems applications, security systems that detect human beings have recently been applied actively in computer vision. Another attractive application is recognizing actions and gestures by detecting humans – an application that would enable human beings and robots to interact and cooperate more smoothly as robots observe and assist human partners. This key technology could be used to aid the elderly and handicapped in practical environments such as hospitals and homes. This special issue covers topics on robot vision and motion control, including dynamic image processing. These articles are certain to be both informative and interesting to robotics and mechatronics researchers. We thank the authors for submitting their work and for assisting during the review process. We also thank the reviewers for their dedicated time and effort.
5

Zeng, Rui, Yuhui Wen, Wang Zhao, and Yong-Jin Liu. "View planning in robot active vision: A survey of systems, algorithms, and applications." Computational Visual Media 6, no. 3 (August 1, 2020): 225–45. http://dx.doi.org/10.1007/s41095-020-0179-3.

Abstract:
Rapid development of artificial intelligence motivates researchers to expand the capabilities of intelligent and autonomous robots. In many robotic applications, robots are required to make planning decisions based on perceptual information to achieve diverse goals in an efficient and effective way. The planning problem has been investigated in active robot vision, in which a robot analyzes its environment and its own state in order to move sensors to obtain more useful information under certain constraints. View planning, which aims to find the best view sequence for a sensor, is one of the most challenging issues in active robot vision. The quality and efficiency of view planning are critical for many robot systems and are influenced by the nature of their tasks, hardware conditions, scanning states, and planning strategies. In this paper, we first summarize some basic concepts of active robot vision, and then review representative work on systems, algorithms and applications from four perspectives: object reconstruction, scene reconstruction, object recognition, and pose estimation. Finally, some potential directions are outlined for future work.
6

Senoo, Taku, Yuji Yamakawa, Shouren Huang, Keisuke Koyama, Makoto Shimojo, Yoshihiro Watanabe, Leo Miyashita, Masahiro Hirano, Tomohiro Sueishi, and Masatoshi Ishikawa. "Dynamic Intelligent Systems Based on High-Speed Vision." Journal of Robotics and Mechatronics 31, no. 1 (February 20, 2019): 45–56. http://dx.doi.org/10.20965/jrm.2019.p0045.

Abstract:
This paper presents an overview of the high-speed vision system that the authors have been developing, and its applications. First, examples of high-speed vision are presented, and image-related technologies are described. Next, we describe the use of vision systems to track flying objects at sonic speed. Finally, we present high-speed robotic systems that use high-speed vision for robotic control. Descriptions of the tasks that employ high-speed robots center on manipulation, bipedal running, and human-robot cooperation.
7

Kühnlenz, Kolja, and Martin Buss. "Multi-Focal Vision and Gaze Control Improve Navigation Performance." International Journal of Advanced Robotic Systems 9, no. 1 (January 1, 2012): 25. http://dx.doi.org/10.5772/50920.

Abstract:
Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is done considering a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.
8

Menegatti, Emanuele, and Tomas Pajdla. "Omnidirectional robot vision." Robotics and Autonomous Systems 58, no. 6 (June 2010): 745–46. http://dx.doi.org/10.1016/j.robot.2010.02.006.

9

Chioreanu, Adrian, Stelian Brad, and Cosmin Ioanes. "Vision on Intelligent Management of Industrial Robotics Systems." Applied Mechanics and Materials 162 (March 2012): 368–77. http://dx.doi.org/10.4028/www.scientific.net/amm.162.368.

Abstract:
Based on the Future Internet and ITIL, cutting-edge concepts and approaches related to software service systems in distributed architectures for managing information and processes on industrial robot platforms are introduced. A new approach to defining the business relations among entities that have various interests in industrial robots, as well as tools that support the new business approach, is also identified in this paper. Further, the architecture of a prototype platform designed around those concepts is presented.
10

Troscianko, T., B. Vincent, I. D. Gilchrist, R. Knight, and O. Holland. "A robot with active vision." Journal of Vision 6, no. 6 (March 19, 2010): 456. http://dx.doi.org/10.1167/6.6.456.

11

Idesawa, Masanori, Yasushi Mae, and Junji Oaki. "Special Issue on Robot Vision - Vision for Action -." Journal of Robotics and Mechatronics 21, no. 6 (December 20, 2009): 671. http://dx.doi.org/10.20965/jrm.2009.p0671.

Abstract:
Robot vision is a key technology in robotics and mechatronics for realizing intelligent robot systems that work in the real world. Robot vision algorithms once required so much time and effort to apply in real-world applications that their dissemination was delayed until recent rapid improvements in computer speed made new forms possible. Now the day is coming when robot vision may surpass human vision in many applications. This special issue presents 13 papers on the latest robot vision achievements and their applications. The first two propose ways of measuring and modeling 3D objects in everyday environments. Four more detail object detection and tracking, including visual servoing. Three propose advances in hand feature extraction and pose calculation, and one treats video coding for visual sensor networks. Two papers discuss robot vision applications based on human visual physiology, and the last clarifies an application in optical force sensors. We thank the authors for their invaluable contributions to this issue and the reviewers for their generous time and effort. Last, we thank the Editorial Board of JRM for making this issue possible.
12

Zakhama, Afef, Lotfi Charrabi, and Khaled Jelassi. "Intelligent Selective Compliance Articulated Robot Arm robot with object recognition in a multi-agent manufacturing system." International Journal of Advanced Robotic Systems 16, no. 2 (March 1, 2019): 172988141984114. http://dx.doi.org/10.1177/1729881419841145.

Abstract:
Nowadays, industry tends to adopt the smart factory concept in production, applying technology intelligence to use all resources efficiently. Robots and vision systems are central to this kind of industry. However, information transfer between the robot controller and the vision system poses a great challenge: data exchange between these two systems must be secure, and the transfer must be highly accurate. In this article, a multi-platform software application using a vision system is developed to control a Selective Compliance Articulated Robot Arm (SCARA) robot. The software solution includes the detection of defects in a product by calculating a compliance rate using an efficient algorithm. An analysis of four different algorithms based on histogram similarity functions is presented. Then, the most efficient algorithm is integrated into the application, which provides secure communication between three different operating systems. Experiments in a multi-agent manufacturing center validate the effectiveness of the proposed method. Tests demonstrate the efficiency of the data transfer between the vision system, the multi-platform software application, and the SCARA robot. This data transfer can be controlled with high accuracy without any additional manual parameter tuning.
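As a rough sketch of one histogram-based similarity function of the kind the article compares, the snippet below scores a sample image against a reference using OpenCV's histogram correlation. The filenames and the 0.9 pass threshold are assumptions for illustration, not the paper's algorithm or values.

```python
# Hedged sketch: histogram correlation as a compliance score (assumes OpenCV).
import cv2

def compliance_rate(ref_path, sample_path):
    ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
    sample = cv2.imread(sample_path, cv2.IMREAD_GRAYSCALE)
    h_ref = cv2.calcHist([ref], [0], None, [256], [0, 256])
    h_sam = cv2.calcHist([sample], [0], None, [256], [0, 256])
    cv2.normalize(h_ref, h_ref)
    cv2.normalize(h_sam, h_sam)
    return cv2.compareHist(h_ref, h_sam, cv2.HISTCMP_CORREL)  # 1.0 = identical

if compliance_rate("reference.png", "sample.png") < 0.9:      # assumed threshold
    print("possible defect: flag part for inspection")
```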
13

Park, Jaehong, Wonsang Hwang, Hyunil Kwon, Kwangsoo Kim, and Dong-il “Dan” Cho. "A novel line of sight control system for a robot vision tracking system, using vision feedback and motion-disturbance feedforward compensation." Robotica 31, no. 1 (April 12, 2012): 99–112. http://dx.doi.org/10.1017/s0263574712000124.

Abstract:
Summary: This paper presents a novel line of sight control system for a robot vision tracking system, which uses a position feedforward controller to preposition a camera, and a vision feedback controller to compensate for the positioning error. Continuous target tracking is an important function for service robots, surveillance robots, and cooperating robot systems. However, it is difficult to track a specific target using only vision information, while a robot is in motion. This is especially true when a robot is moving fast or rotating fast. The proposed system controls the camera line of sight, using a feedforward controller based on estimated robot position and motion information. Specifically, the camera is rotated in the direction opposite to the motion of the robot. To implement the system, a disturbance compensator is developed to determine the current position of the robot, even when the robot wheels slip. The disturbance compensator is comprised of two extended Kalman filters (EKFs) and a slip detector. The inputs of the disturbance compensator are data from an accelerometer, a gyroscope, and two wheel-encoders. The vision feedback information, which is the targeting error, is used as the measurement update for the two EKFs. Using output of the disturbance compensator, an actuation module pans the camera to locate a target at the center of an image plane. This line of sight control methodology improves the recognition performance of the vision tracking system, by keeping a target image at the center of an image frame. The proposed system is implemented on a two-wheeled robot. Experiments are performed for various robot motion scenarios in dynamic situations to evaluate the tracking and recognition performance. Experimental results showed the proposed system achieves high tracking and recognition performances with a small targeting error.
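The control idea reads as feedforward plus feedback: pan the camera against the robot's estimated rotation, then let the vision targeting error trim the residual. A minimal sketch follows; the gain, timestep, and the simple proportional form are illustrative assumptions, not the paper's controller.

```python
# Hedged sketch of feedforward/feedback line-of-sight control.
# robot_yaw_rate (rad/s) would come from a motion estimator such as the
# paper's disturbance compensator; target_error_px is the target's offset
# from the image center. Gains and dt are assumed values.
def camera_pan_increment(robot_yaw_rate, target_error_px, dt=0.01, kp=0.002):
    feedforward = -robot_yaw_rate * dt    # counter-rotate against robot motion
    feedback = -kp * target_error_px      # vision feedback corrects the residual
    return feedforward + feedback         # incremental pan angle in radians
```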
14

Kragic, Danica, and Henrik I. Christensen. "Advances in robot vision." Robotics and Autonomous Systems 52, no. 1 (July 2005): 1–3. http://dx.doi.org/10.1016/j.robot.2005.03.007.

15

Bingol, Mustafa Can, and Omur Aydogmus. "Practical application of a safe human-robot interaction software." Industrial Robot: the international journal of robotics research and application 47, no. 3 (January 16, 2020): 359–68. http://dx.doi.org/10.1108/ir-09-2019-0180.

Abstract:
Purpose: Because of the increased use of robots in industry, it has become inevitable for humans and robots to work together. Therefore, human safety has become the primary, uncompromisable factor in joint human-robot operations. For this reason, the purpose of this study was to develop safe human-robot interaction software based on vision and touch.
Design/methodology/approach: The software consists of three modules. Firstly, the vision module has two tasks: to determine whether there is a human presence, and to measure the distance between the robot and the human within the robot's working space, using convolutional neural networks (CNNs) and depth sensors. Secondly, the touch detection module perceives whether or not a human physically touches the robot within the same work environment, using robot axis torques, a wavelet packet decomposition algorithm, and a CNN. Lastly, the robot's operating speed is adjusted by the robot's control module according to the hazard levels coming from the vision and touch modules.
Findings: The developed software was tested with an industrial robot manipulator and successful results were obtained with minimal error.
Practical implications: The success of the developed algorithm was demonstrated in the current study, and the algorithm can be used in other industrial robots for safety.
Originality/value: In this study, a new and practical safety algorithm is proposed, and the health of people working with industrial robots is safeguarded.
16

Chen, Haoyao, Hailin Huang, Ye Qin, Yanjie Li, and Yunhui Liu. "Vision and laser fused SLAM in indoor environments with multi-robot system." Assembly Automation 39, no. 2 (April 1, 2019): 297–307. http://dx.doi.org/10.1108/aa-04-2018-065.

Abstract:
Purpose: Multi-robot laser-based simultaneous localization and mapping (SLAM) in large-scale environments is an essential but challenging problem in mobile robotics, especially in situations where no prior knowledge is shared between robots. Moreover, the cumulative errors of each individual robot exert a serious negative effect on loop detection and map fusion. To address these problems, this paper proposes an efficient approach that combines laser and vision measurements.
Design/methodology/approach: A multi-robot visual laser-SLAM system is developed to realize robust and efficient SLAM in large-scale environments; both vision and laser loop detections are integrated to detect robust loops. A method based on ORB (Oriented FAST and Rotated BRIEF) feature detection and bag of words (BoW) is developed to ensure the robustness and computational efficiency of the multi-robot SLAM system. A robust and efficient graph fusion algorithm is proposed to merge pose graphs from different robots.
Findings: The proposed method detects loops more quickly and accurately than laser-only SLAM, and it can fuse the submaps of the individual robots to improve the efficiency, accuracy, and robustness of the system.
Originality/value: Compared with the state of the art in multi-robot SLAM, the paper proposes a novel and more sophisticated approach. Vision-based and laser-based loops are integrated to realize robust loop detection. ORB features and BoW techniques are further utilized to achieve real-time performance. Finally, random sample consensus (RANSAC) and least-squares methods are used to remove outlier loops among robots.
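To make the ORB step concrete, the sketch below matches binary ORB descriptors between two keyframes as a loop-closure candidate test. It illustrates the feature machinery only; the paper's pipeline additionally uses BoW indexing and pose-graph fusion, and the filenames and the 50-match threshold are assumptions.

```python
# Hedged sketch: ORB matching as a loop-candidate check (assumes OpenCV).
import cv2

orb = cv2.ORB_create(nfeatures=1000)
img_a = cv2.imread("keyframe_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical keyframes
img_b = cv2.imread("keyframe_b.png", cv2.IMREAD_GRAYSCALE)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Hamming distance suits ORB's binary descriptors; crossCheck prunes weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_a, des_b)
if len(matches) > 50:                                       # assumed threshold
    print("loop-closure candidate between keyframes")
```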
17

Marshall, S. "Machine vision: Automated visual inspection and robot vision." Automatica 30, no. 4 (April 1994): 731–32. http://dx.doi.org/10.1016/0005-1098(94)90163-5.

18

PALMA-AMESTOY, RODRIGO, JAVIER RUIZ-DEL-SOLAR, JOSÉ MIGUEL YÁÑEZ, and PABLO GUERRERO. "SPATIOTEMPORAL CONTEXT INTEGRATION IN ROBOT VISION." International Journal of Humanoid Robotics 07, no. 03 (September 2010): 357–77. http://dx.doi.org/10.1142/s0219843610002192.

Abstract:
Robust vision in dynamic environments using limited processing power is one of the main challenges in robot vision. This is especially true for biped humanoids that use low-end computers. Techniques such as active vision, context-based vision, and multi-resolution are currently used to deal with these highly demanding requirements. Motivated mainly by the development of robust, high-performing robot vision systems that can operate in dynamic environments with limited computational resources, we propose a spatiotemporal context integration framework that improves the perceptual capabilities of a given robot vision system. Furthermore, we link the vision, tracking, and self-localization problems using a context filter, improving the performance of all these parts together rather than improving each separately. This framework computes: (i) an estimation of the poses of visible and nonvisible objects using Kalman filters; (ii) the spatial coherence of each current detection with all other simultaneous detections and with all tracked objects; and (iii) the spatial coherence of each tracked object with all current detections. Using a Bayesian approach, we calculate the a posteriori probability of each detected and tracked object, which is used in a filtering stage. As a first application of this framework, we choose the detection of static objects in the RoboCup Standard Platform League domain, where Nao humanoid robots are employed. The proposed system is validated in simulations and on real video sequences. In noisy environments, the system is able to largely decrease the number of false detections and to effectively improve the self-localization of the robot.
19

Bourbakis, Nikolaos G. "A quadtree multimicroprocessor architecture for robot vision systems." Microprocessing and Microprogramming 16, no. 4-5 (November 1985): 267–71. http://dx.doi.org/10.1016/0165-6074(85)90014-6.

20

Liang, P., Y. L. Chang, and S. Hackwood. "Adaptive self-calibration of vision-based robot systems." IEEE Transactions on Systems, Man, and Cybernetics 19, no. 4 (1989): 811–24. http://dx.doi.org/10.1109/21.35344.

21

Desnoyer, J. L., O. Dessoude, A. Lanusse, and X. Merlo. "Robot Perception Systems: Acoustics Vision Set-Up Control." IFAC Proceedings Volumes 22, no. 6 (July 1989): 435–39. http://dx.doi.org/10.1016/s1474-6670(17)54415-8.

22

Kinnell, Peter, Tom Rymer, John Hodgson, Laura Justham, and Mike Jackson. "Autonomous metrology for robot mounted 3D vision systems." CIRP Annals 66, no. 1 (2017): 483–86. http://dx.doi.org/10.1016/j.cirp.2017.04.069.

23

KUO, CHUNG-HSIEN, HUNG-CHYUN CHOU, SHOU-WEI CHI, and YU-DE LIEN. "VISION-BASED OBSTACLE AVOIDANCE NAVIGATION WITH AUTONOMOUS HUMANOID ROBOTS FOR STRUCTURED COMPETITION PROBLEMS." International Journal of Humanoid Robotics 10, no. 03 (September 2013): 1350021. http://dx.doi.org/10.1142/s0219843613500217.

Abstract:
Biped humanoid robots have been developed that successfully perform human-like locomotion. Building on well-developed locomotion control systems, humanoid robots are further expected to achieve high-level intelligence, such as vision-based obstacle avoidance navigation. To provide standard obstacle avoidance navigation problems for autonomous humanoid robot research, the HuroCup League of the Federation of International Robot-Soccer Association (FIRA) and the RoboCup Humanoid League define competition conditions and rules for evaluating performance. In this paper, vision-based obstacle avoidance navigation approaches for humanoid robots are proposed that combine the techniques of visual localization, obstacle map construction, and artificial potential field (APF)-based reactive navigation. Moreover, a small-size humanoid robot (HuroEvolutionJR) and an adult-size humanoid robot (HuroEvolutionAD) were used to evaluate the performance of the proposed obstacle avoidance navigation approach. Navigation performance was evaluated against the ground-truth trajectory distance collected from a motion capture system. Finally, the experimental results demonstrated the effectiveness of the vision-based localization and obstacle map construction approaches. Moreover, the APF-based navigation approach achieved a shorter trajectory distance than a conventional just-avoid-the-nearest-obstacle approach.
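For readers unfamiliar with APF navigation, the sketch below shows the textbook attractive/repulsive force step this family of approaches builds on. The gains and the obstacle influence radius are illustrative assumptions, not values from the paper.

```python
# Textbook artificial-potential-field step (assumes NumPy).
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, radius=1.0):
    force = k_att * (goal - pos)                        # attraction toward goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < radius:                           # only nearby obstacles repel
            force += k_rep * (1.0 / d - 1.0 / radius) * diff / d**3
    return force / max(np.linalg.norm(force), 1e-6)     # unit step direction

step = apf_step(np.array([0.0, 0.0]), np.array([4.0, 3.0]),
                [np.array([2.0, 1.5])])
print("next heading:", step)
```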
24

Sulistijono, Indra Adji, Son Kuswadi, One Setiaji, Inzar Salfikar, and Naoyuki Kubota. "A Study on Fuzzy Control of Humanoid Soccer Robot EFuRIO for Vision Control System and Walking Movement." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 3 (May 20, 2012): 444–52. http://dx.doi.org/10.20965/jaciii.2012.p0444.

Abstract:
Instability is one of the major defects of humanoid robots, and various methods for improving the stability and reliability of humanoid robots have recently been studied actively. We propose a new fuzzy-logic control scheme for vision systems that enables a robot to search for a ball and kick it towards the opponent's goal. In this paper, a stabilization algorithm is proposed that uses the balance condition of the robot, measured with accelerometer sensors during standing and walking; turning movements are estimated from these data. From this information the robot effectively selects the appropriate motion pattern. To generate the appropriate reaction in various robot body situations, a fuzzy algorithm is applied to find the appropriate joint angle from the vision system. The performance of the proposed algorithm is verified through ball searching, walking, turning tap, and ball kicking experiments using an 18-DOF humanoid robot called EFuRIO.
25

SITTE, JOAQUIN, and PETRA WINZER. "SYSTEMATIC DESIGN OF COMPLEX ARTEFACTS: ROBOT VISION." International Journal of Information Acquisition 05, no. 01 (March 2008): 51–63. http://dx.doi.org/10.1142/s0219878908001491.

Abstract:
In this paper we use the design of an innovative on-board vision system for a small commercial minirobot to demonstrate the application of the demand compliant design (DeCoDe) method. Vision systems are amongst the most complex sensor systems both in nature and in engineering, and thus provide an excellent arena for testing design methods. A review of current design methods for mechatronic systems shows that there are no methods that support or require a complete description of the product system. The DeCoDe method is a step towards overcoming this deficiency. The minirobot design is carried from the generic vision system level down to a first refinement of a minirobot vision system for visual navigation.
26

MAEDER, ANDREAS, HANNES BISTRY, and JIANWEI ZHANG. "INTELLIGENT VISION SYSTEMS FOR ROBOTIC APPLICATIONS." International Journal of Information Acquisition 05, no. 03 (September 2008): 259–67. http://dx.doi.org/10.1142/s0219878908001648.

Abstract:
Vision-based sensors are a key component of robot systems, where many tasks depend on image data. Real-time control constraints bind a lot of processing power to only a single sensor modality. Dedicated and distributed processing resources are the "natural" solution to overcome this limitation. This paper presents experiments using embedded processors as well as dedicated hardware to execute various image (pre)processing tasks. From these experiments, architectural concepts and requirements for intelligent vision systems are derived.
27

Yu, Xiaojun, Zeming Fan, Hao Wan, Yuye He, Junye Du, Nan Li, Zhaohui Yuan, and Gaoxi Xiao. "Positioning, Navigation, and Book Accessing/Returning in an Autonomous Library Robot using Integrated Binocular Vision and QR Code Identification Systems." Sensors 19, no. 4 (February 14, 2019): 783. http://dx.doi.org/10.3390/s19040783.

Abstract:
With rapid advancements in artificial intelligence and mobile robots, some of the tedious yet simple jobs in modern libraries, like the book accessing and returning (BAR) operations that were previously performed manually, could be undertaken by robots. However, due to the limited accuracy of existing positioning and navigation (P&N) technologies and the operational errors that accumulate during the robot P&N process, most current robots are not able to perform such high-precision operations. To address these practical issues, we propose, for the first time to the best of our knowledge, to combine binocular vision and Quick Response (QR) code identification techniques to improve robot P&N accuracy, and we construct an autonomous library robot for high-precision BAR operations. Specifically, the binocular vision system is used for dynamic digital map construction and autonomous P&N, as well as for obstacle identification and avoidance, while the QR code identification technique is responsible for both eliminating robot operational errors and determining robotic arm BAR operations. Both simulations and experiments are conducted to verify the effectiveness of the proposed technique combination and of the constructed robot. Results show that the combination is effective and robust, and could help to significantly improve P&N and BAR operation accuracy while reducing BAR operation time. The implemented robot is fully autonomous and cost-effective, and may find applications far beyond libraries with only sophisticated technologies employed.
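The QR-code half of the combination is easy to picture: the payload identifies a shelf location while the detected corner geometry can be used to cancel accumulated positioning error. A minimal sketch with OpenCV's QR detector follows; the camera index and the shelf-tag interpretation are assumptions.

```python
# Hedged sketch: reading a shelf QR tag (assumes OpenCV with QRCodeDetector).
import cv2

cap = cv2.VideoCapture(0)          # assumed camera index
detector = cv2.QRCodeDetector()
ok, frame = cap.read()
if ok:
    payload, points, _ = detector.detectAndDecode(frame)
    if payload:                    # e.g., a shelf/slot identifier
        print("tag:", payload)
        print("corners:", points.reshape(-1, 2))  # usable for pose correction
cap.release()
```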
28

Martyshkin, Alexey I. "Motion Planning Algorithm for a Mobile Robot with a Smart Machine Vision System." Nexo Revista Científica 33, no. 02 (December 31, 2020): 651–71. http://dx.doi.org/10.5377/nexo.v33i02.10800.

Abstract:
This study addresses the challenges of motion planning for mobile robots with smart machine vision systems. Motion planning for mobile robots in environments with obstacles is a problem that must be dealt with when creating robots suitable for operation in real-world conditions. The solutions found today are predominantly special-case and highly specialized, which makes it difficult to judge how successfully they solve the problem of effective motion planning. Solutions with a narrow application field already exist and have been developed for a long time; however, no major breakthrough has been observed yet, only a systematic improvement in the characteristics of such systems. The purpose of this study is to develop and investigate a motion planning algorithm for a mobile robot with a smart machine vision system. The study reviews domestic and foreign mobile robots that solve the motion planning problem in a known environment with unknown obstacles, and considers local, global, and individual navigation methods for mobile robots. In the course of the work, a mobile robot prototype was built that is capable of recognizing obstacles of regular geometric shapes and of planning and correcting its movement path. Environment objects are identified and classified as obstacles by means of digital image processing methods and algorithms. The distance to an obstacle and the relative angle are calculated by photogrammetry; image quality is improved by linear contrast enhancement and optimal linear filtering using the Wiener-Hopf equation. Virtual tools for testing mobile robot motion algorithms were reviewed, which led us to select the Webots software package for prototype testing. In testing, the mobile robot successfully identified the obstacle, planned a path in accordance with the obstacle avoidance algorithm, and continued moving to the destination.
29

Li, Liyuan, Qianli Xu, Gang S. Wang, Xinguo Yu, Yeow Kee Tan, and Haizhou Li. "Visual Perception Based Engagement Awareness for Multiparty Human–Robot Interaction." International Journal of Humanoid Robotics 12, no. 04 (November 27, 2015): 1550019. http://dx.doi.org/10.1142/s021984361550019x.

Abstract:
Computational systems for human–robot interaction (HRI) could benefit from visual perceptions of social cues that are commonly employed in human–human interactions. However, existing systems focus on one or two cues for attention or intention estimation. This research investigates how social robots may exploit a wide spectrum of visual cues for multiparty interactions. It is proposed that the vision system for social cue perception should be supported by two dimensions of functionality, namely, vision functionality and cognitive functionality. A vision-based system is proposed for a robot receptionist to embrace both functionalities for multiparty interactions. The module of vision functionality consists of a suite of methods that computationally recognize potential visual cues related to social behavior understanding. The performance of the models is validated by the ground truth annotation dataset. The module of cognitive functionality consists of two computational models that (1) quantify users’ attention saliency and engagement intentions, and (2) facilitate engagement-aware behaviors for the robot to adjust its direction of attention and manage the conversational floor. The performance of the robot’s engagement-aware behaviors is evaluated in a multiparty dialog scenario. The results show that the robot’s engagement-aware behavior based on visual perceptions significantly improve the effectiveness of communication and positively affect user experience.
30

Tan, Bin. "Soccer-Assisted Training Robot Based on Image Recognition Omnidirectional Movement." Wireless Communications and Mobile Computing 2021 (August 16, 2021): 1–10. http://dx.doi.org/10.1155/2021/5532210.

Abstract:
With the continuous emergence and innovation of computer technology, mobile robots have become a hot topic in the field of artificial intelligence and an important research area for more and more scholars. The core requirement for a mobile robot is real-time perception of the surrounding environment, self-positioning, and self-navigation based on this information; this is the key to the robot's autonomous movement and has strategic research significance. In particular, the goal recognition ability of a soccer robot's vision system is the basis of robot path planning, motion control, and collaborative task completion, and the main recognition task falls to the omnidirectional vision system. Therefore, how to improve the accuracy of target recognition and the light-adaptive ability of the robot's omnidirectional vision system is the key issue of this paper. We completed the system construction and program debugging of the omnidirectional mobile robot platform, and tested its omnidirectional movement, positioning and map construction in corridor and indoor environments, global navigation in an indoor environment, and local obstacle avoidance. Making fuller use of the robot's local visual information, so that the robot's "eyes" can be greatly improved through image recognition technology and the robot can obtain more accurate environmental information by itself, has long been a goal of scholars at home and abroad. The study shows that the standard error between the experimental group's shooting and dribbling test scores before and after training is 0.004, below the 0.05 significance level, which supports the use of the soccer-assisted training robot. On the one hand, we tested the positioning and navigation functions of the omnidirectional mobile robot; on the other hand, we verified the feasibility of the positioning and navigation algorithms and the multisensor fusion algorithm.
31

Tasevski, Jovica, Milutin Nikolic, and Dragisa Miskovic. "Integration of an industrial robot with the systems for image and voice recognition." Serbian Journal of Electrical Engineering 10, no. 1 (2013): 219–30. http://dx.doi.org/10.2298/sjee1301219t.

Abstract:
The paper reports a solution for the integration of the industrial robot ABB IRB140 with a system for automatic speech recognition (ASR) and a system for computer vision. The robot's task is to manipulate objects placed randomly on a pad lying on a table, while the computer vision system recognizes their characteristics (shape, dimensions, color, position, and orientation). The ASR system recognizes human speech and passes it as commands to the robot, so that the robot can manipulate the objects.
32

Preising, B., and T. C. Hisa. "Robot performance measurement and calibration using a 3D computer vision system." Robotica 13, no. 4 (July 1995): 327–37. http://dx.doi.org/10.1017/s0263574700018762.

Abstract:
Summary: Present-day robot systems are manufactured to perform within industry-accepted tolerances. However, to use such systems for tasks requiring high precision, various methods of robot calibration are generally required. These procedures can improve the accuracy of a robot within a small volume of the robot's workspace. The objective of this paper is to demonstrate the use of a single-camera 3D computer vision system as a position sensor in order to perform robot calibration. A vision feedback scheme, termed Vision-guided Robot Control (VRC), is described which can improve the accuracy of a robot in an on-line iterative manner. This system demonstrates the advantage that can be achieved by a Cartesian-space robot control scheme when end-effector position/orientation are actually sensed instead of calculated from the kinematic equations. The degree of accuracy is determined by setting a tolerance level for each of the six robot Cartesian-space coordinates. In general, a small tolerance level requires a large number of iterations in order to position the end effector, and a large tolerance level requires fewer iterations. The viability of using a vision system for robot calibration is demonstrated by experimentally showing that the accuracy of a robot can be drastically improved. In addition, the vision system can also be used to determine the repeatability and accuracy of a robot in a simple, efficient, and quick manner. Experimental work with an IBM Electric Drive Robot (EDR) and the proposed vision system produced 97-fold and 145-fold improvements in the position and orientation accuracy of the robot, respectively.
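The iterate-until-tolerance idea can be sketched as a short loop: sense the end-effector pose with the camera, command a damped correction, and stop once all six Cartesian coordinates fall within tolerance. The measure_pose and move_by interfaces, the damping factor, and the tolerance vector below are hypothetical, for illustration only.

```python
# Hedged sketch of vision-guided iterative positioning (assumes NumPy).
# measure_pose() -> 6-vector (x, y, z, roll, pitch, yaw) from the vision system;
# move_by(delta) commands an incremental Cartesian move; both are hypothetical.
import numpy as np

def vision_guided_positioning(target, measure_pose, move_by,
                              tol=np.full(6, 0.1), max_iter=20):
    for i in range(max_iter):
        error = target - measure_pose()
        if np.all(np.abs(error) < tol):
            return i                       # iterations needed to converge
        move_by(0.8 * error)               # damped correction step
    raise RuntimeError("did not converge within tolerance")
```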
33

Nehmzow, Ulrich. "Vision processing for robot learning." Industrial Robot: An International Journal 26, no. 2 (March 1999): 121–30. http://dx.doi.org/10.1108/01439919910260204.

34

Zhang, Hui, Xieyuanli Chen, Huimin Lu, and Junhao Xiao. "Distributed and collaborative monocular simultaneous localization and mapping for multi-robot systems in large-scale environments." International Journal of Advanced Robotic Systems 15, no. 3 (May 1, 2018): 172988141878017. http://dx.doi.org/10.1177/1729881418780178.

Abstract:
In this article, we propose a distributed and collaborative monocular simultaneous localization and mapping system for the multi-robot system in large-scale environments, where monocular vision is the only exteroceptive sensor. Each robot estimates its pose and reconstructs the environment simultaneously using the same monocular simultaneous localization and mapping algorithm. Meanwhile, they share the results of their incremental maps by streaming keyframes through the robot operating system messages and the wireless network. Subsequently, each robot in the group can obtain the global map with high efficiency. To build the collaborative simultaneous localization and mapping architecture, two novel approaches are proposed. One is a robust relocalization method based on active loop closure, and the other is a vision-based multi-robot relative pose estimating and map merging method. The former is used to solve the problem of tracking failures when robots carry out long-term monocular simultaneous localization and mapping in large-scale environments, while the latter uses the appearance-based place recognition method to determine multi-robot relative poses and build the large-scale global map by merging each robot’s local map. Both KITTI data set and our own data set acquired by a handheld camera are used to evaluate the proposed system. Experimental results show that the proposed distributed multi-robot collaborative monocular simultaneous localization and mapping system can be used in both indoor small-scale and outdoor large-scale environments.
35

Rahmaniar, Wahyu, Wen-June Wang, Wahyu Caesarendra, Adam Glowacz, Krzysztof Oprzędkiewicz, Maciej Sułowicz, and Muhammad Irfan. "Distance Measurement of Unmanned Aerial Vehicles Using Vision-Based Systems in Unknown Environments." Electronics 10, no. 14 (July 10, 2021): 1647. http://dx.doi.org/10.3390/electronics10141647.

Abstract:
Localization for indoor aerial robots remains a challenging issue because global positioning system (GPS) signals often cannot reach inside buildings. In previous studies, navigation of mobile robots without GPS required building maps to be registered beforehand. This paper proposes a novel framework for indoor positioning of unmanned aerial vehicles (UAVs) in unknown environments using a camera. First, the UAV attitude is estimated to determine whether the robot is moving forward. Then, the camera position is estimated based on optical flow and the Kalman filter. Semantic segmentation using deep learning is carried out to obtain the position of the wall in front of the robot. The UAV's distance is measured by comparing the image size ratio of corresponding feature points between the current and reference wall images. The UAV is equipped with ultrasonic sensors to measure its distance from the surrounding walls. The ground station receives information from the UAV to show the obstacles around the UAV and its current location. The algorithm is verified by capturing images with distance information and comparing them with the current image and UAV position. The experimental results show that the proposed method achieves an accuracy of 91.7% and a processing speed of 8 frames per second (fps).
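The size-ratio range cue reduces to the pinhole relation that apparent size scales inversely with distance: if matched feature points on the wall span fewer pixels than in the reference image, the wall is farther away. A minimal sketch, with assumed reference values:

```python
# Hedged sketch: pinhole size-ratio distance estimate (values are assumptions).
def distance_from_ratio(ref_distance_m, ref_span_px, cur_span_px):
    # span = pixel distance between a pair of corresponding feature points
    return ref_distance_m * ref_span_px / cur_span_px

# Features appear more spread out than in the 2.0 m reference, so we are closer.
print(distance_from_ratio(ref_distance_m=2.0, ref_span_px=120.0,
                          cur_span_px=180.0))  # ~1.33 m
```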
36

Geng, Mingyang, Shuqi Liu, and Zhaoxia Wu. "Sensor Fusion-Based Cooperative Trail Following for Autonomous Multi-Robot System." Sensors 19, no. 4 (February 17, 2019): 823. http://dx.doi.org/10.3390/s19040823.

Abstract:
Autonomously following a man-made trail in the wild is a challenging problem for robotic systems. Recently, deep learning-based approaches have cast trail following as an image classification task and have achieved great success in the vision-based trail-following problem. However, existing research focuses only on the trail-following task with a single-robot system. In contrast, many real-world robotic tasks, such as search and rescue, are conducted by a group of robots. When these robots move through the wild as a group, they can cooperate to achieve more robust performance and perform the trail-following task in a better manner. Concretely, each robot can periodically exchange vision data with other robots and make decisions based both on its local view and on information from others. This paper proposes a sensor fusion-based cooperative trail-following method, which enables a group of robots to perform the trail-following task by fusing the sensor data of each robot. Our method lets each robot face the same direction from a different altitude, fuses the vision data features at the collective level, and then lets each robot act accordingly. In addition, considering the quality-of-service requirements of the robotic software, our method gates the sensor data fusion process with a "threshold" mechanism. Qualitative and quantitative experiments on a real-world dataset show that our method significantly improves recognition accuracy and leads to more robust performance compared with a single-robot system.
37

Cannata, Giorgio, and Enrico Grosso. "On perceptual advantages of active robot vision." Journal of Robotic Systems 16, no. 3 (March 1999): 163–83. http://dx.doi.org/10.1002/(sici)1097-4563(199903)16:3<163::aid-rob3>3.0.co;2-y.

38

Weber, Cornelius, Stefan Wermter, and Alexandros Zochios. "Robot docking with neural vision and reinforcement." Knowledge-Based Systems 17, no. 2-4 (May 2004): 165–72. http://dx.doi.org/10.1016/j.knosys.2004.03.012.

39

Wirbel, Emilie, Silvère Bonnabel, Arnaud de La Fortelle, and Fabien Moutarde. "Humanoid Robot Navigation: Getting Localization Information from Vision." Journal of Intelligent Systems 23, no. 2 (June 1, 2014): 113–32. http://dx.doi.org/10.1515/jisys-2013-0079.

Abstract:
In this article, we present our work to provide a navigation and localization system on a constrained humanoid platform, the NAO robot, without modifying the robot's sensors. First, we tried to implement a simple, lightweight version of classic monocular Simultaneous Localization and Mapping (SLAM) algorithms, adapted to the CPU and camera quality, which for the moment turned out to be insufficient on the platform. From our work on keypoint tracking, we identified that some keypoints can still be tracked accurately at little cost, and used them to build a visual compass. This compass was then used to correct the robot's walk, because it makes it possible to control the robot's orientation accurately.
40

Ikeuchi, Katsushi, Khotaro Ohba, and Yoichi Sato. "Special Issue On 'Robot Vision'." Advanced Robotics 12, no. 3 (January 1997): 309–11. http://dx.doi.org/10.1163/156855398x00208.

41

Hannan, M. W., and I. D. Walker. "Real-time shape estimation for continuum robots using vision." Robotica 23, no. 5 (August 23, 2005): 645–51. http://dx.doi.org/10.1017/s0263574704001018.

Abstract:
This paper describes external camera-based shape estimation for continuum robots. Continuum robots have a continuous backbone made of sections which bend to produce changes of configuration. A major difficulty with continuum robots is the determination of the robot's shape, as there are no discrete joints. This paper presents a method for shape determination based on machine vision. Using an engineered environment and image processing from a high speed camera, shape determination of a continuum robot is achieved. Experimental results showing the effectiveness of the technique on our Elephant's Trunk Manipulator are presented.
42

Adachi, Yoshinobu, and Masayoshi Kakikura. "Research on the Sheepdog Problem Using Cellular Automata." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 9 (November 20, 2007): 1099–106. http://dx.doi.org/10.20965/jaciii.2007.p1099.

Abstract:
The simulation framework we propose for complex path planning problems with multiagent systems focuses on the sheepdog problem for handling distributed autonomous robot systems – an extension of the pursuit problem, which handles one prey robot and multiple predator robots. The sheepdog problem involves a more complex issue, in which multiple dog robots chase and herd multiple sheep robots. We use the Boids model and cellular automata to model sheep flocking and the chasing and herding behavior of the dog robots. We conduct experiments using a sheepdog problem simulator and study cooperative behavior.
43

Sahoo, Saumya R., and Shital S. Chiddarwar. "Flatness-based control scheme for hardware-in-the-loop simulations of omnidirectional mobile robot." SIMULATION 96, no. 2 (June 26, 2019): 169–83. http://dx.doi.org/10.1177/0037549719859064.

Abstract:
Omnidirectional robots offer better maneuverability and a greater degree of freedom over conventional wheel mobile robots. However, the design of their control system remains a challenge. In this study, a real-time simulation system is used to design and develop a hardware-in-the-loop (HIL) simulation platform for an omnidirectional mobile robot using bond graphs and a flatness-based controller. The control input from the simulation model is transferred to the robot hardware through an Arduino microcontroller input board. For feedback to the simulation model, a Kinect-based vision system is used. The developed controller, the Kinect-based vision system, and the HIL configuration are validated in the HIL simulation-based environment. The results confirm that the proposed HIL system can be an efficient tool for verifying the performance of the hardware and simulation designs of flatness-based control systems for omnidirectional mobile robots.
44

Feng, Haibo, Yanwu Zhai, and Yili Fu. "Development of master-slave magnetic anchoring vision robotic system for single-port laparoscopy (SPL) surgery." Industrial Robot: An International Journal 45, no. 4 (June 18, 2018): 458–68. http://dx.doi.org/10.1108/ir-01-2018-0016.

Abstract:
Purpose: Surgical robot systems have been used in single-port laparoscopy (SPL) surgery to improve patient outcomes. This study aims to develop a vision robot system for SPL surgery that effectively improves the visualization provided by surgical robot systems for relatively complex surgical procedures.
Design/methodology/approach: In this paper, a new master-slave magnetic anchoring vision robotic system for SPL surgery is proposed. A lighting distribution analysis for the imaging unit of the vision robot was carried out to guarantee illumination uniformity in the workspace during SPL surgery. Moreover, the cleaning force for the camera lens was measured to assess safety for the abdominal wall, and a performance assessment of the system was performed.
Findings: Extensive experimental results for illumination, control, cleaning force, and functionality indicate that the proposed system performs excellently in providing visual feedback.
Originality/value: The main contribution of this paper lies in the development of a magnetic anchoring vision robot system that successfully improves the ability to clean the lens and to avoid blind areas in the field of view.
45

Koutaki, Gou, and Keiichi Uchimura. "Fast and Robust Vision System for Shogi Robot." Journal of Robotics and Mechatronics 27, no. 2 (April 20, 2015): 182–90. http://dx.doi.org/10.20965/jrm.2015.p0182.

Abstract:
[Figure: Developed shogi robot system] The authors developed a low-cost, safe shogi robot system. A Web camera installed on the lower frame is used to recognize the pieces and their positions on the board, after which the game program plays. A robot arm moves a selected piece into position when playing against a human player. A fast, robust image processing algorithm is needed because a low-cost wide-angle Web camera and a low-cost robot are used. The authors describe the image processing and robot systems, then discuss experiments conducted to verify the feasibility of the proposal, showing that even a low-cost system can be highly reliable.
46

Lenz, Reiner. "Lie methods for color robot vision." Robotica 26, no. 4 (July 2008): 453–64. http://dx.doi.org/10.1017/s0263574707003906.

Abstract:
Summary: We describe how Lie-theoretical methods can be used to analyze color-related problems in machine vision. The basic observation is that the nonnegative nature of spectral color signals restricts these functions to a limited, conical section of the larger Hilbert space of square-integrable functions. From this observation, we conclude that the space of color signals can be equipped with a coordinate system consisting of a half-axis and a unit ball, with the Lorentz groups as the natural transformation group. We introduce the theory of the Lorentz group SU(1, 1) as a natural tool for analyzing color image processing problems and derive some descriptions and algorithms that are useful in the investigation of dynamical color changes. We illustrate the use of these results by describing how to compress, interpolate, extrapolate, and compensate image sequences generated by dynamical color changes.
47

BANDERA, J. P., J. A. RODRÍGUEZ, L. MOLINA-TANCO, and A. BANDERA. "A SURVEY OF VISION-BASED ARCHITECTURES FOR ROBOT LEARNING BY IMITATION." International Journal of Humanoid Robotics 09, no. 01 (March 2012): 1250006. http://dx.doi.org/10.1142/s0219843612500065.

Abstract:
Learning by imitation is a natural and intuitive way to teach social robots new behaviors. While these learning systems can use different sensory inputs, vision is often their main or even their only source of input data. However, while many vision-based robot learning by imitation (RLbI) architectures have been proposed in the last decade, they may be difficult to compare due to the absence of a common, structured description. The first contribution of this survey is the definition of a set of standard components that can be used to describe any RLbI architecture. Once these components have been defined, the second contribution of the survey is an analysis of how different vision-based architectures implement and connect them. This bottom–up, structural analysis of architectures allows to compare different solutions, highlighting their main advantages and drawbacks, from a more flexible perspective than the comparison of monolithic systems.
48

Kyrki, Ville, and Danica Kragic. "Computer and Robot Vision [TC Spotlight]." IEEE Robotics & Automation Magazine 18, no. 2 (June 2011): 121–22. http://dx.doi.org/10.1109/mra.2011.941638.

49

Siagian, C., and L. Itti. "Biologically Inspired Mobile Robot Vision Localization." IEEE Transactions on Robotics 25, no. 4 (August 2009): 861–73. http://dx.doi.org/10.1109/tro.2009.2022424.

50

Braggins, Don. "A critical look at robot vision." Industrial Robot: An International Journal 22, no. 6 (December 1995): 9–12. http://dx.doi.org/10.1108/01439919510105093.
