Journal articles on the topic 'Robot vision Mathematical models'

Consult the top 50 journal articles for your research on the topic 'Robot vision Mathematical models.'


1

Khodabandehloo, K. "Robotic handling and packaging of poultry products." Robotica 8, no. 4 (October 1990): 285–97. http://dx.doi.org/10.1017/s0263574700000321.

Abstract:
SUMMARY: This paper presents the findings of a research programme leading to the development of a robotic system for packaging poultry portions. The results show that an integrated system, incorporating machine vision and robots, can be made feasible for industrial use. The elements of this system, including the end-effector, the vision module, the robot hardware and the system software, are presented. Models and algorithms for automatic recognition and handling of poultry portions are discussed.
2

Zou, Yanbiao, Jinchao Li, and Xiangzhi Chen. "Seam tracking investigation via striped line laser sensor." Industrial Robot: An International Journal 44, no. 5 (August 21, 2017): 609–17. http://dx.doi.org/10.1108/ir-11-2016-0294.

Abstract:
Purpose: This paper aims to propose a six-axis robot-arm welding seam tracking experiment platform based on the Halcon machine vision library to resolve the curved-seam tracking issue.
Design/methodology/approach: The robot-base and image coordinate systems are converted based on the mathematical model of the three-dimensional measurement of structured-light vision and the conversion relations between the robot-base and camera coordinate systems. An object tracking algorithm via weighted local cosine similarity is adopted to detect the seam feature points and effectively prevent interference from arc and spatter. This algorithm models the target state variable and the corresponding observation vector within a Bayes framework and finds the optimal region with the highest similarity to the image-selected modules using cosine similarity.
Findings: Experimental results show that metal inert-gas (MIG) welding with a maximum welding current of 200 A can achieve real-time, accurate curved-seam tracking under strong arc light and splash. The minimal distance between the laser stripe and the welding molten pool can reach 15 mm, and the sensor sampling frequency can reach 50 Hz.
Originality/value: A six-axis robot-arm welding seam tracking experiment platform with a structured-light sensor system based on the Halcon machine vision library is designed, and an object tracking algorithm is added to the seam tracking system to detect image feature points. With this technology, the system can track a curved seam while welding.
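The similarity search at the heart of the tracking algorithm described in this abstract can be sketched as follows. This is a minimal, unweighted illustration (an exhaustive window search with plain cosine similarity, without the Bayes state model or weighting of the paper); all function names are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened image patches."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b / denom)

def track_template(frame, template):
    """Exhaustive search: return the top-left corner of the window in
    `frame` most similar to `template` under cosine similarity."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            s = cosine_similarity(frame[y:y + th, x:x + tw], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

In the paper's setting, the template would be a module selected around the seam feature point, and the search would be restricted to a local region of the next frame.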
3

Panarin, R. N., A. A. Soloviev, and L. A. Khvorova. "Application of Artificial Intelligence and Computer Vision Technologies in Solving Problems of Automation of Processing and Recognition of Biological Objects." Izvestiya of Altai State University, no. 1(123) (March 18, 2022): 101–7. http://dx.doi.org/10.14258/izvasu(2022)1-16.

Abstract:
The article considers the application of artificial intelligence and computer vision technologies to automate the processing and analysis of botanical micro- and macro-objects (images of fern spores), along with the development of software for a digital twin of an agrobot. The first problem is an interdisciplinary research effort aimed at solving applied and fundamental problems in the biosystematics of botanical objects and studying microevolutionary processes using computer vision technologies, methods of intelligent image analysis, machine learning, and artificial intelligence. The article presents the developed software module FAST (Functional Automated System Tool) for solving the direct problem: performing measurements from images obtained by scanning electron microscopy, from a virtual herbarium image library, from entomological collections, or from images taken in a natural environment. The second problem is software development for the digital twin of the agrorobot, designed for precise mechanical processing of plants and soil. The proposed solution includes several components: the control unit (an NVIDIA Jetson NANO computing module), the actuator (a 6-axis robotic arm), the machine vision unit (based on an Intel RealSense camera), and the chassis unit (caterpillar tracks together with software drivers and components for their control). The digital twin of the robot takes into account the environmental conditions and the landscape of the operation area. The use of ROS (Robot Operating System) allows a digital model to be transferred to a physical one (prototype and serial robot) with minimal effort and without changing the source code. Furthermore, consideration of the environmental conditions during the programming stage provides opportunities for further development and testing of real-life mathematical models for device control.
4

Uršič, Peter, Aleš Leonardis, Danijel Skočaj, and Matej Kristan. "Learning part-based spatial models for laser-vision-based room categorization." International Journal of Robotics Research 36, no. 4 (April 2017): 379–402. http://dx.doi.org/10.1177/0278364917704707.

Abstract:
Room categorization, that is, recognizing the functionality of a never-before-seen room, is a crucial capability for a household mobile robot. We present a new approach for room categorization that is based on two-dimensional laser range data. The method is based on a novel spatial model consisting of mid-level parts that are built on top of a low-level part-based representation. The approach is then fused with a vision-based method for room categorization, which is also based on a spatial model consisting of mid-level visual parts. In addition, we propose a new discriminative dictionary learning technique that is applied for part-dictionary selection in both laser-based and vision-based modalities. Finally, we present a comparative analysis between laser-based, vision-based, and laser-vision-fusion-based approaches in a uniform part-based framework, which is evaluated on a large dataset with several categories of rooms from domestic environments.
5

Zhang, Xiaoyue, and Liang Huo. "A Vision/Inertia Integrated Positioning Method Using Position and Orientation Matching." Mathematical Problems in Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/6835456.

Abstract:
A vision/inertia integrated positioning method using position and orientation matching, which can be adopted on intelligent vehicles such as automated guided vehicles (AGVs) and mobile robots, is proposed in this work. The method is introduced first. Landmarks are placed in the navigation field, and a camera and an inertial measurement unit (IMU) are installed on the vehicle. The vision processor calculates azimuth and position information from pictures that include artificial landmarks with known direction and position. The inertial navigation system (INS) calculates the azimuth and position of the vehicle in real time, and the calculated pixel position of a landmark can be computed from the INS output position. The needed mathematical models are then established, and integrated navigation is implemented by a Kalman filter whose observations are the azimuth and the calculated pixel position of the landmark. Navigation errors and IMU errors are estimated and compensated in real time so that high-precision navigation results can be obtained. Finally, simulations and tests are performed. Both simulation and test results prove that this vision/inertia integrated positioning method using position and orientation matching is feasible and can achieve centimeter-level autonomous continuous navigation.
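The Kalman-filter fusion step this abstract describes (correcting the INS prediction with landmark-derived observations) has the standard linear measurement-update form sketched below. This is a generic textbook update, not the authors' exact model; the two-state example and all names are illustrative.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update: fuse prior state (x, P) with an
    observation z whose model is z = H x + v, v ~ N(0, R)."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ (z - H @ x)        # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy 2-state example: [position, azimuth] predicted by the INS,
# corrected by a (nearly noise-free) vision-derived observation of both.
x0 = np.array([0.0, 0.0])             # INS prediction
P0 = np.eye(2)                        # prediction uncertainty
z = np.array([1.0, 0.5])              # vision observation
x1, P1 = kalman_update(x0, P0, z, H=np.eye(2), R=1e-6 * np.eye(2))
```

With a small observation covariance R, the corrected state moves almost entirely to the vision measurement, which mirrors how landmark sightings bound the INS drift in the paper.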
6

Varlashin, V. V., and A. V. Lopota. "Optimization of Surround-View System Projection Parameters using Fiducial Markers." Mekhatronika, Avtomatizatsiya, Upravlenie 23, no. 2 (February 6, 2022): 97–103. http://dx.doi.org/10.17587/mau.23.97-103.

Abstract:
The paper is devoted to the problem of increasing the quality of reproduction of the environment by a mobile robot's surround-view system operating in augmented reality mode. A variant of a surround-view system based on cameras with overlapping fields of view is considered. A virtual model has been developed; it includes 3D CAD models of a mobile robot and surrounding objects, as well as virtual models of cameras. The cross-platform integrated development environment Unity was chosen to implement the model. Methods for solving the problem of displaying the space surrounding the mobile robot in "third-person view" mode are determined. A mathematical criterion for assessing the quality of reproduction of the surrounding space is proposed. It is based on the comparison of points obtained from the virtual model with points obtained by projecting images from the virtual cameras. To obtain the points, ArUco fiducial markers were used, providing an unambiguous correspondence between points on the original and synthesized images. The dependence of the value of the objective function of the optimization problem on the projection parameters is investigated by the uniform search method. A method for automatic adaptation of projection parameters using fisheye lenses and stereo vision methods is proposed. Directions for further research are identified.
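A point-comparison criterion of the kind this abstract proposes can be illustrated as a mean reprojection error over matched marker corners. This is a hypothetical sketch of such an objective, not the paper's exact formula; a uniform search would evaluate it over a grid of projection parameters and keep the minimizer.

```python
import numpy as np

def reprojection_error(ref_pts, synth_pts):
    """Mean Euclidean pixel distance between matched marker corners in the
    reference image and in the synthesized (projected) image."""
    ref = np.asarray(ref_pts, float)
    syn = np.asarray(synth_pts, float)
    return float(np.mean(np.linalg.norm(ref - syn, axis=1)))
```

Lower values indicate that the synthesized surround view places the fiducial points closer to where the virtual model says they should be.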
7

Solovyeva, Elena, and Ali Abdullah. "Controlling system based on neural networks with reinforcement learning for robotic manipulator." Information and Control Systems, no. 5 (October 20, 2020): 24–32. http://dx.doi.org/10.31799/1684-8853-2020-5-24-32.

Abstract:
Introduction: Due to its advantages, such as high flexibility and the ability to move heavy pieces with high torques and forces, the robotic arm, also called a manipulator robot, is the most used industrial robot. Purpose: We improve the control quality of a manipulator robot with seven degrees of freedom in the V-REP program environment using a reinforcement learning method based on deep neural networks. Methods: The policy of the action signal is estimated by building a numerical algorithm using deep neural networks. The actor network sends the action signal to the robotic manipulator, and the critic network performs numerical function approximation to calculate the value function (Q-value). Results: We created a model of the robot and the environment using the reinforcement learning library in MATLAB and connected the output signals (the action signal) to a simulated robot in the V-REP program. The robot was trained to reach an object in its workspace after interacting with the environment and calculating the reward of that interaction. The observations were modeled using three vision sensors. Based on the proposed deep learning method, a model of an agent representing the robotic manipulator was built using a four-layer neural network for the actor and a four-layer neural network for the critic. The agent model was trained for several hours until the robot started to reach the object in its workspace in an acceptable way. The main advantage over supervised learning control is that the robot can perform actions and train at the same time, giving it the ability to reach an object in its workspace in a continuous action space. Practical relevance: The results obtained are used to control the movement of the manipulator without the need to construct kinematic models, which reduces the mathematical complexity of the calculation and provides a universal solution.
8

Mata, M., J. M. Armingol, J. Fernández, and A. de la Escalera. "Object learning and detection using evolutionary deformable models for mobile robot navigation." Robotica 26, no. 1 (January 2008): 99–107. http://dx.doi.org/10.1017/s0263574707003633.

Abstract:
SUMMARY: Deformable models have been studied in image analysis over the last decade and used for recognition of flexible or rigid templates under diverse viewing conditions. This article addresses the question of how to define a deformable model for a real-time color vision system for mobile robot navigation. Instead of receiving the detailed model definition from the user, the algorithm extracts and learns the information from each object automatically. How well a model represents the template that exists in the image is measured by an energy function. Its minimum corresponds to the model that best fits the image, and it is found by a genetic algorithm that handles the model deformation. At a later stage, if there is symbolic information inside the object, it is extracted and interpreted using a neural network. The resulting perception module has been integrated successfully into a complex navigation system. Various experimental results in real environments are presented in this article, showing the effectiveness and capacity of the system.
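The energy-minimizing genetic search over deformation parameters that this abstract describes can be sketched with a toy genetic algorithm. The quadratic "energy" below stands in for the image-fit energy of the paper; the operator choices (truncation selection, uniform crossover, Gaussian mutation) and all parameters are illustrative assumptions.

```python
import random

def genetic_minimize(energy, n_params, bounds=(-5.0, 5.0), pop_size=40,
                     generations=100, mut_prob=0.3, mut_sigma=0.3, seed=0):
    """Toy genetic algorithm: evolve deformation-parameter vectors toward
    the minimum of `energy` (lower energy = better model/image fit)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)                  # fittest (lowest energy) first
        pop = pop[:pop_size // 2]             # truncation selection (elitist)
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            child = [rng.choice(g) for g in zip(a, b)]        # uniform crossover
            child = [g + rng.gauss(0.0, mut_sigma)
                     if rng.random() < mut_prob else g
                     for g in child]                          # Gaussian mutation
            pop.append(child)
    return min(pop, key=energy)

# Demo: recover a 2-parameter "deformation" minimizing a quadratic energy.
target = [1.5, -2.0]
best = genetic_minimize(
    lambda p: sum((g - t) ** 2 for g, t in zip(p, target)), n_params=2)
```

In the paper's setting, `energy` would score how well the deformed template matches image evidence rather than distance to a known target.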
9

Trabasso, Luis Gonzaga, and Cezary Zielinski. "Semi-automatic calibration procedure for the vision-robot interface applied to scale model decoration." Robotica 10, no. 4 (July 1992): 303–8. http://dx.doi.org/10.1017/s0263574700008134.

Abstract:
SUMMARY: A semi-automatic method for calibrating a robot-vision interface is presented. It puts a small workload on the operator and requires only a simple calibration jig and the solution of a very simple system of equations. It has been used extensively in an experimental robotic cell set up at Loughborough University of Technology, where various aspects of the manufacturing and decoration of scale models are being investigated. As an extension of the calibration procedure, the paper also shows practical solutions to the problem of dealing with three-dimensional objects using a single camera.
10

Steiger-Carçao, Adolfo, and L. M. Camarinha-Matos. "Concurrent Pascal as a robot level language – a suggestion." Robotica 4, no. 4 (October 1986): 269–72. http://dx.doi.org/10.1017/s0263574700009966.

Abstract:
SUMMARY: This paper briefly describes current robot-level programming languages, focusing on their intrinsic limitations when compared with traditional concurrent programming languages, or when used for programming robotic systems and flexible production workshops rather than controlling an isolated manipulator. To reduce such limitations, it is suggested that the development of robotic programming systems be based on existing concurrent languages (Concurrent Pascal, Modula-2), taking into account their built-in extension facilities for speeding the incorporation of (or easy interfacing with) packages or products already developed in robotics (robot models, CAD systems, vision systems, etc.). Using such languages as a support base for a robotic station programming environment, with access to different components developed separately, will allow a better understanding of the interrelations among components and of their limitations when viewed from an integration perspective.
11

Stansfield, Sharon A. "Haptic Perception with an Articulated, Sensate Robot Hand." Robotica 10, no. 6 (November 1992): 497–508. http://dx.doi.org/10.1017/s0263574700005828.

Abstract:
SUMMARY: In this paper we present a series of haptic exploratory procedures, or EPs, implemented for a multi-fingered, articulated, sensate robot hand. These EPs are designed to extract specific tactile and kinesthetic information from an object via their purposive invocation by an intelligent robotic system. Taken together, they form an active robotic touch perception system to be used both for extracting information about the environment for internal representation and for acquiring grasps for manipulation. The theory and structure of this robotic haptic system are based upon models of human haptic exploration and information processing. The haptic system presented utilizes an integrated robotic system consisting of a PUMA 560 robot arm, a JPL/Stanford robot hand with joint torque sensing in the fingers, a wrist force/torque sensor, and a 256-element, spatially resolved fingertip tactile array. We describe the EPs implemented for this system and provide experimental results which illustrate how they function and how the information which they extract may be used. In addition to the sensate hand and arm, the robot also contains structured-lighting vision and a Prolog-based reasoning system capable of grasp generation and object categorization. We present a set of simple tasks which show how both grasping and recognition may be enhanced by the addition of active touch perception.
12

Valada, Abhinav, and Wolfram Burgard. "Deep spatiotemporal models for robust proprioceptive terrain classification." International Journal of Robotics Research 36, no. 13-14 (August 31, 2017): 1521–39. http://dx.doi.org/10.1177/0278364917727062.

Abstract:
Terrain classification is a critical component of any autonomous mobile robot system operating in unknown real-world environments. Over the years, several proprioceptive terrain classification techniques have been introduced to increase robustness or act as a fallback for traditional vision-based approaches. However, they lack widespread adoption due to various factors that include inadequate accuracy, robustness and slow run-times. In this paper, we use vehicle-terrain interaction sounds as a proprioceptive modality and propose a deep long short-term memory based recurrent model that captures both the spatial and temporal dynamics of such a problem, thereby overcoming these past limitations. Our model consists of a new convolutional neural network architecture that learns deep spatial features, complemented with long short-term memory units that learn complex temporal dynamics. Experiments on two extensive datasets collected with different microphones on various indoor and outdoor terrains demonstrate state-of-the-art performance compared to existing techniques. We additionally evaluate the performance in adverse acoustic conditions with high ambient noise and propose a noise-aware training scheme that enables learning of more generalizable models that are essential for robust real-world deployments.
13

Wang, Can, Zhibin Li, Yefei Kang, and Yingzheng Li. "Applying SLAM Algorithm Based on Nonlinear Optimized Monocular Vision and IMU in the Positioning Method of Power Inspection Robot in Complex Environment." Mathematical Problems in Engineering 2022 (September 19, 2022): 1–14. http://dx.doi.org/10.1155/2022/3378163.

Abstract:
Under China's Intelligent Electric Power Grid (IEPG), research on the IEPG inspection mode is of great significance. This work aims to improve the positioning and navigation performance of IEPG inspection robots in a complex environment. First, it reviews the monocular camera projection and Inertial Measurement Unit (IMU) models. It also discusses the tightly coupled monocular Vision Inertial Navigation System (VINS) and the initialization theory of the Simultaneous Localization and Mapping (SLAM) system. Nonlinear optimization for SLAM by the Gauss–Newton Method (GNM) is established. Accordingly, this work proposes a SLAM system based on tightly coupled monocular VINS. The EuRoC dataset sequences commonly used in visual-inertial algorithm testing are used for simulation tests. The proposed SLAM system's attitude and position estimation errors are analyzed on different datasets. The results show that the errors in roll, pitch, and yaw angle are acceptable. The errors on the X, Y, and Z axes are within 40 cm, meeting the positioning requirements of an Unmanned Aerial Vehicle (UAV). Meanwhile, the Root Mean Square Error (RMSE) is used to evaluate the improvement in positioning accuracy from loop detection. The results confirm that loop detection can reduce the RMSE and improve positioning accuracy. The attitude estimation tests examine the changes of pitch, roll, and yaw angles over time under a single rotation condition. The estimates of the proposed SLAM algorithm are compared with the true values through the Absolute Trajectory Error (ATE). The results show that the true and estimated attitude values coincide well. Thus, the proposed SLAM algorithm is effective for positioning and navigation. The ATE can also be controlled within ±2.5°, satisfying the requirements of navigation and positioning accuracy. The proposed SLAM system based on tightly coupled monocular VINS presents excellent positioning and navigation accuracy for the IEPG inspection robot. The findings have significant reference value for later research on IEPG inspection robots.
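The Gauss–Newton iteration named in this abstract is a generic nonlinear least-squares solver; the sketch below shows its core update on a toy range-based localization problem rather than the paper's VINS formulation. The beacon setup and all names are illustrative assumptions.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Generic Gauss-Newton iteration: x <- x - (J^T J)^{-1} J^T r(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Toy example: locate a 2-D point from ranges to three known beacons.
beacons = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_p = np.array([1.0, 1.0])
dists = np.linalg.norm(beacons - true_p, axis=1)   # simulated range data

def residual(x):
    return np.linalg.norm(beacons - x, axis=1) - dists

def jacobian(x):
    diff = x - beacons
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

est = gauss_newton(residual, jacobian, x0=[2.0, 2.0])
```

In a VINS back end, the residual stacks reprojection and IMU preintegration terms and the state contains poses, velocities, and biases, but the update has this same normal-equations shape.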
14

González-Baños, Héctor H., and Jean-Claude Latombe. "Navigation Strategies for Exploring Indoor Environments." International Journal of Robotics Research 21, no. 10-11 (October 2002): 829–48. http://dx.doi.org/10.1177/0278364902021010834.

Abstract:
In this paper, we investigate safe and efficient map-building strategies for a mobile robot with imperfect control and sensing. In the implementation, a robot equipped with a range sensor builds a polygonal map (layout) of a previously unknown indoor environment. The robot explores the environment and builds the map concurrently by patching together the local models acquired by the sensor into a global map. A well-studied and related problem is the simultaneous localization and mapping (SLAM) problem, where the goal is to integrate the information collected during navigation into the most accurate map possible. However, SLAM does not address the sensor-placement portion of the map-building task. That is, given the map built so far, where should the robot go next? This is the main question addressed in this paper. Concretely, an algorithm is proposed to guide the robot through a series of “good” positions, where “good” refers to the expected amount and quality of the information that will be revealed at each new location. This is similar to the next-best-view (NBV) problem studied in computer vision and graphics. However, in mobile robotics the problem is complicated by several issues, two of which are particularly crucial. One is to achieve safe navigation despite an incomplete knowledge of the environment and sensor limitations (e.g., in range and incidence). The other issue is the need to ensure sufficient overlap between each new local model and the current map, in order to allow registration of successive views under positioning uncertainties inherent to mobile robots. To address both issues in a coherent framework, in this paper we introduce the concept of a safe region, defined as the largest region that is guaranteed to be free of obstacles given the sensor readings made so far. The construction of a safe region takes sensor limitations into account.
In this paper we also describe an NBV algorithm that uses the safe-region concept to select the next robot position at each step. The new position is chosen within the safe region in order to maximize the expected gain of information under the constraint that the local model at this new position must have a minimal overlap with the current global map. In the future, NBV and SLAM algorithms should reinforce each other. While a SLAM algorithm builds a map by making the best use of the available sensory data, an NBV algorithm, such as that proposed here, guides the navigation of the robot through positions selected to provide the best sensory inputs.
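The selection rule described above, maximizing expected information gain subject to a minimal-overlap constraint, reduces to a constrained argmax over candidate positions. The sketch below is schematic; the scoring functions and candidate names are hypothetical stand-ins for the paper's safe-region-based computations.

```python
def next_best_view(candidates, info_gain, overlap, min_overlap=0.2):
    """Choose the candidate sensing position with the highest expected
    information gain, among those whose predicted view overlaps the
    current map enough to allow registration."""
    feasible = [c for c in candidates if overlap(c) >= min_overlap]
    if not feasible:
        return None          # exploration finished or constraint too strict
    return max(feasible, key=info_gain)

# Hypothetical candidate scores: expected newly visible area vs. overlap
# fraction with the map built so far.
gains = {"A": 5.0, "B": 9.0, "C": 7.0}
overlaps = {"A": 0.5, "B": 0.1, "C": 0.4}   # B sees too little known area
best = next_best_view(["A", "B", "C"], gains.get, overlaps.get)
```

Candidate B has the highest raw gain but fails the overlap constraint, so the rule picks C, mirroring the trade-off between exploration and reliable registration.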
15

Yao, Ping. "Spatial Expression of Multifaceted Soft Decoration Elements: Application of 3D Reconstruction Algorithm in Soft Decoration and Furnishing Design of Office Space." Journal of Sensors 2022 (August 28, 2022): 1–11. http://dx.doi.org/10.1155/2022/5345293.

Abstract:
Under the rapid development of China's modern market economy, work-related problems and work pressure are increasing. The so-called office space refers to the layout and style of a workplace and to the physical and psychological division of the space. Office space design must take many factors into account, involving technology, craft, the humanities, aesthetics, and other elements, while the office is also the space where people work and relax. In recent years, as people's requirements for the work environment have grown, the design of office space has received more and more attention. Introducing the concept of soft furnishing design into the workspace helps improve the cultural taste of the overall corporate and office space design, which is one of the main ways to show the quality and human connotation of an enterprise. Three-dimensional reconstruction refers to the creation of a mathematical model of three-dimensional objects suitable for computer display and processing. It is an important basic tool for data processing, computing, and studying the performance of mathematical models in the computer environment, and it can be applied in fields such as autonomous navigation of mobile robots, aerial and remote-sensing computation, industrial monitoring information systems, medical imaging, and virtual reality. 3D environment reconstruction technology has become one of the popular research areas in computer vision and increasingly attracts the attention of design practitioners. This paper takes the 3D environment reconstruction of office-space soft decoration design as its basis, discusses the important elements and modeling ideas in soft decoration design as they contribute to the interior design of office space, and uses a Kinect to obtain depth data of the 3D environment, so as to achieve a realistic 3D reproduction of the interior environment based on computer vision technology.
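The depth-data step this abstract mentions, turning a Kinect depth image into 3D geometry, is usually done by pinhole back-projection. The sketch below shows that standard operation; the intrinsic parameters and names are illustrative, not values from the paper.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to a 3-D point cloud using the
    pinhole model: X = (u - cx) Z / fx, Y = (v - cy) Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    Z = depth
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.stack([X, Y, Z], axis=-1)             # shape (h, w, 3)
```

Registering several such point clouds from different viewpoints is what produces the full reconstruction of the interior.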
16

Schlatow, Johannes, Edgard Schmidt, and Rolf Ernst. "Automating integration under emergent constraints for embedded systems." SICS Software-Intensive Cyber-Physical Systems 35, no. 3-4 (October 23, 2021): 185–99. http://dx.doi.org/10.1007/s00450-021-00428-2.

Abstract:
Abstract: As embedded applications are subject to non-functional requirements (latency, safety, reliability, etc.), they require special care when it comes to providing assurances. Traditionally, these systems are quite static in their software and hardware composition. However, there is an increasing interest in enabling adaptivity and autonomy in embedded systems that cannot be satisfied with preprogrammed adaptations any more. Instead, it requires automated software composition in conjunction with model-based analyses that must adhere to requirements and constraints from various viewpoints. A major challenge in this matter is that embedded systems are subject to emergent constraints which are affected by inter-dependent properties resulting from the software composition and platform configuration. As these properties typically require an in-depth evaluation by complex analyses, a holistic formulation of parameters and their constraints is not applicable. We present a compositional framework for model-based integration of component-based embedded systems. The framework provides a structured approach to perform operations on a cross-layer model for model enrichment, synthesis and analysis. It thereby provides the overarching mechanisms to combine existing models, analyses and reasoning. Furthermore, it automates integration decisions and enables an iterative exploration of feasible system compositions. We demonstrate the applicability of this framework on a case study of a stereo-vision robot that uses a component-based operating system.
17

Wang, Yingxu. "On Theoretical Foundations of Human and Robot Vision." Journal of Physics: Conference Series 2278, no. 1 (May 1, 2022): 012001. http://dx.doi.org/10.1088/1742-6596/2278/1/012001.

Abstract:
Abstract: A set of cognitive, neurological, and mathematical theories for human and robot vision has been recognized that encompasses David Hubel's hypercolumn vision theory (The Nobel Prize in Physiology or Medicine 1981 [1]) and Dennis Gabor's wavelet filter theory (The Nobel Prize in Physics 1971 [2]). This keynote lecture presents a theoretical framework of the Cognitive Vision Theory (CVT) [3-6] and its neurological and mathematical foundations. A set of Intelligent Mathematics (IM) [7-13] and formal vision theories developed in my laboratory is introduced, encompassing Image Frame Algebra (IFA) [3], Visual Semantic Algebra (VSA) [4], and the Spike Frequency Modulation (SFM) theory [5]. IM is created to enable cognitive robots to gain autonomous vision cognition capability supported by Visual Knowledge Bases (VKBs). Paradigms and case studies of robot vision powered by CVTs and IM will be demonstrated. The basic research on CVTs has led to new perspectives on human and robot vision for developing novel image processing applications in AI, neural networks, image recognition, sequence learning, computational intelligence, self-driving vehicles, unmanned systems, and robot navigation.
18

Pop, Cristian, Arjana Davidescu, and Sanda Margareta Grigorescu. "Robot Vision Application for Overlapped Work Pieces." Applied Mechanics and Materials 657 (October 2014): 849–53. http://dx.doi.org/10.4028/www.scientific.net/amm.657.849.

Abstract:
An automated robot vision application for work-piece manipulation is described. The purpose is to pick up work pieces such as bearings from a pile and store them in a specific location. The image processing algorithms developed for this application and the camera calibration process are implemented using Matlab software. Based on these algorithms, the operations of detecting, identifying, sorting and manipulating the bearings are realized. The mathematical model for determining the position and orientation of a bearing with respect to the world reference system attached to the robot's base is also presented.
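Expressing a camera-detected pose in the robot-base (world) frame, as this abstract describes, is conventionally done by chaining homogeneous transforms. The sketch below shows that standard operation; the mounting pose and all names are illustrative assumptions, not the paper's calibration result.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and
    translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_base_frame(p_cam, T_base_cam):
    """Express a point given in camera coordinates in the robot-base frame."""
    p = np.append(np.asarray(p_cam, float), 1.0)   # homogeneous coordinates
    return (T_base_cam @ p)[:3]

# Hypothetical mounting: camera axes aligned with the base, offset 0.5 m
# along x and 1 m along z.
T = make_transform(np.eye(3), [0.5, 0.0, 1.0])
p_base = to_base_frame([0.1, 0.2, 0.3], T)
```

A full bearing pose would also carry orientation, i.e. the composition T_base_obj = T_base_cam @ T_cam_obj of 4x4 transforms rather than a single point.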
19

Tang, K. M. W., and V. D. Tourassis. "Mathematical deficiencies of numerically simplified dynamic robot models." IEEE Transactions on Automatic Control 34, no. 10 (1989): 1109–11. http://dx.doi.org/10.1109/9.35289.

20

Umeda, Kazunori. "Special Issue on Robot Vision." Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 253. http://dx.doi.org/10.20965/jrm.2003.p0253.

Abstract:
Robot vision is an essential key technology in robotics and mechatronics. Studies on robot vision are wide-ranging, and the topic remains a vital research target. This special issue reviews recent advances in this exciting field, following up two earlier special issues, Vol. 11 No. 2 and Vol. 13 No. 6, which attracted more papers than expected; this indicates the high degree of research activity in the field. I am most pleased to report that this issue presents 12 excellent papers covering robot vision, including basic algorithms based on precise optical models, pattern and gesture recognition, and active vision. Several papers treat range imaging, and others present interesting applications to agriculture, quadruped robots, and new devices. This issue also presents two news briefs, one on a practical range sensor suited to mobile robots and the other on vision devices that improve on the famous IP-5000 series. I am convinced that this special issue will make research on robot vision more exciting. I would like to close by thanking all of the researchers who submitted their studies, and to give special thanks to the reviewers and editors, especially Prof. M. Kaneko, Dr. K. Yokoi, and Prof. Y. Nakauchi.
APA, Harvard, Vancouver, ISO, and other styles
21

Flores-Fuentes, Wendy, Mónica Valenzuela-Delgado, Danilo Cáceres-Hernández, Oleg Sergiyenko, Miguel E. Bravo-Zanoguera, Julio C. Rodríguez-Quiñonez, Daniel Hernández-Balbuena, and Moisés Rivas-López. "Magnetohydrodynamic velocity profile measurement for microelectromechanical systems micro-robot design." International Journal of Advanced Robotic Systems 16, no. 5 (September 1, 2019): 172988141987561. http://dx.doi.org/10.1177/1729881419875611.

Full text
Abstract:
The development of microelectromechanical systems based on magnetohydrodynamics for micro-robot applications requires precise control of micro-flow behavior. The micro-flow channel design and its performance under the influence of the Lorentz force is a critical challenge; the mathematical model of each magnetohydrodynamic device design must be experimentally validated before being employed in the fabrication of microelectromechanical systems. For this purpose, the present article proposes the enhancement of a particle image velocimetry measurement process in a customized machine vision system. The particle image velocimetry measurements are performed to validate the micro-flow velocity profile mathematical model of a magnetohydrodynamic stirrer prototype. Data mining and filtering have been applied to a raw measurement database from the customized machine vision system designed to evaluate the prototype. Outlier elimination and smoothing have been applied to the raw data to bring the particle image velocimetry output closer to the velocity profile mathematical model and to increase the accuracy of the customized machine vision system for two-dimensional velocity profile measurements. Accurate measurement of the two-dimensional velocity profile is fundamental, since the customized machine vision system will later be enhanced to construct the three-dimensional velocity profile of the stirrer prototype. The presented methodology can be used for measurement and validation in the design of microelectromechanical systems for micro-robots and of any other devices that require micro-flow manipulation for tasks such as stirring, pumping, mixing, networking, propelling, and even cooling.
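The outlier elimination and smoothing step described above can be sketched generically; the z-score threshold, window size, and sample data below are illustrative assumptions, not the authors' actual filter:

```python
def clean_series(raw, window=3, z_thresh=2.0):
    """Remove outliers by z-score, then smooth with a centred moving
    average (a generic stand-in for a data-mining/filtering step)."""
    n = len(raw)
    mean = sum(raw) / n
    std = (sum((x - mean) ** 2 for x in raw) / n) ** 0.5 or 1.0
    kept = [x for x in raw if abs(x - mean) / std <= z_thresh]
    half = window // 2
    # Centred moving average; the window shrinks at the ends.
    return [sum(kept[max(0, i - half):i + half + 1])
            / len(kept[max(0, i - half):i + half + 1])
            for i in range(len(kept))]

# Toy velocity samples with one spurious spike (9.0) to be rejected.
velocities = [1.0, 1.1, 0.9, 9.0, 1.0, 1.05]
smooth = clean_series(velocities)
```

The spike is dropped by the z-score test and the remaining samples are averaged toward the underlying profile.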
APA, Harvard, Vancouver, ISO, and other styles
22

KIMURA, Hiroshi. "High-speed Vision Robot System Suitable for Many Product Models." Journal of the Society of Mechanical Engineers 108, no. 1039 (2005): 472–73. http://dx.doi.org/10.1299/jsmemag.108.1039_472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Wei, Yuan, and Zhen Wang. "A Vision-Based Navigation Robot System Using Hybrid Color Models." Journal of Physics: Conference Series 1087 (September 2018): 062006. http://dx.doi.org/10.1088/1742-6596/1087/6/062006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

VILLACAMPA, Y., and J. L. USO-DOMENECH. "MATHEMATICAL MODELS OF COMPLEX STRUCTURAL SYSTEMS. A LINGUISTIC VISION." International Journal of General Systems 28, no. 1 (June 1999): 37–52. http://dx.doi.org/10.1080/03081079908935228.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Usó Doménech, José Luis, Josué Antonio Nescolarde-Selva, Lorena Segura-Abad, and Hugh Gash. "A dialectical vision of mathematical models of complex systems." Kybernetes 49, no. 3 (July 11, 2019): 938–59. http://dx.doi.org/10.1108/k-01-2019-0032.

Full text
Abstract:
Purpose Mathematical models are constructed at the interface between practice, experience and theories. The function of models puts us on guard against the privilege granted to what is accepted as abstract and formal, and at the same time puts us on guard against a static and phenomenological conception of knowledge. The epistemology of models does not suppress in any way the objectives of science: only, a dogmatic conception concerning truth is removed, and dynamic and dialectical aspects of monitoring are stressed to establish the most viable model. The purpose of this paper is to examine hybrid methodologies (inductive-deductive) that may either propose hypothetical causal relations and seek support for them in field data or detect causal relations in field data and propose hypotheses for the relations detected. Design/methodology/approach The authors follow a dialectical analysis for a type of inductive-deductive model. Findings In this work, the authors present an inductive-deductive methodology whose practical result satisfies the Hegelian dialectic. The consequent implication of their mutual reciprocal integration produces abstractions from the concrete that enable thought. The real problem in this case is a given ontological system or reality. Originality/value The essential elements of the models – variables, equations, simulation and feedback – are studied using a dialectic Hegelian theory.
APA, Harvard, Vancouver, ISO, and other styles
26

Shen, Dan, Haibin Ling, Khanh Pham, Erik Blasch, and Genshe Chen. "Computer vision and pursuit–evasion game theoretical controls for ground robots." Advances in Mechanical Engineering 11, no. 8 (August 2019): 168781401987291. http://dx.doi.org/10.1177/1687814019872911.

Full text
Abstract:
A hardware-in-loop control framework with robot dynamic models, pursuit–evasion game models, sensor and information solutions, and entity tracking algorithms is designed and developed to demonstrate discrete-time robotic pursuit–evasion games for real-world conditions. A parameter estimator is implemented to learn the unknown parameters in the robot dynamics. For visual tracking and fusion, several markers are designed and selected with the best balance of robot tracking accuracy and robustness. The target robots are detected after background modeling, and the robot poses are estimated from the local gradient patterns. Based on the robot dynamic model, a two-player discrete-time game model with limited action space and limited look-ahead horizons is created. The robot controls are based on the game-theoretic (mixed) Nash solutions. Supportive results are obtained from the robot control framework to enable future research of the robot applications in sensor fusion, target tracking and detection, and decision making.
APA, Harvard, Vancouver, ISO, and other styles
27

Sun, Wencheng. "Robot Obstacle Recognition and Target Tracking Based on Binocular Vision." Advances in Multimedia 2022 (August 18, 2022): 1–12. http://dx.doi.org/10.1155/2022/9022038.

Full text
Abstract:
To further improve the ability of a binocular vision sensor to perceive rich environment information and scene depth information, a method for robot obstacle recognition and target tracking based on binocular vision was proposed. The method focuses on target recognition and obstacle recognition with binocular stereo vision. A visual obstacle recognition system was set up. Through analysis of the Bouguet mathematical model algorithm based on OpenCV, binocular stereo rectification was carried out and the obstacle recognition system was calibrated and corrected. Experimental data showed that the average error of the obstacle recognition and target tracking algorithm based on binocular vision could be kept within 50 mm over a range of 2100 mm. The average time for obstacle recognition was 0.096 s and the average time consumption of the whole system was 0.466 s, indicating that the robot obstacle recognition and target tracking system based on binocular vision meets the accuracy and real-time requirements of obstacle recognition and detection.
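For a rectified stereo pair (the role Bouguet's algorithm plays here), depth follows from the standard triangulation relation Z = fB/d. A minimal sketch with illustrative camera parameters, not the paper's calibration values:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth of a point seen by a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: 700 px focal length, 60 mm baseline, 20 px disparity.
z_mm = depth_from_disparity(700.0, 60.0, 20.0)  # 2100.0 mm
```

Depth error grows with distance because a fixed disparity error maps to a larger depth change at small disparities, which is why accuracy figures are quoted over a bounded range.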
APA, Harvard, Vancouver, ISO, and other styles
28

Schmitt, Lorenz A., William A. Gruver, and Assad Ansari. "A Robot Vision System Based on Two-Dimensional Object-Oriented Models." IEEE Transactions on Systems, Man, and Cybernetics 16, no. 4 (July 1986): 582–89. http://dx.doi.org/10.1109/tsmc.1986.289263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sevastopoulos, Christos, Stasinos Konstantopoulos, Keshav Balaji, Mohammad Zaki Zadeh, and Fillia Makedon. "A Simulated Environment for Robot Vision Experiments." Technologies 10, no. 1 (January 12, 2022): 7. http://dx.doi.org/10.3390/technologies10010007.

Full text
Abstract:
Training on simulation data has proven invaluable in applying machine learning in robotics. However, when looking at robot vision in particular, simulated images cannot be directly used no matter how realistic the image rendering is, as many physical parameters (temperature, humidity, wear-and-tear in time) vary and affect texture and lighting in ways that cannot be encoded in the simulation. In this article we propose a different approach for extracting value from simulated environments: although neither of the trained models can be used nor are any evaluation scores expected to be the same on simulated and physical data, the conclusions drawn from simulated experiments might be valid. If this is the case, then simulated environments can be used in early-stage experimentation with different network architectures and features. This will expedite the early development phase before moving to (harder to conduct) physical experiments in order to evaluate the most promising approaches. In order to test this idea we created two simulated environments for the Unity engine, acquired simulated visual datasets, and used them to reproduce experiments originally carried out in a physical environment. The comparison of the conclusions drawn in the physical and the simulated experiments is promising regarding the validity of our approach.
APA, Harvard, Vancouver, ISO, and other styles
30

Cheema, Sachmanik Singh, and Sumit Budhiraja. "An Improved Technique for Vision Based Path Tracking Robot." Advanced Materials Research 403-408 (November 2011): 4986–90. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.4986.

Full text
Abstract:
This paper describes a simple algorithm for a line-following robot tracking a path (a black line on a white background), with the steering decision derived from images of the path processed using the MATLAB Image Processing Toolbox. The images are captured with the MATLAB Image Acquisition Toolbox by triggering frames from a video in real time and applying the algorithm to these frames. This approach improves on infrared-sensor-based robots, which tend to give random values when adequate light is unavailable, depend on the shape of the path since they carry no information about the path ahead, and require a sensor separation tuned to the shape and width of the path. The approach is inspired by the way human vision determines the deviation of a path by comparing the orientation of the path ahead. It is a simple computational technique working on the pixel information of the image, in contrast to more complex mathematical techniques. The algorithm has been verified on a recorded video, and the correct deviation of the path has been observed.
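A centroid-based deviation measure of the kind described can be sketched in a few lines; the threshold value and the toy image row are illustrative assumptions:

```python
def line_deviation(row):
    """Steering deviation for one grayscale pixel row: offset of the dark
    line's centroid from the image centre (0 = black line, 255 = floor)."""
    dark = [x for x, v in enumerate(row) if v < 128]  # simple threshold
    if not dark:
        return None                                   # line lost
    centroid = sum(dark) / len(dark)
    centre = (len(row) - 1) / 2.0
    return centroid - centre  # sign tells the robot which way to steer

row = [255] * 10 + [0, 0, 0, 0] + [255] * 6  # toy 20-pixel row
dev = line_deviation(row)                    # centroid 11.5, centre 9.5
```

Applying this to rows near the top of the frame gives the "knowledge of the path ahead" that fixed infrared sensors lack.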
APA, Harvard, Vancouver, ISO, and other styles
31

Filaretov, Vladimir, Alexander Zuev, Alexander Procenko, and Sergey Melman. "Fault Detection of Actuators of Robot Manipulator by Vision System." Applied Mechanics and Materials 865 (June 2017): 457–62. http://dx.doi.org/10.4028/www.scientific.net/amm.865.457.

Full text
Abstract:
This paper considers a synthesis method for a fault detection system for the actuators of robot manipulators, based on fusing signals from a stereo camera, joint angle sensors, and the desired values of the joint variables. The vision system is used to determine the position of three markers rigidly connected to the working tool in the coordinate system associated with the manipulator. The advantages of the proposed fault detection system are its simplicity of implementation and its precision in detecting typical faults without knowledge of the nonlinear dynamics of the robot and its actuators. The results of mathematical simulation on the example of a PUMA-type manipulator, using its kinematic model and the position and orientation data of the markers placed on the working tool obtained from the vision system, fully confirm the efficiency of the proposed fault detection system.
APA, Harvard, Vancouver, ISO, and other styles
32

Migdalovici, Marcel, L. Vladareanu, Hongnian Yu, N. Pop, M. Iliescu, V. Vladareanu, D. Baran, and G. Vladeanu. "The walking robots critical position of the kinematics or dynamic systems applied on the environment model." International Journal of Engineering & Technology 7, no. 2.28 (May 16, 2018): 134. http://dx.doi.org/10.14419/ijet.v7i2.28.12896.

Full text
Abstract:
The first part of the exposition is dedicated to mathematical modeling of the environment, where aspects of walking robot evolution models are described. The environment's mathematical model is defined through models of kinematic or dynamic systems in the general case of systems that depend on parameters. An important property of the dynamic system evolution models that approximate phenomena from the environment is the separation between stable and unstable regions in the system's free-parameter domain. Mathematical conditions that imply the separation of stable regions in the free-parameter domain are formulated. The second part describes our ideas on walking robot kinematic and dynamic models, with aspects exemplified on a walking robot leg. An inverse method for identifying possible critical positions of the walking robot leg is established.
APA, Harvard, Vancouver, ISO, and other styles
33

Duan, Lu Ling, Pin Wang, and Ni Ruan. "Mathematical Model of Obstacle Avoidance Shortest Time Path for Robot." Applied Mechanics and Materials 721 (December 2014): 214–17. http://dx.doi.org/10.4028/www.scientific.net/amm.721.214.

Full text
Abstract:
In this paper, we study a mathematical model of the shortest-time obstacle-avoiding path for a robot. From the origin to a specified point, the robot's shortest-time path falls into three cases; we establish a mathematical model for each, compute the three required times with mathematical software, and obtain the optimal path by comparison.
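One common geometric case of such a model (illustrative, not necessarily one of the paper's three cases) is a straight segment blocked by a circular obstacle, where the shortest path is tangent segment, arc, tangent segment:

```python
import math

def avoid_time(S, G, C, r, v):
    """Travel time of the shortest path from S to G around a circular
    obstacle (centre C, radius r) at constant speed v, assuming the direct
    segment crosses the disc: tangent segment + arc + tangent segment."""
    dS, dG = math.dist(S, C), math.dist(G, C)
    tS = math.sqrt(dS * dS - r * r)  # tangent length from S
    tG = math.sqrt(dG * dG - r * r)  # tangent length from G
    # Angle subtended at C between the directions to S and to G.
    dot = (S[0] - C[0]) * (G[0] - C[0]) + (S[1] - C[1]) * (G[1] - C[1])
    ang_SCG = math.acos(dot / (dS * dG))
    # Arc angle left after subtracting the two tangent-point angles.
    arc = r * (ang_SCG - math.acos(r / dS) - math.acos(r / dG))
    return (tS + tG + arc) / v

# Unit-radius obstacle centred on the straight line from (0,0) to (10,0).
t = avoid_time((0, 0), (10, 0), (5, 0), 1.0, 1.0)  # slightly more than 10 s
```

Comparing such closed-form times for the candidate geometries is exactly the "comparison" step the abstract mentions.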
APA, Harvard, Vancouver, ISO, and other styles
34

Moshayedi, Ata Jahangir, Atanu Shuvam Roy, Sithembiso Khaya Sambo, Yangwan Zhong, and Liefa Liao. "Review On: The Service Robot Mathematical Model." EAI Endorsed Transactions on AI and Robotics 1 (February 23, 2022): 1–19. http://dx.doi.org/10.4108/airo.v1i.20.

Full text
Abstract:
After nearly 30 years of development, service robot technology has made important achievements at the intersection of machinery, information, materials, control, medicine, etc. These robots take different shapes, largely determined by their application. To date, various structures have been proposed; better analysis requires mathematical equations that model each structure and, subsequently, its behaviour once a control strategy is implemented. The current paper discusses the various shapes and applications of available service robots and briefly summarizes research progress on key points such as robot dynamics, robot types, and the dynamic models of the different types of service robots. This review can serve as a starting point for researchers in the topic and help them carry out better simulation and analysis. The review also presents applications that help in selecting a service robot model for a given application.
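A standard example of such a mathematical model is the differential-drive (unicycle) kinematics used by many wheeled service robots. The sketch below uses simple Euler integration with illustrative parameters; it is a generic model, not one taken from the review:

```python
import math

def diff_drive_step(x, y, theta, v_l, v_r, L, dt):
    """One Euler step of the differential-drive kinematic model with
    left/right wheel speeds v_l, v_r and wheel separation L."""
    v = (v_r + v_l) / 2.0     # forward speed of the chassis centre
    omega = (v_r - v_l) / L   # yaw rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive straight for 1 s at 0.5 m/s (100 steps of 10 ms each).
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = diff_drive_step(x, y, th, 0.5, 0.5, 0.3, 0.01)
```

Unequal wheel speeds make `omega` nonzero, so the same update also produces turning trajectories.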
APA, Harvard, Vancouver, ISO, and other styles
35

Yoneyama, Ryota, Angel J. Duran, and Angel P. del Pobil. "Integrating Sensor Models in Deep Learning Boosts Performance: Application to Monocular Depth Estimation in Warehouse Automation." Sensors 21, no. 4 (February 19, 2021): 1437. http://dx.doi.org/10.3390/s21041437.

Full text
Abstract:
Deep learning is the mainstream paradigm in computer vision and machine learning, but performance is usually not as good as expected when used for applications in robot vision. The problem is that robot sensing is inherently active, and often, relevant data is scarce for many application domains. This calls for novel deep learning approaches that can offer a good performance at a lower data consumption cost. We address here monocular depth estimation in warehouse automation with new methods and three different deep architectures. Our results suggest that the incorporation of sensor models and prior knowledge relative to robotic active vision, can consistently improve the results and learning performance from fewer than usual training samples, as compared to standard data-driven deep learning.
APA, Harvard, Vancouver, ISO, and other styles
36

Turlapati, Sri Harsha, and Domenico Campolo. "Towards Haptic-Based Dual-Arm Manipulation." Sensors 23, no. 1 (December 29, 2022): 376. http://dx.doi.org/10.3390/s23010376.

Full text
Abstract:
Vision is the main component of current robotics systems that is used for manipulating objects. However, solely relying on vision for hand-object pose tracking faces challenges such as occlusions and objects moving out of view during robotic manipulation. In this work, we show that object kinematics can be inferred from local haptic feedback at the robot-object contact points, combined with robot kinematics information given an initial vision estimate of the object pose. A planar, dual-arm, teleoperated robotic setup was built to manipulate an object with hands shaped like circular discs. The robot hands were built with rubber cladding to allow for rolling contact without slipping. During stable grasping by the dual arm robot, under quasi-static conditions, the surface of the robot hand and object at the contact interface is defined by local geometric constraints. This allows one to define a relation between object orientation and robot hand orientation. With rolling contact, the displacement of the contact point on the object surface and the hand surface must be equal and opposite. This information, coupled with robot kinematics, allows one to compute the displacement of the object from its initial location. The mathematical formulation of the geometric constraints between robot hand and object is detailed. This is followed by the methodology in acquiring data from experiments to compute object kinematics. The sensors used in the experiments, along with calibration procedures, are presented before computing the object kinematics from recorded haptic feedback. Results comparing object kinematics obtained purely from vision and from haptics are presented to validate our method, along with the future ideas for perception via haptic manipulation.
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Guifeng, Lu-ming Zhang, and Yichuan Sheng. "Machine-Vision-Based Enhanced Deep Genetic Algorithm for Robot Action Analysis." Applied Bionics and Biomechanics 2022 (September 30, 2022): 1–6. http://dx.doi.org/10.1155/2022/4047826.

Full text
Abstract:
Machined products processed by CNC machine tools consist of multiple parts, and errors in a part's posture often lead to unqualified results, affecting downstream operations. To handle this problem, we propose a method for locating and correcting part poses based on machine-vision recognition. The robot obtains coordinates and offset points through the vision system. To satisfy the requirement of fast robotic part sorting, a method (BAS-GA) combining machine vision with an improved genetic algorithm is proposed. The method first preprocesses the part images, then filters them using a SIFT-feature similarity matching algorithm, and finally locates the target parts. A model of the sorting task is then built and solved with the BAS-GA algorithm, and the robot follows the resulting trajectory. Experiments show that the BAS-GA algorithm reaches solutions comparable to the simulated annealing algorithm and improves on the ant colony algorithm, with an error measure also reduced by 7%, indicating that this process can effectively improve the robot's action success rate.
APA, Harvard, Vancouver, ISO, and other styles
38

Sasaki, Yoshifumi, and Michitaka Kameyama. "Design of a Model-Based Robot Vision VLSI Processor." Journal of Robotics and Mechatronics 6, no. 2 (April 20, 1994): 131–36. http://dx.doi.org/10.20965/jrm.1994.p0131.

Full text
Abstract:
For intelligent robots, a robot vision system is usually required to perform three-dimensional (3-D) position estimation as well as object recognition at high speed. In this paper, we propose an algorithm for 3-D object recognition and position estimation for implementation on a VLSI processor. The principle of the algorithm is model matching between an input image and models stored in memory. Because of the enormous computation time involved, the development of a high-performance VLSI processor is essential. A highly parallel architecture is introduced in the VLSI processor to reduce latency. As a result of this highly parallel computing, computation is 10,000 times faster than on a 28.5 MIPS workstation.
APA, Harvard, Vancouver, ISO, and other styles
39

Qiu, Ning Jia, Ming Zhe Li, Zhen Sui, Cheng Xiang Zheng, Ren Jun Li, and Wei Yao. "Analysis and Synthesis of 6-DOF Robot Measurement Errors." Advanced Materials Research 718-720 (July 2013): 455–59. http://dx.doi.org/10.4028/www.scientific.net/amr.718-720.455.

Full text
Abstract:
Robot motion accuracy plays a vital role in production that makes use of industrial robots. This paper uses an iterative algorithm to calibrate the robot joint parameters on the basis of a mathematical model of a 6-DOF robot. It puts forward a method that compares the measured pose with the theoretical one to obtain the robot's absolute pose deviation. This provides the basis for precise scribing work on sheet-metal surfaces in the next stage.
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Liyuan, Qianli Xu, Gang S. Wang, Xinguo Yu, Yeow Kee Tan, and Haizhou Li. "Visual Perception Based Engagement Awareness for Multiparty Human–Robot Interaction." International Journal of Humanoid Robotics 12, no. 04 (November 27, 2015): 1550019. http://dx.doi.org/10.1142/s021984361550019x.

Full text
Abstract:
Computational systems for human–robot interaction (HRI) could benefit from visual perceptions of social cues that are commonly employed in human–human interactions. However, existing systems focus on one or two cues for attention or intention estimation. This research investigates how social robots may exploit a wide spectrum of visual cues for multiparty interactions. It is proposed that the vision system for social cue perception should be supported by two dimensions of functionality, namely, vision functionality and cognitive functionality. A vision-based system is proposed for a robot receptionist to embrace both functionalities for multiparty interactions. The module of vision functionality consists of a suite of methods that computationally recognize potential visual cues related to social behavior understanding. The performance of the models is validated by the ground truth annotation dataset. The module of cognitive functionality consists of two computational models that (1) quantify users’ attention saliency and engagement intentions, and (2) facilitate engagement-aware behaviors for the robot to adjust its direction of attention and manage the conversational floor. The performance of the robot’s engagement-aware behaviors is evaluated in a multiparty dialog scenario. The results show that the robot’s engagement-aware behavior based on visual perceptions significantly improve the effectiveness of communication and positively affect user experience.
APA, Harvard, Vancouver, ISO, and other styles
41

Riego del Castillo, Virginia, Lidia Sánchez-González, Adrián Campazas-Vega, and Nicola Strisciuglio. "Vision-Based Module for Herding with a Sheepdog Robot." Sensors 22, no. 14 (July 16, 2022): 5321. http://dx.doi.org/10.3390/s22145321.

Full text
Abstract:
Livestock farming is assisted more and more by technological solutions, such as robots. One of the main problems for shepherds is the control and care of livestock in areas difficult to access where grazing animals are attacked by predators such as the Iberian wolf in the northwest of the Iberian Peninsula. In this paper, we propose a system to automatically generate benchmarks of animal images of different species from iNaturalist API, which is coupled with a vision-based module that allows us to automatically detect predators and distinguish them from other animals. We tested multiple existing object detection models to determine the best one in terms of efficiency and speed, as it is conceived for real-time environments. YOLOv5m achieves the best performance as it can process 64 FPS, achieving an mAP (with IoU of 50%) of 99.49% for a dataset where wolves (predator) or dogs (prey) have to be detected and distinguished. This result meets the requirements of pasture-based livestock farms.
APA, Harvard, Vancouver, ISO, and other styles
42

Pensky, Oleg G., Vladimir O. Michailov, and Kirill V. Chernikov. "Mathematical Models of Receptivity of a Robot and a Human to Education." Intelligent Control and Automation 05, no. 03 (2014): 97–101. http://dx.doi.org/10.4236/ica.2014.53011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ehrenman, Gayle. "Eyes on the Line." Mechanical Engineering 127, no. 08 (August 1, 2005): 25–27. http://dx.doi.org/10.1115/1.2005-aug-2.

Full text
Abstract:
This article discusses vision-enabled robots that are helping factories to keep the production lines rolling, even when the parts are out of place. The automotive industry was one of the earliest to adopt industrial robots, and continues to be one of its biggest users, but now industrial robots are turning up in more unusual factory settings, including pharmaceutical production and packaging, consumer electronics assembly, machine tooling, and food packaging. No current market research is available that breaks down vision-enabled versus blind robot usage. However, all the major industrial robot manufacturers are turning out models that are vision-enabled; one manufacturer said that its entire current line of robots is vision-enabled. All it takes to change over the robot system is some fairly basic tooling changes to the robot's end-effector, and some programming changes in the software. The combination of speed, relatively low cost, flexibility, and ease of use that vision-enabled robots offer is making an increasing number of factories consider putting another set of eyes on their lines.
APA, Harvard, Vancouver, ISO, and other styles
44

Jovanovic, Kosta, Jovana Vranic, and Nadica Miljkovic. "Hill’s and Huxley’s muscle models - tools for simulations in biomechanics." Serbian Journal of Electrical Engineering 12, no. 1 (2015): 53–67. http://dx.doi.org/10.2298/sjee1501053j.

Full text
Abstract:
Numerous mathematical models of human skeletal muscles have been developed. However, none has been adopted as a general model, and each is suggested for a specific purpose. This topic is essential in humanoid robotics, since we first need to understand how humans move and act in order to exploit human movement patterns in robotics and design human-like actuators. Simulations in biomechanics are used intensively in research on locomotion, safe human-robot interaction, development of novel robotic actuators, biologically inspired control algorithms, etc. This paper presents two widely adopted muscle models (Hill's and Huxley's), elaborates their features, and demonstrates the trade-off between their accuracy and the efficiency of computer simulations. The simulation setup contains a mathematical representation of passive muscle structures as well as a mathematical model of an elastic tendon as a series elastic actuation element. Advanced robot control techniques point to energy consumption as one of the key issues; therefore, the energy storage and release mechanism of the elastic elements in both tendon and muscle is considered, based on the simulation models.
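A Hill-type model of the kind compared above multiplies an activation level by force-length and force-velocity relations. The sketch below is a generic illustration; the curve constants and `f_max` are assumed values, not taken from the paper:

```python
import math

def hill_force(a, l_norm, v_norm, f_max=1000.0):
    """Hill-type active muscle force: activation a in [0, 1], fibre length
    and velocity normalised to the optimal length (shortening: v_norm > 0)."""
    # Gaussian force-length relation, peaking at the optimal length l_norm = 1.
    f_l = math.exp(-((l_norm - 1.0) ** 2) / 0.45)
    # Hyperbolic (Hill) force-velocity relation for shortening.
    f_v = (1.0 - v_norm) / (1.0 + v_norm / 0.25) if v_norm < 1.0 else 0.0
    return f_max * a * f_l * f_v

# Isometric contraction at the optimal length gives the maximum force.
iso = hill_force(1.0, 1.0, 0.0)
```

A Huxley-type model would instead track cross-bridge attachment distributions with differential equations, which is the accuracy-versus-cost trade-off the paper examines.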
APA, Harvard, Vancouver, ISO, and other styles
45

Matuliauskas, Arvydas, and Bronislovas Spruogis. "PIPELINE ROBOTS WITH ELASTIC ELEMENTS." TRANSPORT 17, no. 5 (October 31, 2002): 177–81. http://dx.doi.org/10.3846/16483840.2002.10414039.

Full text
Abstract:
In this article, constructions of pipeline robots with elastic elements are reviewed and the scheme of a new original construction is presented. Mathematical models of a robot with a one-dimensional vibration exciter with two degrees of freedom were developed, and the equations of motion were derived. The mathematical model of a pipeline robot with circular elements is formed and its motion equations are presented.
APA, Harvard, Vancouver, ISO, and other styles
46

HÜLSE, MARTIN, SEBASTIAN McBRIDE, and MARK LEE. "TASK MODULATED ACTIVE VISION FOR ADVANCED HUMAN–ROBOT INTERACTION." International Journal of Humanoid Robotics 09, no. 03 (September 2012): 1250024. http://dx.doi.org/10.1142/s0219843612500247.

Full text
Abstract:
Eye fixation and gaze fixation patterns in general play an important part when humans interact with each other. Moreover, human gaze fixation patterns are strongly determined by the task being performed. Our assumption is that meaningful human–robot interaction with robots having active vision components (such as humanoids) is greatly supported if the robot system is able to create task-modulated fixation patterns. We present an architecture for a robot active vision system equipped with one manipulator, in which we demonstrate the generation of task-modulated gaze control, meaning that fixation patterns accord with the specific task the robot has to perform. Experiments demonstrate different strategies of multi-modal task modulation for robotic active vision where visual and nonvisual features (tactile feedback) determine gaze fixation patterns. The results are discussed in comparison to purely saliency-based strategies for visual attention and gaze control. The major advantages of our approach to multi-modal task modulation are that the active vision system can generate, first, active avoidance of objects and, second, active engagement with objects. Such behaviors cannot be generated by current approaches to visual attention based on saliency models only, but they are important for mimicking human-like gaze fixation patterns.
APA, Harvard, Vancouver, ISO, and other styles
47

Bostanov, B. О. "Unstressed combined trajectory of robot." Bulletin of the National Engineering Academy of the Republic of Kazakhstan 4, no. 78 (January 10, 2020): 29–34. http://dx.doi.org/10.47533/2020.1606-146x.29.

Full text
Abstract:
The problem of forming a smooth combined trajectory of a robot and determining the positions of the connection points that provide kinematic and dynamic smoothness conditions is considered. Mathematical relations are obtained that express the conditions for connecting the arcs of a trajectory without a jump in the radius of curvature at the joints. The proposed method supports the formation of complex technical forms and creates, on their basis, new models of a robot's combined trajectory of continuous curvature.
APA, Harvard, Vancouver, ISO, and other styles
49

Bertetto, Andrea Manuello, and Maurizio Ruggiu. "Low Cost Pipe-crawling Pneumatic Robot." Journal of Robotics and Mechatronics 14, no. 4 (August 20, 2002): 400–407. http://dx.doi.org/10.20965/jrm.2002.p0400.

Full text
Abstract:
A pipe-inspection robot prototype was built and a mathematical model of its dynamics was developed. In order to negotiate complicated pipeline networks, the robot was constructed with a flexible rubber structure driven by pneumatic power. The robot's locomotion was inspired by inch-worm gait kinematics. Because significant acceleration values were revealed during the robot's gait, the robot dynamics were mathematically formulated either by a single-degree-of-freedom model or by the assumed-mode summation method. A set of experiments was conducted to obtain all the parameters required for formulating the models. Finally, the models were validated by comparing the numerical and experimental robot gaits over time.
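The single-degree-of-freedom alternative mentioned in the abstract amounts to a forced mass–spring–damper equation. All parameter values and the square-wave pneumatic force below are illustrative assumptions, not the paper's identified values:

```python
# m*x'' + c*x' + k*x = F(t): lumped 1-DOF model of the crawler body.
m, c, k = 0.5, 2.0, 80.0          # mass [kg], damping [N*s/m], stiffness [N/m]

def pneumatic_force(t):
    """Square-wave thrust mimicking the inflate/deflate actuation cycle."""
    return 10.0 if (t % 1.0) < 0.5 else 0.0

x, v, t, dt = 0.0, 0.0, 0.0, 1e-3
max_x = 0.0
for _ in range(int(5.0 / dt)):     # semi-implicit Euler integration over 5 s
    a = (pneumatic_force(t) - c * v - k * x) / m
    v += a * dt
    x += v * dt
    t += dt
    max_x = max(max_x, x)

print(round(max_x, 3))             # peak displacement of the lumped body
```

Fitting m, c, and k from the experiments is the parameter-identification step the abstract describes; the assumed-mode summation method extends the same idea to several flexible modes of the rubber body.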
APA, Harvard, Vancouver, ISO, and other styles
50

Transeth, Aksel Andreas, Kristin Ytterstad Pettersen, and Pål Liljebäck. "A survey on snake robot modeling and locomotion." Robotica 27, no. 7 (March 3, 2009): 999–1015. http://dx.doi.org/10.1017/s0263574709005414.

Full text
Abstract:
SUMMARY Snake robots have the potential to make substantial contributions in areas such as rescue missions, firefighting, and maintenance where it may either be too narrow or too dangerous for personnel to operate. During the last 10–15 years, the published literature on snake robots has increased significantly. The purpose of this paper is to give a survey of the various mathematical models and motion patterns presented for snake robots. Both purely kinematic models and models including dynamics are investigated. Moreover, the different approaches to biologically inspired locomotion and artificially generated motion patterns for snake robots are discussed.
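Among the biologically inspired motion patterns such surveys cover, lateral undulation is commonly generated from Hirose's serpenoid curve as a phase-shifted sinusoid per joint. The parameter names and values below are illustrative assumptions:

```python
import math

def serpenoid_joint_angles(n_joints, t, alpha=0.6, omega=2.0, beta=0.5, gamma=0.0):
    """Reference joint angles for lateral undulation of a planar snake robot.

    phi_i(t) = alpha * sin(omega * t + i * beta) + gamma
    where alpha sets the wave amplitude, omega its temporal frequency,
    beta the phase offset between neighbouring joints, and gamma a bias
    term that steers the robot (gamma = 0 gives straight-line motion).
    """
    return [alpha * math.sin(omega * t + i * beta) + gamma
            for i in range(n_joints)]

# One snapshot of a 10-joint snake at t = 0:
print(serpenoid_joint_angles(10, 0.0))
```

Tracking these references with joint controllers yields the travelling body wave; dynamic models add contact forces between the links and the ground to predict the resulting propulsion.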
APA, Harvard, Vancouver, ISO, and other styles