Academic literature on the topic 'Robot vision Mathematical models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Robot vision Mathematical models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Robot vision Mathematical models"

1. Khodabandehloo, K. "Robotic handling and packaging of poultry products." Robotica 8, no. 4 (October 1990): 285–97. http://dx.doi.org/10.1017/s0263574700000321.

Abstract:
This paper presents the findings of a research programme leading to the development of a robotic system for packaging poultry portions. The results show that an integrated system, incorporating machine vision and robots, can be made feasible for industrial use. The elements of this system, including the end-effector, the vision module, the robot hardware and the system software, are presented. Models and algorithms for automatic recognition and handling of poultry portions are discussed.

2. Zou, Yanbiao, Jinchao Li, and Xiangzhi Chen. "Seam tracking investigation via striped line laser sensor." Industrial Robot: An International Journal 44, no. 5 (August 21, 2017): 609–17. http://dx.doi.org/10.1108/ir-11-2016-0294.

Abstract:
Purpose: This paper aims to propose a six-axis robot arm welding seam tracking experiment platform based on the Halcon machine vision library to resolve the curved-seam tracking issue.
Design/methodology/approach: Robot-based and image coordinate systems are converted based on the mathematical model of the three-dimensional measurement of structured-light vision and the conversion relations between the robot-based and camera coordinate systems. An object tracking algorithm via weighted local cosine similarity is adopted to detect the seam feature points, effectively preventing interference from arc and spatter. This algorithm models the target state variable and corresponding observation vector within the Bayes framework and finds the optimal region with the highest similarity to the image-selected modules using cosine similarity.
Findings: The paper tests the approach, and the experimental results show that metal inert-gas (MIG) welding with a maximum welding current of 200 A can achieve real-time, accurate curved-seam tracking under strong arc light and splash. The minimal distance between the laser stripe and the welding molten pool can reach 15 mm, and the sensor sampling frequency can reach 50 Hz.
Originality/value: The work designs a six-axis robot arm welding seam tracking experiment platform with a structured-light sensor system based on the Halcon machine vision library, and adds an object tracking algorithm to the seam tracking system to detect image feature points. With this technology, the system can track a curved seam while welding.
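
The search step at the heart of such a tracker, scanning each frame for the window most similar to a stored seam template, can be sketched in a few lines. This is a minimal, unweighted version for illustration only; the paper's algorithm adds per-pixel weighting and a Bayesian state model, and all names below are our own:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two image patches, flattened to vectors."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0.0 else float(a @ b) / denom

def best_match(frame, template, stride=2):
    """Exhaustive search for the window of `frame` most similar to `template`."""
    th, tw = template.shape
    best_score, best_pos = -1.0, (0, 0)
    for y in range(0, frame.shape[0] - th + 1, stride):
        for x in range(0, frame.shape[1] - tw + 1, stride):
            score = cosine_similarity(frame[y:y + th, x:x + tw], template)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```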

3. Panarin, R. N., A. A. Soloviev, and L. A. Khvorova. "Application of Artificial Intelligence and Computer Vision Technologies in Solving Problems of Automation of Processing and Recognition of Biological Objects." Izvestiya of Altai State University, no. 1(123) (March 18, 2022): 101–7. http://dx.doi.org/10.14258/izvasu(2022)1-16.

Abstract:
The article considers the application of artificial intelligence and computer vision technologies to automating the processing and analysis of botanical micro- and macro-objects (images of fern spores), together with the development of software for a digital twin of an agrorobot. The first problem is interdisciplinary research aimed at solving applied and fundamental problems in the biosystematics of botanical objects and studying microevolutionary processes using computer vision technologies, methods of intelligent image analysis, machine learning, and artificial intelligence. The article presents the developed software module FAST (Functional Automated System Tool) for solving the direct problem: performing measurements from images obtained by scanning electron microscopy, virtual herbarium image libraries, entomological collections, or images taken in a natural environment. The second problem is software development for the digital twin of the agrorobot, designed for precise mechanical processing of plants and soil. The proposed solution includes several components: the control unit, an NVIDIA Jetson Nano computing module; the actuator, a 6-axis robotic arm; the machine vision unit, based on an Intel RealSense camera; and the chassis unit, with caterpillar tracks and the software drivers and components for their control. The digital twin of the robot takes into account the environmental conditions and the landscape of the operation area. The use of ROS (Robot Operating System) allows a digital model to be transferred to a physical one (prototype and serial robot) with minimal effort and without changing the source code. Furthermore, consideration of the environmental conditions during the programming stage provides opportunities for further development and testing of real-life mathematical models for device control.

4. Uršič, Peter, Aleš Leonardis, Danijel Skočaj, and Matej Kristan. "Learning part-based spatial models for laser-vision-based room categorization." International Journal of Robotics Research 36, no. 4 (April 2017): 379–402. http://dx.doi.org/10.1177/0278364917704707.

Abstract:
Room categorization, that is, recognizing the functionality of a never before seen room, is a crucial capability for a household mobile robot. We present a new approach for room categorization that is based on two-dimensional laser range data. The method is based on a novel spatial model consisting of mid-level parts that are built on top of a low-level part-based representation. The approach is then fused with a vision-based method for room categorization, which is also based on a spatial model consisting of mid-level visual parts. In addition, we propose a new discriminative dictionary learning technique that is applied for part-dictionary selection in both laser-based and vision-based modalities. Finally, we present a comparative analysis between laser-based, vision-based, and laser-vision-fusion-based approaches in a uniform part-based framework, which is evaluated on a large dataset with several categories of rooms from domestic environments.

5. Zhang, Xiaoyue, and Liang Huo. "A Vision/Inertia Integrated Positioning Method Using Position and Orientation Matching." Mathematical Problems in Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/6835456.

Abstract:
A vision/inertia integrated positioning method using position and orientation matching, suitable for intelligent vehicles such as automated guided vehicles (AGVs) and mobile robots, is proposed in this work. Landmarks are placed in the navigation field, and a camera and an inertial measurement unit (IMU) are installed on the vehicle. The vision processor calculates azimuth and position information from pictures that include artificial landmarks with known direction and position. The inertial navigation system (INS) calculates the azimuth and position of the vehicle in real time, and the expected pixel position of a landmark can be computed from the INS output position. The needed mathematical models are then established, and integrated navigation is implemented by a Kalman filter with the azimuth and the computed pixel position of the landmark as observations. Navigation errors and IMU errors are estimated and compensated in real time, so high-precision navigation results can be obtained. Finally, simulations and tests are performed. Both simulation and test results prove that this vision/inertia integrated positioning method is feasible and can achieve centimeter-level autonomous continuous navigation.
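
The fusion described here is a standard Kalman measurement update: the INS propagates the state, and the vision-derived azimuth and landmark pixel position act as observations. A minimal sketch of the update equations, with H and R standing in for the paper's observation model (which is not spelled out in the abstract):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Correct state estimate x (covariance P) with observation z = H x + noise."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x_new, P_new
```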

6. Varlashin, V. V., and A. V. Lopota. "Optimization of Surround-View System Projection Parameters using Fiducial Markers." Mekhatronika, Avtomatizatsiya, Upravlenie 23, no. 2 (February 6, 2022): 97–103. http://dx.doi.org/10.17587/mau.23.97-103.

Abstract:
The paper is devoted to the problem of increasing the quality with which a mobile robot's surround-view system, operating in augmented-reality mode, reproduces the environment. A variant of a surround-view system based on cameras with overlapping fields of view is considered. A virtual model has been developed; it includes 3D CAD models of a mobile robot and surrounding objects, as well as virtual camera models. The cross-platform integrated development environment Unity was chosen to implement the model. Methods for displaying the space surrounding the mobile robot in third-person-view mode are determined. A mathematical criterion for assessing the quality of reproduction of the surrounding space is proposed. It is based on comparing points obtained from the virtual model with points obtained by projecting images from the virtual cameras. To obtain the points, ArUco fiducial markers were used, providing an unambiguous correspondence between points on the original and synthesized images. The dependence of the objective function of the optimization problem on the projection parameters is investigated by the uniform search method. A method for automatic adaptation of projection parameters using fisheye lenses and stereo vision methods is proposed. Directions for further research are identified.
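
The quality criterion can be read as a reprojection error over matched marker points, optimized by grid search. A small sketch under that assumption, where `project` is a hypothetical stand-in for the rendering and projection pipeline:

```python
import numpy as np

def projection_error(model_pts, projected_pts):
    """RMS distance between matched marker corner points (lower is better)."""
    diff = np.asarray(model_pts, float) - np.asarray(projected_pts, float)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def uniform_search(candidates, project, model_pts):
    """Uniform (grid) search over one projection parameter."""
    return min(candidates, key=lambda p: projection_error(model_pts, project(p)))
```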

7. Solovyeva, Elena, and Ali Abdullah. "Controlling system based on neural networks with reinforcement learning for robotic manipulator." Information and Control Systems, no. 5 (October 20, 2020): 24–32. http://dx.doi.org/10.31799/1684-8853-2020-5-24-32.

Abstract:
Introduction: Owing to its advantages, such as high flexibility and the ability to move heavy pieces with high torques and forces, the robotic arm, also called a manipulator robot, is the most widely used industrial robot. Purpose: We improve the control quality of a manipulator robot with seven degrees of freedom in the V-REP environment using a reinforcement learning method based on deep neural networks. Methods: The policy for the action signal is estimated by a numerical algorithm built on deep neural networks: the actor network sends the action signal to the robotic manipulator, and the critic network performs a numerical function approximation to calculate the value function (Q-value). Results: We create a model of the robot and the environment using the reinforcement learning library in MATLAB and connect the output signals (the action signal) to a simulated robot in V-REP. The robot is trained to reach an object in its workspace by interacting with the environment and computing the reward of each interaction. The observations were modeled using three vision sensors. Based on the proposed deep learning method, an agent representing the robotic manipulator was built using a four-layer neural network for the actor and a four-layer neural network for the critic. The agent was trained for several hours until the robot started to reach the object in its workspace in an acceptable way. The main advantage over supervised learning control is that the robot can act and train at the same time, giving it the ability to reach an object in its workspace in a continuous action space. Practical relevance: The results obtained are used to control the movement of the manipulator without the need to construct kinematic models, which reduces the mathematical complexity of the calculation and provides a universal solution.
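
The actor-critic interplay summarized above follows a generic pattern that can be shown as a training-loop skeleton. The `env`, `actor`, and `critic` objects below are hypothetical interfaces for illustration, not the authors' MATLAB/V-REP implementation:

```python
def train(env, actor, critic, episodes=500, gamma=0.99):
    """Generic actor-critic loop: the actor emits continuous joint commands,
    the critic scores states so the temporal-difference error guides both updates."""
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = actor.act(state)                   # continuous action signal
            next_state, reward, done = env.step(action)
            target = reward + (0.0 if done else gamma * critic.value(next_state))
            td_error = target - critic.value(state)
            critic.update(state, target)                # regress value toward target
            actor.update(state, action, td_error)       # reinforce good actions
            state = next_state
```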

8. Mata, M., J. M. Armingol, J. Fernández, and A. de la Escalera. "Object learning and detection using evolutionary deformable models for mobile robot navigation." Robotica 26, no. 1 (January 2008): 99–107. http://dx.doi.org/10.1017/s0263574707003633.

Abstract:
Deformable models have been studied in image analysis over the last decade and used for recognition of flexible or rigid templates under diverse viewing conditions. This article addresses the question of how to define a deformable model for a real-time color vision system for mobile robot navigation. Instead of receiving the detailed model definition from the user, the algorithm extracts and learns the information from each object automatically. How well a model represents the template that exists in the image is measured by an energy function. Its minimum corresponds to the model that best fits the image, and it is found by a genetic algorithm that handles the model deformation. At a later stage, if there is symbolic information inside the object, it is extracted and interpreted using a neural network. The resulting perception module has been integrated successfully in a complex navigation system. Various experimental results in real environments are presented in this article, showing the effectiveness and capacity of the system.
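
The fitting step, minimizing an energy function over deformation parameters with a genetic algorithm, can be illustrated generically. The selection, crossover, and mutation operators below are common textbook choices, not necessarily the paper's:

```python
import numpy as np

def genetic_minimize(energy, n_params, pop=40, gens=100, sigma=0.1, seed=0):
    """Minimize `energy` over deformation parameters with a simple GA."""
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, 1.0, (pop, n_params))
    for _ in range(gens):
        scores = np.array([energy(ind) for ind in population])
        elite = population[np.argsort(scores)[: pop // 4]]       # selection
        pairs = elite[rng.integers(0, len(elite), (pop, 2))]
        children = pairs.mean(axis=1)                            # blend crossover
        population = children + rng.normal(0.0, sigma, children.shape)  # mutation
    scores = np.array([energy(ind) for ind in population])
    return population[np.argmin(scores)]
```

For example, `genetic_minimize(lambda d: ((d - 1.0) ** 2).sum(), n_params=4)` drives the population toward the all-ones vector.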

9. Trabasso, Luis Gonzaga, and Cezary Zielinski. "Semi-automatic calibration procedure for the vision-robot interface applied to scale model decoration." Robotica 10, no. 4 (July 1992): 303–8. http://dx.doi.org/10.1017/s0263574700008134.

Abstract:
A semi-automatic method for calibrating a robot-vision interface is presented. It puts a small workload on the operator, requiring only a simple calibration jig and the solution of a very simple system of equations. It has been used extensively in an experimental robotic cell set up at Loughborough University of Technology, where various aspects of the manufacturing and decoration of scale models are being investigated. As an extension of the calibration procedure, the paper also shows practical solutions to the problem of dealing with three-dimensional objects using a single camera.
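
The abstract does not spell out its system of equations, but camera-to-robot calibrations of this kind typically reduce to a linear map fitted to jig points seen by both systems. A minimal least-squares sketch for a planar, affine case (an assumption on our part):

```python
import numpy as np

def fit_affine(image_pts, robot_pts):
    """Fit an affine map image (u, v) -> robot (x, y) from >= 3 jig points."""
    uv = np.asarray(image_pts, float)
    A = np.hstack([uv, np.ones((len(uv), 1))])       # rows are [u, v, 1]
    M, *_ = np.linalg.lstsq(A, np.asarray(robot_pts, float), rcond=None)
    return M                                         # 3x2 coefficient matrix

def image_to_robot(M, u, v):
    """Apply the fitted map to a pixel coordinate."""
    return np.array([u, v, 1.0]) @ M
```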

10. Steiger-Garção, Adolfo, and L. M. Camarinha-Matos. "Concurrent Pascal as a robot level language – a suggestion." Robotica 4, no. 4 (October 1986): 269–72. http://dx.doi.org/10.1017/s0263574700009966.

Abstract:
This paper briefly describes current robot-level programming languages, focusing on their intrinsic limitations when compared with traditional concurrent programming languages, or when used to program robotic systems and flexible production workshops rather than an isolated manipulator controller. To reduce such limitations, the paper suggests basing the development of robotic programming systems on existing concurrent languages (Concurrent Pascal, Modula-2), taking advantage of their built-in extension facilities to speed up the incorporation of (or easy interfacing with) packages and products already developed in robotics (robot models, CAD systems, vision systems, etc.). Using such languages as the support base for a robotic station programming environment, with access to different components developed separately, will allow a better understanding of the interrelations among components and their limitations when viewed from an integration perspective.

Dissertations / Theses on the topic "Robot vision Mathematical models"

1. Entschev, Peter Andreas. "Efficient construction of multi-scale image pyramids for real-time embedded robot vision." Universidade Tecnológica Federal do Paraná, 2013. http://repositorio.utfpr.edu.br/jspui/handle/1/720.

Abstract:
Interest point detectors, or keypoint detectors, have been of great interest for embedded robot vision for a long time, especially those which provide robustness against geometrical variations, such as rotation, affine transformations and changes in scale. The detection of scale invariant features is normally done by constructing multi-scale image pyramids and performing an exhaustive search for extrema in the scale space, an approach that is present in object recognition methods such as SIFT and SURF. These methods are able to find very robust interest points with suitable properties for object recognition, but at the same time are computationally expensive. In this work we present an efficient method for the construction of SIFT-like image pyramids in embedded systems such as the BeagleBoard-xM. The method we present here aims at using computationally less expensive techniques and reusing already processed information in an efficient manner in order to reduce the overall computational complexity. To simplify the pyramid building process we use binomial filters instead of conventional Gaussian filters used in the original SIFT method to calculate multiple scales of an image. Binomial filters have the advantage of being able to be implemented by using fixed-point notation, which is a big advantage for many embedded systems that do not provide native floating-point support. We also reduce the amount of convolution operations needed by resampling already processed scales of the pyramid. After presenting our efficient pyramid construction method, we show how to implement it in an efficient manner in an SIMD (Single Instruction, Multiple Data) platform -- the SIMD platform we use is the ARM Neon extension available in the BeagleBoard-xM ARM Cortex-A8 processor. SIMD platforms in general are very useful for multimedia applications, where normally it is necessary to perform the same operation over several elements, such as pixels in images, enabling multiple data to be processed with a single instruction of the processor. However, the Neon extension in the Cortex-A8 processor does not support floating-point operations, so the whole method was carefully implemented to overcome this limitation. Finally, we provide some comparison results regarding the method we propose here and the original SIFT approach, including performance regarding execution time and repeatability of detected keypoints. With a straightforward implementation (without the use of the SIMD platform), we show that our method takes approximately 1/4 of the time taken to build the entire original SIFT pyramid, while repeating up to 86% of the interest points found with the original method. With a complete fixed-point approach (including vectorization within the SIMD platform) we show that repeatability reaches up to 92% of the original SIFT keypoints while reducing the processing time to less than 3%.
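
The two key ideas, integer-only binomial smoothing and reuse of already-blurred scales when starting a new octave, can be sketched as follows. Border handling and the octave/scale bookkeeping are simplified relative to the thesis:

```python
import numpy as np

BINOMIAL = np.array([1, 4, 6, 4, 1], dtype=np.int64)  # 5-tap binomial kernel (sums to 16)

def blur_binomial(img):
    """Separable binomial smoothing in pure integer arithmetic (fixed-point friendly).
    np.roll wraps at the borders, a simplification of proper border handling."""
    img = img.astype(np.int64)
    for axis in (0, 1):
        img = sum(w * np.roll(img, k - 2, axis=axis)
                  for k, w in enumerate(BINOMIAL)) // 16  # /16 is a shift on embedded CPUs
    return img

def build_pyramid(img, octaves=4, scales=3):
    """SIFT-like pyramid that reuses the last blurred scale of each octave,
    downsampled by 2, as the base of the next octave."""
    pyramid = []
    for _ in range(octaves):
        octave = [img]
        for _ in range(scales - 1):
            octave.append(blur_binomial(octave[-1]))
        pyramid.append(octave)
        img = octave[-1][::2, ::2]   # resample already-processed data
    return pyramid
```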

2. Nikolaidis, Stefanos. "Mathematical Models of Adaptation in Human-Robot Collaboration." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1121.

Abstract:
While much work in human-robot interaction has focused on leader-follower teamwork models, the recent advancement of robotic systems that have access to vast amounts of information suggests the need for robots that take into account the quality of the human decision making and actively guide people towards better ways of doing their task. This thesis proposes an equal-partners model, where human and robot engage in a dance of inference and action, and focuses on one particular instance of this dance: the robot adapts its own actions by estimating the probability of the human adapting to the robot. We start with a bounded-memory model of human adaptation parameterized by the human adaptability: the probability of the human switching towards a strategy newly demonstrated by the robot. We then examine more subtle forms of adaptation, where the human teammate adapts to the robot without replicating the robot's policy. We model the interaction as a repeated game and present an optimal policy computation algorithm whose complexity is linear in the number of robot actions. Integrating these models into robot action selection allows for human-robot mutual adaptation. Human subject experiments in a variety of collaboration and shared-autonomy settings show that mutual adaptation significantly improves human-robot team performance, compared with one-way robot adaptation to the human.
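
The bounded-memory adaptation model can be pictured with a toy rollout in which the human adopts the robot's demonstrated strategy with probability equal to the adaptability parameter. This simplification ignores the finite interaction history that the thesis's model conditions on:

```python
import numpy as np

def simulate_adaptation(adaptability, steps=20, seed=0):
    """Toy rollout: at each step the human keeps their strategy or adopts the
    robot's newly demonstrated one, switching with probability `adaptability`."""
    rng = np.random.default_rng(seed)
    strategy, history = "human", []
    for _ in range(steps):
        if strategy == "human" and rng.random() < adaptability:
            strategy = "robot"
        history.append(strategy)
    return history
```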

3. Chu, Kwok-kei (朱國基). "Design and control of a six-legged mobile robot." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31225895.

4. Landecker, Will. "Interpretable Machine Learning and Sparse Coding for Computer Vision." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1937.

Abstract:
Machine learning offers many powerful tools for prediction. One of these tools, the binary classifier, is often considered a black box. Although its predictions may be accurate, we might never know why the classifier made a particular prediction. In the first half of this dissertation, I review the state of the art of interpretable methods (methods for explaining why); after noting where the existing methods fall short, I propose a new method for a particular type of black box called additive networks. I offer a proof of trustworthiness for this new method (meaning a proof that my method does not "make up" the logic of the black box when generating an explanation), and verify that its explanations are sound empirically. Sparse coding is part of a family of methods that are believed, by many researchers, to not be black boxes. In the second half of this dissertation, I review sparse coding and its application to the binary classifier. Despite the fact that the goal of sparse coding is to reconstruct data (an entirely different goal than classification), many researchers note that it improves classification accuracy. I investigate this phenomenon, challenging a common assumption in the literature. I show empirically that sparse reconstruction is not necessarily the right intermediate goal, when our ultimate goal is classification. Along the way, I introduce a new sparse coding algorithm that outperforms competing, state-of-the-art algorithms for a variety of important tasks.
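
For readers unfamiliar with the reconstruction objective mentioned here, a standard way to compute a sparse code is iterative soft-thresholding (ISTA), shown below; it is one of several algorithms and not necessarily the one used in the dissertation. It minimizes 0.5 * ||x - D @ a||^2 + lam * ||a||_1:

```python
import numpy as np

def ista(D, x, lam=0.1, iters=200):
    """Iterative soft-thresholding: find a sparse code `a` with x ≈ D @ a."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = a - (D.T @ (D @ a - x)) / L        # gradient step on the data term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a
```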

5. Choy, Siu Kai. "Statistical histogram characterization and modeling: theory and applications." HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/913.

6. Ehtiati, Tina. "Strongly coupled Bayesian models for interacting object and scene classification processes." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102975.

Abstract:
In this thesis, we present a strongly coupled data fusion architecture within a Bayesian framework for modeling the bi-directional influences between the scene and object classification mechanisms. A number of psychophysical studies provide experimental evidence that the object and the scene perception mechanisms are not functionally separate in the human visual system. Object recognition facilitates the recognition of the scene background, and knowledge of the scene context likewise facilitates the recognition of individual objects in the scene. The evidence indicating a bi-directional exchange between the two processes has motivated us to build a computational model where object and scene classification proceed in an interdependent manner, while no hierarchical relationship is imposed between the two processes. We propose a strongly coupled data fusion model for implementing the feedback relationship between the scene and object classification processes. We present novel schemes for modifying the Bayesian solutions for the scene and object classification tasks which allow data fusion between the two modules based on the constraining of the priors or the likelihoods. We have implemented and tested the two proposed models using a database of natural images created for this purpose. The receiver operating characteristic (ROC) curves depicting the scene classification performance of the likelihood-coupling and the prior-coupling models show that scene classification performance improves significantly in both models as a result of the strong coupling of the scene and object modules.
ROC curves depicting the scene classification performance of the two models also show that the likelihood-coupling model achieves a higher detection rate than the prior-coupling model. We have also computed the average rise times of the models' outputs as a measure for comparing the speed of the two models. The results show that the likelihood-coupling model outputs have a shorter rise time. Based on these experimental findings, one can conclude that imposing constraints on the likelihood models provides better solutions to the scene classification problems than imposing constraints on the prior models.
We have also proposed an attentional feature modulation scheme, which consists of tuning the input image responses to the bank of Gabor filters based on the scene class probabilities estimated by the model and the energy profiles of the Gabor filters for different scene categories. Experimental results based on combining the attentional feature tuning scheme with the likelihood coupling and the prior coupling methods show a significant improvement in the scene classification performances of both models.
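
One way to picture the coupling is as a single Bayesian update in which the object module's posterior re-weights the scene belief through a contextual table P(object | scene). This toy fusion is our illustration of the idea, not the thesis's exact prior- or likelihood-constraining scheme:

```python
import numpy as np

def fuse_scene_belief(scene_prior, object_posterior, p_obj_given_scene):
    """Re-weight scene beliefs by the expected object evidence per scene.
    scene_prior: (S,), object_posterior: (O,), p_obj_given_scene: (S, O)."""
    evidence = p_obj_given_scene @ object_posterior   # expected support per scene
    posterior = scene_prior * evidence
    return posterior / posterior.sum()
```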

7. Ngan, Yuk-tung Henry (顏旭東). "Motif-based method for patterned texture defect detection." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40203608.

8. Nifong, Nathaniel H. "Learning General Features From Images and Audio With Stacked Denoising Autoencoders." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1550.

Abstract:
One of the most impressive qualities of the brain is its neuro-plasticity. The neocortex has roughly the same structure throughout its whole surface, yet it is involved in a variety of different tasks from vision to motor control, and regions which once performed one task can learn to perform another. Machine learning algorithms which aim to be plausible models of the neocortex should also display this plasticity. One such candidate is the stacked denoising autoencoder (SDA). SDAs have shown promising results in the field of machine perception, where they have been used to learn abstract features from unlabeled data. In this thesis I develop a flexible distributed implementation of an SDA and train it on images and audio spectrograms to experimentally determine properties comparable to neuro-plasticity. Specifically, I compare the visual-auditory generalization of a multi-level denoising autoencoder trained with greedy layer-wise pre-training (GLWPT) to that of one trained without. I test the hypothesis that multi-modal networks will perform better than uni-modal networks due to the greater generality of the features that may be learned. Furthermore, I also test the hypothesis that the magnitude of improvement gained from this multi-modal training is greater when GLWPT is applied than when it is not. My findings indicate that these hypotheses were not confirmed, but that GLWPT still helps multi-modal networks adapt to their second sensory modality.
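
A single denoising-autoencoder layer of the kind stacked here is compact to write down: corrupt the input, then train the layer to reconstruct the clean version. A minimal tied-weights sketch with illustrative hyperparameters (not the thesis's distributed implementation):

```python
import numpy as np

def train_dae_layer(X, hidden, noise=0.3, lr=0.1, epochs=100, seed=0):
    """Train one denoising autoencoder layer on data X (rows are samples)."""
    rng = np.random.default_rng(seed)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    W = rng.normal(0.0, 0.01, (X.shape[1], hidden))
    b, c = np.zeros(hidden), np.zeros(X.shape[1])
    for _ in range(epochs):
        Xn = X * (rng.random(X.shape) > noise)   # masking corruption
        H = sigmoid(Xn @ W + b)                  # encode the corrupted input
        R = sigmoid(H @ W.T + c)                 # decode with tied weights
        dR = (R - X) * R * (1 - R)               # gradient at the reconstruction
        dH = (dR @ W) * H * (1 - H)              # backpropagate to the hidden layer
        W -= lr * (Xn.T @ dH + dR.T @ H) / len(X)
        b -= lr * dH.mean(axis=0)
        c -= lr * dR.mean(axis=0)
    return W, b                                  # features = sigmoid(X @ W + b)
```

Stacking then repeats this on the hidden features of the previous layer, which is the greedy layer-wise pre-training the abstract refers to.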

9. North, Ben. "Learning dynamical models for visual tracking." Thesis, University of Oxford, 1998. http://ora.ox.ac.uk/objects/uuid:6ed12552-4c30-4d80-88ef-7245be2d8fb8.

Abstract:
Using some form of dynamical model in a visual tracking system is a well-known method for increasing robustness and indeed performance in general. Often, quite simple models are used and can be effective, but prior knowledge of the likely motion of the tracking target can often be exploited by using a specially-tailored model. Specifying such a model by hand, while possible, is a time-consuming and error-prone process. Much more desirable is for an automated system to learn a model from training data. A dynamical model learnt in this manner can also be a source of useful information in its own right, and a set of dynamical models can provide discriminatory power for use in classification problems. Methods exist to perform such learning, but are limited in that they assume the availability of 'ground truth' data. In a visual tracking system, this is rarely the case. A learning system must work from visual data alone, and this thesis develops methods for learning dynamical models while explicitly taking account of the nature of the training data --- they are noisy measurements. The algorithms are developed within two tracking frameworks. The Kalman filter is a simple and fast approach, applicable where the visual clutter is limited. The recently-developed Condensation algorithm is capable of tracking in more demanding situations, and can also employ a wider range of dynamical models than the Kalman filter, for instance multi-mode models. The success of the learning algorithms is demonstrated experimentally. When using a Kalman filter, the dynamical models learnt using the algorithms presented here produce better tracking when compared with those learnt using current methods. Learning directly from training data gathered using Condensation is an entirely new technique, and experiments show that many aspects of a multi-mode system can be successfully identified using very little prior information. Significant computational effort is required by the implementation of the methods, and there is scope for improvement in this regard. Other possibilities for future work include investigation of the strong links this work has with learning problems in other areas. Most notable is the study of the 'graphical models' commonly used in expert systems, where the ideas presented here promise to give insight and perhaps lead to new techniques.
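
As a baseline for the kind of model being learnt, consider a second-order autoregressive model, a common choice of dynamics for Kalman and Condensation trackers, fitted to a trajectory by least squares. The thesis's point is that this naive fit is biased when the trajectory consists of noisy measurements, and it develops learning methods that account for that noise; the sketch shows only the naive baseline:

```python
import numpy as np

def fit_ar2(track):
    """Least-squares fit of x_t = a*x_{t-1} + b*x_{t-2} + c to a 1-D trajectory."""
    x = np.asarray(track, float)
    A = np.column_stack([x[1:-1], x[:-2], np.ones(len(x) - 2)])
    (a, b, c), *_ = np.linalg.lstsq(A, x[2:], rcond=None)
    return a, b, c
```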

10. Bernier, Thomas. "Development of an algorithmic method for the recognition of biological objects." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ29656.pdf.

Books on the topic "Robot vision Mathematical models"

1. Ji suan zhi neng ji qi zai san wei biao mian sao miao ji qi ren xi tong zhong de ying yong [Computational intelligence and its application in three-dimensional surface-scanning robot systems]. Dalian Shi: Dalian hai shi da xue chu ban she [Dalian Maritime University Press], 2012.

2. Bochsler, Daniel C. Robotic space simulation: Integration of vision algorithms into an orbital operations simulation. [Houston, Tex.]: Research Institute for Computing and Information Systems, University of Houston-Clear Lake, 1987.

3. Jaklič, Aleš. Segmentation and recovery of superquadrics. Dordrecht: Kluwer Academic Publishers, 2000.

4. International Workshop on Visuomotor Coordination in Amphibians: Experiments, Comparisons, Models, and Robots (1987, Kassel, Germany). Visuomotor coordination: Amphibians, comparisons, models, and robots. New York: Plenum Press, 1989.

5. Paragios, Nikos, Yunmei Chen, and Olivier Faugeras, eds. Handbook of Mathematical Models in Computer Vision. Boston, MA: Springer US, 2006. http://dx.doi.org/10.1007/0-387-28831-7.

6. Megahed, Saïd M. Principles of robot modelling and simulation. Chichester: Wiley, 1993.

7. Megahed, Saïd M. Principles of robot modelling and simulation. Chichester: J. Wiley, 1993.

8. Türk, Stefan, ed. The DFVLR models 1 and 2 of the Manutec r3 robot. Oberpfaffenhofen: DFVLR, Institut für Dynamik der Flugsysteme, 1988.

9. Jepson, Allan D. Mixture models for optical flow computation. Toronto: University of Toronto, Dept. of Computer Science, 1993.

10. Jepson, Allan D. What is a percept? Toronto: University of Toronto, Dept. of Computer Science, 1993.

Book chapters on the topic "Robot vision Mathematical models"

1. Luck, Jason, Dan Small, and Charles Q. Little. "Real-Time Tracking of Articulated Human Models Using a 3D Shape-from-Silhouette Method." In Robot Vision, 19–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_3.

2. Weik, Sebastian, and C. E. Liedtke. "Hierarchical 3D Pose Estimation for Articulated Human Body Models from a Sequence of Volume Data." In Robot Vision, 27–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44690-7_4.

3. Cohen, Laurent D. "On Active Contour Models." In Active Perception and Robot Vision, 599–613. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77225-2_31.

4. Jain, Ramesh. "Environment Models and Information Assimilation." In Active Perception and Robot Vision, 217–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77225-2_10.

5. Mundy, Joseph L. "Symbolic Representation of Object Models." In Active Perception and Robot Vision, 189–215. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77225-2_9.

6. Romero, Pantaleón D., and Vicente F. Candela. "Mathematical Models for Restoration of Baroque Paintings." In Advanced Concepts for Intelligent Vision Systems, 24–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11864349_3.

7. Lelieveldt, B., A. Frangi, S. Mitchell, H. van Assen, S. Ordas, J. Reiber, and M. Sonka. "3D Active Shape and Appearance Models in Cardiac Image Analysis." In Handbook of Mathematical Models in Computer Vision, 471–85. Boston, MA: Springer US, 2006. http://dx.doi.org/10.1007/0-387-28831-7_29.

8. Budden, David, Shannon Fenn, Alexandre Mendes, and Stephan Chalup. "Evaluation of Colour Models for Computer Vision Using Cluster Validation Techniques." In RoboCup 2012: Robot Soccer World Cup XVI, 261–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39250-4_24.

9. Gu, Xianfeng, Na Lei, and Shing-Tung Yau. "Optimal Transport for Generative Models." In Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging, 1–48. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-03009-4_105-1.

10. Lanza, Alessandro, Serena Morigi, Ivan W. Selesnick, and Fiorella Sgallari. "Convex Non-convex Variational Models." In Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging, 1–57. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-03009-4_61-1.

Conference papers on the topic "Robot vision Mathematical models"

1. Mikawa, Masahiko, Masahiro Yoshikawa, and Takeshi Tsujimura. "Memory System Controlled by Mathematical AIM Model for Robot Vision Equipped with Sleep and Wake Functions." In 2006 SICE-ICASE International Joint Conference. IEEE, 2006. http://dx.doi.org/10.1109/sice.2006.315242.

2. Ping, Guiju, Mahdi Abolfazli Esfahani, and Han Wang. "Unsupervised 3D Primitive Shape Detection using Mathematical Models." In 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV). IEEE, 2020. http://dx.doi.org/10.1109/icarcv50220.2020.9305494.

3. Yeromina, Nataliia, and Oleksii Lukashyn. "Basic Classes of Mathematical Models Used in Machine Vision Problems." In Computer and Information Systems and Technologies. Kharkiv, Ukraine: Press of the Kharkiv National University of Radioelectronics, 2020. http://dx.doi.org/10.30837/ivcsitic2020201372.

4. Zhou, Guangbing, Baosheng Shen, and Jie Yan. "Research on the Algorithm for Solving the Indoor Vision Positioning Model of Mobile Robot." In 2018 International Conference on Mathematics, Modelling, Simulation and Algorithms (MMSA 2018). Paris, France: Atlantis Press, 2018. http://dx.doi.org/10.2991/mmsa-18.2018.4.

5. "Person Following through Appearance Models and Stereo Vision Using a Mobile Robot." In International Workshop on Robot Vision. SciTePress - Science and Technology Publications, 2007. http://dx.doi.org/10.5220/0002069800460056.

6. Wang, Zheng, and Faisal Z. Qureshi. "Topic Models for Image Localization." In 2013 International Conference on Computer and Robot Vision (CRV). IEEE, 2013. http://dx.doi.org/10.1109/crv.2013.36.

7. Leitner, Jürgen, Alexander Förster, and Jürgen Schmidhuber. "Improving robot vision models for object detection through interaction." In 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014. http://dx.doi.org/10.1109/ijcnn.2014.6889556.

8. Keselman, Leonid, and Martial Hebert. "Direct Fitting of Gaussian Mixture Models." In 2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019. http://dx.doi.org/10.1109/crv.2019.00012.

9. Andersen, Christensen, and Ravn. "Augmented models for improving vision control of a mobile robot." In Proceedings of the IEEE International Conference on Control and Applications (CCA-94). IEEE, 1994. http://dx.doi.org/10.1109/cca.1994.381270.