Academic literature on the topic 'Real-time vision systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Real-time vision systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Real-time vision systems"

1

Wong, Kam W. "Real-Time Machine Vision Systems." Canadian Surveyor 41, no. 2 (June 1987): 173–80. http://dx.doi.org/10.1139/tcs-1987-0013.

Abstract:
Recent developments in machine vision systems, solid state cameras, and image processing are reviewed. Both hardware and software systems are currently available for performing real-time recognition and geometric measurements. More than 1000 units of these imaging systems are already being used in manufacturing plants in the United States. Current research efforts are focused on the processing of three-dimensional information and on knowledge-based processing systems. Five potential research topics in the area of photogrammetry are proposed: 1) stereo solid state camera systems, 2) image correlation, 3) self-calibration and self-orientation, 4) general algorithm for multistation and multicamera photography, and 5) artificial photogrammetry.
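Of the research topics proposed above, image correlation in particular has a compact core: matching a template patch against a search image by normalized cross-correlation. A minimal sketch in Python with NumPy (an illustration of the general technique; the function names are not from the paper):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(template, search_img):
    """Slide the template over a search image; return the top-left corner
    of the window with the highest NCC score."""
    th, tw = template.shape
    sh, sw = search_img.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            score = ncc(template, search_img[y:y+th, x:x+tw])
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```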
2

Wong, Kam W., Anthony G. Wiley, and Michael Lew. "GPS‐Guided Vision Systems for Real‐Time Surveying." Journal of Surveying Engineering 115, no. 2 (May 1989): 243–51. http://dx.doi.org/10.1061/(asce)0733-9453(1989)115:2(243).

3

Rodd, M. G., and Q. M. Wu. "Knowledge-Based Vision Systems in Real-Time Control." IFAC Proceedings Volumes 22, no. 13 (September 1989): 13–18. http://dx.doi.org/10.1016/b978-0-08-040185-0.50007-5.

4

Rodd, M. G., and Q. M. Wu. "Knowledge-based vision systems in real-time control." Annual Review in Automatic Programming 15 (January 1989): 13–18. http://dx.doi.org/10.1016/0066-4138(89)90003-7.

5

Bleser, Gabriele, Mario Becker, and Didier Stricker. "Real-time vision-based tracking and reconstruction." Journal of Real-Time Image Processing 2, no. 2-3 (August 22, 2007): 161–75. http://dx.doi.org/10.1007/s11554-007-0034-0.

6

Thomas, B. T., E. L. Dagless, D. J. Milford, and A. D. Morgan. "Real-time vision guided navigation." Engineering Applications of Artificial Intelligence 4, no. 4 (January 1991): 287–300. http://dx.doi.org/10.1016/0952-1976(91)90043-6.

7

Shah, Meet. "Review on Real-time Applications of Computer Vision Systems." International Journal for Research in Applied Science and Engineering Technology 9, no. 4 (April 30, 2021): 1323–27. http://dx.doi.org/10.22214/ijraset.2021.33942.

8

Nekrasov, Victor V. "Real-time coherent optical correlator for machine vision systems." Optical Engineering 31, no. 4 (1992): 789. http://dx.doi.org/10.1117/12.56141.

9

Chermak, Lounis, Nabil Aouf, Mark Richardson, and Gianfranco Visentin. "Real-time smart and standalone vision/IMU navigation sensor." Journal of Real-Time Image Processing 16, no. 4 (June 22, 2016): 1189–205. http://dx.doi.org/10.1007/s11554-016-0613-z.

10

Gutierrez, Daniel. "Contributions to Real-time Metric Localisation with Wearable Vision Systems." ELCVIA Electronic Letters on Computer Vision and Image Analysis 15, no. 2 (November 4, 2016): 27. http://dx.doi.org/10.5565/rev/elcvia.951.


Dissertations / Theses on the topic "Real-time vision systems"

1

Benoit, Stephen M. "Monocular optical flow for real-time vision systems." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23862.

Abstract:
This thesis introduces a monocular optical flow algorithm that has been shown to perform well at nearly real-time frame rates (4 FPS) on natural image sequences. The system is completely bottom-up, using pixel region-matching techniques. A coordinated gradient descent method is broken down into two stages: pixel region-matching error measures are locally minimized, and flow field consistency constraints apply non-linear adaptive diffusion, causing confident measurements to influence their less confident neighbors. Convergence is usually accomplished with one iteration for an image frame pair. Temporal integration and Kalman filtering predict upcoming flow fields and figure/ground separation. The algorithm is designed for flexibility: large displacements are tracked as easily as sub-pixel displacements, and higher-level information can feed flow field predictions into the measurement process.
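As a rough illustration of the pixel region-matching stage (a brute-force sketch, not Benoit's coordinated gradient-descent algorithm with adaptive diffusion), block-matching flow with a sum-of-squared-differences error measure looks like this:

```python
import numpy as np

def block_flow(frame0, frame1, block=8, radius=4):
    """Brute-force block matching: for each block in frame0, find the
    displacement in frame1 that minimizes SSD within a search radius."""
    h, w = frame0.shape
    flow = np.zeros((h // block, w // block, 2))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = frame0[y:y+block, x:x+block].astype(float)
            best, best_d = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                        cand = frame1[yy:yy+block, xx:xx+block].astype(float)
                        ssd = ((ref - cand) ** 2).sum()
                        if ssd < best:
                            best, best_d = ssd, (dy, dx)
            flow[by, bx] = best_d
    return flow
```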
2

Tippetts, Beau J. "Real-Time Stereo Vision for Resource Limited Systems." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/2972.

Abstract:
A significant amount of research in the field of stereo vision has been published in the past decade. Considerable progress has been made in improving the accuracy of results as well as achieving real-time performance in obtaining those results. Although much of the literature does not address it, many applications are sensitive to the tradeoff between accuracy and speed that exists among stereo vision algorithms. Overall, this work aims to organize existing efforts and encourage new ones in the development of stereo vision algorithms for resource-limited systems. It does this through a review of the status quo as well as providing both software and hardware designs of new stereo vision algorithms that offer an efficient tradeoff between speed and accuracy. A comprehensive review and analysis of stereo vision algorithms is provided, with specific emphasis on real-time performance and suitability for resource-limited systems. An attempt has been made to compile and present accuracy and runtime performance data for all stereo vision algorithms developed in the past decade. The tradeoff in accuracy that is typically made to achieve real-time performance is examined with an example of an existing highly accurate stereo vision algorithm that is modified to see how much speedup can be achieved. Two new stereo vision algorithms, GA Spline and Profile Shape Matching, are presented, with a hardware design of the latter also being provided, making Profile Shape Matching available to both embedded processor-based and programmable hardware-based resource-limited systems.
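Profile Shape Matching itself is not reproduced here, but the speed/accuracy tradeoff the abstract discusses is visible even in the simplest local stereo method, where the window size and disparity range trade accuracy directly against runtime (an illustrative sketch, not a thesis algorithm):

```python
import numpy as np

def sad_disparity(left, right, max_disp=32, win=5):
    """Local stereo on rectified grayscale images: per-pixel disparity by
    minimizing the sum of absolute differences (SAD) over a square window.
    Larger win/max_disp improve robustness/range at proportional time cost."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y-half:y+half+1, x-half:x+half+1].astype(float)
            costs = [np.abs(ref - right[y-half:y+half+1,
                                        x-d-half:x-d+half+1].astype(float)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```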
3

Arshad, Norhashim Mohd. "Real-time data compression for machine vision measurement systems." Thesis, Liverpool John Moores University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285284.

4

Pan, Wenbo. "Real-time human face tracking." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ55535.pdf.

5

Nguyen, Dai-Duong. "A vision system based real-time SLAM applications." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS518/document.

Abstract:
SLAM (Simultaneous Localization And Mapping) plays an important role in several applications such as autonomous robots, smart vehicles, unmanned aerial vehicles (UAVs) and others. Nowadays, real-time vision-based SLAM has become a subject of widespread interest in many research efforts. One of the solutions to the computational complexity of image processing algorithms dedicated to SLAM applications is to perform high- and/or low-level processing on co-processors in order to build a system on chip. Heterogeneous architectures have demonstrated their ability to become potential candidates for a system on chip in a hardware/software co-design approach. The aim of this thesis is to propose a vision system implementing a SLAM algorithm on a heterogeneous architecture (CPU-GPU or CPU-FPGA). The study helps verify whether these types of heterogeneous architectures are advantageous, which elementary functions and/or operators should be added on chip, and how to integrate image processing and the SLAM kernel on a heterogeneous architecture (i.e., how to map the vision SLAM onto a system on chip). There are two parts in a visual SLAM system: the front-end (feature extraction, image processing) and the back-end (SLAM kernel). During this thesis, we studied several feature detection and description algorithms for the front-end part. We developed our own algorithm, denoted HOOFR (Hessian ORB Overlapped FREAK) extractor, which offers a better compromise between precision and processing time than the state of the art. This algorithm is based on modifications of the ORB (Oriented FAST and Rotated BRIEF) detector and the bio-inspired FREAK (Fast Retina Keypoint) descriptor. The improvements were validated using well-known real datasets. Subsequently, we proposed the HOOFR-SLAM Stereo algorithm for the back-end part. This algorithm uses images acquired by a stereo camera to perform simultaneous localization and mapping. The HOOFR-SLAM performance was evaluated on different datasets (KITTI, New College, Malaga, MRT, St Lucia, ...). Afterward, to reach a real-time system, we studied the algorithmic complexity of HOOFR-SLAM as well as current hardware architectures dedicated to embedded systems. We used a methodology based on algorithm complexity and functional block partitioning. The processing time of each block was analyzed taking into account the constraints of the targeted architectures. We achieved an implementation of HOOFR-SLAM on a massively parallel CPU-GPU architecture. The performance was evaluated on a powerful workstation and on embedded systems. In this study, we propose a system-level architecture and a design methodology to integrate a vision SLAM algorithm on a SoC. This system highlights a compromise between versatility, parallelism, processing speed and localization results. A comparison with conventional systems was performed to evaluate the defined system architecture. In order to reduce energy consumption, we studied the implementation of the front-end part (image processing) on an FPGA-based SoC, with the SLAM kernel intended to run on a CPU. We proposed a parallelized architecture using the HLS (high-level synthesis) method and OpenCL programming, and validated it on an Altera Arria 10 SoC. A comparison with state-of-the-art systems showed that the designed architecture presents better performance and a good compromise between power consumption and processing time.
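For context, the stock building blocks that HOOFR modifies are available in OpenCV, so the baseline ORB-detect / FREAK-describe pipeline the thesis starts from can be sketched as follows (a hedged illustration assuming opencv-contrib-python; HOOFR's Hessian scoring and overlapped FREAK sampling are not reproduced here):

```python
import cv2

def orb_freak_features(image_path):
    """Detect keypoints with ORB and describe them with FREAK -- the two
    stock components HOOFR is derived from (this is not HOOFR itself)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints = orb.detect(img, None)
    freak = cv2.xfeatures2d.FREAK_create()  # requires opencv-contrib-python
    keypoints, descriptors = freak.compute(img, keypoints)
    return keypoints, descriptors

# Binary descriptors are usually matched with Hamming distance, e.g.:
#   bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
#   matches = bf.match(desc0, desc1)
```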
6

Garner, Harry Douglas Jr. "Development of a real-time vision based absolute orientation sensor." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/17022.

7

Guo, Guanghao. "Evaluation of FPGA Partial Reconfiguration : for real-time Vision applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279957.

Abstract:
The usage of programmable logic resources in Field Programmable Gate Arrays, also known as FPGAs, has increased considerably in recent years due to the complexity of the algorithms involved, especially computer vision algorithms. For this reason, the hardware resources in the FPGA are sometimes insufficient. Partial reconfiguration offers a way to solve this problem: it is a technique that can be used to reconfigure specific parts of the FPGA during run-time, thereby reducing the need for programmable logic resources. This master thesis project aims to design a software framework for partial reconfiguration that can load a set of processing components/algorithms (e.g. object detection, optical flow, Harris corner detection) into the FPGA without affecting continuously running real-time static components such as camera capture, basic image filtering and colour conversion. Partial reconfiguration was applied to two different video processing pipelines, a direct streaming architecture and a frame buffer streaming architecture. The results show that reconfiguration time is predictable and depends on the partial bitstream size, and that partial reconfiguration can be used in real-time applications provided the partial bitstream size and the frequency of switching partial bitstreams are taken into account.
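On Linux-based SoC-FPGAs, run-time partial reconfiguration is commonly driven through the kernel's FPGA manager. A hedged user-space sketch follows; the device path and especially the 'flags' attribute are vendor/kernel-dependent assumptions (Xilinx-style kernels), and this is not the thesis framework:

```python
import os
import shutil

FPGA_MGR = "/sys/class/fpga_manager/fpga0"   # assumed device path
FIRMWARE_DIR = "/lib/firmware"

def load_partial_bitstream(bitstream_path):
    """Sketch of run-time partial reconfiguration via the Linux FPGA
    manager sysfs interface (must run as root; attribute names vary)."""
    name = os.path.basename(bitstream_path)
    shutil.copy(bitstream_path, os.path.join(FIRMWARE_DIR, name))
    with open(os.path.join(FPGA_MGR, "flags"), "w") as f:
        f.write("1")          # 1 = partial reconfiguration on Xilinx kernels
    with open(os.path.join(FPGA_MGR, "firmware"), "w") as f:
        f.write(name)         # kernel loads /lib/firmware/<name> into the PL
```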
8

Hiromoto, Masayuki. "LSI design methodology for real-time computer vision on embedded systems." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/126476.

Doctoral thesis (Doctor of Informatics), Graduate School of Informatics, Kyoto University; the record provides catalog metadata only, with no abstract.
9

Pereira, Pedro André Marques. "Measuring the strain of metallic surfaces in real time through vision systems." Master's thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/16447.

Abstract:
Master's thesis in Mechanical Engineering.
Vision systems have already proven to be a useful tool in various fields. The ease of their implementation, allied to their low cost, means that their growth potential is immense. In this dissertation an approach to measuring strains in metallic surfaces using stereo vision is proposed. This approach is based on 3D DIC (digital image correlation), which measures the strain of the surface by dividing it into small sections, called subsets, and iteratively finding the equation that describes each subset's shape variation through time. However, calculating the transformation of a subset is very time-consuming. The proposed approach optimizes this calculation by first determining the displacement field and then obtaining the strain field by differentiation. The dissertation also presents experimental data and practical considerations regarding the camera setup and image equalization algorithms used to obtain better disparity maps. The results were verified experimentally and compared with the results obtained from other software packages.
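The displacement-first shortcut in the abstract amounts, under a small-strain assumption, to numerically differentiating the measured displacement field. A minimal NumPy sketch (field names, grid spacing and the small-strain formulation are assumptions, not the thesis code):

```python
import numpy as np

def small_strain_fields(u, v, dx=1.0, dy=1.0):
    """Small-strain tensor components from in-plane displacement fields
    u(x, y) and v(x, y) sampled on a regular grid (rows = y, cols = x)."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    eps_xx = du_dx                      # normal strain along x
    eps_yy = dv_dy                      # normal strain along y
    eps_xy = 0.5 * (du_dy + dv_dx)      # shear strain
    return eps_xx, eps_yy, eps_xy
```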
10

Katramados, Ioannis. "Real-time object detection using monocular vision for low-cost automotive sensing systems." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/10386.

Abstract:
This work addresses the problem of real-time object detection in automotive environments using monocular vision. The focus is on real-time feature detection, tracking, depth estimation using monocular vision and finally, object detection by fusing visual saliency and depth information. Firstly, a novel feature detection approach is proposed for extracting stable and dense features even in images with very low signal-to-noise ratio. This methodology is based on image gradients, which are redefined to take account of noise as part of their mathematical model. Each gradient is based on a vector connecting a negative to a positive intensity centroid, where both centroids are symmetric about the centre of the area for which the gradient is calculated. Multiple gradient vectors define a feature with its strength being proportional to the underlying gradient vector magnitude. The evaluation of the Dense Gradient Features (DeGraF) shows superior performance over other contemporary detectors in terms of keypoint density, tracking accuracy, illumination invariance, rotation invariance, noise resistance and detection time. The DeGraF features form the basis for two new approaches that perform dense 3D reconstruction from a single vehicle-mounted camera. The first approach tracks DeGraF features in real-time while performing image stabilisation with minimal computational cost. This means that despite camera vibration the algorithm can accurately predict the real-world coordinates of each image pixel in real-time by comparing each motion-vector to the ego-motion vector of the vehicle. The performance of this approach has been compared to different 3D reconstruction methods in order to determine their accuracy, depth-map density, noise-resistance and computational complexity. The second approach proposes the use of local frequency analysis of gradient features for estimating relative depth. This novel method is based on the fact that DeGraF gradients can accurately measure local image variance with subpixel accuracy. It is shown that the local frequency by which the centroid oscillates around the gradient window centre is proportional to the depth of each gradient centroid in the real world. The lower computational complexity of this methodology comes at the expense of depth map accuracy as the camera velocity increases, but it is at least five times faster than the other evaluated approaches. This work also proposes a novel technique for deriving visual saliency maps by using Division of Gaussians (DIVoG). In this context, saliency maps express how different each image pixel is from its surrounding pixels across multiple pyramid levels. This approach is shown to be both fast and accurate when evaluated against other state-of-the-art approaches. Subsequently, the saliency information is combined with depth information to identify salient regions close to the host vehicle. The fused map allows faster detection of high-risk areas where obstacles are likely to exist. As a result, existing object detection algorithms, such as the Histogram of Oriented Gradients (HOG), can execute at least five times faster. In conclusion, through a step-wise approach computationally-expensive algorithms have been optimised or replaced by novel methodologies to produce a fast object detection system that is aligned to the requirements of the automotive domain.
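The centroid-based gradient described above can be illustrated with a toy computation: take the vector from an intensity-weighted 'dark' centroid to a 'bright' centroid within a window (a loose sketch of the idea, not the published DeGraF formulation):

```python
import numpy as np

def centroid_gradient(window):
    """Gradient of an image window as the vector from its low-intensity
    centroid to its high-intensity centroid (sketch of the DeGraF idea)."""
    w = window.astype(float)
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    pos_w = w - w.min()                 # bright mass
    neg_w = w.max() - w                 # dark mass
    if pos_w.sum() == 0 or neg_w.sum() == 0:
        return np.zeros(2)              # flat window: no gradient
    pos_c = np.array([(ys * pos_w).sum(), (xs * pos_w).sum()]) / pos_w.sum()
    neg_c = np.array([(ys * neg_w).sum(), (xs * neg_w).sum()]) / neg_w.sum()
    return pos_c - neg_c                # direction; norm ~ gradient strength
```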

Books on the topic "Real-time vision systems"

1

Popovic, Vladan, Kerem Seyid, Ömer Cogal, Abdulkadir Akin, and Yusuf Leblebici. Design and Implementation of Real-Time Multi-Sensor Vision Systems. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59057-8.

2

Wilberg, Jörg. Codesign for Real-Time Video Applications. Boston, MA: Springer US, 1997.

3

Schaeren, Peter. Real-time 3-D scene acquisition by monocular motion induced stereo. Konstanz: Hartung-Gorre, 1994.

4

Herout, Adam. Real-Time Detection of Lines and Grids: By PClines and Other Approaches. London: Springer London, 2013.

5

Verghese, Gilbert. Perspective alignment back-projection for real-time monocular three-dimensional model-based computer vision. Toronto: Dept. of Computer Science, University of Toronto, 1995.

6

Remias, Leonard V. A real-time image understanding system for an autonomous mobile robot. Monterey, California: Naval Postgraduate School, 1996.

7

Rechsteiner, Martin. Real time inverse stereo system for surveillance of dynamic safety envelopes. Konstanz: Hartung-Gorre, 1997.

8

Asari, Vijayan K. Wide Area Surveillance: Real-Time Motion Detection Systems. Springer Berlin / Heidelberg, 2013.

9

Asari, Vijayan K. Wide Area Surveillance: Real-time Motion Detection Systems. Springer, 2016.

10

Asari, Vijayan K. Wide Area Surveillance: Real-Time Motion Detection Systems. Springer London, Limited, 2013.


Book chapters on the topic "Real-time vision systems"

1

Yi, JongSu, JunSeong Kim, LiPing Li, John Morris, Gareth Lee, and Philippe Leclercq. "Real-Time Three Dimensional Vision." In Advances in Computer Systems Architecture, 309–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30102-8_26.

2

Wehking, Thomas, Alexander Würz-Wessel, and Wolfgang Rosenstiel. "System Architecture for Future Driver Assistance Based on Stereo Vision." In Advances in Real-Time Systems, 245–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24349-3_12.

3

Galčík, František, and Radoslav Gargalík. "Real-Time Depth Map Based People Counting." In Advanced Concepts for Intelligent Vision Systems, 330–41. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02895-8_30.

4

Morris, John, Khurram Jawed, and Georgy Gimel’farb. "Intelligent Vision: A First Step – Real Time Stereovision." In Advanced Concepts for Intelligent Vision Systems, 355–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04697-1_33.

5

Popovic, Vladan, Kerem Seyid, Ömer Cogal, Abdulkadir Akin, and Yusuf Leblebici. "Towards Real-Time Gigapixel Video." In Design and Implementation of Real-Time Multi-Sensor Vision Systems, 139–66. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59057-8_7.

6

Pieters, Roel, Pieter Jonker, and Henk Nijmeijer. "Real-Time Center Detection of an OLED Structure." In Advanced Concepts for Intelligent Vision Systems, 400–409. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04697-1_37.

7

Hazar, Mliki, Hammami Mohamed, and Ben-Abdallah Hanêne. "Real-Time Face Pose Estimation in Challenging Environments." In Advanced Concepts for Intelligent Vision Systems, 114–25. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02895-8_11.

8

Bayoudh, Ines, Saoussen Ben Jabra, and Ezzeddine Zagrouba. "A Robust Video Watermarking for Real-Time Application." In Advanced Concepts for Intelligent Vision Systems, 493–504. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70353-4_42.

9

Szewczyk, Przemysław. "Real-Time Control of Active Stereo Vision System." In Advances in Intelligent Systems and Computing, 271–80. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60699-6_26.

10

Kaur, Manjot, and Rajneesh Randhawa. "Vision-Based Real Time Vehicle Detection: A Survey." In Lecture Notes in Networks and Systems, 747–60. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-5529-6_57.


Conference papers on the topic "Real-time vision systems"

1

Reid, Alastair, John Peterson, Greg Hager, and Paul Hudak. "Prototyping real-time vision systems." In the 21st international conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/302405.302681.

2

Balszun, Michael, Martin Geier, and Samarjit Chakraborty. "Predictable Vision for Autonomous Systems." In 2020 IEEE 23rd International Symposium on Real-Time Distributed Computing (ISORC). IEEE, 2020. http://dx.doi.org/10.1109/isorc49007.2020.00025.

3

Elliott, Glenn A., Kecheng Yang, and James H. Anderson. "Supporting Real-Time Computer Vision Workloads Using OpenVX on Multicore+GPU Platforms." In 2015 IEEE Real-Time Systems Symposium (RTSS). IEEE, 2015. http://dx.doi.org/10.1109/rtss.2015.33.

4

Caliskan, Anil, Volkan Ozdemir, Enes Bayturk, Oguzhan Mete Oztork, Osman Dogukan Kefeli, and Anil Uzengi. "Real Time Retail Analytics with Computer Vision." In 2022 Innovations in Intelligent Systems and Applications Conference (ASYU). IEEE, 2022. http://dx.doi.org/10.1109/asyu56188.2022.9925538.

5

Rowe, Anthony, Dhiraj Goel, and Raj Rajkumar. "FireFly Mosaic: A Vision-Enabled Wireless Sensor Networking System." In 28th IEEE International Real-Time Systems Symposium (RTSS 2007). IEEE, 2007. http://dx.doi.org/10.1109/rtss.2007.50.

6

Tan, K. S., R. Saatchi, H. Elphick, and D. Burke. "Real-time vision based respiration monitoring system." In 2010 7th International Symposium on Communication Systems, Networks & Digital Signal Processing (CSNDSP 2010). IEEE, 2010. http://dx.doi.org/10.1109/csndsp16145.2010.5580316.

7

Persa, Stelian, and Pieter P. Jonker. "Real-time computer vision system for mobile robot." In Intelligent Systems and Advanced Manufacturing, edited by David P. Casasent and Ernest L. Hall. SPIE, 2001. http://dx.doi.org/10.1117/12.444173.

8

Persa, Stelian, and Pieter P. Jonker. "Real-time image processing architecture for robot vision." In Intelligent Systems and Smart Manufacturing, edited by David P. Casasent. SPIE, 2000. http://dx.doi.org/10.1117/12.403766.

9

Bailey, Donald G., Ken Mercer, Colin Plaw, Ralph Ball, and Harvey Barraclough. "Three-dimensional vision for real-time produce grading." In Intelligent Systems and Advanced Manufacturing, edited by Kevin G. Harding and John W. V. Miller. SPIE, 2002. http://dx.doi.org/10.1117/12.455254.

10

Browning, B., and M. Veloso. "Real-time, adaptive color-based robot vision." In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2005. http://dx.doi.org/10.1109/iros.2005.1545424.


Reports on the topic "Real-time vision systems"

1

Delwiche, Michael, Yael Edan, and Yoav Sarig. An Inspection System for Sorting Fruit with Machine Vision. United States Department of Agriculture, March 1996. http://dx.doi.org/10.32747/1996.7612831.bard.

Abstract:
Concepts for real-time grading of fruits and vegetables were developed, including multi-spectral imaging with structured illumination to detect and distinguish surface defects from concavities. Based on these concepts, a single-lane conveyor and inspection system were designed and evaluated. Image processing algorithms were developed to inspect and grade large quasi-spherical fruits (peaches and apples) and smaller dried fruits (dates). Adjusting defect pixel thresholds to achieve a 25% error rate on good apples, classification errors for bruise, crack, and cut classes were 51%, 42%, and 46%, respectively. Comparable results for bruise, scar, and cut peach classes were 48%, 22%, and 58%, respectively. Acquiring more than two images of each fruit and using more than six lines of structured illumination per fruit would reduce sorting errors. Doing so, potential sorting error rates for bruise, crack, and cut apple classes were estimated to be 38%, 38%, and 33%, respectively. Similarly, potential error rates for the bruise, scar, and cut peach classes were 9%, 3%, and 30%, respectively. Date size classification results were good: 68% within one size class and 98% within two size classes. Date quality classification results were not adequate due to the problem of blistering. Improved features were discussed. The most significant contribution of this research was the ongoing collaboration with producers and equipment manufacturers, and the resulting transfer of research ideas to expedite the commercial application of machine vision for postharvest inspection and grading of agricultural products.
2

Lee, W. S., Victor Alchanatis, and Asher Levi. Innovative yield mapping system using hyperspectral and thermal imaging for precision tree crop management. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598158.bard.

Abstract:
Original objectives and revisions – The original overall objective was to develop, test and validate a prototype yield mapping system for unit area to increase yield and profit for tree crops. Specific objectives were: (1) to develop a yield mapping system for a static situation, using hyperspectral and thermal imaging independently, (2) to integrate hyperspectral and thermal imaging for improved yield estimation by combining thermal images with hyperspectral images to improve fruit detection, and (3) to expand the system to a mobile platform for a stop-measure-and-go situation. There were no major revisions to the overall objective; however, several revisions were made to the specific objectives. The revised specific objectives were: (1) to develop a yield mapping system for a static situation, using color and thermal imaging independently, (2) to integrate color and thermal imaging for improved yield estimation by combining thermal images with color images to improve fruit detection, and (3) to expand the system to an autonomous mobile platform for a continuous-measure situation. Background, major conclusions, solutions and achievements – Yield mapping is considered an initial step for applying precision agriculture technologies. Although many yield mapping systems have been developed for agronomic crops, yield mapping remains a difficult task for tree crops. In this project, an autonomous immature fruit yield mapping system was developed. The system could detect and count the number of fruit at early growth stages of citrus so that farmers could apply site-specific management based on the maps. There were two sub-systems, a navigation system and an imaging system. Robot Operating System (ROS) was the backbone for developing the navigation system using an unmanned ground vehicle (UGV). An inertial measurement unit (IMU), wheel encoders and a GPS were integrated using an extended Kalman filter to provide reliable and accurate localization information. A LiDAR was added to support simultaneous localization and mapping (SLAM) algorithms. The color camera on a Microsoft Kinect was used to detect citrus trees, and a new machine vision algorithm was developed to enable autonomous navigation in the citrus grove. A multimodal imaging system, which consisted of two color cameras and a thermal camera, was carried by the vehicle for video acquisition. A novel image registration method was developed for combining color and thermal images and matching fruit in both images, achieving pixel-level accuracy. A new Color-Thermal Combined Probability (CTCP) algorithm was created to effectively fuse information from the color and thermal images to classify potential image regions into fruit and non-fruit classes. Algorithms were also developed to integrate image registration, information fusion and fruit classification and detection into a single step for real-time processing. The imaging system achieved a precision rate of 95.5% and a recall rate of 90.4% on immature green citrus fruit detection, a great improvement compared to previous studies. Implications – The development of the immature green fruit yield mapping system will help farmers make early decisions for planning operations and marketing so that high yield and profit can be achieved.
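The sensor fusion described above rests on the standard Kalman predict/update cycle. A generic sketch of one cycle follows (a linear toy with supplied matrices, standing in for the project's actual ROS-based extended Kalman filter, whose configuration is not reproduced here):

```python
import numpy as np

def kf_step(x, P, F, Q, z, H, R):
    """One Kalman predict/update cycle with pre-linearized matrices.
    x: state estimate, P: covariance, z: measurement (e.g., a GPS fix)."""
    # Predict with the motion model (e.g., wheel odometry + IMU)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measurement
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```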
3

Sillah, Bukhari. Country Diagnostic Study – United Arab Emirates. Islamic Development Bank Institute, November 2021. http://dx.doi.org/10.55780/rp21002.

Abstract:
The Country Diagnostic Study (CDS) for the United Arab Emirates (U.A.E.) uses the Hausmann-Rodrik-Velasco growth diagnostics model to identify the binding constraints being faced in its quest for higher economic growth and to make recommendations to relax these constraints. Hence, the findings of the CDS can help the Islamic Development Bank in identifying areas where it can have a greater impact and provide an evidence basis to support the development of the Member Country Partnership Strategy. The U.A.E.'s development journey has been painstakingly crafted over time, with the latest plan being Vision 2021. Launched in 2010 in the aftermath of the global financial crisis (GFC), Vision 2021 was designed to place the U.A.E. among the best nations in the world. It has achieved several targets under the competitive knowledge pillar of the Vision, but some key targets related to economic growth, innovation, and knowledge workers are yet to be fully realized. This is because growth has been low and inadequate, with relatively low private investment since the 2008–2009 GFC, leading to a lower-than-potential real GDP trend. To bring in private investment and improve growth, both the quantity and quality of human capital may need to be scaled up by improving the education system and spending on research and development to support industry-university collaboration on innovations. Efficient institutional governance in the areas of corruption control, regulatory quality and conducive bureaucracy is necessary for the vibrant functioning of the private sector.
4

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation of ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy, with half the processing required by the numerical classifier or neural network alone. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification, and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruit's orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data, and it preserves information even with memory constraints.
Large quantities of data (many images) of high dimensionality (due to multiple sensors), with new information arriving incrementally (a function of the temporal dynamics of any natural process), can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine the external quality of tomatoes based on visual information. An improved model for color sorting, which is stable and does not require recalibration each season, was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, and in a manner consistent with the human graders and inspectors.
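A back-propagation network over concatenated multi-sensor features, as described above, is essentially a small multilayer perceptron. A hedged sketch with scikit-learn (the feature list and placeholder data are illustrative assumptions, not the project's dataset):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative multi-sensor feature matrix: one row per fruit, with columns
# such as [firmness, chlorophyll fluorescence, mean hue, sugar, acidity, mass]
X = np.random.rand(200, 6)                  # placeholder data
y = np.random.randint(0, 3, size=200)       # placeholder grades: 0/1/2

# Scale features, then train a back-propagation classifier with one hidden layer
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```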