A selection of scholarly literature on the topic "Real-time vision systems"
Format your source in the APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Real-time vision systems".
Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its online abstract, if the corresponding details are present in the source's metadata.
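As a rough illustration of what such automatic formatting involves, the sketch below renders one journal entry from the list in APA-like and MLA-like form. It is a hypothetical toy, not the site's actual citation engine, and the field names are assumptions:

```python
# Minimal multi-style citation formatter (APA vs. MLA), sketched for one
# entry from the list below. Field names are illustrative assumptions.
def format_citation(entry, style="APA"):
    authors = entry["authors"]
    if style == "APA":
        # APA: Author, A. (Year). Title. Journal, Volume(Issue), Pages.
        return (f"{authors} ({entry['year']}). {entry['title']}. "
                f"{entry['journal']}, {entry['volume']}({entry['issue']}), "
                f"{entry['pages']}.")
    elif style == "MLA":
        # MLA: Author. "Title." Journal, vol. V, no. N, Year, pp. Pages.
        return (f"{authors.rstrip('.')}. \"{entry['title']}.\" "
                f"{entry['journal']}, vol. {entry['volume']}, "
                f"no. {entry['issue']}, {entry['year']}, pp. {entry['pages']}.")
    raise ValueError(f"unsupported style: {style}")

wong_1987 = {
    "authors": "Wong, K. W.",
    "year": 1987,
    "title": "Real-time machine vision systems",
    "journal": "Canadian Surveyor",
    "volume": 41, "issue": 2, "pages": "173-180",
}
print(format_citation(wong_1987, "APA"))
print(format_citation(wong_1987, "MLA"))
```

A real citation engine would additionally normalize author-name order, capitalization, and page ranges per style guide, which is where most of the complexity lives.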
Journal articles on the topic "Real-time vision systems"
Wong, Kam W. "REAL-TIME MACHINE VISION SYSTEMS." Canadian Surveyor 41, no. 2 (June 1987): 173–80. http://dx.doi.org/10.1139/tcs-1987-0013.
Wong, Kam W., Anthony G. Wiley, and Michael Lew. "GPS‐Guided Vision Systems for Real‐Time Surveying." Journal of Surveying Engineering 115, no. 2 (May 1989): 243–51. http://dx.doi.org/10.1061/(asce)0733-9453(1989)115:2(243).
Rodd, M. G., and Q. M. Wu. "Knowledge-Based Vision Systems in Real-Time Control." IFAC Proceedings Volumes 22, no. 13 (September 1989): 13–18. http://dx.doi.org/10.1016/b978-0-08-040185-0.50007-5.
Rodd, M. G., and Q. M. Wu. "Knowledge-based vision systems in real-time control." Annual Review in Automatic Programming 15 (January 1989): 13–18. http://dx.doi.org/10.1016/0066-4138(89)90003-7.
Bleser, Gabriele, Mario Becker, and Didier Stricker. "Real-time vision-based tracking and reconstruction." Journal of Real-Time Image Processing 2, no. 2-3 (August 22, 2007): 161–75. http://dx.doi.org/10.1007/s11554-007-0034-0.
Thomas, B. T., E. L. Dagless, D. J. Milford, and A. D. Morgan. "Real-time vision guided navigation." Engineering Applications of Artificial Intelligence 4, no. 4 (January 1991): 287–300. http://dx.doi.org/10.1016/0952-1976(91)90043-6.
Shah, Meet. "Review on Real-time Applications of Computer Vision Systems." International Journal for Research in Applied Science and Engineering Technology 9, no. 4 (April 30, 2021): 1323–27. http://dx.doi.org/10.22214/ijraset.2021.33942.
Nekrasov, Victor V. "Real-time coherent optical correlator for machine vision systems." Optical Engineering 31, no. 4 (1992): 789. http://dx.doi.org/10.1117/12.56141.
Chermak, Lounis, Nabil Aouf, Mark Richardson, and Gianfranco Visentin. "Real-time smart and standalone vision/IMU navigation sensor." Journal of Real-Time Image Processing 16, no. 4 (June 22, 2016): 1189–205. http://dx.doi.org/10.1007/s11554-016-0613-z.
Gutierrez, Daniel. "Contributions to Real-time Metric Localisation with Wearable Vision Systems." ELCVIA Electronic Letters on Computer Vision and Image Analysis 15, no. 2 (November 4, 2016): 27. http://dx.doi.org/10.5565/rev/elcvia.951.
Dissertations and theses on the topic "Real-time vision systems"
Benoit, Stephen M. "Monocular optical flow for real-time vision systems." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23862.
Tippetts, Beau J. "Real-Time Stereo Vision for Resource Limited Systems." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/2972.
Arshad, Norhashim Mohd. "Real-time data compression for machine vision measurement systems." Thesis, Liverpool John Moores University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285284.
Pan, Wenbo. "Real-time human face tracking." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ55535.pdf.
Nguyen, Dai-Duong. "A vision system based real-time SLAM applications." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS518/document.
Повний текст джерелаSLAM (Simultaneous Localization And Mapping) has an important role in several applications such as autonomous robots, smart vehicles, unmanned aerial vehicles (UAVs) and others. Nowadays, real-time vision based SLAM applications becomes a subject of widespread interests in many researches. One of the solutions to solve the computational complexity of image processing algorithms, dedicated to SLAM applications, is to perform high or/and low level processing on co-processors in order to build a System on Chip. Heterogeneous architectures have demonstrated their ability to become potential candidates for a system on chip in a hardware software co-design approach. The aim of this thesis is to propose a vision system implementing a SLAM algorithm on a heterogeneous architecture (CPU-GPU or CPU-FPGA). The study will allow verifying if these types of heterogeneous architectures are advantageous, what elementary functions and/or operators should be added on chip and how to integrate image-processing and the SLAM Kernel on a heterogeneous architecture (i. e. How to map the vision SLAM on a System on Chip).There are two parts in a visual SLAM system: Front-end (feature extraction, image processing) and Back-end (SLAM kernel). During this thesis, we studied several features detection and description algorithms for the Front-end part. We have developed our own algorithm denoted as HOOFR (Hessian ORB Overlapped FREAK) extractor which has a better compromise between precision and processing times compared to those of the state of the art. This algorithm is based on the modification of the ORB (Oriented FAST and rotated BRIEF) detector and the bio-inspired descriptor: FREAK (Fast Retina Keypoint). The improvements were validated using well-known real datasets. Consequently, we propose the HOOFR-SLAM Stereo algorithm for the Back-end part. This algorithm uses images acquired by a stereo camera to perform simultaneous localization and mapping. 
The HOOFR SLAM performances were evaluated on different datasets (KITTI, New-College , Malaga, MRT, St-Lucia, ...).Afterward, to reach a real-time system, we studied the algorithmic complexity of HOOFR SLAM as well as the current hardware architectures dedicated for embedded systems. We used a methodology based on the algorithm complexity and functional blocks partitioning. The processing time of each block is analyzed taking into account the constraints of the targeted architectures. We achieved an implementation of HOOFR SLAM on a massively parallel architecture based on CPU-GPU. The performances were evaluated on a powerful workstation and on architectures based embedded systems. In this study, we propose a system-level architecture and a design methodology to integrate a vision SLAM algorithm on a SoC. This system will highlight a compromise between versatility, parallelism, processing speed and localization results. A comparison related to conventional systems will be performed to evaluate the defined system architecture. In order to reduce the energy consumption, we have studied the implementation of the Front-end part (image processing) on an FPGA based SoC system. The SLAM kernel is intended to run on a CPU processor. We proposed a parallelized architecture using HLS (High-level synthesis) method and OpenCL language programming. We validated our architecture for an Altera Arria 10 SoC. A comparison with systems in the state-of-the-art showed that the designed architecture presents better performances and a compromise between power consumption and processing times
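The partitioning methodology the abstract describes — profiling each functional block and mapping it to the processor that meets a real-time frame budget — can be sketched as follows. Block names and per-frame timings here are illustrative assumptions, not measurements from the thesis:

```python
# Illustrative functional-block partitioning for a vision-SLAM pipeline:
# choose, per block, the target (CPU or GPU) with the lower latency, then
# check the resulting mapping against a real-time frame budget.
BLOCKS = {                 # per-frame time in ms: (cpu, gpu); None = CPU-only
    "feature_detection":   (18.0, 4.0),
    "feature_description": (12.0, 3.5),
    "matching":            (9.0,  2.0),
    "slam_kernel":         (7.0,  None),   # control-heavy code kept on CPU
}

def partition(blocks, budget_ms):
    """Greedy mapping of blocks to targets; returns (mapping, total, feasible)."""
    mapping, total = {}, 0.0
    for name, (cpu, gpu) in blocks.items():
        if gpu is not None and gpu < cpu:
            mapping[name], cost = "gpu", gpu
        else:
            mapping[name], cost = "cpu", cpu
        total += cost
    return mapping, total, total <= budget_ms

mapping, total, ok = partition(BLOCKS, budget_ms=33.3)  # ~30 fps budget
print(mapping, total, ok)
```

A real co-design flow would also account for CPU-GPU transfer costs and pipelining between blocks, which a simple additive model like this ignores.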
Garner, Harry Douglas Jr. "Development of a real-time vision based absolute orientation sensor." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/17022.
Guo, Guanghao. "Evaluation of FPGA Partial Reconfiguration : for real-time Vision applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279957.
Повний текст джерелаAnvändningen av programmerbara logiska resurser i Field Programmable Gate Arrayer, även känd som FPGA:er, har ökat mycket nyligen på grund av komplexiteten hos algoritmerna, speciellt för vissa datorvisningsalgoritmer. På grund av detta är det ibland inte tillräckligt med hårdvaruresurser i FPGA:n. Partiell omkonfiguration ger oss möjlighet att lösa detta problem. Partiell omkonfigurering är en teknik som kan användas för att omkonfigurera specifika delar av FPGA:n under körtid. Genom att använda denna teknik kan vi minska behovet av programmerbara logiska resurser. Det här mastersprojektet syftar till att utforma ett programvaru-ramverk för partiell omkonfiguration som kan ladda en uppsättning processkomponenter / algoritmer (t.ex. objektdetektering, optiskt flöde, Harris-Corner detection etc) i FPGA- området utan att påverka statiska realtids-komponenter såsom kamerafångst, grundläggande bildfiltrering och färgkonvertering som körs kontinuerligt. Partiell omkonfiguration har tillämpats på två olika videoprocessnings-pipelines, en direkt-strömmande respektive en rambuffert-strömmande arkitektur. Resultatet visar att omkonfigurationstiden är förutsägbar och att partiell omkonfiguration kan användas i realtids-tillämpningar.
Hiromoto, Masayuki. "LSI design methodology for real-time computer vision on embedded systems." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/126476.
Kyoto University doctoral thesis record: new-system course doctorate, Doctor of Informatics; degree no. 甲第15012号 (report no. 情博第371号; library call no. 新制||情||68; record nos. 0048, 27462, UT51-2009-R736). Department of Communications and Computer Engineering, Graduate School of Informatics, Kyoto University. Examining committee: Prof. Takashi Sato (chair), Prof. Hidetoshi Onodera, Prof. Takashi Matsuyama, Assoc. Prof. Hiroyuki Ochi. Conferred under Article 4, Paragraph 1 of the Degree Regulations.
Pereira, Pedro André Marques. "Measuring the strain of metallic surfaces in real time through vision systems." Master's thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/16447.
Повний текст джерелаVision systems have already proven to be a useful tool in various elds. The ease of their implementation, allied to their low cost mean that their growth potential is immense. In this dissertation it is proposed a approach to measure strains in metallic surfaces, using stereo vision. This approach is based on the 3D DIC. This method measures the strain of the surface by dividing this surface in small sections, called subsets, and iteratively nding the equation that describes its shape variation through time. However, calculating the transformation of this subset is very timeconsuming. The proposed approach tries to optimize this calculation by rst determine the displacement eld, and then the strain eld by derivation. The dissertation also presents some experimental data and practical considerations relatively to the camera setup and image equalization algorithms in order to obtain better disparity maps. The results were veri ed experimentally and compared with the results obtained from other softwares.
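The central idea of the abstract — recover the displacement field first, then obtain strain by differentiation — can be illustrated in one dimension with NumPy (synthetic data, not the thesis's 3D-DIC pipeline):

```python
import numpy as np

# 1-D illustration: strain is the spatial derivative of displacement.
# Synthetic data, not the thesis's stereo-DIC measurements.
x = np.linspace(0.0, 10.0, 101)      # surface coordinate in mm
u = 0.002 * x                        # uniform stretch: u(x) = 0.002 * x
strain = np.gradient(u, x)           # eps = du/dx, constant 0.002 here
print(strain.min(), strain.max())
```

In the full 3D-DIC setting the displacement is a 2-D vector field over the surface and the strain is a tensor, but the derivative relationship is the same.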
Katramados, Ioannis. "Real-time object detection using monocular vision for low-cost automotive sensing systems." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/10386.
Books on the topic "Real-time vision systems"
Popovic, Vladan, Kerem Seyid, Ömer Cogal, Abdulkadir Akin, and Yusuf Leblebici. Design and Implementation of Real-Time Multi-Sensor Vision Systems. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59057-8.
Wilberg, Jörg. Codesign for Real-Time Video Applications. Boston, MA: Springer US, 1997.
Schaeren, Peter. Real-time 3-D scene acquisition by monocular motion induced stereo. Konstanz: Hartung-Gorre, 1994.
Herout, Adam. Real-Time Detection of Lines and Grids: By PClines and Other Approaches. London: Springer London, 2013.
Verghese, Gilbert. Perspective alignment back-projection for real-time monocular three-dimensional model-based computer vision. Toronto: Dept. of Computer Science, University of Toronto, 1995.
Remias, Leonard V. A real-time image understanding system for an autonomous mobile robot. Monterey, California: Naval Postgraduate School, 1996.
Rechsteiner, Martin. Real time inverse stereo system for surveillance of dynamic safety envelopes. Konstanz: Hartung-Gorre, 1997.
Asari, Vijayan K. Wide Area Surveillance: Real-Time Motion Detection Systems. Springer Berlin / Heidelberg, 2013.
Asari, Vijayan K. Wide Area Surveillance: Real-time Motion Detection Systems. Springer, 2016.
Asari, Vijayan K. Wide Area Surveillance: Real-Time Motion Detection Systems. Springer London, Limited, 2013.
Book chapters on the topic "Real-time vision systems"
Yi, JongSu, JunSeong Kim, LiPing Li, John Morris, Gareth Lee, and Philippe Leclercq. "Real-Time Three Dimensional Vision." In Advances in Computer Systems Architecture, 309–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30102-8_26.
Wehking, Thomas, Alexander Würz-Wessel, and Wolfgang Rosenstiel. "System Architecture for Future Driver Assistance Based on Stereo Vision." In Advances in Real-Time Systems, 245–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24349-3_12.
Galčík, František, and Radoslav Gargalík. "Real-Time Depth Map Based People Counting." In Advanced Concepts for Intelligent Vision Systems, 330–41. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02895-8_30.
Morris, John, Khurram Jawed, and Georgy Gimel’farb. "Intelligent Vision: A First Step – Real Time Stereovision." In Advanced Concepts for Intelligent Vision Systems, 355–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04697-1_33.
Popovic, Vladan, Kerem Seyid, Ömer Cogal, Abdulkadir Akin, and Yusuf Leblebici. "Towards Real-Time Gigapixel Video." In Design and Implementation of Real-Time Multi-Sensor Vision Systems, 139–66. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59057-8_7.
Pieters, Roel, Pieter Jonker, and Henk Nijmeijer. "Real-Time Center Detection of an OLED Structure." In Advanced Concepts for Intelligent Vision Systems, 400–409. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04697-1_37.
Hazar, Mliki, Hammami Mohamed, and Ben-Abdallah Hanêne. "Real-Time Face Pose Estimation in Challenging Environments." In Advanced Concepts for Intelligent Vision Systems, 114–25. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02895-8_11.
Bayoudh, Ines, Saoussen Ben Jabra, and Ezzeddine Zagrouba. "A Robust Video Watermarking for Real-Time Application." In Advanced Concepts for Intelligent Vision Systems, 493–504. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70353-4_42.
Szewczyk, Przemysław. "Real-Time Control of Active Stereo Vision System." In Advances in Intelligent Systems and Computing, 271–80. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60699-6_26.
Kaur, Manjot, and Rajneesh Randhawa. "Vision-Based Real Time Vehicle Detection: A Survey." In Lecture Notes in Networks and Systems, 747–60. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-5529-6_57.
Conference papers on the topic "Real-time vision systems"
Reid, Alastair, John Peterson, Greg Hager, and Paul Hudak. "Prototyping real-time vision systems." In the 21st international conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/302405.302681.
Balszun, Michael, Martin Geier, and Samarjit Chakraborty. "Predictable Vision for Autonomous Systems." In 2020 IEEE 23rd International Symposium on Real-Time Distributed Computing (ISORC). IEEE, 2020. http://dx.doi.org/10.1109/isorc49007.2020.00025.
Elliott, Glenn A., Kecheng Yang, and James H. Anderson. "Supporting Real-Time Computer Vision Workloads Using OpenVX on Multicore+GPU Platforms." In 2015 IEEE Real-Time Systems Symposium (RTSS). IEEE, 2015. http://dx.doi.org/10.1109/rtss.2015.33.
Caliskan, Anil, Volkan Ozdemir, Enes Bayturk, Oguzhan Mete Oztork, Osman Dogukan Kefeli, and Anil Uzengi. "Real Time Retail Analytics with Computer Vision." In 2022 Innovations in Intelligent Systems and Applications Conference (ASYU). IEEE, 2022. http://dx.doi.org/10.1109/asyu56188.2022.9925538.
Rowe, Anthony, Dhiraj Goel, and Raj Rajkumar. "FireFly Mosaic: A Vision-Enabled Wireless Sensor Networking System." In 28th IEEE International Real-Time Systems Symposium (RTSS 2007). IEEE, 2007. http://dx.doi.org/10.1109/rtss.2007.50.
Tan, K. S., R. Saatchi, H. Elphick, and D. Burke. "Real-time vision based respiration monitoring system." In 2010 7th International Symposium on Communication Systems, Networks & Digital Signal Processing (CSNDSP 2010). IEEE, 2010. http://dx.doi.org/10.1109/csndsp16145.2010.5580316.
Persa, Stelian, and Pieter P. Jonker. "Real-time computer vision system for mobile robot." In Intelligent Systems and Advanced Manufacturing, edited by David P. Casasent and Ernest L. Hall. SPIE, 2001. http://dx.doi.org/10.1117/12.444173.
Persa, Stelian, and Pieter P. Jonker. "Real-time image processing architecture for robot vision." In Intelligent Systems and Smart Manufacturing, edited by David P. Casasent. SPIE, 2000. http://dx.doi.org/10.1117/12.403766.
Bailey, Donald G., Ken Mercer, Colin Plaw, Ralph Ball, and Harvey Barraclough. "Three-dimensional vision for real-time produce grading." In Intelligent Systems and Advanced Manufacturing, edited by Kevin G. Harding and John W. V. Miller. SPIE, 2002. http://dx.doi.org/10.1117/12.455254.
Browning, B., and M. Veloso. "Real-time, adaptive color-based robot vision." In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2005. http://dx.doi.org/10.1109/iros.2005.1545424.
Organizational reports on the topic "Real-time vision systems"
Delwiche, Michael, Yael Edan, and Yoav Sarig. An Inspection System for Sorting Fruit with Machine Vision. United States Department of Agriculture, March 1996. http://dx.doi.org/10.32747/1996.7612831.bard.
Lee, W. S., Victor Alchanatis, and Asher Levi. Innovative yield mapping system using hyperspectral and thermal imaging for precision tree crop management. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598158.bard.
Sillah, Bukhari. Country Diagnostic Study – United Arab Emirates. Islamic Development Bank Institute, November 2021. http://dx.doi.org/10.55780/rp21002.
Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.