Journal articles on the topic "Real-time vision systems"

To see other types of publications on this topic, follow the link: Real-time vision systems.

Check out the top 50 journal articles for your research on the topic "Real-time vision systems."

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when these are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1. Wong, Kam W. "REAL-TIME MACHINE VISION SYSTEMS." Canadian Surveyor 41, no. 2 (June 1987): 173–80. http://dx.doi.org/10.1139/tcs-1987-0013.

Abstract: Recent developments in machine vision systems, solid state cameras, and image processing are reviewed. Both hardware and software systems are currently available for performing real-time recognition and geometric measurements. More than 1000 units of these imaging systems are already being used in manufacturing plants in the United States. Current research efforts are focused on the processing of three-dimensional information and on knowledge-based processing systems. Five potential research topics in the area of photogrammetry are proposed: 1) stereo solid state camera systems, 2) image correlation, 3) self-calibration and self-orientation, 4) a general algorithm for multistation and multicamera photography, and 5) artificial photogrammetry.

2. Wong, Kam W., Anthony G. Wiley, and Michael Lew. "GPS-Guided Vision Systems for Real-Time Surveying." Journal of Surveying Engineering 115, no. 2 (May 1989): 243–51. http://dx.doi.org/10.1061/(asce)0733-9453(1989)115:2(243).

3. Rodd, M. G., and Q. M. Wu. "Knowledge-Based Vision Systems in Real-Time Control." IFAC Proceedings Volumes 22, no. 13 (September 1989): 13–18. http://dx.doi.org/10.1016/b978-0-08-040185-0.50007-5.

4. Rodd, M. G., and Q. M. Wu. "Knowledge-based vision systems in real-time control." Annual Review in Automatic Programming 15 (January 1989): 13–18. http://dx.doi.org/10.1016/0066-4138(89)90003-7.

5. Bleser, Gabriele, Mario Becker, and Didier Stricker. "Real-time vision-based tracking and reconstruction." Journal of Real-Time Image Processing 2, no. 2-3 (August 22, 2007): 161–75. http://dx.doi.org/10.1007/s11554-007-0034-0.

6. Thomas, B. T., E. L. Dagless, D. J. Milford, and A. D. Morgan. "Real-time vision guided navigation." Engineering Applications of Artificial Intelligence 4, no. 4 (January 1991): 287–300. http://dx.doi.org/10.1016/0952-1976(91)90043-6.

7. Shah, Meet. "Review on Real-time Applications of Computer Vision Systems." International Journal for Research in Applied Science and Engineering Technology 9, no. 4 (April 30, 2021): 1323–27. http://dx.doi.org/10.22214/ijraset.2021.33942.

8. Nekrasov, Victor V. "Real-time coherent optical correlator for machine vision systems." Optical Engineering 31, no. 4 (1992): 789. http://dx.doi.org/10.1117/12.56141.

9. Chermak, Lounis, Nabil Aouf, Mark Richardson, and Gianfranco Visentin. "Real-time smart and standalone vision/IMU navigation sensor." Journal of Real-Time Image Processing 16, no. 4 (June 22, 2016): 1189–205. http://dx.doi.org/10.1007/s11554-016-0613-z.

10. Gutierrez, Daniel. "Contributions to Real-time Metric Localisation with Wearable Vision Systems." ELCVIA Electronic Letters on Computer Vision and Image Analysis 15, no. 2 (November 4, 2016): 27. http://dx.doi.org/10.5565/rev/elcvia.951.

11. Sabarinathan, E., and E. Manoj. "A Survey of Techniques for Real Time Computer Vision Systems." i-manager's Journal on Embedded Systems 4, no. 2 (July 15, 2015): 24–35. http://dx.doi.org/10.26634/jes.4.2.4837.

12. Derzhanovsky, Alexander Sergeevich, and Sergey Mikhailovich Sokolov. "Image processing in real-time computer vision systems using FPGA." Keldysh Institute Preprints, no. 126 (2016): 1–16. http://dx.doi.org/10.20948/prepr-2016-126.

13. Haggrén, Henrik. "REAL-TIME PHOTOGRAMMETRY AS USED FOR MACHINE VISION APPLICATIONS." Canadian Surveyor 41, no. 2 (June 1987): 201–8. http://dx.doi.org/10.1139/tcs-1987-0015.

Abstract: A photogrammetric machine vision system, called Mapvision, has been developed at the Technical Research Centre of Finland. The paper deals with the principles of real-time photogrammetry as applied to machine vision. A brief overview of the present machine vision markets and the commercially available systems is given. The practical examples presented concern early experiences in applying Mapvision and its preceding prototype system to industrial inspection and assembly control.

14. Komuro, Takashi, Idaku Ishii, and Masatoshi Ishikawa. "General-purpose vision chip architecture for real-time machine vision." Advanced Robotics 12, no. 6 (January 1997): 619–27. http://dx.doi.org/10.1163/156855399x00036.

15. Ćwikła, Grzegorz. "Real-Time Monitoring Station for Production Systems." Advanced Materials Research 837 (November 2013): 334–39. http://dx.doi.org/10.4028/www.scientific.net/amr.837.334.

Abstract: Real-time monitoring of the flow of materials, semi-finished and finished products during the production process is a necessary practice for every company because of the need for optimal production management. Identification of technological operations, parts, products, and the persons responsible for any production stage is possible using process-control devices and automatic identification systems. The paper describes an in-line monitoring station designed for testing real-time production monitoring methods. The available sources of information are RFID, bar codes, and a vision system, integrated into the in-line production monitoring station. A model of a modular production system, or a small production system, can be placed under the in-line station as the object of monitoring. An advanced PLC integrates control over the subsystems and allows communication between the hardware and software components of the data acquisition system. Data acquired from the in-line research station are stored in a dedicated database, then processed and analysed using MES (Manufacturing Execution System) software.

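To make the data-acquisition path concrete, here is a minimal sketch of logging identification events from the RFID, bar-code, and vision subsystems to a dedicated database for later MES analysis; the schema, source tags, and IDs are illustrative assumptions, not those of the paper.

```python
import sqlite3
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TrackingEvent:
    """One identification event captured by the monitoring station."""
    source: str     # hypothetical source tag: "rfid", "barcode", or "vision"
    station: str    # hypothetical station identifier
    item_id: str    # identifier read from the tag/code or recognized by vision
    timestamp: str  # ISO 8601 time of the observation

conn = sqlite3.connect("monitoring.db")
conn.execute("""CREATE TABLE IF NOT EXISTS events
                (source TEXT, station TEXT, item_id TEXT, timestamp TEXT)""")

def log_event(conn: sqlite3.Connection, e: TrackingEvent) -> None:
    """Store one event; MES software can later query and analyse the table."""
    conn.execute("INSERT INTO events VALUES (?, ?, ?, ?)",
                 (e.source, e.station, e.item_id, e.timestamp))
    conn.commit()

log_event(conn, TrackingEvent("rfid", "in-line-1", "PART-042",
                              datetime.now(timezone.utc).isoformat()))
```
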
16. Wang, Xinhua, Jihong Ouyang, Yi Wei, Fei Liu, and Guang Zhang. "Real-Time Vision through Haze Based on Polarization Imaging." Applied Sciences 9, no. 1 (January 3, 2019): 142. http://dx.doi.org/10.3390/app9010142.

Abstract: Various gases and aerosols in bad weather conditions can cause severe image degradation, which seriously affects the detection efficiency of optical monitoring stations for high-pollutant-discharge systems. Penetrating these gases and aerosols to sense and detect the discharge of pollutants therefore plays an important role in a pollutant emission detection system. Against this backdrop, we recommend a real-time optical monitoring system based on the Stokes vectors, derived by analyzing the scattering and polarization characteristics of both gases and aerosols in the atmosphere. This system is immune to the effects of various gases and aerosols on the target to be detected and achieves real-time sensing and detection of high-pollutant-discharge systems under bad weather conditions. The imaging system is composed of four polarizers with different polarization directions integrated into independent cameras aligned parallel to the optical axis, in order to acquire the Stokes vectors from the various polarized azimuth images. Our results show that this approach achieves high-contrast and high-definition images in real time without loss of spatial resolution, in comparison with conventional imaging techniques.

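As a worked illustration of the four-polarizer setup described above: with intensity images taken behind polarizers at 0°, 45°, 90°, and 135°, the linear Stokes parameters follow from the standard relations in the sketch below. The function and array names are ours, not the paper's.

```python
import numpy as np

def stokes_from_polarizers(i0, i45, i90, i135):
    """Standard linear Stokes parameters from four polarizer images."""
    i0, i45, i90, i135 = (np.asarray(a, dtype=np.float64)
                          for a in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0 deg vs. 90 deg component
    s2 = i45 - i135                      # 45 deg vs. 135 deg component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)       # angle of polarization
    return s0, s1, s2, dolp, aop
```

Haze-penetration methods in this family then typically exploit the DoLP map to separate scattered airlight from the scene radiance.
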
17. Llano, Christian R., Yuan Ren, and Nazrul I. Shaikh. "Object Detection and Tracking in Real Time Videos." International Journal of Information Systems in the Service Sector 11, no. 2 (April 2019): 1–17. http://dx.doi.org/10.4018/ijisss.2019040101.

Abstract: Object and human tracking in streaming videos is one of the most challenging problems in vision computing. In this article, we review some relevant machine learning algorithms and techniques for human identification and tracking in videos. We provide details on the metrics and methods used in the computer vision literature for monitoring, and propose a state-space representation of the object tracking problem. A proof-of-concept implementation of state-space-based object tracking using particle filters is presented as well. The proposed approach enables tracking objects/humans in a video, including foreground/background separation for object movement detection.

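To make the state-space formulation concrete, here is a minimal bootstrap particle filter sketch for a 2D constant-velocity model; the noise levels and the resampling rule are generic textbook choices, not necessarily the authors' proof-of-concept settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, dt=1.0,
                         process_std=2.0, meas_std=5.0):
    """One predict/update/resample cycle of a bootstrap particle filter.
    particles: (N, 4) array of [x, y, vx, vy]; z: measured [x, y]."""
    n = len(particles)
    # Predict: constant-velocity motion model with additive process noise.
    particles[:, :2] += particles[:, 2:] * dt
    particles += rng.normal(0.0, process_std, particles.shape)
    # Update: weight particles by a Gaussian likelihood of the measurement.
    d2 = ((particles[:, :2] - z) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2) + 1e-300
    weights /= weights.sum()
    # Resample when the effective sample size drops below N/2.
    if 1.0 / (weights ** 2).sum() < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

def estimate(particles, weights):
    """Weighted-mean position estimate of the tracked object."""
    return np.average(particles[:, :2], axis=0, weights=weights)
```
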
18. Silva, Bruno A. da, Arthur M. Lima, Janier Arias-Garcia, Michael Huebner, and Jones Yudi. "A Manycore Vision Processor for Real-Time Smart Cameras." Sensors 21, no. 21 (October 27, 2021): 7137. http://dx.doi.org/10.3390/s21217137.

Abstract: Real-time image processing and computer vision systems are now in the mainstream of technologies enabling applications for cyber-physical systems, the Internet of Things, augmented reality, and Industry 4.0. These applications bring the need for Smart Cameras for local real-time processing of images and videos. However, the massive amount of data to be processed within short deadlines cannot be handled by most commercial cameras. In this work, we show the design and implementation of a manycore vision processor architecture to be used in Smart Cameras. With massive parallelism exploration and application-specific characteristics, our architecture is composed of distributed processing elements and memories connected through a Network-on-Chip. The architecture was implemented as an FPGA overlay, focusing on optimized hardware utilization. The parameterized architecture was characterized by its hardware occupation, maximum operating frequency, and processing frame rate. Different configurations ranging from one to eighty-one processing elements were implemented and compared to several works from the literature. Using a System-on-Chip composed of an FPGA integrated with a general-purpose processor, we showcase the flexibility and efficiency of the hardware/software architecture. The results show that the proposed architecture successfully combines programmability and performance, making it a suitable alternative for future Smart Cameras.

19. Tippetts, Beau J., Dah-Jye Lee, James K. Archibald, and Kirt D. Lillywhite. "Dense Disparity Real-Time Stereo Vision Algorithm for Resource-Limited Systems." IEEE Transactions on Circuits and Systems for Video Technology 21, no. 10 (October 2011): 1547–55. http://dx.doi.org/10.1109/tcsvt.2011.2163444.

20. Medeiros, Petrucio R. T., Rafael B. Gomes, Esteban W. G. Clua, and Luiz Gonçalves. "Dynamic multifoveated structure for real-time vision tasks in robotic systems." Journal of Real-Time Image Processing 17, no. 5 (July 5, 2019): 1403–19. http://dx.doi.org/10.1007/s11554-019-00895-6.

21. Batchelor, Bruce G., and Paul F. Whelan. "Real-time colour recognition in symbolic programming for machine vision systems." Machine Vision and Applications 8, no. 6 (November 1995): 385–98. http://dx.doi.org/10.1007/bf01213500.

22. Xu, Yunsong, Steven X. Ding, Hao Luo, and Shen Yin. "A Real-Time Performance Recovery Framework for Vision-Based Control Systems." IEEE Transactions on Industrial Electronics 68, no. 2 (February 2021): 1571–80. http://dx.doi.org/10.1109/tie.2020.2967678.

23. Oh, Jinwook, Gyeonghoon Kim, Injoon Hong, Junyoung Park, Seungjin Lee, Joo-Young Kim, Jeong-Ho Woo, and Hoi-Jun Yoo. "Low-Power, Real-Time Object-Recognition Processors for Mobile Vision Systems." IEEE Micro 32, no. 6 (November 2012): 38–50. http://dx.doi.org/10.1109/mm.2012.90.

24. Batchelor, Bruce G., and Paul F. Whelan. "Real-time colour recognition in symbolic programming for machine vision systems." Machine Vision and Applications 8, no. 6 (December 1, 1995): 385–98. http://dx.doi.org/10.1007/s001380050020.

25. Iocchi, Luca, Luca Novelli, Luigi Tombolini, and Michele Vianello. "Automatic Real-Time River Traffic Monitoring Based on Artificial Vision Techniques." International Journal of Social Ecology and Sustainable Development 1, no. 2 (April 2010): 40–51. http://dx.doi.org/10.4018/jsesd.2010040104.

Abstract: Artificial vision techniques derived from computer vision and autonomous robotic systems have been successfully employed for river traffic monitoring and management. For this purpose, the ARGOS and HYDRA systems have been developed by Archimedes Logica in collaboration with Sapienza University of Rome under the EU initiatives URBAN and MOBILIS for monitoring boat traffic in Venice on the Grand Canal and in the harbour area. These advanced systems provide efficient automatic traffic monitoring to guarantee navigation safety and regular flow while producing and distributing information about the traffic. The systems are based on the processing of digital images gathered by survey cell stations distributed throughout the supervised area, providing a visual platform on which the system displays recent and live traffic conditions in a synthetic way, similar to a radar view. The ARGOS and HYDRA systems are programmed to automatically recognize and report situations of interest in sea- or land-targeted security applications, including environmental, perimeter, and security control. This article describes the wide spectrum of applications of these two systems, that is, monitoring traffic and automatically tracking the position, speed, and direction of all vehicles.

26. Gomez-Gonzalez, Sebastian, Yassine Nemmour, Bernhard Schölkopf, and Jan Peters. "Reliable Real-Time Ball Tracking for Robot Table Tennis." Robotics 8, no. 4 (October 22, 2019): 90. http://dx.doi.org/10.3390/robotics8040090.

Abstract: Robot table tennis systems require a vision system that can track the ball position with low latency and a high sampling rate. Altering the ball to simplify the tracking, for instance with an infrared coating, changes the physics of the ball trajectory. As a result, table tennis systems use custom tracking systems that track the ball with heuristic algorithms, respecting the real-time constraints, applied to RGB images captured with a set of cameras. However, these heuristic algorithms often report erroneous ball positions, and table tennis policies typically need to incorporate additional heuristics to detect and possibly correct outliers. In this paper, we propose a vision system for object detection and tracking that focuses on reliability while providing real-time performance. Our assumption is that by using multiple cameras, we can find and discard the errors obtained in the object detection phase by checking for consistency with the positions reported by other cameras. We provide an open-source implementation of the proposed tracking system to simplify future research in robot table tennis or related tracking applications with strong real-time requirements. We evaluate the proposed system thoroughly in simulation and on the real system, outperforming previous work. Furthermore, we show that the accuracy and robustness of the proposed system increase as more cameras are added. Finally, we evaluate the table tennis playing performance of an existing method on the real robot using the proposed vision system. We measure a slight increase in performance compared to a previous vision system, even after removing all the heuristics previously present to filter out erroneous ball observations.

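The paper's consistency idea can be illustrated with standard linear (DLT) triangulation plus reprojection-error gating: triangulate from all cameras, drop the views that disagree, and re-triangulate. This is a generic sketch of that idea, with assumed camera matrices and thresholds, not the authors' open-source implementation.

```python
import numpy as np

def triangulate(points_2d, proj_mats):
    """Linear (DLT) triangulation of one 3D point from >= 2 views.
    points_2d: list of (u, v); proj_mats: list of 3x4 camera matrices."""
    rows = []
    for (u, v), P in zip(points_2d, proj_mats):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                       # homogeneous solution
    return X[:3] / X[3]

def robust_ball_position(points_2d, proj_mats, max_err_px=3.0):
    """Triangulate, drop detections with a large reprojection error,
    then re-triangulate from the consistent cameras only."""
    X = triangulate(points_2d, proj_mats)
    keep_pts, keep_P = [], []
    for (u, v), P in zip(points_2d, proj_mats):
        x = P @ np.append(X, 1.0)
        err = np.hypot(x[0] / x[2] - u, x[1] / x[2] - v)
        if err < max_err_px:
            keep_pts.append((u, v))
            keep_P.append(P)
    if len(keep_pts) >= 2 and len(keep_pts) < len(points_2d):
        X = triangulate(keep_pts, keep_P)
    return X
```
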
27. Jyothi, Madapati Asha, and M. Kalidas. "Real Time Smart Object Detection using Machine Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 11 (November 30, 2022): 212–17. http://dx.doi.org/10.22214/ijraset.2022.47281.

Abstract: Efficient and accurate object detection has been an important topic in the advancement of computer vision systems. With the advent of deep learning techniques, the accuracy of object detection has increased drastically. The project aims to incorporate a state-of-the-art technique for object detection with the goal of achieving high accuracy with real-time performance. A major challenge in many object detection systems is the dependency on other computer vision techniques to assist the deep-learning-based approach, which leads to slow and non-optimal performance. In this project, we use a completely deep-learning-based approach to solve the problem of object detection in an end-to-end fashion. The network is trained on the most challenging publicly available dataset, on which an object detection challenge is conducted annually. The resulting system is fast and accurate, thus aiding applications that require object detection.

28. Liu, Kui, and Nasser Kehtarnavaz. "Real-time robust vision-based hand gesture recognition using stereo images." Journal of Real-Time Image Processing 11, no. 1 (February 26, 2013): 201–9. http://dx.doi.org/10.1007/s11554-013-0333-6.

29. Ashwin Sai C., Karthik Srinivas K., and Allwyn Raja P. "Real Time Motion Detection for Traffic Analysis Using Computer Vision." International Journal of Computer Vision and Image Processing 10, no. 2 (April 2020): 1–14. http://dx.doi.org/10.4018/ijcvip.2020040101.

Abstract: Nowadays, as the digital era proliferates, a number of traffic violation detection systems have been built using hardware and software to detect violations of traffic rules. This article proposes an integrated method for traffic analysis that detects vehicles in the video and tracks their motion for multiple-violation detection. The purpose of this integrated system is to provide a method to identify different types of traffic violations and to reduce the number of systems used to record violations. The method receives input from a traffic surveillance camera and uses a DNN to classify the vehicles, reducing the number of personnel needed to do this manually. The authors have implemented modules that track vehicles and detect violations such as line crossing, lane changing, signal jumping, and over-speeding, and that find illegally parked vehicles. The main purpose of this project is to convert manual traffic analysis into a smart traffic management system.

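Several of the listed violations reduce to simple geometry on tracked centroids. Line crossing, for example, is a change of sign of a cross product relative to a virtual line; a minimal sketch follows, where the line endpoints and the tracked points are assumed inputs from the detector/tracker.

```python
def side_of_line(p, a, b):
    """Signed side of point p relative to the directed line a -> b
    (the z-component of the 2D cross product)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed_line(prev_pt, curr_pt, a, b):
    """True if a tracked centroid moved across the virtual line a-b
    between two consecutive frames."""
    return side_of_line(prev_pt, a, b) * side_of_line(curr_pt, a, b) < 0
```

A signal-jump check works the same way, using a stop line together with the current light state.
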
30. Kloock, Maximilian, Patrick Scheffe, Isabelle Tülleners, Janis Maczijewski, Stefan Kowalewski, and Bassam Alrifaee. "Vision-Based Real-Time Indoor Positioning System for Multiple Vehicles." IFAC-PapersOnLine 53, no. 2 (2020): 15446–53. http://dx.doi.org/10.1016/j.ifacol.2020.12.2367.

31. Morita, Toshihiko. "Tracking vision system for real-time motion analysis." Advanced Robotics 12, no. 6 (January 1997): 609–17. http://dx.doi.org/10.1163/156855399x00027.

32. Cho, C. S., B. M. Chung, and M. J. Park. "Development of Real-Time Vision-Based Fabric Inspection System." IEEE Transactions on Industrial Electronics 52, no. 4 (August 2005): 1073–79. http://dx.doi.org/10.1109/tie.2005.851648.

33. Martínez, Judit, Eva Costa, Paco Herreros, Xavi Sánchez, and Ramon Baldrich. "A modular and scalable architecture for PC-based real-time vision systems." Real-Time Imaging 9, no. 2 (April 2003): 99–112. http://dx.doi.org/10.1016/s1077-2014(03)00002-0.

34. Sokolov, Sergey Sergeevich, and Andrey Alexandrovich Boguslavskiy. "Automation of requirements preparation for the software of real-time vision systems." Keldysh Institute Preprints, no. 141 (2016): 1–16. http://dx.doi.org/10.20948/prepr-2016-141.

35. Diaz, Javier, Eduardo Ros, Alberto Prieto, and Francisco J. Pelayo. "Fine grain pipeline systems for real-time motion and stereo-vision computation." International Journal of High Performance Systems Architecture 1, no. 1 (2007): 60. http://dx.doi.org/10.1504/ijhpsa.2007.013292.

36. Hannan, M. W., and I. D. Walker. "Real-time shape estimation for continuum robots using vision." Robotica 23, no. 5 (August 23, 2005): 645–51. http://dx.doi.org/10.1017/s0263574704001018.

Abstract: This paper describes external camera-based shape estimation for continuum robots. Continuum robots have a continuous backbone made of sections which bend to produce changes of configuration. A major difficulty with continuum robots is determining the robot's shape, as there are no discrete joints. This paper presents a method for shape determination based on machine vision. Using an engineered environment and image processing from a high-speed camera, shape determination of a continuum robot is achieved. Experimental results showing the effectiveness of the technique on our Elephant's Trunk Manipulator are presented.

37. Peixoto, Paulo, Jorge Batista, and Helder J. Araujo. "Real-time human activity monitoring exploring multiple vision sensors." Robotics and Autonomous Systems 35, no. 3-4 (June 2001): 221–28. http://dx.doi.org/10.1016/s0921-8890(01)00117-8.

38. Dell’Aquila, Rocco V., Giampiero Campa, Marcello R. Napolitano, and Marco Mammarella. "Real-time machine-vision-based position sensing system for UAV aerial refueling." Journal of Real-Time Image Processing 1, no. 3 (February 15, 2007): 213–24. http://dx.doi.org/10.1007/s11554-007-0023-3.

39. Hol, Jeroen D., Thomas B. Schön, Henk Luinge, Per J. Slycke, and Fredrik Gustafsson. "Robust real-time tracking by fusing measurements from inertial and vision sensors." Journal of Real-Time Image Processing 2, no. 2-3 (October 18, 2007): 149–60. http://dx.doi.org/10.1007/s11554-007-0040-2.

40. Fiack, Laurent, Nicolas Cuperlier, and Benoît Miramond. "Embedded and real-time architecture for bio-inspired vision-based robot navigation." Journal of Real-Time Image Processing 10, no. 4 (January 22, 2014): 699–722. http://dx.doi.org/10.1007/s11554-013-0391-9.

41. Dusa, Varshini, Satya Shodhaka R. Prabhanjan, Sharon Carlina Chatragadda, Sravani Bandaru, and Ajay Jakkanapally. "Real-time Telugu Sign Language Translator with Computer Vision." International Journal for Research in Applied Science and Engineering Technology 10, no. 9 (September 30, 2022): 1833–40. http://dx.doi.org/10.22214/ijraset.2022.46928.

Abstract: Sign language is the basic communication method among hearing-disabled and speech-disabled people. To express themselves, they require an interpreter or motion-sensing devices that convert sign language into one of the standard languages. However, no such system exists for Telugu, so Telugu speakers are forced to use the national language instead of the regional language of their culture, with the same issues of cumbersome hardware or the need for an interpreter. This paper proposes a system that detects hand gestures and signs from a real-time video stream that is processed with the help of computer vision and classified with the YOLOv3 object detection algorithm. Additionally, the labels are mapped to the corresponding Telugu text. The style of learning is transfer learning, unlike conventional CNNs, RNNs, or traditional machine learning models: a pre-trained model is applied to a completely new problem and adapts to the new problem's requirements efficiently. This requires less training effort in terms of dataset size and yields greater accuracy. It is the first system developed as a sign language translator for Telugu script, and it has given the best results compared to the existing systems. The system is trained on 52 Telugu letters, 10 numbers, and 8 frequently used Telugu words.

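To illustrate the detection-and-mapping step the abstract describes, the sketch below parses raw YOLOv3 output rows and maps class ids to Telugu text; the label table, threshold, and names are illustrative assumptions, not the authors' trained classes.

```python
import numpy as np

# Hypothetical mapping from detector class ids to Telugu text; the real
# system is trained on 52 letters, 10 numbers and 8 words.
TELUGU_LABELS = {0: "అ", 1: "ఆ", 2: "ఇ"}  # illustrative subset only

def parse_yolo_outputs(layer_outputs, conf_thresh=0.5):
    """Parse raw YOLOv3 rows [cx, cy, w, h, objectness, class scores...]
    (coordinates normalized to the input size) into detections."""
    detections = []
    for out in layer_outputs:          # one array per YOLO output layer
        for row in out:
            scores = row[5:]
            cls = int(np.argmax(scores))
            conf = float(row[4] * scores[cls])
            if conf >= conf_thresh:
                cx, cy, w, h = row[:4]
                box = (cx - w / 2, cy - h / 2, w, h)  # top-left corner form
                detections.append((TELUGU_LABELS.get(cls, "?"), conf, box))
    return detections
```
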
42. Nguyen, Bella, and Ioannis Brilakis. "Real-time validation of vision-based over-height vehicle detection system." Advanced Engineering Informatics 38 (October 2018): 67–80. http://dx.doi.org/10.1016/j.aei.2018.06.002.

43. Do, Hyeonsu, Colin Yoon, Yunbo Liu, Xintao Zhao, John Gregg, Ancheng Da, Younggeun Park, and Somin Eunice Lee. "Intelligent Fusion Imaging Photonics for Real-Time Lighting Obstructions." Sensors 23, no. 1 (December 28, 2022): 323. http://dx.doi.org/10.3390/s23010323.

Abstract: Dynamic detection in challenging lighting environments is essential for advancing intelligent robots and autonomous vehicles. Traditional vision systems are prone to severe lighting conditions in which rapid increases or decreases in contrast or saturation obscure objects, resulting in a loss of visibility. By incorporating intelligent optimization of polarization into vision systems using the iNC (integrated nanoscopic correction), we introduce an intelligent real-time fusion algorithm to address challenging and changing lighting conditions. Through real-time iterative feedback, we rapidly select polarizations, which is difficult to achieve with traditional methods. Fusion images were also dynamically reconstructed using pixel-based weights calculated in the intelligent polarization selection process. We showed that images fused by intelligent polarization selection reduced the mean-square error by two orders of magnitude, uncovering subtle features of occluded objects. Our intelligent real-time fusion algorithm also achieved a two-orders-of-magnitude increase in time performance without compromising image quality. We expect intelligent fusion imaging photonics to play an increasingly vital role in the fields of next-generation intelligent robots and autonomous vehicles.

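The pixel-based weighted reconstruction can be sketched generically: weight each candidate image per pixel by a local quality measure and normalize across the stack. The gradient-energy weight below is our illustrative stand-in for the weights the paper computes during polarization selection.

```python
import numpy as np

def fuse_images(images, eps=1e-9):
    """Pixel-wise weighted fusion of candidate images. Each pixel's weight
    is a local gradient-energy (sharpness) proxy, normalized across the
    image stack so the weights sum to one at every pixel."""
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    gy, gx = np.gradient(stack, axis=(1, 2))   # per-image spatial gradients
    weights = gx**2 + gy**2 + eps
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)
```
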
44. Shashank and Indu Sreedevi. "Distributed Network of Adaptive and Self-Reconfigurable Active Vision Systems." Symmetry 14, no. 11 (October 31, 2022): 2281. http://dx.doi.org/10.3390/sym14112281.

Abstract: The performance of a computer vision system depends on the accuracy of the visual information extracted by the sensors and on the system's visual-processing capabilities. To derive optimum information from the sensed data, the system must be capable of identifying objects of interest (OOIs) and activities in the scene. Active vision systems intend to capture OOIs at the highest possible resolution to extract the optimum visual information by calibrating the configuration spaces of the cameras. As the data processing and the reconfiguration of the cameras are interdependent, it becomes very challenging for advanced active vision systems to perform in real time. Due to limited computational resources, model-based asymmetric active vision systems only work in known conditions and fail miserably in unforeseen conditions. Symmetric/asymmetric systems employing artificial intelligence, while they manage to tackle unforeseen environments, require iterative training and thus are not reliable for real-time applications. Thus, the contemporary symmetric/asymmetric reconfiguration systems proposed to obtain optimum configuration spaces of sensors for accurate activity tracking and scene understanding may not be adequate to tackle unforeseen conditions in real time. To address this problem, this article presents an adaptive self-reconfiguration (ASR) framework for active vision systems operating co-operatively in a distributed blockchain network. The ASR framework enables active vision systems to share what they have learned about an activity or an unforeseen environment, which can then be utilized by other active vision systems in the network, lowering the time needed for learning and adaptation to new conditions. Further, as the learning duration is reduced, the duration of the reconfiguration of the cameras is also reduced, yielding better performance in terms of scene understanding. The ASR framework enables resource and data sharing in a distributed network of active vision systems and outperforms state-of-the-art active vision systems in terms of accuracy and latency, making it ideal for real-time applications.

45. Erokhin, D. Y., A. B. Feldman, and S. E. Korepanov. "DETECTION AND TRACKING OF MOVING OBJECTS WITH REAL-TIME ONBOARD VISION SYSTEM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W4 (May 10, 2017): 67–71. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w4-67-2017.

Abstract: Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is developing a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimating and compensating geometric transformations of images, an algorithm for detecting moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.

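A common way to realize the first two parts of such a set — ego-motion compensation followed by moving-object detection — is feature-based homography alignment plus frame differencing. The OpenCV sketch below is a generic version under that assumption, not necessarily the authors' exact algorithms.

```python
import cv2
import numpy as np

def moving_object_mask(prev_gray, curr_gray, diff_thresh=25):
    """Align the previous frame to the current one with a feature-based
    homography (compensating camera ego-motion), then flag movers by
    thresholding the difference of the aligned frames."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    aligned = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, aligned)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```
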
46. Cho, Yong Cheol Peter, Nandhini Chandramoorthy, Kevin M. Irick, and Vijaykrishnan Narayanan. "Accelerating Multiresolution Gabor Feature Extraction for Real Time Vision Applications." Journal of Signal Processing Systems 76, no. 2 (April 9, 2014): 149–68. http://dx.doi.org/10.1007/s11265-014-0873-4.

47. Jia, Qingyu, Liang Chang, Baohua Qiang, Shihao Zhang, Wu Xie, Xianyi Yang, Yangchang Sun, and Minghao Yang. "Real-Time 3D Reconstruction Method Based on Monocular Vision." Sensors 21, no. 17 (September 2, 2021): 5909. http://dx.doi.org/10.3390/s21175909.

Abstract: Real-time 3D reconstruction is one of the current popular research directions in computer vision, and it has become a core technology in the fields of virtual reality, industrial automation systems, and mobile robot path planning. Currently, there are three main problems in the real-time 3D reconstruction field. Firstly, it is expensive: it requires many and varied sensors, so it is less convenient. Secondly, the reconstruction speed is slow, and the 3D model cannot be established accurately in real time. Thirdly, the reconstruction error is large, which cannot meet the accuracy requirements of many scenes. For these reasons, we propose a real-time 3D reconstruction method based on monocular vision in this paper. Firstly, a single RGB-D camera is used to collect visual information in real time, and the YOLACT++ network is used to identify and segment the visual information so as to extract part of the important visual information. Secondly, we combine the three stages of depth recovery, depth optimization, and deep fusion to propose a deep-learning-based three-dimensional position estimation method for joint coding of visual information. It reduces the depth error caused by the depth measurement process, and the accurate 3D point values of the segmented image can be obtained directly. Finally, we propose a method based on limited outlier adjustment of the cluster center distance to optimize the three-dimensional point values obtained above. It improves the real-time reconstruction accuracy and obtains a three-dimensional model of the object in real time. Experimental results show that this method needs only a single RGB-D camera, which is not only low-cost and convenient to use, but also significantly improves the speed and accuracy of 3D reconstruction.

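For background, the step from segmented RGB-D pixels to 3D point values is the standard pinhole back-projection below; the intrinsics (fx, fy, cx, cy) and the segmentation mask are assumptions for illustration, not the paper's learned depth pipeline.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy, mask=None):
    """Standard pinhole back-projection of a depth image (in meters) to an
    N x 3 point cloud: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    valid = depth > 0 if mask is None else (depth > 0) & mask
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))
```
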
48. Nie, Yuman, Idaku Ishii, Kenkichi Yamamoto, Kensuke Orito, and Hiroshi Matsuda. "Real-time scratching behavior quantification system for laboratory mice using high-speed vision." Journal of Real-Time Image Processing 4, no. 2 (February 4, 2009): 181–90. http://dx.doi.org/10.1007/s11554-009-0111-7.

49. Jensen, Lars Baunegaard With, Anders Kjær-Nielsen, Karl Pauwels, Jeppe Barsøe Jessen, Marc Van Hulle, and Norbert Krüger. "A two-level real-time vision machine combining coarse- and fine-grained parallelism." Journal of Real-Time Image Processing 5, no. 4 (June 10, 2010): 291–304. http://dx.doi.org/10.1007/s11554-010-0159-4.

50. Watanabe, Yoshihiro, Takashi Komuro, Shingo Kagami, and Masatoshi Ishikawa. "Multi-Target Tracking Using a Vision Chip and its Applications to Real-Time Visual Measurement." Journal of Robotics and Mechatronics 17, no. 2 (April 20, 2005): 121–29. http://dx.doi.org/10.20965/jrm.2005.p0121.

Abstract: Real-time image processing at high frame rates could play an important role in various visual measurements. Such image processing can be realized by using a high-speed vision system imaging at high frame rates together with appropriate algorithms processed at high speed. We introduce a vision chip for high-speed vision and propose a multi-target tracking algorithm that exploits the chip's unique features. We describe two visual measurement applications: target counting and rotation measurement. Both achieve excellent measurement precision and high flexibility because of the achievable high-frame-rate visual observation. Experimental results show the advantages of vision chips compared with conventional visual systems.

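The target-counting and rotation measurements above rest on keeping per-target identity across frames. At very high frame rates the motion between frames is tiny, so even a generic nearest-neighbour gate often suffices; the sketch below shows that generic idea, not the chip-level algorithm of the paper.

```python
import numpy as np

def associate_targets(prev_pts, curr_pts, max_dist=5.0):
    """Greedy nearest-neighbour association of target centroids between
    consecutive frames, returning (prev_index, curr_index) pairs. At very
    high frame rates the inter-frame motion is small, so a tight distance
    gate is usually enough to preserve target identities."""
    d = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)
    pairs = []
    while np.isfinite(d).any() and d.min() <= max_dist:
        i, j = np.unravel_index(np.argmin(d), d.shape)
        pairs.append((int(i), int(j)))
        d[i, :] = np.inf  # each previous target is matched at most once
        d[:, j] = np.inf  # each current detection is matched at most once
    return pairs
```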