Follow this link to see other types of publications on the topic: Camera recognition.

Journal articles on the topic "Camera recognition"

Create a precise citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Camera recognition".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1. Zhao, Ruiyi, Yangshi Ge, Ye Duan, and Quanhong Jiang. "Large-field Gesture Tracking and Recognition for Augmented Reality Interaction". Journal of Physics: Conference Series 2560, no. 1 (August 1, 2023): 012016. http://dx.doi.org/10.1088/1742-6596/2560/1/012016.

Abstract: In recent years, with the continuous development of computer vision and artificial intelligence technology, gesture recognition has been widely used in many fields, such as virtual reality and augmented reality. However, the traditional binocular camera architecture is limited by its restricted field-of-view angle and depth perception range. The fisheye camera is gradually being applied in the gesture recognition field because of its larger field-of-view angle. Fisheye cameras offer a wider field of vision than conventional binocular cameras, allowing for a greater range of gesture recognition; this gives them a distinct advantage in situations that require a wide field of view. However, because the imaging model of a fisheye camera differs from that of a traditional camera, its images exhibit a certain degree of distortion, which makes the computation for gesture recognition more complicated. Our goal is to design a distortion correction strategy suitable for fisheye cameras in order to extend the range of gesture recognition and achieve large-field-of-view gesture recognition. Combined with binocular techniques, we can use the acquired hand depth information to enrich the means of interaction. Taking advantage of the large viewing angle of the fisheye camera expands the range of gesture recognition, making it more extensive and accurate. This will help improve the real-time performance and precision of gesture recognition, which has important implications for artificial intelligence, virtual reality, and augmented reality.
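
As a concrete illustration of the correction step described above, here is a minimal sketch using OpenCV's fisheye module (not the authors' implementation). The intrinsic matrix K and distortion vector D are placeholder values standing in for a real cv2.fisheye.calibrate() result.

```python
# Minimal fisheye distortion-correction sketch, as a preprocessing step
# before gesture recognition. K and D are placeholders, not a real
# calibration.
import cv2
import numpy as np

K = np.array([[285.0, 0.0, 320.0],
              [0.0, 285.0, 240.0],
              [0.0, 0.0, 1.0]])                 # placeholder intrinsics
D = np.array([[-0.02], [0.01], [0.0], [0.0]])   # placeholder k1..k4

img = cv2.imread("fisheye_frame.png")
h, w = img.shape[:2]
# Precompute the rectification maps once, then remap every frame.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2,
                        interpolation=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT)
cv2.imwrite("undistorted_frame.png", undistorted)
```
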
2. WANG, Chenyu, Yukinori KOBAYASHI, Takanori EMARU, and Ankit RAVANKAR. "1A1-H04 Recognition of 3-D Grid Structure Recognition with Fixed Camera and RGB-D Camera". Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec) 2015 (2015): _1A1—H04_1—_1A1—H04_4. http://dx.doi.org/10.1299/jsmermd.2015._1a1-h04_1.

3. Reddy, K. Manideep. "Face Recognition for Criminal Detection". International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 2856–60. http://dx.doi.org/10.22214/ijraset.2022.44528.

Abstract: Nowadays, surveillance camera systems are spreading rapidly as security systems, since they can monitor remote places using web cameras connected to video monitors over a network. Moreover, digital equipment such as web cameras and hard disk drives is manufactured efficiently and sold at low prices, and the performance of such equipment improves at a fast rate. A modern surveillance camera system shows live images from several monitored areas shot by multiple web cameras simultaneously. Watching a large number of continuously refreshed live images, however, tires the observer's mind and body, and such a system has the inherent problem that the observer may overlook the scene of a crime. This study extracts the motion region of a moving person and measures the motion quantity to estimate his/her activity state. The proposed method then finds the characteristic point of questionable activity and scores the degree of risk of the suspicious behavior.
4. Chen, Zhuo, Hai Bo Wu, and Sheng Ping Xia. "A Cooperative Dual-Camera System for Face Recognition and Video Monitoring". Advanced Materials Research 998-999 (July 2014): 784–88. http://dx.doi.org/10.4028/www.scientific.net/amr.998-999.784.

In an ordinary video monitoring system, a whole small scene is usually observed by one or a few stationary cameras, but such a system cannot zoom in and focus on a target of interest rapidly, nor can it obtain a high-resolution image of a target of interest at a far distance. Therefore, based on research on dual-camera cooperation, an RSOM clustering tree, and the CSHG algorithm, this paper designs a cooperative dual-camera system, made up of a Stationary Wide Field of View (SWFV) camera and a Pan-Tilt-Zoom (PTZ) camera, to track and recognize a face quickly in a large-scale, far-distance scene. Meanwhile, the algorithm can meet the real-time requirement.
5. Tseng, Hung Li, Chao Nan Hung, Sun Yen Tan, Chiu Ching Tuan, Chi Ping Lee, and Wen Tzeng Huang. "Single Camera for Multiple Vehicles License Plate Localization and Recognition on Multilane Highway". Applied Mechanics and Materials 418 (September 2013): 120–23. http://dx.doi.org/10.4028/www.scientific.net/amm.418.120.

License plate recognition systems can be classified into several categories: systems with a single camera for motionless vehicles, systems with a single camera for moving vehicles, and systems with multiple cameras for moving vehicles on highways (one camera for each lane). In this paper we present an innovative system which can locate multiple moving vehicles and recognize their license plates with only one single camera. Our system is therefore highly cost-effective in comparison with other systems. It achieves a license plate localization success rate of 94% and a license plate recognition success rate of 88%, which is quite satisfactory considering that the system works on fast-moving vehicles on a highway.
6. Nwokoma, Francisca O., Juliet N. Odii, Ikechukwu I. Ayogu, and James C. Ogbonna. "Camera-based OCR scene text detection issues: A review". World Journal of Advanced Research and Reviews 12, no. 3 (December 30, 2021): 484–89. http://dx.doi.org/10.30574/wjarr.2021.12.3.0705.

Camera-based scene text detection and recognition is a research area that has attracted considerable attention and has made noticeable progress with the advance of deep learning technology, computer vision, and pattern recognition. Such systems are highly recommended for capturing text in scene images (signboards), documents with multipart and complex backgrounds, images of thick books, and documents that are highly fragile. This technology encourages real-time processing, since handheld cameras are built with very high processing speed and internal memory and are easier and more flexible to use than the traditional scanner, whose usability is limited because it is not portable and cannot be applied to images captured by cameras. However, characters captured by traditional scanners pose fewer computational difficulties than camera-captured images, which are associated with diverse challenges that result in high computational complexity and recognition difficulties. This paper therefore reviews the various factors that increase the computational difficulties of camera-based OCR and makes some recommendations on best practices for camera-based OCR systems.
7. Fan, Zhijie, Zhiwei Cao, Xin Li, Chunmei Wang, Bo Jin, and Qianjin Tang. "Video Surveillance Camera Identity Recognition Method Fused With Multi-Dimensional Static and Dynamic Identification Features". International Journal of Information Security and Privacy 17, no. 1 (March 9, 2023): 1–18. http://dx.doi.org/10.4018/ijisp.319304.

With the development of smart cities, video surveillance networks have become an important infrastructure for urban governance. However, by replacing or tampering with surveillance cameras, an important front-end device, attackers are able to access the internal network. In order to identify illegal or suspicious camera identities in advance, a camera identity identification method that incorporates multidimensional identification features is proposed. By extracting the static information of cameras and dynamic traffic information, a camera identity system that incorporates explicit, implicit, and dynamic identifiers is constructed. The experimental results show that the explicit identifiers have the highest contribution, but they are easy to forge; the dynamic identifiers rank second, but the traffic preprocessing is complex; the static identifiers rank last but are indispensable. Experiments on 40 cameras verified the effectiveness and feasibility of the proposed identifier system for camera identification, and the accuracy of identification reached 92.5%.
8. Park, Yeonji, Yoojin Jeong, and Chaebong Sohn. "Suspicious behavior recognition using deep learning". Journal of Advances in Military Studies 4, no. 1 (April 30, 2021): 43–59. http://dx.doi.org/10.37944/jams.v4i1.78.

The purpose of this study is to reinforce the defense and security system by recognizing the behaviors of suspicious persons both inside and outside the military using deep learning. Surveillance cameras help detect criminals and people who are acting unusually. However, it is inefficient that an administrator must monitor all the images transmitted from the cameras; this incurs a large cost and is vulnerable to human error. Therefore, in this study, we propose a method to find a person who should be watched carefully using only surveillance camera images. For this purpose, video data of doubtful behaviors were collected. In addition, after applying an algorithm that generalizes the different heights and motions of each person in the input images, we trained a model combining a CNN, a bidirectional LSTM, and a DNN. As a result, the accuracy of recognizing suspicious behaviors was improved. Therefore, if deep learning is applied to existing surveillance cameras, it is expected that dubious persons can be found efficiently.
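
The CNN + bidirectional LSTM + DNN pipeline named in this abstract can be sketched in a few lines of Keras. This is an illustrative sketch, not the authors' network: the clip length, frame size, layer widths, and two-class output are all assumptions.

```python
# Per-frame CNN features, a BiLSTM over the frame sequence, and dense
# layers for classification. All sizes are placeholder assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, H, W, C = 16, 112, 112, 3   # assumed clip length and frame size
NUM_CLASSES = 2                          # e.g., normal vs. suspicious

frame_cnn = models.Sequential([          # applied to each frame
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

clip = layers.Input(shape=(NUM_FRAMES, H, W, C))
x = layers.TimeDistributed(frame_cnn)(clip)           # (frames, features)
x = layers.Bidirectional(layers.LSTM(64))(x)          # temporal modeling
x = layers.Dense(64, activation="relu")(x)            # DNN head
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(clip, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```
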
9. Ake, Kanako, Tadatoshi Ogura, Yayoi Kaneko, and Gregory S. A. Rasmussen. "Automated photogrammetric method to identify individual painted dogs (Lycaon pictus)". Zoology and Ecology 29, no. 2 (July 30, 2019): 103–8. http://dx.doi.org/10.35513/21658005.2019.2.5.

The painted dog, Lycaon pictus, has been visually identified by its tricolor pattern in surveys, and whilst computerised recognition methods have been used in other species, they have not been used in painted dogs. This study compares results achieved with the Hotspotter software against human recognition. Fifteen individual painted dogs in Yokohama Zoo, Japan, were photographed using camera-traps and hand-held cameras from October 17–20, 2017. Twenty examinees identified 297 photos visually, and the same images were identified using Hotspotter. In the visual identification, the mean accuracy rate was 61.20% and the mean finish time was 4,840 seconds. At 90.57%, the accuracy rate for Hotspotter was significantly higher, with a mean finish time of 3,168 seconds. This highlights that visual photo-recognition may not be of value for untrained eyes, while software recognition can be useful for this species. For visual identification there was a significant difference in accuracy rates between hand-held cameras and camera-traps, whereas for software identification there was no significant difference. This result shows that the accuracy of software identification may be unaffected by the type of photographic device. With software identification there was a significant difference with camera-trap height, which may be because the images of one camera-trap at a lower position became dark due to it being in a shadow.
10. Rusydi, Muhammad Ilhamdi, Aulia Novira, Takayuki Nakagome, Joseph Muguro, Rio Nakajima, Waweru Njeri, Kojiro Matsushita, and Minoru Sasaki. "Autonomous Movement Control of Coaxial Mobile Robot based on Aspect Ratio of Human Face for Public Relation Activity Using Stereo Thermal Camera". Journal of Robotics and Control (JRC) 3, no. 3 (May 1, 2022): 361–73. http://dx.doi.org/10.18196/jrc.v3i3.14750.

In recent years, robots that recognize people around them and provide guidance, information, and monitoring have been attracting attention. Mainstream conventional human recognition technology uses a camera or a laser range finder. However, recognition with a camera is difficult under fluctuating lighting, and a laser range finder is often affected by the recognition environment, for example misrecognizing a chair's leg as a person's leg. Therefore, we propose a human recognition method using a thermal camera that can visualize human heat. This study aims to realize human-following autonomous movement based on human recognition. In addition, the distance from the robot to the person is measured with a stereo thermal camera that uses two thermal cameras. A coaxial two-wheeled robot that is compact and capable of turning on the spot is used as the mobile robot. Finally, we combine these elements and conduct an autonomous movement experiment with the coaxial mobile robot based on human recognition. We performed human-following experiments on the coaxial two-wheeled robot using the stereo thermal camera and confirmed that it moves appropriately to the location of the recognized person in multiple use cases (scenarios). However, the accuracy of distance measurement by stereo vision is inferior to that of laser measurement and needs to be improved for movements that require more accuracy.
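
The stereo distance measurement mentioned above typically reduces to the standard stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two thermal cameras, and d the disparity. A minimal sketch with placeholder calibration values:

```python
# Minimal stereo depth sketch: depth Z = f * B / d. The focal length and
# baseline below are placeholder values, not the paper's calibration.
def stereo_depth(disparity_px: float,
                 focal_px: float = 700.0,          # assumed focal length
                 baseline_m: float = 0.12) -> float:  # assumed baseline
    """Return depth in meters for a pixel disparity between two cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

print(stereo_depth(20.0))  # a 20 px disparity -> 4.2 m with these values
```
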
11. Zamorano, Chandra I., Kiki Prawiroredjo, E. Shintadewi Julian, and Endang Djuana. "Rancang Bangun Sistem Kamera Pengawas dengan Pengenalan Wajah untuk Keamanan Berbasis Blynk Legacy". Techné: Jurnal Ilmiah Elektroteknika 22, no. 2 (December 5, 2023): 241–58. http://dx.doi.org/10.31358/techne.v22i2.381.

The Covid-19 pandemic, ongoing since the beginning of 2020, has depressed all aspects of the country, from community activities to the economy. This has increased the number of crimes such as theft, robbery, and others. In this study, a room security system is proposed that uses a surveillance camera with face recognition, which records the face image of an intruder and records events as evidence of an intrusion. The system sends information quickly and automatically to the Android application user if an intruder whom the camera does not recognize enters the house. The smartphone application user can control camera movements inside the house to monitor the movement of intruders and record the incident. The system uses five ESP32-CAM cameras: one camera, placed in front of the house, recognizes and records the intruder's face image, and four cameras placed inside the house serve as surveillance and face recognition cameras. Each camera is driven by a servo motor controlled by an ESP8266 microcontroller. The test results show that the maximum distance at which the cameras still recognize the face of an intruder or the home owner is 2 meters in bright light. In dim light, the camera in front of the house recognizes faces up to 0.5 meters, while the cameras inside the house recognize faces up to 1 meter. The average delay for sending data from the camera system to the application user is 201 ms to 617 ms.
12. WANG, Lei, Chao HU, Jie WU, Qing HE, and Wei LIU. "Multi-camera face gesture recognition". Journal of Computer Applications 30, no. 12 (January 5, 2011): 3307–10. http://dx.doi.org/10.3724/sp.j.1087.2010.03307.

13. Bernardi, Bryan D. "Camera on-board voice recognition". Journal of the Acoustical Society of America 101, no. 5 (1997): 2429. http://dx.doi.org/10.1121/1.418474.

14. Athanasiadou, Eleni, Zeno Geradts, and Erwin Van Eijk. "Camera recognition with deep learning". Forensic Sciences Research 3, no. 3 (July 3, 2018): 210–18. http://dx.doi.org/10.1080/20961790.2018.1485198.

15. Yu, X., and S. Beucher. "Vehicles Recognition by Video Camera". IFAC Proceedings Volumes 27, no. 12 (August 1994): 389–94. http://dx.doi.org/10.1016/s1474-6670(17)47501-x.

16. Astrid, Marcella, and Seung‐Ik Lee. "Assembling three one‐camera images for three‐camera intersection classification". ETRI Journal 45, no. 5 (October 2023): 862–73. http://dx.doi.org/10.4218/etrij.2023-0100.

Abstract: Determining whether an autonomous self‐driving agent is in the middle of an intersection can be extremely difficult when relying on visual input taken from a single camera. In such a problem setting, a wider range of views is essential, which drives us to use three cameras, positioned at the front, left, and right of an agent, for better intersection recognition. However, collecting adequate training data with three cameras poses several practical difficulties; hence, we propose using data collected from one camera to train a three‐camera model, which enables us to more easily compile a variety of training data and endow our model with improved generalizability. In this work, we provide three separate fusion methods (feature, early, and late) for combining the information from the three cameras. Extensive pedestrian‐view intersection classification experiments show that our feature fusion model provides an area under the curve and F1‐score of 82.00 and 46.48, respectively, which considerably outperforms contemporary three‐ and one‐camera models.
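
The feature-fusion variant the abstract describes can be sketched as three views passing through a shared backbone whose feature vectors are concatenated. Backbone depth, input size, and the binary output are illustrative assumptions, not the paper's architecture:

```python
# Three camera views share one CNN backbone; their feature vectors are
# concatenated before classification (feature fusion). Sizes are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

H, W, C = 128, 128, 3

backbone = models.Sequential([           # shared across front/left/right
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

views = [layers.Input(shape=(H, W, C), name=n)
         for n in ("front", "left", "right")]
fused = layers.Concatenate()([backbone(v) for v in views])  # feature fusion
out = layers.Dense(1, activation="sigmoid")(fused)  # intersection or not

model = models.Model(views, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```
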
17. Holešovský, Ondřej, Radoslav Škoviera, Václav Hlaváč, and Roman Vítek. "Experimental Comparison between Event and Global Shutter Cameras". Sensors 21, no. 4 (February 6, 2021): 1137. http://dx.doi.org/10.3390/s21041137.

We compare event-cameras with fast (global shutter) frame-cameras experimentally, asking: “What is the application domain, in which an event-camera surpasses a fast frame-camera?” Surprisingly, finding the answer has been difficult. Our methodology was to test event- and frame-cameras on generic computer vision tasks where event-camera advantages should manifest. We used two methods: (1) a controlled, cheap, and easily reproducible experiment (observing a marker on a rotating disk at varying speeds); (2) selecting one challenging practical ballistic experiment (observing a flying bullet having a ground truth provided by an ultra-high-speed expensive frame-camera). The experimental results include sampling/detection rates and position estimation errors as functions of illuminance and motion speed; and the minimum pixel latency of two commercial state-of-the-art event-cameras (ATIS, DVS240). Event-cameras respond more slowly to positive than to negative large and sudden contrast changes. They outperformed a frame-camera in bandwidth efficiency in all our experiments. Both camera types provide comparable position estimation accuracy. The better event-camera was limited by pixel latency when tracking small objects, resulting in motion blur effects. Sensor bandwidth limited the event-camera in object recognition. However, future generations of event-cameras might alleviate bandwidth limitations.
18. Son, Sungho, and Han-Cheol Ryu. "Study on the Effect of Small Blockage on Autonomous Camera Recognition". Institute of Future Society and Christianity 4, no. 2 (December 31, 2023): 89–99. http://dx.doi.org/10.53665/isc.4.2.89.

The environmental awareness sensors of self-driving cars include cameras, radars, and lidar. While various cognitive sensors are employed in combinations as per self-driving manufacturers' unique development strategies, the camera is a consistently utilized sensor. Camera sensors are the only ones capable of capturing texture, color, and contrast information, as well as recognizing objects such as road lanes, signals, signs, pedestrians, bicycles, and surrounding vehicles. Due to the ever-increasing pixel resolution and relatively low prices, camera sensors are gaining importance in autonomous vehicles. However, they are susceptible to environmental changes like dust, sun, rain, snow, or darkness. Furthermore, due to their relatively small, lens-like shape in comparison to radar and lidar sensors, camera sensor performance can be compromised by visual obstructions such as small dust particles (referred to as ‘blockage’ hereafter), which can significantly impact the safety of autonomous driving. In this study, a camera simulator was employed to simulate a virtual accident scenario based on an actual accident, projecting the virtual scenario screen directly through a camera. Object recognition delay time was assessed based on the density and color of the blockage. This study demonstrates that cognitive delays caused by blockage can lead to major accidents and draws a parallel with the idea that a small speck of dust in one's faith can cause significant trials. In addition to emphasizing the importance of cleaning the camera lens to prevent blockage, I would like to suggest the periodic purification of one's faith from external temptations.
19. Aragon, Maria Christina, Melissa Juanillo, and Rosmina Joy Cabauatan. "Camera-Captured Writing System Recognition of Logosyllabic Han Character". International Journal of Computer and Communication Engineering 3, no. 3 (2014): 166–71. http://dx.doi.org/10.7763/ijcce.2014.v3.313.

20. Rodriguez, Julian Severiano. "A comparison of an RGB-D cameras performance and a stereo camera in relation to object recognition and spatial position determination". ELCVIA Electronic Letters on Computer Vision and Image Analysis 20, no. 1 (May 27, 2021): 16–27. http://dx.doi.org/10.5565/rev/elcvia.1238.

Results are presented of using an RGB-D camera (Kinect sensor) and a stereo camera, separately, to determine the real 3D position of characteristic points of a predetermined object in a scene. The KAZE algorithm was used for recognition; this algorithm exploits the nonlinear scale space built through nonlinear diffusion filtering. The 3D coordinates of the centroid of a predetermined object were calculated using the camera calibration information and the depth parameter provided by the Kinect sensor and the stereo camera. Experimental results show it is possible to obtain the required coordinates with both cameras in order to locate a robot, although a balance in the sensor placement distance must be guaranteed: no less than 0.8 m from the object to guarantee reliable depth information with the Kinect, due to its operating range; and 0.5 m for the stereo camera, which however should not be placed 1 m away if a suitable object recognition rate is to be maintained. Besides, the Kinect sensor measures distance more precisely than the stereo camera.
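
The pipeline outlined here, KAZE features followed by back-projection of the object centroid using intrinsics and depth, can be sketched with OpenCV. The intrinsics are placeholders and depth_at() is a hypothetical stand-in for a Kinect or stereo depth lookup:

```python
# KAZE keypoints locate the object; the centroid pixel is back-projected
# to 3D camera coordinates with assumed pinhole intrinsics.
import cv2
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed intrinsics

detector = cv2.KAZE_create()                  # nonlinear scale-space features
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
keypoints, descriptors = detector.detectAndCompute(img, None)

# Centroid of detected keypoints (matching against the object template
# is omitted here for brevity).
pts = np.array([kp.pt for kp in keypoints])
u, v = pts.mean(axis=0)

def depth_at(u, v):                           # hypothetical depth lookup
    return 1.2                                # meters, placeholder

# Back-project pixel (u, v) with depth Z to camera coordinates (X, Y, Z).
Z = depth_at(u, v)
X = (u - cx) * Z / fx
Y = (v - cy) * Z / fy
print(f"object centroid at ({X:.2f}, {Y:.2f}, {Z:.2f}) m")
```
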
21. Fang, Liang, Zhiwei Guan, and Jinghua Li. "Automatic Roadblock Identification Algorithm for Unmanned Vehicles Based on Binocular Vision". Wireless Communications and Mobile Computing 2021 (November 23, 2021): 1–7. http://dx.doi.org/10.1155/2021/3333754.

In order to improve the accuracy of automatic obstacle recognition for driverless vehicles, an automatic obstacle recognition algorithm based on binocular vision is constructed. First, the relevant camera parameters are calibrated with respect to the vehicle coordinate system to determine the position of obstacles relative to the vehicle, and the three-dimensional coordinates of obstacle points are obtained by binocular matching. Then, the left and right cameras capture the feature points of obstacles in the image to realize obstacle recognition. Finally, the experimental results show that the recognition error of the algorithm is 0.03 m for obstacle 1, 0.02 m for obstacle 2, and 0.01 m for obstacle 3; the algorithm thus has a small recognition error. The vehicle coordinate system is added in the camera calibration process, which allows the relative position between the vehicle and the obstacle to be measured accurately.
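
The binocular ranging step can be illustrated with OpenCV's semi-global matching followed by reprojection to 3D. This is a generic sketch, not the paper's algorithm; the reprojection matrix Q is a placeholder for the output of a real cv2.stereoRectify() calibration:

```python
# Compute a disparity map from a rectified stereo pair, then reproject to
# 3D point coordinates. Q encodes assumed fx = 700 px, baseline 0.12 m.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96,
                                blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

Q = np.array([[1, 0, 0, -320.0],      # placeholder reprojection matrix
              [0, 1, 0, -240.0],
              [0, 0, 0, 700.0],
              [0, 0, 1.0 / 0.12, 0]])
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # (H, W, 3) in meters
print(points_3d[240, 320])            # 3D point at the image center
```
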
22. ABAYOMI-ALLI, A., E. O. OMIDIORA, S. O. OLABIYISI, J. A. Ojo, and A. Y. AKINGBOYE. "BLACKFACE SURVEILLANCE CAMERA DATABASE FOR EVALUATING FACE RECOGNITION IN LOW QUALITY SCENARIOS". Journal of Natural Sciences Engineering and Technology 15, no. 2 (November 22, 2017): 13–31. http://dx.doi.org/10.51406/jnset.v15i2.1668.

Many face recognition algorithms perform poorly in real-life surveillance scenarios because they were tested with datasets that are already biased toward high-quality images and certain ethnic or racial types. In this paper a black face surveillance camera (BFSC) database is described, which was collected from four low-quality cameras and a professional camera. There were fifty (50) random volunteers, and 2,850 images were collected for the frontal mugshot, surveillance (visible light), surveillance (IR night vision), and pose variation datasets. Images were taken at distances of 3.4, 2.4, and 1.4 metres from the camera, while the pose variation images were taken at nine distinct pose angles in increments of 22.5 degrees to the left and right of the subject. Three face recognition algorithms (FRAs), the commercially available Luxand SDK, Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA), were evaluated for performance comparison in low-quality scenarios. Results obtained show that camera quality (resolution), face-to-camera distance, average recognition time, lighting conditions, and pose variations all affect the performance of FRAs. Luxand SDK, PCA, and LDA returned overall accuracies of 97.5%, 93.8%, and 92.9% after categorizing the BFSC images into excellent, good, and acceptable quality scales.
23. Jiang, Mingjun, Zihan Zhang, Kohei Shimasaki, Shaopeng Hu, and Idaku Ishii. "Multi-Thread AI Cameras Using High-Speed Active Vision System". Journal of Robotics and Mechatronics 34, no. 5 (October 20, 2022): 1053–62. http://dx.doi.org/10.20965/jrm.2022.p1053.

In this study, we propose a multi-thread artificial intelligence (AI) camera system that can simultaneously recognize remote objects in multiple desired areas of interest (AOIs) distributed over a wide field of view (FOV) using a single image sensor. The proposed multi-thread AI camera consists of an ultrafast active vision system and a convolutional neural network (CNN)-based ultrafast object recognition system. The ultrafast active vision system can function as multiple virtual cameras with high spatial resolution by synchronizing the exposure of a high-speed camera with the movement of an ultrafast two-axis mirror device at hundreds of hertz, and the CNN-based ultrafast object recognition system simultaneously recognizes the acquired high-frame-rate images in real time. The desired AOIs for monitoring can be determined automatically after rapidly scanning pre-placed visual anchors in the wide FOV at hundreds of fps with object recognition. The effectiveness of the proposed multi-thread AI camera system was demonstrated by conducting several wide-area monitoring experiments on quick response (QR) codes and persons in a naturally spacious scene, such as a meeting room, which was formerly too wide for a single still camera with a wide-angle lens to capture clearly all at once.
24. Ramakic, Adnan, Zlatko Bundalo, and Zeljko Vidovic. "Feature extraction for person gait recognition applications". Facta Universitatis - Series: Electronics and Energetics 34, no. 4 (2021): 557–67. http://dx.doi.org/10.2298/fuee2104557r.

In this paper we present features that may be used in person gait recognition applications. Gait recognition is an interesting way of identifying people. During a gait cycle, each person creates unique patterns that can be used for identification. Also, gait recognition methods ordinarily do not need interaction with a person, which is their main advantage. Features used in gait recognition methods can be obtained with widely available RGB and RGB-D cameras. In this paper we present two features that are suitable for use in gait recognition applications: the height of a person and the step length of a person. They were extracted from depth images obtained from an RGB-D camera. For experimental purposes, we used a custom dataset created in an outdoor environment using a long-range stereo camera.
25. Zhang, Jianbo, Qun Yin, Duan Peng-Fei, and Meisu Yin. "Student Attendance Analysis and Statistics Platform based on Capture Recognition Technology". Electrotehnica, Electronica, Automatica 70, no. 1 (March 15, 2022): 85–94. http://dx.doi.org/10.46904/eea.22.70.1.1108009.

With the development of face recognition technology and HD cameras, it has become possible to use face recognition for classroom attendance statistics. The traditional way of taking classroom attendance requires teachers to call the roll from the student list, whereas face recognition not only saves class time but also lightens the school's attendance bookkeeping burden. This paper realizes the face recognition system of an attendance analysis and statistics platform, which requires cameras and a main computer. In software, under the VS2017 development environment, it relies on OpenCV, the ArcSoft face recognition SDK, and a MySQL database to realize a real-time video-stream face recognition system programmed in C++. The system captures the student's face with the camera installed in the classroom, then extracts the face features using the SDK and compares them with the features in the database. When the comparison score exceeds the set threshold, the system writes the corresponding student's information and the completion time to a TXT file.
26. Betta, Giovanni, Domenico Capriglione, Mariella Corvino, Alberto Lavatelli, Consolatina Liguori, Paolo Sommella, and Emanuele Zappa. "Metrological characterization of 3D biometric face recognition systems in actual operating conditions". ACTA IMEKO 6, no. 1 (April 25, 2017): 33. http://dx.doi.org/10.21014/acta_imeko.v6i1.392.

Nowadays, face recognition systems are becoming widespread in many fields of application, from automatic user login for financial activities and access to restricted areas to surveillance for improving security in airports and railway stations, to cite a few. In such scenarios, architectures based on stereo vision and 3D reconstruction of the face are assuming a predominant role because they can generally assure better reliability than solutions based on a single camera (which use a single image instead of a pair of images). To realize such systems, different architectures can be considered by varying the positioning of the camera pair with respect to the face of the subject to be identified, as well as the kind and resolution of the cameras. These parameters can affect the system's correct decision rate in classifying the input face, especially in the presence of image uncertainty. In this paper, several 3D architectures differing in camera specifications and geometrical positioning of the camera pair (with respect to the input face) are realized and compared. Facial features are detected in the images by adopting a popular method based on the Active Appearance Model (AAM) algorithm; the 3D positions of the facial features are then obtained by means of stereo triangulation. The performance of the realized systems has been compared in terms of sensitivity to the quantities of influence and related uncertainty, and in terms of typical indexes for the analysis of classification systems. The main results of this comparison show that the best performance is reached by reducing the distance between the cameras and the subject to be identified and by minimizing the horizontal angle between the plane containing the camera pair axis and the face to be identified.
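
The stereo triangulation step mentioned in the abstract is commonly done with cv2.triangulatePoints. A minimal sketch follows; the projection matrices, baseline, and landmark pixel coordinates are placeholders, and the AAM landmark detection is omitted:

```python
# Triangulate one facial landmark seen by two calibrated cameras.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],     # assumed shared intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
t = np.array([[-0.1], [0.0], [0.0]])   # assumed 0.1 m baseline
P2 = K @ np.hstack([np.eye(3), t])

# The same landmark observed at slightly different columns in each view.
pts_left = np.array([[320.0], [240.0]])
pts_right = np.array([[300.0], [240.0]])

pts_4d = cv2.triangulatePoints(P1, P2, pts_left, pts_right)
pts_3d = (pts_4d[:3] / pts_4d[3]).ravel()   # homogeneous -> Euclidean
print(pts_3d)   # ~(0, 0, 3.5) m in the left-camera frame with these values
```
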
27. Son, Sungho, Woongsu Lee, Hyungi Jung, Jungki Lee, Charyung Kim, Hyunwoo Lee, Hyungwon Park et al. "Evaluation of Camera Recognition Performance under Blockage Using Virtual Test Drive Toolchain". Sensors 23, no. 19 (September 22, 2023): 8027. http://dx.doi.org/10.3390/s23198027.

This study is the first to develop technology to evaluate the object recognition performance of camera sensors, which are increasingly important in autonomous vehicles owing to their relatively low price, and to verify the efficiency of camera recognition algorithms in obstruction situations. To this end, the concentration and color of the blockage and the type and color of the object were set as major factors, with their effects on camera recognition performance analyzed using a camera simulator based on a virtual test drive toolkit. The results show that the blockage concentration has the largest impact on object recognition, followed in order by the object type, blockage color, and object color. As for the blockage color, black exhibited better recognition performance than gray and yellow. In addition, changes in the blockage color affected the recognition of object types, resulting in different responses to each object. Through this study, we propose a blockage-based camera recognition performance evaluation method using simulation, and we establish an algorithm evaluation environment for various manufacturers through an interface with an actual camera. By suggesting the necessity and timing of future camera lens cleaning, we provide manufacturers with technical measures to improve the cleaning timing and camera safety.
28. Satybaldina, D. Zh., N. S. Glazyrina, V. S. Stepanov, and K. A. Kalymova. "Development of a Python application for recognizing gestures from a video stream of RGB and RGBD cameras". Bulletin of L.N. Gumilyov Eurasian National University. Mathematics. Computer Science. Mechanics Series 136, no. 3 (2022): 6–17. http://dx.doi.org/10.32523/bulmathenu.2021/3.1.

Gesture recognition systems have changed a lot recently, due to the development of modern data capture devices (sensors) and new recognition algorithms. The article presents the results of a study on recognizing static and dynamic hand gestures in a video stream from RGB and RGB-D cameras, namely a Logitech HD Pro Webcam C920 webcam and an Intel RealSense D435 depth camera. The software implementation is done with Python 3.6 tools; open-source Python libraries provide robust implementations of image processing and segmentation algorithms. The feature extraction and gesture classification subsystem is based on the VGG-16 neural network architecture implemented using the TensorFlow and Keras deep learning frameworks. The technical characteristics of the cameras are given and the application's algorithm is described. Results comparing the data capture devices under various experimental conditions (distance and illumination) are presented; they show that the Intel RealSense D435 depth camera provides more accurate gesture recognition under the various experimental conditions.
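
A VGG-16-based classifier of the kind the abstract names can be sketched in TensorFlow/Keras as transfer learning on a frozen convolutional base. The class count, input size, and head layers are assumptions, not the authors' configuration:

```python
# VGG-16 backbone with ImageNet weights, frozen, plus a small gesture head.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_GESTURES = 10    # assumed number of gesture classes

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False                    # freeze the convolutional base

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```
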
29. Manoliu, Mitica-Valentin. "Biometric security: Recognition according to the pattern of palm veins". Scientific Bulletin of Naval Academy XXIII, no. 1 (July 15, 2020): 257–62. http://dx.doi.org/10.21279/1454-864x-20-i1-036.

Palm vein recognition is a promising new biometric method, which also has potential in the forensic field. The acquisition is performed using near-infrared (NIR) LEDs and a camera that captures the vein images. The obtained images contain noise as well as rotation and translation variations; therefore, the input image produced by the camera must be pre-processed with appropriate methods. A set of features is extracted from the images taken under infrared light and processed in order to make authentication possible. This whole process can be accomplished by several methods. The application can thus be used to improve the security of restricted areas on military ships, among other uses.
30. Nishikawa, Noboru, Masaki Onishi, Takuya Matsumoto, Masao Izumi, and Kunio Fukunaga. "Object Recognition Based on Camera Control". IEEJ Transactions on Electronics, Information and Systems 118, no. 2 (1998): 210–16. http://dx.doi.org/10.1541/ieejeiss1987.118.2_210.

31. Routray, Jyotirmayee, Sarthak Rout, Jiban Jyoti Panda, Bhabani Shankar Mohapatra, and Hitendrita Panda. "Hand Gesture Recognition using TOF camera". International Journal of Applied Engineering Research 16, no. 4 (June 15, 2021): 302. http://dx.doi.org/10.37622/ijaer/16.4.2021.302-307.

32. Kuznetsova, S. Yu., K. Zhigalov, and I. M. Daudov. "Camera testing technique for auto recognition". Journal of Physics: Conference Series 1582 (July 2020): 012058. http://dx.doi.org/10.1088/1742-6596/1582/1/012058.

33. Berstis, Viktors. "Digital camera with voice recognition annotation". Journal of the Acoustical Society of America 116, no. 3 (2004): 1332. http://dx.doi.org/10.1121/1.1809943.

34. Wu, Yi-Chang, Yao-Cheng Liu, and Ru-Yi Huang. "Real-time microreaction recognition system". IAES International Journal of Robotics and Automation (IJRA) 12, no. 2 (June 1, 2023): 157. http://dx.doi.org/10.11591/ijra.v12i2.pp157-166.

This study constructed a real-time microreaction recognition system that can give real-time assistance to investigators. Test results indicated that the number of frames per second (30 or 190), the angle of the camera (the front view of the interviewee, or the left (+45°) or right (−45°) view), and the image resolution (480 or 680 p) did not have major effects on the system's recognition ability. However, when the camera was placed at a distance of 300 cm, recognition did not always succeed. Value changes were larger when the camera was placed at an elevation of 45° than when it was placed directly in front of the person being interviewed. Within a specific distance, the recognition results of the proposed real-time microreaction recognition system concurred with the six reaction case videos. In practice, only the distance and height of the camera must be adjusted in the real-time microreaction recognition system.
35. Koo, Ja, Se Cho, Na Baek, Min Kim, and Kang Park. "CNN-Based Multimodal Human Recognition in Surveillance Environments". Sensors 18, no. 9 (September 11, 2018): 3040. http://dx.doi.org/10.3390/s18093040.

In the current field of human recognition, most research focuses on re-identification across body images taken by several cameras in an outdoor environment, while there is almost no research on indoor human recognition. Previous research on indoor recognition has mainly focused on face recognition, because the camera is usually closer to a person in an indoor environment than in an outdoor one. However, due to the nature of indoor surveillance cameras, which are installed near the ceiling and capture images from above in a downward direction, people do not look directly at the cameras in most cases. Thus, it is often difficult to capture frontal face images, and when this is the case, facial recognition accuracy is greatly reduced. To overcome this problem, we can consider using both the face and the body for human recognition. However, when images are captured by indoor rather than outdoor cameras, in many cases only part of the target body falls within the camera viewing angle, and capturing only part of the body reduces the accuracy of human recognition. To address all of these problems, this paper proposes a multimodal human recognition method that uses both the face and the body and is based on a deep convolutional neural network (CNN). Specifically, to handle partially captured bodies, the results of recognizing the face and the body through separate CNNs, VGG Face-16 and ResNet-50, are combined by score-level fusion with the weighted sum rule to improve recognition performance. The results of experiments conducted using the custom-made Dongguk face and body database (DFB-DB1) and the open ChokePoint database demonstrate that the proposed method achieves high recognition accuracy (equal error rates of 1.52% and 0.58%, respectively) in comparison to face or body single-modality recognition and other methods used in previous studies.
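
The score-level fusion by the weighted sum rule reduces to a one-line combination of normalized matcher scores. A minimal sketch, with an assumed face weight of 0.6:

```python
# Weighted-sum score-level fusion of a face CNN score and a body CNN score.
import numpy as np

def weighted_sum_fusion(face_score: float, body_score: float,
                        w_face: float = 0.6) -> float:
    """Fuse two match scores in [0, 1]; higher means more likely genuine."""
    return w_face * face_score + (1.0 - w_face) * body_score

def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    """Map raw matcher outputs to [0, 1] before fusion."""
    return (scores - scores.min()) / (scores.max() - scores.min())

face = min_max_normalize(np.array([0.2, 1.4, 0.9, 2.1]))
body = min_max_normalize(np.array([0.5, 0.8, 1.9, 1.1]))
fused = [weighted_sum_fusion(f, b) for f, b in zip(face, body)]
print(fused)   # threshold the fused scores for the accept/reject decision
```
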
36. Bharathi, M., N. Padmaja, and M. Dharani. "OCR-based vehicle number plate recognition powered by a raspberry Pi". i-manager’s Journal on Electronics Engineering 12, no. 3 (2022): 33. http://dx.doi.org/10.26634/jele.12.3.18959.

Modern technology has revolutionized automation, and security is a high priority as automation increases. Today, to help people feel safe, video surveillance cameras are installed in public places like schools, hospitals, and other buildings. The main goal of this work is to automatically collect vehicle images with a camera using a Raspberry Pi and recognize the license plates of the vehicles. Vehicle number plate recognition is a challenging but crucial task; it is highly helpful for automating toll booths and identifying signal violators and other traffic regulation violators. In this work, a Raspberry Pi is used for vehicle license plate recognition, applying image processing to automatically recognize license plates. Incoming camera footage is continuously processed by the system to look for any signs of number plates. When the camera detects a number plate, an Optical Character Recognition (OCR) technique is used to process the image and extract the number from it. A sensor using sound waves calculates the distance to the object. The extracted number is then displayed by the system and can be used for additional authentication.
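
The OCR step can be sketched with OpenCV preprocessing plus Tesseract (via the pytesseract wrapper). Plate localization is reduced here to a hard-coded crop, which is a placeholder for a real detector:

```python
# Read the characters of a license plate from a cropped image region.
import cv2
import pytesseract

frame = cv2.imread("vehicle.jpg")
plate = frame[300:360, 200:440]            # placeholder plate region

gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)       # denoise, keep edges
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# --psm 7 treats the crop as a single text line, suitable for plates.
text = pytesseract.image_to_string(binary, config="--psm 7")
print("plate:", text.strip())
```
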
37. Jang, Youjin, Inbae Jeong, Moein Younesi Heravi, Sajib Sarkar, Hyunkyu Shin, and Yonghan Ahn. "Multi-Camera-Based Human Activity Recognition for Human–Robot Collaboration in Construction". Sensors 23, no. 15 (August 7, 2023): 6997. http://dx.doi.org/10.3390/s23156997.

As the use of construction robots continues to increase, ensuring safety and productivity while working alongside human workers becomes crucial. To prevent collisions, robots must recognize human behavior in close proximity. However, single RGB or RGB-depth cameras have limitations, such as detection failure, sensor malfunction, occlusions, unconstrained lighting, and motion blur. Therefore, this study proposes a multiple-camera approach for human activity recognition during human–robot collaborative activities in construction. The proposed approach employs a particle filter to estimate the 3D human pose by fusing 2D joint locations extracted from multiple cameras, and applies a long short-term memory (LSTM) network to recognize ten activities associated with human–robot collaboration tasks in construction. The study compared the performance of human activity recognition models using one, two, three, and four cameras. Results showed that using multiple cameras enhances recognition performance, providing a more accurate and reliable means of identifying and differentiating between various activities. The results of this study are expected to contribute to the advancement of human activity recognition and its utilization in human–robot collaboration in construction.
38. Alsadik, Bashar, Luuk Spreeuwers, Farzaneh Dadrass Javan, and Nahuel Manterola. "Mathematical Camera Array Optimization for Face 3D Modeling Application". Sensors 23, no. 24 (December 12, 2023): 9776. http://dx.doi.org/10.3390/s23249776.

Camera network design is a challenging task for many applications in photogrammetry, biomedical engineering, robotics, and industrial metrology, among other fields. Many driving factors are found in camera network design, including the camera specifications, the object of interest, and the type of application. One interesting application is 3D face modeling and recognition, which involves recognizing an individual based on facial attributes derived from the constructed 3D model. Developers and researchers still face difficulty in reaching the required high level of accuracy and reliability needed for image-based 3D face models. This is caused, among many factors, by hardware limitations and imperfections of the cameras and by a lack of proficiency in designing the ideal camera-system configuration. Accordingly, for precise measurements, engineering-based techniques are still needed to ascertain the specific level of deliverables quality. In this paper, an optimal geometric design methodology for the camera network is presented by investigating different multi-camera system configurations composed of four to eight cameras. A mathematical nonlinear constrained optimization technique is applied to solve the problem, and each camera system configuration is tested for a facial 3D model, with a quality assessment applied to conclude the best configuration. The optimal configuration is found to be a 7-camera array, comprising a pentagon shape enclosing two additional cameras, offering high accuracy. For those who prioritize point density, a 9-camera array with a pentagon and quadrilateral arrangement in the X-Z plane is a viable choice. However, a 5-camera array offers a balance between accuracy and the number of cameras.
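
A toy version of such nonlinear constrained placement optimization can be written with SciPy. The objective below, which favors well-spread viewing rays on a ring around the face, is an illustrative proxy, not the paper's quality model:

```python
# Place N cameras on a half ring and maximize a proxy for triangulation
# strength, subject to a minimum angular separation between neighbors.
import numpy as np
from scipy.optimize import minimize

N = 5                       # number of cameras (assumption)
R = 1.0                     # ring radius in meters (assumption)

def objective(angles):
    # Prefer well-spread rays: stronger intersection geometry.
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    return -sum(np.sin(angles[j] - angles[i]) ** 2 for i, j in pairs)

# Require at least 10 degrees between angularly adjacent cameras.
cons = [{"type": "ineq",
         "fun": lambda a, i=i: a[i + 1] - a[i] - np.deg2rad(10)}
        for i in range(N - 1)]

x0 = np.linspace(0.2, np.pi - 0.2, N)   # initial guesses on the half ring
res = minimize(objective, x0, constraints=cons,
               bounds=[(0, np.pi)] * N, method="SLSQP")
positions = R * np.stack([np.cos(res.x), np.sin(res.x)], axis=1)
print(positions)            # optimized camera positions in the X-Z plane
```
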
39. B L, Sunil Kumar, and Sharmila Kumari M. "RGB-D FACE RECOGNITION USING LBP-DCT ALGORITHM". Applied Computer Science 17, no. 3 (September 30, 2021): 73–81. http://dx.doi.org/10.35784/acs-2021-22.

Face recognition is one of the applications of image processing that recognizes or verifies an individual's identity. 2D images are commonly used to identify the face, but the problem is that this kind of image is very sensitive to changes in lighting and viewing angle. Images captured by 3D cameras and stereo cameras can also be used for recognition, but fairly long processing times are needed. The RGB-D images that the Kinect produces are used as a new, alternative approach to 3D images; such cameras cost less and can be used in any situation and any environment. This paper studies the performance of face recognition algorithms using RGB-D images. These algorithms compute a descriptor over the RGB and depth-map faces based on the local binary pattern (LBP). The images are also tested with a fusion of the LBP and DCT methods, which produced a recognition rate of 97.5% in the experiments.
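
An LBP + DCT descriptor of the kind evaluated here can be sketched with scikit-image and SciPy: an LBP histogram from the RGB face concatenated with low-frequency DCT coefficients of the depth map. Patch size and coefficient count are assumptions:

```python
# LBP histogram (texture) + low-frequency 2D DCT block (depth shape),
# concatenated into one face descriptor.
import numpy as np
from skimage.feature import local_binary_pattern
from scipy.fftpack import dct

def lbp_histogram(gray_face: np.ndarray, P: int = 8, R: int = 1):
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def dct_features(depth_face: np.ndarray, k: int = 8):
    # 2D DCT; keep the k x k low-frequency block (most of the face energy).
    coeffs = dct(dct(depth_face.astype(float), axis=0, norm="ortho"),
                 axis=1, norm="ortho")
    return coeffs[:k, :k].ravel()

# Stand-ins for aligned face crops from the RGB and depth channels.
rgb_gray = (np.random.rand(96, 96) * 255).astype(np.uint8)
depth = (np.random.rand(96, 96) * 255).astype(np.uint8)
descriptor = np.concatenate([lbp_histogram(rgb_gray), dct_features(depth)])
print(descriptor.shape)     # feed this vector to a classifier / matcher
```
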
40. Chaudhari, V. J. "Currency Recognition App". International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 10, 2021): 435–37. http://dx.doi.org/10.22214/ijraset.2021.34982.

Visually impaired and foreign people face a great number of problems in performing daily activities. Visually impaired people, who have vision impairment or vision loss, also face many difficulties in monetary transactions: they are unable to recognize paper currencies due to the similarity of paper texture and size between denominations. This money detector app helps visually impaired users recognize and detect money. Using this application, a blind person can give a spoken command to open the smartphone camera; the camera then takes a picture of the note, and the app tells the user by speech what the note's value is. This Android project uses speech-to-text conversion to interpret the command given by the blind user; speech recognition is a technology that allows users to provide spoken input to systems. The application uses the text-to-speech concept to read the value of the note to the user, converting the text value into speech. For currency detection, the application uses the Azure Custom Vision API with a machine-learning classification technique to detect currency from images captured by the mobile camera.
41. Varga, Jozef, and Marek Sukop. "Simple Algorithm for Patterns Recognition". Applied Mechanics and Materials 844 (July 2016): 75–78. http://dx.doi.org/10.4028/www.scientific.net/amm.844.75.

This article describes a pattern recognition algorithm for an application involving a dual-arm robot, an Android device, and a camera system. First, an Android application was created to receive information from a computer via Bluetooth. The computer is used for image processing from an external camera; it then sends the dice image to the Android device, which shows the dice score on screen.
42. Guan, Haike, Naoki Motohashi, Takashi Maki, and Toshifumi Yamaai. "Cattle Identification and Activity Recognition by Surveillance Camera". Electronic Imaging 2020, no. 12 (January 26, 2020): 174–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.12.fais-174.

On a cattle farm, it is important to monitor the activity of cattle to know their health condition and prevent accidents. Conventional methods used sensors to recognize cattle activity, but attaching sensors to the animals may cause stress. Cameras have also been used to recognize cattle activity, but identifying individual cattle is difficult because they have similar appearances, especially black or brown cattle. We propose a new method to identify cattle and recognize their activity with a surveillance camera. The cattle are first recognized by a CNN deep learning method; the face and body areas of the cattle, as well as sitting and standing states, are recognized separately at the same time. Day and night image samples were collected to train the model so that it recognizes cattle around the clock. Among the recognized cattle, initial ID numbers are set in the first frame of the video to identify each animal; particle filter object tracking is then used to track the cattle. Combining the cattle recognition and tracking results, the ID numbers of the cattle are kept across the following frames of the video. Cattle activity is recognized using multiple frames of the video: in the face and body areas of the cattle, active or static activities are recognized, and the activity times for these areas are output as the cattle activity recognition results. Cattle identification and activity recognition experiments were conducted on a cattle farm with wide-angle surveillance cameras. The evaluation results demonstrate the effectiveness of our proposed method.
43. Wang, Zhiyi. "Designing a dual-camera highway monitoring system based on high spatiotemporal resolution using neural networks". Applied and Computational Engineering 31, no. 1 (January 22, 2024): 139–49. http://dx.doi.org/10.54254/2755-2721/31/20230137.

The criticality of infrastructure to societal development has seen highways evolve into an essential component of this ecosystem. Within this, the camera system has assumed significant importance due to the necessity for monitoring, evidence collection, and danger detection. However, the current standard of using high frame rate and high-resolution (HSR-HFR) cameras presents substantial costs associated with installation and data storage. This project, therefore, proposes a solution in the form of a High Spatiotemporal Resolution process applied to dual-camera videos. After evaluating state-of-the-art methodologies, this project develops a dual-camera system designed to merge frames from a high-resolution, low frame rate (HSR-LFR) camera with a high frame rate, low-resolution (LSR-HFR) camera. The result is a high-resolution, high frame rate video that effectively optimizes costs. The system pre-processes data using frame extraction and a histogram equalization method, followed by video processing with a neural network. Further refinement of the footage is performed via color adjustment and sharpening prior to a specific application, which in this case is license plate recognition. The system employs YOLOv5 in conjunction with LPRNet for license plate recognition. The resulting outputs demonstrate significant improvement in both clarity and accuracy, providing a more cost-effective solution for highway monitoring systems.
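
The plate-detection stage can be sketched with YOLOv5 loaded through torch.hub. The generic COCO-pretrained weights used below are a stand-in for the paper's custom-trained detector, and the LPRNet character-recognition stage is omitted:

```python
# Detect objects in a processed highway frame with a pretrained YOLOv5
# model; each detected box would then be cropped and passed to an
# LPRNet-style recognizer to read the plate characters.
import torch

# Load a pretrained YOLOv5 model (downloads weights on first use).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4                        # confidence threshold

results = model("highway_frame.jpg")    # inference on one fused frame
detections = results.pandas().xyxy[0]   # DataFrame: xmin..ymax, conf, name
print(detections[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```
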
44. Salánki, Dániel, and Kornél Sarvajcz. "Development of a Gait Recognition System in NI LabVIEW Programming Language". Műszaki Tudományos Közlemények 11, no. 1 (October 1, 2019): 167–70. http://dx.doi.org/10.33894/mtk-2019.11.37.

Abstract: Nowadays, the world of biometric identifiers is one of the most rapidly developing security technology areas. Within biometric identification, the research team worked in the area of gait recognition. The team developed a complex gait recognition system in the NI LabVIEW environment that can detect multiple simultaneous reference points using a universal camera and is capable of fitting a predetermined curve to the collected samples. In the first version, real-time processing was done with a single camera, while in the second, two high-resolution cameras work with post-processing. The program compares and evaluates the functions fitted to the reference curve and the current curve to decide whether two gait patterns are identical. The self-developed gait recognition system was tested on several test subjects, and according to the results, the False Acceptance Rate was zero.
45. Wang, Lei, Kun Zhang, Jian Kang, Meng Peng, and Jiayu Xu. "Personnel identification and distribution density analysis of subway station based on convolution neural network". ITM Web of Conferences 47 (2022): 02036. http://dx.doi.org/10.1051/itmconf/20224702036.

In this paper, a method based on a convolutional neural network and multi-camera fusion is proposed to improve crowd recognition accuracy, and the personnel distribution on a subway station platform is then analyzed. In this method, TensorFlow is used as the deep learning training framework, and the YOLOv4 neural network algorithm identifies people on the platform from three synchronized videos. Through affine transformation and time-averaged statistics, the passenger density of each sub-area is calculated and the distribution of personnel density over the whole area is analyzed. The results show that the number of people recognized by multiple cameras is 58% higher than that recognized by a single camera. The new recognition method has a high recognition rate for real scenes with large crowds and many obstacles. Finally, some areas with a high risk of crowding were found, which should be the focus of safety monitoring.
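
The multi-camera fusion step the abstract mentions, mapping per-camera detections into one floor-plan frame via an affine transform, can be sketched with OpenCV. The three point correspondences below are placeholders for a real per-camera calibration:

```python
# Map detections from one camera's image plane into floor-plan coordinates
# with an affine transform estimated from three reference points.
import cv2
import numpy as np

# Three image points and their known floor-plan positions (placeholders).
img_pts = np.float32([[100, 400], [540, 410], [320, 120]])
plan_pts = np.float32([[0.0, 0.0], [8.0, 0.0], [4.0, 12.0]])
A = cv2.getAffineTransform(img_pts, plan_pts)     # 2x3 affine matrix

def to_floor_plan(detections_xy: np.ndarray) -> np.ndarray:
    """Map Nx2 image coordinates (e.g., box feet) to floor-plan meters."""
    pts = np.hstack([detections_xy, np.ones((len(detections_xy), 1))])
    return pts @ A.T

feet = np.array([[320.0, 380.0], [180.0, 300.0]])  # detected person feet
print(to_floor_plan(feet))  # merge across cameras, then bin into densities
```
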
APA, Harvard, Vancouver, ISO, and other styles
46

Arfan, M., Ahmad Nurjalal, Maman Somantri and Sudjadi. "Pengenalan Aktivitas Manusia pada Area Tambak Udang dengan Convolutional Neural Network". Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 5, no. 1 (February 28, 2021): 174–79. http://dx.doi.org/10.29207/resti.v5i1.2888.

Full text
Abstract
Theft is a problem that harms its victims. It usually occurs at night, when goods at a location are unsupervised. To deter theft and monitor conditions at a location, CCTV (Closed-Circuit Television) cameras can be used; however, CCTV systems typically serve only as passive monitoring. In this paper, a human activity recognition system is designed using CCTV cameras to produce an active security system. The inputs to the recognition process are videos obtained from CCTV cameras installed at a shrimp pond. The recognition model used in this study is a Convolutional Neural Network. Before activity recognition is carried out, the program first detects humans with the YOLO (You Only Look Once) algorithm and tracks them with the SORT (Simple Online and Realtime Tracking) algorithm. The output of the activity recognition is a class label for each tracked human object.
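The detect-then-track stage can be illustrated with a simplified tracker. SORT itself associates detections with Kalman-filter predictions through the Hungarian algorithm; the sketch below substitutes a greedy IoU matcher to keep the example self-contained, and all bounding boxes are hypothetical.

    def iou(a, b):
        # Intersection-over-union of two [x1, y1, x2, y2] boxes.
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
        return inter / union if union > 0 else 0.0

    tracks, next_id = {}, 0  # track_id -> last seen box

    def update_tracks(detections, thresh=0.3):
        # Greedy matching: each detection joins its best-overlapping track
        # or starts a new one (SORT adds Kalman prediction + Hungarian matching).
        global tracks, next_id
        assigned = {}
        for det in detections:
            best = max(tracks.items(), key=lambda kv: iou(kv[1], det), default=None)
            if best is not None and iou(best[1], det) >= thresh:
                assigned[best[0]] = det
            else:
                assigned[next_id] = det
                next_id += 1
        tracks = assigned
        return assigned

    print(update_tracks([[0, 0, 50, 100]]))   # frame 1: new track 0
    print(update_tracks([[5, 2, 55, 102]]))   # frame 2: matched to track 0

Each persisting track's image crops would then be passed to the CNN to obtain an activity label.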
APA, Harvard, Vancouver, ISO, and other styles
47

Ozbaran, Yavuz and Serkan Tasgin. "Using cameras of automatic number plate recognition system for seat belt enforcement a case study of Sanliurfa (Turkey)". Policing: An International Journal 42, no. 4 (August 12, 2019): 688–700. http://dx.doi.org/10.1108/pijpsm-07-2018-0093.

Full text
Abstract
Purpose The purpose of this paper is to quantify the effect on seat belt use of enforcement carried out with ANPR cameras. Although the Seat Belt Act was enacted in 1992, it did not lead to the expected increase in seat belt use in Turkey, including Sanliurfa, one of the largest provinces, with a population of over 2 million. The Sanliurfa Police Department set up an enforcement campaign in which automatic number plate recognition (ANPR) cameras were used to increase seat belt use in the city center. Under police leadership, the seat belt enforcement campaign was heavily publicized and sustained throughout the city. Design/methodology/approach The ANPR cameras could not detect seat belt wearing automatically. This study therefore tested whether automated plate recognition cameras have a deterrent effect on seat belt use. To assess the efficacy of the enforcement project, the authors employed a pre/post-implementation design, using the records of 11 ANPR camera sites, 2 non-camera sites and 2 control sites. Findings The results revealed that the seat belt use rate was around 8 percent before camera enforcement in Sanliurfa. Overall increases were 12 percent during the warning period, 60 percent at the beginning of enforcement and 78 percent three months after enforcement began at camera sites. One-way ANOVA results suggested that the differences between mean seat belt use counts were statistically significant, F(3, 61,596) = 15,456, p = 0.000. Research limitations/implications The findings suggest several reasons for the substantial increase in the seat belt use rate. First, the cameras deterred drivers, who knew the offense had become readily observable via camera detection at intersections and did not want to be penalized. Second, well-organized publicity for the cameras contributed significantly to the effectiveness of the enforcement by increasing the perceived risk of detection. Finally, the sudden increase in seat belt use is attributed to the red-light cameras already in use in Sanliurfa: drivers' prior experience with camera enforcement produced the rapid decrease in seat belt violations during the warning period. Practical implications Using cameras (automatic or not) for seat belt enforcement, and publicizing that enforcement, can help save resources and lives. Originality/value The authors found many news reports about similar enforcement on the internet but no study in the literature showing whether such enforcement produces an effective result. This is therefore the first study in Turkey, and perhaps in the world, to evaluate whether ANPR cameras can support effective seat belt enforcement. The study also indicated that traffic violations that cameras cannot detect automatically, such as cell phone use and smoking in a vehicle, can be effectively enforced with non-automatic cameras. The study should therefore contribute to policing and the traffic safety literature.
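A one-way ANOVA of this kind is straightforward to reproduce with SciPy; the daily counts below are hypothetical stand-ins for the study's four observation periods, not the paper's data.

    from scipy import stats

    # Hypothetical daily seat-belt-use counts per observation period.
    pre      = [80, 75, 82, 78]
    warning  = [120, 118, 125, 122]
    initial  = [600, 590, 610, 605]
    after_3m = [780, 770, 790, 785]

    f_stat, p_value = stats.f_oneway(pre, warning, initial, after_3m)
    print(f"F = {f_stat:.1f}, p = {p_value:.2g}")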
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Xiaoning. "Evaluation of Motion Standard Based on Kinect Human Bone and Joint Data Acquisition". Wireless Communications and Mobile Computing 2022 (August 31, 2022): 1–10. http://dx.doi.org/10.1155/2022/7624968.

Full text
Abstract
In order to improve human bone and joint data, we propose a method to collect the data and judge whether a motion is standard. Kinect is a 3D somatosensory camera released by Microsoft. Its front face carries three optical components: in the middle, a color camera that captures color images at 30 frames per second; on the left, an infrared projector that casts a speckle pattern onto objects; and on the right, a depth camera that reads the infrared pattern to measure depth and the relative positions of people. The Kinect also carries a four-element linear microphone array for speech recognition and background-noise filtering, which can localize the sound source, and a base with a built-in motor that adjusts the elevation angle. The device can therefore both capture color images and measure the depth of objects. In the experiments, we use the MSRAction3D dataset and the same cross-validation protocol as other recent methods. The proposed method (algorithm 10) achieves the second-highest top recognition rate and the highest minimum and average recognition rates; the improvement in the minimum recognition rate is especially clear, showing that the method offers good recognition performance and better stability than the other methods. Kinect plays an important role in collecting human bone and joint motion data.
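Given a skeleton frame of 3D joint positions such as Kinect provides, a simple building block for judging whether a motion is standard is the angle at a joint; the sketch below computes one with NumPy, using made-up coordinates.

    import numpy as np

    def joint_angle(a, b, c):
        # Angle at joint b (degrees) formed by joints a-b-c, e.g. the elbow.
        v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # Hypothetical 3D joint positions in meters from one skeleton frame.
    shoulder, elbow, wrist = [0.0, 1.4, 2.0], [0.25, 1.15, 2.0], [0.45, 1.35, 2.0]
    print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")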
APA, Harvard, Vancouver, ISO, and other styles
49

Sheu, Ruey-Kai, Mayuresh Pardeshi, Lun-Chi Chen and Shyan-Ming Yuan. "STAM-CCF: Suspicious Tracking Across Multiple Camera Based on Correlation Filters". Sensors 19, no. 13 (July 9, 2019): 3016. http://dx.doi.org/10.3390/s19133016.

Full text
Abstract
There is strong demand for real-time suspicious-person tracking across multiple cameras in intelligent video surveillance of public areas such as universities, airports and factories. Most criminal events show that suspicious behavior is carried out by unknown people who try to hide themselves as much as possible. Previous learning-based studies collected large data sets to train models to detect humans across multiple cameras but failed to recognize newcomers. Several feature-based studies have aimed to identify humans in within-camera tracking, but it is very difficult for those methods to obtain the necessary feature information in multi-camera scenarios and scenes. The purpose of this study is to design and implement a suspicious-tracking mechanism across multiple cameras based on correlation filters, called suspicious tracking across multiple cameras based on correlation filters (STAM-CCF). By leveraging the geographical information of the cameras and the YOLO object detection framework, STAM-CCF adjusts human identification and prevents errors caused by information loss from object occlusion and overlap in within-camera tracking. STAM-CCF also introduces a camera correlation model and a two-stage gait recognition strategy to address re-identification across multiple cameras. Experimental results show that the proposed method performs well with highly acceptable accuracy. The evidence also shows that STAM-CCF can continuously recognize suspicious behavior within a single camera and successfully re-identify it across multiple cameras.
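The correlation-filter core of such trackers can be sketched in a few lines: train a filter on a target template by ridge regression in the Fourier domain (the MOSSE formulation) and locate the target in the next frame at the peak of the correlation response. The data here are synthetic, and this single-filter toy is not STAM-CCF itself.

    import numpy as np

    rng = np.random.default_rng(0)
    template = rng.random((64, 64))  # hypothetical target appearance patch

    # Desired correlation output: a sharp Gaussian peak at the patch center.
    ys, xs = np.mgrid[0:64, 0:64]
    g = np.exp(-((xs - 32) ** 2 + (ys - 32) ** 2) / (2 * 2.0 ** 2))

    # Filter solved in the Fourier domain, with a small regularizer.
    F, G = np.fft.fft2(template), np.fft.fft2(g)
    H = (G * np.conj(F)) / (F * np.conj(F) + 1e-3)

    # New frame: the target circularly shifted by (5, 3) pixels.
    patch = np.roll(template, (5, 3), axis=(0, 1))
    response = np.real(np.fft.ifft2(np.fft.fft2(patch) * H))

    # The response peak, offset from the center, recovers the motion.
    peak = tuple(int(i) for i in np.unravel_index(response.argmax(), response.shape))
    print("peak:", peak, "-> shift:", (peak[0] - 32, peak[1] - 32))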
APA, Harvard, Vancouver, ISO, and other styles
50

Davani, Sina G., Musab S. Al-Hadrusi and Nabil J. Sarhan. "An Autonomous System for Efficient Control of PTZ Cameras". ACM Transactions on Autonomous and Adaptive Systems 16, no. 2 (June 30, 2021): 1–22. http://dx.doi.org/10.1145/3507658.

Full text
Abstract
This article addresses the research problem of how to autonomously control Pan/Tilt/Zoom (PTZ) cameras so as to optimize face recognition accuracy or overall threat detection, and proposes an overall system. The article presents two alternative schemes for camera scheduling: Grid-Based Grouping (GBG) and Elevator-Based Planning (EBP). The camera control works with realistic 3D environments and considers many factors, including the direction of the subject's movement and its location, distances from the cameras, occlusion, the overall recognition probability so far, and the expected time to leave the site, as well as the movements of the cameras and their capabilities and limitations. In addition, the article uses clustering to group subjects, enabling the system to focus on the more densely populated areas. It also proposes a dynamic mechanism for controlling the pre-recording time spent running the solution, and develops a parallel algorithm that allows the most time-consuming phases to be parallelized and thus run efficiently on the centralized parallel-processing subsystem. We analyze through simulation the effectiveness of the overall solution, including the clustering approach, the scheduling alternatives, the dynamic mechanism, and the parallel implementation, in terms of overall recognition probability and the running time of the solution, considering the impact of numerous parameters.
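The subject-clustering step can be illustrated with k-means over ground-plane positions, one cluster per available PTZ camera; the positions, group layout and two-camera setup below are all hypothetical, and scikit-learn is assumed.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)

    # Hypothetical ground-plane positions (meters) of tracked subjects.
    subjects = np.vstack([
        rng.normal([5.0, 5.0], 1.0, (12, 2)),   # dense group near (5, 5)
        rng.normal([20.0, 8.0], 1.5, (6, 2)),   # smaller group near (20, 8)
    ])

    # One cluster per PTZ camera, so each camera covers a dense group.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(subjects)
    for cam, center in enumerate(km.cluster_centers_):
        size = int(np.sum(km.labels_ == cam))
        print(f"camera {cam}: aim near {center.round(1)}, {size} subjects")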
APA, Harvard, Vancouver, ISO, and other styles