Journal articles on the topic "Camera monitoring"

To view other types of publications on this topic, follow the link: Camera monitoring.

Format your sources in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "Camera monitoring".

Next to every work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Garland, Laura, Andrew Crosby, Richard Hedley, Stan Boutin, and Erin Bayne. "Acoustic vs. photographic monitoring of gray wolves (Canis lupus): a methodological comparison of two passive monitoring techniques." Canadian Journal of Zoology 98, no. 3 (March 2020): 219–28. http://dx.doi.org/10.1139/cjz-2019-0081.

Abstract:
Remote camera traps are often used in large-mammal research and monitoring programs because they are cost-effective, allow for repeat surveys, and can be deployed for long time periods. Statistical advancements in calculating population densities from camera-trap data have increased the popularity of camera usage in mammal studies. However, drawbacks to camera traps include their limited sampling area and tendency for animals to notice the devices. In contrast, autonomous recording units (ARUs) record the sounds of animals with a much larger sampling area but are dependent on animals producing detectable vocalizations. In this study, we compared estimates of occupancy and detectability between ARUs and remote cameras for gray wolves (Canis lupus Linnaeus, 1758) in northern Alberta, Canada. We found ARUs to be comparable with cameras in their detectability and occupancy of wolves, despite only operating for 3% of the time that cameras were active. However, combining cameras and ARUs resulted in the highest detection probabilities for wolves. These advances in survey technology and statistical methods provide innovative avenues for large-mammal monitoring that, when combined, can be applied to a broad spectrum of conservation and management questions, provided assumptions for these methods are rigorously tested and met.
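The finding that combining cameras and ARUs yields the highest detection probability follows from treating the two methods as approximately independent detectors. A minimal sketch of that relation, with illustrative probabilities that are not taken from the study:

```python
def combined_detection(p_camera: float, p_aru: float) -> float:
    """Probability that at least one of two independent survey
    methods detects the species during a survey occasion."""
    return 1.0 - (1.0 - p_camera) * (1.0 - p_aru)

# Illustrative values only (not from the study):
print(combined_detection(0.30, 0.25))  # 0.475
```

Even two mediocre detectors combine into a noticeably better one, which is consistent with the combined camera-plus-ARU design detecting wolves best.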
2

Zhang, Ying, Jitao Bai, Yu Diao, Zhonghao Chen, Chu Wang, Kun Yang, Zeng Gao, and Huajie Wei. "Risk and Energy Based Optimization for Fire Monitoring System in Utility Tunnel Using Cellular Automata." Sustainability 16, no. 11 (June 1, 2024): 4717. http://dx.doi.org/10.3390/su16114717.

Abstract:
Fire is one of the biggest threats to the safety of utility tunnels, and camera-based monitoring systems are conducive to early fire detection and a better understanding of how tunnel fires evolve. However, conventional monitoring systems face the challenge of high energy consumption. In this paper, camera operation in a utility tunnel was optimized considering both fire risk and energy consumption. Three design variables were investigated: the camera sight, the number of cameras in simultaneous operation, and the duration of camera operation. Cellular automata were used as a simple but effective method to simulate the spread of fire in a utility tunnel. Results show that as the number of cameras in simultaneous operation increases, the probability of fire capture increases while the energy consumption decreases. A shorter duration of camera operation leads to a higher probability of fire capture and, at the same time, lower energy consumption. When the duration of camera operation is shorter than or equal to the allowable time, the probability of fire capture is significantly higher than when it is longer than the allowable time. Increasing the camera sight significantly increases the probability of fire capture and lowers the total energy consumption when a blind monitoring area exists. The total energy consumption of a camera-based monitoring system roughly follows a hyperbolic relationship with the duration of camera operation, while the probability of fire capture can be predicted from the number of cameras in simultaneous operation through a power model. The optimal design for the modeled tunnel section is two cameras in simultaneous operation with a tangent monitoring area, with the duration of camera operation kept as short as possible, and at least shorter than the allowable time. The study is expected to provide a reference for the sustainable design of energy-saving utility tunnels with lower fire risk.
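The paper's simulation engine is a cellular automaton, but its exact rules are not given in the abstract. The following is therefore only a generic sketch of probabilistic fire spread on a one-dimensional tunnel grid; the grid size, spread probability, and ignition point are all assumptions:

```python
import random

def burned_fraction(length=101, steps=20, p_spread=0.5, seed=0):
    """1-D cellular automaton: each burning cell ignites each
    neighbour with probability p_spread per step. Returns the
    fraction of cells burning after `steps` steps."""
    rng = random.Random(seed)
    cells = [False] * length
    cells[length // 2] = True            # ignition at the tunnel midpoint
    for _ in range(steps):
        nxt = cells[:]
        for i, burning in enumerate(cells):
            if not burning:
                continue
            for j in (i - 1, i + 1):     # spread to left/right neighbours
                if 0 <= j < length and rng.random() < p_spread:
                    nxt[j] = True
        cells = nxt
    return sum(cells) / length
```

A camera "captures" the fire once a burning cell enters its field of view; sweeping the number of active cameras and their duty cycle over many such runs is the kind of experiment the paper's risk/energy optimization performs.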
3

Orisa, Mira, Karina Auliasari, and Rofila El Maghfiroh. "TEKNOLOGI MOTION-BASED TRACKING UNTUK PENGEMBANGAN APLIKASI KEAMANAN." Jurnal Teknologi Informasi dan Terapan 4, no. 2 (April 1, 2019): 119–24. http://dx.doi.org/10.25047/jtit.v4i2.69.

Abstract:
Surveillance camera systems are widely used by the public, most commonly in the form of CCTV cameras. In general, CCTV cameras can only record video, so security monitoring by CCTV is effective only if an operator watches the feed directly on a monitor. A surveillance camera system can, however, be programmed to give a warning sign to the user. The surveillance camera used in this study is an IP camera, which can be programmed to send notifications to the user. By implementing motion-based tracking technology, the surveillance camera system can detect movement. The Kalman filter is one motion-based tracking method: it can predict the movements recorded by the IP camera. The results of this study show that the surveillance camera system can deliver notification messages to users via an Android device when the camera records the movement of human objects.
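The abstract names the Kalman filter as its motion-based tracking method. As a hedged illustration, here is an alpha-beta filter, a fixed-gain simplification of the Kalman filter for constant-velocity tracking; the 1-D state and the gain values are assumptions, not the paper's implementation:

```python
def alpha_beta_track(zs, alpha=0.5, beta=0.1, dt=1.0):
    """Fixed-gain (alpha-beta) tracker, a simplified Kalman filter.
    zs: noisy position measurements; returns filtered positions."""
    x, v = zs[0], 0.0                   # state: position and velocity
    xs = []
    for z in zs[1:]:
        x_pred = x + v * dt             # predict next position
        residual = z - x_pred           # innovation (measurement minus prediction)
        x = x_pred + alpha * residual   # correct position
        v = v + (beta / dt) * residual  # correct velocity
        xs.append(x)
    return xs
```

In a surveillance pipeline the predicted position gates the search window for the moving object in the next frame; a full Kalman filter additionally adapts the gains from the process and measurement noise covariances.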
4

Ali, S. Y., O. Al-Saleh, and P. A. Koushki. "Effectiveness of Automated Speed-Monitoring Cameras in Kuwait." Transportation Research Record: Journal of the Transportation Research Board 1595, no. 1 (January 1997): 20–26. http://dx.doi.org/10.3141/1595-04.

Abstract:
In 1994 the General Traffic Department installed automatic radar cameras to monitor traffic speed at a number of strategic roadway locations in Kuwait. The aim was to lower the number of high-speed violations and thus reduce road accidents. Recent traffic safety records point to an increase in both the number of violations and the occurrence of road accidents. It is argued in this paper that without live enforcement support and active follow-up of camera-recorded violations, the effectiveness of these cameras in improving road safety is insignificant at best, particularly in the undisciplined driving environment of the oil-rich nations in the Middle East. Traffic speed was measured simultaneously via radar instruments at the automatic camera site and at sections approximately 1 km before and/or after the cameras at eight camera locations. Measurements were recorded for six 1/2-hr periods at each site for a total of 72 hr over a period of 3 months, so that morning, afternoon, and after-dark hours, as well as different days of the week and roadway types, were covered. Analysis of the speed data showed that for the three daily periods and various roadway types, traffic speeds were consistently higher in the sections before and/or after the automatic camera than at the camera site itself. Statistical tests indicated that the difference between speeds measured at and away from the cameras was significant at the 99 percent level. The findings demonstrate that in a traffic environment characterized by poor driving behavior, inconsistent and piecemeal driver education programs, and insufficient presence of law enforcement officials, reliance on automatic cameras alone to reduce traffic violations is doomed to fail.
5

Hotař, Vlastimil. "Monitoring of Glass Production Using Vision Systems." Advanced Materials Research 39-40 (April 2008): 511–16. http://dx.doi.org/10.4028/www.scientific.net/amr.39-40.511.

Abstract:
Applications of vision systems for control and monitoring are becoming more widespread. However, the glass industry still poses specific problems, especially the transparency of (often colourless) glass, which requires special illumination, lens adapters, optical filters, software filters, image analyses, etc. An important problem is choosing an optimal analysis for the obtained images, one that corresponds with the character of the data. Our research develops the use of cameras (area scan, line scan, and intelligent cameras) for quality monitoring of glass production, connected to a PC or compact vision systems, together with software solutions and appropriate image analyses (using standard and new algorithms) to solve real problems in industrial applications. The first stage of the research uses a digital camera, an area scan camera, and a line scan camera for monitoring glass melt and glass products, to solve specific lighting problems and to test standard and non-standard analyses, such as fractal geometry, for the evaluation of production and products. This article briefly presents our results and possible applications in the glass industry.
6

Zhou, Chenchen, Shaoqi Wang, Yi Cao, Shuang-Hua Yang, and Bin Bai. "Online Pyrometry Calibration for Industrial Combustion Process Monitoring." Processes 10, no. 9 (August 26, 2022): 1694. http://dx.doi.org/10.3390/pr10091694.

Abstract:
Temperature and its distribution are crucial for combustion monitoring and control. For this application, digital camera-based pyrometers have become increasingly popular due to their relatively low cost. However, these pyrometers are not universally applicable because they depend on calibration. In contrast to pyrometers, monitoring cameras exist in almost every combustion chamber. Although these cameras can, in principle, measure temperature, the lack of calibration means they are used only for visualization to support operators' decisions. Almost all existing calibration methods are laboratory-based and hence cannot calibrate a camera in operation. This paper proposes an online calibration method. It uses a pre-calibrated camera as a standard pyrometer to calibrate another camera in operation. The calibration is based on a photo taken by the pyrometry camera at a position close to the camera in operation. Since the calibration does not affect the use of the camera in operation, it sharply reduces the cost and difficulty of pyrometer calibration. The paper presents an online calibration procedure and gives advice on setting camera parameters. In addition, the ratio pyrometry is revised for a wider temperature range. The online calibration algorithm is developed from two assumptions about images of the same flame taken in proximity: (1) there are common regions between two images taken at close positions; (2) some characteristic temperatures are constant between the two-dimensional temperature distributions of the same flame taken from different angles. Both assumptions were verified in a real industrial plant. Based on these two verified features, a temperature-distribution matching algorithm was developed to calibrate pyrometers online. The method was tested and validated on an industrial-scale municipal solid waste incinerator. The accuracy of the calibrated pyrometer is sufficient for flame monitoring and control.
7

Atya, Baidaa A., Abdul Monem S. Rahma, and Abdul Mohssen J. Abdul Hossen. "Design and Implementation of Secure Building Monitoring System using Programmable Wireless Mobile Camera." International Journal of Computer Network and Information Security 9, no. 3 (March 8, 2017): 29–35. http://dx.doi.org/10.5815/ijcnis.2017.03.04.

Abstract:
In recent decades, monitoring cameras have come to play a vital role in securing sensitive systems such as government sites and establishments. Generally, such cameras are fixed-location (i.e. outdoor) cameras, so the viewpoint is limited to a small area and does not cover the whole place. These cameras also have drawbacks such as being breakable (intentionally or not), which may lead to camera malfunction, or breaks in the connecting electrical wires, which may disconnect the camera from its monitor and receiver. The main problem, however, is the lack of a secure protection system that prevents intruders from entering the system and disabling it. This research proposes a new system to solve these problems using a wireless mobile camera with an embedded programmable operating system, which can be controlled remotely by sending wireless commands through an embedded Arduino controller card. This card makes the connection between the camera and the server programmable by the user or developer. The main goal of this research is to design a monitoring system that detects suspicious events and ensures that the monitoring data transferred from the camera to the server is not intercepted by unauthorized persons, by applying image detection, object tracking, and security algorithms to the instructions or program of the camera. Compared with other research, this work achieved the following: (1) using an Arduino card to program the camera; (2) an IP camera that does not require a user name and password; (3) encryption of images and other information sent to and from the computer; (4) use of a wireless mobile camera; (5) a key-exchange process between camera and computer. The results achieved the main goals of the newly developed technique.
8

Mahyan, Fariza Binti, Asrani Bin Lit, Nglai Anak Satu, Mark Sydney Anak Banyah, and Mcirvine Igoh Ak Gumis. "Raspberry Pi-based home security monitoring system." E3S Web of Conferences 479 (2024): 07014. http://dx.doi.org/10.1051/e3sconf/202447907014.

Abstract:
The era of technology has opened space to facilitate daily tasks, and security cameras have become a necessity for a safe home environment. A buzzer, a PIR sensor, a Pi camera, and a Raspberry Pi are used to create the home security system: the PIR sensor detects motion, the Pi camera snaps an image, and the buzzer beeps. A notification is delivered immediately to the owner's Telegram account when the camera captures a person's face, and live video of that moment can be posted, enabling the user to record incidents that happen at home. The Telegram application was chosen mainly to provide the owner with an Android application, since society is increasingly dependent on mobile technology. The PIR sensor activates when it detects people or animals but deactivates when it detects breezes; as a result, the Pi camera only records images of people and animals, not the wind. The PIR sensor, Pi camera, and buzzer are linked to one another: when the PIR sensor is turned on, the Pi camera and buzzer also turn on, and vice versa.
9

Li, Xiao Meng, Shu Ce Zhang, Yong Liu, Xue Heng Tao, Xue Jun Wang, and Jin Shi Lu. "Design of Control System for Monitoring Camera Cleaning Robot." Applied Mechanics and Materials 513-517 (February 2014): 4047–51. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.4047.

Abstract:
To solve the problem that manually cleaning high-altitude monitoring cameras is difficult and risky, a mobile knee-type robot with three degrees of freedom is proposed to clean the monitoring probe instead of a worker, and an MCU-based control system is designed. The hardware and program design covers manipulator movement, jet cleaning of the cameras with a high-pressure spray gun, drying of the camera surface, ultrasonic obstacle avoidance, and the camera monitoring and image processing module. Finally, experiments and tests on the cleaning robot prototype are carried out.
10

Zhang, Hua, Pengjie Tao, Xiaoliang Meng, Mengbiao Liu, and Xinxia Liu. "An Optimum Deployment Algorithm of Camera Networks for Open-Pit Mine Slope Monitoring." Sensors 21, no. 4 (February 6, 2021): 1148. http://dx.doi.org/10.3390/s21041148.

Abstract:
With the growth in demand for mineral resources and the increase in open-pit mine safety and production accidents, the intelligent monitoring of open-pit mine safety and production is becoming more and more important. In this paper, we elaborate on the idea of combining the technologies of photogrammetry and camera sensor networks to make full use of open-pit mine video camera resources. We propose the Optimum Camera Deployment algorithm for open-pit mine slope monitoring (OCD4M) to meet the requirements of a high overlap of photogrammetry and full coverage of monitoring. The OCD4M algorithm is validated and analyzed with the simulated conditions of quantity, view angle, and focal length of cameras, at different monitoring distances. To demonstrate the availability and effectiveness of the algorithm, we conducted field tests and developed the mine safety monitoring prototype system which can alert people to slope collapse risks. The simulation’s experimental results show that the algorithm can effectively calculate the optimum quantity of cameras and corresponding coordinates with an accuracy of 30 cm at 500 m (for a given camera). Additionally, the field tests show that the algorithm can effectively guide the deployment of mine cameras and carry out 3D inspection tasks.
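The OCD4M algorithm itself is not spelled out in the abstract. A common baseline for such deployment problems is a greedy set-cover heuristic, sketched below with hypothetical slope patches and candidate mounting points; the paper's photogrammetric overlap requirement would add pairwise constraints this sketch omits:

```python
def greedy_deployment(targets, candidates):
    """Pick camera positions greedily: at each step choose the
    candidate that covers the most still-uncovered targets.
    candidates: dict mapping position -> set of covered targets."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:                  # remaining targets are uncoverable
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

# Hypothetical slope patches A-E and three candidate mounting points:
cams = {"P1": {"A", "B", "C"}, "P2": {"C", "D"}, "P3": {"D", "E"}}
print(greedy_deployment({"A", "B", "C", "D", "E"}, cams))  # ['P1', 'P3']
```

Greedy selection is not globally optimal in general, which is why dedicated algorithms such as OCD4M also optimize camera quantity, view angle, and focal length jointly.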
11

Cresswell, Anna K., Nicole M. Ryan, Andrew J. Heyward, Adam N. H. Smith, Jamie Colquhoun, Mark Case, Matthew J. Birt, et al. "A quantitative comparison of towed-camera and diver-camera transects for monitoring coral reefs." PeerJ 9 (April 14, 2021): e11090. http://dx.doi.org/10.7717/peerj.11090.

Abstract:
Novel tools and methods for monitoring marine environments can improve efficiency but must not compromise long-term data records. Quantitative comparisons between new and existing methods are therefore required to assess their compatibility for monitoring. Monitoring of shallow water coral reefs is typically conducted using diver-based collection of benthic images along transects. Diverless systems for obtaining underwater images (e.g. towed cameras, remotely operated vehicles, autonomous underwater vehicles) are increasingly used for mapping coral reefs. Of these imaging platforms, towed cameras offer a practical, low cost and efficient method for surveys, but their utility for repeated measures in monitoring studies has not been tested. We quantitatively compare a towed-camera approach to repeated surveys of shallow water coral reef benthic assemblages on fixed transects, relative to benchmark data from diver photo-transects. Differences in the percent cover detected by the two methods were partly explained by differences in the morphology of benthic groups. The reef habitat and physical descriptors of the site (slope, depth and structural complexity) also influenced the comparability of data, with differences between the tow-camera and the diver data increasing with structural complexity and slope. Differences between the methods decreased when a greater number of images were collected per tow-camera transect. We attribute the differences between the two methods chiefly to the lower image quality (variable perspective, exposure and focal distance) and the lower spatial accuracy and precision of the towed-camera transects, and we suggest changes to the sampling design to improve the application of tow-cameras to monitoring.
12

Oščádal, Petr, Tomáš Kot, Tomáš Spurný, Jiří Suder, Michal Vocetka, Libor Dobeš, and Zdenko Bobovský. "Camera Arrangement Optimization for Workspace Monitoring in Human–Robot Collaboration." Sensors 23, no. 1 (December 27, 2022): 295. http://dx.doi.org/10.3390/s23010295.

Abstract:
Human–robot interaction is becoming an integral part of industrial practice, and there is growing emphasis on safety in workplaces where a robot may bump into a worker. Existing solutions control the robot based on the potential collision energy or re-plan the robot's straight-line trajectory, but a sensor system must still be designed to detect obstacles across the shared human–robot workspace, and so far there is no procedure that engineers can follow in practice to deploy sensors ideally. We propose classifying the workspace with an importance index, which determines which parts of the workspace the sensors should cover to ensure ideal obstacle sensing; the ideal camera positions can then be found automatically from this classified map. In our experiment, the coverage of the important volume by the calculated camera position was on average 37% greater than for a camera placed intuitively by test subjects. With two cameras at the workplace, the calculated positions were 27% more effective than the subjects' camera positions; with three cameras, they were 13% better, with total coverage of more than 99% of the classified map.
13

Chen, Zhuo, Hai Bo Wu, and Sheng Ping Xia. "A Cooperative Dual-Camera System for Face Recognition and Video Monitoring." Advanced Materials Research 998-999 (July 2014): 784–88. http://dx.doi.org/10.4028/www.scientific.net/amr.998-999.784.

Abstract:
In an ordinary video monitoring system, a whole small scene is usually observed by one or a few stationary cameras, so the system cannot rapidly zoom in and focus on a target of interest, nor obtain a high-resolution image of a distant target. Therefore, based on research into dual-camera cooperation and an RSOM clustering tree with the CSHG algorithm, this paper designs a cooperative dual-camera system that quickly tracks and recognizes a face in a large-scale, long-distance scene. The system consists of a Stationary Wide Field of View (SWFV) camera and a Pan-Tilt-Zoom (PTZ) camera, and the algorithm meets the real-time requirement.
14

Taylor, Brendan D., Ross L. Goldingay, and John M. Lindsay. "Horizontal or vertical? Camera trap orientations and recording modes for detecting potoroos, bandicoots and pademelons." Australian Mammalogy 36, no. 1 (2014): 60. http://dx.doi.org/10.1071/am13012.

Abstract:
Camera traps can detect rare and cryptic species, and may enable description of the stability of populations of threatened species. We investigated the relative performance of cameras oriented horizontally or vertically, and recording mode (still and video) to detect the vulnerable long-nosed potoroo (Potorous tridactylus) as a precursor to population monitoring. We established camera traps for periods of 13–21 days across 21 sites in Richmond Range National Park in north-east New South Wales. Each camera trap set consisted of three KeepGuard KG680V cameras directed at a bait container – one horizontal and one vertical camera in still mode and one horizontal camera in video mode. Potoroos and bandicoots (Perameles nasuta and Isoodon macrourus) were detected at 14 sites and pademelons (Thylogale stigmatica and T. thetis) were detected at 19 sites. We used program Presence to compare detection probabilities for each camera category. The detection probability for all three taxa groups was lowest for the vertical still and similar for the horizontal cameras. The detection probability (horizontal still) was highest for the potoroos (0.43) compared with the bandicoots (0.16) and pademelons (0.25). We estimate that the horizontal stills camera could achieve a 95% probability of detection of a potoroo within 6 days compared with 8 days using a vertical stills camera. This suggests that horizontal cameras in still mode have great potential for monitoring the dynamics of this potoroo population.
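The "95% probability of detection within 6 days" figure follows from compounding a constant daily detection probability over independent survey days, i.e. 1 - (1 - p)^n ≥ 0.95. A minimal sketch of that relation, using the per-day estimates quoted above (the independence assumption is the abstract's implicit model, not stated outright):

```python
import math

def days_to_detect(p_daily, target=0.95):
    """Survey days needed for the cumulative probability of at
    least one detection, 1 - (1 - p)^n, to reach `target`."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_daily))

print(days_to_detect(0.43))  # 6  (potoroos, horizontal still)
print(days_to_detect(0.16))  # 18 (bandicoots, horizontal still)
```

This is the standard calculation for choosing a minimum camera-trap deployment length once a per-occasion detection probability has been estimated.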
15

Simarro, Gonzalo, Daniel Calvete, and Paola Souto. "UCalib: Cameras Autocalibration on Coastal Video Monitoring Systems." Remote Sensing 13, no. 14 (July 16, 2021): 2795. http://dx.doi.org/10.3390/rs13142795.

Abstract:
Following the path set out by the “Argus” project, video monitoring stations have become a very popular low cost tool to continuously monitor beaches around the world. For these stations to be able to offer quantitative results, the cameras must be calibrated. Cameras are typically calibrated when installed, and, at best, extrinsic calibrations are performed from time to time. However, intra-day variations of camera calibration parameters due to thermal factors, or other kinds of uncontrolled movements, have been shown to introduce significant errors when transforming the pixels to real world coordinates. Departing from well-known feature detection and matching algorithms from computer vision, this paper presents a methodology to automatically calibrate cameras, in the intra-day time scale, from a small number of manually calibrated images. For the three cameras analyzed here, the proposed methodology allows for automatic calibration of >90% of the images in favorable conditions (images with many fixed features) and ∼40% in the worst conditioned camera (almost featureless images). The results can be improved by increasing the number of manually calibrated images. Further, the procedure provides the user with two values that allow for the assessment of the expected quality of each automatic calibration. The proposed methodology, here applied to Argus-like stations, is applicable e.g., in CoastSnap sites, where each image corresponds to a different camera.
16

Yuan, Wendan, Ziyao Ning, Zhitong Liu, Shencheng Yang, and Haiyun Zou. "The method for evaluating the coverage rate of roadside monitoring cameras in road areas." Advances in Computer and Materials Science Research 1, no. 1 (July 23, 2024): 137. http://dx.doi.org/10.70114/acmsr.2024.1.1.p137.

Abstract:
Accurately assessing the coverage rate of roadside monitoring cameras in road areas is crucial for intelligent transportation systems and road risk assessment. This paper presents a new method that uses Carsim software to simulate different camera tilt angles and road geometry conditions to evaluate the coverage area of roadside monitoring cameras. By adjusting the camera tilt angles and the curve radius of the road, the coverage effect under different conditions is analyzed, and affine transformation is used to calculate the actual coverage area. The research results show that this method can effectively fit the road area, providing theoretical support for the deployment of monitoring cameras in traffic.
17

Selvaraju, Vinothini, Nicolai Spicher, Ju Wang, Nagarajan Ganapathy, Joana M. Warnecke, Steffen Leonhardt, Ramakrishnan Swaminathan, and Thomas M. Deserno. "Continuous Monitoring of Vital Signs Using Cameras: A Systematic Review." Sensors 22, no. 11 (May 28, 2022): 4097. http://dx.doi.org/10.3390/s22114097.

Abstract:
In recent years, noncontact measurement of vital signs using cameras has received a great amount of interest. However, some questions remain unanswered: (i) Which vital sign is monitored using what type of camera? (ii) What is the performance and which factors affect it? (iii) Which health issues are addressed by camera-based techniques? Following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement, we conduct a systematic review of continuous camera-based vital sign monitoring using the Scopus, PubMed, and Association for Computing Machinery (ACM) databases. We consider articles published between January 2018 and April 2021 in the English language. We include five vital signs: heart rate (HR), respiratory rate (RR), blood pressure (BP), body skin temperature (BST), and oxygen saturation (SpO2). In total, we retrieved 905 articles and screened them by title, abstract, and full text. One hundred and four articles remained: 60, 20, 6, 2, and 1 of the articles focus on HR, RR, BP, BST, and SpO2, respectively, and 15 on multiple vital signs. HR and RR can be measured using red, green, and blue (RGB), near-infrared (NIR), and far-infrared (FIR) cameras. So far, BP and SpO2 are monitored with RGB cameras only, whereas BST is derived from FIR cameras only. Under ideal conditions, the root mean squared error is around 2.60 bpm, 2.22 cpm, 6.91 mm Hg, 4.88 mm Hg, and 0.86 °C for HR, RR, systolic BP, diastolic BP, and BST, respectively. The estimated error for SpO2 is less than 1%, but it increases with movement of the subject and with the camera-subject distance. Camera-based remote monitoring mainly targets intensive care, post-anaesthesia care, and sleep monitoring, but also special conditions such as heart failure. The monitored groups are newborn and pediatric patients, geriatric patients, athletes (e.g., exercising, cycling), and vehicle drivers. Camera-based techniques monitor HR, RR, and BST in static conditions within acceptable ranges for certain applications. The remaining research gaps are large and heterogeneous populations, real-time scenarios, moving subjects, and the accuracy of BP and SpO2 monitoring.
18

Ye, Yonglong, Liping Pan, Dongfang Yu, Dongfeng Gu, Hongzhou Lu, and Wenjin Wang. "Notch RGB-camera based SpO2 estimation: a clinical trial in a neonatal intensive care unit." Biomedical Optics Express 15, no. 1 (December 22, 2023): 428. http://dx.doi.org/10.1364/boe.510925.

Abstract:
Regular and narrow-band RGB cameras have recently been explored for contactless SpO2 monitoring. Regular RGB cameras with cross-band overlap provide a high signal-to-noise ratio (SNR) in measuring photoplethysmographic signals but depend strongly on the spectrum of the incident light, whereas narrow-band RGB cameras have better spectral independence but lower SNR, especially in dim lighting conditions such as those in the neonatal intensive care unit (NICU). This paper proposes a notch RGB camera based SpO2 measurement approach that uses an optical notch filter to attenuate the wavelengths of 580–605 nm in a regular RGB camera, improving spectral independence while maintaining a high SNR in signal measurement. The proposed setup was validated under lab conditions (a dark chamber) against existing solutions for visible-light camera-based SpO2 measurement and further verified in the NICU on preterm infants. The clinical trial conducted in the NICU with 22 preterm infants shows that the notch RGB camera can achieve a mean absolute error (MAE) of less than 4% for SpO2 measurement. This is the first showcase of continuous monitoring of absolute camera-SpO2 values in the NICU.
19

Green-Barber, Jai M., and Julie M. Old. "Is camera trap videography suitable for assessing activity patterns in eastern grey kangaroos?" Pacific Conservation Biology 24, no. 2 (2018): 134. http://dx.doi.org/10.1071/pc17051.

Full text of the source
Abstract:
Camera traps are frequently used in wildlife research and may be a useful tool for monitoring behavioural patterns. The suitability of camera traps for monitoring behaviour depends on the size, locomotion, and behaviour of the species being investigated. Their suitability for documenting the behaviour of eastern grey kangaroos was assessed here by comparing activity patterns collected using cameras to published activity patterns for the species. The activity patterns calculated from camera trap data were largely consistent with data from previous studies, although nocturnal activity appeared to be under-represented. Observations of unusual fighting behaviour illustrate the potential for camera traps to enable the capture of novel observations. Kangaroo behaviour appeared to be influenced by the presence of cameras; however, no kangaroos retreated from them. The data suggested that kangaroos became habituated to the cameras after eight months. The findings of this study suggest that camera traps are suitable for assessing the diurnal activity of eastern grey kangaroos and are useful tools for documenting their behaviour.
20

Phuong, Doan Ngoc, and Nguyen Thi Phuong Thanh. "Other Applications of Thermal Cameras and Developing Handle Temperature Camera." Asian Journal of Applied Science and Technology 06, no. 02 (2022): 92–99. http://dx.doi.org/10.38177/ajast.2022.6211.

Full text of the source
Abstract:
Thermal cameras are useful devices. Today, advanced thermal imaging devices with high sensitivity are being developed and used, especially in the medical field (Lahiri et al., 2012). The product resulting from the authors' research aims at a compact size, easy handling, convenient portability, and a low implementation cost; the monitoring range of the sensor eye in the measuring area extends up to 7 meters, the monitored temperature zone can be limited, the displayed color corresponds to the temperature zone, and the resolution of the thermal pixels can be adjusted simply through the buttons. To expand the application range of the research product and to improve accuracy, reliability, resolution, etc., it is possible to use more measuring sensor eyes, wireless communication networks, accurate image recognition and processing algorithms, and a large display screen to facilitate clearer monitoring.
21

Idris, Ghassan, Claire Smith, Barbara Galland, Rachael Taylor, Christopher John Robertson, and Mauro Farella. "Home-Based Monitoring of Eating in Adolescents: A Pilot Study." Nutrients 13, no. 12 (December 3, 2021): 4354. http://dx.doi.org/10.3390/nu13124354.

Full text of the source
Abstract:
Objectives: To investigate eating episodes in a group of adolescents in their home setting using wearable electromyography (EMG) and a camera, and to evaluate the agreement between the two devices. Approach: Fifteen adolescents (15.5 ± 1.3 years) had a smartphone-assisted wearable EMG device attached to the jaw to assess chewing features over one evening. EMG outcomes included chewing pace, time, episode count, and mean power. An automated wearable camera worn on the chest facing outwards recorded four images per minute. The agreement between the camera and the EMG device in detecting eating episodes was evaluated by calculating specificity, sensitivity, and accuracy. Main results: The features of eating episodes identified by EMG throughout the entire recording time were (mean (SD)): chewing pace 1.64 (0.20) Hz, time 10.5 (10.4) minutes, episode count 56.8 (39.0), and power 32.1% (4.3). The EMG device identified 5.1 (1.8) eating episodes lasting 27:51 (16:14) minutes, whereas the cameras indicated 2.4 (2.1) episodes totaling 14:49 (11:18) minutes, showing that the EMG-identified chewing episodes were not all detected by the camera. However, the overall accuracy of the identified eating episodes ranged from 0.8 to 0.92. Significance: The combination of a wearable EMG device and camera is a promising tool for investigating eating behaviors in research and clinical settings.
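The agreement measures named above (sensitivity, specificity, accuracy) follow from a standard confusion matrix over matched time intervals. A small illustration with hypothetical per-minute counts (not the study's data):

```python
def agreement_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # eating minutes both devices flagged
    specificity = tn / (tn + fp)            # non-eating minutes both agreed on
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical per-minute comparison of camera images against EMG labels
sens, spec, acc = agreement_metrics(tp=40, fp=6, tn=270, fn=14)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```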
22

Fiore, Loren, Duc Fehr, Robot Bodor, Andrew Drenner, Guruprasad Somasundaram, and Nikolaos Papanikolopoulos. "Multi-Camera Human Activity Monitoring." Journal of Intelligent and Robotic Systems 52, no. 1 (January 29, 2008): 5–43. http://dx.doi.org/10.1007/s10846-007-9201-6.

Full text of the source
23

Li, Wanping, Jiajie Wu, Kuiying Yin, Guang Jiang, Chao Yu, and Lanyu Li. "A Method of Attention Analysis on Video." Journal of Physics: Conference Series 2253, no. 1 (April 1, 2022): 012032. http://dx.doi.org/10.1088/1742-6596/2253/1/012032.

Full text of the source
Abstract:
Abstract Attention monitoring systems are important for various tasks, such as driving, by alerting a person when he or she is not attending to the task at hand. Past research has not produced a usable attention monitoring system. In the current study, we used eye trackers, a depth camera, and infrared cameras to assess the attention of participants as they read texts. We extracted features from the eye tracking and camera data, and then used a convolutional neural network to predict the attention state of the participants. We found that the eye tracker data yielded 90% accuracy in predicting the attentional state of the subjects, while the camera data yielded over 70% accuracy.
24

van der Werff, Harald, Eunice Bonyo, and Christoph Hecker. "An Autonomous Thermal Camera System for Monitoring Fumarole Activity." Sensors 24, no. 6 (March 21, 2024): 1999. http://dx.doi.org/10.3390/s24061999.

Full text of the source
Abstract:
The Kenyan part of the East African Rift System hosts several geothermal fields for energy production. Changes in the extraction rate of geothermal fluids and the amount of water re-injected into the system affect reservoir pressure and production capacity over time. Understanding the balance of production, natural processes and the response of the geothermal system requires long-term monitoring. The presence of a geothermal system at depth is often accompanied by surface manifestations, such as hot water springs and fumaroles, which have the potential for monitoring subsurface activity. Two thermal camera timelapse systems were developed and installed as part of a multi-sensor observatory in Kenya to capture fumarole activity over time. These cameras are an aggregation of a camera unit, a control unit, and a battery charged by a solar panel, and they monitor fumarole activity on an hourly basis, with a deep sleep of the system in between recordings. The article describes the choice of hardware and software, presents the data that the cameras acquire, and discusses the system’s performance and possible improvement points.
25

Livens, S., K. Pauly, P. Baeck, J. Blommaert, D. Nuyts, J. Zender, and B. Delauré. "A SPATIO-SPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 23, 2017): 223–28. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-223-2017.

Full text of the source
Abstract:
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient: less than 1% of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information while retaining the high spatial resolution offered by the sensor.

We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd-generation, commercially available ButterflEYE camera offering an extended spectral range (475–925 nm), and we discuss future work.
26

Ioli, F., E. Bruno, D. Calzolari, M. Galbiati, A. Mannocchi, P. Manzoni, M. Martini, et al. "A REPLICABLE OPEN-SOURCE MULTI-CAMERA SYSTEM FOR LOW-COST 4D GLACIER MONITORING." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-M-1-2023 (April 21, 2023): 137–44. http://dx.doi.org/10.5194/isprs-archives-xlviii-m-1-2023-137-2023.

Full text of the source
Abstract:
Abstract. Image-based monitoring has emerged as a prevalent technique for sensing mountain environments. Monoscopic time-lapse cameras, which rely on digital image correlation to quantify glacier motion, have limitations due to the need for a Digital Elevation Model to derive 3D flow velocity fields. Multi-camera systems overcome this limitation, as they allow for a 3D reconstruction of the scene. This paper presents a replicable low-cost stereoscopic system designed for 4D glacier monitoring. The system consists of independent and autonomous units built from off-the-shelf components, such as a DSLR camera, an Arduino microcontroller, and a Raspberry Pi Zero, reducing costs compared to pre-built time-lapse cameras. The units are energetically self-sufficient and resistant to harsh alpine conditions. The system was successfully tested for more than a year to monitor the northwest terminus of the Belvedere Glacier (Italian Alps). The daily stereo pairs acquired were processed with Structure-from-Motion to derive 3D point clouds of the glacier terminus and estimate glacier retreat and ice volume loss. By combining information about ice volume loss with ablation estimates and ice flow velocity information, e.g., derived from monoscopic-camera time series, a multi-camera system enables a comprehensive understanding of sub-seasonal glacier dynamics.
27

Feng, Jie, Hai-Chuan Wang, Yu-Dong Li, Lin Wen, and Qi Guo. "Mechanism of Total Ionizing Dose Effects of CMOS Image Sensors on Camera Resolution." Electronics 12, no. 12 (June 14, 2023): 2667. http://dx.doi.org/10.3390/electronics12122667.

Full text of the source
Abstract:
The nuclear industry and other high-radiation environments often need remote monitoring equipment with advanced cameras to achieve precise remote-control operations. CMOS image sensors, a critical component of these cameras, are exposed to γ-ray irradiation while operating in such environments, which causes performance degradation that adversely affects camera resolution. This study conducted total ionizing dose experiments on CMOS image sensors and camera systems and thoroughly analyzed the mechanisms through which the dark current, Full Well Capacity, and quantum efficiency of CMOS image sensors affect camera resolution. A quantitative evaluation formula was established to evaluate the impact of the Full Well Capacity and quantum efficiency of the CMOS image sensor on camera resolution. This study provides a theoretical basis for evaluating the radiation resistance of cameras in environments with strong nuclear radiation and for developing radiation-resistant cameras.
28

Godfrey, Samantha, James R. Cooper, and Andrew J. Plater. "Roving Multiple Camera Array with Structure-from-Motion for Coastal Monitoring." Journal of Marine Science and Engineering 11, no. 3 (March 10, 2023): 591. http://dx.doi.org/10.3390/jmse11030591.

Full text of the source
Abstract:
Regular monitoring is essential for vulnerable coastal locations such as areas of landward retreat. However, for coastal practitioners, surveying is limited by budget, specialist personnel/equipment, and weather. In combination, structure-from-motion and multi-view stereo (SfM-MVS) have helped to improve accessibility to topographic data acquisition. Pole-mounted cameras with SfM-MVS have gained traction, but to guarantee coverage and reconstruction quality, a greater understanding of camera position and interaction is required. This study uses a multi-camera array for image acquisition and reviews processing procedures in Agisoft Photoscan (Metashape). The camera rig was deployed at three sites, and results were verified against a terrestrial laser scanner (TLS) and independent precision estimates. The multi-camera approach provided effective image acquisition ~11 times faster than the TLS. Reconstruction quality equalled (>92% similarity) that of the TLS, subject to processing parameters. A change in the image alignment parameter demonstrated a significant influence on deformation, reducing reprojection error by ~94%. A lower densification parameter ('High') offered results ~4.39% dissimilar from the TLS at 1/8th of the processing time of the other parameters. Independent precision estimates were <8.2 mm in the x, y, and z dimensions. These findings illustrate the potential of multi-camera systems and the influence of processing on point cloud quality and computation time.
29

Hoan, Nguyen Cong, Nguyen Van Hoa, Vu Thanh Luan, and Yeong Min Jang. "Design and Implementation of a Monitoring System using Optical Camera Communication for a Smart Factory." Applied Sciences 9, no. 23 (November 25, 2019): 5103. http://dx.doi.org/10.3390/app9235103.

Full text of the source
Abstract:
Wireless technologies based on radio frequencies are currently in widespread use, with numerous applications around the world. However, they pose some disadvantages to human health: high frequencies can have potentially harmful effects on children, hospital patients, and even healthy people if the signal power exceeds the permitted standard. In contrast, the use of visible light for data transmission is a trend that presents new options, including optical wireless communication, optical camera communication, and visible light communication. This paper proposes a modulation scheme based on on-off keying in the time domain, applied to a monitoring system using optical camera communication. This scheme supports both global-shutter and rolling-shutter cameras, which are popular commercially available cameras, and facilitates a low-cost monitoring system. By using small light-emitting diodes (LEDs) and controlling the exposure time of a single camera, the camera, as a receiver, can simultaneously detect signals from up to 10 sensor devices in different positions at a maximum distance of up to 50 m, with a low error rate.
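On-off keying in the time domain maps each transmitted bit to LED on/off states that the camera samples over several frames, then recovers by thresholding the received intensity. A toy sketch of the idea (frame counts, intensity levels, and threshold are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def ook_encode(bits, samples_per_bit=4, on_level=255, off_level=0):
    """Map a bit string to per-frame LED intensity levels (on-off keying)."""
    levels = [on_level if b == "1" else off_level for b in bits]
    return np.repeat(levels, samples_per_bit)

def ook_decode(intensities, samples_per_bit=4, threshold=128):
    """Recover bits by averaging each bit period and thresholding."""
    frames = np.asarray(intensities, float).reshape(-1, samples_per_bit)
    return "".join("1" if m > threshold else "0" for m in frames.mean(axis=1))

# Simulate a noisy camera reading of the LED signal
signal = ook_encode("1011001")
noisy = signal + np.random.default_rng(0).normal(0, 10, signal.shape)
print(ook_decode(noisy))  # "1011001"
```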
30

Lin, Min, and Bin Li. "Modeling of Single Camera Location under the Natural Light in the Vedio-Oculography System." Advanced Materials Research 663 (February 2013): 638–44. http://dx.doi.org/10.4028/www.scientific.net/amr.663.638.

Full text of the source
Abstract:
Today, most eye monitoring systems use cameras to capture eye images; these are called video-oculography (VOG) systems. The basic elements of a VOG system are the eyes, cameras, and lights [1]. In particular, the location of the camera relative to the eyes is very important. In this paper, the location of a single camera under natural light was modeled, and the model was computed and simulated on the Matlab platform. Eye movement images were then sampled with a Cognex In-Sight Micro 1020 camera to analyze the reasonable position of the camera. Test results show that the single camera should be located to the side of one eye within 40°; otherwise it cannot capture clear eye movement images. This conclusion provides a reference for the further development of actual VOG systems.
31

Rodriguez-Padilla, Isaac, Bruno Castelle, Vincent Marieu, and Denis Morichon. "A Simple and Efficient Image Stabilization Method for Coastal Monitoring Video Systems." Remote Sensing 12, no. 1 (December 24, 2019): 70. http://dx.doi.org/10.3390/rs12010070.

Full text of the source
Abstract:
Fixed video camera systems are consistently prone to unwanted motions over time due to either thermal effects or mechanical factors. Even subtle displacements are mostly overlooked or ignored, although they can lead to large geo-rectification errors. This paper describes a simple and efficient method to stabilize an either continuous or sub-sampled image sequence based on feature matching and sub-pixel cross-correlation techniques. The method requires the presence and identification of different land sub-image regions containing static recognizable features, such as corners or salient points, referred to as keypoints. A Canny edge detector (CED) is used to locate and extract the boundaries of the features. Keypoints are matched across frames by computing their two-dimensional displacement with respect to a reference frame. Pairs of keypoints are subsequently used as control points to fit a geometric transformation in order to align the whole frame with the reference image. The stabilization method is applied to five years of daily images collected from a three-camera permanent video system located at Anglet Beach in southwestern France. Azimuth, tilt, and roll deviations are computed for each camera. The three cameras showed motions on a wide range of time scales, with a prominent annual signal in azimuth and tilt deviation. Camera movement amplitude reached up to 10 pixels in azimuth, 30 pixels in tilt, and 0.4° in roll, together with a quasi-steady counter-clockwise trend over the five-year time series. Moreover, camera viewing angle deviations were found to induce large rectification errors of up to 400 m at a distance of 2.5 km from the camera. The mean shoreline apparent position was also affected by an approximately 10–20 m bias during the outstanding 2013/2014 winter period.
The semi-automatic stabilization method successfully corrects camera geometry for fixed video monitoring systems and is able to process at least 90% of the frames without user assistance. The use of the CED greatly improves the performance of the cross-correlation algorithm by making it more robust against contrast and brightness variations between frames. The method appears to be a promising tool for other coastal imaging applications, such as the removal of undesired high-frequency camera movements in imagery from unmanned aerial vehicles (UAVs).
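The core of such a stabilization pipeline is estimating each frame's displacement against a reference. A compact sketch of FFT-based phase correlation, which recovers a whole-pixel translation (the paper's sub-pixel cross-correlation would additionally interpolate around the correlation peak; the synthetic image below is purely illustrative):

```python
import numpy as np

def estimate_shift(reference, frame):
    """Return the (dy, dx) translation to apply to `frame` so it re-aligns
    with `reference`, found as the peak of the phase-correlation surface."""
    f0 = np.fft.fft2(reference)
    f1 = np.fft.fft2(frame)
    cross_power = f0 * np.conj(f1)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values
    h, w = reference.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulated camera motion
print(estimate_shift(ref, moved))  # (-3, 5): the corrective shift
```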
32

Manikanta Reddy, L. Danush, T. Bharath, M. Srikanth, and N. Hari Kumar. "Affordable Mobile Application Camera System to Monitor Residential Societies Vehicle Monitoring Activities." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 01 (January 13, 2024): 1–13. http://dx.doi.org/10.55041/ijsrem28006.

Full text of the source
Abstract:
In contemporary urban environments, ensuring the safety and security of residential societies has become a paramount concern. One crucial aspect of this is the effective monitoring of vehicular activities within the premises. This project proposes the development of an Affordable Mobile Application Camera System designed to enhance the surveillance capabilities of residential societies for comprehensive vehicle monitoring. The system comprises a network of strategically placed cameras integrated with a mobile application that provides real-time access and control. The objective is to offer a cost-effective solution without compromising on the quality and effectiveness of surveillance. Keywords: affordable mobile application; camera system; residential society; vehicle monitoring; surveillance; real-time monitoring.
33

Saxena, Tilak. "Onboard Surveillance Camera Robotic Vehicle." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (April 30, 2024): 4127–30. http://dx.doi.org/10.22214/ijraset.2024.60768.

Full text of the source
Abstract:
Abstract: Surveillance cameras have been integrated with autonomous vehicles for various purposes, such as monitoring, guarding, and spying. This research paper focuses on how to design, implement, deploy, and evaluate a robotic vehicle with a surveillance camera system. The study covers both the hardware and software architecture, the choice of camera, and the image processing techniques, as well as the communication protocol with other devices. The suggested system aims to improve surveillance and, through its additional sensors, to help prevent accidents and enhance remote monitoring. The Blynk server is used to transmit data from the various sensors used in the system, and a Telegram bot provides two-way communication with the system.
34

Bruno, N., K. Thoeni, F. Diotri, M. Santise, R. Roncella, and A. Giacomini. "A COMPARISON OF LOW-COST CAMERAS APPLIED TO FIXED MULTI-IMAGE MONITORING SYSTEMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 1033–40. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-1033-2020.

Full text of the source
Abstract:
Abstract. Photogrammetry is becoming a widely used technique for slope monitoring and rock fall data collection. Its scalability, simple components, and low hardware and operating costs make its use constantly increasing in both civil and mining applications. Recent permanent on-site camera installations have proven particularly viable for monitoring extended surfaces at very reasonable cost. The current work investigates the performance of a customised Raspberry Pi camera module V2 system and three additional low-cost camera systems: an ELP-USB8MP02G camera module, a compact digital camera (Nikon S3100), and a DSLR (Nikon D3). All systems, except the Nikon D3, are available at comparable prices. The comparison was conducted by collecting images of rock surfaces, one located in Australia and three located in Italy, from distances between 55 and 110 m. Results are presented in terms of image quality and three-dimensional reconstruction error, with the multi-view reconstructions compared to a reference model acquired with a terrestrial laser scanner.
35

Boynton, Madison K., Matthew Toenies, Nicole Cornelius, and Lindsey Rich. "Comparing camera traps and visual encounter surveys for monitoring small animals." California Fish and Wildlife Journal 107, no. 2 (August 9, 2021): 99–117. http://dx.doi.org/10.51492/cfwj.107.9.

Full text of the source
Abstract:
Amphibian and reptile species face numerous threats including disease, habitat loss and degradation, invasive species, and global climate change. However, effective management and conservation of herpetofauna largely depends upon resource-intensive survey methodologies. Recent research has shown promise in the use of camera trapping techniques, but these methods must be tested alongside traditional methods to fully understand their advantages and disadvantages. To meet this research need, we tested two herpetofauna survey methods: a modified version of the Adapted-Hunt Drift Fence Technique, which combines a drift fence with camera traps; and a traditional method of visual encounter surveys (VES) with cover boards. Between June and August 2020, we conducted two VES and installed one drift fence with camera traps at ten sites in Monterey County, CA, USA. The drift fence/camera setup outperformed the VES in terms of the number of observations and herpetofauna species detected. Drift fences with cameras produced a mean of 248 images of three to six species per site, while VES and cover objects produced a mean of 0.6 observations of zero to one species per site. Across all sites, we detected seven reptile and one amphibian species with the drift fence/camera setup, while VES resulted in identifications of two reptile and one amphibian species. In addition, drift fence/camera setups recorded a minimum of nine non-herpetofauna species including small mammals, birds, and invertebrates. Our research supports drift fences combined with camera traps as an effective alternative to VES for large-scale, multi-species herpetofauna survey efforts. Furthermore, we suggest specific improvements to enhance this method's performance, cost-effectiveness, and utility in remote environments. These advances in survey methods hold great promise for aiding efforts to manage and conserve global herpetofauna diversity.
36

Saitoh, Tomoko, and Yuko Kato. "Evaluation of Wearable Cameras for Monitoring and Analyzing Calf Behavior: A Preliminary Study." Animals 11, no. 9 (September 7, 2021): 2622. http://dx.doi.org/10.3390/ani11092622.

Full text of the source
Abstract:
Understanding cattle behavior is important for discerning their health and management status. However, manual observations of cattle are time-consuming and labor-intensive. Moreover, during manual observations, the presence or position of a human observer may alter the normal behavior of the cattle. Wearable cameras are small and lightweight; therefore, they do not disturb cattle behavior when attached to their bodies. Thus, this study aimed to evaluate the suitability of wearable cameras for monitoring and analyzing cattle behavior. From December 18 to 27, 2017, this study used four 2-month-old, group-housed Holstein calves at the Field Science Center of the Obihiro University of Agriculture and Veterinary Medicine, Japan. Calf behavior was recorded every 30 s using a wearable camera (HX-A1H, Panasonic, Japan) from 10:00 to 15:30 and observed directly from 11:00 to 12:00 and 14:00 to 15:00. In addition, the same observer viewed the camera recordings corresponding to the direct observation periods, and the results were compared. The correlation coefficients of all behavioral data from direct and wearable camera video observations were significant (p < 0.01). We conclude that wearable cameras are suitable for observing calf behavior, particularly their posture (standing or lying), as well as their ruminating and feeding behaviors.
37

Tjahjadi, M. E., L. A. Parsamardhani, and K. T. Suhari. "Bridge Structural Deformation Monitoring Using Digital Camera." IOP Conference Series: Earth and Environmental Science 1051, no. 1 (July 1, 2022): 012009. http://dx.doi.org/10.1088/1755-1315/1051/1/012009.

Full text of the source
Abstract:
Abstract Burgeoning off-the-shelf Digital Single Lens Reflex (DSLR) cameras have been gaining attention as fast and affordable tools for deformation monitoring of man-made engineering structures. When sub-millimetre accuracy is sought, their usage must be considered carefully, since lingering systematic errors in the imaging process plague such non-metric cameras. This paper discusses a close-range photogrammetric method for structural deformation monitoring of a bridge using a digital DSLR camera. The bridge is located in Malang Municipality, East Java province, Indonesia. More than 100 images of the bridge's concrete pillars were photographed in a convergent photogrammetric network, at distances varying between 5 m and 30 m, in each epoch. The coordinates of around 550 retro-reflective markers attached to the pillar facades were then calculated using the self-calibrating bundle adjustment method. Coordinate differences of the markers between the two consecutive epochs were detected with magnitudes between 0.03 mm and 6 mm at a sub-millimetre level of measurement precision. However, global congruency testing and localized deformation testing confirmed that the bridge pillar structures remained stable between those epochs.
38

Sourav, Abdullah All, and Joshua M. Peschel. "Visual Sensor Placement Optimization with 3D Animation for Cattle Health Monitoring in a Confined Operation." Animals 12, no. 9 (May 5, 2022): 1181. http://dx.doi.org/10.3390/ani12091181.

Full text of the source
Abstract:
Computer vision has been extensively used for livestock welfare monitoring in recent years, and data collection with a sensor or camera is the first part of the complete workflow. While current practice in computer vision-based animal welfare monitoring often analyzes data collected from a sensor or camera mounted on the roof or ceiling of a laboratory, such camera placement is not always viable in a commercial confined cattle feeding environment. This study therefore sought to determine the optimal camera placement locations in a confined steer feeding operation. Measurements of cattle pens were used to create a 3D farm model using Blender 3D computer graphic software. In the first part of this study, a method was developed to calculate the camera coverage in a 3D farm environment, and in the next stage, a genetic algorithm-based model was designed for finding optimal placements of a multi-camera and multi-pen setup. The algorithm’s objective was to maximize the multi-camera coverage while minimizing budget. Two different optimization methods involving multiple cameras and pen combinations were used. The results demonstrated the applicability of the genetic algorithm in achieving the maximum coverage and thereby enhancing the quality of the livestock visual-sensing data. The algorithm also provided the top 25 solutions for each camera and pen combination with a maximum coverage difference of less than 3.5% between them, offering numerous options for the farm manager.
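The paper's genetic algorithm is not reproduced here, but the general idea, binary genomes encoding which candidate cameras are selected, and a fitness that rewards floor-cell coverage while penalizing budget overruns, can be sketched as follows (the coverage sets, penalty weight, and GA parameters are invented for illustration):

```python
import random

def fitness(genome, coverage_sets, budget, cost_per_cam=1.0):
    """Cells covered by the selected cameras, heavily penalized over budget."""
    covered = set()
    for cam_cells, chosen in zip(coverage_sets, genome):
        if chosen:
            covered |= cam_cells
    over_budget = max(0.0, cost_per_cam * sum(genome) - budget)
    return len(covered) - 10.0 * over_budget

def evolve(coverage_sets, budget, pop_size=30, generations=60, seed=0):
    """Minimal GA: elitist selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    n = len(coverage_sets)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: fitness(g, coverage_sets, budget), reverse=True)
        parents = population[: pop_size // 2]          # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            mother, father = rng.sample(parents, 2)
            cut = rng.randrange(1, n)                  # one-point crossover
            child = mother[:cut] + father[cut:]
            if rng.random() < 0.2:                     # occasional mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=lambda g: fitness(g, coverage_sets, budget))

# Toy example: five candidate camera positions, each covering some pen cells
cams = [{1, 2, 3}, {3, 4}, {5, 6, 7}, {2, 3, 4}, {7, 8}]
best = evolve(cams, budget=3)
```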
39

Song, Junfang, Yao Fan, Huansheng Song, and Haili Zhao. "Target Tracking and 3D Trajectory Reconstruction Based on Multicamera Calibration." Journal of Advanced Transportation 2022 (January 4, 2022): 1–8. http://dx.doi.org/10.1155/2022/5006347.

Full text of the source
Abstract:
In traffic scenarios, vehicle trajectories can provide almost all the dynamic information of moving vehicles. Analyzing vehicle trajectories in the monitored scene makes it possible to grasp dynamic road traffic information. Cross-camera association of vehicle trajectories across multiple cameras can break the isolation of target information between single cameras and capture the overall road operating conditions in a large-scale video surveillance area, which helps road traffic managers conduct traffic analysis, prediction, and control. Based on the framework of DBT automatic target detection, this paper proposes a cross-camera vehicle trajectory matching method based on the Euclidean distance correlation of trajectory points. For the multi-target vehicle trajectories acquired by a single camera, we first perform 3D trajectory reconstruction based on joint camera calibration in the overlapping area, then complete the similarity association between cross-camera trajectories and the cross-camera trajectory update, and finally complete the transfer of vehicle trajectories between adjacent cameras. Experiments show that the proposed method addresses the difficulty that current tracking techniques have in matching vehicle trajectories across different cameras in complex traffic scenes, essentially achieving long-term, long-distance continuous tracking and trajectory acquisition of multiple targets across cameras.
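Associating trajectories by the Euclidean distance of their time-aligned points can be sketched as follows (toy 2D points rather than the paper's reconstructed 3D trajectories; the greedy nearest-track matching and the distance threshold are illustrative assumptions):

```python
import numpy as np

def trajectory_distance(traj_a, traj_b):
    """Mean Euclidean distance between time-aligned trajectory points."""
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    n = min(len(a), len(b))
    return float(np.linalg.norm(a[:n] - b[:n], axis=1).mean())

def associate(tracks_cam1, tracks_cam2, max_dist=5.0):
    """Greedily match each camera-1 track to its nearest camera-2 track."""
    matches = {}
    for name, traj in tracks_cam1.items():
        best = min(tracks_cam2, key=lambda k: trajectory_distance(traj, tracks_cam2[k]))
        if trajectory_distance(traj, tracks_cam2[best]) <= max_dist:
            matches[name] = best
    return matches

# Two vehicles seen in the overlap region of two cameras (common coordinates)
cam1 = {"v1": [(0, 0), (1, 1), (2, 2)], "v2": [(10, 0), (11, 0), (12, 0)]}
cam2 = {"a": [(10.5, 0.2), (11.4, 0.1), (12.2, 0.3)],
        "b": [(0.3, 0.1), (1.2, 0.9), (2.1, 2.2)]}
print(associate(cam1, cam2))  # {'v1': 'b', 'v2': 'a'}
```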
40

Yang. "Measurement of Dynamic Responses from Large Structural Tests by Analyzing Non-Synchronized Videos." Sensors 19, no. 16 (August 11, 2019): 3520. http://dx.doi.org/10.3390/s19163520.

Full text of the source
Abstract:
Image analysis techniques have been employed to measure displacements, deformation, and crack propagation, and to perform structural health monitoring. With the rapid development and wide application of digital imaging technology, consumer digital cameras are commonly used for such measurements because of their satisfactory imaging resolution, video recording capability, and relatively low cost. However, three-dimensional dynamic response monitoring and measurement on large-scale structures pose camera calibration and synchronization challenges for image analysis. Without satisfactory camera positions and orientations obtained from calibration and well-synchronized imaging, significant errors occur in the dynamic responses during image analysis and stereo triangulation. This paper introduces two camera calibration approaches suitable for large-scale structural experiments, as well as a synchronization method that estimates the time difference between two cameras and further minimizes the stereo-triangulation error. Two structural experiments are used to verify the calibration approaches and the synchronization method in acquiring dynamic responses. The results demonstrate the performance and accuracy improvement achieved by the proposed methods.
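The paper's synchronization method is not detailed in the abstract; one common way to estimate a frame offset between two non-synchronized cameras is to extract a per-frame signal from each video (e.g. the brightness spike of a shared flash or impact event) and search for the lag that best aligns them. A simplified sketch under that assumption:

```python
def estimate_frame_offset(sig_a, sig_b, max_lag):
    """Return the lag (in frames) of sig_b relative to sig_a that maximizes
    the normalized cross-correlation of the two per-frame signals."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score, n = 0.0, 0
        for i, a in enumerate(sig_a):
            j = i + lag
            if 0 <= j < len(sig_b):      # only correlate the overlapping frames
                score += a * sig_b[j]
                n += 1
        if n and score / n > best_score:
            best_score, best_lag = score / n, lag
    return best_lag
```

The recovered lag (or a sub-frame refinement of it) can then be applied to the image streams before stereo triangulation to reduce the synchronization error the paper describes.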
41

Chalmers, Carl, Paul Fergus, Serge Wich, Steven N. Longmore, Naomi Davies Walsh, Philip A. Stephens, Chris Sutherland, Naomi Matthews, Jens Mudde, and Amira Nuseibeh. "Removing Human Bottlenecks in Bird Classification Using Camera Trap Images and Deep Learning." Remote Sensing 15, no. 10 (May 18, 2023): 2638. http://dx.doi.org/10.3390/rs15102638.

Full text of the source
Abstract:
Birds are important indicators for monitoring both biodiversity and habitat health; they also play a crucial role in ecosystem management. Declines in bird populations can result in reduced ecosystem services, including seed dispersal, pollination and pest control. Accurate and long-term monitoring of birds to identify species of concern while measuring the success of conservation interventions is essential for ecologists. However, monitoring is time-consuming, costly and often difficult to manage over long durations and at meaningfully large spatial scales. Technologies such as camera traps, acoustic monitors and drones provide methods for non-invasive monitoring. There are two main problems with using camera traps for monitoring: (a) cameras generate many images, making it difficult to process and analyse the data in a timely manner; and (b) the high proportion of false positives hinders processing and analysis for reporting. In this paper, we outline an approach for overcoming these issues by utilising deep learning for real-time classification of bird species and automated removal of false positives in camera trap data. Images are classified in real-time using a Faster-RCNN architecture, transmitted from 3/4G-enabled cameras and processed using Graphical Processing Units (GPUs) to provide conservationists with key detection metrics, thereby removing the requirement for manual observations. Our models achieved an average sensitivity of 88.79%, a specificity of 98.16% and an accuracy of 96.71%. This demonstrates the effectiveness of using deep learning for automatic bird monitoring.
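The sensitivity, specificity and accuracy figures reported above follow the standard confusion-matrix definitions, which can be stated compactly (a generic helper, not the authors' code):

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity (true-positive rate), specificity (true-negative rate)
    and overall accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # detected birds / actual birds
    specificity = tn / (tn + fp)                  # rejected blanks / actual blanks
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # all correct / all images
    return sensitivity, specificity, accuracy
```

In this setting the high specificity is what matters most for the stated bottleneck: it measures how reliably false-positive camera-trap triggers are filtered out before conservationists ever see them.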
42

Chen, Ruizhe, Wei Tu, Qingquan Li, Zhipeng Chen, and Bochen Zhang. "Real-Time Deformation Measurement of Long-Span Bridge using Multi-Inertial Camera System." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1-2024 (May 10, 2024): 91–96. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-2024-91-2024.

Full text of the source
Abstract:
Monitoring the deformation of long-span bridges is essential for assessing structural health and safety. Existing single-camera deformation measurement methods cannot meet the high-precision requirements for measuring the deformation of long-span bridges. This paper proposes a real-time deformation measurement method for long-span bridges using a multi-inertial camera system. The method is based on visual measurement, in which the target deformation is measured by multiple cameras, while inertial measurement is used to estimate the relative pose of each camera and compensate for camera motion errors. The method is applied to the health monitoring system of a long-span bridge. The experimental results show that its measurements are highly consistent with hydrostatic leveling, with an RMSE of 4.83 mm. The method can accurately measure the deformation of long-span bridges in real time and has promising engineering value.
43

Le Quinio, Azénor, Eric De Oliveira, Alexandre Girard, Jean Guillard, Jean-Marc Roussel, Fabrice Zaoui, and François Martignac. "Automatic detection, identification and counting of anguilliform fish using in situ acoustic camera data: Development of a cross-camera morphological analysis approach." PLOS ONE 18, no. 2 (February 24, 2023): e0273588. http://dx.doi.org/10.1371/journal.pone.0273588.

Full text of the source
Abstract:
Acoustic cameras are increasingly used in monitoring studies of diadromous fish populations, even though analyzing their data is time-consuming. In complex in situ contexts, anguilliform fish may be especially difficult to identify automatically in acoustic camera data because the undulation of their body frequently results in fragmented targets. Our study aimed to develop a method based on a succession of computer vision techniques to automatically detect, identify and count anguilliform fish using data from multiple models of acoustic cameras. Indeed, several camera models, each with specific technical characteristics, are used to monitor fish populations, causing major differences in the shape and resolution of the recorded data. The method was applied to two large datasets recorded at two distinct monitoring sites with populations of European eels with different length distributions. The method yielded promising results for large eels, with more than 75% of eels automatically identified successfully using datasets from ARIS and BlueView cameras. However, only 42% of eels shorter than 60 cm were detected, with the best model performances observed at detection ranges of 4–9 m. Although improvements are required to overcome the fish-length limitations, our cross-camera method is promising for automatically detecting and counting large eels in long-term monitoring studies in complex environments.
44

Kalybek, Maksat, Mateusz Bocian, Wojciech Pakos, Jacek Grosel, and Nikolaos Nikitas. "Performance of Camera-Based Vibration Monitoring Systems in Input-Output Modal Identification Using Shaker Excitation." Remote Sensing 13, no. 17 (September 1, 2021): 3471. http://dx.doi.org/10.3390/rs13173471.

Full text of the source
Abstract:
Despite significant advances in the development of high-resolution digital cameras in the last couple of decades, their potential remains largely unexplored in the context of input-output modal identification. However, these remote sensors could greatly improve the efficacy of experimental dynamic characterisation of civil engineering structures. To this end, this study provides early evidence of the applicability of camera-based vibration monitoring systems in classical experimental modal analysis using an electromechanical shaker. A pseudo-random and sine chirp excitation is applied to a scaled model of a cable-stayed bridge at varying levels of intensity. The performance of vibration monitoring systems, consisting of a consumer-grade digital camera and two image processing algorithms, is analysed relative to that of a system based on accelerometry. A full set of modal parameters is considered in this process, including modal frequency, damping, mass and mode shapes. It is shown that the camera-based vibration monitoring systems can provide high accuracy results, although their effective application requires consideration of a number of issues related to the sensitivity, nature of the excitation force, and signal and image processing. Based on these findings, suggestions for best practice are provided to aid in the implementation of camera-based vibration monitoring systems in experimental modal analysis.
45

Dhole, Vedant V. "UNDER WATER DRONE WITH CAMERA." International Scientific Journal of Engineering and Management 03, no. 03 (March 23, 2024): 1–9. http://dx.doi.org/10.55041/isjem01445.

Full text of the source
Abstract:
The integration of unmanned underwater vehicles (UUVs) with high-definition cameras has opened new frontiers in underwater exploration, surveillance, and environmental monitoring. This project aims to design and develop an underwater drone equipped with a sophisticated camera system for versatile underwater applications. The primary objective is to create a compact, maneuverable, and cost-effective UUV capable of capturing high-quality images and videos in diverse aquatic environments. Key Words: Underwater Drone, Arduino Uno, BLDC Motors, ESC, ESP32 Camera, Subaquatic Exploration, Marine Robotics.
46

Nazeem, Nur Nazifah Adlina Mohd, Siti Lailatul Mohd Hassan, Ili Shairah Abdul Halim, Wan Fadzlida Hanim Abdullah, and Nasri Sulaiman. "Microcontroller-based camera with the sound source localization for automated accident detection." International Journal of Advances in Applied Sciences 13, no. 3 (September 1, 2024): 639. http://dx.doi.org/10.11591/ijaas.v13.i3.pp639-646.

Full text of the source
Abstract:
This paper presents a microcontroller-based camera controller with sound source localization (SSL). With the rising frequency of highway accidents in Malaysia, there is a pressing need for a reliable detection system. The current approach, involving fixed-angle cameras, requires constant human monitoring and has proven inefficient. To address this, the study introduces a hybrid camera system incorporating a camera for image capture and microphones to detect collision sounds. By integrating a pan-tilt (PT) camera controller driven by time difference of arrival (TDOA) inputs, the system can swiftly turn toward accident locations. The TDOA method converts differences in sound arrival times into camera angles. The accuracy of the PT camera's rotation angle was analyzed against the original sound-source angle. As a result, this project produced an automated highway monitoring camera system that uses SSL to detect car-crash sounds on highways. Its PT feature helps cover a large highway area and eliminates blind spots when capturing possible accident scenes. In experimental tests, the average error of the camera's pan and tilt angles was 19% and 23%, respectively. The pan-tilt angle accuracy can be increased by adding more analog acoustic sensors.
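The TDOA-to-angle conversion described above can be illustrated with the standard far-field two-microphone model, θ = arcsin(c·Δt / d); the microphone spacing, speed-of-sound constant and clamping below are illustrative choices, not values from the paper:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def tdoa_to_bearing(delta_t, mic_spacing):
    """Far-field bearing (degrees from array broadside) of a sound source,
    given the time difference of arrival delta_t (seconds) between two
    microphones spaced mic_spacing metres apart. The asin argument is
    clamped so measurement noise cannot push it outside [-1, 1]."""
    arg = SPEED_OF_SOUND * delta_t / mic_spacing
    arg = max(-1.0, min(1.0, arg))
    return math.degrees(math.asin(arg))
```

With two such microphone pairs mounted orthogonally, one pair could in principle drive the pan servo and the other the tilt servo of the PT controller.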
47

Barbosa, Amanda S., and Dayana B. Costa. "Use of BIM and visual data collected by UAS and 360° camera for construction progress monitoring." IOP Conference Series: Earth and Environmental Science 1101, no. 8 (November 1, 2022): 082007. http://dx.doi.org/10.1088/1755-1315/1101/8/082007.

Full text of the source
Abstract:
Although progress monitoring is essential to the success of construction, traditional monitoring methods based on information gathered manually through visual inspections are error-prone and depend on the experience of those who carry them out. Furthermore, most studies of progress monitoring with digital technologies focus on activities carried out outdoors, limiting the application of these methods on residential construction sites, which have many indoor activities. This study proposes a method for outdoor and indoor visual monitoring of construction progress using Building Information Modeling (BIM), a 360° camera, and photogrammetry aided by an Unmanned Aerial System (UAS). For this purpose, exploratory case studies were carried out. The first aimed to understand the operationalization of data collection and processing with the proposed technologies. These technologies were then used and evaluated to monitor progress in a second exploratory case study, enabling the development of a proposed method for integrating visual data collected by UAS and a 360° camera with BIM for progress monitoring. The status of the external area of the construction site was represented by point clouds generated from images collected by UAS. For monitoring inside the buildings, a 360° camera attached to a safety helmet was used. The results include an evaluation of the 360° camera for monitoring internal progress, presenting its strengths, limitations, and recommendations for use, as well as the proposed method for visual progress monitoring of indoor and outdoor activities using BIM, UAS, and 360° cameras.
48

Idrees, Haroon, Mubarak Shah, and Ray Surette. "Enhancing camera surveillance using computer vision: a research note." Policing: An International Journal 41, no. 2 (April 9, 2018): 292–307. http://dx.doi.org/10.1108/pijpsm-11-2016-0158.

Full text of the source
Abstract:
Purpose: The growth of police-operated surveillance cameras has outpaced the ability of humans to monitor them effectively. Computer vision is a possible solution. An ongoing research project on the application of computer vision within a municipal police department is described. The paper aims to discuss these issues. Design/methodology/approach: Following a demystification of computer vision technology, its potential for police agencies is developed with a focus on computer vision as a solution for two common surveillance-camera tasks (live monitoring of multiple surveillance cameras and summarizing archived video files). Three unaddressed research questions are considered: can specialized computer vision applications for law enforcement be developed at this time; how will computer vision be utilized within existing public-safety camera monitoring rooms; and what are the system-wide impacts of a computer vision capability on local criminal justice systems? Findings: Although computer vision is becoming accessible to law enforcement agencies, its impact has not been discussed or adequately researched, and there is little knowledge of computer vision or its potential in the field. Originality/value: This paper introduces and discusses computer vision from a law enforcement perspective and will be valuable to police personnel tasked with monitoring large camera networks and considering computer vision as a system upgrade.
49

IMASATO, Motonobu. "Marine Monitoring Technique using Omnidirectional Camera." Journal of the Visualization Society of Japan 28-1, no. 1 (2008): 339. http://dx.doi.org/10.3154/jvs.28.339.

Full text of the source
50

Hata, Yutaka, Tetsuya Fujisawa, Naotake Kamiura, Tadahito Egawa, and Kazuhiko Taniguchi. "Gas Consumption Measurement by Camera Monitoring." Transactions of the Institute of Systems, Control and Information Engineers 29, no. 9 (2016): 401–7. http://dx.doi.org/10.5687/iscie.29.401.

Full text of the source