Journal articles on the topic 'Visual Camera Failures'

To see the other types of publications on this topic, follow the link: Visual Camera Failures.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Visual Camera Failures.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Atif, Muhammad, Andrea Ceccarelli, Tommaso Zoppi, and Andrea Bondavalli. "Tolerate Failures of the Visual Camera With Robust Image Classifiers." IEEE Access 11 (2023): 5132–43. http://dx.doi.org/10.1109/access.2023.3237394.

2

Zhang, Xiaoguo, Qihan Liu, Bingqing Zheng, Huiqing Wang, and Qing Wang. "A visual simultaneous localization and mapping approach based on scene segmentation and incremental optimization." International Journal of Advanced Robotic Systems 17, no. 6 (November 1, 2020): 172988142097766. http://dx.doi.org/10.1177/1729881420977669.

Abstract:
Existing visual simultaneous localization and mapping (V-SLAM) algorithms are usually sensitive to scenes with sparse landmarks and large view transformations of camera motion, and they are liable to generate large pose errors that lead to tracking failures as the matching rate of feature points decreases. Aiming at these problems, this article proposes an improved V-SLAM method based on scene segmentation and an incremental optimization strategy. In the front end, this article proposes a scene segmentation algorithm considering camera motion direction and angle. By segmenting the trajectory and adding camera motion direction to the tracking thread, an effective prediction model of camera motion in scenes with sparse landmarks and large view transformations is realized. In the back end, this article proposes an incremental optimization method combining segmentation information and an optimization method for the tracking prediction model. By incrementally adding the state parameters and reusing the computed results, high-precision results for the camera trajectory and feature points are obtained with satisfactory computing speed. The performance of our algorithm is evaluated on two well-known datasets: TUM RGB-D and NYUDv2 RGB-D. The experimental results demonstrate that our method improves computational efficiency by 10.2% compared with state-of-the-art V-SLAMs on the desktop platform and by 22.4% on the embedded platform. Meanwhile, the robustness of our method is better than that of ORB-SLAM2 on the TUM RGB-D dataset.
3

Irmisch, P., D. Baumbach, and I. Ernst. "Robust Visual-Inertial Odometry in Dynamic Environments Using Semantic Segmentation for Feature Selection." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 435–42. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-435-2020.

Abstract:
Camera based navigation in dynamic environments with high content of moving objects is challenging. Keypoint-based localization methods need to reliably reject features that do not belong to the static background. Here, traditional statistical methods for outlier rejection quickly reach their limits. A common approach is the combination with an inertial measurement unit for visual-inertial odometry. Also, deep learning based semantic segmentation was recently successfully applied in camera based localization to identify features on common objects. In this work, we study the application of mask-based feature selection based on semantic segmentation for robust localization in high dynamic environments. We focus on visual-inertial odometry, but similarly investigate a state-of-the-art pure vision-based method as baseline. For a versatile evaluation, we use challenging self-recorded datasets based on different sensor systems. This includes a combined dataset of a real world system and its synthetic clone with a large number of humans for in-depth analysis. We further deploy large-scale datasets from pedestrian navigation in a mall with escalator scenes and vehicle navigation during the day and at night. Our results show that visual-inertial odometry performs generally well in dynamic environments itself, but also shows significant failures in challenging scenes, which are prevented by using the segmentation aid.
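To make the mask-based feature selection concrete, here is a minimal sketch (ours, not the authors' implementation), assuming a per-pixel semantic label map and hypothetical dynamic-class IDs:

```python
# Illustrative sketch: reject keypoints that fall on pixels labeled as
# potentially dynamic classes in a semantic segmentation mask.
import numpy as np

DYNAMIC_CLASSES = {11, 12, 13}  # hypothetical label IDs, e.g. person, rider, car

def filter_static_keypoints(keypoints, seg_mask):
    """keypoints: (N, 2) array of (u, v) pixel coordinates inside the image.
    seg_mask: (H, W) integer array of per-pixel class labels.
    Returns only the keypoints lying on the static background."""
    u = keypoints[:, 0].astype(int)
    v = keypoints[:, 1].astype(int)
    labels = seg_mask[v, u]  # class label under each keypoint
    keep = ~np.isin(labels, list(DYNAMIC_CLASSES))
    return keypoints[keep]
```

Only the surviving keypoints would then be fed to the odometry front end.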
4

Bartram, Angela. "When the Image Takes over the Real: Holography and Its Potential within Acts of Visual Documentation." Arts 9, no. 1 (February 15, 2020): 24. http://dx.doi.org/10.3390/arts9010024.

Abstract:
In Camera Lucida, Roland Barthes discusses the capacity of the photographic image to represent “flat death”. Documentation of an event, happening, or time is traditionally reliant on the photographic to determine its ephemeral existence and to secure its legacy within history. However, the traditional photographic document is often unsuitable to capture the real essence and experience of the artwork in situ. The hologram, with its potential to offer a three-dimensional viewpoint, suggests a desirable solution. However, there are issues concerning how this type of photographic document successfully functions within an art context. Attitudes to methods necessary for artistic production, and holography’s place within the process, are responsible for this problem. The seductive qualities of holography may be attributable to any failure that ensues, but, if used precisely, the process can be effective to create a document for ephemeral art. The failures and successes of the hologram to be reliable as a document of experience are discussed in this article, together with a suggestion of how it might undergo a transformation and reactivation to become an artwork itself.
5

Lodinger, Natalie R., and Patricia R. DeLucia. "Angle of Camera View Influences Resumption Lag in a Visual-Motor Task." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1291. http://dx.doi.org/10.1177/1541931213601803.

Abstract:
Prior research on interruptions examined the effects of different characteristics of the primary and interrupting tasks on performance of the primary task. One measure is the resumption lag, the time between the end of the interrupting task and the next action in the resumed primary task (Altmann & Trafton, 2004). Prior research showed that an increase in the workload of a task results in an increase in resumption lag (Iqbal & Bailey, 2005). A common feature of prior studies of resumption lag is the use of computer-based tasks. However, interruptions occur in other types of tasks, such as laparoscopic surgery, in which errors can result in serious consequences for the patient (Gillespie, Chaboyer, & Fairweather, 2012). Common interruptions during laparoscopic surgery include equipment failures and communication with team members (e.g., Gillespie et al., 2012). In laparoscopic surgery, a small incision is made in the patient, and a laparoscope is placed inside the body cavity. The surgeon typically views the surgical site on a two-dimensional screen rather than in three dimensions as in open surgery (Chan et al., 1997). The two-dimensional camera image imposes perceptual and cognitive demands on the surgeon, such as impaired depth perception (Chan et al., 1997; DeLucia & Griswold, 2011) and a limited field-of-view of the site (DeLucia & Griswold, 2011). The present study examined whether top-view and side-view camera angles, which putatively impose different cognitive demands (DeLucia & Griswold, 2011), would differentially affect the resumption lag in a visual-motor task. Participants completed a peg transfer task in which they were interrupted with a mental rotation task of different durations and rotation angles. The duration of the mental rotation task was either short (6 s) or long (12 s), representing relatively low and high cognitive demands, respectively. Smaller rotation angles (0, 60, and 300 degrees from vertical) and greater rotation angles (120, 180, and 240 degrees from vertical) presumably imposed smaller and larger cognitive demands, respectively. Resumption lag was measured as the time between the end of the interruption and the first time a peg was touched in the resumed peg transfer task. Participants needed significantly more time to resume the peg transfer task with the side view compared to the top view, and with the longer mental rotation task duration compared to the shorter duration. The main effect of rotation angle was not significant. The side view also resulted in higher ratings of mental demand, effort, and frustration on the Raw Task Load Index (RTLX), the ratings-only portion of the NASA-TLX (Hart, 2006). Thus, a visual-motor task that is higher in cognitive demand can result in more time to resume a primary task following an interruption. Practical implications are that camera viewing angles associated with lower cognitive demands should be preferred in the operating room when feasible, and that interruption durations should be minimized. However, results also indicated that the side view resulted in longer movement times than the top view, even without an interruption, suggesting that factors other than cognitive demands may account for effects of camera angle on resumption lag; this should be examined in future research.
6

Milella, Annalisa, Rosalia Maglietta, Massimo Caccia, and Gabriele Bruzzone. "Robotic inspection of ship hull surfaces using a magnetic crawler and a monocular camera." Sensor Review 37, no. 4 (September 18, 2017): 425–35. http://dx.doi.org/10.1108/sr-02-2017-0021.

Abstract:
Purpose: Periodic inspection of large tonnage vessels is critical to assess integrity and prevent structural failures that could have catastrophic consequences for people and the environment. Currently, inspection operations are undertaken by human surveyors, often in extreme conditions. This paper aims to present an innovative system for the automatic visual inspection of ship hull surfaces, using a magnetic autonomous robotic crawler (MARC) equipped with a low-cost monocular camera. Design/methodology/approach: MARC is provided with magnetic tracks that make it able to climb along the vertical walls of a vessel while acquiring close-up images of the traversed surfaces. A homography-based structure-from-motion algorithm is developed to build a mosaic image and also produce a metric representation of the inspected areas. To overcome low resolution and perspective distortion problems in the far field due to the tilted and low camera position, a "near to far" strategy is implemented, which incrementally generates an overhead view of the surface as it is traversed by the robot. Findings: This paper demonstrates the use of an innovative robotic inspection system for automatic visual inspection of vessels. It presents and validates through experimental tests a mosaicking strategy to build a global view of the structure under inspection. The use of the mosaic image as input to an automatic corrosion detector is also demonstrated. Practical implications: This paper may help to automate the inspection process, making it feasible to collect images from places otherwise difficult or impossible for humans to reach and to automatically detect defects, such as corroded areas. Originality/value: This paper provides a useful step towards the development of a new technology for automatic visual inspection of large tonnage ships.
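The homography-based mosaicking step can be sketched as follows (a simplification under our own assumptions, not the paper's code; it needs at least four good matches per frame and uses a naive overlay instead of proper blending):

```python
# Incrementally warp each new frame into the mosaic via a RANSAC homography.
import cv2
import numpy as np

orb = cv2.ORB_create(2000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def add_frame_to_mosaic(mosaic, frame):
    k1, d1 = orb.detectAndCompute(mosaic, None)
    k2, d2 = orb.detectAndCompute(frame, None)
    matches = bf.match(d2, d1)  # frame -> mosaic correspondences
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to outliers
    warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
    return np.where(warped > 0, warped, mosaic)  # naive overlay blending
```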
7

Congram, Benjamin, and Timothy Barfoot. "Field Testing and Evaluation of Single-Receiver GPS Odometry for Use in Robotic Navigation." Field Robotics 2, no. 1 (March 10, 2022): 1849–73. http://dx.doi.org/10.55417/fr.2022057.

Abstract:
Mobile robots rely on odometry to navigate in areas where localization fails. Visual odometry (VO), for instance, is a common solution for obtaining robust and consistent relative motion estimates of the vehicle frame. In contrast, Global Positioning System (GPS) measurements are typically used for absolute positioning and localization. However, when the constraint on absolute accuracy is relaxed, accurate relative position estimates can be found with one single-frequency GPS receiver by using time-differenced carrier phase (TDCP) measurements. In this paper, we implement and field test a single-receiver GPS odometry algorithm based on the existing theory of TDCP. We tailor our method for use on an unmanned ground vehicle (UGV) by incorporating proven robotics tools such as a vehicle motion model and robust cost functions. In the first half of our experiments, we evaluate our odometry on its own via a comparison with VO on the same test trajectories. After 4.3 km of testing, the results show our GPS odometry method has a 79% lower drift rate than a proven stereo VO method while maintaining a smooth error signal despite varying satellite availability. GPS odometry can also make robots more robust to catastrophic failures of their primary sensor when added to existing navigation pipelines. To prove this, we integrate our GPS odometry solution into Visual Teach and Repeat (VT&R), an established visual, path-following navigation framework. We perform further testing to show it can maintain accurate path following and prevent failures in challenging conditions including full camera dropouts. Code is available at https://github.com/utiasASRL/cpo.
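As a hedged sketch of the TDCP idea (our notation, simplified by neglecting satellite motion and atmospheric terms, which in practice are corrected from ephemeris): differencing the carrier phase of satellite s between epochs t1 and t2 cancels the constant integer ambiguity, leaving

$$\lambda\,\Delta\phi^{s} \approx \mathbf{e}_{s}^{\top}\big(\mathbf{r}(t_{2}) - \mathbf{r}(t_{1})\big) + c\,\Delta\delta t_{r} + \Delta\varepsilon^{s},$$

where λ is the carrier wavelength, e_s the unit line-of-sight vector, r the receiver position, cΔδt_r the change in receiver clock bias, and Δε^s residual error. Stacking this equation over the visible satellites and solving a least-squares problem (with a motion model and robust cost, as the paper does) yields the relative displacement.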
8

Montazer Zohour, Hamed, Bruno Belzile, Rafael Gomes Braga, and David St-Onge. "Minimize Tracking Occlusion in Collaborative Pick-and-Place Tasks: An Analytical Approach for Non-Wrist-Partitioned Manipulators." Sensors 22, no. 17 (August 26, 2022): 6430. http://dx.doi.org/10.3390/s22176430.

Abstract:
Several industrial pick-and-place applications, such as collaborative assembly lines, rely on visual tracking of the parts. Recurrent occlusions caused by the manipulator motion decrease line productivity and can provoke failures. This work provides a complete solution for maintaining an occlusion-free line of sight between a variable-pose camera and the object to be picked by a 6R manipulator that is not wrist-partitioned. We consider potential occlusions by the manipulator as well as the operator working at the assembly station. An actuated camera detects the goal object (part to pick) and keeps track of the operator. The approach consists of using the complete set of solutions obtained by deriving the univariate polynomial equation of the inverse kinematics (IK). Compared to numerical iterative solving methods, our strategy grants us a set of joint positions (posture) for each root of the equation, from which we extract the best one (minimizing the risk of occlusion). Our analytical method, integrating collision and occlusion avoidance optimizations, can greatly enhance the efficiency and safety of collaborative assembly workstations. We validate our approach with simulations as well as with physical deployments on commercial hardware.
9

Bazeille, Stéphane, Jesus Ortiz, Francesco Rovida, Marco Camurri, Anis Meguenani, Darwin G. Caldwell, and Claudio Semini. "Active camera stabilization to enhance the vision of agile legged robots." Robotica 35, no. 4 (November 17, 2015): 942–60. http://dx.doi.org/10.1017/s0263574715000909.

Abstract:
Legged robots have the potential to navigate in more challenging terrains than wheeled robots. Unfortunately, their control is more demanding, because they have to deal with the common tasks of mapping and path planning as well as more specific issues of legged locomotion, like balancing and foothold planning. In this paper, we present the integration and the development of a stabilized vision system on the fully torque-controlled hydraulically actuated quadruped robot (HyQ). The active head added onto the robot is composed of a fast pan and tilt unit (PTU) and a high-resolution wide angle stereo camera. The PTU enables camera gaze shifting to a specific area in the environment (both to extend and refine the map) or to track an object while navigating. Moreover, as the quadruped locomotion induces strong regular vibrations, impacts or slippages on rough terrain, we took advantage of the PTU to mechanically compensate for the robot's motions. In this paper, we demonstrate the influence of legged locomotion on the quality of the visual data stream by providing a detailed study of HyQ's motions, which are compared against a rough terrain wheeled robot of the same size. Our proposed Inertial Measurement Unit (IMU)-based controller allows us to decouple the camera from the robot motions. We show through experiments that, by stabilizing the image feedback, we can improve the onboard vision-based processes of tracking and mapping. In particular, during the outdoor tests on the quadruped robot, the use of our camera stabilization system improved the accuracy on the 3D maps by 25%, with a decrease of 50% of mapping failures.
10

Zhang, Jianming, Yang Liu, Hehua Liu, and Jin Wang. "Learning Local–Global Multiple Correlation Filters for Robust Visual Tracking with Kalman Filter Redetection." Sensors 21, no. 4 (February 5, 2021): 1129. http://dx.doi.org/10.3390/s21041129.

Abstract:
Visual object tracking is a significant technology for camera-based sensor network applications. Multilayer convolutional features comprehensively used in correlation filter (CF)-based tracking algorithms have achieved excellent performance. However, tracking failures occur in some challenging situations because ordinary features are unable to represent object appearance variations well and the correlation filters are updated irrationally. In this paper, we propose a local–global multiple correlation filters (LGCF) tracking algorithm for edge computing systems capturing moving targets, such as vehicles and pedestrians. First, we construct a global correlation filter model with deep convolutional features, and choose horizontal or vertical division according to the aspect ratio to build two local filters with hand-crafted features. Then, we propose a local–global collaborative strategy to exchange information between local and global correlation filters. This strategy can avoid wrong learning of the object appearance model. Finally, we propose a time-space peak to sidelobe ratio (TSPSR) to evaluate the stability of the current CF. When the estimated results of the current CF are not reliable, the Kalman filter redetection (KFR) model is enabled to recapture the object. The experimental results show that our presented algorithm achieves better performance on OTB-2013 and OTB-2015 compared with 12 other recent tracking algorithms. Moreover, our algorithm handles various challenges in object tracking well.
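The peak-to-sidelobe ratio underlying confidence measures of this kind can be sketched as follows (an illustrative plain PSR only; the paper's TSPSR extends it over time and space, and our exclusion window is an assumption):

```python
# Peak-to-sidelobe ratio of a correlation response map: high -> stable tracking,
# low -> unreliable estimate, so a redetection stage could be triggered.
import numpy as np

def psr(response, exclude=5):
    """response: 2-D float array (the CF response map)."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    sidelobe = response.copy()
    sidelobe[max(0, py - exclude):py + exclude + 1,
             max(0, px - exclude):px + exclude + 1] = np.nan  # mask peak region
    return (peak - np.nanmean(sidelobe)) / (np.nanstd(sidelobe) + 1e-8)
```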
11

Yang, Yongsheng, Minzhen Wang, Xinheng Wang, Cheng Li, Ziwen Shang, and Liying Zhao. "A Novel Monocular Vision Technique for the Detection of Electric Transmission Tower Tilting Trend." Applied Sciences 13, no. 1 (December 28, 2022): 407. http://dx.doi.org/10.3390/app13010407.

Abstract:
Transmission lines are primarily deployed overhead, and the transmission tower, acting as the fulcrum, can be affected by the unbalanced force of the wire and extreme weather, resulting in tower tilt, deformation, or collapse. This can jeopardize the safe operation of the power grid and even cause widespread failures, resulting in significant economic losses. Given the limitations of current tower tilt detection methods, this paper proposes a tower tilt detection and analysis method based on monocular vision images. A monocular camera collects the profile and contour features of the tower, which are combined with a tower tilt model to calculate and analyze the tilt. Through this improved monocular visual monitoring method, the perception accuracy of the tower tilt is improved by 7.5%, and the axial eccentricity is accurate to ±2 mm. The method provides real-time reliability and simple operation for detecting tower inclination, significantly reducing staff inspection intensity and ensuring that the power system operates safely and efficiently.
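A toy version of the geometric core, estimating tilt from the tower axis fitted in the image (our assumption of the pipeline; the paper's full model also involves camera pose and metric scale):

```python
# Fit the tower's axis to sampled edge points and measure its deviation
# from the image vertical.
import numpy as np

def tilt_angle_deg(axis_points):
    """axis_points: (N, 2) pixel coordinates sampled along the tower axis."""
    pts = axis_points - axis_points.mean(axis=0)   # center the points
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    dx, dy = vt[0]                                 # principal direction
    return np.degrees(np.arctan2(abs(dx), abs(dy)))  # angle from vertical
```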
12

Xu, Song, Wusheng Chou, and Hongyi Dong. "A Robust Indoor Localization System Integrating Visual Localization Aided by CNN-Based Image Retrieval with Monte Carlo Localization." Sensors 19, no. 2 (January 10, 2019): 249. http://dx.doi.org/10.3390/s19020249.

Abstract:
This paper proposes a novel multi-sensor-based indoor global localization system integrating visual localization aided by CNN-based image retrieval with a probabilistic localization approach. The global localization system consists of three parts: coarse place recognition, fine localization and re-localization from kidnapping. Coarse place recognition exploits a monocular camera to realize the initial localization based on image retrieval, in which off-the-shelf features extracted from a pre-trained Convolutional Neural Network (CNN) are adopted to determine the candidate locations of the robot. In the fine localization, a laser range finder is equipped to estimate the accurate pose of a mobile robot by means of an adaptive Monte Carlo localization, in which the candidate locations obtained by image retrieval are considered as seeds for initial random sampling. Additionally, to address the problem of robot kidnapping, we present a closed-loop localization mechanism to monitor the state of the robot in real time and make adaptive adjustments when the robot is kidnapped. The closed-loop mechanism effectively exploits the correlation of image sequences to realize the re-localization based on Long-Short Term Memory (LSTM) network. Extensive experiments were conducted and the results indicate that the proposed method not only exhibits great improvement on accuracy and speed, but also can recover from localization failures compared to two conventional localization methods.
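The coarse place-recognition step can be illustrated with a small retrieval sketch (assumptions: precomputed CNN descriptors and cosine similarity; the actual network, metric, and MCL seeding details are the paper's):

```python
# Shortlist candidate map locations by CNN-descriptor similarity; the top
# matches then seed the initial sampling of Monte Carlo localization.
import numpy as np

def retrieve_candidates(query_desc, db_descs, db_poses, top_k=5):
    """query_desc: (D,) descriptor of the current image.
    db_descs: (M, D) descriptors of mapped images; db_poses: M poses."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                     # cosine similarity to every map image
    best = np.argsort(-sims)[:top_k]  # indices of the most similar places
    return [db_poses[i] for i in best]
```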
13

Serrat, Carles, Anna Cellmer, Anna Banaszek, and Vicenç Gibert. "Exploring conditions and usefulness of UAVs in the BRAIN Massive Inspections Protocol." Open Engineering 9, no. 1 (January 31, 2019): 1–6. http://dx.doi.org/10.1515/eng-2019-0004.

Abstract:
In this paper, the authors conduct a case study analysis implementing the use of UAVs for data collection within the BRAIN framework for the diagnosis of facade failures. The main goal is to assess the conditions and usefulness of UAVs in the BRAIN protocol by analyzing the goodness of fit to the fundamental requirements that support this inspection methodology. This preliminary qualitative approach allows the authors to investigate the benefits and potential of this high-performance technology as a complement or alternative to the initial method, which is based on visual inspections supported, at most, by high-resolution digital camera images. For the study, a sample of facades was selected in Poland. A fully equipped UAV collected the images. Finally, the full procedure, the collected data, and the positive and negative issues have been assessed from the perspective of the requirements involved in a multiscale BRAIN inspection. Overall scoring conditions have been determined and, as a conclusion, it can be stated that the use of UAVs for technical inspections in a population-based predictive approach is, and even more will be in the future, an interesting complementary tool for data collection.
14

Parlakyıldız, Sakir, Muhsin Tunay Gencoglu, and Mehmet Sait Cengiz. "Analysis of Failure Detection and Visibility Criteria in Pantograph-Catenary Interaction." Light & Engineering 28, no. 6 (December 2020): 127–35. http://dx.doi.org/10.33383/2020-040.

Abstract:
The main purpose of new studies investigating pantograph-catenary interaction in electric rail systems is to detect malfunctions. In pantograph-catenary interaction studies, cameras with non-contact fault detection methods are used extensively in the literature. However, none of these studies analyse the lighting conditions that improve visual function for cameras. The main subject of this study is to increase the visibility of cameras used in railway systems. In this context, adequate illuminance of the test environment is one of the most important parameters affecting failure detection success. With optimal lighting, the rate of fault detection increases. For this purpose, a camera and an 18 W LED luminaire were placed on a wagon, one of the electric rail system elements. This study considered the CIE 140:2019 (2nd edition) standard. Thanks to this lighting, it is easier for cameras to detect faults on electric trains on the move. As a result, in scientific studies, especially in rail systems, the lighting of mobile test environments, such as pantograph-catenary, should be optimal. In environments where visibility conditions improve, the rate of fault detection increases.
15

Xin, Jing, Han Cheng, and Baojing Ran. "Visual servoing of robot manipulator with weak field-of-view constraints." International Journal of Advanced Robotic Systems 18, no. 1 (January 1, 2021): 172988142199032. http://dx.doi.org/10.1177/1729881421990320.

Abstract:
Aiming at the problem of servoing task failure caused by the manipulated object deviating from the camera field-of-view (FOV) during the robot manipulator visual servoing (VS) process, a new VS method based on an improved tracking-learning-detection (TLD) algorithm is proposed in this article, which allows the manipulated object to deviate from the camera FOV for several continuous frames and maintains the smoothness of the robot manipulator motion during VS. First, to implement the robot manipulator visual object tracking task with strong robustness under weak FOV constraints, an improved TLD algorithm is proposed. The algorithm is then used to extract the image features of the manipulated object in the current frame (when the object is in the camera FOV) or predict them (when the object is out of the camera FOV), and the position of the manipulated object in the current image is further estimated. Finally, a visual sliding mode control law is designed according to the image feature errors to control the motion of the robot manipulator so as to complete the visual tracking task of the robot manipulator to the manipulated object in complex natural scenes with high robustness. Several robot manipulator VS experiments were conducted on a six-degree-of-freedom MOTOMAN SV3 industrial manipulator in different natural scenes. The experimental results show that the proposed robot manipulator VS method can relax the FOV constraint requirements on real-time visibility of the manipulated object and effectively solve the problem of servoing task failure caused by the object deviating from the camera FOV during VS.
16

Orejón-Sánchez, Rami David, Manuel Jesús Hermoso-Orzáez, and Alfonso Gago-Calderón. "LED Lighting Installations in Professional Stadiums: Energy Efficiency, Visual Comfort, and Requirements of 4K TV Broadcast." Sustainability 12, no. 18 (September 17, 2020): 7684. http://dx.doi.org/10.3390/su12187684.

Abstract:
Nowadays, LED lighting technology reaches a higher Value of Energy Efficiency in Installations (VEEI, W/m² per 100 lux) than conventional luminaire lighting due to the number of lumens per watt that LEDs are able to generate, as well as the directional nature of their emissions together with the adjustment capability through concentrator lenses with beam graduations that reach 5°. This achieves energy savings of up to 80%. Furthermore, considering the substantial decrease in flicker, the noticeably improved usability of ultra-slow-motion cameras, and the fading of switching-on or rearm times upon failure, LED technology stands out as the main solution for the illumination of professional sports facilities. This article describes the evolution of regulatory requirements that are being imposed by the governing institutions of sports (FIFA, UEFA, FIBA, etc.) and professional leagues (LaLiga, Euroliga, etc.) in order to guarantee their competence as high-quality television products. In addition, the trends in requirements and specifications regarding lighting equipment and its installation, which are intended to convert stadiums into optimized centers for the celebration and dissemination of mass events, are analyzed (settings, photometry, etc.), particularly those concerning horizontal, vertical, and camera illuminance, average and extreme uniformities, glare, reduction of intrusive light in bleachers, flicker, color rendering index (CRI), correlated color temperature (CCT), and start-up times.
17

Cao, Song Xiao, Xuan Yin Wang, Xiao Jie Fu, and Ke Xiang. "Servo Tracking of Moving Object Based on Particle Filter." Advanced Materials Research 271-273 (July 2011): 1130–35. http://dx.doi.org/10.4028/www.scientific.net/amr.271-273.1130.

Abstract:
We present a servo control model in a particle filter to realize robust visual object tracking. The particle filter has attracted much attention due to its robust tracking performance in cluttered environments. However, most methods assume a moving object and a stationary camera; as a result, tracking will fail if the object goes out of the camera's field of view. In this paper, a closed-loop control model based on speed regulation is proposed to drive a pan/tilt/zoom (PTZ) camera to keep the target centered in the camera's field of view. The experimental results show that our system can track the moving object well and can always keep the object in the middle of the field of view. The system is computationally efficient and can run entirely in real time.
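The speed-regulation loop can be pictured with a minimal proportional controller (gains, deadband, and sign conventions are our assumptions):

```python
# Drive pan/tilt speeds proportionally to the target's pixel offset from the
# image center, so the PTZ camera keeps the object in the middle of the view.
def ptz_speed_command(target_xy, frame_size, k=0.005, deadband=10):
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    ex, ey = target_xy[0] - cx, target_xy[1] - cy  # pixel error from center
    pan = k * ex if abs(ex) > deadband else 0.0    # proportional speed control
    tilt = k * ey if abs(ey) > deadband else 0.0
    return pan, tilt
```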
18

Yekkehfallah, Majid, Ming Yang, Zhiao Cai, Liang Li, and Chuanxiang Wang. "Accurate 3D Localization Using RGB-TOF Camera and IMU for Industrial Mobile Robots." Robotica 39, no. 10 (February 22, 2021): 1816–33. http://dx.doi.org/10.1017/s0263574720001526.

Abstract:
Localization based on visual natural landmarks is one of the state-of-the-art localization methods for automated vehicles that is, however, limited in fast motion and low-texture environments, which can lead to failure. This paper proposes an approach to solve these limitations with an extended Kalman filter (EKF) based on a state estimation algorithm that fuses information from a low-cost MEMS Inertial Measurement Unit and a Time-of-Flight camera. We demonstrate our results in an indoor environment. We show that the proposed approach does not require any global reflective landmark for localization and is fast, accurate, and easy to use with mobile robots.
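The fusion cycle the paper describes follows the standard EKF pattern; a bare-bones sketch (all models and matrices assumed, not the paper's):

```python
# One EKF iteration: the IMU drives the prediction, the Time-of-Flight
# camera measurement drives the correction.
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """x: state; P: covariance; u: IMU input; z: ToF measurement;
    f/h: process and measurement models; F/H: their Jacobians at x."""
    x_pred = f(x, u)                     # IMU-based prediction
    P_pred = F @ P @ F.T + Q
    y = z - h(x_pred)                    # innovation from the ToF camera
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```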
19

Humnabad, Prashant S., M. B. Hanamantraygouda, and S. B. Halesh. "Fatigue Studies On Aluminum 6061/SiC Reinforcement Metal Matrix Composites." Journal of Mines, Metals and Fuels 70, no. 3A (July 12, 2022): 143. http://dx.doi.org/10.18311/jmmf/2022/30684.

Abstract:
Fatigue is a process of progressive localized plastic deformation occurring in a material subjected to cyclic stresses and strains at high stress concentration locations that may culminate in cracks or complete fracture after a sufficient number of fluctuations. Fatigue testing is carried out using ASTM D3479 with a notch or crack for investigating crack initiation. Several fatigue tests were conducted in tension-tension and/or tension-compression loading at a frequency of 10 Hz or a sinusoidal wave frequency of 5 Hz, at constant amplitude. The fatigue tests were interrupted by the researchers at regular intervals after a predetermined number of cycles to monitor crack advance and to observe the failure modes in various ways, such as visual observation, digital camera, traveling microscope, CCD camera, etc.
20

Hrúz, Michal, Martin Bugaj, Andrej Novák, Branislav Kandera, and Benedikt Badánik. "The Use of UAV with Infrared Camera and RFID for Airframe Condition Monitoring." Applied Sciences 11, no. 9 (April 21, 2021): 3737. http://dx.doi.org/10.3390/app11093737.

Abstract:
The new progressive smart technologies announced in the fourth industrial revolution in aviation (Aviation 4.0) represent new possibilities and big challenges in aircraft maintenance processes. The main benefit of these technologies is the possibility to monitor, transfer, store, and analyze huge datasets. Based on the analysis outputs, it is possible to improve current preventive maintenance processes and implement predictive maintenance processes. These solutions lower downtime, save manpower, and extend component lifetimes; thus, maximum effectivity and safety are achieved. The article deals with the possible implementation of an unmanned aerial vehicle (UAV) with an infrared camera and Radio Frequency Identification (RFID) as two of the smart hangar technologies for airframe condition monitoring. The presented implementations of smart technologies follow up on the specific results of a case study focused on trainer aircraft failure monitoring and its impact on maintenance strategy changes. The case study failure indexes show the critical parts of the aircraft that are most subject to damage. The aim of the article was to justify the need for thorough monitoring of critical parts of the aircraft and then to analyze and propose a more effective and suitable form of technical condition monitoring of aircraft critical parts. The article describes the whole process of visual inspection performed by an unmanned aerial vehicle (UAV) with an IR camera and its related processes; in addition, it covers the possible use of RFID tags as a labeling tool supporting the visual inspection. The implementation criteria apply to small aircraft maintenance, repair, and overhaul organizations, and can later also increase operational efficiency. The final suggestions describe the possible use of the proposed solutions, their main benefits, and the limitations of their implementation in the maintenance of trainer aircraft.
21

Jesus, Thiago C., Paulo Portugal, Daniel G. Costa, and Francisco Vasques. "A Comprehensive Dependability Model for QoM-Aware Industrial WSN When Performing Visual Area Coverage in Occluded Scenarios." Sensors 20, no. 22 (November 16, 2020): 6542. http://dx.doi.org/10.3390/s20226542.

Abstract:
In critical industrial monitoring and control applications, dependability evaluation will be usually required. For wireless sensor networks deployed in industrial plants, dependability evaluation can provide valuable information, enabling proper preventive or contingency measures to assure their correct and safe operation. However, when employing sensor nodes equipped with cameras, visual coverage failures may have a deep impact on the perceived quality of industrial applications, besides the already expected impacts of hardware and connectivity failures. This article proposes a comprehensive mathematical model for dependability evaluation centered on the concept of Quality of Monitoring (QoM), processing availability, reliability and effective coverage parameters in a combined way. Practical evaluation issues are discussed and simulation results are presented to demonstrate how the proposed model can be applied in wireless industrial sensor networks when assessing and enhancing their dependability.
22

Miyamoto, Ayaho. "Development of a Remote Collaborative Visual Inspection System for Road Condition Assessment." Key Engineering Materials 569-570 (July 2013): 135–42. http://dx.doi.org/10.4028/www.scientific.net/kem.569-570.135.

Abstract:
This paper describes a Remote Collaborative Visual Inspection System for integrated management that links digital video captured by a moving vehicle with online road drawings. The newly developed road condition assessment system consists of commercially available on-board high-resolution video cameras for the visual inspection of road pavements and road appurtenances, together with a Web connection system. The system enables users to select a road section in a road register on an on-screen map and visually observe not only pavements but also road facilities, slopes, the state of vegetation, and road-occupying structures. The system, therefore, can be expected to help reduce visual detection failures compared with conventional visual observation from moving vehicles and to make highly objective evaluation possible through observation by two or more persons. By using image data, the system also provides basic data that can be used not only for maintenance but also for road planning.
23

Zhang, Xue Chang, Xu Zhang, Ying Hou Lou, and Jun Hua Chen. "Automatic Visual Inspection System of Gas Valve’s External Taper Thread Based on Image Domain." Applied Mechanics and Materials 37-38 (November 2010): 207–12. http://dx.doi.org/10.4028/www.scientific.net/amm.37-38.207.

Abstract:
The failure of the gas valve's external screw taper thread is a very serious problem which may result in accidents. Routine inspection of the threads is thus necessary. The efficiency of traditional manual testing methods cannot meet production requirements. An automatic visual inspection system for the gas valve's external screw taper thread is presented in this paper. The system consists of a high-performance 200-megapixel CCD camera, a LED backlight, and a holder for the gas valve. Through image acquisition, image processing, edge detection, and feature dimension measurement, the system can effectively measure the parameters of the taper thread, such as thread angle, thread depth, and thread taper angle, in the image domain. It can meet the testing requirements of the external screw taper thread and has wide application prospects.
24

Xu, Mingliang, Qingfeng Li, Jianwei Niu, Hao Su, Xiting Liu, Weiwei Xu, Pei Lv, Bing Zhou, and Yi Yang. "ART-UP: A Novel Method for Generating Scanning-Robust Aesthetic QR Codes." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1 (April 16, 2021): 1–23. http://dx.doi.org/10.1145/3418214.

Abstract:
Quick response (QR) codes are usually scanned in different environments, so they must be robust to variations in illumination, scale, coverage, and camera angles. Aesthetic QR codes improve the visual quality, but subtle changes in their appearance may cause scanning failure. In this article, a new method to generate scanning-robust aesthetic QR codes is proposed, which is based on a module-based scanning probability estimation model that can effectively balance the tradeoff between visual quality and scanning robustness. Our method locally adjusts the luminance of each module by estimating the probability of successful sampling. The approach adopts the hierarchical, coarse-to-fine strategy to enhance the visual quality of aesthetic QR codes, which sequentially generate the following three codes: a binary aesthetic QR code, a grayscale aesthetic QR code, and the final color aesthetic QR code. Our approach also can be used to create QR codes with different visual styles by adjusting some initialization parameters. User surveys and decoding experiments were adopted for evaluating our method compared with state-of-the-art algorithms, which indicates that the proposed approach has excellent performance in terms of both visual quality and scanning robustness.
25

Remke, Alexander André, Jesus Rodrigo-Comino, Stefan Wirtz, and Johannes B. Ries. "Finding Possible Weakness in the Runoff Simulation Experiments to Assess Rill Erosion Changes without Non-Intermittent Surveying Capabilities." Sensors 20, no. 21 (November 2, 2020): 6254. http://dx.doi.org/10.3390/s20216254.

Abstract:
The Terrestrial Photogrammetry Scanner (TEPHOS) offers the possibility to precisely monitor linear erosion features using the Structure from Motion (SfM) technique. It combines a static multi-camera array with a dynamically moved digital video-frame camera, designed to obtain 3-D models of rills before and after the runoff experiments. The main goals were to (1) obtain better insight into the rills; (2) reduce the technical gaps generated during the runoff experiments using only one camera; (3) enable the visual location of eroded, transported and accumulated material. In this study, we obtained a mean error over all pictures of 0.00433 pixels, and every single picture was under 0.15 pixels. So, we obtained an error of about 1/10th of the maximum possible resolution. A conservative value for the overall accuracy was one pixel, which means that, in our case, the accuracy was 0.0625 mm. The point density, in our example, reached 29,484,888 pts/m2. It became possible to get a glimpse of the hotspots of sidewall failure and rill-bed incision. We conclude that the combination of both approaches (rill experiment and 3D models) will make it easy to describe the soil erosion processes accurately in a mathematical-physical way under laboratory conditions.
26

Bortnowski, Piotr, Horst Gondek, Robert Król, Daniela Marasova, and Maksymilian Ozdoba. "Detection of Blockages of the Belt Conveyor Transfer Point Using an RGB Camera and CNN Autoencoder." Energies 16, no. 4 (February 7, 2023): 1666. http://dx.doi.org/10.3390/en16041666.

Abstract:
In the material transfer area, the belt is exposed to considerable damage, the energy of falling material is lost, and there is significant dust and noise. One of the most common causes of failure is transfer chute blockage, when the flow of material in the free fall or loading zone is disturbed by oversized rock parts or other objects, e.g., rock bolts. The failure of a single transfer point may take the entire transport route out of service and is associated with costly breakdowns. For this reason, those places require continuous monitoring and special surveillance measures. The number of methods for monitoring this type of blockage is limited. The article presents the research results on the possibility of visual monitoring of the transfer operating status on an object in an underground copper ore mine. A standard industrial RGB camera was used to obtain the video material from the transfer point area, and the recorded frames were processed by a detection algorithm based on a neural network. The CNN autoencoder was taught to reconstruct the image of regular transfer operating conditions. A data set with the recorded transfer blockage state was used for validation.
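The detection principle reduces to reconstruction-error thresholding; a hedged sketch (the Keras-style model interface and the threshold calibration are our assumptions):

```python
# Flag frames whose autoencoder reconstruction error exceeds a threshold
# calibrated on frames of normal transfer-point operation.
import numpy as np

def is_blocked(frame, autoencoder, threshold):
    """frame: (H, W, C) normalized image; autoencoder: model trained only
    on normal operating conditions."""
    recon = autoencoder.predict(frame[np.newaxis])[0]
    error = np.mean((frame - recon) ** 2)  # per-frame reconstruction MSE
    return error > threshold               # anomaly -> possible blockage
```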
27

Zhou, Jiexin, Zi Wang, Yunna Bao, Qiufu Wang, Xiaoliang Sun, and Qifeng Yu. "Robust monocular 3D object pose tracking for large visual range variation in robotic manipulation via scale-adaptive region-based method." International Journal of Advanced Robotic Systems 19, no. 1 (January 1, 2022): 172988062210769. http://dx.doi.org/10.1177/17298806221076978.

Abstract:
Many robot manipulation processes involve large visual range variation between the hand-eye camera and the object, which in turn causes object scale changes of a large span in the image sequence captured by the camera. In order to accurately guide the manipulator, the relative 6-degree-of-freedom (6D) pose between the object and manipulator is continuously required in the process. The large-span scale change of the object in the image sequence often leads to 6D pose tracking failure for existing pose tracking methods. To tackle this problem, this article proposes a novel scale-adaptive region-based monocular pose tracking method. First, the impact of the object scale on the convergence performance of the local region-based pose tracker is meticulously tested and analyzed. Then, a universal region radius calculation model based on object scale is built from the statistical analysis results. Finally, we develop a novel scale-adaptive localized region-based pose tracking model by merging the scale-adaptive radius selection mechanism into the local region-based method. The proposed method adjusts the local region size according to the scale of the object projection and achieves robust pose tracking. Experimental results on synthetic and real image sequences indicate that the proposed method achieves better performance than the traditional localized region-based method in manipulator operation scenarios that involve large visual range variation.
28

Luo, Kaiqing, Manling Lin, Pengcheng Wang, Siwei Zhou, Dan Yin, and Haolan Zhang. "Improved ORB-SLAM2 Algorithm Based on Information Entropy and Image Sharpening Adjustment." Mathematical Problems in Engineering 2020 (September 23, 2020): 1–13. http://dx.doi.org/10.1155/2020/4724310.

Abstract:
Simultaneous Localization and Mapping (SLAM) has become a research hotspot in the field of robotics in recent years. However, most visual SLAM systems are based on static assumptions that ignore motion effects. If image sequences are not rich in texture information or the camera rotates at a large angle, the SLAM system will fail to localize and map. To solve these problems, this paper proposes an improved ORB-SLAM2 algorithm based on information entropy and sharpening processing. The information entropy of each segmented image block is calculated, an entropy threshold is determined by an adaptive image entropy thresholding algorithm, and the image blocks whose entropy is below the threshold are sharpened. The experimental results show that, compared with the ORB-SLAM2 system, the relative trajectory error decreases by 36.1% and the absolute trajectory error decreases by 45.1%. Although these indicators are greatly improved, the processing time is not greatly increased. To some extent, the algorithm solves the problem of system localization and mapping failure caused by large-angle camera rotation and insufficient image texture information.
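Both ingredients are easy to sketch (parameters are our assumptions, not the paper's values):

```python
# Shannon entropy of a grayscale block, and unsharp-mask sharpening applied
# to blocks whose entropy falls below the adaptively chosen threshold.
import cv2
import numpy as np

def block_entropy(gray_block):
    hist = np.bincount(gray_block.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))  # entropy in bits

def sharpen(gray_block, amount=1.5, sigma=3):
    blur = cv2.GaussianBlur(gray_block, (0, 0), sigma)
    return cv2.addWeighted(gray_block, 1 + amount, blur, -amount, 0)
```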
29

Chen, Chang, and Hua Zhu. "Visual-inertial SLAM method based on optical flow in a GPS-denied environment." Industrial Robot: An International Journal 45, no. 3 (May 21, 2018): 401–6. http://dx.doi.org/10.1108/ir-01-2018-0002.

Abstract:
Purpose: This study aims to present a visual-inertial simultaneous localization and mapping (SLAM) method for accurate positioning and navigation of mobile robots in the event of global positioning system (GPS) signal failure caused by buildings, trees, and other obstacles. Design/methodology/approach: In this framework, a feature extraction method distributes features across the image in texture-less scenes. The assumption of constant luminosity is improved, and the features are tracked by optical flow to enhance the stability of the system. The camera data and inertial measurement unit data are tightly coupled to estimate the pose by nonlinear optimization. Findings: The method is successfully performed on a mobile robot and steadily extracts and tracks features in low-texture environments. The end-to-end error is 1.375 m with respect to the total length of 762 m. The authors achieve better relative pose error, scale, and CPU load than ORB-SLAM2 on the EuRoC data sets. Originality/value: The main contribution of this study is the theoretical derivation and experimental application of a new visual-inertial SLAM method that has excellent accuracy and stability in weak texture scenes.
30

Neves, Francisco Soares, Rafael Marques Claro, and Andry Maykol Pinto. "End-to-End Detection of a Landing Platform for Offshore UAVs Based on a Multimodal Early Fusion Approach." Sensors 23, no. 5 (February 22, 2023): 2434. http://dx.doi.org/10.3390/s23052434.

Abstract:
A perception module is a vital component of a modern robotic system. Vision, radar, thermal, and LiDAR are the most common choices of sensors for environmental awareness. Relying on a single source of information makes a system prone to failure under specific environmental conditions (e.g., visual cameras are affected by glary or dark environments). Thus, relying on different sensors is an essential step to introduce robustness against various environmental conditions. Hence, a perception system with sensor fusion capabilities produces the redundant and reliable awareness critical for real-world systems. This paper proposes a novel early fusion module that is reliable against individual cases of sensor failure when detecting an offshore maritime platform for UAV landing. The model explores the early fusion of a still unexplored combination of visual, infrared, and LiDAR modalities. The contribution is a simple methodology intended to facilitate the training and inference of a lightweight state-of-the-art object detector. The early-fusion-based detector achieves solid detection recalls of up to 99% in all cases of sensor failure and in extreme weather conditions such as glary, dark, and foggy scenarios, with real-time inference below 6 ms.
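Conceptually, early fusion just assembles one multi-channel input before the detector; a toy sketch (the channel layout is assumed, not the paper's):

```python
# Stack visual, infrared, and projected LiDAR depth into a single tensor;
# a failed sensor only zeroes some input channels instead of blinding the model.
import numpy as np

def fuse_inputs(rgb, ir, lidar_depth):
    """rgb: (H, W, 3); ir: (H, W); lidar_depth: (H, W), both registered
    to the camera frame. A dead sensor is passed in as an all-zero array."""
    return np.dstack([rgb, ir[..., None], lidar_depth[..., None]])  # (H, W, 5)
```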
31

Chen, Chengbin, YaoYuan Tian, Liang Lin, SiFan Chen, HanWen Li, YuXin Wang, and KaiXiong Su. "Obtaining World Coordinate Information of UAV in GNSS Denied Environments." Sensors 20, no. 8 (April 15, 2020): 2241. http://dx.doi.org/10.3390/s20082241.

Abstract:
GNSS information is vulnerable to external interference and can fail when unmanned aerial vehicles (UAVs) fly fully autonomously in complex environments such as high-rise parks and dense forests. This paper presents a pan-tilt-based visual servoing (PBVS) method for obtaining world coordinate information. The system is equipped with an inertial measurement unit (IMU), an air pressure sensor, a magnetometer, and a pan-tilt-zoom (PTZ) camera. In this paper, we explain the physical model and the application method of the PBVS system, which can be briefly summarized as follows. We track the operation target with a UAV carrying a camera and output information about the UAV's position and the angle between the PTZ and the anchor point. In this way, we can obtain the current absolute position of the UAV from its absolute altitude, collected by the height-sensing unit, together with the absolute geographic coordinates and altitude of the tracked target. We set up an actual UAV experimental environment. To meet the calculation requirements, some sensor data are sent to the cloud through the network. Through the field tests, it can be concluded that the systematic deviation of the overall solution is less than the error of the GNSS sensor equipment, and it can provide navigation coordinate information for the UAV in complex environments. Compared with traditional visual navigation systems, our scheme has the advantage of obtaining absolute, continuous, accurate, and efficient navigation information at short distances (within 15 m of the target). This system can be used in scenarios that require autonomous cruising, such as self-powered inspections by UAVs, patrols in parks, etc.
32

Zha, Yufei, Min Wu, Zhuling Qiu, Jingxian Sun, Peng Zhang, and Wei Huang. "Online Semantic Subspace Learning with Siamese Network for UAV Tracking." Remote Sensing 12, no. 2 (January 19, 2020): 325. http://dx.doi.org/10.3390/rs12020325.

Abstract:
In urban environment monitoring, visual tracking on unmanned aerial vehicles (UAVs) can support more applications owing to its inherent advantages, but it also brings new challenges for existing visual tracking approaches (such as complex background clutter, rotation, fast motion, small objects, and real-time issues due to camera motion and viewpoint changes). Based on the Siamese network, tracking can be conducted efficiently on recent UAV datasets. Unfortunately, the learned convolutional neural network (CNN) features are not discriminative enough to identify the target from the background/clutter, in particular for distractors, and cannot capture appearance variations temporally. Additionally, occlusion and disappearance are also causes of tracking failure. In this paper, a semantic subspace module is designed and integrated into a Siamese network tracker to encode the local fine-grained details of the target for UAV tracking. More specifically, the target's semantic subspace is learned online to adapt to the target in the temporal domain. Additionally, the pixel-wise response of the semantic subspace can be used to detect occlusion and disappearance of the target, and this enables reasonable updating to relieve model drift. Substantial experiments conducted on challenging UAV benchmarks illustrate that the proposed method can obtain competitive results in both accuracy and efficiency when applied to UAV videos.
33

Osipyan, G. A., V. M. Sheludchenko, N. Y. Youssef, Kh Khraystin, R. A. Dzhalili, and E. I. Krasnolutskaya. "Combined Intrastromal Implantation of a Semipermeable Hydrogel Membrane in Case of Corneal Graft Failure and Multiple Keratoplasty (Clinical Observation)." Ophthalmology in Russia 18, no. 1 (April 4, 2021): 165–70. http://dx.doi.org/10.18008/1816-5095-2021-1-165-170.

Abstract:
Introduction: Penetrating keratoplasty (PK) is an effective method for the surgical treatment of failure of the cornea and its layers and of low visual acuity. It is well known that the graft degrades over time, which is associated with "chronic immune destruction". Rekeratoplasty is conducted in case of a rapid decrease in transplant function, but even with multiple rekeratoplasty iterations the result can be unstable. Patient and methods: Patient D., 42 years old, complained of low vision in the left eye (visual acuity: hand movements at 10 cm from the face). Both eyes had been operated on repeatedly over the previous 10 years. The left eye had undergone two artificial iris transplantations combined with IOL implantation, Ahmed drainage implantation, and five rekeratoplasties. The corneal graft had failed, with a transplant thickness of 802 μm. The patient suffers from Mediterranean fever and polyarthritis. We conducted a course of conservative therapy, which increased visual acuity to 0.05. Then we performed a hybrid keratotransplantation. The following steps were performed in sequence: mechanical removal of the epithelium; femtosecond-laser formation of a replaceable corneal disk in the recipient, 500 μm thick and 7.0 mm in diameter; removal of the disk; femtosecond-laser formation of a central penetrating hole 3 mm in diameter opposite the artificial pupil; and placement of a 60 μm thick hydrogel graft on the bottom of the bed. The hydrogel graft was covered by a donor corneal graft, which was fixed with interrupted sutures and a soft contact lens. Results: Visual acuity of the left eye was 0.2 one day after keratoplasty; 0.3 after 1 month, with a transparent transplant; and 0.4 after 4 months (0.7 with complex correction), with a transparent transplant and a donor disc thickness of 275 μm. Conclusion: After multiple rekeratoplasty iterations, the presented method of combined keratotransplantation makes it possible to obtain an effective, if not permanent, result. At the same time, metabolism across the polymer is preserved, since it communicates with the anterior chamber. The case requires further observation.
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Fubin, Xiaohua Gao, and Wenbo Song. "A Vision Aided Initial Alignment Method of Strapdown Inertial Navigation Systems in Polar Regions." Sensors 22, no. 13 (June 21, 2022): 4691. http://dx.doi.org/10.3390/s22134691.

Full text
Abstract:
The failure of traditional initial alignment algorithms for the strapdown inertial navigation system (SINS) at high latitudes is a significant challenge due to the rapid convergence of meridians near the poles. This paper presents a novel vision-aided initial alignment method for the SINS of autonomous underwater vehicles (AUVs) in polar regions. We redesign the initial alignment model by combining inertial navigation mechanization equations in a transverse coordinate system (TCS) with visual measurement information obtained from a camera fixed on the vehicle. The observability of the proposed method is analyzed under different swing modes, and an extended Kalman filter is chosen as the information fusion algorithm. Simulation results show that the proposed method improves the accuracy of the initial alignment for SINS in polar regions, and that the deviation angle has similar estimation accuracy in the uniaxial, biaxial, and triaxial swing modes, which is consistent with the observability analysis.
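As a reference point for how such a fusion filter operates, here is a generic extended Kalman filter predict/update cycle of the kind used to blend an inertial process model with visual measurements. It is a sketch under assumed model interfaces, not the paper's transverse-coordinate alignment model; all names are illustrative.

```python
import numpy as np

def ekf_step(x, P, f, F_jac, h, H_jac, z, Q, R):
    """One generic EKF predict/update cycle, of the kind used to fuse
    inertial mechanization (process model f) with a visual measurement z.

    x, P : state estimate and covariance
    f, h : process and measurement functions; F_jac, H_jac their Jacobians
    Q, R : process and measurement noise covariances
    """
    # Predict with the (here: inertial mechanization) process model.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update with the (here: visual) measurement.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```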
APA, Harvard, Vancouver, ISO, and other styles
35

Torres, Adalberto, David E. Milov, Daniela Melendez, Joseph Negron, John J. Zhao, and Stephen T. Lawless. "A new approach to alarm management: mitigating failure-prone systems." Journal of Hospital Administration 3, no. 6 (October 9, 2014): 79. http://dx.doi.org/10.5430/jha.v3n6p79.

Full text
Abstract:
Alarm management that effectively reduces alarm fatigue and improves patient safety has yet to be convincingly demonstrated. The leaders of our newly constructed children's hospital envisioned and created a hospital department dedicated to tackling this daunting task. The Clinical Logistics Center (CLC) is the hospital's hub, where all of its monitoring technology is integrated and tracked twenty-four hours a day, seven days a week by trained paramedics. Redundancy has been added to the alarm management process through timely automatic escalation of alarms from bedside staff to CLC staff. Paramedics distinguish true alarms from nuisance or false alarms in real time, based on signal quality and direct visual confirmation of the patient through bedside cameras, and alert the bedside staff accordingly. Communication between CLC and bedside staff occurs primarily via smartphone texts to avoid disruption of clinical activities. The paramedics also continuously monitor physiologic variables for early indicators of clinical deterioration, which leads to early interventions through mechanisms such as rapid response team activation. Hands-free voice communication via room intercoms facilitates CLC logistical support of the bedside staff during acute clinical crises and resuscitations. Standard work is maintained through protocol-driven process steps and serial training of both bedside and CLC staff. This innovative approach to prioritizing alarms for the bedside staff is a promising solution for improving alarm management.
APA, Harvard, Vancouver, ISO, and other styles
36

Zhang, Xue Chang, Xu Zhang, and Junhua Chen. "Study on the Inspection Method of Gas Valve’s External Taper Screw Thread Based on Image Domain." Advanced Materials Research 189-193 (February 2011): 4195–200. http://dx.doi.org/10.4028/www.scientific.net/amr.189-193.4195.

Full text
Abstract:
Failure of a gas valve's external taper screw thread is a serious problem that can result in accidents, so routine inspection of the threads is necessary. The efficiency of traditional manual testing methods cannot meet production requirements. An automatic visual inspection system for the external taper screw thread of gas valves is presented in this paper. The system consists of a high-performance 200-megapixel CCD camera, an LED backlight, and a gas valve holder. Through image acquisition, image processing, edge detection, and feature dimension measurement, the system can effectively measure taper-thread parameters such as the thread half-angle, thread depth, and thread taper angle in the image domain. Due to the lead angle and installation error, the actual thread half-angle differs from the theoretical value. Three methods for obtaining the thread half-angle are presented in the paper; the analysis shows that Methods 2 and 3 can be used to measure it. The system meets the testing requirements of the external taper screw thread and has broad application prospects.
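To make the measurement pipeline concrete, here is a minimal OpenCV sketch of the kind of edge-based angle measurement the abstract describes: backlit image, Canny edges, line detection, then a flank angle from line geometry. The file name, thresholds, and vertical-axis assumption are illustrative placeholders, not the paper's calibrated setup.

```python
import cv2
import numpy as np

# Hypothetical pipeline: backlit image -> edges -> flank line -> half-angle.
img = cv2.imread("valve_thread.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Detect straight segments along the thread flanks.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)

# The angle between a flank line and the thread axis (assumed vertical here)
# approximates the thread half-angle; a real system would also correct for
# the lead angle and installation error mentioned in the abstract.
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]
    flank_angle = np.degrees(np.arctan2(abs(x2 - x1), abs(y2 - y1)))
    print(f"Estimated thread half-angle: {flank_angle:.1f} degrees")
```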
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Rong, Wan Li, Lizhi Tan, Haiyu Liu, Qiqing Le, Songyun Jiang, and Kevin T. Nguyen. "Pantograph Catenary Performance Detection of Energy High-Speed Train Based on Machine Vision." Mathematical Problems in Engineering 2022 (August 8, 2022): 1–8. http://dx.doi.org/10.1155/2022/9680545.

Full text
Abstract:
With the rapid development of high-speed rail in China, ensuring safety during train operation is very important. A critical part of a train's power supply is the pantograph–catenary system, consisting of a pantograph and a catenary; failure of this system can severely disrupt normal train operation, so its dynamic performance must be monitored in real time while the train runs. Based on a study and analysis of pantograph–catenary dynamic performance parameters, this paper develops a vehicle-mounted vision system for real-time detection of those parameters. The results are as follows: the detection method yields low visual error and high accuracy. The machine-vision-based contact wire height detection module developed in this paper performs well, with high test accuracy; the arcing detection module can effectively detect arcing, store arcing pictures, and display the duration of a single arc and the arcing rate of the section in real time, with good practical performance. The results show that with a 16 mm camera lens focal length, the error of the machine vision system is low. The system designed in this paper may contribute greatly to the condition monitoring and fault diagnosis of the pantograph–catenary systems of high-speed trains in the future.
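As a rough illustration of how an arcing detector of this kind can work, the sketch below flags frames whose brightest pixels form a saturated blob and then reports an arcing rate. The video path, brightness threshold, and blob-size cutoff are assumptions for illustration only; a deployed system would localize the contact point and calibrate these values.

```python
import cv2

# Hypothetical arcing detector: pantograph arcing appears as brief, intense
# bright flashes, so threshold the brightest pixels and count their frames.
cap = cv2.VideoCapture("pantograph.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
arc_frames = 0
total_frames = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    total_frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
    # A large saturated region near the contact point is treated as an arc.
    if cv2.countNonZero(bright) > 50:
        arc_frames += 1

cap.release()
if total_frames:
    print(f"Arcing rate: {arc_frames / total_frames:.2%} of frames "
          f"(~{arc_frames / fps:.2f} s of arcing)")
```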
APA, Harvard, Vancouver, ISO, and other styles
38

Chapman, Michael A., Cao Min, and Deijin Zhang. "CONTINUOUS MAPPING OF TUNNEL WALLS IN A GNSS-DENIED ENVIRONMENT." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 481–85. http://dx.doi.org/10.5194/isprs-archives-xli-b3-481-2016.

Full text
Abstract:
The need for reliable systems for capturing precise detail in tunnels has increased as the number of tunnels (e.g., for cars and trucks, trains, subways, mining, and other infrastructure) has grown and as the ageing and subsequent deterioration of these structures has introduced structural degradation and eventual failures. Due to the hostile environments encountered in tunnels, mobile mapping systems are plagued by various problems such as loss of GNSS signals, drift of inertial measurement systems, low lighting, dust, and surface textures too poor for feature identification and extraction. A tunnel mapping system using alternative sensors and algorithms that can deliver precise coordinates and feature attributes from surfaces along the entire tunnel path is presented. This system employs image bridging, or visual odometry, to estimate precise sensor positions and orientations. The fundamental concept is the use of image sequences to geometrically extend the control information in the absence of absolute positioning data sources. This is a non-trivial problem due to changes in scale, perceived resolution, and image contrast, and the lack of salient features. The sensors employed include forward-looking high-resolution digital frame cameras coupled with auxiliary light sources. In addition, a high-frequency lidar system and a thermal imager are included to provide three-dimensional point clouds of the tunnel walls along with thermal images for moisture detection. The mobile mapping system is equipped with an array of 16 cameras and light sources to capture the tunnel walls, and continuous images are produced using a semi-automated mosaicking process. Results of preliminary experimentation are presented to demonstrate the effectiveness of the system for the generation of seamless, precise tunnel maps.
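For readers who want the mechanics of "image bridging", the sketch below estimates the relative camera pose between consecutive frames from feature matches and the essential matrix, the standard monocular visual-odometry recipe; poses are then chained where no GNSS fix is available. The intrinsic matrix K and all parameters are placeholders rather than the authors' calibrated system, and monocular translation is recovered only up to scale.

```python
import cv2
import numpy as np

# Assumed camera intrinsics; a real system would use calibrated values.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(img0, img1):
    """Estimate rotation R and unit-scale translation t from img0 to img1."""
    k0, d0 = orb.detectAndCompute(img0, None)
    k1, d1 = orb.detectAndCompute(img1, None)
    matches = matcher.match(d0, d1)
    p0 = np.float32([k0[m.queryIdx].pt for m in matches])
    p1 = np.float32([k1[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=mask)
    return R, t  # translation scale must come from another source (e.g., lidar)
```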
APA, Harvard, Vancouver, ISO, and other styles
39

Ishida, Yuki, Yoshitsugu Manabe, and Noriko Yata. "Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss." Journal of Imaging 8, no. 5 (April 26, 2022): 125. http://dx.doi.org/10.3390/jimaging8050125.

Full text
Abstract:
Recent advances in depth measurement and its utilization have made point cloud processing more critical. Additionally, the human head is essential for communication, and its three-dimensional data are expected to be utilized in this regard. However, a single RGB-Depth (RGBD) camera is prone to occlusion and to depth measurement failure for dark hair colors such as black hair. Recently, point cloud completion, where an entire point cloud is estimated and generated from a partial point cloud, has been studied, but only the shape is learned, rather than the completion of colored point clouds. Thus, this paper proposes a machine learning-based completion method for colored point clouds with XYZ location information and International Commission on Illumination (CIE) LAB (L*a*b*) color information. The proposed method uses the color difference between point clouds, based on the Chamfer Distance (CD) or Earth Mover's Distance (EMD) used for point cloud shape evaluation, as a color loss. In addition, an adversarial loss on L*a*b*-Depth images rendered from the output point cloud can improve the visual quality. The experiments examined networks trained on a colored point cloud dataset created by combining two 3D datasets: hairstyles and faces. Experimental results show that using the adversarial loss with the colored point cloud renderer in the proposed method improves evaluation in the image domain.
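To ground the loss the abstract mentions, here is a minimal Chamfer-style distance over 6D points that separates a geometric term from a color term. The 0.1 weighting and the additive combination are illustrative assumptions, not the paper's exact color-loss formulation.

```python
import torch

def colored_chamfer_distance(p, q):
    """Symmetric Chamfer distance over 6D points (XYZ + L*a*b*).

    p, q: (N, 6) and (M, 6) tensors. The split into geometric and color
    terms, and their weighting, are hypothetical choices for illustration.
    """
    d_xyz = torch.cdist(p[:, :3], q[:, :3])   # (N, M) geometric distances
    d_lab = torch.cdist(p[:, 3:], q[:, 3:])   # (N, M) color differences
    d = d_xyz + 0.1 * d_lab                   # assumed color weighting
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```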
APA, Harvard, Vancouver, ISO, and other styles
40

Yoon, Howoon, S. M. Nadim Uddin, and Yong Ju Jung. "Multi-Scale Attention-Guided Non-Local Network for HDR Image Reconstruction." Sensors 22, no. 18 (September 17, 2022): 7044. http://dx.doi.org/10.3390/s22187044.

Full text
Abstract:
High-dynamic-range (HDR) image reconstruction methods are designed to fuse multiple low-dynamic-range (LDR) images captured with different exposure values into a single HDR image. Recent CNN-based methods mostly perform local attention- or alignment-based fusion of multiple LDR images to create HDR content. Relying on a single attention mechanism or on alignment alone fails to compensate for the ghosting artifacts that can arise in the synthesized HDR images due to object motion or camera movement across the different LDR inputs. In this study, we propose a multi-scale attention-guided non-local network called MSANLnet for efficient HDR image reconstruction. To mitigate ghosting artifacts, the proposed MSANLnet performs implicit alignment of LDR image features with multi-scale spatial attention modules and then reconstructs pixel intensity values using long-range dependencies through non-local means-based fusion. These modules adaptively select useful information that is not damaged by an object's movement or unfavorable lighting conditions for image pixel fusion. Quantitative evaluations against several current state-of-the-art methods show that the proposed approach achieves higher performance than the existing methods. Moreover, comparative visual results show the effectiveness of the proposed method in restoring saturated information from the original input images and mitigating ghosting artifacts caused by large object movement. Ablation studies confirm the effectiveness of the proposed method, architectural choices, and modules for efficient HDR reconstruction.
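The sketch below shows the general shape of a spatial attention module for exposure fusion: features of a non-reference exposure are weighted by their agreement with the reference exposure, so misaligned (ghost-prone) regions are suppressed before fusion. It is a minimal PyTorch illustration, not the MSANLnet module itself; the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Minimal spatial attention for LDR feature fusion (illustrative only)."""

    def __init__(self, channels):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-pixel, per-channel weights in [0, 1]
        )

    def forward(self, ref_feat, non_ref_feat):
        # Weights are computed from the concatenated reference and
        # non-reference features, then applied to the non-reference branch.
        weights = self.att(torch.cat([ref_feat, non_ref_feat], dim=1))
        return non_ref_feat * weights  # suppress misaligned, ghost-prone regions
```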
APA, Harvard, Vancouver, ISO, and other styles
41

Tiago, Liliane Marques de Pinho, Diogo Fernandes dos Santos, Douglas Eulálio Antunes, Letícia Marques Pinho Tiago, and Isabela Maria Bernardes Goulart. "Assessment of neuropathic pain in leprosy patients with relapse or treatment failure by infrared thermography: A cross-sectional study." PLOS Neglected Tropical Diseases 15, no. 9 (September 23, 2021): e0009794. http://dx.doi.org/10.1371/journal.pntd.0009794.

Full text
Abstract:
Background: Neuropathic pain (NP) is one of the main complications of leprosy, and its management is challenging. Infrared thermography (IRT) has been shown to be effective in the evaluation of peripheral autonomic function resulting from microcirculation flow changes in painful syndromes. This study used IRT to map the skin temperature on the hands and feet of leprosy patients with NP. Methodology/Principal findings: This cross-sectional study included 20 controls and 55 leprosy patients, distributed into 29 with NP (PWP) and 26 without NP (PNP). Thermal images of the hands and feet were captured with an infrared camera, and clinical evaluations were performed. Electroneuromyography (ENMG) was used as a complementary neurological exam. The instruments used for the NP diagnosis were the visual analog pain scale (VAS), the Douleur Neuropathique en 4 Questions (DN4), and a simplified neurological assessment protocol. The prevalence of NP was 52.7%. Pain intensity showed that 93.1% of patients with NP had moderate/severe pain. The most frequent DN4 items in individuals with NP were numbness (86.2%), tingling (86.2%), and electric shocks (82.7%). Type 1 reactional episodes were statistically significant in the PWP group. Approximately 81.3% of patients showed a predominance of multiple mononeuropathy in ENMG, 79.6% had sensory loss, and 81.4% showed some degree of disability. The average temperature in the patients' hands and feet was slightly lower than in the controls, but without a significant difference. Compared to controls, all patients showed significant temperature asymmetry at almost all points assessed on the hands, except for two palmar points and one dorsal point. In the feet, there was significant asymmetry at all points, indicating greater involvement of the lower limbs. Conclusion: IRT confirmed the asymmetric pattern of leprosy neuropathy, indicating a change in the function of the autonomic nervous system, and proved to be a useful method in the assessment of pain.
APA, Harvard, Vancouver, ISO, and other styles
42

Velásquez, David, Alejandro Sánchez, Sebastian Sarmiento, Mauricio Toro, Mikel Maiza, and Basilio Sierra. "A Method for Detecting Coffee Leaf Rust through Wireless Sensor Networks, Remote Sensing, and Deep Learning: Case Study of the Caturra Variety in Colombia." Applied Sciences 10, no. 2 (January 19, 2020): 697. http://dx.doi.org/10.3390/app10020697.

Full text
Abstract:
Agricultural activity has always been threatened by the presence of pests and diseases that prevent the proper development of crops and negatively affect farmers' economies. One of these is Coffee Leaf Rust (CLR), a fungal epidemic disease that affects coffee trees and causes massive defoliation. As an example, this disease has been affecting coffee trees in Colombia (the third largest producer of coffee worldwide) since the 1980s, leading to devastating losses of 70% to 80% of the harvest. Failure to detect pathogens at an early stage can result in infestations that cause massive destruction of plantations and significantly damage the commercial value of the products. The most common way to detect the disease is to walk through the crop and perform a human visual inspection. In response to this problem, different research studies have shown that technological methods can help to identify these pathogens. Our contribution is an experiment that builds a diagnostic model of the CLR development stage in a Coffea arabica (Caturra variety) small-scale crop through the technological integration of remote sensing (drone-mounted multispectral cameras), wireless sensor networks (a multisensor approach), and Deep Learning (DL) techniques. Our diagnostic model achieved an F1-score of 0.775. The analysis of the results revealed a p-value of 0.231, indicating that the difference between diagnosing the disease by visual inspection and through the proposed technological integration was not statistically significant; both methods gave similar diagnoses.
APA, Harvard, Vancouver, ISO, and other styles
43

Rosas-Cervantes, Vinicio Alejandro, Quoc-Dong Hoang, Soon-Geul Lee, and Jae-Hwan Choi. "Multi-Robot 2.5D Localization and Mapping Using a Monte Carlo Algorithm on a Multi-Level Surface." Sensors 21, no. 13 (July 4, 2021): 4588. http://dx.doi.org/10.3390/s21134588.

Full text
Abstract:
Most indoor environments have wheelchair adaptations or ramps, providing an opportunity for mobile robots to navigate sloped areas while avoiding steps. Indoor environments with integrated sloped areas are divided into different levels, and these multi-level areas challenge mobile robot navigation due to sudden changes in reference sensors, whether visual, inertial, or laser-scan instruments. Using multiple cooperative robots is advantageous for mapping and localization, since they permit rapid exploration of the environment and provide higher redundancy than a single robot. This study proposes a multi-robot localization scheme using two robots (leader and follower) to perform fast and robust environment exploration in multi-level areas. The leader robot is equipped with a 3D LIDAR for 2.5D mapping and a Kinect camera for RGB image acquisition. Using the 3D LIDAR, the leader robot obtains information for particle localization, with particles sampled from the walls and obstacle tangents. We employ a convolutional neural network on the RGB images for multi-level area detection. Once the leader robot detects a multi-level area, it generates a path and notifies the follower robot to go to the detected location. The follower robot uses a 2D LIDAR to explore the boundaries of the even areas and generates a 2D map using an extension of the iterative closest point (ICP) algorithm. The 2D map is used as a re-localization resource in case of failure of the leader robot.
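Since the leader's localization is particle-based, a minimal Monte Carlo localization cycle is sketched below: noisy motion propagation, measurement reweighting, and effective-sample-size resampling. The noise level, resampling rule, and measurement interface are generic textbook choices, not the paper's 2.5D implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, weights, motion, measure_likelihood, motion_noise=0.05):
    """One Monte Carlo localization cycle over particle poses (x, y, theta).

    motion: (dx, dy, dtheta) odometry increment;
    measure_likelihood: function mapping an (N, 3) pose array to the
    likelihood of the current scan (e.g., agreement of LIDAR rays with
    the map). Illustrative only.
    """
    # Motion update: propagate every particle with noisy odometry.
    particles = particles + motion + rng.normal(0, motion_noise, particles.shape)

    # Measurement update: reweight by how well each pose explains the scan.
    weights = weights * measure_likelihood(particles)
    weights /= weights.sum()

    # Resample when the effective sample size collapses.
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```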
APA, Harvard, Vancouver, ISO, and other styles
44

Zhu, Fan, Liangliang Wang, Yilin Wen, Lei Yang, Jia Pan, Zheng Wang, and Wenping Wang. "Failure Handling of Robotic Pick and Place Tasks With Multimodal Cues Under Partial Object Occlusion." Frontiers in Neurorobotics 15 (March 8, 2021). http://dx.doi.org/10.3389/fnbot.2021.570507.

Full text
Abstract:
The success of a robotic pick-and-place task depends on the success of the entire procedure: from the grasp planning phase, to the grasp establishment phase, then the lifting and moving phase, and finally the releasing and placing phase. Being able to detect and recover from grasping failures throughout the entire process is therefore a critical requirement for both the robotic manipulator and the gripper, especially considering the almost inevitable occlusion of the object by the gripper itself during the task. With the rapid rise of soft grippers, which rely heavily on their under-actuated bodies and compliant, open-loop control, less information is available from the gripper for effective overall system control. Targeting the effectiveness of robotic grasping, this work proposes a hybrid policy that combines visual cues with the proprioception of our gripper for failure detection and recovery in grasping, using a self-developed proprioceptive soft robotic gripper capable of contact sensing. We address failure handling in robotic pick-and-place tasks and propose (1) more accurate pose estimation of a known object by considering an edge-based cost in addition to the image-based cost; (2) robust object tracking techniques that work even when the object is partially occluded, achieving mean overlap precision of up to 80%; (3) detection of contact and contact loss between the object and the gripper by analyzing the internal pressure signals of our gripper; and (4) robust failure handling that combines visual cues under partial occlusion with proprioceptive cues from our soft gripper to detect and recover from different accidental grasping failures. The proposed system was experimentally validated with the proprioceptive soft robotic gripper mounted on a collaborative robotic manipulator and a consumer-grade RGB camera, showing that combining visual cues and proprioception from our soft robotic gripper effectively improves the detection of, and recovery from, major grasping failures at different stages, for compliant and robust grasping.
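Contribution (3) lends itself to a small sketch: detecting contact and contact loss from a soft gripper's internal pressure signal. The thresholds, baseline window, and sampling rate below are illustrative assumptions, not the authors' calibrated values.

```python
import numpy as np

def detect_contact_events(pressure, fs=100.0, rise=0.15, drop=0.15):
    """Detect contact and contact-loss events from an internal pressure signal.

    pressure: 1-D array of pressure samples; fs: sampling frequency in Hz.
    Returns lists of contact and contact-loss timestamps in seconds.
    """
    baseline = np.median(pressure[: int(fs)])     # settle-period baseline
    delta = pressure - baseline
    contacts, losses = [], []
    in_contact = False
    for i, d in enumerate(delta):
        if not in_contact and d > rise:           # pressure rise => contact
            in_contact = True
            contacts.append(i / fs)
        elif in_contact and d < drop:             # pressure fall => slip/loss
            in_contact = False
            losses.append(i / fs)
    return contacts, losses
```

In a full policy, a contact-loss event during the lifting or moving phase would trigger the visual verification and recovery behavior the abstract describes.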
APA, Harvard, Vancouver, ISO, and other styles
45

Labbé, Mathieu, and François Michaud. "Multi-Session Visual SLAM for Illumination-Invariant Re-Localization in Indoor Environments." Frontiers in Robotics and AI 9 (June 16, 2022). http://dx.doi.org/10.3389/frobt.2022.801886.

Full text
Abstract:
For robots navigating using only a camera, illumination changes in indoor environments can cause re-localization failures during autonomous navigation. In this paper, we present a multi-session visual SLAM approach to create a map made of multiple variations of the same locations in different illumination conditions. The multi-session map can then be used at any hour of the day for improved re-localization capability. The approach presented is independent of the visual features used, and this is demonstrated by comparing re-localization performance between multi-session maps created using the RTAB-Map library with SURF, SIFT, BRIEF, BRISK, KAZE, DAISY, and SuperPoint visual features. The approach is tested on six mapping and six localization sessions recorded at 30 min intervals during sunset using a Google Tango phone in a real apartment.
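To illustrate what re-localization against a multi-session map involves, the sketch below scores a query image against keyframes recorded under different illumination conditions using SIFT matching with a ratio test, and keeps the best-supported session. It is a feature-matching toy, not RTAB-Map's loop-closure pipeline; all names and thresholds are placeholders.

```python
import cv2

# Match a query image against keyframes from several mapping sessions and
# keep the candidate with the most distinctive correspondences.
sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def best_relocalization(query_img, session_keyframes):
    """session_keyframes: list of (session_id, grayscale image) pairs."""
    _, q_desc = sift.detectAndCompute(query_img, None)
    best = (None, 0)
    for session_id, kf in session_keyframes:
        _, kf_desc = sift.detectAndCompute(kf, None)
        matches = matcher.knnMatch(q_desc, kf_desc, k=2)
        # Lowe's ratio test keeps only distinctive correspondences.
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best[1]:
            best = (session_id, len(good))
    return best  # session whose keyframe best explains the query view
```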
APA, Harvard, Vancouver, ISO, and other styles
46

Тишин, П. М., А. В. Гончаров, and К. О. Куширець. "Development and Research of a Multi-Agent System for Guarding a Facility [Розробка та дослідження мультиагентної системи для охорони об'єкта]." Automation of technological and business processes 9, no. 4 (February 5, 2018). http://dx.doi.org/10.15673/atbp.v10i4.825.

Full text
Abstract:
The article describes a multi-agent system based on an OWL ontology, built with the FIPA standards in mind, for a facility security system. Data are given on the operation of the intelligent agents and the communication between them, together with proposals for solving the assigned tasks. The proposed concept significantly reduces the electricity consumed by the object-tracking system operating along a given perimeter for the purpose of facility protection. The range of tasks solved is not limited to the implementation of the multi-agent system itself; algorithms are also realized for constructing the proposed route of an object's movement, selecting a camera for tracking the object, and diagnosing system errors. Owing to the way the intelligent agents interact, the system is easily extended if the surveillance area grows or the number of sensors in a given territory increases. Structurally, the security system consists of a control center with a control system, where an operator observes the perimeter around the protected facility; a set of microcomputers, each monitoring a certain sector of the protected area using main and auxiliary sensors; and two rotary cameras providing visual inspection of the facility's surroundings. An effective communication algorithm and a set of rules for the agents make it possible to capture the maximum number of objects using the shared resources of the two camera agents. To prevent failure of the security system's elements, a monitoring subsystem is provided, controlled in turn by an intelligent agent that interacts with the sensor agents and the decision-support agent. In the future, the results of the study can be used to improve facility protection.
APA, Harvard, Vancouver, ISO, and other styles
47

Bolarinwa, Joseph, Iveta Eimontaite, Tom Mitchell, Sanja Dogramadzi, and Praminda Caleb-Solly. "Assessing the Role of Gaze Tracking in Optimizing Humans-In-The-Loop Telerobotic Operation Using Multimodal Feedback." Frontiers in Robotics and AI 8 (October 4, 2021). http://dx.doi.org/10.3389/frobt.2021.578596.

Full text
Abstract:
A key challenge in achieving effective robot teleoperation is minimizing teleoperators' cognitive workload and fatigue. We set out to investigate the extent to which gaze tracking data can reveal how teleoperators interact with a system. In this study, we present an analysis of gaze tracking captured as participants completed a multi-stage task: grasping and emptying the contents of a jar into a container. The task was repeated with different combinations of visual, haptic, and verbal feedback. Our aim was to determine whether teleoperation workload can be inferred by combining the gaze duration, fixation count, task completion time, and complexity of robot motion (measured as the sum of robot joint steps) at different stages of the task. Visual information on the robot workspace was captured using four cameras positioned to view the workspace from different angles. These camera views (aerial, right, eye-level, and left) were displayed in the four quadrants (top-left, top-right, bottom-left, and bottom-right) of the participants' video feedback screen, respectively. We found that gaze duration and fixation count were highly dependent on the stage of the task and the feedback scenario used. The results revealed that combining feedback modalities reduced cognitive workload (inferred by investigating the correlations between gaze duration, fixation count, task completion time, success or failure of task completion, and robot gripper trajectories), particularly in the task stages that require more precision. There was a significant positive correlation between gaze duration and the complexity of robot joint movements. Participants' gaze outside the areas of interest (distractions) was not influenced by the feedback scenarios. A learning effect was observed in the use of the controller for all participants as they repeated the task under different feedback combinations. For designing a teleoperation system applicable in healthcare, we found that analyzing teleoperators' gaze helps in understanding how they interact with the system, making it possible to develop the system from the teleoperators' standpoint.
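For concreteness, the sketch below computes the two gaze metrics the study relies on, fixation count and total gaze duration, using a simple dispersion-based (I-DT style) detector over raw gaze samples. The dispersion and duration thresholds are common defaults from the eye-tracking literature, not this study's settings.

```python
import numpy as np

def fixation_metrics(gaze_xy, timestamps, dispersion_px=30, min_dur_s=0.1):
    """Dispersion-based (I-DT style) fixation detection on gaze samples.

    gaze_xy: (N, 2) screen coordinates; timestamps: (N,) seconds.
    Returns (fixation_count, total_gaze_duration_s).
    """
    fixations, start = [], 0
    for end in range(1, len(gaze_xy) + 1):
        window = gaze_xy[start:end]
        dispersion = (window.max(0) - window.min(0)).sum()
        if dispersion > dispersion_px:          # window no longer a fixation
            dur = timestamps[end - 2] - timestamps[start]
            if dur >= min_dur_s:
                fixations.append(dur)
            start = end - 1                     # begin a new window
    # Close any fixation still open at the end of the recording.
    dur = timestamps[-1] - timestamps[start]
    if dur >= min_dur_s:
        fixations.append(dur)
    return len(fixations), float(sum(fixations))
```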
APA, Harvard, Vancouver, ISO, and other styles
48

"A Hybrid Surf-Based Tracking Algorithm with Online Template Generation." International Journal of Innovative Technology and Exploring Engineering 8, no. 12 (October 10, 2019): 794–98. http://dx.doi.org/10.35940/ijitee.l3205.1081219.

Full text
Abstract:
Visual tracking is one of the most challenging fields in computer vision, and handling occlusion, whether full or partial, remains a major milestone. This paper deals with occlusion along with illumination change, pose variation, scaling, and unexpected camera motion. The algorithm is interest-point based, using SURF as the detector and descriptor. A SURF-based mean-shift algorithm is combined with a Lucas–Kanade tracker, which solves the problem of generating templates online. Over time, these two trackers rectify each other, avoiding tracking failure. In addition, an unscented Kalman filter is used to predict the location of the target when it comes under the influence of any of the above-mentioned challenges. This combination makes the algorithm robust and useful for long-term tracking, as demonstrated by the results of experiments conducted on various datasets.
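The sketch below illustrates the flavor of such a hybrid: Lucas–Kanade optical flow propagates SURF keypoints between frames, and the template is regenerated online when too few tracks survive. Note that SURF lives in OpenCV's contrib package (cv2.xfeatures2d) because of patent restrictions; the thresholds and re-seeding rule here are assumptions, not the paper's exact pipeline, which also includes mean-shift and an unscented Kalman filter.

```python
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           30, 0.01))

def track(prev_gray, gray, points):
    """Propagate template points into the new frame; re-seed if tracking thins.

    points: (N, 1, 2) float32 array of tracked template locations.
    """
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points,
                                                  None, **lk_params)
    good = new_pts[status.ravel() == 1]
    if len(good) < 10:  # too few survivors: regenerate the template online
        kps = surf.detect(gray, None)
        good = np.float32([kp.pt for kp in kps[:100]]).reshape(-1, 1, 2)
    return good
```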
APA, Harvard, Vancouver, ISO, and other styles
