Academic literature on the topic 'Visual Camera Failures'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual Camera Failures.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Visual Camera Failures"

1

Atif, Muhammad, Andrea Ceccarelli, Tommaso Zoppi, and Andrea Bondavalli. "Tolerate Failures of the Visual Camera With Robust Image Classifiers." IEEE Access 11 (2023): 5132–43. http://dx.doi.org/10.1109/access.2023.3237394.

2

Zhang, Xiaoguo, Qihan Liu, Bingqing Zheng, Huiqing Wang, and Qing Wang. "A visual simultaneous localization and mapping approach based on scene segmentation and incremental optimization." International Journal of Advanced Robotic Systems 17, no. 6 (November 1, 2020): 172988142097766. http://dx.doi.org/10.1177/1729881420977669.

Abstract:
Existing visual simultaneous localization and mapping (V-SLAM) algorithms are usually sensitive to scenes with sparse landmarks and large view transformations of camera motion, and they are liable to generate large pose errors that lead to tracking failures as the matching rate of feature points decreases. Aiming at the above problems, this article proposes an improved V-SLAM method based on scene segmentation and an incremental optimization strategy. In the front end, this article proposes a scene segmentation algorithm that considers camera motion direction and angle. By segmenting the trajectory and adding camera motion direction to the tracking thread, an effective prediction model of camera motion in scenes with sparse landmarks and large view transformations is realized. In the back end, this article proposes an incremental optimization method combining segmentation information and an optimization method for the tracking prediction model. By incrementally adding the state parameters and reusing the computed results, high-precision results for the camera trajectory and feature points are obtained with satisfactory computing speed. The performance of our algorithm is evaluated on two well-known datasets: TUM RGB-D and NYUDv2 RGB-D. The experimental results demonstrate that our method improves computational efficiency by 10.2% compared with state-of-the-art V-SLAMs on the desktop platform and by 22.4% on the embedded platform. Meanwhile, the robustness of our method is better than that of ORB-SLAM2 on the TUM RGB-D dataset.
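The scene segmentation idea sketched in this abstract, splitting the trajectory wherever the camera's motion direction changes sharply, can be illustrated with a short fragment. A minimal sketch in Python (the threshold and the segmentation criterion are illustrative assumptions, not the authors' code):

    import numpy as np

    def segment_trajectory(positions, angle_thresh_deg=30.0):
        """Split a camera trajectory into segments at large direction changes.

        positions: (N, 2) or (N, 3) array of camera positions, one row per pose.
        Returns a list of (start, end) index pairs, one per segment.
        """
        headings = np.diff(positions, axis=0)  # motion vectors between poses
        segments, start = [], 0
        for i in range(1, len(headings)):
            a, b = headings[i - 1], headings[i]
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
            angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
            if angle > angle_thresh_deg:  # large view transformation detected
                segments.append((start, i))
                start = i
        segments.append((start, len(positions) - 1))
        return segments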
3

Irmisch, P., D. Baumbach, and I. Ernst. "ROBUST VISUAL-INERTIAL ODOMETRY IN DYNAMIC ENVIRONMENTS USING SEMANTIC SEGMENTATION FOR FEATURE SELECTION." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 435–42. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-435-2020.

Abstract:
Camera-based navigation in dynamic environments with a high content of moving objects is challenging. Keypoint-based localization methods need to reliably reject features that do not belong to the static background. Here, traditional statistical methods for outlier rejection quickly reach their limits. A common approach is the combination with an inertial measurement unit for visual-inertial odometry. Also, deep learning based semantic segmentation was recently successfully applied in camera-based localization to identify features on common objects. In this work, we study the application of mask-based feature selection based on semantic segmentation for robust localization in highly dynamic environments. We focus on visual-inertial odometry, but similarly investigate a state-of-the-art pure vision-based method as a baseline. For a versatile evaluation, we use challenging self-recorded datasets based on different sensor systems. This includes a combined dataset of a real-world system and its synthetic clone with a large number of humans for in-depth analysis. We further deploy large-scale datasets from pedestrian navigation in a mall with escalator scenes and vehicle navigation during the day and at night. Our results show that visual-inertial odometry generally performs well in dynamic environments by itself, but also shows significant failures in challenging scenes, which are prevented by using the segmentation aid.
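The mask-based feature selection described here reduces, in essence, to discarding keypoints that land on pixels labelled as dynamic classes. A minimal sketch (the class IDs and array shapes are assumptions, not taken from the paper):

    import numpy as np

    DYNAMIC_CLASSES = {11, 12, 13}  # e.g., person, rider, vehicle (assumed label map)

    def filter_keypoints(keypoints, seg_mask):
        """Keep only keypoints that fall on static background pixels.

        keypoints: (N, 2) array of (x, y) pixel coordinates.
        seg_mask:  (H, W) array of per-pixel semantic class IDs.
        """
        xs = keypoints[:, 0].astype(int)
        ys = keypoints[:, 1].astype(int)
        dynamic = np.isin(seg_mask[ys, xs], list(DYNAMIC_CLASSES))
        return keypoints[~dynamic]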
4

Bartram, Angela. "When the Image Takes over the Real: Holography and Its Potential within Acts of Visual Documentation." Arts 9, no. 1 (February 15, 2020): 24. http://dx.doi.org/10.3390/arts9010024.

Abstract:
In Camera Lucida, Roland Barthes discusses the capacity of the photographic image to represent “flat death”. Documentation of an event, happening, or time is traditionally reliant on the photographic to determine its ephemeral existence and to secure its legacy within history. However, the traditional photographic document is often unsuitable to capture the real essence and experience of the artwork in situ. The hologram, with its potential to offer a three-dimensional viewpoint, suggests a desirable solution. However, there are issues concerning how this type of photographic document successfully functions within an art context. Attitudes to methods necessary for artistic production, and holography's place within the process, are responsible for this problem. The seductive qualities of holography may be attributable to any failure that ensues but, if used precisely, the process can be effective in creating a document for ephemeral art. The failures and successes of the hologram as a reliable document of experience are discussed in this article, together with a suggestion of how it might undergo a transformation and reactivation to become an artwork itself.
5

Lodinger, Natalie R., and Patricia R. DeLucia. "Angle of Camera View Influences Resumption Lag in a Visual-Motor Task." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1291. http://dx.doi.org/10.1177/1541931213601803.

Abstract:
Prior research on interruptions examined the effects of different characteristics of the primary and interrupting tasks on performance of the primary task. One measure is the resumption lag – the time between the end of the interrupting task and the next action in the resumed primary task (Altmann & Trafton, 2004). Prior research showed that an increase in the workload of a task results in an increase in resumption lag (Iqbal & Bailey, 2005). A common feature of prior studies of resumption lag is the use of computer-based tasks. However, interruptions occur in other types of tasks, such as laparoscopic surgery, in which errors can result in serious consequences for the patient (Gillespie, Chaboyer, & Fairweather, 2012). Common interruptions during laparoscopic surgery include equipment failures and communication with team members (e.g., Gillespie et al., 2012). In laparoscopic surgery, a small incision is made in the patient, and a laparoscope is placed inside the body cavity. The surgeon typically views the surgical site on a two-dimensional screen rather than in three dimensions as in open surgery (Chan et al., 1997). The two-dimensional camera image imposes perceptual and cognitive demands on the surgeon, such as impaired depth perception (Chan et al., 1997; DeLucia & Griswold, 2011) and a limited field-of-view of the site (DeLucia & Griswold, 2011). The present study examined whether top-view and side-view camera angles, which putatively impose different cognitive demands (DeLucia & Griswold, 2011), would differentially affect the resumption lag in a visual-motor task. Participants completed a peg transfer task in which they were interrupted with a mental rotation task of different durations and rotation angles. The duration of the mental rotation task was either short (6 s) or long (12 s), representing relatively low and high cognitive demands, respectively. Smaller rotation angles (0, 60, and 300 degrees from vertical) and greater rotation angles (120, 180, and 240 degrees from vertical) presumably imposed smaller and larger cognitive demands, respectively. Resumption lag was measured as the time between the end of the interruption and the first time a peg was touched in the resumed peg transfer task. Participants needed significantly more time to resume the peg transfer task with the side view compared to the top view, and with the longer mental rotation task duration compared to the shorter duration. The main effect of rotation angle was not significant. The side view also resulted in higher ratings of mental demand, effort, and frustration on the Raw Task Load Index (RTLX), the ratings-only portion of the NASA-TLX (Hart, 2006). Thus, a visual-motor task that is higher in cognitive demand can result in more time to resume a primary task following an interruption. Practical implications are that camera viewing angles associated with lower cognitive demands should be preferred in the operating room when feasible, and that interruption durations should be minimized. However, results also indicated that the side view resulted in longer movement times than the top view, even without an interruption, suggesting that factors other than cognitive demands may account for effects of camera angle on resumption lag; this should be examined in future research.
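As a concrete illustration of the measure, resumption lag is simply the interval between the end of the interruption and the first primary-task action; a tiny sketch with assumed timestamps:

    def resumption_lag(interruption_end_t, primary_task_events):
        """Time from the end of the interrupting task to the next primary-task
        action (e.g., the first peg touch), per Altmann & Trafton (2004)."""
        next_action = next(t for t in primary_task_events if t >= interruption_end_t)
        return next_action - interruption_end_t

    # Example: interruption ends at 42.0 s, first peg touched at 44.3 s -> 2.3 s
    print(resumption_lag(42.0, [40.1, 44.3, 46.0]))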
6

Milella, Annalisa, Rosalia Maglietta, Massimo Caccia, and Gabriele Bruzzone. "Robotic inspection of ship hull surfaces using a magnetic crawler and a monocular camera." Sensor Review 37, no. 4 (September 18, 2017): 425–35. http://dx.doi.org/10.1108/sr-02-2017-0021.

Abstract:
Purpose: Periodic inspection of large tonnage vessels is critical to assess integrity and prevent structural failures that could have catastrophic consequences for people and the environment. Currently, inspection operations are undertaken by human surveyors, often in extreme conditions. This paper aims to present an innovative system for the automatic visual inspection of ship hull surfaces, using a magnetic autonomous robotic crawler (MARC) equipped with a low-cost monocular camera.
Design/methodology/approach: MARC is provided with magnetic tracks that make it able to climb along the vertical walls of a vessel while acquiring close-up images of the traversed surfaces. A homography-based structure-from-motion algorithm is developed to build a mosaic image and also produce a metric representation of the inspected areas. To overcome low-resolution and perspective distortion problems in the far field due to the tilted and low camera position, a “near to far” strategy is implemented, which incrementally generates an overhead view of the surface as it is traversed by the robot.
Findings: This paper demonstrates the use of an innovative robotic inspection system for automatic visual inspection of vessels. It presents and validates through experimental tests a mosaicking strategy to build a global view of the structure under inspection. The use of the mosaic image as input to an automatic corrosion detector is also demonstrated.
Practical implications: This paper may help to automate the inspection process, making it feasible to collect images from places otherwise difficult or impossible for humans to reach, and to automatically detect defects such as corroded areas.
Originality/value: This paper provides a useful step towards the development of a new technology for automatic visual inspection of large tonnage ships.
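A homography-based mosaicking step of the kind described can be sketched with standard OpenCV calls; this is a generic illustration under assumed interfaces, not MARC's implementation:

    import cv2
    import numpy as np

    def add_to_mosaic(mosaic, frame):
        """Warp `frame` into the plane of `mosaic` via a feature-based homography."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(mosaic, None)
        k2, d2 = orb.detectAndCompute(frame, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d2, d1)  # frame -> mosaic correspondences
        src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to outliers
        warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
        return np.where(warped > 0, warped, mosaic)  # naive overwrite blending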
7

Congram, Benjamin, and Timothy Barfoot. "Field Testing and Evaluation of Single-Receiver GPS Odometry for Use in Robotic Navigation." Field Robotics 2, no. 1 (March 10, 2022): 1849–73. http://dx.doi.org/10.55417/fr.2022057.

Abstract:
Mobile robots rely on odometry to navigate in areas where localization fails. Visual odometry (VO), for instance, is a common solution for obtaining robust and consistent relative motion estimates of the vehicle frame. In contrast, Global Positioning System (GPS) measurements are typically used for absolute positioning and localization. However, when the constraint on absolute accuracy is relaxed, accurate relative position estimates can be found with one single-frequency GPS receiver by using time-differenced carrier phase (TDCP) measurements. In this paper, we implement and field test a single-receiver GPS odometry algorithm based on the existing theory of TDCP. We tailor our method for use on an unmanned ground vehicle (UGV) by incorporating proven robotics tools such as a vehicle motion model and robust cost functions. In the first half of our experiments, we evaluate our odometry on its own via a comparison with VO on the same test trajectories. After 4.3 km of testing, the results show our GPS odometry method has a 79% lower drift rate than a proven stereo VO method while maintaining a smooth error signal despite varying satellite availability. GPS odometry can also make robots more robust to catastrophic failures of their primary sensor when added to existing navigation pipelines. To prove this, we integrate our GPS odometry solution into Visual Teach and Repeat (VT&R), an established visual, path-following navigation framework. We perform further testing to show it can maintain accurate path following and prevent failures in challenging conditions including full camera dropouts. Code is available at https://github.com/utiasASRL/cpo.
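The core of TDCP can be stated compactly. Below is the standard textbook carrier-phase model and its time difference, which cancels the integer ambiguity as long as no cycle slip occurs (standard notation, not necessarily the paper's exact formulation):

    % Carrier phase (in meters) from receiver r to satellite s at epoch t:
    \phi_r^s(t) = \rho_r^s(t) + c\bigl(\delta t_r(t) - \delta t^s(t)\bigr) + \lambda N_r^s + \epsilon(t)
    % Differencing between epochs t_1 and t_2 cancels the constant ambiguity N_r^s:
    \Delta\phi_r^s = \phi_r^s(t_2) - \phi_r^s(t_1)
                   = \Delta\rho_r^s + c\,\Delta\delta t_r - c\,\Delta\delta t^s + \Delta\epsilon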
8

Montazer Zohour, Hamed, Bruno Belzile, Rafael Gomes Braga, and David St-Onge. "Minimize Tracking Occlusion in Collaborative Pick-and-Place Tasks: An Analytical Approach for Non-Wrist-Partitioned Manipulators." Sensors 22, no. 17 (August 26, 2022): 6430. http://dx.doi.org/10.3390/s22176430.

Abstract:
Several industrial pick-and-place applications, such as collaborative assembly lines, rely on visual tracking of the parts. Recurrent occlusions caused by the manipulator motion decrease line productivity and can provoke failures. This work provides a complete solution for maintaining an occlusion-free line of sight between a variable-pose camera and the object to be picked by a 6R manipulator that is not wrist-partitioned. We consider potential occlusions by the manipulator as well as by the operator working at the assembly station. An actuated camera detects the object goal (the part to pick) and keeps track of the operator. The approach consists of using the complete set of solutions obtained from the derivation of the univariate polynomial equation solution to the inverse kinematics (IK). Compared to numerical iterative solving methods, our strategy grants us a set of joint positions (a posture) for each root of the equation, from which we extract the best one (minimizing the risks of occlusion). Our analytical method, integrating collision and occlusion avoidance optimizations, can contribute to greatly enhancing the efficiency and safety of collaborative assembly workstations. We validate our approach with simulations as well as with physical deployments on commercial hardware.
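The posture selection step, scoring every IK root and keeping the one least likely to occlude the camera's line of sight, can be sketched as follows; the clearance cost here is a simple stand-in, not the paper's formulation:

    import numpy as np

    def point_to_segment_dist(p, a, b):
        """Distance from 3D point p to the segment from a to b."""
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def best_posture(ik_solutions, forward_kinematics, camera, target):
        """Among all IK roots, pick the joint posture whose joints stay farthest
        from the camera-to-target line of sight (a stand-in occlusion cost)."""
        def clearance(q):
            joints = forward_kinematics(q)  # assumed: returns 3D joint positions
            return min(point_to_segment_dist(j, camera, target) for j in joints)
        return max(ik_solutions, key=clearance)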
9

Bazeille, Stéphane, Jesus Ortiz, Francesco Rovida, Marco Camurri, Anis Meguenani, Darwin G. Caldwell, and Claudio Semini. "Active camera stabilization to enhance the vision of agile legged robots." Robotica 35, no. 4 (November 17, 2015): 942–60. http://dx.doi.org/10.1017/s0263574715000909.

Abstract:
Legged robots have the potential to navigate in more challenging terrains than wheeled robots. Unfortunately, their control is more demanding, because they have to deal with the common tasks of mapping and path planning as well as more specific issues of legged locomotion, like balancing and foothold planning. In this paper, we present the integration and the development of a stabilized vision system on the fully torque-controlled hydraulically actuated quadruped robot (HyQ). The active head added onto the robot is composed of a fast pan and tilt unit (PTU) and a high-resolution wide angle stereo camera. The PTU enables camera gaze shifting to a specific area in the environment (both to extend and refine the map) or to track an object while navigating. Moreover, as the quadruped locomotion induces strong regular vibrations, impacts or slippages on rough terrain, we took advantage of the PTU to mechanically compensate for the robot's motions. In this paper, we demonstrate the influence of legged locomotion on the quality of the visual data stream by providing a detailed study of HyQ's motions, which are compared against a rough terrain wheeled robot of the same size. Our proposed Inertial Measurement Unit (IMU)-based controller allows us to decouple the camera from the robot motions. We show through experiments that, by stabilizing the image feedback, we can improve the onboard vision-based processes of tracking and mapping. In particular, during the outdoor tests on the quadruped robot, the use of our camera stabilization system improved the accuracy on the 3D maps by 25%, with a decrease of 50% of mapping failures.
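The IMU-based decoupling amounts to counter-rotating the pan and tilt axes by the body's measured motion so the camera holds its gaze. A minimal proportional sketch (gains, axis conventions, and the PTU interface are assumptions):

    def stabilize_ptu(imu_yaw, imu_pitch, gaze_pan, gaze_tilt, k=1.0):
        """Counter-rotate the PTU so the camera keeps its world-frame gaze.

        imu_yaw, imu_pitch: current body angles from the IMU (rad).
        gaze_pan, gaze_tilt: desired camera gaze direction in the world frame (rad).
        Returns pan/tilt commands in the body frame.
        """
        pan_cmd = gaze_pan - k * imu_yaw      # axis mapping depends on mounting
        tilt_cmd = gaze_tilt - k * imu_pitch
        return pan_cmd, tilt_cmd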
10

Zhang, Jianming, Yang Liu, Hehua Liu, and Jin Wang. "Learning Local–Global Multiple Correlation Filters for Robust Visual Tracking with Kalman Filter Redetection." Sensors 21, no. 4 (February 5, 2021): 1129. http://dx.doi.org/10.3390/s21041129.

Abstract:
Visual object tracking is a significant technology for camera-based sensor network applications. Multilayer convolutional features comprehensively used in correlation filter (CF)-based tracking algorithms have achieved excellent performance. However, there are tracking failures in some challenging situations because ordinary features are not able to well represent the object appearance variations and the correlation filters are updated irrationally. In this paper, we propose a local–global multiple correlation filters (LGCF) tracking algorithm for edge computing systems capturing moving targets, such as vehicles and pedestrians. First, we construct a global correlation filter model with deep convolutional features, and choose horizontal or vertical division according to the aspect ratio to build two local filters with hand-crafted features. Then, we propose a local–global collaborative strategy to exchange information between local and global correlation filters. This strategy can avoid the wrong learning of the object appearance model. Finally, we propose a time-space peak-to-sidelobe ratio (TSPSR) to evaluate the stability of the current CF. When the estimated results of the current CF are not reliable, the Kalman filter redetection (KFR) model is enabled to recapture the object. The experimental results show that our presented algorithm achieves better performance on OTB-2013 and OTB-2015 compared with 12 other recent tracking algorithms. Moreover, our algorithm handles various challenges in object tracking well.
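The stability measure builds on the classic peak-to-sidelobe ratio (PSR) of a correlation response map; the TSPSR extends it across time and space. For reference, a standard PSR computation (the exclusion window size is illustrative):

    import numpy as np

    def psr(response, exclude=5):
        """Peak-to-sidelobe ratio of a correlation filter response map.

        A sharp, reliable peak yields a high PSR; a low PSR suggests the
        tracker's estimate is unreliable and redetection should be triggered.
        """
        y, x = np.unravel_index(np.argmax(response), response.shape)
        peak = response[y, x]
        mask = np.ones_like(response, dtype=bool)
        mask[max(0, y - exclude):y + exclude + 1,
             max(0, x - exclude):x + exclude + 1] = False  # exclude peak window
        sidelobe = response[mask]
        return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-9)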

Dissertations / Theses on the topic "Visual Camera Failures"

1

Atif, Muhammad. "Robust Recognition of Objects in the Safety Critical Systems: A Case Study of Traffic Sign Recognition." Doctoral thesis, 2023. https://hdl.handle.net/2158/1300518.

Abstract:
Computer vision makes it possible to automatically detect and recognize objects from images. Nowadays, timely detection of relevant events and efficient recognition of objects in an environment is a critical activity for many Cyber-Physical Systems (CPSs). Particularly, the detection and recognition of traffic signs (TSDR) from images has been and is currently being investigated, as it heavily impacts the behaviour of (semi-)autonomous vehicles. TSDR provides drivers with critical traffic sign information, constituting an enabling condition for autonomous driving and attaining a safe circulation of road vehicles. Misclassifying even a single sign may constitute a severe hazard to the environment, infrastructures, and human lives. In the last decades, researchers, practitioners, and companies have worked to devise more efficient and accurate Traffic Sign Recognition (TSR) subsystems or components to be integrated into CPSs. TSR mostly relies on the same main blocks, namely: i) dataset creation/identification and pre-processing (e.g., histogram equalization to improve contrast), ii) feature extraction, i.e., keypoint detection and feature description, and iii) model learning through non-deep or Deep Neural Network (DNN) classifiers. Unfortunately, despite the many classifiers and feature extraction strategies developed throughout the years for images sampled by sensors installed on vehicles, those efforts did not culminate in a clear benchmark, nor did they provide a comparison of the most common techniques. The main target of this thesis is to improve the robustness and efficiency of TSR systems. Improving the efficiency of the TSR system means achieving better classification performance (classification accuracy) on publicly available datasets, while the robustness of an image classifier is defined as sustaining the performance of the model under various image corruptions or alterations, which in our case are due to visual camera malfunctioning. Although TSDR embraces both detection and recognition of traffic signs, here we focus on the latter aspect, recognition. In the literature, many researchers have proposed different techniques for the detection of traffic signs in a full-scene image. Therefore, this thesis starts by providing a comprehensive quantitative comparison of non-deep Machine Learning (ML) algorithms with different feature sets and DNNs for the recognition of traffic signs from three publicly available datasets. Afterward, we propose a TSR system that analyses a sliding window of images instead of considering individual images sampled by sensors on a vehicle. Such a TSR processes the last image and recent images sampled by sensors through ML algorithms that take advantage of these multiple sources of information. Particularly, we focused on (i) Long Short-Term Memory (LSTM) networks and (ii) Stacking Meta-Learners, which allow for efficiently combining base-learning classification episodes into a unified and improved meta-level classification. Experimental results using publicly available datasets show that Stacking Meta-Learners dramatically reduce misclassifications of traffic signs and achieve perfect classification on all three considered datasets. This shows the potential of our novel approach based on sliding windows to be used as an efficient solution for TSR. Furthermore, we consider the failures of visual cameras installed on vehicles that may compromise the correct acquisition of images, delivering a corrupted image to the TSR system.
After going through the most common camera failures, we artificially injected 13 different types of visual camera failures into each image contained in the three traffic sign datasets. Then, we train three DNNs to classify a single image, and compare them to our TSR system that uses a sequence (i.e., a sliding window) of images. Experimental results show that sliding windows significantly improve the robustness of the TSR system against altered images. Further, we dig into the results using LIME, a toolbox for explainable Artificial Intelligence (AI). Explainable AI allows an understanding of how a classifier uses the input image to derive its output. We confirm our observations through explainable AI, which lets us understand why different classifiers have different TSR performance in the case of visual camera failures. Visual camera failures have a negative impact on TSR systems as they may lead to the creation of altered images; therefore, it is of utmost importance to build image classifiers that are robust to those failures. As such, this part of the thesis explores techniques to make TSR systems robust to visual camera failures such as a broken lens, blurring, brightness changes, dead pixels, or missing noise reduction by the image signal processor. Particularly, we discuss to what extent training image classifiers with images altered due to camera failures can improve the robustness of the whole TSR system. Results show that augmenting the training set with altered images significantly improves the overall classification performance of DNN image classifiers and makes the classifier robust against the majority of visual camera failures. In addition, we found that the no-noise-reduction and brightness camera failures have a major impact on image classification. We discuss how image classifiers trained using altered images have better accuracy compared to classifiers trained only with original images, even in the presence of such failures. Ultimately, we further improve the robustness of the TSR system by crafting a camera failure detector component that works in conjunction with image classifiers trained using altered images. We trained different ML-based camera failure detectors (binary classifiers) that work in sequence with the DNN to check whether images are altered beyond a certain level of failure severity. Based on the output of the failure detector, we decide whether the image will be passed to the DNN for classification, or the user will be alerted that the image is severely degraded by a visual camera failure and the DNN may not be able to classify it correctly. Experimental results reveal that the failure detector component in conjunction with image classifiers trained using altered images enhances the performance of the TSR system compared to the image classifiers alone, but it reduces the availability of the system, which is 100% when image classifiers trained using altered images are used without the camera failure detector component.
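The two ideas at the core of the thesis, committing a decision over a sliding window of frames and gating the classifier with a camera failure detector, can be sketched generically. Majority voting stands in here for the LSTM/stacking meta-learners, and all interfaces are assumptions:

    from collections import Counter, deque

    class SlidingWindowTSR:
        """Commit a traffic sign label from the last k frames, skipping frames
        that a camera failure detector flags as too degraded to classify."""

        def __init__(self, classifier, failure_detector, k=5):
            self.classifier = classifier              # image -> label
            self.failure_detector = failure_detector  # image -> True if degraded
            self.window = deque(maxlen=k)

        def update(self, image):
            if self.failure_detector(image):
                return None  # alert: image too corrupted for reliable recognition
            self.window.append(self.classifier(image))
            label, _votes = Counter(self.window).most_common(1)[0]
            return label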

Book chapters on the topic "Visual Camera Failures"

1

Baron, Richard P. "Visual Examination and Photography in Failure Analysis." In Characterization and Failure Analysis of Plastics, 1–11. ASM International, 2022. http://dx.doi.org/10.31399/asm.hb.v11b.a0006851.

Abstract:
Failure analysis is an investigative process in which the visual observations of features present on a failed component and the surrounding environment are essential in determining the root cause of a failure. This article reviews the basic photographic principles and techniques that are applied to failure analysis, both in the field and in the laboratory. It discusses the processes involved in visual examination, field photographic documentation, and laboratory photographic documentation of failed components. The article describes the operating principles of each part of a professional digital camera. It covers basic photographic principles and manipulation of settings that assist in producing high-quality images. The need for accurate photographic documentation in failure analysis is also presented.
2

Dermata, Katerina. "The “Shaken Photos” Project as a Stimulus for Developing Creative Thinking with Preschoolers." In Visual Literacy in The Virtual Realm: The Book of Selected Readings 2021, 13–19. International Visual Literacy Association, 2021. http://dx.doi.org/10.52917/ivlatbsr.2021.012.

Abstract:
Using a digital camera to achieve a successful result requires the user, first and foremost, to be familiar with the proper use of the medium and to have basic knowledge of the principles of the art of photography. What is the result in those cases where the photographer either does not know the basic principles of photography or cannot apply them effectively in practice? Is the product, photos with no clear and recognizable objects, considered a “failure”? This paper focuses on designing and implementing an applied educational intervention themed on “shaken” photos taken by preschoolers, and on using this material to create digital narratives. This case study examines “shaken” photos as an opportunity to develop imagination and creativity through photography.
3

Cronin, Elizabeth. "Brilliant! Enthusiasm for the Aesthetic Qualities of Lippmann's Interferential Photography." In Gabriel Lippmann's Colour Photography. Amsterdam: Amsterdam University Press, 2022. http://dx.doi.org/10.5117/9789463728553_ch07.

Abstract:
In 1908 Edward Steichen wrote in Camera Work: “… we must not forget Professor Lippmann, who gave us what is undoubtedly the most wonderful process of colour photography.” Comparing the striking colour to that of a good Renoir, Steichen placed aesthetic value on the photographs and yet, the Lippmann method is remembered primarily for its scientific value and for its failure as a stunning but impractical process. This chapter resituates the Lippmann process as a visually compelling object by examining what its practitioners thought about it, where they exhibited the plates and how they were received. Lippmann photographs represent more than difficult science. They were the product of a diverse web of scientists, artists, and photographers all seeking to advance colour photography.

Conference papers on the topic "Visual Camera Failures"

1

Atif, Muhammad, Andrea Ceccarelli, and Andrea Bondavalli. "Reliable Traffic Sign Recognition System." In Anais Estendidos do Latin-American Symposium on Dependable Computing. Sociedade Brasileira de Computação, 2021. http://dx.doi.org/10.5753/ladc.2021.18528.

Abstract:
Traffic sign detection and recognition is an important part of Advanced Driver Assistance Systems (ADAS), which aim to provide assistance to the driver, autonomous driving, or even monitoring of traffic signs for maintenance. Particularly, misclassification of traffic signs may have a severe negative impact on the safety of drivers, infrastructure, and humans in the surrounding environment. In addition to shape and colors, there are many challenges to recognizing traffic signs correctly, such as occlusion, motion blur, visual camera failures, or physical alterations to the integrity of traffic signs. In the literature, different machine learning based classifiers and deep classifiers are utilized for Traffic Sign Recognition (TSR), with a few studies considering sequences of frames to commit the final decision about traffic signs. This paper proposes a TSR that is robust against different attacks/failures such as camera-related failures, occlusion, broken signs, and patches inserted on traffic signs. We are planning to utilize generative adversarial networks to corrupt images of traffic signs and investigate the robustness of TSR. Furthermore, we are currently working on designing a failure detector, which will tell the TSR in advance, before recognition, whether images are corrupted with some type of failure. Our conjecture is that a failure detector combined with classifiers will improve the robustness of the TSR system.
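Failure types of the kind listed (blur, brightness changes, dead pixels) are straightforward to inject when testing a TSR's robustness. A small NumPy/OpenCV sketch with illustrative parameters:

    import cv2
    import numpy as np

    def inject_blur(img, ksize=7):
        return cv2.GaussianBlur(img, (ksize, ksize), 0)

    def inject_brightness(img, delta=60):
        return cv2.convertScaleAbs(img, alpha=1.0, beta=delta)

    def inject_dead_pixels(img, fraction=0.01, seed=0):
        rng = np.random.default_rng(seed)
        out = img.copy()
        n = int(fraction * img.shape[0] * img.shape[1])
        ys = rng.integers(0, img.shape[0], n)
        xs = rng.integers(0, img.shape[1], n)
        out[ys, xs] = 0  # stuck-at-zero (dead) pixels
        return out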
2

Paradis, Olivia P., Nathan T. Jessurun, Mark Tehranipoor, and Navid Asadizanjani. "Color Normalization for Robust Automatic Bill of Materials Generation and Visual Inspection of PCBs." In ISTFA 2020. ASM International, 2020. http://dx.doi.org/10.31399/asm.cp.istfa2020p0172.

Abstract:
A Bill of Materials (BoM) is the list of all components present on a Printed Circuit Board (PCB). BoMs are useful for multiple forms of failure analysis and hardware assurance. In this paper, we build upon previous work and present an updated framework to automatically extract a BoM from optical images of PCBs in order to keep up to date with technological advancements. This is accomplished by revising the framework to emphasize the role of machine learning and by incorporating domain knowledge of PCB design and hardware Trojans. For accurate machine learning methods, it is critical that the input PCB images are normalized. Hence, we explore the effect of imaging conditions (e.g. camera type, lighting intensity, and lighting color) on component classification, before and after color correction. This is accomplished by collecting PCB images under a variety of imaging conditions and conducting a linear discriminant analysis before and after color checker profile correction, a method commonly used in photography. This paper shows color correction can effectively reduce the intraclass variance of different PCB components, which results in a higher component classification accuracy. This is extremely desirable for machine learning methods, as increased prior knowledge can decrease the number of ground truth images necessary for training. Finally, we detail the future work for data normalization for more accurate automatic BoM extraction.
Index Terms – automatic visual inspection; PCB reverse engineering; PCB competitor analysis; hardware assurance; bill of materials
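Color checker profile correction of the kind described can be approximated by fitting a linear map from measured patch colors to their reference values in a least-squares sense; a simplified sketch (commercial profiling tools do considerably more):

    import numpy as np

    def fit_color_correction(measured, reference):
        """Fit a 3x4 affine color correction: reference ~ M @ [r, g, b, 1]^T.

        measured, reference: (N, 3) arrays of patch colors, N >= 4 patches.
        """
        X = np.hstack([measured, np.ones((len(measured), 1))])  # (N, 4)
        M, *_ = np.linalg.lstsq(X, reference, rcond=None)       # (4, 3)
        return M.T                                              # (3, 4)

    def apply_correction(img, M):
        h, w, _ = img.shape
        flat = img.reshape(-1, 3).astype(np.float64)
        flat = np.hstack([flat, np.ones((len(flat), 1))]) @ M.T
        return np.clip(flat, 0, 255).reshape(h, w, 3).astype(np.uint8)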
3

Miyahara, Shinya, Hiroyasu Ishikawa, and Yoshio Yoshizawa. "Reaction Behavior of Carbon Dioxide With Liquid Sodium Pool." In 17th International Conference on Nuclear Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/icone17-75900.

Abstract:
Reaction behavior of carbon dioxide (CO2) with a liquid sodium pool was experimentally investigated to understand the consequences of boundary tube failure in a sodium-CO2 heat exchanger. In this study, two kinds of experiments were carried out to investigate the reaction behavior. In one experiment, about 1–5 g of liquid sodium was poured into flowing CO2 to obtain information mainly about the thermo-chemical conditions that initiate the reaction and the chemical constituents of the reaction products. During the experiment, visual observation was made using a video camera, and the temperature change of the sodium pool and near the surface was measured by thermocouples. The experimental parameters were the sodium pool diameter, the initial temperature of sodium and CO2, the CO2 flow direction against the pool surface, and the initial moisture concentration in CO2. The solid products of the sodium-CO2 reaction were sampled and analyzed by X-ray diffraction (XRD), Energy Dispersion X-ray analysis (EDX), Total Organic Carbon analysis (TOC), and chemical analysis. The reaction gas products were also sampled and analyzed by gas chromatography. In the other experiment, CO2 was injected into about 200 g of liquid sodium to simulate the boundary failure in the sodium-CO2 heat exchanger. The CO2 was fed through a helical coil-type tube dipped into the pool to adjust its temperature to the sodium pool temperature, and injected upward into the pool from the pool bottom using a nozzle attached at the end of the tube. The experimental parameters were the initial temperature of sodium, the diameter of the nozzle, and the flow rate and injection time of CO2. The temperature change of the sodium pool and the cover gas was measured by thermocouples during the experiment, and the reaction products were sampled and analyzed in the same manner as in the former experiment. From these experiments, it became clear that the exothermic reaction occurred above a threshold temperature, and useful and indispensable information, such as the resulting temperature and pressure rise and the behavior of solid reaction products in the pool, was obtained to evaluate the consequences of a boundary tube failure incident in a sodium-CO2 heat exchanger.
4

Prakash, Sakthi Kumar Arul, Tobias Mahan, Glen Williams, Christopher McComb, Jessica Menold, and Conrad S. Tucker. "On the Analysis of a Compromised Additive Manufacturing System Using Spatio-Temporal Decomposition." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-97732.

Abstract:
3D printing systems have expanded access to low-cost, rapid methods for attaining physical prototypes or products. However, a cyber attack, system error, or operator error on a 3D printing system may result in catastrophic situations, ranging from complete product failure to small defects which weaken the structural integrity of the product, making it unreliable for its intended use. Such defects can be introduced early on via solid models or through G-codes for printer movements at a later stage. Previous works have studied the use of image classifiers to predict defects in real time as a print is in progress, and by studying the printed entity once the print is complete. However, a major restriction on the functionality of these methods is the availability of a dataset capturing diverse attacks on printed entities or the printing process. This paper introduces a visual inspection technique that analyzes the amplitude and phase variations of the print head platform arising through induced system manipulations. The method uses an image sequence of a 3D printing process captured via an off-the-shelf camera to perform an offline multi-scale, multi-orientation decomposition to amplify imperceptible system movements attributable to a change in system parameters. The authors hypothesize that a change in the amplitude envelope and instantaneous phase response, as a result of a change in the end-effector translational instructions, is correlated with an AM system compromise. A case study is presented that tests the hypothesis and provides statistical validity in support of the method. The method has the potential to enhance the robustness of cyber-physical systems such as 3D printers that rely on secure, high-quality hardware and software to perform optimally.
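The paper's multi-scale, multi-orientation phase decomposition is involved, but the underlying idea, bandpass-filtering each pixel's time series and amplifying the result so small platform motions become visible, can be shown in a minimal Eulerian-style sketch. Note this amplifies intensity variations, a simplification of the phase-based approach, and the passband is an assumption:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def amplify_motion(frames, fps, lo=0.5, hi=3.0, alpha=20.0):
        """Amplify small temporal variations in a video, a (T, H, W) float array.

        lo, hi: passband (Hz) assumed to contain the platform's vibration.
        alpha:  gain applied to the bandpassed signal before adding it back.
        """
        b, a = butter(2, [lo / (fps / 2), hi / (fps / 2)], btype="band")
        bandpassed = filtfilt(b, a, frames, axis=0)  # per-pixel temporal filter
        return frames + alpha * bandpassed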
5

Camerini, Claudio, Miguel Freitas, Ricardo Artigas Langer, Jean Pierre von der Weid, and Robson Marnet. "Autonomous Underwater Riser Inspection Tool." In 2010 8th International Pipeline Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/ipc2010-31485.

Abstract:
The inspection of the vertical section of an offshore pipeline, known as the riser, plays a critical part in any integrity management program. This section connects the pipe that runs on the seabed to the production facility, be it a floating platform or an FPSO. Hanging from the platform over deep waters, risers are subject to very extreme operating conditions such as high loads and underwater currents. Corrosion, fatigue, abrasion, and damage caused by stray object collisions are factors that must be taken into account so that oil and gas production is not compromised. A flexible pipeline, a well-engineered solution used in most riser installations, provides high reliability while requiring little maintenance but, in spite of advances in design and installation, the inspection of riser pipelines is an immature field where technology has not yet met the user's demands. In the search for better riser inspection techniques, a project was started to design a new inspection tool. The basic concept consists of an autonomous vehicle, the Autonomous Underwater Riser Inspection tool (AURI), that uses the riser itself for guidance. The AURI tool can control its own velocity and is suited to carry different types of inspection devices. The first AURI prototype is designed to perform visual inspection with a built-in camera system, covering 100% of the external riser surface. The AURI can reach water depths of up to a thousand meters. It was built with several embedded security mechanisms to ensure tool recovery in case of failure and also to minimize the chances of damage to the pipeline or other equipment. It uses two electrical thrusters to push it along the riser. The mission is set to a maximum depth to be inspected and is considered complete when one of the following conditions is met: (1) the maximum pressure on the depth sensor is reached, (2) the length of the run is achieved, (3) the maximum mission duration is exceeded, or (4) the maximum allowed tilt is detected by the inclinometer. Thanks to its positive buoyancy, the AURI will always return to the surface even if the electronics fail or the batteries are exhausted. This paper presents the first AURI prototype as well as preliminary test results.
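The four termination conditions listed in the abstract map directly onto a simple supervisory check; a schematic sketch in which the limits dictionary and sensor readings are assumed interfaces:

    import time

    def mission_complete(depth_m, run_length_m, start_time, tilt_deg, limits):
        """AURI mission ends when any of the four configured limits is reached."""
        return (depth_m >= limits["max_depth_m"]                        # (1) depth/pressure
                or run_length_m >= limits["max_run_m"]                  # (2) run length
                or time.time() - start_time > limits["max_duration_s"]  # (3) duration
                or abs(tilt_deg) > limits["max_tilt_deg"])              # (4) tilt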
6

Chatar, Crispin, Suhas Suresha, Laetitia Shao, Soumya Gupta, and Indranil Roychoudhury. "Determining Rig State from Computer Vision Analytics." In SPE/IADC International Drilling Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/204086-ms.

Abstract:
Abstract For years, many companies involved with drilling have searched for the ideal method to calculate the state of a drilling rig. While companies cannot agree on a standard definition of "rig state," they can agree that, as we move forward in drilling optimization and with further use of remote operations and automation, rig state calculation is mandatory in one form or the other. Within the service company, many methods exist for calculating rig state, but one new technology area holds promise to deliver a more efficient and cost-effective option with higher accuracy. This technology involves vision analytics. Currently, detection algorithms rely heavily on data collected by sensors installed on the rig. However, relying exclusively on sensor data is problematic because sensors are prone to failure and are expensive to maintain and install. By proposing a machine learning model that relies exclusively on videos collected on the rig floor to infer rig states, it is possible to move away from the existing methods as the industry moves to a future of high-tech rigs. Videos, in contrast to sensor data, are relatively easy to collect from small inexpensive cameras installed at strategic locations. Consequently, this paper presents a machine learning pipeline implemented to perform rig state determination from videos captured on the rig floor of an operating rig. The pipeline can be described in two parts. First, the annotation pipeline matches each frame of the video dataset to a rig state. A convolutional neural network (CNN) is used to match the time of the video with corresponding sensor data. Second, additional CNNs are trained, capturing both spatial and temporal information, to extract an estimate of rig state from videos. The models are trained on a dataset of 3 million frames on a cloud platform using graphics processing units (GPUs). Some of the models used include a pretrained visual geometry group (VGG) network, a convolutional three-dimensional (C3D) model that uses three-dimensional (3D) convolutions, and a two-stream model that uses optical flow to capture temporal information. The initial results demonstrate this pipeline to be effective in detecting rig states using computer vision analytics.
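A minimal version of the frame-level part of such a pipeline, fine-tuning a pretrained VGG to map rig-floor frames to rig-state labels, might look like the following PyTorch sketch; the number of states and the preprocessing are assumptions, and the temporal (C3D/two-stream) models are omitted:

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_RIG_STATES = 8  # e.g., drilling, tripping in/out, connection, ... (assumed)

    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
    model.classifier[6] = nn.Linear(4096, NUM_RIG_STATES)  # replace the final layer

    def predict_rig_state(frame_batch):
        """frame_batch: (B, 3, 224, 224) tensor of normalized rig-floor frames."""
        model.eval()
        with torch.no_grad():
            return model(frame_batch).argmax(dim=1)  # rig-state index per frame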
7

Obst, Larry, Andrew Merlino, Alex Parlos, and Dario Rubio. "Fault Identification in a Subsea ESP Power Distribution System Using Electrical Waveform Monitoring." In SPE Gulf Coast Section Electric Submersible Pumps Symposium. SPE, 2021. http://dx.doi.org/10.2118/204506-ms.

Abstract:
Abstract This paper describes the technology and processes used to identify in a timely manner the source of an Instantaneous Over Current (IOC) trip during an ESP restart at the Shell Perdido SPAR. Monitoring the health condition of subsea ESPs is challenging. ESPs operate in harsh and remote environments, which makes it difficult to implement and maintain any in-situ monitoring system. Shell operates five subsea ESPs and implemented a topside condition monitoring system using electrical waveform analysis. The Perdido SPAR had a scheduled maintenance shutdown in April 2019. While ramping the facility down on April 19, 2019, the variable frequency drive (VFD) for ESP-E tripped on a cell overvoltage fault. The cell was changed, but the VFD continued to trip on instantaneous overcurrent. During ramp-up beginning April 29, 2019, most equipment came back online smoothly, but the VFD of the particular ESP labeled ESP-E continued to experience the problem that was causing overcurrent trips, preventing restart. Initial investigations could not pinpoint the source of the issue. On May 1, 2019, Shell sought to investigate this issue using high-frequency electrical waveform data recorded topside as an attempt to better pinpoint the source of this trip. Analysis of electrical waveforms before, during, and after the IOC trip found intermittent shorting/arcing at the VFD and ruled out any issues with the 7,000-foot-long umbilical cable or ESP motor. Upon further inspection, a VFD technician was able to visually identify the source of the problem. Relying in part on the electrical waveform findings, the VFD technician found failed outer jackets in the MV shielded cables at the output filter section, creating a ground path from the VFD output bus via the cable shield. The cables were replaced, and the problem was alleviated, allowing the system to return to normal operation. Shell credits quick and accurate analysis of electrical waveforms with accelerating troubleshooting activities on the VFD, saving approximately 1-2 days of troubleshooting time and the associated downtime, which translates to approximately a 50,000 BOE reduction in deferment. Analysis of high-frequency electrical waveforms using physics-based and machine learning algorithms enables one to extract long-term changes in ESP health while filtering out shorter-term changes caused by operating condition variations. This novel approach to analysis provides operators with a reliable source of information for troubleshooting and diagnosing failure events to reduce work-over costs and limit production losses.
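At its simplest, the instantaneous overcurrent condition behind such a trip is a threshold check on the sampled current waveform; a small sketch with illustrative numbers:

    import numpy as np

    def ioc_trip_indices(current_a, pickup_a):
        """Sample indices where the instantaneous current magnitude exceeds the
        overcurrent pickup level (the condition behind an IOC trip)."""
        return np.flatnonzero(np.abs(current_a) >= pickup_a)

    # Example: 60 Hz current at 10 kHz sampling with an arcing-like transient
    fs, f = 10_000, 60
    t = np.arange(0, 0.1, 1 / fs)
    i = 100 * np.sin(2 * np.pi * f * t)
    i[500:505] += 400  # injected spike
    print(ioc_trip_indices(i, pickup_a=250)[:5])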
