Scientific literature on the topic "Outdoor vision and weather"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source:

Consult thematic lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "Outdoor vision and weather."

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Outdoor vision and weather"

1

Samo, Madiha, Jimiama Mosima Mafeni Mase, and Grazziela Figueredo. "Deep Learning with Attention Mechanisms for Road Weather Detection". Sensors 23, no. 2 (10 January 2023): 798. http://dx.doi.org/10.3390/s23020798.

Full text
Abstract:
There is great interest in automatically detecting road weather and understanding its impacts on the overall safety of the transport network. This can, for example, support road condition-based maintenance or even serve as detection systems that assist safe driving during adverse climate conditions. In computer vision, previous work has demonstrated the effectiveness of deep learning in predicting weather conditions from outdoor images. However, training deep learning models to accurately predict weather conditions using real-world road-facing images is difficult due to: (1) the simultaneous occurrence of multiple weather conditions; (2) imbalanced occurrence of weather conditions throughout the year; and (3) road idiosyncrasies, such as road layouts, illumination, and road objects. In this paper, we explore the use of a focal loss function to force the learning process to focus on weather instances that are hard to learn, with the objective of helping address data imbalances. In addition, we explore the attention mechanism for pixel-based dynamic weight adjustment to handle road idiosyncrasies using state-of-the-art vision transformer models. Experiments with a novel multi-label road weather dataset show that focal loss significantly increases the accuracy of computer vision approaches for imbalanced weather conditions. Furthermore, vision transformers outperform current state-of-the-art convolutional neural networks in predicting weather conditions, with a validation accuracy of 92% and an F1-score of 81.22%, which is impressive considering the imbalanced nature of the dataset.
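The focal-loss mechanism this abstract relies on can be sketched in a few lines. The following is a generic binary focal loss in plain NumPy, not the authors' implementation; the default `gamma=2.0` and `alpha=0.25` are common choices from the focal-loss literature and are assumptions here, not values taken from the paper:

```python
import numpy as np

def focal_loss(y_true, p_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss for multi-label weather classification.

    Down-weights easy examples by the factor (1 - p_t)**gamma so that
    training focuses on hard, under-represented weather classes.
    """
    p = np.clip(p_pred, eps, 1.0 - eps)
    # p_t is the predicted probability assigned to the true label
    p_t = np.where(y_true == 1, p, 1.0 - p)
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```

With `gamma=0` and `alpha=0.5` the expression reduces to half the ordinary cross-entropy, which is a quick sanity check on the implementation.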
2

Karoon, Kholud A., and Zainab N. Nemer. "A Review of Methods of Removing Haze from An Image". International Journal of Electrical and Electronics Research 10, no. 3 (30 September 2022): 742–46. http://dx.doi.org/10.37391/ijeer.100354.

Full text
Abstract:
A literature review aids in comprehending and gaining further knowledge about a particular area of a subject. The presence of haze, fog, smoke, rain, and other harsh weather conditions affects outdoor photos. Images taken in such weather have weak contrast and poor colors, which can make detecting objects in the resulting hazy pictures difficult. In computer vision, scenes and images captured in a foggy atmosphere suffer from blurring. This work surveys many haze-removal algorithms for eliminating haze in real-world weather scenarios, in order to recover haze-free images rapidly and with improved quality, enhancing contrast, viewing range, and color accuracy. These techniques are used in countless fields; applications include outdoor surveillance, object recognition, underwater photography, and so on.
3

Kim, Bong Keun, and Yasushi Sumi. "Vision-Based Safety-Related Sensors in Low Visibility by Fog". Sensors 20, no. 10 (15 May 2020): 2812. http://dx.doi.org/10.3390/s20102812.

Full text
Abstract:
Mobile service robots are expanding their use to outdoor areas affected by various weather conditions, but the outdoor environment directly affects the functional safety of robots implemented with vision-based safety-related sensors (SRSs). Therefore, this paper takes fog as the robot's environmental condition and seeks to understand the relationship between the quantified value of that condition and the functional safety performance of the robot. To this end, the safety functions of a robot built using SRSs, and the requirements for the outdoor environment affecting them, are described first. A method of controlling visibility for evaluating the safety function of SRSs is described through the measurement and control of visibility, a quantitative means of expressing the concentration of fog, and through wavelength analysis of various SRS light sources. Finally, object recognition experiments using vision-based SRSs for robots are conducted at low visibility. These experiments verify that the proposed method is a concrete and effective way to validate the functional safety of a robot using vision-based SRSs against low-visibility environmental requirements.
4

Liu, Wei, Yue Yang, and Longsheng Wei. "Weather Recognition of Street Scene Based on Sparse Deep Neural Networks". Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 3 (19 May 2017): 403–8. http://dx.doi.org/10.20965/jaciii.2017.p0403.

Full text
Abstract:
Recognizing different weather conditions is a core component of many applications of outdoor video analysis and computer vision. Street analysis performance, including detecting street objects, detecting road lines, and recognizing street signs, varies greatly with weather, so modeling based on weather recognition is key in this field. Features derived from intrinsic properties of different weather conditions contribute to successful classification. We first propose using deep learning features from convolutional neural networks (CNN) for fine-grained recognition. In order to reduce the parameter redundancy in the CNN, we use sparse decomposition to dramatically cut down the computation. Recognition results on several databases show superior performance and indicate the effectiveness of the extracted features.
5

Uhm, Taeyoung, Jeongwoo Park, Jungwoo Lee, Gideok Bae, Geonhui Ki, and Youngho Choi. "Design of Multimodal Sensor Module for Outdoor Robot Surveillance System". Electronics 11, no. 14 (15 July 2022): 2214. http://dx.doi.org/10.3390/electronics11142214.

Full text
Abstract:
Recent studies on surveillance systems have employed various sensors to recognize and understand outdoor environments. In a complex outdoor environment, useful sensor data obtained under all weather conditions, during the night and day, can be utilized for application to robots in a real environment. Autonomous surveillance systems require a sensor system that can acquire various types of sensor data and can be easily mounted on fixed and mobile agents. In this study, we propose a method for modularizing multiple vision and sound sensors into one system, extracting data synchronized with 3D LiDAR sensors, and matching them to obtain data from various outdoor environments. The proposed multimodal sensor module can acquire six types of images: RGB, thermal, night vision, depth, fast RGB, and IR. Using the proposed module with a 3D LiDAR sensor, multimodal sensor data were obtained from fixed and mobile agents and tested for more than four years. To further prove its usefulness, this module was used as a monitoring system for six months to monitor anomalies occurring at a given site. In the future, we expect that the data obtained from multimodal sensor systems can be used for various applications in outdoor environments.
6

Osorio Quero, C., D. Durini, J. Rangel-Magdaleno, J. Martinez-Carranza, and R. Ramos-Garcia. "Single-Pixel Near-Infrared 3D Image Reconstruction in Outdoor Conditions". Micromachines 13, no. 5 (20 May 2022): 795. http://dx.doi.org/10.3390/mi13050795.

Full text
Abstract:
In the last decade, vision systems have improved their capabilities to capture 3D images in bad weather scenarios. Several techniques currently exist for image acquisition in foggy or rainy scenarios that use infrared (IR) sensors. Due to the reduced light scattering in the IR spectrum, it is possible to discriminate objects in a scene better than in images obtained in the visible spectrum. Therefore, in this work, we propose 3D image generation in foggy conditions using the single-pixel imaging (SPI) active illumination approach in combination with the Time-of-Flight (ToF) technique at a 1550 nm wavelength. For the generation of 3D images, we make use of space-filling projection with compressed sensing (CS-SRCNN) and depth information based on ToF. To evaluate the performance, the vision system included a purpose-built test chamber to simulate different fog and background illumination environments and to calculate the parameters related to image quality.
7

Su, Cheng, Yuan Biao Zhang, Wei Xia Luan, Zhi Xiong Wei, and Rui Ming Zeng. "Single Image Defogging Algorithm Based on Sparsity". Applied Mechanics and Materials 373-375 (August 2013): 558–63. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.558.

Full text
Abstract:
In order to deal with the influence of adverse weather (such as dust and fog) on the vision systems of outdoor machines, this paper proposes a real-time image defogging method based on sparsity. The method estimates the radiation intensity of the airlight using the dark channel prior statistical law, and adopts sparse decomposition and reconstruction to compute the atmospheric veil, taking advantage of the sparsity of the problem. By solving the imaging equation based on the atmospheric scattering model, we can obtain the atmospheric radiation intensity under ideal conditions and thereby defog the image. Experimental results show that this method can effectively mitigate the degradation of images taken in adverse (foggy) weather and raise their sharpness.
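The pipeline this abstract alludes to (airlight from the dark channel, transmission/atmospheric veil, then inversion of the scattering model I = J·t + A·(1 − t)) can be sketched as follows. This is the classic dark-channel-prior formulation, not the paper's sparse-decomposition variant, and the patch size, `omega`, and `t0` constants are conventional illustrative assumptions:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB followed by a local minimum filter."""
    h, w, _ = img.shape
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    dc = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            dc[i, j] = padded[i:i + patch, j:j + patch].min()
    return dc

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)."""
    dc = dark_channel(img, patch)
    # Airlight A: mean colour of the brightest dark-channel pixels (top 0.1%)
    n = max(1, int(dc.size * 0.001))
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate from the dark channel of the normalised image
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

The `t0` floor prevents division blow-up in dense-haze regions; `omega < 1` deliberately leaves a trace of haze so distant scenes still look natural.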
8

Yang, Hee-Deok. "Restoring Raindrops Using Attentive Generative Adversarial Networks". Applied Sciences 11, no. 15 (30 July 2021): 7034. http://dx.doi.org/10.3390/app11157034.

Full text
Abstract:
Artificial intelligence technologies and vision systems are used in various devices, such as automotive navigation systems, object-tracking systems, and intelligent closed-circuit televisions. In particular, outdoor vision systems have been applied across numerous fields of analysis. Despite their widespread use, current systems only work well under good weather conditions; they cannot account for inclement conditions such as rain, fog, mist, and snow. Images captured under inclement conditions degrade the performance of vision systems. Vision systems need to detect, recognize, and remove the noise caused by rain, snow, and mist to boost the performance of the algorithms employed in image processing. Several studies have targeted the removal of noise resulting from inclement conditions. We focused on eliminating the effects of raindrops on images captured with outdoor vision systems in which the camera was exposed to rain. An attentive generative adversarial network (ATTGAN) was used to remove raindrops from the images. This network was composed of two parts: an attentive-recurrent network and a contextual autoencoder. The ATTGAN generated an attention map to detect rain droplets. A de-rained image was generated by increasing the number of attentive-recurrent network layers. We increased the number of attentive-recurrent network layers in order to prevent gradient sparsity, so that generation was more stable, without preventing the network from converging. The experimental results confirmed that the extended ATTGAN could effectively remove various types of raindrops from images.
9

Kit Ng, Chin, Soon Nyean Cheong, Wen-Jiun Yap, and Yee Loo Foo. "Outdoor Illegal Parking Detection System Using Convolutional Neural Network on Raspberry Pi". International Journal of Engineering & Technology 7, no. 3.7 (4 July 2018): 17. http://dx.doi.org/10.14419/ijet.v7i3.7.16197.

Full text
Abstract:
This paper proposes a cost-effective vision-based outdoor illegal parking detection system, iConvPark, which automates the detection of illegally parked vehicles by providing real-time notification of the occurrences and locations of illegal parking cases, thereby improving the enforcement of parking rules and regulations. iConvPark is implemented on a Raspberry Pi, with a Convolutional Neural Network as the classifier, to identify illegally parked vehicles from live parking lot images retrieved via an IP camera. The system has been deployed at a university parking lot to detect illegal parking events. Evaluation results show that the proposed system is capable of detecting illegally parked vehicles with a precision of 1.00 and a recall of 0.94, implying that the detection is robust against changes in light intensity and the presence of shadow effects under different weather conditions, attributed to the strengths of the CNN.
10

Lee, Jung-San, Yun-Yi Fan, Hsin-Yu Lee, Gah Wee Yong, and Ying-Chin Chen. "Image Dehazing Technique Based on Sky Weight Detection and Fusion Transmission". 網際網路技術學刊 23, no. 5 (September 2022): 967–80. http://dx.doi.org/10.53106/160792642022092305005.

Full text
Abstract:
Computer vision techniques are widely applied to object detection, license plate recognition, remote sensing, and outdoor monitoring systems. The performance of these applications relies mainly on high-quality outdoor images. However, an outdoor image can suffer contrast loss, color distortion, and unclear structure caused by poor weather conditions and human factors such as haze, fog, and air pollution, all of which lower the sharpness of a photo. Although single-image dehazing is used to address these issues, it cannot achieve satisfactory results when dealing with bright scenes and sky areas. In this article, we design an adaptive dehazing technique based on fusion transmission and sky weight detection. Sky weight detection is employed to distinguish the foreground from the background, and the detected results are applied in a fusion strategy to calculate deep and shallow transmissions. This avoids the problem of over-adjustment. Experimental results demonstrate that the new method outperforms the latest state-of-the-art methods in both subjective and objective assessments.

Theses on the topic "Outdoor vision and weather"

1

CROCI, ALBERTO. "A novel approach to rainfall measuring: methodology, field test and business opportunity". Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2677708.

Full text
Abstract:
Being able to measure rainfall is crucial in everyday life. The more accurate, spatially distributed and temporally detailed rainfall measurements are, the more accurate forecast models - be they meteorological or hydrological - can be. Safety on travel networks could be increased by informing users about nearby road conditions in real time. In the agricultural sector, detailed knowledge of rainfall would allow for an optimal management of irrigation, nutrients and phytosanitary treatments. In the sport sector, better measurement of rainfall at outdoor events (e.g., motor, motorcycle or bike races) would increase athletes' safety. Rain gauges are the most common and widely used tools for rainfall measurement. However, the existing monitoring networks still fail to provide accurate spatial representations of localized precipitation events due to their sparseness. This effect is magnified by the intrinsic nature of intense precipitation events, as they are naturally characterized by great spatial and temporal variability. Potentially, coupling at-ground measurements (i.e., coming from pluviometric and disdrometric networks) with remote measurements (e.g., radars or meteorological satellites) could allow the rainfall phenomena to be described in a more continuous and spatially detailed way. However, this kind of approach requires that at-ground measurements be used to calibrate the remote sensors' relationships, which leads back to the sparse diffusion of ground networks. Hence the need to increase the density of ground measurements, in order to gain a better description of the events and to make a more productive use of remote sensing technologies. The ambitious aim of the methodology developed in this thesis is to repurpose other sensors already available at ground level (e.g., surveillance cameras, webcams, smartphones, cars, etc.) into new sources of rain rate measurements, widely distributed over space and time.
The technology, developed to function in daylight conditions, requires that the pictures collected during rainfall events be analyzed to identify and characterize each raindrop. The process leads to an instantaneous measurement of the rain rate associated with the captured image. To improve the robustness of the measurement, we propose to process a higher number of images within a predefined time span (i.e., 12 or more pictures per minute) and to provide an averaged measure over the observed time interval. A schematic summary of how the method works for each acquired image is as follows: 1. background removal; 2. identification of the raindrops; 3. positioning of each drop in the control volume, using the blur effect; 4. estimation of the drops' diameters, under the hypothesis that each drop falls at its terminal velocity; 5. rain rate estimation, as the sum of the contributions of each drop. Different techniques for background recognition, drop detection and selection, and noise reduction were investigated. Each solution was applied to the same sample of images, in order to identify the combination producing the most accurate rainfall estimates. The best performing procedure was then validated by applying it to a wider sample of images. This sample was acquired by an experimental station installed on the roof of the Laboratory of Hydraulics of the Politecnico di Torino, and includes rainfall events which took place between May 15th, 2016 and February 15th, 2017. Seasonal variability made it possible to record events of different intensity in varied light conditions. Moreover, the technology developed during this program of research was patented (2015) and represents the heart of WaterView, a spinoff of the Politecnico di Torino founded in February 2015, which is currently in charge of the further development of this technology, its dissemination, and its commercial exploitation.
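The final accumulation step of the five-step pipeline above (rain rate as the sum of each drop's contribution, with drops assumed to fall at terminal velocity) can be sketched as follows. The Atlas-Ulbrich power law used for terminal velocity is a common empirical fit and an assumption here, not necessarily the relation used in the thesis:

```python
import math

def terminal_velocity(d_mm):
    """Empirical terminal fall speed of a raindrop (Atlas-Ulbrich power law), m/s."""
    return 3.78 * d_mm ** 0.67

def rain_rate_mm_per_h(diameters_mm, control_volume_m3):
    """Instantaneous rain rate as the sum of each detected drop's contribution.

    Each drop of diameter D contributes a water volume (pi/6) D^3 falling at
    its terminal velocity; dividing by the sampled control volume gives a flux
    of water depth per unit time.
    """
    flux = sum((math.pi / 6.0) * d ** 3 * terminal_velocity(d) for d in diameters_mm)
    # 1e-9 converts mm^3 to m^3; 3.6e6 converts m/s of water depth to mm/h
    return 3.6e6 * 1e-9 * flux / control_volume_m3
```

Averaging this instantaneous value over 12 or more frames per minute, as the thesis proposes, would smooth the drop-count noise of any single image.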
2

Asmar, Daniel. "Vision-Inertial SLAM using Natural Features in Outdoor Environments". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2843.

Full text
Abstract:
Simultaneous Localization and Mapping (SLAM) is a recursive probabilistic inferencing process used for robot navigation when Global Positioning Systems (GPS) are unavailable. SLAM operates by building a map of the robot environment, while concurrently localizing the robot within this map. The ultimate goal of SLAM is to operate anywhere using the environment's natural features as landmarks. Such a goal is difficult to achieve for several reasons. Firstly, different environments contain different types of natural features, each exhibiting large variance in its shape and appearance. Secondly, objects look differently from different viewpoints and it is therefore difficult to always recognize them. Thirdly, in most outdoor environments it is not possible to predict the motion of a vehicle using wheel encoders because of errors caused by slippage. Finally, the design of a SLAM system to operate in a large-scale outdoor setting is in itself a challenge.

The above issues are addressed as follows. Firstly, a camera is used to recognize the environmental context (e.g., indoor office, outdoor park) by analyzing the holistic spectral content of images of the robot's surroundings. A type of feature (e.g., trees for a park) is then chosen for SLAM that is likely observable in the recognized setting. A novel tree detection system is introduced, which is based on perceptually organizing the content of images into quasi-vertical structures and marking those structures that intersect ground level as tree trunks. Secondly, a new tree recognition system is proposed, which is based on extracting Scale Invariant Feature Transform (SIFT) features on each tree trunk region and matching trees in feature space. Thirdly, dead-reckoning is performed via an Inertial Navigation System (INS), bounded by non-holonomic constraints. INS are insensitive to slippage and varying ground conditions. Finally, the developed Computer Vision and Inertial systems are integrated within the framework of an Extended Kalman Filter into a working Vision-INS SLAM system, named VisSLAM.

VisSLAM is tested on data collected during a real test run in an outdoor unstructured environment. Three test scenarios are proposed, ranging from semi-automatic detection, recognition, and initialization to a fully automated SLAM system. The first two scenarios are used to verify the presented inertial and Computer Vision algorithms in the context of localization, where results indicate accurate vehicle pose estimation for the majority of its journey. The final scenario evaluates the application of the proposed systems for SLAM, where results indicate successful operation for a long portion of the vehicle journey. Although the scope of this thesis is to operate in an outdoor park setting using tree trunks as landmarks, the developed techniques lend themselves to other environments using different natural objects as landmarks.
3

Catchpole, Jason James. "Adaptive Vision Based Scene Registration for Outdoor Augmented Reality". The University of Waikato, 2008. http://hdl.handle.net/10289/2581.

Full text
Abstract:
Augmented Reality (AR) involves adding virtual content into real scenes. Scenes are viewed using a Head-Mounted Display or other display type. In order to place content into the user's view of a scene, the user's position and orientation relative to the scene, commonly referred to as their pose, must be determined accurately. This allows the objects to be placed in the correct positions and to remain there when the user moves or the scene changes. It is achieved by tracking the user in relation to their environment using a variety of technology. One technology which has proven to provide accurate results is computer vision. Computer vision involves a computer analysing images and achieving an understanding of them. This may be locating objects such as faces in the images, or in the case of AR, determining the pose of the user. One of the ultimate goals of AR systems is to be capable of operating under any condition. For example, a computer vision system must be robust under a range of different scene types, and under unpredictable environmental conditions due to variable illumination and weather. The majority of existing literature tests algorithms under the assumption of ideal or 'normal' imaging conditions. To ensure robustness under as many circumstances as possible it is also important to evaluate the systems under adverse conditions. This thesis seeks to analyse the effects that variable illumination has on computer vision algorithms. To enable this analysis, test data is required to isolate weather and illumination effects, without other factors such as changes in viewpoint that would bias the results. A new dataset is presented which also allows controlled viewpoint differences in the presence of weather and illumination changes. This is achieved by capturing video from a camera undergoing a repeatable motion sequence. 
Ground truth data is stored per frame, allowing images from the same position under differing environmental conditions to be easily extracted from the videos. An in-depth analysis of six detection algorithms and five matching techniques demonstrates the impact that non-uniform illumination changes can have on vision algorithms. Specifically, shadows can degrade performance and reduce confidence in the system, decrease reliability, or even completely prevent successful operation. An investigation into approaches to improve performance yields techniques that can help reduce the impact of shadows. A novel algorithm is presented that merges reference data captured at different times, resulting in reference data with minimal shadow effects. This can significantly improve performance and reliability when operating on images containing shadow effects. These advances improve the robustness of computer vision systems and extend the range of conditions in which they can operate. This can increase the usefulness of the algorithms and the AR systems that employ them.
4

Ahmed, Maryum F. "Development of a stereo vision system for outdoor mobile robots". [Gainesville, Fla.]: University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0016205.

Full text
5

Lin, Li-Heng. "Enhanced stereo vision SLAM for outdoor heavy machine rotation sensing". Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/25966.

Full text
Abstract:
The thesis presents an enhanced stereo vision Simultaneous Localization and Mapping (SLAM) algorithm that permits reliable camera pose estimation in the presence of directional sunlight illumination causing shadows and non-uniform scene lighting. The algorithm has been developed to measure a mining rope shovel's rotation angle about its vertical axis ("swing" axis). A stereo camera is mounted externally to the shovel house (upper revolvable portion of the shovel), with a clear view of the shovel's lower carbody. As the shovel house swings, the camera revolves with the shovel house in a circular orbit, seeing differing views of the carbody top. While the shovel swings, the algorithm records observed 3D features on the carbody as landmarks, and incrementally builds a 3D map of the landmarks as the camera revolves around the carbody. At the same time, the algorithm localizes the camera with respect to this map. The estimated camera position is in turn used to calculate the shovel swing angle. The algorithm enhancements include a "Locally Maximal" Harris corner selection method which allows for more consistent feature selection in the presence of directional sunlight causing shadows and non-uniform scene lighting. Another enhancement is the use of 3D "Feature Cluster" landmarks rather than single feature landmarks, which improves the robustness of the landmark matching and reduces the SLAM filter's computational cost. The vision-based sensor's maximum swing angle error is less than +/- 1 degree upon map convergence. Results demonstrate the improvements of using the novel techniques compared to past methods.
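The "Locally Maximal" Harris corner selection described above (keeping a corner only when it dominates its own neighbourhood, so detections spread evenly under non-uniform lighting instead of clustering in high-contrast regions) can be sketched in plain NumPy. The window sizes and response threshold below are illustrative assumptions, not the thesis's tuned values:

```python
import numpy as np

def _box_filter(a, size):
    """Simple box filter via edge-padded sliding-window mean."""
    pad = size // 2
    p = np.pad(a, pad, mode='edge')
    out = np.zeros_like(a)
    for di in range(size):
        for dj in range(size):
            out += p[di:di + a.shape[0], dj:dj + a.shape[1]]
    return out / (size * size)

def harris_response(img, k=0.04, window=5):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    iy, ix = np.gradient(img.astype(float))
    ixx = _box_filter(ix * ix, window)
    iyy = _box_filter(iy * iy, window)
    ixy = _box_filter(ix * iy, window)
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    return det - k * trace * trace

def locally_maximal_corners(img, radius=8, thresh=1e-4):
    """Keep a corner only if its response is the maximum within a
    (2*radius+1)^2 neighbourhood, rather than taking a global top-N."""
    r = harris_response(img)
    size = 2 * radius + 1
    p = np.pad(r, radius, mode='constant', constant_values=-np.inf)
    pts = []
    for i in range(r.shape[0]):
        for j in range(r.shape[1]):
            if r[i, j] > thresh and r[i, j] == p[i:i + size, j:j + size].max():
                pts.append((i, j))
    return pts
```

Because the acceptance test is purely local, a strongly shadowed region cannot suppress corners elsewhere in the frame, which is the property the thesis exploits.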
6

Alamgir, Nyma. "Computer vision based smoke and fire detection for outdoor environments". Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/201654/1/Nyma_Alamgir_Thesis.pdf.

Full text
Abstract:
Surveillance Video-based detection of outdoor smoke and fire has been a challenging task due to the chaotic variations of shapes, movement, colour, texture, and density. This thesis contributes to the advancement of the contemporary efforts of smoke and fire detection by proposing novel technical methods and their possible integration into a complete fire safety model. The novel contributions of this thesis include an efficient feature calculation method combining local and global texture properties, the development of deep learning-based models and a conceptual framework to incorporate weather information in the fire safety model for improved accuracy in fire prediction and detection.
7

Williams, Samuel Grant Dawson. "Real-Time Hybrid Tracking for Outdoor Augmented Reality". Thesis, University of Canterbury. Computer Science and Software Engineering, 2014. http://hdl.handle.net/10092/9188.

Full text
Abstract:
Outdoor tracking and registration are important enabling technologies for mobile augmented reality. Sensor fusion and image processing can be used to improve global tracking and registration for low-cost mobile devices with limited computational power and sensor accuracy. Prior research has confirmed the benefits of this approach with high-end hardware, however the methods previously used are not ideal for current consumer mobile devices. We discuss the development of a hybrid tracking and registration algorithm that combines multiple sensors and image processing to improve on existing work in both performance and accuracy. As part of this, we developed the Transform Flow toolkit, which is one of the first open source systems for developing and quantifiably evaluating mobile AR tracking algorithms. We used this system to compare our proposed hybrid tracking algorithm with a purely sensor based approach, and to perform a user study to analyse the effects of improved precision on real world tracking tasks. Our results show that our implementation is an improvement over a purely sensor fusion based approach; accuracy is improved up to 25x in some cases with only 2-4ms additional processing per frame, in comparison with other algorithms which can take over 300ms.
8

Schreiber, Michael J. "Outdoor tracking using computer vision, xenon strobe illumination and retro-reflective landmarks". Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/18940.

Full text
9

Rosenquist, Calle, and Andreas Evesson. "Visual Servoing In Semi-Structured Outdoor Environments". Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-653.

Full text
Abstract:

The field of autonomous vehicle navigation and localization is a highly active research topic. The aim of this thesis is to evaluate the feasibility of outdoor visual navigation in a semi-structured environment. The goal is to develop a visual navigation system for an autonomous golf ball collection vehicle operating on driving ranges.

The image feature extractors SIFT and PCA-SIFT were evaluated on an image database consisting of images acquired from 19 outdoor locations over a period of several weeks to cover different environmental conditions. The results from these tests show that SIFT-type feature extractors are able to find and match image features with high accuracy. The results also show that this can be improved further by combining a lower nearest-neighbour threshold with an outlier rejection method, which allows more matches and a higher ratio of correct matches. Outliers were found and rejected by fitting the data to a homography model with the RANSAC robust estimator algorithm.

A simulator was developed to evaluate the suggested system with respect to pixel noise from illumination changes, weather and feature position accuracy, as well as the distance to features, path shapes and the visual servoing target image (milestone) interval. The system was evaluated on a total of 3 paths, 40 test combinations and 137 km driven. The results show that with the relatively simple visual servoing navigation system it is possible to use mono-vision as the sole sensor and navigate semi-structured outdoor environments such as driving ranges.
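The outlier-rejection step described in this abstract (fitting putative feature matches to a geometric model with RANSAC) follows a simple sample-score-refine loop. The sketch below is illustrative only and is not from the thesis: it fits a pure 2D translation instead of a full homography to keep the example short, and the function name and data layout are invented for the purpose.

```python
import random


def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """Estimate a 2D translation between matched keypoints with RANSAC.

    `matches` is a list of ((x1, y1), (x2, y2)) putative correspondences,
    e.g. from a SIFT nearest-neighbour matcher.  A real system would fit
    a homography (as in the thesis); a translation needs only one match
    per sample, which keeps the loop minimal.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        # 1. Sample a minimal set (one match for a translation model).
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        # 2. Score the model: count matches consistent within `tol` pixels.
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # 3. Refine the model on the full consensus set.
    dx = sum(b[0] - a[0] for a, b in best_inliers) / len(best_inliers)
    dy = sum(b[1] - a[1] for a, b in best_inliers) / len(best_inliers)
    return (dx, dy), best_inliers
```

Because a translation model needs only a one-match sample, even a few hundred iterations make it overwhelmingly likely that an all-inlier sample is drawn, after which gross outliers are rejected by the consistency test.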

10

Linegar, Chris. "Vision-only localisation under extreme appearance change." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:608762bd-5608-4e50-ab7b-da454dd52887.

Abstract:
Robust localisation is a key requirement for autonomous vehicles. However, in order to achieve widespread adoption of this technology, we also require this function to be performed using low-cost hardware. Cameras are appealing due to their information-rich image content and low cost; however, camera-based localisation is difficult because of the problem of appearance change. For example, in outdoor environments the appearance of the world can change dramatically and unpredictably with variations in lighting, weather, season and scene structure. We require autonomous vehicles to be robust under these challenging environmental conditions. This thesis presents Dub4, a vision-only localisation system for autonomous vehicles. The system is founded on the concept of experiences, where an "experience" is a visual memory which models the world under particular conditions. By allowing the system to build up and curate a map of these experiences, we are able to handle cyclic appearance change (lighting, weather and season) as well as adapt to slow structural change. We present a probabilistic framework for predicting which experiences are most likely to match successfully with the live image at run-time, conditioned on the robot's prior use of the map. In addition, we describe an unsupervised algorithm for detecting and modelling higher-level visual features in the environment for localisation. These features are trained on a per-experience basis and are robust to extreme changes in appearance, for example between rain and sun, or day and night. The system is tested on over 1500 km of data, from urban and off-road environments, through sun, rain, snow, harsh lighting, at different times of the day and night, and through all seasons.
In addition to this extensive offline testing, Dub4 has served as the primary localisation source on a number of autonomous vehicles, including Oxford University's RobotCar, the 2016 Shell Eco-Marathon, the LUTZ PathFinder Project in Milton Keynes, and the GATEway Project in Greenwich, London.

Books on the topic "Outdoor vision and weather"

1

Tian, Jiandong. All Weather Robot Vision. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6429-8.

2

Steele, Philip. Whatever the weather! London: Purnell, 1988.

3

Ganeri, Anita. Outdoor science. London: Evans Brothers, 1993.

4

Outdoor Life Books (Firm), ed. The extreme weather survival manual. San Francisco, California: Weldon Owen, Inc., 2015.

5

Schreuder, Duco. Outdoor Lighting: Physics, Vision and Perception. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-8602-1.

6

SpringerLink (Online service), ed. Outdoor Lighting: Physics, Vision and Perception. Dordrecht: Springer Science+Business Media B.V., 2008.

7

Weatherwise: Practical weather lore for sailors and outdoor people. Newton Abbot: David & Charles, 1986.

8

Reading weather: Where will you be when the storm hits? Helena, Mont.: Falcon, 1998.

9

United States. National Weather Service. Vision 2005: National Weather Service strategic plan for weather, water, and climate services, 2000-2005. [Silver Spring, Md.?]: U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, National Weather Service, 1999.

10

Brown, Tom. The tracker: The vision; Awakening spirits. New York: One Spirit, 2003.


Book chapters on the topic "Outdoor vision and weather"

1

Moodley, Jenade, and Serestina Viriri. "Weather Characterization from Outdoor Scene Images." In Computer Vision and Graphics, 160–70. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00692-1_15.

2

Yu, Ye, Abhimitra Meka, Mohamed Elgharib, Hans-Peter Seidel, Christian Theobalt, and William A. P. Smith. "Self-supervised Outdoor Scene Relighting." In Computer Vision – ECCV 2020, 84–101. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58542-6_6.

3

Cohen, Andrea, Johannes L. Schönberger, Pablo Speciale, Torsten Sattler, Jan-Michael Frahm, and Marc Pollefeys. "Indoor-Outdoor 3D Reconstruction Alignment." In Computer Vision – ECCV 2016, 285–300. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46487-9_18.

4

Paulescu, Marius, Eugenia Paulescu, Paul Gravila, and Viorel Badescu. "Outdoor Operation of PV Systems." In Weather Modeling and Forecasting of PV Systems Operation, 271–324. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4649-0_9.

5

Tian, Jiandong. "Underwater Descattering from Light Field." In All Weather Robot Vision, 271–87. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6429-8_9.

6

Tian, Jiandong. "Applications and Future Work." In All Weather Robot Vision, 289–311. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6429-8_10.

7

Tian, Jiandong. "Spectral Power Distributions and Reflectance Calculations for Robot Vision." In All Weather Robot Vision, 29–53. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6429-8_2.

8

Tian, Jiandong. "Shadow Modeling and Detection." In All Weather Robot Vision, 77–119. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6429-8_4.

9

Tian, Jiandong. "Imaging Modeling and Camera Sensitivity Recovery." In All Weather Robot Vision, 55–75. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6429-8_3.

10

Tian, Jiandong. "Rain and Snow Removal." In All Weather Robot Vision, 189–227. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6429-8_7.


Conference proceedings on the topic "Outdoor vision and weather"

1

Zhang, Jinsong, Kalyan Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenman, and Jean-Francois Lalonde. "All-Weather Deep Outdoor Lighting Estimation." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.01040.

2

"A Method of Weather Recognition based on Outdoor Images." In International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2014. http://dx.doi.org/10.5220/0004724005100516.

3

Pan, Yiqun, Yan Qu, and Yuming Li. "Cooling Loads Prediction of 2010 Shanghai World Expo." In ASME 2009 3rd International Conference on Energy Sustainability collocated with the Heat Transfer and InterPACK09 Conferences. ASMEDC, 2009. http://dx.doi.org/10.1115/es2009-90263.

Abstract:
The paper predicts and analyses the cooling loads of the pavilions at the 2010 Shanghai World Expo based on the general planning of the expo. The simulation models are established using DOE-2 for the various pavilions: 5 permanent pavilions, national pavilions, international organization pavilions, corporate pavilions, and temporary exhibition pavilions. A modularization method is used to simplify the simulation models of the temporary exhibition pavilions. The cooling loads of the various pavilions from May 1st to Oct 31st 2010 are simulated and analyzed, including hourly cooling loads, monthly cooling loads and hourly cooling loads on the summer design day. Lastly, three factors (weather, visitor flow rate and outdoor air supply mode) are selected for an uncertainty analysis of their impact on the cooling loads.
4

Federici, John F., Jianjun Ma, and Lothar Moeller. "Weather Impact on Outdoor Terahertz Wireless Links." In NANOCOM '15: The Second Annual International Conference on Nanoscale Computing and Communication. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2800795.2800823.

5

Pentland, A., B. Bolles, S. Barnard, and M. Fischler. "Outdoor Model-Based Vision." In Machine Vision. Washington, D.C.: Optica Publishing Group, 1987. http://dx.doi.org/10.1364/mv.1987.wa3.

Abstract:
DARPA's Autonomous Land Vehicle (ALV) project is intended to develop and demonstrate vision systems capable of navigating in outdoor, natural environments. The major problems faced by this project, as we see it, are (1) developing a general-purpose vocabulary of models that are sufficient to describe most of the important landmarks that the vehicle will encounter, and (2) developing recognition techniques that will allow us to recognize these models from sensor data both in a directed, top-down manner and in an unguided, bottom-up fashion.
6

Narasimhan, Srinivasa G., and Shree K. Nayar. "Vision and the weather." In Photonics West 2001 - Electronic Imaging, edited by Bernice E. Rogowitz and Thrasyvoulos N. Pappas. SPIE, 2001. http://dx.doi.org/10.1117/12.429497.

7

Nayar, S. K., and S. G. Narasimhan. "Vision in bad weather." In Proceedings of the Seventh IEEE International Conference on Computer Vision. IEEE, 1999. http://dx.doi.org/10.1109/iccv.1999.790306.

8

Kawakami, Sota, Kei Okada, Naoko Nitta, Kazuaki Nakamura, and Noboru Babaguchi. "Semi-Supervised Outdoor Image Generation Conditioned on Weather Signals." In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9412139.

9

Anderson, Mark C., Kent L. Gee, Daniel J. Novakovich, Logan T. Mathews, and Zachary T. Jones. "Comparing two weather-robust microphone configurations for outdoor measurements." In 179th Meeting of the Acoustical Society of America. ASA, 2020. http://dx.doi.org/10.1121/2.0001561.

10

Campbell, NW, WPJ Mackeown, BT Thomas, and T. Troscianko. "Automatic Interpretation of Outdoor Scenes." In British Machine Vision Conference 1995. British Machine Vision Association, 1995. http://dx.doi.org/10.5244/c.9.30.
