Academic literature on the topic "Deep learning, computer vision, safety, road scene understanding"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Deep learning, computer vision, safety, road scene understanding".


You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Deep learning, computer vision, safety, road scene understanding"

1

Trabelsi, Rim, Redouane Khemmar, Benoit Decoux, Jean-Yves Ertaud, and Rémi Boutteau. "Recent Advances in Vision-Based On-Road Behaviors Understanding: A Critical Survey". Sensors 22, no. 7 (March 30, 2022): 2654. http://dx.doi.org/10.3390/s22072654.

Abstract
On-road behavior analysis is a crucial and challenging problem in vision-based autonomous driving. Several approaches have been proposed to deal with the related tasks, and the area has gained wide attention recently. Much of the excitement about on-road behavior understanding stems from the advances witnessed in computer vision, machine learning, and deep learning. Remarkable achievements have been made in road behavior understanding over recent years. This paper reviews 100+ papers of on-road behavior analysis related work in light of the milestones achieved, spanning the last two decades. It is a first attempt to draw smart mobility researchers' attention to the road behavior understanding field and its potential impact on the safety of all road agents, such as drivers and pedestrians. To push for a holistic understanding, we investigate the complementary relationships between the elementary tasks that we define as the main components of road behavior understanding, aiming at a comprehensive view of approaches and techniques. To this end, five related topics are covered in this review: situational awareness, driver-road interaction, road scene understanding, trajectory forecasting, and driving activity and status analysis. The paper also reviews the contribution of deep learning approaches and provides an in-depth analysis of recent benchmarks, with a specific taxonomy that can help stakeholders select their best-fit architecture. We finally provide a comprehensive discussion that identifies novel research directions, some of which have been implemented and validated in our current smart mobility research work. This paper presents the first survey of road behavior understanding-related work without overlap with existing reviews.
2

Samo, Madiha, Jimiama Mosima Mafeni Mase, and Grazziela Figueredo. "Deep Learning with Attention Mechanisms for Road Weather Detection". Sensors 23, no. 2 (January 10, 2023): 798. http://dx.doi.org/10.3390/s23020798.

Abstract
There is great interest in automatically detecting road weather and understanding its impacts on the overall safety of the transport network. This can, for example, support road condition-based maintenance or even serve as a detection system that assists safe driving during adverse climate conditions. In computer vision, previous work has demonstrated the effectiveness of deep learning in predicting weather conditions from outdoor images. However, training deep learning models to accurately predict weather conditions using real-world road-facing images is difficult due to: (1) the simultaneous occurrence of multiple weather conditions; (2) the imbalanced occurrence of weather conditions throughout the year; and (3) road idiosyncrasies, such as road layouts, illumination, and road objects. In this paper, we explore the use of a focal loss function to force the learning process to focus on weather instances that are hard to learn, with the objective of helping address data imbalances. In addition, we explore the attention mechanism for pixel-based dynamic weight adjustment to handle road idiosyncrasies, using state-of-the-art vision transformer models. Experiments with a novel multi-label road weather dataset show that focal loss significantly increases the accuracy of computer vision approaches for imbalanced weather conditions. Furthermore, vision transformers outperform current state-of-the-art convolutional neural networks in predicting weather conditions, with a validation accuracy of 92% and an F1-score of 81.22%, which is impressive considering the imbalanced nature of the dataset.
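To make the focal-loss idea concrete, the following is a minimal sketch of a multi-label focal loss in PyTorch. It is an illustration only, not the authors' implementation; the gamma and alpha values are assumed defaults rather than the paper's settings.

    # Minimal multi-label focal loss sketch (PyTorch); gamma and alpha are
    # illustrative assumptions, not the paper's settings.
    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
        # Per-label binary cross-entropy, kept unreduced.
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        # p_t: model probability assigned to the true state of each label.
        p = torch.sigmoid(logits)
        p_t = targets * p + (1 - targets) * (1 - p)
        alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
        # (1 - p_t)^gamma down-weights easy examples so training focuses on
        # hard, typically under-represented, weather instances.
        return (alpha_t * (1 - p_t) ** gamma * bce).mean()

    # Example: a batch of 4 images with 5 binary weather labels.
    logits = torch.randn(4, 5)
    targets = torch.randint(0, 2, (4, 5)).float()
    print(focal_loss(logits, targets))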
3

Pavel, Monirul Islam, Siok Yee Tan, and Azizi Abdullah. "Vision-Based Autonomous Vehicle Systems Based on Deep Learning: A Systematic Literature Review". Applied Sciences 12, no. 14 (July 6, 2022): 6831. http://dx.doi.org/10.3390/app12146831.

Abstract
In the past decade, autonomous vehicle systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on social safety as well as road safety and the future of transportation systems. However, AVS are still far from mass production because of the high cost of sensor fusion and the lack of combined top-tier solutions to tackle uncertainty on roads. To reduce sensor dependency and to ease manufacturing while advancing research, deep learning-based approaches could be the best alternative for developing practical AVS. With this vision, in this systematic review paper, we broadly discuss the deep learning literature for AVS from the past decade with a view to real-life implementation in core fields. The systematic review of AVS implementing deep learning is organized into several modules covering perception analysis (vehicle detection, traffic sign and light identification, pedestrian detection, lane and curve detection, road object localization, traffic scene analysis), decision making, end-to-end control and prediction, path and motion planning, and augmented reality-based HUDs, analyzing research works from 2011 to 2021 that focus on RGB camera vision. The literature is also analyzed for final representative outcomes as visualizations in augmented reality-based head-up displays (AR-HUD), with categories such as early warnings, road markings for improved navigation, and enhanced safety via overlays on vehicles and pedestrians in extreme visual conditions to reduce collisions. The contribution of this literature review is a detailed analysis of current state-of-the-art deep learning methods that rely only on RGB camera vision rather than complex sensor fusion. It is expected to offer a pathway for the rapid development of cost-efficient and more secure practical autonomous vehicle systems.
4

Mohamed, Mohamed Gomaa, and Nicolas Saunier. "Behavior Analysis Using a Multilevel Motion Pattern Learning Framework". Transportation Research Record: Journal of the Transportation Research Board 2528, no. 1 (January 2015): 116–27. http://dx.doi.org/10.3141/2528-13.

Abstract
The increasing availability of video data, through existing traffic cameras or dedicated field data collection, and the development of computer vision techniques pave the way for the collection of massive data sets about the microscopic behavior of road users. Analysis of such data sets helps in understanding normal road user behavior and can be used for realistic prediction of motion and computation of surrogate safety indicators. A multilevel motion pattern learning framework was developed to enable automated scene interpretation, anomalous behavior detection, and surrogate safety analysis. First, points of interest (POIs) were learned on the basis of the Gaussian mixture model and the expectation maximization algorithm and then used to form activity paths (APs). Second, motion patterns, represented by trajectory prototypes, were learned from road users' trajectories in each AP by using a two-stage trajectory clustering method based on spatial and then temporal (speed) information. Finally, motion prediction relied on matching partial trajectories at each instant to the learned prototypes to evaluate the potential for collision by computing safety indicators. An intersection case study demonstrates the framework's abilities in several ways: it helps reduce the computation cost by up to 90%; it cleans the trajectory data set of tracking outliers; it uses actual trajectories as prototypes without any pre- or postprocessing; and it predicts future motion realistically to compute surrogate safety indicators.
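The first step of the framework (learning POIs with a Gaussian mixture model fitted by expectation maximization) can be sketched with scikit-learn's GaussianMixture; the synthetic endpoints and the number of components below are assumptions for illustration, not the paper's data.

    # Sketch: learn points of interest (POIs) from trajectory endpoints with a
    # Gaussian mixture fitted by EM. Data and n_components are illustrative.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Stand-in for entry/exit points of road-user trajectories at two zones.
    endpoints = np.vstack([
        rng.normal(loc=(10.0, 5.0), scale=1.0, size=(200, 2)),
        rng.normal(loc=(40.0, 25.0), scale=1.5, size=(200, 2)),
    ])

    # EM fits the mixture; each component is one candidate POI.
    gmm = GaussianMixture(n_components=2, covariance_type="full").fit(endpoints)
    print("POI centres:\n", gmm.means_)

    # Assigning endpoints to POIs is the basis for forming activity paths
    # (entry POI -> exit POI) in the next stage of the framework.
    labels = gmm.predict(endpoints)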
5

Kherraki, Amine, Shahzaib Saqib Warraich, Muaz Maqbool, and Rajae El Ouazzani. "Residual balanced attention network for real-time traffic scene semantic segmentation". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 3 (June 1, 2023): 3281. http://dx.doi.org/10.11591/ijece.v13i3.pp3281-3289.

Abstract
Intelligent transportation systems (ITS) are among the most researched topics of this century. Autonomous driving involves very advanced road safety monitoring tasks, including identifying dangers on the road and protecting pedestrians. In the last few years, deep learning (DL) approaches, and especially convolutional neural networks (CNNs), have been extensively used to solve ITS problems such as traffic scene semantic segmentation and traffic sign classification. Semantic segmentation is an important task that has been addressed in computer vision (CV). Indeed, traffic scene semantic segmentation using CNNs requires high precision with few computational resources to perceive and segment the scene in real time. However, related work often focuses on only one aspect: precision, or the number of computational parameters. In this regard, we propose RBANet, a robust and lightweight CNN that uses a newly proposed balanced attention module and a newly proposed residual module. We then simulated our proposed RBANet using three loss functions to get the best combination, using only 0.74M parameters. RBANet has been evaluated on CamVid, the most used dataset in semantic segmentation, and it performs well in terms of parameter requirements and precision compared to related work.
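The paper's balanced attention module is not reproduced here; purely to illustrate the general channel-attention pattern that such lightweight modules build on, a squeeze-and-excitation-style block in PyTorch could look as follows (all names and sizes are assumptions).

    # Generic channel-attention block (squeeze-and-excitation style). This is
    # NOT the RBANet balanced attention module, only the family of mechanisms
    # it belongs to; sizes are illustrative.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global context
            self.fc = nn.Sequential(              # excitation: channel weights
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w                           # reweight feature maps

    feats = torch.randn(2, 32, 64, 64)
    print(ChannelAttention(32)(feats).shape)       # torch.Size([2, 32, 64, 64])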
6

Mauri, Antoine, Redouane Khemmar, Benoit Decoux, Madjid Haddad, and Rémi Boutteau. "Real-Time 3D Multi-Object Detection and Localization Based on Deep Learning for Road and Railway Smart Mobility". Journal of Imaging 7, no. 8 (August 12, 2021): 145. http://dx.doi.org/10.3390/jimaging7080145.

Abstract
For smart mobility, autonomous vehicles, and advanced driver-assistance systems (ADASs), perception of the environment is an important task in scene analysis and understanding. Better perception of the environment allows for enhanced decision making, which, in turn, enables very high-precision actions. To this end, we introduce in this work a new real-time deep learning approach for 3D multi-object detection for smart mobility not only on roads, but also on railways. To obtain the 3D bounding boxes of the objects, we modified a proven real-time 2D detector, YOLOv3, to predict 3D object localization, object dimensions, and object orientation. Our method has been evaluated on KITTI's road dataset as well as on our own hybrid virtual road/rail dataset acquired from the video game Grand Theft Auto (GTA) V. The evaluation of our method on these two datasets shows good accuracy and, more importantly, that it can be used in real-time conditions, in road and rail traffic environments. Through our experimental results, we also show the importance of the accuracy of the prediction of the regions of interest (RoIs) used in the estimation of the 3D bounding box parameters.
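As a hedged sketch of the general idea (regressing 3D box parameters, that is, dimensions, orientation, and depth, from the features of a 2D detection), consider the head below; the parameterization is an assumption, not the paper's exact YOLOv3 modification.

    # Illustrative 3D regression head on top of RoI features from a 2D
    # detector. The (dims, yaw, depth) parameterization is an assumption.
    import torch
    import torch.nn as nn

    class Box3DHead(nn.Module):
        def __init__(self, in_features: int = 256):
            super().__init__()
            self.dims = nn.Linear(in_features, 3)    # width, height, length
            self.orient = nn.Linear(in_features, 2)  # (sin, cos) of yaw
            self.depth = nn.Linear(in_features, 1)   # distance from camera

        def forward(self, roi_feats):
            sin_cos = torch.tanh(self.orient(roi_feats))
            yaw = torch.atan2(sin_cos[:, 0], sin_cos[:, 1])
            return self.dims(roi_feats), yaw, self.depth(roi_feats)

    dims, yaw, depth = Box3DHead()(torch.randn(8, 256))   # 8 RoIs
    print(dims.shape, yaw.shape, depth.shape)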
7

Lee, Dong-Gyu. "Fast Drivable Areas Estimation with Multi-Task Learning for Real-Time Autonomous Driving Assistant". Applied Sciences 11, no. 22 (November 13, 2021): 10713. http://dx.doi.org/10.3390/app112210713.

Abstract
Autonomous driving is a safety-critical application that requires a high-level computer vision understanding with real-time inference. In this study, we focus on computational efficiency, an important factor, by improving the running time and performing multiple tasks simultaneously for practical applications. We propose a fast and accurate multi-task learning-based architecture for the joint segmentation of drivable area and lane lines and classification of the scene. An encoder-decoder architecture efficiently handles input frames through a shared representation. A comprehensive understanding of the driving environment is improved by generalization and regularization from the different tasks. The proposed method is learned end-to-end through multi-task learning on the very challenging Berkeley DeepDrive dataset and shows its robustness across the three autonomous driving tasks. Experimental results show that the proposed method outperforms other multi-task learning approaches in both speed and accuracy. The method runs at over 93.81 fps at inference, enabling real-time execution.
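A minimal sketch of the shared-encoder, multi-head pattern described above (segmentation heads for drivable area and lane lines plus a scene classifier); layer sizes are assumptions, not the paper's architecture.

    # Shared-encoder multi-task sketch: drivable-area and lane-line
    # segmentation heads plus a scene classifier. Sizes are illustrative.
    import torch
    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, num_scene_classes: int = 4):
            super().__init__()
            self.encoder = nn.Sequential(          # shared representation
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            def seg_head():                        # upsample back to input size
                return nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
                )
            self.drivable = seg_head()             # drivable-area mask logits
            self.lane = seg_head()                 # lane-line mask logits
            self.scene = nn.Sequential(            # scene-class logits
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, num_scene_classes),
            )

        def forward(self, x):
            z = self.encoder(x)
            return self.drivable(z), self.lane(z), self.scene(z)

    drivable, lane, scene = MultiTaskNet()(torch.randn(1, 3, 128, 256))
    print(drivable.shape, lane.shape, scene.shape)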
8

Garg, Prateek, Anirudh Srinivasan Chakravarthy, Murari Mandal, Pratik Narang, Vinay Chamola, and Mohsen Guizani. "ISDNet: AI-enabled Instance Segmentation of Aerial Scenes for Smart Cities". ACM Transactions on Internet Technology 21, no. 3 (August 31, 2021): 1–18. http://dx.doi.org/10.1145/3418205.

Abstract
Aerial scenes captured by UAVs have immense potential in IoT applications related to urban surveillance, road and building segmentation, land cover classification, and so on, which are necessary for the evolution of smart cities. The advancements in deep learning have greatly enhanced visual understanding, but the domain of aerial vision remains largely unexplored. Aerial images pose many unique challenges for proper scene parsing, such as high-resolution data, small-scale objects, a large number of objects in the camera view, dense clustering of objects, and background clutter, which greatly hinder the performance of existing deep learning methods. In this work, we propose ISDNet (Instance Segmentation and Detection Network), a novel network to perform instance segmentation and object detection on visual data captured by UAVs. This work enables aerial image analytics for various needs in a smart city. In particular, we use dilated convolutions to generate improved spatial context, leading to better discrimination between foreground and background features. The proposed network efficiently reuses the segment-mask features by propagating them from early stages using residual connections. Furthermore, ISDNet makes use of effective anchors to accommodate varying object scales and sizes. The proposed method obtains state-of-the-art results in the aerial context.
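To illustrate the dilated-convolution idea mentioned above, the stack below enlarges the receptive field without downsampling, which is what improves spatial context for small objects; it is a generic sketch, not the ISDNet architecture.

    # Stacked dilated convolutions: growing receptive field, same resolution.
    # Generic illustration, not ISDNet itself.
    import torch
    import torch.nn as nn

    context_block = nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
    )
    x = torch.randn(1, 64, 128, 128)
    print(context_block(x).shape)   # spatial size preserved: (1, 64, 128, 128)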
9

Ibrahim, Mohamed, James Haworth, and Tao Cheng. "WeatherNet: Recognising Weather and Visual Conditions from Street-Level Images Using Deep Residual Learning". ISPRS International Journal of Geo-Information 8, no. 12 (November 30, 2019): 549. http://dx.doi.org/10.3390/ijgi8120549.

Abstract
Extracting information related to weather and visual conditions at a given time and place is indispensable for scene awareness, which strongly impacts our behaviours, from simply walking in a city to riding a bike, driving a car, or autonomous driving assistance. Despite the significance of this subject, it has still not been fully addressed by machine intelligence relying on deep learning and computer vision to detect the multiple labels of weather and visual conditions with a unified method that can be easily used in practice. What has been achieved to date are rather sectorial models that address a limited number of labels and do not cover the wide spectrum of weather and visual conditions; moreover, weather and visual conditions are often addressed individually. In this paper, we introduce a novel framework to automatically extract this information from street-level images, relying on deep learning and computer vision, using a unified method without any pre-defined constraints on the processed images. A pipeline of four deep convolutional neural network (CNN) models, so-called WeatherNet, is trained, relying on residual learning using the ResNet50 architecture, to extract various weather and visual conditions such as dawn/dusk, day, and night for time detection, glare for lighting conditions, and clear, rainy, snowy, and foggy for weather conditions. WeatherNet shows strong performance in extracting this information from user-defined images or video streams and can be used for, but is not limited to, autonomous vehicles and driving-assistance systems, tracking behaviours, safety-related research, or even better understanding cities through images for policy-makers.
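A hedged sketch of the described pattern (several ResNet50-based classifiers, each responsible for one group of conditions) using torchvision; the grouping below paraphrases the abstract, and everything else (weights, head sizes) is an assumption.

    # Pipeline of ResNet50 classifiers, one per condition group, in the spirit
    # of WeatherNet. Grouping paraphrases the abstract; the rest is assumed.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    GROUPS = {
        "time": ["dawn/dusk", "day", "night"],
        "glare": ["glare", "no glare"],
        "weather": ["clear", "rainy", "snowy", "foggy"],
    }

    def make_classifier(num_classes: int) -> nn.Module:
        model = resnet50(weights=None)   # pretrained weights could be plugged in
        model.fc = nn.Linear(model.fc.in_features, num_classes)
        model.eval()                     # inference mode for this demo
        return model

    pipeline = {name: make_classifier(len(labels)) for name, labels in GROUPS.items()}

    image = torch.randn(1, 3, 224, 224)  # stand-in for a street-level image
    for name, model in pipeline.items():
        pred = model(image).argmax(dim=1).item()
        print(name, "->", GROUPS[name][pred])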
10

Gite, Shilpa, Ketan Kotecha, and Gheorghita Ghinea. "Context-aware assistive driving: an overview of techniques for mitigating the risks of driver in real-time driving environment". International Journal of Pervasive Computing and Communications ahead-of-print, ahead-of-print (August 16, 2021). http://dx.doi.org/10.1108/ijpcc-11-2020-0192.

Abstract
Purpose: This study analyzes driver risks in the real-time driving environment, offering a complete analysis of context-aware assistive driving techniques: context awareness in assistive driving through probabilistic modeling, and advanced techniques based on spatio-temporal modeling, computer vision, and deep learning.
Design/methodology/approach: Autonomous vehicles aim to increase driver safety by transferring vehicle control from the driver to Advanced Driver Assistance Systems (ADAS). The core objective of these systems is to cut down on road accidents by helping the user in various ways. Early anticipation of a particular action would give the driver a prior advantage in successfully handling dangers on the road. This paper surveys the advancements that have taken place in the use of multi-modal machine learning for assistive driving systems. The aim is to help elucidate recent progress and techniques in the field while also identifying the scope for further research and improvement. The authors give an overview of context-aware driver assistance systems that alert drivers during maneuvers by taking advantage of multi-modal human processing to improve safety and drivability.
Findings: There has been huge improvement in, and investment into, ADAS as a key concept for road safety. In such applications, data is processed and information is extracted from multiple data sources, thus requiring machine learning algorithms to be trained in a multi-modal style. The domain is fast gaining traction owing to its applications across multiple disciplines with crucial gains.
Research limitations/implications: The research is focused on deep learning and computer vision-based techniques for generating a context for assistive driving, which could well be adopted by ADAS manufacturers.
Social implications: Context-aware assistive driving would work in real time and could save the lives of many drivers and pedestrians.
Originality/value: This paper provides an understanding of context-aware deep learning frameworks for assistive driving, incorporating the latest state-of-the-art techniques that use a suitable driving context to alert the driver. Automobile manufacturers and researchers may refer to this study for their enhancements.

Theses on the topic "Deep learning, computer vision, safety, road scene understanding"

1

Schoen, Fabio. "Deep learning methods for safety-critical driving events analysis". Doctoral thesis, 2022. http://hdl.handle.net/2158/1260238.

Abstract
In this thesis, we propose to study the data of crash and near-crash events, collectively called safety-critical driving events. Such data include footage of the event, acquired from a camera mounted inside the vehicle, and the data from a GPS/IMU module, i.e., speed, acceleration, and angular velocity. First, we introduce a novel problem, which we call unsafe maneuver classification, that aims at classifying safety-critical driving events based on the maneuver that leads to the unsafe situation, and we propose a two-stream neural architecture based on convolutional neural networks that performs sensor fusion and addresses the classification task. Then, we integrate the output of an object detector into the classification task, to provide the network explicit knowledge of the entities in the scene. We design a specific architecture that leverages a tracking algorithm to extract information about a single real-world object over time, and then uses attention to ground the prediction on a single (or a few) objects, i.e., the dangerous or endangered ones, leveraging a solution that we call the Spatio-Temporal Attention Selector (STAS). Finally, we address the video captioning of safety-critical events, with the goal of providing a description of the dangerous situation in a human-understandable form.
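A minimal sketch of the two-stream idea (one stream for the event footage, one for the GPS/IMU signals, fused before classification); the shapes, channel counts, and concatenation fusion are assumptions, not the thesis architecture.

    # Two-stream fusion sketch: video clip + GPS/IMU signals -> maneuver class.
    # Channel counts and concatenation fusion are illustrative assumptions.
    import torch
    import torch.nn as nn

    class TwoStreamClassifier(nn.Module):
        def __init__(self, num_maneuvers: int = 6):
            super().__init__()
            self.video_stream = nn.Sequential(     # clip-level visual features
                nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
            self.sensor_stream = nn.Sequential(    # speed/accel/gyro features
                nn.Conv1d(7, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            )
            self.classifier = nn.Linear(16 + 16, num_maneuvers)

        def forward(self, clip, signals):
            fused = torch.cat(
                [self.video_stream(clip), self.sensor_stream(signals)], dim=1)
            return self.classifier(fused)

    clip = torch.randn(2, 3, 8, 64, 64)    # batch, channels, frames, H, W
    signals = torch.randn(2, 7, 100)       # batch, sensor channels, time steps
    print(TwoStreamClassifier()(clip, signals).shape)   # torch.Size([2, 6])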

Conference proceedings on the topic "Deep learning, computer vision, safety, road scene understanding"

1

Manfio Barbosa, Felipe, and Fernando Santos Osório. "3D Perception for Autonomous Mobile Robots Navigation Using Deep Learning for Safe Zones Detection: A Comparative Study". In Computer on the Beach. São José: Universidade do Vale do Itajaí, 2021. http://dx.doi.org/10.14210/cotb.v12.p072-079.

Abstract
Computer vision plays an important role in intelligent systems, particularly in autonomous mobile robots and intelligent vehicles. It is essential to the correct operation of such systems, increasing safety for users/passengers and also for other people in the environment. One of its many levels of analysis is semantic segmentation, which provides powerful insights for scene understanding, a task of utmost importance in autonomous navigation. Recent developments have shown the power of deep learning models applied to semantic segmentation. Besides, 3D data offers a richer representation of the world. Although there are many studies comparing the performance of several semantic segmentation models, they mostly consider the task over 2D images, and none of them include recent GAN models in the analysis. In this paper, we carry out the study, implementation, and comparison of recent deep learning models for 3D semantic image segmentation. We consider the FCN, SegNet, and Pix2Pix models. The 3D images were captured indoors and gathered in a dataset created for the scope of this project. Our main objective is to evaluate and compare the models' performance and efficiency in detecting obstacles and safe and unsafe zones for autonomous mobile robot navigation. Considering mean IoU, number of parameters, and inference time as metrics, our experiments show that Pix2Pix, a recent conditional generative adversarial network, outperforms the FCN and SegNet models.
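Since mean IoU is the headline metric of this comparison, a small generic sketch of how it is typically computed from per-pixel label maps may help (this is not the authors' evaluation code):

    # Mean intersection-over-union (mIoU) over per-pixel label maps.
    # Generic implementation, not the authors' evaluation code.
    import numpy as np

    def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, target == c).sum()
            union = np.logical_or(pred == c, target == c).sum()
            if union > 0:               # skip classes absent from both maps
                ious.append(inter / union)
        return float(np.mean(ious))

    # Example with 3 classes, e.g., obstacle / safe zone / unsafe zone.
    pred = np.random.randint(0, 3, size=(64, 64))
    target = np.random.randint(0, 3, size=(64, 64))
    print(mean_iou(pred, target, num_classes=3))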
