Scientific literature on the topic "Road scene understanding"
Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles
Consult thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Road scene understanding".
Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Road scene understanding"
Zhou, Wujie, Sijia Lv, Qiuping Jiang, and Lu Yu. "Deep Road Scene Understanding." IEEE Signal Processing Letters 26, no. 4 (April 2019): 587–91. http://dx.doi.org/10.1109/lsp.2019.2896793.
Huang, Wenqi, Fuzheng Zhang, Aidong Xu, Huajun Chen, and Peng Li. "Fusion-based holistic road scene understanding." Journal of Engineering 2018, no. 16 (November 1, 2018): 1623–28. http://dx.doi.org/10.1049/joe.2018.8319.
Wang, Chao, Huan Wang, Rui Li Wang, and Chun Xia Zhao. "Robust Zebra-Crossing Detection for Autonomous Land Vehicles and Driving Assistance Systems." Applied Mechanics and Materials 556–562 (May 2014): 2732–39. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.2732.
Liu, Huajun, Cailing Wang, and Jingyu Yang. "Vanishing points estimation and road scene understanding based on Bayesian posterior probability." Industrial Robot: An International Journal 43, no. 1 (January 18, 2016): 12–21. http://dx.doi.org/10.1108/ir-05-2015-0095.
Yasrab, Robail. "ECRU: An Encoder-Decoder Based Convolution Neural Network (CNN) for Road-Scene Understanding." Journal of Imaging 4, no. 10 (October 8, 2018): 116. http://dx.doi.org/10.3390/jimaging4100116.
Topfer, Daniel, Jens Spehr, Jan Effertz, and Christoph Stiller. "Efficient Road Scene Understanding for Intelligent Vehicles Using Compositional Hierarchical Models." IEEE Transactions on Intelligent Transportation Systems 16, no. 1 (February 2015): 441–51. http://dx.doi.org/10.1109/tits.2014.2354243.
Qin, Yuting, Yuren Chen, and Kunhui Lin. "Quantifying the Effects of Visual Road Information on Drivers' Speed Choices to Promote Self-Explaining Roads." International Journal of Environmental Research and Public Health 17, no. 7 (April 3, 2020): 2437. http://dx.doi.org/10.3390/ijerph17072437.
Jeong, Jinhan, Yook Hyun Yoon, and Jahng Hyon Park. "Reliable Road Scene Interpretation Based on ITOM with the Integrated Fusion of Vehicle and Lane Tracker in Dense Traffic Situation." Sensors 20, no. 9 (April 26, 2020): 2457. http://dx.doi.org/10.3390/s20092457.
Sun, Jee-Young, Seung-Won Jung, and Sung-Jea Ko. "Lightweight Prediction and Boundary Attention-Based Semantic Segmentation for Road Scene Understanding." IEEE Access 8 (2020): 108449–60. http://dx.doi.org/10.1109/access.2020.3001679.
Deng, Yanzi, Zhaoyang Lu, and Jing Li. "Coarse-to-fine road scene segmentation via hierarchical graphical models." International Journal of Advanced Robotic Systems 16, no. 2 (March 1, 2019): 172988141983116. http://dx.doi.org/10.1177/1729881419831163.
Texte intégralThèses sur le sujet "Road scene understanding"
Habibi Aghdam, Hamed. "Understanding Road Scenes using Deep Neural Networks." Doctoral thesis, Universitat Rovira i Virgili, 2018. http://hdl.handle.net/10803/461607.
Understanding road scenes is crucial for autonomous cars. This requires segmenting road scenes into semantically meaningful regions and recognizing objects in a scene. While objects such as cars and pedestrians have to be segmented accurately, it might not be necessary to detect and localize these objects in a scene. However, detecting and classifying objects such as traffic signs is essential for conforming to road rules. In this thesis, we first propose a method for classifying traffic signs using visual attributes and Bayesian networks. Then, we propose two neural networks for this purpose and develop a new method for creating an ensemble of models. Next, we study the sensitivity of neural networks to adversarial samples and propose two denoising networks that are attached to the classification networks to increase their stability against noise. In the second part of the thesis, we first propose a network to detect traffic signs in high-resolution images in real time and show how to implement the scanning-window technique within our network using dilated convolutions. Then, we formulate the detection problem as a segmentation problem and propose a fully convolutional network for detecting traffic signs. Finally, in the last part of the thesis, we propose a new fully convolutional network composed of fire modules, bypass connections, and consecutive dilated convolutions for segmenting road scenes into semantically meaningful regions, and show that it is more accurate and computationally more efficient than similar networks.
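The scanning-window-via-dilated-convolutions idea mentioned in this abstract rests on a simple property: a kernel of length k at dilation rate d spans (k − 1)·d + 1 input samples, so the receptive field grows without adding weights. A minimal 1D sketch in plain Python (illustrative only, not the thesis code):

```python
def dilated_conv1d(signal, kernel, dilation):
    """Valid-mode 1D convolution with a dilated kernel.

    A kernel of length k at dilation rate d spans (k - 1) * d + 1
    input samples, enlarging the receptive field without extra weights.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

# A 3-tap kernel at dilation 2 covers 5 input samples per output value:
y = dilated_conv1d(list(range(8)), [1, 1, 1], dilation=2)
# y[i] = x[i] + x[i+2] + x[i+4] -> [6, 9, 12, 15]
```

In 2D image networks the same trick lets a classifier trained on small crops be evaluated densely over a full-resolution frame in one pass.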
Lee, Jong Ho. "Understanding the Visual Appearance of Road Scenes Using a Monocular Camera." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/795.
Texte intégralWang, Fan. « How polarimetry may contribute to understand reflective road scenes : theory and applications ». Thesis, Rouen, INSA, 2016. http://www.theses.fr/2016ISAM0003/document.
Advanced Driver Assistance Systems (ADAS) aim to automate, adapt, and enhance transportation systems for safety and better driving. Various research topics have emerged around ADAS, including object detection and recognition, image understanding, and disparity map estimation. The presence of specular highlights limits the accuracy of such algorithms, since highlights cover the original image texture and lead to a loss of information. Light polarization implicitly encodes object-related information, such as surface direction, material nature, and roughness. In the context of ADAS, this motivates us to investigate the use of polarization imaging to remove image highlights and analyze road scenes. We first propose in this thesis to remove image specularity through polarization by applying a global energy minimization. Polarization information provides a color constraint that reduces the color distortion of the results. The global smoothness assumption further integrates long-range information in the image and produces an improved diffuse image. We then propose to use polarization images as a new feature, since in road scenes strong reflections appear only on certain objects such as cars. Polarization features are applied to image understanding and car detection in two different ways. The experimental results show that, once properly fused with RGB-based features, the complementary information provided by the polarization images improves the algorithms' accuracy. We finally test polarization imaging for depth estimation. A post-aggregation stereo matching method is first proposed and validated on a color database. A fusion rule is then proposed to use polarization imaging as a constraint on the disparity map estimation. Through these applications, we demonstrate the potential and feasibility of applying polarization imaging to outdoor tasks for ADAS.
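The polarization cues this abstract relies on are conventionally derived from intensity measurements taken behind linear polarizers at 0°, 45°, 90°, and 135°, via the linear Stokes parameters. A per-pixel sketch using the standard formulas (the function names are ours, not the thesis code):

```python
import math

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities measured behind
    polarizing filters oriented at 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # horizontal vs. vertical component
    s2 = i45 - i135                     # diagonal components
    return s0, s1, s2

def degree_of_linear_polarization(s0, s1, s2):
    """DoLP in [0, 1]; tends to be high on smooth specular surfaces."""
    return math.hypot(s1, s2) / s0 if s0 > 0 else 0.0

def angle_of_polarization(s1, s2):
    """Orientation of the polarization plane, in radians."""
    return 0.5 * math.atan2(s2, s1)

# Fully horizontally polarized light: all energy passes the 0-degree filter,
# none passes at 90 degrees, and half passes at 45 and 135 degrees.
s0, s1, s2 = linear_stokes(1.0, 0.5, 0.0, 0.5)
# degree_of_linear_polarization(s0, s1, s2) -> 1.0
```

Applied per pixel, the DoLP map is the kind of feature that can be fused with RGB-based features, since specular regions (car bodies, wet asphalt) stand out in it.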
Kung, Wen Yao (龔芠瑤). "Road Scene Understanding with Semantic Segmentation and Object Hazard Level Prediction." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/60494150880821921387.
National Tsing Hua University
Department of Computer Science
Academic year 104 (2015–2016)
We introduce a method for understanding road scenes and simultaneously predicting the hazard levels of three categories of objects in road scene images by using a fully convolutional network (FCN) architecture. In our approach, from a single input image, the multi-task model produces a fine segmentation result and a prediction of hazard levels in the form of a heatmap. The model can be divided into three parts: shared net, segmentation net, and hazard level net. The shared net and segmentation net use the encoder-decoder architecture provided by Badrinarayanan et al. [2]. The hazard level net is a fully convolutional network estimating the hazard level of each segment from a coarse segmentation result. We also provide a dataset with the object segmentation ground truth and the hazard levels for training and evaluating the proposed deep networks. To show that our network can learn highly semantic attributes of objects, we use two measurements to evaluate the performance of our method, and compare our method with a saliency-based method to show the difference between predicting hazard levels and estimating human eye fixations.
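As a toy illustration of the hazard-level-net idea described in this abstract — attaching a hazard level to each segment of a coarse segmentation — the output can be pictured as a per-pixel lookup over segment labels. The class names and levels below are hypothetical, not the thesis's dataset; the actual network predicts levels rather than reading them from a table:

```python
# Hypothetical class-to-hazard mapping for illustration only.
HAZARD_LEVEL = {"road": 0, "sky": 0, "car": 2, "pedestrian": 3}

def hazard_heatmap(segmentation, levels=HAZARD_LEVEL):
    """Map a coarse segmentation (2D grid of class labels) to a grid of
    hazard levels. Unknown classes default to a mild level of 1.
    """
    return [[levels.get(label, 1) for label in row] for row in segmentation]

seg = [["sky", "sky", "sky"],
       ["car", "road", "pedestrian"]]
# hazard_heatmap(seg) -> [[0, 0, 0], [2, 0, 3]]
```

The learned model differs from this lookup precisely in that two segments of the same class can receive different levels depending on context (e.g., a distant car vs. one cutting into the lane).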
Schoen, Fabio. "Deep learning methods for safety-critical driving events analysis." Doctoral thesis, 2022. http://hdl.handle.net/2158/1260238.
Hummel, Britta. "Description logic for scene understanding at the example of urban road intersections." 2009. http://d-nb.info/1000324818/34.
Texte intégralLivres sur le sujet "Road scene understanding"
Voparil, Chris. Reconstructing Pragmatism. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780197605721.001.0001.
Biel Portero, Israel, Andrea Carolina Casanova Mejía, Amanda Janneth Riascos Mora, Alba Lucy Ortega Salas, Luis Andrés Salas Zambrano, Franco Andrés Montenegro Coral, Julie Andrea Benavides Melo et al. Challenges and alternatives towards peacebuilding. Edited by Ángela Marcela Castillo Burbano and Claudia Andrea Guerrero Martínez. Ediciones Universidad Cooperativa de Colombia, 2020. http://dx.doi.org/10.16925/9789587602388.
Book chapters on the topic "Road scene understanding"
Holder, Christopher J., and Toby P. Breckon. "Encoding Stereoscopic Depth Features for Scene Understanding in off-Road Environments." In Lecture Notes in Computer Science, 427–34. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93000-8_48.
Kembhavi, Aniruddha, Tom Yeh, and Larry S. Davis. "Why Did the Person Cross the Road (There)? Scene Understanding Using Probabilistic Logic Models and Common Sense Reasoning." In Computer Vision – ECCV 2010, 693–706. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15552-9_50.
Alvarez, Jose M., Felipe Lumbreras, Antonio M. Lopez, and Theo Gevers. "Understanding Road Scenes Using Visual Cues and GPS Information." In Computer Vision – ECCV 2012. Workshops and Demonstrations, 635–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33885-4_70.
Oeljeklaus, Malte. "5 Global Road Topology from Scene Context Recognition." In An Integrated Approach for Traffic Scene Understanding from Monocular Cameras, 38–49. VDI Verlag, 2021. http://dx.doi.org/10.51202/9783186815125-38.
Oeljeklaus, Malte. "7 Road Users from Bounding Box Detection." In An Integrated Approach for Traffic Scene Understanding from Monocular Cameras, 64–83. VDI Verlag, 2021. http://dx.doi.org/10.51202/9783186815125-64.
Lapierre, Isabelle, and Claude Laurgeau. "A Road Scene Understanding System based on a Blackboard Architecture." In Advances In Structural And Syntactic Pattern Recognition, 571–85. WORLD SCIENTIFIC, 1993. http://dx.doi.org/10.1142/9789812797919_0048.
Oeljeklaus, Malte. "6 Drivable Road Area from Semantic Image Segmentation." In An Integrated Approach for Traffic Scene Understanding from Monocular Cameras, 50–63. VDI Verlag, 2021. http://dx.doi.org/10.51202/9783186815125-50.
Winter, Tim. "The Routes of Civilization." In The Silk Road, 23–33. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780197605059.003.0002.
Texte intégralActes de conférences sur le sujet "Road scene understanding"
Dhiman, Vikas, Quoc-Huy Tran, Jason J. Corso, and Manmohan Chandraker. "A Continuous Occlusion Model for Road Scene Understanding." In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.469.
Sun, Yuan, Hongbo Lu, and Zhimin Zhang. "RvGIST: A Holistic Road Feature for Real-Time Road-Scene Understanding." In 2013 14th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD). IEEE, 2013. http://dx.doi.org/10.1109/snpd.2013.86.
Venkateshkumar, Suhas Kashetty, Muralikrishna Sridhar, and Patrick Ott. "Latent Hierarchical Part Based Models for Road Scene Understanding." In 2015 IEEE International Conference on Computer Vision Workshop (ICCVW). IEEE, 2015. http://dx.doi.org/10.1109/iccvw.2015.25.
Tsukada, A., M. Ogawa, and F. Galpin. "Road structure based scene understanding for intelligent vehicle systems." In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5653532.
Sturgess, Paul, Karteek Alahari, Lubor Ladicky, and Philip H. S. Torr. "Combining Appearance and Structure from Motion Features for Road Scene Understanding." In British Machine Vision Conference 2009. British Machine Vision Association, 2009. http://dx.doi.org/10.5244/c.23.62.
Murthy, J. Krishna, G. V. Sai Krishna, Falak Chhaya, and K. Madhava Krishna. "Reconstructing vehicles from a single image: Shape priors for road scene understanding." In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989089.
Duong, Tin Trung, Huy-Hung Nguyen, and Jae Wook Jeon. "TSS-Net: Time-based Semantic Segmentation Neural Network for Road Scene Understanding." In 2021 15th International Conference on Ubiquitous Information Management and Communication (IMCOM). IEEE, 2021. http://dx.doi.org/10.1109/imcom51814.2021.9377401.
Topfer, Daniel, Jens Spehr, Jan Effertz, and Christoph Stiller. "Efficient scene understanding for intelligent vehicles using a part-based road representation." In 2013 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). IEEE, 2013. http://dx.doi.org/10.1109/itsc.2013.6728212.
Yang, Mingdong, Hongkun Zhou, Wenjun Huo, and Guanglu Ren. "JDSNet: Joint Detection and Segmentation Network for Real-Time Road Scene Understanding." In 2022 7th International Conference on Image, Vision and Computing (ICIVC). IEEE, 2022. http://dx.doi.org/10.1109/icivc55077.2022.9886996.
Nurhadiyatna, Adi, and Sven Loncaric. "Multistage Shallow Pyramid Parsing for Road Scene Understanding Based on Semantic Segmentation." In 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA). IEEE, 2019. http://dx.doi.org/10.1109/ispa.2019.8868554.