Academic literature on the topic "Road scene understanding"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the thematic lists of articles, books, theses, conference papers, and other academic sources on the topic "Road scene understanding".
Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Road scene understanding"
Zhou, Wujie, Sijia Lv, Qiuping Jiang, and Lu Yu. "Deep Road Scene Understanding". IEEE Signal Processing Letters 26, no. 4 (April 2019): 587–91. http://dx.doi.org/10.1109/lsp.2019.2896793.
Huang, Wenqi, Fuzheng Zhang, Aidong Xu, Huajun Chen, and Peng Li. "Fusion-based holistic road scene understanding". Journal of Engineering 2018, no. 16 (November 1, 2018): 1623–28. http://dx.doi.org/10.1049/joe.2018.8319.
Wang, Chao, Huan Wang, Rui Li Wang, and Chun Xia Zhao. "Robust Zebra-Crossing Detection for Autonomous Land Vehicles and Driving Assistance Systems". Applied Mechanics and Materials 556-562 (May 2014): 2732–39. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.2732.
Liu, Huajun, Cailing Wang, and Jingyu Yang. "Vanishing points estimation and road scene understanding based on Bayesian posterior probability". Industrial Robot: An International Journal 43, no. 1 (January 18, 2016): 12–21. http://dx.doi.org/10.1108/ir-05-2015-0095.
Yasrab, Robail. "ECRU: An Encoder-Decoder Based Convolution Neural Network (CNN) for Road-Scene Understanding". Journal of Imaging 4, no. 10 (October 8, 2018): 116. http://dx.doi.org/10.3390/jimaging4100116.
Topfer, Daniel, Jens Spehr, Jan Effertz, and Christoph Stiller. "Efficient Road Scene Understanding for Intelligent Vehicles Using Compositional Hierarchical Models". IEEE Transactions on Intelligent Transportation Systems 16, no. 1 (February 2015): 441–51. http://dx.doi.org/10.1109/tits.2014.2354243.
Qin, Yuting, Yuren Chen, and Kunhui Lin. "Quantifying the Effects of Visual Road Information on Drivers’ Speed Choices to Promote Self-Explaining Roads". International Journal of Environmental Research and Public Health 17, no. 7 (April 3, 2020): 2437. http://dx.doi.org/10.3390/ijerph17072437.
Jeong, Jinhan, Yook Hyun Yoon, and Jahng Hyon Park. "Reliable Road Scene Interpretation Based on ITOM with the Integrated Fusion of Vehicle and Lane Tracker in Dense Traffic Situation". Sensors 20, no. 9 (April 26, 2020): 2457. http://dx.doi.org/10.3390/s20092457.
Sun, Jee-Young, Seung-Won Jung, and Sung-Jea Ko. "Lightweight Prediction and Boundary Attention-Based Semantic Segmentation for Road Scene Understanding". IEEE Access 8 (2020): 108449–60. http://dx.doi.org/10.1109/access.2020.3001679.
Deng, Yanzi, Zhaoyang Lu, and Jing Li. "Coarse-to-fine road scene segmentation via hierarchical graphical models". International Journal of Advanced Robotic Systems 16, no. 2 (March 1, 2019): 172988141983116. http://dx.doi.org/10.1177/1729881419831163.
Texto completoTesis sobre el tema "Road scene understanding"
Habibi Aghdam, Hamed. "Understanding Road Scenes using Deep Neural Networks". Doctoral thesis, Universitat Rovira i Virgili, 2018. http://hdl.handle.net/10803/461607.
Texto completoComprender las escenas de la carretera es crucial para los automóviles autónomos. Esto requiere segmentar escenas de carretera en regiones semánticamente significativas y reconocer objetos en una escena. Mientras que los objetos tales como coches y peatones tienen que segmentarse con precisión, puede que no sea necesario detectar y localizar estos objetos en una escena. Sin embargo, la detección y clasificación de objetos tales como señales de tráfico es esencial para ajustarse a las reglas de la carretera. En esta tesis, proponemos un método para la clasificación de señales de tráfico utilizando atributos visuales y redes bayesianas. A continuación, proponemos dos redes neuronales para este fin y desarrollar un nuevo método para crear un conjunto de modelos. A continuación, se estudia la sensibilidad de las redes neuronales frente a las muestras adversarias y se proponen dos redes destructoras que se unen a las redes de clasificación para aumentar su estabilidad frente al ruido. En la segunda parte de la tesis, proponemos una red para detectar señales de tráfico en imágenes de alta resolución en tiempo real y mostrar cómo implementar la técnica de ventana de escaneo dentro de nuestra red usando circunvoluciones dilatadas. A continuación, formulamos el problema de detección como un problema de segmentación y proponemos una red completamente convolucional para detectar señales de tráfico. Finalmente, proponemos una nueva red totalmente convolucional compuesta de módulos de fuego, conexiones de bypass y circunvoluciones consecutivas dilatadas en la última parte de la tesis para escenarios de carretera segmentinc en regiones semánticamente significativas y muestran que es más accuarate y computacionalmente más eficiente en comparación con redes similares
Understanding road scenes is crucial for autonomous cars. This requires segmenting road scenes into semantically meaningful regions and recognizing objects in a scene. While objects such as cars and pedestrians has to be segmented accurately, it might not be necessary to detect and locate these objects in a scene. However, detecting and classifying objects such as traffic signs is essential for conforming to road rules. In this thesis, we first propose a method for classifying traffic signs using visual attributes and Bayesian networks. Then, we propose two neural network for this purpose and develop a new method for creating an ensemble of models. Next, we study sensitivity of neural networks against adversarial samples and propose two denoising networks that are attached to the classification networks to increase their stability against noise. In the second part of the thesis, we first propose a network to detect traffic signs in high-resolution images in real-time and show how to implement the scanning window technique within our network using dilated convolutions. Then, we formulate the detection problem as a segmentation problem and propose a fully convolutional network for detecting traffic signs. Finally, we propose a new fully convolutional network composed of fire modules, bypass connections and consecutive dilated convolutions in the last part of the thesis for segmenting road scenes into semantically meaningful regions and show that it is more accurate and computationally more efficient compared to similar networks.
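The dilated (atrous) convolutions mentioned in this abstract enlarge a filter's receptive field without adding parameters, which is what makes the implicit scanning-window trick possible. A minimal NumPy sketch of the operation itself (not the thesis implementation; the image and kernel values are illustrative):

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Valid-mode 2D correlation with a dilated (atrous) kernel.

    Dilation inserts (dilation - 1) implicit zeros between kernel taps,
    so a k x k kernel covers a ((k-1)*dilation + 1) square receptive
    field while keeping the same number of weights.
    """
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1   # effective kernel height
    eff_w = (kw - 1) * dilation + 1   # effective kernel width
    H, W = image.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # strided slicing picks only the non-zero kernel taps
            patch = image[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3))
dense = dilated_conv2d(img, k, dilation=1)   # 4x4 output, 3x3 receptive field
atrous = dilated_conv2d(img, k, dilation=2)  # 2x2 output, 5x5 receptive field
```

With the same nine weights, the dilated variant sees a 5x5 window of the input, which is why stacking dilated layers can emulate scanning a large window over a high-resolution image.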
Lee, Jong Ho. "Understanding the Visual Appearance of Road Scenes Using a Monocular Camera". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/795.
Wang, Fan. "How polarimetry may contribute to understand reflective road scenes: theory and applications". Thesis, Rouen, INSA, 2016. http://www.theses.fr/2016ISAM0003/document.
Texto completoAdvance Driver Assistance Systems (ADAS) aim to automate/adapt/enhance trans-portation systems for safety and better driving. Various research topics are emerged to focus around the ADAS, including the object detection and recognition, image understanding, disparity map estimation etc. The presence of the specular highlights restricts the accuracy of such algorithms, since it covers the original image texture and leads to the lost of information. Light polarization implicitly encodes the object related information, such as the surface direction, material nature, roughness etc. Under the context of ADAS, we are inspired to further inspect the usage of polarization imaging to remove image highlights and analyze the road scenes.We firstly propose in this thesis to remove the image specularity through polarization by applying a global energy minimization. Polarization information provides a color constraint that reduces the color distortion of the results. The global smoothness assumption further integrates the long range information in the image and produces an improved diffuse image.We secondly propose to use polarization images as a new feature, since for the road scenes, the high reflection appears only upon certain objects such as cars. Polarization features are applied in image understanding and car detection in two different ways. The experimental results show that, once properly fused with rgb-based features, the complementary information provided by the polarization images improve the algorithm accuracy. We finally test the polarization imaging for depth estimation. A post-aggregation stereo matching method is firstly proposed and validated on a color database. A fusion rule is then proposed to use the polarization imaging as a constraint to the disparity map estimation. From these applications, we proved the potential and the feasibility to apply polariza-tion imaging in outdoor tasks for ADAS
Kung, Wen Yao, and 龔芠瑤. "Road Scene Understanding with Semantic Segmentation and Object Hazard Level Prediction". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/60494150880821921387.
國立清華大學 (National Tsing Hua University), 資訊工程學系 (Department of Computer Science), 104.
We introduce a method for understanding road scenes and simultaneously predicting the hazard levels of three categories of objects in road scene images by using a fully convolutional network (FCN) architecture. In our approach, with a single input image, the multi-task model produces a fine segmentation result and a prediction of hazard levels in the form of a heatmap. The model can be divided into three parts: a shared net, a segmentation net, and a hazard level net. The shared net and segmentation net use the encoder-decoder architecture provided by Badrinarayanan et al. [2]. The hazard level net is a fully convolutional network estimating the hazard level of a segment from a coarse segmentation result. We also provide a dataset with the object segmentation ground truth and the hazard levels for training and evaluating the proposed deep networks. To prove that our network can learn highly semantic attributes of objects, we use two measurements to evaluate the performance of our method, and compare our method with a saliency-based method to show the difference between predicting hazard levels and estimating human eye fixations.
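The shared-net/two-head structure described in the abstract can be caricatured in a few lines of NumPy: one shared feature extractor feeding both a per-pixel segmentation head and a hazard-level head. This is only a structural sketch, not the thesis network; the image size, the 16 feature channels, and the 5 semantic classes are illustrative stand-ins (the 3 hazard categories come from the abstract), and per-pixel linear maps stand in for real convolutional layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def pixelwise_linear(x, w):
    # Stand-in for a conv layer: a per-pixel linear map over the
    # channel dimension (equivalent to a 1x1 convolution).
    return np.einsum('hwc,cd->hwd', x, w)

H, W, C = 8, 8, 3            # toy image size (illustrative)
N_CLASSES, N_HAZARD = 5, 3   # class count assumed; 3 hazard categories per the abstract

image = rng.random((H, W, C))
w_shared = rng.standard_normal((C, 16))
w_seg = rng.standard_normal((16, N_CLASSES))
w_hazard = rng.standard_normal((16, N_HAZARD))

# Shared net: one feature map consumed by both task heads (ReLU).
features = np.maximum(pixelwise_linear(image, w_shared), 0.0)

# Segmentation net: per-pixel class label via argmax over class logits.
seg_map = pixelwise_linear(features, w_seg).argmax(axis=-1)

# Hazard level net: sigmoid heatmap, one channel per object category.
hazard_heatmap = 1.0 / (1.0 + np.exp(-pixelwise_linear(features, w_hazard)))
```

The point of the shared net is that both outputs are computed from a single forward pass over the same features, so adding the hazard head costs far less than running two separate networks.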
Schoen, Fabio. "Deep learning methods for safety-critical driving events analysis". Doctoral thesis, 2022. http://hdl.handle.net/2158/1260238.
Hummel, Britta [Verfasser]. "Description logic for scene understanding at the example of urban road intersections / von Britta Hummel". 2009. http://d-nb.info/1000324818/34.
Texto completoLibros sobre el tema "Road scene understanding"
Voparil, Chris. Reconstructing Pragmatism. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780197605721.001.0001.
Biel Portero, Israel, Andrea Carolina Casanova Mejía, Amanda Janneth Riascos Mora, Alba Lucy Ortega Salas, Luis Andrés Salas Zambrano, Franco Andrés Montenegro Coral, Julie Andrea Benavides Melo, et al. Challenges and alternatives towards peacebuilding. Edited by Ángela Marcela Castillo Burbano and Claudia Andrea Guerrero Martínez. Ediciones Universidad Cooperativa de Colombia, 2020. http://dx.doi.org/10.16925/9789587602388.
Texto completoCapítulos de libros sobre el tema "Road scene understanding"
Holder, Christopher J., and Toby P. Breckon. "Encoding Stereoscopic Depth Features for Scene Understanding in off-Road Environments". In Lecture Notes in Computer Science, 427–34. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93000-8_48.
Kembhavi, Aniruddha, Tom Yeh, and Larry S. Davis. "Why Did the Person Cross the Road (There)? Scene Understanding Using Probabilistic Logic Models and Common Sense Reasoning". In Computer Vision – ECCV 2010, 693–706. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15552-9_50.
Alvarez, Jose M., Felipe Lumbreras, Antonio M. Lopez, and Theo Gevers. "Understanding Road Scenes Using Visual Cues and GPS Information". In Computer Vision – ECCV 2012. Workshops and Demonstrations, 635–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33885-4_70.
Oeljeklaus, Malte. "5 Global Road Topology from Scene Context Recognition". In An Integrated Approach for Traffic Scene Understanding from Monocular Cameras, 38–49. VDI Verlag, 2021. http://dx.doi.org/10.51202/9783186815125-38.
Oeljeklaus, Malte. "7 Road Users from Bounding Box Detection". In An Integrated Approach for Traffic Scene Understanding from Monocular Cameras, 64–83. VDI Verlag, 2021. http://dx.doi.org/10.51202/9783186815125-64.
Lapierre, Isabelle, and Claude Laurgeau. "A Road Scene Understanding System based on a Blackboard Architecture". In Advances In Structural And Syntactic Pattern Recognition, 571–85. WORLD SCIENTIFIC, 1993. http://dx.doi.org/10.1142/9789812797919_0048.
Oeljeklaus, Malte. "6 Drivable Road Area from Semantic Image Segmentation". In An Integrated Approach for Traffic Scene Understanding from Monocular Cameras, 50–63. VDI Verlag, 2021. http://dx.doi.org/10.51202/9783186815125-50.
Winter, Tim. "The Routes of Civilization". In The Silk Road, 23–33. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780197605059.003.0002.
Texto completoActas de conferencias sobre el tema "Road scene understanding"
Dhiman, Vikas, Quoc-Huy Tran, Jason J. Corso, and Manmohan Chandraker. "A Continuous Occlusion Model for Road Scene Understanding". In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.469.
Sun, Yuan, Hongbo Lu, and Zhimin Zhang. "RvGIST: A Holistic Road Feature for Real-Time Road-Scene Understanding". In 2013 14th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD). IEEE, 2013. http://dx.doi.org/10.1109/snpd.2013.86.
Venkateshkumar, Suhas Kashetty, Muralikrishna Sridhar, and Patrick Ott. "Latent Hierarchical Part Based Models for Road Scene Understanding". In 2015 IEEE International Conference on Computer Vision Workshop (ICCVW). IEEE, 2015. http://dx.doi.org/10.1109/iccvw.2015.25.
Tsukada, A., M. Ogawa, and F. Galpin. "Road structure based scene understanding for intelligent vehicle systems". In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5653532.
Sturgess, Paul, Karteek Alahari, Lubor Ladicky, and Philip H. S. Torr. "Combining Appearance and Structure from Motion Features for Road Scene Understanding". In British Machine Vision Conference 2009. British Machine Vision Association, 2009. http://dx.doi.org/10.5244/c.23.62.
Murthy, J. Krishna, G. V. Sai Krishna, Falak Chhaya, and K. Madhava Krishna. "Reconstructing vehicles from a single image: Shape priors for road scene understanding". In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989089.
Duong, Tin Trung, Huy-Hung Nguyen, and Jae Wook Jeon. "TSS-Net: Time-based Semantic Segmentation Neural Network for Road Scene Understanding". In 2021 15th International Conference on Ubiquitous Information Management and Communication (IMCOM). IEEE, 2021. http://dx.doi.org/10.1109/imcom51814.2021.9377401.
Topfer, Daniel, Jens Spehr, Jan Effertz, and Christoph Stiller. "Efficient scene understanding for intelligent vehicles using a part-based road representation". In 2013 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). IEEE, 2013. http://dx.doi.org/10.1109/itsc.2013.6728212.
Yang, Mingdong, Hongkun Zhou, Wenjun Huo, and Guanglu Ren. "JDSNet: Joint Detection and Segmentation Network for Real-Time Road Scene Understanding". In 2022 7th International Conference on Image, Vision and Computing (ICIVC). IEEE, 2022. http://dx.doi.org/10.1109/icivc55077.2022.9886996.
Nurhadiyatna, Adi, and Sven Loncaric. "Multistage Shallow Pyramid Parsing for Road Scene Understanding Based on Semantic Segmentation". In 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA). IEEE, 2019. http://dx.doi.org/10.1109/ispa.2019.8868554.