A selection of scholarly literature on the topic "Road scene understanding"
Format your reference in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Road scene understanding".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these details are available in the source's metadata.
Journal articles on the topic "Road scene understanding"
Zhou, Wujie, Sijia Lv, Qiuping Jiang, and Lu Yu. "Deep Road Scene Understanding." IEEE Signal Processing Letters 26, no. 4 (April 2019): 587–91. http://dx.doi.org/10.1109/lsp.2019.2896793.
Huang, Wenqi, Fuzheng Zhang, Aidong Xu, Huajun Chen, and Peng Li. "Fusion-based holistic road scene understanding." Journal of Engineering 2018, no. 16 (November 1, 2018): 1623–28. http://dx.doi.org/10.1049/joe.2018.8319.
Wang, Chao, Huan Wang, Rui Li Wang, and Chun Xia Zhao. "Robust Zebra-Crossing Detection for Autonomous Land Vehicles and Driving Assistance Systems." Applied Mechanics and Materials 556-562 (May 2014): 2732–39. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.2732.
Liu, Huajun, Cailing Wang, and Jingyu Yang. "Vanishing points estimation and road scene understanding based on Bayesian posterior probability." Industrial Robot: An International Journal 43, no. 1 (January 18, 2016): 12–21. http://dx.doi.org/10.1108/ir-05-2015-0095.
Yasrab, Robail. "ECRU: An Encoder-Decoder Based Convolution Neural Network (CNN) for Road-Scene Understanding." Journal of Imaging 4, no. 10 (October 8, 2018): 116. http://dx.doi.org/10.3390/jimaging4100116.
Topfer, Daniel, Jens Spehr, Jan Effertz, and Christoph Stiller. "Efficient Road Scene Understanding for Intelligent Vehicles Using Compositional Hierarchical Models." IEEE Transactions on Intelligent Transportation Systems 16, no. 1 (February 2015): 441–51. http://dx.doi.org/10.1109/tits.2014.2354243.
Qin, Yuting, Yuren Chen, and Kunhui Lin. "Quantifying the Effects of Visual Road Information on Drivers’ Speed Choices to Promote Self-Explaining Roads." International Journal of Environmental Research and Public Health 17, no. 7 (April 3, 2020): 2437. http://dx.doi.org/10.3390/ijerph17072437.
Jeong, Jinhan, Yook Hyun Yoon, and Jahng Hyon Park. "Reliable Road Scene Interpretation Based on ITOM with the Integrated Fusion of Vehicle and Lane Tracker in Dense Traffic Situation." Sensors 20, no. 9 (April 26, 2020): 2457. http://dx.doi.org/10.3390/s20092457.
Sun, Jee-Young, Seung-Won Jung, and Sung-Jea Ko. "Lightweight Prediction and Boundary Attention-Based Semantic Segmentation for Road Scene Understanding." IEEE Access 8 (2020): 108449–60. http://dx.doi.org/10.1109/access.2020.3001679.
Deng, Yanzi, Zhaoyang Lu, and Jing Li. "Coarse-to-fine road scene segmentation via hierarchical graphical models." International Journal of Advanced Robotic Systems 16, no. 2 (March 1, 2019): 1729881419831163. http://dx.doi.org/10.1177/1729881419831163.
Dissertations and theses on the topic "Road scene understanding"
Habibi Aghdam, Hamed. "Understanding Road Scenes using Deep Neural Networks." Doctoral thesis, Universitat Rovira i Virgili, 2018. http://hdl.handle.net/10803/461607.
Повний текст джерелаComprender las escenas de la carretera es crucial para los automóviles autónomos. Esto requiere segmentar escenas de carretera en regiones semánticamente significativas y reconocer objetos en una escena. Mientras que los objetos tales como coches y peatones tienen que segmentarse con precisión, puede que no sea necesario detectar y localizar estos objetos en una escena. Sin embargo, la detección y clasificación de objetos tales como señales de tráfico es esencial para ajustarse a las reglas de la carretera. En esta tesis, proponemos un método para la clasificación de señales de tráfico utilizando atributos visuales y redes bayesianas. A continuación, proponemos dos redes neuronales para este fin y desarrollar un nuevo método para crear un conjunto de modelos. A continuación, se estudia la sensibilidad de las redes neuronales frente a las muestras adversarias y se proponen dos redes destructoras que se unen a las redes de clasificación para aumentar su estabilidad frente al ruido. En la segunda parte de la tesis, proponemos una red para detectar señales de tráfico en imágenes de alta resolución en tiempo real y mostrar cómo implementar la técnica de ventana de escaneo dentro de nuestra red usando circunvoluciones dilatadas. A continuación, formulamos el problema de detección como un problema de segmentación y proponemos una red completamente convolucional para detectar señales de tráfico. Finalmente, proponemos una nueva red totalmente convolucional compuesta de módulos de fuego, conexiones de bypass y circunvoluciones consecutivas dilatadas en la última parte de la tesis para escenarios de carretera segmentinc en regiones semánticamente significativas y muestran que es más accuarate y computacionalmente más eficiente en comparación con redes similares
Understanding road scenes is crucial for autonomous cars. This requires segmenting road scenes into semantically meaningful regions and recognizing objects in a scene. While objects such as cars and pedestrians have to be segmented accurately, it might not be necessary to detect and locate these objects in a scene. However, detecting and classifying objects such as traffic signs is essential for conforming to road rules. In this thesis, we first propose a method for classifying traffic signs using visual attributes and Bayesian networks. Then, we propose two neural networks for this purpose and develop a new method for creating an ensemble of models. Next, we study the sensitivity of neural networks to adversarial samples and propose two denoising networks that are attached to the classification networks to increase their stability against noise. In the second part of the thesis, we first propose a network to detect traffic signs in high-resolution images in real time and show how to implement the scanning-window technique within our network using dilated convolutions. Then, we formulate the detection problem as a segmentation problem and propose a fully convolutional network for detecting traffic signs. Finally, in the last part of the thesis, we propose a new fully convolutional network composed of fire modules, bypass connections, and consecutive dilated convolutions for segmenting road scenes into semantically meaningful regions, and show that it is more accurate and computationally more efficient than similar networks.
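For readers who want a concrete picture of the two building blocks the abstract names, here is a minimal sketch, assuming PyTorch; the class names, channel sizes, and dilation rates are illustrative assumptions, not the thesis's actual configuration:

```python
# Hedged sketch of two components the abstract mentions: a SqueezeNet-style
# fire module and a head of consecutive dilated 3x3 convolutions.
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Fire module: a 1x1 'squeeze' conv followed by parallel 1x1 and 3x3
    'expand' convs whose outputs are concatenated along the channel axis."""
    def __init__(self, in_ch: int, squeeze_ch: int, expand_ch: int):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

class DilatedHead(nn.Module):
    """Consecutive dilated convolutions grow the receptive field without
    further downsampling; a final 1x1 conv scores each semantic class."""
    def __init__(self, ch: int, num_classes: int, dilations=(1, 2, 4)):
        super().__init__()
        layers = []
        for d in dilations:
            layers += [nn.Conv2d(ch, ch, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(ch, num_classes, kernel_size=1))
        self.head = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x)
```

Note that padding equal to the dilation rate keeps the spatial size of a 3x3 layer unchanged, which is why stacks like this can replace pooling in dense prediction.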
Lee, Jong Ho. "Understanding the Visual Appearance of Road Scenes Using a Monocular Camera." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/795.
Повний текст джерелаWang, Fan. "How polarimetry may contribute to understand reflective road scenes : theory and applications." Thesis, Rouen, INSA, 2016. http://www.theses.fr/2016ISAM0003/document.
Повний текст джерелаAdvance Driver Assistance Systems (ADAS) aim to automate/adapt/enhance trans-portation systems for safety and better driving. Various research topics are emerged to focus around the ADAS, including the object detection and recognition, image understanding, disparity map estimation etc. The presence of the specular highlights restricts the accuracy of such algorithms, since it covers the original image texture and leads to the lost of information. Light polarization implicitly encodes the object related information, such as the surface direction, material nature, roughness etc. Under the context of ADAS, we are inspired to further inspect the usage of polarization imaging to remove image highlights and analyze the road scenes.We firstly propose in this thesis to remove the image specularity through polarization by applying a global energy minimization. Polarization information provides a color constraint that reduces the color distortion of the results. The global smoothness assumption further integrates the long range information in the image and produces an improved diffuse image.We secondly propose to use polarization images as a new feature, since for the road scenes, the high reflection appears only upon certain objects such as cars. Polarization features are applied in image understanding and car detection in two different ways. The experimental results show that, once properly fused with rgb-based features, the complementary information provided by the polarization images improve the algorithm accuracy. We finally test the polarization imaging for depth estimation. A post-aggregation stereo matching method is firstly proposed and validated on a color database. A fusion rule is then proposed to use the polarization imaging as a constraint to the disparity map estimation. From these applications, we proved the potential and the feasibility to apply polariza-tion imaging in outdoor tasks for ADAS
Kung, Wen Yao (龔芠瑤). "Road Scene Understanding with Semantic Segmentation and Object Hazard Level Prediction." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/60494150880821921387.
Повний текст джерела國立清華大學
資訊工程學系
104
We introduce a method for understanding road scenes and simultaneously predicting the hazard levels of three categories of objects in road scene images using a fully convolutional network (FCN) architecture. In our approach, from a single input image, the multi-task model produces a fine segmentation result and a prediction of hazard levels in the form of a heatmap. The model can be divided into three parts: a shared net, a segmentation net, and a hazard level net. The shared net and segmentation net use the encoder-decoder architecture proposed by Badrinarayanan et al. [2]. The hazard level net is a fully convolutional network that estimates the hazard level of a segment from a coarse segmentation result. We also provide a dataset with object segmentation ground truth and hazard levels for training and evaluating the proposed deep networks. To show that our network can learn highly semantic attributes of objects, we use two measurements to evaluate the performance of our method, and compare our method with a saliency-based method to show the difference between predicting hazard levels and estimating human eye fixations.
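As a rough illustration of the three-part layout described above (shared net, segmentation net, hazard level net), here is a minimal sketch, assuming PyTorch; the layer widths, class count, and number of hazard levels are placeholders rather than the thesis's configuration:

```python
# Hedged sketch of a multi-task FCN: one shared encoder, a decoder head for
# per-pixel segmentation, and a fully convolutional hazard-level head that
# keeps the coarse resolution and outputs a heatmap per hazard level.
import torch
import torch.nn as nn

class MultiTaskRoadNet(nn.Module):
    def __init__(self, num_classes: int = 12, hazard_levels: int = 3):
        super().__init__()
        # Shared net: downsampling encoder (stand-in for a SegNet-style encoder).
        self.shared = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # Segmentation net: decoder upsampling back to the input resolution.
        self.seg = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, num_classes, 2, stride=2),
        )
        # Hazard level net: scores each coarse location with a hazard level.
        self.hazard = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, hazard_levels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor):
        feat = self.shared(x)
        return self.seg(feat), self.hazard(feat)

# Example: a 256x256 input yields full-resolution class logits and a 64x64
# hazard heatmap.
# seg_logits, hazard_map = MultiTaskRoadNet()(torch.randn(1, 3, 256, 256))
```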
Schoen, Fabio. "Deep learning methods for safety-critical driving events analysis." Doctoral thesis, 2022. http://hdl.handle.net/2158/1260238.
Hummel, Britta [Verfasser]. "Description logic for scene understanding at the example of urban road intersections / von Britta Hummel." 2009. http://d-nb.info/1000324818/34.
Books on the topic "Road scene understanding"
Voparil, Chris. Reconstructing Pragmatism. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780197605721.001.0001.
Biel Portero, Israel, Andrea Carolina Casanova Mejía, Amanda Janneth Riascos Mora, Alba Lucy Ortega Salas, Luis Andrés Salas Zambrano, Franco Andrés Montenegro Coral, Julie Andrea Benavides Melo, et al. Challenges and alternatives towards peacebuilding. Edited by Ángela Marcela Castillo Burbano and Claudia Andrea Guerrero Martínez. Ediciones Universidad Cooperativa de Colombia, 2020. http://dx.doi.org/10.16925/9789587602388.
Book chapters on the topic "Road scene understanding"
Holder, Christopher J., and Toby P. Breckon. "Encoding Stereoscopic Depth Features for Scene Understanding in off-Road Environments." In Lecture Notes in Computer Science, 427–34. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93000-8_48.
Kembhavi, Aniruddha, Tom Yeh, and Larry S. Davis. "Why Did the Person Cross the Road (There)? Scene Understanding Using Probabilistic Logic Models and Common Sense Reasoning." In Computer Vision – ECCV 2010, 693–706. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15552-9_50.
Alvarez, Jose M., Felipe Lumbreras, Antonio M. Lopez, and Theo Gevers. "Understanding Road Scenes Using Visual Cues and GPS Information." In Computer Vision – ECCV 2012. Workshops and Demonstrations, 635–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33885-4_70.
Oeljeklaus, Malte. "5 Global Road Topology from Scene Context Recognition." In An Integrated Approach for Traffic Scene Understanding from Monocular Cameras, 38–49. VDI Verlag, 2021. http://dx.doi.org/10.51202/9783186815125-38.
Oeljeklaus, Malte. "7 Road Users from Bounding Box Detection." In An Integrated Approach for Traffic Scene Understanding from Monocular Cameras, 64–83. VDI Verlag, 2021. http://dx.doi.org/10.51202/9783186815125-64.
Lapierre, Isabelle, and Claude Laurgeau. "A Road Scene Understanding System based on a Blackboard Architecture." In Advances In Structural And Syntactic Pattern Recognition, 571–85. WORLD SCIENTIFIC, 1993. http://dx.doi.org/10.1142/9789812797919_0048.
Oeljeklaus, Malte. "6 Drivable Road Area from Semantic Image Segmentation." In An Integrated Approach for Traffic Scene Understanding from Monocular Cameras, 50–63. VDI Verlag, 2021. http://dx.doi.org/10.51202/9783186815125-50.
Winter, Tim. "The Routes of Civilization." In The Silk Road, 23–33. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780197605059.003.0002.
Conference papers on the topic "Road scene understanding"
Dhiman, Vikas, Quoc-Huy Tran, Jason J. Corso, and Manmohan Chandraker. "A Continuous Occlusion Model for Road Scene Understanding." In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.469.
Sun, Yuan, Hongbo Lu, and Zhimin Zhang. "RvGIST: A Holistic Road Feature for Real-Time Road-Scene Understanding." In 2013 14th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD). IEEE, 2013. http://dx.doi.org/10.1109/snpd.2013.86.
Venkateshkumar, Suhas Kashetty, Muralikrishna Sridhar, and Patrick Ott. "Latent Hierarchical Part Based Models for Road Scene Understanding." In 2015 IEEE International Conference on Computer Vision Workshop (ICCVW). IEEE, 2015. http://dx.doi.org/10.1109/iccvw.2015.25.
Tsukada, A., M. Ogawa, and F. Galpin. "Road structure based scene understanding for intelligent vehicle systems." In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5653532.
Sturgess, Paul, Karteek Alahari, Lubor Ladicky, and Philip H. S. Torr. "Combining Appearance and Structure from Motion Features for Road Scene Understanding." In British Machine Vision Conference 2009. British Machine Vision Association, 2009. http://dx.doi.org/10.5244/c.23.62.
Murthy, J. Krishna, G. V. Sai Krishna, Falak Chhaya, and K. Madhava Krishna. "Reconstructing vehicles from a single image: Shape priors for road scene understanding." In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989089.
Duong, Tin Trung, Huy-Hung Nguyen, and Jae Wook Jeon. "TSS-Net: Time-based Semantic Segmentation Neural Network for Road Scene Understanding." In 2021 15th International Conference on Ubiquitous Information Management and Communication (IMCOM). IEEE, 2021. http://dx.doi.org/10.1109/imcom51814.2021.9377401.
Topfer, Daniel, Jens Spehr, Jan Effertz, and Christoph Stiller. "Efficient scene understanding for intelligent vehicles using a part-based road representation." In 2013 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). IEEE, 2013. http://dx.doi.org/10.1109/itsc.2013.6728212.
Yang, Mingdong, Hongkun Zhou, Wenjun Huo, and Guanglu Ren. "JDSNet: Joint Detection and Segmentation Network for Real-Time Road Scene Understanding." In 2022 7th International Conference on Image, Vision and Computing (ICIVC). IEEE, 2022. http://dx.doi.org/10.1109/icivc55077.2022.9886996.
Nurhadiyatna, Adi, and Sven Loncaric. "Multistage Shallow Pyramid Parsing for Road Scene Understanding Based on Semantic Segmentation." In 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA). IEEE, 2019. http://dx.doi.org/10.1109/ispa.2019.8868554.