Academic literature on the topic "3D semantic scene completion"
Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "3D semantic scene completion".
Journal articles on the topic "3D semantic scene completion"
Luo, Shoutong, Zhengxing Sun, Yunhan Sun, and Yi Wang. "Resolution-switchable 3D Semantic Scene Completion". Computer Graphics Forum 41, no. 7 (October 2022): 121–30. http://dx.doi.org/10.1111/cgf.14662.
Tang, Jiaxiang, Xiaokang Chen, Jingbo Wang, and Gang Zeng. "Not All Voxels Are Equal: Semantic Scene Completion from the Point-Voxel Perspective". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2352–60. http://dx.doi.org/10.1609/aaai.v36i2.20134.
Behley, Jens, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Jürgen Gall, and Cyrill Stachniss. "Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI Dataset". International Journal of Robotics Research 40, no. 8-9 (April 20, 2021): 959–67. http://dx.doi.org/10.1177/02783649211006735.
Xu, Jinfeng, Xianzhi Li, Yuan Tang, Qiao Yu, Yixue Hao, Long Hu, and Min Chen. "CasFusionNet: A Cascaded Network for Point Cloud Semantic Scene Completion by Dense Feature Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3018–26. http://dx.doi.org/10.1609/aaai.v37i3.25405.
Li, Siqi, Changqing Zou, Yipeng Li, Xibin Zhao, and Yue Gao. "Attention-Based Multi-Modal Fusion Network for Semantic Scene Completion". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11402–9. http://dx.doi.org/10.1609/aaai.v34i07.6803.
Wang, Yu, and Chao Tong. "H2GFormer: Horizontal-to-Global Voxel Transformer for 3D Semantic Scene Completion". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5722–30. http://dx.doi.org/10.1609/aaai.v38i6.28384.
Wang, Xuzhi, Di Lin, and Liang Wan. "FFNet: Frequency Fusion Network for Semantic Scene Completion". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2550–57. http://dx.doi.org/10.1609/aaai.v36i3.20156.
Shan, Y., Y. Xia, Y. Chen, and D. Cremers. "SCP: Scene Completion Pre-training for 3D Object Detection". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 41–46. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-41-2023.
Ding, Junzhe, Jin Zhang, Luqin Ye, and Cheng Wu. "Kalman-Based Scene Flow Estimation for Point Cloud Densification and 3D Object Detection in Dynamic Scenes". Sensors 24, no. 3 (January 31, 2024): 916. http://dx.doi.org/10.3390/s24030916.
Park, Sang-Min, and Jong-Eun Ha. "3D Semantic Scene Completion With Multi-scale Feature Maps and Masked Autoencoder". Journal of Institute of Control, Robotics and Systems 29, no. 12 (December 31, 2023): 966–72. http://dx.doi.org/10.5302/j.icros.2023.23.0143.
Texto completoTesis sobre el tema "3D semantic scene completion"
Roldão Jimenez, Luis Guillermo. "3D Scene Reconstruction and Completion for Autonomous Driving". Electronic thesis or dissertation, Sorbonne université, 2021. http://www.theses.fr/2021SORUS415.
In this thesis, we address the challenges of 3D scene reconstruction and completion from sparse point clouds of heterogeneous density, proposing different techniques to create a 3D model of the surroundings. In the first part, we study the use of three-dimensional occupancy grids for multi-frame reconstruction, useful for localization and HD-map applications. This is done by exploiting ray-path information to resolve ambiguities in partially occupied cells. Our sensor model reduces discretization inaccuracies and enables occupancy updates in dynamic scenarios. We also focus on single-frame environment perception by introducing a 3D implicit surface reconstruction algorithm capable of dealing with heterogeneous-density data through an adaptive neighborhood strategy. Our method completes small regions of missing data and outputs a continuous representation useful for physical modeling or terrain traversability assessment. Finally, we turn to deep learning for the novel task of semantic scene completion, which completes and semantically annotates entire 3D input scans. Given the little consensus in the literature, we present an in-depth survey of existing methods and introduce our lightweight multiscale semantic completion network for outdoor scenarios. Our method employs a new hybrid pipeline based on a 2D CNN backbone branch to reduce computation overhead and 3D segmentation heads to predict the complete semantic scene at different scales, making it significantly lighter and faster than existing approaches.
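The multi-frame occupancy-grid reconstruction summarized in this abstract rests on a standard idea: cells traversed by a sensor ray accumulate evidence of free space, while the cell containing the ray endpoint accumulates evidence of occupancy. A minimal 2D log-odds sketch of that update (illustrative constants and grid layout, not the thesis's actual sensor model):

```python
import numpy as np

# Log-odds increments (illustrative values, not taken from the thesis).
L_FREE, L_OCC = -0.4, 0.85

class OccupancyGrid:
    """Minimal 2D occupancy grid updated along sensor rays (log-odds)."""

    def __init__(self, size=100, resolution=0.1):
        self.res = resolution
        self.logodds = np.zeros((size, size))

    def to_cell(self, p):
        return (int(p[0] / self.res), int(p[1] / self.res))

    def update_ray(self, origin, hit):
        """Mark cells between origin and hit as free, the hit cell as occupied."""
        o, h = np.asarray(origin, float), np.asarray(hit, float)
        n_steps = max(1, int(np.linalg.norm(h - o) / self.res))
        for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
            i, j = self.to_cell(o + t * (h - o))
            self.logodds[i, j] += L_FREE   # traversed -> evidence of free space
        i, j = self.to_cell(h)
        self.logodds[i, j] += L_OCC        # endpoint -> evidence of occupancy

    def occupancy(self):
        """Convert accumulated log-odds back to occupancy probabilities."""
        return 1.0 / (1.0 + np.exp(-self.logodds))
```

The thesis's contribution lies precisely where this sketch is naive: a discretized ray walk like the one above introduces the ambiguities in partially occupied cells that its sensor model is designed to resolve.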
Garbade, Martin. "Semantic Segmentation and Completion of 2D and 3D Scenes". Bonn: Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1201728010/34.
Jaritz, Maximilian. "2D-3D scene understanding for autonomous driving". Thesis, Université Paris sciences et lettres, 2020. https://pastel.archives-ouvertes.fr/tel-02921424.
In this thesis, we address the challenges of label scarcity and the fusion of heterogeneous 3D point clouds and 2D images. We adopt the strategy of end-to-end race driving, where a neural network is trained to map sensor input (camera images) directly to control output, which makes this strategy independent of annotations in the visual domain. We employ deep reinforcement learning, where the algorithm learns from reward through interaction with a realistic simulator. We propose new training strategies and reward functions for better driving and faster convergence. However, training time remains very long, which is why we focus on perception and study point cloud and image fusion in the remainder of this thesis. We propose two different methods for 2D-3D fusion. First, we project 3D LiDAR point clouds into 2D image space, resulting in sparse depth maps. We propose a novel encoder-decoder architecture to fuse dense RGB and sparse depth for the task of depth completion, which enhances point cloud resolution to image level. Second, we fuse directly in 3D space to prevent information loss through projection: we compute image features with a 2D CNN over multiple views and then lift them all into a global 3D point cloud for fusion, followed by a point-based network that predicts 3D semantic labels. Building on this work, we introduce the more difficult novel task of cross-modal unsupervised domain adaptation, where one is provided with multi-modal data in a labeled source dataset and an unlabeled target dataset. We propose to perform 2D-3D cross-modal learning via mutual mimicking between image and point cloud networks to address the source-target domain shift. We further show that our method is complementary to the existing uni-modal technique of pseudo-labeling.
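The first fusion method in this abstract, projecting LiDAR points into 2D image space to obtain a sparse depth map, amounts to a pinhole projection plus a per-pixel nearest-depth rule. A minimal sketch (the intrinsics `K` and the zero-means-empty convention are assumptions for illustration, not details from the thesis):

```python
import numpy as np

def project_to_sparse_depth(points_cam, K, height, width):
    """Project 3D points (camera frame, z forward) into a sparse depth map.

    points_cam: (N, 3) array; K: (3, 3) pinhole intrinsics matrix.
    Returns an (H, W) map where empty pixels hold 0 (illustrative convention).
    """
    z = points_cam[:, 2]
    pts = points_cam[z > 0]             # keep only points in front of the camera
    uv = (K @ pts.T).T                  # homogeneous pixel coordinates
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.zeros((height, width))
    # Keep the nearest point per pixel: write farthest first, nearest last.
    order = np.argsort(-pts[inside, 2])
    depth[v[inside][order], u[inside][order]] = pts[inside, 2][order]
    return depth
```

The resulting map is sparse because many pixels receive no LiDAR return, which is exactly what motivates the dense-RGB/sparse-depth completion network described above.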
Dewan, Ayush. "Leveraging motion and semantic cues for 3D scene understanding". Advisor: Wolfram Burgard. Freiburg: Universität, 2020. http://d-nb.info/1215499493/34.
Lind, Johan. "Make it Meaningful: Semantic Segmentation of Three-Dimensional Urban Scene Models". Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143599.
Piewak, Florian Pierre Joseph. "LiDAR-based Semantic Labeling: Automotive 3D Scene Understanding". Advisor: J. M. Zöllner. Karlsruhe: KIT-Bibliothek, 2020. http://d-nb.info/1212512405/34.
Minto, Ludovico. "Deep learning for scene understanding with color and depth data". Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3422424.
In recent years, remarkable progress has been made both in data acquisition and in the hardware and algorithms needed to process the data. On the one hand, the introduction of depth sensors into the consumer market has made it possible to acquire three-dimensional data at very low cost, overcoming the limitations that typically affect applications based on color processing alone. At the same time, increasingly powerful graphics processors have allowed research to extend to computationally demanding algorithms and to their application to large amounts of data. On the other hand, the development of ever more effective machine learning algorithms, including deep learning techniques, has made it possible to exploit the enormous quantity of data available today. In light of these premises, this thesis presents three typical computer vision problems and proposes an approach to each that exploits both convolutional neural networks and the joint information conveyed by color and depth data. In particular, it presents an approach to semantic segmentation of color/depth images that uses both information extracted with the aid of a convolutional neural network and geometric information obtained through more traditional algorithms. It describes a method for classifying three-dimensional shapes, likewise based on a convolutional neural network operating on particular representations of the available 3D data.
Finally, it proposes the use of a convolutional network to estimate the confidence associated with depth data collected by a ToF sensor and a stereo system, respectively, in order to successfully guide their fusion without employing complicated noise models for the same purpose.
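The ToF/stereo fusion described at the end of this abstract weights each sensor's depth estimate by a learned confidence. Setting aside how the confidences are produced, the fusion step itself reduces to a per-pixel weighted average, sketched here with placeholder confidence maps (a generic illustration, not the thesis's exact formulation):

```python
import numpy as np

def fuse_depth(depth_tof, depth_stereo, conf_tof, conf_stereo, eps=1e-6):
    """Confidence-weighted per-pixel fusion of two depth maps.

    All inputs are (H, W) arrays; the confidences in [0, 1] would come from
    the networks described above (here they are arbitrary placeholders).
    eps avoids division by zero where both confidences vanish.
    """
    w_sum = conf_tof + conf_stereo + eps
    return (conf_tof * depth_tof + conf_stereo * depth_stereo) / w_sum
```

Where one sensor's confidence dominates (e.g. ToF on textureless surfaces, stereo at long range), the fused value follows that sensor, which is the behavior the learned confidences are meant to produce.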
Lai, Po Kong. "Immersive Dynamic Scenes for Virtual Reality from a Single RGB-D Camera". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39663.
Yalcin Bayramoglu, Neslihan. "Range Data Recognition: Segmentation, Matching, and Similarity Retrieval". PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613586/index.pdf.
…however, there is still a gap in 3D semantic analysis between the requirements of applications and the obtained results. In this thesis we study the 3D semantic analysis of range data. Under this broad title we address segmentation of range scenes, correspondence matching of range images, and similarity retrieval of range models. Inputs are taken to be single-view depth images. First, possible research topics related to 3D semantic analysis are introduced. Planar structure detection in range scenes is analyzed and some modifications to available methods are proposed. A novel algorithm is also presented that segments a 3D point cloud (obtained via a ToF camera) into objects using spatial information. We propose a novel local range image matching method that combines 3D surface properties with the 2D scale-invariant feature transform. Next, we present our proposal for retrieving similar models where both the query and the database consist only of range models. Finally, an analysis of the heat diffusion process on range data is presented, along with challenges and some experimental results.
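Planar-structure detection in range scenes, mentioned at the start of this abstract, is commonly bootstrapped with a RANSAC-style plane fit. The sketch below is such a generic baseline (not the modified methods the thesis proposes): repeatedly sample three points, fit the plane they span, and keep the plane supported by the most inliers.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a dominant plane to an (N, 3) point cloud with a basic RANSAC loop.

    Returns ((normal, d), inlier_mask) for the plane n·x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```

Running the loop again on the remaining points extracts further planes, which is how a range scene can be decomposed into its dominant planar structures before finer segmentation.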
Book chapters on the topic "3D semantic scene completion"
Ding, Laiyan, Panwen Hu, Jie Li, and Rui Huang. "Towards Balanced RGB-TSDF Fusion for Consistent Semantic Scene Completion by 3D RGB Feature Completion and a Classwise Entropy Loss Function". In Pattern Recognition and Computer Vision, 128–41. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8432-9_11.
Romero-González, Cristina, Jesus Martínez-Gómez, and Ismael García-Varea. "3D Semantic Maps for Scene Segmentation". In ROBOT 2017: Third Iberian Robotics Conference, 603–12. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70833-1_49.
Zhang, Jiahui, Hao Zhao, Anbang Yao, Yurong Chen, Li Zhang, and Hongen Liao. "Efficient Semantic Scene Completion Network with Spatial Group Convolution". In Computer Vision – ECCV 2018, 749–65. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01258-8_45.
Akadas, Kiran, and Shankar Gangisetty. "3D Semantic Segmentation for Large-Scale Scene Understanding". In Computer Vision – ACCV 2020 Workshops, 87–102. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69756-3_7.
Dai, Angela, and Matthias Nießner. "3DMV: Joint 3D-Multi-view Prediction for 3D Semantic Scene Segmentation". In Computer Vision – ECCV 2018, 458–74. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01249-6_28.
Henlein, Alexander, Attila Kett, Daniel Baumartz, Giuseppe Abrami, Alexander Mehler, Johannes Bastian, Yannic Blecher, et al. "Semantic Scene Builder: Towards a Context Sensitive Text-to-3D Scene Framework". In Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, 461–79. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-35748-0_32.
Wang, Jianan, Hanyu Xuan, and Zhiliang Wu. "Semantic-Guided Completion Network for Video Inpainting in Complex Urban Scene". In Pattern Recognition and Computer Vision, 224–36. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8552-4_18.
Srinivasan, Sharadha, Shreya Kumar, Vallikannu Chockalingam, and Chitrakala S. "3DSRASG: 3D Scene Retrieval and Augmentation Using Semantic Graphs". In Progress in Artificial Intelligence, 313–24. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86230-5_25.
Bultmann, Simon, and Sven Behnke. "3D Semantic Scene Perception Using Distributed Smart Edge Sensors". In Intelligent Autonomous Systems 17, 313–29. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-22216-0_22.
Cao, Chuqi, Mohammad Rafiq Swash, and Hongying Meng. "Semantic 3D Scene Classification Based on Holoscopic 3D Camera for Autonomous Vehicles". In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery, 897–904. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70665-4_96.
Texto completoActas de conferencias sobre el tema "3D semantic scene completion"
Garbade, Martin, Yueh-Tung Chen, Johann Sawatzky, and Juergen Gall. "Two Stream 3D Semantic Scene Completion". In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2019. http://dx.doi.org/10.1109/cvprw.2019.00055.
Cao, Anh-Quan, and Raoul de Charette. "MonoScene: Monocular 3D Semantic Scene Completion". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00396.
Wang, Yida, David Joseph Tan, Nassir Navab, and Federico Tombari. "Adversarial Semantic Scene Completion from a Single Depth Image". In 2018 International Conference on 3D Vision (3DV). IEEE, 2018. http://dx.doi.org/10.1109/3dv.2018.00056.
Wu, Shun-Cheng, Keisuke Tateno, Nassir Navab, and Federico Tombari. "SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion". In 2020 International Conference on 3D Vision (3DV). IEEE, 2020. http://dx.doi.org/10.1109/3dv50981.2020.00090.
Li, Jie, Kai Han, Peng Wang, Yu Liu, and Xia Yuan. "Anisotropic Convolutional Networks for 3D Semantic Scene Completion". In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00341.
Li, Jie, Laiyan Ding, and Rui Huang. "IMENet: Joint 3D Semantic Scene Completion and 2D Semantic Segmentation through Iterative Mutual Enhancement". In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/110.
Zhang, Pingping, Wei Liu, Yinjie Lei, Huchuan Lu, and Xiaoyun Yang. "Cascaded Context Pyramid for Full-Resolution 3D Semantic Scene Completion". In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00789.
Dourado, Aloisio, Frederico Guth, and Teofilo de Campos. "Data Augmented 3D Semantic Scene Completion with 2D Segmentation Priors". In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2022. http://dx.doi.org/10.1109/wacv51458.2022.00076.
Yao, Jiawei, Chuming Li, Keqiang Sun, Yingjie Cai, Hao Li, Wanli Ouyang, and Hongsheng Li. "NDC-Scene: Boost Monocular 3D Semantic Scene Completion in Normalized Device Coordinates Space". In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00867.
Guo, Yuxiao, and Xin Tong. "View-Volume Network for Semantic Scene Completion from a Single Depth Image". In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/101.