A selection of scholarly literature on the topic "3D semantic scene completion"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Table of contents
Browse the lists of recent articles, books, theses, reports, and other scholarly sources on the topic "3D semantic scene completion".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication in PDF format and read an online annotation of the work, provided the relevant parameters are present in the metadata.
Journal articles on the topic "3D semantic scene completion"
Luo, Shoutong, Zhengxing Sun, Yunhan Sun, and Yi Wang. "Resolution-switchable 3D Semantic Scene Completion". Computer Graphics Forum 41, no. 7 (October 2022): 121–30. http://dx.doi.org/10.1111/cgf.14662.
Tang, Jiaxiang, Xiaokang Chen, Jingbo Wang, and Gang Zeng. "Not All Voxels Are Equal: Semantic Scene Completion from the Point-Voxel Perspective". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2352–60. http://dx.doi.org/10.1609/aaai.v36i2.20134.
Behley, Jens, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Jürgen Gall, and Cyrill Stachniss. "Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI Dataset". International Journal of Robotics Research 40, no. 8-9 (April 20, 2021): 959–67. http://dx.doi.org/10.1177/02783649211006735.
Xu, Jinfeng, Xianzhi Li, Yuan Tang, Qiao Yu, Yixue Hao, Long Hu, and Min Chen. "CasFusionNet: A Cascaded Network for Point Cloud Semantic Scene Completion by Dense Feature Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3018–26. http://dx.doi.org/10.1609/aaai.v37i3.25405.
Li, Siqi, Changqing Zou, Yipeng Li, Xibin Zhao, and Yue Gao. "Attention-Based Multi-Modal Fusion Network for Semantic Scene Completion". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11402–9. http://dx.doi.org/10.1609/aaai.v34i07.6803.
Wang, Yu, and Chao Tong. "H2GFormer: Horizontal-to-Global Voxel Transformer for 3D Semantic Scene Completion". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5722–30. http://dx.doi.org/10.1609/aaai.v38i6.28384.
Wang, Xuzhi, Di Lin, and Liang Wan. "FFNet: Frequency Fusion Network for Semantic Scene Completion". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2550–57. http://dx.doi.org/10.1609/aaai.v36i3.20156.
Shan, Y., Y. Xia, Y. Chen, and D. Cremers. "SCP: Scene Completion Pre-Training for 3D Object Detection". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 41–46. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-41-2023.
Ding, Junzhe, Jin Zhang, Luqin Ye, and Cheng Wu. "Kalman-Based Scene Flow Estimation for Point Cloud Densification and 3D Object Detection in Dynamic Scenes". Sensors 24, no. 3 (January 31, 2024): 916. http://dx.doi.org/10.3390/s24030916.
Park, Sang-Min, and Jong-Eun Ha. "3D Semantic Scene Completion With Multi-scale Feature Maps and Masked Autoencoder". Journal of Institute of Control, Robotics and Systems 29, no. 12 (December 31, 2023): 966–72. http://dx.doi.org/10.5302/j.icros.2023.23.0143.
Dissertations on the topic "3D semantic scene completion"
Roldão Jimenez, Luis Guillermo. "3D Scene Reconstruction and Completion for Autonomous Driving". Electronic thesis or dissertation, Sorbonne université, 2021. http://www.theses.fr/2021SORUS415.
In this thesis, we address the challenges of 3D scene reconstruction and completion from sparse point clouds of heterogeneous density, proposing different techniques to create a 3D model of the surroundings. In the first part, we study the use of three-dimensional occupancy grids for multi-frame reconstruction, useful for localization and HD-map applications. This is done by exploiting ray-path information to resolve ambiguities in partially occupied cells. Our sensor model reduces discretization inaccuracies and enables occupancy updates in dynamic scenarios. We also focus on single-frame environment perception by introducing a 3D implicit surface reconstruction algorithm capable of dealing with heterogeneous-density data through an adaptive neighborhood strategy. Our method completes small regions of missing data and outputs a continuous representation useful for physical modeling or terrain traversability assessment. We then dive into deep learning applications for the novel task of semantic scene completion, which completes and semantically annotates entire 3D input scans. Given the little consensus found in the literature, we present an in-depth survey of existing methods and introduce our lightweight multiscale semantic completion network for outdoor scenarios. Our method employs a new hybrid pipeline based on a 2D CNN backbone branch to reduce computation overhead and 3D segmentation heads to predict the complete semantic scene at different scales, making it significantly lighter and faster than existing approaches.
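The ray-path occupancy update this abstract alludes to can be illustrated with the classic log-odds scheme: cells a beam passes through accumulate evidence for free space, and the endpoint cell accumulates evidence for occupancy. The sketch below is a minimal 2D illustration with assumed parameter values, not the thesis's own sensor model.

```python
import numpy as np

# Illustrative log-odds increments (not the thesis's calibrated values).
L_OCC = np.log(0.7 / 0.3)    # evidence added when a beam endpoint hits a cell
L_FREE = np.log(0.4 / 0.6)   # evidence added for cells the beam traverses
L_MIN, L_MAX = -4.0, 4.0     # clamping keeps the grid responsive to dynamic scenes

def bresenham_cells(start, end):
    """Integer grid cells traversed by a ray from start to end (2D for brevity)."""
    x0, y0 = start
    x1, y1 = end
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    while (x0, y0) != (x1, y1):
        cells.append((x0, y0))
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells

def update_grid(grid, sensor_cell, hit_cell):
    """Ray-path update: free space along the beam, occupied at the endpoint."""
    for c in bresenham_cells(sensor_cell, hit_cell):
        grid[c] = np.clip(grid[c] + L_FREE, L_MIN, L_MAX)
    grid[hit_cell] = np.clip(grid[hit_cell] + L_OCC, L_MIN, L_MAX)

grid = np.zeros((10, 10))          # log-odds 0 corresponds to probability 0.5 (unknown)
update_grid(grid, (0, 0), (5, 5))  # one beam from the origin to a hit at (5, 5)
p_hit = 1 - 1 / (1 + np.exp(grid[5, 5]))   # log-odds back to occupancy probability
```

Clamping the log-odds (rather than letting them grow without bound) is what allows cells to flip state when the scene changes, which is one way to keep the grid usable in dynamic scenarios.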
Garbade, Martin [author]. "Semantic Segmentation and Completion of 2D and 3D Scenes / Martin Garbade". Bonn: Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1201728010/34.
Jaritz, Maximilian. "2D-3D scene understanding for autonomous driving". Thesis, Université Paris sciences et lettres, 2020. https://pastel.archives-ouvertes.fr/tel-02921424.
In this thesis, we address the challenges of label scarcity and of fusing heterogeneous 3D point clouds and 2D images. We adopt the strategy of end-to-end race driving, where a neural network is trained to map sensor input (a camera image) directly to control output, which makes this strategy independent of annotations in the visual domain. We employ deep reinforcement learning, where the algorithm learns from reward through interaction with a realistic simulator, and we propose new training strategies and reward functions for better driving and faster convergence. However, training time is still very long, which is why we focus on perception to study point cloud and image fusion in the remainder of this thesis. We propose two different methods for 2D-3D fusion. First, we project 3D LiDAR point clouds into 2D image space, resulting in sparse depth maps, and propose a novel encoder-decoder architecture to fuse dense RGB and sparse depth for the task of depth completion, which enhances point cloud resolution to image level. Second, we fuse directly in 3D space to prevent information loss through projection: we compute image features with a 2D CNN over multiple views and then lift them all into a global 3D point cloud for fusion, followed by a point-based network that predicts 3D semantic labels. Building on this work, we introduce the more difficult novel task of cross-modal unsupervised domain adaptation, where one is provided with multi-modal data in a labeled source dataset and an unlabeled target dataset. We propose to perform 2D-3D cross-modal learning via mutual mimicking between image and point cloud networks to address the source-target domain shift, and we further show that our method is complementary to the existing uni-modal technique of pseudo-labeling.
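The first fusion route this abstract describes, projecting a LiDAR point cloud into image space to obtain a sparse depth map, can be sketched with a standard pinhole model. The function name and the toy intrinsics below are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def project_to_sparse_depth(points_cam, K, height, width):
    """Project 3D points (already in camera coordinates) into a sparse depth map.

    points_cam: (N, 3) array with z > 0 in front of the camera.
    K: (3, 3) pinhole intrinsics. Returns an (H, W) map, 0 where no point lands.
    """
    depth = np.zeros((height, width), dtype=np.float32)
    pts = points_cam[points_cam[:, 2] > 0]          # keep points in front of the camera
    uv = (K @ pts.T).T                              # homogeneous pixel coordinates
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], pts[inside, 2]):
        # Keep the nearest point when several fall into the same pixel.
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth

# Toy example: one point 4 m ahead on the optical axis of a 100x100 image.
K = np.array([[50.0, 0.0, 50.0],
              [0.0, 50.0, 50.0],
              [0.0, 0.0, 1.0]])
depth = project_to_sparse_depth(np.array([[0.0, 0.0, 4.0]]), K, 100, 100)
```

The resulting map is mostly zeros, which is exactly the sparsity that motivates a dedicated encoder-decoder for depth completion rather than treating it as a dense input channel.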
Dewan, Ayush [author], and Wolfram Burgard [academic supervisor]. "Leveraging motion and semantic cues for 3D scene understanding". Freiburg: Universität, 2020. http://d-nb.info/1215499493/34.
Lind, Johan. "Make it Meaningful: Semantic Segmentation of Three-Dimensional Urban Scene Models". Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143599.
Piewak, Florian Pierre Joseph [author], and J. M. Zöllner [academic supervisor]. "LiDAR-based Semantic Labeling: Automotive 3D Scene Understanding / Florian Pierre Joseph Piewak; Betreuer: J. M. Zöllner". Karlsruhe: KIT-Bibliothek, 2020. http://d-nb.info/1212512405/34.
Minto, Ludovico. "Deep learning for scene understanding with color and depth data". Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3422424.
In recent years, remarkable progress has been made both in data acquisition and in the hardware and algorithms needed to process the data. On the one hand, the introduction of depth sensors into the consumer market has made it possible to acquire three-dimensional data at negligible cost, overcoming the limitations that typically affect applications based on color processing alone. At the same time, increasingly powerful graphics processors have made it possible to extend research to computationally demanding algorithms and to apply them to large amounts of data. On the other hand, the development of ever more effective machine-learning algorithms, including deep-learning techniques, has made it possible to exploit the enormous quantity of data available today. Against this background, this thesis presents three typical computer-vision problems and proposes for each a solution that exploits both convolutional neural networks and the joint information conveyed by color and depth data. In particular, it presents an approach to the semantic segmentation of color/depth images that uses both the information extracted with the help of a convolutional neural network and the geometric information obtained through more traditional algorithms. It describes a method for the classification of three-dimensional shapes, likewise based on a convolutional neural network operating on particular representations of the available 3D data.
Finally, it proposes the use of a convolutional network to estimate the confidence associated with depth data collected with a ToF sensor and a stereo system, respectively, in order to successfully guide their fusion without employing complicated noise models for the same purpose.
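The confidence-guided fusion idea in the last paragraph can be illustrated with a minimal sketch. In the thesis the per-pixel confidences are estimated by a convolutional network; here they are simply given, and the weighting rule below is a generic assumption used for illustration, not the thesis's exact scheme.

```python
import numpy as np

def fuse_depth(depth_tof, conf_tof, depth_stereo, conf_stereo, eps=1e-6):
    """Per-pixel confidence-weighted average of two depth maps.

    Pixels where one sensor is trusted more pull the fused value toward
    that sensor's measurement; eps avoids division by zero where both
    confidences vanish.
    """
    w = conf_tof + conf_stereo + eps
    return (conf_tof * depth_tof + conf_stereo * depth_stereo) / w

# Toy 2x2 maps: ToF says 2 m everywhere, stereo says 3 m everywhere.
d_tof = np.full((2, 2), 2.0)
d_stereo = np.full((2, 2), 3.0)
c_tof = np.array([[1.0, 0.0],
                  [0.5, 0.9]])     # hypothetical per-pixel ToF confidence
c_stereo = 1.0 - c_tof             # complementary stereo confidence
fused = fuse_depth(d_tof, c_tof, d_stereo, c_stereo)
```

The appeal of learning the confidences directly, as the thesis proposes, is that no hand-crafted noise model per sensor is needed to produce these weights.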
Lai, Po Kong. "Immersive Dynamic Scenes for Virtual Reality from a Single RGB-D Camera". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39663.
Yalcin Bayramoglu, Neslihan. "Range Data Recognition: Segmentation, Matching, and Similarity Retrieval". PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613586/index.pdf.
…however, there is still a gap in 3D semantic analysis between the requirements of the applications and the obtained results. In this thesis we study the 3D semantic analysis of range data. Under this broad title we address the segmentation of range scenes, correspondence matching of range images, and similarity retrieval of range models; inputs are considered as single-view depth images. First, possible research topics related to 3D semantic analysis are introduced. Planar structure detection in range scenes is analyzed, and some modifications to available methods are proposed. A novel algorithm is also presented that segments a 3D point cloud (obtained via a ToF camera) into objects using spatial information. We propose a novel local range image matching method that combines 3D surface properties with the 2D scale-invariant feature transform. Next, we present our proposal for retrieving similar models where both the query and the database consist only of range models. Finally, an analysis of the heat diffusion process on range data is presented, together with challenges and some experimental results.
Book chapters on the topic "3D semantic scene completion"
Ding, Laiyan, Panwen Hu, Jie Li, and Rui Huang. "Towards Balanced RGB-TSDF Fusion for Consistent Semantic Scene Completion by 3D RGB Feature Completion and a Classwise Entropy Loss Function". In Pattern Recognition and Computer Vision, 128–41. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8432-9_11.
Romero-González, Cristina, Jesus Martínez-Gómez, and Ismael García-Varea. "3D Semantic Maps for Scene Segmentation". In ROBOT 2017: Third Iberian Robotics Conference, 603–12. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70833-1_49.
Zhang, Jiahui, Hao Zhao, Anbang Yao, Yurong Chen, Li Zhang, and Hongen Liao. "Efficient Semantic Scene Completion Network with Spatial Group Convolution". In Computer Vision – ECCV 2018, 749–65. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01258-8_45.
Akadas, Kiran, and Shankar Gangisetty. "3D Semantic Segmentation for Large-Scale Scene Understanding". In Computer Vision – ACCV 2020 Workshops, 87–102. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69756-3_7.
Dai, Angela, and Matthias Nießner. "3DMV: Joint 3D-Multi-view Prediction for 3D Semantic Scene Segmentation". In Computer Vision – ECCV 2018, 458–74. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01249-6_28.
Henlein, Alexander, Attila Kett, Daniel Baumartz, Giuseppe Abrami, Alexander Mehler, Johannes Bastian, Yannic Blecher, et al. "Semantic Scene Builder: Towards a Context Sensitive Text-to-3D Scene Framework". In Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, 461–79. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-35748-0_32.
Wang, Jianan, Hanyu Xuan, and Zhiliang Wu. "Semantic-Guided Completion Network for Video Inpainting in Complex Urban Scene". In Pattern Recognition and Computer Vision, 224–36. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8552-4_18.
Srinivasan, Sharadha, Shreya Kumar, Vallikannu Chockalingam, and Chitrakala S. "3DSRASG: 3D Scene Retrieval and Augmentation Using Semantic Graphs". In Progress in Artificial Intelligence, 313–24. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86230-5_25.
Bultmann, Simon, and Sven Behnke. "3D Semantic Scene Perception Using Distributed Smart Edge Sensors". In Intelligent Autonomous Systems 17, 313–29. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-22216-0_22.
Cao, Chuqi, Mohammad Rafiq Swash, and Hongying Meng. "Semantic 3D Scene Classification Based on Holoscopic 3D Camera for Autonomous Vehicles". In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery, 897–904. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70665-4_96.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "3D semantic scene completion"
Garbade, Martin, Yueh-Tung Chen, Johann Sawatzky und Juergen Gall. „Two Stream 3D Semantic Scene Completion“. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2019. http://dx.doi.org/10.1109/cvprw.2019.00055.
Der volle Inhalt der QuelleCao, Anh-Quan, und Raoul de Charette. „MonoScene: Monocular 3D Semantic Scene Completion“. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00396.
Der volle Inhalt der QuelleWang, Yida, David Joseph Tan, Nassir Navab und Federico Tombari. „Adversarial Semantic Scene Completion from a Single Depth Image“. In 2018 International Conference on 3D Vision (3DV). IEEE, 2018. http://dx.doi.org/10.1109/3dv.2018.00056.
Der volle Inhalt der QuelleWu, Shun-Cheng, Keisuke Tateno, Nassir Navab und Federico Tombari. „SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion“. In 2020 International Conference on 3D Vision (3DV). IEEE, 2020. http://dx.doi.org/10.1109/3dv50981.2020.00090.
Der volle Inhalt der QuelleLi, Jie, Kai Han, Peng Wang, Yu Liu und Xia Yuan. „Anisotropic Convolutional Networks for 3D Semantic Scene Completion“. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00341.
Der volle Inhalt der QuelleLi, Jie, Laiyan Ding und Rui Huang. „IMENet: Joint 3D Semantic Scene Completion and 2D Semantic Segmentation through Iterative Mutual Enhancement“. In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/110.
Der volle Inhalt der QuelleZhang, Pingping, Wei Liu, Yinjie Lei, Huchuan Lu und Xiaoyun Yang. „Cascaded Context Pyramid for Full-Resolution 3D Semantic Scene Completion“. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00789.
Der volle Inhalt der QuelleDourado, Aloisio, Frederico Guth und Teofilo de Campos. „Data Augmented 3D Semantic Scene Completion with 2D Segmentation Priors“. In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2022. http://dx.doi.org/10.1109/wacv51458.2022.00076.
Der volle Inhalt der QuelleYao, Jiawei, Chuming Li, Keqiang Sun, Yingjie Cai, Hao Li, Wanli Ouyang und Hongsheng Li. „NDC-Scene: Boost Monocular 3D Semantic Scene Completion in Normalized Device Coordinates Space“. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00867.
Der volle Inhalt der QuelleGuo, Yuxiao, und Xin Tong. „View-Volume Network for Semantic Scene Completion from a Single Depth Image“. In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/101.