Scientific literature on the topic "3D semantic scene completion"
Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles.
Consult thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "3D semantic scene completion".
Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.
Journal articles on the topic "3D semantic scene completion"
Luo, Shoutong, Zhengxing Sun, Yunhan Sun, and Yi Wang. "Resolution-switchable 3D Semantic Scene Completion." Computer Graphics Forum 41, no. 7 (October 2022): 121–30. http://dx.doi.org/10.1111/cgf.14662.
Tang, Jiaxiang, Xiaokang Chen, Jingbo Wang, and Gang Zeng. "Not All Voxels Are Equal: Semantic Scene Completion from the Point-Voxel Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2352–60. http://dx.doi.org/10.1609/aaai.v36i2.20134.
Behley, Jens, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Jürgen Gall, and Cyrill Stachniss. "Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI Dataset." International Journal of Robotics Research 40, no. 8–9 (April 20, 2021): 959–67. http://dx.doi.org/10.1177/02783649211006735.
Xu, Jinfeng, Xianzhi Li, Yuan Tang, Qiao Yu, Yixue Hao, Long Hu, and Min Chen. "CasFusionNet: A Cascaded Network for Point Cloud Semantic Scene Completion by Dense Feature Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3018–26. http://dx.doi.org/10.1609/aaai.v37i3.25405.
Li, Siqi, Changqing Zou, Yipeng Li, Xibin Zhao, and Yue Gao. "Attention-Based Multi-Modal Fusion Network for Semantic Scene Completion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 7 (April 3, 2020): 11402–9. http://dx.doi.org/10.1609/aaai.v34i07.6803.
Wang, Yu, and Chao Tong. "H2GFormer: Horizontal-to-Global Voxel Transformer for 3D Semantic Scene Completion." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5722–30. http://dx.doi.org/10.1609/aaai.v38i6.28384.
Wang, Xuzhi, Di Lin, and Liang Wan. "FFNet: Frequency Fusion Network for Semantic Scene Completion." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2550–57. http://dx.doi.org/10.1609/aaai.v36i3.20156.
Shan, Y., Y. Xia, Y. Chen, and D. Cremers. "SCP: Scene Completion Pre-training for 3D Object Detection." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 41–46. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-41-2023.
Ding, Junzhe, Jin Zhang, Luqin Ye, and Cheng Wu. "Kalman-Based Scene Flow Estimation for Point Cloud Densification and 3D Object Detection in Dynamic Scenes." Sensors 24, no. 3 (January 31, 2024): 916. http://dx.doi.org/10.3390/s24030916.
Park, Sang-Min, and Jong-Eun Ha. "3D Semantic Scene Completion With Multi-scale Feature Maps and Masked Autoencoder." Journal of Institute of Control, Robotics and Systems 29, no. 12 (December 31, 2023): 966–72. http://dx.doi.org/10.5302/j.icros.2023.23.0143.
Texte intégralThèses sur le sujet "3D semantic scene completion"
Roldão Jimenez, Luis Guillermo. "3D Scene Reconstruction and Completion for Autonomous Driving." Electronic thesis or dissertation, Sorbonne université, 2021. http://www.theses.fr/2021SORUS415.
In this thesis, we address the challenges of 3D scene reconstruction and completion from sparse point clouds of heterogeneous density, and we propose different techniques to build a 3D model of the surroundings. In the first part, we study the use of three-dimensional occupancy grids for multi-frame reconstruction, useful for localization and HD-map applications. This is done by exploiting ray-path information to resolve ambiguities in partially occupied cells. Our sensor model reduces discretization inaccuracies and enables occupancy updates in dynamic scenarios. We also address single-frame environment perception by introducing a 3D implicit surface reconstruction algorithm that handles heterogeneous-density data through an adaptive neighborhood strategy. Our method completes small regions of missing data and outputs a continuous representation useful for physical modeling or terrain traversability assessment. Finally, we turn to deep learning for the novel task of semantic scene completion, which completes and semantically annotates entire 3D input scans. Given the little consensus found in the literature, we present an in-depth survey of existing methods and introduce our lightweight multiscale semantic completion network for outdoor scenarios. It uses a new hybrid pipeline that combines a 2D CNN backbone branch, to reduce computation overhead, with 3D segmentation heads that predict the complete semantic scene at different scales, making it significantly lighter and faster than existing approaches.
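The multi-frame reconstruction summarized above builds on classic occupancy-grid mapping. The following minimal Python sketch is not the thesis's actual sensor model; the grid size, resolution, and log-odds increments are illustrative assumptions. It only shows how ray-path information is typically exploited: every cell traversed by a ray is observed free, and the cell containing the return is observed occupied.

import numpy as np

GRID_SHAPE = (200, 200, 40)      # voxels along (x, y, z); assumed map extent
VOXEL_SIZE = 0.2                 # metres per voxel; assumed resolution
L_FREE, L_OCC = -0.4, 0.85       # log-odds increments; assumed sensor-model values

log_odds = np.zeros(GRID_SHAPE, dtype=np.float32)

def to_voxel(points):
    """Convert metric points of shape (N, 3) to integer voxel indices."""
    return np.floor(points / VOXEL_SIZE).astype(int)

def update_ray(origin, hit_point):
    """Update the grid along one ray: traversed cells become freer, the return cell more occupied."""
    direction = hit_point - origin
    length = np.linalg.norm(direction)
    if length == 0.0:
        return
    hit = to_voxel(hit_point[None, :])[0]
    # Sample the ray at half-voxel steps; every traversed cell except the hit cell is observed free.
    n_samples = max(int(length / (0.5 * VOXEL_SIZE)), 2)
    samples = origin + np.outer(np.linspace(0.0, 1.0, n_samples, endpoint=False), direction)
    for idx in to_voxel(samples):
        if np.array_equal(idx, hit):
            continue
        if all(0 <= i < s for i, s in zip(idx, GRID_SHAPE)):
            log_odds[tuple(idx)] += L_FREE
    if all(0 <= i < s for i, s in zip(hit, GRID_SHAPE)):
        log_odds[tuple(hit)] += L_OCC

# Example: one synthetic ray from the sensor to a return 8 m ahead.
update_ray(np.array([20.0, 20.0, 2.0]), np.array([28.0, 20.0, 2.0]))
occupied = log_odds > 0.0        # threshold the accumulated log-odds to get a binary map

In a multi-frame setting, update_ray would be called for every return of every registered scan, and cells whose log-odds stay near zero remain unknown rather than free or occupied.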
Garbade, Martin. "Semantic Segmentation and Completion of 2D and 3D Scenes." PhD thesis, Bonn: Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1201728010/34.
Jaritz, Maximilian. "2D-3D scene understanding for autonomous driving." Thesis, Université Paris sciences et lettres, 2020. https://pastel.archives-ouvertes.fr/tel-02921424.
In this thesis, we address the challenges of label scarcity and of fusing heterogeneous 3D point clouds and 2D images. We first adopt the strategy of end-to-end race driving, where a neural network is trained to map sensor input (a camera image) directly to control output, which makes this strategy independent of annotations in the visual domain. We employ deep reinforcement learning, where the algorithm learns from reward through interaction with a realistic simulator, and we propose new training strategies and reward functions for better driving and faster convergence. However, training time remains very long, which is why we focus on perception and study point cloud and image fusion in the remainder of the thesis. We propose two different methods for 2D-3D fusion. First, we project 3D LiDAR point clouds into 2D image space, resulting in sparse depth maps, and propose a novel encoder-decoder architecture that fuses dense RGB and sparse depth for the task of depth completion, enhancing point cloud resolution to image level. Second, we fuse directly in 3D space to prevent the information loss caused by projection: we compute image features with a 2D CNN for multiple views, lift them all to a global 3D point cloud for fusion, and apply a point-based network to predict 3D semantic labels. Building on this work, we introduce the more difficult novel task of cross-modal unsupervised domain adaptation, where multi-modal data are available in a labeled source dataset and an unlabeled target dataset. We propose to perform 2D-3D cross-modal learning via mutual mimicking between image and point cloud networks to address the source-target domain shift, and we further show that our method is complementary to the existing uni-modal technique of pseudo-labeling.
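The first fusion method described above starts from a sparse depth map obtained by projecting the LiDAR points into the camera image. A minimal Python sketch of that projection step follows; it is not the thesis's code, and the KITTI-like intrinsics and image size are made-up assumptions.

import numpy as np

def project_to_sparse_depth(points_cam, fx, fy, cx, cy, height, width):
    """points_cam: (N, 3) array in camera coordinates (x right, y down, z forward)."""
    depth_map = np.zeros((height, width), dtype=np.float32)   # 0 means "no measurement"
    z = points_cam[:, 2]
    valid = z > 0.1                                    # keep only points in front of the camera
    x, y, z = points_cam[valid, 0], points_cam[valid, 1], z[valid]
    u = np.round(fx * x / z + cx).astype(int)          # pinhole projection
    v = np.round(fy * y / z + cy).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[inside], v[inside], z[inside]
    order = np.argsort(-z)                             # write far points first so near points win
    depth_map[v[order], u[order]] = z[order]
    return depth_map

# Example with a synthetic cloud and made-up intrinsics.
cloud = np.random.uniform([-10.0, -2.0, 1.0], [10.0, 2.0, 60.0], size=(5000, 3)).astype(np.float32)
sparse_depth = project_to_sparse_depth(cloud, fx=721.5, fy=721.5, cx=609.6, cy=172.9,
                                        height=375, width=1242)

The resulting map has valid depth at only a small fraction of the pixels, which is exactly the sparse input that a depth-completion encoder-decoder densifies together with the RGB image.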
Dewan, Ayush. "Leveraging motion and semantic cues for 3D scene understanding." PhD thesis, supervised by Wolfram Burgard. Freiburg: Universität Freiburg, 2020. http://d-nb.info/1215499493/34.
Lind, Johan. "Make it Meaningful: Semantic Segmentation of Three-Dimensional Urban Scene Models." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143599.
Piewak, Florian Pierre Joseph. "LiDAR-based Semantic Labeling: Automotive 3D Scene Understanding." PhD thesis, supervised by J. M. Zöllner. Karlsruhe: KIT-Bibliothek, 2020. http://d-nb.info/1212512405/34.
Minto, Ludovico. "Deep learning for scene understanding with color and depth data." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3422424.
In recent years, remarkable progress has been made both in data acquisition and in the hardware and algorithms needed to process the acquired data. On the one hand, the arrival of depth sensors on the consumer market has made it possible to acquire three-dimensional data at negligible cost, overcoming the limitations that typically affect applications based on color processing alone; at the same time, increasingly powerful graphics processors have allowed research to extend to computationally demanding algorithms and to apply them to large amounts of data. On the other hand, the development of ever more effective machine learning algorithms, including deep learning techniques, has made it possible to exploit the enormous quantity of data available today. Against this background, this thesis presents three typical computer vision problems and proposes for each an approach that exploits both convolutional neural networks and the joint information conveyed by color and depth data. In particular, it presents an approach to the semantic segmentation of color/depth images that uses both the information extracted with the help of a convolutional neural network and the geometric information obtained through more traditional algorithms. It then describes a method for the classification of three-dimensional shapes, likewise based on a convolutional neural network operating on particular representations of the available 3D data. Finally, it proposes the use of a convolutional network to estimate the confidence associated with depth data acquired with a ToF sensor and with a stereo system, respectively, in order to successfully guide their fusion without resorting to complicated noise models.
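Once per-pixel confidence maps for the ToF and stereo measurements have been estimated (with a CNN in the thesis, here simply taken as given arrays), the last contribution described above reduces at fusion time to a per-pixel weighted average. A minimal Python sketch under that assumption, with synthetic confidences and made-up image dimensions:

import numpy as np

def fuse_depth(depth_tof, conf_tof, depth_stereo, conf_stereo, eps=1e-6):
    """Per-pixel confidence-weighted average of two depth hypotheses; confidences lie in [0, 1]."""
    weight_sum = conf_tof + conf_stereo
    fused = (conf_tof * depth_tof + conf_stereo * depth_stereo) / np.maximum(weight_sum, eps)
    # Where neither sensor is confident, fall back to the ToF measurement.
    return np.where(weight_sum > eps, fused, depth_tof)

# Example with synthetic 120x160 depth and confidence maps.
h, w = 120, 160
depth_tof = np.full((h, w), 2.0, dtype=np.float32)        # ToF reports 2.0 m everywhere
depth_stereo = np.full((h, w), 2.2, dtype=np.float32)     # stereo reports 2.2 m everywhere
conf_tof = np.random.uniform(0.0, 1.0, (h, w)).astype(np.float32)
conf_stereo = 1.0 - conf_tof
fused = fuse_depth(depth_tof, conf_tof, depth_stereo, conf_stereo)

The interesting part of such a pipeline is how the confidences are predicted; the weighted blend itself stays this simple.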
Lai, Po Kong. "Immersive Dynamic Scenes for Virtual Reality from a Single RGB-D Camera." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39663.
Yalcin Bayramoglu, Neslihan. "Range Data Recognition: Segmentation, Matching, and Similarity Retrieval." PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613586/index.pdf.
… however, there is still a gap in 3D semantic analysis between the requirements of applications and the results obtained. In this thesis, we study the 3D semantic analysis of range data. Under this broad title, we address the segmentation of range scenes, correspondence matching of range images, and similarity retrieval of range models; inputs are considered to be single-view depth images. First, possible research topics related to 3D semantic analysis are introduced. Planar structure detection in range scenes is analyzed and some modifications to available methods are proposed, and a novel algorithm that segments a 3D point cloud (obtained with a ToF camera) into objects using spatial information is presented. We then propose a novel local range-image matching method that combines 3D surface properties with the 2D scale-invariant feature transform. Next, our proposal for retrieving similar models, where both the query and the database consist only of range models, is presented. Finally, an analysis of the heat diffusion process on range data is presented, together with the challenges involved and some experimental results.
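For the planar-structure detection mentioned in this abstract, a common baseline (not necessarily the exact method analyzed or modified in the thesis) is a RANSAC plane fit on the range points. A minimal Python sketch of that textbook baseline follows; the iteration count and inlier threshold are assumed values.

import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, seed=0):
    """Return (normal, d, inlier_mask) for the dominant plane n·x + d = 0 in an (N, 3) cloud."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                           # degenerate (nearly collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        distances = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = distances < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

# Example: a noisy ground plane plus random clutter.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, 2000), rng.uniform(-5, 5, 2000),
                          rng.normal(0.0, 0.005, 2000)])
clutter = rng.uniform(-5, 5, (500, 3))
normal, d, mask = ransac_plane(np.vstack([ground, clutter]))

Removing the inliers of the dominant plane and repeating the fit is a simple way to extract several planar structures from a single range scene.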
Book chapters on the topic "3D semantic scene completion"
Ding, Laiyan, Panwen Hu, Jie Li, and Rui Huang. "Towards Balanced RGB-TSDF Fusion for Consistent Semantic Scene Completion by 3D RGB Feature Completion and a Classwise Entropy Loss Function." In Pattern Recognition and Computer Vision, 128–41. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8432-9_11.
Romero-González, Cristina, Jesus Martínez-Gómez, and Ismael García-Varea. "3D Semantic Maps for Scene Segmentation." In ROBOT 2017: Third Iberian Robotics Conference, 603–12. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70833-1_49.
Zhang, Jiahui, Hao Zhao, Anbang Yao, Yurong Chen, Li Zhang, and Hongen Liao. "Efficient Semantic Scene Completion Network with Spatial Group Convolution." In Computer Vision – ECCV 2018, 749–65. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01258-8_45.
Akadas, Kiran, and Shankar Gangisetty. "3D Semantic Segmentation for Large-Scale Scene Understanding." In Computer Vision – ACCV 2020 Workshops, 87–102. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69756-3_7.
Dai, Angela, and Matthias Nießner. "3DMV: Joint 3D-Multi-view Prediction for 3D Semantic Scene Segmentation." In Computer Vision – ECCV 2018, 458–74. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01249-6_28.
Henlein, Alexander, Attila Kett, Daniel Baumartz, Giuseppe Abrami, Alexander Mehler, Johannes Bastian, Yannic Blecher, et al. "Semantic Scene Builder: Towards a Context Sensitive Text-to-3D Scene Framework." In Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, 461–79. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-35748-0_32.
Wang, Jianan, Hanyu Xuan, and Zhiliang Wu. "Semantic-Guided Completion Network for Video Inpainting in Complex Urban Scene." In Pattern Recognition and Computer Vision, 224–36. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8552-4_18.
Srinivasan, Sharadha, Shreya Kumar, Vallikannu Chockalingam, and Chitrakala S. "3DSRASG: 3D Scene Retrieval and Augmentation Using Semantic Graphs." In Progress in Artificial Intelligence, 313–24. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86230-5_25.
Bultmann, Simon, and Sven Behnke. "3D Semantic Scene Perception Using Distributed Smart Edge Sensors." In Intelligent Autonomous Systems 17, 313–29. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-22216-0_22.
Cao, Chuqi, Mohammad Rafiq Swash, and Hongying Meng. "Semantic 3D Scene Classification Based on Holoscopic 3D Camera for Autonomous Vehicles." In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery, 897–904. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70665-4_96.
Texte intégralActes de conférences sur le sujet "3D semantic scene completion"
Garbade, Martin, Yueh-Tung Chen, Johann Sawatzky, and Juergen Gall. "Two Stream 3D Semantic Scene Completion." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2019. http://dx.doi.org/10.1109/cvprw.2019.00055.
Cao, Anh-Quan, and Raoul de Charette. "MonoScene: Monocular 3D Semantic Scene Completion." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00396.
Wang, Yida, David Joseph Tan, Nassir Navab, and Federico Tombari. "Adversarial Semantic Scene Completion from a Single Depth Image." In 2018 International Conference on 3D Vision (3DV). IEEE, 2018. http://dx.doi.org/10.1109/3dv.2018.00056.
Wu, Shun-Cheng, Keisuke Tateno, Nassir Navab, and Federico Tombari. "SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion." In 2020 International Conference on 3D Vision (3DV). IEEE, 2020. http://dx.doi.org/10.1109/3dv50981.2020.00090.
Li, Jie, Kai Han, Peng Wang, Yu Liu, and Xia Yuan. "Anisotropic Convolutional Networks for 3D Semantic Scene Completion." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00341.
Li, Jie, Laiyan Ding, and Rui Huang. "IMENet: Joint 3D Semantic Scene Completion and 2D Semantic Segmentation through Iterative Mutual Enhancement." In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/110.
Zhang, Pingping, Wei Liu, Yinjie Lei, Huchuan Lu, and Xiaoyun Yang. "Cascaded Context Pyramid for Full-Resolution 3D Semantic Scene Completion." In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00789.
Dourado, Aloisio, Frederico Guth, and Teofilo de Campos. "Data Augmented 3D Semantic Scene Completion with 2D Segmentation Priors." In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2022. http://dx.doi.org/10.1109/wacv51458.2022.00076.
Yao, Jiawei, Chuming Li, Keqiang Sun, Yingjie Cai, Hao Li, Wanli Ouyang, and Hongsheng Li. "NDC-Scene: Boost Monocular 3D Semantic Scene Completion in Normalized Device Coordinates Space." In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00867.
Guo, Yuxiao, and Xin Tong. "View-Volume Network for Semantic Scene Completion from a Single Depth Image." In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/101.