Ready-made bibliography on the topic "Gaussian splatting"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Gaussian splatting".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online whenever the relevant details are available in the work's metadata.
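For readers who want to automate this kind of style conversion offline, the sketch below shows how journal-article metadata might be rendered into different citation styles. It is an illustrative approximation only, not this service's implementation: the format_reference function, the field names, and the simplified APA/MLA templates are assumptions; the sample record is taken from the first entry listed below.

# Hypothetical sketch of automatic citation formatting from article metadata.
# The field names and the simplified APA/MLA templates are illustrative
# assumptions, not the full rules of either style guide.

def format_reference(meta: dict, style: str = "APA") -> str:
    """Render a journal-article record as a citation string in the chosen style."""
    authors = meta["authors"]           # e.g. ["Radl, Lukas", "Steinberger, Markus"]
    title, journal = meta["title"], meta["journal"]
    volume, issue = meta["volume"], meta["issue"]
    year, pages, doi = meta["year"], meta["pages"], meta["doi"]

    if style == "APA":
        return (f"{', '.join(authors)} ({year}). {title}. "
                f"{journal}, {volume}({issue}), {pages}. {doi}")
    if style == "MLA":
        return (f'{authors[0]}, et al. "{title}." {journal}, '
                f"vol. {volume}, no. {issue}, {year}, pp. {pages}.")
    raise ValueError(f"Unsupported citation style: {style}")

record = {
    "authors": ["Radl, Lukas", "Steiner, Michael", "Steinberger, Markus"],
    "year": 2024,
    "title": "StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering",
    "journal": "ACM Transactions on Graphics",
    "volume": 43,
    "issue": 4,
    "pages": "1-17",
    "doi": "http://dx.doi.org/10.1145/3658187",
}
print(format_reference(record, "APA"))
print(format_reference(record, "MLA"))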
Journal articles on the topic "Gaussian splatting"
Radl, Lukas, Michael Steiner, Mathias Parger, Alexander Weinrauch, Bernhard Kerbl, and Markus Steinberger. "StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering." ACM Transactions on Graphics 43, no. 4 (July 19, 2024): 1–17. http://dx.doi.org/10.1145/3658187.
Smirnov, A. O. "Camera Pose Estimation Using a 3D Gaussian Splatting Radiance Field." Kibernetika i vyčislitelʹnaâ tehnika 216, no. 2(216) (June 26, 2024): 15–25. http://dx.doi.org/10.15407/kvt216.02.015.
Gao, Lin, Jie Yang, Bo-Tao Zhang, Jia-Mu Sun, Yu-Jie Yuan, Hongbo Fu, and Yu-Kun Lai. "Real-time Large-scale Deformation of Gaussian Splatting." ACM Transactions on Graphics 43, no. 6 (November 19, 2024): 1–17. http://dx.doi.org/10.1145/3687756.
Jäger, Miriam, Theodor Kapler, Michael Feßenbecker, Felix Birkelbach, Markus Hillemann, and Boris Jutzi. "HoloGS: Instant Depth-based 3D Gaussian Splatting with Microsoft HoloLens 2." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2-2024 (June 11, 2024): 159–66. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-2024-159-2024.
Chen, Meida, Devashish Lal, Zifan Yu, Jiuyi Xu, Andrew Feng, Suya You, Abdul Nurunnabi, and Yangming Shi. "Large-Scale 3D Terrain Reconstruction Using 3D Gaussian Splatting for Visualization and Simulation." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2-2024 (June 11, 2024): 49–54. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-2024-49-2024.
Du, Yu, Zhisheng Zhang, Peng Zhang, Fuchun Sun, and Xiao Lv. "UDR-GS: Enhancing Underwater Dynamic Scene Reconstruction with Depth Regularization." Symmetry 16, no. 8 (August 8, 2024): 1010. http://dx.doi.org/10.3390/sym16081010.
Lyu, Xiaoyang, Yang-Tian Sun, Yi-Hua Huang, Xiuzhe Wu, Ziyi Yang, Yilun Chen, Jiangmiao Pang, and Xiaojuan Qi. "3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting." ACM Transactions on Graphics 43, no. 6 (November 19, 2024): 1–12. http://dx.doi.org/10.1145/3687952.
Smirnov, Anton O. "Dynamic map management for Gaussian Splatting SLAM." Control Systems and Computers, no. 2 (306) (July 2024): 3–9. http://dx.doi.org/10.15407/csc.2024.02.003.
Kerbl, Bernhard, Andreas Meuleman, Georgios Kopanas, Michael Wimmer, Alexandre Lanvin, and George Drettakis. "A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets." ACM Transactions on Graphics 43, no. 4 (July 19, 2024): 1–15. http://dx.doi.org/10.1145/3658160.
Dong, Zheng, Ke Xu, Yaoan Gao, Hujun Bao, Weiwei Xu, and Rynson W. H. Lau. "Gaussian Surfel Splatting for Live Human Performance Capture." ACM Transactions on Graphics 43, no. 6 (November 19, 2024): 1–17. http://dx.doi.org/10.1145/3687993.
Pełny tekst źródłaRozprawy doktorskie na temat "Gaussian splatting"
Dey, Arnab. "Rendu neuronal pour la représentation humaine en 3D avec des caractéristiques biomécaniques." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4036.
The digital representation of real-world scenes, particularly human subjects, has long been a significant area of research due to its wide-ranging applications. Realistic virtual human avatars are critical for medical diagnosis, augmented reality/virtual reality (AR/VR), and the entertainment industry. These avatars must accurately represent human geometry, texture, and biomechanical properties. This thesis addresses these topics by introducing techniques for efficiently generating highly realistic virtual human avatars that capture both external visual features and underlying biomechanical properties using neural rendering.

Neural rendering techniques, particularly Neural Radiance Fields (NeRF) and Gaussian splatting, have recently shown great potential for generating photorealistic 3D scene representations from multiview images. Neural rendering has become an attractive choice for the 3D reconstruction community, not only because of its impressive photorealistic quality but also because of its simplicity, which has made it popular for 3D human reconstruction as well as general scene representation. However, early NeRF methods often struggled to estimate accurate 3D geometry and lacked additional properties such as structural human features and pose information. Building on the benefits of neural rendering, this thesis proposes novel approaches that address these limitations, enabling the generation of accurate 3D human avatars with biomechanical properties in real time.

First, we address NeRF's inaccurate geometry and long training times by proposing Mip-NeRF RGB-D, a novel approach that leverages depth information to reduce training time and improve geometry, thereby enhancing the performance of NeRF-based techniques. Second, we turn to NeRF-based human representation and introduce GHNeRF, a method designed to learn 2D and 3D joint locations of human subjects within the NeRF framework. GHNeRF uses pre-trained 2D image encoders to extract essential human features from 2D images, which are then integrated into the NeRF framework to estimate crucial biomechanical properties. Finally, we propose HFGaussian, a technique for generating virtual humans with 3D pose and biomechanical features in real time using Gaussian splatting. HFGaussian employs image encoders to extract relevant human features and a 3D pose estimation network to predict 3D human pose. The proposed methods show significant improvements in estimating photometric, geometric, and biomechanical properties through neural rendering.

The techniques presented in this thesis aim to enable the development of highly realistic virtual human avatars, allowing for more engaging and natural user experiences in virtual environments. These methods also have substantial potential in other domains, such as medical applications including diagnosis, surgical planning, patient education, and biomechanical analysis.
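Many of the works collected in this bibliography build on the point-based rendering core of 3D Gaussian splatting: projected, depth-sorted Gaussians are alpha-composited front to back at each pixel. The sketch below is a simplified illustration of that compositing step only; it assumes the Gaussians have already been projected, sorted, and evaluated at the pixel, and the composite_pixel function and its data layout are my own simplification, not the implementation of any cited paper.

# Simplified sketch of the front-to-back alpha compositing used in 3D Gaussian
# splatting rendering: C = sum_i c_i * alpha_i * prod_{j<i} (1 - alpha_j).
# Illustration only; not the implementation of any paper listed here.

def composite_pixel(gaussians):
    """Blend depth-sorted, 2D-projected Gaussians into a single RGB pixel.

    `gaussians` is a list of (color, alpha) pairs sorted front to back,
    where `color` is an (r, g, b) tuple and `alpha` is the Gaussian's
    opacity evaluated at this pixel.
    """
    pixel = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet absorbed by closer splats
    for color, alpha in gaussians:
        weight = alpha * transmittance
        for c in range(3):
            pixel[c] += weight * color[c]
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:  # early termination, as in tile-based rasterizers
            break
    return tuple(pixel)

# Example: two overlapping splats, a red one in front of a blue one.
print(composite_pixel([((1.0, 0.0, 0.0), 0.6), ((0.0, 0.0, 1.0), 0.8)]))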
Book chapters on the topic "Gaussian splatting"
Lee, Byeonghyeon, Howoong Lee, Xiangyu Sun, Usman Ali, and Eunbyung Park. "Deblurring 3D Gaussian Splatting." In Lecture Notes in Computer Science, 127–43. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73636-0_8.
Zhao, Lingzhe, Peng Wang, and Peidong Liu. "BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting." In Lecture Notes in Computer Science, 233–50. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72698-9_14.
Rota Bulò, Samuel, Lorenzo Porzi, and Peter Kontschieder. "Revising Densification in Gaussian Splatting." In Lecture Notes in Computer Science, 347–62. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73036-8_20.
Liang, Zhihao, Qi Zhang, Wenbo Hu, Lei Zhu, Ying Feng, and Kui Jia. "Analytic-Splatting: Anti-Aliased 3D Gaussian Splatting via Analytic Integration." In Lecture Notes in Computer Science, 281–97. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72643-9_17.
Wang, Yuxuan, Xuanyu Yi, Zike Wu, Na Zhao, Long Chen, and Hanwang Zhang. "View-Consistent 3D Editing with Gaussian Splatting." In Lecture Notes in Computer Science, 404–20. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72761-0_23.
Chang, Jiahao, Yinglin Xu, Yihao Li, Yuantao Chen, Wensen Feng, and Xiaoguang Han. "GaussReg: Fast 3D Registration with Gaussian Splatting." In Lecture Notes in Computer Science, 407–23. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72633-0_23.
Bae, Jeongmin, Seoha Kim, Youngsik Yun, Hahyun Lee, Gun Bang, and Youngjung Uh. "Per-Gaussian Embedding-Based Deformation for Deformable 3D Gaussian Splatting." In Lecture Notes in Computer Science, 321–35. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72633-0_18.
Bonilla, Sierra, Shuai Zhang, Dimitrios Psychogyios, Danail Stoyanov, Francisco Vasconcelos, and Sophia Bano. "Gaussian Pancakes: Geometrically-Regularized 3D Gaussian Splatting for Realistic Endoscopic Reconstruction." In Lecture Notes in Computer Science, 274–83. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72089-5_26.
Zhang, Dongbin, Chuming Wang, Weitao Wang, Peihao Li, Minghan Qin, and Haoqian Wang. "Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections." In Lecture Notes in Computer Science, 341–59. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73116-7_20.
Li, Yanyan, Chenyu Lyu, Yan Di, Guangyao Zhai, Gim Hee Lee, and Federico Tombari. "GeoGaussian: Geometry-Aware Gaussian Splatting for Scene Rendering." In Lecture Notes in Computer Science, 441–57. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72761-0_25.
Pełny tekst źródłaStreszczenia konferencji na temat "Gaussian splatting"
Matsuki, Hidenobu, Riku Murai, Paul H. J. Kelly, and Andrew J. Davison. "Gaussian Splatting SLAM." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 18039–48. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01708.
Yu, Zehao, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. "Mip-Splatting: Alias-Free 3D Gaussian Splatting." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19447–56. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01839.
Yu, Heng, Joel Julin, Zoltán Á. Milacski, Koichiro Niinuma, and László A. Jeni. "CoGS: Controllable Gaussian Splatting." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21624–33. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02043.
Qin, Minghan, Wanhua Li, Jiawei Zhou, Haoqian Wang, and Hanspeter Pfister. "LangSplat: 3D Language Gaussian Splatting." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 20051–60. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01895.
Deguchi, Hiroyuki, Mana Masuda, Takuya Nakabayashi, and Hideo Saito. "E2GS: Event Enhanced Gaussian Splatting." In 2024 IEEE International Conference on Image Processing (ICIP), 1676–82. IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10647607.
Chen, Zilong, Feng Wang, Yikai Wang, and Huaping Liu. "Text-to-3D using Gaussian Splatting." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21401–12. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02022.
Hornáček, Martin, and Gregor Rozinaj. "Exploring 3D Gaussian Splatting: An Algorithmic Perspective." In 2024 International Symposium ELMAR, 149–52. IEEE, 2024. http://dx.doi.org/10.1109/elmar62909.2024.10693978.
Liang, Zhihao, Qi Zhang, Ying Feng, Ying Shan, and Kui Jia. "GS-IR: 3D Gaussian Splatting for Inverse Rendering." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21644–53. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02045.
Zhang, Jiahui, Fangneng Zhan, Muyu Xu, Shijian Lu, and Eric Xing. "FreGS: 3D Gaussian Splatting with Progressive Frequency Regularization." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21424–33. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02024.
Kung, Pou-Chun, Seth Isaacson, Ram Vasudevan, and Katherine A. Skinner. "SAD-GS: Shape-aligned Depth-supervised Gaussian Splatting." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2842–51. IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00290.