Academic literature on the topic "Gaussian splatting"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Gaussian splatting".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Gaussian splatting"

1

Radl, Lukas, Michael Steiner, Mathias Parger, Alexander Weinrauch, Bernhard Kerbl, and Markus Steinberger. "StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering". ACM Transactions on Graphics 43, no. 4 (July 19, 2024): 1–17. http://dx.doi.org/10.1145/3658187.

Abstract
Gaussian Splatting has emerged as a prominent model for constructing 3D representations from images across diverse domains. However, the efficiency of the 3D Gaussian Splatting rendering pipeline relies on several simplifications. Notably, reducing 3D Gaussians to 2D splats with a single view-space depth introduces popping and blending artifacts during view rotation. Addressing this issue requires accurate per-pixel depth computation, yet a full per-pixel sort proves excessively costly compared to a global sort operation. In this paper, we present a novel hierarchical rasterization approach that systematically resorts and culls splats with minimal processing overhead. Our software rasterizer effectively eliminates popping artifacts and view inconsistencies, as demonstrated through both quantitative and qualitative measurements. Simultaneously, our method mitigates the potential for cheating view-dependent effects with popping, ensuring a more authentic representation. Despite the elimination of cheating, our approach achieves comparable quantitative results for test images, while increasing the consistency for novel view synthesis in motion. Due to its design, our hierarchical approach is only 4% slower on average than the original Gaussian Splatting. Notably, enforcing consistency enables a reduction in the number of Gaussians by approximately half with nearly identical quality and view-consistency. Consequently, rendering performance is nearly doubled, making our approach 1.6x faster than the original Gaussian Splatting, with a 50% reduction in memory requirements. Our renderer is publicly available at https://github.com/r4dl/StopThePop.
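The popping the authors describe comes from ordering whole splats by a single view-space depth. The quantity a per-pixel (per-ray) sort would order by instead is the depth at which each Gaussian contributes most along the ray, which has a closed form: setting the derivative of the Mahalanobis distance along the ray o + t·d to zero gives t = dᵀΣ⁻¹(μ − o) / dᵀΣ⁻¹d. A minimal NumPy sketch of that formula only, not of the authors' hierarchical rasterizer; all names are illustrative:

```python
import numpy as np

def max_contribution_depth(o, d, mu, sigma_inv):
    """Depth t along the ray o + t*d at which a 3D Gaussian with mean mu
    and inverse covariance sigma_inv contributes most, i.e. the minimizer
    of (o + t*d - mu)^T Sigma^-1 (o + t*d - mu)."""
    return (d @ sigma_inv @ (mu - o)) / (d @ sigma_inv @ d)

# Example: isotropic Gaussian roughly two units in front of the camera.
o = np.zeros(3)                    # camera origin
d = np.array([0.0, 0.0, 1.0])      # unit-length view ray direction
mu = np.array([0.1, 0.0, 2.0])     # Gaussian mean
sigma_inv = np.eye(3)              # inverse covariance
print(max_contribution_depth(o, d, mu, sigma_inv))  # -> 2.0
```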
2

Smirnov, A. O. "Camera Pose Estimation Using a 3D Gaussian Splatting Radiance Field". Kibernetika i vyčislitelʹnaâ tehnika 216, no. 2 (216) (June 26, 2024): 15–25. http://dx.doi.org/10.15407/kvt216.02.015.

Abstract
Introduction. Accurate camera pose estimation is crucial for many applications ranging from robotics to virtual and augmented reality. The process of determining an agent's pose from a set of observations is called odometry. This work focuses on visual odometry, which uses only camera images as input data. The purpose of the paper is to demonstrate an approach for small-scale camera pose estimation using 3D Gaussians as the environment representation. Methods. Given the rise of neural volumetric representations for environment reconstruction, this work relies on the Gaussian Splatting algorithm for high-fidelity volumetric representation. Results. For a trained Gaussian Splatting model and a target image unseen during training, we estimate its camera pose using differentiable rendering and gradient-based optimization methods. Gradients with respect to camera pose are computed directly from an image-space per-pixel loss function via backpropagation. The choice of Gaussian Splatting as representation is particularly appealing because it allows for end-to-end estimation and removes several stages that are common in more classical algorithms. And differentiable rasterization as the image formation algorithm provides real-time performance, which facilitates its use in real-world applications. Conclusions. This end-to-end approach greatly simplifies camera pose estimation, avoiding the compounding errors that are common in multi-stage algorithms, and provides high-quality camera pose estimation. Keywords: radiance fields, scientific computing, odometry, SLAM, pose estimation, Gaussian Splatting, differentiable rendering.
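The core loop is plain gradient descent on the camera pose through a differentiable image-formation step. The PyTorch sketch below substitutes a toy differentiable "renderer" (rigidly transforming Gaussian centers) for the full rasterizer so it runs on its own; the actual method backpropagates a per-pixel photometric loss through differentiable rasterization of whole images. All names and values here are illustrative:

```python
import torch

def skew(w):
    """Skew-symmetric matrix of an axis-angle vector w, so that
    torch.matrix_exp(skew(w)) is the corresponding rotation matrix."""
    wx, wy, wz = w
    S = torch.zeros(3, 3, dtype=w.dtype)
    S[0, 1], S[0, 2] = -wz, wy
    S[1, 0], S[1, 2] = wz, -wx
    S[2, 0], S[2, 1] = -wy, wx
    return S

# Stand-in scene: Gaussian centers and their positions under an unknown pose.
points = torch.randn(100, 3)
R_true = torch.matrix_exp(skew(torch.tensor([0.1, -0.2, 0.05])))
target = points @ R_true.T + torch.tensor([0.3, 0.0, -0.1])

w = torch.zeros(3, requires_grad=True)   # rotation (axis-angle)
t = torch.zeros(3, requires_grad=True)   # translation
opt = torch.optim.Adam([w, t], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    R = torch.matrix_exp(skew(w))
    loss = ((points @ R.T + t - target) ** 2).mean()  # per-"pixel" loss
    loss.backward()                                   # gradients w.r.t. pose
    opt.step()
print(loss.item())  # near zero: pose recovered by gradient descent
```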
3

Gao, Lin, Jie Yang, Bo-Tao Zhang, Jia-Mu Sun, Yu-Jie Yuan, Hongbo Fu, and Yu-Kun Lai. "Real-time Large-scale Deformation of Gaussian Splatting". ACM Transactions on Graphics 43, no. 6 (November 19, 2024): 1–17. http://dx.doi.org/10.1145/3687756.

Abstract
Neural implicit representations, including Neural Distance Fields and Neural Radiance Fields, have demonstrated significant capabilities for reconstructing surfaces with complicated geometry and topology, and generating novel views of a scene. Nevertheless, it is challenging for users to directly deform or manipulate these implicit representations with large deformations in a real-time fashion. Gaussian Splatting (GS) has recently become a promising method with explicit geometry for representing static scenes and facilitating high-quality and real-time synthesis of novel views. However, it cannot be easily deformed due to the use of discrete Gaussians and the lack of explicit topology. To address this, we develop a novel GS-based method (GaussianMesh) that enables interactive deformation. Our key idea is to design an innovative mesh-based GS representation, which is integrated into Gaussian learning and manipulation. 3D Gaussians are defined over an explicit mesh, and they are bound with each other: the rendering of 3D Gaussians guides the mesh face split for adaptive refinement, and the mesh face split directs the splitting of 3D Gaussians. Moreover, the explicit mesh constraints help regularize the Gaussian distribution, suppressing poor-quality Gaussians (e.g., misaligned Gaussians, long and narrow Gaussians), thus enhancing visual quality and reducing artifacts during deformation. Based on this representation, we further introduce a large-scale Gaussian deformation technique to enable deformable GS, which alters the parameters of 3D Gaussians according to the manipulation of the associated mesh. Our method benefits from existing mesh deformation datasets for more realistic data-driven Gaussian deformation. Extensive experiments show that our approach achieves high-quality reconstruction and effective deformation, while maintaining the promising rendering results at a high frame rate (65 FPS on average on a single commodity GPU).
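The binding between mesh and Gaussians can be pictured as Gaussians parameterized in face (barycentric) coordinates, so that moving the mesh moves the Gaussians for free. A minimal NumPy sketch of that binding idea only; the paper additionally splits faces and Gaussians adaptively and regularizes Gaussian shapes, and all names here are illustrative:

```python
import numpy as np

def bind_gaussians_to_faces(vertices, faces, bary):
    """Place one Gaussian mean on each mesh face at fixed barycentric
    coordinates; re-evaluating after the mesh deforms moves the Gaussians
    with it (a minimal version of mesh-driven Gaussian deformation)."""
    tri = vertices[faces]                    # (F, 3, 3) triangle corners
    return np.einsum('fk,fkd->fd', bary, tri)

vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
faces = np.array([[0, 1, 2], [1, 3, 2]])
bary = np.full((2, 3), 1.0 / 3.0)            # one Gaussian per face centroid

means = bind_gaussians_to_faces(vertices, faces, bary)
vertices[:, 2] += vertices[:, 0] ** 2        # deform the mesh (bend in z)
means_deformed = bind_gaussians_to_faces(vertices, faces, bary)
```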
4

Jäger, Miriam, Theodor Kapler, Michael Feßenbecker, Felix Birkelbach, Markus Hillemann, and Boris Jutzi. "HoloGS: Instant Depth-based 3D Gaussian Splatting with Microsoft HoloLens 2". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2-2024 (June 11, 2024): 159–66. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-2024-159-2024.

Abstract
In the fields of photogrammetry, computer vision and computer graphics, the task of neural 3D scene reconstruction has led to the exploration of various techniques. Among these, 3D Gaussian Splatting stands out for its explicit representation of scenes using 3D Gaussians, making it appealing for tasks like 3D point cloud extraction and surface reconstruction. Motivated by its potential, we address the domain of 3D scene reconstruction, aiming to leverage the capabilities of the Microsoft HoloLens 2 for instant 3D Gaussian Splatting. We present HoloGS, a novel workflow utilizing HoloLens sensor data, which bypasses the need for pre-processing steps like Structure from Motion by instantly accessing the required input data, i.e., the images, camera poses and the point cloud from depth sensing. We provide comprehensive investigations, including the training process and the rendering quality, assessed through the Peak Signal-to-Noise Ratio, and the geometric 3D accuracy of the densified point cloud from Gaussian centers, measured by Chamfer Distance. We evaluate our approach on two self-captured scenes: an outdoor scene of a cultural heritage statue and an indoor scene of a fine-structured plant. Our results show that the HoloLens data, including RGB images, corresponding camera poses, and depth-sensing-based point clouds to initialize the Gaussians, are suitable as input for 3D Gaussian Splatting.
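Chamfer Distance, used here to measure the geometric accuracy of the densified point cloud, can be computed with two nearest-neighbor queries. A sketch of one common (unsquared, mean-aggregated) variant, assuming SciPy is available; the paper does not state which exact variant it uses:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3):
    mean nearest-neighbor distance from a to b plus from b to a."""
    d_ab, _ = cKDTree(b).query(a)   # nearest point in b for each point in a
    d_ba, _ = cKDTree(a).query(b)   # and vice versa
    return d_ab.mean() + d_ba.mean()

a = np.random.rand(1000, 3)
b = a + np.random.normal(scale=0.01, size=a.shape)
print(chamfer_distance(a, b))       # small for near-identical clouds
```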
5

Chen, Meida, Devashish Lal, Zifan Yu, Jiuyi Xu, Andrew Feng, Suya You, Abdul Nurunnabi, and Yangming Shi. "Large-Scale 3D Terrain Reconstruction Using 3D Gaussian Splatting for Visualization and Simulation". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2-2024 (June 11, 2024): 49–54. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-2024-49-2024.

Abstract
The fusion of low-cost unmanned aerial systems (UAS) with advanced photogrammetric techniques has revolutionized 3D terrain reconstruction, enabling the automated creation of detailed models. Concurrently, the advent of 3D Gaussian Splatting has introduced a paradigm shift in 3D data representation, offering visually realistic renditions distinct from traditional polygon-based models. Our research builds upon this foundation, aiming to integrate Gaussian Splatting into interactive simulations for immersive virtual environments. We address challenges such as collision detection by adopting a hybrid approach, combining Gaussian Splatting with photogrammetry-derived meshes. Through comprehensive experimentation covering varying terrain sizes and Gaussian densities, we evaluate scalability, performance, and limitations. Our findings contribute to advancing the use of advanced computer graphics techniques for enhanced 3D terrain visualization and simulation.
6

Du, Yu, Zhisheng Zhang, Peng Zhang, Fuchun Sun, and Xiao Lv. "UDR-GS: Enhancing Underwater Dynamic Scene Reconstruction with Depth Regularization". Symmetry 16, no. 8 (August 8, 2024): 1010. http://dx.doi.org/10.3390/sym16081010.

Abstract
Representing and rendering dynamic underwater scenes present significant challenges due to the medium’s inherent properties, which result in image blurring and information ambiguity. To overcome these challenges and accomplish real-time rendering of dynamic underwater environments while maintaining efficient training and storage, we propose Underwater Dynamic Scene Reconstruction Gaussian Splatting (UDR-GS), a method based on Gaussian Splatting. By leveraging prior information from a pre-trained depth estimation model and smoothness constraints between adjacent images, our approach uses the estimated depth as a geometric prior to aid in color-based optimization, significantly reducing artifacts and improving geometric accuracy. By integrating depth guidance into the Gaussian Splatting (GS) optimization process, we achieve more precise geometric estimations. To ensure higher stability, smoothness constraints are applied between adjacent images, maintaining consistent depth for neighboring 3D points in the absence of boundary conditions. The symmetry concept is inherently applied in our method by maintaining uniform depth and color information across multiple viewpoints, which enhances the reconstruction quality and visual coherence. Using 4D Gaussian Splatting (4DGS) as a baseline, our strategy demonstrates superior performance in both RGB novel view synthesis and 3D geometric reconstruction. On average, across multiple datasets, our method shows an improvement of approximately 1.41% in PSNR and a 0.75% increase in SSIM compared with the baseline 4DGS method, significantly enhancing the visual quality and geometric fidelity of dynamic underwater scenes.
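The optimization objective the abstract describes is a color loss augmented with a depth-prior term and a smoothness term. A hedged PyTorch sketch with illustrative weights; the spatial (within-image) smoothness shown here stands in for the paper's between-image constraint, and none of these names or values are the authors':

```python
import torch

def udr_style_loss(rendered_rgb, gt_rgb, rendered_depth, prior_depth,
                   lambda_depth=0.1, lambda_smooth=0.01):
    """Color loss plus (i) agreement with a pre-trained monocular depth
    prior and (ii) a smoothness term penalizing depth differences between
    neighboring pixels. Weights are illustrative, not the paper's."""
    l_color = (rendered_rgb - gt_rgb).abs().mean()
    l_depth = (rendered_depth - prior_depth).abs().mean()
    dx = (rendered_depth[:, 1:] - rendered_depth[:, :-1]).abs().mean()
    dy = (rendered_depth[1:, :] - rendered_depth[:-1, :]).abs().mean()
    return l_color + lambda_depth * l_depth + lambda_smooth * (dx + dy)

H, W = 64, 64
loss = udr_style_loss(torch.rand(H, W, 3), torch.rand(H, W, 3),
                      torch.rand(H, W), torch.rand(H, W))
```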
7

Lyu, Xiaoyang, Yang-Tian Sun, Yi-Hua Huang, Xiuzhe Wu, Ziyi Yang, Yilun Chen, Jiangmiao Pang, and Xiaojuan Qi. "3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting". ACM Transactions on Graphics 43, no. 6 (November 19, 2024): 1–12. http://dx.doi.org/10.1145/3687952.

Abstract
In this paper, we present an implicit surface reconstruction method with 3D Gaussian Splatting (3DGS), namely 3DGSR, that allows for accurate 3D reconstruction with intricate details while inheriting the high efficiency and rendering quality of 3DGS. The key insight is to incorporate an implicit signed distance field (SDF) within 3D Gaussians for surface modeling, and to enable the alignment and joint optimization of both SDF and 3D Gaussians. To achieve this, we design coupling strategies that align and associate the SDF with 3D Gaussians, allowing for unified optimization and enforcing surface constraints on the 3D Gaussians. With alignment, optimizing the 3D Gaussians provides supervisory signals for SDF learning, enabling the reconstruction of intricate details. However, this only offers sparse supervisory signals to the SDF at locations occupied by Gaussians, which is insufficient for learning a continuous SDF. To address this limitation, we incorporate volumetric rendering and align the rendered geometric attributes (depth, normal) with those derived from 3DGS. In sum, these two designs allow SDF and 3DGS to be aligned, jointly optimized, and mutually boosted. Our extensive experimental results demonstrate that our 3DGSR enables high-quality 3D surface reconstruction while preserving the efficiency and rendering quality of 3DGS. Besides, our method competes favorably with leading surface reconstruction techniques while offering a more efficient learning process and much better rendering quality.
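The "sparse supervision at locations occupied by Gaussians" can be pictured as pulling an SDF network toward its zero level set at Gaussian centers, with an eikonal term keeping the field well behaved. A minimal PyTorch sketch of that coupling idea only; the paper's full method also ties SDF values to Gaussian opacity and aligns volume-rendered depth and normals, which is omitted here, and the data and network are stand-ins:

```python
import torch
import torch.nn as nn

# Stand-in data: Gaussian centers assumed to lie near the true surface.
centers = torch.rand(2048, 3) * 2 - 1

sdf = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(sdf.parameters(), lr=1e-3)

for step in range(200):
    opt.zero_grad()
    # Sparse supervision: the SDF should vanish where Gaussians sit.
    surface_loss = sdf(centers).abs().mean()
    # Eikonal term on random points keeps |grad f| near 1 everywhere,
    # so the field stays a valid distance function between samples.
    pts = (torch.rand(1024, 3) * 2 - 1).requires_grad_(True)
    grad = torch.autograd.grad(sdf(pts).sum(), pts, create_graph=True)[0]
    eikonal_loss = ((grad.norm(dim=-1) - 1.0) ** 2).mean()
    loss = surface_loss + 0.1 * eikonal_loss
    loss.backward()
    opt.step()
```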
8

Smirnov, Anton O. "Dynamic Map Management for Gaussian Splatting SLAM". Control Systems and Computers, no. 2 (306) (July 2024): 3–9. http://dx.doi.org/10.15407/csc.2024.02.003.

Abstract
Map representation and management are at the core of Simultaneous Localization and Mapping (SLAM) systems. Being able to efficiently construct new keyframes (KFs), remove redundant ones, and build covisibility graphs has a direct impact on the performance and accuracy of SLAM. In this work we outline an algorithm for maintaining and managing a dynamic map for a SLAM system that uses Gaussian Splatting as the environment representation. Gaussian Splatting allows for high-fidelity photorealistic environment reconstruction using differentiable rasterization and is able to perform in real time, making it a great candidate for map representation in SLAM. Its end-to-end nature and gradient-based optimization significantly simplify map optimization, camera pose estimation, and keyframe management.
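Redundant-keyframe culling in such systems typically checks whether the map points a keyframe observes are already covered by enough other keyframes. A small self-contained sketch of one common rule of this kind; the threshold values, the sequential order of culling, and the exact criterion are illustrative and not necessarily the paper's:

```python
def cull_redundant_keyframes(keyframes, overlap=0.9):
    """Drop a keyframe when >= `overlap` of the map points it observes are
    also observed by at least three other keyframes. `keyframes` maps a
    keyframe id to the set of map-point ids it observes."""
    kept = dict(keyframes)
    for kf, pts in list(kept.items()):
        covered = [p for p in pts
                   if sum(p in o for k, o in kept.items() if k != kf) >= 3]
        if pts and len(covered) / len(pts) >= overlap:
            del kept[kf]          # everything this KF sees is well covered
    return kept

kfs = {0: {1, 2, 3, 4}, 1: {1, 2, 3, 4}, 2: {1, 2, 3, 4},
       3: {1, 2, 3, 4}, 4: {1, 2, 3, 9}}
print(sorted(cull_redundant_keyframes(kfs)))  # KF 0 is culled as redundant
```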
9

Kerbl, Bernhard, Andreas Meuleman, Georgios Kopanas, Michael Wimmer, Alexandre Lanvin, and George Drettakis. "A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets". ACM Transactions on Graphics 43, no. 4 (July 19, 2024): 1–15. http://dx.doi.org/10.1145/3658160.

Abstract
Novel view synthesis has seen major advances in recent years, with 3D Gaussian splatting offering an excellent level of visual quality, fast training and real-time rendering. However, the resources needed for training and rendering inevitably limit the size of the captured scenes that can be represented with good visual quality. We introduce a hierarchy of 3D Gaussians that preserves visual quality for very large scenes, while offering an efficient Level-of-Detail (LOD) solution for efficient rendering of distant content with effective level selection and smooth transitions between levels. We introduce a divide-and-conquer approach that allows us to train very large scenes in independent chunks. We consolidate the chunks into a hierarchy that can be optimized to further improve the visual quality of Gaussians merged into intermediate nodes. Very large captures typically have sparse coverage of the scene, presenting many challenges to the original 3D Gaussian splatting training method; we adapt and regularize training to account for these issues. We present a complete solution that enables real-time rendering of very large scenes and can adapt to available resources thanks to our LOD method. We show results for captured scenes with up to tens of thousands of images with a simple and affordable rig, covering trajectories of up to several kilometers and lasting up to one hour.
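A Level-of-Detail cut through such a hierarchy can be selected by comparing each node's projected screen-space size against a pixel threshold and descending only where a node is still too coarse. A minimal sketch of that selection; the paper adds smooth level transitions and optimized interior nodes, and the node layout below is purely illustrative:

```python
def select_lod(node, cam_pos, focal, pixel_threshold=1.0):
    """Collect the coarsest nodes whose projected extent falls below
    `pixel_threshold` pixels; recurse into children otherwise."""
    dist = sum((a - b) ** 2 for a, b in zip(node['center'], cam_pos)) ** 0.5
    projected = focal * node['extent'] / max(dist, 1e-6)  # pinhole model
    if projected < pixel_threshold or not node['children']:
        return [node]
    out = []
    for child in node['children']:
        out += select_lod(child, cam_pos, focal, pixel_threshold)
    return out

leaf = {'center': (0.0, 0.0, 10.0), 'extent': 0.05, 'children': []}
root = {'center': (0.0, 0.0, 10.0), 'extent': 1.0, 'children': [leaf]}
# Root projects to ~50 px at focal 500, so the cut descends to the leaf.
print(len(select_lod(root, (0, 0, 0), focal=500.0)))  # -> 1
```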
10

Dong, Zheng, Ke Xu, Yaoan Gao, Hujun Bao, Weiwei Xu, and Rynson W. H. Lau. "Gaussian Surfel Splatting for Live Human Performance Capture". ACM Transactions on Graphics 43, no. 6 (November 19, 2024): 1–17. http://dx.doi.org/10.1145/3687993.

Abstract
High-quality real-time rendering using user-affordable capture rigs is an essential property of human performance capture systems for real-world applications. However, state-of-the-art performance capture methods may not yield satisfactory rendering results under a very sparse (e.g., four) capture setting. Specifically, neural radiance field (NeRF)-based methods and 3D Gaussian Splatting (3DGS)-based methods tend to produce local geometry errors for unseen performers, while occupancy field (PIFu)-based methods often produce unrealistic rendering results. In this paper, we propose a novel generalizable neural approach to reconstruct and render the performers from very sparse RGBD streams in high quality. The core of our method is a novel point-based generalizable human (PGH) representation conditioned on the pixel-aligned RGBD features. The PGH representation learns a surface implicit function for the regression of surface points and a Gaussian implicit function for parameterizing the radiance fields of the regressed surface points with 2D Gaussian surfels, and uses surfel splatting for fast rendering. We learn this hybrid human representation via two novel networks. First, we propose a novel point-regressing network (PRNet) with a depth-guided point cloud initialization (DPI) method to regress an accurate surface point cloud based on the denoised depth information. Second, we propose a novel neural blending-based surfel splatting network (SPNet) to render high-quality geometries and appearances in novel views based on the regressed surface points and high-resolution RGBD features of adjacent views. Our method produces free-view human performance videos of 1K resolution at 12 fps on average. Experiments on two benchmarks show that our method outperforms state-of-the-art human performance capture methods.
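Whether splatting 3D Gaussians or the 2D Gaussian surfels used here, the per-pixel image formation is the same front-to-back alpha compositing over depth-sorted primitives, C = Σᵢ cᵢ αᵢ Πⱼ<ᵢ (1 − αⱼ). A minimal NumPy sketch of that compositing step (not of this paper's networks):

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted splats at one pixel:
    C = sum_i c_i * a_i * prod_{j<i} (1 - a_j)."""
    C = np.zeros(3)
    T = 1.0                       # accumulated transmittance
    for c, a in zip(colors, alphas):
        C += T * a * np.asarray(c, dtype=float)
        T *= 1.0 - a
        if T < 1e-4:              # early termination once nearly opaque
            break
    return C

print(composite_front_to_back([(1, 0, 0), (0, 1, 0)], [0.6, 0.5]))
# -> [0.6, 0.2, 0.0]
```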

Theses on the topic "Gaussian splatting"

1

Dey, Arnab. "Rendu neuronal pour la représentation humaine en 3D avec des caractéristiques biomécaniques". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4036.

Abstract
The digital representation of real-world scenes, particularly human subjects, has long been a significant area of research due to its wide-ranging applications in various domains. Realistic virtual human avatars are critical for applications in medical diagnosis, augmented reality/virtual reality (AR/VR), and the entertainment industry. These avatars must accurately represent human geometry, texture, and biomechanical properties. This thesis addresses these topics by introducing innovative techniques for efficiently generating highly realistic virtual human avatars that capture both external visual features and underlying biomechanical properties using neural rendering techniques.

Neural rendering techniques, particularly with the introduction of Neural Radiance Fields (NeRF) and Gaussian splatting, have recently shown great potential in generating photorealistic 3D scene representations from multiview images. Neural rendering has become an attractive choice for the 3D reconstruction community, not only because of its impressive photorealistic quality but also because of its simplicity, which has made it a popular choice for 3D human reconstruction as well as scene representation. However, early NeRF methods often struggled to estimate accurate 3D geometry and lacked additional properties such as structural human features and pose information. Building on the benefits of neural rendering techniques, this thesis proposes novel approaches to address these limitations, enabling the generation of accurate 3D human avatars with biomechanical properties in real time.

First, we address the broader issues of NeRF's inaccurate geometry and long training time by proposing Mip-NeRF RGB-D, a novel approach that leverages depth information to reduce training time and improve geometry, thereby enhancing the performance of NeRF-based techniques. Second, we focus on issues regarding NeRF-based human representation and introduce GHNeRF, a method designed to learn 2D and 3D joint locations of human subjects within the NeRF framework. GHNeRF uses pre-trained 2D image encoders to extract essential human features from 2D images, which are then integrated into the NeRF framework to estimate crucial biomechanical properties. Finally, we propose HFGaussian, a technique for generating virtual humans with 3D pose and biomechanical features in real time using a Gaussian splatting method. HFGaussian employs image encoders to extract relevant human features and a 3D pose estimation network to predict 3D human pose. The proposed methods show significant improvements in estimating photometric, geometric, and biomechanical properties through neural rendering techniques.

The techniques presented in this thesis aim to enable the development of highly realistic virtual human avatars, allowing for a more engaging and natural user experience in virtual environments. Furthermore, these methods have substantial potential in other domains such as medical applications, including diagnosis, surgical planning, patient education, and biomechanical analysis.

Book chapters on the topic "Gaussian splatting"

1

Lee, Byeonghyeon, Howoong Lee, Xiangyu Sun, Usman Ali, and Eunbyung Park. "Deblurring 3D Gaussian Splatting". In Lecture Notes in Computer Science, 127–43. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73636-0_8.

2

Zhao, Lingzhe, Peng Wang, and Peidong Liu. "BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting". In Lecture Notes in Computer Science, 233–50. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72698-9_14.

3

Rota Bulò, Samuel, Lorenzo Porzi, and Peter Kontschieder. "Revising Densification in Gaussian Splatting". In Lecture Notes in Computer Science, 347–62. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73036-8_20.

4

Liang, Zhihao, Qi Zhang, Wenbo Hu, Lei Zhu, Ying Feng, and Kui Jia. "Analytic-Splatting: Anti-Aliased 3D Gaussian Splatting via Analytic Integration". In Lecture Notes in Computer Science, 281–97. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72643-9_17.

5

Wang, Yuxuan, Xuanyu Yi, Zike Wu, Na Zhao, Long Chen, and Hanwang Zhang. "View-Consistent 3D Editing with Gaussian Splatting". In Lecture Notes in Computer Science, 404–20. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72761-0_23.

6

Chang, Jiahao, Yinglin Xu, Yihao Li, Yuantao Chen, Wensen Feng, and Xiaoguang Han. "GaussReg: Fast 3D Registration with Gaussian Splatting". In Lecture Notes in Computer Science, 407–23. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72633-0_23.

7

Bae, Jeongmin, Seoha Kim, Youngsik Yun, Hahyun Lee, Gun Bang, and Youngjung Uh. "Per-Gaussian Embedding-Based Deformation for Deformable 3D Gaussian Splatting". In Lecture Notes in Computer Science, 321–35. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72633-0_18.

8

Bonilla, Sierra, Shuai Zhang, Dimitrios Psychogyios, Danail Stoyanov, Francisco Vasconcelos, and Sophia Bano. "Gaussian Pancakes: Geometrically-Regularized 3D Gaussian Splatting for Realistic Endoscopic Reconstruction". In Lecture Notes in Computer Science, 274–83. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72089-5_26.

9

Zhang, Dongbin, Chuming Wang, Weitao Wang, Peihao Li, Minghan Qin, and Haoqian Wang. "Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections". In Lecture Notes in Computer Science, 341–59. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73116-7_20.

10

Li, Yanyan, Chenyu Lyu, Yan Di, Guangyao Zhai, Gim Hee Lee, and Federico Tombari. "GeoGaussian: Geometry-Aware Gaussian Splatting for Scene Rendering". In Lecture Notes in Computer Science, 441–57. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72761-0_25.


Conference proceedings on the topic "Gaussian splatting"

1

Matsuki, Hidenobu, Riku Murai, Paul H. J. Kelly, and Andrew J. Davison. "Gaussian Splatting SLAM". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 18039–48. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01708.

2

Yu, Zehao, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. "Mip-Splatting: Alias-Free 3D Gaussian Splatting". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19447–56. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01839.

3

Yu, Heng, Joel Julin, Zoltán Á. Milacski, Koichiro Niinuma, and László A. Jeni. "CoGS: Controllable Gaussian Splatting". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21624–33. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02043.

4

Qin, Minghan, Wanhua Li, Jiawei Zhou, Haoqian Wang, and Hanspeter Pfister. "LangSplat: 3D Language Gaussian Splatting". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 20051–60. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01895.

5

Deguchi, Hiroyuki, Mana Masuda, Takuya Nakabayashi, and Hideo Saito. "E2GS: Event Enhanced Gaussian Splatting". In 2024 IEEE International Conference on Image Processing (ICIP), 1676–82. IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10647607.

6

Chen, Zilong, Feng Wang, Yikai Wang, and Huaping Liu. "Text-to-3D using Gaussian Splatting". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21401–12. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02022.

7

Hornáček, Martin, and Gregor Rozinaj. "Exploring 3D Gaussian Splatting: An Algorithmic Perspective". In 2024 International Symposium ELMAR, 149–52. IEEE, 2024. http://dx.doi.org/10.1109/elmar62909.2024.10693978.

8

Liang, Zhihao, Qi Zhang, Ying Feng, Ying Shan, and Kui Jia. "GS-IR: 3D Gaussian Splatting for Inverse Rendering". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21644–53. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02045.

9

Zhang, Jiahui, Fangneng Zhan, Muyu Xu, Shijian Lu, and Eric Xing. "FreGS: 3D Gaussian Splatting with Progressive Frequency Regularization". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21424–33. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02024.

10

Kung, Pou-Chun, Seth Isaacson, Ram Vasudevan, and Katherine A. Skinner. "SAD-GS: Shape-aligned Depth-supervised Gaussian Splatting". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2842–51. IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00290.

