Ready-made bibliography on the topic "3D point cloud representation"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Table of contents
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "3D point cloud representation".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.
Journal articles on the topic "3D point cloud representation"
Arya, Hemlata, Parul Saxena, and Jaimala Jha. "Detection of 3D Object in Point Cloud: Cloud Semantic Segmentation in Lane Marking". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10s (October 7, 2023): 376–81. http://dx.doi.org/10.17762/ijritcc.v11i10s.7645.
Barnefske, E., and H. Sternberg. "PCCT: A POINT CLOUD CLASSIFICATION TOOL TO CREATE 3D TRAINING DATA TO ADJUST AND DEVELOP 3D CONVNET". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W16 (September 17, 2019): 35–40. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w16-35-2019.
Orts-Escolano, Sergio, Jose Garcia-Rodriguez, Miguel Cazorla, Vicente Morell, Jorge Azorin, Marcelo Saval, Alberto Garcia-Garcia, and Victor Villena. "Bioinspired point cloud representation: 3D object tracking". Neural Computing and Applications 29, no. 9 (September 16, 2016): 663–72. http://dx.doi.org/10.1007/s00521-016-2585-0.
Rai, A., N. Srivastava, K. Khoshelham, and K. Jain. "SEMANTIC ENRICHMENT OF 3D POINT CLOUDS USING 2D IMAGE SEGMENTATION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 14, 2023): 1659–66. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1659-2023.
Sun, Yichen. "3D point cloud domain generalization via adversarial training". Applied and Computational Engineering 13, no. 1 (October 23, 2023): 160–68. http://dx.doi.org/10.54254/2755-2721/13/20230725.
Yang, Zexin, Qin Ye, Jantien Stoter, and Liangliang Nan. "Enriching Point Clouds with Implicit Representations for 3D Classification and Segmentation". Remote Sensing 15, no. 1 (December 22, 2022): 61. http://dx.doi.org/10.3390/rs15010061.
Quach, Maurice, Aladine Chetouani, Giuseppe Valenzise, and Frederic Dufaux. "A deep perceptual metric for 3D point clouds". Electronic Imaging 2021, no. 9 (January 18, 2021): 257–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.9.iqsp-257.
Decker, Kevin T., and Brett J. Borghetti. "Hyperspectral Point Cloud Projection for the Semantic Segmentation of Multimodal Hyperspectral and Lidar Data with Point Convolution-Based Deep Fusion Neural Networks". Applied Sciences 13, no. 14 (July 14, 2023): 8210. http://dx.doi.org/10.3390/app13148210.
Li, Shidi, Miaomiao Liu, and Christian Walder. "EditVAE: Unsupervised Parts-Aware Controllable 3D Point Cloud Shape Generation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1386–94. http://dx.doi.org/10.1609/aaai.v36i2.20027.
Bello, Saifullahi Aminu, Shangshu Yu, Cheng Wang, Jibril Muhmmad Adam, and Jonathan Li. "Review: Deep Learning on 3D Point Clouds". Remote Sensing 12, no. 11 (May 28, 2020): 1729. http://dx.doi.org/10.3390/rs12111729.
Doctoral dissertations on the topic "3D point cloud representation"
Diskin, Yakov. "Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision". University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366386933.
Diskin, Yakov. "Volumetric Change Detection Using Uncalibrated 3D Reconstruction Models". University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1429293660.
Morell, Vicente. "Contributions to 3D Data Registration and Representation". Doctoral thesis, Universidad de Alicante, 2014. http://hdl.handle.net/10045/42364.
Orts-Escolano, Sergio. "A three-dimensional representation method for noisy point clouds based on growing self-organizing maps accelerated on GPUs". Doctoral thesis, Universidad de Alicante, 2013. http://hdl.handle.net/10045/36484.
Zhao, Yongheng. "3D feature representations for visual perception and geometric shape understanding". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3424787.
Konradsson, Albin, and Gustav Bohman. "3D Instance Segmentation of Cluttered Scenes : A Comparative Study of 3D Data Representations". Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177598.
Cao, Chao. "Compression d'objets 3D représentés par nuages de points". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS015.
Pełny tekst źródłaWith the rapid growth of multimedia content, 3D objects are becoming more and more popular. Most of the time, they are modeled as complex polygonal meshes or dense point clouds, providing immersive experiences in different industrial and consumer multimedia applications. The point cloud, which is easier to acquire than mesh and is widely applicable, has raised many interests in both the academic and commercial worlds.A point cloud is a set of points with different properties such as their geometrical locations and the associated attributes (e.g., color, material properties, etc.). The number of the points within a point cloud can range from a thousand, to constitute simple 3D objects, up to billions, to realistically represent complex 3D scenes. Such huge amounts of data bring great technological challenges in terms of transmission, processing, and storage of point clouds.In recent years, numerous research works focused their efforts on the compression of meshes, while less was addressed for point clouds. We have identified two main approaches in the literature: a purely geometric one based on octree decomposition, and a hybrid one based on both geometry and video coding. The first approach can provide accurate 3D geometry information but contains weak temporal consistency. The second one can efficiently remove the temporal redundancy yet a decrease of geometrical precision can be observed after the projection. Thus, the tradeoff between compression efficiency and accurate prediction needs to be optimized.We focused on exploring the temporal correlations between dynamic dense point clouds. 
We proposed different approaches to improve the compression performance of the MPEG (Moving Picture Experts Group) V-PCC (Video-based Point Cloud Compression) test model, which provides state-of-the-art compression on dynamic dense point clouds.First, an octree-based adaptive segmentation is proposed to cluster the points with different motion amplitudes into 3D cubes. Then, motion estimation is applied to these cubes using affine transformation. Gains in terms of rate-distortion (RD) performance have been observed in sequences with relatively low motion amplitudes. However, the cost of building an octree for the dense point cloud remains expensive while the resulting octree structures contain poor temporal consistency for the sequences with higher motion amplitudes.An anatomical structure is then proposed to model the motion of the point clouds representing humanoids more inherently. With the help of 2D pose estimation tools, the motion is estimated from 14 anatomical segments using affine transformation.Moreover, we propose a novel solution for color prediction and discuss the residual coding from prediction. It is shown that instead of encoding redundant texture information, it is more valuable to code the residuals, which leads to a better RD performance.Although our contributions have improved the performances of the V-PCC test models, the temporal compression of dynamic point clouds remains a highly challenging task. Due to the limitations of the current acquisition technology, the acquired point clouds can be noisy in both geometry and attribute domains, which makes it challenging to achieve accurate motion estimation. In future studies, the technologies used for 3D meshes may be exploited and adapted to provide temporal-consistent connectivity information between dynamic 3D point clouds
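The abstract above estimates per-cube motion with affine transformations. As a rough, hypothetical illustration of that one step (not the V-PCC test-model code; the point sets here are synthetic stand-ins for corresponding points from two frames), an affine motion can be recovered by least squares:

```python
import numpy as np

def estimate_affine_motion(src, dst):
    """Estimate a 3D affine transform (A, t) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points from two frames.
    Returns A (3x3) and t (3,) minimizing ||src @ A.T + t - dst||^2.
    """
    # Augment source points with a constant 1 so A and t are solved jointly.
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                    # (N, 4)
    # Least-squares solution of X @ M = dst, where M is (4, 3).
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A, t = M[:3].T, M[3]
    return A, t

# Toy example: recover a known affine motion from sampled points.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
A_true = np.array([[1.0, 0.1, 0.0],
                   [0.0, 0.9, 0.2],
                   [0.0, 0.0, 1.1]])
t_true = np.array([0.5, -0.2, 0.3])
dst = src @ A_true.T + t_true

A_est, t_est = estimate_affine_motion(src, dst)
assert np.allclose(A_est, A_true) and np.allclose(t_est, t_true)
```

In a real codec the correspondences would come from motion search between clustered cubes, and the residual after motion compensation is what gets coded.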
Hejl, Zdeněk. "Rekonstrukce 3D scény z obrazových dat". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236495.
Smith, Michael. "Non-parametric workspace modelling for mobile robots using push broom lasers". Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:50224eb9-73e8-4c8a-b8c5-18360d11e21b.
Roure Garcia, Ferran. "Tools for 3D point cloud registration". Doctoral thesis, Universitat de Girona, 2017. http://hdl.handle.net/10803/403345.
Pełny tekst źródłaEn aquesta tesi, hem fet una revisió en profunditat de l'estat de l'art del registre 3D, avaluant els mètodes més populars. Donada la falta d'estandardització de la literatura, també hem proposat una nomenclatura i una classificació per tal d'unificar els sistemes d'avaluació i poder comparar els diferents algorismes sota els mateixos criteris. La contribució més gran de la tesi és el Toolbox de Registre, que consisteix en un software i una base de dades de models 3D. El software presentat aquí consisteix en una Pipeline de registre 3D escrit en C++ que permet als investigadors provar diferents mètodes, així com afegir-n'hi de nous i comparar-los. En aquesta Pipeline, no només hem implementat els mètodes més populars de la literatura, sinó que també hem afegit tres mètodes nous que contribueixen a millorar l'estat de l'art de la tecnologia. D'altra banda, la base de dades proporciona una sèrie de models 3D per poder dur a terme les proves necessàries per validar el bon funcionament dels mètodes. Finalment, també hem presentat una nova estructura de dades híbrida especialment enfocada a la cerca de veïns. Hem testejat la nostra proposta conjuntament amb altres estructures de dades i hem obtingut resultats molt satisfactoris, superant en molts casos les millors alternatives actuals. Totes les estructures testejades estan també disponibles al nostre Pipeline. Aquesta Toolbox està pensada per ésser una eina útil per tota la comunitat i està a disposició dels investigadors sota llicència Creative-Commons
Books on the topic "3D point cloud representation"
Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. 3D Point Cloud Analysis. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0.
Zhang, Guoxiang, and YangQuan Chen. Towards Optimal Point Cloud Processing for 3D Reconstruction. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96110-7.
Chen, YangQuan, and Guoxiang Zhang. Towards Optimal Point Cloud Processing for 3D Reconstruction. Springer International Publishing AG, 2022.
3D Point Cloud Analysis: Traditional, Deep Learning, and Explainable Machine Learning Methods. Springer International Publishing AG, 2021.
3D Point Cloud Analysis: Traditional, Deep Learning, and Explainable Machine Learning Methods. Springer International Publishing AG, 2022.
Book chapters on the topic "3D point cloud representation"
Zdobylak, Adrian, and Maciej Zieba. "Semi-supervised Representation Learning for 3D Point Clouds". In Intelligent Information and Database Systems, 480–91. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41964-6_41.
Liu, Jingya, Oguz Akin, and Yingli Tian. "Rethinking Pulmonary Nodule Detection in Multi-view 3D CT Point Cloud Representation". In Machine Learning in Medical Imaging, 80–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87589-3_9.
Miyachi, Hideo, and Koshiro Murakami. "A Study of 3D Shape Similarity Search in Point Representation by Using Machine Learning". In Advances on P2P, Parallel, Grid, Cloud and Internet Computing, 265–74. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33509-0_24.
Ćurković, Milan, and Damir Vučina. "Adaptive Representation of Large 3D Point Clouds for Shape Optimization". In Operations Research Proceedings, 547–53. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42902-1_74.
He, Tong, Dong Gong, Zhi Tian, and Chunhua Shen. "Learning and Memorizing Representative Prototypes for 3D Point Cloud Semantic and Instance Segmentation". In Computer Vision – ECCV 2020, 564–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58523-5_33.
Sant, Rohit, Ninad Kulkarni, Ainesh Bakshi, Salil Kapur, and Kratarth Goel. "Autonomous Robot Navigation: Path Planning on a Detail-Preserving Reduced-Complexity Representation of 3D Point Clouds". In Lecture Notes in Computer Science, 173–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39402-7_18.
Héno, Raphaële, and Laure Chandelier. "Point Cloud Processing". In 3D Modeling of Buildings, 133–81. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118648889.ch5.
Weinmann, Martin. "Point Cloud Registration". In Reconstruction and Analysis of 3D Scenes, 55–110. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29246-5_4.
Li, Ge, Wei Gao, and Wen Gao. "MPEG AI-Based 3D Graphics Coding Standard". In Point Cloud Compression, 219–41. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1957-0_10.
Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. "Deep Learning-Based Point Cloud Analysis". In 3D Point Cloud Analysis, 53–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0_3.
Pełny tekst źródłaStreszczenia konferencji na temat "3D point cloud representation"
Eybposh, M. Hossein, Changjia Cai, Diptodip Deb, Miguel A. B. Schott, Longtian Ye, Gert-Jan Both, Srinivas C. Turaga, Jose Rodriguez-Romaguera, and Nicolas C. Pégard. "Computer-Generated Holography Using Point Cloud Processing Neural Networks". In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/3d.2023.dw5a.4.
Wang, Lihui, Jing Chen, and Baozong Yuan. "Simplified representation for 3D point cloud data". In 2010 10th International Conference on Signal Processing (ICSP 2010). IEEE, 2010. http://dx.doi.org/10.1109/icosp.2010.5656972.
Li, Zongmin, Yupeng Zhang, and Yun Bai. "Geometric Invariant Representation Learning for 3D Point Cloud". In 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2021. http://dx.doi.org/10.1109/ictai52525.2021.00235.
Feng, Tuo, Wenguan Wang, Xiaohan Wang, Yi Yang, and Qinghua Zheng. "Clustering based Point Cloud Representation Learning for 3D Analysis". In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.00761.
Su, Zhuo, Max Welling, Matti Pietikainen, and Li Liu. "SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation". In 2022 International Conference on 3D Vision (3DV). IEEE, 2022. http://dx.doi.org/10.1109/3dv57658.2022.00084.
Ishikawa, H., and H. Saito. "Point cloud representation of 3D shape for laser-plasma scanning 3D display". In IECON 2008 - 34th Annual Conference of IEEE Industrial Electronics Society. IEEE, 2008. http://dx.doi.org/10.1109/iecon.2008.4758248.
Kambhamettu, Chandra. "3DSAINT Representation for 3D Point Clouds". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2023. http://dx.doi.org/10.1109/cvprw59228.2023.00277.
Fan, Tingyu, Linyao Gao, Yiling Xu, Zhu Li, and Dong Wang. "D-DPCC: Deep Dynamic Point Cloud Compression via 3D Motion Prediction". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/126.
Nguyen, Van Tung, Trung-Thien Tran, Van-Toan Cao, and Denis Laurendeau. "3D Point Cloud Registration Based on the Vector Field Representation". In 2013 2nd IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 2013. http://dx.doi.org/10.1109/acpr.2013.111.
Wells, Lee J., Mohammed S. Shafae, and Jaime A. Camelio. "Automated Part Inspection Using 3D Point Clouds". In ASME 2013 International Manufacturing Science and Engineering Conference collocated with the 41st North American Manufacturing Research Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/msec2013-1212.
Pełny tekst źródłaRaporty organizacyjne na temat "3D point cloud representation"
Smith, Curtis L., Steven Prescott, Kellie Kvarfordt, Ram Sampath, and Katie Larson. Status of the phenomena representation, 3D modeling, and cloud-based software architecture development. Office of Scientific and Technical Information (OSTI), September 2015. http://dx.doi.org/10.2172/1245516.
Blundell, S., and Philip Devine. Creation, transformation, and orientation adjustment of a building façade model for feature segmentation : transforming 3D building point cloud models into 2D georeferenced feature overlays. Engineer Research and Development Center (U.S.), January 2020. http://dx.doi.org/10.21079/11681/35115.
Ennasr, Osama, Charles Ellison, Anton Netchaev, Ahmet Soylemezoglu, and Garry Glaspell. Unmanned ground vehicle (UGV) path planning in 2.5D and 3D. Engineer Research and Development Center (U.S.), August 2023. http://dx.doi.org/10.21079/11681/47459.
Ennasr, Osama, Michael Paquette, and Garry Glaspell. UGV SLAM payload for low-visibility environments. Engineer Research and Development Center (U.S.), September 2023. http://dx.doi.org/10.21079/11681/47589.
Habib, Ayman, Darcy M. Bullock, Yi-Chun Lin, and Raja Manish. Road Ditch Line Mapping with Mobile LiDAR. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317354.