A selection of scientific literature on the topic "Immersive video coding"
Create a reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Immersive video coding."
Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and your bibliographic reference to the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.
Journal articles on the topic "Immersive video coding"
Boyce, Jill M., Renaud Dore, Adrian Dziembowski, Julien Fleureau, Joel Jung, Bart Kroon, Basel Salahieh, Vinod Kumar Malamal Vadakital, and Lu Yu. "MPEG Immersive Video Coding Standard". Proceedings of the IEEE 109, no. 9 (September 2021): 1521–36. http://dx.doi.org/10.1109/jproc.2021.3062590.
Mieloch, Dawid, Adrian Dziembowski, Marek Domański, Gwangsoon Lee, and Jun Young Jeong. "Color-dependent pruning in immersive video coding". Journal of WSCG 30, no. 1-2 (2022): 91–98. http://dx.doi.org/10.24132/jwscg.2022.11.
Wien, Mathias, Jill M. Boyce, Thomas Stockhammer, and Wen-Hsiao Peng. "Standardization Status of Immersive Video Coding". IEEE Journal on Emerging and Selected Topics in Circuits and Systems 9, no. 1 (March 2019): 5–17. http://dx.doi.org/10.1109/jetcas.2019.2898948.
Dziembowski, Adrian, Dawid Mieloch, Marek Domański, Gwangsoon Lee, and Jun Young Jeong. "Spatiotemporal redundancy removal in immersive video coding". Journal of WSCG 30, no. 1-2 (2022): 54–62. http://dx.doi.org/10.24132/jwscg.2022.7.
Wien, Mathias, Jill M. Boyce, Thomas Stockhammer, and Wen-Hsiao Peng. "Guest Editorial Immersive Video Coding and Transmission". IEEE Journal on Emerging and Selected Topics in Circuits and Systems 9, no. 1 (March 2019): 1–4. http://dx.doi.org/10.1109/jetcas.2019.2899531.
Salahieh, Basel, Wayne Cochran, and Jill Boyce. "Delivering Object-Based Immersive Video Experiences". Electronic Imaging 2021, no. 18 (January 18, 2021): 103-1. http://dx.doi.org/10.2352/issn.2470-1173.2021.18.3dia-103.
Jeong, JongBeom, Dongmin Jang, Jangwoo Son, and Eun-Seok Ryu. "3DoF+ 360 Video Location-Based Asymmetric Down-Sampling for View Synthesis to Immersive VR Video Streaming". Sensors 18, no. 9 (September 18, 2018): 3148. http://dx.doi.org/10.3390/s18093148.
Samelak, Jarosław, Adrian Dziembowski, and Dawid Mieloch. "Advanced HEVC Screen Content Coding for MPEG Immersive Video". Electronics 11, no. 23 (December 5, 2022): 4040. http://dx.doi.org/10.3390/electronics11234040.
Storch, Iago, Luis A. da Silva Cruz, Luciano Agostini, Bruno Zatt, and Daniel Palomino. "The Impacts of Equirectangular 360-degrees Videos in the Intra-Frame Prediction of HEVC". Journal of Integrated Circuits and Systems 14, no. 1 (April 29, 2019): 1–10. http://dx.doi.org/10.29292/jics.v14i1.46.
Park, Dohyeon, Sung-Gyun Lim, Kwan-Jung Oh, Gwangsoon Lee, and Jae-Gon Kim. "Nonlinear Depth Quantization Using Piecewise Linear Scaling for Immersive Video Coding". IEEE Access 10 (2022): 4483–94. http://dx.doi.org/10.1109/access.2022.3140537.
Dissertations on the topic "Immersive video coding"
Milovanovic, Marta. "Pruning and compression of multi-view content for immersive video coding". Electronic thesis or dissertation, Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT023.
This thesis addresses the problem of efficient compression of immersive video content represented in the Multiview Video plus Depth (MVD) format. The Moving Picture Experts Group (MPEG) standard for the transmission of MVD data is called MPEG Immersive Video (MIV); it uses 2D video codecs to compress the source texture and depth information. Compared to traditional video coding, immersive video coding is more complex and is constrained not only by the trade-off between bitrate and quality but also by the pixel rate. Because of that, MIV uses pruning to reduce the pixel rate and inter-view correlations, and creates a mosaic of image pieces (patches). Decoder-side depth estimation (DSDE) has emerged as an alternative approach that improves the immersive video system by avoiding the transmission of depth maps and moving the depth estimation process to the decoder side. DSDE has previously been studied for the case of numerous fully transmitted views (without pruning). In this thesis, we demonstrate possible advances in immersive video coding, with an emphasis on pruning the input content. We go beyond DSDE and examine the distinct effect of patch-level depth restoration at the decoder side. We propose two approaches to incorporating DSDE on content pruned with MIV. The first approach excludes a subset of depth maps from the transmission; the second uses the quality of depth patches estimated at the encoder side to distinguish between those that need to be transmitted and those that can be recovered at the decoder side. Our experiments show an average BD-rate gain of 4.63% for Y-PSNR. Furthermore, we explore the use of neural image-based rendering (IBR) techniques to enhance the quality of novel view synthesis, and show that neural synthesis itself provides the information needed to prune the content. Our results show a good trade-off between pixel rate and synthesis quality, achieving view synthesis improvements of 3.6 dB on average.
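The BD-rate figure quoted in this abstract compares two rate-distortion curves at equal quality. As a rough illustration (not code from the thesis), the conventional Bjøntegaard-delta computation can be sketched in Python with numpy: fit a cubic polynomial to each curve in the (PSNR, log-bitrate) plane, then compare the average log-rate over the overlapping quality range. The function name and the cubic fit are the customary choices, assumed here rather than taken from the cited work:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta-rate: average bitrate difference (in percent)
    between two rate-distortion curves at equal PSNR. Negative values
    mean the test codec needs less bitrate than the anchor."""
    # Fit cubic polynomials log(rate) = f(PSNR) for both curves.
    pa = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    pt = np.polyfit(psnr_test, np.log(rate_test), 3)
    # Integrate over the PSNR interval covered by both curves.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(pa), np.polyint(pt)
    avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
    avg_t = (np.polyval(it, hi) - np.polyval(it, lo)) / (hi - lo)
    # Mean log-rate difference, converted back to a percentage.
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0
```

For example, a test curve whose bitrates are uniformly 10% lower than the anchor at identical PSNR yields a BD-rate of -10%.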
Dricot, Antoine. "Light-field image and video compression for future immersive applications". Thesis, Paris, ENST, 2017. http://www.theses.fr/2017ENST0008/document.
Evolutions in video technology tend to offer increasingly immersive experiences. However, currently available 3D technologies are still very limited and provide only uncomfortable and unnatural viewing situations to the user. The next generation of immersive video technologies therefore appears to be a major technical challenge, particularly with the promising light-field (LF) approach. The light-field represents all the light rays (i.e., in all directions) in a scene. New devices for sampling and capturing the light-field of a scene are emerging rapidly, such as camera arrays and plenoptic cameras based on lenticular arrays. Several kinds of display systems target immersive applications, from head-mounted displays to projection-based light-field display systems, and promising target applications already exist. For several years now this light-field representation has been drawing a lot of interest from many companies and institutions, for example in the MPEG and JPEG groups. Light-field content has a specific structure and uses a massive amount of data, which represents a challenge for setting up future services. One of the main goals of this work is first to assess which technologies and formats are realistic or promising. The study is conducted from the perspective of image and video compression, since compression efficiency is a key factor for enabling these services on consumer markets. Secondly, improvements and new coding schemes are proposed to increase compression performance and thereby enable efficient light-field content transmission on future networks.
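To make the lenslet capture mentioned in this abstract concrete: assuming an idealized plenoptic sensor in which each microlens covers an exact u x v block of pixels (a simplification; real devices also require calibration, demosaicing, and resampling), the raw lenslet image can be rearranged into sub-aperture views with a few numpy operations. The function name and array layout below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def extract_subaperture_views(lenslet, u, v):
    """Rearrange an idealized raw lenslet (plenoptic) image into
    sub-aperture views.

    lenslet : 2D array of shape (H*u, W*v), where each u x v block of
              pixels lies behind one of the H x W microlenses.
    Returns an array of shape (u, v, H, W): one H x W view per angular
    direction (i, j) within the microlens blocks.
    """
    H = lenslet.shape[0] // u
    W = lenslet.shape[1] // v
    # Split the image into microlens blocks: element (h, i, w, j) is the
    # pixel at offset (i, j) inside the block of microlens (h, w).
    blocks = lenslet.reshape(H, u, W, v)
    # Gather the pixel at the same offset (i, j) from every block to
    # form the sub-aperture view for direction (i, j).
    return blocks.transpose(1, 3, 0, 2)  # -> (u, v, H, W)
```

Each resulting view is a conventional 2D image of the scene seen from a slightly shifted viewpoint, which is what makes standard 2D/multiview codecs applicable to light-field data in the first place.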
Books on the topic "Immersive video coding"
Heath, Sebastian, ed. DATAM: Digital Approaches to Teaching the Ancient Mediterranean. The Digital Press at the University of North Dakota, 2020. http://dx.doi.org/10.31356/dpb016.
Book chapters on the topic "Immersive video coding"
Tanimoto, Masayuki. "International Standardization of FTV". In Proceedings e report, 92–99. Florence: Firenze University Press, 2018. http://dx.doi.org/10.36253/978-88-6453-707-8.23.
Marvie, Jean-Eudes, Maja Krivokuća, and Danillo Graziosi. "Coding of dynamic 3D meshes". In Immersive Video Technologies, 387–423. Elsevier, 2023. http://dx.doi.org/10.1016/b978-0-32-391755-1.00020-1.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Immersive video coding"
Szekiełda, Jakub, Adrian Dziembowski und Dawid Mieloch. „The Influence of Coding Tools on Immersive Video Coding“. In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.21.
Garus, Patrick, Joel Jung, Thomas Maugey, and Christine Guillemot. "Bypassing Depth Maps Transmission For Immersive Video Coding". In 2019 Picture Coding Symposium (PCS). IEEE, 2019. http://dx.doi.org/10.1109/pcs48520.2019.8954543.
Samelak, Jarosław, Adrian Dziembowski, Dawid Mieloch, Marek Domański, and Maciej Wawrzyniak. "Efficient Immersive Video Compression using Screen Content Coding". In WSCG'2021 - 29th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.22.
Roodaki, Hoda, and Shervin Shirmohammadi. "Scalable multiview video coding for immersive video streaming systems". In 2016 Visual Communications and Image Processing (VCIP). IEEE, 2016. http://dx.doi.org/10.1109/vcip.2016.7805454.
Salahieh, Basel, Sumit Bhatia, and Jill Boyce. "Multi-Pass Renderer in MPEG Test Model for Immersive Video". In 2019 Picture Coding Symposium (PCS). IEEE, 2019. http://dx.doi.org/10.1109/pcs48520.2019.8954515.
Shen, Xueyuan, Jing Chen, Wei Liu, and Tingkai Zhou. "Rate control for immersive video depth map coding". In Third International Conference on Signal Image Processing and Communication (ICSIPC 2023), edited by Gang Wang and Lei Chen. SPIE, 2023. http://dx.doi.org/10.1117/12.3004727.
Milovanovic, Marta, Felix Henry, and Marco Cagnazzo. "Depth Patch Selection for Decoder-Side Depth Estimation in MPEG Immersive Video". In 2022 Picture Coding Symposium (PCS). IEEE, 2022. http://dx.doi.org/10.1109/pcs56426.2022.10018042.
Gudumasu, Srinivas, Gireg Maury, Ariel Glasroth, and Ahmed Hamza. "Adaptive Streaming of Visual Volumetric Video-based Coding Media". In MMVE '23: 15th International Workshop on Immersive Mixed and Virtual Environment Systems. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3592834.3592876.