Academic literature on the topic 'Immersive video coding'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Immersive video coding.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Immersive video coding":

1. Boyce, Jill M., Renaud Dore, Adrian Dziembowski, Julien Fleureau, Joel Jung, Bart Kroon, Basel Salahieh, Vinod Kumar Malamal Vadakital, and Lu Yu. "MPEG Immersive Video Coding Standard." Proceedings of the IEEE 109, no. 9 (September 2021): 1521–36. http://dx.doi.org/10.1109/jproc.2021.3062590.

2. Mieloch, Dawid, Adrian Dziembowski, Marek Domański, Gwangsoon Lee, and Jun Young Jeong. "Color-dependent pruning in immersive video coding." Journal of WSCG 30, no. 1-2 (2022): 91–98. http://dx.doi.org/10.24132/jwscg.2022.11.

Abstract:
This paper presents a color-dependent method of removing inter-view redundancy from multiview video. The pruning of input views decides which fragments of views are redundant, i.e., do not provide new information about the three-dimensional scene because these fragments were already visible from different views. The proposed modification of the pruning uses both color and depth and utilizes an adaptive pruning threshold, which increases the robustness against noisy input. As the performed experiments have shown, the proposal provides a significant improvement in the quality of encoded multiview videos and decreases erroneous areas in the decoded video caused by different camera characteristics, specular surfaces, and mirror-like reflections. The pruning method proposed by the authors of this paper was evaluated by experts of ISO/IEC JTC1/SC29/WG11 MPEG and included by them in the Test Model of MPEG Immersive Video.
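For intuition, here is a minimal sketch (our own illustration, not the authors' code; the function name and thresholds are assumptions) of a per-pixel pruning decision that combines color and depth errors under an adaptive threshold, in the spirit of the method summarized above:

```python
# Illustrative sketch of color-and-depth pruning with an adaptive threshold.
import numpy as np

def prune_mask(view_color, view_depth, synth_color, synth_depth,
               base_color_thr=20.0, base_depth_thr=0.05):
    """Mark pixels as prunable when a view pixel is already explained by
    pixels reprojected (synthesized) from previously kept views.

    view_color / synth_color: (H, W, 3) arrays in 0-255
    view_depth / synth_depth: (H, W) normalized depth maps
    Returns a boolean (H, W) mask; True = redundant, can be pruned.
    """
    color_err = np.abs(view_color.astype(np.float32)
                       - synth_color.astype(np.float32)).mean(axis=2)
    depth_err = np.abs(view_depth - synth_depth)

    # Adaptive threshold: loosen the color test when the overall noise
    # level (estimated here crudely as the median color error) is high.
    noise = np.median(color_err)
    color_thr = base_color_thr + noise

    return (color_err < color_thr) & (depth_err < base_depth_thr)
```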
3. Wien, Mathias, Jill M. Boyce, Thomas Stockhammer, and Wen-Hsiao Peng. "Standardization Status of Immersive Video Coding." IEEE Journal on Emerging and Selected Topics in Circuits and Systems 9, no. 1 (March 2019): 5–17. http://dx.doi.org/10.1109/jetcas.2019.2898948.

4. Dziembowski, Adrian, Dawid Mieloch, Marek Domański, Gwangsoon Lee, and Jun Young Jeong. "Spatiotemporal redundancy removal in immersive video coding." Journal of WSCG 30, no. 1-2 (2022): 54–62. http://dx.doi.org/10.24132/jwscg.2022.7.

Abstract:
In this paper, the authors describe two methods designed for reducing the spatiotemporal redundancy of the video within the MPEG Immersive video (MIV) encoder: patch occupation modification and cluster splitting. These methods allow optimizing two important parameters of the immersive video: bitrate and pixelrate. The patch occupation modification method significantly decreases the number of active pixels within texture and depth video produced by the MIV encoder. Cluster splitting decreases the total area needed for storing the texture and depth information from multiple input views, decreasing the pixelrate. Both methods proposed by the authors of this paper were appreciated by the experts of the ISO/IEC JTC1/SC29/WG11 MPEG and are included in the Test Model for MPEG Immersive video (TMIV), which is the reference software implementation of the MIV standard.
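The "pixelrate" this abstract optimizes is simply the number of luma samples per second a decoder must process. A worked example with our own illustrative numbers (not taken from the paper):

```python
# Pixel rate: luma samples per second across all decoded videos.
def pixel_rate(width, height, fps, num_videos):
    return width * height * fps * num_videos

# Transmitting 16 Full HD input views directly:
views = pixel_rate(1920, 1080, 30, 16)    # ~995 Mpixel/s
# Packing pruned patches into 2 texture + 2 depth atlases of 2048x2176:
atlases = pixel_rate(2048, 2176, 30, 4)   # ~535 Mpixel/s
print(f"{views / 1e6:.0f} vs {atlases / 1e6:.0f} Mpixel/s")
```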
5. Wien, Mathias, Jill M. Boyce, Thomas Stockhammer, and Wen-Hsiao Peng. "Guest Editorial Immersive Video Coding and Transmission." IEEE Journal on Emerging and Selected Topics in Circuits and Systems 9, no. 1 (March 2019): 1–4. http://dx.doi.org/10.1109/jetcas.2019.2899531.

6. Salahieh, Basel, Wayne Cochran, and Jill Boyce. "Delivering Object-Based Immersive Video Experiences." Electronic Imaging 2021, no. 18 (January 18, 2021): 103–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.18.3dia-103.

Abstract:
Immersive video enables interactive natural consumption of visual content by empowering a user to navigate through six degrees of freedom, with motion parallax and wide-angle rotation. Supporting immersive experiences requires content captured by multiple cameras and efficient video coding to meet bandwidth and decoder complexity constraints, while delivering high quality video to end users. The Moving Picture Experts Group (MPEG) is developing an immersive video (MIV) standard to enable data access and delivery of such content. One of the MIV operating modes is object-based immersive video coding, which enables innovative use cases where the streaming bandwidth can be better allocated to objects of interest and users can personalize the rendered streamed content. In this paper, we describe a software implementation of the object-based solution on top of the MPEG Test Model for Immersive Video (TMIV). We demonstrate how encoding foreground objects can lead to a significant saving in pixel rate and bitrate while still delivering better subjective and objective results compared to the generic MIV operating mode without the object-based solution.
7. Jeong, JongBeom, Dongmin Jang, Jangwoo Son, and Eun-Seok Ryu. "3DoF+ 360 Video Location-Based Asymmetric Down-Sampling for View Synthesis to Immersive VR Video Streaming." Sensors 18, no. 9 (September 18, 2018): 3148. http://dx.doi.org/10.3390/s18093148.

Abstract:
Recently, with the increasing demand for virtual reality (VR), experiencing immersive content with VR has become easier. However, a tremendous amount of computation and bandwidth is required when processing 360 videos. Moreover, additional information such as the depth of the video is required to enjoy stereoscopic 360 content. Therefore, this paper proposes an efficient method of streaming high-quality 360 videos. To reduce the bandwidth when streaming and synthesizing the 3DoF+ 360 videos, which support limited movements of the user, a proper down-sampling ratio and quantization parameter are derived from an analysis of the relationship between bitrate and peak signal-to-noise ratio. High-efficiency video coding (HEVC) is used to encode and decode the 360 videos, and the view synthesizer produces the video of an intermediate view, providing the user with an immersive experience.
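A minimal sketch of this kind of rate-quality configuration choice (ours, with made-up numbers; `select_config` and the measurement tuples are hypothetical, not from the paper), picking a down-sampling ratio and QP from measured bitrate/PSNR points:

```python
# Pick the (ratio, QP) pair with the best PSNR within a bitrate budget.
def select_config(measurements, bitrate_budget_kbps):
    """measurements: list of (ratio, qp, bitrate_kbps, psnr_db) tuples,
    e.g. obtained from trial HEVC encodings of the 3DoF+ views."""
    feasible = [m for m in measurements if m[2] <= bitrate_budget_kbps]
    if not feasible:
        return None
    return max(feasible, key=lambda m: m[3])  # highest PSNR wins

configs = [
    # (down-sampling ratio, QP, bitrate kbps, Y-PSNR dB) -- made-up numbers
    (1.0, 32, 4200.0, 38.1),
    (0.5, 27, 3900.0, 37.6),
    (0.5, 22, 5600.0, 38.9),
]
print(select_config(configs, bitrate_budget_kbps=4000.0))
```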
8. Samelak, Jarosław, Adrian Dziembowski, and Dawid Mieloch. "Advanced HEVC Screen Content Coding for MPEG Immersive Video." Electronics 11, no. 23 (December 5, 2022): 4040. http://dx.doi.org/10.3390/electronics11234040.

Abstract:
This paper presents the modified HEVC Screen Content Coding (SCC) that was adapted to be more efficient as an internal video coding of the MPEG Immersive Video (MIV) codec. The basic, unmodified SCC is already known to be useful in such an application. However, in this paper, we propose three additional improvements to SCC to increase the efficiency of immersive video coding. First, we analyze using the quarter-pel accuracy in the intra block copy technique to provide a more effective search of the best candidate block to be copied in the encoding process. The second proposal is the use of tiles to allow inter-view prediction inside MIV atlases. The last proposed improvement is the addition of the MIV bitstream parser in the HEVC encoder that enables selecting the most efficient coding configuration depending on the type of currently encoded data. The experimental results show that the proposal increases the compression efficiency for natural content sequences by almost 7% and simultaneously decreases the computational time of encoding by more than 15%, making the proposal very valuable for further research on immersive video coding.
9. Storch, Iago, Luis A. da Silva Cruz, Luciano Agostini, Bruno Zatt, and Daniel Palomino. "The Impacts of Equirectangular 360-degrees Videos in the Intra-Frame Prediction of HEVC." Journal of Integrated Circuits and Systems 14, no. 1 (April 29, 2019): 1–10. http://dx.doi.org/10.29292/jics.v14i1.46.

Abstract:
Recent technological advancements have allowed videos to evolve from simple sequences of 2D images displayed on a flat screen into spherical representations of one's surroundings, capable of creating a realistic immersive experience when combined with head-mounted displays. In order to exploit the existing infrastructure for video coding, 360-degrees videos are pre-processed and then encoded with conventional video coding standards. However, the flattened version of 360-degrees videos presents some peculiarities that are not present in conventional videos and, therefore, may not be properly exploited by conventional video coders. Aiming to find evidence that conventional video encoders can be adapted to perform better on 360-degrees videos, this work evaluates the intra-frame prediction performed by High Efficiency Video Coding over 360-degrees videos in the equirectangular projection. Experimental results indicate that 360-degrees videos present spatial properties that make some regions of the frame likely to be encoded using a reduced set of prediction modes and block sizes. This behavior could be used in the development of fast-decision and energy-saving algorithms by evaluating a reduced set of prediction modes and block sizes depending on the region of the frame being encoded.
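As background for why frame regions differ in the equirectangular projection, this short sketch (ours, not from the paper) computes the standard cos(latitude) row weight: rows near the poles are strongly over-sampled and smooth, which is what makes a reduced set of prediction modes and block sizes plausible there:

```python
# In ERP, a row at latitude phi is horizontally stretched by 1/cos(phi);
# cos(phi) is also the per-row weight used by WS-PSNR.
import math

def erp_row_weight(row, height):
    """cos(latitude) of an ERP row; ~1 at the equator, ~0 at the poles."""
    phi = (row + 0.5) / height * math.pi - math.pi / 2.0
    return math.cos(phi)

height = 1920  # e.g. a 3840x1920 ERP frame
for row in (0, height // 4, height // 2, height - 1):
    print(f"row {row:4d}: weight = {erp_row_weight(row, height):.3f}")
```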
10. Park, Dohyeon, Sung-Gyun Lim, Kwan-Jung Oh, Gwangsoon Lee, and Jae-Gon Kim. "Nonlinear Depth Quantization Using Piecewise Linear Scaling for Immersive Video Coding." IEEE Access 10 (2022): 4483–94. http://dx.doi.org/10.1109/access.2022.3140537.


Dissertations / Theses on the topic "Immersive video coding":

1. Milovanovic, Marta. "Pruning and compression of multi-view content for immersive video coding." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT023.

Abstract:
This thesis addresses the problem of efficient compression of immersive video content represented in the Multiview Video plus Depth (MVD) format. The Moving Picture Experts Group (MPEG) standard for the transmission of MVD data is called MPEG Immersive Video (MIV); it uses 2D video codecs to compress the source texture and depth information. Compared to traditional video coding, immersive video coding is more complex and constrained not only by the trade-off between bitrate and quality, but also by the pixel rate. Because of that, MIV uses pruning to reduce the pixel rate and inter-view correlations, creating a mosaic of image pieces (patches). Decoder-side depth estimation (DSDE) has emerged as an alternative approach that improves the immersive video system by avoiding the transmission of depth maps and moving the depth estimation process to the decoder side. DSDE has been studied for the case of numerous fully transmitted views (without pruning). In this thesis, we demonstrate possible advances in immersive video coding, with an emphasis on pruning the input content. We go beyond DSDE and examine the distinct effect of patch-level depth restoration at the decoder side. We propose two approaches to incorporate DSDE on content pruned with MIV. The first approach excludes a subset of depth maps from the transmission, and the second approach uses the quality of depth patches estimated at the encoder side to distinguish between those that need to be transmitted and those that can be recovered at the decoder side. Our experiments show a 4.63 BD-rate gain for Y-PSNR on average. Furthermore, we explore the use of neural image-based rendering (IBR) techniques to enhance the quality of novel view synthesis, and show that neural synthesis itself provides the information needed to prune the content. Our results show a good trade-off between pixel rate and synthesis quality, achieving view synthesis improvements of 3.6 dB on average.
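Since the result above is reported as a Bjøntegaard delta rate, a minimal BD-rate sketch may help the reader. This is the standard construction (cubic fit of log-rate versus quality for each codec, gap averaged over the overlapping quality range, here by sampling rather than closed-form integration); the rate-distortion points are made up:

```python
# Minimal Bjøntegaard delta-rate (BD-rate) approximation.
import numpy as np

def bd_rate(rates_a, psnr_a, rates_b, psnr_b, samples=100):
    """Average % bitrate change of codec B vs anchor A at equal quality."""
    pa = np.polyfit(psnr_a, np.log(rates_a), 3)
    pb = np.polyfit(psnr_b, np.log(rates_b), 3)
    lo = max(min(psnr_a), min(psnr_b))
    hi = min(max(psnr_a), max(psnr_b))
    q = np.linspace(lo, hi, samples)
    avg_log_diff = np.mean(np.polyval(pb, q) - np.polyval(pa, q))
    return (np.exp(avg_log_diff) - 1.0) * 100.0

anchor = ([2000, 4000, 8000, 16000], [34.0, 36.5, 38.8, 40.6])
tested = ([1800, 3600, 7300, 14800], [34.1, 36.6, 38.9, 40.7])
print(f"BD-rate: {bd_rate(*anchor, *tested):+.2f} %")  # negative = savings
```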
2. Dricot, Antoine. "Light-field image and video compression for future immersive applications." Thesis, Paris, ENST, 2017. http://www.theses.fr/2017ENST0008/document.

Abstract:
Evolutions in video technologies tend to offer increasingly immersive experiences. However, currently available 3D technologies are still very limited and only provide uncomfortable and unnatural viewing situations to the users. The next generation of immersive video technologies therefore appears as a major technical challenge, particularly with the promising light-field (LF) approach. The light-field represents all the light rays (i.e., in all directions) in a scene. New devices for sampling/capturing the light-field of a scene are emerging fast, such as camera arrays or plenoptic cameras based on lenticular arrays. Several kinds of display systems target immersive applications, like head-mounted displays and projection-based light-field display systems, and promising target applications already exist (e.g., 360° video, virtual reality, etc.). For several years now, this light-field representation has been drawing a lot of interest from many companies and institutions, for example in the MPEG and JPEG groups. Light-field contents have specific structures and use a massive amount of data, which represents a challenge for setting up future services. One of the main goals of this work is first to assess which technologies and formats are realistic or promising. The study is done through the scope of image/video compression, as compression efficiency is a key factor for enabling these services on the consumer market. Secondly, improvements and new coding schemes are proposed to increase compression performance and enable efficient light-field content transmission on future networks.

Books on the topic "Immersive video coding":

1. Heath, Sebastian, ed. DATAM: Digital Approaches to Teaching the Ancient Mediterranean. The Digital Press at the University of North Dakota, 2020. http://dx.doi.org/10.31356/dpb016.

Abstract:
DATAM: Digital Approaches to Teaching the Ancient Mediterranean provides a series of new critical studies that explore digital practices for teaching the Ancient Mediterranean world at a wide range of institutions and levels. These practical examples demonstrate how gaming, coding, immersive video, and 3D imaging can bridge the disciplinary and digital divide between the Ancient world and contemporary technology, information literacy, and student engagement. While the articles focus on Classics, Ancient History, and Mediterranean archaeology, the issues and approaches considered throughout this book are relevant for anyone who thinks critically and practically about the use of digital technology in the college-level classroom. DATAM features contributions from Sebastian Heath, Lisl Walsh, David Ratzan, Patrick Burns, Sandra Blakely, Eric Poehler, William Caraher, Marie-Claire Beaulieu and Anthony Bucci, as well as a critical introduction by Shawn Graham and a preface by Society of Classical Studies Executive Director Helen Cullyer.

Book chapters on the topic "Immersive video coding":

1. Tanimoto, Masayuki. "International Standardization of FTV." In Proceedings e report, 92–99. Florence: Firenze University Press, 2018. http://dx.doi.org/10.36253/978-88-6453-707-8.23.

Abstract:
FTV (Free-viewpoint Television) is visual media that transmits all ray information of a 3D space and enables immersive 3D viewing. The international standardization of FTV has been conducted in MPEG. The first phase of FTV is multiview video coding (MVC), and the second phase is 3D video (3DV). The third phase of FTV is MPEG-FTV, which targets revolutionized viewing of 3D scenes via super multiview, free navigation, and 360-degree 3D. After the success of exploration experiments and a Call for Evidence, MPEG-FTV moved to the MPEG Immersive project (MPEG-I), where it is in charge of the video part as MPEG-I Visual. MPEG-I will create standards for immersive audio-visual services.
2. Marvie, Jean-Eudes, Maja Krivokuća, and Danillo Graziosi. "Coding of dynamic 3D meshes." In Immersive Video Technologies, 387–423. Elsevier, 2023. http://dx.doi.org/10.1016/b978-0-32-391755-1.00020-1.


Conference papers on the topic "Immersive video coding":

1. Szekiełda, Jakub, Adrian Dziembowski, and Dawid Mieloch. "The Influence of Coding Tools on Immersive Video Coding." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.21.

Abstract:
This paper summarizes the research on the influence of the HEVC (High Efficiency Video Coding) configuration on immersive video coding. The research was focused on the newest MPEG standard for immersive video compression – MIV (MPEG Immersive Video). The MIV standard is used as a preprocessing step before typical video compression and is thus agnostic to the video codec. The uncommon characteristics of videos produced by MIV cause the typical configuration of the video encoder (optimized for compression of natural sequences) to be suboptimal for such content. The experimental results prove that the performance of video compression for immersive video can be significantly increased when selected coding tools are used.
2. Garus, Patrick, Joel Jung, Thomas Maugey, and Christine Guillemot. "Bypassing Depth Maps Transmission For Immersive Video Coding." In 2019 Picture Coding Symposium (PCS). IEEE, 2019. http://dx.doi.org/10.1109/pcs48520.2019.8954543.

3. Samelak, Jarosław, Adrian Dziembowski, Dawid Mieloch, Marek Domański, and Maciej Wawrzyniak. "Efficient Immersive Video Compression using Screen Content Coding." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.22.

Abstract:
The paper deals with efficient compression of immersive video representations for the synthesis of video related to virtual viewports, i.e., to selected virtual viewer positions and selected virtual directions of watching. The goal is to obtain the highest possible quality of virtual video obtained from compressed representations of immersive video acquired from multiple omnidirectional and planar (perspective) cameras, or from computer animation. In the paper, we describe a solution based on HEVC (High Efficiency Video Coding) compression and the recently proposed MPEG Test Model for Immersive Video. The idea is to use standard-compliant Screen Content Coding tools that were proposed for other applications and have never been used for immersive video compression. The experimental results with standard test video sequences are reported for the normalized experimental conditions defined by MPEG. In the paper, it is demonstrated that the proposed solution yields up to 20% bitrate reduction for constant quality of virtual video.
4. Roodaki, Hoda, and Shervin Shirmohammadi. "Scalable multiview video coding for immersive video streaming systems." In 2016 Visual Communications and Image Processing (VCIP). IEEE, 2016. http://dx.doi.org/10.1109/vcip.2016.7805454.

5. Salahieh, Basel, Sumit Bhatia, and Jill Boyce. "Multi-Pass Renderer in MPEG Test Model for Immersive Video." In 2019 Picture Coding Symposium (PCS). IEEE, 2019. http://dx.doi.org/10.1109/pcs48520.2019.8954515.

6. Shen, Xueyuan, Jing Chen, Wei Liu, and Tingkai Zhou. "Rate control for immersive video depth map coding." In Third International Conference on Signal Image Processing and Communication (ICSIPC 2023), edited by Gang Wang and Lei Chen. SPIE, 2023. http://dx.doi.org/10.1117/12.3004727.

7. Milovanovic, Marta, Felix Henry, and Marco Cagnazzo. "Depth Patch Selection for Decoder-Side Depth Estimation in MPEG Immersive Video." In 2022 Picture Coding Symposium (PCS). IEEE, 2022. http://dx.doi.org/10.1109/pcs56426.2022.10018042.

8. Gudumasu, Srinivas, Gireg Maury, Ariel Glasroth, and Ahmed Hamza. "Adaptive Streaming of Visual Volumetric Video-based Coding Media." In MMVE '23: 15th International Workshop on Immersive Mixed and Virtual Environment Systems. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3592834.3592876.

