Academic literature on the topic 'MPEG Immersive Video'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'MPEG Immersive Video.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "MPEG Immersive Video":

1. Boyce, Jill M., Renaud Dore, Adrian Dziembowski, Julien Fleureau, Joel Jung, Bart Kroon, Basel Salahieh, Vinod Kumar Malamal Vadakital, and Lu Yu. "MPEG Immersive Video Coding Standard." Proceedings of the IEEE 109, no. 9 (September 2021): 1521–36. http://dx.doi.org/10.1109/jproc.2021.3062590.

2. Dziembowski, Adrian, Dawid Mieloch, Marek Domański, Gwangsoon Lee, and Jun Young Jeong. "Spatiotemporal redundancy removal in immersive video coding." Journal of WSCG 30, no. 1-2 (2022): 54–62. http://dx.doi.org/10.24132/jwscg.2022.7.

Abstract:
In this paper, the authors describe two methods designed for reducing the spatiotemporal redundancy of the video within the MPEG Immersive video (MIV) encoder: patch occupation modification and cluster splitting. These methods allow optimizing two important parameters of the immersive video: bitrate and pixelrate. The patch occupation modification method significantly decreases the number of active pixels within texture and depth video produced by the MIV encoder. Cluster splitting decreases the total area needed for storing the texture and depth information from multiple input views, decreasing the pixelrate. Both methods proposed by the authors of this paper were appreciated by the experts of the ISO/IEC JTC1/SC29/WG11 MPEG and are included in the Test Model for MPEG Immersive video (TMIV), which is the reference software implementation of the MIV standard.
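The pixel rate discussed in this abstract is the number of samples per second the decoder must process, so reducing the active area of the atlases reduces it directly. As a minimal illustrative sketch (not the TMIV implementation; the function name and mask shapes are assumptions for the example), the active-pixel fraction of a set of occupancy masks can be measured like this:

```python
import numpy as np

def active_pixel_ratio(occupancy_masks):
    """Fraction of atlas pixels that are active, i.e., carry patch data.

    occupancy_masks: list of 2D boolean arrays, one per atlas.
    """
    total = sum(m.size for m in occupancy_masks)
    active = sum(int(m.sum()) for m in occupancy_masks)
    return active / total

# Toy example: a 4x4 atlas in which only the top half carries patches.
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True
print(active_pixel_ratio([mask]))  # 0.5
```

In this simplified sense, decreasing the number of active pixels means fewer samples for the underlying 2D codec to encode.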
3. Mieloch, Dawid, Adrian Dziembowski, Marek Domański, Gwangsoon Lee, and Jun Young Jeong. "Color-dependent pruning in immersive video coding." Journal of WSCG 30, no. 1-2 (2022): 91–98. http://dx.doi.org/10.24132/jwscg.2022.11.

Abstract:
This paper presents a color-dependent method of removing inter-view redundancy from multiview video. The pruning of input views decides which fragments of views are redundant, i.e., do not provide new information about the three-dimensional scene, as these fragments were already visible from different views. The proposed modification of the pruning uses both color and depth and utilizes an adaptive pruning threshold, which increases robustness against noisy input. As the performed experiments have shown, the proposal provides a significant improvement in the quality of encoded multiview videos and decreases erroneous areas in the decoded video caused by different camera characteristics, specular surfaces, and mirror-like reflections. The pruning method proposed by the authors of this paper was evaluated by experts of the ISO/IEC JTC1/SC29/WG11 MPEG and included by them in the Test Model of MPEG Immersive Video.
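To make the idea concrete, here is a hedged sketch of a pruning test that consults both color and depth with a noise-adaptive threshold. This is illustrative only; the thresholding rule and parameters are assumptions, not the authors' algorithm:

```python
import numpy as np

def is_redundant(color_a, color_b, depth_a, depth_b,
                 base_thresh=10.0, noise_level=0.0):
    """Return True if a pixel from view A adds nothing beyond view B.

    The threshold grows with the estimated input noise, making the
    decision more robust for noisy cameras (hypothetical parameterization).
    """
    thresh = base_thresh * (1.0 + noise_level)
    color_diff = np.abs(np.asarray(color_a, dtype=float)
                        - np.asarray(color_b, dtype=float)).max()
    depth_diff = abs(float(depth_a) - float(depth_b))
    return bool(color_diff <= thresh and depth_diff <= thresh)

# Nearly identical color and depth: redundant, so prunable.
print(is_redundant([100, 100, 100], [103, 101, 99], 50.0, 50.5))  # True
# A strong color difference (e.g., a specular highlight): kept.
print(is_redundant([100, 100, 100], [140, 100, 100], 50.0, 50.0))  # False
```

A depth-only test would prune the second case, since the depths match; consulting color as well is what keeps view-dependent effects in the bitstream.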
4. Timmerer, Christian. "MPEG Column: 129th MPEG Meeting in Brussels, Belgium." ACM SIGMultimedia Records 12, no. 1 (March 2020): 1. http://dx.doi.org/10.1145/3548555.3548559.

Abstract:
The 129th MPEG meeting concluded on January 17, 2020 in Brussels, Belgium with the following topics:
• Coded representation of immersive media - WG11 promotes Network-Based Media Processing (NBMP) to the final stage
• Coded representation of immersive media - Publication of the Technical Report on Architectures for Immersive Media
• Genomic information representation - WG11 receives answers to the joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5
• Open font format - WG11 promotes Amendment of Open Font Format to the final stage
• High efficiency coding and media delivery in heterogeneous environments - WG11 progresses Baseline Profile for MPEG-H 3D Audio
• Multimedia content description interface - Conformance and Reference Software for Compact Descriptors for Video Analysis promoted to the final stage
5. Salahieh, Basel, Wayne Cochran, and Jill Boyce. "Delivering Object-Based Immersive Video Experiences." Electronic Imaging 2021, no. 18 (January 18, 2021): 103–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.18.3dia-103.

Abstract:
Immersive video enables interactive, natural consumption of visual content by empowering a user to navigate through six degrees of freedom, with motion parallax and wide-angle rotation. Supporting immersive experiences requires content captured by multiple cameras and efficient video coding to meet bandwidth and decoder complexity constraints, while delivering high-quality video to end users. The Moving Picture Experts Group (MPEG) is developing an immersive video (MIV) standard to enable data access and delivery of such content. One of the MIV operating modes is object-based immersive video coding, which enables innovative use cases where the streaming bandwidth can be better allocated to objects of interest and users can personalize the rendered streamed content. In this paper, we describe a software implementation of the object-based solution on top of the MPEG Test Model for Immersive Video (TMIV). We demonstrate how encoding foreground objects can lead to significant savings in pixel rate and bitrate while still delivering better subjective and objective results compared to the generic MIV operating mode without the object-based solution.
6. Timmerer, Christian. "MPEG column: 125th MPEG meeting in Marrakesh, Morocco." ACM SIGMultimedia Records 11, no. 1 (March 2019): 1. http://dx.doi.org/10.1145/3458462.3458467.

Abstract:
The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects. The 125th MPEG meeting concluded on January 18, 2019 in Marrakesh, Morocco with the following topics:
• Network-Based Media Processing (NBMP) - MPEG promotes NBMP to Committee Draft stage
• 3DoF+ Visual - MPEG issues Call for Proposals on Immersive 3DoF+ Video Coding Technology
• MPEG-5 Essential Video Coding (EVC) - MPEG starts work on MPEG-5 Essential Video Coding
• ISOBMFF - MPEG issues Final Draft International Standard of Conformance and Reference software for formats based on the ISO Base Media File Format (ISOBMFF)
• MPEG-21 User Description - MPEG finalizes 2nd edition of the MPEG-21 User Description
The corresponding press release of the 125th MPEG meeting can be found here. In this blog post I'd like to focus on those topics potentially relevant for over-the-top (OTT), namely NBMP, EVC, and ISOBMFF.
7. Timmerer, Christian. "MPEG column: 128th MPEG meeting in Geneva, Switzerland." ACM SIGMultimedia Records 11, no. 4 (December 2019): 1. http://dx.doi.org/10.1145/3530839.3530846.

Abstract:
The 128th MPEG meeting concluded on October 11, 2019 in Geneva, Switzerland with the following topics:
• Low Complexity Enhancement Video Coding (LCEVC) promoted to Committee Draft
• 2nd edition of the Omnidirectional Media Format (OMAF) has reached the first milestone
• Genomic Information Representation - Part 4 Reference Software and Part 5 Conformance promoted to Draft International Standard
The corresponding press release of the 128th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/128. In this report we will focus on video coding aspects (i.e., LCEVC) and immersive media applications (i.e., OMAF). At the end, we will provide an update related to adaptive streaming (i.e., DASH and CMAF).
8. Samelak, Jarosław, Adrian Dziembowski, and Dawid Mieloch. "Advanced HEVC Screen Content Coding for MPEG Immersive Video." Electronics 11, no. 23 (December 5, 2022): 4040. http://dx.doi.org/10.3390/electronics11234040.

Abstract:
This paper presents the modified HEVC Screen Content Coding (SCC) that was adapted to be more efficient as an internal video coding of the MPEG Immersive Video (MIV) codec. The basic, unmodified SCC is already known to be useful in such an application. However, in this paper, we propose three additional improvements to SCC to increase the efficiency of immersive video coding. First, we analyze using the quarter-pel accuracy in the intra block copy technique to provide a more effective search of the best candidate block to be copied in the encoding process. The second proposal is the use of tiles to allow inter-view prediction inside MIV atlases. The last proposed improvement is the addition of the MIV bitstream parser in the HEVC encoder that enables selecting the most efficient coding configuration depending on the type of currently encoded data. The experimental results show that the proposal increases the compression efficiency for natural content sequences by almost 7% and simultaneously decreases the computational time of encoding by more than 15%, making the proposal very valuable for further research on immersive video coding.
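The intra block copy tool mentioned in this abstract predicts a block by copying an already-reconstructed block from the same picture, which pays off in MIV atlases because patches repeat similar content. A simplified integer-pel sketch follows (the quarter-pel refinement proposed in the paper is omitted, and this is not the HEVC SCC implementation):

```python
import numpy as np

def intra_block_copy(frame, coded_rows, by, bx, bs=4):
    """Exhaustively search the already-coded area (rows < coded_rows)
    for the block most similar to the target block at (by, bx).

    Returns the block vector (dy, dx) and its SAD cost.
    """
    target = frame[by:by + bs, bx:bx + bs].astype(int)
    best, best_sad = None, None
    for y in range(coded_rows - bs + 1):
        for x in range(frame.shape[1] - bs + 1):
            cand = frame[y:y + bs, x:x + bs].astype(int)
            sad = int(np.abs(cand - target).sum())
            if best_sad is None or sad < best_sad:
                best, best_sad = (y - by, x - bx), sad
    return best, best_sad

# A frame whose rows repeat: the target block is predicted perfectly
# by copying from four rows above (block vector (-4, 0), SAD 0).
frame = np.tile(np.arange(8), (8, 1))
print(intra_block_copy(frame, coded_rows=4, by=4, bx=0))  # ((-4, 0), 0)
```

A real encoder restricts the search range and adds a rate cost to the SAD; the sketch only shows why repeated atlas content makes the copy candidate cheap.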
9. Dziembowski, Adrian. "Nowe techniki kompresji wizji dla rzeczywistości wirtualnej – MPEG Immersive Video" ["New video compression techniques for virtual reality – MPEG Immersive Video"]. Przegląd Telekomunikacyjny – Wiadomości Telekomunikacyjne 1, no. 4 (August 9, 2022): 26–33. http://dx.doi.org/10.15199/59.2022.4.4.

10. Vadakital, Vinod Kumar Malamal, Adrian Dziembowski, Gauthier Lafruit, Franck Thudor, Gwangsoon Lee, and Patrice Rondao Alface. "The MPEG Immersive Video Standard—Current Status and Future Outlook." IEEE MultiMedia 29, no. 3 (July 1, 2022): 101–11. http://dx.doi.org/10.1109/mmul.2022.3175654.


Dissertations / Theses on the topic "MPEG Immersive Video":

1. Milovanovic, Marta. "Pruning and compression of multi-view content for immersive video coding." Doctoral thesis, Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT023.

Abstract:
This thesis addresses the problem of efficient compression of immersive video content represented in the Multiview Video plus Depth (MVD) format. The Moving Picture Experts Group (MPEG) standard for the transmission of MVD data is called MPEG Immersive Video (MIV); it utilizes 2D video codecs to compress the source texture and depth information. Compared to traditional video coding, immersive video coding is more complex and is constrained not only by the trade-off between bitrate and quality, but also by the pixel rate. Because of that, MIV uses pruning to reduce the pixel rate and inter-view correlations, creating a mosaic of image pieces (patches). Decoder-side depth estimation (DSDE) has emerged as an alternative approach that improves the immersive video system by avoiding the transmission of depth maps and moving the depth estimation process to the decoder side. DSDE has been studied for the case of numerous fully transmitted views (without pruning). In this thesis, we demonstrate possible advances in immersive video coding, with an emphasis on pruning the input content. We go beyond DSDE and examine the distinct effect of patch-level depth restoration at the decoder side. We propose two approaches to incorporate DSDE on content pruned with MIV. The first approach excludes a subset of depth maps from transmission, and the second uses the quality of depth patches estimated at the encoder side to distinguish between those that need to be transmitted and those that can be recovered at the decoder side. Our experiments show a 4.63 BD-rate gain for Y-PSNR on average. Furthermore, we explore the use of neural image-based rendering (IBR) techniques to enhance the quality of novel view synthesis and show that neural synthesis itself provides the information needed to prune the content. Our results show a good trade-off between pixel rate and synthesis quality, achieving view synthesis improvements of 3.6 dB on average.
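The BD-rate figure quoted in this abstract is the standard Bjøntegaard delta-rate metric for comparing two rate-distortion curves. A self-contained sketch of the classic computation (cubic fit of log-rate versus quality; the thesis' exact evaluation pipeline is not reproduced here):

```python
import numpy as np

def bd_rate(rates_ref, psnrs_ref, rates_test, psnrs_test):
    """Bjontegaard delta rate in percent; negative means bitrate savings.

    Fits log10(rate) as a cubic in PSNR for each curve and compares the
    averages of the two fits over the overlapping quality interval.
    """
    p_ref = np.polyfit(psnrs_ref, np.log10(rates_ref), 3)
    p_test = np.polyfit(psnrs_test, np.log10(rates_test), 3)
    lo = max(min(psnrs_ref), min(psnrs_test))
    hi = min(max(psnrs_ref), max(psnrs_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_diff - 1) * 100

# A test codec needing 10% less bitrate at every quality point:
rates = [1000, 2000, 4000, 8000]
psnrs = [30.0, 33.0, 36.0, 39.0]
print(round(bd_rate(rates, psnrs, [r * 0.9 for r in rates], psnrs), 2))  # -10.0
```

The sign convention matters when reading results: a negative BD-rate is a bitrate saving at equal quality, so "a 4.63 BD-rate gain" refers to the magnitude of such a saving.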

Book chapters on the topic "MPEG Immersive Video":

1. Tanimoto, Masayuki. "International Standardization of FTV." In Proceedings e report, 92–99. Florence: Firenze University Press, 2018. http://dx.doi.org/10.36253/978-88-6453-707-8.23.

Abstract:
FTV (Free-viewpoint Television) is visual media that transmits all ray information of a 3D space and enables immersive 3D viewing. The international standardization of FTV has been conducted in MPEG. The first phase of FTV was multiview video coding (MVC), and the second phase was 3D video (3DV). The third phase of FTV is MPEG-FTV, which targets revolutionized viewing of 3D scenes via super multiview, free navigation, and 360-degree 3D. After the success of exploration experiments and a Call for Evidence, MPEG-FTV moved to the MPEG Immersive project (MPEG-I), where it is in charge of the video part as MPEG-I Visual. MPEG-I will create standards for immersive audio-visual services.
2. Garus, Patrick, Marta Milovanović, Joël Jung, and Marco Cagnazzo. "MPEG immersive video." In Immersive Video Technologies, 327–56. Elsevier, 2023. http://dx.doi.org/10.1016/b978-0-32-391755-1.00018-3.

3. Fachada, Sarah, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit. "View Synthesis Tool for VR Immersive Video." In Computer Game Development [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.102382.

Abstract:
This chapter addresses the view synthesis of natural scenes in virtual reality (VR) using depth image-based rendering (DIBR). This method reaches photorealistic results as it directly warps photos to obtain the output, avoiding the need to photograph every possible viewpoint or to make a 3D reconstruction of a scene followed by ray-tracing rendering. An overview of the DIBR approach and frequently encountered challenges (disocclusion and ghosting artifacts, multi-view blending, handling of non-Lambertian objects) is given. Such technology finds applications in VR immersive displays and holography. Finally, a comprehensive manual of the Reference View Synthesis software (RVS), an open-source tool tested on open datasets and recognized by the MPEG-I standardization activities (where "I" refers to "immersive"), is provided for hands-on practice.

Conference papers on the topic "MPEG Immersive Video":

1. Szekiełda, Jakub, Adrian Dziembowski, and Dawid Mieloch. "The Influence of Coding Tools on Immersive Video Coding." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.21.

Abstract:
This paper summarizes research on the influence of the HEVC (High Efficiency Video Coding) configuration on immersive video coding. The research focused on the newest MPEG standard for immersive video compression, MIV (MPEG Immersive Video). The MIV standard is used as a preprocessing step before typical video compression and is thus agnostic to the video codec. Because of the uncommon characteristics of videos produced by MIV, the typical configuration of a video encoder (optimized for compressing natural sequences) is not optimal for such content. The experimental results prove that the performance of video compression for immersive video can be significantly increased when selected coding tools are used.
2. Szekiełda, Jakub, Adrian Dziembowski, and Dawid Mieloch. "The Influence of Coding Tools on Immersive Video Coding." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita v Plzni, 2021. http://dx.doi.org/10.24132/csrn.2021.3101.21.

Abstract:
This paper summarizes research on the influence of the HEVC (High Efficiency Video Coding) configuration on immersive video coding. The research focused on the newest MPEG standard for immersive video compression, MIV (MPEG Immersive Video). The MIV standard is used as a preprocessing step before typical video compression and is thus agnostic to the video codec. Because of the uncommon characteristics of videos produced by MIV, the typical configuration of a video encoder (optimized for compressing natural sequences) is not optimal for such content. The experimental results prove that the performance of video compression for immersive video can be significantly increased when selected coding tools are used.
3. Lee, Gwangsoon, Hong-Chang Shin, Jun-Young Jeong, and Jeonil Seo. "Realization of Motion Parallax through Realistic and Immersive Video Rendering." In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/3d.2022.jw5a.2.

Abstract:
In this paper, we introduce the results of providing three-dimensionality and immersion on an ordinary UHDTV by realizing motion parallax from multi-view images in the MIV (MPEG Immersive Video) atlas format.
4. Samelak, Jarosław, Adrian Dziembowski, Dawid Mieloch, Marek Domański, and Maciej Wawrzyniak. "Efficient Immersive Video Compression using Screen Content Coding." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.22.

Abstract:
The paper deals with efficient compression of immersive video representations for the synthesis of video related to virtual viewports, i.e., to selected virtual viewer positions and selected virtual directions of watching. The goal is to obtain possibly high quality of virtual video obtained from compressed representations of immersive video acquired from multiple omnidirectional and planar (perspective) cameras, or from computer animation. In the paper, we describe a solution based on HEVC (High Efficiency Video Coding) compression and the recently proposed MPEG Test Model for Immersive Video. The idea is to use standard-compliant Screen Content Coding tools that were proposed for other applications and have never been used for immersive video compression. The experimental results with standard test video sequences are reported for the normalized experimental conditions defined by MPEG. In the paper, it is demonstrated that the proposed solution yields up to 20% of bitrate reduction for the constant quality of virtual video.
5. Salahieh, Basel, Mengyu Chen, and Jill Boyce. "An Overview of MPEG Immersive Video." In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: OSA, 2021. http://dx.doi.org/10.1364/3d.2021.3w5a.6.

6. Jeong, Jong-Beom, Soonbin Lee, and Eun-Seok Ryu. "Delta QP Allocation for MPEG Immersive Video." In 2022 13th International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2022. http://dx.doi.org/10.1109/ictc55196.2022.9952936.

7. Grzelka, Adam, Adrian Dziembowski, Dawid Mieloch, and Marek Domański. "The Study of the Video Encoder Efficiency in Decoder-side Depth Estimation Applications." In WSCG'2022 - 30. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2022. Západočeská univerzita, 2022. http://dx.doi.org/10.24132/csrn.3201.31.

Abstract:
The paper presents a study of the impact of lossy compression on depth estimation and virtual view quality. Two scenarios were considered: the approach based on the coder-agnostic ISO/IEC 23090-12 MPEG Immersive Video standard, and the more general approach based on simulcast video coding. The commonly used compression techniques were tested: VVC (MPEG-I Part 3 / H.266), HEVC (MPEG-H Part 2 / H.265), AVC (MPEG-4 Part 10 / H.264), MPEG-2 (MPEG-2 Part 2 / H.262), AV1 (AOMedia Video 1), and VP9 (AOMedia VP9). The quality of virtual views generated from the encoded stream was assessed with the IV-PSNR metric, which is adapted to synthesized images. The results are presented as a relationship between virtual view quality and the quality of decoded real views. The main conclusion from the performed experiments is that encoding quality and virtual view quality are encoder-dependent; therefore, the video encoder should be chosen carefully to achieve the best quality in decoder-side depth estimation applications.
8. Klóska, Dominika, Adrian Dziembowski, and Jarosław Samelak. "Versatile Input View Selection for Efficient Immersive Video Transmission." In WSCG 2023 – 31. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision. University of West Bohemia, Czech Republic, 2023. http://dx.doi.org/10.24132/csrn.3301.31.

Abstract:
In this paper we deal with the problem of the optimal selection of input views that are transmitted within an immersive video bitstream. Due to limited bitrate and pixel rate, only a subset of the input views available on the encoder side can be fully transmitted to the decoder. The remaining views are, in the simplest approach, omitted, or, in the newest immersive video encoding standard (MPEG Immersive Video, MIV), pruned in order to remove less important information. Selecting the proper views for transmission is crucial for the quality of the immersive video system user's experience. In the paper we analyze which input views have to be selected to provide the best possible quality of virtual views, independently of the viewport requested by the viewer. Moreover, we propose an algorithm that takes into account a non-uniform probability of the user's viewing direction, allowing an increase in the subjective quality of virtual navigation for omnidirectional content.
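As an illustration of the idea only (a toy greedy heuristic with assumed names and scoring, not the algorithm proposed in the paper), view selection can weight candidate views by a non-uniform probability of the user's viewing direction:

```python
def select_views(view_directions, direction_probs, k, fov=45):
    """Greedily pick k views maximizing the expected coverage of the
    viewing directions a user is likely to request.

    view_directions: view id -> camera viewing angle in degrees
    direction_probs: direction bucket (degrees) -> probability
    A view is assumed to cover buckets within +/- fov degrees of its axis.
    """
    def covers(view_id, angle):
        diff = abs((view_directions[view_id] - angle + 180) % 360 - 180)
        return diff <= fov

    def expected_coverage(chosen):
        return sum(p for angle, p in direction_probs.items()
                   if any(covers(v, angle) for v in chosen))

    chosen = set()
    for _ in range(k):
        best = max((v for v in view_directions if v not in chosen),
                   key=lambda v: expected_coverage(chosen | {v}))
        chosen.add(best)
    return chosen

# Users mostly look forward (0 degrees), so the forward view is chosen
# first, then the view covering the next-most-probable direction.
views = {"v0": 0, "v1": 90, "v2": 180, "v3": 270}
probs = {0: 0.6, 90: 0.2, 180: 0.1, 270: 0.1}
print(sorted(select_views(views, probs, 2)))  # ['v0', 'v1']
```

With a uniform probability the choice would instead spread the views evenly around the circle, which is exactly the behavior the paper's non-uniform weighting is meant to improve on.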
9. Jeong, Jong-Beom, Soonbin Lee, and Eun-Seok Ryu. "Rethinking Fatigue-Aware 6DoF Video Streaming: Focusing on MPEG Immersive Video." In 2022 International Conference on Information Networking (ICOIN). IEEE, 2022. http://dx.doi.org/10.1109/icoin53446.2022.9687247.

10. Milovanovic, Marta, Felix Henry, Marco Cagnazzo, and Joel Jung. "Patch Decoder-Side Depth Estimation in MPEG Immersive Video." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9414056.

