Table of Contents
Selection of scholarly literature on the topic "Versatile Video Coding"
Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Versatile Video Coding".
Journal articles on the topic "Versatile Video Coding"
Choe, Jaeryun, Haechul Choi, Heeji Han, and Daehyeok Gwon. "Novel video coding methods for versatile video coding". International Journal of Computational Vision and Robotics 11, no. 5 (2021): 526. http://dx.doi.org/10.1504/ijcvr.2021.10040489.
Han, Heeji, Daehyeok Gwon, Jaeryun Choe, and Haechul Choi. "Novel video coding methods for versatile video coding". International Journal of Computational Vision and Robotics 11, no. 5 (2021): 526. http://dx.doi.org/10.1504/ijcvr.2021.117582.
Takamura, Seishi. "Versatile Video Coding: a Next-generation Video Coding Standard". NTT Technical Review 17, no. 6 (June 2019): 49–52. http://dx.doi.org/10.53829/ntr201906gls.
Silva, Giovane Gomes, Ícaro Gonçalves Siqueira, Mateus Grellert, and Claudio Machado Diniz. "Approximate Hardware Architecture for Interpolation Filter of Versatile Video Coding". Journal of Integrated Circuits and Systems 16, no. 2 (August 15, 2021): 1–8. http://dx.doi.org/10.29292/jics.v16i2.327.
Sullivan, Gary J. "Video Coding Standards Progress Report: Joint Video Experts Team Launches the Versatile Video Coding Project". SMPTE Motion Imaging Journal 127, no. 8 (September 2018): 94–98. http://dx.doi.org/10.5594/jmi.2018.2846098.
Mishra, Amit Kumar. "Versatile Video Coding (VVC) Standard: Overview and Applications". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 2 (September 10, 2019): 975–81. http://dx.doi.org/10.17762/turcomat.v10i2.13578.
Palau, Roberta De Carvalho Nobre, Bianca Santos da Cunha Silveira, Robson André Domanski, Marta Breunig Loose, Arthur Alves Cerveira, Felipe Martin Sampaio, Daniel Palomino, Marcelo Schiavon Porto, Guilherme Ribeiro Corrêa, and Luciano Volcan Agostini. "Modern Video Coding: Methods, Challenges and Systems". Journal of Integrated Circuits and Systems 16, no. 2 (August 16, 2021): 1–12. http://dx.doi.org/10.29292/jics.v16i2.503.
Adhuran, Jayasingam, Gosala Kulupana, Chathura Galkandage, and Anil Fernando. "Multiple Quantization Parameter Optimization in Versatile Video Coding for 360° Videos". IEEE Transactions on Consumer Electronics 66, no. 3 (August 2020): 213–22. http://dx.doi.org/10.1109/tce.2020.3001231.
Li, Wei, Xiantao Jiang, Jiayuan Jin, Tian Song, and Fei Richard Yu. "Saliency-Enabled Coding Unit Partitioning and Quantization Control for Versatile Video Coding". Information 13, no. 8 (August 19, 2022): 394. http://dx.doi.org/10.3390/info13080394.
Park, Dohyeon, Jinho Lee, Jung-Won Kang, and Jae-Gon Kim. "Simplified Triangular Partitioning Mode in Versatile Video Coding". IEICE Transactions on Information and Systems E103.D, no. 2 (February 1, 2020): 472–75. http://dx.doi.org/10.1587/transinf.2019edl8084.
Der volle Inhalt der QuelleDissertationen zum Thema "Versatile Video Coding"
Nasrallah, Anthony. "Novel compression techniques for next-generation video coding". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT043.
Video content now accounts for about 82% of global internet traffic. This large share is driven by the revolution in video content consumption; at the same time, the market increasingly demands videos with higher resolutions and quality, which significantly increases the amount of data to be transmitted. Hence the need for video coding algorithms that are even more efficient than existing ones, to limit the growth of transmission rates and ensure a better quality of service. In addition, the massive consumption of multimedia content on electronic devices has an ecological impact, so finding a compromise between algorithmic complexity and implementation efficiency is a new challenge. Against this background, a collaborative team was created with the aim of developing a new video coding standard, Versatile Video Coding (VVC/H.266). Although VVC achieves a bit-rate reduction of more than 40% compared with HEVC, this does not mean that coding efficiency no longer needs to be improved, and VVC also adds considerable complexity compared with HEVC. This thesis addresses these problems by proposing three new encoding methods. The contributions are organized along two main axes. The first axis proposes and implements new compression tools in the new standard that are capable of generating additional coding gains. Two methods are proposed for this axis, both relying on the derivation of prediction information at the decoder side. Increasing the number of choices at the encoder can improve the accuracy of the predictions and yield residuals with less energy, leading to a lower bit rate; however, more prediction modes require more signaling in the bitstream to inform the decoder of the choices made at the encoder, so these gains can be more than offset by the added signaling. If the prediction information is instead derived at the decoder, the decoder is no longer passive but becomes active, hence the notion of an intelligent decoder: the information no longer needs to be signaled, which saves signaling bits. Each of the two methods offers a different technique for deriving this information at the decoder. The first technique builds a histogram of gradients to deduce several intra-prediction modes, which can then be combined by prediction fusion to obtain the final intra prediction for a given block. This fusion property makes it possible to predict regions with complex textures more accurately, regions which, in conventional coding schemes, would instead require finer partitioning and/or the transmission of high-energy residuals. The second technique gives VVC the ability to switch between different interpolation filters for inter prediction; the optimal filter selected by the encoder is derived at the decoder by convolutional neural networks. The second axis, unlike the first, does not add new tools to the VVC algorithm. It instead aims at an optimized use of the existing algorithm, with the ultimate goal of finding the best possible compromise between the compression efficiency delivered and the complexity imposed by the VVC tools. To this end, an optimization system is designed to determine an effective technique for activating the new coding tools; this determination can be done either with artificial neural networks or without any artificial-intelligence technique.
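The first method sketched in this abstract, deriving intra-prediction modes at the decoder from a histogram of gradients over already reconstructed neighbouring samples, can be illustrated as follows. This is a minimal sketch in Python, not the algorithm from the thesis or the VVC specification: the template layout, the number of angular bins (num_bins), the number of fused modes (top_k), and the weighting rule are illustrative assumptions.

import numpy as np

def derive_intra_modes(template, num_bins=33, top_k=2):
    """Derive the top_k dominant gradient orientations in `template` (a patch of
    reconstructed samples around the block) and return them together with
    normalised weights for prediction fusion."""
    t = template.astype(np.float64)
    gy, gx = np.gradient(t)                      # vertical / horizontal gradients
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)                   # orientation in (-pi, pi]

    # Quantise orientations into angular bins (stand-ins for angular intra modes)
    # and accumulate gradient magnitudes into a histogram.
    bins = ((angle + np.pi) / (2 * np.pi) * num_bins).astype(int) % num_bins
    histogram = np.bincount(bins.ravel(), weights=magnitude.ravel(),
                            minlength=num_bins)

    # Keep the strongest orientations; their histogram amplitudes become fusion weights.
    modes = np.argsort(histogram)[::-1][:top_k]
    weights = histogram[modes]
    total = weights.sum()
    weights = weights / total if total > 0 else np.full(top_k, 1.0 / top_k)
    return modes, weights

# Usage: the decoder would build `template` from reconstructed samples above and to
# the left of the block; a random array stands in for them here.
rng = np.random.default_rng(0)
modes, weights = derive_intra_modes(rng.integers(0, 256, size=(8, 8)))
# The final prediction is then a weighted fusion of the predictors for `modes`.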
Aklouf, Mourad. "Video for events: Compression and transport of the next generation video codec". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG029.
The acquisition and delivery of video content with minimal latency has become essential in several business areas such as sports broadcasting, video conferencing, telepresence, remote vehicle operation, and remote system control. The live-streaming industry grew in 2020 and will expand further in the coming years with the emergence of new high-efficiency video codecs based on the Versatile Video Coding (VVC) standard and the fifth generation of mobile networks (5G). HTTP Adaptive Streaming (HAS) methods such as MPEG-DASH, which use algorithms to adapt the transmission rate of compressed video, have proven very effective at improving the quality of experience (QoE) in a video-on-demand (VOD) context. Nevertheless, minimizing the delay between image acquisition and display at the receiver is essential in applications where latency is critical. Most rate-adaptation algorithms are designed to optimize video transmission from a server in the core network to mobile clients. In applications requiring low-latency streaming, such as remote control of drones or broadcasting of sports events, the role of the server is played by a mobile terminal, which acquires, compresses, and transmits the video over a radio access channel to one or more clients. Client-driven rate-adaptation approaches are therefore unsuitable in this context because of the variability of the channel characteristics. In addition, HAS methods, whose decisions are made with a periodicity on the order of a second, are not sufficiently reactive when the server is moving, which may generate significant delays. It is therefore important to use a very fine adaptation granularity to reduce the end-to-end delay. The small size of the transmission and reception buffers (kept small to minimize latency) makes rate adaptation more difficult in this use case: when the bandwidth varies with a time constant smaller than the regulation period, poor transmission-rate decisions can induce a significant latency overhead. The aim of this thesis is to provide answers to the problem of low-latency delivery of video acquired, compressed, and transmitted by mobile terminals. We first present a frame-by-frame rate-adaptation algorithm for low-latency broadcasting. A Model Predictive Control (MPC) approach is proposed to determine the coding rate of each frame to be transmitted, using information about the buffer level of the transmitter and the characteristics of the transmission channel. Since the frames are coded live, a model relating the quantization parameter (QP) to the output rate of the video encoder is required; we therefore propose a new model linking the rate to the QP of the current frame and to the distortion of the previous frame. This model gives much better results for frame-by-frame decisions on the coding rate than the reference models in the literature. In addition to the above techniques, we also propose tools to reduce the complexity of video encoders such as VVC. The current version of the VVC encoder (VTM10) has an execution time nine times higher than that of the HEVC encoder, so the VVC encoder is not suitable for real-time encoding and streaming applications on currently available platforms. In this context, we present a systematic branch-and-prune method to identify a set of coding tools that can be disabled while satisfying a constraint on coding efficiency. This work contributes to the realization of a real-time VVC coder.
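The frame-by-frame rate adaptation outlined in this abstract can be illustrated with a short sketch. This is a minimal Python illustration under assumed names, not the controller or the rate-QP-distortion model proposed in the thesis: the model form in predicted_bits, its constants a and b, and the buffer-feedback heuristic in choose_qp are placeholders.

def predicted_bits(qp, prev_distortion, a=5.0e4, b=0.7):
    """Toy rate model: estimated bits for a frame coded at `qp`, given the
    distortion (e.g. MSE) of the previously coded frame."""
    return a * (prev_distortion ** b) * 2.0 ** (-qp / 6.0)

def choose_qp(channel_bps, fps, buffer_bits, buffer_target_bits, prev_distortion,
              qp_range=range(18, 48)):
    """Pick the QP whose predicted frame size best tracks the channel rate while
    steering the transmitter buffer back toward its target occupancy."""
    # Per-frame bit budget: channel share minus a fraction of the buffer excess.
    budget = max(channel_bps / fps - 0.5 * (buffer_bits - buffer_target_bits), 1.0)
    return min(qp_range, key=lambda qp: abs(predicted_bits(qp, prev_distortion) - budget))

# Usage: 2 Mb/s channel at 30 fps with the buffer slightly above its target level.
qp = choose_qp(2_000_000, 30, buffer_bits=80_000, buffer_target_bits=60_000,
               prev_distortion=40.0)

In a real low-latency system the channel-rate estimate and the buffer level would be refreshed before every frame, which is what provides the fine, frame-level adaptation granularity discussed above.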
Books on the topic "Versatile Video Coding"
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. Versatile Video Coding (VVC). Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7.
Rao, K. R., and Humberto Ochoa Dominguez. Versatile Video Coding. River Publishers, 2022.
Versatile Video Coding. River Publishers, 2019.
Rao, K. R., and Humberto Ochoa Dominguez. Versatile Video Coding. River Publishers, 2019.
Den vollen Inhalt der Quelle findenSaldanha, Mário, Gustavo Sanchez, Luciano Agostini und César Marcon. Versatile Video Coding: Machine Learning and Heuristics. Springer International Publishing AG, 2022.
Den vollen Inhalt der Quelle findenBuchteile zum Thema "Versatile Video Coding"
Domínguez, Humberto Ochoa, and K. R. Rao. "Screen Content Coding for HEVC". In Versatile Video Coding, 139–51. New York: River Publishers, 2022. http://dx.doi.org/10.1201/9781003339991-4.
Domínguez, Humberto Ochoa, and K. R. Rao. "Lossless and Visually Lossless Coding Algorithms". In Versatile Video Coding, 153–80. New York: River Publishers, 2022. http://dx.doi.org/10.1201/9781003339991-5.
Domínguez, Humberto Ochoa, and K. R. Rao. "HEVC Encoder". In Versatile Video Coding, 19–137. New York: River Publishers, 2022. http://dx.doi.org/10.1201/9781003339991-3.
Domínguez, Humberto Ochoa, and K. R. Rao. "Introduction". In Versatile Video Coding, 1. New York: River Publishers, 2022. http://dx.doi.org/10.1201/9781003339991-1.
Domínguez, Humberto Ochoa, and K. R. Rao. "Beyond High Efficiency Video Coding (HEVC)". In Versatile Video Coding, 3–18. New York: River Publishers, 2022. http://dx.doi.org/10.1201/9781003339991-2.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Versatile Video Coding (VVC)". In Versatile Video Coding (VVC), 7–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_2.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Learning-Based Fast Decision for Intra-frame Prediction Mode Selection for Luminance". In Versatile Video Coding (VVC), 89–97. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_8.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "State-of-the-Art Overview". In Versatile Video Coding (VVC), 35–42. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_4.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Light Gradient Boosting Machine Configurable Fast Block Partitioning for Luminance". In Versatile Video Coding (VVC), 71–88. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_7.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Performance Analysis of VVC Intra-frame Prediction". In Versatile Video Coding (VVC), 43–61. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_5.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Versatile Video Coding"
Sullivan, Gary. "Versatile Video Coding (VVC) Arrives". In 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2020. http://dx.doi.org/10.1109/vcip49819.2020.9301847.
Kulupana, Gosala, Venkata Phani Kumar M, and Saverio Blasi. "Fast Versatile Video Coding using Specialised Decision Trees". In 2021 Picture Coding Symposium (PCS). IEEE, 2021. http://dx.doi.org/10.1109/pcs50896.2021.9477461.
Wien, Mathias, and Benjamin Bross. "Versatile Video Coding – Algorithms and Specification". In 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2020. http://dx.doi.org/10.1109/vcip49819.2020.9301820.
Li, Yiming, Zizheng Liu, Zhenzhong Chen, and Shan Liu. "Rate Control For Versatile Video Coding". In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191125.
Cerveira, Arthur, Luciano Agostini, Bruno Zatt, and Felipe Sampaio. "Memory Assessment Of Versatile Video Coding". In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191358.
Blaser, Max, Han Gao, Semih Esenlik, Elena Alshina, Zhijie Zhao, Christian Rohlfing, and Eckehard Steinbach. "Low-Complexity Geometric Inter-Prediction for Versatile Video Coding". In 2019 Picture Coding Symposium (PCS). IEEE, 2019. http://dx.doi.org/10.1109/pcs48520.2019.8954504.
Xu, Xiaozhong, Xiang Li, and Shan Liu. "Current Picture Referencing in Versatile Video Coding". In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE, 2019. http://dx.doi.org/10.1109/mipr.2019.00013.
Li, Congrui, Zhenghui Zhao, Junru Li, Xiang Zhang, Siwei Ma, and Chen Li. "Bi-Intra Prediction for Versatile Video Coding". In 2019 Data Compression Conference (DCC). IEEE, 2019. http://dx.doi.org/10.1109/dcc.2019.00099.
Chang, Tsui-Shan, Yu-Chen Sun, Ling Zhu, and Jian Lou. "Adaptive Resolution Change for Versatile Video Coding". In 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2020. http://dx.doi.org/10.1109/vcip49819.2020.9301762.
Gudumasu, Srinivas, Saurav Bandyopadhyay, and Yong He. "Software-based versatile video coding decoder parallelization". In MMSys '20: 11th ACM Multimedia Systems Conference. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3339825.3391871.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Versatile Video Coding"
Zhao, S., S. Wenger, Y. Sanchez, Y. K. Wang, and M. M. Hannuksela. RTP Payload Format for Versatile Video Coding (VVC). RFC Editor, December 2022. http://dx.doi.org/10.17487/rfc9328.