A ready-made bibliography on the topic "VVC, Versatile Video Coding"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "VVC, Versatile Video Coding".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, when these are available in the source metadata.
Journal articles on the topic "VVC, Versatile Video Coding"
Silva, Giovane Gomes, Ícaro Gonçalves Siqueira, Mateus Grellert, and Claudio Machado Diniz. "Approximate Hardware Architecture for Interpolation Filter of Versatile Video Coding". Journal of Integrated Circuits and Systems 16, no. 2 (August 15, 2021): 1–8. http://dx.doi.org/10.29292/jics.v16i2.327.
Choi, Kiho. "A Study on Fast and Low-Complexity Algorithms for Versatile Video Coding". Sensors 22, no. 22 (November 20, 2022): 8990. http://dx.doi.org/10.3390/s22228990.
Zouidi, Naima, Amina Kessentini, Wassim Hamidouche, Nouri Masmoudi, and Daniel Menard. "Multitask Learning Based Intra-Mode Decision Framework for Versatile Video Coding". Electronics 11, no. 23 (December 2, 2022): 4001. http://dx.doi.org/10.3390/electronics11234001.
Amrutha Valli Pamidi, Lakshmi, and Purnachand Nalluri. "Optimized in-loop filtering in versatile video coding using improved fast guided filter". Indonesian Journal of Electrical Engineering and Computer Science 33, no. 2 (February 1, 2024): 911. http://dx.doi.org/10.11591/ijeecs.v33.i2.pp911-919.
Jung, Seongwon, and Dongsan Jun. "Context-Based Inter Mode Decision Method for Fast Affine Prediction in Versatile Video Coding". Electronics 10, no. 11 (May 24, 2021): 1243. http://dx.doi.org/10.3390/electronics10111243.
Pełny tekst źródła高啟洲, 高啟洲, i 賴美妤 Chi-Chou Kao. "基於深度學習之改良式多功能影像編碼快速畫面內模式決策研究". 理工研究國際期刊 12, nr 1 (kwiecień 2022): 037–48. http://dx.doi.org/10.53106/222344892022041201004.
Mishra, Amit Kumar. "Versatile Video Coding (VVC) Standard: Overview and Applications". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 2 (September 10, 2019): 975–81. http://dx.doi.org/10.17762/turcomat.v10i2.13578.
Li, Minghui, Zhaohong Li, and Zhenzhen Zhang. "A VVC Video Steganography Based on Coding Units in Chroma Components with a Deep Learning Network". Symmetry 15, no. 1 (December 31, 2022): 116. http://dx.doi.org/10.3390/sym15010116.
Saha, Anup, Miguel Chavarrías, Fernando Pescador, Ángel M. Groba, Kheyter Chassaigne, and Pedro L. Cebrián. "Complexity Analysis of a Versatile Video Coding Decoder over Embedded Systems and General Purpose Processors". Sensors 21, no. 10 (May 11, 2021): 3320. http://dx.doi.org/10.3390/s21103320.
Chen, Guojie, and Min Lin. "Sample-Based Gradient Edge and Angular Prediction for VVC Lossless Intra-Coding". Applied Sciences 14, no. 4 (February 18, 2024): 1653. http://dx.doi.org/10.3390/app14041653.
Pełny tekst źródłaRozprawy doktorskie na temat "VVC, Versatile Video Coding"
Nasrallah, Anthony. "Novel compression techniques for next-generation video coding". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT043.
Video content now accounts for about 82% of global internet traffic. This large share stems from the revolution in video content consumption. At the same time, the market increasingly demands videos with higher resolutions and quality, which significantly increases the amount of data to be transmitted. Video coding algorithms even more efficient than existing ones are therefore needed to limit the growth of transmission rates and ensure a better quality of service. In addition, the massive consumption of multimedia content on electronic devices has an ecological impact, so finding a compromise between algorithmic complexity and implementation efficiency is a new challenge. To this end, a collaborative team was created to develop a new video coding standard, Versatile Video Coding (VVC/H.266). Although VVC achieves a bit-rate reduction of more than 40% compared to HEVC, this does not mean that there is no longer any need to improve coding efficiency; moreover, VVC adds remarkable complexity compared to HEVC. This thesis responds to these problems by proposing three new encoding methods. The contributions of this research fall into two main axes. The first axis proposes and implements new compression tools for the new standard that can generate additional coding gains. Two methods have been proposed along this axis, both relying on deriving prediction information at the decoder side. Increasing the number of encoder choices can improve prediction accuracy and yield residuals with lower energy, reducing the bit rate. Nevertheless, more prediction modes require more signaling in the bitstream to inform the decoder of the choices made at the encoder, so the gains mentioned above can be offset by the added signaling.
If the prediction information is instead derived at the decoder, the decoder is no longer passive but active, hence the concept of an intelligent decoder; the information no longer needs to be transmitted, which saves signaling bits. Each of the two methods offers a different technique for deriving the prediction information at the decoder. The first constructs a histogram of gradients to deduce intra-prediction modes, which can then be combined through prediction fusion to obtain the final intra prediction for a given block. This fusion property makes it possible to predict areas with complex textures more accurately, areas that conventional coding schemes would instead handle with finer partitioning and/or transmission of high-energy residuals. The second technique gives VVC the ability to switch between different interpolation filters for inter prediction; the optimal filter selected by the encoder is deduced using convolutional neural networks. The second axis, unlike the first, does not add anything to the VVC algorithm; it instead aims at an optimized use of the existing algorithm. The ultimate goal is to find the best possible compromise between the compression efficiency delivered and the complexity imposed by the VVC tools. An optimization system is therefore designed to determine an effective technique for activating the new coding tools, either with artificial neural networks or without any artificial-intelligence technique.
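The gradient-histogram derivation described in this abstract can be illustrated with a short sketch. This is a hedged reconstruction, not the thesis's actual algorithm: the template layout, the bin count, and the fusion weighting are assumptions made purely for illustration.

```python
import numpy as np

def derive_intra_modes(template, num_modes=2, bins=33):
    """Derive candidate intra-prediction modes from a histogram of
    gradients computed over already-decoded neighboring samples
    (the 'template'), so no mode index needs to be signaled."""
    gy, gx = np.gradient(template.astype(float))  # vertical, horizontal gradients
    magnitude = np.abs(gx) + np.abs(gy)           # L1 gradient magnitude
    angle = np.arctan2(gy, gx)                    # orientation in [-pi, pi]
    # Quantize each orientation into one of `bins` directional classes
    idx = ((angle + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), magnitude.ravel())
    # Dominant orientations become the derived modes; their histogram
    # amplitudes give the weights used to fuse the predictions.
    modes = np.argsort(hist)[::-1][:num_modes]
    weights = hist[modes] / (hist[modes].sum() + 1e-9)
    return modes, weights
```

Fusing the predictors of the top orientations, weighted by their histogram amplitudes, is what allows complex textures to be predicted without extra partitioning or extra signaling.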
Aklouf, Mourad. "Video for events: Compression and transport of the next generation video codec". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG029.
Acquiring and delivering video content with minimal latency has become essential in several business areas such as sports broadcasting, video conferencing, telepresence, remote vehicle operation, and remote system control. The live-streaming industry grew in 2020 and will expand further in the coming years with the emergence of new high-efficiency video codecs based on the Versatile Video Coding (VVC) standard and the fifth generation of mobile networks (5G). HTTP Adaptive Streaming (HAS) methods such as MPEG-DASH, which use algorithms to adapt the transmission rate of compressed video, have proven very effective at improving the quality of experience (QoE) in a video-on-demand (VOD) context. Nevertheless, minimizing the delay between image acquisition and display at the receiver is essential in latency-critical applications. Most rate-adaptation algorithms are designed to optimize video transmission from a server in the core network to mobile clients. In applications requiring low-latency streaming, such as remote control of drones or broadcasting of sports events, the role of the server is played by a mobile terminal, which acquires and compresses the video and transmits the compressed stream over a radio access channel to one or more clients. Client-driven rate-adaptation approaches are therefore unsuitable in this context because of the variability of the channel characteristics. In addition, HAS methods, whose decisions are made with a periodicity on the order of one second, are not sufficiently reactive when the server is moving, which may introduce significant delays. A very fine adaptation granularity is therefore needed to reduce the end-to-end delay, and the reduced size of the transmission and reception buffers (to minimize latency) makes rate adaptation harder in our use case.
When the bandwidth varies with a time constant smaller than the regulation period, bad transmission-rate decisions can induce a significant latency overhead. The aim of this thesis is to provide answers to the problem of low-latency delivery of video acquired, compressed, and transmitted by mobile terminals. We first present a frame-by-frame rate-adaptation algorithm for low-latency broadcasting. A Model Predictive Control (MPC) approach determines the coding rate of each frame to be transmitted, using information about the transmitter's buffer level and the characteristics of the transmission channel. Since the frames are coded live, a model relating the quantization parameter (QP) to the output rate of the video encoder is required. We have therefore proposed a new model linking the rate to the QP of the current frame and to the distortion of the previous frame; it provides much better results for frame-by-frame rate decisions than the reference models in the literature. Beyond these techniques, we have also proposed tools to reduce the complexity of video encoders such as VVC. The current version of the VVC encoder (VTM10) has an execution time nine times that of the HEVC encoder, so it is unsuitable for real-time encoding and streaming applications on currently available platforms. In this context, we present a systematic branch-and-prune method to identify a set of coding tools that can be disabled while satisfying a constraint on coding efficiency. This work contributes to the realization of a real-time VVC encoder.
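A frame-level rate model of the kind this abstract describes (rate as a function of the current QP and the previous frame's distortion) can be sketched as follows. The functional form and the constants A, B, C are hypothetical placeholders standing in for parameters that would be fitted online; the thesis does not publish its model here.

```python
import math

# Hypothetical model constants; in practice they would be fitted
# online per sequence, not hard-coded.
A, B, C = 5.0e4, 0.6, 0.12

def predict_rate(qp, prev_distortion):
    """Toy rate model: output bits grow with the previous frame's
    distortion and decay exponentially with the current frame's QP."""
    return A * prev_distortion ** B * math.exp(-C * qp)

def select_qp(target_bits, prev_distortion):
    """Invert the model to pick the QP whose predicted rate meets the
    per-frame budget (e.g., derived from the channel rate and the
    transmitter buffer level), clipped to the VVC QP range [0, 63]."""
    qp = -math.log(target_bits / (A * prev_distortion ** B)) / C
    return min(max(round(qp), 0), 63)
```

An MPC controller would evaluate such a model over a short horizon of future frames, using the buffer level and channel estimates to set `target_bits` before each frame is encoded.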
Books on the topic "VVC, Versatile Video Coding"
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. Versatile Video Coding (VVC). Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7.
Rao, K. R., and Humberto Ochoa Dominguez. Versatile Video Coding. River Publishers, 2019.
Rao, K. R., and Humberto Ochoa Dominguez. Versatile Video Coding. River Publishers, 2022.
Saldanha, Mário, Gustavo Sanchez, Luciano Agostini, and César Marcon. Versatile Video Coding: Machine Learning and Heuristics. Springer International Publishing AG, 2022.
Rao, K. R., Humberto Ochoa Domínguez, and Shreyanka Subbarayappa. Digital Video Coding for Next Generation Multimedia: H.264, HEVC, VVC, EVC Video Compression. River Publishers, 2021.
Znajdź pełny tekst źródłaCzęści książek na temat "VVC, Versatile Video Coding"
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Versatile Video Coding (VVC)". In Versatile Video Coding (VVC), 7–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_2.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "VVC Intra-frame Prediction". In Versatile Video Coding (VVC), 23–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_3.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Learning-Based Fast Decision for Intra-frame Prediction Mode Selection for Luminance". In Versatile Video Coding (VVC), 89–97. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_8.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "State-of-the-Art Overview". In Versatile Video Coding (VVC), 35–42. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_4.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Light Gradient Boosting Machine Configurable Fast Block Partitioning for Luminance". In Versatile Video Coding (VVC), 71–88. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_7.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Performance Analysis of VVC Intra-frame Prediction". In Versatile Video Coding (VVC), 43–61. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_5.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Fast Intra-frame Prediction Transform for Luminance Using Decision Trees". In Versatile Video Coding (VVC), 99–105. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_9.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Heuristic-Based Fast Block Partitioning Scheme for Chrominance". In Versatile Video Coding (VVC), 107–18. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_10.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Heuristic-Based Fast Multi-type Tree Decision Scheme for Luminance". In Versatile Video Coding (VVC), 63–69. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_6.
Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Conclusions and Open Research Possibilities". In Versatile Video Coding (VVC), 119–21. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_11.
Pełny tekst źródłaStreszczenia konferencji na temat "VVC, Versatile Video Coding"
Sullivan, Gary. "Versatile Video Coding (VVC) Arrives". In 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2020. http://dx.doi.org/10.1109/vcip49819.2020.9301847.
Fan, Kui, Ronggang Wang, Weisi Lin, Jong-Uk Hou, Lingyu Duan, Ge Li, and Wen Gao. "Separable KLT for Intra Coding in Versatile Video Coding (VVC)". In 2019 Data Compression Conference (DCC). IEEE, 2019. http://dx.doi.org/10.1109/dcc.2019.00083.
Fu, Tianliang, Xiaozhen Zheng, Shanshe Wang, and Siwei Ma. "Composite Long-Term Reference Coding for Versatile Video Coding (VVC)". In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803708.
Wang, Suhong, Xiang Zhang, Shanshe Wang, Siwei Ma, and Wen Gao. "Adaptive Wavelet Domain Filter for Versatile Video Coding (VVC)". In 2019 Data Compression Conference (DCC). IEEE, 2019. http://dx.doi.org/10.1109/dcc.2019.00015.
Abdoli, Mohsen, Felix Henry, Patrice Brault, Frederic Dufaux, and Pierre Duhamel. "Transform Coefficient Coding for Screen Content in Versatile Video Coding (VVC)". In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683285.
Bross, Benjamin, Mathias Wien, Jens-Rainer Ohm, Gary J. Sullivan, and Yan Ye. "Update on the emerging versatile video coding (VVC) standard and its applications". In MHV '22: Mile-High Video Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3510450.3517315.
Aklouf, Mourad, Marc Leny, Frederic Dufaux, and Michel Kieffer. "Low Complexity Versatile Video Coding (VVC) for Low Bitrate Applications". In 2019 8th European Workshop on Visual Information Processing (EUVIP). IEEE, 2019. http://dx.doi.org/10.1109/euvip47703.2019.8946261.
Can Mert, Ahmet, Ercan Kalali, and Ilker Hamzaoglu. "A Low Power Versatile Video Coding (VVC) Fractional Interpolation Hardware". In 2018 Conference on Design and Architectures for Signal and Image Processing (DASIP). IEEE, 2018. http://dx.doi.org/10.1109/dasip.2018.8597040.
Karjee, Jyotirmoy, Aryan Dubey, and Anurag Chaudhary. "Coding Unit Partitions Using Depth-Wise Separable Convolution in Versatile Video Coding (VVC)". In 2023 IEEE Globecom Workshops (GC Wkshps). IEEE, 2023. http://dx.doi.org/10.1109/gcwkshps58843.2023.10464401.
Mou, Xuanqin, and Yang Li. "Transform coefficients distribution of the future versatile video coding (VVC) standard". In Optoelectronic Imaging and Multimedia Technology V, edited by Qionghai Dai and Tsutomu Shimura. SPIE, 2018. http://dx.doi.org/10.1117/12.2503138.
Pełny tekst źródłaRaporty organizacyjne na temat "VVC, Versatile Video Coding"
Zhao, S., S. Wenger, Y. Sanchez, Y. K. Wang, and M. M. Hannuksela. RTP Payload Format for Versatile Video Coding (VVC). RFC Editor, December 2022. http://dx.doi.org/10.17487/rfc9328.