Selected scholarly literature on the topic "VVC, Versatile Video Coding"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Choose the source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "VVC, Versatile Video Coding".

Next to every source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read the abstract of the work online, if it is present in the metadata.

Journal articles on the topic "VVC, Versatile Video Coding":

1

Silva, Giovane Gomes, Ícaro Gonçalves Siqueira, Mateus Grellert, and Claudio Machado Diniz. "Approximate Hardware Architecture for Interpolation Filter of Versatile Video Coding". Journal of Integrated Circuits and Systems 16, no. 2 (August 15, 2021): 1–8. http://dx.doi.org/10.29292/jics.v16i2.327.

Full text
Abstract:
The new Versatile Video Coding (VVC) standard was recently developed to improve the compression efficiency of previous video coding standards and to support new applications. This was achieved at the cost of increased computational complexity in the encoder algorithms, which creates the need for hardware accelerators and for approximate computing techniques to reach the performance and power dissipation required by video encoding systems. This work proposes an approximate hardware architecture for the interpolation filters defined in the VVC standard, targeting real-time processing of high-resolution video. The architecture can process video of up to 2560×1600 pixels at 30 fps with a power dissipation of 23.9 mW when operating at 522 MHz, with an average compression-efficiency degradation of only 0.41% compared with the default VVC encoder software configuration.
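As a concrete illustration of what such an architecture computes, here is a sketch of half-sample luma interpolation. The 8-tap coefficients below are the well-known HEVC/VVC half-pel luma filter; the border clamping and rounding are simplified relative to the normative process.

```python
# Sketch of fractional-sample interpolation as used in VVC-style motion
# compensation (a simplified illustration, not the normative process).
HALF_PEL = [-1, 4, -11, 40, 40, -11, 4, -1]  # 8-tap half-pel luma filter; sums to 64

def interpolate_half_pel(samples, i):
    """Half-sample value between samples[i] and samples[i+1], edge-clamped."""
    n = len(samples)
    acc = 0
    for k, coeff in enumerate(HALF_PEL):
        idx = min(max(i + k - 3, 0), n - 1)  # clamp at picture borders
        acc += coeff * samples[idx]
    return (acc + 32) >> 6  # normalize by 64 with rounding

# A linear ramp interpolates to the midpoint of its two neighbours.
ramp = [0, 10, 20, 30, 40, 50, 60, 70]
mid = interpolate_half_pel(ramp, 3)  # between 30 and 40
```

Approximate-computing versions of such filters typically truncate coefficients or drop taps, trading a small compression loss (0.41% on average in the paper) for lower area and power.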
2

Choi, Kiho. "A Study on Fast and Low-Complexity Algorithms for Versatile Video Coding". Sensors 22, no. 22 (November 20, 2022): 8990. http://dx.doi.org/10.3390/s22228990.

Full text
Abstract:
Versatile Video Coding (VVC)/H.266, completed in 2020, provides half the bitrate of the previous video coding standard (High-Efficiency Video Coding (HEVC)/H.265) while maintaining the same visual quality. The primary goal of VVC/H.266 is to achieve compression capability noticeably better than that of HEVC/H.265, as well as the functionality to support a variety of applications with a single profile. Although VVC/H.266 improves coding performance by incorporating new advanced technologies with flexible partitioning, the increased encoding complexity has become a challenging issue for practical market usage. To address this complexity issue, significant efforts have been expended on practical methods for reducing the encoding and decoding complexity of VVC/H.266. In this study, we provide an overview of the VVC/H.266 standard, examine a key challenge of VVC/H.266 coding in comparison with previous video coding standards, and survey recent technical advances in fast and low-complexity VVC/H.266, focusing on key technical areas.
3

Zouidi, Naima, Amina Kessentini, Wassim Hamidouche, Nouri Masmoudi, and Daniel Menard. "Multitask Learning Based Intra-Mode Decision Framework for Versatile Video Coding". Electronics 11, no. 23 (December 2, 2022): 4001. http://dx.doi.org/10.3390/electronics11234001.

Full text
Abstract:
In mid-2020, the new international video coding standard, versatile video coding (VVC), was officially released by the Joint Video Experts Team (JVET). As its name indicates, VVC enables a higher level of versatility with better compression performance than its predecessor, high-efficiency video coding (HEVC). VVC introduces several new coding tools, such as multiple reference lines (MRL) and matrix-weighted intra-prediction (MIP), along with several improvements to the block-based hybrid video coding scheme, such as the quadtree with nested multi-type tree (QTMT) and finer-granularity intra-prediction modes (IPMs). Because finding the best encoding decisions requires optimizing the rate-distortion (RD) cost, introducing new coding tools or enhancing existing ones demands additional computation; in fact, VVC is 31 times more complex than HEVC. This paper therefore aims to reduce the computational complexity of VVC. It establishes a large database for intra-prediction and proposes a multitask learning (MTL)-based intra-mode decision framework. Experimental results show that our proposal achieves up to 30% complexity reduction while only slightly increasing the Bjontegaard bit rate (BD-BR).
4

Amrutha Valli Pamidi, Lakshmi, and Purnachand Nalluri. "Optimized in-loop filtering in versatile video coding using improved fast guided filter". Indonesian Journal of Electrical Engineering and Computer Science 33, no. 2 (February 1, 2024): 911. http://dx.doi.org/10.11591/ijeecs.v33.i2.pp911-919.

Full text
Abstract:
Devices with varying display capabilities fed from a common source may face degraded video quality because of limits on transmission bandwidth and storage; the solution to this challenge is to enrich the video quality. To that end, this paper introduces an improved fast guided filter (IFGF) for the contemporary video coding standard H.266/VVC (versatile video coding), the successor of H.265/HEVC (high efficiency video coding). VVC includes several coding techniques that enhance coding efficiency over previous standards; despite this, blocking artifacts are still present in the images. The proposed method therefore focuses on denoising the image and increasing video quality, measured as peak signal-to-noise ratio (PSNR), by using an IFGF for in-loop filtering in VVC to denoise the reconstructed images. The VVC test model (VTM) 17.2 is used to simulate various video sequences with the proposed filter. The method achieves a 0.67% Bjontegaard delta (BD)-rate reduction in the low-delay configuration, with an encoder run-time increase of 4%.
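For context, the guided filter at the core of the IFGF fits a local linear model of the output on a guide image. Below is a minimal 1-D, self-guided sketch; the paper's improved fast variant adds subsampling and other refinements not shown here.

```python
from statistics import fmean

def guided_filter_1d(guide, src, radius, eps):
    """Minimal 1-D guided filter: in each window fit q = a*guide + b by
    linear regression, then average the (a, b) of all windows covering a
    sample. Small eps preserves edges; large eps approaches box smoothing."""
    n = len(guide)
    a, b = [0.0] * n, [0.0] * n
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        wg, ws = guide[lo:hi], src[lo:hi]
        mg, ms = fmean(wg), fmean(ws)
        cov = fmean([g * s for g, s in zip(wg, ws)]) - mg * ms
        var = fmean([g * g for g in wg]) - mg * mg
        a[i] = cov / (var + eps)        # regularized slope
        b[i] = ms - a[i] * mg           # intercept
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(fmean(a[lo:hi]) * guide[i] + fmean(b[lo:hi]))
    return out
```

When the filter is self-guided (guide == src) with a small eps, edges pass through almost unchanged while flat regions are smoothed, which is the property that makes it attractive as an in-loop deblocking aid.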
5

Jung, Seongwon, and Dongsan Jun. "Context-Based Inter Mode Decision Method for Fast Affine Prediction in Versatile Video Coding". Electronics 10, no. 11 (May 24, 2021): 1243. http://dx.doi.org/10.3390/electronics10111243.

Full text
Abstract:
Versatile Video Coding (VVC) is the most recent video coding standard developed by the Joint Video Experts Team (JVET); it achieves a bit-rate reduction of 50% with perceptually similar quality compared to the previous standard, High Efficiency Video Coding (HEVC). Although VVC delivers this significant coding performance, it entails a tremendous increase in encoder computational complexity. In particular, VVC newly adopts an affine motion estimation (AME) method to overcome the limitations of the translational motion model, at the expense of higher encoding complexity. In this paper, we propose a context-based inter-mode decision method for fast affine prediction that determines whether AME is performed during rate-distortion (RD) optimization for the optimal CU-mode decision. Experimental results show that the proposed method reduces the encoding complexity of AME by up to 33% with unnoticeable coding loss compared to the VVC Test Model (VTM).
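The affine motion model whose estimation this paper accelerates derives a per-position motion vector from two control-point motion vectors (the 4-parameter model). A sketch of that derivation, with the fixed-point details of the real VVC process omitted:

```python
def affine_mv_4param(cpmv0, cpmv1, width, x, y):
    """4-parameter affine model: the MV at position (x, y) inside a block of
    the given width is derived from the control-point MVs at the top-left
    (cpmv0) and top-right (cpmv1) corners. Floating point for clarity; the
    normative derivation uses integer/fixed-point arithmetic."""
    (v0x, v0y), (v1x, v1y) = cpmv0, cpmv1
    a = (v1x - v0x) / width  # combined zoom/rotation term
    b = (v1y - v0y) / width
    return (a * x - b * y + v0x, b * x + a * y + v0y)
```

With identical control-point MVs the model degenerates to pure translation, which is why AME only pays off on content with rotation or zoom; a context-based decision such as the one proposed can skip AME when that is unlikely.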
6

高啟洲, 高啟洲, and 賴美妤 Chi-Chou Kao. "基於深度學習之改良式多功能影像編碼快速畫面內模式決策研究". 理工研究國際期刊 12, no. 1 (April 2022): 037–48. http://dx.doi.org/10.53106/222344892022041201004.

Full text
Abstract:
H.266/Versatile Video Coding (VVC) targets ultra-high-definition video beyond 4K and supports High Dynamic Range (HDR) imaging and wide color gamut (WCG). However, its coding unit (CU) structure based on a quadtree plus binary tree (QTBT) increases the computational complexity of H.266/VVC encoding. This paper proposes a deep-learning-based fast intra-mode decision method for versatile video coding that reduces intra-coding complexity so that H.266/VVC encoding can be sped up. It further combines intra-frame coding with convolutional neural networks (CNNs) for the intra-prediction mode decision in H.266/VVC. The proposed methods achieve better encoding performance than the original encoding method (JEM7.0).
7

Mishra, Amit Kumar. "Versatile Video Coding (VVC) Standard: Overview and Applications". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 2 (September 10, 2019): 975–81. http://dx.doi.org/10.17762/turcomat.v10i2.13578.

Full text
Abstract:
Information security includes picture and video compression and encryption, since compressed data are more secure than uncompressed imagery; data of smaller size are also simpler to handle. Efficient, secure, and simple data-transport methods are therefore created through effective data-compression technology. Compression algorithms fall into two types, lossy and lossless, and these techniques can be applied to any data format, including text, audio, video, and image files. In this procedure, the Least Significant Bit (LSB) technique is used to encrypt each frame of the video file in order to increase security. The primary goals of this procedure are to safeguard the data by encrypting the frames and compressing the video file. Using PSNR to enhance process throughput would also improve data-transmission security while reducing data loss.
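The Least Significant Bit technique mentioned in this abstract is easy to sketch. Below is a generic LSB embed/extract pair, not the paper's exact procedure:

```python
def lsb_embed(pixels, bits):
    """Hide one bit in the least significant bit of each pixel value."""
    assert len(bits) <= len(pixels), "message longer than cover"
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | (bit & 1)  # clear LSB, then set it
    return stego

def lsb_extract(pixels, n_bits):
    """Recover the first n_bits hidden by lsb_embed."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [120, 121, 122, 123, 124, 125]   # sample pixel values of one frame
message = [1, 0, 1, 1]
stego = lsb_embed(cover, message)
recovered = lsb_extract(stego, len(message))
```

Each pixel changes by at most 1, so the PSNR of the modified frame stays high, which is why PSNR is a natural quality metric for such schemes.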
8

Li, Minghui, Zhaohong Li, and Zhenzhen Zhang. "A VVC Video Steganography Based on Coding Units in Chroma Components with a Deep Learning Network". Symmetry 15, no. 1 (December 31, 2022): 116. http://dx.doi.org/10.3390/sym15010116.

Full text
Abstract:
Versatile Video Coding (VVC) is the latest video coding standard, but most current steganographic algorithms target High-Efficiency Video Coding (HEVC). The concept of symmetry is often adopted in deep neural networks. With the rapid rise of new multimedia, video steganography shows great research potential. This paper proposes a VVC steganographic algorithm based on Coding Units (CUs). Considering the novel techniques in VVC, the proposed steganography embeds secret information only in chroma CUs. Based on modifying the partition modes of chroma CUs, we propose four embedding levels to satisfy different needs for visual quality, capacity, and video bitrate. To reduce the bitrate of stego-videos and mitigate the distortion caused by these modifications, we propose a novel convolutional neural network (CNN) as an additional in-loop filter in the VVC codec to achieve better restoration. Furthermore, the proposed chroma-based steganography has an advantage in resisting most video steganalysis algorithms, since few VVC steganalysis algorithms have been proposed thus far and most HEVC steganalysis algorithms are based on the luminance component. Experimental results show that the proposed VVC steganography achieves excellent performance in visual quality, bitrate cost, and capacity.
9

Saha, Anup, Miguel Chavarrías, Fernando Pescador, Ángel M. Groba, Kheyter Chassaigne, and Pedro L. Cebrián. "Complexity Analysis of a Versatile Video Coding Decoder over Embedded Systems and General Purpose Processors". Sensors 21, no. 10 (May 11, 2021): 3320. http://dx.doi.org/10.3390/s21103320.

Full text
Abstract:
The increase in high-quality video consumption requires increasingly efficient video coding algorithms. Versatile video coding (VVC) is the current state-of-the-art video coding standard. Compared to the previous standard, high efficiency video coding (HEVC), VVC provides approximately 50% higher video compression at the same quality, while significantly increasing computational complexity. In this study, coarse-grain profiling of a VVC decoder was performed on two platforms: one based on a high-performance general purpose processor (HGPP) and the other based on an embedded general purpose processor (EGPP). For the most computationally intensive modules, fine-grain profiling was also performed. The results allowed the identification of the modules that must be targeted by subsequent acceleration efforts. Additionally, the correlation between the performance of each module on the two platforms was determined to identify the influence of the hardware architecture.
10

Chen, Guojie, and Min Lin. "Sample-Based Gradient Edge and Angular Prediction for VVC Lossless Intra-Coding". Applied Sciences 14, no. 4 (February 18, 2024): 1653. http://dx.doi.org/10.3390/app14041653.

Full text
Abstract:
Lossless coding is a compression mode in the Versatile Video Coding (VVC) standard that compresses video without distortion, and it has great application prospects in fields with strict video-quality requirements. Since the current VVC standard is designed mainly for lossy coding, the compression efficiency of VVC lossless coding struggles to meet practical needs. To improve its performance, this paper proposes a sample-based intra-gradient edge detection and angular prediction (SGAP) method. SGAP exploits the characteristics of lossless intra-coding by employing the samples adjacent to the current sample as reference samples and predicting through sample iteration, aiming to improve prediction accuracy in edge regions, smooth regions, and directional texture regions. Experimental results on the VVC Test Model (VTM) 12.3 show that SGAP achieves 7.31% bit-rate savings on average in VVC lossless intra-coding, while encoding time increases by only 5.4%. Compared with existing sample-based intra-prediction methods, SGAP provides significantly higher coding performance gains.
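The Bjontegaard delta rate behind figures like the 7.31% savings reported here compares two rate-distortion curves. The standardized computation fits cubic polynomials to (PSNR, log-rate) points; the sketch below uses piecewise-linear interpolation to convey the idea with less code.

```python
import math

def bd_rate(anchor, test, samples=100):
    """Simplified Bjontegaard delta-rate: average the log-rate gap between
    two R-D curves over their overlapping PSNR range, then convert it to a
    percent bitrate change. Each curve is a list of (bitrate, psnr) points.
    Negative result = bitrate saved by 'test' relative to 'anchor'."""
    def to_curve(points):
        return sorted((p, math.log(r)) for r, p in points)

    def interp(curve, q):
        if q <= curve[0][0]:
            return curve[0][1]
        for (q0, r0), (q1, r1) in zip(curve, curve[1:]):
            if q <= q1:
                return r0 + (q - q0) / (q1 - q0) * (r1 - r0)
        return curve[-1][1]

    a, b = to_curve(anchor), to_curve(test)
    lo, hi = max(a[0][0], b[0][0]), min(a[-1][0], b[-1][0])
    gap = 0.0
    for i in range(samples + 1):
        q = lo + (hi - lo) * i / samples
        w = 0.5 if i in (0, samples) else 1.0  # trapezoidal weights
        gap += w * (interp(b, q) - interp(a, q))
    return (math.exp(gap / samples) - 1.0) * 100.0
```

A codec whose curve sits at 90% of the anchor's bitrate at every quality point yields exactly -10%.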

Theses on the topic "VVC, Versatile Video Coding":

1

Nasrallah, Anthony. "Novel compression techniques for next-generation video coding". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT043.

Full text
Abstract:
Video content now accounts for about 82% of global internet traffic. This large share is due to the revolution in video content consumption, and the market increasingly demands video at higher resolutions and qualities, which significantly increases the amount of data to be transmitted. Hence the need for video coding algorithms even more efficient than existing ones, to limit the growth of transmitted video data and ensure a better quality of service. In addition, the impressive consumption of multimedia content on electronic devices has an ecological impact, so finding a compromise between algorithmic complexity and implementation efficiency is a new challenge. To this end, a collaborative team was created to develop a new video coding standard, Versatile Video Coding (VVC/H.266). Although VVC achieves a bit-rate reduction of more than 40% compared to HEVC, this does not mean there is no longer a need to improve coding efficiency further; moreover, VVC adds remarkable complexity compared to HEVC. This thesis addresses these problems by proposing three new encoding methods, whose contributions fall along two main axes.
The first axis proposes and implements new compression tools in the new standard, capable of generating additional coding gains. Two methods are proposed, and both rely on deriving prediction information at the decoder side. Increasing the encoder's choices can improve prediction accuracy and leave less residual energy, reducing the bit rate; however, more prediction modes require more signaling in the bitstream to inform the decoder of the choices made at the encoder, so the gains above are largely offset by the added signaling. If the prediction information is instead derived at the decoder, the decoder is no longer passive but becomes active (the concept of an intelligent decoder), and the information no longer needs to be signaled, yielding a signaling gain. Each method uses a different intelligent technique to predict the information at the decoder. The first technique builds a histogram of gradients to deduce several intra-prediction modes that can then be combined by prediction fusion to obtain the final intra prediction for a given block. This fusion property allows more accurate prediction of areas with complex textures, which conventional coding schemes would instead handle with finer partitioning and/or transmission of high-energy residues. The second technique gives VVC the ability to switch between different interpolation filters for inter prediction, with the optimal filter selected by the encoder deduced through convolutional neural networks.
The second axis, unlike the first, does not add a contribution to the core VVC algorithm; it aims instead at an optimized use of the existing algorithm. The ultimate goal is the best possible compromise between the compression efficiency delivered and the complexity imposed by VVC tools. An optimization system is therefore designed to determine an effective technique for adapting tool activation to the content, either with artificial neural networks or without any artificial-intelligence technique.
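The histogram-of-gradients idea in the first technique can be sketched as follows: Sobel gradients are accumulated over reconstructed samples, each casting an amplitude-weighted vote for a direction bin, and the dominant bins give the derived intra modes. The mode indexing below is a simplified stand-in for VVC's angular-mode table, not the thesis's exact mapping.

```python
import math

def dimd_like_mode(block, num_modes=65):
    """Decoder-side intra-mode derivation sketch: build a histogram of
    gradient directions (Sobel) over available samples and return the
    dominant direction bin as the derived angular mode."""
    h, w = len(block), len(block[0])
    hist = [0.0] * num_modes
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (block[y-1][x+1] + 2*block[y][x+1] + block[y+1][x+1]
                  - block[y-1][x-1] - 2*block[y][x-1] - block[y+1][x-1])
            gy = (block[y+1][x-1] + 2*block[y+1][x] + block[y+1][x+1]
                  - block[y-1][x-1] - 2*block[y-1][x] - block[y-1][x+1])
            if gx == 0 and gy == 0:
                continue                               # flat sample: no vote
            angle = math.atan2(gy, gx) % math.pi       # direction, mod 180 deg
            mode = int(angle / math.pi * (num_modes - 1))  # quantize to a bin
            hist[mode] += abs(gx) + abs(gy)            # amplitude-weighted vote
    return max(range(num_modes), key=lambda m: hist[m])
```

In the thesis the strongest modes are additionally fused (blended) rather than picked as a single winner, which is what helps on complex textures.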
2

Aklouf, Mourad. "Video for events : Compression and transport of the next generation video codec". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG029.

Full text
Abstract:
The acquisition and delivery of video content with minimal latency has become essential in several areas such as sports broadcasting, video conferencing, telepresence, remote vehicle operation, and remote system control. The live-streaming industry grew in 2020 and will expand further in the coming years with the emergence of new high-efficiency video codecs based on the Versatile Video Coding (VVC) standard and the fifth generation of mobile networks (5G). HTTP Adaptive Streaming (HAS) methods such as MPEG-DASH, which use algorithms to adapt the transmission rate of compressed video, have proven very effective at improving the quality of experience (QoE) in a video-on-demand (VOD) context. Nevertheless, in latency-critical applications, minimizing the delay between image acquisition and display at the receiver is essential. Most rate-adaptation algorithms are designed to optimize video transmission from a server in the core network to mobile clients. In applications requiring low-latency streaming, such as remote control of drones or broadcasting of sports events, the role of the server is played by a mobile terminal, which acquires, compresses, and transmits the video over a radio uplink to one or more clients. Client-driven rate-adaptation approaches are therefore unsuitable in this context because of the variability of the channel characteristics. In addition, HAS schemes, whose decisions are made with a periodicity on the order of a second, are not sufficiently reactive when the server is moving and can introduce significant delays. It is therefore essential to use a very fine adaptation granularity to reduce the end-to-end delay. The small transmission and reception buffers used to minimize latency make rate adaptation harder in our use case: when the bandwidth varies with a time constant smaller than the period with which the regulation is made, bad transmission-rate decisions can induce a significant latency overhead.
The aim of this thesis is to provide answers to the problem of low-latency delivery of video acquired, compressed, and transmitted by mobile terminals. We first present a frame-by-frame rate-adaptation algorithm for low-latency broadcasting. A Model Predictive Control (MPC) approach is proposed to determine the coding rate of each frame to be transmitted, using information about the buffer level of the transmitter and the characteristics of the transmission channel. Since the frames are coded live, a model relating the quantization parameter (QP) to the output rate of the video encoder is required. We therefore propose a new model linking the rate to the QP of the current frame and to the distortion of the previous frame. This model provides much better results for frame-by-frame coding-rate decisions than the reference models in the literature. In addition to the above techniques, we also propose tools to reduce the complexity of video encoders such as VVC. The current version of the VVC encoder (VTM10) has an execution time nine times higher than that of the HEVC encoder, so it is not suitable for real-time encoding and streaming applications on currently available platforms. In this context, we present a systematic branch-and-prune method to identify a set of coding tools that can be disabled while satisfying a constraint on coding efficiency. This work contributes to the realization of a real-time VVC encoder.
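The kind of QP-to-rate model such a controller builds on can be sketched with the classic exponential relation: the quantizer step doubles every +6 QP, so the rate roughly halves. The thesis's actual model also uses the previous frame's distortion, omitted here, and the frame-level budget search below is a crude stand-in for the MPC formulation.

```python
def predict_bits(qp, alpha):
    """Exponential R-QP model: rate ~ alpha * 2^(-QP/6). 'alpha' is a
    per-sequence scale, normally re-estimated after each coded frame.
    (The thesis's model adds a previous-frame distortion term, omitted.)"""
    return alpha * 2.0 ** (-qp / 6.0)

def choose_qp(bit_budget, alpha, qp_min=0, qp_max=51):
    """Smallest (highest-quality) QP whose predicted frame size fits the
    per-frame bit budget derived from buffer level and channel estimate."""
    for qp in range(qp_min, qp_max + 1):
        if predict_bits(qp, alpha) <= bit_budget:
            return qp
    return qp_max
```

A real low-latency controller would set the budget each frame from the transmitter buffer occupancy and a throughput estimate, then refit alpha from the actually produced bits.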

Books on the topic "VVC, Versatile Video Coding":

1

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. Versatile Video Coding (VVC). Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7.

Full text
2

Rao, K. R., and Humberto Ochoa Dominguez. Versatile Video Coding. River Publishers, 2022.

Search for full text
3

Rao, K. R., and Humberto Ochoa Dominguez. Versatile Video Coding. River Publishers, 2019.

Search for full text
4

Rao, K. R., and Humberto Ochoa Dominguez. Versatile Video Coding. River Publishers, 2019.

Search for full text
5

Rao, K. R., and Humberto Ochoa Dominguez. Versatile Video Coding. River Publishers, 2022.

Search for full text
6

Rao, K. R., and Humberto Ochoa Dominguez. Versatile Video Coding. River Publishers, 2022.

Search for full text
7

Saldanha, Mário, Gustavo Sanchez, Luciano Agostini, and César Marcon. Versatile Video Coding: Machine Learning and Heuristics. Springer International Publishing AG, 2022.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
8

Rao, K. R., Humberto Ochoa Domínguez e Shreyanka Subbarayappa. Digital Video Coding for Next Generation Multimedia: H. 264, HEVC, VVC, EVC Video Compression. River Publishers, 2021.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
9

Rao, K. R., Humberto Ochoa Domínguez e Shreyanka Subbarayappa. Digital Video Coding for Next Generation Multimedia: H. 264, HEVC, VVC, EVC Video Compression. River Publishers, 2021.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri

Book chapters on the topic "VVC, Versatile Video Coding":

1

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Versatile Video Coding (VVC)". In Versatile Video Coding (VVC), 7–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "VVC Intra-frame Prediction". In Versatile Video Coding (VVC), 23–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Learning-Based Fast Decision for Intra-frame Prediction Mode Selection for Luminance". In Versatile Video Coding (VVC), 89–97. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "State-of-the-Art Overview". In Versatile Video Coding (VVC), 35–42. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Light Gradient Boosting Machine Configurable Fast Block Partitioning for Luminance". In Versatile Video Coding (VVC), 71–88. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Performance Analysis of VVC Intra-frame Prediction". In Versatile Video Coding (VVC), 43–61. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Fast Intra-frame Prediction Transform for Luminance Using Decision Trees". In Versatile Video Coding (VVC), 99–105. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Heuristic-Based Fast Block Partitioning Scheme for Chrominance". In Versatile Video Coding (VVC), 107–18. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Heuristic-Based Fast Multi-type Tree Decision Scheme for Luminance". In Versatile Video Coding (VVC), 63–69. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "Conclusions and Open Research Possibilities". In Versatile Video Coding (VVC), 119–21. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "VVC, Versatile Video Coding":

1

Sullivan, Gary. "Versatile Video Coding (VVC) Arrives". In 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2020. http://dx.doi.org/10.1109/vcip49819.2020.9301847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fan, Kui, Ronggang Wang, Weisi Lin, Jong-Uk Hou, Lingyu Duan, Ge Li, and Wen Gao. "Separable KLT for Intra Coding in Versatile Video Coding (VVC)". In 2019 Data Compression Conference (DCC). IEEE, 2019. http://dx.doi.org/10.1109/dcc.2019.00083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fu, Tianliang, Xiaozhen Zheng, Shanshe Wang, and Siwei Ma. "Composite Long-Term Reference Coding for Versatile Video Coding (VVC)". In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Suhong, Xiang Zhang, Shanshe Wang, Siwei Ma, and Wen Gao. "Adaptive Wavelet Domain Filter for Versatile Video Coding (VVC)". In 2019 Data Compression Conference (DCC). IEEE, 2019. http://dx.doi.org/10.1109/dcc.2019.00015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Abdoli, Mohsen, Felix Henry, Patrice Brault, Frederic Dufaux, and Pierre Duhamel. "Transform Coefficient Coding for Screen Content in Versatile Video Coding (VVC)". In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683285.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bross, Benjamin, Mathias Wien, Jens-Rainer Ohm, Gary J. Sullivan, and Yan Ye. "Update on the emerging versatile video coding (VVC) standard and its applications". In MHV '22: Mile-High Video Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3510450.3517315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Aklouf, Mourad, Marc Leny, Frederic Dufaux, and Michel Kieffer. "Low Complexity Versatile Video Coding (VVC) for Low Bitrate Applications". In 2019 8th European Workshop on Visual Information Processing (EUVIP). IEEE, 2019. http://dx.doi.org/10.1109/euvip47703.2019.8946261.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mert, Ahmet Can, Ercan Kalali, and Ilker Hamzaoglu. "A Low Power Versatile Video Coding (VVC) Fractional Interpolation Hardware". In 2018 Conference on Design and Architectures for Signal and Image Processing (DASIP). IEEE, 2018. http://dx.doi.org/10.1109/dasip.2018.8597040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Karjee, Jyotirmoy, Aryan Dubey, and Anurag Chaudhary. "Coding Unit Partitions Using Depth-Wise Separable Convolution in Versatile Video Coding (VVC)". In 2023 IEEE Globecom Workshops (GC Wkshps). IEEE, 2023. http://dx.doi.org/10.1109/gcwkshps58843.2023.10464401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mou, Xuanqin, and Yang Li. "Transform coefficients distribution of the future versatile video coding (VVC) standard". In Optoelectronic Imaging and Multimedia Technology V, edited by Qionghai Dai and Tsutomu Shimura. SPIE, 2018. http://dx.doi.org/10.1117/12.2503138.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Organization reports on the topic "VVC, Versatile Video Coding":

1

Zhao, S., S. Wenger, Y. Sanchez, Y. K. Wang, and M. M. Hannuksela. RTP Payload Format for Versatile Video Coding (VVC). RFC Editor, December 2022. http://dx.doi.org/10.17487/rfc9328.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Go to bibliography