To view the other types of publications on this topic, follow this link: Versatile Video Coding.

Journal articles on the topic "Versatile Video Coding"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Versatile Video Coding".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online annotation, whenever the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Choe, Jaeryun, Haechul Choi, Heeji Han, and Daehyeok Gwon. "Novel video coding methods for versatile video coding." International Journal of Computational Vision and Robotics 11, no. 5 (2021): 526. http://dx.doi.org/10.1504/ijcvr.2021.10040489.

2

Han, Heeji, Daehyeok Gwon, Jaeryun Choe, and Haechul Choi. "Novel video coding methods for versatile video coding." International Journal of Computational Vision and Robotics 11, no. 5 (2021): 526. http://dx.doi.org/10.1504/ijcvr.2021.117582.

3

Takamura, Seishi. "Versatile Video Coding: a Next-generation Video Coding Standard." NTT Technical Review 17, no. 6 (June 2019): 49–52. http://dx.doi.org/10.53829/ntr201906gls.

4

Silva, Giovane Gomes, Ícaro Gonçalves Siqueira, Mateus Grellert, and Claudio Machado Diniz. "Approximate Hardware Architecture for Interpolation Filter of Versatile Video Coding." Journal of Integrated Circuits and Systems 16, no. 2 (August 15, 2021): 1–8. http://dx.doi.org/10.29292/jics.v16i2.327.

Annotation:
The new Versatile Video Coding (VVC) standard was recently developed to improve on the compression efficiency of previous video coding standards and to support new applications. This was achieved at the cost of an increase in the computational complexity of the encoder algorithms, which leads to the need to develop hardware accelerators and to apply approximate computing techniques to achieve the performance and power dissipation required for systems that encode video. This work proposes the implementation of an approximate hardware architecture for the interpolation filters defined in the VVC standard, targeting real-time processing of high-resolution videos. The architecture is able to process 2560×1600-pixel videos at 30 fps with a power dissipation of 23.9 mW when operating at a frequency of 522 MHz, with an average compression efficiency degradation of only 0.41% compared to the default VVC video encoder software configuration.
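The operation being approximated is a small FIR filter and is easy to illustrate. The Python sketch below applies the 8-tap half-sample luma filter (the DCT-based coefficients standardized in HEVC and retained in VVC) and a hypothetical approximate variant that truncates coefficient precision in the spirit of approximate computing; it illustrates the filtering only and is not the paper's hardware design.

```python
import numpy as np

# 8-tap DCT-based filter for the luma half-sample position
# (coefficients standardized in HEVC and retained in VVC; they sum to 64).
HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.int64)

def interp_half_pel(row: np.ndarray, taps: np.ndarray = HALF_PEL) -> np.ndarray:
    """Horizontal half-sample interpolation of one row of integer samples."""
    acc = sum(int(c) * row[i:row.size - 7 + i] for i, c in enumerate(taps))
    return np.clip((acc + 32) >> 6, 0, 255)  # normalize by 1/64 with rounding

def approx_taps(drop_bits: int = 2) -> np.ndarray:
    """Hypothetical approximation: truncate low-order coefficient bits, so
    the tap sum deviates slightly from 64. This is the kind of small,
    bounded error an approximate multiplier trades for power and area."""
    return np.sign(HALF_PEL) * ((np.abs(HALF_PEL) >> drop_bits) << drop_bits)

row = np.arange(16, dtype=np.int64) * 10
print(interp_half_pel(row))                 # exact filter
print(interp_half_pel(row, approx_taps()))  # approximate filter
```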
5

Sullivan, Gary J. "Video Coding Standards Progress Report: Joint Video Experts Team Launches the Versatile Video Coding Project." SMPTE Motion Imaging Journal 127, no. 8 (September 2018): 94–98. http://dx.doi.org/10.5594/jmi.2018.2846098.

6

Mishra, Amit Kumar. "Versatile Video Coding (VVC) Standard: Overview and Applications." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 2 (September 10, 2019): 975–81. http://dx.doi.org/10.17762/turcomat.v10i2.13578.

Annotation:
Information security includes picture and video compression and encryption, since compressed data is more secure than uncompressed imagery. Another point is that handling data of smaller sizes is simple. Therefore, efficient, secure, and simple data transport methods are created through effective data compression technology. Consequently, there are two sorts of compression algorithms: lossy compression and lossless compression. Any type of data format, including text, audio, video, and picture files, may leverage these technologies. In this procedure, the Least Significant Bit technique is used to encrypt each frame of the video file in order to increase security. The primary goals of this procedure are to safeguard the data by encrypting the frames and compressing the video file. Using PSNR to enhance process throughput would also enhance data transmission security while reducing data loss.
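As background for the frame-embedding step described above, here is a minimal, generic least-significant-bit (LSB) sketch in Python: payload bits overwrite the lowest bit of each pixel in a frame. This shows the textbook technique only; the paper's full pipeline (compression, encryption, and PSNR-guided throughput tuning) is not reproduced.

```python
import numpy as np

def embed_lsb(frame: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bit of each 8-bit pixel."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = frame.flatten()                    # copy; the input stays intact
    if bits.size > flat.size:
        raise ValueError("payload too large for this frame")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_lsb(frame: np.ndarray, nbytes: int) -> bytes:
    """Recover nbytes previously hidden by embed_lsb."""
    return np.packbits(frame.flatten()[:nbytes * 8] & 1).tobytes()

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
assert extract_lsb(embed_lsb(frame, b"secret"), 6) == b"secret"
```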
7

Palau, Roberta De Carvalho Nobre, Bianca Santos da Cunha Silveira, Robson André Domanski, Marta Breunig Loose, Arthur Alves Cerveira, Felipe Martin Sampaio, Daniel Palomino, Marcelo Schiavon Porto, Guilherme Ribeiro Corrêa, and Luciano Volcan Agostini. "Modern Video Coding: Methods, Challenges and Systems." Journal of Integrated Circuits and Systems 16, no. 2 (August 16, 2021): 1–12. http://dx.doi.org/10.29292/jics.v16i2.503.

Annotation:
With the increasing demand for digital video applications in our daily lives, video coding and decoding become critical tasks that must be supported by several types of devices and systems. This paper presents a discussion of the main challenges in designing dedicated hardware architectures based on modern hybrid video coding formats, such as High Efficiency Video Coding (HEVC), AOMedia Video 1 (AV1), and Versatile Video Coding (VVC). The paper discusses each step of the hybrid video coding process, highlighting the main challenges for each codec and discussing the main hardware solutions published in the literature. The discussions presented in the paper show that there are still many challenges to be overcome and open research opportunities, especially for the AV1 and VVC codecs. Most of these challenges are related to the high throughput required for processing high- and ultra-high-resolution videos in real time and to the energy constraints of multimedia-capable devices.
8

Adhuran, Jayasingam, Gosala Kulupana, Chathura Galkandage, and Anil Fernando. "Multiple Quantization Parameter Optimization in Versatile Video Coding for 360° Videos." IEEE Transactions on Consumer Electronics 66, no. 3 (August 2020): 213–22. http://dx.doi.org/10.1109/tce.2020.3001231.

9

Li, Wei, Xiantao Jiang, Jiayuan Jin, Tian Song, and Fei Richard Yu. "Saliency-Enabled Coding Unit Partitioning and Quantization Control for Versatile Video Coding." Information 13, no. 8 (August 19, 2022): 394. http://dx.doi.org/10.3390/info13080394.

Annotation:
The latest video coding standard, versatile video coding (VVC), has greatly improved coding efficiency over its predecessor standard, high efficiency video coding (HEVC), but at the expense of sharply increased complexity. In the context of perceptual video coding (PVC), visual saliency models that utilize the characteristics of the human visual system to improve coding efficiency have become a reliable option due to advances in computer performance and visual algorithms. In this paper, a novel PVC-compliant VVC optimization scheme is proposed, which consists of a fast coding unit (CU) partition algorithm and a quantization control algorithm. Firstly, based on the visual saliency model, we propose a fast CU division scheme, including redetermination of the CU division depth by calculating the Scharr operator and variance, as well as an executive decision for intra sub-partitions (ISP), to reduce the coding complexity. Secondly, a quantization control algorithm is proposed that adjusts the quantization parameter based on a multi-level classification of saliency values at the CU level to reduce the bitrate. In comparison with the reference model, experimental results indicate that the proposed method can reduce computational complexity by about 47.19% and achieve a bitrate saving of 3.68% on average. Meanwhile, the proposed algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.
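The two texture measures underlying the fast CU decision, the Scharr gradient and the sample variance, are simple to compute. The following Python sketch derives both for a CU and applies a hypothetical early-termination rule; the thresholds are placeholders rather than the paper's tuned values.

```python
import numpy as np
from scipy.signal import convolve2d

SCHARR_X = np.array([[3, 0, -3], [10, 0, -10], [3, 0, -3]], dtype=np.float64)
SCHARR_Y = SCHARR_X.T

def cu_complexity(cu: np.ndarray) -> tuple:
    """Texture features of a CU: mean Scharr gradient magnitude and variance."""
    f = cu.astype(np.float64)
    gx = convolve2d(f, SCHARR_X, mode="valid")
    gy = convolve2d(f, SCHARR_Y, mode="valid")
    return np.hypot(gx, gy).mean(), f.var()

def stop_splitting(cu: np.ndarray, grad_thr: float = 40.0,
                   var_thr: float = 25.0) -> bool:
    """Hypothetical early termination: treat a low-gradient, low-variance CU
    as smooth and skip deeper partitioning (thresholds are placeholders)."""
    grad, var = cu_complexity(cu)
    return grad < grad_thr and var < var_thr
```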
10

PARK, Dohyeon, Jinho LEE, Jung-Won KANG, and Jae-Gon KIM. "Simplified Triangular Partitioning Mode in Versatile Video Coding." IEICE Transactions on Information and Systems E103.D, no. 2 (February 1, 2020): 472–75. http://dx.doi.org/10.1587/transinf.2019edl8084.

11

Farajallah, Mousa, Guillaume Gautier, Wassim Hamidouche, Olivier Deforges, and Safwan El Assad. "Selective Encryption of the Versatile Video Coding Standard." IEEE Access 10 (2022): 21821–35. http://dx.doi.org/10.1109/access.2022.3149599.

12

Amrutha Valli Pamidi, Lakshmi, and Purnachand Nalluri. "Optimized in-loop filtering in versatile video coding using improved fast guided filter." Indonesian Journal of Electrical Engineering and Computer Science 33, no. 2 (February 1, 2024): 911. http://dx.doi.org/10.11591/ijeecs.v33.i2.pp911-919.

Annotation:
Devices with varying display capabilities fed from a common source may face degradation in video quality because of limitations in transmission bandwidth and storage. The solution to overcome this challenge is to enrich the video quality. For this purpose, this paper introduces an improved fast guided filter (IFGF) for the contemporary video coding standard H.266/VVC (versatile video coding), a continuation of H.265/HEVC (high efficiency video coding). VVC includes several types of coding techniques to enhance video coding efficiency over existing video coding standards. Despite that, blocking artifacts are still present in the images. Hence, the proposed method focuses on denoising the image and increasing video quality, which is measured in terms of peak signal-to-noise ratio (PSNR). The objective is achieved by using an IFGF for in-loop filtering in VVC to denoise the reconstructed images. VTM (VVC test model)-17.2 is used to simulate the various video sequences with the proposed filter. This method achieves a 0.67% Bjontegaard delta (BD)-rate reduction in the low-delay configuration, accompanied by an encoder run time increase of 4%.
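For orientation, the guided filter at the core of the proposal is compact enough to sketch. Below is the basic (non-fast) guided filter of He et al. written with box filters; the paper's improved fast variant additionally subsamples the inputs before the box filtering and upsamples the linear coefficients, which this sketch omits.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  radius: int = 4, eps: float = 1e-3) -> np.ndarray:
    """Basic guided filter (He et al.): edge-preserving smoothing of `src`
    steered by `guide`; for in-loop filtering both can be the reconstructed
    frame. The paper's fast variant adds subsampling, omitted here."""
    g, s = guide.astype(np.float64), src.astype(np.float64)
    mean = lambda x: uniform_filter(x, 2 * radius + 1)   # box filter
    mg, ms = mean(g), mean(s)
    cov_gs = mean(g * s) - mg * ms
    var_g = mean(g * g) - mg * mg
    a = cov_gs / (var_g + eps)                           # per-pixel linear model
    b = ms - a * mg
    return mean(a) * g + mean(b)                         # filtered output
```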
13

JABALLAH, Sami, and Mohamed-Chaker LARABI. "Complexity Optimization for the Upcoming Versatile Video Coding Standard." Electronic Imaging 2020, no. 9 (January 26, 2020): 286–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.9.iqsp-286.

Annotation:
Versatile Video Coding (VVC) is foreseen as the next-generation video coding standard. The main objective is to achieve a coding efficiency improvement of about 50% bit-rate reduction compared to the previous standard, HEVC, at the same visual quality by 2020. In this paper, a fast VVC encoder is proposed based on an early split termination for fast intra CU selection. Taking into account the edge complexity of the block and the best intra prediction mode obtained at the current block size, an early split termination is proposed. Using spatial neighboring coding unit depths (quad-tree, binary-tree, and ternary-tree depths), the depth probability measure is computed and used to define the stopping criterion. The proposed algorithm is evaluated on nine commonly used test video sequences. Compared to the current VTM3.0 in the all-intra high-efficiency and LowDelayP configuration cases, the proposed algorithm outperforms the anchor scheme in terms of encoding time with a slight degradation in coding efficiency.
14

Zouidi, Naima, Amina Kessentini, Wassim Hamidouche, Nouri Masmoudi, and Daniel Menard. "Multitask Learning Based Intra-Mode Decision Framework for Versatile Video Coding." Electronics 11, no. 23 (December 2, 2022): 4001. http://dx.doi.org/10.3390/electronics11234001.

Annotation:
In mid-2020, the new international video coding standard, namely versatile video coding (VVC), was officially released by the Joint Video Experts Team (JVET). As its name indicates, the VVC enables a higher level of versatility with better compression performance compared to its predecessor, high-efficiency video coding (HEVC). VVC introduces several new coding tools like multiple reference lines (MRL) and matrix-weighted intra-prediction (MIP), along with several improvements on the block-based hybrid video coding scheme, such as the quadtree with nested multi-type tree (QTMT) and finer-granularity intra-prediction modes (IPMs). Because finding the best encoding decisions usually requires optimizing the rate-distortion (RD) cost, introducing new coding tools or enhancing existing ones requires additional computations. In fact, the VVC is 31 times more complex than the HEVC. Therefore, this paper aims to reduce the computational complexity of the VVC. It establishes a large database for intra-prediction and proposes a multitask learning (MTL)-based intra-mode decision framework. Experimental results show that our proposal enables up to 30% complexity reduction while slightly increasing the Bjontegaard bit rate (BD-BR).
15

Jiang, Xiantao, Mo Xiang, Jiayuan Jin, and Tian Song. "Extreme Learning Machine-Enabled Coding Unit Partitioning Algorithm for Versatile Video Coding." Information 14, no. 9 (September 7, 2023): 494. http://dx.doi.org/10.3390/info14090494.

Annotation:
The versatile video coding (VVC) standard offers improved coding efficiency compared to the high efficiency video coding (HEVC) standard in multimedia signal coding. However, this increased efficiency comes at the cost of increased coding complexity. This work proposes an efficient coding unit partitioning algorithm based on an extreme learning machine (ELM), which can reduce the coding complexity while ensuring coding efficiency. Firstly, the coding unit size decision is modeled as a classification problem. Secondly, an ELM classifier is trained to predict the coding unit size. In the experiment, the proposed approach is verified based on the VVC reference model. The results show that the proposed method can reduce coding complexity significantly, and good image quality can be obtained.
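An extreme learning machine is a single-hidden-layer network whose input weights stay random and fixed, so training collapses to one least-squares solve. The sketch below shows that structure on a hypothetical CU-feature classification task (split vs. no split); the features and labels are placeholders, not the paper's training data.

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: a random, fixed hidden layer and
    closed-form least-squares output weights (no backpropagation)."""

    def __init__(self, n_hidden: int = 64, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X: np.ndarray, y: np.ndarray) -> "ELMClassifier":
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # random feature map
        T = np.eye(int(y.max()) + 1)[y]         # one-hot targets
        self.beta = np.linalg.pinv(H) @ T       # single least-squares solve
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

# Placeholder CU features and split/no-split labels, for illustration only.
rng = np.random.default_rng(1)
X = rng.random((200, 8))
y = (X.sum(axis=1) > 4).astype(int)
print(ELMClassifier().fit(X, y).predict(X[:5]))
```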
16

Cho, Seunghyun, Dong-Wook Kim, and Seung-Won Jung. "Quality enhancement of VVC intra-frame coding for multimedia services over the Internet." International Journal of Distributed Sensor Networks 16, no. 5 (May 2020): 155014772091764. http://dx.doi.org/10.1177/1550147720917647.

Annotation:
In this article, versatile video coding, the next-generation video coding standard, is combined with a deep convolutional neural network to achieve state-of-the-art image compression efficiency. The proposed hierarchical grouped residual dense network exhaustively exploits hierarchical features at each architectural level to maximize the image quality enhancement capability. The basic building block employed for the hierarchical grouped residual dense network is the residual dense block, which exploits hierarchical features from internal convolutional layers. Residual dense blocks are then combined into a grouped residual dense block exploiting hierarchical features from residual dense blocks. Finally, grouped residual dense blocks are connected to comprise a hierarchical grouped residual dense block so that hierarchical features from grouped residual dense blocks can also be exploited for quality enhancement of versatile video coding intra-coded images. Various non-architectural and architectural aspects affecting the training efficiency and performance of the hierarchical grouped residual dense network are explored. The proposed hierarchical grouped residual dense network obtained Bjøntegaard delta-rate gains of 10.72% and 14.3%, respectively, against versatile video coding in experiments conducted on two public image datasets with different characteristics to verify the image compression efficiency.
17

Choi, Kiho. "A Study on Fast and Low-Complexity Algorithms for Versatile Video Coding." Sensors 22, no. 22 (November 20, 2022): 8990. http://dx.doi.org/10.3390/s22228990.

Annotation:
Versatile Video Coding (VVC)/H.266, completed in 2020, provides half the bitrate of the previous video coding standard (i.e., High-Efficiency Video Coding (HEVC)/H.265) while maintaining the same visual quality. The primary goal of VVC/H.266 is to achieve a compression capability that is noticeably better than that of HEVC/H.265, as well as the functionality to support a variety of applications with a single profile. Although VVC/H.266 has improved coding performance by incorporating new advanced technologies with flexible partitioning, the increased encoding complexity has become a challenging issue in practical market usage. To address the complexity issue of VVC/H.266, significant efforts have been expended to develop practical methods for reducing the encoding and decoding processes of VVC/H.266. In this study, we provide an overview of the VVC/H.266 standard and, in comparison with previous video coding standards, examine a key challenge of VVC/H.266 coding. Furthermore, we survey and present recent technical advances in fast and low-complexity VVC/H.266, focusing on key technical areas.
18

Bross, Benjamin, Jianle Chen, Jens-Rainer Ohm, Gary J. Sullivan, and Ye-Kui Wang. "Developments in International Video Coding Standardization After AVC, With an Overview of Versatile Video Coding (VVC)." Proceedings of the IEEE 109, no. 9 (September 2021): 1463–93. http://dx.doi.org/10.1109/jproc.2020.3043399.

19

Sjoberg, Rickard, Jacob Strom, Lukasz Litwic, and Kenneth Andersson. "Versatile Video Coding explained – The Future of Video in a 5G World." Ericsson Technology Review 2020, no. 10 (October 2020): 2–12. http://dx.doi.org/10.23919/etr.2020.9905504.

20

Saha, Anup, Miguel Chavarrías, Fernando Pescador, Ángel M. Groba, Kheyter Chassaigne, and Pedro L. Cebrián. "Complexity Analysis of a Versatile Video Coding Decoder over Embedded Systems and General Purpose Processors." Sensors 21, no. 10 (May 11, 2021): 3320. http://dx.doi.org/10.3390/s21103320.

Annotation:
The increase in high-quality video consumption requires increasingly efficient video coding algorithms. Versatile video coding (VVC) is the current state-of-the-art video coding standard. Compared to the previous video standard, high efficiency video coding (HEVC), VVC offers approximately 50% higher video compression while maintaining the same quality, at the cost of significantly increased computational complexity. In this study, coarse-grain profiling of a VVC decoder over two different platforms was performed: one platform was based on a high-performance general purpose processor (HGPP), and the other platform was based on an embedded general purpose processor (EGPP). For the most computationally intensive modules, fine-grain profiling was also performed. The results allowed the identification of the most computationally intensive modules, which is necessary to carry out subsequent acceleration. Additionally, the correlation between the performance of each module on both platforms was determined to identify the influence of the hardware architecture.
21

Wang, Meng, Shiqi Wang, Junru Li, Li Zhang, Yue Wang, Siwei Ma, and Sam Kwong. "Low Complexity Trellis-Coded Quantization in Versatile Video Coding." IEEE Transactions on Image Processing 30 (2021): 2378–93. http://dx.doi.org/10.1109/tip.2021.3051460.

22

Pan, Zhaoqing, He Qin, Xiaokai Yi, Yuhui Zheng, and Asifullah Khan. "Low complexity versatile video coding for traffic surveillance system." International Journal of Sensor Networks 30, no. 2 (2019): 116. http://dx.doi.org/10.1504/ijsnet.2019.099473.

23

Qin, He, Asifullah Khan, Yuhui Zheng, Xiaokai Yi, and Zhaoqing Pan. "Low complexity versatile video coding for traffic surveillance system." International Journal of Sensor Networks 30, no. 2 (2019): 116. http://dx.doi.org/10.1504/ijsnet.2019.10020953.

24

Kim, Seonjae, Dongsan Jun, Byung-Gyu Kim, Seungkwon Beack, Misuk Lee, and Taejin Lee. "Two-Dimensional Audio Compression Method Using Video Coding Schemes." Electronics 10, no. 9 (May 6, 2021): 1094. http://dx.doi.org/10.3390/electronics10091094.

Annotation:
As video compression is one of the core technologies that enables seamless media streaming within the available network bandwidth, it is crucial to employ media codecs that support powerful coding performance and higher visual quality. Versatile Video Coding (VVC) is the latest video coding standard developed by the Joint Video Experts Team (JVET); it can compress original image or video data by a factor of several hundred, while the latest audio coding standard, Unified Speech and Audio Coding (USAC), achieves a compression rate of about 20 times for audio or speech data. In this paper, we propose a pre-processing method to generate a two-dimensional (2D) audio signal as the input of a VVC encoder, and investigate the applicability of the video coding scheme to 2D audio compression. To evaluate the coding performance, we measure both the signal-to-noise ratio (SNR) and bits per sample (bps). The experimental results show the potential of 2D audio encoding using video coding schemes.
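The pre-processing idea, mapping a 1-D audio signal onto 2-D pictures that a video encoder can ingest, can be illustrated as follows. Since the abstract does not spell out the exact arrangement, the mapping below (16-bit PCM scaled to 8-bit samples and reshaped row-wise into fixed-size frames) is one hypothetical choice; the SNR helper mirrors the evaluation metric mentioned.

```python
import numpy as np

def audio_to_frames(pcm: np.ndarray, width: int = 64, height: int = 64) -> np.ndarray:
    """Map 16-bit PCM samples to a stack of 8-bit 2-D 'pictures' that a video
    encoder can take as input (one illustrative mapping among many)."""
    u8 = ((pcm.astype(np.int32) + 32768) >> 8).astype(np.uint8)  # 16 -> 8 bit
    pad = (-u8.size) % (width * height)
    return np.pad(u8, (0, pad)).reshape(-1, height, width)       # zero-pad tail

def snr_db(ref: np.ndarray, test: np.ndarray) -> float:
    """Signal-to-noise ratio of the reconstruction, in dB."""
    ref = ref.astype(np.float64)
    noise = ref - test.astype(np.float64)
    return 10.0 * np.log10((ref ** 2).sum() / (noise ** 2).sum())

pcm = (np.sin(np.linspace(0, 200 * np.pi, 48000)) * 20000).astype(np.int16)
print(audio_to_frames(pcm).shape, snr_db(pcm, pcm + 1))  # (12, 64, 64), high SNR
```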
25

Jung, Seongwon, and Dongsan Jun. "Context-Based Inter Mode Decision Method for Fast Affine Prediction in Versatile Video Coding." Electronics 10, no. 11 (May 24, 2021): 1243. http://dx.doi.org/10.3390/electronics10111243.

Annotation:
Versatile Video Coding (VVC) is the most recent video coding standard developed by the Joint Video Experts Team (JVET); it can achieve a bit-rate reduction of 50% with perceptually similar quality compared to the previous method, namely High Efficiency Video Coding (HEVC). Although VVC delivers significant coding performance, it entails a tremendous increase in the computational complexity of the encoder. In particular, VVC has newly adopted an affine motion estimation (AME) method to overcome the limitations of the translational motion model at the expense of higher encoding complexity. In this paper, we propose a context-based inter mode decision method for fast affine prediction that determines whether AME is performed in the process of rate-distortion (RD) optimization for the optimal CU-mode decision. Experimental results show that the proposed method significantly reduces the encoding complexity of AME by up to 33% with unnoticeable coding loss compared to the VVC Test Model (VTM).
26

高啟洲 (Chi-Chou Kao), and 賴美妤. "基於深度學習之改良式多功能影像編碼快速畫面內模式決策研究." 理工研究國際期刊 12, no. 1 (April 2022): 037–48. http://dx.doi.org/10.53106/222344892022041201004.

Annotation:
H.266/Versatile Video Coding (VVC) targets ultra-high-definition video beyond 4K and can be applied to high dynamic range (HDR) imaging and wide color gamut (WCG) content, but its coding unit (CU) structure based on a quadtree plus binary tree (QTBT) increases the computational complexity of H.266/VVC encoding. This paper proposes an improved, deep-learning-based fast intra mode decision method that reduces the intra coding complexity of H.266/VVC and thereby speeds up encoding. It first proposes a fast coding unit decision method based on spatial CU features, and it additionally combines intra-frame coding with convolutional neural networks (CNNs) for the intra mode prediction decision in H.266/VVC. The proposed methods achieve better encoding performance than the original encoding method (JEM7.0).
27

Teng, Guowei, Danqi Xiong, Ran Ma, and Ping An. "Decision tree accelerated CTU partition algorithm for intra prediction in versatile video coding." PLOS ONE 16, no. 11 (November 8, 2021): e0258890. http://dx.doi.org/10.1371/journal.pone.0258890.

Annotation:
Versatile video coding (VVC) achieves enormous improvement over the advanced high efficiency video coding (HEVC) standard due to the adoption of the quadtree with nested multi-type tree (QTMT) partition structure and other coding tools. However, the computational complexity increases dramatically as well. To tackle this problem, we propose a decision tree accelerated coding tree unit (CTU) partition algorithm for intra prediction in VVC. Firstly, specially designed image features are extracted to characterize the coding unit (CU) complexity. Then, the trained decision tree is employed to predict the partition results. Finally, based on our newly designed intra prediction framework, the partition process is terminated early or redundant partition modes are screened out. The experimental results show that the proposed algorithm achieves around 52% encoding time reduction on average for various test video sequences, with only a 1.75% Bjontegaard delta bit rate increase compared with the reference test model VTM9.0 of VVC.
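At its core this accelerator is a supervised classifier over hand-crafted CU features. A minimal scikit-learn sketch, with placeholder features and labels standing in for data collected from reference-encoder runs, could look like this:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Placeholder training set: per-CU features (e.g., variance, gradient
# statistics) and the encoder's chosen outcome (0 = no split, 1 = split),
# which in practice would be collected offline from reference-encoder runs.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 6))
y_train = (X_train[:, 0] + X_train[:, 3] > 1.0).astype(int)

tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=20).fit(X_train, y_train)

def early_terminate(cu_features: np.ndarray, p_stop: float = 0.9) -> bool:
    """Skip the remaining partition checks when the tree is confident
    that the CU will not be split further."""
    p_no_split = tree.predict_proba(cu_features.reshape(1, -1))[0, 0]
    return p_no_split >= p_stop

print(early_terminate(rng.random(6)))
```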
28

Cao, Jian, Fan Liang, and Jun Wang. "Intra Block Copy Mirror Mode for Screen Content Coding in Versatile Video Coding." IEEE Access 9 (2021): 31390–400. http://dx.doi.org/10.1109/access.2021.3060448.

29

Lee, Minhun, HyeonJu Song, Jeeyoon Park, Byeungwoo Jeon, Jungwon Kang, Jae-Gon Kim, Yung-Lyul Lee, Je-Won Kang, and Donggyu Sim. "Overview of Versatile Video Coding (H.266/VVC) and Its Coding Performance Analysis." IEIE Transactions on Smart Processing & Computing 12, no. 2 (April 30, 2023): 122–54. http://dx.doi.org/10.5573/ieiespc.2023.12.2.122.

30

Zhao, Jinchao, Yihan Wang, and Qiuwen Zhang. "Fast CU Size Decision Method Based on Just Noticeable Distortion and Deep Learning." Scientific Programming 2021 (December 8, 2021): 1–10. http://dx.doi.org/10.1155/2021/3813116.

Annotation:
With the development of broadband networks and high-definition displays, people have higher expectations for the quality of video images, which also brings new requirements and challenges to video coding technology. Compared with H.265/High Efficiency Video Coding (HEVC), the latest video coding standard, Versatile Video Coding (VVC), can save 50% bit rate while maintaining the same subjective quality, but this comes at the cost of extremely high encoding complexity. To decrease the complexity, a fast coding unit (CU) size decision method based on Just Noticeable Distortion (JND) and deep learning is proposed in this paper. Specifically, a hybrid JND threshold model is first designed to distinguish smooth, normal, or complex regions. Then, if a CU belongs to a complex area, Ultra-Spherical SVM (US-SVM) classifiers are trained to forecast the best splitting mode. Experimental results illustrate that the proposed method can save about 52.35% of coding runtime, realizing a trade-off between the reduction of the computational burden and coding efficiency compared with the latest methods.
31

Li, Minghui, Zhaohong Li, and Zhenzhen Zhang. "A VVC Video Steganography Based on Coding Units in Chroma Components with a Deep Learning Network." Symmetry 15, no. 1 (December 31, 2022): 116. http://dx.doi.org/10.3390/sym15010116.

Annotation:
Versatile Video Coding (VVC) is the latest video coding standard, but currently, most steganographic algorithms are based on High-Efficiency Video Coding (HEVC). The concept of symmetry is often adopted in deep neural networks. With the rapid rise of new multimedia, video steganography shows great research potential. This paper proposes a VVC steganographic algorithm based on Coding Units (CUs). Considering the novel techniques in VVC, the proposed steganography only uses chroma CUs to embed secret information. Based on modifying the partition modes of chroma CUs, we propose four different embedding levels to satisfy different needs of visual quality, capacity, and video bitrate. In order to reduce the bitrate of stego-videos and mitigate the distortion caused by modifying them, we propose a novel convolutional neural network (CNN) as an additional in-loop filter in the VVC codec to achieve better restoration. Furthermore, the proposed steganography algorithm based on chroma components has an advantage in resisting most video steganalysis algorithms, since few VVC steganalysis algorithms have been proposed thus far and most HEVC steganalysis algorithms are based on the luminance component. Experimental results show that the proposed VVC steganography algorithm achieves excellent performance on visual quality, bitrate cost, and capacity.
32

Yoon, Yong-Uk, and Jae-Gon Kim. "Activity-Based Block Partitioning Decision Method for Versatile Video Coding." Electronics 11, no. 7 (March 28, 2022): 1061. http://dx.doi.org/10.3390/electronics11071061.

Annotation:
Versatile Video Coding (VVC), the latest international video coding standard, has more than twice the compression performance of High-Efficiency Video Coding (HEVC) through adopting various coding techniques. The multi-type tree (MTT) block structure offers more advanced flexible block partitioning by allowing the binary tree (BT) and ternary tree (TT) structures, as well as the quadtree (QT) structure. Because VVC selects the optimal block partition by performing encoding on all possible CU partitions, the encoding complexity increases enormously. In this paper, we observe the relationship between block partitions and activity that indicates block texture complexity. Based on experimental observations, we propose an activity-based fast block partitioning decision method to reduce the encoding complexity. The proposed method uses only information of the current block without using the information of neighboring or upper blocks, and also minimizes the dependency on QP. For these reasons, the proposed algorithm is simple and parallelizable. In addition, by utilizing the gradient calculation used in VVC’s ALF, a VVC-friendly fast algorithm was designed. The proposed method consists of two-step decision-making processes. The first step terminates the block partitioning early based on observed posterior probability through the relationship between the block size and activity per sample. Next, the sub-activities of the current block are used to determine the type and direction of partitioning. The experimental results show that in the all-intra configuration, the proposed method can reduce the encoding time of the VVC test model (VTM) by up to 45.15% with 2.80% BD-rate loss.
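The activity feature reuses the 1-D Laplacian gradients that VVC's ALF classification already computes. A rough Python rendering of activity per sample and of a direction hint derived from the directional sums is shown below; the threshold is an illustrative placeholder, not the posterior-probability rule the paper calibrates.

```python
import numpy as np

def laplacian_sums(block: np.ndarray) -> tuple:
    """1-D Laplacian sums in the vertical and horizontal directions,
    the same kind of gradients VVC's ALF classification computes."""
    b = block.astype(np.int64)
    vert = int(np.abs(2 * b[1:-1, :] - b[:-2, :] - b[2:, :]).sum())
    horz = int(np.abs(2 * b[:, 1:-1] - b[:, :-2] - b[:, 2:]).sum())
    return vert, horz

def partition_hint(block: np.ndarray, smooth_thr: float = 2.0) -> str:
    """Illustrative two-step rule: terminate on low activity per sample,
    otherwise split across the direction with more variation. The
    threshold is a placeholder, not the paper's calibrated value."""
    vert, horz = laplacian_sums(block)
    if (vert + horz) / block.size < smooth_thr:
        return "terminate"                  # smooth block: stop partitioning
    return "split_horizontal" if vert > horz else "split_vertical"

print(partition_hint(np.tile(np.arange(32), (32, 1))))  # horizontal ramp
```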
33

Li, Yue, Fei Luo, and Yapei Zhu. "Temporal Prediction Model-Based Fast Inter CU Partition for Versatile Video Coding." Sensors 22, no. 20 (October 12, 2022): 7741. http://dx.doi.org/10.3390/s22207741.

Annotation:
Versatile video coding (VVC) adopts an advanced quad-tree plus multi-type tree (QTMT) coding structure to obtain higher compression efficiency, but it comes at the cost of a considerable increase in coding complexity. To effectively reduce the coding complexity of the QTMT-based coding unit (CU) partition, we propose a fast inter CU partition method based on a temporal prediction model, which includes early termination of the QTMT partition and early skipping of the multi-type tree (MT) partition. Firstly, according to the position of the current CU, we extract the optimal CU partition information at the corresponding position in the previously coded frames. We then establish a temporal prediction model based on temporal CU partition information to predict the current CU partition. Finally, to reduce the cumulative errors of the temporal prediction model, we further extract the motion vector difference (MVD) of the CU to determine whether the QTMT partition can be terminated early. The experimental results show that the proposed method can reduce the inter coding complexity of VVC by 23.19% on average, while the Bjontegaard delta bit rate (BDBR) is only increased by 0.97% on average under the Random Access (RA) configuration.
34

Lei, Meng, Falei Luo, Xinfeng Zhang, Shanshe Wang, and Siwei Ma. "Joint Local and Nonlocal Progressive Prediction for Versatile Video Coding." IEEE Transactions on Image Processing 31 (2022): 2824–38. http://dx.doi.org/10.1109/tip.2022.3161831.

35

Meng, Xuewei, Chuanmin Jia, Xinfeng Zhang, Shanshe Wang, and Siwei Ma. "Spatio-Temporal Correlation Guided Geometric Partitioning for Versatile Video Coding." IEEE Transactions on Image Processing 31 (2022): 30–42. http://dx.doi.org/10.1109/tip.2021.3126420.

36

Lim, Sung-Chang, Dae-Yeon Kim, and Jungwon Kang. "Simplification on Cross-Component Linear Model in Versatile Video Coding." Electronics 9, no. 11 (November 9, 2020): 1885. http://dx.doi.org/10.3390/electronics9111885.

Annotation:
To improve coding efficiency by exploiting the local inter-component redundancy between the luma and chroma components, the cross-component linear model (CCLM) is included in the versatile video coding (VVC) standard. In the CCLM mode, linear model parameters are derived from the neighboring luma and chroma samples of the current block. Furthermore, chroma samples are predicted by the reconstructed samples in the collocated luma block with the derived parameters. However, as the CCLM design in the VVC test model (VTM)-6.0 has many conditional branches in its processes to use only available neighboring samples, the CCLM implementation in parallel processing is limited. To address this implementation issue, this paper proposes including the neighboring sample generation as the first process of the CCLM, so as to simplify the succeeding CCLM processes. As unavailable neighboring samples are replaced with the adjacent available samples by the proposed CCLM, the neighboring sample availability checks can be removed. This results in simplified downsampling filter shapes for the luma sample. Therefore, the proposed CCLM can be efficiently implemented by employing parallel processing in both hardware and software implementations, owing to the removal of the neighboring sample availability checks and the simplification of the luma downsampling filters. The experimental results demonstrate that the proposed CCLM reduces the decoding runtime complexity of the CCLM mode, with negligible impact on the Bjøntegaard delta (BD)-rate.
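For context, CCLM is a two-parameter linear predictor, and the proposed simplification moves all neighbor-availability handling into a padding step that runs first. The sketch below fits chroma = alpha * luma + beta from neighboring sample pairs and predicts the chroma block; VVC derives the parameters from extreme (min/max) neighbor pairs in integer arithmetic, while floating point is used here for readability.

```python
import numpy as np

def cclm_predict(rec_luma: np.ndarray, nb_luma: np.ndarray,
                 nb_chroma: np.ndarray) -> np.ndarray:
    """Fit chroma = alpha * luma + beta from neighboring sample pairs and
    predict the chroma block from the reconstructed (downsampled) luma."""
    lo, hi = int(nb_luma.argmin()), int(nb_luma.argmax())
    denom = float(nb_luma[hi]) - float(nb_luma[lo])
    alpha = 0.0 if denom == 0.0 else \
        (float(nb_chroma[hi]) - float(nb_chroma[lo])) / denom
    beta = float(nb_chroma[lo]) - alpha * float(nb_luma[lo])
    return np.clip(alpha * rec_luma + beta, 0, 1023)   # 10-bit samples

# Per the paper's simplification: replace unavailable neighbors with
# adjacent available ones *before* this derivation, so the model fitting
# itself needs no availability branches and parallelizes cleanly.
```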
37

Lee, J., and J. Jeong. "Deblocking performance analysis of weak filter on versatile video coding." Electronics Letters 56, no. 6 (March 2020): 289–90. http://dx.doi.org/10.1049/el.2019.3760.

38

Park, Sang-Hyo, and Je-Won Kang. "Fast Affine Motion Estimation for Versatile Video Coding (VVC) Encoding." IEEE Access 7 (2019): 158075–84. http://dx.doi.org/10.1109/access.2019.2950388.

39

Chen, Yamei, Li Yu, Hongkui Wang, Tiansong Li, and Shengwei Wang. "A novel fast intra mode decision for versatile video coding." Journal of Visual Communication and Image Representation 71 (August 2020): 102849. http://dx.doi.org/10.1016/j.jvcir.2020.102849.

40

HoangVan, Xiem, Sang NguyenQuang, and Fernando Pereira. "Versatile Video Coding Based Quality Scalability With Joint Layer Reference." IEEE Signal Processing Letters 27 (2020): 2079–83. http://dx.doi.org/10.1109/lsp.2020.3039729.

41

Sau, Carlo, Dario Ligas, Tiziana Fanni, Luigi Raffo, and Francesca Palumbo. "Reconfigurable Adaptive Multiple Transform Hardware Solutions for Versatile Video Coding." IEEE Access 7 (2019): 153258–68. http://dx.doi.org/10.1109/access.2019.2946054.

42

He, Liqiang, Shuhua Xiong, Ruolan Yang, Xiaohai He, and Honggang Chen. "Low-Complexity Multiple Transform Selection Combining Multi-Type Tree Partition Algorithm for Versatile Video Coding." Sensors 22, no. 15 (July 25, 2022): 5523. http://dx.doi.org/10.3390/s22155523.

Annotation:
Despite the fact that Versatile Video Coding (VVC) achieves superior coding performance to High-Efficiency Video Coding (HEVC), it takes a lot of time to encode video sequences due to the high computational complexity of its tools. Among these tools, Multiple Transform Selection (MTS) requires the best of several transforms to be obtained using the Rate-Distortion Optimization (RDO) process, which increases the time spent on video encoding, meaning that VVC is not suited to real-time sensor application networks. In this paper, a low-complexity multiple transform selection, combined with the multi-type tree partition algorithm, is proposed to address the above issue. First, to skip the MTS process, we introduce a method to estimate the Rate-Distortion (RD) cost of the last Coding Unit (CU) based on the relationship between the RD costs of transform candidates and the correlation between Sub-Coding Units' (sub-CUs') information entropy under binary splitting. When the sum of the RD costs of sub-CUs is greater than or equal to that of their parent CU, the RD checking of MTS is skipped. Second, we make full use of the coding information of neighboring CUs to terminate MTS early. The experimental results show that, compared with VVC, the proposed method achieves a 26.40% reduction in time, with a 0.13% increase in Bjøntegaard Delta Bitrate (BDBR).
43

Park, Dohyeon, Gihwa Moon, Byung Tae Oh, and Jae-Gon Kim. "Coarse-to-Fine Network-Based Intra Prediction in Versatile Video Coding." Sensors 23, no. 23 (November 27, 2023): 9452. http://dx.doi.org/10.3390/s23239452.

Annotation:
After the development of the Versatile Video Coding (VVC) standard, research on neural network-based video coding technologies continues as a potential approach for future video coding standards. Particularly, neural network-based intra prediction is receiving attention as a solution to mitigate the limitations of traditional intra prediction performance in intricate images with limited spatial redundancy. This study presents an intra prediction method based on coarse-to-fine networks that employ both convolutional neural networks and fully connected layers to enhance VVC intra prediction performance. The coarse networks are designed to adjust the influence on prediction performance depending on the positions and conditions of reference samples. Moreover, the fine networks generate refined prediction samples by considering continuity with adjacent reference samples and facilitate prediction through upscaling at a block size unsupported by the coarse networks. The proposed networks are integrated into the VVC test model (VTM) as an additional intra prediction mode to evaluate the coding performance. The experimental results show that our coarse-to-fine network architecture provides an average gain of 1.31% Bjøntegaard delta-rate (BD-rate) saving for the luma component compared with VTM 11.0 and an average of 0.47% BD-rate saving compared with the previous related work.
44

Papaioannou, Georgios I., Maria Koziri, Thanasis Loukopoulos, and Ioannis Anagnostopoulos. "On Combining Wavefront and Tile Parallelism with a Novel GPU-Friendly Fast Search." Electronics 12, no. 10 (May 13, 2023): 2223. http://dx.doi.org/10.3390/electronics12102223.

Annotation:
As the necessity of supporting ever-increasing demands in video resolution leads to new video coding standards, the challenge of harnessing their computational overhead becomes important. Such overhead stems not only from the increased image data due to higher resolutions but also from the coding techniques per se that are introduced by each standard to improve compression. All modern standards in the field of video coding offer high compression efficiency, but this is achieved by increasing the computational complexity of the encoding part. Ultra-High-Definition (UHD) videos bring new encoding implementation schemes that are being recommended for CPU and GPU parallelization. Therefore, several works have been published to achieve better performance and reduce encoding complexity. Following this idea, we proposed and evaluated a hybrid encoding scheme that jointly utilizes the constant growth of CPU power and the massive popularity of GPUs. Taking advantage of the encoding schemes of the leading video coding standards, such as High-Efficiency Video Coding (HEVC) and Versatile Video Coding (VVC), which support parallel processing through Wavefront or Tiling, we combined both of them at the same time as a whole, and in addition we introduced a GPU-friendly fast search algorithm that is highly parallel and an alternative to the default non-parallel TZ-Search. Through an experimental evaluation with common test sequences, the proposed GPU Fast Motion Estimation with our previous Wavefront per Tile Parallelism (WTP) was shown to provide a valid trade-off between speedup and video coding efficiency, effectively combining the best of two worlds, i.e., WTP using CPUs and parallel Motion Estimation with GPUs.
45

Park, Sang-Hyo, and Je-Won Kang. "Context-Based Ternary Tree Decision Method in Versatile Video Coding for Fast Intra Coding." IEEE Access 7 (2019): 172597–605. http://dx.doi.org/10.1109/access.2019.2956196.

46

Zhao, Shuai, Xiwu Shang, Guozhong Wang, and Haiwu Zhao. "A Fast Algorithm for Intra-Frame Versatile Video Coding Based on Edge Features." Sensors 23, no. 13 (July 7, 2023): 6244. http://dx.doi.org/10.3390/s23136244.

Annotation:
Versatile Video Coding (VVC) introduces many new coding technologies, such as quadtree with nested multi-type tree (QTMT), which greatly improves the efficiency of VVC coding. However, its computational complexity is higher, which affects the application of VVC in real-time scenarios. Aiming to solve the problem of the high complexity of VVC intra coding, we propose a low-complexity partition algorithm based on edge features. Firstly, the Laplacian of Gaussian (LOG) operator was used to extract the edges in the coding frame, and the edges were divided into vertical and horizontal edges. Then, the coding unit (CU) was equally divided into four sub-blocks in the horizontal and vertical directions to calculate the feature values of the horizontal and vertical edges, respectively. Based on the feature values, we skipped unnecessary partition patterns in advance. Finally, for the CUs without edges, we decided to terminate the partition process according to the depth information of neighboring CUs. The experimental results show that compared with VTM-13.0, the proposed algorithm can save 54.08% of the encoding time on average, and the BDBR (Bjøntegaard delta bit rate) only increases by 1.61%.
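The feature-extraction step is straightforward to sketch with SciPy: a Laplacian-of-Gaussian (LoG) edge map, an orientation split into horizontal and vertical edges, and per-sub-block feature values. The sigma and threshold below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, sobel

def edge_features(cu: np.ndarray, sigma: float = 1.0, thr: float = 4.0):
    """LoG edge map split into horizontal/vertical edge densities over four
    sub-blocks per direction (sigma and thr are illustrative values)."""
    f = cu.astype(np.float64)
    edges = np.abs(gaussian_laplace(f, sigma)) > thr   # binary edge map
    gy, gx = sobel(f, axis=0), sobel(f, axis=1)        # gradient components
    horizontal = edges & (np.abs(gy) >= np.abs(gx))    # edges running horizontally
    vertical = edges & (np.abs(gx) > np.abs(gy))       # edges running vertically
    h_sub = [s.mean() for s in np.array_split(horizontal, 4, axis=0)]
    v_sub = [s.mean() for s in np.array_split(vertical, 4, axis=1)]
    return h_sub, v_sub   # feature values that drive the mode skipping

img = np.zeros((32, 32))
img[:, 16:] = 255                       # one sharp vertical edge
print(edge_features(img))
```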
47

Li, Ximei, Jun He, Qi Li, and Xingru Chen. "An Adjacency Encoding Information-Based Fast Affine Motion Estimation Method for Versatile Video Coding." Electronics 11, no. 21 (October 23, 2022): 3429. http://dx.doi.org/10.3390/electronics11213429.

Annotation:
Versatile video coding (VVC), a new generation video coding standard, achieves significant improvements over high efficiency video coding (HEVC) due to its added advanced coding tools. Although the affine motion estimation adopted in VVC takes into account the translational, rotational, and scaling motions of the object to improve the accuracy of inter prediction, this technique adds high computational complexity, making VVC unsuitable for use in real-time applications. To address this issue, an adjacency encoding information-based fast affine motion estimation method for VVC is proposed in this paper. First, this paper counts the probability of using the affine mode in inter prediction. Then we analyze the trade-off between computational complexity and performance improvement based on the statistical information. Finally, by exploiting the mutual exclusivity between the skip and affine modes, an enhanced method is proposed to reduce inter prediction complexity. Experimental results show that, compared with the VVC, the proposed low-complexity method achieves a 10.11% total encoding time reduction and a 40.85% time saving in affine motion estimation, with a 0.16% Bjøntegaard delta bitrate (BDBR) increase.
48

Lee, Sujin, Sang-hyo Park, and Dongsan Jun. "Object-Cooperated Ternary Tree Partitioning Decision Method for Versatile Video Coding." Sensors 22, no. 17 (August 23, 2022): 6328. http://dx.doi.org/10.3390/s22176328.

Annotation:
In this paper, we propose an object-cooperated decision method for efficient ternary tree (TT) partitioning that reduces the encoding complexity of versatile video coding (VVC). In most previous studies, the VVC complexity was reduced using decision schemes based on the encoding context, which do not apply object detection models. We assume that high-level objects are important for deciding whether complex TT partitioning is required because they can provide hints on the characteristics of a video. Herein, we apply an object detection model that discovers and extracts high-level object features: the number and ratio of objects in frames of a video sequence. Using the extracted features, we propose machine learning (ML)-based classifiers for each TT-split direction to efficiently reduce the encoding complexity of VVC and decide whether the TT-split process can be skipped in the vertical or horizontal direction. The TT-split decision of the classifiers is formulated as a binary classification problem. Experimental results show that the proposed method decreases the encoding complexity of VVC more effectively than a state-of-the-art ML-based model.
49

Song, Hyeonju, and Yung-Lyul Lee. "Inverse Transform Using Linearity for Video Coding." Electronics 11, no. 5 (March 1, 2022): 760. http://dx.doi.org/10.3390/electronics11050760.

Annotation:
In hybrid block-based video coding, transform plays an important role in energy compaction. Transform coding converts residual data in the spatial domain into frequency domain data, thereby concentrating energy in a lower frequency band. In VVC (versatile video coding), the primary transform is performed using DCT-II (discrete cosine transform type 2), DST-VII (discrete sine transform type 7), and DCT-VIII (discrete cosine transform type 8). Considering that DCT-II, DST-VII, and DCT-VIII are all linear transforms, inverse transform is proposed to reduce the number of computations by using the linearity of transform. When the proposed inverse transform using linearity is applied to the VVC encoder and decoder, run-time savings can be achieved without decreasing the coding performance relative to the VVC decoder. It is shown that, under VVC common-test conditions (CTC), average decoding time savings values of 4% and 10% are achieved for all intra (AI) and random access (RA) configurations, respectively.
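The property being exploited is plain linearity: for a linear transform T, inv(T)(a*X1 + b*X2) = a*inv(T)(X1) + b*inv(T)(X2). The NumPy/SciPy check below demonstrates this for a separable 2-D inverse DCT-II; it illustrates the principle only, not the specific computation-saving scheme inside the VVC codec.

```python
import numpy as np
from scipy.fft import idct

def idct2(X: np.ndarray) -> np.ndarray:
    """Separable 2-D inverse DCT-II, as used on transform blocks."""
    return idct(idct(X, axis=0, norm="ortho"), axis=1, norm="ortho")

rng = np.random.default_rng(1)
X1, X2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
a, b = 3.0, -0.5

# The inverse transform of a weighted sum of coefficient blocks equals the
# weighted sum of the individually inverse-transformed blocks.
assert np.allclose(idct2(a * X1 + b * X2), a * idct2(X1) + b * idct2(X2))
```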
50

Lee, Taesik, and Dongsan Jun. "Fast Mode Decision Method of Multiple Weighted Bi-Predictions Using Lightweight Multilayer Perceptron in Versatile Video Coding." Electronics 12, no. 12 (June 15, 2023): 2685. http://dx.doi.org/10.3390/electronics12122685.

Annotation:
Versatile Video Coding (VVC), the state-of-the-art video coding standard, was developed by the Joint Video Experts Team (JVET) of ISO/IEC Moving Picture Experts Group (MPEG) and ITU-T Video Coding Experts Group (VCEG) in 2020. Although VVC can provide powerful coding performance, it requires tremendous computational complexity to determine the optimal mode decision during the encoding process. In particular, VVC adopted the bi-prediction with CU-level weight (BCW) as one of the new tools, which enhanced the coding efficiency of conventional bi-prediction by assigning different weights to the two prediction blocks in the process of inter prediction. In this study, we investigate the statistical characteristics of input features that exhibit a correlation with the BCW and define four useful types of categories to facilitate the inter prediction of VVC. With the investigated input features, a lightweight neural network with multilayer perceptron (MLP) architecture is designed to provide high accuracy and low complexity. We propose a fast BCW mode decision method with a lightweight MLP to reduce the computational complexity of the weighted multiple bi-prediction in the VVC encoder. The experimental results show that the proposed method significantly reduced the BCW encoding complexity by up to 33% with unnoticeable coding loss, compared to the VVC test model (VTM) under the random-access (RA) configuration.
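For context, BCW blends the two inter predictors with an unequal weight from a small candidate set (w/8 with w in {-2, 3, 4, 5, 10} in the VVC design). The sketch below shows the blend plus a hypothetical pruning wrapper in which a trained MLP, represented by a stand-in callable, removes unpromising weights before the RD search.

```python
import numpy as np

# Candidate BCW weights in VVC, applied as w/8.
BCW_WEIGHTS = (-2, 3, 4, 5, 10)

def bcw_blend(p0: np.ndarray, p1: np.ndarray, w: int) -> np.ndarray:
    """Weighted bi-prediction: ((8 - w) * P0 + w * P1 + 4) >> 3, 10-bit clip."""
    acc = (8 - w) * p0.astype(np.int32) + w * p1.astype(np.int32)
    return np.clip((acc + 4) >> 3, 0, 1023)

def pruned_weights(features: np.ndarray, mlp_predict) -> list:
    """Hypothetical fast decision: `mlp_predict` stands in for the paper's
    trained lightweight MLP and returns a keep/drop mask over the weight
    candidates, so only promising weights reach the RD search."""
    keep = mlp_predict(features)
    return [w for w, k in zip(BCW_WEIGHTS, keep) if k]

# Example with a stand-in predictor that keeps only the equal-weight case.
print(pruned_weights(np.zeros(4), lambda f: [False, False, True, False, False]))
```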
