Journal articles on the topic "Adaptive video coding"

Follow this link to see other types of publications on the topic: Adaptive video coding.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the 50 best journal articles for research on the topic "Adaptive video coding".

Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

Pan, Tung-Ming, Kuo-Chin Fan, and Yuan-Kai Wang. "Object-Based Approach for Adaptive Source Coding of Surveillance Video". Applied Sciences 9, no. 10 (May 16, 2019): 2003. http://dx.doi.org/10.3390/app9102003.

Abstract:
Intelligent analysis of surveillance videos over networks requires high recognition accuracy by analyzing good-quality videos that however introduce significant bandwidth requirement. Degraded video quality because of high object dynamics under wireless video transmission induces more critical issues to the success of smart video surveillance. In this paper, an object-based source coding method is proposed to preserve constant quality of video streaming over wireless networks. The inverse relationship between video quality and object dynamics (i.e., decreasing video quality due to the occurrence of large and fast-moving objects) is characterized statistically as a linear model. A regression algorithm that uses robust M-estimator statistics is proposed to construct the linear model with respect to different bitrates. The linear model is applied to predict the bitrate increment required to enhance video quality. A simulated wireless environment is set up to verify the proposed method under different wireless situations. Experiments with real surveillance videos of a variety of object dynamics are conducted to evaluate the performance of the method. Experimental results demonstrate significant improvement of streaming videos relative to both visual and quantitative aspects.
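
The linear model and robust M-estimator regression mentioned in the abstract above can be illustrated with a generic Huber-weighted iteratively reweighted least-squares fit. This is only a sketch of the statistical idea, not the paper's implementation; the variable names, Huber threshold, and sample data are assumptions.

import numpy as np

def huber_irls_fit(x, y, delta=1.0, iters=20):
    """Fit y ~ slope*x + intercept with Huber M-estimator weights via IRLS."""
    X = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    coef = np.zeros(2)
    for _ in range(iters):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        r = y - X @ coef                                # residuals
        s = np.median(np.abs(r)) / 0.6745 + 1e-12       # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= delta, 1.0, delta / np.maximum(u, 1e-12))  # Huber weights
    return coef                                         # (slope, intercept)

# Hypothetical data: object-dynamics score vs. bitrate increment (kbps),
# with one outlier that ordinary least squares would chase.
dynamics = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 0.95])
bitrate_inc = np.array([50.0, 140.0, 260.0, 340.0, 470.0, 900.0])
slope, intercept = huber_irls_fit(dynamics, bitrate_inc)
print(f"predicted increment at dynamics=0.6: {slope * 0.6 + intercept:.1f} kbps")
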
2

Fu, Zhe. "Embedded Image and Video Coding Algorithm Based on Adaptive Filtering Equation". Advances in Mathematical Physics 2021 (September 8, 2021): 1–11. http://dx.doi.org/10.1155/2021/7953993.

Abstract:
Based on an improved adaptive filtering method, this paper conducts an in-depth discussion of embedded image and video coding and improves the adaptive filtering algorithm in three aspects: starting-point prediction, search template, and window partitioning. The algorithm is integrated into the encoder for video capture and encoding. By capturing videos of different formats, resolutions, and durations, the memory size of the video files collected before and after the algorithm optimization is compared, leading to the conclusion that the optimized algorithm occupies less memory space for video files in the actual system and achieves higher coding rates. The collected video information is stored on a personal computer, and external electronic devices only need to download and install a browser to access the collected video information over the local area network through the protocol. The improved coding algorithm has higher coding efficiency and can reduce the storage space occupied by the video.
3

Tu, Wang, Xin Jin, Lingjun Li, Chenggang Yan, Yaoqi Sun, Mang Xiao, Weidong Han, and Jiyong Zhang. "Efficient Content Adaptive Plenoptic Video Coding". IEEE Access 8 (2020): 5797–804. http://dx.doi.org/10.1109/access.2020.2964056.

4

Yamaguchi, H. "Adaptive DCT coding of video signals". IEEE Transactions on Communications 41, no. 10 (1993): 1534–43. http://dx.doi.org/10.1109/26.237888.

5

Hu, Qiang, Jun Zhou, Xiaoyun Zhang, Zhiru Shi, and Zhiyong Gao. "Viewport-adaptive 360-degree video coding". Multimedia Tools and Applications 79, no. 17-18 (January 13, 2020): 12205–26. http://dx.doi.org/10.1007/s11042-019-08390-7.

6

Tsai, Chia-Yang, Ching-Yeh Chen, Tomoo Yamakage, In Suk Chong, Yu-Wen Huang, Chih-Ming Fu, Takayuki Itoh, et al. "Adaptive Loop Filtering for Video Coding". IEEE Journal of Selected Topics in Signal Processing 7, no. 6 (December 2013): 934–45. http://dx.doi.org/10.1109/jstsp.2013.2271974.

7

KIM, Kyung-Yong, Gwang-Hoon PARK, and Doug-Young SUH. "Adaptive Depth-Map Coding for 3D-Video". IEICE Transactions on Information and Systems E93-D, no. 8 (2010): 2262–72. http://dx.doi.org/10.1587/transinf.e93.d.2262.

8

Yoon, Yeo-Jin, Seung-Won Jung, Hahyun Lee, Hui Yong Kim, Jin Soo Choi, and Sung-Jea Ko. "Adaptive Prediction Block Filter for Video Coding". ETRI Journal 34, no. 1 (February 2, 2012): 106–9. http://dx.doi.org/10.4218/etrij.12.0211.0042.

9

Dong, Jie, and Yan Ye. "Adaptive Downsampling for High-Definition Video Coding". IEEE Transactions on Circuits and Systems for Video Technology 24, no. 3 (March 2014): 480–88. http://dx.doi.org/10.1109/tcsvt.2013.2278146.

10

Zhang, Yixuan, and Ce Zhu. "Adaptive coset partition for distributed video coding". Signal Processing 90, no. 8 (August 2010): 2480–86. http://dx.doi.org/10.1016/j.sigpro.2010.02.001.

11

Hsieh, M. H., and Ch H. Wei. "Adaptive multialphabet arithmetic coding for video compression". Computer Standards & Interfaces 20, no. 6-7 (March 1999): 469. http://dx.doi.org/10.1016/s0920-5489(99)91028-0.

12

Puri, Atul, R. Aravind, and Barry Haskell. "Adaptive frame/field motion compensated video coding". Signal Processing: Image Communication 5, no. 1-2 (February 1993): 39–58. http://dx.doi.org/10.1016/0923-5965(93)90026-p.

13

Yang, Chao, Ping An, Liquan Shen, and Deyang Liu. "Adaptive Bit Allocation for 3D Video Coding". Circuits, Systems, and Signal Processing 36, no. 5 (September 2, 2016): 2102–24. http://dx.doi.org/10.1007/s00034-016-0402-8.

14

Nam Ik Cho, Heesub Lee, and Sang Uk Lee. "An adaptive quantization algorithm for video coding". IEEE Transactions on Circuits and Systems for Video Technology 9, no. 4 (June 1999): 527–35. http://dx.doi.org/10.1109/76.767118.

15

Gonzales, C. A., L. Allman, T. McCarthy, P. Wendt, and A. N. Akansu. "DCT coding for motion video storage using adaptive arithmetic coding". Signal Processing: Image Communication 2, no. 2 (August 1990): 145–54. http://dx.doi.org/10.1016/0923-5965(90)90017-c.

16

Lee, Jin Young. "Enhanced Intra Prediction Based on Adaptive Coding Order and Multiple Reference Sets in HEVC". Electronics 8, no. 6 (June 22, 2019): 703. http://dx.doi.org/10.3390/electronics8060703.

Abstract:
High Efficiency Video Coding (HEVC) is the most recent video coding standard. It can achieve a significantly higher coding performance than previous video coding standards, such as MPEG-2, MPEG-4, and H.264/AVC (Advanced Video Coding). In particular, to obtain high coding efficiency in intra frames, HEVC investigates various directional spatial prediction modes and then selects the best prediction mode based on rate-distortion optimization. For further improvement of coding performance, this paper proposes an enhanced intra prediction method based on adaptive coding order and multiple reference sets. The adaptive coding order determines the best coding order for each block, and the multiple reference sets enable the block to be predicted from various reference samples. Experimental results demonstrate that the proposed method achieves better intra coding performance than the conventional method.
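
The adaptive coding order and multiple reference sets described above are chosen through rate-distortion optimization. The generic sketch below shows only the selection skeleton; the candidate names, the evaluation function, and the lambda value are invented placeholders, not the paper's algorithm.

from typing import Callable, Iterable, Tuple

def best_candidate(candidates: Iterable[str],
                   evaluate: Callable[[str], Tuple[float, float]],
                   lam: float) -> Tuple[str, float]:
    """Pick the candidate (e.g. a coding order or reference set) with the
    smallest Lagrangian rate-distortion cost J = D + lam * R."""
    best, best_cost = None, float("inf")
    for cand in candidates:
        distortion, rate = evaluate(cand)
        cost = distortion + lam * rate
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

# Hypothetical per-block evaluation returning (SSE distortion, bits).
def fake_evaluate(cand: str) -> Tuple[float, float]:
    table = {"raster_order": (1200.0, 310.0),
             "reverse_order": (1100.0, 335.0),
             "extended_refs": (950.0, 360.0)}
    return table[cand]

print(best_candidate(["raster_order", "reverse_order", "extended_refs"],
                     fake_evaluate, lam=0.85))
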
17

HoangVan, Xiem. "Adaptive Quantization Parameter Estimation for HEVC Based Surveillance Scalable Video Coding". Electronics 9, no. 6 (May 30, 2020): 915. http://dx.doi.org/10.3390/electronics9060915.

Abstract:
Visual surveillance systems have been playing a vital role in human modern life with a large number of applications, ranging from remote home management, public security to traffic monitoring. The recent High Efficiency Video Coding (HEVC) scalable extension, namely SHVC, provides not only the compression efficiency but also the adaptive streaming capability. However, SHVC is originally designed for videos captured from generic scenes rather than from visual surveillance systems. In this paper, we propose a novel HEVC based surveillance scalable video coding (SSVC) framework. First, to achieve high quality inter prediction, we propose a long-term reference coding method, which adaptively exploits the temporal correlation among frames in surveillance video. Second, to optimize the SSVC compression performance, we design a quantization parameter adaptation mechanism in which the relationship between SSVC rate-distortion (RD) performance and the quantization parameter is statistically modeled by a fourth-order polynomial function. Afterwards, an appropriate quantization parameter is derived for frames at long-term reference position. Experiments conducted for a common set of surveillance videos have shown that the proposed SSVC significantly outperforms the relevant SHVC standard, notably by around 6.9% and 12.6% bitrate saving for the low delay (LD) and random access (RA) coding configurations, respectively while still providing a similar perceptual decoded frame quality.
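
The QP adaptation step summarized above fits a fourth-order polynomial to the relationship between RD performance and the quantization parameter and then derives a suitable QP. A minimal numerical sketch of that idea, with invented sample points rather than the paper's data:

import numpy as np

# Hypothetical measurements: QP vs. an RD score (lower is better).
qp = np.array([22, 27, 32, 37, 42], dtype=float)
rd_score = np.array([4.1, 3.2, 2.9, 3.4, 4.6])

# Fit a 4th-order polynomial RD(QP), as in the abstract's modeling step.
coeffs = np.polyfit(qp, rd_score, deg=4)
model = np.poly1d(coeffs)

# Pick the integer QP in range that minimizes the modeled RD score.
candidates = np.arange(22, 43)
best_qp = int(candidates[np.argmin(model(candidates))])
print("selected QP for long-term reference frames:", best_qp)
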
18

Ismeel, Ban Sabah. "Adaptive inter frame compression using image segmented technique". Iraqi Journal of Physics (IJP) 11, no. 21 (February 24, 2019): 1–11. http://dx.doi.org/10.30723/ijp.v11i21.361.

Abstract:
The computer vision branch of artificial intelligence is concerned with developing algorithms for analyzing video image content, and extracting edge information is the essential process in most pictorial pattern recognition problems. A new edge detection technique for detecting boundaries is introduced in this research. The selection of typical lossy techniques for encoding edge video images is also discussed, with the focus placed on the Block-Truncation coding technique and the Discrete Cosine Transform (DCT) coding technique. In order to reduce the volume of pictorial data that one may need to store or transmit, the research modifies a method for video image data compression based on a two-component code: the video image is partitioned into regions of slowly varying intensity, the contours separating the regions are coded by DCT, and the remaining image regions are coded by Block-Truncation Coding. This hybrid coding technique is called segmented image coding (SIC). A modified four-step search scheme for motion estimation is also introduced, which contributes to decreasing the motion estimation search time between successive inter frames.
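
One building block mentioned above is Block-Truncation Coding of the slowly varying regions. The snippet below is the classic textbook BTC of a single block, shown only to make the idea concrete; it is not the paper's hybrid SIC scheme.

import numpy as np

def btc_encode(block):
    """Classic Block-Truncation Coding: keep mean, std and a 1-bit plane."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    return mean, std, bitmap

def btc_decode(mean, std, bitmap):
    n = bitmap.size
    q = bitmap.sum()                     # number of 'high' pixels
    if q in (0, n):                      # flat block
        return np.full(bitmap.shape, mean)
    lo = mean - std * np.sqrt(q / (n - q))
    hi = mean + std * np.sqrt((n - q) / q)
    return np.where(bitmap, hi, lo)

block = np.array([[12, 14, 200, 202],
                  [13, 15, 198, 205],
                  [11, 16, 199, 201],
                  [12, 13, 203, 204]], dtype=float)
mean, std, bitmap = btc_encode(block)
print(np.round(btc_decode(mean, std, bitmap), 1))
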
19

Mieloch, Dawid, Adrian Dziembowski, Marek Domański, Gwangsoon Lee, and Jun Young Jeong. "Color-dependent pruning in immersive video coding". Journal of WSCG 30, no. 1-2 (2022): 91–98. http://dx.doi.org/10.24132/jwscg.2022.11.

Abstract:
This paper presents a color-dependent method of removing inter-view redundancy from multiview video. The pruning of input views decides which fragments of views are redundant, i.e., do not provide new information about the three-dimensional scene, as these fragments were already visible from different views. The proposed modification of the pruning uses both color and depth and utilizes an adaptive pruning threshold, which increases robustness against noisy input. As the performed experiments have shown, the proposal provides a significant improvement in the quality of encoded multiview videos and decreases erroneous areas in the decoded video caused by different camera characteristics, specular surfaces, and mirror-like reflections. The pruning method proposed by the authors of this paper was evaluated by experts of the ISO/IEC JTC1/SC29/WG 11 MPEG and included by them in the Test Model of MPEG Immersive Video.
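
The pruning decision described above compares both color and depth against an adaptive threshold. A simplified per-pixel stand-in for such a test is sketched below; the thresholds and the noise-based adaptation rule are assumptions, not the MPEG Immersive Video Test Model values.

import numpy as np

def prune_mask(color_a, color_b, depth_a, depth_b,
               base_thresh=10.0, noise_level=2.0):
    """Mark pixels of view B as prunable when view A already shows them:
    both color and depth must match within an adaptive threshold."""
    # Adaptive threshold grows with the estimated sensor noise.
    thresh_c = base_thresh + 3.0 * noise_level
    thresh_d = 0.05  # relative depth tolerance (assumed)
    color_diff = np.linalg.norm(color_a - color_b, axis=-1)
    depth_diff = np.abs(depth_a - depth_b) / np.maximum(depth_a, 1e-6)
    return (color_diff < thresh_c) & (depth_diff < thresh_d)

h, w = 4, 4
rng = np.random.default_rng(0)
c_a = rng.uniform(0, 255, (h, w, 3))
c_b = c_a + rng.normal(0, 1, (h, w, 3))      # same content, slight noise
d_a = np.full((h, w), 2.0)
d_b = d_a + 0.01
print(prune_mask(c_a, c_b, d_a, d_b))
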
20

Kumar, Praveen, Amit Pande, and Ankush Mittal. "Efficient compression and network adaptive video coding for distributed video surveillance". Multimedia Tools and Applications 56, no. 2 (December 7, 2010): 365–84. http://dx.doi.org/10.1007/s11042-010-0672-2.

21

MENG, Lili, Yao ZHAO, Anhong WANG, Jeng-Shyang PAN, and Huihui BAI. "Compatible Stereo Video Coding with Adaptive Prediction Structure". IEICE Transactions on Information and Systems E94-D, no. 7 (2011): 1506–9. http://dx.doi.org/10.1587/transinf.e94.d.1506.

22

Rahmoune, A., P. Vandergheynst, and P. Frossard. "Flexible motion-adaptive video coding with redundant expansions". IEEE Transactions on Circuits and Systems for Video Technology 16, no. 2 (February 2006): 178–90. http://dx.doi.org/10.1109/tcsvt.2005.859932.

23

Kamnoonwatana, N., D. Agrafiotis, and C. N. Canagarajah. "Flexible Adaptive Multiple Description Coding for Video Transmission". IEEE Transactions on Circuits and Systems for Video Technology 22, no. 1 (January 2012): 1–11. http://dx.doi.org/10.1109/tcsvt.2011.2129251.

24

Chun, K. W., K. W. Lim, H. D. Cho, and J. B. Ra. "An adaptive perceptual quantization algorithm for video coding". IEEE Transactions on Consumer Electronics 39, no. 3 (1993): 555–58. http://dx.doi.org/10.1109/30.234634.

25

Liu, Pengyu, Yuan Gao, and Kebin Jia. "An Adaptive Motion Estimation Scheme for Video Coding". Scientific World Journal 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/381056.

Abstract:
The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, including prediction of the magnitude and direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce the ME time by up to 20.86% while the rate-distortion performance is not compromised.
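
The scheme above adapts the search pattern and region to predicted motion-vector statistics. A stripped-down sketch of the general idea is given below: plain block matching whose search range is set from the predicted MV magnitude. The scaling rule and the synthetic frames are assumptions, not the UMHexagonS modification proposed in the paper.

import numpy as np

def sad(a, b):
    return np.abs(a.astype(int) - b.astype(int)).sum()

def adaptive_block_search(cur, ref, bx, by, bs, pred_mv):
    """Full search inside a window whose radius adapts to the predicted MV."""
    # Small predicted motion -> small window; large motion -> wider window.
    radius = int(min(16, max(2, 2 * np.hypot(*pred_mv))))
    block = cur[by:by + bs, bx:bx + bs]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                cost = sad(block, ref[y:y + bs, x:x + bs])
                if cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost

gen = np.random.default_rng(1)
ref = gen.integers(0, 255, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))   # shift content to simulate motion
print(adaptive_block_search(cur, ref, 16, 16, 8, pred_mv=(3, 2)))
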
26

Chen, Jianwen, Jianhua Zheng, Feng Xu, and John Villasenor. "Adaptive Frequency Weighting for High-Performance Video Coding". IEEE Transactions on Circuits and Systems for Video Technology 22, no. 7 (July 2012): 1027–36. http://dx.doi.org/10.1109/tcsvt.2012.2189671.

27

Seixas Dias, Andre, Shenglan Huang, Saverio G. Blasi, Marta Mrak, and Ebroul Izquierdo. "Time-Constrained Video Delivery Using Adaptive Coding Parameters". IEEE Transactions on Circuits and Systems for Video Technology 29, no. 7 (July 2019): 2082–95. http://dx.doi.org/10.1109/tcsvt.2018.2857212.

28

PARK, M. W., G. H. PARK, S. JEONG, D. Y. SUH, and K. KIM. "Adaptive GOP Structure for Joint Scalable Video Coding". IEICE Transactions on Communications E90-B, no. 2 (February 1, 2007): 431–34. http://dx.doi.org/10.1093/ietcom/e90-b.2.431.

29

Paul, Anand. "Adaptive Search Window for High Efficiency Video Coding". Journal of Signal Processing Systems 79, no. 3 (September 10, 2013): 257–62. http://dx.doi.org/10.1007/s11265-013-0841-4.

30

Seo, Young-Ho, Yoon-Hyuk Lee, Ji-Sang Yoo, and Dong-Wook Kim. "Scalable hologram video coding for adaptive transmitting service". Applied Optics 52, no. 1 (November 27, 2012): A254. http://dx.doi.org/10.1364/ao.52.00a254.

31

Puri, A., and R. Aravind. "Motion-compensated video coding with adaptive perceptual quantization". IEEE Transactions on Circuits and Systems for Video Technology 1, no. 4 (1991): 351–61. http://dx.doi.org/10.1109/76.120774.

32

Sikora, T., and B. Makai. "Shape-adaptive DCT for generic coding of video". IEEE Transactions on Circuits and Systems for Video Technology 5, no. 1 (1995): 59–62. http://dx.doi.org/10.1109/76.350781.

33

Meng-Han Hsieh and Che-Ho Wei. "An adaptive multialphabet arithmetic coding for video compression". IEEE Transactions on Circuits and Systems for Video Technology 8, no. 2 (April 1998): 130–37. http://dx.doi.org/10.1109/76.664097.

34

Xiang, Guoqing, Huizhu Jia, Mingyuan Yang, Yuan Li, and Xiaodong Xie. "A novel adaptive quantization method for video coding". Multimedia Tools and Applications 77, no. 12 (August 12, 2017): 14817–40. http://dx.doi.org/10.1007/s11042-017-5064-4.

35

Kamnoonwatana, Nawat, Dimitris Agrafiotis, and Nishan Canagarajah. "Rate controlled redundancy-adaptive multiple description video coding". Signal Processing: Image Communication 26, no. 4-5 (April 2011): 205–19. http://dx.doi.org/10.1016/j.image.2011.02.001.

36

Xiong, Zhiwei, Xiaoyan Sun, Jizheng Xu, and Feng Wu. "Content-adaptive deblocking for high efficiency video coding". Signal Processing: Image Communication 27, no. 3 (March 2012): 260–68. http://dx.doi.org/10.1016/j.image.2012.01.019.

37

Guofang, Tu, and Zhang Can. "Block adaptive recursive algorithm for video conference coding". Journal of Electronics (China) 13, no. 2 (April 1996): 140–46. http://dx.doi.org/10.1007/bf02684755.

38

Kokare, Parmeshwar, and MasoodhuBanu, N. M. "Review on using Region of interest for HEVC". International Journal of Engineering & Technology 7, no. 2.4 (March 10, 2018): 93. http://dx.doi.org/10.14419/ijet.v7i2.4.11173.

Abstract:
High Efficiency Video Coding (HEVC) is the latest video compression standard. The coding efficiency of HEVC is 50% higher than that of the preceding standard, Advanced Video Coding (AVC). HEVC achieves this by introducing many advanced techniques, such as an adaptive block partitioning system known as the quadtree, tiles for parallelization, improved entropy coding called Context-Adaptive Binary Arithmetic Coding (CABAC), and 35 intra prediction modes (IPMs). All these techniques have increased the complexity of the encoding process, due to which real-time application of HEVC for video transfer is not yet convenient. The main objective of this paper is to provide a review of recent developments in HEVC, particularly focusing on using a region of interest (ROI) to reduce the encoding time. Summaries of the different approaches to identifying the ROI are discussed and a new method is explained.
39

Liu, Bo, Jiandong Liu, Shuhong Wang, Ming Zhong, Bo Li, and Yujie Liu. "HEVC Video Encryption Algorithm Based on Integer Dynamic Coupling Tent Mapping". Journal of Advanced Computational Intelligence and Intelligent Informatics 24, no. 3 (May 20, 2020): 335–45. http://dx.doi.org/10.20965/jaciii.2020.p0335.

Abstract:
A selective encryption algorithm is proposed to improve the efficiency of high efficiency video coding (HEVC) video encryption and ensure the security of HEVC videos. The algorithm adopts the integer dynamic coupling tent mapping optimization model as the pseudo-random sequence generator, and multi-core parallelization is used as the sequence generation mechanism. The binstrings during the process of context adaptive binary arithmetic coding are selected for encryption, which conforms to the features of invariable binstream and compatible format in terms of video encryption. Performance tests for six types of standard videos with different resolutions were performed. The results indicated that the encryption algorithm has a large key space and benefits from a high encryption effect.
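
As a loose illustration of the selective-encryption idea summarized above, the snippet below XORs a placeholder payload with a keystream from a plain floating-point tent map. It is not the paper's integer dynamic coupling tent mapping model, and the payload merely stands in for the CABAC bin strings selected for encryption.

def tent_keystream(x0: float, mu: float, n: int) -> bytes:
    """Generate n pseudo-random bytes from a simple tent map."""
    x, out = x0, []
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def selective_xor(payload: bytes, key: bytes) -> bytes:
    """Encrypt/decrypt by XOR; applying it twice restores the input."""
    return bytes(p ^ k for p, k in zip(payload, key))

sensitive_bins = b"example sign and suffix bins"   # placeholder payload
ks = tent_keystream(x0=0.37, mu=1.99, n=len(sensitive_bins))
enc = selective_xor(sensitive_bins, ks)
assert selective_xor(enc, ks) == sensitive_bins
print(enc.hex())
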
40

Vu, Tien Huu, Minh Ngoc Do, Sang Quang Nguyen, Huy PhiCong, Thipphaphone Sisouvong, and Xiem HoangVan. "Learning Adaptive Quantization Parameter for Consistent Quality Oriented Video Coding". Electronics 12, no. 24 (December 6, 2023): 4905. http://dx.doi.org/10.3390/electronics12244905.

Abstract:
In the Industry 4.0 era, video applications such as surveillance visual systems, video conferencing, or video broadcasting have been playing a vital role. In these applications, for manipulating and tracking objects in decoded video, the quality of the decoded video should be consistent because it largely affects the performance of the machine analysis. To cope with this problem, we propose a novel perceptual video coding (PVC) solution in which a full-reference quality metric named video multimethod assessment fusion (VMAF) is employed together with a deep convolutional neural network (CNN) to obtain consistent quality while still achieving high compression performance. First, in order to meet the consistent-quality requirement, we propose a CNN model with an expected VMAF as input to adaptively adjust the quantization parameters (QP) for each coding block. Afterwards, to increase the compression performance, a Lagrange coefficient of the rate-distortion optimization (RDO) mechanism is adaptively computed according to rate-QP and quality-QP models. The experimental results show that the proposed PVC solution achieves two targets simultaneously: the quality of the video sequence is kept consistent with an expected quality level, and the bitrate saving of the proposed method is higher than that of traditional video coding standards and the relevant benchmark, notably around 10% bitrate saving on average.
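
The work above adapts the QP per block toward an expected VMAF and recomputes the RDO Lagrange multiplier from rate-QP and quality-QP models. The sketch below omits the CNN entirely and only shows the adaptive-lambda arithmetic under assumed exponential rate-QP and linear quality-QP models with invented coefficients.

import numpy as np

# Assumed parametric models (illustrative coefficients only):
#   rate(QP)    = a * exp(-b * QP)          (kbps)
#   quality(QP) = c - d * QP                (VMAF-like score)
a, b, c, d = 20000.0, 0.12, 110.0, 1.3

def rate(qp):
    return a * np.exp(-b * qp)

def quality(qp):
    return c - d * qp

def adaptive_lambda(qp):
    """Lagrange multiplier lambda = -dD/dR, treating distortion as -quality."""
    d_rate_dqp = -a * b * np.exp(-b * qp)
    d_dist_dqp = d                      # D = -(c - d*qp) -> dD/dqp = d
    return -d_dist_dqp / d_rate_dqp

def qp_for_target_quality(target_vmaf):
    """Invert the quality model to get a block-level QP for a target score."""
    return float(np.clip((c - target_vmaf) / d, 0, 51))

qp = qp_for_target_quality(target_vmaf=85.0)
print(f"QP ~ {qp:.1f}, lambda ~ {adaptive_lambda(qp):.4f}")
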
41

Ibrahim, Sarmad K., and Nasser N. Khamiss. "A New Wireless Generation Technology for Video Streaming". Journal of Computer Networks and Communications 2019 (April 21, 2019): 1–9. http://dx.doi.org/10.1155/2019/3671826.

Abstract:
With the exponential rise in the volume of video traffic in cellular networks, there is an urgent need to improve the quality of video delivery. This research proposes a mobile generation model based on updated fourth- and fifth-generation mobile technologies, called Proposed Generation (Pro-G). This model uses a wider bandwidth and advanced adaptive modulation and coding. It also incorporates adaptive video streaming at multiple video data rates by using a transcoding technique, called H.265 proposed (H.265 pro). Both methods are tested to provide a large number of video/data users with higher speed and better quality. A comparison with 4G technology is made to assess the improvement in the number of users and in data rate. The suggested video coding shows how much more reliable the overall system is over a congested channel than conventional video coding technologies such as high-efficiency video coding (HEVC/H.265) and advanced video coding (AVC/H.264). The results showed that the proposed method of transmitting wireless data is better than the LTE-ADV method: the data transfer rate increases by 29% compared with LTE-ADV, while the bitrate saving of the proposed video coding increases to 13% compared with H.265.
42

Grois, Dan, and Ofer Hadar. "Efficient Region-of-Interest Scalable Video Coding with Adaptive Bit-Rate Control". Advances in Multimedia 2013 (2013): 1–17. http://dx.doi.org/10.1155/2013/281593.

Abstract:
This work relates to the regions-of-interest (ROI) coding that is a desirable feature in future applications based on the scalable video coding, which is an extension of the H.264/MPEG-4 AVC standard. Due to the dramatic technological progress, there is a plurality of heterogeneous devices, which can be used for viewing a variety of video content. Devices such as smartphones and tablets are mostly resource-limited devices, which make it difficult to display high-quality content. Usually, the displayed video content contains one or more ROI(s), which should be adaptively selected from the preencoded scalable video bitstream. Thus, an efficient scalable ROI video coding scheme is proposed in this work, thereby enabling the extraction of the desired regions-of-interest and the adaptive setting of the desirable ROI location, size, and resolution. In addition, an adaptive bit-rate control is provided for the region-of-interest scalable video coding. The performance of the presented techniques is demonstrated and compared with the joint scalable video model reference software (JSVM 9.19), thereby showing significant bit-rate savings as a tradeoff for the relatively low PSNR degradation.
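
A toy version of ROI-weighted bit allocation in the spirit of the work above is shown below; the QP-offset rule, the block grid, and the ROI rectangle are made up for illustration and are not the paper's bit-rate control.

import numpy as np

def roi_qp_map(width_blocks, height_blocks, roi, base_qp=32,
               roi_offset=-4, bg_offset=+3):
    """Build a per-block QP map: lower QP (more bits) inside the ROI,
    higher QP (fewer bits) for the background."""
    x0, y0, x1, y1 = roi  # ROI rectangle in block units
    qp = np.full((height_blocks, width_blocks), base_qp + bg_offset, dtype=int)
    qp[y0:y1, x0:x1] = base_qp + roi_offset
    return np.clip(qp, 0, 51)

qp_map = roi_qp_map(width_blocks=8, height_blocks=6, roi=(2, 1, 6, 4))
print(qp_map)
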
43

Bachu, Srinivas, and N. Ramya Teja. "Fuzzy Holoentropy-Based Adaptive Inter-Prediction Mode Selection for H.264 Video Coding". International Journal of Mobile Computing and Multimedia Communications 10, no. 2 (April 2019): 42–60. http://dx.doi.org/10.4018/ijmcmc.2019040103.

Abstract:
Due to the advancement of multimedia and its requirement for communication over networks, video compression has received much attention among researchers. One of the popular video coding schemes is scalable video coding, referred to as the H.264/AVC standard. The major drawback of H.264 is that it performs an exhaustive search over the inter-layer prediction to obtain the best rate-distortion performance. To reduce the computation overhead due to the exhaustive search in the mode prediction process, this paper presents a new technique for inter prediction mode selection based on fuzzy holoentropy. The proposed scheme utilizes the pixel values and the probabilistic distribution of pixel symbols to decide the mode. Adaptive mode selection is introduced by analyzing the pixel values of the current block to be coded against those of a motion-compensated reference block using fuzzy holoentropy. The adaptively selected mode decision can reduce the computation time without affecting the visual quality of frames. The proposed scheme is evaluated on five videos, and the analysis shows that it has overall high performance, with values of 41.367 dB and 0.992 for PSNR and SSIM, respectively.
44

Zhang, Qiuwen, Shuaichao Wei, and Rijian Su. "Low-Complexity Texture Video Coding Based on Motion Homogeneity for 3D-HEVC". Scientific Programming 2019 (January 15, 2019): 1–13. http://dx.doi.org/10.1155/2019/1574081.

Abstract:
Three-dimensional extension of the high efficiency video coding (3D-HEVC) is an emerging international video compression standard for multiview video system applications. Similar to HEVC, a computationally expensive mode decision is performed using all depth levels and prediction modes to select the least rate-distortion (RD) cost for each coding unit (CU). In addition, new tools and intercomponent prediction techniques have been introduced to 3D-HEVC for improving the compression efficiency of the multiview texture videos. These techniques, despite achieving the highest texture video coding efficiency, involve extremely high-complex procedures, thus limiting 3D-HEVC encoders in practical applications. In this paper, a fast texture video coding method based on motion homogeneity is proposed to reduce 3D-HEVC computational complexity. Because the multiview texture videos instantly represent the same scene at the same time (considering that the optimal CU depth level and prediction modes are highly multiview content dependent), it is not efficient to use all depth levels and prediction modes in 3D-HEVC. The motion homogeneity model of a CU is first studied according to the motion vectors and prediction modes from the corresponding CUs. Based on this model, we present three efficient texture video coding approaches, such as the fast depth level range determination, early SKIP/Merge mode decision, and adaptive motion search range adjustment. Experimental results demonstrate that the proposed overall method can save 56.6% encoding time with only trivial coding efficiency degradation.
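
A simplified stand-in for the motion-homogeneity test described above: the variance of neighbouring motion vectors is compared against a threshold to restrict CU depth and try SKIP/Merge early. The threshold and the returned encoder hints are assumptions, not the paper's decision rules.

import numpy as np

def motion_homogeneity(mvs):
    """Lower values mean the neighbouring motion vectors agree (homogeneous)."""
    mvs = np.asarray(mvs, dtype=float)       # shape (N, 2): (mvx, mvy) pairs
    return float(mvs.var(axis=0).sum())

def fast_cu_decision(neighbour_mvs, var_thresh=0.5):
    """Skip deep CU splitting and try SKIP/Merge first in homogeneous regions."""
    if motion_homogeneity(neighbour_mvs) < var_thresh:
        return {"max_depth": 1, "try_skip_merge_first": True, "search_range": 8}
    return {"max_depth": 3, "try_skip_merge_first": False, "search_range": 64}

print(fast_cu_decision([(2, 0), (2, 1), (2, 0), (3, 0)]))      # homogeneous motion
print(fast_cu_decision([(8, -4), (-6, 3), (0, 12), (5, -9)]))  # chaotic motion
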
45

Shen, Liquan, Ping An, Xinpeng Zhang, and Zhaoyang Zhang. "Adaptive transform size decision algorithm for high-efficiency video coding inter coding". Journal of Electronic Imaging 23, no. 4 (August 27, 2014): 043023. http://dx.doi.org/10.1117/1.jei.23.4.043023.

46

Zupancic, Ivan, Saverio G. Blasi, Eduardo Peixoto, and Ebroul Izquierdo. "Inter-Prediction Optimizations for Video Coding Using Adaptive Coding Unit Visiting Order". IEEE Transactions on Multimedia 18, no. 9 (September 2016): 1677–90. http://dx.doi.org/10.1109/tmm.2016.2579505.

47

A. Suthar, Haresh. "VHDL Implementation of H.264 Video Coding Standard". International Journal of Reconfigurable and Embedded Systems (IJRES) 1, no. 3 (November 1, 2012): 95. http://dx.doi.org/10.11591/ijres.v1.i3.pp95-102.

Abstract:
This paper contains a VHDL implementation of the H.264 video coding standard, the video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goal of the H.264/AVC standardization effort is to enhance compression performance and provide a “network-friendly” video representation addressing “conversational” (video telephony) and “non-conversational” (storage, broadcast, or streaming) applications. The H.264 video coder standard has fundamental blocks such as transform and quantization, intra prediction, inter prediction, and Context Adaptive Variable Length Coding (CAVLC). Each block is designed and integrated into one top module in VHDL.
48

MORI, Shungo, and Masaki BANDAI. "A Quality-Level Selection for Adaptive Video Streaming with Scalable Video Coding". IEICE Transactions on Communications E102.B, no. 4 (April 1, 2019): 824–31. http://dx.doi.org/10.1587/transcom.2017ebp3432.

49

Shyh-Fang, Huang. "Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV". International Journal of Digital Multimedia Broadcasting 2012 (2012): 1–7. http://dx.doi.org/10.1155/2012/801641.

Abstract:
With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks have become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WIMAX) is a good candidate for delivering video signals because through WIMAX the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user is really concerned with is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism in multiresolution video coding structures over WIMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can be simply mapped to the network requirements by a mapping table, and then the end-to-end QoS is achieved. We performed experiments with multiresolution MPEG coding over WIMAX networks. In addition to the QoP parameters, the video characteristics, such as the picture activity and the video mobility, also affect the QoS significantly.
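
The QoP-to-QoS mapping mentioned above reduces, at its simplest, to a lookup from presentation-quality parameters to network requirements. The table values below are invented for illustration only.

# (resolution, frame_rate) -> required QoS: (min bandwidth kbps, max delay ms)
QOP_TO_QOS = {
    ("CIF",   15): (384,   300),
    ("CIF",   30): (768,   200),
    ("SD",    30): (2000,  150),
    ("HD720", 30): (4500,  100),
}

def qos_for(resolution: str, frame_rate: int):
    """Map the requested quality of presentation to WIMAX QoS settings."""
    try:
        bandwidth, delay = QOP_TO_QOS[(resolution, frame_rate)]
    except KeyError:
        raise ValueError("no QoS profile for this QoP combination") from None
    return {"min_bandwidth_kbps": bandwidth, "max_delay_ms": delay}

print(qos_for("SD", 30))
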
50

Wang, Gang, Hexin Chen, Mianshu Chen, Qiang Zhao, and Yingjie Song. "Fractional-pel Interpolation Algorithm Based on Adaptive Filter". International Journal of Pattern Recognition and Artificial Intelligence 34, no. 08 (November 18, 2019): 2054022. http://dx.doi.org/10.1142/s0218001420540221.

Abstract:
In order to further improve fractional-pel interpolation image quality of video sequence with different resolutions and reduce algorithm complexity, the fractional-pel interpolation algorithm based on adaptive filter (AF_FIA) is proposed. This algorithm adaptively selects the interpolation filters with different orders according to the three video sequence regions with different resolutions; in the three video sequence regions with different resolutions, the high-order interpolation filter is replaced by low-order interpolation filter according to the correlation between pixels to realize the adaptive selection of filter. The complexity analysis results show that compared with other algorithms, this algorithm reduces space complexity and computation complexity, thus reducing the storage access and coding time. The simulation results indicate that compared with other algorithms, this algorithm has good coding performance and robustness for video sequences with different resolutions.
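
A rough sketch of the adaptive choice between a low-order and a high-order interpolation filter based on local pixel correlation, in the spirit of the entry above; the correlation measure, threshold, and filter taps are placeholders, not the AF_FIA coefficients.

import numpy as np

# Candidate half-pel interpolation filters (illustrative taps only).
SHORT_FILTER = np.array([1, 1], dtype=float) / 2                     # bilinear
LONG_FILTER = np.array([1, -5, 20, 20, -5, 1], dtype=float) / 32     # 6-tap

def local_correlation(row):
    """Lag-1 autocorrelation of a pixel row, used as a smoothness measure."""
    row = row - row.mean()
    denom = float(np.dot(row, row)) + 1e-12
    return float(np.dot(row[:-1], row[1:])) / denom

def half_pel_interpolate(row, corr_thresh=0.9):
    """Use the cheap short filter where pixels are highly correlated,
    otherwise fall back to the longer filter."""
    taps = SHORT_FILTER if local_correlation(row) > corr_thresh else LONG_FILTER
    return np.convolve(row, taps, mode="valid")

smooth = np.linspace(10, 30, 64)                                   # smooth region
texture = np.random.default_rng(2).integers(0, 255, 64).astype(float)  # textured
print(len(half_pel_interpolate(smooth)), len(half_pel_interpolate(texture)))
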