To view other types of publications on this topic, follow the link: Video compression.

Journal articles on the topic "Video compression"


Consult the top 50 journal articles for your research on the topic "Video compression".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Rajasekhar, H., and B. Prabhakara Rao. "An Efficient Video Compression Technique Using Watershed Algorithm and JPEG-LS Encoding." Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 6671–79. http://dx.doi.org/10.1166/jctn.2016.5613.

Abstract:
In a previous video compression method, videos were segmented using a novel motion estimation algorithm aided by the watershed method. However, the compression ratio (CR) achieved with that algorithm was inadequate, and its performance in the encoding and decoding stages needed improvement, since most video compression methods rely on encoding techniques such as JPEG, run-length coding, Huffman coding, and LSK encoding; improving the encoding technique used in the compression process improves the compression result. To overcome these drawbacks, we propose a new video compression method built on a well-established encoding technique. In the proposed method, the motion vectors of the input video frames are estimated by applying the watershed and ARS-ST (Adaptive Rood Search with Spatio-Temporal) algorithms. The vector blocks with high difference values are then encoded with the JPEG-LS encoder. JPEG-LS has excellent coding and computational efficiency and outperforms JPEG2000 and many other image compression methods; it has relatively low complexity and low storage requirements, and its compression capability is sufficiently efficient. To obtain the compressed video, the encoded blocks are subsequently decoded by JPEG-LS. The implementation results show the effectiveness of the proposed method in compressing a large number of videos. Its performance is evaluated by comparing its results with existing video compression techniques; the comparison shows that the proposed method achieves a higher compression ratio and PSNR on the test videos than the existing techniques.
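The compression ratio and PSNR figures used to evaluate methods like the one above are simple to compute. A minimal sketch in Python (the pixel values and byte sizes are made-up illustrative data, not numbers from the paper):

```python
import math

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """CR = original size / compressed size; higher means better compression."""
    return original_bytes / compressed_bytes

def psnr(original, reconstructed, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical signals: infinite PSNR
    return 10 * math.log10(max_val ** 2 / mse)

frame   = [52, 55, 61, 66, 70, 61, 64, 73]   # hypothetical original pixels
decoded = [52, 54, 61, 67, 70, 60, 64, 73]   # hypothetical decoded pixels
print(compression_ratio(1_000_000, 250_000))  # → 4.0
print(round(psnr(frame, decoded), 2))
```

Papers in this list typically report exactly these two numbers per test video, which is why the metrics recur in almost every abstract below.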
2

Mishra, Amit Kumar. "Versatile Video Coding (VVC) Standard: Overview and Applications." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 2 (September 10, 2019): 975–81. http://dx.doi.org/10.17762/turcomat.v10i2.13578.

Abstract:
Information security includes picture and video compression and encryption, since compressed data is more secure than uncompressed imagery; data of smaller size is also simpler to handle. Efficient, secure, and simple data transport methods are therefore built on effective data compression technology. Compression algorithms fall into two sorts: lossy and lossless. Either type can be applied to any data format, including text, audio, video, and picture files. In this procedure, the Least Significant Bit technique is used to encrypt each frame of the video file in order to increase security. The primary goals of this procedure are to safeguard the data by encrypting the frames and to compress the video file. Using PSNR to enhance process throughput would also enhance data transmission security while reducing data loss.
3

Strachan, David, Margarida DeBruin, and Robert Marhong. "Video Compression." SMPTE Journal 105, no. 2 (February 1996): 68–73. http://dx.doi.org/10.5594/j04666.

4

Megala, G., et al. "State-of-the-Art in Video Processing: Compression, Optimization and Retrieval." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 1256–72. http://dx.doi.org/10.17762/turcomat.v12i5.1793.

Abstract:
Video compression plays a vital role in modern social media networking and its plethora of multimedia applications. It enables transmission media to transfer videos competently and storage resources to hold video efficiently. Nowadays, high-resolution video data are transferred through high-bit-rate communication channels in order to send multiple compressed videos. There have been many advances in the transmission and efficient storage of compressed video, and compression is the primary task involved in multimedia services. This paper summarizes the compression standards and describes the main concepts involved in video coding. Video compression converts a large raw video sequence into a small, compact one, achieving a high compression ratio with good perceptual video quality; removing redundant information is the main task in compressing a video sequence. A survey of various block matching algorithms, quantization, and entropy coding is presented. It is found that many of the methods have high computational complexity and need improvement through optimization.
5

Mohammed, Dhrgham Hani, and Laith Ali Abdul-Rahaim. "A Proposed of Multimedia Compression System Using Three - Dimensional Transformation." Webology 18, SI05 (October 30, 2021): 816–31. http://dx.doi.org/10.14704/web/v18si05/web18264.

Abstract:
Video compression has become especially important nowadays with the increase of data transmitted over transmission channels; the size of videos must be reduced without affecting their quality. The process cuts the video stream into frames of specific lengths and converts them into three-dimensional matrices. The proposed compression scheme uses the traditional red-green-blue color space representation and, after converting the video stream to three-dimensional matrices, applies a three-dimensional discrete Fourier transform (3D-DFT) or three-dimensional discrete wavelet transform (3D-DWT) to the signal matrix. The resulting transform coefficients are encoded using the EZW encoder algorithm. The performance of the proposed video compression system is tested against three main criteria: compression ratio (CR), peak signal-to-noise ratio (PSNR), and processing time (PT). Experiments showed high compression efficiency for videos using the proposed technique at the required bit rate. The 3D discrete wavelet transform offers a high frame rate with natural spatial resolution and scalability across visual and spatial resolutions, besides advantages over current conventional systems in quality, complexity, power, throughput, latency, and storage requirements. All proposed systems were implemented using MATLAB R2020b.
6

Lu, Ming, Zhihao Duan, Fengqing Zhu, and Zhan Ma. "Deep Hierarchical Video Compression." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8859–67. http://dx.doi.org/10.1609/aaai.v38i8.28733.

Abstract:
Recently, probabilistic predictive coding that directly models the conditional distribution of latent features across successive frames for temporal redundancy removal has yielded promising results. Existing methods using a single-scale Variational AutoEncoder (VAE) must devise complex networks for conditional probability estimation in latent space, neglecting multiscale characteristics of video frames. Instead, this work proposes hierarchical probabilistic predictive coding, for which hierarchical VAEs are carefully designed to characterize multiscale latent features as a family of flexible priors and posteriors to predict the probabilities of future frames. Under such a hierarchical structure, lightweight networks are sufficient for prediction. The proposed method outperforms representative learned video compression models on common testing videos and demonstrates computational friendliness with much less memory footprint and faster encoding/decoding. Extensive experiments on adaptation to temporal patterns also indicate the better generalization of our hierarchical predictive mechanism. Furthermore, our solution is the first to enable progressive decoding that is favored in networked video applications with packet loss.
7

Srividya, P. "Optimization of Lossless Compression Algorithms using Multithreading." Journal of Information Technology and Sciences 9, no. 1 (March 2, 2023): 36–42. http://dx.doi.org/10.46610/joits.2022.v09i01.005.

Abstract:
The process of reducing the number of bits required to represent data is referred to as compression. Its advantages include a reduction in the time taken to transfer data from one point to another and in the cost of storage space and network bandwidth. Compression algorithms are of two types: lossy and lossless. Lossy algorithms are useful for compressing audio and video signals, whereas lossless algorithms are used for compressing text. The advent of the internet and its worldwide usage has raised not only the use but also the storage of text, audio, and video files, and these multimedia files demand more storage space than traditional files, giving rise to the need for an efficient compression algorithm. Machine computing performance has improved considerably with the advent of the multi-core processor, yet compression algorithms rarely exploit the multi-core architecture. This paper presents implementations of the lossless Lempel-Ziv-Markov, BZip2, and ZLIB algorithms using multithreading. The results show that the ZLIB algorithm is the most efficient in terms of the time taken to compress and decompress text. The comparison is performed both without and with multithreading.
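The comparison this paper describes can be sketched with Python's standard-library codecs: zlib and bz2 release the GIL while compressing, so a thread pool over independent chunks can overlap work. The chunking scheme and sample data below are illustrative assumptions, not the paper's setup:

```python
import bz2
import lzma
import time
import zlib
from concurrent.futures import ThreadPoolExecutor

data = b"the quick brown fox jumps over the lazy dog " * 20_000
CHUNK = 64 * 1024
chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def timed(name, compress):
    """Compress all chunks concurrently; report total size and wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as pool:
        parts = list(pool.map(compress, chunks))  # map preserves chunk order
    elapsed = time.perf_counter() - start
    print(f"{name:5s} {sum(len(p) for p in parts):8d} bytes  {elapsed:.3f}s")
    return parts

zparts = timed("zlib", zlib.compress)
timed("bz2", bz2.compress)
timed("lzma", lzma.compress)

# Lossless: independently compressed chunks decompress back to the original.
assert b"".join(zlib.decompress(p) for p in zparts) == data
```

Compressing chunks independently sacrifices cross-chunk redundancy (each chunk starts with an empty dictionary), which is the usual trade-off of this style of parallelization.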
8

Srividya, P. "Optimization of Lossless Compression Algorithms using Multithreading." Journal of Information Technology and Sciences 9, no. 1 (March 1, 2023): 36–42. http://dx.doi.org/10.46610/joits.2023.v09i01.005.

Abstract:
The process of reducing the number of bits required to represent data is referred to as compression. Its advantages include a reduction in the time taken to transfer data from one point to another and in the cost of storage space and network bandwidth. Compression algorithms are of two types: lossy and lossless. Lossy algorithms are useful for compressing audio and video signals, whereas lossless algorithms are used for compressing text. The advent of the internet and its worldwide usage has raised not only the use but also the storage of text, audio, and video files, and these multimedia files demand more storage space than traditional files, giving rise to the need for an efficient compression algorithm. Machine computing performance has improved considerably with the advent of the multi-core processor, yet compression algorithms rarely exploit the multi-core architecture. This paper presents implementations of the lossless Lempel-Ziv-Markov, BZip2, and ZLIB algorithms using multithreading. The results show that the ZLIB algorithm is the most efficient in terms of the time taken to compress and decompress text. The comparison is performed both without and with multithreading.
9

Flierl, Markus, and Bernd Girod. "Multiview Video Compression." IEEE Signal Processing Magazine 24, no. 99 (2007): 66–76. http://dx.doi.org/10.1109/msp.2007.4317465.

10

Flierl, Markus, and Bernd Girod. "Multiview Video Compression." IEEE Signal Processing Magazine 24, no. 6 (November 2007): 66–76. http://dx.doi.org/10.1109/msp.2007.905699.

11

Khan, Umair, Sajjad Afrakhteh, Federico Mento, Andrea Smargiassi, Riccardo Inchingolo, Francesco Tursi, Veronica Narvena, Tiziano Perrone, Giovanni Iacca, and Libertario Demi. "Coronavirus disease 2019 patients prognostic stratification based on low complex lung ultrasound video compression." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A189. http://dx.doi.org/10.1121/10.0018617.

Abstract:
In recent years, efforts have been made toward automating semi-quantitative analysis of lung ultrasound (LUS) data. To this end, several methods have been proposed, with a focus on frame-level classification. However, no extensive work has been done to evaluate LUS data directly at the video level. This study proposes an effective video compression and classification technique for assessing LUS data, based on maximum, mean, and minimum intensity projection (with respect to the temporal dimension) of LUS video data. This compression preserves hyper- and hypo-echoic regions and reduces a LUS video to three frames, which are then classified using a convolutional neural network (CNN). Results show that this compression not only preserves the appearance of visual artifacts in the reduced data, but also achieves a promising agreement of 81.61% at the prognostic level. The suggested method thus reduces the number of frames needed to assess a LUS video to three, whereas on average a LUS video consists of a few hundred frames. At the same time, state-of-the-art performance at the video and prognostic levels is achieved, while significantly reducing the computational cost.
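The three-frame compression this abstract describes is just a per-pixel max, mean, and min taken along the temporal axis. A small sketch over a toy grayscale "video" (nested lists standing in for real LUS frames; the values are invented):

```python
def temporal_projections(video):
    """video: list of frames, each a 2-D list of pixel intensities.
    Returns (max, mean, min) projections across the temporal dimension."""
    rows, cols = len(video[0]), len(video[0][0])
    pixels = lambda r, c: [frame[r][c] for frame in video]  # one pixel through time
    mean = lambda v: sum(v) / len(v)
    proj = lambda f: [[f(pixels(r, c)) for c in range(cols)] for r in range(rows)]
    return proj(max), proj(mean), proj(min)

# A 3-frame, 2x2 toy video: one pixel brightens over time (a hyper-echoic spot).
video = [[[10, 10], [10, 10]],
         [[10, 200], [10, 10]],
         [[10, 250], [10, 10]]]
mx, avg, mn = temporal_projections(video)
print(mx[0][1])  # → 250  (bright artifact survives in the max projection)
```

However long the clip, the output is always three frames, which is what lets the authors feed a fixed-size input to the CNN.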
12

Laghari, Asif Ali, Hui He, Shahid Karim, Himat Ali Shah, and Nabin Kumar Karn. "Quality of Experience Assessment of Video Quality in Social Clouds." Wireless Communications and Mobile Computing 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/8313942.

Abstract:
Video sharing on social clouds is popular among users around the world. High-definition (HD) videos have large file sizes, so storing them in cloud storage and streaming them at high quality from the cloud to the client are significant problems for service providers. Social clouds compress videos to save storage and to stream over slow networks while providing quality of service (QoS). Compression decreases quality compared to the original video, and parameters change both during online play and after download. Degradation of video quality due to compression decreases the quality of experience (QoE) of end users. To assess the QoE of video compression, we conducted subjective (QoE) experiments by uploading, sharing, and playing videos from social clouds. Three popular social clouds, Facebook, Tumblr, and Twitter, were selected for users to upload and play videos online. QoE was recorded using a questionnaire in which users reported the video quality they perceived. Results show that Facebook and Twitter compressed HD videos more than the other clouds; however, Facebook delivered better quality from its compressed videos than Twitter. Users therefore assigned Twitter low ratings for online video quality, compared with Tumblr, which provided high-quality online play with less compression.
13

Leong, Chi Wa, Behnoosh Hariri, and Shervin Shirmohammadi. "Exploiting Orientational Redundancy in Multiview Video Compression." International Journal of Computer and Electrical Engineering 7, no. 2 (2015): 70–81. http://dx.doi.org/10.17706/ijcee.2015.v7.873.

14

Lu, Hongrui, Yingjun Zhang, and Zhuolin Wang. "Time Delay Optimization of Compressing Shipborne Vision Sensor Video Based on Deep Learning." Journal of Marine Science and Engineering 11, no. 1 (January 6, 2023): 122. http://dx.doi.org/10.3390/jmse11010122.

Abstract:
As offshore wireless transmission technology and collaborative innovation in unmanned ships mature, various countries have gradually begun researching methods of compressing and transmitting perceptual video while piloting ships remotely. High Efficiency Video Coding (H.265/HEVC) has played an extremely important role in the fields of Unmanned Aerial Vehicles (UAVs) and autopilot, and as one of the most advanced coding schemes, it performs excellently at compressing vision sensor video. Given the characteristics of shipborne vision sensor video (SVSV), optimizing the coding stages with high computational complexity is an important way to improve compression performance. Therefore, an efficient video coding technique is proposed to improve the efficiency of SVSV compression: an intra-frame coding delay optimization algorithm, combined with deep learning methods, that works in the intra-frame predictive coding (PC) stage by predicting the Coding Unit (CU) partition structure in advance. The experimental results show that the total compression time of the algorithm is reduced by about 45.49% on average compared with HM16.17, the HEVC reference software, while the Bjøntegaard Delta Bit Rate (BD-BR) increases by an average of 1.92% and the Bjøntegaard Delta PSNR (BD-PSNR) decreases by an average of 0.14 dB.
15

Hadizadeh, Hadi, and Ivan V. Bajic. "Saliency-Aware Video Compression." IEEE Transactions on Image Processing 23, no. 1 (January 2014): 19–33. http://dx.doi.org/10.1109/tip.2013.2282897.

16

Cramer, C., E. Gelenbe, and P. Gelenbe. "Image and video compression." IEEE Potentials 17, no. 1 (1998): 29–33. http://dx.doi.org/10.1109/45.652854.

17

Tudor, P. N. "MPEG-2 video compression." Electronics & Communication Engineering Journal 7, no. 6 (December 1, 1995): 257–64. http://dx.doi.org/10.1049/ecej:19950606.

18

Chen, Zhibo, Tianyu He, Xin Jin, and Feng Wu. "Learning for Video Compression." IEEE Transactions on Circuits and Systems for Video Technology 30, no. 2 (February 2020): 566–76. http://dx.doi.org/10.1109/tcsvt.2019.2892608.

19

Dewi, Siagian. "Implementasi Algoritma Elias Gamma Code Untuk Kompresi File Video Pada Aplikasi Drama Korea." Jurnal Sains dan Teknologi Informasi 1, no. 3 (August 27, 2022): 90–95. http://dx.doi.org/10.47065/jussi.v1i3.2183.

Abstract:
The Korean drama application serves as a means of entertainment. When a user downloads videos from the application on a smartphone and saves the downloaded files, they take up considerable space: the longer the video, the larger the file. Large files become a problem because storage capacity is limited, and sending them takes a long time. A solution to this problem is compression. The Elias Gamma Code algorithm is a lossless compression technique, discarding no information: the decompressed video is identical to the original file. Using the Elias Gamma Code algorithm, a compression ratio of 36% was achieved. These results demonstrate that compressing downloaded video files with the Elias Gamma Code algorithm can reduce large video files to a size smaller than the original.
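Elias gamma coding itself is compact enough to show in a few lines: a positive integer n is written as ⌊log2 n⌋ zeros followed by the binary form of n, and because the code is prefix-free the decoder recovers every integer exactly, which is what makes the scheme lossless. A minimal encoder/decoder sketch (not the paper's implementation):

```python
def elias_gamma_encode(n: int) -> str:
    """Encode a positive integer as (bit-length - 1) zeros, then n in binary."""
    if n < 1:
        raise ValueError("Elias gamma codes positive integers only")
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def elias_gamma_decode(bits: str) -> list:
    """Decode a concatenation of gamma codes back into the list of integers."""
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":                      # count the leading zeros
            zeros += 1
            i += 1
        out.append(int(bits[i:i + zeros + 1], 2))  # read zeros+1 payload bits
        i += zeros + 1
    return out

print(elias_gamma_encode(1))   # → 1
print(elias_gamma_encode(9))   # → 0001001
stream = "".join(elias_gamma_encode(n) for n in [4, 1, 9])
print(elias_gamma_decode(stream))  # → [4, 1, 9]
```

Small values get short codes, so the scheme pays off when the symbol distribution is skewed toward small integers.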
20

Acharjee, Suvojit, Sayan Chakraborty, Wahiba Ben Abdessalem Karaa, Ahmad Taher Azar, and Nilanjan Dey. "Performance Evaluation of Different Cost Functions in Motion Vector Estimation." International Journal of Service Science, Management, Engineering, and Technology 5, no. 1 (January 2014): 45–65. http://dx.doi.org/10.4018/ijssmet.2014010103.

Abstract:
Video is an important medium for information sharing in the present era. The tremendous growth of video use can be seen in traditional multimedia applications as well as in many others, such as medical and surveillance video. Raw video data is usually large, which demands video compression. In video compression schemes, motion vector estimation is a very important step for removing temporal redundancy. A frame is first divided into small blocks, and a motion vector is then computed for each block. The difference between two blocks is evaluated by a cost function, such as the mean absolute difference (MAD) or mean square error (MSE). In this paper, the performance of different cost functions is evaluated, and the most suitable cost function for motion vector estimation is identified.
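The two cost functions this paper evaluates compare a candidate block against the current block pixel by pixel. A minimal sketch with made-up 4x4 blocks stored as flat lists:

```python
def mad(block_a, block_b):
    """Mean absolute difference between two equally sized (flattened) blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b)) / len(block_a)

def mse(block_a, block_b):
    """Mean square error: penalizes large pixel differences more than MAD."""
    return sum((a - b) ** 2 for a, b in zip(block_a, block_b)) / len(block_a)

current   = [10, 12, 11, 9,  10, 12, 11, 9,  10, 12, 11, 9,  10, 12, 11, 9]
candidate = [10, 12, 11, 9,  10, 12, 11, 9,  10, 12, 11, 9,  10, 15, 11, 9]
print(mad(current, candidate))  # → 0.1875  (total |diff| = 3 over 16 pixels)
print(mse(current, candidate))  # → 0.5625  (total diff² = 9 over 16 pixels)
```

During block-matching motion estimation, the encoder evaluates such a cost at every search position and takes the displacement with minimum cost as the block's motion vector.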
21

Madhavee Latha, P., and A. Annis Fathima. "Review on Image and Video Compression Standards." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (April 1, 2017): 373. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.19760.

Abstract:
Nowadays, the number of photos taken each day on phones is growing exponentially, and the number of photos uploaded to the Internet is also increasing rapidly. This explosion of photos on the Internet and on personal devices such as phones poses a challenge to effective storage and transmission. Multimedia files contain text, images, audio, video, and animations; they are large, require a lot of disk space, and hence take more time to move from one place to another over the Internet. Image compression is an effective way to reduce storage space and speed up transmission. Data compression is used everywhere on the Internet: in online videos, images, and music. Although many different image compression schemes exist, current needs and applications require fast compression algorithms that produce images or video of acceptable quality at minimum size. In this paper, image and video compression standards are discussed.
22

Minas, Nevart A., and Faten H. Al-Qadhee. "Digital Video Compression Using DCT-Based Iterated Function System (IFS)." Tikrit Journal of Pure Science 22, no. 6 (January 30, 2023): 125–30. http://dx.doi.org/10.25130/tjps.v22i6.800.

Abstract:
Processing large video files involves a huge volume of data. Codecs, storage systems, and networks all consume resources, so it is important to minimize the memory space and time needed to distribute these videos over the Internet using compression techniques. Fractal image and video compression falls under the category of lossy compression and gives its best results on natural images. This paper presents an efficient method to compress an AVI (Audio Video Interleaved) file with fractal video compression (FVC). The video is first separated into a sequence of frames, which are color bitmap images; the images are then transformed from the RGB color space to the luminance/chrominance (YIQ) color space, each component is compressed separately with an Enhanced Partitioned Iterated Function System (EPIFS), and the fractal codes are saved. The classical IFS suffers from a very long encoding time, needed to find the best match for each range block against the domain image blocks. In this work, FVC is improved by enhancing the IFS of fractal image compression with a classification scheme based on the Discrete Cosine Transform (DCT). Experiments consider different block sizes and jump steps to reduce the number of tested domain blocks. Results show a significant reduction in encoding time, with good quality and a high compression ratio, for different video files.
23

Tang, Chuanbo, Xihua Sheng, Zhuoyuan Li, Haotian Zhang, Li Li, and Dong Liu. "Offline and Online Optical Flow Enhancement for Deep Video Compression." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5118–26. http://dx.doi.org/10.1609/aaai.v38i6.28317.

Abstract:
Video compression relies heavily on exploiting the temporal redundancy between video frames, which is usually achieved by estimating and using the motion information. The motion information is represented as optical flows in most of the existing deep video compression networks. Indeed, these networks often adopt pre-trained optical flow estimation networks for motion estimation. The optical flows, however, may be less suitable for video compression due to the following two factors. First, the optical flow estimation networks were trained to perform inter-frame prediction as accurately as possible, but the optical flows themselves may cost too many bits to encode. Second, the optical flow estimation networks were trained on synthetic data, and may not generalize well enough to real-world videos. We address the twofold limitations by enhancing the optical flows in two stages: offline and online. In the offline stage, we fine-tune a trained optical flow estimation network with the motion information provided by a traditional (non-deep) video compression scheme, e.g. H.266/VVC, as we believe the motion information of H.266/VVC achieves a better rate-distortion trade-off. In the online stage, we further optimize the latent features of the optical flows with a gradient descent-based algorithm for the video to be compressed, so as to enhance the adaptivity of the optical flows. We conduct experiments on two state-of-the-art deep video compression schemes, DCVC and DCVC-DC. Experimental results demonstrate that the proposed offline and online enhancement together achieves on average 13.4% bitrate saving for DCVC and 4.1% bitrate saving for DCVC-DC on the tested videos, without increasing the model or computational complexity of the decoder side.
24

Kwon, Ilhwan, Jun Li, and Mukesh Prasad. "Lightweight Video Super-Resolution for Compressed Video." Electronics 12, no. 3 (January 28, 2023): 660. http://dx.doi.org/10.3390/electronics12030660.

Abstract:
Video compression technology for Ultra-High Definition (UHD) and 8K UHD video has been established and is being widely adopted by major broadcasting companies and video content providers, allowing them to produce high-quality videos that meet the demands of today’s consumers. However, high-resolution video content broadcasting is not an easy problem to be resolved in the near future due to limited resources in network bandwidth and data storage. An alternative solution to overcome the challenges of broadcasting high-resolution video content is to downsample UHD or 8K video at the transmission side using existing infrastructure, and then utilizing Video Super-Resolution (VSR) technology at the receiving end to recover the original quality of the video content. Current deep learning-based methods for Video Super-Resolution (VSR) fail to consider the fact that the delivered video to viewers goes through a compression and decompression process, which can introduce additional distortion and loss of information. Therefore, it is crucial to develop VSR methods that are specifically designed to work with the compression–decompression pipeline. In general, various information in the compressed video is not utilized enough to realize the VSR model. This research proposes a highly efficient VSR network making use of data from decompressed video such as frame type, Group of Pictures (GOP), macroblock type and motion vector. The proposed Convolutional Neural Network (CNN)-based lightweight VSR model is suitable for real-time video services. The performance of the model is extensively evaluated through a series of experiments, demonstrating its effectiveness and applicability in practical scenarios.
25

Lau, Tiffany Wai Shan, Anthony Robert Lim, Kyra Anne Len, and Loren Gene Yamamoto. "Chest compression efficacy of child resuscitators." Journal of Paramedic Practice 13, no. 11 (November 2, 2021): 448–55. http://dx.doi.org/10.12968/jpar.2021.13.11.448.

Abstract:
Background: Chest compression efficacy determines blood flow in cardiopulmonary resuscitation (CPR) and relies on body mechanics, so resuscitator weight matters. Individuals of insufficient weight are incapable of generating a sufficient downward chest compression force using traditional methods. Aims: This study investigated how a resuscitator's weight affects chest compression efficacy, determined the minimum weight required to perform chest compressions and, for children and adults below this minimum weight, examine alternate means to perform chest compressions. Methods: Volunteers aged 8 years and above were enrolled to perform video-recorded, music-facilitated, compression-only CPR on an audible click-confirming manikin for 2 minutes, following brief training. Subjects who failed this proceeded to alternate modalities: chest compressions by jumping on the lower sternum; and squat-bouncing (bouncing the buttocks on the chest). These methods were assessed via video review. Findings: There were 57 subjects. The 30 subjects above 40kg were all able to complete nearly 200 compressions in 2 minutes. Success rates declined in those who weighed less than 40kg. Below 30 kg, only one subject (29.9 kg weight) out of 14 could achieve 200 effective compressions. Nearly all of the 23 subjects who could not perform conventional chest compressions were able to achieve effective chest compressions using alternate methods. Conclusion: A weight below 40kg resulted in a declining ability to perform standard chest compressions effectively. For small resuscitators, the jumping and squat-bouncing methods resulted in sufficient compressions most of the time; however, chest recoil and injuries are concerns.
26

Reuss, Edward. "VC-5 Video Compression for Mezzanine Compression Workflows." SMPTE Motion Imaging Journal 124, no. 1 (January 2015): 55–61. http://dx.doi.org/10.5594/j18500.

27

Saputra, Indra, Harun Mukhtar, and Januar Al Amien. "Analisis Perbandingan Performa Codec H.264 & H.265 Video Streaming Dari Segi Quality of Service." Jurnal CoSciTech (Computer Science and Information Technology) 2, no. 1 (June 12, 2021): 9–13. http://dx.doi.org/10.37859/coscitech.v2i1.2190.

Abstract:
Video streaming is a technology often used to watch videos on the internet without having to download the video before playing it. One problem that affects the performance of video streaming is the large size of video files, which affects smoothness during streaming. In this research, two video compression methods are compared, namely the H.264 codec and the H.265 codec. Testing was carried out to determine the effect of the compression method as the codec and framerate were varied. After testing both codecs, it can be concluded that the H.265 codec is more effective to apply than the H.264 codec.
28

Mochurad, Lesia. "A Comparison of Machine Learning-Based and Conventional Technologies for Video Compression." Technologies 12, no. 4 (April 15, 2024): 52. http://dx.doi.org/10.3390/technologies12040052.

Abstract:
The growing demand for high-quality video transmission over bandwidth-constrained networks and the increasing availability of video content have led to the need for efficient storage and distribution of large video files. To address the latter, this article compares six video compression methods without loss of quality: H.265, VP9, AV1, a convolutional neural network (CNN), a recurrent neural network (RNN), and a deep autoencoder (DAE). A dataset of high-quality videos is used to implement and compare the performance of classical compression algorithms and algorithms based on machine learning. Compression efficiency and the quality of the reconstructed images were evaluated on the basis of two metrics: PSNR and SSIM. This comparison revealed the strengths and weaknesses of each approach and provided insights into how machine learning algorithms can be optimized in future research. In general, it contributes to the development of more efficient and effective video compression algorithms that can be useful for a wide range of applications.
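Of the two quality metrics this comparison relies on, PSNR is the simpler to reproduce. A minimal NumPy sketch, illustrative only: the 8-bit peak value of 255 and the toy frames are assumptions, and SSIM would additionally require windowed luminance/contrast/structure statistics:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-shape images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 128, dtype=np.uint8)   # flat toy "frame"
dist = ref.copy()
dist[0, 0] = 130                             # perturb one pixel by 2 levels
print(round(psnr(ref, dist), 2))             # ≈ 60.17 dB
```

Higher PSNR means the decoded frame is closer to the reference, which is why lossless methods report infinite PSNR.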
29

Yang, Yixin, Zhiqang Xiang, and Jianbo Li. "Research on Low Frame Rate Video Compression Algorithm in the Context of New Media." Security and Communication Networks 2021 (September 27, 2021): 1–10. http://dx.doi.org/10.1155/2021/7494750.

Abstract:
When current methods are used to compress low frame rate animation video, no frame rate compensation is applied to the video image, so the artifacts generated in the compression process cannot be eliminated, resulting in low definition, poor quality, and low compression efficiency. In the context of new media, the linear function model is introduced to study a low frame rate video animation compression algorithm. In this paper, an adaptive separable convolutional network is used to estimate the offset of low frame rate video animation using local convolution. According to the estimation results, the video frames are compensated to eliminate the artifacts of low frame rate video animation. After frame rate compensation, the low frame rate video animation is divided into blocks, the CS value of each image block is measured, linear estimation of the image block is carried out using the linear function model, and compression is completed according to the best linear estimation result. The experimental results show that low frame rate animation video compressed by the proposed algorithm has high definition, high compression quality, and high compression efficiency under different compression ratios.
30

Dhungel, Prasanga, Prashant Tandan, Sandesh Bhusal, Sobit Neupane, and Subarna Shakya. "Video Compression for Surveillance Application using Deep Neural Network." June 2020 2, no. 2 (June 3, 2020): 131–45. http://dx.doi.org/10.36548/jaicn.2020.2.006.

Abstract:
We present a new approach to video compression for video surveillance that refines the shortcomings of the conventional approach and substitutes each traditional component with its neural network counterpart. Our proposed work consists of motion estimation, compression and compensation, and residue compression, learned end-to-end to minimize the rate-distortion trade-off. The whole model is jointly optimized using a single loss function. Our work is based on the standard method of exploiting spatio-temporal redundancy in video frames to reduce the bit rate while minimizing distortion in decoded frames. We implement a neural network version of the conventional video compression approach and encode the redundant frames with a lower number of bits. Although our approach is oriented toward surveillance, it can easily be extended to general-purpose videos. Experiments show that our technique is efficient and outperforms standard MPEG encoding at comparable bitrates while preserving visual quality.
31

Shen, Yang, Jinqin Lu, Li Zhu, and Fangming Deng. "Research on Deep Compression Method of Expressway Video Based on Content Value." Electronics 11, no. 23 (December 4, 2022): 4024. http://dx.doi.org/10.3390/electronics11234024.

Abstract:
Aiming at the problem that the storage space and network bandwidth of expressway surveillance video are largely occupied due to data redundancy and sparse information, this paper proposes a deep compression method for expressway video based on content value. Firstly, the YOLOv4 algorithm is used to analyze the content value of the original video, extract video frames with vehicle information, and eliminate frames without useful content. An improved CNN is then designed by adding Feature Pyramids and the Inception module to accelerate the extraction and fusion of features at all levels and improve the performance of image classification and prediction. Finally, the whole model is integrated into an HEVC encoder for compressing the preprocessed video. The experimental results show that at the expense of only a 5.96% increase in BD-BR and only a 0.19 dB loss of BD-PSNR, the proposed method achieves a 64% compression ratio and can save 62.82% of coding time compared with other classic deep learning-based data compression methods.
32

Malik, Manas. "Framework For Lossless Data Compression Using Python." International Journal of Engineering and Computer Science 8, no. 03 (March 31, 2019): 24575–85. http://dx.doi.org/10.18535/ijecs/v8i03.4296.

Abstract:
A lot has been done in the field of data compression, yet we don't have a proper application for compressing daily-usage files. There are appropriate and very specific tools online for compressing and saving files, but the data consumed by the content we stream, be it a Netflix video or a gaming theater play, is beyond the calculation of a user. Back-end developers know all about it, and as developers we have acknowledged it but not yet delivered it at an easy-to-use level. Since the user will never be concerned about compression, developers can take the initiative while building the application and provide compression beforehand. We have decided to create a framework that will provide all the functionality a developer needs to add this feature, making use of the Python language. I'm a big fan of Python, mostly because it has a vibrant developer community that has helped produce an unparalleled collection of libraries enabling one to add features to applications quickly. For DEFLATE lossless compression, a higher level of abstraction over the zlib C library is provided in Python by the zlib module, which acts as an interface; beyond that, we have a lot to do, including handling the audio, video, and subtitles of the file. We also make use of the fabulous ffmpy library. ffmpy is a Python library that provides access to the ffmpeg command-line utility. ffmpeg is a command-line application that can perform several different kinds of transformations on video files, including video compression, which is the most commonly requested feature of ffmpeg. Frame rate and audio synchronization are a few other parameters to watch closely. This is an ongoing project and a few implementation aspects remain; data compression remains a concern in the design. We, along with the Python community, intend to solve this issue.
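The zlib interface mentioned in this abstract can be exercised in a few lines. A minimal sketch of a lossless DEFLATE round trip; the payload is invented for illustration, and in the described framework ffmpy/ffmpeg would handle the actual video streams:

```python
import zlib

# A highly redundant payload stands in for compressible file data.
data = b"frame-metadata " * 1000

compressed = zlib.compress(data, level=9)   # DEFLATE at maximum compression
restored = zlib.decompress(compressed)

assert restored == data                     # lossless: exact round trip
print(len(data), "->", len(compressed))     # large reduction on redundant input
```

Because DEFLATE is lossless, it suits container metadata and subtitles; the video and audio streams themselves are better served by the lossy codecs ffmpeg exposes.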
33

Basha, Sardar N., and A. Rajesh. "Scalable Video Coding Using Accordion Discrete Wavelet Transform and Tucker Decomposition for Multimedia Applications." Journal of Computational and Theoretical Nanoscience 16, no. 2 (February 1, 2019): 601–8. http://dx.doi.org/10.1166/jctn.2019.7777.

Abstract:
The digital world demands the transmission and storage of high-quality video for streaming and broadcasting applications; the constraints are network bandwidth and device memory in various multimedia and scientific applications, and video contains spatial and temporal redundancies. The objective of any video compression algorithm is to eliminate redundant information from the video signal during compression for effective transmission and storage. The correlation between successive frames has not been exploited enough by current compression algorithms. In this paper, a novel method for video compression is presented. The proposed model applies the transformation to a group of pictures (GOP). High spatial correlation is obtained from the spatial and temporal redundancy of the GOP through the accordion representation, which helps bypass the computationally demanding motion compensation step. The core idea of the proposed technique is to apply Tucker Decomposition (TD) to the Discrete Wavelet Transform (DWT) coefficients of the accordion model of the GOP. We use DWT to separate the video into different sub-images and TD to efficiently compact the energy of the sub-images. Blocking artifacts are considerably reduced because the block size is large. The proposed method attempts to reduce the spatial and temporal redundancies of the video signal to improve the compression ratio, computation time, and PSNR. The experimental results prove that the proposed method is efficient, especially at high bit rates and with slow-motion videos.
34

Noor, Noor, and Qusay Abboodi Ali. "A New Method for Intelligent Multimedia Compression Based on Discrete Hartley Matrix." Fusion: Practice and Applications 16, no. 2 (2024): 108–17. http://dx.doi.org/10.54216/fpa.160207.

Abstract:
Multimedia data (video, audio, images) require storage space and transmission bandwidth when sent through social media networks. Despite rapid advances in the capabilities of digital communication systems, data sizes and required transfer bandwidth continue to exceed the capabilities of available technology, especially among social media users. The recent growth of multimedia-based web applications such as WhatsApp, Telegram, and Messenger has created a need for more efficient ways to compress media data, because network transmission speeds for multimedia data are relatively slow. In addition, there are size limits on files sent via email or social networks, while high-definition multimedia information can reach gigabyte sizes. Moreover, most smart cameras have high imaging resolution, which increases the bit rate of video, audio, and image files. The goal of data compression is therefore to represent media (video, audio, images, etc.) as accurately as possible with the minimum number of bits (bit rate). Traditional data compression methods are complex for users and require high processing power for media data, and most existing algorithms lose data during compression and decompression while producing high bitrates for media data (video, audio, and image). Therefore, this work describes a new method for media compression systems based on a discrete Hartley matrix (size 128) to achieve high speed and a low bit rate when compressing multimedia data. Finally, the results show that the proposed algorithm compresses data at high speed with a low bit rate, without losing any part of the data (video, sound, and image). Furthermore, the majority of social media users were satisfied with the interactive data compression system, finding it highly effective in compressing multimedia data. This, in turn, will make it easier for users to send their video, audio, and image files via social media networks.
35

Sharabayko, Maxim P., and Nikolay G. Markov. "Fast Search for Intra Prediction Mode in H.265/HEVC Video Compression." Key Engineering Materials 685 (February 2016): 897–901. http://dx.doi.org/10.4028/www.scientific.net/kem.685.897.

Abstract:
Mechanical engineering, chemical engineering, and other industries have a high demand for video compression systems, which are used, e.g., in CCTV and video sensing. The newest video compression standard, H.265/HEVC, provides a compression rate of 100–300 times relative to uncompressed video. The side effect is an increase in the computational complexity of the compression system, which obstructs the industrial adoption of H.265/HEVC video compression systems. One of the main objectives is to reduce intra compression complexity. In this paper, we present our algorithm for a fast search for the intra prediction mode in H.265/HEVC video compression. On average, the algorithm incurs only a 1.9% bitrate increase while saving 41% of encoding time.
36

Putra, Arief Bramanto Wicaksono, Rheo Malani, Bedi Suprapty, Achmad Fanany Onnilita Gaffar, and Roman Voliansky. "Inter-Frame Video Compression based on Adaptive Fuzzy Inference System Compression of Multiple Frame Characteristics." Knowledge Engineering and Data Science 6, no. 1 (May 30, 2023): 1. http://dx.doi.org/10.17977/um018v6i12023p1-14.

Abstract:
Video compression is used for storage and bandwidth efficiency of video clip information, and involves encoders and decoders using intra-frame, inter-frame, and block-based methods. With inter-frame compression, neighboring frame pairs are compressed into one compressed frame; this study pairs each odd frame with its even neighbor. Motion estimation, motion compensation, and frame differencing underpin standard video compression methods. In this study, an adaptive FIS (Fuzzy Inference System) compresses and decompresses each odd-even frame pair. First, the adaptive FIS is trained on all feature pairings of each odd-even frame pair; the trained adaptive FIS is then used as the codec for video compression and decompression. The features utilized are "mean", "std" (standard deviation), "mad" (mean absolute deviation), and "mean (std)". This study uses the average DCT (Discrete Cosine Transform) components of all video frames as a quality parameter. The adaptive FIS training feature and the number of odd-even frame pairs produce the variation in compression ratio. The proposed approach achieves CR = 25.39% and P = 80.13%. "Mean" performs best overall (P = 87.15%). "Mean (mad)" has the best compression ratio (CR = 24.68%) for storage efficiency. The "std" feature can compress the video without decompression, since it has the lowest quality change (Q_dct = 10.39%).
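The four frame statistics this abstract names are straightforward to reproduce. A small NumPy sketch; the interpretation of "mean (std)" as the mean of per-row standard deviations is an assumption, since the abstract does not define it:

```python
import numpy as np

def frame_features(frame):
    """Per-frame statistics named in the abstract: mean, std,
    mad (mean absolute deviation), and mean of per-row std."""
    f = frame.astype(np.float64)
    return {
        "mean": f.mean(),
        "std": f.std(),
        "mad": np.mean(np.abs(f - f.mean())),
        "mean(std)": f.std(axis=1).mean(),  # assumed reading of "mean (std)"
    }

frame = np.arange(16, dtype=np.uint8).reshape(4, 4)  # toy 4x4 "frame"
feats = frame_features(frame)
print(feats["mean"], feats["mad"])  # 7.5 4.0
```

Feeding such per-frame feature pairs (odd frame vs. even frame) to a fuzzy inference system is far cheaper than operating on raw pixels, which is the storage-efficiency angle of the paper.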
37

Liu, Shangdong, Puming Cao, Yujian Feng, Yimu Ji, Jiayuan Chen, Xuedong Xie, and Longji Wu. "NRVC: Neural Representation for Video Compression with Implicit Multiscale Fusion Network." Entropy 25, no. 8 (August 4, 2023): 1167. http://dx.doi.org/10.3390/e25081167.

Abstract:
Recently, end-to-end deep models for video compression have made steady advancements. However, this has resulted in lengthy and complex pipelines containing numerous redundant parameters. Video compression approaches based on implicit neural representation (INR) allow a video to be directly represented as a function approximated by a neural network, resulting in a more lightweight model, whereas the singularity of the feature extraction pipeline limits the network's ability to fit the mapping function for video frames. Hence, we propose a neural representation approach for video compression with an implicit multiscale fusion network (NRVC), utilizing normalized residual networks to improve the effectiveness of INR in fitting the target function. We propose the multiscale representations for video compression (MSRVC) network, which effectively extracts features from the input video sequence to enhance the degree of overfitting in the mapping function. Additionally, we propose the feature extraction channel attention (FECA) block to capture interaction information between different feature extraction channels, further improving the effectiveness of feature extraction. The results show that, compared to the NeRV method at similar bits per pixel (BPP), NRVC achieves a 2.16% increase in decoded peak signal-to-noise ratio (PSNR). Moreover, NRVC outperforms conventional HEVC in terms of PSNR.
38

Abomhara, M., O. O. Khalifa, O. Zakaria, A. A. Zaidan, B. B. Zaidan, and A. Rame. "Video Compression Techniques: An Overview." Journal of Applied Sciences 10, no. 16 (August 1, 2010): 1834–40. http://dx.doi.org/10.3923/jas.2010.1834.1840.

39

Singh, Vipula. "Patents on Image/Video Compression." Recent Patents on Signal Processinge 1, no. 2 (December 1, 2011): 101–15. http://dx.doi.org/10.2174/2210686311101020101.

40

Hu, Yu. "Some Technologies about Video Compression." Advanced Materials Research 393-395 (November 2011): 284–87. http://dx.doi.org/10.4028/www.scientific.net/amr.393-395.284.

Abstract:
Many strategies can be developed into mature algorithms that compress video more efficiently than today's standardized codecs. Future video compression algorithms may employ more adaptability and more refined temporal and spatial prediction models with better distortion metrics. The cost to users is a significant increase in implementation complexity at both the encoder and decoder. Fortunately, it seems that bitrates have a slower doubling time than computing power, so the disadvantage of increasing implementation complexity may one day be balanced by much-improved processor capabilities. Development trends and perspectives of video compression are analyzed in the following paper, and problems and research directions are highlighted.
41

Singh, Vipula. "Patents on Image/Video Compression." Recent Patents on Signal Processing 1, no. 2 (December 21, 2011): 101–15. http://dx.doi.org/10.2174/1877612411101020101.

42

Kim, Myung-Jun, and Yung-Lyul Lee. "Object Detection-Based Video Compression." Applied Sciences 12, no. 9 (April 29, 2022): 4525. http://dx.doi.org/10.3390/app12094525.

Abstract:
Video compression is designed to provide good subjective image quality, even at a high compression ratio, and video quality metrics have been used to show that the results can maintain a high Peak Signal-to-Noise Ratio (PSNR) even under high compression. However, the low image quality caused by high compression makes object recognition on the decoder side difficult. Accordingly, providing good image quality for the detected objects within the given total bitrate is necessary for utilizing object detection in a video decoder. In this paper, object detection-based video compression by the encoder and decoder is proposed that allocates lower quantization parameters to the detected-object regions and higher quantization parameters to the background, so that better image quality is obtained for the detected objects on the decoder side. Object detection-based video compression consists of two components: Versatile Video Coding (VVC) and object detection. The decoder performs the decompression process by receiving the bitstreams in the object-detection decoder and the VVC decoder, while the VVC encoder and decoder are processed based on the information obtained from object detection. In a random access (RA) configuration, the average Bjøntegaard Delta (BD)-rates of Y, Cb, and Cr increased by 2.33%, 2.67%, and 2.78%, respectively; in an All Intra (AI) configuration, they increased by 0.59%, 1.66%, and 1.42%, respectively. In the RA configuration, the averages of ΔY-PSNR, ΔCb-PSNR, and ΔCr-PSNR for the object-detected areas improved by 0.17%, 0.23%, and 0.04%, respectively; in the AI configuration, they improved by 0.71%, 0.30%, and 0.30%, respectively. Subjective image quality was also improved in the object-detected areas.
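The region-wise bit allocation described here can be illustrated with a toy quantization-parameter map. Everything below is invented for illustration: the 64-pixel block size and the QP values 27/37 are placeholders, and real VVC signals QP at the CTU/CU level with far more machinery:

```python
import numpy as np

def qp_map(detections, width, height, block=64, qp_bg=37, qp_obj=27):
    """Coarse per-block QP grid: lower QP (finer quantization) on blocks
    overlapped by detected-object boxes, higher QP on background."""
    rows, cols = height // block, width // block
    qps = np.full((rows, cols), qp_bg, dtype=int)
    for x0, y0, x1, y1 in detections:           # boxes in pixel coordinates
        r0, c0 = y0 // block, x0 // block
        r1, c1 = (y1 - 1) // block, (x1 - 1) // block
        qps[r0:r1 + 1, c0:c1 + 1] = qp_obj      # spend more bits on objects
    return qps

# One detected object in the top-left quarter of a 256x128 frame.
grid = qp_map([(0, 0, 128, 64)], width=256, height=128)
print(grid)
```

Lower QP in the object blocks raises their PSNR at the cost of a slightly higher overall bitrate, which matches the BD-rate/ΔPSNR trade-off the abstract reports.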
43

Cheng, Ching-Min, Chien-Hsing Wu, Soo-Chang Pei, Hungwen Li, and Bor-Shenn Jeng. "High speed video compression testbed." IEEE Transactions on Consumer Electronics 40, no. 3 (1994): 538–48. http://dx.doi.org/10.1109/30.320839.

44

Bergmann, N. W., and Yuk Ying Chung. "Video compression with custom computers." IEEE Transactions on Consumer Electronics 43, no. 3 (1997): 925–33. http://dx.doi.org/10.1109/30.628766.

45

Seo, Guiwon, Jonghwa Lee, and Chulhee Lee. "Frequency sensitivity for video compression." Optical Engineering 53, no. 3 (March 25, 2014): 033107. http://dx.doi.org/10.1117/1.oe.53.3.033107.

46

Waltrich, Joseph B. "Digital video compression-an overview." Journal of Lightwave Technology 11, no. 1 (January 1993): 70–75. http://dx.doi.org/10.1109/50.210573.

47

Schaeffer, Hayden, Yi Yang, Hongkai Zhao, and Stanley Osher. "Real-Time Adaptive Video Compression." SIAM Journal on Scientific Computing 37, no. 6 (January 2015): B980–B1001. http://dx.doi.org/10.1137/130937792.

48

Kuleshov, S., A. Zaytseva, and A. Aksenov. "Spatiotemporal video representation and compression." Pattern Recognition and Image Analysis 23, no. 1 (March 2013): 87–91. http://dx.doi.org/10.1134/s1054661813010082.

49

Li, Z., and L. Itti. "Visual attention guided video compression." Journal of Vision 8, no. 6 (March 29, 2010): 772. http://dx.doi.org/10.1167/8.6.772.

50

Memon, N. D., and K. Sayood. "Lossless compression of video sequences." IEEE Transactions on Communications 44, no. 10 (1996): 1340–45. http://dx.doi.org/10.1109/26.539775.
