Follow this link to see other types of publications on the topic: Video compression.

Theses / dissertations on the topic "Video compression"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the 50 best works (theses / dissertations) for research on the topic "Video compression".

Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is present in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile an accurate bibliography.

1

Zhang, Fan. "Parametric video compression". Thesis, University of Bristol, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574421.

Full text of the source
Abstract:
Advances in communication and compression technologies have facilitated the transmission of high quality video content across a broad range of networks to numerous terminal types. Challenges for video coding continue to increase due to the demands on bandwidth from increased frame rates, higher resolutions and complex formats. In most cases, the target of any video coding algorithm is, for a given bitrate, to provide the best subjective quality rather than simply produce the most similar pictures to the originals. Based on this premise, texture analysis and synthesis can be utilised to provide higher performance video codecs. This thesis describes a novel means of parametric video compression based on texture warping and synthesis. Instead of encoding whole images or prediction residuals after translational motion estimation, this approach employs a perspective motion model to warp static textures and utilises texture synthesis to create dynamic textures. Texture regions are segmented using features derived from the complex wavelet transform and further classified according to their spatial and temporal characteristics. A compatible artefact-based video metric (AVM) has been designed to evaluate the quality of the reconstructed video. Its enhanced version is further developed as a generic perception-based video metric offering improved performance in correlation with subjective opinions. It is unique in being able to assess both synthesised and conventionally coded content. The AVM is accordingly employed in the coding loop to prevent warping and synthesis artefacts, and a local RQO strategy is then developed based on it to make a trade-off between waveform coding and texture warping/synthesis. In addition, these parametric texture models have been integrated into an H.264 video coding framework whose results show significant coding efficiency improvement, up to 60% bitrate savings over H.264/AVC, on diverse video content.
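To make the warping step concrete, here is a minimal Python sketch (an illustration under assumed inputs, not the thesis code) of warping a reference texture with a perspective motion model expressed as a 3x3 homography; the function name and the toy homography values are made up for the example.

import numpy as np

def warp_texture(reference, H, out_shape):
    # Warp a reference texture into the current frame with a perspective
    # (homography) motion model, using inverse mapping and nearest-neighbour
    # sampling. H maps current-frame coordinates to reference coordinates.
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = H @ coords                      # project current-frame pixels into the reference
    src /= src[2]                         # perspective divide
    sx = np.clip(np.round(src[0]).astype(int), 0, reference.shape[1] - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, reference.shape[0] - 1)
    return reference[sy, sx].reshape(h, w)

# Toy usage: a slight perspective distortion of a ramp texture (hypothetical values).
ref = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
H = np.array([[1.0, 0.05, -1.0],
              [0.0, 1.00,  0.0],
              [1e-4, 0.0,  1.0]])
warped = warp_texture(ref, H, ref.shape)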
2

Stampleman, Joseph Bruce. "Scalable video compression". Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/70216.

Full text of the source
3

Cilke, Tom. "Video Compression Techniques". International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615075.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
This paper will attempt to present algorithms commonly used for video compression, and their effectiveness in aerospace applications where size, weight, and power are of prime importance. These techniques will include samples of one-, two-, and three-dimensional algorithms. Implementation of these algorithms into usable hardware is also explored but limited to monochrome video only.
4

Bordes, Philippe. "Adapting video compression to new formats". Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S003/document.

Full text of the source
Abstract:
The new video codecs should be designed with a high level of adaptability in terms of network bandwidth, format scalability (size, color space…) and backward compatibility. This thesis was made in this context and within the scope of the HEVC standard development. In a first part, several Video Coding adaptations that exploit the signal properties and which take place at the bit-stream creation are explored. The study of improved frame partitioning for inter prediction allows better fitting the actual motion frontiers and shows significant gains. This principle is further extended to long-term motion modeling with trajectories. We also show how the cross-component correlation statistics and the luminance change between pictures can be exploited to increase the coding efficiency. In a second part, post-creation stream adaptations relying on intrinsic stream flexibility are investigated. In particular, a new color gamut scalability scheme addressing color space adaptation is proposed. From this work, we derive color remapping metadata and an associated model to provide a low-complexity, general-purpose color remapping feature. We also explore adaptive resolution coding and how to extend a scalable codec to stream-switching applications. Several of the described techniques have been proposed to MPEG. Some of them have been adopted in the HEVC standard and in the UHD Blu-ray Disc. Various techniques for adapting the video compression to the content characteristics and to the distribution use cases have been considered. They can be selected or combined depending on the application requirements.
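As a rough companion to the colour remapping idea above, the sketch below applies a generic per-component pre-LUT / 3x3 matrix / post-LUT model in Python; it is a simplified illustration of that class of metadata, not the HEVC colour_remapping_info syntax, and all parameter values are hypothetical.

import numpy as np

def piecewise_lut(x, in_pts, out_pts):
    # 1D piecewise-linear look-up table applied to one colour component.
    return np.interp(x, in_pts, out_pts)

def colour_remap(img, pre_luts, matrix, post_luts):
    # Generic colour remapping: per-component pre-LUT, 3x3 mixing matrix, post-LUT.
    # img is H x W x 3, float values in [0, 1].
    out = np.stack([piecewise_lut(img[..., c], *pre_luts[c]) for c in range(3)], axis=-1)
    out = out @ np.asarray(matrix).T
    out = np.stack([piecewise_lut(out[..., c], *post_luts[c]) for c in range(3)], axis=-1)
    return np.clip(out, 0.0, 1.0)

# Hypothetical parameters: a mild tone adjustment pre-LUT, identity matrix and post-LUT.
pre = [([0.0, 0.5, 1.0], [0.0, 0.45, 1.0])] * 3
post = [([0.0, 1.0], [0.0, 1.0])] * 3
remapped = colour_remap(np.random.rand(4, 4, 3), pre, np.eye(3), post)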
5

Rambaruth, Ratna. "Region-based video compression". Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843377/.

Full text of the source
Abstract:
First generation image coding standards are now well-established and coders based on these standards are commercially available. However, for emerging applications, good quality at even lower bitrates is required. Ways of exploiting higher level visual information are currently being explored by the research community in order to achieve high compression. Unfortunately very high level approaches are bound to be restrictive as they are highly dependent on the accuracy of lower-level vision operations. Region-based coding only relies on mid-level image processing and thus is viewed as a promising strategy. In this work, substantial advances to the field of region-based video compression are made by considering the complete scheme. Thus, improvements to the failure regions coding and the motion compensation components have been devised. The failure region coding component was improved by predicting the texture inside the failure region from the neighbourhood of the region. A significant gain over widely used techniques such as the SA-DCT was obtained. The accuracy of the motion compensation component was increased by keeping an accurate internal representation for each region both at the encoder and the decoder side. The proposed region-based coding system is also evaluated against other systems, including the MPEG4 codec which has been recently approved by the MPEG community.
6

Stephens, Charles R. "Video Compression Standardization Issues". International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615077.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
This paper discusses the development of a standard for compressed digital video. The benefits and applications of compressed digital video are reviewed, and some examples of compression techniques are presented. A hardware implementation of a differential pulse code modulation approach is examined.
7

Subramanian, Vivek. "Content-aware Video Compression". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254394.

Full text of the source
Abstract:
In a video there are certain regions in the image that viewers focus on more than others, which are called the salient regions or Regions-Of-Interest (ROI). This thesis aims to improve the perceived quality of videos by improving the quality of these ROIs while degrading the quality of the other non-ROI regions of a frame to keep the same bitrate as would have been the case otherwise. This improvement is achieved by using saliency maps generated using an eye tracker or a deep neural network and providing this information to a modified video encoder. In this thesis the open source x264 encoder was chosen to make use of this information. The effects of ROI encoding are studied for high quality 720p videos by encoding them at low bitrates. The results indicate that ROI encoding can improve subjective video quality when carefully applied.
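A minimal sketch of the ROI mechanism, assuming the target encoder accepts per-macroblock quantiser offsets (x264 exposes such a hook); the saliency-to-offset mapping below is illustrative, not the algorithm used in the thesis.

import numpy as np

def saliency_to_qp_offsets(saliency, max_boost=-6.0, max_penalty=6.0):
    # Map a per-macroblock saliency map (values in [0, 1]) to QP offsets:
    # salient blocks get negative offsets (finer quantisation), the rest get
    # positive ones, and the mean offset is forced to ~0 so the overall
    # bitrate stays roughly where it would have been without ROI coding.
    offsets = max_penalty + (max_boost - max_penalty) * saliency
    offsets -= offsets.mean()
    return offsets.astype(np.float32)

# Toy 720p-like grid of 80 x 45 macroblocks with a salient region in the centre.
sal = np.zeros((45, 80))
sal[15:30, 30:50] = 1.0
qp_offsets = saliency_to_qp_offsets(sal)
# These values would then be handed to the encoder, e.g. through x264's
# per-picture quantiser-offset array, before encoding each frame.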
8

Yap, S. Y. "SoC architectures for video compression". Thesis, Queen's University Belfast, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.411805.

Full text of the source
9

Honoré, Francis. "A concurrent video compression system". Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/37997.

Full text of the source
10

Mazhar, Ahmad Abdel Jabbar Ahmad. "Efficient compression of synthetic video". Thesis, De Montfort University, 2013. http://hdl.handle.net/2086/9019.

Full text of the source
Abstract:
Streaming of on-line gaming video is a challenging problem because of the enormous amounts of video data that need to be sent during game playing, especially within the limitations of uplink capabilities. The encoding complexity is also a challenge because of the time delay while on-line gamers are communicating. The main goal of this research study is to propose an enhanced on-line game video streaming system. First, the most common video coding techniques have been evaluated. The evaluation study considers objective and subjective metrics. Three widespread video coding techniques are selected and evaluated in the study; H.264, MPEG-4 Visual and VP-8. Diverse types of video sequences were used with different frame rates and resolutions. The effects of changing frame rate and resolution on compression efficiency and viewers' satisfaction are also presented. Results showed that the compression process and perceptual satisfaction are severely affected by the nature of the compressed sequence. As a result, H.264 showed higher compression efficiency for synthetic sequences and outperformed other codecs in the subjective evaluation tests. Second, a fast inter prediction technique to speed up the encoding process of H.264 has been devised. The on-line game streaming service is a real time application, thus, compression complexity significantly affects the whole process of on-line streaming. H.264 has been recommended for synthetic video coding by our results gained in codecs comparative studies. However, it still suffers from high encoding complexity; thus a low complexity coding algorithm is presented as fast inter coding model with reference management technique. The proposed algorithm was compared to a state-of-the-art method, the results showing better achievement in time and bit rate reduction with negligible loss of fidelity. Third, recommendations on tradeoff between frame rates and resolution within given uplink capabilities are provided for H.264 video coding. The recommended tradeoffs are offered as a result of extensive experiments using Double Stimulus Impairment Scale (DSIS) subjective evaluation metric. Experiments showed that viewers' satisfaction is profoundly affected by varying frame rates and resolutions. In addition, increasing frame rate or frame resolution does not always guarantee improved increments of perceptual quality. As a result, tradeoffs are recommended to compromise between frame rate and resolution within a given bit rate to guarantee the highest user satisfaction. For system completeness and to facilitate the implementation of the proposed techniques, an efficient game video streaming management system is proposed. Compared to existing on-line live video service systems for games, the proposed system provides improved coding efficiency, complexity reduction and better user satisfaction.
11

Walker, Wendy Tolle 1959. "Video data compression for telescience". Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276830.

Full text of the source
Abstract:
This paper recommends techniques to use for data compression of video data used to point a telescope and from a camera observing a robot, for transmission from the proposed U.S. Space Station to Earth. The mathematical basis of data compression is presented, followed by a general review of data compression techniques. A technique that has wide-spread use in data compression of videoconferencing images is recommended for the robot observation data. Bit rates of 60 to 400 kbits/sec can be achieved. Several techniques are modelled to find a best technique for the telescope data. Actual starfield images are used for the evaluation. The best technique is chosen on the basis of which model provides the most compression while preserving the important information in the images. Compression from 8 bits per pel to 0.015 bits per pel is achieved.
12

Whiteman, Don, and Greg Glen. "Compression Methods for Instrumentation Video". International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/611516.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
Video compression is typically required to solve the bandwidth problems related to the transmission of instrumentation video. The use of color systems typically results in bandwidth requirements beyond the capabilities of current receiving and recording equipment. The HORACE specification, IRIG-210, was introduced as an attempt to provide standardization between government test ranges. The specification provides for video compression in order to alleviate the bandwidth problems associated with instrumentation video and is intended to assure compatibility, data quality, and performance of instrumentation video systems. This paper provides an overview of compression methods available for instrumentation video and summarizes the benefits of each method and the problems associated with different compression methods when utilized for instrumentation video. The effects of increased data link bit error rates are also discussed for each compression method. This paper also includes a synopsis of the current HORACE specification, a proposed Vector HORACE specification for color images and hardware being developed to meet both specifications.
13

Deutermann, Alan, and Richard Schaphorst. "COMPRESSION TECHNIQUES FOR VIDEO TELEMETRY". International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613448.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
As the role of television in the aerospace industry has expanded so has the need for video telemetry. In most cases it is important that the video signal be encrypted due to the sensitive nature of the data. Since this means that the signal must be transmitted in digital form, video compression technology must be employed to minimize the transmitted bit rate while maintaining the picture quality at an acceptable level. The basic compression technique which has been employed recently, with successful results, is a combination of Differential PCM and Variable Length coding (DPCM/VLC). This technique has been proposed to the Range Commanders Council to become a possible standard. The purpose of this paper is to compare the basic DPCM/VLC technique with alternative coding technologies. Alternative compression techniques which will be reviewed include Transform coding, Vector Quantization, and Bit Plane coding. All candidate techniques will be viewed as containing four elements -- signal conditioning, signal processing, quantization, and variable length coding. All four techniques will be evaluated and compared from the standpoint of compression ratio and picture quality.
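For illustration, a minimal Python sketch of the DPCM/VLC family of techniques: previous-pixel prediction, uniform quantisation of the difference, and a signed Exp-Golomb code as one possible variable-length code. This is a generic example, not the scheme proposed to the Range Commanders Council.

def exp_golomb(value):
    # Signed Exp-Golomb: map the signed value to an unsigned index, then emit
    # a unary prefix followed by the binary representation of index + 1.
    u = 2 * value - 1 if value > 0 else -2 * value
    bits = bin(u + 1)[2:]
    return "0" * (len(bits) - 1) + bits

def dpcm_vlc_encode_row(pixels, step=4):
    # DPCM each 8-bit pixel against the reconstructed previous pixel,
    # quantise the difference and emit one variable-length code per sample.
    bitstream, prev = [], 128             # mid-grey starting predictor
    for p in pixels:
        q = round((p - prev) / step)
        bitstream.append(exp_golomb(q))
        prev = max(0, min(255, prev + q * step))   # decoder-side reconstruction
    return "".join(bitstream)

print(dpcm_vlc_encode_row([128, 130, 135, 135, 120, 90, 90, 95]))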
14

Arrufat, Batalla Adrià. "Multiple transforms for video coding". Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0025/document.

Full text of the source
Abstract:
State-of-the-art video codecs use transforms to ensure a compact signal representation. The transform stage is where compression takes place, however, little variety is observed in the type of transforms used for standardised video coding schemes: often, a single transform is considered, usually a Discrete Cosine Transform (DCT). Recently, other transforms have started being considered in addition to the DCT. For instance, in the latest video coding standard, High Efficiency Video Coding (HEVC), the 4x4 sized blocks can make use of the Discrete Sine Transform (DST) and, in addition, it is also possible not to transform them. This fact reveals an increasing interest to consider a plurality of transforms to achieve higher compression rates. This thesis focuses on extending HEVC through the use of multiple transforms. After a general introduction to video compression and transform coding, two transform designs are studied in detail: the Karhunen Loève Transform (KLT) and a Rate-Distortion Optimised Transform are considered. These two methods are compared against each other by replacing the transforms in HEVC. This experiment validates the appropriateness of the design. A coding scheme that incorporates and boosts the use of multiple transforms is introduced: several transforms are made available to the encoder, which chooses the one that provides the best rate-distortion trade-off. Consequently, a design method for building systems using multiple transforms is also described. With this coding scheme, significant amounts of bit-rate savings are achieved over HEVC, especially when using many complex transforms. However, these improvements come at the expense of increased complexity in terms of coding, decoding and storage requirements. As a result, simplifications are considered while limiting the impact on bit-rate savings. A first approach is introduced, in which incomplete transforms are used. This kind of transform uses a single basis vector and is conceived to work as a companion of the HEVC transforms. This technique is evaluated and provides significant complexity reductions over the previous system, although the bit-rate savings are modest. A systematic method, which specifically determines the best trade-offs between the number of transforms and bit-rate savings, is designed. This method uses two different types of transforms, based on separable orthogonal transforms and Discrete Trigonometric Transforms (DTTs) in particular. Several designs are presented, allowing for different complexity and bitrate savings trade-offs. These systems reveal the interest of using multiple transforms for video coding.
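To picture the multiple-transform idea, the following sketch (a schematic reconstruction, not the codec of the thesis) learns a KLT from training blocks and selects, per block, between the DCT and the KLT with a Lagrangian rate-distortion cost in which the rate is crudely approximated by the count of non-zero quantised coefficients; all parameter values are hypothetical.

import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix (rows are basis vectors).
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def learn_klt(training_vectors):
    # KLT basis: eigenvectors of the covariance of the training vectors (rows),
    # ordered from largest to smallest variance.
    _, vecs = np.linalg.eigh(np.cov(training_vectors, rowvar=False))
    return vecs[:, ::-1].T

def rd_choose(block, transforms, q=8.0, lam=10.0):
    # Return the index of the transform minimising J = D + lambda * R, with the
    # rate R approximated by the number of non-zero quantised coefficients.
    best_j, best_idx = None, 0
    for i, T in enumerate(transforms):
        quant = np.round((T @ block) / q)
        rec = T.T @ (quant * q)            # orthonormal transform: inverse = transpose
        j = np.sum((block - rec) ** 2) + lam * np.count_nonzero(quant)
        if best_j is None or j < best_j:
            best_j, best_idx = j, i
    return best_idx

n = 8
transforms = [dct_matrix(n), learn_klt(np.random.randn(1000, n))]
print(rd_choose(np.random.randn(n), transforms))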
15

Aklouf, Mourad. "Video for events : Compression and transport of the next generation video codec". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG029.

Full text of the source
Abstract:
The acquisition and delivery of video content with minimal latency has become essential in several business areas such as sports broadcasting, video conferencing, telepresence, remote vehicle operation, or remote system control. The live streaming industry has grown in 2020 and it will expand further in the next few years with the emergence of new high-efficiency video codecs based on the Versatile Video Coding (VVC) standard and the fifth generation of mobile networks (5G). HTTP Adaptive Streaming (HAS) methods such as MPEG-DASH, using algorithms to adapt the transmission rate of compressed video, have proven to be very effective in improving the quality of experience (QoE) in a video-on-demand (VOD) context. Nevertheless, minimizing the delay between image acquisition and display at the receiver is essential in applications where latency is critical. Most rate adaptation algorithms are developed to optimize video transmission from a server situated in the core network to mobile clients. In applications requiring low-latency streaming, such as remote control of drones or broadcasting of sports events, the role of the server is played by a mobile terminal. The latter will acquire, compress, and transmit the compressed stream via a radio access channel to one or more clients. Therefore, client-driven rate adaptation approaches are unsuitable in this context because of the variability of the channel characteristics. In addition, HAS, for which the decision-making is done with a periodicity of the order of a second, are not sufficiently reactive when the server is moving, which may generate significant delays. It is therefore important to use a very fine adaptation granularity in order to reduce the end-to-end delay. The reduced size of the transmission and reception buffers (to minimize latency) makes it more difficult to adapt the throughput in our use case. When the bandwidth varies with a time constant smaller than the period with which the regulation is made, bad transmission rate decisions can induce a significant latency overhead. The aim of this thesis is to provide some answers to the problem of low-latency delivery of video acquired, compressed, and transmitted by mobile terminals. We first present a frame-by-frame rate adaptation algorithm for low latency broadcasting. A Model Predictive Control (MPC) approach is proposed to determine the coding rate of each frame to be transmitted. This approach uses information about the buffer level of the transmitter and about the characteristics of the transmission channel. Since the frames are coded live, a model relating the quantization parameter (QP) to the output rate of the video encoder is required. Hence, we have proposed a new model linking the rate to the QP of the current frame and to the distortion of the previous frame. This model provides much better results in the context of a frame-by-frame decision on the coding rate than the reference models in the literature. In addition to the above techniques, we have also proposed tools to reduce the complexity of video encoders such as VVC. The current version of the VVC encoder (VTM10) has an execution time nine times higher than that of the HEVC encoder. Therefore, the VVC encoder is not suitable for real-time encoding and streaming applications on currently available platforms. In this context, we present a systematic branch-and-prune method to identify a set of coding tools that can be disabled while satisfying a constraint on coding efficiency. This work contributes to the realization of a real-time VVC coder.
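The frame-by-frame rate adaptation can be pictured with the toy controller below; it assumes a simple exponential R(QP) model and a one-step-ahead buffer target, which is only a generic stand-in for the MPC formulation and the rate model developed in the thesis, and all numeric values are hypothetical.

def choose_qp(buffer_bits, channel_bps, fps, alpha, target_buffer_bits, qp_range=(10, 51)):
    # One-step-ahead controller: pick the QP whose predicted frame size keeps the
    # sender buffer closest to a low-latency target. Assumed frame-size model:
    # R(QP) = alpha * 2 ** (-QP / 6), refitted after every coded frame.
    best_qp, best_err = qp_range[0], float("inf")
    for qp in range(qp_range[0], qp_range[1] + 1):
        frame_bits = alpha * 2.0 ** (-qp / 6.0)
        next_buffer = max(0.0, buffer_bits + frame_bits - channel_bps / fps)
        err = abs(next_buffer - target_buffer_bits)
        if err < best_err:
            best_qp, best_err = qp, err
    return best_qp

def update_alpha(qp_used, bits_produced):
    # Refit the model gain from the frame size the encoder actually produced.
    return bits_produced / (2.0 ** (-qp_used / 6.0))

qp = choose_qp(buffer_bits=40000, channel_bps=4000000, fps=30,
               alpha=3000000, target_buffer_bits=20000)
print(qp)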
16

Milovanovic, Marta. "Pruning and compression of multi-view content for immersive video coding". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT023.

Full text of the source
Abstract:
This thesis addresses the problem of efficient compression of immersive video content, represented with Multiview Video plus Depth (MVD) format. The Moving Picture Experts Group (MPEG) standard for the transmission of MVD data is called MPEG Immersive Video (MIV), which utilizes 2D video codecs to compress the source texture and depth information. Compared to traditional video coding, immersive video coding is more complex and constrained not only by the trade-off between bitrate and quality, but also by the pixel rate. Because of that, MIV uses pruning to reduce the pixel rate and inter-view correlations and creates a mosaic of image pieces (patches). Decoder-side depth estimation (DSDE) has emerged as an alternative approach to improve the immersive video system by avoiding the transmission of depth maps and moving the depth estimation process to the decoder side. DSDE has been studied for the case of numerous fully transmitted views (without pruning). In this thesis, we demonstrate possible advances in immersive video coding, with emphasis on pruning the input content. We go beyond DSDE and examine the distinct effect of patch-level depth restoration at the decoder side. We propose two approaches to incorporate decoder-side depth estimation (DSDE) on content pruned with MIV. The first approach excludes a subset of depth maps from the transmission, and the second approach uses the quality of depth patches estimated at the encoder side to distinguish between those that need to be transmitted and those that can be recovered at the decoder side. Our experiments show a 4.63 BD-rate gain for Y-PSNR on average. Furthermore, we also explore the use of neural image-based rendering (IBR) techniques to enhance the quality of novel view synthesis and show that neural synthesis itself provides the information needed to prune the content. Our results show a good trade-off between pixel rate and synthesis quality, achieving view synthesis improvements of 3.6 dB on average.
17

Mehrseresht, Nagita Electrical Engineering & communication UNSW. "Adaptive techniques for scalable video compression". Awarded by: University of New South Wales. Electrical Engineering and communication, 2005. http://handle.unsw.edu.au/1959.4/20552.

Full text of the source
Abstract:
In this work we investigate adaptive techniques which can be used to improve the performance of highly scalable video compression schemes under resolution scaling. We propose novel content adaptive methods for motion compensated 3D discrete wavelet transformation (MC 3D-DWT) of video. The proposed methods overcome problems of ghosting and non-aligned aliasing artifacts, which can arise in regions of motion model failure, when the video is reconstructed at reduced temporal or spatial resolutions. We also study schemes which facilitate simultaneous scaling of compressed video bitstreams based on both constant bit-rate and constant distortion criteria, using simple and generic scaling operations. In regions where the motion model fails, the motion compensated temporal discrete wavelet transform (MC TDWT) causes ghosting artifacts under frame-rate scaling, due to temporal lowpass filtering along invalid motion trajectories. To avoid ghosting artifacts, we adaptively select between different lowpass filters, based on a local estimate of the motion modelling accuracy. Experimental results indicate that the proposed adaptive transform substantially removes ghosting artifacts while also preserving the high compression efficiency of the original MC TDWT. We also study the impact of various MC 3D-DWT structures on spatial scalability. Investigating the interaction between spatial aliasing, scalability and energy compaction shows that the t+2D structure essentially has higher compression efficiency. However, where the motion model fails, structures of this form cause non-aligned aliasing artifacts under spatial scaling. We propose novel adaptive schemes to continuously adapt the structure of MC 3D-DWT based on information available within the compressed bitstream. Experimental results indicate that the proposed adaptive structure preserves the high compression efficiency of the t+2D structure while also avoiding the appearance of non-aligned aliasing artifacts under spatial scaling. To provide simultaneous rate and distortion scaling, we study a layered substream structure. Scaling based on distortion generates variable bit-rate traffic which satisfies the desired average bit-rate and is consistent with the requirements of leaky-bucket traffic models. We propose a novel method which also satisfies constraints on instantaneous bit-rate. This method overcomes the weakness of previous methods with small leaky-bucket buffer sizes. Simulation results indicate promising performance with both MC 3D-DWT interframe and JPEG2000 intraframe compression.
18

Nasrallah, Anthony. "Novel compression techniques for next-generation video coding". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT043.

Full text of the source
Abstract:
Video content now occupies about 82% of global internet traffic. This large percentage is due to the revolution in video content consumption. On the other hand, the market is increasingly demanding videos with higher resolutions and qualities. This causes a significant increase in the amount of data to be transmitted. Hence the need to develop video coding algorithms even more efficient than existing ones to limit the increase in the rate of data transmission and ensure a better quality of service. In addition, the impressive consumption of multimedia content in electronic products has an ecological impact. Therefore, finding a compromise between the complexity of algorithms and the efficiency of implementations is a new challenge. As a result, a collaborative team was created with the aim of developing a new video coding standard, Versatile Video Coding – VVC/H.266. Although VVC was able to achieve a more than 40% reduction in bit rate compared to HEVC, this does not mean at all that there is no longer a need to further improve coding efficiency. In addition, VVC adds remarkable complexity compared to HEVC. This thesis responds to these problems by proposing three new encoding methods. The contributions of this research are divided into two main axes. The first axis is to propose and implement new compression tools in the new standard, capable of generating additional coding gains. Two methods have been proposed for this first axis. These two methods rely on the derivation of prediction information at the decoder side. This is because increasing encoder choices can improve the accuracy of predictions and yield less energy residue, leading to a reduction in bit rate. Nevertheless, more prediction modes involve more signaling to be sent into the binary stream to inform the decoder of the choices that have been made at the encoder. The gains mentioned above are therefore more than offset by the added signaling. If the prediction information is derived at the decoder, the latter is no longer passive but becomes active, hence the concept of an intelligent decoder. Thus, it is no longer necessary to signal the information, hence a saving in signaling. Each of the two methods offers a different technique to derive this information at the decoder. The first technique constructs a histogram of gradients to deduce different intra-prediction modes that can then be combined by means of prediction fusion, to obtain the final intra-prediction for a given block. This fusion property makes it possible to more accurately predict areas with complex textures, which, in conventional coding schemes, would rather require partitioning and/or finer transmission of high-energy residues. The second technique gives VVC the ability to switch between different interpolation filters for inter prediction. The deduction of the optimal filter selected by the encoder is achieved through convolutional neural networks. The second axis, unlike the first, does not seek to add a contribution to the VVC algorithm. This axis rather aims to build an optimized use of the already existing algorithm. The ultimate goal is to find the best possible compromise between the compression efficiency delivered and the complexity imposed by VVC tools. Thus, an optimization system is designed to determine an effective technique for activating the new coding tools. The determination of these tools can be done either using artificial neural networks or without any artificial intelligence technique.
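The gradient-histogram mode derivation can be sketched as follows: simple central-difference gradients are computed on a reconstructed template, an orientation histogram is accumulated over assumed angular-mode bins, and the two strongest bins give the modes and fusion weights. This is an illustrative approximation, not the exact design proposed in the thesis.

import numpy as np

def derive_intra_modes(template, n_modes=65, top_k=2):
    # Each template pixel votes, weighted by gradient magnitude, for the angular
    # bin perpendicular to its gradient; the strongest bins give the modes and
    # the normalised weights that would be used to blend their predictions.
    gx = np.zeros_like(template, dtype=float)
    gy = np.zeros_like(template, dtype=float)
    gx[1:-1, 1:-1] = (template[1:-1, 2:] - template[1:-1, :-2]) / 2.0
    gy[1:-1, 1:-1] = (template[2:, 1:-1] - template[:-2, 1:-1]) / 2.0
    mag = np.hypot(gx, gy)
    angle = (np.arctan2(gy, gx) + np.pi / 2.0) % np.pi     # edge direction in [0, pi)
    bins = np.minimum((angle / np.pi * n_modes).astype(int), n_modes - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_modes)
    modes = np.argsort(hist)[::-1][:top_k]
    weights = hist[modes] / max(hist[modes].sum(), 1e-9)
    return [int(m) for m in modes], [float(w) for w in weights]

print(derive_intra_modes(np.tile(np.arange(16.0), (16, 1))))   # horizontal ramp template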
19

Tohidypour, Hamid Reza. "Complexity reduction schemes for video compression". Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/60250.

Full text of the source
Abstract:
With consumers having access to a plethora of video enabled devices, efficient transmission of video content with different quality levels and specifications has become essential. The primary way of achieving this task is using the simulcast approach, where different versions of the same video sequence are encoded and transmitted separately. This approach, however, requires significantly large amounts of bandwidth. Another solution is to use scalable Video Coding (SVC), where a single bitstream consists of a base layer (BL) and one or more enhancement layers (ELs). At the decoder side, based on bandwidth or type of application, the appropriate part of an SVC bit stream is used/decoded. While SVC enables delivery of different versions of the same video content within one bit stream at a reduced bitrate compared to simulcast approach, it significantly increases coding complexity. However, the redundancies introduced between the different versions of the same stream allow for complexity reduction, which in turn will result in simpler hardware and software implementation and facilitate the wide adoption of SVC. This thesis addresses complexity reduction for spatial scalability, SNR/Quality/Fidelity scalability, and multiview scalability for the High Efficiency Video Coding (HEVC) standard. First, we propose a fast method for motion estimation of spatial scalability, followed by a probabilistic method for predicting block partitioning for the same scalability. Next, we propose a content adaptive complexity reduction method, a mode prediction approach based on statistical studies, and a Bayesian based mode prediction method all for the quality scalability. An online-learning based mode prediction method is also proposed for quality scalability. For the same bitrate and quality, our methods outperform the original SVC approach by 39% for spatial scalability and by 45% for quality scalability. Finally, we propose a content adaptive complexity reduction scheme and a Bayesian based mode prediction scheme. Then, an online-learning based complexity reduction scheme is proposed for 3D scalability, which incorporates the two other schemes. Results show that our methods reduce the complexity by approximately 23% compared to the original 3D approach for the same quality/bitrate. In summary, our methods can significantly reduce the complexity of SVC, enabling its market adoption.
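The Bayesian mode-prediction idea can be pictured with this small sketch: a naive Bayes rule with assumed Gaussian likelihoods decides, from a base-layer feature, whether the enhancement-layer block is likely to reuse the same mode, in which case the full search is skipped. The statistics and threshold are hypothetical, not those learned in the thesis.

import numpy as np

def gaussian(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def p_same_mode(feature, prior_same=0.7, same_stats=(0.5, 0.2), diff_stats=(2.0, 1.0)):
    # Posterior probability that the enhancement-layer block reuses the base-layer
    # mode, given one feature (e.g. a normalised base-layer RD cost), with assumed
    # Gaussian class-conditional likelihoods.
    like_same = gaussian(feature, *same_stats)
    like_diff = gaussian(feature, *diff_stats)
    evidence = prior_same * like_same + (1 - prior_same) * like_diff
    return prior_same * like_same / evidence

feature = 0.6                            # hypothetical feature value for one block
if p_same_mode(feature) > 0.9:
    print("skip the full mode search, reuse the base-layer mode")
else:
    print("run the normal mode decision")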
20

Gandhi, Rakeshkumar Hasmukhlal. "3-Dimensional pyramids for video compression". Thesis, University of Ottawa (Canada), 1993. http://hdl.handle.net/10393/6550.

Full text of the source
Abstract:
The larger memory and channel bandwidth requirements for digital video transmission and storage make it mandatory to use compression techniques. Representation of the video signal in a pyramid format not only compresses the signal but also makes it suitable for specific applications such as packet video based on asynchronous transmission mode (ATM) and compatible advanced television (ATV). In this thesis, we propose to employ a pyramid data structure for video compression. A review of video coding schemes is first presented, followed by a review of the various 2-dimensional (2D) and 3-dimensional (3D) pyramid data structures from the perspectives of data compression. The performance of different configurations of temporal/spatial pyramid data structures is then measured for video compression in terms of the first order entropy. Based on this study, we introduce an efficient 3D adaptive temporal/spatial pyramid which selects either the temporal or spatial contractions using the temporal and spatial prediction differences, respectively. We propose a video codec that combines the adaptive temporal/spatial pyramid and an intra-frame coding technique. Simulation results on CCITT standard video sequences indicate that the adaptive pyramid reduced the lossless bit rate by a factor of two. For video conferencing applications, excellent subjective quality as well as objective quality (PSNR value of 36.6 db) are obtained at a bit rate less than T1 rate (i.e. 1.544 Mbits/s). Promising results have been obtained for CCIR resolution (720 x 480), high detail sequences at a bit rate of 6 Mbits/s. Furthermore, smooth transition is achieved in the case of scene changes without sacrificing picture quality. Finally, the algorithm is well suited for constant bit rate and constant quality applications. (Abstract shortened by UMI.)
21

Chan, Eric Wai Chi. "Novel motion estimators for video compression". Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6864.

Full text of the source
Abstract:
In this thesis, the problem of motion estimation is addressed from two perspectives, namely, hardware architecture and reduced complexity algorithms in the spatial and transform domains. First, a VLSI architecture which implements the full search block matching algorithm in real time is presented. The interblock dependency is exploited and hence the architecture can meet the real time requirement in various applications. Most importantly, the architecture is simple, modular and cascadable. Hence the proposed architecture is easily implementable in VLSI as a codec. The spatial domain algorithm consists of a layered structure and alleviates the local optimum problem. Most importantly, it employs a simple matching criterion, namely, a modified pixel difference classification (MPDC) and hence results in a reduced computational complexity. In addition, the algorithm is compatible with the recently proposed MPEG-1 video compression standard. Simulation results indicate that the proposed algorithm provides a comparable performance (compared to the algorithms reported in the literature) at a significantly reduced computational complexity. In addition, the hardware implementation of the proposed algorithm is very simple because of the binary operations used in the matching criteria. Finally, we present a wavelet transform based fast multiresolution motion estimation (FMRME) scheme. Here, the wavelet transform is used to exploit both the spatial and temporal redundancies resulting in an efficient coder. In FMRME, the correlations among the orientation subimages of the wavelet pyramid structure are exploited resulting in an efficient motion estimation process. In addition, this significantly reduces side information for motion vectors which corresponds to significant improvements in coding performance of the FMRME based wavelet coder for video compression. Simulation results demonstrate the superior coding performance of the FMRME based wavelet transform coder. (Abstract shortened by UMI.)
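The pixel-difference-classification criterion can be illustrated with the plain full-search sketch below (not the layered MPDC algorithm of the thesis): each pixel is classified as matching or not against a threshold, and the candidate with the most matching pixels wins. Names and parameter values are illustrative.

import numpy as np

def pdc_block_match(cur_block, ref_frame, top_left, search_range=7, threshold=8):
    # Full-search block matching with a pixel-difference-classification criterion:
    # count the pixels whose absolute difference is within the threshold.
    n = cur_block.shape[0]
    y0, x0 = top_left
    best_mv, best_score = (0, 0), -1
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue
            cand = ref_frame[y:y + n, x:x + n].astype(int)
            score = np.count_nonzero(np.abs(cand - cur_block.astype(int)) <= threshold)
            if score > best_score:
                best_score, best_mv = score, (dy, dx)
    return best_mv, best_score

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = ref[20:36, 24:40]                  # block content taken from (20, 24) in the reference
print(pdc_block_match(cur, ref, top_left=(18, 22)))   # expected motion vector: (2, 2)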
22

Thom, Gary A., and Alan R. Deutermann. "A COMPARISON OF VIDEO COMPRESSION ALGORITHMS". International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/608290.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
Compressed video is necessary for a variety of telemetry requirements. A large number of competing video compression algorithms exist. This paper compares the ability of these algorithms to meet criteria which are of interest for telemetry applications. Included are: quality, compression, noise susceptibility, motion performance and latency. The algorithms are divided into those which employ inter-frame compression and those which employ intra-frame compression. A video tape presentation will also be presented to illustrate the performance of the video compression algorithms.
23

West, Jim, and Willard Moore. "A DPCM Approach to Video Compression". International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615076.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
This paper presents a working Variable Length Differential Pulse Code Modulation (VLDPCM) video compression/decompression and encryption system. Included are theory of operation and performance characteristics, as well as a study of packaging problems which arise from using this hardware for severe environmental applications. No classified issues are covered.
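As background on the DPCM family this system belongs to, the sketch below shows a one-dimensional previous-sample DPCM encoder and decoder in Python. The quantiser step is an illustrative assumption, and the variable-length coding of the indices and the encryption stage of the actual hardware are omitted.

import numpy as np

def dpcm_encode(samples, step=4):
    """Previous-sample predictor with a uniform quantiser on the residual."""
    indices, recon_prev = [], 0
    for s in samples:
        residual = int(s) - recon_prev
        q = int(round(residual / step))          # index that would be entropy coded
        indices.append(q)
        recon_prev = int(np.clip(recon_prev + q * step, 0, 255))  # decoder-side value
    return indices

def dpcm_decode(indices, step=4):
    recon, prev = [], 0
    for q in indices:
        prev = int(np.clip(prev + q * step, 0, 255))
        recon.append(prev)
    return recon

line = [100, 102, 104, 110, 130, 131, 129, 128]
idx = dpcm_encode(line)
print(idx)              # small indices cluster near zero -> short variable-length codes
print(dpcm_decode(idx)) # reconstruction within half a quantiser step of the input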
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

RAJYALAKSHMI, P. S., e R. K. RAJANGAM. "DATA COMPRESSION SYSTEM FOR VIDEO IMAGES". International Foundation for Telemetering, 1986. http://hdl.handle.net/10150/615539.

Texto completo da fonte
Resumo:
International Telemetering Conference Proceedings / October 13-16, 1986 / Riviera Hotel, Las Vegas, Nevada
In most transmission channels, bandwidth is at a premium and an important attribute of any good digital signalling scheme is to optimally utilise the bandwidth for transmitting the information. A data compression system therefore plays a significant role in the transmission of picture data from any remote sensing satellite by exploiting the statistical properties of the imagery. The data rate required for transmission to ground can be reduced by using a suitable compression technique. A data compression algorithm has been developed for processing the images of the Indian Remote Sensing Satellite. Sample LANDSAT imagery and also a reference photo are used for evaluating the performance of the system. The reconstructed images are obtained after compression to 1.5 bits per pixel and 2 bits per pixel, as against the original 7 bits per pixel. The technique used is a uni-dimensional Hadamard transform. Histograms are computed for the various pictures used as samples. This paper describes the development of such a hardware and software system and also indicates how the hardware can be adapted for a two-dimensional Hadamard transform technique.
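The uni-dimensional Hadamard transform referred to above uses only additions and subtractions, which made it attractive for on-board hardware. A small Python sketch of the idea, with a crude coefficient quantisation standing in for the paper's bit allocation; the block length and step size are assumptions for illustration.

import numpy as np
from scipy.linalg import hadamard

N = 8
H = hadamard(N)                      # entries are +1/-1, so H @ H.T = N * I

def forward(row_block):
    return H @ row_block             # additions and subtractions only

def inverse(coeffs):
    return (H.T @ coeffs) / N

row = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)
coeffs = forward(row)
coarse = np.round(coeffs / 16) * 16  # crude quantisation in place of real bit allocation
print(np.round(inverse(coarse)))     # close to the original eight pixels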
Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Ahmad, Zaheer. "Video header compression for wireless communications". Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/843682/.

Texto completo da fonte
Resumo:
The delivery of high quality video to wireless users depends on achieving high compression efficiency and high robustness to wireless channel errors. Research on these topics has led to the introduction of a number of video codec standards. However, most of these standards incorporate redundant syntactical information that renders the video more susceptible to channel errors and reduces the compression efficiency. This thesis presents a new approach to video compression that removes most of the problems associated with the excess syntax: a header compressor and a decompressor are placed adjacent to the encoder and decoder respectively. The compressor removes the excess header redundancy and the decompressor regenerates the original header using an already stored reference. The thesis investigates the syntactical header information of the video coding standards MPEG-4 and H.264. The analysis shows that the overheads in a small video packet may contribute up to 50% of the texture data. The video packet header fields are classified as static or dynamic. Based on the header analysis, a comprehensive header compression scheme is designed for MPEG-4 and H.264 video packets. For a practical scenario, simulations of video packets are carried out including the compressed IP/UDP/RTP overheads. The ROHC (RObust Header Compression) standard compresses the IP/UDP/RTP headers. In this thesis, ROHC parameters have been optimised for transmission over a 3GPP simulated downlink channel. Furthermore, an improvement in the ROHC U-mode has been proposed to reduce the effects of unnecessary packet loss due to false context damage. Results show better video quality and lower packet loss rates with the proposed scheme. The efficiency of the proposed video header compression scheme is evaluated with different combinations of encoding parameters. Experiments show that, using the proposed video header compression scheme, up to 95% of the header redundancy may be removed. Extensive simulations illustrate improvements in video quality due to the proposed header compression scheme under various channel conditions. The video quality is further enhanced by using a header-compression-based unequal error protection scheme: the bits saved through header compression can be utilised adaptively to protect the critical data at a fixed transmission rate. The results show significant improvement in objective and subjective video quality.
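The static/dynamic classification of header fields is the core mechanism of most header compression schemes: static fields are sent once and stored as a reference context, while dynamic fields are sent as small deltas. A minimal Python sketch of that general idea follows; the field names and delta coding shown are illustrative assumptions, not the thesis's exact syntax or the ROHC packet format.

STATIC_FIELDS = {"version", "payload_type", "ssrc"}        # sent once, kept as context
DYNAMIC_FIELDS = {"sequence_number", "timestamp"}          # sent as deltas afterwards

def compress_header(header, context):
    if not context:                      # first packet: establish the reference
        context.update(header)
        return dict(header)
    compressed = {f: header[f] - context[f] for f in DYNAMIC_FIELDS}
    context.update(header)
    return compressed                    # static fields omitted entirely

def decompress_header(compressed, context):
    if STATIC_FIELDS <= compressed.keys():   # full header: store it as the reference
        context.update(compressed)
        return dict(compressed)
    restored = {f: context[f] for f in STATIC_FIELDS}
    restored.update({f: context[f] + compressed[f] for f in DYNAMIC_FIELDS})
    context.update(restored)
    return restored

tx_ctx, rx_ctx = {}, {}
packets = [
    {"version": 2, "payload_type": 96, "ssrc": 0xBEEF, "sequence_number": 100, "timestamp": 3000},
    {"version": 2, "payload_type": 96, "ssrc": 0xBEEF, "sequence_number": 101, "timestamp": 3090},
]
for p in packets:
    c = compress_header(p, tx_ctx)
    print(c, decompress_header(c, rx_ctx))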
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Theolin, Henrik. "Video compression optimized for racing drones". Thesis, Luleå tekniska universitet, Datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-71533.

Texto completo da fonte
Resumo:
This thesis reports on the findings of different video coding techniques and their suitability for a low-powered, lightweight system mounted on a racing drone, where low latency, high consistency and a robust video stream are of the utmost importance. The literature consists of multiple comparisons and reports on the efficiency of the most commonly used video compression algorithms. These reports and findings, however, mostly do not target low-latency systems: the tests are performed in laboratory environments with settings unusable for a real-time system. The literature that deals with low-latency video streaming and network instability shows that only a limited subset of each compression algorithm's features can be used if low complexity and no added delay in the coding process are to be ensured. The findings were that AVC/H.264 was the most suitable compression algorithm and, more precisely, that the x264 implementation was the best optimized to perform well on the low-powered system. To reduce delay, each frame needs to be divided into sub-frames so that encoding and decoding may be done in parallel, independently of other sub-parts of the frame. This also limits error propagation when used together with an All-Intra (AI) mode that does not utilize any motion prediction techniques.
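For reference, the low-latency configuration described above (slices as independently codable sub-frames plus an All-Intra GOP) can be requested from the x264 implementation through standard ffmpeg/libx264 options. Below is a hedged example of driving such an encode from Python; the input file, resolution and slice count are assumptions for illustration.

import subprocess

# Encode a raw camera feed with x264 tuned for low latency:
# several slices per frame allow parallel encode/decode of sub-frames,
# and keyint=1 forces All-Intra coding so no frame depends on another.
cmd = [
    "ffmpeg",
    "-f", "rawvideo", "-pix_fmt", "yuv420p", "-s", "1280x720", "-i", "camera.yuv",
    "-c:v", "libx264",
    "-preset", "ultrafast",
    "-tune", "zerolatency",
    "-x264-params", "slices=8:keyint=1",
    "-f", "mpegts", "drone_stream.ts",
]
subprocess.run(cmd, check=True)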
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

He, Chao. "Advanced wavelet application for video compression and video object tracking". Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1125659908.

Texto completo da fonte
Resumo:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xvii, 158 p.; also includes graphics (some col.). Includes bibliographical references (p. 150-158). Available online via OhioLINK's ETD Center
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Mitrica, Iulia. "Video compression of airplane cockpit screens content". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT042.

Texto completo da fonte
Resumo:
Cette thèse aborde le problème de l'encodage de la vidéo des cockpits d'avion. Le cockpit des avions de ligne modernes consiste en un ou plusieurs écrans affichant l'état des instruments de l'avion (par exemple, la position de l'avion telle que rapportée par le GPS, le niveau de carburant tel que lu par les capteurs dans les réservoirs, etc.,) souvent superposés au naturel images (par exemple, cartes de navigation, caméras extérieures, etc.). Les capteurs d'avion sont généralement inaccessibles pour des raisons de sécurité, de sorte que l'enregistrement du cockpit est souvent le seul moyen de consigner les données vitales de l'avion en cas, par exemple, d'un accident. Les contraintes sur la mémoire d'enregistrement disponible à bord nécessitent que la vidéo du cockpit soit codée à des débits faibles à très faibles, alors que pour des raisons de sécurité, les informations textuelles doivent rester intelligibles après le décodage. De plus, les contraintes sur l'enveloppe de puissance des dispositifs avioniques limitent la complexité du sous-système d'enregistrement du poste de pilotage. Au fil des ans, un certain nombre de schémas de codage d'images ou de vidéos avec des contenus mixtes générés par ordinateur et naturels ont été proposés. Le texte et d'autres graphiques générés par ordinateur produisent des composants haute fréquence dans le domaine transformé. Par conséquent, la perte due à la compression peut nuire à la lisibilité de la vidéo et donc à son utilité. Par exemple, l'extension récemment normalisée SCC (Screen Content Coding) de la norme H.265/HEVC comprend des outils conçus explicitement pour la compression du contenu de l'écran. Nos expériences montrent cependant que les artefacts persistent aux bas débits ciblés par notre application, incitant à des schémas où la vidéo n'est pas encodée dans le domaine des pixels. Cette thèse propose des méthodes de codage d'écran de faible complexité où le texte et les primitives graphiques sont codés en fonction de leur sémantique plutôt que sous forme de blocs de pixels. Du côté du codeur, les caractères sont détectés et lus à l'aide d'un réseau neuronal convolutif. Les caractères détectés sont ensuite supprimés de l'écran via le pixel inpainting, ce qui donne une vidéo résiduelle plus fluide avec moins de hautes fréquences. La vidéo résiduelle est codée avec un codec vidéo standard et est transmise du côté récepteur avec une sémantique textuelle et graphique en tant qu'informations secondaires. Du côté du décodeur, le texte et les graphiques sont synthétisés à l'aide de la sémantique décodée et superposés à la vidéo résiduelle, récupérant finalement l'image d'origine. Nos expériences montrent qu'un encodeur AVC/H.264 équipé de notre méthode a de meilleures performances de distorsion-débit que H.265/HEVC et se rapproche de celle de son extension SCC. Si les contraintes de complexité permettent la prédiction inter-trame, nous exploitons également le fait que les caractères co-localisés dans les trames voisines sont fortement corrélés. À savoir, les symboles mal classés sont récupérés à l'aide d'une méthode proposée basée sur un modèle de faible complexité des probabilités de transition pour les caractères et les graphiques. 
Concernant la reconnaissance de caractères, le taux d'erreur chute jusqu'à 18 fois dans les cas les plus faciles et au moins 1,5 fois dans les séquences les plus difficiles malgré des occlusions complexes.En exploitant la redondance temporelle, notre schéma s'améliore encore en termes de distorsion de débit et permet un décodage de caractères quasi sans erreur. Des expériences avec de vraies séquences vidéo de cockpit montrent des gains de distorsion de débit importants pour la méthode proposée par rapport aux normes de compression vidéo
This thesis addresses the problem of encoding the video of airplane cockpits. The cockpit of modern airliners consists of one or more screens displaying the status of the plane instruments (e.g., the plane location as reported by the GPS, the fuel level as read by the sensors in the tanks, etc.), often superimposed over natural images (e.g., navigation maps, outdoor cameras, etc.). Plane sensors are usually inaccessible due to security reasons, so recording the cockpit is often the only way to log vital plane data in the event of, e.g., an accident. Constraints on the recording storage available on-board require the cockpit video to be coded at low to very low bitrates, whereas safety reasons require the textual information to remain intelligible after decoding. In addition, constraints on the power envelope of avionic devices limit the cockpit recording subsystem complexity. Over the years, a number of schemes for coding images or videos with mixed computer-generated and natural contents have been proposed. Text and other computer-generated graphics yield high-frequency components in the transformed domain. Therefore, the loss due to compression may hinder the readability of the video and thus its usefulness. For example, the recently standardized Screen Content Coding (SCC) extension of the H.265/HEVC standard includes tools designed explicitly for screen content compression. Our experiments show however that artifacts persist at the low bitrates targeted by our application, prompting for schemes where the video is not encoded in the pixel domain. This thesis proposes methods for low complexity screen coding where text and graphical primitives are encoded in terms of their semantics rather than as blocks of pixels. At the encoder side, characters are detected and read using a convolutional neural network. Detected characters are then removed from the screen via pixel inpainting, yielding a smoother residual video with fewer high frequencies. The residual video is encoded with a standard video codec and is transmitted to the receiver side together with the text and graphics semantics as side information. At the decoder side, text and graphics are synthesized using the decoded semantics and superimposed over the residual video, eventually recovering the original frame. Our experiments show that an AVC/H.264 encoder retrofitted with our method has better rate-distortion performance than H.265/HEVC and approaches that of its SCC extension. If the complexity constraints allow inter-frame prediction, we also exploit the fact that co-located characters in neighbor frames are strongly correlated. Namely, the misclassified symbols are recovered using a proposed method based on a low-complexity model of transitional probabilities for characters and graphics. Concerning character recognition, the error rate drops by up to 18 times in the easiest cases and at least 1.5 times in the most difficult sequences despite complex occlusions. By exploiting temporal redundancy, our scheme further improves in rate-distortion terms and enables quasi-errorless character decoding. Experiments with real cockpit video footage show large rate-distortion gains for the proposed method with respect to video compression standards.
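The encoder-side pipeline can be pictured as: detect and read the characters, inpaint them out of the frame, pass the smoother residual to a standard codec, and ship the recognised symbols as side information for re-rendering at the decoder. Below is a rough Python/OpenCV sketch of these steps, in which the CNN detector of the thesis is replaced by a hypothetical detect_characters stub.

import cv2
import numpy as np

def detect_characters(frame):
    """Placeholder for the CNN detector/recogniser described in the thesis.
    Returns (bounding box, recognised symbol) pairs; here it is a fixed stub."""
    return [((40, 30, 12, 18), "7"), ((52, 30, 12, 18), "4")]

def encode_frame(frame):
    detections = detect_characters(frame)
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for (x, y, w, h), _symbol in detections:
        mask[y:y + h, x:x + w] = 255
    # Remove the text so the residual video has fewer high frequencies.
    residual = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
    side_info = detections                       # recognised symbols and their boxes
    return residual, side_info                   # residual -> standard codec, side_info -> lossless channel

def decode_frame(residual, side_info):
    out = residual.copy()
    for (x, y, w, h), symbol in side_info:
        cv2.putText(out, symbol, (x, y + h), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
    return out

frame = np.full((120, 160, 3), 40, dtype=np.uint8)
cv2.putText(frame, "74", (40, 48), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
residual, info = encode_frame(frame)
print(residual.shape, info)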
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Chen, Liyong. "Joint image/video inpainting for error concealment in video coding". Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/HKUTO/record/B39558915.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Chen, Liyong, e 陳黎勇. "Joint image/video inpainting for error concealment in video coding". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39558915.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Gao, Wenfeng. "Real-time video postprocessing algorithms and metrics /". Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/5913.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Huang, Bihong. "Second-order prediction and residue vector quantization for video compression". Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S026/document.

Texto completo da fonte
Resumo:
La compression vidéo est une étape cruciale pour une grande partie des applications de télécommunication. Depuis l'avènement de la norme H.261/MPEG-2, un nouveau standard de compression vidéo est produit tous les 10 ans environ, avec un gain en compression de 50% par rapport à la précédente. L'objectif de la thèse est d'obtenir des gains en compression par rapport à la dernière norme de codage vidéo HEVC. Dans cette thèse, nous proposons trois approches pour améliorer la compression vidéo en exploitant les corrélations du résidu de prédiction intra. Une première approche basée sur l'utilisation de résidus précédemment décodés montre que, si des gains sont théoriquement possibles, le surcoût de la signalisation les réduit pratiquement à néant. Une deuxième approche basée sur la quantification vectorielle mode-dépendent (MDVQ) du résidu préalablement à l'étape classique transformée-quantification scalaire, permet d'obtenir des gains substantiels. Nous montrons que cette approche est réaliste, car les dictionnaires sont indépendants du QP et de petite taille. Enfin, une troisième approche propose de rendre adaptatif les dictionnaires utilisés en MDVQ. Un gain substantiel est apporté par l'adaptivité, surtout lorsque le contenu vidéo est atypique, tandis que la complexité de décodage reste bien contenue. Au final on obtient un compromis gain-complexité compatible avec une soumission en normalisation
Video compression has become a mandatory step in a wide range of digital video applications. Since the development of the block-based hybrid coding approach in the H.261/MPEG-2 standard, a new coding standard has been ratified roughly every ten years, and each new standard achieved approximately 50% bit rate reduction compared to its predecessor without sacrificing picture quality. However, due to the ever-increasing bit rate required for the transmission of HD and Beyond-HD formats within a limited bandwidth, there is always a requirement to develop new video compression technologies which provide higher coding efficiency than the current HEVC video coding standard. In this thesis, we propose three approaches to improve the intra coding efficiency of the HEVC standard by exploiting the correlation of the intra prediction residue. A first approach based on the use of previously decoded residue shows that even though gains are theoretically possible, the extra cost of signaling could negate the benefit of residual prediction. A second approach based on Mode Dependent Vector Quantization (MDVQ), applied prior to the conventional transform and scalar quantization steps, provides significant coding gains. We show that this approach is realistic because the dictionaries are independent of the QP and of a reasonable size. Finally, a third approach is developed to modify the dictionaries gradually so that they adapt to the intra prediction residue. A substantial gain is provided by the adaptivity, especially when the video content is atypical, without increasing the decoding complexity. In the end, we obtain a compromise between complexity and gain suitable for a submission to standardization.
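Mode Dependent Vector Quantization attaches a small codebook to each intra prediction mode and signals only the index of the nearest codeword for the residual block. A minimal numpy sketch of that mechanism follows; the random codebooks stand in for the trained, QP-independent dictionaries described in the abstract, and the sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
BLOCK = 4                                   # 4x4 residual blocks, vectors of length 16
codebooks = {mode: rng.normal(0, 10, (32, BLOCK * BLOCK)) for mode in range(3)}

def mdvq_encode(residual_block, mode):
    """Pick the nearest codeword in the codebook attached to this intra mode."""
    vec = residual_block.reshape(-1)
    cb = codebooks[mode]
    index = int(np.argmin(((cb - vec) ** 2).sum(axis=1)))
    return index                            # only this index (plus the mode) is signalled

def mdvq_decode(index, mode):
    return codebooks[mode][index].reshape(BLOCK, BLOCK)

residual = rng.normal(0, 10, (BLOCK, BLOCK))
mode = 1
idx = mdvq_encode(residual, mode)
approx = mdvq_decode(idx, mode)
print(idx, float(np.mean((residual - approx) ** 2)))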
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Tsoligkas, Nick A. "Video/Image Processing Algorithms for Video Compression and Image Stabilization Applications". Thesis, Teesside University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517469.

Texto completo da fonte
Resumo:
As the use of video becomes increasingly popular and widespread in the areas of broadcast services, the internet, entertainment and security-related applications, providing fast, automated and effective techniques to represent video based on its content, such as objects and meanings, is an important topic of research. In many applications, removing the hand-shaking effect and making video images stable and clear, or decomposing (and then transmitting) the video content as a collection of meaningful objects, is a necessity. Therefore automatic techniques for video stabilization and for the extraction of objects from video data, as well as for transmitting their shapes, motion and texture at very low bit rates over error-prone networks, are desired. In this thesis the design of a new low bit rate codec is presented, together with a method for video stabilization. The main technical contributions resulting from this work are as follows. Firstly, an adaptive change detection algorithm identifies the objects from the background. In the first stage, the luminance difference between frames is modelled so as to separate contributions caused by noise and illumination variations from those caused by meaningful moving objects. In the second stage, a segmentation tool based on image blocks, histograms and clustering algorithms segments the difference image into areas corresponding to objects. In the third stage, morphological edge detection, contour analysis and object labelling are the main tasks of the proposed segmentation algorithm. Secondly, a new low bit rate codec is designed and analyzed based on the proposed segmentation tool. The estimated motion vectors inside the change detection mask, the corner points of the shapes and the residual information inside the motion failure regions are transmitted to the decoder using different coding techniques, thus achieving efficient compression. Thirdly, a novel approach to estimating and removing unwanted video motion, which does not require accelerometers or gyros, is presented. The algorithm estimates the camera motion from the incoming video stream and compensates for unwanted translation and rotation. A synchronization unit supervises and generates the stabilized video sequence. The reliability of all the proposed algorithms is demonstrated by extensive experimentation on various video shots.
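The change-detection front end of such a codec can be prototyped as a frame difference, a threshold, morphological clean-up and connected-component labelling. A compact Python/OpenCV sketch along those lines is given below; the fixed threshold and kernel size are illustrative assumptions, whereas the thesis models noise and illumination explicitly.

import cv2
import numpy as np

def change_mask(prev_gray, cur_gray, thresh=20, kernel_size=5):
    diff = cv2.absdiff(cur_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop isolated noise pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes in objects
    num_labels, labels = cv2.connectedComponents(mask)
    return mask, num_labels - 1, labels                     # background label excluded

prev = np.zeros((120, 160), dtype=np.uint8)
cur = prev.copy()
cv2.rectangle(cur, (50, 40), (90, 80), 200, -1)             # a synthetic moving object
mask, n_objects, labels = change_mask(prev, cur)
print("objects detected:", n_objects)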
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Wang, Zhou. "Rate scalable foveated image and video communications /". Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3064684.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Akhlaghian, Tab Fardin. "Multiresolution scalable image and video segmentation". Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060227.100704/index.html.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Fok, Stanley. "Foveated Stereo Video Compression for Visual Telepresence". Thesis, University of Waterloo, 2002. http://hdl.handle.net/10012/922.

Texto completo da fonte
Resumo:
This thesis focuses on the design of a foveated stereo video compression algorithm for visual telepresence applications. In a typical telepresence application, a user at the local site views real-time stereo video recorded and transmitted from a robotic camera platform located at a remote site. The robotic camera platform tracks the user's head motion, producing the sensation of being present at the remote site. The design of the stereo video compression algorithm revolved around a fast spatio-temporal block-based motion estimation algorithm, with a foveated SPIHT algorithm used to compress and foveate the independent frames and error residues. The redundancy between the left and right video streams was exploited by disparity compensation. Finally, position feedback from the robotic camera platform was used to perform global motion compensation, increasing the compression performance without raising computation requirements. The algorithm was analysed by introducing the above-mentioned components separately. It was found that each component increased the compression rate significantly, producing compressed video with compression and quality similar to MPEG-2. The implementation of the algorithm did not meet the real-time requirements on the experimental computers. However, the algorithm does not contain any intrinsic delays; therefore, given faster processors or an optimized software implementation, the design should be able to run in real time.
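Foveation exploits the fall-off of visual acuity with eccentricity, so detail far from the viewer's gaze can be discarded before or during compression. Below is a small Python sketch of a foveation weighting map that could, for example, scale wavelet coefficients prior to SPIHT coding; the half-resolution radius and the decay law are illustrative assumptions.

import numpy as np

def foveation_weights(height, width, gaze_y, gaze_x, half_res_radius=60.0):
    """Weight ~1 at the gaze point, decaying with eccentricity measured in pixels."""
    ys, xs = np.mgrid[0:height, 0:width]
    ecc = np.hypot(ys - gaze_y, xs - gaze_x)
    return 1.0 / (1.0 + ecc / half_res_radius)

w = foveation_weights(240, 320, gaze_y=120, gaze_x=160)
print(w[120, 160], w[120, 0])   # full weight at the fovea, much lower at the periphery

# In a codec, such weights could scale transform coefficients before bit allocation,
# so that peripheral detail is quantised away first.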
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Idris, Fayez M. "An algorithm and architecture for video compression". Thesis, University of Ottawa (Canada), 1993. http://hdl.handle.net/10393/6886.

Texto completo da fonte
Resumo:
In this thesis, we present a new frame-adaptive vector quantization technique and an architecture for real-time video compression. Video compression is becoming increasingly important, with several applications. There are two kinds of redundancy in a video sequence, namely spatial and temporal. Vector quantization (VQ) is an efficient technique for exploiting the spatial correlation. The temporal redundancies are usually removed by using motion estimation/compensation techniques. The coding performance of VQ may be improved by employing adaptive techniques at the expense of an increase in computational complexity. We propose a new technique for video compression using adaptive VQ (VC-FAVQ). This technique exploits the interframe as well as intraframe correlations in order to reduce the bit rate. In addition, a dynamic self-organized codebook is used to track the local statistics from frame to frame. Computer simulations using standard CCITT video sequences were performed; the results demonstrate the superior coding performance of VC-FAVQ. We note that both VQ and motion estimation algorithms are essentially template matching operations. However, they are compute intensive, necessitating the use of special purpose architectures for real-time implementation. Associative memories are efficient for template matching in parallel. We propose a unified associative memory architecture for real-time implementation of VC-FAVQ. This architecture is based on a novel storage concept where image data is stored by association rather than by contents. The architecture has the advantages of simplicity, partitionability and modularity and is hence suitable for VLSI implementation.
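The frame-to-frame adaptation can be pictured as an online codebook update: after each vector is quantised, the winning codeword is nudged towards it so that the dictionary tracks the local statistics of the sequence. Below is a minimal numpy sketch of such a self-organising update; codebook size, vector dimension and learning rate are illustrative assumptions, and keeping the encoder and decoder codebooks synchronised is a separate issue handled by the actual scheme.

import numpy as np

rng = np.random.default_rng(2)
codebook = rng.normal(0, 1, (64, 16))     # 64 codewords for 4x4 blocks

def encode_and_adapt(vec, codebook, rate=0.05):
    distances = ((codebook - vec) ** 2).sum(axis=1)
    idx = int(np.argmin(distances))
    codebook[idx] += rate * (vec - codebook[idx])   # self-organising update
    return idx                                       # index to be entropy coded

# feed a few frames' worth of blocks with slowly drifting statistics
for frame in range(5):
    blocks = rng.normal(frame * 0.2, 1, (100, 16))
    indices = [encode_and_adapt(b, codebook) for b in blocks]
print("codebook mean after adaptation:", float(codebook.mean()))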
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Maniccam, Suchindran S. "Image-video compression, encryption and information hiding /". Online version via UMI:, 2001.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Paul, Baldine-Brunel. "Video Compression based on iterated function systems". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/13553.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Stewart, Graeme Robert. "Implementing video compression algorithms on reconfigurable devices". Thesis, University of Glasgow, 2010. http://theses.gla.ac.uk/1267/.

Texto completo da fonte
Resumo:
The increasing density offered by Field Programmable Gate Arrays (FPGAs), coupled with their short design cycle, has made them a popular choice for implementing a wide range of algorithms and complete systems. In this thesis the implementation of video compression algorithms on FPGAs is studied. Two areas are specifically focused on: the integration of a video encoder into a complete system, and the power consumption of FPGA-based video encoders. Two FPGA-based video compression systems are described, one which targets surveillance applications and one which targets video conferencing applications. The FPGA video surveillance system makes use of a novel memory format to improve the efficiency with which input video sequences can be loaded over the system bus. The power consumption of an FPGA video encoder is analyzed, the results indicating that the motion estimation stage consumes the most power. An algorithm, which reuses the intra prediction results generated during the encoding process, is then proposed to reduce the power consumed on an FPGA video encoder's external memory bus. Finally, the power reduction algorithm is implemented within an FPGA video encoder. Results are given showing that, in addition to reducing power on the external memory bus, the algorithm also reduces power in the motion estimation stage of an FPGA-based video encoder.
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Edirisinghe, Eran A. "Data compression of stereo images and video". Thesis, Loughborough University, 1999. https://dspace.lboro.ac.uk/2134/10325.

Texto completo da fonte
Resumo:
One of the amazing properties of human vision is its ability to perceive the depth of the scenes being viewed. This is made possible by a process named stereopsis, which is the ability of our brain to fuse together the stereo image pair seen by the two eyes. As a stereo image pair is a direct result of the same scene being viewed from slightly different perspectives, it opens up a new paradigm where spatial redundancy can be exploited for efficient transmission and storage of stereo image data. This thesis introduces three novel algorithms for stereo image compression. The first algorithm improves compression by exploiting the redundancies present in the so-called disparity field of a stereo image pair. The second algorithm uses a pioneering block coding strategy to simultaneously exploit the inter-frame and intra-frame redundancy of a stereo image pair, eliminating the need to code the disparity field. The basic idea behind the development of the third algorithm is the efficient exploitation of redundancy in smoothly textured areas that are present in both frames, but are relatively displaced from each other due to binocular parallax. Extra compression gains of up to 20% have been achieved by the use of these techniques. The thesis also includes research work related to the improvement of the MPEG-4 video coding standard, which is the first audio-visual representation standard that understands a scene as a composition of audio-visual objects. A linear-extrapolation-based padding technique has been proposed that makes use of the trend of pixel value variation often present near object boundaries when padding the exterior pixels of the reference video object. Coding gains of up to 7% have been achieved for coding the boundary blocks of video objects. Finally, a contour-analysis-based approach has been proposed for MPEG-4 video object extraction.
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Pham, Anh Quang. "Wavelet video compression for iterative wireless transceivers". Thesis, University of Southampton, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494687.

Texto completo da fonte
Resumo:
For the sake of exploring the feasibility of providing video services for mobile users, this thesis investigates the design of wireless video communication systems using the British Broadcasting Corporation's recent proprietary wavelet-based video codec, referred to as the Dirac codec. The Dirac video-encoded bitstream is subjected to a rigorous error sensitivity investigation for the sake of assisting us in contriving various joint source-channel coding (JSCC) and decoding schemes for wireless videophones. Unequal Error Protection (UEP) is an attractive technique for implementing JSCC. Based on the Dirac video codec's bit sensitivity studies, a UEP scheme using turbo-equalized Irregular Convolutional Codes (IRCCs) was designed. Furthermore, an Iterative Source-Channel Decoding (ISCD) scheme, which exploits the residual redundancy left in the source-encoded bitstream for the sake of improving the attainable system performance, was investigated. Hence, a novel ISCD scheme employing a specific bit-to-symbol mapping scheme referred to as Over-Complete Mapping (OCM) was proposed. This allows us to design an attractive video transmission scheme having high error resilience at a reasonable complexity and a low delay. The research reported in this thesis was concluded with the design of unequal error protection irregular over-complete mapping for wavelet video telephony using iterative source and channel decoding. The philosophy of this video telephone scheme is to exploit as much of the redundancy inherent in the Dirac-encoded bitstream as possible for improving the system's BER performance, while protecting the more sensitive portions of the Dirac video-encoded sequence with a lower OCM rate. Finally, a near-instantaneous adaptive transceiver for wireless video telephony was designed.
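Unequal error protection simply spends more channel-coding redundancy on the bits whose corruption hurts most, here the sensitive portions of the Dirac bitstream. Below is a toy Python sketch using repetition codes of different strengths as a stand-in for the turbo-equalized irregular convolutional codes of the thesis; the class sizes, rates and channel error probability are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

def repeat_encode(bits, factor):
    return np.repeat(bits, factor)

def repeat_decode(coded, factor):
    # majority vote over each group of repeated bits
    return (coded.reshape(-1, factor).sum(axis=1) > factor / 2).astype(int)

def transmit(bits, flip_prob):
    noise = rng.random(bits.size) < flip_prob
    return bits ^ noise

header_bits = rng.integers(0, 2, 64)      # sensitive class: protect strongly (rate 1/5)
texture_bits = rng.integers(0, 2, 512)    # less sensitive class: protect lightly (rate 1/3)
p = 0.1
rx_header = repeat_decode(transmit(repeat_encode(header_bits, 5), p), 5)
rx_texture = repeat_decode(transmit(repeat_encode(texture_bits, 3), p), 3)
print("header errors :", int(np.sum(rx_header != header_bits)))
print("texture errors:", int(np.sum(rx_texture != texture_bits)))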
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Czerepinski, Przemyslaw Jan. "Displaced frame difference coding for video compression". Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267009.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Handcock, Jason Anthony. "Video compression techniques and rate-distortion optimisation". Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326726.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

GONZALES, JOSE ANTONIO CASTINEIRA. "EVALUATING MOTION ESTIMATION ALGORITHMS FOR VIDEO COMPRESSION". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1996. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8711@1.

Texto completo da fonte
Resumo:
Este trabalho teve por objetivo estudar algoritmos de estimação de movimento baseados na técnica de casamento de bloco a fim de avaliar a importância da sua escolha na construção de um codificador para uso em compressão de seqüência de imagens. Para isto foram estudados quatro algoritmos baseados na técnica de casamento de bloco, sendo verificada a interdependência existente entre os vários parâmetros que os compõem, tais como, tamanho da área de busca, critérios de medida de distorção entre blocos e tamanhos de blocos, em relação à qualidade da imagem reconstruída.
This work studied motion estimation algorithms based on block matching in order to evaluate the importance of the choice of the motion estimation algorithm in the design of an image sequence compression coder. To do so, four motion estimation algorithms were studied, and their performance was evaluated with respect to the quality of the reconstructed image, considering parameters such as the search region size, the method used to measure the matching between blocks, and the block size.
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Drake, Matthew Henry. "Stream programming for image and video compression". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/36774.

Texto completo da fonte
Resumo:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 101-108).
Video playback devices rely on compression algorithms to minimize storage, transmission bandwidth, and overall cost. Compression techniques have high realtime and sustained throughput requirements, and the end of CPU clock scaling means that parallel implementations for novel system architectures are needed. Parallel implementations increase the complexity of application design. Current languages force the programmer to trade off productivity for performance; the performance demands dictate that the parallel programmer choose a low-level language in which he can explicitly control the degree of parallelism and tune his code for performance. This methodology is not cost effective because this architecture-specific code is neither malleable nor portable. Reimplementations must be written from scratch for each of the existing parallel and reconfigurable architectures. This thesis shows that multimedia compression algorithms, composed of many independent processing stages, are a good match for the streaming model of computation. Stream programming models afford certain advantages in terms of programmability, robustness, and achieving high performance.
This thesis intends to influence language design towards the inclusion of features that lend themselves to the efficient implementation and parallel execution of streaming applications like image and video compression algorithms. Towards this I contribute i) a clean, malleable, and portable implementation of an MPEG-2 encoder and decoder expressed in a streaming fashion, ii) an analysis of how a streaming language improves programmer productivity, iii) an analysis of how a streaming language enables scalable parallel execution, iv) an enumeration of the language features that are needed to cleanly express compression algorithms, v) an enumeration of the language features that support large scale application development and promote software engineering principles such as portability and reusability. This thesis presents a case study of MPEG-2 encoding and decoding to explicate points about language expressiveness. The work is in the context of the StreamIt programming language.
by Matthew Henry Drake.
M.Eng.and S.B.
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Zhang, Yang. "High dynamic range image and video compression". Thesis, University of Bristol, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.683903.

Texto completo da fonte
Resumo:
High dynamic range (HDR) technology (capture and display) can offer high levels of immersion through a dynamic range that meets and exceeds that of the human visual system (HVS). This increase in immersion comes at the cost of higher bit-depth, bandwidth and memory requirements, which are significantly higher than those of conventional low dynamic range (LDR) content. The challenge is thus to develop a coding solution to efficiently compress HDR images and video into a manageable bitrate without compromising perceptual quality. Over the past century, a large number of psycho-visual experiments have been carried out by psychologists and physiologists with the goal of understanding how the HVS works. One of the human vision phenomena concerns reduced sensitivity to patterns of low and high spatial frequencies. This phenomenon is parametrized by the contrast sensitivity function (CSF). In this thesis, appropriate luminance and chrominance CSFs have been employed, in conjunction with an optimised wavelet sub-band weighting method. Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with the characteristics of the HVS, when tested objectively using an HDR Visible Difference Predictor (VDP) and through subjective evaluation. The HVS shows non-linear sensitivity to the distortion introduced by lossy image and video coding. Two psycho-visual experiments were performed using a high dynamic range display, in order to determine the potential differences between LDR and HDR edge masking (EM) and luminance masking (LM) effects. The EM experimental results indicate that the visibility threshold is higher for HDR content than for LDR, especially on the dark background side of an edge. The LM experimental results suggest that the HDR visibility threshold is higher than that of LDR for both dark and bright luminance backgrounds. A novel perception-based quantization method that exploits luminance masking in the HVS in order to enhance the performance of the High Efficiency Video Coding (HEVC) standard for HDR video content is proposed in this thesis. The proposed method has been integrated into the reference codec considered for the HEVC range extensions and its performance was assessed by measuring the bitrate reduction against the codec without perceptual quantization. The results indicate that the proposed method achieves significant bitrate savings, up to 42.2%, compared to HEVC at the same objective quality (based on HDR-VDP-2) and subjective evaluation.
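The CSF-driven weighting amounts to scaling each wavelet sub-band according to how sensitive the eye is to its spatial frequencies before quantisation. Below is a compact Python sketch with PyWavelets illustrating the mechanism; the per-level weights and base step are illustrative assumptions, not the optimised values derived in the thesis.

import numpy as np
import pywt

# Per-level weights standing in for a contrast sensitivity function:
# mid/coarse detail bands are preserved best, the finest details most coarsely.
LEVEL_WEIGHTS = {1: 0.4, 2: 1.0, 3: 0.7}
BASE_STEP = 4.0

def csf_weighted_quantise(image):
    coeffs = pywt.wavedec2(image, "bior4.4", level=3)
    out = [np.round(coeffs[0] / BASE_STEP) * BASE_STEP]     # approximation band
    for level, details in zip((3, 2, 1), coeffs[1:]):       # coarsest detail level first
        step = BASE_STEP / LEVEL_WEIGHTS[level]
        out.append(tuple(np.round(d / step) * step for d in details))
    return pywt.waverec2(out, "bior4.4")

img = np.random.default_rng(4).integers(0, 256, (64, 64)).astype(float)
rec = csf_weighted_quantise(img)
print("MSE:", float(np.mean((rec[:64, :64] - img) ** 2)))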
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Emmot, Sebastian. "Characterizing Video Compression Using Convolutional Neural Networks". Thesis, Luleå tekniska universitet, Datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79430.

Texto completo da fonte
Resumo:
Can the compression parameters used in video encoding be estimated given only the visual information of the resulting compressed video? If so, these parameters could potentially improve existing parametric video quality estimation models. Today, parametric models use information such as bitrate to estimate the quality of a given video. This method is inaccurate since it does not consider the coding complexity of a video. The constant rate factor (CRF) parameter for H.264 encoding aims to keep the quality constant while varying the bitrate; if the CRF for a video is known together with the bitrate, a better quality estimate could potentially be achieved. In recent years, artificial neural networks, and specifically convolutional neural networks, have shown great promise in the field of image processing. In this thesis, convolutional neural networks are investigated as a way of estimating the constant rate factor parameter of a degraded video by identifying the compression artifacts and their relation to the CRF used. With the use of ResNet, a model for estimating the CRF of each frame of a video can be derived; these per-frame predictions are then used in a video classification model which produces an overall CRF prediction for a given video. The results show that it is possible to find a relation between the visual encoding artifacts and the CRF used. The top-5 accuracy achieved by the model is 61.9% with the use of limited training data. Given that today's parametric bitrate-based quality models have no information about coding complexity, even a rough estimate of the CRF could improve their precision.
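The per-frame estimator can be framed as ordinary image classification: a ResNet backbone whose output classes are CRF bins, with the frame-level predictions pooled into a video-level estimate. Below is a rough PyTorch/torchvision sketch of that setup; the number of CRF bins, the CRF offset and the mean pooling over frames are assumptions for illustration, not the thesis's exact model.

import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_CRF_BINS = 11                      # e.g. CRF 18..28 mapped to classes 0..10

model = resnet18(weights=None)         # torchvision >= 0.13 API
model.fc = nn.Linear(model.fc.in_features, NUM_CRF_BINS)
model.eval()

def predict_video_crf(frames):
    """frames: tensor of shape (num_frames, 3, H, W), already normalised.
    Per-frame logits are averaged before the final class decision."""
    with torch.no_grad():
        logits = model(frames)                  # (num_frames, NUM_CRF_BINS)
        video_logits = logits.mean(dim=0)       # simple pooling over frames
        return int(video_logits.argmax()) + 18  # map the class back to a CRF value

dummy_clip = torch.randn(8, 3, 224, 224)        # eight frames of a decoded clip
print("estimated CRF:", predict_video_crf(dummy_clip))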
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Savadatti-Kamath, Sanmati S. "Video analysis and compression for surveillance applications". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26602.

Texto completo da fonte
Resumo:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Dr. J. R. Jackson; Committee Member: Dr. D. Scott; Committee Member: Dr. D. V. Anderson; Committee Member: Dr. P. Vela; Committee Member: Dr. R. Mersereau. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Sharma, Naresh. "Arbitrarily Shaped Virtual-Object Based Video Compression". Columbus, Ohio : Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1238165271.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.