Doctoral dissertations on the topic "Scalable video"




Listed below are the top 50 doctoral dissertations on the topic "Scalable video".


Where the relevant details are available in the metadata, you can also download the full text of a dissertation in ".pdf" format and read its abstract online.

Browse doctoral dissertations from a wide range of disciplines and compile accurate bibliographies.

1

Lee, Ying, 1979-. "Scalable video". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9071.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 51).
This thesis presents the design and implementation of a scalable video scheme that accommodates the uncertainties in networks and the differences in receivers' displaying mechanisms. To achieve scalability, a video stream is encoded into two kinds of layers, namely the base layer and the enhancement layer. The decoder must process the base layer in order to display minimally acceptable video quality. For higher quality, the decoder simply combines the base layer with one or more enhancement layers. Incorporated with the IP multicast system, the result is a highly flexible and extensible structure that facilitates video viewing to a wide variety of devices, yet customizes the presentation for each individual receiver.
by Ying Lee.
M.Eng.
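The two-layer mechanism this abstract describes is simple enough to sketch. Below is a minimal illustration in Python/NumPy, assuming coarse quantization stands in for the base layer and the quantization residual for a single enhancement layer; the function names and step sizes are invented, and the thesis's actual scheme (and its IP-multicast integration) is considerably more elaborate.

```python
import numpy as np

def encode_layers(frame, base_step=32, enh_step=8):
    """Split a frame into a coarsely quantized base layer and a finer
    enhancement layer that carries the residual the base layer misses."""
    base = np.round(frame / base_step).astype(np.int16)
    residual = frame - base.astype(np.float64) * base_step
    enh = np.round(residual / enh_step).astype(np.int16)
    return base, enh

def decode(base, enh=None, base_step=32, enh_step=8):
    """Base layer alone gives minimally acceptable quality; adding the
    enhancement layer refines the reconstruction."""
    out = base.astype(np.float64) * base_step
    if enh is not None:
        out = out + enh.astype(np.float64) * enh_step
    return out

frame = np.random.randint(0, 256, (8, 8)).astype(np.float64)
base, enh = encode_layers(frame)
print(np.abs(frame - decode(base)).mean())        # base-only error (coarse)
print(np.abs(frame - decode(base, enh)).mean())   # base + enhancement (refined)
```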
2

Stampleman, Joseph Bruce. "Scalable video compression". Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/70216.

3

Wee, Susie Jung-Ah. "Scalable video coding". Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11007.

4

Dereboylu, Ziya. "Error resilient scalable video coding". Thesis, University of Surrey, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.582748.

Abstract:
Video compression is necessary for effective coding of video data so that the data can be stored or transmitted more efficiently. In video compression, the more redundant data are discarded, the higher the achievable compression ratio. This makes the contents of a compressed bitstream highly dependent on each other. In video communications, the compressed bitstream is subject to losses and errors due to the nature of the transmission medium. Since the contents of the compressed bitstream are highly interdependent, a loss or an error propagates and causes deterioration of the decoded video quality. Error resilience plays an important role in decreasing the quality degradation caused by losses and errors. Error resilience methods can either take place at the encoder, as a coding technique that decreases the effects of errors on the coded bitstream, or at the decoder, as a technique that conceals the detected errors or losses. Error concealment, which takes place at the decoder, and redundant slice coding, which takes place at the encoder, are investigated throughout the thesis. The first part of the thesis investigates efficient error concealment techniques for Scalable Video Coding (SVC). These include the utilisation of higher temporal-level picture motion information and the utilisation of "Bridge Pictures", described in later chapters, for error concealment. The second part of the thesis investigates redundant slice coding for SVC. Single Block per Macroblock and Zero Residual redundant slice coding schemes are proposed and tested in this part of the thesis. In addition, an adaptive redundant slice allocation scheme is also proposed and tested. The last part of the thesis investigates error resilient coding techniques for multi-view 3D video. Multi-view 3D video compression is achieved using the SVC codec by coding one of the views as the Base Layer and the other views as Enhancement Layers, utilising the adaptive inter-layer prediction mechanism of SVC.
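For readers unfamiliar with decoder-side concealment, the sketch below shows the generic temporal-concealment move that techniques like these build on: patch a lost block with a motion-compensated copy from the previous frame, borrowing a motion vector from a received neighbour. It is a textbook baseline, not the thesis's SVC-specific algorithms, and all names are illustrative.

```python
import numpy as np

def conceal_block(prev_frame, cur_frame, y, x, bs, neighbour_mv):
    """Fill the lost bs-by-bs block at (y, x) of cur_frame with a motion-
    compensated copy from prev_frame, using a motion vector borrowed from
    a correctly received neighbouring block."""
    dy, dx = neighbour_mv
    h, w = prev_frame.shape
    sy = min(max(y + dy, 0), h - bs)   # clamp the source block inside the frame
    sx = min(max(x + dx, 0), w - bs)
    cur_frame[y:y + bs, x:x + bs] = prev_frame[sy:sy + bs, sx:sx + bs]
    return cur_frame

# Toy usage: conceal a 4x4 block lost at (8, 8) with the neighbour's MV (1, -2).
prev = np.arange(16 * 16, dtype=np.float64).reshape(16, 16)
cur = prev.copy()
conceal_block(prev, cur, 8, 8, 4, (1, -2))
```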
5

Sanhueza, Gutiérrez Andrés Edgardo. "Scalable video coding sobre TCP". Tesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/136454.

Abstract:
Ingeniero Civil Eléctrico (professional degree in Electrical Engineering)
Nowadays the scale of multimedia content advances faster than the development of the technologies needed to distribute it properly over the network. New protocols are therefore needed to bridge the two, so as to get the most out of the content even when the technology for distributing it is not yet adequate. Among the latest video compression technologies is Scalable Video Coding (SVC), which encodes several qualities into a single bitstream capable of displaying any of the embedded qualities, depending on whether or not all the information is received. In a streaming connection, where fluidity and fidelity are required at both ends, SVC has great potential for discarding a minimum of information in order to favour the fluidity of the transmission. The software used to create and manipulate these SVC bitstreams is the Joint Scalable Video Model (JSVM). In this context, a deadline algorithm is developed in Matlab that omits SVC video information according to how critical the transmission scenario is. The user's perception of fluidity is taken as the key measure, so maintaining a rate of 30 fps is prioritized at the cost of a minimal loss of quality. The algorithm omits information according to how far the transmission is from the 30 fps deadline: when far from it, only less relevant information is omitted; when very close, more important information is dropped as well. The results are compared against plain TCP and evaluated for different RTT values; the objective is fully met for RTTs below 150 ms, with gains of up to 20 s in favour of the deadline algorithm by the end of the transmission. This improvement in arrival time discards no essential information and only slightly degrades video quality in order to maintain the 30 fps rate. In contrast, in very adverse scenarios with RTTs of 300 ms, the omissions are substantial and compromise entire frames, together with a general degradation of the video and the appearance of artifacts. The proposal therefore meets its objectives in moderately adverse environments. All simulations used a moving-content video of 352x288 pixels and 150 frames.
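The deadline rule lends itself to a caricature in a few lines of Python: the smaller the slack against the 30 fps playout deadline, the more important the data the sender allows itself to omit. The thresholds and layer names below are invented for illustration; the thesis's Matlab algorithm is more refined.

```python
def layers_to_send(time_left_s, frame_budget_s=1 / 30):
    """Pick which SVC layers to transmit based on slack against the deadline."""
    slack = time_left_s / frame_budget_s   # > 1: ahead of schedule, < 1: behind
    if slack > 2.0:
        return ["base", "enh1", "enh2"]    # comfortable: send all layers
    if slack > 1.0:
        return ["base", "enh1"]            # tight: omit the least important data
    return ["base"]                        # critical: keep only the base layer

print(layers_to_send(0.1))    # plenty of slack -> all layers
print(layers_to_send(0.02))   # behind schedule -> base layer only
```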
6

Mehrseresht, Nagita, Electrical Engineering & Communication, UNSW. "Adaptive techniques for scalable video compression". Awarded by: University of New South Wales, Electrical Engineering and Communication, 2005. http://handle.unsw.edu.au/1959.4/20552.

Abstract:
In this work we investigate adaptive techniques which can be used to improve the performance of highly scalable video compression schemes under resolution scaling. We propose novel content adaptive methods for motion compensated 3D discrete wavelet transformation (MC 3D-DWT) of video. The proposed methods overcome problems of ghosting and non-aligned aliasing artifacts, which can arise in regions of motion model failure, when the video is reconstructed at reduced temporal or spatial resolutions. We also study schemes which facilitate simultaneous scaling of compressed video bitstreams based on both constant bit-rate and constant distortion criteria, using simple and generic scaling operations. In regions where the motion model fails, the motion compensated temporal discrete wavelet transform (MC TDWT) causes ghosting artifacts under frame-rate scaling, due to temporal lowpass filtering along invalid motion trajectories. To avoid ghosting artifacts, we adaptively select between different lowpass filters, based on a local estimate of the motion modelling accuracy. Experimental results indicate that the proposed adaptive transform substantially removes ghosting artifacts while also preserving the high compression efficiency of the original MC TDWT. We also study the impact of various MC 3D-DWT structures on spatial scalability. Investigating the interaction between spatial aliasing, scalability and energy compaction shows that the t+2D structure essentially has higher compression efficiency. However, where the motion model fails, structures of this form cause non-aligned aliasing artifacts under spatial scaling. We propose novel adaptive schemes to continuously adapt the structure of MC 3D-DWT based on information available within the compressed bitstream. Experimental results indicate that the proposed adaptive structure preserves the high compression efficiency of the t+2D structure while also avoiding the appearance of non-aligned aliasing artifacts under spatial scaling. To provide simultaneous rate and distortion scaling, we study a layered substream structure. Scaling based on distortion generates variable bit-rate traffic which satisfies the desired average bit-rate and is consistent with the requirements of leaky-bucket traffic models. We propose a novel method which also satisfies constraints on instantaneous bit-rate. This method overcomes the weakness of previous methods with small leaky-bucket buffer sizes. Simulation results indicate promising performance with both MC 3D-DWT interframe and JPEG2000 intraframe compression.
7

Fan, Dian. "Scalable Video Transport over IP Networks". Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/460.

Abstract:
With the advances in video compression and networking techniques, the last ten years have witnessed an explosive growth of video applications over the Internet. However, the service model of the current best-effort network was never engineered to handle video traffic and, as a result, video applications still suffer from varying and unpredictable network conditions, in terms of bandwidth, packet loss and delay. To address these problems, many innovative techniques have been proposed and researched. Among them, scalable video coding is a promising approach to coping with the dynamics of the available bandwidth and with heterogeneous terminals. This work aims at improving the efficacy of scalable video transport over IP networks. In this work, we first propose an optimal interleaving scheme combined with motion-compensated fine granularity scalability video source coding and unequal loss protection schemes, under an imposed delay constraint. The network is modeled as a packet-loss channel with random delays. The motion compensation prediction, ULP allocation and the depth of the interleaver are jointly optimized based on the network status and the delay constraint. We then proceed to investigate the multiple path transport technique. A unified approach which incorporates adaptive motion compensation prediction, multiple description coding and unequal multiple path allocation is proposed to improve both the robustness and the error resilience of the video coding and transmission system, while simultaneously improving the delivered video quality. To analytically investigate the efficacy of error resilient transport schemes for progressively encoded sources, including unequal loss protection, best-effort and FEC transport schemes, we develop evaluation and optimization approaches for these schemes. In this part of the work, the network is modeled as an M/D/1/K queue, and a comprehensive queueing analysis is provided. Armed with these results, the efficacy of these transport schemes for progressively encoded sources is investigated and compared.
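To make the unequal-loss-protection idea concrete: in a progressively encoded bitstream the early segments matter most, so they deserve the most parity. The toy allocator below uses a made-up linear importance weighting, standing in for the optimized ULP allocation studied in the dissertation.

```python
def ulp_allocation(n_segments, total_parity):
    """Split a parity budget across segments of a progressive bitstream,
    weighting earlier (more important) segments more heavily."""
    weights = [n_segments - i for i in range(n_segments)]   # linear importance decay
    total_w = sum(weights)
    return [round(total_parity * w / total_w) for w in weights]

# 5 segments, 20 parity packets: earliest segment gets the most protection.
print(ulp_allocation(5, 20))   # -> [7, 5, 4, 3, 1] (rounding keeps the sum near 20)
```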
8

Kim, Taehyun. "Scalable Video Streaming over the Internet". Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6829.

Abstract:
The objectives of this thesis are to investigate the challenges of video streaming, to explore and compare different video streaming mechanisms, and to develop video streaming algorithms that maximize visual quality. To achieve these objectives, we first investigate scalable video multicasting schemes by comparing layered video multicasting with replicated stream video multicasting. Even though it has been generally accepted that layered video multicasting is superior to replicated stream multicasting, this assumption is not based on a systematic and quantitative comparison. We argue that there are indeed scenarios where replicated stream multicasting is the preferred approach. We also consider the problem of providing perceptually good quality of layered VBR video. This problem is challenging because the dynamic behavior of the Internet's available bandwidth makes it difficult to provide good quality, and because a video encoded for consistent quality exhibits significant data rate variability. We are, therefore, faced with the problem of accommodating the mismatch between the variability of the available bandwidth and the data rate variability of the encoded video. We propose an optimal quality adaptation algorithm that minimizes quality variation while at the same time increasing the utilization of the available bandwidth. Finally, we investigate the transmission control protocol (TCP) as a transport layer protocol for streaming packetized media data. Our approach is to model a video streaming system and derive relationships under which the system employing the TCP protocol achieves the desired performance. Both simulation results and Internet experimental results validate this model and demonstrate that the derived buffering delay requirements achieve the desired video quality with high accuracy. Based on these relationships, we also develop real-time algorithms for estimating playout buffer requirements.
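The buffering-delay relationship at the end of this abstract can be illustrated with a small simulation: given a per-second TCP throughput trace and the media bitrate, find how much media must be prebuffered to avoid underflow. This trace-based toy (with invented numbers) ignores data arriving during the startup delay itself, whereas the thesis derives the relationships analytically.

```python
def min_startup_delay(throughput_kbps, media_kbps):
    """Smallest prebuffered amount (in seconds of media) that avoids
    playout-buffer underflow for a given per-second throughput trace."""
    received = 0.0
    worst_deficit = 0.0
    for t, rate in enumerate(throughput_kbps):
        received += rate                       # kbits in the buffer after second t+1
        consumed = media_kbps * (t + 1)        # kbits the decoder needs by then
        worst_deficit = max(worst_deficit, consumed - received)
    return worst_deficit / media_kbps          # convert kbits to seconds of media

# Invented trace: throughput dips below the 350 kbps media rate in second 2.
print(min_startup_delay([300, 200, 500, 400], media_kbps=350))   # ~0.57 s
```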
9

Akhlaghian, Tab Fardin. "Multiresolution scalable image and video segmentation". Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060227.100704/index.html.

10

Al-Muscati, Hussain. "Scalable transcoding of H.264 video". Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=92256.

Abstract:
Digital video transcoding provides a low complexity mechanism to convert a coded video stream from one compression standard to another. This conversion should be achieved while maintaining a high visual quality. The recent emergence and standardization of the scalable extension of the H.264 standard, together with the wide availability of encoded H.264 single-layer content, places great importance on developing a transcoding mechanism that converts from the single-layer to the scalable form.
In this thesis, transcoding of a single-layer H.264/AVC stream to an H.264/SVC stream with combined spatial-temporal scalability is achieved through the use of a heterogeneous video transcoder in the pixel domain. This architecture is chosen as a compromise between complexity and reconstruction quality.
In this transcoder, the input H.264/AVC stream is fully decoded. The macroblock coding modes and partitioning decisions are reused to encode the output H.264/SVC stream. A set of new motion vectors is computed from the motion vectors coded in the input stream. This extracted and modified information is collectively downsampled, together with the decoded frames, in order to provide multiple scalable layers. The newly computed motion vectors are further subjected to a 3-pixel refinement. The output stream is coded with either a hierarchical B-frame structure or a zero-delay referencing structure.
The performance of the proposed transcoder is validated through simulation results. These simulations compare both the compression efficiency (PSNR/bit-rate) and computational complexity (computation time) of the implemented transcoding scheme to a setup that performs a full decoding followed by a full encoding of the incoming video stream. It is shown that a significant decrease in computational complexity is achieved, with a reduction of over 60% in some cases, while incurring only a small loss in compression efficiency.
11

Hewlett, Gregory James. "Scalable video in a multiprocessing environment". Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/67268.

12

Lu, Xin. "Efficient algorithms for scalable video coding". Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/59744/.

Abstract:
A scalable video bitstream specifically designed for the needs of various client terminals, network conditions, and user demands is much desired in current and future video transmission and storage systems. The scalable extension of the H.264/AVC standard (SVC) has been developed to satisfy the new challenges posed by heterogeneous environments, as it permits a single video stream to be decoded fully or partially with variable quality, resolution, and frame rate in order to adapt to a specific application. This thesis presents novel improved algorithms for SVC, including: 1) a fast inter-frame and inter-layer coding mode selection algorithm based on motion activity; 2) a hierarchical fast mode selection algorithm; 3) a two-part Rate Distortion (RD) model targeting the properties of different prediction modes for the SVC rate control scheme; and 4) an optimised Mean Absolute Difference (MAD) prediction model. The proposed fast inter-frame and inter-layer mode selection algorithm is based on the empirical observation that a macroblock (MB) with slow movement is more likely to be best matched by one in the same resolution layer. However, for a macroblock with fast movement, motion estimation between layers is required. Simulation results show that the algorithm can reduce the encoding time by up to 40%, with negligible degradation in RD performance. The proposed hierarchical fast mode selection scheme comprises four levels and makes full use of inter-layer, temporal and spatial correlation as well as the texture information of each macroblock. Overall, the new technique demonstrates the same coding performance in terms of picture quality and compression ratio as that of the SVC standard, yet produces a saving in encoding time of up to 84%. Compared with state-of-the-art SVC fast mode selection algorithms, the proposed algorithm achieves a superior computational time reduction under very similar RD performance conditions. The existing SVC rate distortion model cannot accurately represent the RD properties of the prediction modes, because it is influenced by the use of inter-layer prediction. A separate RD model for inter-layer prediction coding in the enhancement layer(s) is therefore introduced. Overall, the proposed algorithms improve the average PSNR by up to 0.34dB or produce an average saving in bit rate of up to 7.78%. Furthermore, the control accuracy is maintained to within 0.07% on average. As a MAD prediction error always exists and cannot be avoided, an optimised MAD prediction model for the spatial enhancement layers is proposed that considers the MAD from previous temporal frames and previous spatial frames together, to achieve a more accurate MAD prediction. Simulation results indicate that the proposed MAD prediction model reduces the MAD prediction error by up to 79% compared with the JVT-W043 implementation.
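The first algorithm rests on a dichotomy that is easy to sketch: a slow-moving macroblock is likely best matched within the same resolution layer, so inter-layer checks can be skipped. The threshold and mode names below are invented for illustration; the actual decision rules in the thesis are more detailed.

```python
def candidate_modes(mv, threshold=4.0):
    """Choose which prediction modes to evaluate for a macroblock, based on
    the magnitude of its estimated motion vector."""
    motion_activity = (mv[0] ** 2 + mv[1] ** 2) ** 0.5   # MV magnitude in pixels
    if motion_activity < threshold:
        return ["same_layer_inter"]                       # slow MB: skip inter-layer search
    return ["same_layer_inter", "inter_layer"]            # fast MB: full candidate set

print(candidate_modes((1, 2)))    # slow motion -> same-layer only
print(candidate_modes((8, -6)))   # fast motion -> also try inter-layer prediction
```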
13

Stoian, Andrei. "Scalable action detection in video collections". Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1034/document.

Abstract:
This thesis proposes new methods for indexing video collections with varied content, such as cultural archives, by the human actions they contain. Human actions are an important aspect of multimedia content, alongside sound, images and speech. The main technical question we address is: 'How can a human action be detected and precisely and quickly localized in a video, given a few example clips of that same action?' The challenge lies in satisfying two criteria at once: detection quality and search response time. The first part of the thesis adapts similarity measures to the computation-time and memory constraints of a fast action-detection system. We show that a sequence-alignment approach coupled with feature selection answers queries quickly while achieving good result quality, and that adding a preliminary filtering stage improves performance further. In the second part of the thesis, we develop a method for accelerating the filtering stage to obtain search complexity that is sublinear in the size of the database, based on similarity-sensitive hashing and on a new approach to exploring the hash space adapted to 'query-by-detector'. We evaluate the proposed methods on a new large annotated video database intended for the detection and localization of human actions, and show that our approaches deliver good-quality results and scale well.
14

Kao, Meng-Ping. "A block-based scalable motion model for highly scalable video coding". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3307532.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed July 18, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 118-125).
15

Hu, Mingyou. "Highly scalable 2D model-based video coding". Thesis, University of Surrey, 2005. http://epubs.surrey.ac.uk/842677/.

Abstract:
With the rapid merging of the computer, communications, and entertainment industries, we can expect a trend of growing heterogeneity (in channel bandwidth, receiver capacity, etc.) in future digital video coding applications. Furthermore, new functions appear, such as object manipulation, which should be supported by video coding techniques. The traditional video coding approach is too constrained and inefficient to deal with this heterogeneity and with user interaction. Scalable coding, allowing partial decoding at a variety of resolution, temporal, quality, and object levels from a single compressed codestream, is widely considered a promising technology for efficient signal representation and transmission in a heterogeneous environment. However, although several scalable algorithms have been proposed in the literature and the international standards over the last decade, further research is necessary to improve the compression performance of scalable video coding. This thesis investigates a scalable 2D model-based video coding method with efficient video compression as well as excellent scalability performance, in order to satisfy these newly emerged requirements. It first examines the main model-based video coding techniques and scalable video coding methods. The parametric video models that describe the real world and the image generation process are also briefly described. Next, video segmentation algorithms are investigated to partition each video frame into semantic video objects. For the first frame, the texture information and the motion from the first several frames are used to extract the semantic foreground objects. For some sequences, user interaction is required to obtain semantic objects. In later frames, the proposed complexity-scalable contour-tracking algorithm is used to segment each frame. After that, each object is progressively approximated using a three-layer 2D mesh model. In order to represent the motion of the human face more precisely, face detection and modelling are also investigated. This technique, in which the human face is modelled separately, is shown to improve the representation of object motion. Scalable model compression is also outlined in this thesis. The object model is represented in two parts, object shape and interior object model, which are compressed separately. A scalable contour approximation algorithm is proposed. Both intra- and predictive scalable shape-coding algorithms are investigated and proposed to code the object shape progressively. The encoded coarser layers are used to improve the coding efficiency of the current layer. The effectiveness of these algorithms is demonstrated through the results of extensive experiments. We also investigate the scalable texture coding of video objects. An improved shape-adaptive SPECK algorithm is employed for intra-texture coding and is also used for residual texture coding after motion-compensated temporal filtering. During motion-compensated temporal filtering, a scalable mesh object model is used, and scalable motion vector coding is achieved using a CABAC codec. A hierarchically structured bitstream is created, optimised for rate-distortion, to facilitate efficient bit truncation and bit allocation among video frames and video objects. The coding system can encode/decode each video object independently and generate a separate bit stream for each object. As our experiments show, this high degree of coding scalability is achieved without the significant cost in compression performance commonly experienced in most scalable coding systems.
16

Bayrakeri, Sadik. "Scalable video coding using spatio-temporal interpolation". Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15385.

17

Wang, Zhou. "Rate scalable foveated image and video communications". Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3064684.

18

Zhang, Lelin. "Scalable Content-Based Image and Video Retrieval". Thesis, The University of Sydney, 2016. http://hdl.handle.net/2123/15439.

Abstract:
The popularity of the Internet and of portable image capturing devices brings an unprecedented amount of images and videos. Content-based visual search provides an important tool for users to consume the ever-growing digital media repositories, and is becoming an increasingly demanding task. In this thesis, we focus on improving the scalability, efficiency and usability of content-based image and video retrieval systems, particularly in dynamic and open environments. Towards this goal, we make four contributions to the research community. First, we propose a scalable approach to applying bag-of-visual-words (BoVW) to content-based image retrieval (CBIR) in peer-to-peer (P2P) networks. To cope with the dynamic P2P environment, we propose a distributed codebook updating algorithm based on splitting/merging of individual codewords, which maintains workload balance under network churn. Our approach offers a scalable framework for content-based visual search in P2P environments. Second, we improve the retrieval performance of CBIR with relevance feedback (RF). We formulate the RF process as an energy minimization problem, and utilize a graph cuts algorithm to solve it and obtain relevant/irrelevant labels for the images. Our method enables flexible partitioning of the feature space and is capable of handling challenging scenarios. Third, we improve the retrieval performance of trajectory-based action video retrieval with spatial-temporal context. We exploit the spatial-temporal correlations among trajectories for descriptor coding, and tackle the trajectory segment mis-alignment issue with an offset-aware distance for trajectory matching. Finally, we develop a toolset that improves the efficiency of, and provides better insight into, the BoVW pipeline. Our toolset provides robust integration of different methods, automatic parallel execution and result reusing, and visualization of the retrieval process.
19

Lam, Sui Yuk. "Complexity optimization in H.264 and scalable extension". View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20LAM.

20

Poullot, Sébastien. "Scalable Content-Based Video Copy Detection for Stream Monitoring and Video Mining". Paris, CNAM, 2009. http://www.theses.fr/2009CNAM0627.

Abstract:
This thesis essentially addresses the scalability of indexing methods for vector databases. The applications concern similarity-based search of video descriptors in large volumes in order to perform content-based copy detection. On the one hand, we want to perform online monitoring of a video stream against a reference database containing 280,000 hours of video, i.e. 17 billion descriptors. The proposed solution is based on a new probabilistic indexing and search method built on a Zgrid, combined with a model of the distortions that video descriptors undergo during copying and with a local density model. The goal is to make the similarity search more selective and therefore faster. With this approach, one video stream can be monitored against the 280,000-hour database in deferred real time on a single standard PC. On the other hand, we want to detect all occurrences of the videos within such a large database. The problem becomes quadratic: a similarity self-join of the descriptor database must be performed. Here we propose a new global frame description built from local descriptors, which reduces complexity while retaining good robustness, together with an indexing scheme adapted to this task which is, moreover, easily parallelizable, in order to mine the volumes mentioned above. Our tests processed databases containing up to 10,000 hours of video in 80 hours on a single standard PC.
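The Zgrid belongs to the family of Z-order (Morton) space-filling-curve indexes, whose core trick, bit interleaving of quantized coordinates, fits in a few lines. The sketch below shows only the curve itself; the probabilistic search, descriptor-distortion model and local-density correction of the thesis are not represented.

```python
def morton_key(coords, bits=8):
    """Interleave the bits of quantized coordinates into one Z-order key,
    so points in the same grid neighbourhood tend to get nearby keys."""
    key = 0
    for b in range(bits):
        for d, c in enumerate(coords):
            key |= ((c >> b) & 1) << (b * len(coords) + d)
    return key

print(morton_key([3, 5]))   # x=011, y=101 interleaved -> 100111 (binary) = 39
```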
21

Secker, Andrew J., Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Motion-adaptive transforms for highly scalable video compression". Awarded by: University of New South Wales, School of Electrical Engineering and Telecommunications, 2004. http://handle.unsw.edu.au/1959.4/33036.

Abstract:
This thesis investigates motion-adaptive temporal transformations and motion parameter coding schemes, for highly scalable video compression. The first aspect of this work proposes a new framework for constructing temporal discrete wavelet transforms, based on motion-compensated lifting steps. The use of lifting preserves invertibility regardless of the selected motion model. By contrast, the invertibility requirement has restricted previous approaches to either block-based or global motion compensation. We show that the proposed framework effectively applies the temporal wavelet transform along the motion trajectories. Video sequences reconstructed at reduced frame-rates, from subsets of the compressed bitstream, demonstrate the visually pleasing properties expected from lowpass filtering along the motion trajectories. Experimental results demonstrate the effectiveness of temporal wavelet kernels other than the simple Haar. We also demonstrate the benefits of complex motion modelling, by using a deformable triangular mesh. These advances are either incompatible or difficult to achieve with previously proposed strategies for scalable video compression. A second aspect of this work involves new methods for the representation, compression and rate allocation of the motion information. We first describe a compact representation for the various motion mappings associated with the proposed lifting transform. This representation significantly reduces the number of distinct motion fields that must be transmitted to the decoder. We also incorporate a rate scalable scheme for coding the motion parameters. This is achieved by constructing a set of quality layers for the motion information, in a manner similar to that used to construct the scalable sample representation. When the motion layers are truncated, the decoder receives a quantized version of the motion parameters used to code the sample data. A linear model is employed to quantify the effects of motion parameter quantization on the reconstructed video distortion. This allows the optimal trade-off between motion and subband sample bit-rates to be determined after the motion and sample data has been compressed. Two schemes are proposed to determine the optimal trade-off between motion and sample bit-rates. The first scheme employs a simple but effective brute force search approach. A second scheme explicitly utilizes the linear model, and yields comparable performance to the brute force scheme, with significantly less computational cost. The high performance of the second scheme also serves to reinforce the validity of the linear model itself. In comparison to existing scalable coding schemes, the proposed video coder achieves significantly higher compression performance, and motion scalability facilitates efficient compression even at low bit-rates. Experimental results show that the proposed scheme is also competitive with state-of-the-art non-scalable video coders.
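The lifting construction at the heart of the first contribution is compact enough to state here. For the Haar kernel, the motion-compensated lifting steps take the standard form below (the notation is generic: x_n are input frames and W_{a->b} warps frame a onto the coordinates of frame b); because each step is simply subtracted or added back at the decoder, invertibility holds for any warping operator, which is what frees the motion model to be block-based, mesh-based or anything else.

```latex
\begin{aligned}
  h_k &= x_{2k+1} - \mathcal{W}_{2k \to 2k+1}\!\left(x_{2k}\right)
      && \text{(predict step: highpass band along motion)} \\
  l_k &= x_{2k} + \tfrac{1}{2}\,\mathcal{W}_{2k+1 \to 2k}\!\left(h_k\right)
      && \text{(update step: lowpass band along motion)}
\end{aligned}
```

The decoder runs the same steps in reverse order with opposite signs, so perfect reconstruction imposes no constraint on the warping operator.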
22

Uyar, Ahmet. "Scalable service oriented architecture for audio/video conferencing". Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2005. http://wwwlib.umi.com/cr/syr/main.

23

Hägg, Ragnar. "Scalable High Efficiency Video Coding : Cross-layer optimization". Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-257558.

Abstract:
In July 2014, the second version of the HEVC/H.265 video coding standard was announced, and it included the Scalable High Efficiency Video Coding (SHVC) extension. SHVC codes a video stream together with subset streams of the same video at lower quality, and it supports spatial, temporal and SNR scalability, among others. This enables easy adaptation of a video stream, by dropping or adding packets, to devices with different screen sizes, computing power and bandwidth. In this project SHVC has been implemented in Ericsson's research encoder C65. Some cross-layer optimizations have also been implemented and evaluated. The main goal of these optimizations is to make better decisions when choosing the reference layer's motion parameters and QP, by doing multi-pass coding and using the coded enhancement-layer information from the first pass.
24

Li, Xue. "Scalable and adaptive video multicast over the internet". Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8202.

25

Höferlin, Benjamin [Verfasser]. "Scalable Visual Analytics for Video Surveillance / Benjamin Höferlin". München : Verlag Dr. Hut, 2014. http://d-nb.info/1050331842/34.

26

Shi, Feng. "An architecture for scalable and deterministic video servers". Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627397.

27

Naghdinezhad, Amir. "Error resilient methods in scalable video coding (SVC)". Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121379.

Abstract:
With the rapid development of multimedia technology, video transmission over unreliable channels, such as the Internet and wireless networks, is widely used. Channel errors can result in a mismatch between the encoder and the decoder, and because of the predictive structures used in video coding, the errors propagate both temporally and spatially. Consequently, the quality of the received video at the decoder may degrade significantly. In order to improve the quality of the received video, several error resilient methods have been proposed. Furthermore, in addition to compression efficiency and error robustness, flexibility has become a new requirement in advanced multimedia applications. In applications such as video conferencing and video streaming, compressed video is transmitted over heterogeneous networks to a broad range of clients with different requirements and capabilities in terms of power, bandwidth and display resolution, simultaneously accessing the same coded video. The scalable video coding concept was proposed to address this flexibility issue by generating a single bit stream that meets the requirements of these users. This dissertation makes novel contributions in the area of error resilience for the scalable extension of H.264/AVC. The first part of the dissertation focuses on modifying the conventional prediction structure in order to reduce the propagation of errors to succeeding frames. We propose two new prediction structures that can be used in the temporal and spatial scalability of SVC. The proposed techniques improve on previous methods by efficiently exploiting the Intra macroblocks (MBs) in the reference frames and the exponential decay of error propagation provided by the introduced leaky prediction. In order to satisfy both coding efficiency and error resilience over error prone channels, we combine an error-resilient mode decision technique with the proposed prediction structures. The end-to-end distortion of the proposed prediction structure is estimated and used instead of the source coding distortion in the rate-distortion optimization. Furthermore, accurately analysing the utility of each video packet in unequal error protection techniques is a critical and usually very complex process. We present an accurate, low-complexity utility estimation technique, which estimates the utility of each network abstraction layer (NAL) unit by considering the error propagation to future frames. A low-delay version of this technique, which can be used in delay-constrained applications, is also presented.
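The leaky-prediction effect invoked here has a one-line intuition: with a prediction gain alpha < 1, an error injected by the channel is attenuated geometrically frame after frame, instead of persisting as in conventional prediction. A minimal numerical illustration, modelling only the error term under an assumed gain, follows.

```python
def propagated_error(initial_error, alpha, frames):
    """Track how a single channel error decays over subsequent predicted
    frames when the prediction is scaled by a leak factor alpha."""
    errors, e = [], initial_error
    for _ in range(frames):
        errors.append(e)
        e *= alpha                 # each prediction carries alpha times the error
    return errors

print(propagated_error(10.0, alpha=0.9, frames=5))   # decays: 10, 9, 8.1, ...
print(propagated_error(10.0, alpha=1.0, frames=5))   # conventional: never decays
```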
28

Atta, Randa. "Scalable video coding based on the DCT pyramid". Thesis, University of Essex, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397729.

29

Biatek, Thibaud. "Efficient rate control strategies for scalable video coding". Thesis, Rennes, INSA, 2016. http://www.theses.fr/2016ISAR0007/document.

Abstract:
High Efficiency Video Coding (HEVC/H.265) is the latest video coding standard, finalized in January 2013 as the successor of Advanced Video Coding (AVC/H.264). Its scalable extension, called SHVC, was released in October 2014 and enables spatial, bit-depth, color-gamut (CGS) and even standard scalability. SHVC is a good candidate for introducing new services thanks to its backward compatibility with legacy HEVC receivers through the base-layer (BL) stream, and with next generation ones through the BL+EL (enhancement layer). In addition, SHVC saves substantial bitrate with respect to simulcast coding (independent coding of layers); it is also considered by DVB for UHD introduction and is included in ATSC-3.0. In this context, the work of this thesis aims at designing efficient rate-control strategies for HEVC and its scalable extension SHVC in the context of the introduction of new UHD formats. First, we have investigated the ρ-domain approach, which consists in linking the number of non-zero transformed and quantized residual coefficients with the bitrate, in a linear way, to achieve straightforward rate control. After validating it for HEVC and SHVC coding, we have developed an innovative Coding Tree Unit (CTU)-level rate-control algorithm using the ρ-domain. For each CTU and its associated targeted bit rate, our method accurately estimates the most appropriate quantization parameter (QP) based on neighborhood indicators, with a bit rate error below 4%. Then, we have proposed a deterministic way of estimating the ρ-domain model which avoids the implementation of look-up tables and achieves a model estimation accuracy above 90%. Second, we have explored the impact of the bitrate ratio between layers on SHVC performance for spatial, CGS and SDR-to-HDR scalability. Based on statistical observations, we have built adaptive rate control (ARC) algorithms. We first propose an ARC scheme which optimizes coding performance by selecting the optimal ratio within a fixed ratio interval, under a global bitrate constraint (BL+EL). This method is adaptive and considers the content and the type of scalability; it enables a coding gain of 4.25% compared to fixed-ratio encoding. This method has then been enhanced with quality and bandwidth constraints in each layer instead of considering a fixed interval. The second method has been tested on hybrid delivery of HD/UHD services and on backward compatible SHVC encoding of UHD1-P1/UHD1-P2 services (DVB use-case), where it enables significant coding gains of 7.51% and 8.30%, respectively. Finally, the statistical multiplexing of SHVC programs has been investigated. We propose a first approach which adjusts both the global bit rate allocated to each program and the ratio between BL and EL to optimize coding performance. In addition, the proposed method smooths the quality variations and enforces quality homogeneity between programs. Applied to a database containing pre-encoded bitstreams, it reduces the scalability overhead from 11.01% to 7.65% compared to constant bitrate encoding, while maintaining good accuracy and acceptable quality variations among programs.
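The ρ-domain model referred to above states that the bit count is roughly linear in the number of nonzero quantized coefficients, i.e. R ≈ θ · N_nz. The sketch below shows one way such a model yields a rate-control step: invert it to obtain a target nonzero count, then scan for the finest quantizer that meets it. The QP-to-step mapping and the value of θ are generic illustrations, not the calibrated deterministic model of the thesis.

```python
import numpy as np

def pick_qp(coeffs, target_bits, theta):
    """Return the finest QP whose nonzero-coefficient count fits the budget,
    under the linear rho-domain model R = theta * (number of nonzeros)."""
    nonzero_target = target_bits / theta
    for qp in range(52):                       # HEVC-style QP range 0..51
        step = 2 ** ((qp - 4) / 6)             # approximate QP-to-step relation
        nonzero = np.count_nonzero(np.round(coeffs / step))
        if nonzero <= nonzero_target:
            return qp                          # finest QP meeting the bit budget
    return 51

coeffs = np.random.laplace(scale=10.0, size=(16, 16))   # toy transform block
print(pick_qp(coeffs, target_bits=400.0, theta=4.0))    # budget of ~100 nonzeros
```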
30

Shahid, Muhammad Zafar Javed. "Protection of Scalable Video by Encryption and Watermarking". Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20074.

Abstract:
Field of image and video processing has got lot of attention during the last two decades. This field now covers a vast spectrum of applications like 3D TV, tele-surveillance, computer vision, medical imaging, compression, transmission and much more. Of particular interest is the revolution being witnessed by the first decade of twenty-first century. Network bandwidths, memory capacities and computing efficiencies have got revolutionized during this period. One client may have a 100~mbps connection whereas the other may be using a 56~kbps dial up modem. Simultaneously, one client may have a powerful workstation while others may have just a smart-phone. In between these extremes, there may be thousands of clients with varying capabilities and needs. Moreover, the preferences of a client may adapt to his capacity, e.g. a client handicapped by bandwidth may be more interested in real-time visualization without interruption than in high resolution. To cope with it, scalable architectures of video codecs have been introduced to 'compress once, decompress many ways' paradigm. Since DCT lacks the multi-resolution functionality, a scalable video architecture is designed to cope with challenges of heterogeneous nature of bandwidth and processing power. With the inundation of digital content, which can be easily copied and modified, the need for protection of video content has got attention. Video protection can be materialized with help of three technologies: watermarking for meta data and copyright insertion, encryption to restrict access to authorized persons, and active fingerprinting for traitor tracing. The main idea in our work is to make the protection technology transparent to the user. This would thus result in a modified video codec which will be capable of encoding and playing a protected bitstream. Since scalable multimedia content has already started coming to the market, algorithms for independent protection of enhancement layers are also proposed
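As a rough illustration of what independent protection of enhancement layers can look like, the sketch below encrypts only enhancement-layer payloads with AES-CTR (using the Python cryptography package), leaving the base layer in the clear for all receivers. The NAL-unit representation and the layer-0 convention are assumptions of this sketch, not the scheme proposed in the thesis.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_enhancement_layers(nal_units, key):
        # nal_units: list of (layer_id, payload) pairs; layer 0 is the base
        # layer and stays unencrypted, so legacy receivers can still decode
        # a low-quality version while authorized ones unlock the full quality.
        protected = []
        for layer_id, payload in nal_units:
            if layer_id == 0:
                protected.append((layer_id, None, payload))
            else:
                nonce = os.urandom(16)   # per-unit CTR nonce
                enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
                protected.append((layer_id, nonce, enc.update(payload)))
        return protected

    # e.g. protected = encrypt_enhancement_layers(units, key=os.urandom(16))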
Style APA, Harvard, Vancouver, ISO itp.
31

Palaniappan, Ramanathan. "Scalable video communications: bitstream extraction algorithms for streaming, conferencing and 3DTV". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42732.

Pełny tekst źródła
Streszczenie:
This research investigates scalable video communications and its applications to video streaming, conferencing and 3DTV. Scalable video coding (SVC) is a layer-based encoding scheme that provides spatial, temporal and quality scalability. The heterogeneity of the Internet and of clients' operating environments necessitates adapting media content to ensure a satisfactory multimedia experience. SVC's layer structure allows the extraction of partial bitstreams at reduced spatial, quality and temporal resolutions, adjusting the media bitrate at fine granularity to changes in network state. The main focus of this work is developing such extraction algorithms for SVC. Based on a combination of metadata computations and prediction mechanisms, these algorithms evaluate the quality contribution of each layer in the SVC bitstream and make extraction decisions aimed at maximizing video quality within the available bandwidth. The techniques are applied to two-way interaction and one-way streaming of 2D and 3D content, with rate-distortion optimized extraction algorithms proposed according to the delay tolerance of each application. For conferencing, extraction decisions are made over single frames and frame pairs because of tight end-to-end delay constraints, while the extraction algorithms for 3D content streaming maximize the overall perceived 3D quality based on human stereoscopic perception. Compared to current extraction methods, the new algorithms offer better video quality at a given bitrate while performing fewer metadata computations in the post-encoding phase. The solutions proposed for each application achieve the recurring goal of maintaining the best possible end-user quality of multimedia experience in spite of network impairments.
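A toy version of such an extraction decision is sketched below: given assumed per-layer sizes, quality gains and decoding dependencies (hypothetical inputs, which the thesis estimates via metadata computations and prediction), it greedily keeps the decodable layer with the best quality-per-bit ratio until the bandwidth budget is spent.

    from dataclasses import dataclass
    from typing import Optional, Set

    @dataclass
    class Layer:
        name: str
        bits: int                         # size of the layer's NAL units
        gain: float                       # assumed quality contribution
        depends_on: Optional[str] = None  # lower layer needed for decoding

    def extract(layers, budget_bits) -> Set[str]:
        # Greedily keep the decodable layer with the best quality-per-bit
        # ratio until the bit budget is exhausted.
        kept, remaining = set(), budget_bits
        while True:
            usable = [l for l in layers
                      if l.name not in kept
                      and (l.depends_on is None or l.depends_on in kept)
                      and l.bits <= remaining]
            if not usable:
                return kept
            best = max(usable, key=lambda l: l.gain / l.bits)
            kept.add(best.name)
            remaining -= best.bits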
Style APA, Harvard, Vancouver, ISO itp.
32

Lalgudi, Hariharan G., Michael W. Marcellin, Ali Bilgin i Mariappan S. Nadar. "SCALABLE LOW COMPLEXITY CODER FOR HIGH RESOLUTION AIRBORNE VIDEO". International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/605492.

Pełny tekst źródła
Streszczenie:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Real-time transmission of airborne images to a ground station is highly desirable in many telemetering applications. Such transmission often passes through an error-prone, time-varying wireless channel, possibly under jamming conditions. Hence, a fast, efficient, scalable and error-resilient image compression scheme is vital to realize the full potential of airborne reconnaissance. JPEG2000, the current international standard for image compression, offers most of these features, but its computational complexity limits its use in some applications. We therefore present a scalable low complexity coder (SLCC) that possesses many desirable features of JPEG2000 while achieving high throughput.
Style APA, Harvard, Vancouver, ISO itp.
33

Dai, Min. "Rate-distortion analysis and traffic modeling of scalable video coders". Texas A&M University, 2004. http://hdl.handle.net/1969.1/3143.

Pełny tekst źródła
Streszczenie:
In this work we pursue two important goals for the transmission of scalable video over the Internet: providing high-quality video to end users, and properly designing networks and predicting network performance for video transmission based on the characteristics of existing video traffic. Rate-distortion (R-D) based schemes are often applied to improve and stabilize video quality; however, the lack of R-D modeling for scalable coders limits their application to scalable streaming. Thus, in the first part of this work we analyze the R-D curves of scalable video coders and propose a novel operational R-D model. We evaluate and demonstrate the accuracy of our R-D function on various scalable coders, such as Fine Granular Scalable (FGS) and Progressive FGS coders. Furthermore, given the time-constrained nature of Internet streaming, we propose another operational R-D model, accurate yet with low computational cost, and apply it to streaming applications for quality control. The Internet is a changing environment, yet most quality control approaches consider only constant bit rate (CBR) channels, and no specific studies have addressed quality control over variable bit rate (VBR) channels. To fill this void, we examine an asymptotically stable congestion control mechanism and combine it with our R-D model to deliver smooth visual quality to end users under various network conditions. Our second focus concerns the modeling and analysis of video traffic, which is crucial to protocol design and efficient network utilization for video transmission. Although scalable video traffic is expected to be an important source of Internet traffic, little work has been done on analyzing or modeling it. We therefore develop a frame-level hybrid framework for modeling multi-layer VBR video traffic, in which the base layer is modeled using a combination of wavelet- and time-domain methods and the enhancement layer is linearly predicted from the base layer using the cross-layer correlation.
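For a flavor of what an operational R-D model involves, the sketch below fits a generic three-parameter decay curve to (rate, distortion) samples measured from truncated bitstreams; the functional form, the sample values and the use of scipy's curve_fit are placeholders, not the model derived in the thesis.

    import numpy as np
    from scipy.optimize import curve_fit

    def rd_model(rate, a, b, c):
        # Placeholder operational form: distortion decays with rate toward
        # a floor c. Not the model proposed in the thesis.
        return a * np.exp(-b * rate) + c

    # Hypothetical (rate in kbps, MSE) pairs measured by decoding
    # truncated FGS bitstreams at several extraction points.
    rates = np.array([128.0, 256.0, 512.0, 1024.0, 2048.0])
    mses = np.array([95.0, 61.0, 34.0, 17.5, 9.2])

    params, _ = curve_fit(rd_model, rates, mses, p0=(100.0, 0.002, 5.0))
    print("predicted MSE at 768 kbps:", rd_model(768.0, *params))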
Style APA, Harvard, Vancouver, ISO itp.
34

Garbas, Jens-Uwe [Verfasser]. "Scalable Wavelet-Based Multiview Video Coding / Jens-Uwe Garbas". München : Verlag Dr. Hut, 2010. http://d-nb.info/1009972251/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
35

Mathew, Reji Kuruvilla Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Quad-tree motion models for scalable video coding applications". Awarded by: University of New South Wales. Electrical Engineering & Telecommunications, 2009. http://handle.unsw.edu.au/1959.4/44600.

Pełny tekst źródła
Streszczenie:
Modeling the motion that occurs between frames of a video sequence is a key component of video coding applications. Typically it is not possible to represent the motion between frames by a single model, so a quad-tree structure is employed in which smaller, variable-size regions or blocks may take on separate motion models. Quad-tree structures, however, suffer from two fundamental forms of redundancy. First, they exhibit structural redundancy due to their inability to exploit the dependence between neighboring leaf nodes with different parents. Second, the quad-tree structure itself can capture only horizontal and vertical edge discontinuities at dyadically related locations, which makes general discontinuities in the motion field, such as those caused by the boundaries of moving objects, difficult and expensive to model. In our work, we address structural redundancy by introducing leaf merging. We describe how this intuitively appealing merging step can be incorporated into quad-tree motion representations for a range of motion modeling contexts. In particular, we consider the impact of rate-distortion (R-D) optimized merging on two motion coding schemes: spatially predictive coding, as used by H.264, and hierarchical coding. Our experimental results demonstrate that the merging step provides significant gains in R-D performance for both schemes. Hierarchical coding has the advantage of offering scalable access to the motion information; however, because of the redundancy it introduces, it has not traditionally been pursued. Our work shows that much of this redundancy can be mitigated by merging. To enable scalable decoding, we employ a merging scheme that ensures the dependencies introduced via merging can be decoded hierarchically. Theoretical investigations confirm the inherent advantages of leaf merging for quad-tree motion models. To let quad-tree structures better model motion discontinuity boundaries, we add geometry information to the quad-tree representation. Motion and geometry are modeled with separate quad-tree structures, so each attribute can be refined separately. We extend the leaf merging paradigm to this dual-tree structure, allowing regions with both motion and geometry attributes to be formed subject to rate-distortion considerations. Hierarchical coding is employed for the motion and geometry information, and the merging process retains resolution scalability. Experimental results show that the R-D performance of the merged dual-tree representation is significantly better than that of conventional motion modeling schemes, and theoretical investigations show that if both motion and boundary geometry can be perfectly modeled, the merged dual-tree representation achieves optimal R-D performance. Finally, we explore the resolution scalability of merged quad-tree representations via a modified Lagrangian cost function that accounts for scalable decoding; the new cost objective considerably improves scalability performance without significant loss in overall efficiency and with competitive performance at all resolutions.
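The R-D optimized merging decision can be illustrated with the standard Lagrangian cost J = D + λR: two neighboring leaves are merged into one motion model when the merged cost, including the signaling overhead, does not exceed the cost of coding them separately. The one-bit merge flag and the (distortion, rate) tuples below are assumptions of this sketch.

    def lagrangian_cost(distortion, rate_bits, lam):
        # Standard Lagrangian rate-distortion cost J = D + lambda * R.
        return distortion + lam * rate_bits

    def should_merge(leaf_a, leaf_b, merged, lam, merge_flag_bits=1):
        # leaf_a, leaf_b, merged: (distortion, rate_bits) of coding the two
        # neighbouring leaves separately vs. with one shared motion model.
        j_separate = (lagrangian_cost(*leaf_a, lam)
                      + lagrangian_cost(*leaf_b, lam))
        j_merged = lagrangian_cost(merged[0], merged[1] + merge_flag_bits, lam)
        return j_merged <= j_separate

    # e.g. should_merge((120.0, 40), (110.0, 38), (245.0, 42), lam=0.85)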
Style APA, Harvard, Vancouver, ISO itp.
36

Gramsci, Shantanu Khan. "A scalable video streaming approach using distributed b-tree". Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/33848.

Pełny tekst źródła
Streszczenie:
Streaming video comprises most of today's Internet traffic, and it is predicted to grow further. Millions of users watch video over the Internet, and video sharing sites receive more than a billion hits per day; serving this massive user base has always been challenging. Over time a number of approaches have been proposed, mainly in two categories: client-server and peer-to-peer streaming. Despite the potential scalability benefits of peer-to-peer systems, the most popular video sharing sites today use the client-server model, leveraging the caching benefits of Content Delivery Networks. In such scenarios, video files are replicated among a group of edge servers and clients' requests are directed to an edge server instead of being served by the original video source server. The main bottleneck of this approach is that each server has a capacity limit beyond which it cannot serve properly. Instead of the traditional file-based streaming approach, this thesis proposes to use a distributed data structure as the underlying storage for streaming video. We developed a distributed B-tree running over a cluster of computers, stored video files in it and served them from there. We show that system throughput increases almost linearly as more computers are added to the system.
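The core idea, storing video segments under ordered keys so that streaming becomes a range scan, can be sketched with a single-node stand-in for the distributed B-tree (the distribution across a cluster, which the thesis implements, is omitted here):

    import bisect

    class SegmentStore:
        # Single-node stand-in for the distributed B-tree: keys
        # (video_id, seg_no) are kept sorted, so one video's segments are
        # contiguous and can be streamed with a range scan.
        def __init__(self):
            self._keys = []
            self._vals = {}

        def put(self, video_id, seg_no, chunk):
            key = (video_id, seg_no)
            if key not in self._vals:
                bisect.insort(self._keys, key)
            self._vals[key] = chunk

        def stream(self, video_id, start=0):
            # Yield segments of one video in order, like a B-tree range scan.
            i = bisect.bisect_left(self._keys, (video_id, start))
            while i < len(self._keys) and self._keys[i][0] == video_id:
                yield self._vals[self._keys[i]]
                i += 1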
Style APA, Harvard, Vancouver, ISO itp.
37

Li, Yuliang. "Congestion control for scalable video transmission over IP networks". Thesis, University of Bristol, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441312.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Dikgole, Lesang Vincent. "Towards a scalable video interactivity solution over the IMS". Master's thesis, University of Cape Town, 2011. http://hdl.handle.net/11427/10333.

Pełny tekst źródła
Streszczenie:
Includes bibliographical references (leaves 72-76).
The rapid increase in bandwidth, together with the interactivity and scalability of the Internet, sets the stage for a converged platform that supports interactive television. Next Generation Network platforms such as the IP Multimedia Subsystem (IMS) support Quality of Service (QoS), fair charging and possible integration with other services for the deployment of IPTV services. The IMS architecture uses the Session Initiation Protocol (SIP) for session control and the Real Time Streaming Protocol (RTSP) for media control. This study investigates video interactivity designs over the Internet, using an evaluation framework to examine the performance of both the SIP and RTSP protocols over the IMS across different access networks, and proposes a Three Layered Video Interactivity Framework (TLVIF) to reduce the video processing load on a server.
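For context on the two protocol roles evaluated here: SIP negotiates and tears down the session, while RTSP drives the media itself, e.g. a seek is an RTSP PLAY request with a Range header. A minimal sketch follows; the URL and session values are hypothetical.

    def rtsp_play(url, session, cseq, npt_start):
        # Build an RTSP/1.0 PLAY request seeking to npt_start seconds;
        # in the IMS design, the session itself was set up via SIP.
        return (f"PLAY {url} RTSP/1.0\r\n"
                f"CSeq: {cseq}\r\n"
                f"Session: {session}\r\n"
                f"Range: npt={npt_start}-\r\n\r\n")

    # e.g. rtsp_play("rtsp://iptv.example.com/ch1", "12345678", 4, 600)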
Style APA, Harvard, Vancouver, ISO itp.
39

Ni, Pengpeng. "Towards Optimal Quality of Experience via Scalable Video Coding". Licentiate thesis, Västerås : School of Innovation, Design and Engineering, Mälardalen University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-7421.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
40

Bhowmik, Deepayan. "Robust watermarking techniques for scalable coded image and video". Thesis, University of Sheffield, 2011. http://etheses.whiterose.ac.uk/1526/.

Pełny tekst źródła
Streszczenie:
In scalable image/video coding, high-resolution content is encoded to the highest visual quality and the bitstreams are adapted to cater for various communication channels, display devices and usage requirements. These content adaptations, which include quality, resolution and frame-rate scaling, may also affect content-protection data such as watermarks, and are therefore considered a potential watermark attack. This thesis proposes robust watermarking techniques for scalable coded image and video and reports improvements in robustness against various content adaptation attacks, such as JPEG 2000 for images and Motion JPEG 2000, MC-EZBC and H.264/SVC for video. Spread-spectrum, and particularly wavelet-based, image watermarking schemes often provide better robustness to compression attacks thanks to their multi-resolution decomposition, and were hence chosen for this work. A comprehensive, comparative analysis of the available wavelet-based watermarking schemes is performed by developing a new modular framework, the Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM). This analysis is used to derive a watermark embedding distortion model that establishes a directly proportional relationship between the sum of the energy of the selected wavelet coefficients and the distortion, i.e., the mean square error (MSE) in the spatial domain. Improvements in robustness, in turn, are achieved by modeling bit-plane discarding, which analyzes the effect of quantization and de-quantization within the image coder and ranks the wavelet coefficients and other parameters according to their ability to keep the watermark data intact under quality-scalable content adaptation. The work then extends these image watermarking models to video watermarking. A direct frame-by-frame extension that ignores motion results in flicker and other motion-mismatch artifacts in the watermarked video; motion-compensated temporal filtering (MCTF) provides a good framework for accounting for motion. A generalized MCTF-based spatio-temporal decomposition (2D+t+2D) video watermarking framework is developed to address these issues, and improvements in imperceptibility and robustness are achieved by embedding the watermark in 2D+t rather than in traditional t+2D MCTF-based schemes. Finally, the research outcomes above are combined into a novel scalable watermarking scheme that generates a distortion-constrained, robustness-scalable watermarked code stream which can be truncated at various points to obtain the watermarked image or video with the desired distortion-robustness trade-off.
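The embedding distortion model can be made concrete with a small sketch: for an additive spread-spectrum embedding in selected wavelet coefficients, and an orthonormal transform, the spatial-domain MSE equals the mean energy of the wavelet-domain modification (Parseval), so distortion scales with the energy of the selected coefficients. The embedding rule and strength alpha below are illustrative, not WEBCAM's exact scheme.

    import numpy as np

    def embed(coeffs, message_bit, alpha=0.05, key=1234):
        # Additive spread-spectrum embedding of one bit in selected wavelet
        # coefficients: c' = c + alpha * b * s * |c|, with a key-seeded
        # spreading sequence s in {-1,+1} and b = +1/-1 for the bit.
        rng = np.random.default_rng(key)
        s = rng.choice([-1.0, 1.0], size=coeffs.shape)
        b = 1.0 if message_bit else -1.0
        delta = alpha * b * s * np.abs(coeffs)
        # For an orthonormal wavelet transform, spatial-domain MSE equals the
        # mean squared wavelet-domain modification (Parseval), i.e. distortion
        # grows with the energy of the selected coefficients, as the model states.
        return coeffs + delta, np.mean(delta ** 2)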
Style APA, Harvard, Vancouver, ISO itp.
41

Li, Chin-Wei, i 李晉緯. "Video Quality Reference Models for Scalable Video Adaptation". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/24628436185458079013.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Pingtung University of Science and Technology
Department of Management Information Systems
Academic year 103 (2014/15)
This thesis discusses how to adapt MD-FEC channel coding for scalable video coding (SVC) videos in a burst-loss channel. We propose an empirical loss-distortion model and then use that model as a reference to design a fast MD-FEC adaptation algorithm that provides the necessary protection for SVC videos, with the goal of optimizing perceptual video quality. We conduct simulations on two H.264/SVC movie trailers, each consisting of 16 layers. The results reveal that our model provides a cost-effective distortion reference for MD-FEC to perform fast adaptation.
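A simplified picture of MD-FEC-style unequal error protection: with an RS(n, k) code, a layer is recovered whenever at least k of the n packets arrive, so giving more important layers a smaller k makes them survive worse losses. The sketch below assumes i.i.d. packet loss purely to keep it short (the thesis targets bursty channels), and the layer gains are hypothetical.

    from math import comb

    def p_decodable(n, k, p_loss):
        # Probability that at least k of n packets arrive under i.i.d. loss;
        # with RS(n, k), receiving any k packets recovers the layer.
        return sum(comb(n, r) * (1 - p_loss) ** r * p_loss ** (n - r)
                   for r in range(k, n + 1))

    def expected_quality(ks, gains, n, p_loss):
        # ks must be non-decreasing (stronger FEC on more important layers);
        # then layer i is usable exactly when >= ks[i] packets arrive, which
        # also guarantees all the layers it depends on are decodable.
        return sum(g * p_decodable(n, k, p_loss) for k, g in zip(ks, gains))

    # e.g. expected_quality(ks=[4, 6, 8], gains=[30.0, 4.0, 2.0], n=10, p_loss=0.1)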
Style APA, Harvard, Vancouver, ISO itp.
42

Mei-Heng, Lin, i 林美亨. "Face Detection in Scalable Video". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/03355681428582322876.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chiao Tung University
In-Service Master Program, Information Group, College of Computer Science
Academic year 95 (2006/07)
3D wavelet scalable video coding offers bandwidth, temporal and spatial scalability. It comprises three main modules (T_Module, S_Module and Entropy_Coding), which together produce the final compressed output bitstream. This thesis combines face detection with 3D wavelet scalable video. The advantages are: 1. a user downloads only one bitstream and extracts from it what is needed; 2. the server provides a single bitstream for all users to download, which also saves a great deal of disk space on the network. We propose a strategy that detects faces correctly and quickly under different environments.
Style APA, Harvard, Vancouver, ISO itp.
43

Αθανασόπουλος, Διονύσιος. "Motion compensation-scalable video coding". Thesis, 2006. http://nemertes.lis.upatras.gr/jspui/handle/10889/523.

Pełny tekst źródła
Streszczenie:
This master thesis examines scalable video coding based on the wavelet transform. Scalable video coding refers to a compression framework in which content representations with different quality, resolution and frame rate can be extracted from parts of one compressed bitstream. Scalability is an important property today, when video streaming and video communication take place over unreliable media and between terminals with different capabilities. Scalable video coding based on motion-compensated spatiotemporal wavelet decompositions is becoming increasingly popular, as it provides coding performance competitive with state-of-the-art coders while accommodating varying network bandwidths and different receiver capabilities (frame rate, display size, CPU, etc.) and providing solutions for network congestion and video server design. We first investigate the wavelet transform, multiresolution analysis and the lifting scheme, whose introduction renewed interest in scalable video coding, and then focus on scalable coding/decoding itself. Two architectures exist: the first applies the wavelet transform in the temporal direction before the spatial decomposition, while the second applies the spatial transform first. We focus on the first architecture, known as t+2D scalable coding. Several coding parameters affect the performance of such a scheme, such as the number of temporal decomposition levels and the interpolation filter used for subpixel motion accuracy. Extensive experiments show that the influence of these parameters depends on the video content. We therefore propose an adaptive way of choosing these parameters based on the content; experimental results show that the proposed method significantly improves performance while reducing the complexity of the coding procedure.
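The temporal stage of such a t+2D coder can be illustrated with the simplest lifting step, a Haar transform of a frame pair; real schemes motion-compensate the predict and update steps, which this sketch omits.

    import numpy as np

    def haar_mctf_pair(f0, f1):
        # Lifting steps: predict H = f1 - f0, then update L = f0 + H/2,
        # giving a high-pass (detail) and a low-pass (temporal average) frame.
        H = f1.astype(np.float64) - f0
        L = f0 + H / 2.0
        return L, H

    def inverse_haar_mctf_pair(L, H):
        # Exact inversion of the lifting steps, applied in reverse order.
        f0 = L - H / 2.0
        f1 = H + f0
        return f0, f1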
Style APA, Harvard, Vancouver, ISO itp.
44

Huang, Shang-Wen, i 黃尚文. "Face Tracking on Scalable Video". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/03362867186077437234.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chiao Tung University
In-Service Master Program, Information Group, College of Computer Science
Academic year 98 (2009/10)
Face detection in static images has been studied for a long time, and tracking faces in video sequences is a further, more difficult research problem. Most existing work operates on 2D images; relatively few studies use 3D imagery for face tracking. Moreover, most published face-tracking methods are still evaluated only on static databases and cannot be applied effectively in real time, with varying network bandwidth being one of the main difficulties. This thesis performs face tracking on scalable video based on a scalable video coding algorithm: the first part introduces the scalable video coding algorithm, and the second part presents the face tracking method. To reduce the amount of data processed during face tracking, we exploit the motion vectors (MVs) already present in the compressed video.
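The motion-vector idea can be sketched as follows: instead of re-detecting the face in every frame, the bounding box is shifted by the median motion vector of the blocks it covers, reusing MVs parsed from the bitstream. The 16x16 block size and the median rule are assumptions of this sketch, not the thesis's exact tracker.

    import numpy as np

    def update_face_box(box, mv_field, block=16):
        # box: (x, y, w, h) in pixels; mv_field: array of shape
        # (rows, cols, 2) holding one (dx, dy) motion vector per block,
        # taken from the compressed bitstream.
        x, y, w, h = box
        bx0, by0 = x // block, y // block
        bx1, by1 = (x + w - 1) // block + 1, (y + h - 1) // block + 1
        mvs = mv_field[by0:by1, bx0:bx1].reshape(-1, 2)
        dx, dy = np.median(mvs[:, 0]), np.median(mvs[:, 1])
        # The median of the block MVs moves the box; robust to outliers.
        return (int(round(x + dx)), int(round(y + dy)), w, h)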
Style APA, Harvard, Vancouver, ISO itp.
45

Kuo, Wan-ting, i 郭琬婷. "Bit-depth Scalable Video Coding". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/50887456223998110622.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
Academic year 96 (2007/08)
Scalable video coding (SVC) is currently being developed as an extension of the H.264/AVC video coding standard. It aims at supporting any combination of spatial, temporal and SNR scalability. Recently, images offering not only high-definition quality but also high dynamic range have become desirable. Since photographic color images with high dynamic range are increasingly popular and easily acquired, the JVT issued a Call for Proposals to standardize bit-depth scalable video coding within the SVC standard. This thesis proposes three H.264/AVC-compliant bit-depth scalable video coding schemes for different applications: LH mode (low bit-depth to high bit-depth), HL mode (high bit-depth to low bit-depth) and the merged LHHL mode. All of the schemes efficiently exploit the inter-layer relationship between the high bit-depth layer and the low bit-depth layer at the macroblock level. For LH mode, the bitstream is generated in an embedded fashion: it is backward compatible with H.264 and can be extended with a higher bit-depth enhancement layer. Depending on the channel condition, the transmitter can truncate the bitstream; for example, it may deliver only the low bit-depth bitstream (typically 8 bits) or transmit the whole bitstream, without any truncation, to reconstruct sequences at both bit depths. To improve coding efficiency, information from the low bit-depth sequence, such as residual data and reconstructed textures, is processed by inverse tone mapping and used as inter-layer prediction. HL mode and LHHL mode, by contrast, obtain the inter-layer prediction by tone mapping the high bit-depth information. According to the experimental results, HL mode and LHHL mode outperform LH mode and the traditional simulcast coding scheme.
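Inverse tone mapping as inter-layer prediction can be sketched with a simple linear mapping from reconstructed 8-bit base-layer samples to a 10-bit prediction, with the enhancement layer coding only the residual; the linear form and its fixed scale/offset are assumptions here (practical schemes signal the mapping).

    import numpy as np

    def inverse_tone_map(base8, scale=4, offset=0):
        # Predict 10-bit samples from the reconstructed 8-bit base layer.
        # scale=4 maps 0..255 onto 0..1020; real schemes signal the mapping.
        return np.clip(base8.astype(np.int32) * scale + offset, 0, 1023)

    def enhancement_residual(orig10, base8):
        # The enhancement layer codes only the prediction error left after
        # inverse tone mapping, exploiting the inter-layer correlation.
        return orig10.astype(np.int32) - inverse_tone_map(base8)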
Style APA, Harvard, Vancouver, ISO itp.
46

陳淵祥. "Progressive Watermarking in Scalable Video". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/91219865378921704804.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

王季培. "Embedding Watermarks in Scalable Video". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/49896600107568626587.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chiao Tung University
In-Service Master Program, Information Group, College of Computer Science
Academic year 94 (2005/06)
This thesis embeds watermarks in scalable video based on a scalable video coding algorithm; the algorithm itself is introduced first. The scalable video codec comprises a temporal-domain transform, a spatial-domain transform, entropy coding and motion estimation, where motion estimation selects the motion-vector coding mode. In the proposed watermarking algorithm, the watermark is embedded in the middle-frequency coefficients of the L frames after the spatial-domain transform. The watermark is embedded block by block, and a security key and a quantization coefficient are used to make it robust. Experiments were performed with different resolutions, rates and quantization coefficients. The results show that the watermark is detected equally well at different resolutions, whereas different quantization coefficients give different results: as the quantization coefficient increases, the BCR (bit correct rate) increases but the PSNR decreases. Attacks by frame dropping, frame averaging and line removal were also tried, and the proposed algorithm still detects the embedded watermark successfully.
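One common way to realize block-based embedding with a quantization coefficient is quantization index modulation (QIM), sketched below: a mid-frequency coefficient is snapped to an even or odd multiple of the step q depending on the bit, so a larger q gives more robust detection (higher BCR) but lower PSNR, matching the trade-off reported above. QIM here is a representative technique, not necessarily the thesis's exact rule.

    import numpy as np

    def qim_embed(coeff, bit, q):
        # Snap the coefficient to an even multiple of q for bit 0, odd for
        # bit 1, moving to the nearer admissible multiple when needed.
        idx = int(np.round(coeff / q))
        if idx % 2 != bit:
            idx += 1 if coeff >= idx * q else -1
        return idx * q

    def qim_detect(coeff, q):
        # The bit is the parity of the nearest multiple of q.
        return int(np.round(coeff / q)) % 2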
Style APA, Harvard, Vancouver, ISO itp.
48

Ugur, Kemal. "Scalable coding of H.264 video". Thesis, 2004. http://hdl.handle.net/2429/15728.

Pełny tekst źródła
Streszczenie:
Real-time transmission of digital video over media such as the Internet and wireless networks has recently been receiving much attention. A major challenge of video transmission over such networks is the variation of the available bandwidth over time. Traditional video coding standards, whose main objective is to optimize the quality of the transmitted video at a given bitrate, do not offer effective solutions to this problem, so different scalable video coding techniques have been developed. The latest video coding standard, H.264, provides superior compression efficiency over all previous standards, but does not include tools for coding the video in a scalable fashion. In this thesis, we introduce methods that allow encoding and transmitting H.264 video in a scalable fashion. The proposed method adapts the existing MPEG-4 Fine Granular Scalability (FGS) structure to the H.264 standard, minimizing the number of additional bits needed to adapt the advanced features of H.264 to the FGS system; it is highly error resilient and has low computational complexity. Because of its structure, FGS has low coding efficiency compared to single-layer coding. To overcome this problem, we also introduce a hybrid method that combines our H.264-based FGS approach with the stream-switching approach employed in the H.264 standard. By combining different techniques, the proposed system offers a complete solution for a wide range of applications, outperforming existing systems by offering optimal bandwidth utilization and improved video quality for the end user.
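The fine-granularity idea behind FGS can be sketched as bitplane coding of the enhancement-layer residual: the planes are sent most-significant first, so the stream can be cut at any point and still decode to a coarser refinement. This is a generic illustration, not the thesis's H.264-specific syntax.

    import numpy as np

    def fgs_bitplanes(residual):
        # Split the residual into a sign map plus magnitude bitplanes,
        # most significant plane first.
        mag = np.abs(residual).astype(np.uint32)
        sign = np.signbit(residual)
        n = max(int(mag.max()).bit_length(), 1)
        planes = [((mag >> b) & 1).astype(np.uint8)
                  for b in range(n - 1, -1, -1)]
        return sign, planes

    def fgs_reconstruct(sign, planes, kept):
        # Rebuild from the first `kept` planes; missing planes count as zero,
        # so any truncation point still yields a valid, coarser residual.
        mag = np.zeros(planes[0].shape, dtype=np.uint32)
        for plane in planes[:kept]:
            mag = (mag << 1) | plane
        mag <<= len(planes) - kept
        return np.where(sign, -(mag.astype(np.int64)), mag.astype(np.int64))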
Faculty of Applied Science
Department of Electrical and Computer Engineering
Graduate
Style APA, Harvard, Vancouver, ISO itp.
49

Macnicol, James Roy. "Scalable video coding by stream morphing". 2002. http://www.ozemail.com.au/~sigsegv/thesis_ss.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
50

Macnicol, James Roy. "Scalable video coding by stream morphing /". 2003. http://www.ozemail.com.au/~sigsegv/thesis_ss.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
