Theses on the topic "Codage adaptatif de la vidéo"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Codage adaptatif de la vidéo".
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Herrou, Glenn. "Résolution Spatio-temporelle Adaptative pour un Codage à Faible Complexité des Formats Vidéo Émergents". Thesis, Rennes, INSA, 2019. http://www.theses.fr/2019ISAR0020.
The definition of the latest Ultra-High Definition TV (UHDTV) standard aims to increase the user's quality of experience by introducing new video signal features such as 4K and High Frame-Rate (HFR). However, these new features multiply the amount of data to be processed before transmission to the end user by a factor of 8. In addition to this new format, broadcasters and Over-The-Top (OTT) content providers have to encode videos in different formats and at different bitrates because of the wide variety of consumer devices with heterogeneous video formats and network capacities. SHVC, the scalable extension of the latest video coding standard, High Efficiency Video Coding (HEVC), is a promising solution to address these issues, but its computationally demanding architecture reaches its limit with the encoding and decoding of the newly introduced, data-heavy immersive features of the UHDTV format. The objective of this thesis is thus to investigate lightweight scalable encoding approaches based on the adaptation of the spatio-temporal resolution. The first part of this document proposes two pre-processing tools, respectively using polyphase and wavelet frame-based approaches, to achieve spatial scalability with a slight complexity overhead. The second part addresses the design of a more conventional dual-layer scalable architecture using an HEVC encoder in the Base Layer (BL) for backward compatibility and a proposed low-complexity encoder, based on the local adaptation of the spatial resolution, for the Enhancement Layer (EL). Finally, the last part investigates spatio-temporal resolution adaptation. A variable frame-rate algorithm is first proposed as pre-processing; it is designed to locally and dynamically detect the lowest frame rate that does not introduce visible motion artifacts.
The proposed variable frame-rate and adaptive spatial resolution algorithms are then combined to offer a lightweight scalable coding of 4K HFR video content.
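The polyphase pre-processing mentioned above can be made concrete with a minimal toy sketch (our illustration, not the thesis's actual tool): a 2x2 polyphase decomposition splits a frame into four quarter-resolution sub-frames, any of which can serve as a low-resolution base layer, and the merge inverts the split losslessly, which is what keeps the complexity overhead slight.

```python
import numpy as np

def polyphase_split(frame):
    """2x2 polyphase decomposition: four quarter-resolution sub-frames."""
    return (frame[0::2, 0::2], frame[0::2, 1::2],
            frame[1::2, 0::2], frame[1::2, 1::2])

def polyphase_merge(p00, p01, p10, p11):
    """Inverse of polyphase_split: interleave the four sub-frames."""
    h, w = p00.shape
    frame = np.empty((2 * h, 2 * w), dtype=p00.dtype)
    frame[0::2, 0::2], frame[0::2, 1::2] = p00, p01
    frame[1::2, 0::2], frame[1::2, 1::2] = p10, p11
    return frame
```

Unlike a filtered downsampling, the split involves no arithmetic at all, only sample reindexing.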
Trioux, Anthony. "Étude et optimisation d'un système de vidéotransmission conjoint source-canal basé 'SoftCast'". Thesis, Valenciennes, Université Polytechnique Hauts-de-France, 2019. http://www.theses.fr/2019UPHF0018.
Linear video coding (LVC) schemes have recently demonstrated a high potential for delivering video content over challenging wireless channels. SoftCast is the pioneer of the LVC schemes. Different from current video transmission standards and particularly useful in broadcast situations, SoftCast is a joint source-channel coding system where pixels are processed by successive linear operations (DCT transform, power allocation, quasi-analog modulation) and transmitted directly, without quantization or coding (entropy or channel). This provides a received video quality directly proportional to the transmission channel quality, without any feedback information, while avoiding the complex adaptation mechanisms of conventional schemes. A first contribution of this thesis is the study of the end-to-end performance of SoftCast. Theoretical models are proposed that take into account the bandwidth constraints of the application, the power allocation, and the type of decoder used at reception (LLSE, ZF). Based on a subjective test campaign, a second part concerns an original study of the video quality and specific artifacts related to SoftCast. In a third part, preprocessing methods are proposed that increase the received quality by an average gain of 3 dB in PSNR. Finally, an adaptive algorithm that modifies the size of the group of pictures (GoP) according to the characteristics of the transmitted video content is proposed; this solution yields about 1 dB of additional PSNR gain.
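The linear chain described in this abstract can be sketched end to end. This is a simplified single-frame toy, not SoftCast itself: "chunks" are DCT rows, the Hadamard whitening and total-power normalization are omitted, and the noise level is relative. The g_i proportional to lambda_i^(-1/4) power allocation and the per-chunk LLSE weights do follow the scheme's published structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def dct_matrix(n):
    """Orthonormal DCT-II matrix (C @ C.T is the identity)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def softcast_roundtrip(frame, noise_std):
    """Toy SoftCast chain: 2-D DCT -> per-row power scaling -> AWGN -> LLSE -> inverse DCT."""
    n = frame.shape[0]
    C = dct_matrix(n)
    X = C @ frame @ C.T                       # 2-D DCT of the frame
    lam = np.mean(X ** 2, axis=1) + 1e-12     # per-row coefficient energy ("chunks")
    g = lam ** -0.25                          # SoftCast allocation: g_i ~ lambda_i^(-1/4)
    Y = g[:, None] * X + rng.normal(0.0, noise_std, X.shape)  # quasi-analog channel
    w = (g * lam) / (g ** 2 * lam + noise_std ** 2)           # per-row LLSE weights
    X_hat = w[:, None] * Y
    return C.T @ X_hat @ C                    # inverse 2-D DCT
```

Because there is no quantizer, the reconstruction quality degrades gracefully with the channel noise, which is the property the abstract highlights.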
Elhamzi, Wajdi. "Définition et implantation matérielle d'un estimateur de mouvement configurable pour la compression vidéo adaptative". PhD thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-01016351.
De Cuetos, Philippe. "Streaming de Vidéos Encodées en Couches sur Internet avec Adaptation au Réseau et au Contenu". PhD thesis, Télécom ParisTech, 2003. http://pastel.archives-ouvertes.fr/pastel-00000489.
Aklouf, Mourad. "Video for events : Compression and transport of the next generation video codec". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG029.
The acquisition and delivery of video content with minimal latency has become essential in several business areas such as sports broadcasting, video conferencing, telepresence, remote vehicle operation, and remote system control. The live streaming industry grew in 2020 and will expand further in the next few years with the emergence of new high-efficiency video codecs based on the Versatile Video Coding (VVC) standard and the fifth generation of mobile networks (5G). HTTP Adaptive Streaming (HAS) methods such as MPEG-DASH, using algorithms to adapt the transmission rate of compressed video, have proven very effective at improving the quality of experience (QoE) in a video-on-demand (VOD) context. Nevertheless, minimizing the delay between image acquisition and display at the receiver is essential in applications where latency is critical. Most rate adaptation algorithms are developed to optimize video transmission from a server in the core network to mobile clients. In applications requiring low-latency streaming, such as remote control of drones or broadcasting of sports events, the role of the server is played by a mobile terminal, which acquires and compresses the video and transmits the compressed stream via a radio access channel to one or more clients. Client-driven rate adaptation approaches are therefore unsuitable in this context because of the variability of the channel characteristics. In addition, HAS methods, whose decisions are made with a periodicity on the order of a second, are not sufficiently reactive when the server is moving, which may generate significant delays. It is therefore important to use a very fine adaptation granularity to reduce the end-to-end delay. The reduced size of the transmission and reception buffers (to minimize latency) makes rate adaptation more difficult in our use case.
When the bandwidth varies with a time constant smaller than the regulation period, bad transmission-rate decisions can induce a significant latency overhead. The aim of this thesis is to provide some answers to the problem of low-latency delivery of video acquired, compressed, and transmitted by mobile terminals. We first present a frame-by-frame rate adaptation algorithm for low-latency broadcasting. A Model Predictive Control (MPC) approach is proposed to determine the coding rate of each frame to be transmitted, using information about the buffer level of the transmitter and the characteristics of the transmission channel. Since the frames are coded live, a model relating the quantization parameter (QP) to the output rate of the video encoder is required. We have therefore proposed a new model linking the rate to the QP of the current frame and to the distortion of the previous frame. This model gives much better results in the context of a frame-by-frame decision on the coding rate than the reference models in the literature. In addition to the above techniques, we have also proposed tools to reduce the complexity of video encoders such as VVC. The current version of the VVC encoder (VTM10) has an execution time nine times that of the HEVC encoder, and is therefore not suitable for real-time encoding and streaming applications on currently available platforms. In this context, we present a systematic branch-and-prune method to identify a set of coding tools that can be disabled while satisfying a constraint on coding efficiency. This work contributes to the realization of a real-time VVC encoder.
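The frame-by-frame decision can be sketched with a toy rate model. The model form R(qp) = alpha * d_prev * 2^(-qp/6) and every constant below are our illustrative assumptions, loosely echoing the idea of linking the rate to the current QP and the previous frame's distortion; the actual MPC formulation in the thesis is richer.

```python
def choose_qp(buffer_bits, channel_rate, frame_period, d_prev,
              alpha=2.0e5, qp_range=range(20, 45)):
    """Pick the lowest QP whose predicted frame size keeps the buffer drained.

    Illustrative rate model: R(qp) = alpha * d_prev * 2**(-qp / 6),
    i.e. rate driven by the current QP and the previous frame's distortion.
    """
    drained = channel_rate * frame_period       # bits the channel absorbs per frame
    r = None
    for qp in qp_range:                         # low QP = high quality, tried first
        r = alpha * d_prev * 2.0 ** (-qp / 6.0)
        if buffer_bits + r - drained <= 0.0:
            return qp, r
    return max(qp_range), r                     # fall back to the coarsest QP
```

With a generous channel the controller settles on the finest QP; when the channel rate drops, it backs off frame by frame instead of waiting for a second-scale HAS decision.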
Abdallah, Alaeddine. "Mécanismes Cross-Layer pour le streaming vidéo dans les réseaux WIMAX". Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14142/document.
Driven by the increasing demand for multimedia services in broadband Internet networks, WiMAX technology has emerged as a competitive alternative to wired broadband access solutions. IEEE 802.16 provides high throughput while ensuring satisfactory QoS, and is particularly suitable for multimedia applications with strict QoS constraints. However, users' heterogeneity and diversity in terms of bandwidth, radio conditions, and available resources pose new deployment challenges. Indeed, multimedia applications need to interact with their environment to inform the access network about their QoS requirements and to adapt dynamically to changing network conditions. In this context, we propose two solutions for video streaming over 802.16 networks based on a cross-layer approach, considering both unicast and multicast transmissions in the uplink and downlink of one or more WiMAX cells. First, we propose an architecture that enables cross-layer adaptation and optimization of video streaming based on available resources. We define a Cross-Layer Optimizer (CLO) that benefits from the service-flow management messages exchanged between BS and SS at the MAC level to determine the adaptations needed to ensure optimal delivery of the application. Adaptations occur at two epochs: during admission of the video stream and during the streaming phase. The performance analysis, carried out through simulations, shows the effectiveness of the CLO in dynamically adapting the video data rate to network conditions and thus guaranteeing optimal QoS. Second, we propose a solution that enables IP multicast video delivery in a WiMAX network.
This solution strikes a compromise between the diversity of end-user requirements, in terms of radio conditions, modulation schemes, and available resources, and the SVC hierarchical video format, to offer the best video quality even to users with poor radio conditions. Indeed, we define a multicast architecture that allows each user to obtain a video quality proportional to its radio conditions and available bandwidth. To this end, several IP multicast groups are created depending on the SVC video layers. Our solution also optimizes the use of radio resources by exploiting the different modulations that can be selected by the end users.
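At its core, the layered-multicast idea above maps each user's radio conditions to the set of SVC layers it can decode, one IP multicast group per layer. The threshold values and names in this sketch are illustrative assumptions, not the thesis's algorithm.

```python
def receivable_layers(user_snr_db, layer_thresholds_db):
    """Count how many SVC layers a user can decode, given the increasing
    SNR required by the modulation/coding scheme carrying each layer."""
    n = 0
    for t in layer_thresholds_db:
        if user_snr_db < t:
            break           # this layer's modulation is too aggressive
        n += 1
    return n

def build_multicast_groups(users_snr_db, layer_thresholds_db):
    """Map each user to the multicast groups of the layers it can receive."""
    return {u: list(range(receivable_layers(s, layer_thresholds_db)))
            for u, s in users_snr_db.items()}
```

A user in poor conditions still joins the base-layer group, which is what makes the quality degrade proportionally rather than collapse.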
Bacquet, Anne-Sophie. "Transmission optimisée de flux vidéo haute définition H.264/AVC et SVC sur ADSL2 : adaptation conjointe des paramètres de codage source et de transmission". Valenciennes, 2010. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/eae7153a-baf5-4519-95e2-77387529496c.
The eligibility of an ADSL subscriber for video services strongly depends on the length of his line. Beyond a given distance, video transmission is no longer possible at the desired bit rate with a targeted quality-of-service level. In this work, we propose different solutions to extend the area of eligibility for high-definition video services. These solutions rely on bit-rate adaptation of H.264 compressed video streams, whose parameters are optimized jointly with the ADSL2 transmission parameters in terms of received quality. In a first solution, we consider non-scalable input high-definition compressed video streams: in this case, bit-rate reduction is performed by appropriate transrating. The adapted video stream is then equally protected and transmitted according to optimal ADSL2 parameters. Thanks to this solution, eligibility was extended by 1.2 km on average over the tested lines, with satisfying resultant visual quality. Two other solutions are then proposed for input video streams compressed with the scalable extension of H.264, named SVC. First, we propose a hybrid solution for bit-rate adaptation, relying on scalability and then transrating, which improves the quality of received videos by up to +3 dB in PSNR. Preliminary results obtained with the scalable extension of H.264/AVC led us to evaluate SVC performance for varying spatial resolutions (CIF to Full HD); we show that the performance of this codec is reduced for lower-resolution videos. The last proposed solution for ADSL eligibility extension is presented for CIF-resolution videos. It consists in a multi-resolution approach, where the scalable video stream is divided into two separate parts of variable relevance, which are unequally protected. This proposal improves performance by up to 0.5 dB in comparison with an equal-protection approach.
Hentati, Manel. "Reconfiguration dynamique partielle de décodeurs vidéo sur plateformes FPGA par une approche méthodologique RVC (Reconfigurable Video Coding)". Rennes, INSA, 2012. http://www.theses.fr/2012ISAR0027.
The main purpose of this PhD is to contribute to the design and implementation of a reconfigurable decoder using the MPEG-RVC standard. The MPEG-RVC standard, developed by MPEG, aims at providing a unified high-level specification of current and future MPEG video coding technologies by using a dataflow model named RVC-CAL; it offers the means to overcome the lack of interoperability between the many video codecs deployed in the market. In this work, we propose a rapid prototyping methodology to provide an efficient and optimized implementation of RVC decoders on target hardware. Our design flow is based on dynamic partial reconfiguration (DPR) to validate the reconfiguration approaches allowed by MPEG-RVC. Using the DPR technique, a hardware module can be replaced by another one which has the same function or the same algorithm but a different architecture. This concept allows the designer to configure various decoders according to the data inputs or to requirements such as latency, speed, and power consumption. The use of MPEG-RVC and DPR improves the development process and the decoder performance. However, DPR poses several problems, such as the placement of tasks and the fragmentation of the FPGA area, which influence application performance. We therefore need to define methods for the placement of hardware tasks on the FPGA. In this work, we propose an off-line placement approach based on linear programming to find the optimal placement of hardware tasks and to minimize resource utilization. Application of different data combinations and a comparison with state-of-the-art methods show the high performance of the proposed approach.
Le Guen, Benjamin. "Adaptation du contenu spatio-temporel des images pour un codage par ondelettes". PhD thesis, Université Rennes 1, 2008. http://tel.archives-ouvertes.fr/tel-00355207.
In this thesis, we approach the adaptivity problem from a different angle. The idea is to deform the content of an image to adapt it to the standard separable wavelet kernel. The deformation is modeled by a deformable mesh, and the adaptation criterion used is the description cost of the deformed image. An energy minimization similar to motion estimation is set up to compute the mesh parameters. At the end of this analysis phase, the image is represented by a deformed image with a lower coding cost, together with the deformation parameters. After coding, transmission, and decoding of this information, the original image can be synthesized by inverting the deformation. The compression performance of this spatial analysis-synthesis scheme is studied and compared with that of JPEG2000. Visually, we observe a better reconstruction of image contours, with a significant attenuation of ringing artifacts.
Keeping the idea of adapting image content to a fixed decomposition kernel, we then propose a spatio-temporal analysis-synthesis coding scheme dedicated to video. The analysis takes a group of frames (GOF) as input and outputs a group of deformed frames whose content is adapted to a fixed 3D horizontal-vertical-temporal decomposition. The scheme is designed so that a single geometry is estimated and transmitted for the whole GOF. Compression results are presented using the deformable mesh to model both geometry and motion. Although only one geometry is encoded, we show that its cost is too high to allow a significant improvement in visual quality compared with an analysis-synthesis scheme exploiting motion alone.
Derviaux, Christian. "Evaluation de la visibilité des effets de blocs dans le codage MPEG : application à l'amélioration de la qualité visuelle de séquences video". Valenciennes, 1998. http://www.theses.fr/1998VALE0032.
Fatani, Imade Fahd Eddine. "Contribution à l’étude de l’optimisation conjointe source-canal d’une transmission vidéo dans un contexte MIMO sans fil : application à la vidéosurveillance embarquée pour les transports publics". Valenciennes, 2010. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/f1e3d785-7cbb-4d39-86d8-eec5433f62a0.
Video monitoring applications in the public transport field rely on wireless telecommunication systems which require high data rates between vehicles and the ground and a high Quality of Service (QoS). To satisfy these constraints, we propose to take into account both the transmission parameters and the video coding by combining Multiple Description Coding (MDC) and Region-Of-Interest coding with different MIMO (Multiple Input Multiple Output) schemes on the basis of the PHY layer of the IEEE 802.11n Wi-Fi standard in a metro environment (tunnel). First, we show that it is possible to increase the performance of a MIMO system by optimizing bit and power allocation independently of the type of information to be transmitted. Two approaches are proposed; they lead to an optimal repartition of resources, reach maximal diversity order, and outperform the max-SNR precoder. Second, the association of MDC with MIMO schemes is introduced to adapt the video content to the multi-antenna structure, particularly when channel knowledge is not available at the transmitter side. The performance can be further enhanced using a low-data-rate return link and considering Orthogonalized Spatial Multiplexing (OSM) and precoded OSM. When perfect channel information is available at the transmitter side thanks to a high-data-rate return link, MIMO schemes are associated with hierarchical video coding consisting in the separation of regions of interest in the scene. The stream associated with the area of maximal interest is transmitted on the eigen-channel with the highest gain. This strategy guarantees better robustness and an acceptable QoS for the video streams observed in the control center. The creation of the different regions of interest is based on the Flexible Macroblock Ordering (FMO) technique introduced in the new compression standard H.264/AVC.
We show the value of the different transmission schemes proposed for enhancing the QoS of a video stream without increasing the transmitted power or the number of radio access points along the infrastructure.
Ahmed, Toufik. "Adaptative packet video streaming over IP networks : a cross layer approach". Versailles-St Quentin en Yvelines, 2003. http://www.theses.fr/2003VERS0042.
While there is an increasing demand for streaming video applications on IP networks, various network characteristics make the deployment of these applications more challenging than traditional Internet applications like email and the web. Applications that transmit audiovisual data over IP must cope with the time-varying bandwidth and delay of the network and must be resilient to packet loss and error. This dissertation examines these challenges and presents cross-layer video streaming over large-scale IP networks with a statistical quality of service (QoS) guarantee. Video sequences are typically compressed according to the emerging MPEG-4 multimedia framework to achieve bandwidth efficiency and content-based interactivity. The original characteristic of MPEG-4 is to provide an integrated object-oriented representation and coding of natural and synthetic audio-visual content for manipulation and transport over a broad range of communication infrastructures. The originality of this work is to propose a cross-layer approach for resolving some of the critical issues in delivering packet video data over IP networks with satisfactory quality of service. While current and past work on this topic respects the protocol-layer isolation paradigm, the key idea behind our work is to break this limitation and instead inject content-level semantics and service-level requirements into the proposed IP video transport mechanisms and protocols.
Viswanathan, Kartik. "Représentation reconstruction adaptative des hologrammes numériques". Thesis, Rennes, INSA, 2016. http://www.theses.fr/2016ISAR0012/document.
With the increased interest in 3D video technologies for commercial purposes, there is renewed interest in holography for providing true, life-like images, mainly for the hologram's capability to reconstruct all the parallaxes needed for truly immersive views that can be observed by anyone (human, machine, or animal). But the large amount of information contained in a hologram makes it unsuitable for real-time transmission over existing networks. In this thesis we present techniques to effectively reduce the size of the hologram by pruning portions of it based on the position of the observer. A large amount of the information contained in the hologram is not used if the number of observers of an immersive scene is limited. Under this assumption, parts of the hologram can be pruned out, retaining only the parts that can cause diffraction at an observer point. For reconstruction, these pruned holograms can be propagated numerically or optically. Wavelet transforms are employed to capture localized frequency information from the hologram; they are selected for their localization capabilities in the space and frequency domains. Gabor and Morlet wavelets possess good localization in space and frequency and are good candidates for the view-based reconstruction system. Shannon wavelets are also employed for this purpose, and the frequency-domain application of the Shannon wavelet is shown to provide fast calculations for real-time pruning and reconstruction.
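The observer-driven pruning can be sketched in one dimension with a plain FFT instead of the Gabor, Morlet, or Shannon wavelets the thesis actually uses (a deliberate simplification of ours): each spatial frequency f diffracts at angle arcsin(wavelength * f), so components aimed away from the observer's angular window can simply be zeroed before propagation.

```python
import numpy as np

def prune_hologram(holo, pitch, wavelength, theta_obs, half_aperture):
    """Zero every plane-wave component not diffracting toward the observer.

    A spatial frequency f (cycles/m) diffracts at angle arcsin(wavelength * f);
    only angles within [theta_obs - half_aperture, theta_obs + half_aperture]
    are kept. 1-D toy version of view-dependent pruning.
    """
    n = holo.shape[0]
    f = np.fft.fftfreq(n, d=pitch)                       # spatial frequencies
    theta = np.arcsin(np.clip(wavelength * f, -1.0, 1.0))
    keep = np.abs(theta - theta_obs) <= half_aperture    # observer's window
    H = np.fft.fft(holo)
    return np.fft.ifft(H * keep), keep
```

A cosine fringe carries two conjugate plane waves; pruning toward one observer keeps only the component diffracting toward that observer, halving the retained energy in this toy case.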
Kimiaei, Asadi Mariam. "Adaptation de contenu multimedia avec MPEG 21 : conversion de ressources et adaptation sémantique de scènes". Paris, ENST, 2005. http://www.theses.fr/2005ENST0040.
The objective of this PhD thesis is to propose new, simple, and efficient techniques and methodologies to support the adaptation of multimedia content to constrained contexts. The work is based on parts of the on-going MPEG-21 standard, which aims at defining the different components of a multimedia distribution framework. The thesis is divided into two main parts: single-media adaptation and semantic adaptation of composed multimedia documents. In single-media adaptation, the media is adapted to context constraints such as terminal capabilities, user preferences, network capacities, and author recommendations. In this type of adaptation, the media is considered alone, i.e., as a single medium. We have defined description tools extending the MPEG-21 DIA schema for the description of hints and suggestions on different media adaptations and their corresponding parameters. In the semantic adaptation of structured multimedia documents, we address adaptation based on the temporal, spatial, and semantic relationships between media objects. When adapting a multimedia presentation, the adaptation process needs access to the semantic information of the presentation in order to preserve the consistency and meaningfulness of the adapted scene. We have defined a language, as a set of descriptors, for expressing the semantic information of composed multimedia content. In our implementations, we used SMIL 2.0 for describing multimedia scenes.
Brunel, Lionel. "Indexation vidéo par l'analyse de codage". PhD thesis, Université de Nice Sophia-Antipolis, 2004. http://tel.archives-ouvertes.fr/tel-00214113.
Huchet, Gregory. "Nouvelles méthodes de codage vidéo distribué". Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26267/26267.pdf.
Lahsini, Cyrine. "Codage distribué pour la compression vidéo". Télécom Bretagne, 2013. http://www.theses.fr/2013TELB0175.
Traditional video coding systems such as H.26x or MPEG-x use motion-compensated predictive coding at the encoder to exploit temporal dependencies between successive frames of a video sequence. In these systems, the complexity of the encoder is 5 to 10 times greater than that of the decoder. This asymmetrical model is suitable for transmitting video from a server to mobile devices, but not for sending video from mobile devices to a base station. For this type of application, it is better to seek the dual of the previous scheme: an encoder with relatively low complexity and a decoder with higher processing power. Distributed video coding, also called Wyner-Ziv coding, is a video coding paradigm which combines the low complexity and robustness of Intra-mode frame coding with the compression efficiency of Inter-mode frame coding. With the advent of turbo codes in the 90s, this technique experienced a resurgence of interest. In the first part, we study the principle of distributed video coding in the pixel domain. To improve the performance of the reference model, we introduce at the receiver a BCJR source decoder which exploits the correlation of the video source. A video stream indeed contains a large amount of temporal correlation between successive frames of the sequence, and of spatial correlation between the pixels within a frame. The aim of our study is to propose a new architecture that exploits, in addition to the temporal correlation, the spatial correlation of Wyner-Ziv frames. The source is modeled as Markovian; it therefore carries residual redundant information that provides additional information to the receiver, which can be used to correct some of the errors introduced by the virtual channel through a joint source-channel decoding scheme.
The second part of the thesis is devoted to the implementation of a new video coding scheme with a low-complexity encoder, suitable for applications with limited computational power at the transmitter. The study is performed in the pixel and transform domains. The proposed schemes exploit both the temporal and spatial correlation of the video sequence, introducing an arithmetic coder used alternately with the turbo code. In the pixel domain, we consider a larger GOP size; keyframes are encoded and decoded using an intra codec, as in distributed video coding. For the remaining frames of the GOP, we exploit the temporal correlation using an entropy (arithmetic) encoder for the two most significant bitplanes only; the other bitplanes are encoded using a turbo code. In the transform domain, the temporal correlation is exploited by using the arithmetic encoder for the DC coefficients only; the other DCT coefficients are encoded using the turbo code.
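The bitplane partitioning this pixel-domain scheme rests on is easy to make concrete. The sketch below only splits a frame into planes and reassembles it; the arithmetic coder for the two most significant planes and the turbo code for the rest are omitted.

```python
import numpy as np

def split_bitplanes(frame, n_planes=8):
    """Split an 8-bit frame into bitplanes, most significant first."""
    return [((frame >> (n_planes - 1 - b)) & 1).astype(np.uint8)
            for b in range(n_planes)]

def merge_bitplanes(planes):
    """Reassemble a frame from its bitplanes (MSB first)."""
    out = np.zeros_like(planes[0], dtype=np.uint8)
    for p in planes:
        out = (out << 1) | p       # shift in one plane at a time
    return out
```

In the scheme above, `planes[0]` and `planes[1]` would go to the entropy coder and `planes[2:]` to the turbo coder; the split itself is lossless.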
Huchet, Grégory. "Nouvelles méthodes de codage vidéo distribué". Doctoral thesis, Université Laval, 2009. http://hdl.handle.net/20.500.11794/20836.
Maugey, Thomas. "Codage vidéo distribué de séquences multi-vues". PhD thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00577147.
Feideropoulou, Georgia. "Codage Conjoint Source-Canal des Sources Vidéo". PhD thesis, Télécom ParisTech, 2005. http://pastel.archives-ouvertes.fr/pastel-00001294.
Haj Taieb, Mohamed. "Codage vidéo distribué utilisant les turbo codes". Thesis, Université Laval, 2013. http://www.theses.ulaval.ca/2013/30170/30170.pdf.
Most video compression processing is usually performed at the transmitter in the conventional video coding standards (MPEG, H.263, H.264/AVC [1]). This choice is due to the fact that the transmitter has full knowledge of its source, which ensures easy and efficient compression, and the usual video transmission applications involve a flow from a centralized station with higher computational capacity to a number of receivers; the compression task is thus performed only once, by a computationally adapted station. However, with the emergence of locally distributed wireless surveillance cameras, the growth of cellular interactive video applications, and many other applications involving several low-cost video encoders at the expense of a high-complexity central decoder, the compression task can no longer be handled by the encoder, and the compression complexity should be transferred to the decoder. Slepian and Wolf's information-theoretic result on lossless coding of correlated distributed sources [2], and its extension to lossy source coding with side information at the decoder introduced by Wyner and Ziv [3], constitute the theoretical basis of distributed source coding. These theoretical concepts have given birth to a wide field of applications, such as the distributed video coding paradigm established a few years ago. In this doctoral thesis, we present a study of various distributed video coding schemes in the pixel and transform domains. The decoder exploits the correlation between the video sequence to be transmitted by the encoder and the side information; this correlation can be seen as a virtual channel whose input is the frame to be transmitted and whose output is the side information. Turbo coding is used to generate parity bits which are sent gradually, upon decoder requests, to correct the errors in the side information, considered a noisy version of the original frame.
In this work, we implement various algorithms for distributed video coding based on turbo codes in order to approach the efficiency of conventional video encoders.
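The decoder-driven parity-request loop described above can be sketched abstractly. Everything concrete here is a stand-in of ours: the "parity batches" are toy (position, bit) hints rather than real turbo parity, and a CRC plays the role of the decoder's success test.

```python
import zlib

def wyner_ziv_loop(side_info, parity_batches, checksum, max_requests):
    """Decoder loop: accumulate parity batches over the feedback channel
    until the tentative reconstruction matches the encoder's checksum.

    Toy stand-in for turbo decoding: each batch is a list of (position, bit)
    hints that overwrite bits of the noisy side information.
    """
    decoded = list(side_info)
    for k, batch in enumerate(parity_batches[:max_requests]):
        for pos, bit in batch:                 # apply this batch of "parity"
            decoded[pos] = bit
        if zlib.crc32(bytes(decoded)) == checksum:
            return decoded, k + 1              # converged after k+1 requests
    return None, max_requests                  # decoding failure
```

The structural point survives the simplification: the rate is not fixed by the encoder but grows on demand until the side information is corrected.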
Gorin, Jérôme. "Machine virtuelle universelle pour codage vidéo reconfigurable". PhD thesis, Institut National des Télécommunications, 2011. http://tel.archives-ouvertes.fr/tel-00997683.
Gorin, Jérôme. "Machine virtuelle universelle pour codage vidéo reconfigurable". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2011. http://www.theses.fr/2011TELE0025.
Texto completo
This thesis proposes a new paradigm that abstracts the architecture of computer systems for representing virtual machines' applications. Current applications are based on an abstraction of the machine's instructions and on an execution model that reflects the operations of these instructions on the target machine. While these two models make applications portable across a wide range of systems, they do not express concurrency between instructions. Expressing concurrency is nevertheless essential to optimize the processing of applications, as the number of processing units in computer systems keeps increasing. We first develop a "universal" representation of applications for virtual machines based on dataflow graph modeling. An application is modeled by a directed graph in which vertices are computation units (the actors) and edges represent the flow of data between vertices. Each computation unit can then be handled independently on separate resources, making the concurrency between instructions explicit. Exploiting this new description formalism requires a change in programming rules. To that purpose, we introduce and define a "Minimal and Canonical Representation" of actors, based both on actor-oriented programming and on the instruction abstraction used in existing virtual machines. Our major contribution, which incorporates the two new representations proposed, is the development of a "Universal Virtual Machine" (UVM) managing specific mechanisms of adaptation, optimization and scheduling, based on the Low-Level Virtual Machine (LLVM) infrastructure. The relevance of the UVM is demonstrated on the MPEG Reconfigurable Video Coding (RVC) standard; indeed, MPEG RVC provides a reference decoder application compliant with the MPEG-4 part 2 Simple Profile in the form of a dataflow graph.
One application of this thesis is a new dataflow description of a decoder compliant with the MPEG-4 part 10 Constrained Baseline Profile, which is twice as complex as the reference MPEG RVC application. Experimental results show nearly a twofold performance gain on two cores compared to single-core execution. The optimizations developed yield a 25% performance gain, with compile times reduced by half. This work demonstrates the operational nature of the standard and offers a universal framework that extends beyond the video domain (3D, sound, still images, ...).
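The actor/edge model described in this abstract can be sketched in a few lines: actors fire when their input FIFOs hold tokens, so independent actors can run concurrently. A minimal illustration (the actor class, scheduler, and two-actor graph are assumptions for demonstration, not the UVM's actual machinery):

```python
from collections import deque

class Actor:
    """A computation unit that fires when all its input FIFOs hold a token."""
    def __init__(self, fn, inputs, outputs):
        self.fn, self.inputs, self.outputs = fn, inputs, outputs

    def can_fire(self):
        return all(q for q in self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs]
        result = self.fn(*args)
        for q in self.outputs:
            q.append(result)

# A tiny graph: tokens -> double -> plus_one -> results.
a_b = deque([1, 2, 3])   # edge feeding the first actor
b_c = deque()            # edge between the two actors
c_out = deque()          # terminal edge collecting results

double = Actor(lambda x: 2 * x, [a_b], [b_c])
plus_one = Actor(lambda x: x + 1, [b_c], [c_out])

# A naive scheduler: repeatedly fire any actor whose inputs are ready.
# In a real runtime each actor could run on its own core.
actors = [double, plus_one]
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.fire()

print(list(c_out))
```

Because the only dependencies are the FIFOs, the scheduler is free to interleave or parallelize firings, which is the concurrency the dataflow representation makes explicit.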
Crave, Olivier. "Approches théoriques en codage vidéo robuste multi-terminal". Phd thesis, Télécom ParisTech, 2008. http://pastel.archives-ouvertes.fr/pastel-00004774.
Texto completo
Rossignol, François. "Codage fractal basé-région de séquences vidéo segmentées". Mémoire, Université de Sherbrooke, 2003. http://savoirs.usherbrooke.ca/handle/11143/1241.
Texto completo
André, Thomas. "Codage vidéo scalable et mesure de distorsion entropique". Nice, 2007. http://www.theses.fr/2007NICE4051.
Texto completo
The current video compression standards MPEG-4 and H.264 improve the trade-off between rate and quality of compressed videos. They also support new features such as scalability, which enables the user to decompress a single video bit-stream at different rates and spatio-temporal resolutions without any additional computation. However, scalability often results in a performance drop at a given resolution and rate. In a first part, we propose a scalable motion-compensated wavelet-based video coder. Wavelet transforms bring more flexibility and offer natural support for scalability, so that it can be implemented with very limited performance loss. Our main contributions concern motion-compensated temporal filtering, optimal motion vector estimation, model-based bit allocation, minimal-cost scalability and occlusion management. Moreover, the proposed decoder is entirely compatible with the still-image coding standard JPEG2000. In a second part, we introduce a distortion measure based on the conditional differential entropy of the input signal given its quantized value. Mean squared error has been widely used as a distortion criterion, but it tends to favor high-energy coefficients; although this behavior is relevant at high bit-rates, it does not always lead to better visual quality in the general case. We investigate the intrinsic properties of the proposed distortion measure and integrate it into optimal scalar and vector quantizers. We also propose a fast bit-allocation algorithm based on this distortion measure, which greatly improves the visual quality of highly compressed images while preserving JPEG2000 compatibility.
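The conditional differential entropy h(X | Q(X)) mentioned in this abstract can be estimated empirically: within each quantization cell, histogram the samples on finer sub-bins and average the resulting entropy over cells. A toy sketch (uniform source and uniform quantizer are assumptions chosen so the true value, log2 of the cell width, is known; the thesis's measure is used on real subband coefficients):

```python
import math
import random
from collections import Counter

rng = random.Random(7)
n = 200000
x = [rng.random() for _ in range(n)]   # uniform source on [0, 1)
step = 1 / 8                           # uniform quantizer, 8 cells
M = 16                                 # sub-bins per cell for the estimate

cell_counts = Counter()
joint = Counter()
for v in x:
    cell = min(int(v / step), 7)
    sub = min(int((v - cell * step) * M / step), M - 1)
    cell_counts[cell] += 1
    joint[(cell, sub)] += 1

# h(X|Q(X)) = sum over cells of P(cell) * h(X | cell), estimated by
# treating each sub-bin as a constant-density slice of width step/M.
sub_width = step / M
h = 0.0
for (cell, sub), c in joint.items():
    p_joint = c / n
    p_cond = c / cell_counts[cell]
    h += p_joint * (-math.log2(p_cond / sub_width))

print(f"h(X|Q(X)) ~ {h:.2f} bits (theory for this source: log2(1/8) = -3)")
```

Unlike MSE, this measure depends on the whole conditional distribution inside each cell, not just the squared deviation, which is what lets it decouple perceived quality from coefficient energy.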
Vu, Thuong Van. "Application du codage réseau dans l'environnement sans fil : conditions de codage et contrôle de redondance adaptatif". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2014. http://tel.archives-ouvertes.fr/tel-01022166.
Texto completo
Vu, Thuong Van. "Application du codage réseau dans l'environnement sans fil : conditions de codage et contrôle de redondance adaptatif". Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066062.
Texto completo
Since its introduction in 2001, network coding has gained significant attention from the research community, driven by the need to improve communication in computer networks. In short, network coding is a technique that allows nodes to combine several native packets into one coded packet for transmission, instead of simply forwarding packets one by one. With network coding, a network can reduce the number of transmissions, shortening data transfer time and increasing throughput. This breaks the long-standing assumption that information must be kept separate and whole: information must not be tampered with, but it can be mixed and transformed. In this thesis, we focus on two main benefits of network coding: throughput improvement and transmission reliability against random losses. For throughput improvement, we use inter-flow network coding and extend the coding conditions. For transmission reliability, we use intra-flow network coding and propose new coding schemes. The results obtained via NS-2 simulations are quite promising.
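The classic inter-flow example behind this abstract is the two-way relay: instead of forwarding each packet separately (4 transmissions), the relay broadcasts the XOR of the two packets (3 transmissions) and each endpoint decodes with the packet it already holds. A minimal sketch (packet contents are illustrative):

```python
def xor(p, q):
    """XOR two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(p, q))

# Two-way relay: A and B exchange packets through a relay node R.
pkt_a = b"packet-from-A"
pkt_b = b"packet-from-B"   # same length; a real system would pad

# Store-and-forward needs 4 transmissions (A->R, B->R, R->A, R->B).
# With network coding, R broadcasts one coded packet: 3 transmissions.
coded = xor(pkt_a, pkt_b)

# Each node decodes using the native packet it already knows.
decoded_at_a = xor(coded, pkt_a)   # A recovers B's packet
decoded_at_b = xor(coded, pkt_b)   # B recovers A's packet
print(decoded_at_a, decoded_at_b)
```

The "coding conditions" studied in the thesis generalize exactly this requirement: a node may only mix packets whose intended receivers can supply the missing ingredients for decoding.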
Wang, Shan. "Stratégie de codage conjoint de séquences vidéo basé bandelettes". Poitiers, 2008. http://theses.edel.univ-poitiers.fr/theses/2008/Wang-Shan/2008-Wang-Shan-These.pdf.
Texto completo
The work of this thesis spans image compression as well as digital communications. Image processing is applicable in many communication-related fields such as medical imaging, telemedicine, videoconferencing, cinema and TV. Digital transmission systems handle the information exchange between a source and a receiver. Since the physical medium supporting such transmission is not perfect, the transmitted information can be exposed to several types of interference, resulting in errors at the receiver; furthermore, the communication system itself can cause errors. For the sake of performance, many image transmission systems exploit the limitations of human visual perception. In addition, we take into account all the elements of the digital communication chain in order to obtain a scheme that is robust under difficult transmission conditions as well as over low-bandwidth channels. To this end, we propose to use the discrete wavelet transform (DWT) in the context of wireless video transmission, unlike current standards such as MPEG-4 and H.264/AVC, which rely primarily on the DCT. This guarantees more flexibility in prioritizing the information to be coded. Indeed, given the variety of applications and network types in a communication system, the quality of service at the receiver can vary widely. In addition to spatial compression, the compression ratio can be increased using GOP (Group of Pictures) techniques and motion compensation (motion vectors) to exploit the similarities between successive images. To make the coding system more robust, we used vector quantization with codebooks built using self-organizing maps (the SOM algorithm); these codebooks can be superimposed on quadrature amplitude modulation (QAM) constellations. The subsequent fixed-length coding lowers the compression ratio but better preserves the transmitted data against channel errors.
Cammas, Nathalie. "Codage vidéo scalable par maillages et ondelettes t+2D". Rennes 1, 2004. https://hal.archives-ouvertes.fr/tel-01131881.
Texto completo
Robert, Antoine. "Transformées orientées par blocs pour le codage vidéo hybride". Phd thesis, Télécom ParisTech, 2008. http://pastel.archives-ouvertes.fr/pastel-00003631.
Texto completoRobert, Antoine. "Transformées orientées par blocs pour le codage vidéo hybride". Phd thesis, Paris, ENST, 2008. https://pastel.hal.science/pastel-00003631.
Texto completo
This thesis deals with improving state-of-the-art video coders by exploiting the structural information of images. Classical transforms (DCT, wavelets, ...) do not represent geometrical structures effectively, a problem historically addressed by second-generation wavelets; other studies use DCT-based transforms with orientations in order to represent these contours. The aim is to improve the coding of residual H.264/AVC images (spatial or temporal) by using their geometrical structures. To that end, an oriented method based on pre- and post-processing, combined with a coefficient scan adapted to the orientation, has been defined. The pre-processing stage performs pseudo-rotations that straighten the blocks of the image toward the horizontal or vertical axis. This operation is realized by shears, that is, by circular shifts of the pixels, which improves the decorrelation achieved by the DCT. Inserted in an H.264/AVC coder, this method shows good coding performance. However, the coding cost of the orientations, selected by a rate-distortion criterion, is high and degrades performance at low bitrates; the method remains more efficient than H.264/AVC at high bitrates (QP<30). The quantized coefficients are then scanned according to vertical, horizontal or zigzag patterns depending on the rectification applied. This adaptive scan saves rate, improving the global method, which becomes more efficient than H.264/AVC at medium bitrates (QP<35).
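The shear operation this abstract describes is just a per-row circular shift whose offset grows with the row index. A toy sketch on an 8x8 block with a diagonal edge (the block and shear strength are illustrative; the thesis selects the shear per block by a rate-distortion criterion):

```python
def shear(block, strength):
    """Circularly shift row i left by strength*i pixels (a discrete shear)."""
    n = len(block)
    out = []
    for i, row in enumerate(block):
        k = (strength * i) % n
        out.append(row[k:] + row[:k])
    return out

n = 8
# A block containing a pure diagonal edge: pixel (i, i) is set.
diag = [[255 if j == i else 0 for j in range(n)] for i in range(n)]

# Shearing row i left by i pixels straightens the diagonal into a
# vertical edge, which the separable DCT represents far more compactly.
straight = shear(diag, 1)
print([row.index(255) for row in straight])
```

After the shear, all the edge energy sits in one column, so the horizontal DCT concentrates it into a single frequency line; the post-processing stage applies the inverse shift after decoding.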
Dhollande, Nicolas. "Optimisation du codage HEVC par des moyens de pré-analyse et/ou pré-codage du contenu". Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S113.
Texto completo
The High Efficiency Video Coding (HEVC) standard, released in 2013, reduces network bandwidth requirements by a factor of 2 compared to the prior standard H.264/AVC. These gains are achieved at the cost of a very significant increase in encoding complexity. With the industrial demand to shift from High Definition (HD) to Ultra High Definition (UHD), one can understand the relevance of complexity-reduction techniques for developing cost-effective encoders. In our first contribution, we investigated new strategies to reduce the encoding complexity of Intra pictures. We proposed a method with inference rules that derives the coding modes from those obtained by pre-encoding the UHD video down-sampled to HD. We then proposed a fast partitioning method based on a pre-analysis of the content. The first method reduces complexity by a factor of 3 and the second by a factor of 6, with a 5% loss in compression efficiency. As a second contribution, we addressed Inter pictures. By implementing inference rules in the UHD encoder derived from an HD pre-encoding pass, the encoding complexity is reduced by a factor of 3 when both HD and UHD encodings are considered, and by 9.2 on the UHD encoding alone, with a 3% loss in compression efficiency. Combined with an encoding configuration imitating a real system, our approach reduces complexity by a factor close to 2 with a 4% loss. The strategies built during this thesis offer encouraging prospects for implementing low-complexity HEVC UHD encoders. They are fully adapted to the WebTV/OTT segment, which plays a growing part in video delivery and in which the video signal is encoded at different resolutions to reach heterogeneous devices and network capacities.
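The inference idea in this abstract, reusing HD pre-encoding decisions to prune the UHD mode search, can be sketched as a simple rule. Everything below is a hypothetical illustration: HEVC does define 35 intra modes, but the window-based rule and its parameters are assumptions, not the thesis's actual inference rules:

```python
HEVC_INTRA_MODES = list(range(35))   # planar (0), DC (1), 33 angular modes

def candidate_modes(hd_mode, window=2):
    """Restrict the UHD intra search to a window around the HD decision."""
    if hd_mode < 2:                  # planar or DC chosen in HD
        return [0, 1]
    lo = max(2, hd_mode - window)
    hi = min(34, hd_mode + window)
    return [0, 1] + list(range(lo, hi + 1))   # always keep planar and DC

# Suppose the HD pre-encoding chose angular mode 26 (vertical) for the
# co-located block; the UHD encoder then tests 7 modes instead of 35.
cands = candidate_modes(26)
print(len(cands), "modes tested instead of", len(HEVC_INTRA_MODES))
```

Shrinking the candidate set by roughly 5x is where the reported complexity factors come from, at the price of occasionally missing the true rate-distortion optimum (the few-percent efficiency loss).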
Moinard, Matthieu. "Codage vidéo hybride basé contenu par analyse/synthèse de données". Phd thesis, Telecom ParisTech, 2011. http://tel.archives-ouvertes.fr/tel-00830924.
Texto completo
Brouard, Olivier. "Pré-analyse de la vidéo pour un codage adapté. Application au codage de la TVHD en flux H.264". Phd thesis, Université de Nantes, 2010. http://tel.archives-ouvertes.fr/tel-00522618.
Texto completoAgostini, Marie Andrée. "Nouvelles approches pour la compression de vidéos haute définition : application au codage par descriptions multiples". Nice, 2009. http://www.theses.fr/2009NICE4017.
Texto completo
The framework of this thesis is a wavelet-based video coder. Fully scalable, this video encoder is based on a lifted motion-compensated wavelet transform. The first challenge was to reduce the cost of the motion vectors, which can be prohibitive at low bit-rates, by quantizing the vectors with losses. This method has also been applied to the H.264 coder. The goal is to find the optimal bit-rates for the motion vectors and for the temporal wavelet coefficients so as to minimize the total distortion. A theoretical distortion model has thus been established, and an optimal bit-rate allocation performed. The influence of badly estimated motion vectors on the motion-compensated wavelet transform has also been minimized, by closely adapting the steps of the lifting scheme to the energy of the motion. To deal with the problem of efficient video transmission over noisy channels, Multiple Description Coding (MDC) has been explored, in a balanced MDC scheme for scan-based wavelet-transform video coding. A focus is placed on the joint decoding of descriptions received at the decoder and corrupted by noise. The challenge is to reconstruct a central signal with a distortion as small as possible using knowledge of the probability density function of the descriptions, via two different algorithms. Distributed video coding has also been explored.
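The allocation problem this abstract poses, splitting a rate budget between motion vectors and wavelet coefficients to minimize total distortion, can be sketched with two discrete rate-distortion curves. The operating points below are synthetic assumptions purely for illustration; the thesis derives them from a theoretical distortion model:

```python
# Hypothetical operating points (rate in kbit, distortion in MSE),
# one convex R-D curve per bit budget consumer.
mv_points    = [(2, 40.0), (4, 25.0), (8, 18.0), (16, 15.0)]   # motion vectors
coeff_points = [(10, 60.0), (20, 35.0), (40, 20.0), (80, 12.0)]  # coefficients

def allocate(budget):
    """Pick one point per curve minimizing total distortion under the budget."""
    best = None
    for r_mv, d_mv in mv_points:
        for r_cf, d_cf in coeff_points:
            if r_mv + r_cf <= budget and (best is None or d_mv + d_cf < best[0]):
                best = (d_mv + d_cf, r_mv, r_cf)
    return best

d, r_mv, r_coef = allocate(50)
print(f"MV rate {r_mv}, coefficient rate {r_coef}, total distortion {d}")
```

With an analytical distortion model, the same search becomes a closed-form equal-slope (Lagrangian) condition rather than this brute-force scan, but the trade-off it resolves is identical: past some point, bits spent on more precise motion vectors buy less than bits spent on coefficients.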
Laroche, Guillaume. "Modules de codage par compétition et suppression de l'information de compétition pour le codage de séquences vidéo". Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005379.
Texto completo
Le Léannec, Fabrice. "Codage vidéo robuste et hiérarchique pour la transmission sur réseaux hétérogènes". Rennes 1, 2001. http://www.theses.fr/2001REN1S018.
Texto completoCagnazzo, Marco. "CODAGE DES DONNÉES VISUELLES : EFFICACITÉ, ROBUSTESSE, TRANSMISSION". Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00859677.
Texto completo
Toto-Zarasoa, Velotiaray. "Codage de sources distribuées : Outils et Applications à la compression vidéo". Phd thesis, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00592117.
Texto completoPau, Grégoire. "Ondelettes et décompositions spatio-temporelles avancées : application au codage vidéo scalable". Phd thesis, Télécom ParisTech, 2006. http://pastel.archives-ouvertes.fr/pastel-00002189.
Texto completo
Balter, Raphaèle. "Construction d'un maillage 3D évolutif et scalable pour le codage vidéo". Rennes 1, 2005. ftp://ftp.irisa.fr/techreports/theses/2005/balter.pdf.
Texto completoToto-Zarasoa, Velotiaray. "Codage de sources distribués : outils et applications à la compression vidéo". Rennes 1, 2010. https://tel.archives-ouvertes.fr/tel-00539044.
Texto completo
Distributed source coding is a technique that compresses several correlated sources without any cooperation between the encoders, and without rate loss provided that the decoding is joint. Motivated by this principle, distributed video coding has emerged, exploiting the correlation between consecutive video frames, tremendously simplifying the encoder, and leaving the task of exploiting the correlation to the decoder. The first part of our contributions in this thesis addresses the asymmetric coding of binary sources that are not uniform. We analyze the coding of non-uniform Bernoulli sources and of hidden Markov sources. For both sources, we first show that exploiting the distribution at the decoder clearly increases the decoding capabilities of a given channel code. For the binary symmetric channel modeling the correlation between the sources, we propose a tool to estimate its parameter, based on an EM algorithm. We show that this tool provides fast estimation of the parameter, with a precision close to the Cramér-Rao lower bound. In the second part, we develop tools that facilitate the coding of the previous sources, using syndrome-based Turbo and LDPC codes and the EM algorithm. This part also presents new tools we developed to approach the bounds of asymmetric and non-asymmetric distributed source coding. We also show that, for non-uniform sources, the roles of the correlated sources are not symmetric. Finally, we show that the proposed source models are well suited to the distributions of video bit planes, and we present results that prove the efficiency of the developed tools: they improve the rate-distortion performance of the video codec by a significant amount, provided that the correlation channel is additive.
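Estimating the BSC crossover parameter, as the abstract's EM tool does from the decoder's soft information, can be illustrated with a simpler moment-style estimator built on parity checks: a check of degree d over the two sequences differs with probability (1 - (1-2p)^d)/2, which can be inverted. This is a sketch conveying the idea, not the thesis's EM algorithm:

```python
import random

def estimate_bsc_p(x, y, degree=3, checks=5000, rng=None):
    """Estimate the BSC crossover p between sequences x and y.

    Each random parity check XORs `degree` positions of x and of y;
    the two checks differ with probability (1 - (1-2p)^degree) / 2.
    """
    rng = rng or random.Random(0)
    n = len(x)
    diff = 0
    for _ in range(checks):
        idx = [rng.randrange(n) for _ in range(degree)]
        sx = sum(x[i] for i in idx) % 2
        sy = sum(y[i] for i in idx) % 2
        diff += sx != sy
    f = diff / checks
    base = max(1 - 2 * f, 1e-9)        # guard against noisy f >= 0.5
    return (1 - base ** (1 / degree)) / 2

rng = random.Random(1)
p_true = 0.08
x = [rng.randint(0, 1) for _ in range(50000)]
y = [b ^ (rng.random() < p_true) for b in x]   # correlated side information
p_hat = estimate_bsc_p(x, y, rng=random.Random(2))
print(round(p_hat, 3))
```

In a real decoder x is unknown, so the thesis's EM algorithm iterates between decoding with the current p estimate and re-estimating p from the decoder's posteriors; the parity-check relation above is the same statistic it exploits through the transmitted syndromes.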
Tizon, Nicolas. "Codage vidéo scalable pour le transport dans un réseau sans fil". Paris, ENST, 2009. http://www.theses.fr/2009ENST0032.
Texto completo
Bitrate adaptation is a key issue for streaming applications over throughput-limited networks with error-prone channels, such as wireless networks. The emergence of recent source coding standards like Scalable Video Coding (SVC), the scalable extension of H.264/AVC, which encodes a wide range of spatio-temporal and quality layers in a single bitstream, offers new adaptation facilities. The concept of scalability, when exploited for dynamic channel adaptation, raises at least two kinds of issues: how to measure network conditions, and how to differentiate transmitted data in terms of their contribution to distortion? In this document, we propose and compare different approaches, in terms of network architecture, to comply with different practical requirements. The first approach is a video streaming system that uses SVC coding to adapt the input stream at the radio link layer as a function of the available bandwidth, thanks to a Media Aware Network Element (MANE) that assigns priority labels to video packets. The second approach leaves the existing network infrastructure unmodified and keeps the adaptation operations in the server, which exploits long-term feedback from the client. Moreover, we present a recursive distortion model used to dynamically compute the contribution of each packet to the final distortion. Finally, within the scope of lossy compression with subband decomposition and quantization, a contribution is proposed to jointly resize decoded pictures and adapt the inverse transformation matrices according to the quantization noise and image content.
Guillotel, Philippe. "De l'optimisation globale à l'optimisation locale psycho-visuelle en codage vidéo". Rennes 1, 2012. http://www.theses.fr/2012REN1S009.
Texto completo
Video coding is an essential part of the production-delivery-rendering video chain. The efficiency of the coding scheme determines the quality perceived by the final user and contributes to the quality of experience (QoE). A video encoder is a complex system with many different aspects, requiring specific know-how to choose the right algorithm for the considered application. This work covers the main topics to be considered, proposes innovative solutions and discusses their respective performances. The first part is an introduction to the coding of video signals, with reminders of the general principles necessary to understand this thesis. Spatial sampling, temporal sampling and colorimetry theories are discussed first, followed by an introduction to encoding. The different tools and mechanisms are described, as well as the main existing standards relevant to this work. The impact of video formats is discussed to demonstrate the interest of progressive scanning, even though it is not yet widely deployed because of the required backward compatibility. Finally, we demonstrate the importance of knowing the applicative context in a particular case: professional video production, where very high video quality is required. The second part is dedicated to global optimization issues based on both the complexity-distortion and rate-distortion functions, where the distortion is mainly the mathematical difference between the original and decoded signals. The first chapter addresses the match between the algorithm and the considered platform; we discuss a specific IC considered today as one of the most efficient of its generation. The other chapters focus on adaptive coding techniques for the signal, the channel or the user. The third part introduces a research area that has recently attracted a lot of attention from academic researchers: local perceptual coding.
After an introduction to the human visual system, distortion metrics and other subjective aspects, different research studies are presented. We propose to use local adaptation based on human perception; in other words, we study how each picture area can be encoded to provide better subjective quality. It is a recent research topic, but it opens new perspectives not yet fully explored. Finally, extensions and perspectives are proposed in the conclusion to complete this work.
Fauquet, Jerôme. "Optimisation de la qualité vidéo MPEG-2 en transmission ADSL : étude d'un transcodage vidéo hiérarchique". Valenciennes, 2003. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/3496492e-f16f-4b55-ad64-f01c961e825e.
Texto completo
We present an original method to optimize the received video quality in the framework of digital video transmission over ADSL. It is based on a hierarchical coding scheme using the MPEG-2 data-partitioning mode, with on-the-fly rate adaptation that takes the characteristics of the transmission channel into account. This scheme is described in detail, as is the bi-resolution transmission method using power transfer, which provides unequal protection of the high- and low-priority video data streams. Finally, we present simulation results for mono- and bi-resolution digital video transmission over ADSL, with and without rate adaptation. The performance of the various solutions is evaluated in terms of BER, as well as the MSE of the compressed video sequence before and after transmission. In particular, the analysis of the results shows the superiority of the proposed solution over classical transmission schemes.
Yaacoub, Charles. "Codage conjoint source-canal pour l'optimisation d'un système de codage distribué de sources vidéo transmises sur un lien sans fil". Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005457.
Texto completo
Yaacoub, Charles. "Codage conjoint source-canal pour l'optimisation d'un système de codage distribué de sources vidéo transmises sur un lien sans fil". Phd thesis, Paris, ENST, 2009. https://pastel.hal.science/pastel-00005457.
Texto completo
In this thesis, we first develop a comparative study between binary and non-binary turbo codes used for channel coding as well as for the compression of distributed sources, and we implement a distributed video coding system based on quadri-binary turbo codes. We then derive the theoretical compression bounds for source coding and for joint source-channel coding. These calculations are used in a cross-layer approach aiming to reduce the excessive use of the feedback channel: our system determines the transmission rate for each user taking into account the amount of motion in the captured video scene as well as the state of the transmission channel. We then propose a coding technique that estimates the transmission rate for each user while optimizing the quantization parameter, resulting in a distributed video coding system with adaptive quantization and dynamic rate allocation. The influence of H.264 Intra coding of the key frames on the system's performance is also considered. Based on our theoretical study, we develop novel algorithms that dynamically adapt the GOP size and determine the coding mode for each frame, without the need for a feedback channel. Finally, a frame fusion approach based on genetic algorithms is proposed to improve the side information.
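Adapting the GOP size to the amount of motion, as this abstract describes, can be sketched with a simple activity measure: when consecutive frames differ little, key frames can be spaced further apart. The thresholds, GOP sizes and frame data below are illustrative assumptions, not the thesis's algorithm:

```python
def mean_abs_diff(f1, f2):
    """Mean absolute difference between two frames (flat pixel lists)."""
    return sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)

def choose_gop_size(frames, low=2.0, high=10.0):
    """Pick a GOP size from motion activity: more motion, shorter GOP."""
    activity = sum(mean_abs_diff(frames[i], frames[i + 1])
                   for i in range(len(frames) - 1)) / (len(frames) - 1)
    if activity < low:
        return 16     # near-static scene: key frames can be sparse
    if activity < high:
        return 8
    return 2          # high motion: frequent key frames help the SI

static = [[10] * 64 for _ in range(5)]                              # no motion
moving = [[(i * 37 + j) % 256 for j in range(64)] for i in range(5)]  # strong motion
print(choose_gop_size(static), choose_gop_size(moving))
```

The point of deciding this at the encoder from a cheap measure is exactly what the abstract claims: the GOP structure adapts to the scene without consulting the decoder over a feedback channel.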
Franche, Jean-François. "Optimisation d’algorithmes de codage vidéo sur des plateformes à plusieurs processeurs parallèles". Mémoire, École de technologie supérieure, 2011. http://espace.etsmtl.ca/1130/1/FRANCHE_Jean%2DFran%C3%A7ois.pdf.
Texto completo
Kubasov, Denis. "Codage de sources distribuées : nouveaux outils et application à la compression vidéo". Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/kubasov.pdf.
Texto completo
Distributed video coding (DVC) is a new video coding paradigm allowing a flexible encoder/decoder complexity balance. In this thesis we propose several practical solutions offering better rate-distortion performance than existing algorithms. We start by studying the problem of side information (SI) extraction in DVC. We consider alternative motion models for more efficient motion estimation at the decoder, and propose a hybrid method using multiple SI hypotheses simultaneously. We also study the problem of spatial SI and derive a measure of SI quality. Finally, we regard the SI improvement problem as a denoising problem and evaluate several denoising methods. To address the problem of correlation modeling in DVC, several algorithms are proposed. In particular, we propose a hybrid encoder/decoder rate-control solution, significantly reducing the decoder complexity and providing a robust decoder-side bit-error-rate estimation technique. Quantization table design for Wyner-Ziv frames from the rate-distortion point of view is also addressed. Finally, the source coding aspects of DVC are studied. We propose to exploit the source statistics at three different levels: the whole-image level (using a more efficient decorrelating transform), the band level (employing a distributed prediction algorithm), and the quantization-index level, where the statistical redundancy is exploited using Huffman codes.