Academic literature on the topic 'Compression avec pertes'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Compression avec pertes.'
Journal articles on the topic "Compression avec pertes":
Benderradji, Razik, Hamza Gouidmi, and Abdelhadi Beghidja. "Etude numérique de transition RR / MR dans l’interaction onde de choc / choc de compression." Journal of Renewable Energies 19, no. 4 (October 17, 2023): 595–604. http://dx.doi.org/10.54966/jreen.v19i4.597.
Koff, D., P. Bak, P. Brownrigg, A. Kiss, and L. Lepanto. "Compression des images medicales avec perte : le point sur le projet Canadien." Journal de Radiologie 89, no. 10 (October 2008): 1352. http://dx.doi.org/10.1016/s0221-0363(08)76059-1.
Kumar, Shailendra, Ismita Nautiyal, and Shikhar Shukla. "Some physical properties of delignified and compressed Melia dubia wood." BOIS & FORETS DES TROPIQUES 341 (July 20, 2019): 71. http://dx.doi.org/10.19182/bft2019.341.a31758.
Cavaro-Menard, C. "Compression des images medicales avec et sans perte : aspects techniques et validation clinique." Journal de Radiologie 89, no. 10 (October 2008): 1352. http://dx.doi.org/10.1016/s0221-0363(08)76058-x.
Fayolle, Jacky, and Françoise Milewski. "Un compromis monétaire favorable à l'Europe." Revue de l'OFCE 61, no. 2 (June 1, 1997): 5–92. http://dx.doi.org/10.3917/reof.p1997.61n1.0005.
Leclerc, Marc. "Traumatismes de la carapace chez la tortue : présentation de la technique de cerclage en huit." Le Nouveau Praticien Vétérinaire canine & féline 20, no. 85 (2023): 68–72. http://dx.doi.org/10.1051/npvcafe/2024011.
Vignes, S. "Lipœdème." Obésité 14, no. 3 (September 2019): 124–30. http://dx.doi.org/10.3166/obe-2019-0071.
Liu, Melody. "A Parametric Study Of The Parameters Governing Flow Incidence Angle Tolerance For Turbomachine Blades." Journal of Student Science and Technology 10, no. 1 (August 19, 2017). http://dx.doi.org/10.13034/jsst.v10i1.129.
Cervi, Andrea, Lisa Kim, Sriharsha Athreya, and Jason Cheung. "A Blue Leg: An Interventional Approach to a Limb-Threatening Deep Vein Thrombosis." Canadian Journal of General Internal Medicine 12, no. 2 (August 30, 2017). http://dx.doi.org/10.22374/cjgim.v12i2.238.
Dissertations / Theses on the topic "Compression avec pertes":
Valade, Cédric. "Compression d'images complexes avec pertes : application à l'imagerie radar." Phd thesis, Télécom ParisTech, 2006. http://pastel.archives-ouvertes.fr/pastel-00002064.
Valade, Cédric. "Compression d'images complexes avec pertes : application à l'imagerie radar /." Paris : École nationale supérieure des télécommunications, 2007. http://catalogue.bnf.fr/ark:/12148/cb41003215x.
Liu, Yi. "Codage d'images avec et sans pertes à basse complexité et basé contenu." Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0028/document.
This doctoral research project aims at designing an improved solution for the still-image codec called LAR (Locally Adaptive Resolution), in terms of both compression performance and complexity. Several image compression standards have been proposed and are widely used in multimedia applications, but research continues toward higher coding quality and/or lower coding cost. JPEG was standardized twenty years ago, yet it remains a widely used compression format today. Although JPEG 2000 offers better coding efficiency, its adoption is limited by a computation cost larger than JPEG's. In 2008, the JPEG Committee announced a Call for Advanced Image Coding (AIC), aiming to standardize technologies going beyond the existing JPEG standards. The LAR codec was proposed as one response to this call. The LAR framework seeks to combine compression efficiency with content-based representation, and it supports both lossy and lossless coding within the same structure. However, at the beginning of this study, the LAR codec did not implement rate-distortion optimization (RDO), a shortcoming that was detrimental to LAR during the AIC evaluation step. In this work, we therefore first characterize the impact of the codec's main parameters on compression efficiency, then construct RDO models that configure the LAR parameters to achieve optimal or near-optimal coding efficiency. Building on these RDO models, a "quality constraint" method is introduced to encode an image at a given target MSE/PSNR; the accuracy of the proposed technique, estimated as the ratio between the error variance and the setpoint, is about 10%. In addition, subjective quality is taken into account: the RDO models are applied locally in the image rather than globally, and perceptual quality improves with a significant gain as measured by the objective quality metric SSIM (structural similarity).
Aiming at a low-complexity, efficient image codec, a new lossless coding scheme is also proposed within the LAR framework. In this context, all the coding steps are redesigned for a better final compression ratio, and a new classification module is introduced to decrease the entropy of the prediction errors. Experiments show that this lossless codec achieves a compression ratio equivalent to JPEG 2000 while saving on average 76% of the encoding and decoding time.
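The rate-distortion optimization that this thesis adds to LAR can be illustrated in generic form by a Lagrangian parameter sweep. This is a minimal sketch, not the thesis's actual models: the toy codec and the candidate quantization steps are invented for illustration.

```python
import math

def rdo_select(encode, image, candidates, lam):
    """Return the codec parameter minimizing the Lagrangian cost J = D + lambda * R."""
    best, best_cost = None, math.inf
    for p in candidates:
        bits, mse = encode(image, p)
        cost = mse + lam * bits      # rate-distortion trade-off
        if cost < best_cost:
            best, best_cost = p, cost
    return best

# Toy codec: a larger quantization step q spends fewer bits but distorts more.
def toy_encode(image, q):
    return 1000.0 / q, 0.5 * q * q   # (rate in bits, distortion as MSE)

print(rdo_select(toy_encode, None, [1, 2, 4, 8, 16], lam=0.1))  # -> 4
```

The same sweep, run in the other direction (fixing a target MSE and searching for the cheapest parameter that meets it), is the shape of the "quality constraint" encoding described above.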
Jung, Ho-Youl. "Contribution a la compression sans pertes pour la transmission progressive des images : proposition de transformations avec arrondis." Lyon, INSA, 1998. http://www.theses.fr/1998ISAL0032.
Lossless image compression often consists of two consecutive steps: lossless (reversible) decorrelation and entropy coding. The decorrelated data must be represented by integers, as they will be entropy coded. In this dissertation, we develop new decorrelation methods, based on a pair of rounding operations, that are very effective for lossless and progressive image compression. The proposed methods fall into two classes: block transforms and linear filtering. In the block-transform class, a new reversible transform, called the Rounding Transform (RT), is proposed; it maps an integer vector onto another integer vector by using weighted average and difference filters followed by a rounding operation. The RT is applied to lossless pyramid-structured coding with various elementary block sizes and filters. We also propose a unified definition of the integer Walsh-Hadamard Transform (WHT), represented by a cascade of matrix products with a rounding operation; this permits applying a large-size WHT to lossless pyramid-structured coding. In the linear-filtering class, lossless subband coding systems are developed. We propose an extension of the RT, called the Overlapping RT (ORT), defined as a two-input/two-output FIR filtering system with a pair of rounding operations. The ORT is applied to develop a lossless two-band subband coding system, and is then extended to a general size, which permits developing lossless M-band (M > 2), possibly non-separable, subband coding systems. All the proposed methods are also extended to the 3-D case and applied to volumetric medical images.
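The "pair of rounding operations" idea behind such reversible integer transforms can be illustrated with the closely related integer average/difference transform (the classic S-transform), which is exactly invertible on integers. This is a generic sketch with arbitrary sample values, not the thesis's specific filters:

```python
def s_forward(x, y):
    """Map an integer pair to (rounded average, difference); lossless."""
    d = x - y
    a = y + (d >> 1)      # equals floor((x + y) / 2), computed without overflow
    return a, d

def s_inverse(a, d):
    """Exact inverse: recover the original integer pair."""
    y = a - (d >> 1)
    return y + d, y

# Round-trip check on a few integer pairs, including negatives.
for x, y in [(7, 3), (-5, 8), (0, 0), (255, 254)]:
    assert s_inverse(*s_forward(x, y)) == (x, y)
```

The rounding loses nothing because the discarded half-bit of the average is recoverable from the parity of the difference; the same invariant is what makes larger rounded transforms reversible.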
Mary, David. "Techniques causales de codage avec et sans pertes pour les signaux vectoriels." Paris : École nationale supérieure des télécommunications, 2003. http://catalogue.bnf.fr/ark:/12148/cb39085235p.
Sun, Yifei. "Methodological approaches for Goal-oriented Communication." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG111.
In a conventional communication scheme, the receiver tries to reconstruct a version as close as possible to what the sender wishes to transmit. The deviation is usually evaluated with a distortion measure such as the mean square error. This measure is not necessarily appropriate, however, if we are interested in the use that will be made of the transmitted data. Recently, communication schemes have been developed that focus on the objectives to be achieved with the transmitted data: the exploitation of the reconstructed data at the receiver should yield results as close as possible to those that would be obtained from the original data.
In the first part of this thesis, the objective corresponds to a constrained optimization problem depending on a vector of parameters transmitted with a limited budget. A goal-oriented compression scheme is proposed whose aim is to minimize the optimality loss. Linear and non-linear precoding methods are presented, and a goal-oriented quantization scheme is introduced. The complete compression scheme is evaluated on a real data set and shows a significant reduction in optimality loss compared with conventional schemes that minimize the mean square reconstruction error.
The second part of the thesis deals with goal-oriented communications and, more specifically, with transmit power control for systems operated by a remote controller. For this type of device, the energy consumed by communications can become significant. The aim is to propose and characterize a scheme that achieves a compromise between the energy related to system disequilibrium, the energy expended for system control, and the energy expended in communicating information to the remote controller. We consider a set of remote system-controller pairs sharing the same radio channel; transmitted messages interfere, leading to packet loss. The loss probability depends on the transmission power of each system. In the case of a single system-controller pair with linear dynamics, we exploit the recursive structure of the cost function to find an optimal transmission power control policy. For multiple system-controller pairs, a game-theoretic solution is proposed.
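The contrast between MSE-oriented and goal-oriented quantization can be sketched with a toy example: when the transmitted parameter feeds a downstream decision, the relevant distortion is the optimality loss, not the reconstruction error. The asymmetric downstream cost and the codebook below are invented for illustration, not taken from the thesis:

```python
import math

# Hypothetical downstream cost of acting on estimate c when the truth is b:
# overestimating is penalized exponentially, underestimating only linearly.
def optimality_loss(b, c):
    d = c - b
    return math.exp(d) - d - 1.0   # zero when c == b, asymmetric otherwise

def quantize(b, codebook, metric):
    # Pick the codeword minimizing the chosen distortion metric.
    return min(codebook, key=lambda c: metric(b, c))

codebook = [0.0, 1.0, 2.0, 4.0]
b = 3.1
mse_choice = quantize(b, codebook, lambda b, c: (b - c) ** 2)
goal_choice = quantize(b, codebook, optimality_loss)
print(mse_choice, goal_choice)   # -> 4.0 2.0: the two criteria disagree
```

The MSE quantizer picks the nearest codeword, while the goal-oriented quantizer deliberately undershoots because overshooting is costly for the task, which is exactly the kind of gap the thesis's scheme exploits.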
Kim, Geun-Beom. "Recherche des sélectrons, neutralinos et squarks dans le cadre du modèle GMSB avec le détecteur CMS. Etude de la compression sans pertes de données provenant du calorimètre électromagnétique." Phd thesis, Université Louis Pasteur - Strasbourg I, 2001. http://tel.archives-ouvertes.fr/tel-00001120.
Kim, Geun-Beom. "Recherche des sélectrons, neutralinos et squarks dans le cadre du modèle GMSB avec le détecteur CMS : étude de la compression sans pertes de données provenant du calorimètre électromagnétique." Université Louis Pasteur (Strasbourg) (1971-2008), 2001. http://www.theses.fr/2001STR13102.
Babel, Marie. "Compression d'images avec et sans perte par la méthode LAR (Locally Adaptive Resolution)." Phd thesis, INSA de Rennes, 2005. http://tel.archives-ouvertes.fr/tel-00131758.
[…] to errors.
The basic LAR (Locally Adaptive Resolution) method was designed for lossy compression at low bit rates. By exploiting the intrinsic properties of LAR, the definition of a self-extracting region representation provides a coding solution that is efficient both in bit rate and in reconstructed image quality. Coding at a locally variable bit rate is made easier by introducing the notion of a region of interest, or VOP (Video Object Plane).
A lossless compression scheme was obtained jointly with the integration of scalability, through pyramidal methods. Combined with a prediction stage, three different coders meeting these requirements were developed: LAR-APP, Interleaved S+P, and RWHT+P. LAR-APP (Pyramidal Predictive Approach) relies on an enriched prediction context obtained through an original traversal of the levels of the constructed pyramid; the entropy of the resulting estimation errors (estimation performed in the spatial domain) is thereby reduced. By defining a solution operating in the transform domain, we were able to further improve the entropy performance of the scalable lossless coder: the Interleaved S+P is built by interleaving two pyramids of transformed coefficients. As for the RWHT+P method, it relies on a new form of the two-dimensional Walsh-Hadamard transform. Its raw-entropy performance proves well above the state of the art, with quite remarkable results obtained on medical images in particular.
Moreover, in a telemedicine context, combining the pyramidal LAR methods with the Mojette transform yielded an efficient joint source-channel coding scheme for the secure transmission of compressed medical images over low-bandwidth networks. This technique offers differentiated protection that takes into account the hierarchical nature of the streams produced by the multiresolution LAR methods, for end-to-end quality of service.
Another line of research addressed in this dissertation is the automatic implementation of the LAR coders on heterogeneous multi-component parallel architectures. By describing the algorithms in the SynDEx tool, we were able in particular to prototype the Interleaved S+P on multi-DSP and multi-PC platforms.
Finally, the extension of LAR to video is treated here as essentially prospective work. Three different techniques are proposed, all relying on a common element: the exploitation of the region-based representation mentioned above.
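The "locally adaptive resolution" principle behind LAR — coarse blocks over flat zones, fine blocks where the content is active — can be sketched on a 1-D signal. This is an illustrative analogue only; the split threshold and block rule are invented, not the codec's actual partitioning:

```python
# Recursively split a segment while its dynamic range exceeds a threshold,
# then represent each leaf by its rounded mean (the "flat zone" approximation).
def adaptive_blocks(signal, lo, hi, thr, out):
    seg = signal[lo:hi]
    if hi - lo <= 1 or max(seg) - min(seg) <= thr:
        out.append((lo, hi, round(sum(seg) / len(seg))))   # one coarse block
    else:
        mid = (lo + hi) // 2
        adaptive_blocks(signal, lo, mid, thr, out)          # refine left half
        adaptive_blocks(signal, mid, hi, thr, out)          # refine right half
    return out

sig = [10, 10, 11, 10, 10, 10, 90, 95]
blocks = adaptive_blocks(sig, 0, len(sig), thr=3, out=[])
# Flat regions collapse into large blocks; the active region is split finely.
print(blocks)   # -> [(0, 4, 10), (4, 6, 10), (6, 7, 90), (7, 8, 95)]
```

The resulting variable-size partition is also what makes region-based coding natural: contiguous blocks with similar means can be merged into self-extractible regions.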
Rodriguez, Cancio Marcelino. "Contributions on approximate computing techniques and how to measure them." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S071/document.
Approximate Computing is based on the idea that significant improvements in CPU, energy, and memory usage can be achieved when small levels of inaccuracy can be tolerated. This is an attractive concept, since the lack of resources is a constant problem in almost all computer science domains. From large supercomputers processing today's social-media big data to small, energy-constrained embedded systems, there is always a need to optimize the consumption of some scarce resource. Approximate Computing proposes an alternative to this scarcity, introducing accuracy as yet another resource that can in turn be traded for performance, energy consumption, or storage space. The first part of this thesis proposes two contributions to the field of Approximate Computing.
Approximate Loop Unrolling: a compiler optimization that exploits the approximate nature of signal and time-series data to decrease the execution time and energy consumption of the loops processing it. Our experiments show that the optimization considerably increases the performance and energy efficiency of the optimized loops (150% - 200%) while preserving accuracy at acceptable levels.
Primer: the first lossy compression algorithm for assembler instructions, which profits from programs' forgiving zones to obtain a compression ratio that outperforms the current state of the art by up to 10%.
The main goal of Approximate Computing is to improve the usage of resources such as performance or energy, so a fair deal of effort is dedicated to observing the actual benefit obtained by a given technique. One resource that has historically been challenging to measure accurately is execution time. Hence, the second part of this thesis proposes the following tool.
AutoJMH: a tool to automatically create performance microbenchmarks in Java. Microbenchmarks provide the finest-grained performance assessment, yet, because they require a great deal of expertise, they remain the craft of a few performance engineers. The tool allows, thanks to automation, the adoption of microbenchmarks by non-experts. Our results show that the generated microbenchmarks match the quality of payloads handwritten by performance experts and outperform those written by professional Java developers without experience in microbenchmarking.
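The flavor of loop optimization described above can be sketched in a few lines: evaluate an expensive per-sample function on only half the samples of a smooth signal and interpolate the rest. This is a hand-written analogue of the idea, not the thesis's compiler transformation; the signal and cost function are invented for illustration:

```python
import math

# Exact version: apply an expensive per-sample function f to every sample.
def process_exact(signal, f):
    return [f(x) for x in signal]

# Approximate version: evaluate f only on every other sample and linearly
# interpolate the skipped outputs. This assumes the data is smooth, which
# is the core premise of exploiting signal and time-series loops.
def process_approx(signal, f):
    out = [0.0] * len(signal)
    for i in range(0, len(signal), 2):          # half as many calls to f
        out[i] = f(signal[i])
    for i in range(1, len(signal), 2):          # fill the gaps cheaply
        nxt = out[i + 1] if i + 1 < len(signal) else out[i - 1]
        out[i] = 0.5 * (out[i - 1] + nxt)
    return out

sig = [math.sin(0.1 * t) for t in range(101)]   # a smooth time series
exact = process_exact(sig, lambda x: x * x)
approx = process_approx(sig, lambda x: x * x)
err = max(abs(a - b) for a, b in zip(exact, approx))
# err stays small here (about 1e-2) while f is called only ~half the time
```

The trade is exactly the one the abstract names: a bounded, tolerable inaccuracy exchanged for a large reduction in work.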