Dissertations / Theses on the topic 'Optimisation du code latent'
Consult the top 50 dissertations / theses for your research on the topic 'Optimisation du code latent.'
Li, Huiyu. "Exfiltration et anonymisation d'images médicales à l'aide de modèles génératifs." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4041.
This thesis aims to address some specific safety and privacy issues when dealing with sensitive medical images within data lakes. This is done by first exploring potential data leakage when exporting machine learning models and then by developing an anonymization approach that protects data privacy.

Chapter 2 presents a novel data exfiltration attack, termed Data Exfiltration by Compression (DEC), which leverages image compression techniques to exploit vulnerabilities in the model exporting process. This attack is performed when exporting a trained network from a remote data lake, and is applicable independently of the considered image processing task. By exploring both lossless and lossy compression methods, this chapter demonstrates how DEC can effectively be used to steal medical images and reconstruct them with high fidelity, using two public CT and MR datasets. This chapter also explores mitigation measures that a data owner can implement to prevent the attack. It first investigates the application of differential privacy measures, such as Gaussian noise addition, to mitigate this attack, and explores how attackers can create attacks resilient to differential privacy. Finally, an alternative model export strategy is proposed which involves model fine-tuning and code verification.

Chapter 3 introduces the Generative Medical Image Anonymization framework, a novel approach to balancing the trade-off between preserving patient privacy and maintaining the utility of the generated images for downstream tasks. The framework separates the anonymization process into two key stages: first, it extracts identity- and utility-related features from medical images using specially trained encoders; then, it optimizes the latent code to achieve the desired trade-off between anonymity and utility. We employ identity and utility encoders to verify patient identities and detect pathologies, and use a generative adversarial network-based auto-encoder to create realistic synthetic images from the latent space. During optimization, we incorporate these encoders into novel loss functions to produce images that remove identity-related features while maintaining their utility for a classification problem. The effectiveness of this approach is demonstrated through extensive experiments on the MIMIC-CXR chest X-ray dataset, where the generated images successfully support lung pathology detection.

Chapter 4 builds upon the work of Chapter 3 by utilizing generative adversarial networks (GANs) to create a more robust and scalable anonymization solution. The framework is structured into two distinct stages: first, we develop a streamlined encoder and a novel training scheme to map images into a latent space; in the second stage, we minimize the dual loss functions proposed in Chapter 3 to optimize the latent representation of each image. This method ensures that the generated images effectively remove identifiable features while retaining crucial diagnostic information. Extensive qualitative and quantitative experiments on the MIMIC-CXR dataset demonstrate that our approach produces high-quality anonymized images that maintain essential diagnostic details, making them well suited for training machine learning models in lung pathology classification.

The concluding chapter summarizes the scientific contributions of this work and addresses remaining issues and challenges in producing secure and privacy-preserving sensitive medical data.
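The latent-code optimization stage described in this abstract amounts to gradient descent on a latent vector under two competing losses. The sketch below is an illustrative approximation only; the generator, the two encoders, the loss weighting and the optimizer settings are placeholders, not the thesis implementation:

```python
import torch

def anonymize_latent(z0, generator, id_enc, util_enc, target_label,
                     lam=1.0, steps=200, lr=0.05):
    """Optimise a latent code so the decoded image keeps its diagnostic
    utility while drifting away from the patient's identity embedding.
    All networks are assumed pre-trained and frozen; target_label is the
    pathology class to preserve (class-index tensor)."""
    z = z0.clone().detach().requires_grad_(True)
    ref_id = id_enc(generator(z0)).detach()      # identity embedding of the original image
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        img = generator(z)
        # utility loss: keep the pathology classifier's prediction correct
        util_loss = torch.nn.functional.cross_entropy(util_enc(img), target_label)
        # identity loss: push the identity embedding away from the original one
        id_loss = -torch.nn.functional.mse_loss(id_enc(img), ref_id)
        loss = util_loss + lam * id_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```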
Lomüller, Victor. "Générateur de code multi-temps et optimisation de code multi-objectifs." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM050/document.
Compilation is an essential step in creating efficient applications. It allows the use of high-level, target-independent languages while maintaining good performance. However, many obstacles prevent compilers from fully optimizing applications. For static compilers, the major obstacle is poor knowledge of the execution context, particularly of the architecture and the data; this knowledge only becomes progressively available during the application's life cycle. Compilers have progressively integrated dynamic code generation techniques in order to exploit it, but those techniques usually focus on making better use of hardware capabilities and do not take the data into account. In this thesis, we investigate the use of data in the optimization process of applications on Nvidia GPUs. We present a method that uses different moments in the application life cycle to create adaptive libraries able to take data size into account, and which can therefore provide better-adapted kernels. With the GEMM algorithm, the method provides gains of up to 100% while avoiding code-size explosion. The thesis also investigates the gains and costs of runtime code generation in terms of execution speed, memory footprint and energy consumption. We present and study two lightweight runtime code generation approaches that can specialize code, and show that they can obtain gains comparable to, and even better than, LLVM, but at a lower cost.
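The "adaptive library" idea can be pictured as multi-versioning plus size-based dispatch. A minimal sketch, with hypothetical variant names and thresholds (the actual thesis generates CUDA kernels, not Python):

```python
import numpy as np

# Hypothetical specialized variants registered by the adaptive library,
# each generated or tuned for a different data-size range.
def gemm_tiny(A, B):            # e.g. a variant suited to very small matrices
    return np.einsum("ik,kj->ij", A, B)

def gemm_generic(A, B):         # fallback: the generic/vendor kernel
    return A @ B

VARIANTS = [(64, gemm_tiny), (float("inf"), gemm_generic)]

def gemm(A, B):
    """Dispatch on the observed problem size, as an adaptive library would."""
    n = max(A.shape[0], A.shape[1], B.shape[1])
    for bound, kernel in VARIANTS:
        if n <= bound:
            return kernel(A, B)
```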
Chaabane, Rim. "Analyse et optimisation de patterns de code." Paris 8, 2011. http://www.theses.fr/2011PA084174.
Our work consists in analysing and optimising the source code of legacy-system applications. We worked in particular on a software product called GP3, developed and maintained by the financial-services company SunGard. This software has existed for more than twenty years; it is written in a proprietary, procedural, fourth-generation language called ADL (Application Development Language). It was initially developed under VMS and accessed previous-generation databases. For commercial reasons it was ported to UNIX and now targets new-generation relational DBMSs, Oracle and Sybase, and it has also been extended with a web interface. This legacy system must now face new challenges, such as the growth of database sizes. Over the last twenty years, several company mergers have implied merging databases; these can exceed one terabyte of data, and even larger volumes are expected in the future. In this new context, the ADL language shows its limits in handling such a large amount of data. Code patterns, denoting database access structures, are suspected of being responsible for performance degradation. Our work consists in detecting all the instances of these patterns in the source code, and then identifying the instances that consume the most execution time and the largest number of calls. We developed a first tool, named Adlmap, based on static code analysis, which detects all database accesses in the source code and marks those identified as code patterns. The second tool we developed, named Pmonitor, is based on a hybrid analysis combining static and dynamic analysis; it allows us to measure the performance of the code patterns and thus to identify the least efficient instances.
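As a rough illustration of the static detection step, a scanner can simply count occurrences of suspected access patterns per source file. The pattern names and the pseudo-ADL syntax below are hypothetical stand-ins, since the actual GP3/ADL constructs are proprietary:

```python
import re
from collections import Counter

# Hypothetical database-access patterns; real ADL constructs differ.
PATTERNS = {
    "select_in_loop": re.compile(r"FOR\s+EACH.*?SELECT", re.S | re.I),
    "full_table_scan": re.compile(r"SELECT\s+\*\s+FROM\s+\w+\s*;", re.I),
}

def scan_source(path):
    """Count occurrences of each suspected pattern in one source file."""
    text = open(path, encoding="utf-8", errors="ignore").read()
    return Counter({name: len(rx.findall(text)) for name, rx in PATTERNS.items()})
```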
Jupp, Ian David. "The optimisation of discrete pixel code aperture telescopes." Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243190.
Halli, Abderrahmane Nassim. "Optimisation de code pour application Java haute-performance." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM047/document.
The author did not provide an English summary.
Adamczewski, Martine. "Vectorisation, analyse et optimisation d'un code bidimensionnel eulérien." Bordeaux 1, 1986. http://www.theses.fr/1986BOR10603.
Adamczewski, Martine. "Vectorisation, analyse et optimisation d'un code bidimensionnel eulérien." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb375953123.
Rastelli, Riccardo, and Nial Friel. "Optimal Bayesian estimators for latent variable cluster models." Springer Nature, 2018. http://dx.doi.org/10.1007/s11222-017-9786-y.
Cosma, Georgina. "An approach to source-code plagiarism detection investigation using latent semantic analysis." Thesis, University of Warwick, 2008. http://wrap.warwick.ac.uk/3575/.
Radhouane, Ridha. "Optimisation de modems VDSL." Valenciennes, 2000. https://ged.uphf.fr/nuxeo/site/esupversions/63219c61-e8b1-424c-a9aa-4c3502179c78.
Nijhar, Tajinder Pal Kaur. "Source code optimisation in a high level synthesis system." Thesis, University of Southampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242110.
Ieva, Carlo. "Révéler le contenu latent du code source : à la découverte des topoi de programme." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS024/document.
During the development of long-lifespan software systems, specification documents can become outdated or even disappear due to the turnover of software developers. Implementing new software releases or checking whether some user requirements are still valid thus becomes challenging. The only reliable development artifact in this context is the source code, but understanding the source code of large projects is a time- and effort-consuming activity. This challenging problem can be addressed by extracting the high-level (observable) capabilities of software systems. By automatically mining the source code and the available source-level documentation, it becomes possible to provide significant help to software developers in their program-understanding task. This thesis proposes a new method and a tool, called FEAT (FEature As Topoi), to address this problem. Our approach automatically extracts program topoi from source code analysis using a three-step process: first, FEAT creates a model of a software system capturing both structural and semantic elements of the source code, augmented with code-level comments; second, it creates groups of closely related functions through hierarchical agglomerative clustering; third, within the context of every cluster, functions are ranked and selected, according to some structural properties, in order to form program topoi. The contribution of the thesis is threefold: 1) the notion of program topoi is introduced and discussed from a theoretical standpoint with respect to other notions used in program understanding; 2) at the core of the clustering method used in FEAT, we propose a new hybrid distance combining semantic and structural elements automatically extracted from source code and comments; this distance is parametrized and the impact of the parameter is assessed through a deep experimental evaluation; 3) our tool FEAT has been assessed in collaboration with Software Heritage (SH), a large-scale, ambitious initiative whose aim is to collect, preserve and share all publicly available source code on earth. We performed a large experimental evaluation of FEAT on 600 open-source projects of SH, coming from various domains and amounting to more than 25 MLOC (million lines of code). Our results show that FEAT can handle projects of up to 4,000 functions and several hundreds of files, which opens the door to its large-scale adoption for program understanding.
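The clustering stage can be pictured as hierarchical agglomerative clustering over a distance blending a semantic and a structural component. The sketch below is only an approximation of the idea; the weighting parameter and the two component distances are stand-ins for FEAT's actual hybrid distance:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def hybrid_distance(sem, struct, alpha=0.5):
    """Blend a semantic distance matrix (e.g. cosine distance over identifier
    and comment term vectors) with a structural one (e.g. call-graph distance)."""
    return alpha * sem + (1.0 - alpha) * struct

def cluster_functions(sem, struct, alpha=0.5, n_clusters=10):
    """Group functions by average-linkage agglomerative clustering."""
    D = hybrid_distance(sem, struct, alpha)
    Z = linkage(squareform(D, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```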
Gomez, Jose Ismael Soto. "Optimisation techniques for combining code modulation with equalisation for fading channels." Thesis, Staffordshire University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336888.
Parttimaa, T. (Tuomas). "Test suite optimisation based on response status codes and measured code coverage." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201305201293.
A software test suite often consists of thousands of different test cases. Executing the test suite can waste valuable time and resources if the suite is not optimised. The test suite should be optimised so that it contains few test cases, in order to avoid the unnecessary execution of irrelevant test cases. This master's thesis focuses on attempts to optimise the fuzz test suite of commercially available Hypertext Transfer Protocol (HTTP) servers. For the optimisation, a test environment was created, consisting of the fuzz test suite, five HTTP servers used as test subjects, and a code coverage measurement tool. A predefined test suite was used in test runs performed against the test subjects, during which code coverage was measured. Three different test suite optimisation algorithms were implemented in the thesis, and the original test suite was optimised by applying the algorithms to the results of the test runs. New test runs were then performed with the optimised subset suites, again measuring code coverage of the test subjects. All code coverage measurement results are presented and analysed. Based on the coverage analysis, the effectiveness of the test suite optimisation algorithms was evaluated, with the following results: the code coverage analysis confirmed that a change in the response messages indicates which test cases actually exercise the test subject, and it also gave moderate confidence that an optimised subset suite can achieve the same code coverage as the original test suite.
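One classic way to realise this kind of coverage-driven reduction is greedy set cover over the measured coverage data. The sketch below is a generic illustration of that idea, not one of the three algorithms implemented in the thesis:

```python
def minimise_suite(coverage):
    """Greedy set-cover reduction: repeatedly keep the test case that covers
    the most not-yet-covered code blocks, until nothing new is covered.
    `coverage` maps a test-case id to the set of code blocks it exercised."""
    remaining = set().union(*coverage.values())
    selected = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break
        selected.append(best)
        remaining -= gained
    return selected

# Example: minimise_suite({"tc1": {1, 2}, "tc2": {2, 3}, "tc3": {3}}) -> ["tc1", "tc2"]
```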
Descroix, Cannie Claire. "Optimisation de forme des structures à partir d'un grand code éléments finis." Grenoble INPG, 1989. http://www.theses.fr/1989INPG0125.
Faye, Papa Aldemba. "Couplage algorithme génétique-code éléments finis pour le dimensionnement de structures en matériaux composites." Clermont-Ferrand 2, 2004. http://www.theses.fr/2004CLF22497.
Taleb, Robin I. "Preparation, optimisation and characterisation of sequence selective compounds." Thesis, View thesis, 2008. http://handle.uws.edu.au:8081/1959.7/38214.
Laguzet, Florence. "Etude et optimisation d'algorithmes pour le suivi d'objets couleur." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112197.
The work of this thesis focuses on the improvement and optimization of the Mean-Shift color object tracking algorithm, from both a theoretical and an architectural point of view, in order to improve both accuracy and execution speed. The first part of the work consisted in improving the robustness of the tracking. For this, the impact of the color space representation on the quality of tracking was studied, and a method was proposed for selecting the color space that best represents the object to be tracked. The method was coupled with a strategy that determines the appropriate time to recompute the model. The color space selection method was also used in collaboration with another, more time-consuming object tracking algorithm, covariance tracking, to further improve robustness on particularly difficult sequences. The objective of this work is to obtain a complete real-time system running on multi-core SIMD processors. A study and optimization phase was carried out in order to obtain algorithms whose complexity is configurable, so that they can run in real time on different platforms, for various image and tracked-object sizes. In this context of compromise between speed and performance, it becomes possible to perform real-time tracking on processors such as the ARM Cortex-A9.
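The core Mean-Shift step used in such trackers is a window re-centering on the centroid of a colour back-projection. The sketch below is a simplified, generic rendition (it ignores the thesis' colour-space selection and model-update strategy):

```python
import numpy as np

def back_projection(frame_bins, model_hist):
    """Weight each pixel by the model histogram value of its colour bin.
    `frame_bins` is an integer bin-index image, `model_hist` a 1-D histogram."""
    return model_hist[frame_bins]

def mean_shift(weights, box, iters=20):
    """Shift the window to the centroid of the weights until it stabilises.
    `box` is (x, y, w, h); `weights` is the back-projection image."""
    x, y, w, h = box
    for _ in range(iters):
        win = weights[y:y + h, x:x + w]
        total = win.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[0:win.shape[0], 0:win.shape[1]]
        cx = int(round((xs * win).sum() / total))
        cy = int(round((ys * win).sum() / total))
        nx, ny = x + cx - w // 2, y + cy - h // 2
        if (nx, ny) == (x, y):
            break
        x, y = max(nx, 0), max(ny, 0)
    return x, y, w, h
```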
Drai, Patrick. "Optimisation d'une méthode numérique en thermohydraulique - application de cette méthode au code CEDRIC." Aix-Marseille 1, 1998. http://www.theses.fr/1998AIX11058.
Crick, Thomas. "Superoptimisation : provably optimal code generation using answer set programming." Thesis, University of Bath, 2009. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518295.
Çergani, Ervina [Verfasser], Mira [Akademischer Betreuer] Mezini, and Christoph-Matthias [Akademischer Betreuer] Bockisch. "Machine Learning as a Mean to Uncover Latent Knowledge from Source Code / Ervina Cergani ; Mira Mezini, Christoph Bockisch." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-116586.
Çergani, Ervina [Verfasser], Mira [Akademischer Betreuer] Mezini, and Christoph [Akademischer Betreuer] Bockisch. "Machine Learning as a Mean to Uncover Latent Knowledge from Source Code / Ervina Cergani ; Mira Mezini, Christoph Bockisch." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://d-nb.info/1211088456/34.
Lemuet, Christophe William. "Automated tuning and code generation system for scientific computing." Versailles-St Quentin en Yvelines, 2005. http://www.theses.fr/2005VERS0003.
Complex interactions between hardware mechanisms and software behaviour at run time make it difficult for compilers to obtain high performance independently of the context or of micro-architectural parameters. In scientific computing, many codes are organised around standardised functions so that libraries can be used. These libraries can be optimised, manually or automatically, but only for a single set of predefined kernels; performance is then obtained at the expense of flexibility. This thesis advocates an empirical approach to the automatic optimisation of vector loops. We detail the infrastructure of an automatic system that takes place after high-level code transformations and replaces the sometimes deficient low-level code generation of compilers. Our optimisation process relies on decomposing vector loops into hierarchically organised code patterns whose performance is systematically evaluated; from these measurements, optimisation techniques are deduced and used to generate optimised versions, which are then stored in a database. There is no limit on the number of kernels that can be optimised, and the approach does not rely on the compilers' low-level code generation. Applied to platforms based on the Alpha 21264, Power4 and Itanium 2, this system revealed complex performance behaviours; optimisations were nevertheless deduced and applied to real codes, improving performance by 20% to 100% depending on the code considered.
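The empirical selection at the heart of such systems is essentially "time every candidate version on representative inputs and keep the winner". A minimal sketch of that tuning loop, with illustrative timing policy and variant registry (the thesis works on assembly-level code patterns, not Python callables):

```python
import timeit

def pick_best_variant(variants, make_args, repeats=5):
    """Empirically select the fastest variant for the current machine and data:
    `variants` maps a name to a callable, `make_args` builds representative inputs."""
    timings = {}
    for name, fn in variants.items():
        args = make_args()
        timings[name] = min(timeit.repeat(lambda: fn(*args), number=10, repeat=repeats))
    best = min(timings, key=timings.get)
    return best, timings   # the winner would be stored in the optimisation database
```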
Amstel, Duco van. "Optimisation de la localité des données sur architectures manycœurs." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM019/document.
The continuous evolution of computer architectures has been an important driver of research in code optimization and compiler technologies. A trend in this evolution that can be traced back over decades is the growing ratio between the available computational power (IPS, FLOPS, ...) and the corresponding bandwidth between the various levels of the memory hierarchy (registers, cache, DRAM). As a result, reducing the amount of memory communications that a given code requires has been an important topic in compiler research. A basic principle for such optimizations is the improvement of temporal data locality: grouping all references to a single data point as close together as possible, so that the data is only required for a short duration and can be quickly moved to distant memory (DRAM) without any further memory communications. Yet another architectural evolution has been the advent of the multicore era and, in the most recent years, the first generation of manycore designs. These architectures have considerably raised the bar on the amount of parallelism available to programs and algorithms, but this is again limited by the available bandwidth for communications between the cores. This brings issues that previously were the sole preoccupation of distributed computing into the world of compilation and code optimization techniques. In this document we present a first dive into a new optimization technique which promises both a high-level model for data reuse and a large field of potential applications, a technique which we refer to as generalized tiling. It finds its source in the already well-known loop tiling technique, which has been applied with success to improve data locality for both registers and cache memory in the case of nested loops. This new "flavor" of tiling has a much broader perspective and is not limited to the case of nested loops. It is built on a new representation, the memory-use graph, which is tightly linked to a new model of both memory usage and communication requirements, and which can be used for all forms of iterative code. Generalized tiling expresses data locality as an optimization problem for which multiple solutions are proposed. With the abstraction introduced by the memory-use graph, it is possible to solve this optimization problem in different environments. For experimental evaluations, we show how this new technique can be applied in the context of loops, nested or not, as well as to computer programs expressed in a dataflow language. Anticipating the use of generalized tiling to also distribute computations over the cores of a manycore architecture, we provide some insight into the methods that can be used to model communications and their characteristics on such architectures. As a final point, and in order to show the full expressiveness of the memory-use graph, and even more of the underlying memory usage and communication model, we turn to the topic of performance debugging and the analysis of execution traces. Our goal is to provide feedback on the evaluated code and its potential for further improvement of data locality. Such traces may contain information about memory communications during an execution and show strong similarities with the previously studied optimization problem. This brings us to a short introduction to the algorithmics of directed graphs and to the formulation of some new heuristics for the well-studied topic of reachability and for the much less known problem of convex partitioning.
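The classical loop tiling that generalized tiling starts from fits in a few lines; the sketch below is the textbook blocked matrix product, shown only to illustrate the data-locality idea (it does not implement the memory-use graph or generalized tiling itself):

```python
import numpy as np

def matmul_tiled(A, B, tile=64):
    """Classic loop tiling: work on tile x tile blocks so that each block of
    A, B and C stays resident in cache while it is being reused."""
    n, k = A.shape
    m = B.shape[1]
    C = np.zeros((n, m), dtype=A.dtype)
    for ii in range(0, n, tile):
        for jj in range(0, m, tile):
            for kk in range(0, k, tile):
                C[ii:ii+tile, jj:jj+tile] += (
                    A[ii:ii+tile, kk:kk+tile] @ B[kk:kk+tile, jj:jj+tile]
                )
    return C
```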
Karimi, Belhal. "Non-Convex Optimization for Latent Data Models : Algorithms, Analysis and Applications." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.
Many problems in machine learning pertain to tackling the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks and sparse logistic regression. Optimization methods used to solve those problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements of non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and by improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective, and illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aiming at speeding up the convergence of the estimated parameters.
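"Cheaper updates, ideally independent of the number of samples" is the hallmark of online/stochastic EM. The sketch below is a generic online EM for a one-dimensional Gaussian mixture, given as an illustration of that family of methods (it is not one of the thesis' algorithms; the constant step size and initialisation are arbitrary):

```python
import numpy as np

def online_em_gmm(stream, K=2, lr=0.05):
    """Online EM for a 1-D Gaussian mixture: each observation updates running
    expected sufficient statistics with a step size, so the cost per update
    does not depend on the total number of samples seen so far."""
    rng = np.random.default_rng(0)
    mu, var, pi = rng.normal(size=K), np.ones(K), np.full(K, 1.0 / K)
    # running sufficient statistics: s0=E[z], s1=E[z*x], s2=E[z*x^2]
    s0, s1, s2 = pi.copy(), pi * mu, pi * (var + mu**2)
    for x in stream:
        # E-step on a single observation: responsibilities
        resp = pi * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(var)
        resp /= resp.sum()
        # stochastic approximation update of the sufficient statistics
        s0 = (1 - lr) * s0 + lr * resp
        s1 = (1 - lr) * s1 + lr * resp * x
        s2 = (1 - lr) * s2 + lr * resp * x * x
        # M-step recovered from the running statistics
        pi, mu = s0 / s0.sum(), s1 / s0
        var = np.maximum(s2 / s0 - mu**2, 1e-6)
    return pi, mu, var
```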
Manilov, Stanislav Zapryanov. "Analysis and transformation of legacy code." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/29612.
Garcia Rodriguez, Daniel. "Optimisation d'un code de dynamique des dislocations pour l'étude de la plasticité des aciers ferritiques." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00767178.
Garcia Rodriguez, Daniel. "Optimisation d'un code de dynamique des dislocations pour l'étude de la plasticité des aciers ferritiques." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENI075/document.
The present work is part of a larger multi-scale effort aiming to increase knowledge of the physical phenomena underlying reactor pressure vessel irradiation embrittlement. Within this framework, we focused on the description of dislocation mobility in BCC iron, which is one of the key inputs to dislocation dynamics (DD) simulation codes. An extensive bibliographic review shows that none of the available expressions can deal with the ductile-brittle transition domain of interest. Here, a new screw mobility law able to reproduce the main experimental observations is introduced, building on previous models. This law is used together with an improved dislocation dynamics code, Tridis BCC 2.0, featuring both performance enhancements and better management of dislocation segment interactions, which allows complex DD simulations of BCC iron structures with cross-slip.
Weber, Bruno. "Optimisation de code Galerkin discontinu sur ordinateur hybride : application à la simulation numérique en électromagnétisme." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD046/document.
In this thesis, we present the evolutions made to the Discontinuous Galerkin solver Teta-CLAC (resulting from the IRMA-AxesSim collaboration) during the HOROCH project (2015-2018). This solver makes it possible to solve the Maxwell equations in 3D, in parallel, on a large number of OpenCL accelerators. The goal of the HOROCH project was to perform large-scale simulations on a complete digital human body model. This model is composed of 24 million hexahedral cells, in order to perform calculations in the frequency band of connected objects, going from 1 to 3 GHz (Bluetooth). The applications are numerous: telephony and accessories, sport (connected shirts), medicine (probes: capsules, patches), etc. The changes made include, among others: optimization of OpenCL kernels for CPUs in order to make the best use of hybrid architectures; experimentation with the StarPU runtime; the design of an integration scheme using local time steps; and many optimizations allowing the solver to process simulations of several million cells.
Ozaktas, Haluk. "Compression de code et optimisation multicritère des systèmes embarqués dans un contexte temps réel strict." Paris 6, 2011. http://www.theses.fr/2011PA066376.
Suresh, Arjun. "Intercepting functions for memoization." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S106/document.
We have proposed mechanisms to implement function memoization at the software level as part of our effort to improve sequential code performance. We have analyzed the potential of function memoization for applications and its performance gain on current architectures. We have proposed three schemes: a simple load-time approach which works for any dynamically linked function, a compile-time approach using the LLVM framework which can enable memoization for any program function, and a hardware proposal for performing memoization in hardware, together with its potential benefits. Demonstration of the link-time approach with transcendental functions showed that memoization is applicable and gives good benefits even on modern architectures and compilers (with the restriction that it can be applied only to dynamically linked functions). Our compile-time approach extends the scope of memoization and also increases its benefit. It works for both user-defined and library functions, and can handle certain kinds of non-pure functions, such as functions with pointer arguments and global variable usage. Hardware memoization lowers the threshold for a function to be memoized and gives higher performance gains on average.
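Function memoization itself is simply result caching keyed on the arguments of a pure function. A minimal Python illustration of the principle (the thesis intercepts functions at link time or at the LLVM level, not in Python):

```python
import functools
import math

def memoize(fn):
    """Software memoization: cache results keyed by the argument tuple and
    skip re-execution when the same inputs recur (valid only for pure functions)."""
    cache = {}
    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

# Intercepting a costly pure function, here a transcendental call:
memo_sin = memoize(math.sin)
```

The standard library's functools.lru_cache provides the same behaviour with a bounded cache.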
Moussaoui, Mohamed. "Optimisation de la correction de biais dans le récepteur PIC multi-étages pour le système CDMA." Valenciennes, 2005. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/19958d9e-d214-4166-b25a-481ff9aca6ff.
The complexity of Verdu's optimal receiver for CDMA increases exponentially with the number of users, leading to an unrealistic implementation. In this thesis, we analyze the sub-optimal interference cancellation receiver with a multi-stage parallel structure. The parallel nature of the algorithm can easily be exploited in a multiprocessing environment, which makes it extremely attractive for UMTS-TDD. The use of a matched-filter estimator results in a bias in the estimated amplitude at the second-stage output, particularly for heavy system loads. This bias degrades the system performance in terms of bit error rate (BER). We propose an original low-complexity approach for reducing the bias, in which we do not attenuate the estimated multiple-access interference (MAI), as in the partial cancellation method, but instead amplify the amplitude of the received signal for each user by a scalar which we call the amplification factor (AF). An analytical study in the synchronous case, and simulations in the asynchronous case, were carried out to determine an optimal value of this amplification factor that minimizes the BER. We report the performance obtained with perfect power control and with uncompensated near-far effects. We then propose a structure incorporating this amplification factor in the multi-stage case, and compare its complexity with that of the partial cancellation solution. Another aspect considered in this thesis is the impact of the decision functions on the performance of the PIC receiver. We considered the 'null zone' and 'clipping' functions and compared their sensitivity to decision-threshold optimization errors.
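A single parallel interference cancellation (PIC) stage can be sketched as follows. This is a generic, synchronous, hard-decision illustration; the exact placement of the amplification factor and the decision function are assumptions for the sake of the example, not the thesis' receiver:

```python
import numpy as np

def pic_stage(r, S, b_hat, amplification=1.0):
    """One PIC stage: for every user, subtract the interference reconstructed
    from the other users' current decisions, then take a new hard decision.
    `r` is the received chip vector, `S` holds users' spreading codes as
    columns, `b_hat` the previous stage's decisions (+1/-1)."""
    K = S.shape[1]
    b_new = np.empty(K)
    total = S @ b_hat
    for k in range(K):
        interference = total - S[:, k] * b_hat[k]      # contribution of all other users
        y_k = S[:, k] @ (amplification * r - interference)
        b_new[k] = np.sign(y_k)
    return b_new
```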
Lemaitre, Florian. "Tracking haute fréquence pour architectures SIMD : optimisation de la reconstruction LHCb." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS221.
During this thesis, we studied linear algebra computations on small matrices (typically from 2x2 to 5x5) used within the LHCb experiment (and also in other domains, such as computer vision). Linear algebra libraries like Eigen, Magma or the MKL are not optimized for such small matrices. We used and combined many well-known transforms that help SIMD execution, together with some unusual ones such as the fast reciprocal square root computation. We wrote a code generator in order to simplify the use of these transforms and to keep the code portable. We tested these optimizations and analyzed their impact on the speed of simple algorithms. Batch processing in SoA (structure-of-arrays) layout is crucial to process these small matrices quickly. We also analyzed how the accuracy of the results depends on the precision of the data. We applied these transforms to speed up the Cholesky factorization of small matrices (up to 12x12); the processing speed is capped if the fast reciprocal square root computation is not used. We obtained a speedup of between 10x and 33x in single precision (F32); our version is then 3x to 10x faster than the MKL. Finally, we studied and sped up the Kalman filter in its general form: our 4x4 F32 implementation is 90x faster, and the Kalman filter used within LHCb has been sped up by 2.2x compared to the current SIMD version and by at least 2.3x compared to the filters used in other high-energy physics experiments.
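Batch processing in SoA layout means one arithmetic statement is applied to a whole batch of matrices at once, which is what makes the code SIMD-friendly. An illustrative numpy rendition of a batched small-matrix Cholesky (the thesis generates SIMD C++ code, and this sketch uses a plain division rather than the fast reciprocal square root):

```python
import numpy as np

def batched_cholesky(A):
    """Cholesky factorization of a batch of small SPD matrices stored as a
    structure of arrays: A has shape (n, n, batch), so every scalar operation
    below acts on the whole batch simultaneously."""
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        d = A[j, j] - np.einsum("kb,kb->b", L[j, :j], L[j, :j])
        L[j, j] = np.sqrt(d)
        rdiag = 1.0 / L[j, j]            # one reciprocal per column, reused below
        for i in range(j + 1, n):
            s = A[i, j] - np.einsum("kb,kb->b", L[i, :j], L[j, :j])
            L[i, j] = s * rdiag
    return L
```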
Venturini, Hugo. "Le débogage de code optimisé dans le contexte des systèmes embarqués." Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00332344.
Diallo, Amadou Tidiane. "Caractérisation analytique et optimisation de codes source-canal conjoints." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00748545.
Farouni, Tarek. "An Overview of Probabilistic Latent Variable Models with an Application to the Deep Unsupervised Learning of Chromatin States." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492189894812539.
Youssef, Mazen. "Modélisation, simulation et optimisation des architectures de récepteur pour les techniques d'accès W-CDMA." Supervised by Abbas Dandache and Camille Diou. Metz : Université de Metz, 2009. ftp://ftp.scd.univ-metz.fr/pub/Theses/2009/Mazen.Youssef.SMZ0907.pdf.
Alhindawi, Nouh Talal. "Supporting Source Code Comprehension During Software Evolution and Maintenance." Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1374790792.
Kiepas, Patryk. "Analyses de performances et transformations de code pour les applications MATLAB." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEM063/document.
MATLAB is a computing environment with an easy programming language and a vast library of functions, commonly used in Computational Science and Engineering (CSE) for fast prototyping. However, some features of its environment, such as its dynamic language or its interactive style of programming, affect how fast programs can execute. Current approaches to improving MATLAB programs either translate the code to faster static languages like C or Fortran, or apply code transformations to MATLAB code systematically without considering their impact on performance. In this thesis, we fill this gap by developing techniques for the analysis and code transformation of MATLAB programs in order to improve their performance. More precisely, we analyse and model the behaviour of the black-box MATLAB environment by measuring the execution characteristics of programs on the CPU. From the resulting data, we formalise a static model which predicts the type and order of instructions scheduled by the Just-In-Time (JIT) compiler. This model allows us to propose several code transformations which increase the performance of MATLAB programs by influencing how the JIT compiler generates machine code. The obtained results demonstrate the practical benefits of the presented methodology.
Moore, Brian. "Optimisation de la génération symbolique du code et estimation des paramètres dynamiques de structures mécaniques dans SYMOFROS /." Thèse, Trois-Rivières : Université du Québec à Trois-Rivières, 2000. http://www.uqtr.ca/biblio/notice/resume/03-2216433R.html.
Moore, Brian. "Optimisation de la génération symbolique du code et estimation des paramètres dynamiques de structures mécaniques dans SYMOFROS." Thèse, Université du Québec à Trois-Rivières, 2000. http://depot-e.uqtr.ca/3088/1/000671450.pdf.
Monsifrot, Antoine. "Utilisation du raisonnement à partir de cas et de l'apprentissage pour l'optimisation de code." Rennes 1, 2002. http://www.theses.fr/2002REN10107.
Khan, Minhaj Ahmad. "Techniques de spécialisation de code pour des architectures à hautes performances." Versailles-St Quentin en Yvelines, 2008. http://www.theses.fr/2008VERS0032.
Many applications are unable to use the peak performance offered by modern architectures such as the Itanium and the Pentium 4, which makes the code optimizations performed by compilers critical. Among all compiler optimizations, code specialization, which provides the compiler with the values of important parameters in the code, is very effective. Static specialization has the drawback of producing a large code size, called code explosion. This large size implies cache misses and branching costs, and it also puts constraints on other optimizations. All these effects make it necessary to specialize code dynamically. Code specialization is then performed by dynamic compilers/specializers, which generate code at run time. These approaches are not always beneficial, since execution incurs a large runtime generation overhead that can degrade performance; moreover, to be amortized, this cost requires many invocations of the same code. Aiming to improve the performance of complex applications, this thesis proposes different strategies for code specialization. Using static, dynamic and iterative compilation, we target the problems of code-size explosion and of the time overhead induced by runtime code generation. Our "hybrid specialization" generates equivalent versions of the code after specializing it statically. Instead of keeping all the versions, one of them can be used as a template whose instructions are modified during execution in order to adapt it to the other versions. Performance is improved because the code is specialized at static compile time, and dynamic specialization is thus limited to modifying a small number of instructions. Different variants of these approaches can improve specialization by choosing suitable variables, reducing the number of compilations and reducing the frequency of dynamic specialization. Our "iterative specialization" approach is able to optimize regular codes by obtaining several optimal classes of specialized code at static compile time; an iterative transformation is then applied to the code in order to benefit from the generated optimal classes and obtain the best version. Experiments were carried out on IA-64 and Pentium 4 architectures, using the gcc and icc compilers. The proposed approaches (hybrid and iterative specialization) allow us to obtain significant improvements on several benchmarks, including those of SPEC, FFTW and ATLAS.
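The underlying idea of code specialization is to bake a runtime-known value into a generated variant of the code. The sketch below conveys only that idea in Python; the thesis' hybrid specialization works at the binary-instruction level, patching a statically produced template rather than generating source at run time:

```python
def specialize_saxpy(a):
    """Build a variant of 'y[i] += a * x[i]' specialized for a known constant `a`:
    the constant is hard-coded into the generated source (runtime code generation)."""
    src = (
        "def saxpy_spec(x, y):\n"
        "    for i in range(len(x)):\n"
        f"        y[i] += {a!r} * x[i]\n"
    )
    ns = {}
    exec(src, ns)                 # compile the specialized variant at run time
    return ns["saxpy_spec"]

saxpy_2 = specialize_saxpy(2.0)   # variant with the multiplier baked in
```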
Robillard, Benoît. "Verification formelle et optimisation de l’allocation de registres." Electronic Thesis or Diss., Paris, CNAM, 2010. http://www.theses.fr/2010CNAM0730.
The need for trustworthy programs has led to an increasing use of formal verification techniques over the last decade, and especially of program proof. However, the code running on the computer is not the source code, i.e. the one written by the developer, since it has to be translated by the compiler. As a result, formal verification of compilers is required to complement source code verification. One of the hardest phases of compilation is register allocation, the phase in which the compiler decides where the variables of the program are stored in memory during its execution. There are two kinds of memory locations: a limited number of fast-access locations, called registers, and a very large but slow-access stack. The aim of register allocation is thus to make as much use of registers as possible, leading to faster executable code. The most widely used model for register allocation is interference graph coloring. In this thesis, our objective is twofold: first, formally verifying some well-known interference graph coloring algorithms for register allocation and, second, designing new graph-coloring register allocation algorithms. More precisely, we provide a fully formally verified implementation of Iterated Register Coalescing, a classical graph-coloring register allocation heuristic, which has been integrated into the CompCert compiler. We also studied two intermediate representations of programs used in compilers, and in particular the SSA form, in order to design new algorithms using global properties of the graph rather than the local criteria currently used in the literature.
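Graph-coloring register allocation assigns one of k registers to each variable so that no two simultaneously live variables share a register. The sketch below is a textbook simplify/select heuristic in the Chaitin/Briggs style, given only to illustrate the model; it is neither Iterated Register Coalescing nor the formally verified implementation described above:

```python
def color_interference_graph(adj, k):
    """Colour an interference graph with at most k colours (registers).
    `adj` maps each variable to the set of variables live at the same time
    (assumed symmetric, no self-loops). Returns a colour map (None = spill)
    and the nodes chosen as spill candidates during simplification."""
    work = {v: set(n) for v, n in adj.items()}
    stack, spill_candidates = [], []
    while work:
        # simplify: remove a node with fewer than k neighbours if possible
        v = next((v for v in work if len(work[v]) < k), None)
        if v is None:                               # otherwise pick a spill candidate
            v = max(work, key=lambda u: len(work[u]))
            spill_candidates.append(v)
        stack.append(v)
        for u in work.pop(v):
            if u in work:
                work[u].discard(v)
    colors = {}
    for v in reversed(stack):                       # select: colour in reverse order
        used = {colors[u] for u in adj[v] if u in colors}
        free = [c for c in range(k) if c not in used]
        colors[v] = free[0] if free else None       # None means the variable is spilled
    return colors, spill_candidates
```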
Lim, Wen Jun. "Analysis and design of analog fountain codes for short packet communications." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29277.
Goutier, Jean-Michel. "Optimisation de la tête de détection d'un capteur à balance d'induction. Identification de la réponse d'un code métallique." Reims, 1995. http://www.theses.fr/1995REIMS012.
Thévenoux, Laurent. "Synthèse de code avec compromis entre performance et précision en arithmétique flottante IEEE 754." Perpignan, 2014. http://www.theses.fr/2014PERP1176.
Numerical accuracy and execution time of programs using floating-point arithmetic are major concerns in many computer science applications. Improving these criteria is the subject of many research works, but we observe that improving accuracy decreases performance, and conversely. Indeed, accuracy-improvement techniques, such as expansions or compensations, increase the number of computations that a program has to execute, and the more computations are added, the more performance decreases. This thesis presents a method of accuracy improvement which takes this negative effect on performance into account. We automate the error-free transformations of elementary floating-point operations because they present a high potential for parallelism. Moreover, we propose transformation strategies allowing partial improvement of programs, in order to control the impact on execution time more precisely. Trade-offs between accuracy and performance are then ensured by code synthesis. We also present several experimental results obtained with tools implementing all the contributions of this work.
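The error-free transformations mentioned here are exemplified by the classical TwoSum algorithm, which recovers the rounding error of a floating-point addition exactly. A direct Python transcription of the standard algorithm, together with a compensated summation built on it (illustrative; not the thesis' synthesised code):

```python
def two_sum(a, b):
    """Error-free transformation of addition: returns (s, e) with s = fl(a + b)
    and a + b = s + e exactly in IEEE 754 arithmetic."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def compensated_sum(xs):
    """Compensated summation: accumulate the rounding errors produced by
    two_sum and add them back at the end to improve the final accuracy."""
    s = 0.0
    err = 0.0
    for x in xs:
        s, e = two_sum(s, x)
        err += e
    return s + err
```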
Changuel, Samar. "Analyse, optimisation et applications des turbocodes produits Reed-Solomon." Brest, 2008. http://www.theses.fr/2008BRES2012.
This thesis deals with error-correcting codes for digital communications and information storage. It focuses on the study of turbo codes built from two single-error-correcting Reed-Solomon (RS) component codes and their potential applications. As a first step, we study the distance properties of the binary image of RS codes in order to assess the asymptotic performance under maximum-likelihood soft-decision decoding. We then show that a judicious choice of the code roots may yield binary expansions with larger binary minimum distance, and thus more efficient codes, compared to classical constructions. Next, we focus on product codes built from two single-error-correcting RS codes. These codes provide high coding rates and efficient low-complexity iterative decoding. Computation of the binary minimum distance of the product code, together with simulation results, shows that optimizing the binary minimum distance of the component codes leads to better asymptotic performance without additional decoder complexity. We then compare different hard-decision decoding solutions for RS product codes, and also show that RS turbo codes are robust to shortening. The third part of this thesis is devoted to the combination of RS product turbo codes with high-order modulations for bandwidth-efficient transmission. We first study Bit-Interleaved Coded Modulation (BICM). We then propose a second, pragmatic coded modulation scheme achieving a wide range of spectral efficiencies without changing the code or the decoder. Simulation results show that, contrary to the BICM scheme, the performance of this solution exhibits a constant gap to capacity regardless of the dimension of the signal set; furthermore, this gap decreases as the code length increases. The last part of our work investigates the iterative decoding of product codes over the erasure channel. We propose two erasure decoding algorithms for linear block codes and compare them to other solutions proposed in the literature, with the aim of identifying the algorithm best suited to iterative decoding of product codes. The simulation results show that both RS and BCH product codes provide near-capacity performance on the erasure channel.