Dissertations / Theses on the topic "Code source (informatique) – Documentation"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the 50 best dissertations / theses for your research on the topic "Code source (informatique) – Documentation".
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is present in the metadata.
Browse dissertations / theses from a wide variety of scientific fields and compile an accurate bibliography.
Oumaziz, Mohamed Ameziane. "Cloning beyond source code : a study of the practices in API documentation and infrastructure as code". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0007.
When developing software, maintenance and evolution represent an important part of the development life-cycle, making up to 80% of the overall cost and effort. During the maintenance effort, developers often have to resort to copying and pasting source code fragments in order to reuse them. Such a practice, seemingly harmless, is more frequent than we expect. Commonly referred to as "clones" in the literature, these source code duplicates are a well-known and studied topic in software engineering. In this thesis, we aim at shedding some light on copy-paste practices on software artifacts. In particular, we chose to focus our contributions on two specific types of software artifacts: API documentation and build files (i.e. Dockerfiles). For both contributions, we follow a common empirical study methodology. First, we show that API documentation and software build files (i.e. Dockerfiles) actually face duplication issues and that such duplicates are frequent. Secondly, we identify the reasons behind the existence of such duplicates. Thirdly, we perform a survey of experienced developers and find that they are aware of such duplicates and frequently face them, but still have a mixed opinion regarding them. Finally, we show that both software artifacts lack reuse mechanisms to cope with duplicates, and that some developers even resort to ad-hoc tools to manage them.
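As a rough illustration of the kind of duplicate detection such a study relies on (a sketch, not code from the thesis; the window size and normalisation rules are assumptions), the following Python fragment hashes normalized instruction blocks across a set of Dockerfiles and reports blocks that recur in more than one place.

    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def normalize(lines):
        """Drop comments/blank lines and collapse whitespace so that trivially
        reformatted copies still hash to the same value."""
        cleaned = []
        for line in lines:
            line = line.strip()
            if line and not line.startswith("#"):
                cleaned.append(" ".join(line.split()))
        return cleaned

    def block_hashes(path, size=3):
        """Hash every window of `size` consecutive normalized instructions."""
        text = Path(path).read_text(encoding="utf-8", errors="ignore")
        lines = normalize(text.splitlines())
        for i in range(len(lines) - size + 1):
            chunk = "\n".join(lines[i:i + size])
            yield hashlib.sha1(chunk.encode("utf-8")).hexdigest(), str(path), i

    def find_duplicates(dockerfiles, size=3):
        seen = defaultdict(list)
        for f in dockerfiles:
            for digest, path, index in block_hashes(f, size):
                seen[digest].append((path, index))
        return {d: locs for d, locs in seen.items() if len(locs) > 1}

    if __name__ == "__main__":
        for digest, locations in find_duplicates(Path(".").rglob("Dockerfile")).items():
            print(digest[:10], locations)

The same windowed-hashing idea applies to API documentation fragments once they are split into comparable units.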
Latappy, Corentin. "Les pratiques de code : de la documentation à la détection". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0101.
Coding practices are increasingly used in the field of software development. Their implementation ensures maintainability, readability, and consistency of the code, which greatly contributes to software quality. Most of these practices are implemented in static analysis tools, or linters, which automatically alert developers when a practice is not followed. However, more and more organizations, tending to create their own internal practices, encounter problems with their understanding and adoption by developers. First, for a practice to be applied, it must first be understood by developers, thus requiring properly written documentation. Yet this topic of documentation has been little studied in the scientific literature. Then, to promote their adoption, it would be necessary to be able to extend existing analysis tools to integrate new practices, which is difficult given the expertise required to make these modifications. Packmind, a company based in Bordeaux, develops a solution to support developers in bringing out these internal practices through workshops. However, it suffers from the same issues mentioned above. In this thesis, we first focused on providing recommendations to the authors of practice documentation. To do this, we analyzed the documentation of more than 100 rules from 16 different linters to extract a taxonomy of documentation objectives and of the types of content present. We then conducted a survey among developers to assess their expectations in terms of documentation. This notably allowed us to observe that the reasons why a practice should be applied were very poorly documented, while they are perceived as essential by developers. Secondly, we studied the feasibility of automatically identifying violations of practices from examples. Our context, forcing us to detect internal practices for which we have few examples to learn from, pushed us to implement transfer learning on the machine learning model CodeBERT. We show that the models thus trained achieve good performance in an experimental context, but that accuracy collapses when we apply them to real code bases.
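The transfer-learning setup described above can be pictured with a short sketch (not the thesis's code): a binary classification head is fine-tuned on top of the public microsoft/codebert-base checkpoint, assuming the Hugging Face transformers and torch packages. The snippets and labels below are placeholders for an internal practice.

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Binary head on top of CodeBERT: does a snippet violate a given practice?
    tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "microsoft/codebert-base", num_labels=2)

    snippets = ["if (x == null) { return; }", "catch (Exception e) { }"]
    labels = torch.tensor([0, 1])  # 1 = violates the (hypothetical) practice

    batch = tokenizer(snippets, padding=True, truncation=True, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()
    for _ in range(3):  # a few gradient steps on the toy batch
        outputs = model(**batch, labels=labels)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    model.eval()
    with torch.no_grad():
        predictions = model(**batch).logits.argmax(dim=-1)
    print(predictions.tolist())

In a realistic setting the few available examples would be split into training and evaluation sets, which is precisely where the accuracy drop observed on real code bases becomes visible.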
Che, Denise. "Automatic documentation generation from source code". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/112822.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 46).
Many systems lack robust documentation because writing documentation comes at a cost for developers and offers little immediate benefit. To motivate developers to write more documentation, we need a framework that beneficially incorporates documentation into the functionalities of the codebase. We explored this idea in the context of Exhibit, a data publishing framework developed at MIT CSAIL's Haystack group. Exhibit enables those without much programming experience to create data-rich interactive webpages. We created an automatic documentation system with an approach that motivates the creation of rich specifications by developers, which leads to good documentation. The code required for documentation benefits developers by simplifying the codebase and providing fault tolerance through a modular error checking mechanism. This system intends to aid new and returning users of Exhibit in learning the Exhibit syntax and in creating their Exhibits by providing extensive, up-to-date documentation for all the available views and facets in Exhibit along with an interactive example component. The interactive example component provides an interface that allows users to experiment with combinations of the different settings and also aids developers during the unit testing process.
by Denise Che.
M. Eng.
Jaeger, Julien. "Source-to-source transformations for irregular and multithreaded code optimization". Versailles-St Quentin en Yvelines, 2012. http://www.theses.fr/2012VERS0015.
In this dissertation, we show that source-to-source optimization is an efficient method to generate a high-performance program for irregular and heterogeneous code from a basic implementation. After describing the evolution of processor architectures, we provide two methods. The first one extracts codelets from an irregular code, optimizes these codelets, and predicts the performance of the modified program. The other one limits the impact of alignment issues due to vectorization or bank conflicts. We also present two parallelization techniques, one generating parallel codelets, the other scheduling a task graph on a heterogeneous system.
Tévar Hernández, Helena. "Evolution of Software Documentation Over Time: An analysis of the quality of software documentation". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97561.
Depaz, Pierre. "The role of aesthetics in understanding source code". Electronic Thesis or Diss., Paris 3, 2023. http://www.theses.fr/2023PA030084.
This thesis investigates how the aesthetic properties of source code enable the representation of programmed semantic spaces, in relation with the function and understanding of computer processes. By examining program texts and the discourses around them, we highlight how source code aesthetics are both dependent on the context in which they are written and contingent on other literary, architectural, and mathematical aesthetics, varying along different scales of reading. In particular, we show how the aesthetic properties of source code manifest expressive power due to their existence as a dynamic, functional, and shared computational interface to the world, through formal organizations which facilitate semantic compression and spatial exploration.
Lebras, Youenn. "Code optimization based on source to source transformations using profile guided metrics". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLV037/document.
Our goal is to develop a framework allowing the definition of source code transformations based on dynamic metrics. This framework is integrated into the MAQAO tool suite developed at the UVSQ / ECR. We present a set of source-to-source transformations that can be guided by the end user and by the dynamic metrics coming from the various MAQAO tools, in order to work at both source and binary levels. This framework can also be used as a pre-processor to simplify development by performing, cleanly and automatically, some simple but time-consuming and error-prone transformations (e.g. loop/function specialization).
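A toy example of the kind of source-to-source specialization mentioned above (a sketch on Python source with the standard ast module, Python 3.9+ for ast.unparse; this is not MAQAO's implementation, and the function and constant are invented): a copy of a function is generated in which one parameter is folded to a known constant value.

    import ast

    SOURCE = """
    def scale(v, factor):
        out = []
        for x in v:
            out.append(x * factor)
        return out
    """

    class Specialize(ast.NodeTransformer):
        """Clone a function with one parameter fixed to a constant, a classic
        function specialization that a profile could justify."""
        def __init__(self, param, value):
            self.param, self.value = param, value

        def visit_FunctionDef(self, node):
            node.name = f"{node.name}_{self.param}_{self.value}"
            node.args.args = [a for a in node.args.args if a.arg != self.param]
            self.generic_visit(node)
            return node

        def visit_Name(self, node):
            if node.id == self.param and isinstance(node.ctx, ast.Load):
                return ast.copy_location(ast.Constant(self.value), node)
            return node

    tree = ast.parse(SOURCE.replace("\n    ", "\n"))  # dedent the toy source
    specialized = ast.fix_missing_locations(Specialize("factor", 2).visit(tree))
    print(ast.unparse(specialized))

In a metric-guided flow, the decision to emit such a specialized clone would come from the profiling data rather than being hard-coded as it is here.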
Chilowicz, Michel. "Recherche de similarité dans du code source". Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00587628.
Habchi, Sarra. "Understanding mobile-specific code smells". Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I089.
Object-oriented code smells are well-known concepts in software engineering. They refer to bad design and development practices commonly observed in software systems. With the emergence of mobile apps, new classes of code smells have been identified to refer to bad development practices that are specific to mobile platforms. These mobile-specific code smells differ from object-oriented ones by focusing on performance issues reported in the documentation or developer guidelines. Since their identification, many research works have approached mobile-specific code smells to propose detection tools and study them. Nonetheless, most of these studies only focused on measuring the performance impact of such code smells and did not provide any insights about their motives and potential solutions. In particular, we lack knowledge about (i) the rationales behind the accrual of mobile code smells, (ii) the developers' perception of mobile code smells, and (iii) the generalizability of code smells across different mobile platforms. These gaps hinder the understanding of mobile code smells and consequently prevent the design of adequate solutions for them. Therefore, we conduct in this thesis a series of empirical studies with the aim of understanding mobile code smells. First, we study the expansion of code smells in different mobile platforms. Then, we conduct a large-scale study to analyze the change history of mobile apps and discern the factors that favor the introduction and survival of code smells. To consolidate these studies, we also perform a user study to investigate developers' perception of code smells and the adequacy of static analyzers as a solution for coping with them. Finally, we perform a qualitative study to question the established foundation about the definition and detection of mobile code smells. The results of these studies revealed important research findings. Notably, we showed that pragmatism, prioritization, and individual attitudes are not relevant factors for the accrual of mobile code smells. The problem is rather caused by ignorance and oversight, which are prevalent among mobile developers. Furthermore, we highlighted several flaws in the code smell definitions that are currently adopted by the research community. These results allowed us to elaborate some recommendations for researchers and tool makers willing to design detection and refactoring tools for mobile code smells. On top of that, our results opened perspectives for research works about the identification of mobile code smells and development practices in general.
Chaabane, Rim. "Analyse et optimisation de patterns de code". Paris 8, 2011. http://www.theses.fr/2011PA084174.
Our work consists in analyzing and optimizing the source code of legacy-system applications. We worked in particular on a software product, called GP3, which is developed and maintained by the finance company Sungard. This software has existed for more than twenty years; it is written in a proprietary, procedural, fourth-generation language called ADL (Application Development Language). It was initially developed under VMS and accessed previous-generation databases. For commercial reasons, it was ported to UNIX and now targets new-generation relational DBMSs, Oracle and Sybase; it has also been extended to offer a web interface. This legacy system now has to face new challenges, such as the growth of database size. Over the last 20 years, we have observed the merging of several companies, which implies the merging of their databases. These can exceed one terabyte of data, and more is to be expected in the future. In this new context, the ADL language shows its limits in handling such a large amount of data. Code patterns, denoting database access structures, are suspected of being responsible for performance degradation. Our work consists in detecting all instances of these patterns in the source code, and then identifying the instances that consume the most execution time and the largest number of calls. We developed a first tool, named Adlmap, based on static code analysis, which detects all database accesses in the source code and marks those identified as code patterns. The second tool we developed, named Pmonitor, is based on a hybrid analysis combining static and dynamic analysis. This tool allows us to measure the performance of code patterns and thus to identify the least efficient instances.
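As a schematic, purely hypothetical counterpart to a static pattern detector like Adlmap (the pattern list and file extension below are invented for the example, and a real tool would rely on the ADL grammar rather than regular expressions), a first pass over the sources could look like this in Python:

    import re
    from collections import Counter
    from pathlib import Path

    # Hypothetical database-access patterns to flag in the source code.
    PATTERNS = {
        "select_star": re.compile(r"\bSELECT\s+\*", re.IGNORECASE),
        "nested_query": re.compile(r"\bSELECT\b[^;]*\(\s*SELECT\b",
                                   re.IGNORECASE | re.DOTALL),
        "select_in_loop": re.compile(r"\bfor\b.*\n(?:.*\n)*?.*\bSELECT\b",
                                     re.IGNORECASE),
    }

    def scan(root, suffix=".adl"):
        """Count occurrences of each suspected pattern under `root`."""
        hits = Counter()
        for path in Path(root).rglob(f"*{suffix}"):
            text = path.read_text(encoding="utf-8", errors="ignore")
            for name, pattern in PATTERNS.items():
                hits[name] += len(pattern.findall(text))
        return hits

    if __name__ == "__main__":
        for name, count in scan(".").most_common():
            print(f"{name}: {count} occurrence(s)")

The dynamic half of a hybrid analysis would then attach execution-time and call-count measurements to each reported location.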
Le Dilavrec, Quentin. "Precise temporal analysis of source code histories at scale". Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS003.
Software systems play a central role in contemporary societies, continuously expanding and adjusting to meet emerging requirements and practices. Over the years, through extensive development efforts and numerous code updates, those systems can accumulate millions of lines of code. Moreover, they exhibit complexity, configurability, and multi-language characteristics, relying on large and complex build pipelines to target multiple platforms and hardware. This requires extensive code analyses to maintain code, monitor quality and catch bugs. Automated code analysis is a resource-intensive process primarily designed to examine a single specific software version. Presently, existing code analysis methods struggle to efficiently evaluate multiple software versions within a single analysis, taking up an impractical amount of time. This type of code analysis, which scrutinizes software code over several moments of its existence, is referred to as "temporal code analysis". Temporal code analyses open new opportunities for improving software quality and reliability. For example, such analyses would make it possible to fully study how code and its tests co-evolve in a code history. To overcome the different challenges that prevent such analyses from running at large scale, this thesis makes the following contributions. It first demonstrates the feasibility of analyzing source code changes to identify causality relations between changes (i.e. co-evolutions). The second contribution addresses the efficiency of computing fine-grained changes and their impacts from code histories. This required revisiting how source code histories are represented and processed, leveraging the structured nature of code and its stability through time. It led to an approach, called HyperAST, that incrementally computes referential dependencies. The third contribution is a novel code differencing technique for diffing commits. This last contribution, called HyperDiff, complements HyperAST to compare commits at scale.
Zhang, Xu. "Analyse de la similarité du code source pour la réutilisation automatique de tests unitaires à l'aide du CBR". Thesis, Université Laval, 2013. http://www.theses.ulaval.ca/2013/29841/29841.pdf.
Automatically reusing unit tests is a possible solution to help developers with their daily work. Our research proposes preliminary solutions using case-based reasoning (CBR), an approach coming from artificial intelligence. This technique tries to find the most similar case in a case base in order to reuse it, after some modifications, on new problems to solve. Our work focuses on the similarity analysis of program code with the goal of reusing unit tests. Our main focus is the elaboration of a technique to compare classes in the testing context. More precisely, in this thesis we discuss: 1. how to find the most similar class for which it will be possible to reuse its tests (main focus); 2. how to find similar methods between the new class and the most similar one; 3. how to find which tests could be reused considering the similarity of the methods. To achieve this, we run experiments to find the best attributes (characteristics) to compare two classes. Those attributes must be chosen considering the specific context of testing; for example, they are not the same as for finding duplicated code. This research proposes an algorithm to analyze the similarity of classes. Our experiment shows that this algorithm works quite well in the context of the experiment. We also extended the experiment to see whether it could work within the whole process of selection and reuse of unit tests, using some simple techniques that could certainly be refined. In fact, our work demonstrates that it is possible to reuse unit tests, even though our algorithm could be improved, and we suggest some improvements to it.
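A minimal sketch of the retrieval step in such a CBR setting (the feature set, weights and case base below are assumptions for illustration, not the attributes selected in the thesis): each class is described by a few test-relevant counts and the most similar class in a small case base is retrieved so that its tests can be considered for reuse.

    def similarity(a, b, weights):
        """Weighted similarity between two feature vectors, in [0, 1]."""
        score = 0.0
        for key, w in weights.items():
            hi = max(a[key], b[key], 1)
            score += w * (1 - abs(a[key] - b[key]) / hi)
        return score / sum(weights.values())

    # Assumed test-relevant attributes extracted from the classes under test.
    WEIGHTS = {"methods": 0.4, "attributes": 0.3, "avg_params": 0.3}

    case_base = {
        "Stack":   {"methods": 5, "attributes": 1, "avg_params": 1.2},
        "Account": {"methods": 8, "attributes": 3, "avg_params": 1.8},
    }
    new_class = {"methods": 6, "attributes": 1, "avg_params": 1.0}

    best = max(case_base,
               key=lambda name: similarity(case_base[name], new_class, WEIGHTS))
    print("Most similar class (reuse its tests first):", best)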
Lecerf, Jason. "Designing language-agnostic code transformation engines". Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I077.
Code transformations are needed in various cases: refactorings, migrations, code specialization, and so on. Code transformation engines work by finding a pattern in the source code and rewriting its occurrences according to the transformation. The transformation either rewrites the occurrences, elements of the intermediate representation (IR) of the language, into new elements, or directly rewrites the source code. In this work, we focused on source rewriting since it offers more flexibility through arbitrary transformations, especially for migrations and specializations. Matching patterns come in two different flavors, explicit and syntactic. The former requires the user to know the IR of the language, a heavy knowledge burden. The latter only relies on the syntax of the matched language and not its IR, but requires significantly more work to implement the language back-ends. Language experts tend to know both the IR and the syntax of a language, while other users know only the syntax. We propose a pattern matching engine offering a hybrid pattern representation: both explicit and syntactic matching are available in the same pattern. The engine always defaults to syntactic matching, as it is the lowest barrier to entry for patterns. To counterbalance the implementation cost of language back-ends for syntactic pattern matching, we take a generative approach. We combine the hybrid pattern matching engine with a parser generator. The parser generator generates generalized LR (GLR) parsers capable of parsing not only the source but also the hybrid pattern. The back-end implementer only needs to add one line to the grammar of the language to activate the pattern matching engine. This approach to pattern matching requires GLR parsers capable of forking and keeping track of each individual fork. These GLR implementations suffer as more forking is done to handle ambiguities, and patterns require even more forking. To prevent an explosion, our Fibered-GLR parsers merge more often and allow for classic disambiguation during the parse through side-effects.
Kiss, Fabian. "Documenting for Program Comprehension in Agile Software Development". Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20393.
Program: Magisterutbildning i informatik (Master's programme in informatics)
Ouddan, Mohammed Amine. "Indexation et recherche des documents code source basées sur une caractérisation structuro-sémantique : application à la détection de plagiats". Université de Marne-la-Vallée, 2007. http://www.theses.fr/2007MARN0340.
Source code characterization is a very complex task due to the amount of similarity between computer science assignments. The various transformations that occur within a plagiarized code make plagiarism detection more difficult. We propose a multi-language source code retrieval system for plagiarism detection which is based on a two-level characterization approach. The first level reflects the syntactic features of the code, allowing a structural characterization of its content, and the second level relates to its functional features, allowing a semantic characterization. Our approach is based on the concept of grammar with actions, which consists in assigning significance to the parsing process in a context of characterization and, at the same time, allowing access to the structural and semantic content of the code using the grammar of its programming language. The main idea is to translate the source code into a set of symbol sequences called characteristic sequences. At the first level of characterization we talk about structural sequences, and at the second level we talk about genetic sequences. In order to quantify the similarity between characteristic sequences, we use sequence alignment techniques where the similarity rate is considered as an abstraction of the plagiarism rate between the characterized codes.
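The alignment step can be pictured with the standard library's difflib, a simplification of the alignment techniques mentioned above; the characteristic symbol sequences below are invented for the example.

    from difflib import SequenceMatcher

    # Characteristic sequences abstracting two submissions (hypothetical symbols).
    seq_a = ["FUNC", "LOOP", "ASSIGN", "CALL", "RET", "FUNC", "IF", "ASSIGN"]
    seq_b = ["FUNC", "LOOP", "ASSIGN", "CALL", "RET", "FUNC", "ASSIGN", "IF"]

    matcher = SequenceMatcher(a=seq_a, b=seq_b)
    print("similarity ratio:", round(matcher.ratio(), 3))
    for block in matcher.get_matching_blocks():
        if block.size:
            print(f"shared run of {block.size} symbols at a[{block.a}] / b[{block.b}]")

A full system would instead use scored global or local alignment (as in bioinformatics) so that renamings and reorderings are penalized in a controlled way.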
Amouroux, Guillaume. "Etude de l'analyse automatique des règles de conception des systèmes multitâches temps réel". Paris 11, 2008. http://www.theses.fr/2008PA112053.
The works presented in this thesis propose a novel method to verify the application of design rules based on the analysis of the source code. Design rules make it possible to guarantee the presence of sound properties in the final program; therefore, verifying that they have been applied allows one to guarantee the presence of the associated properties. Conversely, if a particular rule is not found to be applied in the final program, no judgment may be given regarding the quality of the source code, but the fact that the developer did not follow the rule is significant in itself. The verifications are particularly aimed at the problems specific to multitask systems. The introduction of dynamic, non-deterministic scheduling between the tasks renders analyses by classical program proof inefficient or even useless. This led us to propose a new program analysis technique, based on multiple levels of analysis and program slicing. The proposed technique is based on the study of the source code only; the analyses must be performed on this element alone when no other element regarding the system's design is available.
Come, David. "Analyse de la qualité de code via une approche logique et application à la robotique". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30008.
The quality of source code depends not only on its functional correctness but also on its readability, intelligibility and maintainability. This is currently an important problem in robotics, where many open-source frameworks do not spread well in the industry because of uncertainty about the quality of the code. Code analysis and search tools are effective in improving these aspects. It is important that they let the user specify what she is looking for, in order to be able to take into account the specific features of the project and of the domain. There exist two main representations of source code: its Abstract Syntax Tree (AST) and the Control Flow Graph (CFG) of its functions. Existing specification mechanisms only use one of these representations, which is unfortunate because they offer complementary information. The objective of this work is therefore to develop a method for verifying code compliance with user rules that can take advantage of both the AST and the CFG. The method is underpinned by a new logic we developed in this work: FO++, a temporal extension of first-order logic. Relying on this logic has two advantages. First of all, it is independent of any programming language and has a formal semantics. Then, once instantiated for a given programming language, it can be used as a means of formalizing user-provided properties. Finally, the study of its model-checking problem provides a mechanism for the automatic and correct verification of code compliance. These different concepts have been implemented in Pangolin, a tool for the C++ language. Given the code to be checked and a specification (which corresponds to an FO++ formula, written using the Pangolin language), the tool indicates whether or not the code meets the specification. It also offers a summary of the evaluation, in order to be able to find the code that violates the property, as well as a certificate of the correctness of the result. Pangolin and FO++ have been applied to the field of robotics through the analysis of the quality of ROS packages and the formalization of a ROS-specific design pattern. As a second and more general application to the development of programs in C++, we have formalized various good-practice rules for this language. Finally, we have shown how it is possible to specify and verify rules that are closely related to a specific project by checking properties on the source code of Pangolin itself.
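To give a concrete feel for an AST-level rule check (Python's ast module standing in for a C++ front end such as Pangolin's; the rule itself is an invented example, not one from the thesis), the sketch below flags functions in which a call to open() is not wrapped in a with statement:

    import ast

    RULE = "every call to open() must appear inside a `with` statement"

    SOURCE = """
    def good(path):
        with open(path) as f:
            return f.read()

    def bad(path):
        f = open(path)
        return f.read()
    """

    def violations(tree):
        for func in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
            # Calls that occur somewhere inside a `with` block or its items.
            with_calls = set()
            for w in ast.walk(func):
                if isinstance(w, ast.With):
                    with_calls.update(id(c) for c in ast.walk(w)
                                      if isinstance(c, ast.Call))
            for call in (n for n in ast.walk(func) if isinstance(n, ast.Call)):
                if getattr(call.func, "id", None) == "open" and id(call) not in with_calls:
                    yield func.name, call.lineno

    tree = ast.parse(SOURCE.replace("\n    ", "\n"))  # dedent the toy source
    for name, line in violations(tree):
        print(f"rule violated in `{name}` at line {line}: {RULE}")

A rule expressed in a logic such as FO++ would additionally constrain execution paths (for instance "every open is eventually followed by a close on every path"), which is where the CFG side becomes necessary.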
Hecht, Geoffrey. "Détection et analyse de l'impact des défauts de code dans les applications mobiles". Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10133/document.
Mobile applications are becoming complex software systems that must be developed quickly and evolve continuously to fit new user requirements and execution contexts. However, addressing these constraints may result in poor low-level design choices, known as code smells. The presence of code smells within software systems may incidentally degrade their quality and performance, and hinder their maintenance and evolution. Thus, it is important to know these smells but also to detect and correct them. While code smells are well known in object-oriented applications, their study in mobile applications is still in its infancy. Moreover, there is a lack of tools to detect and correct them. That is why we present a classification of 17 code smells that may appear in Android applications, as well as a tool to detect and correct code smells on Android. We apply and validate our approach on large numbers of applications (over 3000) in two studies evaluating the presence and evolution of the number of code smells in popular applications. In addition, we also present two approaches to assess the impact of the correction of code smells on performance and energy consumption. These approaches have allowed us to observe that the correction of code smells is beneficial in most cases.
Ieva, Carlo. "Révéler le contenu latent du code source : à la découverte des topoi de programme". Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS024/document.
During the development of long-lifespan software systems, specification documents can become outdated or can even disappear due to the turnover of software developers. Implementing new software releases or checking whether some user requirements are still valid thus becomes challenging. The only reliable development artifact in this context is source code, but understanding the source code of large projects is a time- and effort-consuming activity. This challenging problem can be addressed by extracting the high-level (observable) capabilities of software systems. By automatically mining the source code and the available source-level documentation, it becomes possible to provide significant help to software developers in their program understanding task. This thesis proposes a new method and a tool, called FEAT (FEature As Topoi), to address this problem. Our approach automatically extracts program topoi from source code analysis using a three-step process. First, FEAT creates a model of a software system capturing both structural and semantic elements of the source code, augmented with code-level comments. Second, it creates groups of closely related functions through hierarchical agglomerative clustering. Third, within the context of every cluster, functions are ranked and selected, according to some structural properties, in order to form program topoi. The contribution of the thesis is threefold: 1) the notion of program topoi is introduced and discussed from a theoretical standpoint with respect to other notions used in program understanding; 2) at the core of the clustering method used in FEAT, we propose a new hybrid distance combining both semantic and structural elements automatically extracted from source code and comments; this distance is parametrized and the impact of the parameter is thoroughly assessed through a deep experimental evaluation; 3) our tool FEAT has been assessed in collaboration with Software Heritage (SH), a large-scale, ambitious initiative whose aim is to collect, preserve and share all publicly available source code on earth. We performed a large experimental evaluation of FEAT on 600 open-source projects of SH, coming from various domains and amounting to more than 25 MLOC (million lines of code). Our results show that FEAT can handle projects of up to 4,000 functions and several hundred files, which opens the door to its large-scale adoption for program understanding.
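A compressed sketch of the clustering step described above, using SciPy's hierarchical agglomerative clustering over a toy hybrid distance (the function names, distance values and cut threshold are all assumptions, not FEAT's actual data or parameters):

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    functions = ["parse_header", "parse_body", "read_config", "write_log", "rotate_log"]

    # Toy hybrid distance matrix mixing lexical similarity of identifiers/comments
    # with structural proximity in the call graph (values invented for the example).
    D = np.array([
        [0.0, 0.2, 0.6, 0.9, 0.9],
        [0.2, 0.0, 0.5, 0.9, 0.9],
        [0.6, 0.5, 0.0, 0.8, 0.8],
        [0.9, 0.9, 0.8, 0.0, 0.1],
        [0.9, 0.9, 0.8, 0.1, 0.0],
    ])

    Z = linkage(squareform(D), method="average")       # agglomerative clustering
    labels = fcluster(Z, t=0.4, criterion="distance")  # cut the dendrogram
    for cluster in sorted(set(labels)):
        members = [f for f, c in zip(functions, labels) if c == cluster]
        print(f"cluster {cluster}: {members}")

The ranking step would then select, inside each cluster, the functions that best expose an observable capability.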
Foucault, Matthieu. "Organisation des développeurs open-source et fiabilité logicielle". Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0219/document.
Reliability of software, i.e. its capacity to produce the expected behaviour, is essential to the success of software projects. To ensure such reliability, developers need to reduce the number of bugs in the source code of the software. One of the techniques available to help developers in this task is the use of software metrics, and especially metrics related to the development process. The general objective of this thesis is to contribute to the validation of process metrics by studying their relationship with software reliability. These metrics, once validated, can be used in bug prediction models with the goal of guiding maintenance efforts, or can be used to create development guidelines. Given the extent of this domain, we chose to focus on one particular aspect of the development process, namely developer organisation, and we studied this organisation in open-source software projects. In parallel to the validation of process metrics, we contributed to the improvement of the methodology used to extract and analyse metrics, thanks to information available in software repositories.
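One of the simplest process metrics in this line of work, the number of distinct developers who touched each file, can be extracted from a Git repository as follows (a generic sketch, not the thesis's tooling; it assumes git is on the PATH and is run inside a repository):

    import subprocess
    from collections import defaultdict

    def developers_per_file(repo="."):
        """Map each file to the set of author names found in its history."""
        log = subprocess.run(
            ["git", "-C", repo, "log", "--name-only", "--format=AUTHOR:%an"],
            capture_output=True, text=True, check=True).stdout
        authors = defaultdict(set)
        current = None
        for line in log.splitlines():
            if line.startswith("AUTHOR:"):
                current = line[len("AUTHOR:"):]
            elif line.strip() and current is not None:
                authors[line.strip()].add(current)
        return authors

    if __name__ == "__main__":
        ranked = sorted(developers_per_file().items(), key=lambda kv: -len(kv[1]))
        for path, devs in ranked[:10]:
            print(f"{len(devs):3d} developers  {path}")

Validating such a metric then means testing whether files touched by many developers are statistically more bug-prone than the others.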
Haddad, Axel. "Shape-Preserving Transformations of Higher-Order Recursion Schemes". Paris 7, 2013. http://www.theses.fr/2013PA077264.
Higher-order recursion schemes model functional programs in the sense that they describe the recursive definitions of the user-defined functions of a program, without interpreting the built-in functions. Therefore the semantics of a higher-order recursion scheme is the (possibly infinite) tree of executions of a program. This thesis focuses on verification-related problems. The MSO model-checking problem, i.e. the problem of knowing whether the tree generated by a scheme satisfies a monadic second-order logic (MSO) formula, was solved by Ong in 2006. In 2010, Broadbent, Carayol, Ong and Serre extended this result by showing that one can transform a scheme such that the nodes in the tree satisfying a given MSO formula are marked; this problem is called the reflection problem. Finally, in 2012, Carayol and Serre solved the selection problem: if the tree of a given scheme satisfies a formula of the form "there exists a set of nodes such that...", one can transform the scheme such that a set witnessing the property is marked. In this thesis, we use a semantics approach to study scheme-transformation problems. Our goal is to give shape-preserving solutions to such problems, i.e. solutions where the output scheme has the same structure as the input one. With this idea, we establish a simulation algorithm that takes a scheme G and an evaluation policy r ∈ {OI, IO} and outputs a scheme G' such that the value tree of G' under the OI policy is equal to the value tree of G under r. Then we give new proofs of the reflection and selection results that do not involve collapsible pushdown automata and are again shape-preserving.
Ruzé, Emmanuel. "Collaboration massivement distribuée et gestion du savoir en ligne: le cas de la communauté WordPress (2003-2008). "Code is poetry", but a "documentation resource should be managed"". Phd thesis, Ecole Polytechnique X, 2009. http://pastel.archives-ouvertes.fr/pastel-00006078.
Halli, Abderrahmane Nassim. "Optimisation de code pour application Java haute-performance". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM047/document.
The author did not provide an English abstract.
Stankay, Michal. "Prístupy ku konsolidácii programovej a dátovej základne webového serveru Techno.cz". Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-4549.
Singh, Neeraj Kumar. "Fiabilité et sûreté des systèmes informatiques critiques". Electronic Thesis or Diss., Nancy 1, 2011. http://www.theses.fr/2011NAN10129.
Software systems are pervasive in all walks of our life and have become an essential part of our daily life. Information technology is one major area which provides powerful and adaptable opportunities for innovation, and it seems boundless. However, systems developed using computer-based logic have produced disappointing results. According to stakeholders, they are unreliable, at times dangerous, and fail to provide the desired outcomes. The most significant reason for system failures is poor development practice. This is due to the complex nature of modern software and a lack of adequate and proper understanding. Software development provides a framework for simplifying the complex system to get a better understanding and to develop higher-fidelity, quality systems at lower cost. Highly embedded critical systems, in areas such as automation, medical surveillance, avionics, etc., are susceptible to errors, which can lead to grave consequences in case of failure. This thesis intends to contribute to the further use of formal techniques for the development of computing systems with high integrity. Specifically, it addresses the fact that formal methods are not well integrated into established critical-systems development processes, by defining a new development life-cycle and a set of associated techniques and tools to develop highly critical systems using formal techniques, from requirements analysis to automatic source code generation, using several intermediate layers with a rigorous safety assessment approach. The approach has been realised using the Event-B formalism. This thesis has two main parts: techniques and tools, and case studies. The techniques and tools part consists of a development life-cycle methodology, a framework for a real-time animator, refinement charts, a set of automatic code generation tools and a formal-logic-based heart model for closed-loop modeling. The new development methodology and the set of associated techniques and tools are used for developing critical systems from requirements analysis to code implementation, where verification and validation tasks are used as intermediate layers for providing a correct formal model with the desired system behavior at the concrete level. Introducing new tools helps to verify desired properties which are hidden at the early stages of system development. We also critically evaluate the proposed development methodology and the developed techniques and tools through case studies in the medical and automotive domains. In addition, this work addresses the formal representation of medical protocols, which is useful for improving existing medical protocols. We have fully formalised a real-world medical protocol (ECG interpretation) to analyse whether the formalisation complies with certain medically relevant protocol properties. The formal verification process has discovered a number of anomalies in the existing protocols. We have also discovered an efficient hierarchical structure for ECG interpretation that helps to find a set of conditions that can be very helpful in diagnosing particular diseases at an early stage. The main objective of the developed formalism is to test the correctness and consistency of the medical protocol.
Tran, Vinh Duc. "Des codes pour engendrer des langages de mots infinis". Nice, 2011. http://www.theses.fr/2011NICE4109.
This thesis deals with languages of infinite words which are the ω-powers of a language of finite words. In particular, we focus on the open question: given a language L, does there exist an ω-code C such that Cω = Lω? It is quite similar to the question of deciding whether a submonoid of a free monoid is generated by a code. First, we study the set of relations satisfied by a language L, i.e. the double factorizations of a word in L* ∪ Lω. We establish a necessary condition for Lω to have a code or an ω-code generator. Next, we define a new class of languages for which the set of relations is as simple as possible after codes: one-relation languages. For this class of languages, we characterize the languages L such that there exists a code or an ω-code C such that Lω = Cω, and we show that C is never a finite language. Finally, a characterization of codes concerning infinite words leads us to define reduced languages. We consider the properties of these languages as generators of languages of infinite words.
Duruisseau, Mickaël. "Améliorer la compréhension d’un programme à l’aide de diagrammes dynamiques et interactifs". Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I042/document.
Developers play a central role in software development. In this context, they must perform a succession of elementary tasks (analysis, coding, linking with existing code, ...), but in order to perform these tasks, a developer must regularly change his working context (searching for information, reading code, ...) and analyze code that is not his own. These actions require a significant adaptation time and reduce the efficiency of the developer. Software modeling is a solution to this type of problem: it offers an abstract view of a software system, of the links between its entities, and of the algorithms used. However, Model-Driven Engineering (MDE) is still underused in industry. In this thesis, we propose a tool to improve the understanding of a program using dynamic and interactive diagrams. This tool is called VisUML and focuses on the main coding activity of the developer. VisUML provides views (on web pages or in modeling tools) synchronized with the code. The generated UML diagrams are interactive and allow fast navigation with and in the code. This navigation reduces the loss of time and context due to activity changes by providing, at any time, an abstract diagram view of the elements currently open in the developer's coding tool. In the end, VisUML was evaluated by twenty developers as part of a qualitative experiment to estimate the usefulness of such a tool.
Richa, Elie. "Qualification des générateurs de code source dans le domaine de l'avionique : le test automatisé des chaines de transformation de modèles". Electronic Thesis or Diss., Paris, ENST, 2015. http://www.theses.fr/2015ENST0082.
In the avionics industry, Automatic Code Generators (ACGs) are increasingly used to produce parts of embedded software. Since the generated code is part of critical software, safety standards require a thorough verification of the ACG, called qualification. In this thesis, in collaboration with AdaCore, we seek to reduce the cost of testing activities through automatic and effective methods. The first part of the thesis addresses the topic of unit testing, which ensures exhaustiveness but is difficult to achieve for ACGs. We propose a method that guarantees the same level of exhaustiveness by using only integration tests, which are easier to carry out. First, we propose a formalization of the ATL language, in which the ACG is defined, in the theory of Algebraic Graph Transformation. We then define a translation of postconditions expressing the exhaustiveness of unit testing into equivalent preconditions that ultimately support the production of integration tests providing the same level of exhaustiveness. Finally, we propose to optimize the complex algorithm of our analysis using simplification strategies that we assess experimentally. The second part of the work addresses the oracles of ACG tests, i.e. the means of validating the code generated by the ACG during a test. We propose a language for the specification of textual constraints able to automatically check the validity of the generated code. This approach is experimentally deployed at AdaCore for a Simulink® to Ada/C ACG called QGen.
Xiao, Chenglong. "Custom operator identification for high-level synthesis". Rennes 1, 2012. http://www.theses.fr/2012REN1E005.
It is increasingly common to see custom operators appear in various fields of circuit design. Custom operators that can be implemented in special hardware units make it possible to reduce code size, improve performance and reduce area. In this thesis, we propose a design flow based on custom operator identification for high-level synthesis. The key issues involved in the design flow are: automatic enumeration and selection of custom operators from a given high-level application code, and re-generation of the source code incorporating the selected custom operators. Unlike previously proposed approaches, our design flow is quite adaptable and is independent of high-level synthesis tools (i.e., it does not modify the scheduling and binding algorithms of high-level synthesis tools). Experimental results show that our approach achieves on average 19%, and up to 37%, area reduction compared to a traditional high-level synthesis. Meanwhile, the latency is reduced on average by 22%, and up to 59%. Furthermore, on average 74%, and up to 81%, code size reduction can be achieved.
Gastellier-Prevost, Sophie. "Vers une détection des attaques de phishing et pharming côté client". Phd thesis, Institut National des Télécommunications, 2011. http://tel.archives-ouvertes.fr/tel-00699627.
Robillard, Benoît. "Vérification formelle et optimisation de l'allocation de registres". Electronic Thesis or Diss., Paris, CNAM, 2010. http://www.theses.fr/2010CNAM0730.
The need for trustworthy programs has led to an increasing use of formal verification techniques over the last decade, and especially of program proof. However, the code running on the computer is not the source code, i.e. the one written by the developer, since it has to be translated by the compiler. As a result, the formal verification of compilers is required to complete source code verification. One of the hardest phases of compilation is register allocation. Register allocation is the phase in which the compiler decides where the variables of the program are stored in memory during its execution. There are two kinds of memory locations: a limited number of fast-access zones, called registers, and a very large but slow-access stack. The aim of register allocation is then to make the best possible use of registers, leading to faster runnable code. The most widely used model for register allocation is interference graph coloring. In this thesis, our objective is twofold: first, formally verifying some well-known interference graph coloring algorithms for register allocation and, second, designing new graph-coloring register allocation algorithms. More precisely, we provide a fully formally verified implementation of the Iterated Register Coalescing, a very classical graph-coloring register allocation heuristic, which has been integrated into the CompCert compiler. We also studied two intermediate representations of programs used in compilers, and in particular the SSA form, to design new algorithms using global properties of the graph rather than the local criteria currently used in the literature.
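The graph-coloring model mentioned above can be illustrated with a short Chaitin-style simplify/select pass (a didactic sketch over an invented interference graph, far simpler than the verified Iterated Register Coalescing of the thesis):

    def color(interference, k):
        """Chaitin-style allocation: repeatedly remove a node of degree < k,
        then color in reverse removal order; leftovers are spilled to the stack."""
        graph = {v: set(ns) for v, ns in interference.items()}
        stack, spilled = [], set()
        while graph:
            node = next((v for v, ns in graph.items() if len(ns) < k), None)
            if node is None:                      # no trivially colorable node
                node = max(graph, key=lambda v: len(graph[v]))
                spilled.add(node)                 # conservative spill decision
            stack.append(node)
            for ns in graph.values():
                ns.discard(node)
            del graph[node]
        assignment = {}
        for node in reversed(stack):
            if node in spilled:
                continue
            used = {assignment[n] for n in interference[node] if n in assignment}
            assignment[node] = min(c for c in range(k) if c not in used)
        return assignment, spilled

    interference = {            # invented interference graph between variables
        "a": {"b", "c"}, "b": {"a", "c", "d"},
        "c": {"a", "b", "d"}, "d": {"b", "c"},
    }
    registers, spills = color(interference, k=3)
    print("registers:", registers, "spilled:", spills)

Coalescing (merging move-related variables) and the correctness proof of each step are exactly the parts that make the verified version substantially harder than this sketch.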
David, Robin. "Formal Approaches for Automatic Deobfuscation and Reverse-engineering of Protected Codes". Electronic Thesis or Diss., Université de Lorraine, 2017. http://www.theses.fr/2017LORR0013.
Malware analysis is a growing research field due to the criticality and variety of the assets targeted, as well as the increasing implied costs. These pieces of software frequently use evasion tricks aiming at hindering detection and analysis techniques. Among these, obfuscation intends to hide the program's behavior. This thesis studies the potential of Dynamic Symbolic Execution (DSE) for reverse-engineering. First, we propose two variants of DSE algorithms adapted and designed to fit protected codes. The first is a flexible definition of the DSE path predicate computation based on concretization and symbolization. The second is based on the definition of a backward-bounded symbolic execution algorithm. Then, we show how to combine these techniques with static analysis in order to get the best of both. Finally, these algorithms have been implemented in different tools (Binsec/se, Pinsec and Idasec) interacting together, and tested on several malicious codes and commercial packers. In particular, they have been successfully used to circumvent and remove the obfuscation targeted in real-world malware like X-Tunnel from the famous APT28/Sednit group.
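The path-predicate idea at the heart of DSE can be shown in a few lines with the Z3 SMT solver (the z3-solver Python package is assumed; the branch conditions form a made-up opaque-predicate example and are not taken from Binsec, Pinsec or Idasec):

    from z3 import BitVec, Solver, sat

    x = BitVec("x", 32)          # symbolic input read by the program

    # Path predicate for a trace: take branch 1 (x * 7 + 3 == 59),
    # then avoid branch 2 (x != 8) -- an invented obfuscated check.
    predicate = [x * 7 + 3 == 59, x != 8]

    solver = Solver()
    solver.add(*predicate)
    if solver.check() == sat:
        print("input reaching this path:", solver.model()[x])
    else:
        print("path infeasible (the guarded branch is dead or opaque)")

Here the solver reports the path as infeasible, which is the kind of verdict backward-bounded symbolic execution uses to expose opaque predicates and dead branches introduced by obfuscation.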
Tigori, Kabland Toussaint Gautier. "Méthode de génération d’exécutif temps-réel". Thesis, Ecole centrale de Nantes, 2016. http://www.theses.fr/2016ECDN0019.
In embedded systems, specialization or configuration of a real-time operating system according to the application requirements consists in removing the operating system services that are not needed by the application. This operation makes it possible both to optimize the memory footprint occupied by the real-time operating system, in order to meet the memory constraints of embedded systems, and to reduce the amount of dead code inside the real-time operating system, in order to improve its reliability. In this thesis, we focus on the use of formal methods to specialize real-time operating systems according to applications. One major difficulty in using formal models is the gap between the system model and its implementation. Thus, we propose to model the operating system so that the model embeds its source code and the data structures it manipulates. For this purpose, we use a finite-state model (possibly a timed model) with discrete variables and sequences of instructions, considered as atomic, that manipulate these variables. From the operating system model and an application model, the set of reachable states of the operating system model, describing the code needed during application execution, is computed. The source code of the specialized operating system is then extracted from the pruned model. The overall approach is implemented with Trampoline, a real-time operating system based on the OSEK/VDX and AUTOSAR standards. This specialization technique ensures the absence of dead code, minimizes the memory footprint and provides a formal model of the operating system, used in a final step to check its behavior by model checking. In this context, we propose an automatic formal verification technique that allows checking the operating system against the OSEK/VDX and AUTOSAR standards using generic observers.
Noureddine, Adel. "Vers une meilleure compréhension de la consommation énergétique des systèmes logiciels". Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2014. http://tel.archives-ouvertes.fr/tel-00961346.
Lassus Saint-Genies, Hugues de. "Elementary functions : towards automatically generated, efficient, and vectorizable implementations". Thesis, Perpignan, 2018. http://www.theses.fr/2018PERP0010/document.
Elementary mathematical functions are pervasive in many high performance computing programs. However, although the mathematical libraries (libms) on which these programs rely generally provide several flavors of the same function, these are fixed at implementation time. Hence, the monolithic characteristic of libms is an obstacle to the performance of programs relying on them, because they are designed to be versatile at the expense of specific optimizations. Moreover, the duplication of shared patterns in the source code makes maintaining such code bases more error-prone and difficult. A current challenge is to propose "meta-tools" targeting automated high-performance code generation for the evaluation of elementary functions. These tools must allow the reuse of generic and efficient algorithms for different flavors of functions or hardware architectures. It then becomes possible to generate optimized, tailored libms with factorized generative code, which eases their maintenance. First, we propose a novel algorithm that allows generating lookup tables that remove rounding errors for trigonometric and hyperbolic functions. Then, we study the performance of vectorized polynomial evaluation schemes, a first step towards the generation of efficient vectorized elementary functions. Finally, we develop a meta-implementation of a vectorized logarithm, which factors code generation for different formats and architectures. Our contributions are shown to be competitive with free or commercial solutions, which is a strong incentive to push for developing this new paradigm.
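The contrast between polynomial evaluation schemes studied in this line of work can be sketched as follows (the coefficients are placeholders): Horner's rule is sequential, while Estrin's scheme exposes independent sub-products that a vectorizing compiler can execute in parallel.

    def horner(coeffs, x):
        """Sequential Horner evaluation: a0 + x*(a1 + x*(a2 + ...))."""
        acc = 0.0
        for a in reversed(coeffs):
            acc = acc * x + a
        return acc

    def estrin(coeffs, x):
        """Estrin's scheme: pair coefficients and square x at each level,
        which shortens the dependency chain and eases SIMD vectorization."""
        level = list(coeffs)
        power = x
        while len(level) > 1:
            if len(level) % 2:
                level.append(0.0)
            level = [level[i] + level[i + 1] * power
                     for i in range(0, len(level), 2)]
            power *= power
        return level[0]

    coeffs = [1.0, 0.5, 0.25, 0.125, 0.0625]   # placeholder polynomial
    print(horner(coeffs, 0.3), estrin(coeffs, 0.3))

Both functions compute the same polynomial; the difference only shows up in the dependency structure exploited by the hardware.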
Abdallah, Rouwaida. "Implementability of distributed systems described with scenarios". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00919684.
El Moussawi, Ali Hassan. "SIMD-aware word length optimization for floating-point to fixed-point conversion targeting embedded processors". Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S150/document.
In order to cut down their cost and/or their power consumption, many embedded processors do not provide hardware support for floating-point arithmetic. However, applications in many domains, such as signal processing, are generally specified using floating-point arithmetic for the sake of simplicity. Porting these applications to such embedded processors requires a software emulation of floating-point arithmetic, which can greatly degrade performance. To avoid this, the application is converted to use fixed-point arithmetic instead. Floating-point to fixed-point conversion involves a subtle tradeoff between performance and precision; it enables the use of narrower data word lengths at the cost of degrading the computation accuracy. Besides, most embedded processors provide support for SIMD (Single Instruction Multiple Data) as a means to improve performance. In fact, this allows the execution of one operation on multiple data in parallel, thus ultimately reducing the execution time. However, the application should usually be transformed in order to take advantage of the SIMD instruction set. This transformation, known as Simdization, is affected by the data word lengths; narrower word lengths enable a higher SIMD parallelism rate. Hence the tradeoff between precision and Simdization. Much existing work has aimed at providing or improving methodologies for automatic floating-point to fixed-point conversion on the one hand, and for Simdization on the other. In the state of the art, both transformations are considered separately even though they are strongly related. In this context, we study the interactions between these transformations in order to better exploit the performance/accuracy tradeoff. First, we propose an improved SLP (Superword Level Parallelism) extraction algorithm (a Simdization technique). Then, we propose a new methodology to jointly perform floating-point to fixed-point conversion and SLP extraction. Finally, we implement this work as a fully automated source-to-source compiler flow. Experimental results, targeting four different embedded processors, show the validity of our approach in efficiently exploiting the performance/accuracy tradeoff compared to a typical approach which considers both transformations independently.
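The word-length trade-off at the core of this conversion can be illustrated by quantizing a signal to a few signed fixed-point formats and measuring the resulting error (the formats and the test signal are arbitrary choices for the example):

    import math

    def to_fixed(value, frac_bits, word_length):
        """Round to the nearest representable value in a signed Qm.f format
        with `word_length` total bits and `frac_bits` fractional bits."""
        scale = 1 << frac_bits
        lo, hi = -(1 << (word_length - 1)), (1 << (word_length - 1)) - 1
        q = max(lo, min(hi, round(value * scale)))      # saturate on overflow
        return q / scale

    signal = [math.sin(2 * math.pi * i / 32) for i in range(32)]
    for word_length, frac_bits in [(8, 6), (12, 10), (16, 14)]:
        err = max(abs(s - to_fixed(s, frac_bits, word_length)) for s in signal)
        print(f"Q{word_length - frac_bits}.{frac_bits} ({word_length} bits): "
              f"max error = {err:.6f}")

Narrower word lengths increase the error but pack more lanes into one SIMD register, which is exactly the coupling the joint methodology exploits.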
Cornu, Benoit. "Automatic analysis and repair of exception bugs for java programs". Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10212/document.
Texto completo da fonteThe world is becoming more computerized every day. There is more and more software running everywhere, from personal computers to data servers, and inside most newly popularized inventions such as connected watches or intelligent washing machines. All of these technologies use software applications to perform the services they are designed for. Unfortunately, the number of software errors grows with the number of software applications. In isolation, software errors are often annoyances, perhaps costing one person a few hours of work when their accounting application crashes. Multiply this loss across millions of people and consider that even scientific progress is delayed or derailed by software errors: in aggregate, these errors are now costly to society as a whole. We specifically target two problems. Problem #1: There is a lack of debug information for bugs related to exceptions, which hinders the bug fixing process. To make bug fixing of exceptions easier, we propose techniques to enrich the debug information. These techniques are fully automated and provide information about the cause and the handling possibilities of exceptions. Problem #2: There are unexpected exceptions at runtime for which there is no error-handling code. In other words, the resilience mechanisms against exceptions in currently existing (and running) applications are insufficient. We propose resilience capabilities which correctly handle exceptions that were neither foreseen at specification time nor encountered during development or testing. In this thesis, we aim at finding solutions to these problems. We present four contributions to address the two presented problems
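To give a rough idea of what "enriching the debug information" around an exception can mean: the thesis targets Java programs, but the sketch below uses Python only for brevity, and the decorator name is invented. It attaches the call context and the causal chain before re-raising.

```python
# Generic illustration only (the thesis's techniques target Java; this
# helper is invented for the sketch): capture the call context and the
# exception's causal chain, then re-raise with that information attached.
import functools
import traceback

def with_exception_context(func):
    """Re-raise any exception with its cause chain and arguments attached."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            context = {
                "function": func.__name__,
                "args": args,
                "kwargs": kwargs,
                "cause_chain": traceback.format_exception(
                    type(exc), exc, exc.__traceback__),
            }
            raise RuntimeError(
                f"unhandled failure in {func.__name__}: {context}") from exc
    return wrapper

@with_exception_context
def parse_amount(text):
    return float(text)

# parse_amount("12,5")  # would raise a RuntimeError carrying the context dict
```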
Serrano, Lucas. "Automatic inference of system software transformation rules from examples". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS425.
Texto completo da fonteThe Linux kernel is present today in all kinds of computing environments, from smartphones to supercomputers, including both the latest hardware and "ancient" systems. This multiplicity of environments has come at the cost of a large code size, of approximately ten million lines of code, dedicated to device drivers. However, to add new functionalities, or for performance or security reasons, some internal Application Programming Interfaces (APIs) can be redesigned, triggering the need for changes to potentially thousands of drivers using them. This thesis proposes a novel approach, Spinfer, that can automatically perform these API usage updates. This new approach, based on pattern assembly constrained by control-flow relationships, can learn transformation rules from even imperfect examples. Learned rules are suitable for the challenges found in Linux kernel API usage updates
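As a rough analogy for the kind of API usage update that such transformation rules automate (this is not Spinfer's rule language, and the API names old_alloc and new_alloc are invented for the sketch), a toy AST-based rewrite might look like this:

```python
# Toy illustration of an API-usage update: rewrite every call
# old_alloc(size) into new_alloc(size, flags=0). Requires Python 3.9+
# for ast.unparse. The API names are invented for this sketch.
import ast

class ApiUpdate(ast.NodeTransformer):
    """Rewrite old_alloc(size) calls into new_alloc(size, flags=0)."""
    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == "old_alloc":
            node.func.id = "new_alloc"
            node.keywords.append(ast.keyword(arg="flags", value=ast.Constant(0)))
        return node

source = "buf = old_alloc(128)"
tree = ApiUpdate().visit(ast.parse(source))
print(ast.unparse(ast.fix_missing_locations(tree)))  # buf = new_alloc(128, flags=0)
```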
Van, Assche Gilles. "Information-Theoretic aspects of quantum key distribution". Doctoral thesis, Universite Libre de Bruxelles, 2005. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211050.
Texto completo da fonteLa distribution quantique de clés est une technique cryptographique permettant l'échange de clés secrètes dont la confidentialité est garantie par les lois de la mécanique quantique. Le comportement particulier des particules élémentaires est exploité. En effet, en mécanique quantique, toute mesure sur l'état d'une particule modifie irrémédiablement cet état. En jouant sur cette propriété, deux parties, souvent appelées Alice et Bob, peuvent encoder une clé secrète dans des porteurs quantiques tels que des photons uniques. Toute tentative d'espionnage demande à l'espion, Eve, une mesure de l'état du photon qui transmet un bit de clé et donc se traduit par une perturbation de l'état. Alice et Bob peuvent alors se rendre compte de la présence d'Eve par un nombre inhabituel d'erreurs de transmission.
L'information échangée par la distribution quantique n'est pas directement utilisable mais doit être d'abord traitée. Les erreurs de transmissions, qu'elles soient dues à un espion ou simplement à du bruit dans le canal de communication, doivent être corrigées grâce à une technique appelée réconciliation. Ensuite, la connaissance partielle d'un espion qui n'aurait perturbé qu'une partie des porteurs doit être supprimée de la clé finale grâce à une technique dite d'amplification de confidentialité.
Cette thèse s'inscrit dans le contexte de la distribution quantique de clé où les porteurs sont des états continus de la lumière. En particulier, une partie importante de ce travail est consacrée au traitement de l'information continue échangée par un protocole particulier de distribution quantique de clés, où les porteurs sont des états cohérents de la lumière. La nature continue de cette information implique des aménagements particuliers des techniques de réconciliation, qui ont surtout été développées pour traiter l'information binaire. Nous proposons une technique dite de réconciliation en tranches qui permet de traiter efficacement l'information continue. L'ensemble des techniques développées a été utilisé en collaboration avec l'Institut d'Optique à Orsay, France, pour produire la première expérience de distribution quantique de clés au moyen d'états cohérents de la lumière modulés continuement.
D'autres aspects importants sont également traités dans cette thèse, tels que la mise en perspective de la distribution quantique de clés dans un contexte cryptographique, la spécification d'un protocole complet, la création de nouvelles techniques d'amplification de confidentialité plus rapides à mettre en œuvre ou l'étude théorique et pratique d'algorithmes alternatifs de réconciliation.
Enfin, nous étudions la sécurité du protocole à états cohérents en établissant son équivalence à un protocole de purification d'intrication. Sans entrer dans les détails, cette équivalence, formelle, permet de valider la robustesse du protocole contre tout type d'espionnage, même le plus compliqué possible, permis par les lois de la mécanique quantique. En particulier, nous généralisons l'algorithme de réconciliation en tranches pour le transformer en un protocole de purification et nous établissons ainsi un protocole de distribution quantique sûr contre toute stratégie d'espionnage.
Quantum key distribution is a cryptographic technique which allows the exchange of secret keys whose confidentiality is guaranteed by the laws of quantum mechanics. The strange behavior of elementary particles is exploited. In quantum mechanics, any measurement of the state of a particle irreversibly modifies this state. By taking advantage of this property, two parties, often called Alice and Bob, can encode a secret key into quantum information carriers such as single photons. Any attempt at eavesdropping requires the spy, Eve, to measure the state of the photon and thus to perturb this state. Alice and Bob can then become aware of Eve's presence through an unusually high number of transmission errors.
The information exchanged by quantum key distribution is not directly usable but must first be processed. Transmission errors, whether they are caused by an eavesdropper or simply by noise in the transmission channel, must be corrected with a technique called reconciliation. Then, the partial knowledge of an eavesdropper, who would perturb only a fraction of the carriers, must be wiped out from the final key thanks to a technique called privacy amplification.
The context of this thesis is quantum key distribution with continuous states of light as carriers. An important part of this work deals with the processing of the continuous information exchanged by a particular protocol, where the carriers are coherent states of light. The continuous nature of the information in this case requires specific adaptations of the reconciliation techniques, which have mostly been developed to process binary information. We propose a technique called sliced error correction, which makes it possible to process continuous information efficiently. The set of developed techniques was used in collaboration with the Institut d'Optique, Orsay, France, to set up the first experiment of quantum key distribution with continuously-modulated coherent states of light.
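A highly simplified sketch of the slicing idea follows; the number of slices, the quantization interval, and the helper name are arbitrary, and the actual protocol then reconciles each slice with a binary error-correcting code.

```python
# Highly simplified sketch of "slicing": a continuous value is mapped to a
# few bits (slices), and each slice can then be reconciled with an ordinary
# binary error-correcting code. All parameters here are arbitrary.

def slices(x, n_slices=3, lo=-3.0, hi=3.0):
    """Return the n_slices bits of the quantization index of x over [lo, hi)."""
    levels = 1 << n_slices
    index = min(levels - 1, max(0, int((x - lo) / (hi - lo) * levels)))
    return [(index >> bit) & 1 for bit in range(n_slices)]

# Alice and Bob hold correlated continuous values; small noise usually only
# flips the least significant slices, which are the cheapest to reconcile.
print(slices(0.42), slices(0.47))
```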
Other important aspects are also treated in this thesis, such as placing quantum key distribution in the context of a cryptosystem, the specification of a complete protocol, the creation of new techniques for faster privacy amplification or the theoretical and practical study of alternate reconciliation algorithms.
Finally, we study the security of the coherent-state protocol by analyzing its equivalence with an entanglement purification protocol. Without going into the details, this formal equivalence makes it possible to validate the robustness of the protocol against any kind of eavesdropping, even the most intricate one allowed by the laws of quantum mechanics. In particular, we generalize the sliced error correction algorithm so as to transform it into a purification protocol, and we thus establish a quantum key distribution protocol secure against any eavesdropping strategy.
Doctorat en sciences appliquées
Ejjaaouani, Ksander. "Conception du modèle de programmation INKS pour la séparation des préoccupations algorithmiques et d’optimisation dans les codes de simulation numérique : application à la résolution du système Vlasov/Poisson 6D". Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAD037.
Texto completo da fonteThe InKS programming model aims to improve the readability, portability, and maintainability of simulation codes as well as boosting developer productivity. To fulfill these objectives, InKS proposes two languages, each dedicated to a specific concern. First, InKS PIA provides concepts to express simulation algorithms with no concerns for optimization. Once this foundation is set, InKS PSO enables optimization specialists to reuse the algorithm in order to specify the optimization part. The model makes it possible to write numerous versions of the optimizations, typically one per architecture, from a single algorithm. This strategy limits the rewriting of code for each new optimization specification, boosting developer productivity. We have evaluated the InKS programming model by using it to implement the 6D Vlasov-Poisson solver and compared our version with a Fortran one. This evaluation highlighted that, in addition to the separation of concerns, the InKS approach is no more complex than traditional ones while offering the same performance. Moreover, using the algorithm, it is able to generate valid code for non-critical parts of the code, leaving optimization specialists more time to focus on optimizing the computation-intensive parts
Gastellier-Prevost, Sophie. "Vers une détection des attaques de phishing et pharming côté client". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2011. http://www.theses.fr/2011TELE0027.
Texto completo da fonteThe development of online transactions and "always-connected" broadband Internet access is a great improvement for Internet users, who can now benefit from easy access to many services, regardless of the time or their location. The main drawback of this new marketplace is that it attracts attackers looking for easy and rapid profits. One major threat is the phishing attack. By using website forgery to spoof the identity of a company that offers financial services, phishing attacks trick Internet users into revealing confidential information (e.g. login, password, credit card number). Because most end-users check the legitimacy of a login website by looking at the visual aspect of the webpage displayed by the web browser - with no consideration for the visited URL or the presence and positioning of security components -, attackers capitalize on this weakness and design near-perfect copies of legitimate websites, displayed through a fraudulent URL. To attract as many victims as possible, phishing attacks are most of the time carried out through spam campaigns. One popular method for detecting phishing attacks is to integrate anti-phishing protection into the web browser of the user (i.e. an anti-phishing toolbar), which makes use of two kinds of classification methods: blacklists and heuristic tests. The first part of this thesis consists of a study of the effectiveness and the value of heuristic tests in differentiating legitimate from fraudulent websites. We conclude by identifying the decisive heuristics as well as discussing their life span. In more sophisticated versions of phishing attacks - i.e. pharming attacks -, the threat is imperceptible to the user: the visited URL is the legitimate one and the visual aspect of the fake website is very similar to the original one. As a result, pharming attacks are particularly effective and difficult to detect. They are carried out by exploiting DNS vulnerabilities at the client side, in the ISP (Internet Service Provider) network, or at the server side. While many efforts aim to address this problem in the ISP network and at the server side, the client side remains excessively exposed. In the second part of this thesis, we introduce two approaches - intended to be integrated into the client's web browser - to detect pharming attacks at the client side. These approaches combine an IP address check and a webpage content analysis, performed using the information provided by multiple DNS servers. Their main difference lies in the method of retrieving the webpage used for the comparison. By performing two sets of experiments, we validate our concept
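A minimal sketch of the IP-address cross-check step only (the webpage content analysis is omitted; it relies on the third-party dnspython package, and the resolver addresses and decision rule are illustrative):

```python
# Minimal sketch of cross-checking DNS answers from two independent
# resolvers. Requires the third-party dnspython package; resolver
# addresses and the simple decision rule are illustrative only.
import dns.resolver

def resolve_with(nameserver, domain):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    return {rr.address for rr in resolver.resolve(domain, "A")}

def looks_consistent(domain, default_ns="8.8.8.8", reference_ns="9.9.9.9"):
    """Flag a possible pharming attack when two resolvers fully disagree."""
    local_ips = resolve_with(default_ns, domain)
    reference_ips = resolve_with(reference_ns, domain)
    return bool(local_ips & reference_ips)

# print(looks_consistent("example.org"))
```

Such a naive IP comparison alone would raise false alarms for domains served by content delivery networks, whose legitimate addresses vary across resolvers; this is one reason why the proposed approaches also analyze the webpage content.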
Lévêque, Thomas. "Définition et contrôle des politiques d’évolution dans les projets logiciels". Grenoble, 2010. http://www.theses.fr/2010GRENM030.
Texto completo da fonteModel Driven Engineering (MDE) aims at mastering software product complexity by undertaking "all" software engineering activities on models. Except in some specific domains such as critical systems, MDE is not yet widely adopted by large software manufacturers. We claim that one reason is that, in practice, in MDE software projects, an application is a mixture of models and code which are both developed manually. Development and maintenance activities modify models and software at the same time; consistent versioning of models and code is needed. Software engineers work in parallel: therefore, reconciling concurrent modifications and controlling cooperative engineering are needed. Unfortunately, the concepts, mechanisms and tools to solve these issues do not really exist today: we cannot conveniently manage software evolution for large MDE software projects. This thesis proposes the concepts and mechanisms needed to manage the coevolution of models and code in a specific domain, using a dedicated engineering environment: a CADSE (Computer Aided Domain Specific Engineering environment). We propose to formally define evolution policies in the CADSE at the meta-model level. A policy enforces version consistency and controls evolution ripple effects in each workspace, through a bi-directional synchronization between model elements and files. A policy also defines and enforces the cooperative work strategy between concurrent workspaces
Corteggiani, Nassim. "Towards system-wide security analysis of embedded systems". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS285.
Texto completo da fonteThis thesis is dedicated to the improvement of dynamic analysis techniques allowing the verification of software designed for embedded systems, commonly called firmware. It is clear that the increasing pervasiveness and connectivity of embedded devices significantly increase their exposure to attacks. The consequences of a security issue can be dramatic, not least economically, but also technically, especially because of the difficulty of patching some devices: for instance, offline devices, or code stored in a mask ROM, a read-only memory programmed during chip fabrication. For all these reasons, it is important to thoroughly test firmware programs before the manufacturing process. This thesis presents analysis methods for system-wide testing of security and hardware components. In particular, we propose three improvements for partial emulation. First, Inception, a dynamic analysis tool to test the security of firmware programs even when mixing different levels of semantics (e.g., C/C++ mixed with assembly). Second, Steroids, a high-performance USB 3.0 probe that aims at minimizing the latency between the analyzer and the real device. Finally, HardSnap, a hardware snapshotting method that offers higher visibility and control over the hardware peripherals. It enables testing different execution paths concurrently without corrupting the state of the hardware peripherals
Pham, Van Cam. "Model-Based Software Engineering : Methodologies for Model-Code Synchronization in Reactive System Development". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS611/document.
Texto completo da fonteModel-Based Software Engineering (MBSE) has been proposed as a promising software development methodology to overcome the limitations of the traditional programming-based methodology in dealing with the complexity of embedded systems. MBSE promotes the use of modeling languages for describing systems in an abstract way and provides means for automatically generating different development artifacts, e.g. code and documentation, from models. The development of a complex system often involves multiple stakeholders who use different tools to modify the development artifacts, in this thesis model and code in particular. Artifact modifications must be kept consistent: a synchronization process needs to propagate modifications made in one artifact to the other artifacts. In this study, the problem of synchronizing Unified Modeling Language (UML)-based architecture models, specified by UML composite structure (UML-CS) and UML state machine (UML-SM) elements, and object-oriented code is presented. UML-CSs are used for describing the component-based software architecture and UML-SMs for the discrete event-driven behaviors of reactive systems. The first challenge is to enable collaboration between software architects and programmers producing model and code by using different tools. This raises the problem of synchronizing concurrent artifact modifications. In fact, there is a perception gap between diagram-based languages (modeling languages) and text-based languages (programming languages). On the one hand, programmers often prefer to use the more familiar combination of a programming language and an Integrated Development Environment. On the other hand, software architects, working at higher levels of abstraction, tend to favor the use of models, and therefore prefer diagram-based languages for describing the architecture of the system. The second challenge is that there is a significant abstraction gap between the model elements and the code elements: UML-CS and UML-SM elements are at a higher level of abstraction than code elements. The gap makes current synchronization approaches hard to apply since there is no easy way to reflect modifications in the code back to the model. This thesis proposes an automated synchronization approach that is composed of two main correlated contributions. To address the first challenge, a generic model-code synchronization methodological pattern is proposed. It consists of definitions of the necessary functionalities and multiple processes that synchronize model and code based on several defined scenarios where the developers use different tools to modify model and code. This contribution is independent of UML-CSs and UML-SMs. The second contribution deals with the second challenge and is based on the results from the first contribution. In the second contribution, a bidirectional mapping is presented for reducing the abstraction gap between model and code. The mapping is a set of correspondences between model elements and code elements. It is used as the main input of the generic model-code synchronization methodological pattern. More importantly, the usage of the mapping provides the functionalities defined in the first contribution and eases the synchronization of UML-CS and UML-SM elements and code. The approach is evaluated by means of multiple simulations and a case study
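As a toy illustration of the bidirectional-mapping idea only (the element names below are invented, and the real approach maps UML-CS/UML-SM elements to object-oriented code rather than strings):

```python
# Toy sketch: a table of correspondences between model elements and code
# elements, used to propagate a rename in either direction. Names are
# invented for this illustration.

model_to_code = {"Component:Controller": "class Controller",
                 "StateMachine:Blinking": "class BlinkingBehavior"}
code_to_model = {code: model for model, code in model_to_code.items()}

def rename_in_code(old_code_name, new_code_name):
    """Propagate a change made on the code side back to the model side."""
    model_element = code_to_model.pop(old_code_name)
    code_to_model[new_code_name] = model_element
    model_to_code[model_element] = new_code_name
    return model_element, new_code_name

print(rename_in_code("class Controller", "class MainController"))
```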
David, Robin. "Formal Approaches for Automatic Deobfuscation and Reverse-engineering of Protected Codes". Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0013/document.
Texto completo da fonteMalware analysis is a growing research field due to the criticality and variety of the assets targeted as well as the increasing costs involved. Such software frequently uses evasion tricks aimed at hindering detection and analysis techniques. Among these, obfuscation intends to hide the program's behavior. This thesis studies the potential of Dynamic Symbolic Execution (DSE) for reverse-engineering. First, we propose two variants of DSE algorithms adapted and designed to fit protected codes. The first is a flexible definition of the DSE path predicate computation based on concretization and symbolization. The second is based on the definition of a backward-bounded symbolic execution algorithm. Then, we show how to combine these techniques with static analysis in order to get the best of both. Finally, these algorithms have been implemented in different tools - Binsec/se, Pinsec and Idasec - interacting together, and tested on several malicious codes and commercial packers. In particular, they have been successfully used to circumvent and remove the obfuscation targeted in real-world malware such as X-Tunnel from the well-known APT28/Sednit group
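At the heart of DSE is the path predicate: a formula over symbolic inputs whose satisfying assignments drive execution down a given path. The following is only a minimal illustration using the z3 SMT solver's Python bindings (it is not Binsec/Pinsec/Idasec code, and the branch condition is an arbitrary example):

```python
# Minimal path-predicate illustration with z3. The branch condition is an
# arbitrary example of a lightly obfuscated check, not taken from the thesis.
from z3 import BitVec, Solver, sat

x = BitVec("x", 32)                               # symbolic input
path_predicate = ((x * 13 + 7) ^ 0x5A) == 0x2F0   # condition guarding a branch

s = Solver()
s.add(path_predicate)
if s.check() == sat:
    print("input driving execution down this branch:", s.model()[x])
else:
    print("branch is infeasible (e.g. an opaque predicate)")
```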
Tërnava, Xhevahire. "Gestion de la variabilité au niveau du code : modélisation, traçabilité et vérification de cohérence". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4114/document.
Texto completo da fonteWhen large software product lines are engineered, a combined set of traditional techniques, such as inheritance or design patterns, is likely to be used for implementing variability. In these techniques, the concept of feature, as a reusable unit, does not have a first-class representation at the implementation level. Further, an inappropriate choice of techniques becomes a source of variability inconsistencies between the domain and the implemented variabilities. In this thesis, we study the diversity of the majority of variability implementation techniques and provide a catalog that covers an enriched set of them. Then, we propose a framework to explicitly capture and model, in a fragmented way, the variability implemented by several combined techniques into technical variability models. These models use variation points and variants, with their logical relations and binding times, to abstract the implementation techniques. We show how to extend the framework to trace features with their respective implementations. In addition, we use this framework and provide a tooled approach to check the consistency of the implemented variability. Our method uses slicing to partially check the corresponding propositional formulas at the domain and implementation levels in the case of 1–to–m mappings. It offers early and automatic detection of inconsistencies. As validation, we report on the implementation of the framework in Scala as an internal domain-specific language, and of the consistency checking method. These implementations have been applied to a real feature-rich system and to three product line case studies, showing the feasibility of the proposed contributions
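The consistency check can be pictured as verifying that the propositional formula of the implemented variability does not admit configurations forbidden by the domain formula. The brute-force sketch below conveys the idea only; the thesis uses slicing to perform this check partially and efficiently, and the feature names are invented.

```python
# Brute-force sketch of checking that the implemented variability does not
# contradict the domain variability model. Feature names and formulas are
# invented for this illustration.
from itertools import product

FEATURES = ["gui", "cli", "logging"]

def domain(cfg):          # domain model: gui and cli are mutually exclusive
    return not (cfg["gui"] and cfg["cli"])

def implementation(cfg):  # implemented variation points: logging requires gui
    return (not cfg["logging"]) or cfg["gui"]

inconsistent = [cfg for values in product([False, True], repeat=len(FEATURES))
                for cfg in [dict(zip(FEATURES, values))]
                if implementation(cfg) and not domain(cfg)]
print("configurations allowed by the code but forbidden by the domain:",
      inconsistent)
```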
Robillard, Benoît. "Verification formelle et optimisation de l’allocation de registres". Thesis, Paris, CNAM, 2010. http://www.theses.fr/2010CNAM0730/document.
Texto completo da fonteThe need for trustworthy programs has led to an increasing use of formal verification techniques over the last decade, and especially of program proof. However, the code running on the computer is not the source code, i.e. the one written by the developer, since it has to be translated by the compiler. As a result, the formal verification of compilers is required to complete the source code verification. One of the hardest phases of compilation is register allocation. Register allocation is the phase within which the compiler decides where the variables of the program are stored in memory during its execution. There are two kinds of memory locations: a limited number of fast-access zones, called registers, and a very large but slow-access stack. The aim of register allocation is then to make as much use of registers as possible, leading to faster executable code. The most widely used model for register allocation is interference graph coloring. In this thesis, our objective is twofold: first, formally verifying some well-known interference graph coloring algorithms for register allocation and, second, designing new graph-coloring register allocation algorithms. More precisely, we provide a fully formally verified implementation of Iterated Register Coalescing, a very classical graph-coloring register allocation heuristic, that has been integrated into the CompCert compiler. We also studied two intermediate representations of programs used in compilers, and in particular the SSA form, to design new algorithms using global properties of the graph rather than the local criteria currently used in the literature
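The underlying problem can be pictured with a naive greedy coloring of an interference graph. This is only a sketch of the problem, not the Coq-verified Iterated Register Coalescing of the thesis; the graph and the heuristic are illustrative.

```python
# Naive greedy coloring of an interference graph. Variables that cannot
# receive one of the k registers are spilled to the stack. This is a sketch
# of the problem only, not a verified or production allocator.

def allocate(interference, k):
    """interference: dict var -> set of vars live at the same time."""
    colors, spills = {}, []
    # Color the most constrained variables first (simple heuristic).
    for var in sorted(interference, key=lambda v: len(interference[v]),
                      reverse=True):
        taken = {colors[n] for n in interference[var] if n in colors}
        free = [r for r in range(k) if r not in taken]
        if free:
            colors[var] = free[0]
        else:
            spills.append(var)
    return colors, spills

graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(allocate(graph, k=2))   # a, b, c form a triangle, so one must be spilled
```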