Theses on the topic "Logiciel de sécurité"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 50 theses for your research on the topic "Logiciel de sécurité".
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organise your bibliography correctly.
Duc, Guillaume. "Support matériel, logiciel et cryptographique pour une exécution sécurisée de processus". Télécom Bretagne, 2007. http://www.theses.fr/2007TELB0041.
The majority of the solutions to the issue of computer security (algorithms, protocols, secure operating systems, applications) are running on insecure hardware architectures that may be vulnerable to physical (bus spying, modification of the memory content, etc.) or logical (malicious operating system) attacks. Several secure architectures, which are able to protect the confidentiality and the correct execution of programs against such attacks, have been proposed for several years. After the presentation of some cryptographic bases and a review of the main secure architectures proposed in the literature, we will present the secure architecture CryptoPage. This architecture guarantees the confidentiality of the code and the data of applications and their correct execution against hardware or software attacks. In addition, it also includes a mechanism to reduce the information leakage on the address bus, while keeping reasonable performance. We will also study how to delegate some security operations of the architecture to an untrusted operating system in order to get more flexibility but without compromising the security of the architecture. Finally, some other important mechanisms are studied: encrypted process identification, attestations of the results, management of software signals, management of threads, and inter-process communication.
Ayrault, Philippe. "Développement de logiciel critique en FoCalize : méthodologie et outils pour l'évaluation de conformité". Paris 6, 2011. http://www.theses.fr/2011PA066120.
Porquet, Joël. "Architecture de sécurité dynamique pour systèmes multiprocesseurs intégrés sur puce". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00574088.
Humbert, Sophie. "Déclinaison d'exigences de sécurité, du niveau système vers le niveau logiciel, assistée par des modèles formels". Bordeaux 1, 2008. http://www.theses.fr/2008BOR13580.
Kouadri, Mostéfaoui Ghita. "Vers un cadre conceptuel et logiciel intégrant la sécurité basée sur le contexte dans les environnements pervasifs". Paris 6, 2004. http://www.theses.fr/2004PA066392.
Texto completoBulusu, Sravani Teja. "Méthodologie d'ingénierie des exigences de sécurité réseau". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30084.
Building secure networks is crucial as well as challenging for any organization. Network security mainly concerns the security architectural needs that describe network segmentation (i.e., security zoning), the security of the network devices connecting the communicating end-user systems, and the security of the information being transferred across the communication links. Most often, a late consideration of security aspects (i.e., post-deployment of the network design) inevitably results in an increase in costs as well as in the complexity of taking into account the necessary changes that have to be made to the existing infrastructures. In this regard, network security requirements hold paramount importance since they drive the decisions related to the implementation of security controls with respect to business needs. Indeed, bad network security requirements can lead to ineffective and costly security, or worse, to security holes in the network security design. Nevertheless, current security requirement engineering methodologies provide no support to derive network security requirements. This thesis work is part of the research project DGA IREHDO2 (Intégration REseau Haut Débit embarqué Optique 2ème phase) that concerns aircraft future-generation networks. Our work is done mainly in collaboration with AIRBUS and is related to the security requirements engineering process for aircraft networks. Our objective in this project is to propose an SRE methodology for capturing and analysing network security requirements that facilitates their refinement into network security and monitoring configurations (TOP/DOWN approach). The complexity addressed comes both from the differences in point of view regarding the understanding of the issue of security by different stakeholders, and from the nature of the systems impacted and the variability of the levels of abstraction in the network development cycle. In this work, we defined an SRE methodology based on the abstraction levels proposed by the SABSA (Sherwood Applied Business Security Architecture) method in order to structure the refinement of business needs into network security requirements. Indeed, SABSA recommends expressing the needs considering the Business view (decision makers), the Architect's view (objectives, risks, processes, applications and interactions), the Designer's view (security services), the Builder's view (security mechanisms) and the Tradesman's view (products, tools, technologies). We considered the first three views. We express the business and architect's views using the STS (Social-Technical Systems) formalism. We also propose to represent attacks as multi-agent systems to facilitate the analysis of security risks at these first two views. For expressing the network security requirements captured at the Designer's view, we propose a methodology that automates parts of the process of security zoning and network security requirements elicitation, using a definite set of formalized rules derived from security design principles and formal integrity models. We developed a tool that implements these rules in ASP (Answer Set Programming), which facilitates calculating cost-optimal security zone models. In the end, to ensure traceability between the three views, we defined a new modelling notation based on the concepts proposed in KAOS (Keep All Objectives Satisfied) and STS. We illustrate our methodology using a scenario specific to the IREHDO2 project. Finally, we evaluate our methodology using: 1) an e-commerce enterprise case study; and 2) a new scenario specific to the IREHDO2 project.
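As a rough, invented illustration of the "cost-optimal security zoning" step mentioned above (the thesis encodes its rules in ASP, not Python), the sketch below brute-forces a zone assignment for a toy set of assets; the asset names, sensitivity levels and cost figures are assumptions made up for the example.

```python
from itertools import product

# Toy version of cost-optimal security zoning: assign assets to zones so that a
# simple separation rule holds and a toy cost (zones used + inter-zone flows)
# is minimal. All names and numbers below are invented for the illustration.
assets = {"web_frontend": 1, "app_server": 2, "customer_db": 3}  # asset -> sensitivity
zones = [1, 2, 3]
zone_cost = 10            # fixed cost of instantiating a zone
crossing_cost = 3         # cost of each inter-zone communication link
flows = [("web_frontend", "app_server"), ("app_server", "customer_db")]

def valid(assign):
    # Simplified rule: assets with different sensitivity never share a zone.
    for a in assets:
        for b in assets:
            if a != b and assign[a] == assign[b] and assets[a] != assets[b]:
                return False
    return True

def cost(assign):
    used = len(set(assign.values()))
    crossings = sum(1 for a, b in flows if assign[a] != assign[b])
    return used * zone_cost + crossings * crossing_cost

candidates = [dict(zip(assets, combo)) for combo in product(zones, repeat=len(assets))]
best = min((c for c in candidates if valid(c)), key=cost)
print(best, cost(best))
```

An ASP solver explores such assignments declaratively and scales far better; the brute force above only makes the optimisation objective tangible.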
Mahmalgi, Ammar. "Étude et réalisation d'un testeur de logiciel pour microcontrôleur : contribution à la mise en sécurité d'un système microprogramme". Lille 1, 1987. http://www.theses.fr/1987LIL10088.
Chollet, Stéphanie. "Orchestration de services hétérogènes et sécurisés". Grenoble 1, 2009. http://www.theses.fr/2009GRE10283.
Service-Oriented Computing (SOC) has appeared recently as a new software engineering paradigm. The very purpose of this reuse-based approach is to build applications through the late composition of independent software elements, called services, which are made available at run-time by internal or external providers. SOC brings properties of major interest. First, it supports rapid application development. Using existing, already tested services is likely to reduce the time needed to build up an application and to improve the overall quality of this application. SOC also improves software flexibility through late binding. A service to be used by an application is chosen at the last moment, based on its actual availability and on its properties at that moment. The service orientation also has to face thorny problems, as in any reuse-based approach. In this work, we focus on two major issues: the integration of heterogeneous service-oriented technologies and the management of security aspects when invoking a service. Security is actually a major concern to SOC practitioners. SOC technologies have allowed companies to expose applications, internally and externally, and, for that reason, are heavily used. However, in some distributed environments, software services and process engines can be alarmingly vulnerable. Service-based processes can expose organizations to a considerable amount of security risk and dependability degradation. We propose to use a model-driven approach to solve this problem. During system design, paradigms such as abstraction, separation of concerns and language definition are used to define a model of the service composition with security properties. This model is transformed into an execution model. We present a generative environment applying these principles to service composition. This environment has been built as part of the SODA European project and validated on several industrial use cases.
Mouelhi, Tejeddine. "Modélisation et test de mécanismes de sécurité dans des applications internet". Phd thesis, Institut National des Télécommunications, 2010. http://tel.archives-ouvertes.fr/tel-00544431.
Parrend, Pierre. "Software security models for service-oriented programming (SOP) platforms". Lyon, INSA, 2008. http://theses.insa-lyon.fr/publication/2008ISAL0117/these.pdf.
Service-oriented programming (SOP) platforms are generic execution environments that host applications designed according to a specific architectural model. However, few tools exist to guarantee that the installed components are harmless. We therefore propose to adapt Software Security Assurance methods to SOP platforms, to carry out the corresponding security analysis and to provide suitable protection mechanisms. The proposed protection mechanisms are Hardened OSGi, a set of recommendations for the implementation of OSGi platforms; CBAC (Component-Based Access Control), a flexible access-control mechanism enforced at installation time; and WCA (Weak Component Analysis), which identifies exploitable vulnerabilities in SOP components according to the exposure of their code. CBAC and WCA use static bytecode analysis to automate the validation of components at installation time.
Goutal, Pascale. "Jeux non-coopératifs finis appliqués à la sécurité nucléaire". Paris 6, 1997. http://www.theses.fr/1997PA066362.
Parrend, Pierre. "Modèles de Sécurité logicielle pour les plates-formes à composants de service (SOP)". Phd thesis, INSA de Lyon, 2008. http://tel.archives-ouvertes.fr/tel-00362486.
Texto completoClaeys, Timothy. "Sécurité pour l'internet des objets : une approche des bas en haut pour un internet des objets sécurisé et normalisé". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM062.
The rapid expansion of the IoT has unleashed a tidal wave of cheap Internet-connected hardware. For many of these products, security was merely an afterthought. Due to their advanced sensing and actuating functionalities, poorly-secured IoT devices endanger the privacy and safety of their users. While the IoT contains hardware with varying capabilities, in this work, we primarily focus on the constrained IoT. The restrictions on energy, computational power, and memory limit not only the processing capabilities of the devices but also their capacity to protect their data and users from attacks. To secure the IoT, we need several building blocks. We structure them in a bottom-up fashion where each block provides security services to the next one. The first cornerstone of the secure IoT relies on hardware-enforced mechanisms. Various security features, such as secure boot, remote attestation, and over-the-air updates, rely heavily on its support. Since hardware security is often expensive and cannot be applied to legacy systems, we alternatively discuss software-only attestation. It provides a trust anchor to remote systems that lack hardware support. In the setting of remote attestation, device identification is paramount. Hence, we dedicated a part of this work to the study of physical device identifiers and their reliability. The IoT hardware also frequently provides support for the second building block: cryptography. It is used abundantly by all the other security mechanisms, and recently much research has focussed on lightweight cryptographic algorithms. We studied the performance of the recent lightweight cryptographic algorithms on constrained hardware. A third core element for the security of the IoT is the capacity of its networking stack to protect the communications. We demonstrate that several optimization techniques expose vulnerabilities. For example, we show how to set up a covert channel by exploiting the tolerance of the Bluetooth LE protocol towards the naturally occurring clock drift. It is also possible to mount a denial-of-service attack that leverages the expensive network join phase. As a defense, we designed an algorithm that almost completely alleviates the overhead of network joining. The last building block we consider is security architectures for the IoT. They guide the secure integration of the IoT with the traditional Internet. We studied the IETF proposal concerning the constrained authentication and authorization framework, and we propose two adaptations that aim to improve its security. Finally, the deployment of the IETF architecture heavily depends on the security of the underlying communication protocols. In the future, the IoT will mainly use the object security paradigm to secure data in flight. However, until these protocols are widely supported, many IoT products will rely on traditional security protocols, i.e., TLS and DTLS. For this reason, we conducted a performance study of the most critical part of the protocols: the handshake phase. We conclude that while the DTLS handshake uses fewer packets to establish the shared secret, TLS outperforms DTLS in lossy networks.
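The Bluetooth LE covert channel mentioned above rests on a simple idea: the protocol tolerates a bounded clock drift, and that slack can carry hidden bits. Below is a minimal Python sketch of the encoding idea only, with invented timing parameters and no relation to the thesis's actual implementation.

```python
# Illustrative sketch (not the thesis implementation): encode covert bits in the
# timing slack that BLE grants a peripheral to compensate natural clock drift.
# The connection interval and tolerated drift are assumptions for the example.

CONN_INTERVAL_US = 50_000      # nominal connection-event period, in microseconds
MAX_DRIFT_PPM = 500            # drift tolerance assumed for the example

def wake_offset(bit: int) -> int:
    """Deliberate offset (in microseconds) hidden inside the drift budget."""
    budget = CONN_INTERVAL_US * MAX_DRIFT_PPM // 1_000_000   # 25 us here
    return budget if bit else -budget

def encode(bits):
    # One connection event per covert bit: wake slightly late for 1, early for 0.
    return [CONN_INTERVAL_US + wake_offset(b) for b in bits]

def decode(intervals):
    return [1 if t > CONN_INTERVAL_US else 0 for t in intervals]

msg = [1, 0, 1, 1, 0]
assert decode(encode(msg)) == msg
```

Because the offsets stay within the drift the receiver must tolerate anyway, the timing looks like ordinary clock imprecision to the peer.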
Zargouni, Yadh. "Évaluation de l'efficacité des mesures de sécurité routière". Paris 6, 1986. http://www.theses.fr/1986PA066194.
Fioraldi, Andrea. "Fuzzing in the 2020s : novel approaches and solutions". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS546.
Security remains at risk due to elusive software vulnerabilities, even with extensive fuzzing efforts. Coverage-guided fuzzers, focusing solely on code coverage, often fall short in discovering specific vulnerabilities. The proliferation of diverse fuzzing tools has fragmented the field, making it challenging to combine different fuzzing techniques, assess contributions accurately, and compare tools effectively. To address this, standardized baselines are needed to ensure equitable evaluations. AFL, due to its popularity, is often extended to implement new prototypes, despite not being a naive baseline and despite its monolithic design. On the other hand, custom fuzzers written from scratch tend to reinvent solutions and often lack scalability on multicore systems. This thesis addresses these challenges with several contributions. A new feedback mechanism called InvsCov is introduced, which considers program variable relationships and code coverage; it refines program state approximation for diverse bug detection. Another feedback mechanism we introduce explores data dependency graphs to enhance fuzzing by rewarding new dataflow edge traversal, effectively finding vulnerabilities missed by standard coverage. We also present a thorough analysis of AFL's internal mechanisms to shed light on its design choices and their impact on fuzzing performance. Finally, to address fragmentation, LibAFL is introduced as a modular and reusable fuzzing framework. Researchers can extend the core fuzzer pipeline, evaluate compelling techniques, and combine orthogonal approaches. An attempt to rewrite AFL++ as a frontend to LibAFL won the SBFT'23 fuzzing competition in the bug-finding track. These contributions advance the field of fuzz testing, addressing the challenges of sensitivity in feedback mechanisms, bug diversity, tool fragmentation, and fuzzer evaluation. They provide a foundation for improving fuzzing techniques, enabling the detection of a broader range of bugs, and fostering collaboration and standardization within the community.
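To make the notion of a refined feedback signal concrete, the toy loop below keeps an input whenever it exercises a new (edge, value-bucket) pair rather than a new edge alone. This is an invented miniature in Python, not LibAFL or InvsCov code; the target program, edges and value buckets are assumptions for the sketch.

```python
import random

# Toy feedback-driven loop: an input is "interesting" if it reaches a new
# (edge, value-range) pair, a crude stand-in for coverage refined with
# program-state information. Everything below is invented for the sketch.

def target(data: bytes):
    """Toy program under test: yields (edge_id, observed_value) pairs."""
    x = sum(data) % 256
    yield ("entry->check", x)
    if x > 200:
        yield ("check->rare", x)          # rarely taken branch

def bucket(value: int) -> int:
    return value // 64                      # coarse value abstraction

seen = set()          # feedback map: (edge, bucket) pairs already observed
corpus = [b"\x00"]

for _ in range(2000):
    parent = random.choice(corpus)
    child = bytes((b + random.randint(0, 255)) % 256 for b in parent) + bytes([random.randint(0, 255)])
    novel = False
    for edge, val in target(child):
        key = (edge, bucket(val))
        if key not in seen:
            seen.add(key)
            novel = True
    if novel:
        corpus.append(child)                # keep inputs that add feedback

print(len(corpus), "interesting inputs,", len(seen), "feedback entries")
```

A real fuzzer tracks feedback in shared-memory maps and mutates far more aggressively; the loop above only illustrates why a richer feedback signal retains more inputs than plain edge coverage.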
Karray, Achraf. "Conception, mise en œuvre et validation d’un environnement logiciel pour le calcul sécurisé sur une grille de cartes à puce de type Java". Thesis, Bordeaux 1, 2008. http://www.theses.fr/2008BOR13724/document.
Galissant, Pierre. "Contributions to white-box cryptography : models and algebraic constructions". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG099.
Due to the democratization of technologies such as mobile payment or the soaring of blockchain technologies, there is a growing need for secure implementations of standardized algorithms in the white-box model. In spite of this, there are too few secure designs published in the literature. To avoid relying on hidden design implementations to provide any security in the white-box model, more implementation designs and techniques have to be explored. This thesis begins with a guide to white-box cryptography. Its goal is to revise, precise or correct white-box models, security notions and constructions that have emerged in the state of the art since the introduction of the concept. We notably clarify the Remote-Access White-Box model and the Hardware Module White-Box and contextualize them in the general cryptographic literature. We then explore white-box implementations of the AES by first synthesizing the known implementation techniques and their flaws, and then proposing a new solution based on polynomial representations, for which we propose a security analysis and a challenge implementation. The last part of this thesis focuses on the implementation of multivariate cryptographic primitives in the white-box model. After introducing succinctly multivariate cryptography, we motivate the study of this branch of public key cryptography in the white-box context. We propose the first implementation technique of the HFE family of signature algorithms, for which we propose an extensive security analysis and a challenge implementation. Finally, to propose other perspectives on multivariate white-box cryptography, we also propose an incompressible stream cipher adapted from QUAD.
Sun, Yanjun. "Consolidation de validation fonctionnelle de systèmes critiques à l'aide de model checking : application au contrôle commande de centrales nucléaires". Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0047.
The verification and validation of safety-critical real-time systems are subject to stringent standards and certifications. Recent progress in model-based system engineering should be applied to such systems since it allows early detection of defects and formal verification techniques. This thesis proposes a model-based testing (MBT) methodology dedicated to the functional validation of safety-critical real-time systems. The method is directed by the structural coverage of the Lustre model co-simulated with the physical process, and also by the functional requirements. It relies on a repetitive use of a model checker to generate coverage-based open-loop test sequences. We also propose a refinement technique of progressively adding environment constraints during test generation. The refinement is expected to support the passage from coverage-based open-loop test sequences to functional requirements-based closed-loop test cases. Our methodology also considers the state explosion problem of a model checker and proposes a heuristic called hybrid verification, combining model checking and simulation.
Muench, Marius. "Dynamic binary firmware analysis : challenges & solutions". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS265.
Embedded systems are a key component of modern life and their security is of utmost importance. Hence, the code running on those systems, called "firmware", has to be carefully evaluated and tested to minimize the risks accompanying the ever-growing deployment of embedded systems. One common way to evaluate the security of firmware, especially in the absence of source code, is dynamic analysis. Unfortunately, compared to analysis and testing on desktop systems, dynamic analysis for firmware is lagging behind. In this thesis, we identify the main challenges preventing dynamic analysis and testing techniques from reaching their full potential on firmware. Furthermore, we point out that rehosting is a promising approach to tackle these problems, and develop avatar2, a multi-target orchestration framework which is capable of running firmware in both fully and partially emulated settings. Using this framework, we adapt several dynamic analysis techniques to successfully operate on binary firmware. In detail, we use its scriptability to easily replicate a previous study, we demonstrate that it allows recording and replaying the execution of an embedded system, and we implement heuristics for better fault detection as run-time monitors. Additionally, the framework serves as a building block for an experimental evaluation of fuzz testing on embedded systems, and is used as part of a scalable concolic execution engine for firmware. Last but not least, we present Groundhogger, a novel approach for unpacking embedded devices' firmware which, unlike other unpacking tools, uses dynamic analysis to create unpackers, and we evaluate it against three real-world devices.
Calvet, Joan. "Analyse Dynamique de Logiciels Malveillants". Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00922384.
Texto completoJakob, Henner. "Vers la sécurisation des systèmes d'informatique ubiquitaire par le design : une approche langage". Thesis, Bordeaux 1, 2011. http://www.theses.fr/2011BOR14269/document.
A growing number of environments is being populated with a range of networked devices. Applications leverage these devices to support everyday activities in a variety of areas (e.g., home automation and patient monitoring). As these devices and applications get woven into our everyday activities, they become critical: their failure can put people and assets at risk. Failures can be caused by malicious attacks and misbehaving applications. Although the impact of such situations can be major, security concerns are often considered a secondary issue in the development process, and treated with ad hoc approaches. This thesis proposes to address security concerns throughout the development lifecycle of a pervasive computing system. Security is addressed at design time thanks to dedicated, high-level declarations. These declarations are processed to implement security mechanisms, and to generate programming support to ease the development of the security logic, while keeping it separate from the application logic. Our approach is studied in the context of access control and privacy concerns. Our work has been implemented and leverages an existing software-design language and a suite of tools that covers the software development lifecycle.
Mariano, Georges. "Evaluation de logiciels critiques développés par la méthode B : une approche quantitative". Valenciennes, 1997. https://ged.uphf.fr/nuxeo/site/esupversions/823185e9-e82a-44fc-b3e2-17a0b205165e.
Girard, Pierre. "Formalisation et mise en œuvre d'une analyse statique de code en vue de la vérification d'applications sécurisées". Toulouse, ENSAE, 1996. http://www.theses.fr/1996ESAE0010.
Texto completoFreyssinet, Eric. "Lutte contre les botnets : analyse et stratégie". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066390/document.
Botnets, or networks of computers infected with malware and connected to a command and control system, are one of the main tools for criminal activities on the Internet today. They allow the development of a new type of crime: crime as a service (CaaS). They are a challenge for law enforcement: first, by the importance of their impact on the security of networks and the commission of crimes on the Internet; next, with regard to the extremely international dimension of their dissemination and therefore the enhanced difficulty in conducting investigations; finally, through the large number of actors that may be involved (software developers, botnet masters, financial intermediaries, etc.). This thesis proposes a thorough study of botnets (components, operation, actors), the specification of a data collection method on botnet-related activities and, finally, the technical and organizational arrangements in the fight against botnets; it concludes with proposals on the strategy for this fight. The work carried out has confirmed the relevance, for the effective study of botnets, of a model encompassing all their components, including infrastructure and actors. Besides an effort in providing definitions, the thesis describes a complete model of the life cycle of a botnet and offers methods for the categorization of these objects. This work shows the need for a shared strategy which should include the detection elements, coordination between actors and the possibility, or even the obligation, for operators to implement mitigation measures.
Cozzi, Emanuele. "Binary Analysis for Linux and IoT Malware". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS197.
For the past two decades, the security community has been fighting malicious programs for Windows-based operating systems. However, the increasing number of interconnected embedded devices and the IoT revolution are rapidly changing the malware landscape. Malicious actors did not stand by and watch, but quickly reacted to create "Linux malware", showing an increasing interest in Linux-based operating systems and platforms running architectures different from the typical Intel CPU. As a result, researchers must react accordingly. Through this thesis, we navigate the world of Linux-based malicious software and highlight the problems we need to overcome for their correct analysis. After a systematic exploration of the challenges involved in the analysis of Linux malware, we present the design and implementation of the first malware analysis pipeline specifically tailored to study this emerging phenomenon. We use our platform to analyze over 100K samples and collect detailed statistics and insights that can help to direct future works. We then apply binary code similarity techniques to systematically reconstruct the lineage of IoT malware families, and track their relationships, evolution, and variants. We show how the free availability of source code resulted in a very large number of variants, often impacting the classification of antivirus systems. Last but not least, we address a major problem we encountered in the analysis of statically linked executables. In particular, we present a new approach to identify the boundary between user code and third-party libraries, such that the burden of libraries can be safely removed from binary analysis tasks.
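Binary code similarity, used above to reconstruct malware lineage, can be illustrated with a deliberately simple metric: the Jaccard similarity of byte n-gram sets. The sketch below uses invented byte sequences and is not the technique implemented in the thesis.

```python
# Toy binary-similarity measure: Jaccard similarity over byte n-grams.
# The two "code" buffers are invented; real lineage reconstruction works on
# disassembled functions and far more robust features.

def ngrams(code: bytes, n: int = 4):
    return {code[i:i + n] for i in range(len(code) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb) if (ga | gb) else 1.0

original  = bytes.fromhex("554889e54883ec10897dfc8b45fc0faf")
variant   = bytes.fromhex("554889e54883ec20897dfc8b45fc0faf")  # one byte patched
unrelated = bytes(range(16))

print(round(similarity(original, variant), 2))    # noticeably higher: shared n-grams
print(round(similarity(original, unrelated), 2))  # close to 0: unrelated code
```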
Millon, Etienne. "Analyse de sécurité de logiciels système par typage statique". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2014. http://tel.archives-ouvertes.fr/tel-01067475.
Texto completoMillon, Etienne. "Analyse de sécurité de logiciels système par typage statique". Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066120.
Operating system kernels need to manipulate data that comes from user programs through system calls. If it is done in an incautious manner, a security vulnerability known as the Confused Deputy Problem can lead to information disclosure or privilege escalation. The goal of this thesis is to use static typing to detect the dangerous uses of pointers that are controlled by userspace. Most operating systems are written in the C language. We start by isolating Safespeak, a safe subset of it. Its operational semantics as well as a type system are described, and the classic properties of type safety are established. Memory states are manipulated using bidirectional lenses, which can encode partial updates to states and variables. A first analysis is described that identifies integers used as bitmasks, which are a common source of bugs in C programs. Then, we add to Safespeak the notion of pointers coming from userspace. This breaks type safety, but it is possible to get it back by assigning a different type to the pointers that are controlled by userspace. This distinction forces their dereferencing to be done in a controlled fashion. This technique makes it possible to detect two bugs in the Linux kernel: the first one is in a video driver for an AMD video card, and the second one in the ptrace system call for the Blackfin architecture.
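The typing discipline described above can be pictured with a small dynamic analogy: a pointer that originates in userspace gets a distinct type whose only legal dereference path is an explicit, checked copy routine. The Python sketch below is an analogy only; the thesis itself builds a static type system over a C subset, and all addresses and helpers here are invented.

```python
# Analogy only: a pointer controlled by userspace is wrapped in a distinct type,
# so it cannot be read directly; the single allowed path is a checked copy
# routine (in the spirit of the kernel's copy_from_user discipline).

KERNEL_MEMORY = {0x1000: 42}            # pretend kernel address space
USER_MEMORY = {0x2000: 7}               # pretend user address space

class UserPtr:
    """A pointer value controlled by userspace: opaque until explicitly copied in."""
    def __init__(self, addr: int):
        self.addr = addr

def copy_from_user(ptr: UserPtr) -> int:
    # The controlled dereference path: range-check, then read user memory.
    if ptr.addr not in USER_MEMORY:
        raise ValueError("EFAULT: invalid user address")
    return USER_MEMORY[ptr.addr]

def syscall_handler(user_ptr: UserPtr) -> int:
    # Reading KERNEL_MEMORY[user_ptr.addr] directly would be the confused-deputy
    # bug; the distinct type makes that misuse easy to flag statically.
    return copy_from_user(user_ptr) + 1

print(syscall_handler(UserPtr(0x2000)))   # -> 8
```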
Olivier, Paul L. R. "Improving Hardware-in-the-loop Dynamic Security Testing For Linux-based Embedded Devices". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS049.
Dynamic analysis techniques have proven their effectiveness in security assessment. Nevertheless, it is necessary to be able to execute the code to be analyzed, and this is often a challenge for firmware, most of which is deeply integrated into the hardware architecture of embedded systems. Emulation allows executing a large part of the code but is quickly limited when it is necessary to interact with specialized components. For this, the partial emulation, or hardware-in-the-loop, approach offers several advantages: transferring access to hardware that is difficult to emulate properly, and executing the firmware in turn on both entities. To date, this approach has been considered primarily for monolithic firmware, but less so for devices running advanced operating systems. In this thesis, we explore the challenges of security testing for processes running in an emulated environment where part of their execution must be transmitted to their original physical device. Throughout this thesis, we first review the various techniques for intercepting system calls and their objectives. We highlight the fact that forwarding has not been explored in much depth, but that it is a promising approach for evaluating the security of embedded applications. We discuss the challenges of the different ways of running a process in two different Linux kernels. Through a framework, we implement these transfers for a Linux process, forwarding its system calls and memory accesses between its emulated environment and its original environment on the physical device. To overcome the challenges of using physical devices for these security tests, we also present a new test platform to reproduce hardware-in-the-loop security experiments.
Scholte, Theodoor. "Amélioration de la sécurité par la conception des logiciels web". Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0024/document.
The web has become a backbone of our industry and daily life. The growing popularity of web applications and services and the increasing number of critical transactions being performed have raised security concerns. For this reason, much effort has been spent over the past decade to make web applications more secure. Despite these efforts, recent data from the SANS institute estimates that up to 60% of Internet attacks target web applications, and critical vulnerabilities such as cross-site scripting and SQL injection are still very common. In this thesis, we conduct two empirical studies on a large number of web application vulnerabilities with the aim of gaining deeper insights into how input validation flaws have evolved in the past decade and how these common vulnerabilities can be prevented. Our results suggest that the complexity of the attacks has not changed significantly and that many web problems are still simple in nature. Our studies also show that most SQL injection and a significant number of cross-site scripting vulnerabilities can be prevented using straightforward validation mechanisms based on common data types. With these empirical results as a foundation, we present IPAAS, which helps developers that are unaware of security issues to write more secure web applications than they otherwise would. It includes a novel technique for preventing the exploitation of cross-site scripting and SQL injection vulnerabilities based on automated data type detection of input parameters. We show that this technique results in significant and tangible security improvements for real web applications.
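The validation idea behind IPAAS, as described above, can be sketched in a few lines: infer a simple data type per parameter from benign values, then reject values that no longer match. The type patterns and sample values below are assumptions for the illustration, not the IPAAS implementation.

```python
import re

# Toy type-based input validation in the spirit of IPAAS: learn a simple data
# type per parameter from benign traffic, then enforce it. The type patterns,
# parameter names and payloads are invented for the sketch.

TYPES = {
    "integer":   re.compile(r"^-?\d+$"),
    "token":     re.compile(r"^[A-Za-z0-9_-]+$"),
    "free_text": re.compile(r"^.*$", re.S),
}

def infer(values):
    for name, pattern in TYPES.items():          # ordered from most to least restrictive
        if all(pattern.match(v) for v in values):
            return name
    return "free_text"

def validate(param_type, value):
    return bool(TYPES[param_type].match(value))

profile = {"product_id": infer(["17", "203", "4"]),
           "sort": infer(["asc", "desc"])}

print(profile)                                           # {'product_id': 'integer', 'sort': 'token'}
print(validate(profile["product_id"], "17 OR 1=1"))      # False: SQL injection payload rejected
print(validate(profile["sort"], "<script>x</script>"))   # False: XSS payload rejected
```

Parameters inferred as free text still need output encoding or prepared statements; the point made in the abstract is that a large share of parameters fall into the simpler types.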
Scholte, Theodoor. "Amélioration de la sécurité par la conception des logiciels web". Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0024.
The web has become a backbone of our industry and daily life. The growing popularity of web applications and services and the increasing number of critical transactions being performed have raised security concerns. For this reason, much effort has been spent over the past decade to make web applications more secure. Despite these efforts, recent data from the SANS institute estimates that up to 60% of Internet attacks target web applications, and critical vulnerabilities such as cross-site scripting and SQL injection are still very common. In this thesis, we conduct two empirical studies on a large number of web application vulnerabilities with the aim of gaining deeper insights into how input validation flaws have evolved in the past decade and how these common vulnerabilities can be prevented. Our results suggest that the complexity of the attacks has not changed significantly and that many web problems are still simple in nature. Our studies also show that most SQL injection and a significant number of cross-site scripting vulnerabilities can be prevented using straightforward validation mechanisms based on common data types. With these empirical results as a foundation, we present IPAAS, which helps developers that are unaware of security issues to write more secure web applications than they otherwise would. It includes a novel technique for preventing the exploitation of cross-site scripting and SQL injection vulnerabilities based on automated data type detection of input parameters. We show that this technique results in significant and tangible security improvements for real web applications.
Mathieu-Mahias, Axel. "Sécurisation des implémentations d'algorithmes cryptographiques pour les systèmes embarqués". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG095.
Embedded systems are ubiquitous and have more and more applications. Most current industrial fields rely on embedded systems for accomplishing specific tasks, sometimes highly sensitive ones. Currently, the deployment of embedded systems is increasing even further with the birth of the "Internet of Things", which is expected to revolutionize our digital world. An embedded system is an electronic and computing system in charge of a specific part of a larger system. Numerous constraints, in particular regarding its size, must be taken into account during its conception. Consequently, an embedded system is usually low cost, consumes low power and has limited computational resources. For accomplishing its specific tasks, an embedded system collects, manipulates and exchanges data that are usually sensitive. Moreover, such a system can often be directly accessible physically. This particularity can then be exploited by an unauthorized entity in order to control, extract or alter sensitive data manipulated by such systems. In this context, it is mandatory to develop well-suited security mechanisms. In particular, it is crucial to prevent direct physical access to the device but also to protect sensitive data manipulated or stored by the device. Cryptography is the science of secrets and offers numerous ways to mitigate the risks that face electronic devices. However, in such a context, some physical characteristics of the electronics of embedded systems vary during the execution of the implementations of cryptographic algorithms guaranteeing the security of information. In particular, the power consumption of a device or its electromagnetic radiation depends on the manipulated data as well as on the choices made for implementing the cryptographic algorithms. These physical characteristics can also be measured if physical access to the device is possible. The exploitation of these measurements has led to devastating attacks called side-channel attacks. Mounting this kind of attack makes it possible to extract sensitive data stored or manipulated by an electronic device, usually without much effort. Special countermeasures have to be implemented to mitigate the security risks faced by the implementations of cryptographic algorithms, without deteriorating their performance in practice too much. Masking is a well-known solution, but its correct implementation requires a thorough analysis of the algorithmic solutions it provides, especially in the context just described, where devices have limited resources.
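Masking, the countermeasure named at the end of this abstract, can be illustrated with the textbook first-order Boolean scheme: a secret byte is split into two shares so that each share alone is statistically independent of the secret. The sketch below is a toy illustration, not a hardened implementation and not the thesis's analysis.

```python
import secrets

# First-order Boolean masking: a sensitive byte x is never handled directly,
# only the pair (x ^ r, r) for a fresh random mask r. Linear operations such as
# XOR can be computed share-wise without ever recombining the secret.

def mask(x: int):
    r = secrets.randbelow(256)
    return x ^ r, r                     # two shares

def masked_xor(shares_a, shares_b):
    (a0, a1), (b0, b1) = shares_a, shares_b
    return a0 ^ b0, a1 ^ b1             # XOR is linear, so it works per share

def unmask(shares):
    s0, s1 = shares
    return s0 ^ s1

key_byte, pt_byte = 0x2B, 0x3A
masked_result = masked_xor(mask(key_byte), mask(pt_byte))
assert unmask(masked_result) == key_byte ^ pt_byte
```

Non-linear operations (e.g. S-boxes) need dedicated masked gadgets, which is where most of the implementation cost and the subtle flaws discussed in such analyses come from.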
Freyssinet, Eric. "Lutte contre les botnets : analyse et stratégie". Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066390.
Botnets, or networks of computers infected with malware and connected to a command and control system, are one of the main tools for criminal activities on the Internet today. They allow the development of a new type of crime: crime as a service (CaaS). They are a challenge for law enforcement: first, by the importance of their impact on the security of networks and the commission of crimes on the Internet; next, with regard to the extremely international dimension of their dissemination and therefore the enhanced difficulty in conducting investigations; finally, through the large number of actors that may be involved (software developers, botnet masters, financial intermediaries, etc.). This thesis proposes a thorough study of botnets (components, operation, actors), the specification of a data collection method on botnet-related activities and, finally, the technical and organizational arrangements in the fight against botnets; it concludes with proposals on the strategy for this fight. The work carried out has confirmed the relevance, for the effective study of botnets, of a model encompassing all their components, including infrastructure and actors. Besides an effort in providing definitions, the thesis describes a complete model of the life cycle of a botnet and offers methods for the categorization of these objects. This work shows the need for a shared strategy which should include the detection elements, coordination between actors and the possibility, or even the obligation, for operators to implement mitigation measures.
Sadde, Gérald. "Sécurité logicielle des systèmes informatiques : aspects pénaux et civils". Montpellier 1, 2003. http://www.theses.fr/2003MON10019.
Texto completoReynaud, Daniel. "Analyse de codes auto-modifiants pour la sécurité logicielle". Thesis, Vandoeuvre-les-Nancy, INPL, 2010. http://www.theses.fr/2010INPL049N/document.
Self-modifying programs run in a very specific way: they are capable of rewriting their own code at runtime. Remarkably absent from theoretical computation models, they are present in every modern computer and operating system. Indeed, they are used by bootloaders, for just-in-time compilation or dynamic optimizations. They are also massively used by malware authors in order to bypass antivirus signatures and to delay analysis. Finally, they are unintentionally present in every program, since we can model code injection vulnerabilities (such as buffer overflows) as the ability for a program to accidentally execute data. In this thesis, we propose a formal framework in order to characterize advanced self-modifying behaviors and code armoring techniques. A prototype, TraceSurfer, allows us to detect these behaviors by using fine-grained execution traces and to visualize them as self-reference graphs. Finally, we assess the performance and efficiency of the tool by running it on a large corpus of malware samples.
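The write-then-execute criterion underlying such detection can be shown on a toy trace: flag any executed address that the same run wrote earlier. The trace format and sample below are invented; TraceSurfer itself works on fine-grained traces of real binaries and builds richer self-reference graphs.

```python
# Toy detector for self-modifying code over a fine-grained execution trace.
# An event is ("write", addr) or ("exec", addr); we flag any execution of an
# address that the same run wrote earlier (write-then-execute).

def find_self_modification(trace):
    written = {}                     # addr -> index of the instruction that wrote it
    hits = []
    for i, (kind, addr) in enumerate(trace):
        if kind == "write":
            written[addr] = i
        elif kind == "exec" and addr in written:
            hits.append((written[addr], i, addr))   # an edge of the self-reference graph
    return hits

trace = [
    ("exec", 0x400000),
    ("write", 0x401000),     # unpacker writes a new instruction...
    ("write", 0x401001),
    ("exec", 0x401000),      # ...then jumps into it
]
print(find_self_modification(trace))   # one write-then-execute edge detected
```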
Mao, Yuxiao. "Détection dynamique d'attaques logicielles et matérielles basée sur l'analyse de signaux microarchitecturaux". Thesis, Toulouse, INSA, 2022. http://www.theses.fr/2022ISAT0015.
In recent years, computer systems have evolved quickly. This evolution concerns different layers of the system, both software (operating systems and user programs) and hardware (microarchitecture design and chip technology). While this evolution allows enriching the functionalities and improving the performance, it has also increased the complexity of the systems. It is difficult, if not impossible, to fully understand a particular modern computer system, and greater complexity also means a larger attack surface for hackers. While most attacks target software vulnerabilities, over the past two decades attacks exploiting hardware vulnerabilities have emerged and demonstrated their serious impact. For example, in 2018 the Spectre and Meltdown attacks were disclosed; they exploited vulnerabilities in the microarchitecture layer to allow powerful arbitrary reads, and highlighted the security issues that can arise from certain optimizations of the system microarchitecture. Detecting and preventing such attacks is not intuitive and there are many challenges to deal with: (1) the great difficulty in identifying sources of vulnerability, implied by the high level of complexity and variability of different microarchitectures; (2) the significant impact of countermeasures on overall performance, and the fact that modifications to the system's hardware microarchitecture are generally not desired; and (3) the necessity to design countermeasures able to adapt to the evolution of attacks after deployment of the system. To face these challenges, this thesis focuses on the use of information available at the microarchitecture level to build efficient attack detection methods. In particular, we describe a framework allowing the dynamic detection of attacks that leave fingerprints at the system's microarchitecture layer. This framework proposes: (1) the use of microarchitectural information for attack detection, which can effectively cover attacks targeting microarchitectural vulnerabilities; (2) a methodology that assists designers in selecting relevant microarchitectural information to extract; (3) the use of dedicated connections for the transmission of the extracted information, in order to ensure high transmission bandwidth and prevent data loss; and (4) the use of reconfigurable hardware in conjunction with software to implement the attack detection logic. This combination (forming the so-called detection module) reduces the performance overhead through hardware acceleration, and allows updating the detection logic during the system lifetime through reconfiguration, in order to adapt to the evolution of attacks. We present in detail the proposed architecture and the modifications needed in the operating system, the methodology for selecting appropriate microarchitectural information and for integrating this framework into a specific computer system, and we describe how the final system integrating our detection module is able to detect attacks and adapt to attack evolution. This thesis also provides two use-case studies implemented on a prototype (based on a RISC-V core with a Linux operating system) on an FPGA. It shows that, thanks to the analysis of microarchitectural information, relatively simple logic implemented in the detection module is sufficient to detect different classes of attacks (cache side-channel attacks and ROP attacks).
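As a software analogue of the "relatively simple logic" mentioned above, the sketch below applies threshold rules to per-window counter samples. The counter names, window size and thresholds are assumptions made for the illustration; the thesis implements such logic in reconfigurable hardware fed by dedicated microarchitectural signals.

```python
from collections import namedtuple

# Toy threshold-based monitor over per-window microarchitectural counters.
# All counter names, sample values and thresholds are invented for the sketch.

Window = namedtuple("Window", "instructions cache_misses returns ret_mispredictions")

def classify(w: Window):
    alerts = []
    if w.instructions and w.cache_misses / w.instructions > 0.30:
        alerts.append("possible cache side-channel activity")
    if w.returns and w.ret_mispredictions / w.returns > 0.50:
        alerts.append("possible ROP-like control flow")
    return alerts

samples = [
    Window(100_000, 1_200, 4_000, 30),       # benign-looking window
    Window(100_000, 45_000, 3_900, 25),      # heavy cache thrashing
    Window(100_000, 1_500, 3_800, 3_000),    # returns mostly mispredicted
]
for w in samples:
    print(classify(w))
```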
Mantovani, Alessandro. "An Analysis of Human-in-the-loop Approaches for Reverse Engineering Automation". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS052.pdf.
In system and software security, one of the first criteria before applying an analysis methodology is to distinguish according to whether or not the source code is available. When the software we want to investigate is present in binary form, the only possibility we have is to extract some information from it by observing its machine code, performing what is commonly referred to as Binary Analysis (BA). The artisans in this sector are in charge of mixing their personal experience with an arsenal of tools and methodologies to comprehend some intrinsic and hidden aspects of the target binary, for instance, to discover new vulnerabilities or to detect malicious behaviors. Although this human-in-the-loop configuration has been well consolidated over the years, the current explosion of threats and attack vectors such as malware, weaponized exploits, etc. implicitly stresses this binary analysis model, demanding at the same time high accuracy of the analysis as well as proper scalability over the binaries to counteract the adversarial actors. Therefore, despite the many advances in the BA field over the past years, we are still obliged to seek novel solutions. In this thesis, we take a further step on this problem, and we try to show what current paradigms lack in order to increase the automation level. To accomplish this, we isolated three classical binary analysis use cases, and we demonstrated how the analysis pipeline benefits from human intervention. In other words, we considered three human-in-the-loop systems, and we described the human role inside the pipeline with a focus on the types of feedback that the analyst "exchanges" with her toolchain. These three examples provided a full view of the gap between current binary analysis solutions and ideally more automated ones, suggesting that the main feature at the base of the human feedback corresponds to the human skill at comprehending portions of binary code. This attempt to systematize the human role in modern binary analysis approaches tries to raise the bar towards more automated systems by leveraging the human component that, so far, is still unavoidable in the majority of the scenarios. Although our analysis shows that machines cannot replace humans at the current stage, we cannot exclude that future approaches will be able to fill this gap as well as evolve tools and methodologies to the next level. Therefore, we hope with this work to inspire future research in the field to reach ever more sophisticated and automated binary analysis techniques.
Besson, Frédéric. "Résolution modulaire d'analyses de programmes : application à la sécurité logicielle". Rennes 1, 2002. http://www.theses.fr/2002REN1A114.
Texto completoDelouette, Ilona. "Une analyse d’économie institutionnaliste du financement de la prise en charge de la dépendance : D’un risque social à un risque positif". Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1A002.
This doctoral dissertation focuses on the funding of care for elderly dependent persons in France. "Dependency" was not considered a specific social protection issue until the Arreckz report (1979), in which the idea of creating a 5th social security risk emerges. However, care has always been the subject of overlapping schemes designed independently of social security. These schemes have always proved insufficient to cover social needs. We carry out an institutionalist political economy analysis rooted in Régulation Theory, and more specifically in B. Théret's approach to national systems of social protection. The care of dependency is understood as a sub-system of social protection, i.e. as a means of reproduction of the economic, political, and domestic spheres, which are supported by a shared symbolism. In order to understand the evolution of the dependency funding system, we study the changes of this symbolism and the power relations that influence it. Within this symbolism we are specifically interested in the increasing use of the category of risk and insurance. This category, widely used in the field of dependency, is also at the core of the symbolism of French social protection. Our research is based on the analysis of observations made by think-tanks concerning the funding of dependency; on semi-directive interviews of "key" players in the field; and on a linguistic and historical analysis of public reports using the software Prospéro (Doxa). We demonstrate that since 1979, dependency has been understood as a social risk (1979-1997), then as a social protection risk (1997-2007), and finally as a predictable and positive risk (2008-2015). Within the symbolism of social protection, the category of risk and insurance is increasingly embedded in managerial and economic discourse. These changes justify the transition from funding defined within the primary share of the value added to funding relying on secondary redistribution. Thus, although the risk category has been used constantly since the 1970s, it does not justify funding of the field commensurate with the stakes involved. On the contrary, from the 2000s onwards, in view of the power relations prevailing in the field, the category of risk has justified the privatisation of its funding.
Laouadi, Rabah. "Analyse du flot de contrôle multivariante : application à la détection de comportements des programmes". Electronic Thesis or Diss., Montpellier, 2016. http://www.theses.fr/2016MONTT255.
Without executing an application, is it possible to predict the target method of a call site? Is it possible to know the types and values that an expression can contain? Is it possible to determine exhaustively the set of behaviors that an application can perform? In all three cases, the answer is yes, as long as a certain approximation is accepted. There is a class of algorithms - little known outside of academia - that can simulate and analyze a program to compute conservatively all information that can be conveyed in an expression. In this thesis, we present these algorithms, called CFAs (Control Flow Analyses), and more specifically the multivariant k-l-CFA algorithm. We combine the k-l-CFA algorithm with taint analysis, which consists in following tainted sensitive data in the control flow to determine if it reaches a sink (an outgoing flow of the program). This combination, with the integration of abstract interpretation for the values, aims to identify as exhaustively as possible all behaviors performed by an application. The problem with this approach is the high number of false positives, which requires a human post-processing treatment. It is therefore essential to increase the accuracy of the analysis by increasing k. k-l-CFA is notoriously known for having a high combinatorial complexity, which is exponential in the value of k. The first contribution of this thesis is to design a model and the most efficient implementation possible, carefully separating the static and dynamic parts of the analysis, to allow scalability. The second contribution of this thesis is to propose a new CFA variant based on the k-l-CFA algorithm - called *-CFA - which consists in keeping the parameter k locally for each variant, and increasing this parameter in the contexts which justify it. To evaluate the effectiveness of our implementation of k-l-CFA, we make a comparison with the Wala framework. Then, we do the same with the DroidBench benchmark to validate our taint analysis and behavior detection. Finally, we present the contributions of the *-CFA algorithm compared to standard CFA algorithms in the context of taint analysis and behavior detection.
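The taint-analysis component can be pictured with a minimal fixpoint over an assignment graph: taint enters at sources and the analysis reports whenever it reaches a sink. The miniature "program", source and sink names below are invented and are not the k-l-CFA implementation.

```python
# Toy taint propagation: taint introduced at sources flows through assignments,
# and reaching a sink is reported. The miniature program graph is invented.

assignments = {               # var -> variables it is computed from
    "query": ["user_input"],
    "page":  ["query", "template"],
    "log":   ["template"],
}
sources = {"user_input"}
sinks = {"page": "sendToBrowser", "log": "writeLog"}

def tainted_vars():
    tainted = set(sources)
    changed = True
    while changed:                            # fixpoint over the assignment graph
        changed = False
        for var, deps in assignments.items():
            if var not in tainted and any(d in tainted for d in deps):
                tainted.add(var)
                changed = True
    return tainted

t = tainted_vars()
for var, sink in sinks.items():
    if var in t:
        print(f"tainted data reaches sink {sink} via {var}")   # only 'page' here
```

The precision problem the abstract mentions shows up when many call contexts are merged; increasing k (or doing so selectively, as in *-CFA) keeps more contexts apart at the cost of a larger analysis state.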
Varet, Antoine. "Conception, mise en oeuvre et évaluation d'un routeur embarqué pour l'avionique de nouvelle génération". Phd thesis, INSA de Toulouse, 2013. http://tel.archives-ouvertes.fr/tel-00932283.
Texto completoLaouadi, Rabah. "Analyse du flot de contrôle multivariante : application à la détection de comportements des programmes". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT255.
Without executing an application, is it possible to predict the target method of a call site? Is it possible to know the types and values that an expression can contain? Is it possible to determine exhaustively the set of behaviors that an application can perform? In all three cases, the answer is yes, as long as a certain approximation is accepted. There is a class of algorithms - little known outside of academia - that can simulate and analyze a program to compute conservatively all information that can be conveyed in an expression. In this thesis, we present these algorithms, called CFAs (Control Flow Analyses), and more specifically the multivariant k-l-CFA algorithm. We combine the k-l-CFA algorithm with taint analysis, which consists in following tainted sensitive data in the control flow to determine if it reaches a sink (an outgoing flow of the program). This combination, with the integration of abstract interpretation for the values, aims to identify as exhaustively as possible all behaviors performed by an application. The problem with this approach is the high number of false positives, which requires a human post-processing treatment. It is therefore essential to increase the accuracy of the analysis by increasing k. k-l-CFA is notoriously known for having a high combinatorial complexity, which is exponential in the value of k. The first contribution of this thesis is to design a model and the most efficient implementation possible, carefully separating the static and dynamic parts of the analysis, to allow scalability. The second contribution of this thesis is to propose a new CFA variant based on the k-l-CFA algorithm - called *-CFA - which consists in keeping the parameter k locally for each variant, and increasing this parameter in the contexts which justify it. To evaluate the effectiveness of our implementation of k-l-CFA, we make a comparison with the Wala framework. Then, we do the same with the DroidBench benchmark to validate our taint analysis and behavior detection. Finally, we present the contributions of the *-CFA algorithm compared to standard CFA algorithms in the context of taint analysis and behavior detection.
Boichut, Stéphane. "Approximations pour la vérification automatique de protocoles de sécurité". Besançon, 2006. http://www.theses.fr/2006BESA2042.
Secured communications are the foundations of on-line critical applications such as e-commerce, e-voting, etc. Automatically verifying such secured communications, represented as security protocols, is of the first interest for industry. By representing the secrecy verification problem as the reachability problem in rewriting, we propose to automate a method, initially dedicated to expert users, that verifies secrecy properties on approximations of the intruder knowledge. The intruder knowledge is a set of terms computed from a given one (representing the initial intruder knowledge) using a term rewriting system (specifying the intruder and the security protocol). By a semi-algorithm, we provide a diagnostic stating that a secrecy property is either violated, thanks to the computation of an under-estimation of the intruder knowledge, or satisfied, with the computation of an over-estimation. This semi-algorithm is implemented in the automatic TA4SP tool. This tool is integrated in the AVISPA tool (http://www.avispa-project.org), a tool-set dedicated to the automatic verification of security protocols. We also propose a technique to reconstruct proof trees of term reachability in an approximated context, meaning that attack traces can be drawn as soon as a secrecy property is violated.
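The "intruder knowledge as a set of reachable terms" view can be illustrated with an exact, finite Dolev-Yao-style closure; the thesis instead works with term rewriting systems and tree-automata approximations, and the terms below are invented for the example.

```python
# Toy Dolev-Yao-style saturation of intruder knowledge: terms are nested tuples,
# and the closure applies projection of pairs and decryption with known keys
# until a fixpoint. Only meant to illustrate "knowledge = reachable terms".

def close(knowledge):
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            derived = set()
            if isinstance(t, tuple) and t[0] == "pair":
                derived |= {t[1], t[2]}                       # unpair
            if isinstance(t, tuple) and t[0] == "enc" and t[2] in known:
                derived.add(t[1])                             # decrypt with known key
            new = derived - known
            if new:
                known |= new
                changed = True
    return known

initial = {("pair", ("enc", "secret", "k1"), "k2"), "k1"}
print("secret" in close(initial))   # True: k1 is known, so the ciphertext opens
```

Real protocol models produce infinite term sets, which is why the thesis resorts to computable over- and under-approximations rather than exact closures.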
Montagut, Frédéric. "Processus collaboratifs ubiquitaires : architecture, fiabilité et sécurité". Paris, ENST, 2007. http://www.theses.fr/2007ENST0018.
Electronic commerce applications have become the standard support for B2B and B2C collaborations. The concept of workflow has been the main enabling concept for such applications. Workflow technologies indeed make it possible to leverage the functionalities of multiple service providers to build value-added services. Typical workflows, however, rely on a centralized coordinator in charge of managing the workflow execution. New trends in collaborative business applications call for more flexibility. As a result, the centralized coordination paradigm is no longer suitable to adequately support the execution of recent business applications. In this thesis we present a decentralized workflow management system to overcome this limitation. The main contribution of this thesis is the design and implementation of a full-fledged distributed workflow management system. The workflow architecture that we developed supports the execution of workflows in environments whereby resources offered by each business partner can be used by any party within the surroundings of that business partner. It also features the runtime assignment of business partners to workflow tasks, to provide the adequate flexibility to support dynamic collaborations of business partners. This flexibility, however, comes at the expense of security and reliability, and raises new research issues, as opposed to usual workflow systems, in terms of security and fault management. To cope with the latter, we first propose a transactional protocol to support the workflow execution. Besides, we introduce new security mechanisms to assure the integrity of the workflow execution and to prevent workflow instance forging.
Christofi, Maria. "Preuves de sécurité outillées d’implémentations cryptographiques". Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0029.
Texto completoIn this thesis, we are interested in the formal verification of cryptographic implementations. In the first part, we study the verification of the mERA protocol using the ProVerif tool. We prove that this protocol satisfies some security properties, such as authentication, secrecy and unlinkability, as well as liveness properties. In the second part of this thesis, we study the formal verification of cryptographic implementations against a family of attacks: fault-injection attacks that modify data. We identify and present the different models of these attacks according to different parameters. We then model the cryptographic implementation (with its countermeasures), inject all possible fault scenarios and finally verify the corresponding code using the Frama-C tool, based on static analysis techniques. We present a use case of our method: the verification of a CRT-RSA implementation with Vigilant's countermeasure. After expressing the properties necessary for the verification, we inject all fault scenarios (with respect to the chosen fault model). This verification reveals two fault scenarios that may leak secret information. In order to mechanize the verification, we insert fault scenarios automatically, covering both single- and multi-fault attacks. This results in a new Frama-C plug-in: TL-FACE
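The verification flow described above, injecting every fault scenario of a chosen model and checking the outcome, can be mimicked in miniature: the Python sketch below forces each intermediate value of a toy computation to a faulty value (single data fault) and classifies each scenario against a simple duplication countermeasure. It only illustrates the idea, not the CRT-RSA/Vigilant case study verified with Frama-C.

```python
# Exhaustive enumeration of single data faults on a toy computation protected
# by a naive duplication countermeasure. Each scenario is classified as
# detected, harmless, or dangerous (a wrong result escapes the check).

FAULT_POINTS = ("a", "b")

def compute(x, fault=None):
    a = x * x
    if fault == "a":
        a ^= 1                    # fault injected before the redundant computation
    b = a + 7
    b_dup = a + 7                 # countermeasure: compute b twice and compare
    if fault == "b":
        b ^= 1                    # fault injected after the duplication
    detected = (b != b_dup)
    return b * 3, detected

reference, _ = compute(5)
for point in FAULT_POINTS:
    result, detected = compute(5, fault=point)
    if detected:
        print(point, "-> detected by the countermeasure")
    elif result == reference:
        print(point, "-> harmless")
    else:
        print(point, "-> DANGEROUS: wrong result accepted")   # escapes the check
```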
Theos, Constantin. "Modélisation du mouvement des personnes lors de l'évacuation d'un bâtiment à la suite d'un sinistre". Phd thesis, Marne-la-vallée, ENPC, 1994. http://www.theses.fr/1994ENPC9407.
Texto completoHourtolle, Catherine. "Conception de logiciels sûrs de fonctionnement : analyse de la sécurité des logiciels : mécanismes de décision pour la programmation en n-versions". Toulouse, INPT, 1987. http://www.theses.fr/1987INPT059H.
Texto completoVenelle, Benjamin. "Contrôle d'accès obligatoire pour systèmes à objets : défense en profondeur des objets Java". Thesis, Orléans, 2015. http://www.theses.fr/2015ORLE2023/document.
Texto completoObject-based systems are present everywhere in our lives. When such a system contains vulnerabilities, confidentiality and integrity can be widely compromised. For example, Java is an object language whose vulnerabilities enabled many cyber-attacks between 2012 and 2013, leading the US Department of Homeland Security to recommend abandoning it. This thesis proposes to limit the relations between objects by means of mandatory access control. First, a general object model supporting both object and prototype languages is defined. Second, the elementary relations are formalized in order to control them. Those relations include reference, interaction and three types of flow (activity, information and data). Automata support a logic that makes it possible to compute the required mandatory policy. At the same time, both the computation of the MAC policy and efficiency are addressed, since the policy is reduced. Experiments use the JAAS security objectives available in the Java language. Thus, one year of Java vulnerabilities is prevented, as demonstrated with the Metasploit framework
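As a loose illustration of controlling elementary relations between labelled objects with a mandatory policy (and not the thesis's Java/JAAS mechanism), the following Python sketch interposes a reference monitor that allows or denies interactions by default-deny lookup in a policy table; labels, rules and object names are hypothetical.

```python
# Hypothetical reference monitor: every interaction between two labelled
# objects must be explicitly allowed by the mandatory policy, otherwise it
# is denied regardless of what the calling code intends.

POLICY = {
    ("applet", "renderer"):  True,
    ("system", "clipboard"): True,   # only trusted code may touch the clipboard
}

class SecurityError(Exception):
    pass

class Monitored:
    def __init__(self, label):
        self.label = label

    def interact(self, other, action):
        if not POLICY.get((self.label, other.label), False):   # default deny
            raise SecurityError(f"{self.label} -> {other.label} denied")
        return f"{self.label} {action} {other.label}"

applet = Monitored("applet")
clipboard = Monitored("clipboard")
renderer = Monitored("renderer")

print(applet.interact(renderer, "draws on"))       # allowed by the policy
try:
    applet.interact(clipboard, "reads")            # denied: relation not in the policy
except SecurityError as err:
    print("blocked:", err)
```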
Pellegrino, Giancarlo. "Détection d'anomalies logiques dans les logiciels d'entreprise multi-partis à travers des tests de sécurité". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0064/document.
Texto completoMulti-party business applications are distributed computer programs implementing collaborative business functions. These applications are one of the main targets of attackers, who exploit vulnerabilities in order to perform malicious activities. The most prevalent classes of vulnerabilities are the consequence of insufficient validation of user-provided input. However, the less-known class of logic vulnerabilities has recently attracted the attention of researchers. Depending on the availability of software documentation, two testing techniques can be used: design verification via model checking, and black-box security testing. However, the former offers no support to test real implementations and the latter lacks the sophistication to detect logic flaws. In this thesis, we present two novel security testing techniques to detect logic flaws in multi-party business applications that tackle the shortcomings of the existing techniques. First, we present the verification via model checking of two security protocols. We then address the challenge of extending the results of the model checker to automatically test protocol implementations. Second, we present a novel black-box security testing technique that combines model inference, extraction of workflow and data flow patterns, and an attack-pattern-based test case generation algorithm. Finally, we discuss the application of the techniques developed in this thesis in an industrial setting. We used these techniques to discover previously unknown design errors in the SAML SSO and OpenID protocols, and ten logic vulnerabilities in eCommerce applications allowing an attacker to pay less or shop for free
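To make the black-box technique above concrete in miniature, the sketch below takes a hypothetical inferred "happy path" of an e-commerce workflow and derives candidate logic-flaw tests by skipping, replaying or swapping steps (for instance, skipping the payment step); a harness would then replay each mutated trace against the real application and flag runs that still complete successfully.

```python
# Generate logic-flaw test cases by mutating an inferred workflow trace:
# skip a step, replay a step, or swap two adjacent steps. Each mutated
# trace is a candidate test for a harness to execute against the target.

HAPPY_PATH = ["add_to_cart", "checkout", "pay", "confirm_order"]

def skip_step(trace, i):
    return trace[:i] + trace[i + 1:]

def replay_step(trace, i):
    return trace[:i + 1] + [trace[i]] + trace[i + 1:]

def swap_steps(trace, i):
    t = list(trace)
    t[i], t[i + 1] = t[i + 1], t[i]
    return t

def generate_tests(trace):
    tests = []
    for i in range(len(trace)):
        tests.append(("skip", trace[i], skip_step(trace, i)))
        tests.append(("replay", trace[i], replay_step(trace, i)))
        if i + 1 < len(trace):
            tests.append(("swap", trace[i], swap_steps(trace, i)))
    return tests

for pattern, step, candidate in generate_tests(HAPPY_PATH):
    print(f"{pattern:<7} {step:<15} -> {candidate}")
```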
Papoulias, Nikolaos. "Le Débogage à Distance et la Réflexion dans les Dispositifs à Ressources Limitées". Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2013. http://tel.archives-ouvertes.fr/tel-00932796.
Texto completo