Theses on the topic "Systèmes informatiques – Mesures de sûreté – Automatisation"
Cite a source in APA, MLA, Chicago, Harvard, and many other styles.
Consult the top 50 dissertations (master's and doctoral theses) on the research topic "Systèmes informatiques – Mesures de sûreté – Automatisation".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf and read the abstract (summary) of the work online, if it is present in the metadata.
Browse theses from many scientific areas and compile a correct bibliography.
Mobarek, Iyad. "Conception d'un système national des équipements médicaux automatiques pour améliorer la performance et réduire les coûts d'investissements et d'exploitations des dispositifs médicaux". Compiègne, 2006. http://www.theses.fr/2006COMP1623.
This thesis describes the different phases of developing, implementing and evaluating a unique Clinical Engineering System (CES) based in Jordan. This includes establishing and then automating all technical issues and information related to medical equipment in 29 hospitals, 685 health centers, 332 dental clinics, 348 pediatric and mother care clinics and 23 blood banks. Every piece of medical equipment was assigned an identity code that can be recognized through a bar code scanning system, and all other involved parameters (hospitals, personnel, spare parts, workshops, etc.) are likewise coded comprehensively. The fully automated CES is a powerful system, implemented over a network covering the different locations of the Directorate of Biomedical Engineering (DBE) at the Ministry of Health all over the country; it is the first comprehensive CES to be implemented at the national level, and the automated system can read and report in both Arabic and English. Compared to international figures, the developed clinical engineering system has enhanced the performance of medical equipment, raising its availability (uptime) to the best available international levels at extremely low cost. The complete system proved to be an invaluable tool to manage, control and report all the different parameters concerning medical equipment in the considered clinical engineering setting. The system was evaluated and found to be reliable, effective and unique compared to internationally available systems, and it presents a successful model for other countries.
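The asset-coding scheme the abstract describes can be sketched as a small registry in which each entity class receives a prefixed, bar-code-friendly identity code. The prefixes and code format below are illustrative assumptions, not the CES's actual scheme:

```python
from itertools import count

# Illustrative prefixes for each coded entity class (assumed, not the real CES format).
PREFIXES = {"equipment": "EQ", "hospital": "HO", "personnel": "PE", "spare_part": "SP"}

class AssetRegistry:
    """Assigns sequential identity codes to every tracked entity."""
    def __init__(self):
        self._counters = {kind: count(1) for kind in PREFIXES}
        self._records = {}

    def register(self, kind, description):
        code = f"{PREFIXES[kind]}-{next(self._counters[kind]):06d}"
        self._records[code] = {"kind": kind, "description": description}
        return code

    def lookup(self, code):
        return self._records.get(code)

registry = AssetRegistry()
code = registry.register("equipment", "Infusion pump, ICU ward 3")
print(code)  # EQ-000001
```

A bar-code scanner would then resolve a scanned code via `lookup` to retrieve the record.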
Mantovani, Alessandro. "An Analysis of Human-in-the-loop Approaches for Reverse Engineering Automation". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS052.pdf.
In system and software security, one of the first criteria before applying an analysis methodology is to distinguish according to the availability or not of the source code. When the software we want to investigate is present only in binary form, the sole possibility we have is to extract information from it by observing its machine code, performing what is commonly referred to as Binary Analysis (BA). The artisans in this sector are in charge of mixing their personal experience with an arsenal of tools and methodologies to comprehend some intrinsic and hidden aspects of the target binary, for instance to discover new vulnerabilities or to detect malicious behaviors. Although this human-in-the-loop configuration has consolidated over the years, the current explosion of threats and attack vectors such as malware, weaponized exploits, etc. implicitly stresses this binary analysis model, demanding at the same time high accuracy of the analysis as well as proper scalability over the binaries to counteract the adversarial actors. Therefore, despite the many advances in the BA field over the past years, we are still obliged to seek novel solutions. In this thesis, we take a further step on this problem and try to show what current paradigms lack in order to increase the automation level. To accomplish this, we isolated three classical binary analysis use cases and demonstrated how the analysis pipeline benefits from human intervention. In other words, we considered three human-in-the-loop systems and described the human role inside the pipeline, with a focus on the types of feedback that the analyst "exchanges" with her toolchain. These three examples provided a full view of the gap between current binary analysis solutions and ideally more automated ones, suggesting that the main feature at the base of the human feedback corresponds to the human skill of comprehending portions of binary code.
This attempt to systematize the human role in modern binary analysis approaches tries to raise the bar towards more automated systems by leveraging the human component that, so far, remains unavoidable in the majority of scenarios. Although our analysis shows that machines cannot replace humans at the current stage, we cannot exclude that future approaches will be able to fill this gap and evolve tools and methodologies to the next level. We therefore hope with this work to inspire future research in the field to reach ever more sophisticated and automated binary analysis techniques.
Oulaaffart, Mohamed. "Automating Security Enhancement for Cloud Services". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0232.
The advances in virtualization techniques and the maturity of orchestration languages have contributed to the design and deployment of cloud composite services. These cloud services may be subject to changes over time, due to the migration of their resources. This may introduce new vulnerabilities that compromise the whole service. In that context, this thesis proposes to enhance and automate the security of cloud composite services along three main axes. The first axis consists of an automated SMT-based security framework for supporting migrations in cloud composite services, such as those orchestrated with the TOSCA (Topology and Orchestration Specification for Cloud Applications) language. It relies on verification techniques for automatically assessing the configuration changes that affect the components of cloud services during their migrations and determining adequate countermeasures. The second axis investigates the design of an inter-cloud trusted third party, called C3S-TTP (Composite Cloud Configuration Security-Trusted Third Party). This third party is capable of performing a precise and exhaustive vulnerability assessment, without requiring the cloud provider and the cloud tenant to share critical configuration information with each other. The third axis is centered on a moving target defense strategy that combines artificial intelligence algorithms with verification techniques. The purpose is to deceive reconnaissance activities performed by attackers through a large exploration of states, while minimizing the occurrence of new vulnerabilities that may impact the attack surface of cloud composite services.
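As a rough illustration of the first axis, assessing a configuration change can be phrased as a constraint-satisfaction problem. The sketch below brute-forces a toy boolean model instead of calling a real SMT solver such as Z3, and all constraints are invented for the example:

```python
from itertools import product

# Toy model of a migrated component's configuration: three boolean settings.
# Invented constraints: TLS must stay on; if the service is exposed, the
# firewall must be enabled; debug mode is forbidden in production.
def is_safe(tls_on, firewall_on, debug_on, exposed=True):
    return tls_on and (firewall_on or not exposed) and not debug_on

def find_countermeasure(current):
    """Return (flips, config): the safe configuration closest to `current`."""
    best = None
    for candidate in product([False, True], repeat=3):
        if is_safe(*candidate):
            flips = sum(a != b for a, b in zip(candidate, current))
            if best is None or flips < best[0]:
                best = (flips, candidate)
    return best

# A migration left the component with TLS off and debug mode on:
flips, fixed = find_countermeasure((False, True, True))
print(fixed)  # (True, True, False): re-enable TLS, disable debug
```

A real SMT encoding would express the same constraints as first-order formulas and let the solver search the (much larger) state space.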
Mendy, Norbert Lucien. "Les attaques et la sécurité des systèmes informatiques". Paris 8, 2006. http://www.theses.fr/2006PA082735.
Hacking activities appeared around 1980 with the first personal computers and have not stopped developing since. At the beginning, this practice was primarily individual and playful. Now it mainly consists of the activities of groups with very diverse motivations. Today, due to the development of electronic means of communication, data security concerns a wider public. This thesis first examines, from a technical and sociological point of view, attacks and defense mechanisms, and then proposes a new concept of security which is not centered solely on technical solutions but also takes into consideration the social dimension of the problem.
Koutsos, Adrien. "Preuves symboliques de propriétés d’indistinguabilité calculatoire". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLN029/document.
Our society relies extensively on communication systems. Because such systems are used to exchange sensitive information and are pervasive, they need to be secured. Cryptographic protocols are what allow us to have secure communications. It is crucial that such protocols do not fail in providing the security properties they claim, as failures have dire consequences. Unfortunately, designing cryptographic protocols is notoriously hard, and major protocols are regularly and successfully attacked. We argue that formal verification is the best way to gain strong confidence in a protocol's security: the goal is to mathematically prove that a protocol satisfies some security property. Our objective is to develop techniques to formally verify equivalence properties of cryptographic protocols, using a method that provides strong security guarantees while being amenable to automated deduction techniques. In this thesis, we argue that the Bana-Comon model for equivalence properties meets these goals. We support our claim through three contributions. First, we design axioms for the usual functions used in security protocols and for several cryptographic hypotheses. Second, we illustrate the usefulness of these axioms and of the model by completing case studies of concrete protocols: we study two RFID protocols, KCL and LAK, as well as the 5G-AKA authentication protocol used in mobile communication systems. For each of these protocols, we show existing or new attacks against current versions, propose fixes, and prove that the fixed versions are secure. Finally, we study the problem of proof automation in the Bana-Comon model by showing the decidability of a set of inference rules which is a sound, though incomplete, axiomatization of computational indistinguishability when using an IND-CCA2 encryption scheme. From a cryptographer's point of view, this can be seen as the decidability of a fixed set of cryptographic game transformations.
Cerf, Sophie. "Control Theory for Computing Systems: Application to Big-Data Cloud Services & Location Privacy Protection". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT024.
This thesis presents an application of control theory to computing systems. It investigates techniques to build and control efficient, dependable and privacy-preserving computing systems. Ad-hoc service configuration requires a high level of expertise that could benefit from automation in many ways. A control algorithm can handle bigger and more complex systems, even when they are extremely sensitive to variations in their environment. However, applying control to computing systems raises several challenges; e.g., no physics governs the applications. On the one hand, the mathematical framework provided by control theory can be used to improve the automation and robustness of computing systems; moreover, control theory provides by definition mathematical guarantees that its objectives will be fulfilled. On the other hand, the specific challenges of such use cases make it possible to expand control theory itself. The approach taken in this work is to use two computing-system applications: location privacy and cloud control. These two use cases are complementary in the nature of their technologies and software, their scale, and their end users. The widespread use of mobile devices has fostered the broadcasting and collection of users' location data. This may be for the user to benefit from a personalized service (e.g. weather forecast or route planning) or for the service provider or any other third party to derive useful information from the mobility databases (e.g. road usage frequency or popularity of places). Indeed, much information can be retrieved from location data, including highly sensitive personal data. To overcome this privacy breach, Location Privacy Protection Mechanisms (LPPMs) have been developed. They are algorithms that modify the user's mobility data, hopefully hiding some sensitive information. However, those tools are not easily configurable by non-experts and are static processes that do not adapt to the user's mobility.
We develop two tools, one for already collected databases and one for online usage, that, by tuning the LPPMs, guarantee users objective-driven levels of privacy protection and of service utility preservation. First, we present an automated tool able to choose and configure LPPMs to protect already collected databases while ensuring a trade-off between privacy protection and database processing quality. Second, we present the first formulation of the location privacy challenge in control-theoretic terms (plant and controller, disturbance and performance signals), and a feedback controller to serve as a proof of concept. In both cases, design, implementation and validation have been carried out through experiments using data of real users collected in the field. The surge in data generation of the last decades, so-called big data, has led to the development of frameworks able to analyze it, such as the well-known MapReduce. Advances in computing practices have also established the cloud paradigm (where low-level resources can be rented to allow the development of higher-level applications without dealing with considerations such as investment in hardware or maintenance) as a premium solution for all kinds of users. Ensuring the performance of MapReduce jobs running on clouds is thus a major concern for the big IT companies and their clients. In this work, we develop advanced monitoring techniques for job execution time and platform availability by tuning the resource-cluster size and performing admission control, in spite of the unpredictable client workload. In order to deal with the nonlinearities of the MapReduce system, a robust adaptive feedback controller has been designed. To reduce cluster utilization (which incurs massive financial and energy costs), we present a new event-based triggering-mechanism formulation combined with an optimal predictive controller.
Evaluation is done on a MapReduce benchmark suite running on a large-scale cluster, using real job workloads.
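The cluster-sizing feedback loop described above can be caricatured as a simple integral controller that grows or shrinks the cluster to track a latency setpoint. The plant model, gain, and all numbers below are assumptions for illustration, not the thesis's actual design:

```python
def latency(nodes, workload):
    """Toy plant: job latency falls as nodes are added (invented model)."""
    return workload / nodes

def control_loop(setpoint, workload, nodes=4.0, gain=0.1, steps=300):
    """Integral feedback: adjust cluster size from the measured latency error."""
    for _ in range(steps):
        error = latency(nodes, workload) - setpoint
        nodes = max(1.0, nodes + gain * error)  # grow on high latency, shrink on low
    return nodes

# Target a 10 s job latency under a fixed workload of 200 units:
final_nodes = control_loop(setpoint=10.0, workload=200.0)
print(round(final_nodes))  # converges to 20 nodes (200 / 20 = 10 s)
```

The real controller must additionally be robust to workload disturbances and plant nonlinearity, which is why the thesis uses an adaptive design rather than a fixed gain.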
Balduzzi, Marco. "Mesures automatisées de nouvelles menaces sur Internet". Paris, Télécom ParisTech, 2011. http://www.theses.fr/2011ENST0042.
In the last twenty years, the Internet has grown from a simple, small network into a complex, large-scale system. While the Internet was originally used to offer static content organized around simple websites, today it provides both content and services (e.g. chat, e-mail, the web) as well as the outsourcing of computation and applications (e.g. cloud computing). Attackers are not indifferent to the evolution of the Internet. Often driven by a flourishing underground economy, attackers are constantly looking for vulnerabilities, misconfigurations and novel techniques to access protected and authorized systems, to steal private information, or to deliver malicious content. To date, not much research has been conducted to measure the importance and extent of these emerging Internet threats. Conventional detection techniques cannot easily scale to large installations, and novel methodologies are required to analyze and discover bugs and vulnerabilities in these complex systems. In this thesis, we advance the state of the art in large-scale testing and measurement of Internet threats. We investigate three novel classes of security problems that affect Internet systems that experienced a fast surge in popularity (i.e., ClickJacking, HTTP Parameter Pollution, and commercial cloud computing services that allow the outsourcing of server infrastructures). We introduce the first large-scale attempt to estimate the prevalence and relevance of these problems on the Internet.
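HTTP Parameter Pollution, one of the three problem classes mentioned, arises because different components disagree on how to interpret a duplicated query parameter. The standard library makes the ambiguity easy to see; the query string and parameter names below are made up:

```python
from urllib.parse import parse_qs, parse_qsl

# A polluted query string: the attacker appends a second `action` parameter.
polluted = "action=view&id=42&action=delete"

# Different consumers may take the first value, the last value, or the whole list:
as_lists = parse_qs(polluted)
pairs = parse_qsl(polluted)
first_wins = dict(reversed(pairs))   # keeps the FIRST occurrence of each key
last_wins = dict(pairs)              # keeps the LAST occurrence of each key

print(as_lists["action"])    # ['view', 'delete']
print(first_wins["action"])  # view
print(last_wins["action"])   # delete
```

If a front-end validates the first occurrence while the back-end acts on the last, the attacker's `delete` slips through, which is exactly the class of inconsistency a large-scale measurement must probe for.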
Kanoun, Wael. "Intelligent risk-aware system for activating and deactivating policy-based response". Télécom Bretagne, 2011. http://www.theses.fr/2011TELB0165.
The growing scale of critical information systems, combined with the steady increase in the frequency and sophistication of attacks, makes classical response systems inadequate. A system may be the target of several simultaneous attacks that require the activation of different, contradictory responses. Moreover, a response can have collateral effects, such as (i) inducing an intrinsic cost on the system, or (ii) enabling and facilitating the execution of other attacks. Response systems must therefore be designed intelligently, to optimize the activation of appropriate responses, either to automate them or to provide decision support to administrators. While most existing response models consider only the cost of attacks and responses, we adopt a more general, risk-based perspective. In accordance with the definition of risk, we jointly consider the impact and the likelihood of success of ongoing attacks in the response selection process. First, we propose a workflow that allows reaction on two distinct levels: tactical and strategic. The tactical response is composed of elementary countermeasures with limited scope in the system; they are generally tied to the occurrence of the ongoing attack. The strategic response, in contrast, is specified with a formal language for expressing security policies; it is deployed globally in the system against major threats. Next, we propose a model for the tactical response based on dynamic risk assessment. When an ongoing attack is detected, we evaluate the overall risk by combining the potential impact with the likelihood of success of the attack. Countermeasures are then ranked by their effectiveness in reducing the overall risk.
We emphasize the likelihood-of-success factor and propose a dynamic model to evaluate this parameter, taking into account the progress of the ongoing attack and the state of the system. Finally, we present a risk-based framework for activating and deactivating the strategic response. This response is activated and deployed when the risk of the ongoing attack exceeds the cumulative cost of the response, and it is maintained as long as the risk remains present. Unlike existing systems, we consider the deactivation of a response, performed when the risk of the attack decreases or when the cost of the response becomes too high. In this thesis, a VoIP service was chosen to validate our proposals, while respecting operational and security constraints.
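Following the definition of risk used above (impact combined with likelihood of success), countermeasure selection can be sketched as ranking candidates by how much residual risk they remove per unit of cost. The multiplicative risk model and all numbers are illustrative assumptions:

```python
def risk(impact, likelihood):
    """Risk as impact x likelihood of success (one common convention)."""
    return impact * likelihood

# Ongoing attack: high impact, fairly likely to succeed (invented values).
attack = {"impact": 8.0, "likelihood": 0.7}
baseline = risk(**attack)

# Each candidate countermeasure reduces the likelihood and carries its own cost.
countermeasures = [
    {"name": "block source IP", "new_likelihood": 0.4, "cost": 1.0},
    {"name": "disable service", "new_likelihood": 0.05, "cost": 5.0},
    {"name": "rate-limit traffic", "new_likelihood": 0.3, "cost": 2.0},
]

def effectiveness(cm):
    """Risk reduction achieved per unit of countermeasure cost."""
    reduction = baseline - risk(attack["impact"], cm["new_likelihood"])
    return reduction / cm["cost"]

ranked = sorted(countermeasures, key=effectiveness, reverse=True)
print([cm["name"] for cm in ranked])
```

Note how the cost term matters: disabling the service removes the most risk in absolute terms, but its collateral cost pushes it to the bottom of the ranking.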
Znaidi, Wassim. "Quelques propositions de solutions pour la sécurité des réseaux de capteurs sans fil". Lyon, INSA, 2010. http://theses.insa-lyon.fr/publication/2010ISAL0083/these.pdf.
The self-organized growth of three-dimensional (3D) quantum dots has attracted a lot of interest for their potential applications in optoelectronic and nanophotonic devices. In this work, we study by optical spectroscopy InAs/InP and InAs/GaAs quantum dots grown by molecular beam epitaxy (MBE) using the Stranski-Krastanov (SK) growth mode. The quantum dots are then embedded in an electric-field-tunable device called a "nanopixel". In the case of the InAs/InP quantum dots, we focused on the impact of growth conditions, such as the cap thickness of the double-cap process, on the emission energy, as well as the influence of the first cap, temperature effects, and the exciton-biexciton system. In the case of the InAs/GaAs system, we studied the impact of the capping layer, the excited-level states, the exciton-biexciton system, and the impact of temperature. We successfully fabricated nanopixels including a quantum-dot layer inside the intrinsic region of a Schottky diode. First results showing the effect of an electric field on the emission of a single quantum dot are finally described.
Garcia-Alfaro, Joaquin. "Platform of intrusion management : design and implementation". Télécom Bretagne, 2006. http://www.theses.fr/2006TELB0025.
Today, computer systems are more vulnerable to malicious activities than ever before. The use of traditional security mechanisms therefore remains necessary but is no longer sufficient: we must develop effective methods for detecting and responding to attacks in order to stop the detected events. In this thesis we present the design of a general architecture that acts as a central point for analyzing and verifying network security policies, and for controlling and configuring, without anomalies or configuration errors, preventive and detection-oriented security components. We also present a response mechanism based on a library of different types of countermeasures. The objective of this mechanism is to help the administrator choose, from this library, the countermeasure best suited to a detected intrusion. We conclude by presenting an infrastructure for communication among the components of our platform, and a mechanism for protecting those components. All the proposals and approaches introduced in this thesis have been implemented and evaluated; we present the results obtained in the respective sections of this dissertation.
Falcone, Yliès Carlo. "Etude et mise en oeuvre de techniques de validation à l'exécution". Université Joseph Fourier (Grenoble), 2009. http://www.theses.fr/2009GRE10239.
This thesis deals with three dynamic validation techniques: runtime verification (monitoring), runtime enforcement, and testing from properties. We consider these approaches in the absence of a complete behavioral specification of the system under scrutiny. Our study is done in the context of the Safety-Progress classification of properties. This framework offers several advantages for specifying properties of systems. We adapt the results of this classification, initially dedicated to infinite sequences, to take into account finite sequences; such sequences may be considered abstract representations of a system execution. Relying on this general framework, we study the applicability of dynamic validation methods. We characterize the classes of monitorable, enforceable, and testable properties. We then propose three generic approaches for runtime verification, enforcement, and testing, and show how it is possible to obtain, from a property expressed in the Safety-Progress framework, verification, enforcement, and testing mechanisms for the property under consideration. Finally, we present the tools j-VETO and j-POST, which implement all the aforementioned results for Java programs.
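As a toy illustration of runtime verification, a safety property such as "no write after close" can be checked by a small monitor automaton consuming a finite execution trace. The property, states, and event names are invented for the example:

```python
# Monitor for the safety property "no `write` event after a `close` event".
# States: "open" (writes allowed) -> "closed" -> "violation" (trap state).
TRANSITIONS = {
    ("open", "write"): "open",
    ("open", "close"): "closed",
    ("closed", "open"): "open",
    ("closed", "close"): "closed",
    ("closed", "write"): "violation",
}

def monitor(trace):
    """Consume a finite trace; report the verdict for the safety property."""
    state = "open"
    for event in trace:
        state = TRANSITIONS.get((state, event), state)
        if state == "violation":
            return "violation"
    return "ok so far"   # for safety properties, only violations are definitive

print(monitor(["write", "close", "open", "write"]))  # ok so far
print(monitor(["write", "close", "write"]))          # violation
```

The hedged verdict "ok so far" reflects the finite-sequence subtlety the abstract mentions: a finite prefix that has not yet violated a safety property cannot, in general, be declared definitively correct.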
Blond, Julien. "Modélisation et implantation d'une politique de sécurité d'un OS multi-niveaux via une traduction de FoCaLyze vers C". Paris 6, 2010. http://www.theses.fr/2010PA066370.
Contes, Arnaud. "Une architecture de sécurité hiérarchique, adaptable et dynamique pour la Grille". Nice, 2005. http://www.theses.fr/2005NICE4025.
Whereas security is a key notion in the world of distributed applications, its numerous concepts are a difficult step to overcome when constructing such applications. Current middlewares provide all the major security-related technologies; however, developers still have to select the most appropriate one and handle all its underlying processes, which is particularly difficult with dynamic, grid-enabled applications. To facilitate the use of security concepts in such applications, this thesis presents a decentralized security model which takes care of the security requirements expressed by all the actors (resource providers, administrators, users) involved in a computation. The model is implemented outside the application source code; its configuration is read from external policy files, allowing the application's security to be adapted to its deployments. It has been designed to handle specific behaviors which can occur during a distributed application's life cycle (use of newly discovered resources, remote object creation).
Thomas, Yohann. "Policy-based response to intrusions through context activation". Télécom Bretagne, 2007. http://www.theses.fr/2007TELB0057.
In this thesis we present a new approach to responding to the threats to which computer systems are subjected. This approach is based on integrating the notion of countermeasure into the security policy itself. In particular, the notion of context makes it possible to evaluate the current state of the system and to express the policy as a function of that state. To do so, the Organization-Based Access Control model (Or-BAC) is used, distinguishing the generic definition of the policy from its effective implementation according to context. Context covers spatial and temporal parameters as well as parameters more directly related to operational security, such as the alerts raised by intrusion detection systems (IDS). These alerts characterize the threat to which the information system is subjected at a given moment. Threat contexts are instantiated by our response system, triggering updates of the policy and its subsequent deployment. The system is thus able to dynamically adapt its operating parameters, in particular by taking the threat into account. We propose an innovative approach establishing the link between the security policy and one of the principal means of monitoring compliance with it, namely intrusion detection systems. This link did not exist before: violations of the security policy detected by IDSs had little effect on the security policy requirements actually implemented by the enforcement points. Yet it is clear that the implementation of the policy must not be static. In particular, we show that it is possible to dynamically manage access to services and resources as a function of the threat.
Furthermore, this work provides the beginnings of an answer to the problem of the reactivity and relevance of the response to threats. The response to computer attacks is most often handled manually by the security operator. Unfortunately, that operator often lacks the reactivity and discernment needed to respond adequately to the threat, in particular because he is frequently drowned under the flood of alerts; the analysis work is tedious and difficult given the number of parameters to consider. Meanwhile, attacks are multiplying, and attackers take less and less time to penetrate systems and cause damage that can quickly amount to millions of euros for companies. Automating the response is therefore a necessity.
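The idea of threat contexts driving the active policy can be caricatured as follows; the contexts, rules, and alert format are all invented for this sketch and are far simpler than the thesis's Or-BAC model:

```python
# Policy rules are guarded by a context; a rule is active only while its
# context is. "nominal" is always active; threat contexts are toggled by alerts.
POLICY = [
    {"context": "nominal", "permission": ("any_user", "access", "web_service")},
    {"context": "bruteforce_threat", "prohibition": ("external", "access", "ssh_service")},
]

active_contexts = {"nominal"}

def on_ids_alert(alert):
    """Instantiate a threat context from an IDS alert (toy mapping)."""
    if alert["classification"] == "ssh-bruteforce":
        active_contexts.add("bruteforce_threat")

def active_rules():
    return [rule for rule in POLICY if rule["context"] in active_contexts]

print(len(active_rules()))  # 1: only the nominal permission is deployed
on_ids_alert({"classification": "ssh-bruteforce", "source": "203.0.113.9"})
print(len(active_rules()))  # 2: the prohibition is now deployed as well
```

The key point mirrored from the abstract is that the IDS alert does not merely raise an alarm; it changes which policy rules the enforcement points implement.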
Disson, Eric. "Sécurité des accès dans les systèmes d'information coopératifs". Lyon 3, 2001. http://www.theses.fr/2001LYO33032.
Pham, Van-Hau. "De l'identification d'événements d'attaques dans des traces recueillies sur des pots de miel". Paris, Télécom ParisTech, 2009. http://www.theses.fr/2009ENST0017.
Internet security is a major issue nowadays. Several research initiatives have been carried out to understand Internet security threats. Recently, a domain called attack attribution has emerged that aims at studying the modus operandi of attacks and at identifying the characteristics of the groups responsible for the observed attacks. The work presented in this thesis contributes to the efforts in this area. We show that, starting from network traces collected over two years on a distributed system of low-interaction honeypots, one can extract meaningful and useful knowledge about the attackers. To reach this goal, the thesis makes several important contributions. First of all, we show that attack traces can be automatically grouped into three distinct classes, corresponding to different attack phenomena. We have defined, implemented and validated algorithms to automatically group large amounts of traces per category. Secondly, we show that, for two of these classes, so-called micro and macro attack events can be identified that span a limited amount of time. These attack events are a key element in identifying specific activities that would otherwise be lost in the so-called attack background radiation noise. Here too, a new framework has been defined, implemented and validated over two years of traces; hundreds of significant attack events have been found. Last but not least, we show that, by grouping attack events together, it is possible to highlight the modus operandi of the organizations responsible for the attacks. The experimental validation of our approach led to the identification of dozens of so-called zombie armies. Their main characteristics are presented in the thesis, and they reveal new insights into the dynamics of the attacks carried out over the Internet.
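The notion of an attack event spanning a limited amount of time can be sketched as finding bursts in a per-day hit count that stand out from the background noise. The data and the one-standard-deviation threshold below are invented for the example:

```python
from statistics import mean, pstdev

# Daily hit counts on one honeypot port: mostly background noise, two bursts.
hits = [3, 2, 4, 3, 50, 48, 5, 3, 2, 4, 3, 60, 55, 58, 4, 2]

def attack_events(series, sigmas=1.0):
    """Group consecutive above-threshold days into (start, end) attack events."""
    threshold = mean(series) + sigmas * pstdev(series)
    events, current = [], []
    for day, count in enumerate(series):
        if count > threshold:
            current.append(day)
        elif current:
            events.append((current[0], current[-1]))
            current = []
    if current:
        events.append((current[0], current[-1]))
    return events

print(attack_events(hits))  # [(4, 5), (11, 13)]
```

A real framework would apply far more robust burst detection across many ports and honeypot platforms, but the principle of separating events from background radiation is the same.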
Briffaut, Jérémy. "Formation et garantie de propriétés de sécurité système : application à la détection d'intrusions". Orléans, 2007. http://www.theses.fr/2007ORLE2053.
Abdelnur, Humberto Jorge. "Gestion de vulnérabilités voix sur IP". Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10005/document.
VoIP networks are in a major deployment phase and are becoming widely accepted due to their extended functionality and cost efficiency. Meanwhile, as VoIP traffic is transported over the Internet, it is the target of a range of attacks that can jeopardize its proper functionality, so assuring its security becomes crucial. Among the most dangerous threats to VoIP, failures and bugs in software implementations will continue to rank high on the list of vulnerabilities. This thesis provides three contributions towards improving software security. The first is a VoIP-specific security assessment framework integrated with discovery actions, data management and security attacks, allowing VoIP-specific assessment tests to be performed. The second contribution is an automated approach able to discriminate message signatures and build flexible and efficient passive fingerprinting systems that identify the source entity of messages on the network. The third contribution addresses the issue of detecting vulnerabilities using a stateful fuzzer: it provides an automated attack approach capable of tracking the state context of a target device, and we share essential practical experience gathered over a two-year period spent searching for vulnerabilities in the VoIP space.
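Passive fingerprinting from message signatures, as in the second contribution, can be caricatured by extracting a few syntactic features from a SIP-like message and matching them against known device profiles. The features, the message, and the profile table are all invented:

```python
def signature(message):
    """Extract simple syntactic features that tend to vary across implementations."""
    headers = [line.split(":", 1)[0] for line in message.splitlines() if ":" in line]
    return (tuple(headers), message.count(" "), "\r\n" in message)

# Profiles learned beforehand from labelled traffic (toy values).
KNOWN = {
    (("Via", "From", "To"), 4, False): "SoftPhoneX 1.2",
    (("From", "Via", "To"), 4, False): "HardPhoneY 3.0",
}

def identify(message):
    return KNOWN.get(signature(message), "unknown")

msg = "Via: SIP/2.0/UDP host\nFrom: alice\nTo: bob\n"
print(identify(msg))  # SoftPhoneX 1.2 (per the toy profile table)
```

The underlying observation is that protocol stacks differ in header ordering, spacing, and line endings even when semantically equivalent, and those differences survive in every message the stack emits.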
Benali, Fatiha. "Modélisation et classification automatique des informations de sécurité". Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0007/these.pdf.
The security of the information system (IS) has become an important strategic issue. Organizations and companies are evolving and now run multiple systems on multiple nodes; they deploy multiple security devices and offer different services to their users. The services, resources and equipment deployed may be targets for intruders, so interoperability between the products that monitor the IS is absolutely necessary. In this work we present an architecture for intrusion detection based on interoperability between the different products (security and management) and services deployed in an organization. This architecture provides a comprehensive view and meets the current needs of the security administrator. Intrusion detection in this context consists of analyzing the information (alerts and events) generated by all these devices in order to prevent any action not legally permitted. The process of analyzing security information faces serious problems because of the heterogeneity of the mechanisms involved in monitoring the IS and because of the lack of a standard for representing such information. This thesis addresses the modeling of security information to tackle the problem of product heterogeneity, allowing security information management processes (such as intrusion detection or the search for the causes of a security incident) to be operational and efficient. The first part of the thesis proposes a solution for modeling the semantics of security information through an ontology. The purpose of the ontology is to describe in a uniform manner the semantics of all activities that may be performed by users of the IS, regardless of the products involved in its supervision, focusing on the concepts needed by the mechanisms that process such information. The implementation of the ontology consists of classifying the events and alerts generated by the monitoring products into the categories described by the ontology.
The second part of the thesis focuses on automating the classification of security messages. Since we have a corpus of previously classified messages, we turned to techniques for automatic text categorization, which are based on machine learning methods. The proposed classification process consists of two stages. The first step prepares the data and represents it in a format usable by the classification algorithms. The second step applies the machine learning algorithms to the preprocessed security information. The solutions proposed in the thesis are applied to a corpus of alerts and events provided by the company Exaprotect (a security software publisher)
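The two-stage process summarized above (data preparation and representation, then machine learning) can be illustrated with a minimal bag-of-words Naive Bayes classifier. This is a generic sketch of text categorization, not the thesis's actual system; the sample messages and categories are invented, not drawn from the Exaprotect corpus.

```python
from collections import Counter, defaultdict
import math

def tokenize(msg):
    # Step 1: data preparation -- lowercase the message and split it into word features
    return msg.lower().replace(".", " ").split()

def train(corpus):
    # corpus: list of (message, category) pairs, already classified (e.g. by an ontology)
    cat_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for msg, cat in corpus:
        cat_counts[cat] += 1
        for w in tokenize(msg):
            word_counts[cat][w] += 1
            vocab.add(w)
    return cat_counts, word_counts, vocab

def classify(msg, model):
    # Step 2: multinomial Naive Bayes with Laplace smoothing over the vocabulary
    cat_counts, word_counts, vocab = model
    total = sum(cat_counts.values())
    best, best_lp = None, float("-inf")
    for cat, n in cat_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[cat].values()) + len(vocab)
        for w in tokenize(msg):
            lp += math.log((word_counts[cat][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = cat, lp
    return best

corpus = [
    ("failed login for user root", "authentication"),
    ("login failure invalid password", "authentication"),
    ("port scan detected from host", "reconnaissance"),
    ("stealth scan on multiple ports", "reconnaissance"),
]
model = train(corpus)
print(classify("invalid login password for admin", model))  # -> authentication
```

In practice the representation step would also normalize field values (IP addresses, timestamps) before tokenization, and a larger corpus would justify stronger learners.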
Sadde, Gérald. "Sécurité logicielle des systèmes informatiques : aspects pénaux et civils". Montpellier 1, 2003. http://www.theses.fr/2003MON10019.
Bascou, Jean-Jacques. "Contribution à la sécurité des systèmes : une méthodologie d'authentification adaptative". Toulouse 3, 1996. http://www.theses.fr/1996TOU30253.
Jacob, Grégoire. "Malware behavioral models : bridging abstract and operational virology". Rennes 1, 2009. http://www.theses.fr/2009REN1S204.
This thesis addresses the modeling of malicious behaviors within malicious codes, commonly called malware. The work follows two directions, one operational, the other theoretical, with the long-term objective of combining the two approaches in order to build behavioral detection methods that cover the majority of existing malware while offering formal security guarantees against malware yet to appear. The operational approach introduces an abstract behavioral language, decorrelated from any implementation. The language itself relies on the formalism of attribute grammars to express the semantics of behaviors. Within this language, several descriptions of malicious behaviors are specified in order to build a multi-layer, parsing-based detection method. On the basis of the same language, behavioral mutation techniques are also formalized using compilation techniques; these mutations prove to be an interesting tool for evaluating antivirus products. The theoretical approach introduces a new formal viral model, no longer based on functional paradigms but on process algebras. This new model allows the description of self-replication as well as other, more complex, interaction-based behaviors. It supports new proofs of fundamental results such as the undecidability of detection and prevention by isolation. In addition, the model supports the formalization of several existing behavioral detection techniques, making it possible to evaluate their resistance formally
Rabah, Mourad. "Évaluation de la sûreté de fonctionnement de systèmes multiprocesseurs à usage multiple". Toulouse, INPT, 2000. http://www.theses.fr/2000INPT021H.
Trabelsi, Slim. "Services spontanés sécurisés pour l'informatique diffuse". Phd thesis, Télécom ParisTech, 2008. http://pastel.archives-ouvertes.fr/pastel-00004140.
Fadlallah, Ahmad. "Des solutions pour la traçabilité des attaques Internet". Paris, ENST, 2008. http://www.theses.fr/2008ENST0012.
Denial of Service (DoS) attacks are a real threat to the availability and stability of the Internet. Their continuous growth was the main motivation of our research, which starts with a thorough analysis of these attacks. The second step in our research was to study the existing DoS defense solutions. Our study analyzes the most well-known defense schemes together with their advantages and limitations. In particular, we were interested in attack traceback solutions, given their important role in the framework of DoS defense. The analysis of the different categories of traceback schemes led us to establish a number of requirements for an effective and deployable traceback solution. Our first solution mixes two existing traceback techniques, packet marking and packet logging, so that each compensates for the other's problems. Our second solution addresses the storage overhead of the first: it is based on out-of-band signaling, which traces IP flows through generated signaling messages. We then enhance this solution by combining the out-of-band signaling with packet marking, which significantly reduces the bandwidth overhead of the previous solution while still meeting the remaining performance, security and deployment requirements
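As an illustration of the packet-marking side of such traceback schemes, the following sketch simulates the classic node-sampling variant, in which each router on the attack path probabilistically overwrites a single mark field and the victim reconstructs the path from mark frequencies. This is a generic textbook technique, not the thesis's specific hybrid scheme; router names and parameters are invented.

```python
import random
from collections import Counter

def forward(path, n_packets, p=0.5, seed=1):
    # Each router on the path overwrites the single mark field with
    # probability p (node-sampling variant of probabilistic packet marking).
    rng = random.Random(seed)
    marks = []
    for _ in range(n_packets):
        mark = None
        for router in path:
            if rng.random() < p:
                mark = router
        marks.append(mark)
    return marks

def reconstruct(marks):
    # Routers closer to the victim overwrite earlier marks more often, so
    # ranking marks by frequency recovers the path ordered victim-first.
    freq = Counter(m for m in marks if m is not None)
    return [router for router, _ in freq.most_common()]

path = ["R1", "R2", "R3", "R4"]  # attacker side first, victim side last
marks = forward(path, 5000)
print(reconstruct(marks))  # -> ['R4', 'R3', 'R2', 'R1']
```

The need for thousands of packets before the path converges is precisely the kind of limitation that motivates hybrid marking/logging and signaling-based designs.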
Saadi, Rachid. "The Chameleon : un système de sécurité pour utilisateurs nomades en environnements pervasifs et collaboratifs". Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0040/these.pdf.
While trust is easy to set up between the known participants of a communication, evaluating trust becomes a challenge when confronted with an unknown environment, and collaboration in a mobile environment is likely to occur between totally unknown parties. A long-standing approach to this situation is to establish third parties that certify the identities, roles and/or rights of both participants in a collaboration. In a completely decentralized environment, this option is not sufficient: to decide upon access, each party prefers to rely only on what the other party presents and on the trust it can establish, directly by knowing the other party or indirectly. Hence a mobile user must, for example, present a set of certificates known in advance, and the visited site may use these certificates to determine the trust it can place in this user and thus potentially allow an adapted access. In this scheme the mobile user must know in advance where she wants to go and what she should present as identification, which is difficult to achieve in a global environment. Moreover, the user would like to be able to evaluate the site she is visiting before allowing limited access to her own resources. Finally, a user does not want to manage her security at a fine grain, while still preserving her privacy; ideally, the process should be automated. This led our work to define the Chameleon architecture, in which nomadic users behave as chameleons by taking on the "colors" of their environments, enriching their nomadic access. It relies on a new trust model, T2D, characterized by its support for the disposition of trust. Each nomadic user is identified by a new morph certification model called X316. X316 allows the trust evaluation to be carried out together with the roles of the participants, while allowing some of its elements to be hidden, thereby preserving the privacy of its users
Razafindraibe, Hanitriniaina Mamitiana Alin. "Analyse et amélioration de la logique double rail pour la conception de circuits sécurisés". Montpellier 2, 2006. http://www.theses.fr/2006MON20117.
Maingot, Vincent. "Conception sécurisée contre les attaques par fautes et par canaux cachés". Grenoble INPG, 2009. https://tel.archives-ouvertes.fr/tel-00399450.
The evolution of the security needs of consumer applications has multiplied the number of systems-on-chip endowed with encryption capabilities. In parallel, the evolution of cryptanalysis techniques makes it possible to attack the implementations of the encryption methods used in these applications. This thesis develops a methodology for evaluating the robustness provided by protections integrated into a circuit. This evaluation is based, on the one hand, on the use of laser platforms to study the types of faults induced in a prototype of a secure circuit and, on the other hand, on a simulation-based method used during the design phase to compare how fault countermeasures influence the side channels. The methodology was first applied to the simple case of a register protected by information redundancy, then to cryptographic primitives such as an AES S-box and AES and RSA co-processors. These two studies showed that adding detection or correction capabilities improves the robustness of the circuit against the various attacks
Mouelhi, Tejeddine. "Testing and modeling security mechanisms in web applications". Télécom Bretagne, 2010. http://www.theses.fr/2010TELB0151.
This thesis focuses on security testing of web applications, considering the internal part of a system (access control policies) and then its interfaces (bypass testing and shielding). The proposed approaches address the modeling of security policies as well as of the testing artifacts, using Model-Driven Engineering as the underlying technology for a model-driven security process. Concerning the internal part of a system, we first study the differences between classical functional tests and tests targeting the security mechanisms explicitly (so-called security tests). In this context, we adapted mutation analysis to assess and qualify security tests. We then proposed three complementary approaches to access control testing: the first is based on the pairwise technique and generates access control tests automatically, while the second selects functional tests and transforms them into security tests. The last approach focuses on detecting hidden access control mechanisms, which harm the flexibility of the access control mechanisms and their ability to evolve. To complement these approaches focused on the internal part of the application, we tackled the issue of testing the interface, especially bypass testing. We leveraged the ideas of bypass testing and used automated analysis of the web application to provide a new approach for testing and shielding web applications against bypass attacks, which occur when malicious users bypass client-side input validation. The work on access control testing led us to propose new model-driven approaches for developing and integrating access control mechanisms in a way that guarantees better quality and testability. Two research directions were explored for this purpose.
The first is based on a metamodel and provides a complete MDE process for automatically specifying and (semi-automatically) integrating access control policies. This approach takes testing into account at the early modeling stage and provides a generic certification process based on mutation. The second approach is based on model composition and allows automated integration of the access control policy and, more importantly, automated reconfiguration of the system when the access control policy needs to evolve
Kheir, Nizar. "Response policies and counter-measure : management of service dependencies and intrusion and reaction impacts". Télécom Bretagne, 2010. http://www.theses.fr/2010TELB0162.
Saleh, Hayder. "Une architecture novatrice de sécurité à base de carte à puce Internet". Versailles-St Quentin en Yvelines, 2002. http://www.theses.fr/2002VERSA009.
Martinelli, Jean. "Protection d'algorithmes de chiffrement par blocs contre les attaques par canaux auxiliaires d'ordre supérieur". Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0043.
Traditionally, a cryptographic algorithm is evaluated through its resistance to "logical" attacks. When this algorithm is implemented in a hardware device, physical leakage can be observed during the computation and can be analyzed by an attacker in order to mount "side-channel" attacks. The most studied side-channel attack is differential power analysis (DPA). First-order DPA is now well known and can be prevented by provably secure countermeasures. In 2008, some results were known for the second order, but none for the third. The goal of this thesis is to propose a framework for k-th order DPA where k > 1. We developed several masking schemes as alternatives to the classical ones in order to achieve a better complexity/security trade-off. These schemes make use of various mathematical operations, such as field multiplication or matrix products, and of cryptographic tools such as secret sharing and multi-party computation. We estimated the security of the proposed schemes following a methodology combining theoretical analysis and practical results. Finally, we evaluate the effect of the word size of a cryptographic algorithm on its resistance against side-channel attacks, with respect to the masking scheme implemented
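The secret-sharing idea underlying such masking schemes can be sketched as follows: a sensitive byte is split into order + 1 random Boolean shares, any `order` of which are uniformly distributed, and linear operations are computed share-wise without ever recombining. This is a generic illustration of Boolean masking, not one of the thesis's proposed schemes.

```python
import secrets

def share(x, order):
    # Split a sensitive byte into (order + 1) Boolean shares:
    # x = s0 ^ s1 ^ ... ^ s_order. Any `order` shares taken alone are
    # uniformly random, which is what defeats order-`order` DPA.
    shares = [secrets.randbelow(256) for _ in range(order)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unshare(shares):
    x = 0
    for s in shares:
        x ^= s
    return x

def masked_xor(a_shares, b_shares):
    # Linear operations are applied share-wise; the secret never appears.
    return [a ^ b for a, b in zip(a_shares, b_shares)]

a, b, order = 0x3A, 0xC5, 3
sa, sb = share(a, order), share(b, order)
assert unshare(sa) == a
print(hex(unshare(masked_xor(sa, sb))))  # -> 0xff  (= 0x3A ^ 0xC5)
```

The hard and costly part of any real scheme, and the focus of the complexity/security trade-off discussed above, is the non-linear operations (S-box lookups, field multiplications), which are omitted here.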
Abbes, Tarek. "Classification du trafic et optimisation des règles de filtrage pour la détection d'intrusions". Nancy 1, 2004. http://www.theses.fr/2004NAN10192.
In this dissertation we are interested in several bottlenecks that intrusion detection faces, namely high-load traffic, evasion techniques and false alert generation. In order to supervise overloaded networks, we classify the traffic using Intrusion Detection System (IDS) characteristics and network security policies, so that each IDS supervises less IP traffic and uses fewer detection rules (with respect to the traffic it analyses). In addition, we reduce packet processing time by a judicious application of the attack detection rules. During this analysis we rely on an on-the-fly pattern matching strategy over several attack signatures, thereby avoiding the traffic reassembly previously used to defeat evasion techniques. Besides, we employ protocol analysis with decision trees in order to accelerate intrusion detection and reduce the number of false positives observed with raw pattern matching methods
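On-the-fly matching of several attack signatures in a single pass over the payload is typically done with a multi-pattern automaton. The following Aho-Corasick sketch is a generic illustration of that idea, not the dissertation's implementation; the signatures and payload are invented.

```python
from collections import deque

def build_automaton(signatures):
    # Goto/fail/output tables of an Aho-Corasick automaton: all signatures
    # are matched in one pass over the payload, with no rescanning.
    goto, fail, out = [{}], [0], [set()]
    for sig in signatures:
        s = 0
        for ch in sig:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(sig)
    queue = deque(goto[0].values())
    while queue:                          # BFS to compute failure links
        r = queue.popleft()
        for ch, u in goto[r].items():
            queue.append(u)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[u] = goto[f].get(ch, 0)
            out[u] |= out[fail[u]]        # inherit matches ending here
    return goto, fail, out

def scan(payload, automaton):
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(payload):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for sig in out[s]:
            hits.append((i - len(sig) + 1, sig))
    return hits

signatures = ["../", "/etc/passwd"]
ac = build_automaton(signatures)
hits = scan("GET /../../etc/passwd", ac)
print(hits)  # -> [(5, '../'), (8, '../'), (10, '/etc/passwd')]
```

Because the automaton keeps its state between characters, it can also be fed successive packet payloads without reassembling the stream, which is the property exploited to resist evasion.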
Habib, Lionel. "Formalisations et comparaisons de politiques et de systèmes de sécurité". Paris 6, 2011. http://www.theses.fr/2011PA066146.
Su, Lifeng. "Confidentialité et intégrité du bus mémoire". Paris, Télécom ParisTech, 2010. http://www.theses.fr/2010ENST0008.
The security of program execution is often required for certain critical applications. Unfortunately it is vulnerable to many attack techniques, such as software exploits and hardware attacks. Existing experience shows that the security of the communication between processor and memory can be compromised by board-level probing attacks. Probing attacks are generally divided into two sub-classes: passive probing and active probing. In the first case, an attacker can capture critical data during processor-memory communication; active probing attacks can instead be used to alter memory data in order to compromise program execution in the processor. The first case concerns the confidentiality of memory data, the latter its integrity. This dissertation explores diverse options to protect the confidentiality and integrity of the memory bus against board-level probing attacks. The fundamental idea is to implement an on-chip hardware cryptographic engine to guarantee the integrity and confidentiality of memory data. As our target market is low-to-medium embedded systems, we intend to propose a protection scheme that is realistic, acceptable to the market and low-cost. These strong constraints heavily influence our specific protection choices
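The role of such an on-chip engine can be sketched in software: each memory block is encrypted with an address-dependent keystream and bound to its address by a MAC, so passive probing yields only ciphertext and active probing is detected on read-back. This is a simplified, invented illustration of the general technique, not the dissertation's actual scheme; a real engine would at minimum add a per-write counter so the keystream is never reused across writes to the same address.

```python
import hashlib, hmac

def keystream(key, addr, length):
    # Counter-mode-style keystream derived per memory address, so equal
    # plaintext blocks at different addresses encrypt differently.
    return hashlib.sha256(key + addr.to_bytes(8, "big")).digest()[:length]

def write_block(mem, tags, key, addr, data):
    ks = keystream(key, addr, len(data))
    cipher = bytes(a ^ b for a, b in zip(data, ks))
    mem[addr] = cipher
    # The tag binds the ciphertext to its address, so relocating or
    # replaying a block elsewhere is also detected.
    tags[addr] = hmac.new(key, addr.to_bytes(8, "big") + cipher,
                          hashlib.sha256).digest()

def read_block(mem, tags, key, addr):
    cipher = mem[addr]
    tag = hmac.new(key, addr.to_bytes(8, "big") + cipher,
                   hashlib.sha256).digest()
    if not hmac.compare_digest(tag, tags[addr]):
        raise ValueError("active probing detected: memory tampered")
    ks = keystream(key, addr, len(cipher))
    return bytes(a ^ b for a, b in zip(cipher, ks))

mem, tags, key = {}, {}, b"on-chip-secret-key"
write_block(mem, tags, key, 0x1000, b"secret31")
assert read_block(mem, tags, key, 0x1000) == b"secret31"
mem[0x1000] = bytes([mem[0x1000][0] ^ 1]) + mem[0x1000][1:]  # active probe
try:
    read_block(mem, tags, key, 0x1000)
except ValueError as e:
    print(e)  # tampering is detected before decrypted data is used
```

Keeping the key and the tag store on-chip is what places the board-level bus outside the trust boundary.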
Duc, Guillaume. "Support matériel, logiciel et cryptographique pour une éxécution sécurisée de processus". Télécom Bretagne, 2007. http://www.theses.fr/2007TELB0041.
The majority of solutions to the issue of computer security (algorithms, protocols, secure operating systems, applications) run on insecure hardware architectures that may be vulnerable to physical attacks (bus spying, modification of memory content, etc.) or logical attacks (a malicious operating system). Several secure architectures able to protect the confidentiality and correct execution of programs against such attacks have been proposed over the years. After presenting some cryptographic foundations and reviewing the main secure architectures proposed in the literature, we present the secure architecture CryptoPage. This architecture guarantees the confidentiality of the code and data of applications and their correct execution against hardware or software attacks. In addition, it includes a mechanism to reduce information leakage on the address bus while keeping reasonable performance. We also study how to delegate some security operations of the architecture to an untrusted operating system in order to gain flexibility without compromising the security of the architecture. Finally, some other important mechanisms are studied: encrypted process identification, attestation of results, management of software signals, management of threads, and inter-process communication
Vache, Géraldine. "Evaluation quantitative de la sécurité informatique : approche par les vulnérabilités". Toulouse, INSA, 2009. http://eprint.insa-toulouse.fr/archive/00000356/.
This thesis presents a new approach to quantitative security evaluation for computer systems. The main objective of this work is to define and evaluate several quantitative measures. These measures are probabilistic and aim at quantifying the influence of the environment on the security of the computer system, considering vulnerabilities. We first identified the three factors that strongly influence the system state: 1) the vulnerability life cycle, 2) the attacker behaviour and 3) the administrator behaviour. We studied these three factors and their interdependencies, and distinguished two main scenarios based on the nature of the vulnerability discovery, i.e. malicious or non-malicious. This step allowed us to identify the different states of the system with respect to the vulnerability exploitation process and to define measures relating to the system states: vulnerable, exposed, compromised, patched and secure. To evaluate these measures, we modelled the process of system compromise by vulnerability exploitation. We then characterized the vulnerability life cycle events quantitatively, using real data from a vulnerability database, in order to assign realistic values to the model parameters. Simulating these models yielded the values of the measures we had defined. Finally, we studied how to extend the modelling to several vulnerabilities. This approach thus allows the evaluation of measures quantifying the influence of several factors on system security
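The kind of state-probability measure described above can be illustrated by a toy Monte Carlo model in which attacker and administrator behaviours are competing exponential delays after vulnerability disclosure. The structure and rates here are invented for illustration and are not calibrated on the thesis's models or on real vulnerability data.

```python
import random

def simulate(t_obs, rate_exploit, rate_patch, n_runs=20000, seed=7):
    # Estimate the probability that, at time t_obs after disclosure, the
    # system is still exposed (neither patched nor attacked), compromised
    # (exploited before any patch), or patched.
    rng = random.Random(seed)
    counts = {"exposed": 0, "compromised": 0, "patched": 0}
    for _ in range(n_runs):
        t_exploit = rng.expovariate(rate_exploit)  # attacker behaviour
        t_patch = rng.expovariate(rate_patch)      # administrator behaviour
        if t_patch <= min(t_exploit, t_obs):
            counts["patched"] += 1
        elif t_exploit <= t_obs:
            counts["compromised"] += 1
        else:
            counts["exposed"] += 1
    return {k: v / n_runs for k, v in counts.items()}

measures = simulate(t_obs=1.0, rate_exploit=0.5, rate_patch=1.0)
print(measures)
```

Replacing the invented exponential rates with distributions fitted on vulnerability-database events is precisely the calibration step the abstract describes.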
Joly, Cathie-Rosalie. "Le paiement sur les réseaux : comment créer la confiance dans le paiement en ligne ?" Montpellier 1, 2004. http://www.theses.fr/2004MON10018.
Faurax, Olivier. "Méthodologie d'évaluation par simulation de la sécurité des circuits face aux attaques par faute". Aix-Marseille 2, 2008. http://theses.univ-amu.fr.lama.univ-amu.fr/2008AIX22106.pdf.
Microelectronic security devices are more and more present in our lives (smart cards, SIM cards) and they contain sensitive information that must be protected (account numbers, cryptographic keys, personal data). Recently, attacks on cryptographic algorithms based on the use of faults have appeared. Adding a fault during a device computation enables one to obtain a faulty result. Using a certain number of correct results and the corresponding faulty ones, it is possible to extract secret data and, in some cases, complete cryptographic keys. However, the physical perturbations used in practice (laser, radiation, power glitches) rarely match the faults needed to successfully perform theoretical attacks. In this work, we propose a methodology to test circuits under fault attacks using simulation. Simulation makes it possible to test the circuit before its physical realization, but requires a lot of time; our methodology therefore helps the user choose the most important faults in order to significantly reduce simulation time. The tool and the corresponding methodology have been tested on a cryptographic circuit (AES) using a delay fault model. We showed that using delays to induce faults can generate faults suitable for performing known attacks
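The principle of a simulated fault campaign can be sketched on a toy cipher: each candidate fault (here a single bit-flip at a given round) is injected in simulation and the result compared against the fault-free reference, logging which injections actually affect the output. This minimal example uses an invented two-round cipher built on the PRESENT S-box, not the thesis's AES target or its delay fault model.

```python
def toy_round(state, key):
    # Illustrative round: key addition, then a fixed 4-bit S-box
    # (PRESENT's S-box) applied to each nibble of the 8-bit state.
    SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
    state ^= key
    return (SBOX[state >> 4] << 4) | SBOX[state & 0xF]

def encrypt(pt, keys, fault_bit=None, fault_round=None):
    state = pt
    for r, k in enumerate(keys):
        if r == fault_round and fault_bit is not None:
            state ^= 1 << fault_bit   # injected transient bit-flip
        state = toy_round(state, k)
    return state

keys = [0x2B, 0x7E]
pt = 0x3C
reference = encrypt(pt, keys)

# Fault campaign: flip each of the 8 state bits before each of the 2
# rounds and record which injections produce a faulty ciphertext.
effective = [(r, b) for r in range(2) for b in range(8)
             if encrypt(pt, keys, fault_bit=b, fault_round=r) != reference]
print(len(effective))  # -> 16: every bit-flip propagates through the S-boxes
```

A real campaign ranks faults by their usefulness to a known attack, which is how simulation time is cut down.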
Limane, Tahar. "Conception et réalisation d'un outil de surveillance, temps réel, d'espaces de débattements de robots". Lyon, INSA, 1991. http://www.theses.fr/1991ISAL0093.
The study presented in this report addresses the design and implementation of a real-time system for monitoring robot movements. The top-level objective is to enhance the safety of both human operators and machines. We begin with a global analysis of risk conditions in robotics and state the general relationships between the different factors that have to be taken into account when specifying protection systems. We survey the different methods, as well as the different equipment, used in protection systems against robots straying beyond their permitted clearances. We then specify the constraints of a safety system able to dynamically monitor the robot's containment within allowed or forbidden spaces. Afterwards, we present the functional and structural specifications as well as the conceptual models of the protection systems to be implemented. Methodological approaches from software engineering are proposed with a view to validating the overall system life cycle, its quality and its reliability. This study resulted in the software tool SAFE (Surveillance d'Ateliers Flexibles et de leur Environnement), which is described in the report. Further developments of SAFE are suggested concerning, in particular, two inter-related safety-control functionalities: first, the robot command program itself; second, the dynamic re-specification of the safety space when any change arises in the robot's task
Guichard, Patrice. "Menace sur l'ordinateur : piratage - techniques et solutions". Paris 8, 2001. http://www.theses.fr/2001PA083771.
At the end of the 20th century, the computer hacker myth is one of the most widespread and resilient myths of our modern world. In a high-technology society with limitless invisible communication devices, the computer whizkid stereotype popularized by the mass media is somewhat reassuring. Viruses, illegal intrusions, sabotage, theft: who doesn't know examples of criminal computer activities, whether or not they have been a victim themselves? Secondary phenomenon or worldwide catastrophe, analyzing and quantifying it remains necessary in order to offer solutions that at least hold it back, if not eradicate it. The complexity of systems, the heavy costs of hardware and software and the extension of application fields have made computing a key element of company strategy, since any destruction or alteration of data can compromise a company's competitiveness or image and cause often significant financial loss. Although more and more companies are becoming aware of the risks and investing widely in means of protection, the phenomenon of computer hacking is still poorly known and mastered by companies; it should be analyzed through a multidisciplinary (economic, social and technical) approach. Prevention and protection techniques are evolving quickly, but companies remain vulnerable in many fields. Revealing computer-hacking techniques is what this thesis is intended to do, so that administrators can test them on their own networks, something that no tool can do in their place
Bhasin, Shivam. "Contre-mesures au niveau logique pour sécuriser les architectures de crypto-processeurs dans les FPGA". Paris, Télécom ParisTech, 2011. https://pastel.hal.science/pastel-00683079.
Modern field programmable gate arrays (FPGAs) are capable of implementing complex systems on chip (SoC) and providing high performance; they are therefore finding wide application. A complex SoC generally contains embedded cryptographic cores to encrypt/decrypt data to ensure security. These cryptographic cores are computationally secure, but their physical implementations can be compromised using side-channel attacks (SCA) or fault attacks (FA). This thesis focuses on countermeasures for securing cryptographic cores on FPGAs. First, a register-transfer-level countermeasure called "unrolling" is proposed. This hiding countermeasure executes multiple rounds of a cryptographic algorithm per clock cycle, which allows deeper diffusion of the data; results show excellent resistance against SCA. This is followed by countermeasures based on dual-rail precharge logic (DPL), which form the major part of this work. Wave dynamic differential logic (WDDL), a commonly used DPL countermeasure well suited to FPGAs, is studied. Analysis of WDDL (and DPL in general) against FA revealed that it resists a majority of faults. Therefore, if the flaws of DPL, namely the early propagation effect (EPE) and technological imbalance, are fixed, DPL can evolve into a common countermeasure against both SCA and FA. Continuing along this line of research, we propose two new countermeasures: DPL without EPE and Balanced-Cell-based DPL (BCDL). Finally, advanced evaluation tools such as stochastic models, mutual information and combined attacks, which are useful when analyzing countermeasures, are discussed
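The dual-rail precharge principle behind WDDL can be sketched in a few lines: every logical bit travels on a (true, false) wire pair, gates come in complementary positive/dual versions, and in each evaluate phase exactly one rail per pair leaves the precharge value, making the switching activity data-independent. This is a minimal behavioural illustration, not an actual WDDL netlist.

```python
def encode(bit):
    # Dual-rail encoding: a logical bit becomes a (true, false) wire pair.
    return (bit, 1 - bit)

PRECHARGE = (0, 0)  # both rails are driven low between evaluations

def wddl_and(a, b):
    # WDDL AND: positive AND on the true rails, dual OR on the false rails.
    return (a[0] & b[0], a[1] | b[1])

def wddl_or(a, b):
    return (a[0] | b[0], a[1] & b[1])

# Whatever the inputs, exactly one rail of the output pair rises during
# the evaluate phase, so the transition count does not depend on the data.
for x in (0, 1):
    for y in (0, 1):
        t, f = wddl_and(encode(x), encode(y))
        assert (t, f) != PRECHARGE and t + f == 1   # valid code word
        assert t == (x & y)                          # correct logic value
print("data-independent switching verified")
```

The flaws the thesis targets live below this behavioural level: early propagation (an output rail switching before both input pairs are valid) and routing imbalance between the two rails.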
Portolan, Michele. "Conception d'un système embarqué sûr et sécurisé". Grenoble INPG, 2006. http://www.theses.fr/2006INPG0192.
This PhD develops a global methodology for improving the dependability and security level against transient logic faults (natural or provoked) appearing inside a hardware/software integrated system, such as a smart card. The results can be applied to all systems built around a synthesisable microprocessor core and a set of specialised peripherals. The protection methods operate simultaneously and in a complementary manner on the hardware, application software and interface layers (most notably the operating system). High-level modifications have been favoured for their advantages in terms of generality, configurability, portability and durability. The proposed approach aims at a good trade-off between robustness and overheads, from both the hardware and performance points of view. It is applied to a significant system example, representative of an embedded monoprocessor system
Hussain, Mureed. "Sécurisation des communications et des réseaux informatiques dans des environnements tri-partites". Paris 5, 2005. http://www.theses.fr/2005PA05S038.
Humans evince trust, and so do the devices that today are responsible for taking intelligent decisions far more rapidly than humans do. In this age of information technology, what counts is how quickly information is collected and processed and a right decision is taken and implemented. Whatever domain of human society the information relates to, its security is paramount. When security needs mix with expectations, and somewhat with beliefs, they give birth to risks; to avoid risks, trust management and security policies are designed. Security policies are expressed with respect to a particular situation and a specific need. For example, in one situation we may need only privacy, that is, hiding information from unauthorised persons, and may not need non-repudiation. The services required to provide information security are therefore heavily dependent on the situation. In the world of the 21st century, distances don't really matter but contacts do. After the revolutionary discovery of public-key encryption in 1976, corresponding people may share a secret even when they have never met before and are physically on the other end of the globe. All this has become possible thanks to intermediate intelligent devices capable of making decisions on the available pertinent information with no significant time delays. How these devices can be made trustworthy and provide a required security service when needed is the subject of this thesis. The two most prominent network and communication security solutions are the IETF IP Security (IPsec) protocol at the OSI network layer and Transport Layer Security (TLS) at the OSI transport layer. We have modified both standards at appropriate places to accommodate our needs of trust management and provision of security services for generic third-party security gateways (TPSG).
We think these security gateways are important because, first, the network devices may not have enough processing power and memory to perform cryptographic computations and, second, enterprises may lack the expertise required to implement a security solution. Furthermore, integrating a new security service into an existing security infrastructure may be far harder than implementing it on an external device. Both proposed solutions have been formally validated using the automatic protocol analysers Hermes and AVISPA to rule out attack vulnerabilities. An interesting application of this work may be key management in ad hoc networks
Khatib, Mounis. "Key management and secure routing in mobile ad-hoc networks for trusted-based service provision in pervasive environment". Evry, Télécom & Management SudParis, 2008. http://www.theses.fr/2008TELE0017.
Mobile ad hoc networks are the closest step to the vision of pervasive computing, where all devices dynamically discover each other, organize communication networks between themselves and share resources and information to provide seamless service to the end-user. The major problem in providing security services in mobile ad hoc networks (MANETs) is how to manage the key material. Owing to the unreliable wireless medium, host mobility and the lack of infrastructure, providing secure communications becomes a big challenge, and the absence of an efficient key management system in this type of network also makes it hard to build a secure routing protocol. As traditional key management schemes are not suitable for such environments, an efficient key management system compatible with the characteristics of ad hoc networks is strongly needed. Mobile ad hoc networks cannot afford to deploy public-key cryptosystems, due to their high computational overheads and storage constraints, while the symmetric approach is computationally efficient but suffers from potential attacks on key agreement or key distribution. Key management is a central aspect of security in mobile ad hoc networks; consequently, it is necessary to explore an approach that is based on symmetric-key cryptography and overcomes these restrictions. In this thesis, our first contribution is the design of a new protocol called OPEP that enables two nodes in an ad hoc network to establish a pairwise key and supports key verification, authenticated key exchange, and group join and exclusion operations. We implement our protocol on top of a well-known reactive routing protocol without requiring an online centralized entity; in this manner we both propose a new key management scheme and secure an existing routing protocol at the same time. It is well known that current ad hoc routing protocols do not scale to work efficiently in networks of more than a few hundred nodes.
For scalability purposes we chose a new routing protocol, called PARTY, which is intended for environments with a large number of heterogeneous nodes. Our second contribution in this thesis is a vulnerability analysis of the PARTY protocol and a new preventive and corrective mechanism which interacts with a new trust model to enforce the cooperation of nodes during the routing process. Finally, we validate our protocols in a service-provider platform inside a smart environment, to authenticate users, to secure the service provision mechanism in this environment based on our trust model, and to manage services among different users
Cornejo, Bautista Joaquim Alfonso Alejandro. "Etude de la sécurisation du canal de transmission optique par la technique de brouillage de phase". Télécom Bretagne, 2009. http://www.theses.fr/2008TELB0095.
Antakly, Dimitri. "Apprentissage et vérification statistique pour la sécurité". Thesis, Nantes, 2020. http://www.theses.fr/2020NANT4015.
The main objective of this thesis is to combine the advantages of probabilistic graphical model learning and formal verification in order to build a novel strategy for security assessments. The second objective is to assess the security of a given system by verifying whether it satisfies given properties and, if not, how far it is from satisfying them. We are interested in performing formal verification of this system based on event sequences collected from its execution. Consequently, we propose a model-based approach in which a Recursive Timescale Graphical Event Model (RTGEM), learned from the event streams, is considered representative of the underlying system. This model is then used to check a security property. If the property is not verified, we propose a search methodology to find another, close model that satisfies it. We discuss and justify the different techniques used in our approach and adapt a distance measure between Graphical Event Models. The distance between the learned "fittest" model and the proximal secure model found gives an insight into how far our real system is from verifying the given property. For the sake of completeness, we propose a series of experiments on synthetic data that provide experimental evidence that we can attain the desired goals.
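The abstract does not state which security properties are checked, so as a hypothetical stand-in the sketch below tests a simple bounded-response property ("every intrusion event is followed by an alarm within a fixed window") over collected event sequences and estimates its empirical satisfaction rate; the property, event names and window are invented for illustration:

```python
def satisfies(seq, trigger="intrusion", response="alarm", window=3):
    """True if every `trigger` event is followed by `response` within `window` steps."""
    for i, event in enumerate(seq):
        if event == trigger and response not in seq[i + 1:i + 1 + window]:
            return False
    return True

def satisfaction_rate(sequences, **kwargs):
    """Fraction of observed execution traces satisfying the property."""
    return sum(satisfies(s, **kwargs) for s in sequences) / len(sequences)
```

The thesis goes further: rather than scoring raw traces, it checks such properties against a learned RTGEM and searches for a nearby model that satisfies them.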
Chevallier-Mames, Benoit. "Cryptographie à clé publique : constructions et preuves de sécurité". Paris 7, 2006. http://www.theses.fr/2006PA077008.
The public key cryptography concept, proposed by Whitfield Diffie and Martin Hellman, changed the world of cryptology. After the description of the first heuristically secure schemes, the formalization of models and security notions allowed the emergence of provable security. After some reminders about cryptography and security reductions, we propose new signature and encryption schemes with some advantages over existing systems. Indeed, we propose two new signature schemes with a security proof in the random oracle model, and expose a new signature scheme which features provable security in the standard model. All of these schemes feature both tight security and the possible use of coupons. Next, we describe a new encryption scheme based on a new cryptographic problem. We also take another look at universal paddings, and show how to obtain tight security for identity-based encryption schemes. In the last part of this thesis, we deal with the physical security of cryptographic software. Notably, we propose new efficient countermeasures against simple side-channel attacks (SPA) and differential side-channel attacks (DPA).
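The abstract does not name the thesis's schemes, so as a generic example of a signature provably secure in the random oracle model, here is a textbook Schnorr-style signature, with deliberately tiny, insecure toy parameters chosen only for readability (a real deployment would use a standardized group of cryptographic size):

```python
import hashlib
import secrets

# Toy parameters: g = 4 has prime order q = 1019 in Z_p* with p = 2039.
p, q, g = 2039, 1019, 4

def H(r: int, msg: bytes) -> int:
    # The hash modeled as a random oracle in the security proof.
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private key
    return x, pow(g, x, p)             # (x, public key y = g^x)

def sign(x: int, msg: bytes):
    k = secrets.randbelow(q - 1) + 1   # fresh per-signature nonce
    r = pow(g, k, p)
    e = H(r, msg)
    s = (k + x * e) % q
    return e, s

def verify(y: int, msg: bytes, sig) -> bool:
    e, s = sig
    r = (pow(g, s, p) * pow(y, -e, p)) % p   # recompute r = g^s * y^(-e)
    return e == H(r, msg)
```

Correctness follows from g^s · y^(−e) = g^(k+xe) · g^(−xe) = g^k = r, so the verifier recomputes the same hash input as the signer.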
Boisseau, Alexandre. "Abstractions pour la vérification de propriétés de sécurité de protocoles cryptographiques". Cachan, Ecole normale supérieure, 2003. https://theses.hal.science/tel-01199555.
Since the development of computer networks and electronic communications, it has become important for the public to be able to use secure electronic communications. Cryptographic considerations are part of the answer to this problem, and cryptographic protocols describe how to integrate cryptography into actual communications. However, even if the encryption algorithms are robust, attacks may still remain due to logical flaws in the protocols, and formal verification can be used to avoid such flaws. In this thesis, we use abstraction techniques to formally prove various types of properties: secrecy and authentication properties, fairness properties and anonymity.
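The abstract stays at the level of the overall approach. As a hypothetical illustration of the attacker-knowledge reasoning that such verification formalizes (a Dolev-Yao style deduction closure, not the thesis's actual abstraction technique), a secrecy check can be sketched as a fixpoint over what an intruder can derive:

```python
def attacker_closure(knowledge, max_iter=100):
    """Close a set of terms under decryption: ("enc", m, k) yields m when k is known."""
    kn = set(knowledge)
    for _ in range(max_iter):
        new = set()
        for t in kn:
            if isinstance(t, tuple) and t[0] == "enc" and t[2] in kn:
                new.add(t[1])  # the attacker decrypts with a known key
        if new <= kn:
            break              # fixpoint reached: nothing new derivable
        kn |= new
    return kn
```

A secret is preserved exactly when it is absent from the closure of everything the attacker observes; abstraction techniques make such reasoning tractable for unboundedly many protocol sessions.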
Carré, Jean-Loup. "Static analysis of embedded multithreaded programs". Cachan, Ecole normale supérieure, 2010. https://theses.hal.science/tel-01199739.
This PhD thesis presents a static analysis algorithm for programs with threads. It generalizes abstract interpretation techniques used in the single-threaded case and allows the detection of runtime errors, e.g., invalid pointer dereferences, array overflows and integer overflows. We have implemented this algorithm; it analyzes a large industrial multithreaded code base (100K LOC) in a few hours. Our technique is modular: it can use any abstract domain designed for the single-threaded case. Furthermore, without any change in the fixpoint computation, some abstract domains allow the detection of data races or deadlocks. The technique does not assume sequential consistency since, in practice (Intel and SPARC processors, Java, ...), program execution is not sequentially consistent; e.g., it works in the TSO (Total Store Ordering) and PSO (Partial Store Ordering) memory models.
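The abstract domains the analysis plugs in are not spelled out; as a minimal sketch of the kind of single-threaded building block it reuses, here is a textbook interval domain that can flag the integer overflows mentioned above (class and method names are illustrative, not from the thesis):

```python
class Interval:
    """Textbook interval abstract domain for integer variables."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def add(self, other):
        # Abstract addition: a sound over-approximation of concrete +.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def join(self, other):
        # Least upper bound, applied where control-flow paths merge.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def may_overflow(self, bits=8):
        # Report a potential signed overflow for a `bits`-wide integer.
        lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
        return self.lo < lo or self.hi > hi

x = Interval(0, 100)
y = x.add(x)   # abstracts "y = x + x" as the interval [0, 200]
```

The thesis's contribution is precisely that such a domain can be lifted unchanged to the multithreaded setting, including under the weak TSO/PSO memory models.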