Dissertations / Theses on the topic 'Logiciel – Mesures de sûreté'
Consult the top 50 dissertations / theses for your research on the topic 'Logiciel – Mesures de sûreté.'
Duc, Guillaume. "Support matériel, logiciel et cryptographique pour une exécution sécurisée de processus." Télécom Bretagne, 2007. http://www.theses.fr/2007TELB0041.
The majority of solutions to the issue of computer security (algorithms, protocols, secure operating systems, applications) run on insecure hardware architectures that may be vulnerable to physical attacks (bus spying, modification of memory contents, etc.) or logical attacks (malicious operating system). Several secure architectures, able to protect the confidentiality and the correct execution of programs against such attacks, have been proposed over the years. After presenting some cryptographic background and reviewing the main secure architectures proposed in the literature, we present the secure architecture CryptoPage. This architecture guarantees the confidentiality of the code and data of applications, as well as their correct execution, against hardware or software attacks. In addition, it includes a mechanism to reduce the information leakage on the address bus while keeping reasonable performance. We also study how to delegate some security operations of the architecture to an untrusted operating system in order to gain flexibility without compromising the security of the architecture. Finally, some other important mechanisms are studied: encrypted process identification, attestation of results, management of software signals, management of threads, and inter-process communication.
Aussibal, Julien. "Rsids : un IDS distribué basé sur le framework CVSS." Pau, 2009. http://www.theses.fr/2009PAUU3044.
Intrusion detection is a method that helps ensure availability in systems and computer networks. This availability is generally undermined by various anomalies. These anomalies may be legitimate, resulting unintentionally from operations carried out on these systems (broken links, traffic surges, etc.), or illegitimate, caused by malicious operations designed to undermine the availability of these systems. The implementation of anomaly detection tools, such as IDS (Intrusion Detection Systems), contributes to the early identification of these anomalies and to blocking them. This thesis enabled us to develop a new-generation platform to generate legitimate and illegitimate anomalies. This work was carried out under the METROSEC project. The platform allowed us to obtain various traffic captures containing these anomalies. The illegitimate anomalies were produced with classic denial-of-service tools such as TFN2k or Trinoo. Legitimate anomalies were generated using flash crowd phenomena. All these real traffic captures were used in further research on intrusion detection, for the evaluation of new detection methods. In a second part, the implementation of a new detection tool appeared necessary to improve the quality of detection of these anomalies. This new distributed IDS, called RSIDS (Risk Scored Intrusion Detection System), retrieves the results of a multitude of heterogeneous probes. Using multiple probes reduces the risk of false alarms and missed detections, since a single probe is not able to detect all anomalies that occur on a system or network. Each alert provided by these probes is evaluated according to its degree of dangerousness. The assessment of dangerousness is based on the CVSS (Common Vulnerability Scoring System) framework.
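To make the idea of CVSS-weighted alert scoring concrete, here is a minimal, hypothetical sketch (not taken from the RSIDS thesis): it weights the CVSS base score of each alert by a per-probe confidence and keeps the maximum as the aggregated risk. The probe names, confidence values and aggregation rule are illustrative assumptions only.

```python
# Illustrative sketch only: a toy alert-risk aggregation in the spirit of a
# CVSS-weighted, multi-probe IDS. Probe names, weights and the aggregation
# formula are hypothetical and do NOT come from the RSIDS thesis.
from dataclasses import dataclass

@dataclass
class Alert:
    probe: str          # name of the probe that raised the alert
    cvss_base: float    # CVSS base score of the matched vulnerability (0..10)
    confidence: float   # probe-specific confidence in [0, 1]

def aggregate_risk(alerts: list[Alert]) -> float:
    """Combine heterogeneous alerts into a single risk score in [0, 10]."""
    if not alerts:
        return 0.0
    # Weight each alert's CVSS base score by the probe's confidence,
    # then keep the maximum as the overall risk for the monitored asset.
    return max(a.cvss_base * a.confidence for a in alerts)

if __name__ == "__main__":
    alerts = [Alert("snort", 7.5, 0.8), Alert("netflow-anomaly", 9.8, 0.4)]
    print(f"aggregated risk: {aggregate_risk(alerts):.1f}")  # -> 6.0
```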
Limane, Tahar. "Conception et réalisation d'un outil de surveillance, temps réel, d'espaces de débattements de robots." Lyon, INSA, 1991. http://www.theses.fr/1991ISAL0093.
The study presented in this report addresses the problems of designing and implementing a real-time control system for robot movements. The top-level objective of the study is to enhance the safety of both human operators and machines. We begin with a global analysis of risk conditions in robotics and general statements of the relationships between the different factors that have to be taken into account when specifying protection systems. We survey the different methods, as well as the different equipment, used in protection systems against undue robot movements. We then study the specification of the constraints on a safety system able to dynamically control the robot's containment within allowed spaces and outside forbidden ones. Afterwards, we present the functional and structural specifications as well as the conceptual models of the protection systems to be implemented. Software-engineering methodological approaches are proposed with a view to validating the overall system life cycle, its quality and its reliability. This study resulted in the software tool SAFE (Surveillance d'Ateliers Flexibles et de Leur environnement), which is described in the report. Further developments of SAFE are suggested, concerning in particular two inter-related safety-control functionalities: first, the robot command program itself; second, the dynamic re-specification of the safety space when any change arises in the robot's task.
Abdelnur, Humberto Jorge. "Gestion de vulnérabilités voix sur IP." Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10005/document.
VoIP networks are in a major deployment phase and are becoming widely accepted due to their extended functionality and cost efficiency. Meanwhile, as VoIP traffic is transported over the Internet, it is the target of a range of attacks that can jeopardize its proper functionality. Assuring its security becomes crucial. Among the most dangerous threats to VoIP, failures and bugs in software implementations will continue to rank high on the list of vulnerabilities. This thesis provides three contributions towards improving software security. The first is a VoIP-specific security assessment framework integrated with discovery actions, data management and security attacks, allowing VoIP-specific assessment tests to be performed. The second contribution is an automated approach able to discriminate message signatures and build flexible and efficient passive fingerprinting systems able to identify the source entity of messages in the network. The third contribution addresses the issue of detecting vulnerabilities using a stateful fuzzer. It provides an automated attack approach capable of tracking the state context of a target device, and we share essential practical experience gathered over a two-year period of searching for vulnerabilities in the VoIP space.
Galissant, Pierre. "Contributions to white-box cryptography : models and algebraic constructions." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG099.
Due to the democratization of technologies such as mobile payment or the soaring of blockchain technologies, there is a growing need for secure implementations of standardized algorithms in the white-box model. In spite of this, there are too few secure designs published in the literature. To avoid relying on hidden design implementations to provide any security in the white-box model, more implementation designs and techniques have to be explored. This thesis begins with a guide to white-box cryptography. Its goal is to revise, make precise or correct the white-box models, security notions and constructions that have emerged in the state of the art since the introduction of the concept. We notably clarify the Remote-Access White-Box model and the Hardware Module White-Box model and contextualize them in the general cryptographic literature. We then explore white-box implementations of the AES by first synthesizing the known implementation techniques and their flaws, and then proposing a new solution based on polynomial representations, for which we propose a security analysis and a challenge implementation. The last part of this thesis focuses on the implementation of multivariate cryptographic primitives in the white-box model. After briefly introducing multivariate cryptography, we motivate the study of this branch of public-key cryptography in the white-box context. We propose the first implementation technique for the HFE family of signature algorithms, for which we propose an extensive security analysis and a challenge implementation. Finally, to offer other perspectives on multivariate white-box cryptography, we also propose an incompressible stream cipher adapted from QUAD.
Le Blond, Stevens. "Est-ce que la vie privée existe toujours (dans les réseaux pair-à-pair) ?" Nice, 2011. http://www.theses.fr/2011NICE4018.
With the peer-to-peer (P2P) paradigm, each user shares some of his resources (e.g., storage and bandwidth) to provide a service. Because a P2P system scales with its number of users, the P2P paradigm is today used by some of the most popular Internet systems. However, P2P systems on the Internet may allow a malicious user to correlate the IP address of other participants with some personal information, possibly introducing a major privacy invasion. To evaluate the severity of this threat, we run privacy attacks against three popular P2P applications: a filesharing application (BitTorrent), a real-time application (Skype), and a low-latency anonymity application (Tor). (a) Exploiting Skype, we show that one can associate the identity (e.g., birth name and city of residence) of a person with his IP address. We then scale this attack to track the mobility and filesharing behavior of a large number of identified target users. (b) Exploiting BitTorrent, we demonstrate that an adversary can continuously spy on the downloads of most BitTorrent users on the Internet, from a single machine. We also show that it is possible to determine the IP address of the users who inject new content and that the top 100 of these users inject one third of all content. (c) Exploiting Tor, we show that it is possible to reveal the IP address of, or trace, supposedly anonymous users and show that Tor's design can also be exploited to trace a significant amount of non-P2P "anonymous" traffic. We conclude that P2P applications enable a major invasion of users' privacy and sketch an architecture that could alleviate this threat.
Hodique, Yann. "Sûreté et optimisation par les systèmes de types en contexte ouvert et contraint." Lille 1, 2007. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2007/50376-2007-15.pdf.
Full textTrouessin, Gilles. "Traitements fiables de données confidentielles par fragmentation-redondance-dissémination." Toulouse 3, 1991. http://www.theses.fr/1991TOU30260.
Jacob, Grégoire. "Malware behavioral models : bridging abstract and operational virology." Rennes 1, 2009. http://www.theses.fr/2009REN1S204.
This thesis addresses the modelling of malicious behaviours within malicious codes, commonly called malware. The work follows two directions, one operational, the other theoretical. The long-term objective is to combine these two approaches in order to build behavioural detection methods that cover the majority of existing malware while offering formal security guarantees against malware yet to appear. The operational approach introduces an abstract behavioural language, decoupled from the implementation. The language itself relies on the formalism of attribute grammars to express the semantics of behaviours. Within this language, several descriptions of malicious behaviours are specified in order to build a multi-layer detection method based on parsing. On top of the same language, behavioural mutation techniques are also formalized using compilation techniques. These mutations turn out to be a valuable tool for evaluating antivirus products. The theoretical approach introduces a new formal viral model, no longer based on functional paradigms but on process algebras. This new model allows the description of self-replication as well as other, more complex behaviours based on interactions. It supports re-proving fundamental results such as the undecidability of detection and prevention by isolation. In addition, the model supports the formalization of several existing behavioural detection techniques, thereby making it possible to formally evaluate their resistance.
Boisseau, Alexandre. "Abstractions pour la vérification de propriétés de sécurité de protocoles cryptographiques." Cachan, Ecole normale supérieure, 2003. https://theses.hal.science/tel-01199555.
Since the development of computer networks and electronic communications, it has become important for the public to use secure electronic communications. Cryptographic considerations are part of the answer to the problem, and cryptographic protocols describe how to integrate cryptography in actual communications. However, even if the encryption algorithms are robust, there can still remain attacks due to logical flaws in protocols, and formal verification can be used to avoid such flaws. In this thesis, we use abstraction techniques to formally prove various types of properties: secrecy and authentication properties, fairness properties and anonymity.
Carré, Jean-Loup. "Static analysis of embedded multithreaded programs." Cachan, Ecole normale supérieure, 2010. https://theses.hal.science/tel-01199739.
This PhD thesis presents a static analysis algorithm for programs with threads. It generalizes abstract interpretation techniques used in the single-threaded case and allows the detection of runtime errors, e.g., invalid pointer dereferences, array overflows, and integer overflows. We have implemented this algorithm. It analyzes a large industrial multithreaded code base (100K LOC) in a few hours. Our technique is modular: it uses any abstract domain designed for the single-threaded case. Furthermore, without any change in the fixpoint computation, some abstract domains allow the detection of data races or deadlocks. This technique does not assume sequential consistency since, in practice (Intel and SPARC processors, Java, etc.), program execution is not sequentially consistent; e.g., it works in the TSO (Total Store Ordering) or PSO (Partial Store Ordering) memory models.
Ismail, Leila. "Infrastructure système pour applications réparties à base d'agents mobiles." Grenoble INPG, 2000. http://www.theses.fr/2000INPG0072.
Jaeger, Éric. "Study of the benefits of using deductive formal methods for secure developments." Paris 6, 2010. http://www.theses.fr/2010PA066048.
Full textFioraldi, Andrea. "Fuzzing in the 2020s : novel approaches and solutions." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS546.
Security remains at risk due to elusive software vulnerabilities, even with extensive fuzzing efforts. Coverage-guided fuzzers, focusing solely on code coverage, often fall short in discovering specific vulnerabilities. The proliferation of diverse fuzzing tools has fragmented the field, making it challenging to combine different fuzzing techniques, assess contributions accurately, and compare tools effectively. To address this, standardized baselines are needed to ensure equitable evaluations. AFL, due to its popularity, is often extended to implement new prototypes despite not being a naive baseline and despite its monolithic design. On the other hand, custom fuzzers written from scratch tend to reinvent solutions and often lack scalability on multicore systems. This thesis addresses these challenges with several contributions. A new feedback mechanism called InvsCov is introduced, which considers program variable relationships in addition to code coverage; it refines the program state approximation for diverse bug detection. A second feedback mechanism we introduce explores data dependency graphs to enhance fuzzing by rewarding the traversal of new dataflow edges, effectively finding vulnerabilities missed by standard coverage. We also present a thorough analysis of AFL's internal mechanisms to shed light on its design choices and their impact on fuzzing performance. Finally, to address fragmentation, LibAFL is introduced as a modular and reusable fuzzing framework. Researchers can extend the core fuzzer pipeline, evaluate compelling techniques, and combine orthogonal approaches. An attempt to rewrite AFL++ as a frontend to LibAFL won the SBFT'23 fuzzing competition in the bug-finding track. These contributions advance the field of fuzz testing, addressing the challenges of sensitivity in feedback mechanisms, bug diversity, tool fragmentation, and fuzzer evaluation. They provide a foundation for improving fuzzing techniques, enabling the detection of a broader range of bugs, and fostering collaboration and standardization within the community.
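As background for readers unfamiliar with feedback-driven fuzzing, the following sketch shows the skeleton of a coverage-guided fuzzing loop. It is a didactic illustration, not code from AFL++ or LibAFL; the toy target, the byte-flip mutator and the edge-set feedback are assumptions made for the example.

```python
# Illustrative sketch only (not AFL++/LibAFL): the skeleton of a
# coverage-guided fuzzing loop, to show what a coverage "feedback" is.
import random

def target(data: bytes) -> set:
    """Toy program under test: returns the set of 'edges' it exercised."""
    edges = {"entry"}
    if len(data) > 0 and data[0] == ord("F"):
        edges.add("first-byte-F")
        if len(data) > 1 and data[1] == ord("U"):
            edges.add("second-byte-U")
    return edges

def mutate(data: bytes) -> bytes:
    """Flip one random byte (a deliberately naive mutator)."""
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

def fuzz(seed: bytes, iterations: int = 10_000):
    corpus, seen_edges = [seed], set(target(seed))
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        edges = target(candidate)
        if not edges <= seen_edges:          # feedback: new coverage observed
            seen_edges |= edges
            corpus.append(candidate)         # keep the interesting input
    return corpus, seen_edges

if __name__ == "__main__":
    corpus, covered = fuzz(b"AA")
    print(f"corpus size: {len(corpus)}, covered edges: {sorted(covered)}")
```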
Sadde, Gérald. "Sécurité logicielle des systèmes informatiques : aspects pénaux et civils." Montpellier 1, 2003. http://www.theses.fr/2003MON10019.
Pellegrino, Giancarlo. "Détection d'anomalies logiques dans les logiciels d'entreprise multi-partis à travers des tests de sécurité." Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0064.
Multi-party business applications are distributed computer programs implementing collaborative business functions. These applications are one of the main targets of attackers, who exploit vulnerabilities in order to perform malicious activities. The most prevalent classes of vulnerabilities are the consequence of insufficient validation of user-provided input. However, the lesser-known class of logic vulnerabilities has recently attracted the attention of researchers. Depending on the availability of software documentation, two testing techniques can be used: design verification via model checking, and black-box security testing. However, the former offers no support for testing real implementations and the latter lacks the sophistication to detect logic flaws. In this thesis, we present two novel security testing techniques to detect logic flaws in multi-party business applications that tackle the shortcomings of the existing techniques. First, we present the verification via model checking of two security protocols. We then address the challenge of extending the results of the model checker to automatically test protocol implementations. Second, we present a novel black-box security testing technique that combines model inference, extraction of workflow and data flow patterns, and an attack-pattern-based test case generation algorithm. Finally, we discuss the application of the techniques developed in this thesis in an industrial setting. We used these techniques to discover previously unknown design errors in the SAML SSO and OpenID protocols, and ten logic vulnerabilities in eCommerce applications allowing an attacker to pay less or shop for free.
Paradinas, Pierre. "La Biocarte : intégration d'une carte à microprocesseur dans un réseau professionnel santé." Lille 1, 1988. http://www.theses.fr/1988LIL10100.
Jolly, Germain. "Evaluation d’applications de paiement sur carte à puce." Caen, 2016. https://hal.archives-ouvertes.fr/tel-01419220.
This thesis deals with the high-level evaluation of applications on smart cards. The proposed method combines observation of the communication and detection of violated properties. The goal is to detect anomalies on smart cards (more precisely, in the implementation of their applications) and to provide better documentation of each error and of the reasons that triggered it. We can know on the fly whether an application has an implementation error. The user of the tool configures a set of properties corresponding to the expected behavior of the application. To ascertain compliance of the card application's behavior with the theory (the specifications), the first step is the generation of the oracle, the reference used during the verification and validation activity. We quickly moved to a smarter technique targeting the most interesting behaviors to check for our study. We worked on a generation method based on a genetic algorithm that takes a set of transaction logs as input to automatically generate a set of properties (i.e., a set of local, expected behaviors of the application). The evaluation methodology is developed through the WSCT framework. Two plugins were created and used to communicate with the smart card application, but also to observe and detect abnormalities in the behavior of the application. We used a JavaCard applet developed in the laboratory to test the feasibility of the method for two use cases: during the test phase, the methodology can be used in parallel by the certification firm, and during the development of an application, for example to improve the teaching of JavaCard development and the evaluation of applications.
Lucas, Audrey. "Support logiciel robuste aux attaques passives et actives pour l'arithmétique de la cryptographie asymétrique sur des (très) petits coeurs de calcul." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S070.
This thesis deals with the development and evaluation of protections against fault attacks (FA) and side-channel attacks (SCA) simultaneously. These protections have been developed for elliptic curve cryptography (ECC) and its main operation, the scalar multiplication (SM). Two protections have been proposed. The first is point verification (PV), which checks that the current point is effectively on the curve, with uniform behavior. Thus, this new SM with PV is robust against some FAs and also against SPA, since it is uniform. The second one is called iteration counter (IC). It protects the scalar against major FAs with uniform behavior, and its overhead is very small. Our protections have been implemented on a Cortex-M0 microcontroller for Weierstrass and Montgomery curves and for different types of coordinates. The overhead is between 48% and 62% in the worst case (when PV is performed at each SM iteration). This overhead is smaller than the overhead of the usual basic protections against SPA. A theoretical activity simulator has also been developed. It reproduces the architecture of a simple 32-bit microcontroller. Theoretical activity is modeled by the Hamming weight variations of the data manipulated during execution. Thanks to the simulator, the impact of operands on arithmetic units is illustrated. Moreover, SPA and DPA attacks were performed to evaluate our protections. Our protections show some security improvements.
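The point-verification (PV) countermeasure can be illustrated with a toy example: a double-and-add scalar multiplication on a small short-Weierstrass curve that re-checks the curve equation at every iteration and aborts when a fault pushes the point off the curve. This is a didactic sketch only; the curve parameters and the naive loop are assumptions for illustration and are not the thesis implementation, which targets a Cortex-M0 and uniform-behaviour algorithms.

```python
# Illustrative sketch only (not the thesis implementation): double-and-add
# scalar multiplication on a toy short Weierstrass curve, with a point
# verification (PV) check at every iteration to detect fault injections.
P_MOD = 97          # small prime field, for illustration only
A, B = 2, 3         # toy curve: y^2 = x^3 + A*x + B (mod P_MOD)

def on_curve(pt):
    """Point verification (PV): None stands for the point at infinity."""
    if pt is None:
        return True
    x, y = pt
    return (y * y - (x * x * x + A * x + B)) % P_MOD == 0

def inv(v):
    return pow(v, -1, P_MOD)

def add(p, q):
    """Affine point addition/doubling on the toy curve."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # p + (-p) = infinity
    if p == q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mul_with_pv(k, p):
    """Left-to-right double-and-add; PV at every step flags injected faults."""
    acc = None
    for bit in bin(k)[2:]:
        acc = add(acc, acc)                           # double
        if bit == "1":
            acc = add(acc, p)                         # add
        if not (on_curve(acc) and on_curve(p)):
            raise RuntimeError("fault detected: point left the curve")
    return acc

if __name__ == "__main__":
    base = (3, 6)                                     # a point on the toy curve
    print(scalar_mul_with_pv(13, base))               # a multiple of the base point
```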
Cozzi, Emanuele. "Binary Analysis for Linux and IoT Malware." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS197.
For the past two decades, the security community has been fighting malicious programs for Windows-based operating systems. However, the increasing number of interconnected embedded devices and the IoT revolution are rapidly changing the malware landscape. Malicious actors did not stand by and watch, but quickly reacted to create "Linux malware", showing an increasing interest in Linux-based operating systems and platforms running architectures different from the typical Intel CPU. As a result, researchers must react accordingly. Through this thesis, we navigate the world of Linux-based malicious software and highlight the problems we need to overcome for its correct analysis. After a systematic exploration of the challenges involved in the analysis of Linux malware, we present the design and implementation of the first malware analysis pipeline specifically tailored to study this emerging phenomenon. We use our platform to analyze over 100K samples and collect detailed statistics and insights that can help to direct future work. We then apply binary code similarity techniques to systematically reconstruct the lineage of IoT malware families, and to track their relationships, evolution, and variants. We show how the free availability of source code resulted in a very large number of variants, often impacting the classification of antivirus systems. Last but not least, we address a major problem we encountered in the analysis of statically linked executables. In particular, we present a new approach to identify the boundary between user code and third-party libraries, such that the burden of libraries can be safely removed from binary analysis tasks.
Hubert, Laurent. "Foundations and implementation of a tool bench for static analysis of Java bytecode programs." Rennes 1, 2010. http://www.theses.fr/2010REN1S122.
In this thesis, we focus on the static analysis of Java bytecode. The initialization of an information system is a delicate phase during which security properties are checked and invariants are established. Initialization in Java raises difficulties for fields, objects and classes alike. These difficulties can lead to security flaws, runtime errors (bugs), or greater difficulty in statically validating such software. This thesis proposes static analyses addressing the initialization problems of fields, objects and classes. We describe a null-pointer analysis that finely tracks field initialization, makes it possible to prove the absence of null-pointer exceptions (NullPointerException), and refines the intra-procedural control-flow graph. We also propose an analysis that refines the inter-procedural control-flow graph with respect to class initialization, allowing a finer model of the contents of static fields. Finally, we propose a type system guaranteeing that the objects being manipulated are completely initialized, thereby offering a formal and automatic solution to a well-known security problem. The semantic foundations of these analyses are given. The analyses are formally described and proved correct. To adapt these analyses, formalized on small languages, to the full bytecode, we developed a software library. It allowed us to produce efficient prototypes handling the entirety of Java bytecode.
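For readers unfamiliar with this style of analysis, the sketch below shows a minimal nullness lattice of the kind used in abstract interpretation to track field initialization and flag possibly-null dereferences. It is an illustrative toy, not the analysis or the library developed in the thesis; the field name and the two-branch example are hypothetical.

```python
# Illustrative sketch only (not the analysis from the thesis): a tiny nullness
# lattice used to approximate whether a field is definitely initialized.
from enum import Enum

class Null(Enum):
    BOT = 0        # unreachable / no information yet
    NOT_NULL = 1   # definitely initialized (non-null)
    MAYBE = 2      # may still be null (top of the lattice)

def join(a: Null, b: Null) -> Null:
    """Least upper bound of two abstract nullness values."""
    if a == Null.BOT:
        return b
    if b == Null.BOT:
        return a
    return a if a == b else Null.MAYBE

def check_deref(state: dict, field: str) -> None:
    """Emit a warning if a dereference cannot be proved safe."""
    if state.get(field, Null.MAYBE) != Null.NOT_NULL:
        print(f"warning: {field} may be null at dereference")

if __name__ == "__main__":
    # Two branches of a constructor: only one of them initializes 'this.x'.
    branch_a = {"this.x": Null.NOT_NULL}
    branch_b = {"this.x": Null.MAYBE}
    merged = {f: join(branch_a[f], branch_b[f]) for f in branch_a}
    check_deref(merged, "this.x")   # -> warning, initialization not guaranteed
```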
Bulusu, Sravani Teja. "Méthodologie d'ingénierie des exigences de sécurité réseau." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30084.
Building secure networks is crucial as well as challenging for any organization. Network security mainly concerns the architectural security needs that describe network segmentation (i.e., security zoning); the security of the network devices connecting the communicating end-user systems; and the security of the information transferred across the communication links. Most often, a late consideration of security aspects (i.e., after deployment of the network design) inevitably results in an increase in costs as well as in the complexity of taking into account the necessary changes that have to be made to the existing infrastructures. In this regard, network security requirements are of paramount importance since they drive the decisions related to the implementation of security controls with respect to business needs. Indeed, bad network security requirements can lead to ineffective and costly security or, worse, to security holes in the network security design. Nevertheless, current security requirements engineering methodologies provide no support for deriving network security requirements. This thesis work is part of the research project DGA IREHDO2 (Intégration REseau Haut Débit embarqué Optique 2ème phase), which concerns future-generation aircraft networks. Our work is done mainly in collaboration with AIRBUS and relates to the security requirements engineering process for aircraft networks. Our objective in this project is to propose an SRE methodology for capturing and analysing network security requirements that facilitates their refinement into network security and monitoring configurations (top-down approach). The complexity addressed comes both from the differences in points of view, i) with regard to the understanding of the security issue by the different stakeholders, and ii) from the nature of the systems impacted and the variability of the levels of abstraction in the network development cycle. In this work, we defined an SRE methodology based on the abstraction levels proposed by the SABSA (Sherwood Applied Business Security Architecture) method in order to structure the activity of refining business needs into network security requirements. Indeed, SABSA recommends expressing the needs considering the Business view (decision makers), the Architect's view (objectives, risks, processes, applications and interactions), the Designer's view (security services), the Builder's view (security mechanisms) and the Tradesman's view (products, tools, technologies). We considered the first three views. We express the business and architect's views using the STS (Socio-Technical Systems) formalism. We also propose to represent attacks as multi-agent systems to facilitate the analysis of security risks at these first two views. For expressing the network security requirements captured at the Designer's view, we propose a methodology that automates parts of the security zoning and network security requirements elicitation process, using a defined set of formalized rules derived from security design principles and formal integrity models. We developed a tool that implements these rules in ASP (Answer Set Programming), which facilitates the computation of cost-optimal security zone models. In the end, to ensure traceability between the three views, we defined a new modelling notation based on the concepts proposed in KAOS (Keep All Objectives Satisfied) and STS. We illustrate our methodology using a scenario specific to the IREHDO2 project. Finally, we evaluate our methodology using: 1) an e-commerce enterprise case study; and 2) a new scenario specific to the IREHDO2 project.
Karray, Achraf. "Conception, mise en œuvre et validation d’un environnement logiciel pour le calcul sécurisé sur une grille de cartes à puce de type Java." Thesis, Bordeaux 1, 2008. http://www.theses.fr/2008BOR13724/document.
Bursuc, Sergiu. "Contraintes de déductibilité dans une algèbre quotient : réduction de modèles et applications à la sécurité." Cachan, Ecole normale supérieure, 2009. http://www.theses.fr/2009DENS0055.
To enable the formal and automated analysis of security protocols, one has to abstract implementations of cryptographic primitives by terms in a given algebra. However, the algebra cannot be free, as cryptographic primitives have algebraic properties that are either relevant to their specification or can simply be observed in implementations at hand. These properties are sometimes essential for the execution of the protocol, but they also open the possibility of an attack, as they give an intruder the means to deduce new information from the messages intercepted over the network. In consequence, there has been much work over the last few years towards enriching the Dolev-Yao model, originally based on a free algebra, with algebraic properties modelled by equational theories. In this thesis, driven by both practical and theoretical interests, we propose general decision procedures for the insecurity of protocols that can be applied to several classes of equational theories.
Ali, Loubna. "Gestionnaire d'infrastructure distribuée." Lyon, INSA, 2008. http://theses.insa-lyon.fr/publication/2008ISAL0088/these.pdf.
The diffusion of ICT and the flexibility and reactivity constraints placed on companies lead them to new organizational forms: alliances, networks and a redefinition of inter-enterprise business processes, in addition to an increased distribution of information systems. These new organizational architectures make extensive use of distributed infrastructures, which forces security constraints, access control and quality-of-service management to be taken into account. Because these organizations keep evolving, the distributed infrastructure management system must allow proactive management (to adapt as well as possible to a changing context) and autonomous management (so as to guarantee the system's agility). To meet these needs, we propose to couple the modelling of company processes with the modelling of the distributed infrastructure in order to forecast infrastructure utilization (which makes proactive management possible) and to use the data collected via the administration system to monitor and manage end-to-end quality of service. To guarantee the agility of the management system, we chose to implement it dynamically thanks to a mobile agent (aglet) platform. Our management architecture is based on a multilevel administration system (following the hierarchical organization of the distributed infrastructure of the virtual enterprise) and is decentralized (thus respecting the zones of responsibility). This platform constitutes a point of interconnection not only between the various geographical areas of the organization but also between the various decision levels in the management system. The integration of an agent composition mechanism offers great flexibility and makes it possible to define the necessary administration tools on demand. Lastly, our platform is not limited to proactive infrastructure management; it can also support security mechanisms by generating agents able to carry out authentication or authorization functions.
Occello, Audrey. "Capitalisation de la sûreté de fonctionnement des applications soumises à des adaptations dynamiques : le modèle exécutable Satin." Phd thesis, Université de Nice Sophia-Antipolis, 2006. http://tel.archives-ouvertes.fr/tel-00090755.
Dynamic adaptation technologies are reaching maturity and make it possible to modify applications while they run. If we consider that the dependability of an application is the property that allows the users of a system to place justified confidence in the service it delivers to them, then a dynamic adaptation must be guaranteed to preserve this property. In other words, an adaptation is not "safe" as soon as the service provided by the application after adaptation diverges from the service expected by the user. Currently, there is no appropriate solution to the problem of the safety of dynamic adaptations. Indeed, a number of techniques (typing, model checking, etc.) designed to build and implement computer systems safely can be used for static adaptations, but not directly to validate dynamic ones. Moreover, supporting dynamic adaptations must take into account the risk that an adaptation occurs at an inappropriate point in the application's execution, and requires handling safety issues in parallel with the application's execution, without disturbing it. Although the platforms that enable dynamic adaptations offer solutions, there is no consensus on the checks to be performed in a dynamic context. Furthermore, the implementation of these checks often remains informal or is left to the application developer. We propose to characterize the safety of an adaptation independently of any platform, and to determine the moment at which the modifications related to an adaptation can be safely taken into account in the application's execution. This approach is based on a model named Satin on which safety properties are expressed and validated. The Satin model is implemented as a safety service that platforms can query to determine whether a given adaptation risks breaking the application's dependability.
Varet, Antoine. "Conception, mise en oeuvre et évaluation d'un routeur embarqué pour l'avionique de nouvelle génération." Phd thesis, INSA de Toulouse, 2013. http://tel.archives-ouvertes.fr/tel-00932283.
Scholte, Theodoor. "Amélioration de la sécurité par la conception des logiciels web." Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0024.
The web has become a backbone of our industry and daily life. The growing popularity of web applications and services, and the increasing number of critical transactions being performed, have raised security concerns. For this reason, much effort has been spent over the past decade to make web applications more secure. Despite these efforts, recent data from the SANS Institute estimates that up to 60% of Internet attacks target web applications, and critical vulnerabilities such as cross-site scripting and SQL injection are still very common. In this thesis, we conduct two empirical studies on a large number of web application vulnerabilities, with the aim of gaining deeper insights into how input validation flaws have evolved in the past decade and how these common vulnerabilities can be prevented. Our results suggest that the complexity of the attacks has not changed significantly and that many web problems are still simple in nature. Our studies also show that most SQL injection and a significant number of cross-site scripting vulnerabilities can be prevented using straightforward validation mechanisms based on common data types. With these empirical results as a foundation, we present IPAAS, which helps developers who are unaware of security issues to write more secure web applications than they otherwise would. It includes a novel technique for preventing the exploitation of cross-site scripting and SQL injection vulnerabilities based on automated data type detection of input parameters. We show that this technique results in significant and tangible security improvements for real web applications.
Mantovani, Alessandro. "An Analysis of Human-in-the-loop Approaches for Reverse Engineering Automation." Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS052.pdf.
In system and software security, one of the first criteria before applying an analysis methodology is whether the source code is available. When the software we want to investigate is present only in binary form, the only possibility we have is to extract information from it by observing its machine code, performing what is commonly referred to as Binary Analysis (BA). The artisans in this sector are in charge of mixing their personal experience with an arsenal of tools and methodologies to comprehend intrinsic and hidden aspects of the target binary, for instance to discover new vulnerabilities or to detect malicious behaviors. Although this human-in-the-loop configuration has become well consolidated over the years, the current explosion of threats and attack vectors such as malware, weaponized exploits, etc. implicitly stresses this binary analysis model, demanding at the same time high accuracy of the analysis as well as proper scalability over the binaries to counteract the adversarial actors. Therefore, despite the many advances in the BA field over the past years, we are still obliged to seek novel solutions. In this thesis, we take a further step on this problem, and we try to show what current paradigms lack in order to increase the automation level. To accomplish this, we isolated three classical binary analysis use cases, and we demonstrated how the analysis pipeline benefits from human intervention. In other words, we considered three human-in-the-loop systems, and we described the human role inside the pipeline with a focus on the types of feedback that the analyst "exchanges" with her toolchain. These three examples provide a full view of the gap between current binary analysis solutions and ideally more automated ones, suggesting that the main feature at the base of the human feedback corresponds to the human skill at comprehending portions of binary code. This attempt to systematize the human role in modern binary analysis approaches tries to raise the bar towards more automated systems by leveraging the human component that, so far, is still unavoidable in the majority of scenarios. Although our analysis shows that machines cannot replace humans at the current stage, we cannot exclude that future approaches will be able to fill this gap and evolve tools and methodologies to the next level. Therefore, we hope with this work to inspire future research in the field to reach ever more sophisticated and automated binary analysis techniques.
Sannier, Nicolas. "INCREMENT : une approche hybride pour modéliser et analyser dans le large les exigences réglementaires de sûreté." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00941881.
Kang, Eun-Young. "Abstractions booléennes pour la vérification des systèmes temps-réel." Thesis, Nancy 1, 2007. http://www.theses.fr/2007NAN10089/document.
This thesis provides an efficient formal scheme for tool-supported real-time system verification by combining abstraction-based deductive and model checking techniques, in order to handle the limitations of the individual verification techniques. The method is based on IAR (Iterative Abstract Refinement) to compute finite-state abstractions. Given a transition system and a finite set of predicates, the method determines a finite abstraction, where each state of the abstract state space is a truth assignment to the abstraction predicates. A theorem prover can be used to verify that the finite abstract model is a correct abstraction of a given system by checking conformance between the abstract and the concrete model, i.e., by establishing a set of verification conditions obtained during the IAR procedure. The safety/liveness properties are then checked over the abstract model. If the verification conditions hold, IAR terminates its procedure; otherwise, further analysis is applied to identify whether the abstract model needs to be made more precise by adding extra predicates. As the abstraction form, we adopt a class of predicate diagrams and define a variant, PDT (Predicate Diagram for Timed systems), that can be used to verify real-time and parameterized systems.
Poeplau, Sebastian. "Increasing the performance of symbolic execution by compiling symbolic handling into binaries." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS451.
Symbolic execution has the potential to make software more secure by significantly improving automated vulnerability search. In this thesis, we propose a general technique that allows for more efficient implementations of the execution component in symbolic executors. We first examine the state of the art. From the results of this comparison, we derive the idea of accelerating execution by embedding symbolic handling into compiled programs instead of symbolically interpreting a higher-level representation of the program under test. Using this approach, we develop a compiler-based symbolic executor and show that it indeed achieves high execution speed in comparison with state-of-the-art systems. Since the compiler is limited to scenarios where the source code of the program under test is available, we then design and implement a complementary solution for the symbolic execution of binaries; it uses the same basic idea of embedding symbolic execution into fast machine code but combines the approach with binary translation to handle the challenges of binary-only analysis. We conclude by discussing research directions that could lead to even more practical systems and ultimately enable the use of symbolic execution in mainstream software testing.
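The core idea of compiling symbolic handling into the program itself (rather than interpreting it) can be sketched as follows: every instrumented operation computes its concrete result and, alongside it, builds the symbolic expression for that result, while branches record path constraints. This is a conceptual toy, not the compiler-based executor from the thesis; the expression encoding and the branch helper are assumptions made for illustration.

```python
# Illustrative sketch only (not the implementation from the thesis): the idea
# of embedding symbolic handling into the program. Each instrumented operation
# keeps a concrete value and, in parallel, a symbolic expression for it.
class SymVal:
    def __init__(self, concrete, expr):
        self.concrete = concrete   # the value the normal program would compute
        self.expr = expr           # symbolic expression tracked for the solver

    def __add__(self, other):
        return SymVal(self.concrete + other.concrete,
                      f"({self.expr} + {other.expr})")

    def __mul__(self, other):
        return SymVal(self.concrete * other.concrete,
                      f"({self.expr} * {other.expr})")

def branch(cond_concrete, cond_expr, path_constraints):
    """Record the path constraint for the branch actually taken."""
    path_constraints.append(cond_expr if cond_concrete else f"not({cond_expr})")
    return cond_concrete

if __name__ == "__main__":
    constraints = []
    x = SymVal(3, "x")                       # symbolic program input
    y = x * SymVal(2, "2") + SymVal(1, "1")  # instrumented computation: 2x + 1
    if branch(y.concrete > 5, f"{y.expr} > 5", constraints):
        print("took the 'then' branch")
    print("path constraints for the solver:", constraints)
```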
Cauchie, Stéphane. "Tactim : from pattern recognition to the security for Biometry." Thesis, Tours, 2009. http://www.theses.fr/2009TOUR4033.
In this thesis, we propose a study of a new biometric signal along two axes: on the one hand, the pattern recognition objective, and on the other hand, security aspects. The first part is dedicated to the study of this new process: Tactim. With the help of signal processing methods, we propose a set of feature extractors. We then present their use within our biometric system. We take advantage of our pattern recognition solver, Coyote, to produce our biometric system. The second part is dedicated to the study of Coyote. Coyote is a case-based reasoning system that assembles algorithms in order to produce pattern recognition systems whose objective is to minimize error rates. Finally, once we consider the pattern recognition task accomplished (acceptable error rate), we study cryptographic protocols that ensure the anonymity of the user during biometric authentication. We propose a hybridization of our pattern recognition model with our cryptographic protocol (an extension of the protocol of Bringer et al.).
Olivier, Paul L. R. "Improving Hardware-in-the-loop Dynamic Security Testing For Linux-based Embedded Devices." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS049.
Dynamic analysis techniques have proven their effectiveness in security assessment. Nevertheless, it is necessary to be able to execute the code to be analyzed, and this is often a challenge for firmware, most of which is deeply integrated into the hardware architecture of embedded systems. Emulation allows a large part of the code to be executed but quickly reaches its limits when it is necessary to interact with specialized components. Here, the partial emulation, or hardware-in-the-loop, approach offers several advantages: forwarding accesses to hardware that is difficult to emulate properly, and executing the firmware in turn on both entities. To date, this approach has been considered primarily for monolithic firmware, but less so for devices running advanced operating systems. In this thesis, we explore the challenges of security testing for processes running in an emulated environment where part of their execution must be forwarded to their original physical device. We first review the various techniques for intercepting system calls and their objectives. We highlight the fact that forwarding has not been explored in much depth, yet is a promising approach for evaluating the security of embedded applications. We discuss the challenges of the different ways of running a process across two different Linux kernels. We implement, through a framework, these transfers for a Linux process, forwarding its system calls and memory accesses between its emulated environment and its original environment on the physical device. To overcome the challenges of using physical devices for these security tests, we also present a new test platform to reproduce hardware-in-the-loop security experiments.
Rahmoun, Smail. "Optimisation multi-objectifs d'architectures par composition de transformation de modèles." Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0004.
In this thesis, we propose a new exploration approach to tackle design space exploration problems involving multiple conflicting non-functional properties. More precisely, we propose the use of model transformation compositions to automate the production of architectural alternatives, and multi-objective evolutionary algorithms to identify near-optimal architectural alternatives. Model transformation alternatives are mapped into evolutionary algorithms and combined with genetic operators such as mutation and crossover. Taking advantage of this contribution, we can (re)use different model transformations and thus solve different multi-objective optimization problems. In addition, model transformations can be chained together in order to ease their maintainability and reusability, and thus to conceive more detailed and complex systems.
Girard, Pierre. "Formalisation et mise en œuvre d'une analyse statique de code en vue de la vérification d'applications sécurisées." Toulouse, ENSAE, 1996. http://www.theses.fr/1996ESAE0010.
Ciarletta, Laurent. "Contribution à l'évaluation des technologies de l'informatique ambiante." Nancy 1, 2002. http://www.theses.fr/2002NAN10234.
Computer science and networks are more and more embedded into our daily life. Pervasive or ubiquitous computing is at the crossroads of four typical areas: Networking (connecting the elements), Personal Computing (providing services), Embedded Computing (improving software and hardware miniaturization), and Computer-Human Interaction (where artificial intelligence will provide the needed cleverness). This document introduces this emerging technology and the tools, architectures and methods that were developed during the course of my PhD: the Layered Pervasive Computing model, EXiST, the evaluation and distributed simulation platform, and the VPSS security architecture. They are first steps towards the resolution of the security, standardization, integration and convergence issues of the technologies at play. Some prototypes and implementations, such as the Aroma Adapter (providing ad hoc "intelligence" to electronic devices), a smart conference room and a version of EXiST working with intelligent agents, are also detailed.
Lesas, Anne-Marie. "Vers un environnement logiciel générique et ouvert pour le développement d'applications NFC sécurisées." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0286/document.
In the field of electronic transactions and payment with a smart card, the Near Field Communication (NFC) standard has stood out against other candidate technologies for secure mobile contactless transactions for payment, access control, or authentication. Secure mobile contactless services are based on the card emulation mode of the NFC standard, which involves a smart-card-type component with restricted access called the "Secure Element" (SE), in which sensitive data and sensitive functions are securely stored and executed. Despite considerable standardization efforts around the SE ecosystem, the proposed models for the implementation of the SE are complex and suffer from a lack of genericity, both in offering abstraction mechanisms for the development of high-level applications and in the implementation and verification of application security constraints. The objective of the thesis is to design and realize a software environment based on a generic model that complies with established standards and that is not very sensitive to technological evolution. This environment should enable non-experts to develop multi-platform, multi-mode, multi-factor SE-based applications running on NFC smartphones.
Corteggiani, Nassim. "Towards system-wide security analysis of embedded systems." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS285.
This thesis is dedicated to the improvement of dynamic analysis techniques allowing the verification of software designed for embedded systems, commonly called firmware. It is clear that the increasing pervasiveness and connectivity of embedded devices significantly increase their exposure to attacks. The consequences of a security issue can be dramatic, not only economically but also technically, especially because of the difficulty of patching some devices: for instance, offline devices, or code stored in a mask ROM, a read-only memory programmed during chip fabrication. For all these reasons, it is important to thoroughly test firmware programs before the manufacturing process. This thesis presents analysis methods for system-wide testing of security and hardware components. In particular, we propose three improvements for partial emulation. First, Inception, a dynamic analysis tool to test the security of firmware programs even when different levels of semantics are mixed (e.g., C/C++ mixed with assembly). Second, Steroids, a high-performance USB 3.0 probe that aims at minimizing the latency between the analyzer and the real device. Finally, HardSnap, a hardware snapshotting method that offers higher visibility and control over the hardware peripherals. It enables testing different execution paths concurrently without corrupting the state of the hardware peripherals.
Jakob, Henner. "Vers la sécurisation des systèmes d'informatique ubiquitaire par le design : une approche langage." Thesis, Bordeaux 1, 2011. http://www.theses.fr/2011BOR14269/document.
Full textA growing number of environments is being populated with a range of networked devices. Applications leverage these devices to support everyday activities in a variety of areas (e.g., home automation and patient monitoring). As these devices and applications get woven into our everyday activities, they become critical: their failure can put people and assets at risk. Failures can be caused by malicious attacks and misbehaving applications. Although the impact of such situations can be major, security concerns are often considered a secondary issue in the development process, and treated with ad hoc approaches. This thesis proposes to address security concerns throughout the development lifecycle of a pervasive computing system. Security is addressed at design time thanks to dedicated, high-level declarations. These declarations are processed to implement security mechanisms, and to generate programming support to ease the development of the security logic, while keeping it separate from the application logic. Our approach is studied in the context of access control and privacy concerns. Our work has been implemented and leverages an existing software-design language and a suite of tools that covers the software development lifecycle
Benayoun, Vincent. "Analyse de dépendances ML pour les évaluateurs de logiciels critiques." Electronic Thesis or Diss., Paris, CNAM, 2014. http://www.theses.fr/2014CNAM0915.
Critical software needs to obtain an assessment before commissioning in order to ensure compliance to standards. This assessment is given after a long task of software analysis performed by assessors. They may be helped by tools, used interactively, to build models using information-flow analysis. Tools like SPARK-Ada exist for Ada subsets used for critical software. But some emergent languages such as those of the ML family lack such adapted tools. Providing similar tools for ML languages requires special attention on specific features such as higher-order functions and pattern-matching. This work presents an information-flow analysis for such a language specifically designed according to the needs of assessors. This analysis is built as an abstract interpretation of the operational semantics enriched with dependency information. It is proved correct according to a formal definition of the notion of dependency using the Coq proof assistant. This work gives a strong theoretical basis for building an efficient tool for fault tolerance analysis.
Lahbib, Asma. "Distributed management framework based on the blockchain technology for industry 4.0 environments." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAS017.
Full textThe evolution of the Internet of Things (IoT) started decades ago as part of the first wave of the digital transformation; its vision has further evolved due to a convergence of multiple technologies, ranging from wireless communication to the Internet and from embedded systems to micro-electromechanical systems. As a consequence, IoT platforms are being heavily developed, smart factories are being planned to revolutionize the organization of industry, and both security and trust requirements are becoming more and more critical. The integration of such technologies within the manufacturing environment and processes, in combination with other technologies, has introduced the fourth industrial revolution, also referred to as Industry 4.0. In this future world, machines will talk to machines (M2M) to organize production and coordinate their actions. However, opening connectivity to the external world raises several questions about data and IT infrastructure security that were not an issue when devices and machines were controlled locally and only a few of them were connected to other remote systems. Without appropriate security solutions, these systems will never be deployed globally. Ensuring secure and trusted communication between heterogeneous devices, within dynamic and decentralized environments, is therefore essential to achieve user acceptance and to protect exchanged information from being stolen or tampered with by malicious cyber attackers who may harm the production processes and put devices out of order. However, building a secure system does not only mean protecting data exchange; it also requires building a system where the source of the data, and the data itself, is trusted by all participating devices and stakeholders. In this thesis our research focuses on four complementary issues, namely (i) the dynamic, trust-based management of access to shared resources within an Industry 4.0 distributed and collaborative system, (ii) the establishment of a privacy-preserving solution for the related data in a decentralized architecture, eliminating the need to rely on additional third parties, (iii) the verification of the safety, correctness and functional accuracy of the designed framework, and (iv) the evaluation of the trustworthiness degree of interacting parties, together with the secure storage and sharing of the computed trust scores among them in order to guarantee their confidentiality, integrity and privacy. By focusing on these issues and taking into account the conventional characteristics of both IoT and IoT-enabled industrial environments, we propose in this thesis a secure and distributed framework for resource management in Industry 4.0 environments. The proposed framework, enabled by blockchain technology and driven by peer-to-peer networks, allows not only dynamic access management over shared resources but also distributed governance of the system, without the need for third parties that could themselves be vulnerable to attacks.
In addition, to ensure strong privacy guarantees over the access-control-related procedures, a privacy-preserving scheme is proposed and integrated within the distributed management framework. Furthermore, to guarantee the safety and functional accuracy of the framework's software components, we focused on their formal modeling in order to validate their safety and compliance with their specification. Finally, we designed and implemented the proposal in order to prove its feasibility and analyze its performance
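As a hedged illustration of how a trustworthiness degree might be maintained for interacting parties, the Python sketch below updates a trust score as an exponentially weighted average of observed interaction outcomes. The weighting scheme, the neutral prior and the example threshold are assumptions for illustration, not the trust model actually defined in the framework.

def update_trust(current_score: float, outcome: float, alpha: float = 0.3) -> float:
    """Blend the latest interaction outcome (0.0 = misbehaviour, 1.0 = correct
    behaviour) into the current score, giving recent evidence weight alpha."""
    assert 0.0 <= outcome <= 1.0
    return (1 - alpha) * current_score + alpha * outcome

score = 0.5                        # neutral prior for a newly joined device
for observed in (1.0, 1.0, 0.0):   # two correct interactions, then one failure
    score = update_trust(score, observed)
print(round(score, 3))             # an access policy could then require, e.g., score >= 0.6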
Szlifierski, Nicolas. "Contrôle sûr de chaînes d'obfuscation logicielle." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0223.
Full textCode obfuscation is a software protection technique designed to make reverse engineering a program more challenging, in order to protect secrets or intellectual property, or to complicate detection (malware). Obfuscation is generally carried out using a set of transformations on the target program, and is often used in conjunction with other software protection techniques, such as watermarking or tamperproofing. These transformations are usually integrated alongside the code transformations applied during compilation, such as optimisations. However, applying obfuscation transformations requires more preconditions on the code and more precision in their application to provide an acceptable performance/safety trade-off, making traditional compilers unsuitable for obfuscation. To address these problems, this thesis presents SCOL, a language able to describe the process of compiling a program while taking into account the specific problems of obfuscation. It allows compilation chains to be described at a high level of abstraction, while allowing a high degree of precision in the composition of the transformations. The language has a type system which checks the correctness of the compilation chain, that is, that the composition of transformations provides the targeted level of protection and coverage
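The following Python sketch conveys, under simplifying assumptions, the kind of check such a type system performs: each pass declares the properties it requires, establishes and destroys, and a chain is accepted only if every requirement is satisfied by the properties accumulated so far. The pass and property names are invented for illustration and do not reflect SCOL's actual syntax or type system.

from dataclasses import dataclass, field

@dataclass
class Pass:
    name: str
    requires: set = field(default_factory=set)   # properties needed on the input code
    ensures: set = field(default_factory=set)    # properties established on the output
    destroys: set = field(default_factory=set)   # properties no longer valid afterwards

def check_chain(initial_props: set, chain: list) -> bool:
    """Accept a chain only if every pass finds the properties it requires."""
    props = set(initial_props)
    for p in chain:
        missing = p.requires - props
        if missing:
            print(f"rejected: {p.name} needs {missing}")
            return False
        props = (props - p.destroys) | p.ensures
    return True

flatten = Pass("control-flow flattening", requires={"cfg-reducible"},
               ensures={"flattened"}, destroys={"cfg-reducible"})
opaque = Pass("opaque predicates", ensures={"opaque-preds"})

print(check_chain({"cfg-reducible"}, [flatten, opaque]))   # True
print(check_chain({"cfg-reducible"}, [flatten, flatten]))  # False: reducibility was destroyed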
Bel, Hadj Aissa Nadia. "Maîtrise du temps d'exécution de logiciels déployés dans des dispositifs personnels de confiance." Thesis, Lille 1, 2008. http://www.theses.fr/2008LIL10133/document.
Full textThe proliferation of small and open objects such as personal trusted devices has encouraged the spread of dynamically adaptable runtime environments. Thus, new software can be deployed on the fly after the devices are delivered to their holders. Through our work, we aim to ensure that each newly deployed piece of software will be able to deliver its responses within a previously established maximum delay. These guarantees are crucial in terms of safety and security. To this end, we propose to distribute the computation of the worst-case execution time. Our solution follows a proof-carrying-code approach, distinguishing between a powerful but untrusted computer used to produce the code and a safe but resource-constrained code consumer. The producer does not assume any prior knowledge of the runtime environment on which its software will be executed. The code is statically analyzed to extract loop bounds, and a proof containing this information is joined to the software. By a straightforward inspection of the code, the consumer can verify the validity of the proof and compute the global worst-case execution time. We experimentally validate our approach on a hardware and software architecture which meets the requirements of trusted personal devices. Finally, we address the challenges raised when potentially untrusted software from different service providers can coexist and interact in a single device. We focus on the impact of the interaction between different software units on the guarantees previously given by the system on the worst-case execution time, and we outline a solution based on contracts to maintain these guarantees
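As a rough illustration of the producer/consumer split, the Python sketch below shows a resource-constrained consumer that checks the shipped loop bounds and derives a global worst-case execution time (WCET) by summing per-block costs. The block costs, loop structure and annotation format are invented for the example and are not the certificate format used in the thesis.

# Code shipped to the device: per-basic-block cost in cycles and the loops enclosing it.
blocks = {
    "init":      {"cost": 40,  "loops": []},
    "loop_body": {"cost": 120, "loops": ["L1"]},
    "finish":    {"cost": 25,  "loops": []},
}
# Proof annotation produced off-device by static analysis: bound of each loop.
loop_bounds = {"L1": 16}

def verify_and_compute_wcet(blocks, loop_bounds):
    """Check that every enclosing loop has a valid bound, then sum worst-case costs."""
    wcet = 0
    for name, b in blocks.items():
        factor = 1
        for loop in b["loops"]:
            bound = loop_bounds.get(loop)
            if bound is None or bound < 0:
                raise ValueError(f"missing or invalid bound for loop {loop} in {name}")
            factor *= bound
        wcet += b["cost"] * factor
    return wcet

print(verify_and_compute_wcet(blocks, loop_bounds))   # 40 + 120*16 + 25 = 1985 cycles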
Boulanger, Jean-Louis. "Expression et validation des propriétés de sécurité logique et physique pour les systèmes informatiques critiques." Compiègne, 2006. http://www.theses.fr/2006COMP1622.
Full textWithin the framework of our research activities, we were interested in the safety of critical systems (whose failure can cause serious damage to people, property or the environment). Designing the safety of such systems requires expressing safety-related recommendations. These recommendations can come from customer requirements (contractual clauses), from the state of the art, from the legal framework (standards, decrees, laws) or from studies of the consequences of failures on the system, people and the environment. Starting from the safety recommendations derived from the customer's contractual conditions, it is thus possible to identify "safety requirements". It must then be proved that these safety requirements are taken into account throughout the design and lifecycle of the system. In this thesis, we propose a method, together with implementation examples, based on the identification, expression and verification of safety requirements
Benaïssa, Nazim. "La composition des protocoles de sécurité avec la méthode B événementielle." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10034/document.
Full textThe presence of large-scale networks in modern society is affecting our usual practices, generating the need for an increasingly high level of remote security services. In this thesis we address the problem of composing security protocols, focusing in particular on cryptographic protocols as well as access control policies. The first part of the thesis is dedicated to the composition of cryptographic protocols and to their integration with other classes of protocols. We introduce the notion of cryptographic mechanisms: simple cryptographic protocols that can be composed to obtain more complex protocols if the necessary proof obligations are discharged. We also introduce a technique for proof-based attack reconstruction. The second part of the thesis is dedicated to the deployment of access control policies using refinement; the idea consists in refining abstract policies to obtain more concrete access control policies. We also propose to combine the refinement technique with the composition technique to obtain a more effective access control policy deployment technique
Palisse, Aurélien. "Analyse et détection de logiciels de rançon." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S003/document.
Full textThis PhD thesis takes a look at ransomware, presents an autonomous malware analysis platform and proposes countermeasures against these attacks. Our countermeasures operate in real time and are deployed on the machine itself (i.e., on end-hosts). In 2013, ransomware became a hot topic of discussion again, before becoming one of the biggest cyberthreats at the beginning of 2015. A detailed state of the art of existing countermeasures is included in this thesis; it helps evaluate the contribution of this thesis with respect to existing publications. We also present an autonomous malware analysis platform composed of bare-metal machines; our aim is to avoid altering the behaviour of the analysed samples. A first countermeasure based on the use of a cryptographic library is proposed, but it can easily be bypassed. This is why we propose a second, generic and agnostic countermeasure, which uses indicators of compromise to analyse the behaviour of processes on the file system. We explain how we configured this countermeasure empirically to make it usable and effective. One of the challenges of this thesis is to reconcile performance, detection rate and a small number of false positives. Finally, results from a user experiment are presented; this experiment analyses the user's behaviour when faced with a threat. In the final part, I propose ways to improve our contributions, as well as other avenues that could be explored
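To give a concrete flavour of a file-system indicator of compromise of the kind such countermeasures monitor, the Python sketch below flags a write buffer whose Shannon entropy is close to that of random data, as ransomware-encrypted output typically is. The choice of entropy and the threshold are illustrative assumptions and not necessarily the statistic used in the thesis.

import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy of a byte buffer in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(write_buffer: bytes, threshold: float = 7.5) -> bool:
    """Flag writes whose content is statistically close to random data."""
    return shannon_entropy(write_buffer) >= threshold

print(looks_encrypted(b"plain text " * 100))   # False: repetitive, low-entropy data
print(looks_encrypted(os.urandom(4096)))       # True (with overwhelming probability)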
Louboutin, Etienne. "Sensibilité de logiciels au détournement de flot de contrôle." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2021. http://www.theses.fr/2021IMTA0230.
Full textThe security of a piece of software can be taken into account right from the design stage. This approach, called security by design, makes it possible to influence the architecture of a software system as early as possible. Protections against control-flow hijacking, such as return-oriented programming, are not designed to change the way software is designed: they usually protect a program either during its compilation or by working directly on the produced binary. In this thesis, we propose metrics allowing a developer to evaluate the sensitivity of a software program to control-flow-hijacking attacks. To ease development, the defined metrics make it possible to identify the parameters used when producing the binaries of a program that result in increased sensitivity to these attacks. The use of these metrics is illustrated in this thesis by studying the influence of compilers and their options, languages and hardware architectures
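As one possible, deliberately crude metric in this spirit, the Python sketch below measures the density of candidate ret opcodes in a binary's code section, since each one can terminate a gadget usable in return-oriented programming. This stand-in metric is an assumption for illustration, not one of the metrics defined in the thesis, and counting raw 0xC3 bytes overapproximates real gadget ends on x86.

def ret_density(text_section: bytes) -> float:
    """Candidate ret opcodes (0xC3) per kilobyte of code."""
    if not text_section:
        return 0.0
    return text_section.count(0xC3) * 1024 / len(text_section)

# In practice text_section would hold the .text bytes of the binary under study,
# e.g. extracted with: objcopy -O binary --only-section=.text prog prog.text
sample = bytes([0x55, 0x48, 0x89, 0xE5, 0x5D, 0xC3] * 100)  # synthetic code ending in ret
print(round(ret_density(sample), 1))  # ~170.7 candidate rets per KiB for this toy input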
Vu, Son Tuan. "Optimizing Property-Preserving Compilation." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS435.
Full textIn order to ensure security guarantees of binary applications, program analyses and verifications have to be performed at the binary level. These analyses and verifications require various security or functional properties about the program being analyzed. It is thus necessary to propagate these properties, usually expressed at the source level, down to the binary code. However, preserving these properties throughout the optimizing compilation flow is hard, because code optimizations reorder computations or eliminate unused variables. This thesis presents different approaches to preserve and propagate program properties throughout the optimizing compilation flow with minimal changes to individual transformation passes. In our LLVM implementations, properties are emitted into executable binaries as DWARF debug information, which can then be used by binary analysis tools. Our mechanisms can be applied to the problem of preserving security protections inserted at the source level when compiling with optimizations enabled
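On the consumer side, a binary analysis tool can recover such annotations by walking the DWARF information of the optimized executable. The Python sketch below (using the pyelftools package) lists DWARF variable entries whose names follow an assumed property-carrying prefix; that naming convention is a placeholder for illustration, not the encoding defined in the thesis.

from elftools.elf.elffile import ELFFile

def list_annotated_variables(path: str, prefix: str = "__prop_"):
    """Return the names of DWARF variable entries matching the assumed
    property-carrying naming convention."""
    found = []
    with open(path, "rb") as f:
        elf = ELFFile(f)
        if not elf.has_dwarf_info():
            return found
        dwarf = elf.get_dwarf_info()
        for cu in dwarf.iter_CUs():
            for die in cu.iter_DIEs():
                if die.tag == "DW_TAG_variable":
                    attr = die.attributes.get("DW_AT_name")
                    if attr and attr.value.decode().startswith(prefix):
                        found.append(attr.value.decode())
    return found

print(list_annotated_variables("program_with_props"))   # e.g. ['__prop_secret_erased']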