Dissertations / Theses on the topic "Systèmes informatiques – Mesures de sûreté – Offuscation (informatique)"
Consult the top 50 dissertations / theses for your research on the topic "Systèmes informatiques – Mesures de sûreté – Offuscation (informatique)".
Badier, Hannah. "Transient obfuscation for HLS security : application to cloud security, birthmarking and hardware Trojan defense." Thesis, Brest, École nationale supérieure de techniques avancées Bretagne, 2021. https://tel.archives-ouvertes.fr/tel-03789700.
The growing globalization of the semiconductor supply chain, as well as the increasing complexity and diversity of hardware design flows, have led to a surge in security threats: risks of intellectual property theft and reselling, reverse engineering and malicious code insertion in the form of hardware Trojans during manufacturing and at design time have been a growing research focus in the past years. However, threats during high-level synthesis (HLS), where an algorithmic description is transformed into a lower-level hardware implementation, have only recently been considered, and few solutions have been given so far. In this thesis, we focus on how to secure designs during behavioral synthesis using either a cloud-based or an internal but untrusted HLS tool. We introduce a novel design-time protection method called transient obfuscation, where the high-level source code is obfuscated using key-based techniques, and deobfuscated after HLS at register-transfer level. This two-step method ensures correct design functionality and low design overhead. We propose three ways to integrate transient obfuscation in different security mechanisms. First, we show how it can be used to prevent intellectual property theft and illegal reuse in a cloud-based HLS scenario. Then, we extend this work to watermarking, by exploiting the side-effects of transient obfuscation on HLS tools to identify stolen designs. Finally, we show how this method can also be used against hardware Trojans, both by preventing insertion and by facilitating detection.
Szlifierski, Nicolas. "Contrôle sûr de chaînes d'obfuscation logicielle." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0223.
Code obfuscation is a software protection technique that is designed to make reverse engineering a program more challenging, in order to protect secrets or intellectual property, or to complicate detection (malware). The obfuscation is generally carried out using a set of transformations on the target program. Obfuscation is also often used in conjunction with other software protection techniques, such as watermarking or tamper-proofing. These transformations are usually integrated into code transformations applied during the compilation process, such as optimisations. However, the application of obfuscation transformations requires more preconditions on the code and more precision in the application to provide an acceptable performance/safety trade-off, making traditional compilers unsuitable for obfuscation. To address these problems, this thesis presents SCOL, a language that is able to describe the process of compiling a program while considering the specific problems of obfuscation. It makes it possible to describe compilation chains with a high level of abstraction, while allowing a high degree of accuracy on the composition of the transformations. The language has a type system which checks the correctness of the compilation chain, that is, whether the composition of transformations provides the targeted level of protection and coverage.
Cecchetto, Sylvain. "Analyse du flot de données pour la construction du graphe de flot de contrôle des codes obfusqués." Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0042.
The increase in cyber-attacks around the world makes malicious code analysis a priority research area. Such software uses various protection methods, also known as obfuscations, to bypass antivirus software and slow down the analysis process. In this context, this thesis provides a solution to build the Control Flow Graph (CFG) of obfuscated binary code. We developed the BOA platform (Basic blOck Analysis), which performs a static analysis of a protected binary code. For this, we have defined a semantics based on the BINSEC tool to which we have added continuations. These allow us, on the one hand, to control self-modifications, and on the other hand, to simulate the operating system in order to handle system calls and interruptions. The static analysis is done by symbolically executing the binary code and computing the values of the system states using SMT solvers. Thus, we perform a data-flow analysis to build the CFG by computing the transfer addresses. Finally, loop handling is performed by transforming a CFG into a pushdown automaton. BOA is able to compute dynamic jump addresses, detect opaque predicates, compute return addresses on a stack even if they have been falsified, manage interrupt handler falsifications, rebuild import tables on the fly, and finally, manage self-modifications. We validated the correctness of BOA using the Tigress code obfuscator. Then, we tested BOA on 35 known packers and showed that in 30 cases, BOA was able to completely or partially rebuild the initially protected binary. Finally, we detected the opaque predicates protecting XTunnel, a malware used during the 2016 U.S. elections, and we partially unpacked a sample of the Emotet Trojan, which on 14/10/2020 was detected by only 7 antivirus programs out of the 63 offered by VirusTotal. This work contributes to the development of tools for static analysis of malicious code. In contrast to dynamic methods, this solution allows an analysis without executing the binary, which offers a double advantage: on the one hand, a static approach is easier to deploy, and on the other hand, since the malicious code is not executed, it cannot warn its author.
Gonzalvez, Alexandre. "Affiner la déobfuscation symbolique et concrète de programmes protégés par des prédicats opaques." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0187.
There is high demand nowadays for improved obfuscation and deobfuscation techniques, with the purpose of preventing intellectual property piracy or improving defence against cyber security attacks. This thesis focuses on improving the deobfuscation achieved by symbolic and concrete analysis tools on programs protected with opaque predicates. These tools rely on automated program analysis (dynamic symbolic execution engines) that use Satisfiability Modulo Theories solvers (SMT solvers). To understand more precisely the situations in which the predicate analysis performed by these tools fails, our aim is to identify practical solutions to avoid these scenarios and to test them on real cases. First results show how an instruction set architecture (ISA) allows opaque predicates to appear or not. We suggest an improvement of opaque predicate identification based on the behavior of SMT solvers, and a method to reshape SMT queries to reduce the effects of opaque predicates. These features are integrated into several automated tools such as KLEE and angr, and tested on different programs which contain opaque predicates.
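To illustrate the kind of query such tools hand to an SMT solver (a minimal sketch under our own assumptions, not code from the thesis), the snippet below uses the z3 Python bindings to check a classic opaquely-true predicate by searching for a counterexample:

```python
# Hypothetical sketch: checking an opaquely-true predicate with an SMT solver.
# Assumes the z3-solver Python package; this is not code from the thesis.
from z3 import BitVec, Not, Solver, unsat

x = BitVec("x", 32)                     # model a 32-bit machine integer
predicate = ((x * (x + 1)) & 1) == 0    # classic opaque predicate: x*(x+1) is always even

solver = Solver()
solver.add(Not(predicate))              # ask for an input that falsifies the predicate

if solver.check() == unsat:
    print("no counterexample: the predicate is opaquely true, its branch is dead")
else:
    print("counterexample:", solver.model())
```

When the solver answers unsat, the predicate can be folded away and the dead branch it guards removed, which is exactly the step that crafted opaque predicates try to make expensive for the analyst.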
Mendy, Norbert Lucien. "Les attaques et la sécurité des systèmes informatiques." Paris 8, 2006. http://www.theses.fr/2006PA082735.
Hacking activities appeared around 1980 with the first personal computers and have not stopped developing since. At the beginning, this practice was primarily individual and playful; it is now mainly made up of the activities of groups with very diverse motivations. Today, due to the development of electronic means of communication, data security concerns a much wider public. This thesis first examines, from a technical and sociological point of view, attacks and defense mechanisms, and then proposes a new concept of security which is not centered solely on technical solutions but also takes into consideration the social dimension of the problem.
Sadde, Gérald. "Sécurité logicielle des systèmes informatiques : aspects pénaux et civils." Montpellier 1, 2003. http://www.theses.fr/2003MON10019.
Vache, Géraldine. "Evaluation quantitative de la sécurité informatique : approche par les vulnérabilités." Toulouse, INSA, 2009. http://eprint.insa-toulouse.fr/archive/00000356/.
This thesis presents a new approach for quantitative security evaluation of computer systems. The main objective of this work is to define and evaluate several quantitative measures. These measures are probabilistic and aim at quantifying the influence of the environment on computer system security, considering vulnerabilities. Initially, we identified the three factors that have a strong influence on the system state: 1) the vulnerability life cycle, 2) the attacker behaviour and 3) the administrator behaviour. We studied these three factors and their interdependencies and distinguished two main scenarios based on the nature of the vulnerability discovery, i.e., malicious or non-malicious. This step allowed us to identify the different states of the system with respect to the vulnerability exploitation process and to define four measures relating to the states of the system: vulnerable, exposed, compromised, patched and secure. To evaluate these measures, we modelled the process of system compromise by vulnerability exploitation. Afterwards, we characterized the vulnerability life cycle events quantitatively, using real data from a vulnerability database, in order to assign realistic values to the parameters of the models. The simulation of these models enabled us to obtain the values of the four measures we had defined. Finally, we studied how to extend the modelling to consider several vulnerabilities. This approach thus allows the evaluation of measures quantifying the influences of several factors on the system security.
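As a toy illustration of how such probabilistic measures can be obtained by simulation (a generic sketch with invented exponential rates, not the models or vulnerability data of the thesis), one can estimate the probability that a vulnerable system is compromised before it is patched:

```python
# Illustrative Monte Carlo estimate of a probabilistic security measure.
# The rates below are made up for the example; they are not from the thesis.
import random

random.seed(0)
LAMBDA_EXPLOIT = 1 / 30.0   # hypothetical: mean 30 days before an exploit attempt
LAMBDA_PATCH = 1 / 10.0     # hypothetical: mean 10 days before the patch is applied

def simulate(runs=100_000):
    compromised = 0
    for _ in range(runs):
        t_exploit = random.expovariate(LAMBDA_EXPLOIT)   # attacker behaviour
        t_patch = random.expovariate(LAMBDA_PATCH)       # administrator behaviour
        if t_exploit < t_patch:                          # exploited while still vulnerable
            compromised += 1
    return compromised / runs

print(f"P(compromised before patched) ~ {simulate():.3f}")   # analytically 0.25 here
```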
Bascou, Jean-Jacques. "Contribution à la sécurité des systèmes : une méthodologie d'authentification adaptative." Toulouse 3, 1996. http://www.theses.fr/1996TOU30253.
Trabelsi, Slim. "Services spontanés sécurisés pour l'informatique diffuse." PhD thesis, Télécom ParisTech, 2008. http://pastel.archives-ouvertes.fr/pastel-00004140.
Повний текст джерелаSaadi, Rachid. "The Chameleon : un système de sécurité pour utilisateurs nomades en environnements pervasifs et collaboratifs." Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0040/these.pdf.
While trust is easy to set up between the known participants of a communication, the evaluation of trust becomes a challenge when confronted with an unknown environment. It is likely that collaboration in the mobile environment will occur between totally unknown parties. A long-standing approach to handle this situation has been to establish third parties that certify the identities, roles and/or rights of both participants in a collaboration. In a completely decentralized environment, this option is not sufficient. To decide upon access, one prefers to rely only on what is presented by the other party and on the trust that can be established, directly by knowing the other party or indirectly, and vice versa. Hence a mobile user must, for example, present a set of certificates known in advance, and the visited site may use these certificates to determine the trust it can have in this user and thus potentially allow an adapted access. In this schema the mobile user must know in advance where she wants to go and what she should present as identification. This is difficult to achieve in a global environment. Moreover, the user would like to have an evaluation of the site she is visiting before allowing limited access to her resources. Finally, a user does not want to bother with fine-grained management of her security while still preserving her privacy. Ideally, the process should be automated. Our work led to the definition of the Chameleon architecture: nomadic users can behave as chameleons by taking the "colors" of their environments, enriching their nomadic accesses. It relies on a new trust model, T2D, which is characterized by support for the disposition of trust. Each nomadic user is identified by a new morph certification model called X316. The X316 allows trust evaluation to be carried out together with the roles of the participants, while allowing some of its elements to be hidden, preserving the privacy of its users.
Maingot, Vincent. "Conception sécurisée contre les attaques par fautes et par canaux cachés." Grenoble INPG, 2009. https://tel.archives-ouvertes.fr/tel-00399450.
The evolution of the security needs of consumer applications has led to a multiplication of the number of systems-on-chip equipped with encryption capabilities. In parallel, the evolution of cryptanalysis techniques makes it possible to attack the implementations of the encryption methods used in these applications. This thesis focuses on the development of a methodology for evaluating the robustness provided by protections integrated into the circuit. This evaluation is based, on the one hand, on the use of laser platforms to study the types of faults induced in a prototype of a secure circuit and, on the other hand, on the use of a simulation-based method during the design phase to compare the influence of fault protections on side channels. This methodology was first applied to the simple case of a register protected by information redundancy, then to cryptographic primitives such as an AES S-Box and AES and RSA co-processors. These two studies showed that adding detection or correction capabilities improves the robustness of the circuit against the various attacks.
Habib, Lionel. "Formalisations et comparaisons de politiques et de systèmes de sécurité." Paris 6, 2011. http://www.theses.fr/2011PA066146.
Abbes, Tarek. "Classification du trafic et optimisation des règles de filtrage pour la détection d'intrusions." Nancy 1, 2004. http://www.theses.fr/2004NAN10192.
In this dissertation we are interested in several bottlenecks that intrusion detection faces, namely high traffic load, evasion techniques and false alert generation. In order to ensure the supervision of overloaded networks, we classify the traffic using Intrusion Detection System (IDS) characteristics and network security policies. Therefore each IDS supervises less IP traffic and uses fewer detection rules (with respect to the traffic it analyses). In addition, we reduce packet processing time by a judicious application of the attack detection rules. During this analysis we rely on an on-the-fly pattern matching strategy over several attack signatures. Thus we avoid the traffic reassembly previously used to counter evasion techniques. Besides, we employ protocol analysis with decision trees in order to accelerate intrusion detection and reduce the number of false positives observed when using a raw pattern matching method.
Martinelli, Jean. "Protection d'algorithmes de chiffrement par blocs contre les attaques par canaux auxiliaires d'ordre supérieur." Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0043.
Traditionally, a cryptographic algorithm is evaluated through its resistance to "logical" attacks. When this algorithm is implemented within a hardware device, physical leakage can be observed during the computation and can be analyzed by an attacker in order to mount "side channel" attacks. The most studied side channel attack is differential power analysis (DPA). First-order DPA is now well known and can be prevented by provably secure countermeasures. In 2008, some results were known for the second order, but none for the third order. The goal of this thesis is to propose a framework for k-th order DPA where k > 1. We developed several masking schemes as alternatives to the classical ones in order to propose a better complexity/security ratio. These schemes make use of various mathematical operations such as field multiplication or matrix product, and of cryptographic tools such as secret sharing and multi-party computation. We estimated the security of the proposed schemes following a methodology using both theoretical analysis and practical results. At last we proposed an evaluation of the impact of the word size of a cryptographic algorithm on its resistance against side channel attacks, with respect to the masking scheme implemented.
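To fix ideas, here is a minimal first-order Boolean masking sketch in Python (an illustration of the general masking principle discussed above, not one of the higher-order schemes proposed in the thesis): a sensitive byte is split into random shares so that no single intermediate value correlates with the secret.

```python
# Illustrative first-order Boolean masking of a byte (not the thesis's schemes).
import secrets

def mask(value: int) -> tuple[int, int]:
    """Split an 8-bit value into two shares whose XOR equals the value."""
    m = secrets.randbelow(256)       # fresh random mask
    return value ^ m, m              # (masked value, mask)

def unmask(shares: tuple[int, int]) -> int:
    masked, m = shares
    return masked ^ m

key_byte = 0x2B
shares = mask(key_byte)
# Computations are carried out on the shares; neither share alone depends on
# key_byte, which is the property first-order DPA countermeasures rely on.
assert unmask(shares) == key_byte
```

Higher-order attacks combine several leakage points, which is why the number of shares, and the cost of computing on them, grows with the targeted order k.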
Su, Lifeng. "Confidentialité et intégrité du bus mémoire." Paris, Télécom ParisTech, 2010. http://www.theses.fr/2010ENST0008.
The security of program execution is often required for certain critical applications. Unfortunately, it is vulnerable to many attack techniques such as software exploits and hardware attacks. Existing experiments show that the security of the communication between processor and memory can be compromised by board-level probing attacks. Generally, probing attacks are divided into two sub-classes: passive probing and active probing. In the first case, an attacker can capture critical data during the process of processor-memory communication. Active probing attacks can be used to alter memory data in order to compromise the program execution in the processor. The first case is related to the confidentiality of memory data and the latter to the integrity of memory data. This dissertation aims to explore diverse options to protect the confidentiality and integrity of the memory bus against board-level probing attacks. The fundamental idea is the implementation of an on-chip hardware cryptographic engine to guarantee the integrity and confidentiality of memory data. As our target market is low-to-medium embedded systems, we intend to propose a protection scheme which is realistic, acceptable to the market and low-cost. These strong constraints heavily shape our specific protection choices.
Faurax, Olivier. "Méthodologie d'évaluation par simulation de la sécurité des circuits face aux attaques par faute." Aix-Marseille 2, 2008. http://theses.univ-amu.fr.lama.univ-amu.fr/2008AIX22106.pdf.
Microelectronic security devices are more and more present in our lives (smartcards, SIM cards) and they contain sensitive information that must be protected (account numbers, cryptographic keys, personal data). Recently, attacks on cryptographic algorithms have appeared, based on the use of faults. Adding a fault during a device computation enables one to obtain a faulty result. Using a certain amount of correct results and the corresponding faulty ones, it is possible to extract secret data and, in some cases, complete cryptographic keys. However, physical perturbations used in practice (laser, radiation, power glitches) rarely match the faults needed to successfully perform theoretical attacks. In this work, we propose a methodology to test circuits under fault attacks, using simulation. The use of simulation makes it possible to test the circuit before its physical realization, but requires a lot of time. That is why our methodology helps the user to choose the most important faults in order to significantly reduce the simulation time. The tool and the corresponding methodology have been tested on a cryptographic circuit (AES) using a delay fault model. We showed that the use of delays to inject faults can generate faults suitable for performing known attacks.
Saraydaryan, Jacques. "Détection d'anomalies comportementales appliquée à la vision globale." Lyon, INSA, 2008. http://theses.insa-lyon.fr/publication/2008ISAL0132/these.pdf.
In light of the increase in new threats and attacks, security components (firewalls, IDS) are becoming inadequate. Indeed, complex attack scenarios tend to be confused with normal system behaviors in order to bypass local security components. From this perspective, we provided throughout our work a new method of behavioral anomaly detection based on a global view of the system. By taking into account the observation constraints of the entire IS (heterogeneity, high data volume), we built a statistical profile of the system and developed an anomaly detection method, and showed that the continuous update of this profile allows us to follow the evolution of legitimate user behaviors and reduces false alarms. Thus, by focusing on the attacker's strategy, our work determined the observation perimeter of system behaviors needed to detect behavioral anomalies.
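As a rough illustration of profile-based anomaly scoring (a generic sketch, not the statistical profile or detection method developed in the thesis), one can flag observations that deviate strongly from a continuously updated profile:

```python
# Generic sketch of profile-based anomaly detection (not the thesis's method).
class RunningProfile:
    """Keeps a running mean/variance of a numeric feature and flags outliers."""
    def __init__(self, threshold: float = 3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold

    def update(self, x: float) -> None:        # Welford's online update
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x: float) -> bool:
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) / std > self.threshold

profile = RunningProfile()
for volume in [120, 130, 125, 128, 122]:        # hypothetical per-user traffic volumes
    profile.update(volume)
print(profile.is_anomalous(900))                # True: deviates from the learned profile
```

Continuously updating the profile, as the thesis argues, is what lets legitimate drift in user behavior be absorbed instead of raising false alarms.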
Bhasin, Shivam. "Contre-mesures au niveau logique pour sécuriser les architectures de crypto-processeurs dans les FPGA." Paris, Télécom ParisTech, 2011. https://pastel.hal.science/pastel-00683079.
Modern field programmable gate arrays (FPGAs) are capable of implementing complex systems on chip (SoC) and providing high performance. Therefore, FPGAs are finding wide application. A complex SoC generally contains embedded cryptographic cores to encrypt/decrypt data in order to ensure security. These cryptographic cores are computationally secure, but their physical implementations can be compromised using side channel attacks (SCA) or fault attacks (FA). This thesis focuses on countermeasures for securing cryptographic cores on FPGAs. First, a register-transfer level countermeasure called "unrolling" is proposed. This hiding countermeasure executes multiple rounds of a cryptographic algorithm per clock cycle, which allows deeper diffusion of data. Results show excellent resistance against SCA. This is followed by dual-rail precharge logic (DPL) based countermeasures, which form a major part of this work. Wave dynamic differential logic (WDDL), a commonly used DPL countermeasure well suited for FPGAs, is studied. Analysis of WDDL (and DPL in general) against FA revealed that it is resistant against a majority of faults. Therefore, if flaws in DPL, namely the early propagation effect (EPE) and technological imbalance, are fixed, DPL can evolve into a common countermeasure against both SCA and FA. Continuing along this line of research, we propose two new countermeasures: DPL without EPE and Balanced-Cell based DPL (BCDL). Finally, advanced evaluation tools such as stochastic models, mutual information and combined attacks, which are useful when analyzing countermeasures, are discussed.
Portolan, Michele. "Conception d'un système embarqué sûr et sécurisé." Grenoble INPG, 2006. http://www.theses.fr/2006INPG0192.
This PhD thesis investigates a global methodology enabling improvement of the dependability and security level against transient logic faults (natural or provoked) appearing inside a hardware/software integrated system, such as a smart card. The results can be applied to all systems built around a synthesisable microprocessor core and a set of specialised peripherals. The protection methods operate simultaneously and in a complementary manner on the hardware, application software and interface layers (most notably, the operating system). High-level modifications have been favoured for their advantages in terms of generality, configurability, portability and perpetuity. The proposed approach aims at achieving a good trade-off between robustness and overheads, from both the hardware and the performance points of view. It is applied to a significant system example, representative of an embedded monoprocessor system.
Guichard, Patrice. "Menace sur l'ordinateur : piratage - techniques et solutions." Paris 8, 2001. http://www.theses.fr/2001PA083771.
At the end of the 20th century, the computer hacker myth is one of the most widespread and resilient myths of our modern world. In a high-technology society with limitless invisible communication devices, the computer whizkid stereotype popularized by the mass media is somewhat reassuring. Viruses, illegal intrusions, sabotage, theft: who does not know examples of criminal computer activities, whether they have been victims themselves or not? Secondary phenomenon or worldwide catastrophe, analyzing and quantifying it remains necessary to offer solutions that at least hold it back, if not eradicate it. The complexity of systems, the heavy costs of hardware and software and the extension of application fields have caused computer science to become a key element in the strategies of companies, since any destruction or alteration of their data can compromise their competitiveness or their image, and often cause significant financial loss. Though more and more companies are becoming aware of the risks and invest widely in means to protect themselves, the phenomenon of computer hacking is still poorly known and mastered by companies. It should be analyzed through a multidisciplinary (economic, social and technical) approach. Prevention and protection techniques are quickly evolving, but companies remain vulnerable in many fields. Revealing computer-hacking techniques is what this thesis is intended to do, so that administrators can test them on their own networks, something that no tool can do in their stead.
Boisseau, Alexandre. "Abstractions pour la vérification de propriétés de sécurité de protocoles cryptographiques." Cachan, Ecole normale supérieure, 2003. https://theses.hal.science/tel-01199555.
Since the development of computer networks and electronic communications, it has become important for the public to be able to use secure electronic communications. Cryptographic considerations are part of the answer to the problem, and cryptographic protocols describe how to integrate cryptography into actual communications. However, even if the encryption algorithms are robust, there can still remain attacks due to logical flaws in protocols, and formal verification can be used to avoid such flaws. In this thesis, we use abstraction techniques to formally prove various types of properties: secrecy and authentication properties, fairness properties and anonymity.
Carré, Jean-Loup. "Static analysis of embedded multithreaded programs." Cachan, Ecole normale supérieure, 2010. https://theses.hal.science/tel-01199739.
This PhD thesis presents a static analysis algorithm for programs with threads. It generalizes abstract interpretation techniques used in the single-threaded case and allows the detection of runtime errors, e.g., invalid pointer dereferences, array overflows and integer overflows. We have implemented this algorithm. It analyzes a large industrial multithreaded code base (100K LOC) in a few hours. Our technique is modular: it uses any abstract domain designed for the single-threaded case. Furthermore, without any change in the fixpoint computation, some abstract domains allow the detection of data races or deadlocks. This technique does not assume sequential consistency, since, in practice (Intel and SPARC processors, Java, ...), program execution is not sequentially consistent. For example, it works in the TSO (Total Store Ordering) and PSO (Partial Store Ordering) memory models.
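As a reminder of the kind of abstract domain such an analysis can plug in (a textbook interval-domain sketch, not the analyzer built in the thesis), abstract values over-approximate the possible run-time integers and let the analysis prove the absence of overflows:

```python
# Tiny interval abstract domain sketch (a generic abstract-interpretation illustration).
class Interval:
    def __init__(self, lo: int, hi: int): self.lo, self.hi = lo, hi
    def __add__(self, other):  return Interval(self.lo + other.lo, self.hi + other.hi)
    def join(self, other):     return Interval(min(self.lo, other.lo), max(self.hi, other.hi))
    def __repr__(self):        return f"[{self.lo}, {self.hi}]"

counter = Interval(0, 10)      # e.g. a loop counter known to stay in [0, 10]
offset = Interval(1, 1)
print(counter + offset)        # [1, 11]: enough to prove an access into an array of size 12 is safe
```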
Khatib, Mounis. "Key management and secure routing in mobile ad-hoc networks for trusted-based service provision in pervasive environment." Evry, Télécom & Management SudParis, 2008. http://www.theses.fr/2008TELE0017.
Mobile ad hoc networks are the closest step to the vision of pervasive computing, where all devices dynamically discover each other, organize communication networks between themselves and share resources/information to provide seamless service to the end-user. The major problem of providing security services in Mobile Ad-hoc Networks (MANETs) is how to manage the key material. Due to unreliable wireless media, host mobility and the lack of infrastructure, providing secure communications becomes a big challenge. In addition, the absence of an efficient key management system in this type of network also makes it hard to build a secure routing protocol. As traditional key management schemes are not suitable for such environments, there is a strong requirement to design an efficient key management system compatible with the characteristics of ad hoc networks. Mobile ad-hoc networks cannot afford to deploy public key cryptosystems due to their high computational overheads and storage constraints, while the symmetric approach is computationally efficient but suffers from potential attacks on key agreement or key distribution. Key management is a central aspect of security in mobile ad hoc networks. Consequently, it is necessary to explore an approach that is based on symmetric key cryptography and overcomes these restrictions. In this thesis, our first contribution is the design of a new protocol called OPEP that enables two nodes in an ad-hoc network to establish a pairwise key and perform key verification, authenticated key exchange, and group join and exclusion operations. We implement our protocol using a well-known reactive routing protocol without requiring the use of an online centralized entity; in this manner we succeed in proposing a new key management scheme and securing an existing routing protocol at the same time. It is well known that current ad hoc routing protocols do not scale to work efficiently in networks of more than a few hundred nodes. For scalability purposes we have chosen a new routing protocol, called PARTY, which is intended to be applied in environments with a large number of heterogeneous nodes. Our second contribution in this thesis focuses on the vulnerability analysis of the PARTY protocol and on proposing a new preventive and corrective mechanism which interacts with a new trust model to enforce the cooperation of nodes during the routing process. Finally, we validate our protocols in a service provider platform inside a smart environment to authenticate users, to secure the service provision mechanism in this environment based on our trust model, and to manage services among different users.
Antakly, Dimitri. "Apprentissage et vérification statistique pour la sécurité." Thesis, Nantes, 2020. http://www.theses.fr/2020NANT4015.
The main objective of this thesis is to combine the advantages of probabilistic graphical model learning and formal verification in order to build a novel strategy for security assessments. The second objective is to assess the security of a given system by verifying whether it satisfies given properties and, if not, how far it is from satisfying them. We are interested in performing formal verification of this system based on event sequences collected from its execution. Consequently, we propose a model-based approach where a Recursive Timescale Graphical Event Model (RTGEM), learned from the event streams, is considered to be representative of the underlying system. This model is then used to check a security property. If the property is not verified, we propose a search methodology to find another, close model that satisfies it. We discuss and justify the different techniques we use in our approach and we adapt a distance measure between graphical event models. The distance between the learned "fittest" model and the proximal secure model found gives an insight into how far our real system is from verifying the given property. For the sake of completeness, we propose a series of experiments on synthetic data providing experimental evidence that we can attain the desired goals.
Ahmed, Nacer Amina. "Contributions au déploiement sécurisé de processus métiers dans le cloud." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0013/document.
The fast evolution and development of technologies lead companies to grow faster in order to remain competitive and to offer services that are at the cutting edge of technology, meeting today's market needs. Indeed, companies that are subject to frequent changes require a high level of flexibility and agility. Business Process Management (BPM) allows them to better manage their processes. Moreover, the emergence of cloud computing and all its advantages (flexibility and sharing, optimized cost, guaranteed accessibility, etc.) makes it particularly attractive. Thus, the combination of these two concepts allows companies to replenish their capital. However, the use of the cloud also implies new requirements in terms of security, which stem from its shared environment and which slow down its widespread adoption. The objective of this thesis is to propose concepts and tools that help and guide companies in safely deploying their processes in a cloud environment. A first contribution is an obfuscation algorithm that automates the decomposition and deployment of processes without any human intervention, based on the nature of the fragments. This algorithm limits the amount of information held by each cloud through a set of separation constraints, which allow fragments considered sensitive to be deployed on different clouds. The second contribution of this thesis consists in complicating the structure of the process in order to limit the risk of cloud coalition. This is done through the introduction of fake fragments at certain strategic points in the process. The goal is to make the generated collaborations more resistant to attacks, thus reducing the likelihood of coalition. Even if obfuscation and complexification operations protect companies' know-how during a cloud deployment, a risk remains. In this context, this thesis also proposes a risk model for evaluating and quantifying the security risks to which the process remains exposed after deployment. The purpose of this model is to combine security information with other dimensions of quality of service, such as cost, for the selection of optimized configurations. The proposed approaches are implemented and tested through different process configurations. Their validity is verified through a set of metrics whose objective is to measure the complexity of the processes as well as the remaining risk level after obfuscation.
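To make the separation-constraint idea concrete, here is a toy assignment sketch (fragment names, constraints and the greedy strategy are hypothetical illustrations, not the thesis's obfuscation algorithm): fragments linked by a separation constraint are never co-located on the same cloud.

```python
# Toy assignment of process fragments to clouds under separation constraints.
fragments = ["f1", "f2", "f3", "f4"]
clouds = ["cloud_a", "cloud_b", "cloud_c"]
separations = [("f1", "f3"), ("f2", "f3")]   # sensitive pairs that must not be co-located

def conflicts(frag, cloud, placement):
    """True if placing frag on cloud would violate a separation constraint."""
    for a, b in separations:
        other = b if a == frag else a if b == frag else None
        if other is not None and placement.get(other) == cloud:
            return True
    return False

def assign():
    placement = {}
    for frag in fragments:
        placement[frag] = next(c for c in clouds if not conflicts(frag, c, placement))
    return placement

print(assign())   # e.g. f1 and f2 on cloud_a, f3 pushed to cloud_b
```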
Duc, Guillaume. "Support matériel, logiciel et cryptographique pour une éxécution sécurisée de processus." Télécom Bretagne, 2007. http://www.theses.fr/2007TELB0041.
The majority of solutions to the issue of computer security (algorithms, protocols, secure operating systems, applications) run on insecure hardware architectures that may be vulnerable to physical attacks (bus spying, modification of the memory content, etc.) or logical attacks (malicious operating system). Several secure architectures, which are able to protect the confidentiality and the correct execution of programs against such attacks, have been proposed over the years. After presenting some cryptographic background and a review of the main secure architectures proposed in the literature, we present the secure architecture CryptoPage. This architecture guarantees the confidentiality of the code and the data of applications, as well as their correct execution, against hardware or software attacks. In addition, it also includes a mechanism to reduce the information leakage on the address bus, while keeping reasonable performance. We also study how to delegate some security operations of the architecture to an untrusted operating system in order to get more flexibility without compromising the security of the architecture. Finally, some other important mechanisms are studied: encrypted process identification, attestation of results, management of software signals, management of threads, and inter-process communication.
Sadok, Moufida. "Veille anticipative stratégique pour réduire le risque des agressions numériques." Grenoble 2, 2004. http://www.theses.fr/2004GRE21020.
This research work addresses the problem of management in the digital era and shows the need to consider the risk generated by digital aggressions targeting the security of the information resources of a company that makes heavy use of ICT as a management risk rather than a technical risk. We have constructed and implemented a method, called MARRAN, for the analysis and reduction of digital aggression risk. This method aims to support the process of collective interpretation of information of the weak-signal type in order to reduce the reaction time against digital aggressions and even anticipate their occurrence. MARRAN is based on a major actor, the mediator, who accomplishes a set of actions that aim to help the IRT (Incident Response Team) members construct individual views, reconcile divergent individual views, and refine the needed reasoning. The mediator needs skills such as credibility, which is based on experience and expertise, and a good knowledge of the company, its objectives and the characteristics of its information system. We have validated the MARRAN method on real cases of digital aggressions and have specified its replication conditions. We have also evaluated MARRAN with experts in information security. MARRAN is supported by software that was developed using various internet technologies and that mainly allows the construction of a knowledge base through the capitalization of the attack scenarios processed by the IRT members.
Heerde, Harold Johann Wilhelm van. "Privacy-aware data management by means of data degradation." Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0031.
Service providers collect more and more privacy-sensitive information, even though it is hard to protect this information against hackers, abuse of weak privacy policies, negligence, and malicious database administrators. In this thesis, we take the position that endless retention of privacy-sensitive information will inevitably lead to unauthorized data disclosure. Limiting the retention of privacy-sensitive information limits the amount of stored data and therefore the impact of such a disclosure. Removing data from a database system is not a straightforward task; data degradation has an impact on the storage structure, indexing, transaction management, and logging mechanisms. To show the feasibility of data degradation, we provide several techniques to implement it, mainly a combination of keeping data sorted on degradation time and using encryption techniques where possible. The techniques are supported by a prototype implementation and a theoretical analysis.
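A minimal sketch of the retention-limiting idea (illustrative only; the names and structure below are hypothetical, and the thesis deals with the far harder problems of storage structures, indexing and logging) keeps records ordered by their degradation time and purges them once it has passed:

```python
# Illustrative data-degradation store: records expire at their degradation time.
import heapq, time

class DegradingStore:
    def __init__(self):
        self._heap = []                       # min-heap ordered by degradation time

    def insert(self, record: dict, retain_seconds: float) -> None:
        heapq.heappush(self._heap, (time.time() + retain_seconds, id(record), record))

    def purge(self) -> None:
        """Remove every record whose degradation time has passed."""
        now = time.time()
        while self._heap and self._heap[0][0] <= now:
            heapq.heappop(self._heap)

    def records(self):
        self.purge()
        return [r for _, _, r in self._heap]

store = DegradingStore()
store.insert({"user": "alice", "location": "Paris"}, retain_seconds=3600)
print(store.records())
```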
Hamieh, Ali. "La sécurité dans les réseaux sans fil ad hoc : les attaques jamming et les noeuds greedy." Versailles-St Quentin en Yvelines, 2012. http://www.theses.fr/2012VERS0009.
Ad hoc networks are vulnerable to security attacks such as greedy behaviors and jamming attacks. In this dissertation, we propose RLGREEDY, an improved detection system to identify and differentiate a greedy node without modifying the MAC protocol. In effect, the proposed system measures the waiting time of nodes accessing the channel in order to detect greedy nodes. Furthermore, concerning the detection of jamming, the system proposed in this thesis, RLJAM, focuses on calculating a correlation between erroneous and correct reception times. To counter these jamming attacks, POWJAM and DIRJAM are proposed in this dissertation. Our first approach, POWJAM, is to hide the communications from reactive jammers by changing the transmission power and using a different path for communication. The second approach, DIRJAM, is to react to jamming in wireless ad hoc networks using directional antennas, making minimal changes to the reactive routing protocol so that it remains reactive in the presence of jamming attacks.
Syed, Idrus Syed Zulkarnain. "Soft biometrics for keystroke dynamics." Caen, 2014. http://www.theses.fr/2014CAEN2024.
At present, biometric systems are used for many specific purposes such as physical access control, attendance monitoring, electronic payment (e-payment) and others. This PhD thesis focuses on biometric authentication, and we propose to use keystroke dynamics in order to avoid password-based authentication problems. Keystroke dynamics measures the rhythm a person exhibits while typing on a keyboard. In this sense, keystroke dynamics is a behavioral biometric modality, like signature dynamics, gait and voice. Among the advantages of keystroke dynamics in comparison to other modalities, we can mention that it is a low-cost and usable modality: indeed, no extra sensor or device is required and users often type a password. The counterpart to these advantages is worse performance compared to morphological biometric modalities such as fingerprint, face or iris. The rather lower performance of keystroke dynamics can be explained by the high intra-class variability of users' behaviour. One way to handle this variability is to take additional information into account in the decision process. This can be done with: (i) multibiometrics (by combining keystroke dynamics and another modality); (ii) optimising the enrolment step (a template is stored as reference only if its quality level is sufficient); or (iii) a new and promising solution: soft biometrics (profiling the user). We address the last two aspects in this PhD thesis. We propose several contributions in order to enhance the performance of keystroke dynamics systems. First, we created a benchmark dataset called 'GREYC-NISLAB Keystroke' with biometric data collected from 110 users in France and Norway. This new benchmark database is available to the international scientific community and contains some profiling information on users: the way of typing (one hand or two hands), gender, age and handedness. We then perform various studies in order to determine the recognition accuracy of soft biometric traits given keystroke dynamics features: (i) the way of typing (one hand or two hands); (ii) gender (male or female); (iii) age class (below 30, or 30 and above); and (iv) handedness (right-handed or left-handed). Subsequently, we study biometric fusion with keystroke dynamics in order to increase the soft biometrics recognition performance. Finally, by combining the authentication process with soft criteria, we present an improvement of user verification. The results of our experiments show the benefits of the proposed methods.
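For illustration, keystroke-dynamics features are typically derived from key-press and key-release timestamps; the sketch below (not the thesis's code; the event format and function name are hypothetical) computes the usual hold and flight times:

```python
# Hypothetical sketch of basic keystroke-dynamics feature extraction.
# events: (key, press_time, release_time) tuples, times in seconds.
def keystroke_features(events):
    hold_times = [release - press for _, press, release in events]
    flight_times = [events[i + 1][1] - events[i][2]        # next press minus current release
                    for i in range(len(events) - 1)]
    return {"hold": hold_times, "flight": flight_times}

sample = [("p", 0.00, 0.09), ("a", 0.21, 0.30), ("s", 0.45, 0.52)]
print(keystroke_features(sample))
# These timing vectors are what a verification system compares against a user's
# enrolled template and, in this thesis, against soft-biometric classes.
```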
Hoolash, Mahendranath. "E-securité, e-confidentialité et e-integrité." Aix-Marseille 3, 2001. http://www.theses.fr/2001AIX30083.
The information trilogy of eConfidentiality, eSecurity and eIntegrity determines the value of information on the Web. The main objective of any Information Security System (ISS) is to ensure the real and authentic value of the information we make use of. If such is not the case, we refer to our information system as being corrupted. Hence, it is of primordial importance to protect the information of any firm when it is stored on the hard disk of a PC connected to the Internet, since it can be read, copied, altered or deleted by a remotely connected person without the owner's knowledge. Today, these threats come mainly from the Internet. Because of this, there is an increasing demand for electronic identification and authentication systems. One of the main goals of my thesis is to review exhaustively the main authentication techniques that have become available over the last three to four years, and to see how two of these techniques have been applied in my Extranet project at GEMS, a project which started in 1999. The backbone of this high-security site is based on a VPN (Virtual Private Network) and RSA SecurID.
Hodique, Yann. "Sûreté et optimisation par les systèmes de types en contexte ouvert et contraint." Lille 1, 2007. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2007/50376-2007-15.pdf.
Повний текст джерелаElbaz, Reouven. "Mécanismes matériels pour des transferts processeur mémoire sécurisés dans les systèmes embarqués." Montpellier 2, 2006. http://www.theses.fr/2006MON20119.
Повний текст джерелаBerbain, Côme. "Analyse et conception d'algorithmes de chiffrement à flot." Paris 7, 2007. http://www.theses.fr/2007PA077124.
The primary goal of cryptography is to protect the confidentiality of data and communications. Stream ciphers are one of the two most popular families of symmetric encryption algorithms that make it possible to guarantee confidentiality and to achieve high performance. In the first part of this thesis, we present different cryptanalysis techniques against stream ciphers: a correlation attack against the stream cipher GRAIN, a guess-and-determine attack against the BSG mechanism, algebraic attacks against special kinds of non-linear feedback shift registers, and a chosen-IV attack against a reduced version of the stream cipher SALSA. In a second part, we focus on proofs of security for stream ciphers: we introduce the new algorithm QUAD and give provable security arguments in order to link its security to the conjectured intractability of the Multivariate Quadratic problem. We also extend the security requirements of stream ciphers to the case where initialisation values (IVs) are used: we present a construction which allows us to build a secure IV-dependent stream cipher from a number generator and apply it to QUAD, which becomes the first IV-dependent stream cipher with provable security arguments. We also present the algorithms DECIM and SOSEMANUK, to which we made design contributions. Finally, in a third part, we present efficient software and hardware implementations of the QUAD algorithm.
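To convey the multivariate-quadratic idea behind QUAD, here is a toy keystream generator built from random quadratic forms over GF(2). It is purely illustrative: the state size, the number of forms and the seeding below are invented, so this is not the QUAD specification and has none of its security guarantees.

```python
# Toy keystream generator from random quadratic forms over GF(2) (illustrative only).
import random

N = 8                                            # internal state size in bits (toy value)
rng = random.Random(2024)                        # fixed seed so the sketch is reproducible

def random_quadratic():
    """A random quadratic form q(x) = sum a_ij x_i x_j + sum b_i x_i over GF(2)."""
    quad = [(i, j, rng.randrange(2)) for i in range(N) for j in range(i, N)]
    lin = [rng.randrange(2) for _ in range(N)]
    return quad, lin

def evaluate(form, x):
    quad, lin = form
    acc = sum(c & x[i] & x[j] for i, j, c in quad) + sum(b & xi for b, xi in zip(lin, x))
    return acc & 1                               # reduce modulo 2

update_forms = [random_quadratic() for _ in range(N)]    # maps the state to the next state
output_forms = [random_quadratic() for _ in range(N)]    # maps the state to keystream bits

def keystream(state, nbits):
    out = []
    while len(out) < nbits:
        out.extend(evaluate(f, state) for f in output_forms)
        state = [evaluate(f, state) for f in update_forms]
    return out[:nbits]

print(keystream([1, 0, 1, 1, 0, 0, 1, 0], 16))
```

The security argument for the real construction rests on the conjectured hardness of solving such random quadratic systems when the number of variables is large, not on toy parameters like these.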
Kang, Eun-Young. "Abstractions booléennes pour la vérification des systèmes temps-réel." Thesis, Nancy 1, 2007. http://www.theses.fr/2007NAN10089/document.
This thesis provides an efficient formal scheme for tool-supported real-time system verification by combining abstraction-based deductive and model checking techniques in order to handle the limitations of each verification technique. The method is based on IAR (Iterative Abstract Refinement) to compute finite-state abstractions. Given a transition system and a finite set of predicates, this method determines a finite abstraction, where each state of the abstract state space is a truth assignment to the abstraction predicates. A theorem prover can be used to verify that the finite abstract model is a correct abstraction of a given system, by checking conformance between the abstract and the concrete model, i.e., by establishing/proving a set of verification conditions obtained during the IAR procedure. The safety/liveness properties are then checked over the abstract model. If the verification conditions hold, IAR terminates. Otherwise, further analysis is applied to determine whether the abstract model needs to be made more precise by adding extra predicates. As the abstraction form, we adopt a class of predicate diagrams and define a variant of predicate diagrams, PDTs (Predicate Diagrams for Timed systems), that can be used to verify real-time and parameterized systems.
El, Chaer Nidal. "La criminalité informatique devant la justice pénale." Poitiers, 2003. http://www.theses.fr/2003POIT3006.
Повний текст джерелаHourdin, Vincent. "Contexte et sécurité dans les intergiciels d'informatique ambiante." Nice, 2010. http://www.theses.fr/2010NICE4076.
In ubiquitous computing, context is key. Computer applications are extending their interactions with the environment: new inputs and outputs are used, such as sensors and other mobile devices interacting with the physical environment. Middlewares, created in distributed computing to hide the complexity of lower layers, are then loaded with new concerns, such as taking the context into account, adaptation of applications, or security. A layered middleware representation of these concerns cannot express all their interdependencies. In pervasive computing, distribution is required to obtain contextual information, but it is also necessary to take the context into account in distribution, for example to restrict interactions between entities to a defined context. In addition, the asynchronous interactions used in these new environments require special attention when taking the context into account. Similarly, security is involved in both the distribution and the context-sensitivity layers of the middleware. In this thesis we present a model taking the context into account in both security and distribution. Access control must evolve to incorporate dynamic and reactive authorization, based on information related to the environment or simply on the authentication information of entities. Contextual information evolves with its own dynamics, independently of applications. It is therefore also necessary to detect context changes in order to re-enforce the authorization. We experiment with this context-awareness, targeting interaction control, in the experimental framework WComp, derived from the SLCA/AA (Service Lightweight Component Architecture / Aspects of Assembly) model. SLCA makes it possible to create dynamic middlewares and applications for which functional decomposition is not translated into layers but into an interleaving of functionalities. Aspects of assembly are a mechanism for compositional adaptation of assemblies of components. We use them to express our non-functional concerns and to compose them with existing applications in a deterministic and reactive manner. For this purpose, we introduce context-aware interaction control rules. The middleware thus allows our non-functional concerns and the behavior of the application to be adapted according to the context.
Clavier, Christophe. "De la sécurité physique des crypto-systèmes embarqués." Versailles-St Quentin en Yvelines, 2007. http://www.theses.fr/2007VERS0028.
In a world full of threats, the development of widespread digital applications has led to the need for a practical device containing cryptographic functions that provide the everyday needs for secure transactions, confidentiality of communications, identification of the subject or authentication for access to a particular service. Among the cryptographic embedded devices ensuring these functionalities, smart cards are certainly the most widely used. Their portability (a wallet may easily contain a dozen) and their ability to protect their data and programs against intruders make them the ideal "bunker" for key storage and the execution of cryptographic functions during mobile usage requiring a high level of security. Whilst the design of mathematically robust (or even provably secure in some models) cryptographic schemes is an obvious requirement, it is apparently insufficient in the light of the first physical attacks that were published in 1996. Taking advantage of weaknesses related to the basic implementation of security routines, these threats include side-channel analysis, which obtains information about the internal state of the process, and the exploitation of induced faults, which allows certain cryptanalyses to be performed that otherwise would not have been possible. This thesis presents a series of research works covering the physical security of embedded cryptosystems. Two parts of this document are dedicated to the description of some attacks and to a study of the efficiency of conceivable countermeasures. A third part deals with that particular and still mainly unexplored area which considers the applicability of physical attacks when the cryptographic function is, partly or totally, unknown to the adversary.
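To illustrate the side-channel principle in its simplest form (a self-contained simulation with an invented leakage model, not an attack or target from the thesis), the sketch below models "power" leakage as the Hamming weight of an intermediate value and recovers a key byte by correlation:

```python
# Toy correlation-based key recovery on simulated Hamming-weight leakage.
import random

def hw(x): return bin(x).count("1")              # Hamming weight

secret_key = 0x3C                                # the value the "attacker" must recover
random.seed(1)
plaintexts = [random.randrange(256) for _ in range(200)]
leakages = [hw(p ^ secret_key) + random.gauss(0, 0.5) for p in plaintexts]   # noisy traces

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

best_guess = max(range(256),
                 key=lambda k: correlation([hw(p ^ k) for p in plaintexts], leakages))
print(hex(best_guess))   # should recover 0x3c on this simulated data
```

Real attacks replace the simulated traces with measured power or electromagnetic traces and target a key-dependent intermediate of the actual cipher, which is precisely what the countermeasures studied in these theses try to decorrelate.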
Sassolas, Mathieu. "Méthodes qualitatives et quantitatives pour la détection d'information cachée." Paris 6, 2011. http://www.theses.fr/2011PA066581.
Повний текст джерелаRanéa, Pierre-Guy. "La tolérance aux intrusions par fragmentation-dissémination." Toulouse, INPT, 1989. http://www.theses.fr/1989INPT007H.
Повний текст джерелаParadinas, Pierre. "La Biocarte : intégration d'une carte à microprocesseur dans un réseau professionnel santé." Lille 1, 1988. http://www.theses.fr/1988LIL10100.
Повний текст джерелаLuna, del Aguila Felipe. "Information flow security for asynchronous, distributed, and mobile applications." Nice, 2005. http://www.theses.fr/2005NICE4038.
The objective of this work is to propose a security solution to regulate information flows, specifically through an access and flow control mechanism, targeted at distributed applications using active objects with asynchronous communications. It includes a security policy and the mechanism that enforces the rules present in such policies. Data confidentiality and secure information flows are provided through dynamic checks on communications. While information flows are generally verified statically [Mye, BN03, Her, HR_infoFlow, ZZN+, Sab, HVY, CBC], our attention is focused on dynamic verification. To achieve this, the proposed model has an information control policy that includes discretionary rules, and because these rules are by nature dynamically enforceable, it is possible to take advantage of the dynamic checks to carry out all mandatory checks at the same time. As another advantage of this approach, dynamic checks do not require compilers to be modified, do not alter the programming language, do not require modifications to existing source code, and provide flexibility at run-time. Thus, dynamic checks fit well in a middleware layer which, in a non-intrusive manner, provides and ensures security services to upper-level applications. The underlying programming model [CKV] is based on active objects, asynchronous communications, and data-flow synchronizations. These notions are related to the domain of distributed systems, but with a characteristic that distinguishes it from others: the presence of mobile entities, independent and capable of interacting with other, also mobile, entities. Hence, the proposed security model heavily relies on security policy rules with mandatory enforcement for the control of information flow. Security levels are used to independently tag the entities involved in the communication events: active objects and transmitted data. This "independent" tagging is, however, subject to discretionary rules. The combination of mandatory and discretionary rules makes it possible to relax the strict controls imposed by the sole use of mandatory rules. The final security model follows an approach whose advantages are twofold. A sound foundation: the security model is founded on a strong theoretical background, the Asynchronous Sequential Processes (ASP) calculus [CHS_POPL04], related to well-known formalisms [HR_infoFlow, Hen, CBC, CF]. The formal semantics of ASP are then extended with predicate conditions. This provides a formal basis for our model and, at the same time, makes it possible to dynamically check for unauthorized accesses. Finally, in order to prove the correctness of the security model, an intuitive secure information flow property is defined and proved to be ensured by the application of the access control model. Scalability and flexibility: a practical use of this model is also targeted, with an implementation in middlewares, e.g. ProActive. The granularity of this security model is defined in order to make it both efficient (because there are no security checks inside an activity) and finely tunable: levels can be defined on activities because of the absence of shared memory, but a specific level can be given to request parameters and created activities. Moreover, the practical implementation of the security mechanism allows its use through high-level library calls, with no need to change the programming language or to use special compilers.
Allard, Tristan. "Sanitizing microdata without leak : a decentralized approach." Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0049.
The frontiers between humans and the digital world are tightening. An unprecedented and rapidly increasing amount of individual data nowadays ends up in well-structured and consistent databases. Privacy-preserving data publishing models and algorithms are an attempt to benefit collectively from this wealth of data while still preserving the privacy of individuals. However, few works have focused on their implementation issues, leading to highly vulnerable practical scenarios. This thesis tackles precisely this issue, benefiting from the rise of portable, large-capacity, and tamper-resistant devices used as personal data servers. Its contribution is a set of correct and secure protocols permitting traditional non-interactive privacy-preserving data publishing algorithms and models to be performed in an original context where each individual manages her data autonomously through her own secure personal data server.
Serme, Gabriel. "Modularisation de la sécurité informatique dans les systèmes distribués." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0063/document.
Full text of the source
Addressing security in the software development lifecycle is still an open issue today, especially in distributed software. Addressing security concerns requires specific know-how, which means that security experts must collaborate with application programmers to develop secure software. Object-oriented and component-based development is commonly used to support collaborative development and to improve scalability and maintenance in software engineering. Unfortunately, these programming styles do not lend themselves well to supporting collaborative development activities in this context, as security is a cross-cutting concern that breaks object or component modularity. In this thesis we investigate several modularization techniques that address these issues. We first introduce the use of aspect-oriented programming to support secure programming in a more automated fashion and to minimize the number of vulnerabilities introduced in applications at the development phase. Our approach focuses in particular on the injection of security checks to protect against vulnerabilities such as input manipulation. We then discuss how to automate the enforcement of security policies programmatically and modularly. We first focus on access control policies in web services, whose enforcement is achieved through instrumentation of the orchestration mechanism. We then address the enforcement of privacy protection policies through the expert-assisted weaving of privacy filters into software. We finally propose a new type of aspect-oriented pointcut capturing the information flow in distributed software, which unifies the implementation of our different security modularization techniques.
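The thesis relies on aspect-oriented programming (AspectJ-style pointcuts and advice); the Python sketch below only mimics the idea with a decorator that weaves an input-validation check around business methods, to show how such a cross-cutting security concern can be kept in one module instead of being scattered across the application. The check and all names are invented for illustration.

    # Illustrative only: a decorator standing in for an aspect that weaves an
    # input-validation check around business code.
    import functools
    import re

    SQL_META = re.compile(r"(--|;|'|\bunion\b|\bdrop\b)", re.IGNORECASE)

    def reject_suspicious_input(func):
        @functools.wraps(func)
        def advice(*args, **kwargs):
            # "Before" advice: inspect every string argument at the join point.
            for value in list(args) + list(kwargs.values()):
                if isinstance(value, str) and SQL_META.search(value):
                    raise ValueError(f"suspicious input rejected: {value!r}")
            return func(*args, **kwargs)
        return advice

    @reject_suspicious_input
    def find_user(name: str) -> str:
        # Business code stays free of security checks.
        return f"SELECT * FROM users WHERE name = '{name}'"

    print(find_user("alice"))
    # find_user("x' OR '1'='1' --") would raise ValueError before reaching the query.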
Trouessin, Gilles. "Traitements fiables de données confidentielles par fragmentation-redondance-dissémination." Toulouse 3, 1991. http://www.theses.fr/1991TOU30260.
Full text of the source
Munoz, Martine. "La protection des échanges de données informatisées." Nice, 1997. http://www.theses.fr/1997NICE0045.
Full text of the source
Preda, Stere. "Reliable context aware security policy deployment - applications to IPv6 environments." Télécom Bretagne, 2010. http://www.theses.fr/2010TELB0145.
Full text of the source
Organization networks are continuously growing to sustain newly created organizational requirements and activities. In parallel, concerns about asset protection are also increasing, and security management becomes an ever-changing task for the security officer. In this dissertation we address several key aspects of network security management and provide solutions for each of them. (1) Policy deployment based on access control models: invalid or unusable parts of the security policy are removed by a set of algorithms before the abstract policy is deployed; in this manner, the security officer's tasks are greatly simplified. (2) Formal algorithm development: the correctness of the algorithms involved in the policy deployment process is of undeniable importance, and these algorithms should be proved so that security officers can trust the automatic policy deployment process. (3) Management of contextual requirements and of limited security functionalities: the access control model should be robust enough to cover contextual requirements, but security devices are not always able to interpret contexts, and some security requirements cannot be deployed given the limited security functionalities available in the information system. (4) New IPv6 security mechanisms: dealing with this lack of functionality naturally led us to design new IPv6 security mechanisms. Our research outlines a comprehensive approach to deploying access control security policies and constitutes a step towards automatic and reliable management of network security.
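As a very small illustration of the kind of pre-deployment clean-up mentioned in point (1) above, the Python sketch below drops rules shadowed by an earlier, more general rule before the policy is handed to the devices; the rule format and helper names are invented, and the dissertation's algorithms and proofs are far more general than this.

    # Simplified sketch: remove rules shadowed by an earlier, more general rule
    # before deploying the policy. Rule format is invented for illustration.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        src: str        # source zone, "*" matches anything
        dst: str        # destination zone
        action: str     # "accept" or "deny"

    def covers(general: Rule, specific: Rule) -> bool:
        def match(a: str, b: str) -> bool:
            return a == "*" or a == b
        return match(general.src, specific.src) and match(general.dst, specific.dst)

    def remove_shadowed(rules):
        deployed = []
        for rule in rules:
            if any(covers(prev, rule) for prev in deployed):
                continue        # unreachable rule: never deployed
            deployed.append(rule)
        return deployed

    policy = [Rule("*", "dmz", "accept"),
              Rule("lan", "dmz", "deny"),    # shadowed by the first rule
              Rule("lan", "wan", "accept")]
    print(remove_shadowed(policy))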
Delaunay, Pascal. "Attaques physiques sur des algorithmes de chiffrement par flot." Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0006.
Full text of the source
Since Paul Kocher's initial publication in 1999, several side-channel attacks have been published. Most of these attacks target public-key cryptosystems and block ciphers, but only a few target stream ciphers, despite their wide use in everyday applications. After some reminders on side-channel attacks, linear and non-linear feedback shift registers, and fast correlation attacks, we first propose three fast correlation attacks targeting linear feedback shift registers that use side-channel information to improve their accuracy. Next, we present two flaws in non-linear feedback shift registers which allow full recovery of the internal state using well-chosen side-channel attacks. We finally use these vulnerabilities to mount two side-channel attacks against VEST, an eSTREAM candidate, to recover partial information about the internal state.
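For background, the Python sketch below shows a toy linear feedback shift register and the correlation statistic that fast correlation attacks exploit; the feedback taps, register size and noise model are arbitrary, and the side-channel-assisted attacks of the thesis are not reproduced here.

    # Toy illustration of the bias exploited by fast correlation attacks:
    # an LFSR output sequence correlates with a noisy keystream derived from it.
    import random

    def lfsr(state, taps, n):
        out = []
        state = list(state)
        for _ in range(n):
            out.append(state[-1])
            feedback = 0
            for t in taps:
                feedback ^= state[t]
            state = [feedback] + state[:-1]
        return out

    random.seed(0)
    init = [1, 0, 1, 1, 0, 0, 1, 0]
    clean = lfsr(init, taps=[0, 2, 3, 7], n=2000)
    # Model the nonlinear filter / noise as flipping each bit with probability 0.3.
    noisy = [b ^ (1 if random.random() < 0.3 else 0) for b in clean]

    matches = sum(1 for a, b in zip(clean, noisy) if a == b)
    print(f"correlation: {matches / len(clean):.2f} (0.5 would mean no information)")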
Oualha, Nouha. "Sécurité et coopération pour le stockage de données pair-à-pair." Paris, ENST, 2009. http://www.theses.fr/2009ENST0028.
Full text of the source
Self-organizing algorithms and protocols have recently received a lot of interest in mobile ad-hoc networks as well as in peer-to-peer (P2P) systems, as illustrated by file sharing or VoIP. P2P storage, whereby peers collectively leverage their storage resources to ensure the reliability and availability of user data, is an emerging field of application. P2P storage, however, raises far-reaching security issues that have to be dealt with, in particular with respect to peer selfishness, as illustrated by free-riding attacks. Continuous observation of peer behavior and monitoring of the storage process are important requirements for securing a storage system against such attacks. Detecting peer misbehavior requires appropriate primitives such as proofs of data possession, a form of proof of knowledge whereby the holder interactively tries to convince the verifier that it possesses some data without actually retrieving them or copying them to the verifier. We propose and review several proof of data possession protocols. In particular, we study how data verification and maintenance can be handed over to volunteers to accommodate peer churn. We then propose two mechanisms, one based on reputation and the other on remuneration, for enforcing cooperation by means of such data possession verification protocols, periodically delivered by storage peers. We assess the effectiveness of these incentives with game-theoretical techniques, discussing in particular the use of non-cooperative one-stage and repeated Bayesian games as well as evolutionary games.
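To give a flavour of the primitive involved, here is a deliberately naive challenge-response sketch of a proof of data possession based on a keyed MAC over randomly chosen blocks; it is not one of the protocols proposed in the thesis, and unlike a real scheme the verifier below still keeps a copy of the data.

    # Naive challenge-response sketch of a proof of data possession.
    import hashlib
    import hmac
    import os
    import random

    BLOCK = 1024

    def split_blocks(data: bytes):
        return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

    def prove(blocks, key: bytes, challenge) -> bytes:
        # Holder side: MAC the challenged blocks together with fresh nonces.
        mac = hmac.new(key, digestmod=hashlib.sha256)
        for index, nonce in challenge:
            mac.update(nonce + blocks[index])
        return mac.digest()

    def verify(blocks, key: bytes, challenge, proof) -> bool:
        # Verifier side (here it still holds the data, which real PDP schemes avoid).
        return hmac.compare_digest(prove(blocks, key, challenge), proof)

    data = os.urandom(10 * BLOCK)
    blocks = split_blocks(data)
    key = os.urandom(32)
    challenge = [(random.randrange(len(blocks)), os.urandom(16)) for _ in range(3)]
    assert verify(blocks, key, challenge, prove(blocks, key, challenge))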
Humphries, Christopher. "User-centred security event visualisation." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S086/document.
Full text of the source
Managing the vast quantities of data generated in the context of information system security becomes more difficult every day. Visualisation tools are one solution to help face this challenge: they represent large quantities of data in a synthetic and often aesthetic way to help understand and manipulate them. In this document, we first present a classification of security visualisation tools according to their objectives, which can be one of three: monitoring (following events in real time to identify attacks as early as possible), analysis (a posteriori exploration and manipulation of a large quantity of data to discover important events) or reporting (a posteriori representation of known information in a clear and synthetic fashion to support communication and transmission). We then present ELVis, a tool capable of coherently representing security events from various sources. ELVis automatically proposes appropriate representations according to the type of information (time, IP address, port, data volume, etc.) and can be extended to accept new data sources. Lastly, we present CORGI, a successor to ELVis which allows the simultaneous manipulation of multiple data sources in order to correlate them. With CORGI, it is possible to filter security events from a data source by multiple criteria, which facilitates tracking events on the information systems under analysis.
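Purely as an illustration of the kind of multi-criteria filtering described above (a Python sketch; none of ELVis' or CORGI's actual interfaces is reproduced, and the event fields and records are invented):

    # Illustrative multi-criteria filter over security events.
    from datetime import datetime

    events = [
        {"time": datetime(2015, 3, 1, 10, 4), "src_ip": "10.0.0.5", "port": 22,  "bytes": 4200},
        {"time": datetime(2015, 3, 1, 10, 7), "src_ip": "10.0.0.9", "port": 445, "bytes": 98000},
        {"time": datetime(2015, 3, 1, 11, 2), "src_ip": "10.0.0.5", "port": 445, "bytes": 120},
    ]

    def filter_events(events, **criteria):
        # Each keyword argument is a predicate applied to the corresponding event field.
        return [e for e in events
                if all(pred(e[field]) for field, pred in criteria.items())]

    suspicious = filter_events(events,
                               port=lambda p: p == 445,
                               bytes=lambda b: b > 1000)
    print(suspicious)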