Dissertations / Theses on the topic 'Sécurité Web'
Consult the top 50 dissertations / theses for your research on the topic 'Sécurité Web.'
Luo, Zhengqin. "Sémantique et sécurité des applications Web." Nice, 2011. http://www.theses.fr/2011NICE4058.
In this work we study the formal semantics and security problems of Web applications. The thesis is divided into three parts. The first part proposes a small-step operational semantics for the multitier programming language HOP, which can be used to reason globally about Web applications. The semantics covers a core of the HOP language, including dynamic generation of client code and interactions between servers and clients. The second part studies a new technique to automatically prevent code injection attacks, based on multitier compilation. We add a new phase in the compiler to compare the intended and the actual syntactic structure of the output. The validity of our technique is proved correct with respect to the operational semantics of HOP. The last part of the thesis studies Mashic, a source-to-source compiler for JavaScript that isolates untrusted scripts using iframe sandboxes and the HTML5 postMessage API. The compiler is proved correct in a formal semantics of JavaScript.
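The second contribution's injection check can be illustrated compactly. Below is a hedged Python sketch (not the HOP compiler; all names are ours): the intended syntactic structure of an HTML output, obtained by rendering the template with an inert placeholder, is compared to the structure obtained with the actual data, and any divergence signals injected markup.

```python
# Illustrative sketch (not the HOP compiler): detect injection by comparing
# the intended syntactic structure of an HTML output (template with an inert
# placeholder) against the structure obtained with the actual user data.
from html.parser import HTMLParser

class StructureCollector(HTMLParser):
    """Records the sequence of start/end tags, ignoring text content."""
    def __init__(self):
        super().__init__()
        self.shape = []
    def handle_starttag(self, tag, attrs):
        self.shape.append(("open", tag))
    def handle_endtag(self, tag):
        self.shape.append(("close", tag))

def structure(markup: str):
    collector = StructureCollector()
    collector.feed(markup)
    return collector.shape

def injection_detected(template: str, value: str) -> bool:
    # The placeholder stands for "plain text only"; if substituting the real
    # value changes the tag structure, the value injected markup.
    intended = structure(template.format(data="PLACEHOLDER"))
    actual = structure(template.format(data=value))
    return intended != actual

template = "<p>Hello, {data}!</p>"
print(injection_detected(template, "Alice"))                    # False
print(injection_detected(template, "<script>evil()</script>"))  # True
```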
Zahoor, Ehtesham. "Gouvernance de service : aspects sécurité et données." Phd thesis, Université Nancy II, 2011. http://tel.archives-ouvertes.fr/tel-00643552.
Somé, Dolière Francis. "Sécurité et vie privée dans les applications web." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4085/document.
In this thesis, we studied security and privacy threats in web applications and browser extensions. There are many attacks targeting the web, of which XSS (Cross-Site Scripting) is one of the most notorious. Third-party tracking is the ability of an attacker to benefit from its presence in many web applications in order to track a user as she browses the web and build her browsing profile. Extensions are third-party software that users install to extend their browser's functionality and improve their browsing experience. Malicious or poorly programmed extensions can be exploited by attackers in web applications in order to benefit from extensions' privileged capabilities and access sensitive user information. Content Security Policy (CSP) is a security mechanism for mitigating the impact of content injection attacks in general and XSS in particular. The Same Origin Policy (SOP) is a security mechanism implemented by browsers to isolate web applications of different origins from one another. In a first work on CSP, we analyzed the interplay of CSP with SOP and demonstrated that the latter allows the former to be bypassed. We then scrutinized the three CSP versions and found that a CSP is interpreted differently depending on the browser, the version of CSP it implements, and how compliant the implementation is with the specification. To help developers deploy effective policies that encompass all these differences in CSP versions and browser implementations, we proposed the deployment of dependency-free policies that effectively protect against attacks in all browsers. Finally, previous studies have identified many limitations of CSP. We reviewed the different solutions proposed in the wild and showed that they do not fully mitigate the identified shortcomings of CSP. We therefore proposed to extend the CSP specification and showed the feasibility of our proposals with an example implementation. Regarding third-party tracking, we introduced and implemented a tracking-preserving architecture that can be deployed by web developers willing to include third-party content in their applications while preventing tracking. Intuitively, third-party requests are automatically routed to a trusted middle-party server which removes tracking information from the requests. Finally, considering browser extensions, we first showed that the extensions a user installs and the websites she is logged into can serve to uniquely identify and track her. We then studied the communications between browser extensions and web applications and demonstrated that malicious or poorly programmed extensions can be exploited by web applications to benefit from extensions' privileged capabilities. We also demonstrated that extensions can disable the Same Origin Policy by tampering with CORS headers. All this enables web applications to read sensitive user information. To mitigate these threats, we proposed countermeasures, a more fine-grained permission system, and a review process for browser extensions. We believe this can help browser vendors identify malicious extensions and warn users about the threats posed by the extensions they install.
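To make the "dependency-free" idea concrete, here is a minimal Python sketch, under our own assumptions rather than the thesis's exact transformation: one known divergence between CSP versions and browsers is how the default-src fallback is applied, so the sketch rewrites a policy to make every fetch directive explicit and leave no fallback resolution to the browser.

```python
# Minimal sketch (our simplification, not the thesis tool): make a CSP
# "dependency-free" by removing reliance on the default-src fallback,
# which browsers and CSP versions resolve differently.
FETCH_DIRECTIVES = ["script-src", "style-src", "img-src", "connect-src",
                    "frame-src", "object-src"]

def parse_csp(policy: str) -> dict:
    directives = {}
    for part in policy.split(";"):
        tokens = part.split()
        if tokens:
            directives[tokens[0]] = tokens[1:]
    return directives

def make_explicit(policy: str) -> str:
    """Copy the default-src sources into every fetch directive that is
    absent, so no browser has to apply its own fallback logic."""
    directives = parse_csp(policy)
    fallback = directives.get("default-src", ["'none'"])
    for name in FETCH_DIRECTIVES:
        directives.setdefault(name, list(fallback))
    return "; ".join(f"{name} {' '.join(srcs)}"
                     for name, srcs in directives.items())

print(make_explicit("default-src 'self'; script-src 'self' cdn.example.com"))
```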
Kamel, Nassima. "Sécurité des cartes à puce à serveur Web embarqué." Limoges, 2012. https://aurore.unilim.fr/theses/nxfile/default/9dc553cd-e9df-4530-a716-d3191d68dfa0/blobholder:0/2012LIMO4039.pdf.
Smart cards are widely used secure devices in today's world, which can store data in a secure manner and ensure data security during transactions. The success of smart cards is mainly due to their tamper-resistant nature, which allows them to store sensitive data such as cryptographic keys, and they are used in multiple secure domains such as banking and health insurance. More and more research is therefore taking place on their security and on attacks against them. The latest generation of smart cards includes an embedded web server. There are two types of specifications for these devices: the first, defined by the OMA organisation, proposes a simple HTTP web server named Smart Card Web Server (SCWS); the second, proposed by Sun Microsystems (currently Oracle), consists of a Java Card 3 Connected Edition platform that includes a Java Servlet 2.4 API together with an improved Java Card API and security features. In addition to the network benefiting from the robustness of the smart card, the use of web standards provides a continuous user experience, equivalent to that of surfing the internet, and enhances the look and feel of GUI interfaces. The GUI interfaces are accessible from a browser located on the terminal to which the card is connected. However, in addition to classical attacks (physical and logical), the integration of a web server exposes the smart card to existing classical web application attacks. The most important one is the cross-site scripting attack, also named XSS. It consists of injecting malicious data into the inputs of a web application; if the resource returned to the browser includes the malicious code, it will be interpreted and executed, causing an attack. A web application is vulnerable to XSS if it uses untrusted data without first filtering malicious characters. On the other hand, to ensure communication between web applications and the browser or other network entities, it is necessary to integrate protocols such as HTTP, BIP or TCP/IP into the smart card. Vulnerabilities in the implementation of these protocols can facilitate attacks. Our contribution in this thesis is divided into two parts. In the first part, we are interested in the security of web applications against XSS attacks. We propose a static analysis tool, based on a tainting approach, that verifies whether a web application is secure, i.e. whether it filters data at all insertion points where XSS is possible. We also implement a filter API, compatible with the Java Card 3 platform, that developers can import when developing their applications. The second part consists of verifying the conformance and robustness of the implemented HTTP protocol. For that purpose we propose an intelligent fuzzing tool that includes a set of optimisations to reduce fuzzing time.
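As a feel for the filter API's role, here is a hedged Python analogue (the thesis delivers a Java Card 3 API; the Tainted wrapper and render helper below are illustrative names of ours): data from untrusted sources is carried in a wrapper and escaped at every insertion point.

```python
# Illustrative Python analogue of an XSS output filter (the thesis provides
# a Java Card 3 API; the names below are ours): untrusted input is carried
# as a Tainted wrapper and must be escaped before insertion into HTML.
import html

class Tainted:
    """Wraps a value read from an untrusted source (e.g. an HTTP parameter)."""
    def __init__(self, value: str):
        self.value = value

def render(template: str, **params) -> str:
    safe = {}
    for name, value in params.items():
        if isinstance(value, Tainted):
            # Escape <, >, &, quotes so the data cannot break out of
            # the surrounding HTML context.
            safe[name] = html.escape(value.value, quote=True)
        else:
            safe[name] = value
    return template.format(**safe)

page = render("<p>Query: {q}</p>", q=Tainted('<img src=x onerror=alert(1)>'))
print(page)  # <p>Query: &lt;img src=x onerror=alert(1)&gt;</p>
```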
Scholte, Theodoor. "Amélioration de la sécurité par la conception des logiciels web." Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0024/document.
The web has become a backbone of our industry and daily life. The growing popularity of web applications and services and the increasing number of critical transactions being performed have raised security concerns. For this reason, much effort has been spent over the past decade to make web applications more secure. Despite these efforts, recent data from the SANS Institute estimates that up to 60% of Internet attacks target web applications, and critical vulnerabilities such as cross-site scripting and SQL injection are still very common. In this thesis, we conduct two empirical studies on a large number of web application vulnerabilities with the aim of gaining deeper insight into how input validation flaws have evolved in the past decade and how these common vulnerabilities can be prevented. Our results suggest that the complexity of the attacks has not changed significantly and that many web problems are still simple in nature. Our studies also show that most SQL injection and a significant number of cross-site scripting vulnerabilities can be prevented using straightforward validation mechanisms based on common data types. With these empirical results as a foundation, we present IPAAS, which helps developers who are unaware of security issues to write more secure web applications than they otherwise would. It includes a novel technique for preventing the exploitation of cross-site scripting and SQL injection vulnerabilities based on automated data type detection of input parameters. We show that this technique results in significant and tangible security improvements for real web applications.
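The following sketch illustrates the type-based validation idea behind IPAAS in Python, with our own naming and toy patterns rather than the thesis's actual detector: parameter types are learned from benign samples, and later values that no longer match the learned type are rejected.

```python
# Sketch of the type-based validation idea under our own naming: learn the
# data type of each input parameter from past benign requests, then reject
# values that no longer match the learned type.
import re

TYPE_PATTERNS = {                      # ordered from most to least specific
    "integer": re.compile(r"^-?\d+$"),
    "email":   re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "token":   re.compile(r"^[\w-]+$"),
}

def infer_type(samples):
    for name, pattern in TYPE_PATTERNS.items():
        if all(pattern.match(s) for s in samples):
            return name
    return "free-text"   # no type validation possible; other defenses needed

def validate(value, learned_type):
    if learned_type == "free-text":
        return True
    return bool(TYPE_PATTERNS[learned_type].match(value))

learned = infer_type(["42", "17", "1003"])          # -> "integer"
print(validate("99", learned))                      # True
print(validate("99 OR 1=1", learned))               # False: SQLi blocked
```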
Mohamed, El-Marouf Ahmed. "Mesure de distance entre politiques de sécurité dans un service Web." Master's thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/25929.
The main contribution of this thesis is to suggest a new method for measuring the similarity between security policies written in XACML. This is done in two steps: first, the security policy is formalized in SPL; second, the result is used to measure the distance between policies. The choice of distance depends on the type of predicate (categorical or numeric), and a summary table links the different metrics to the predicate types they apply to. A prototype was coded in PHP to validate our contribution. The conclusion offers recommendations for enriching the proposed approach.
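A minimal sketch of the two-metric idea, in Python and with our own formulation rather than the thesis's exact metrics: categorical predicates are compared with a Jaccard distance, numeric ones with a normalized absolute difference, and the per-attribute distances are averaged.

```python
# Hedged sketch (our formulation, not the thesis's metrics): categorical
# predicates use a Jaccard distance, numeric ones a normalized absolute
# difference, and the per-attribute distances are averaged.
def jaccard_distance(a: set, b: set) -> float:
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def numeric_distance(x: float, y: float, scale: float) -> float:
    return min(abs(x - y) / scale, 1.0)

def policy_distance(p1: dict, p2: dict, numeric_scales: dict) -> float:
    distances = []
    for attr in set(p1) | set(p2):
        v1, v2 = p1.get(attr), p2.get(attr)
        if v1 is None or v2 is None:
            distances.append(1.0)                     # attribute missing
        elif attr in numeric_scales:
            distances.append(numeric_distance(v1, v2, numeric_scales[attr]))
        else:
            distances.append(jaccard_distance(set(v1), set(v2)))
    return sum(distances) / len(distances) if distances else 0.0

rule_a = {"roles": ["admin", "auditor"], "max_session_minutes": 30}
rule_b = {"roles": ["admin"], "max_session_minutes": 60}
print(policy_distance(rule_a, rule_b, {"max_session_minutes": 120}))  # 0.375
```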
Mekki, Mohamed-Anis. "Synthèse et compilation de services web sécurisés." Thesis, Nancy 1, 2011. http://www.theses.fr/2011NAN10123/document.
Automatic composition of web services is a challenging task. Many works have considered simplified automata models that abstract away the structure of the messages exchanged by the services. For the domain of secured services, we propose a novel approach to automated composition based on their security policies. Given a community of services and a goal service, we reduce the problem of composing the goal from services in the community to a security problem in which an intruder, called the mediator, must intercept and redirect messages from the service community and a client service until a satisfying state is reached. We have implemented the algorithm in the AVANTSSAR Platform and applied the tool to several case studies. We then present a tool that compiles the obtained trace describing the execution of the mediator into its corresponding runnable code. For that, we first compute an executable specification, as prudent as possible, of the mediator's role in the orchestration. This specification is expressed in ASLan, a formal language designed for modeling Web Services tied to security policies. We can then check with automatic tools that this ASLan specification verifies required security properties such as secrecy and authentication. If no flaw is found, we compile the specification into a Java servlet that the mediator can use to run the orchestration.
Ouedraogo, Wendpanga Francis. "Gestionnaire contextualisé de sécurité pour des « Process 2.0 »." Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0132/document.
To fit a competitive and globalized economic environment, companies, especially SMEs/SMIs, are increasingly involved in collaborative strategies, requiring organizational adaptation to fit these openness constraints and increase agility (i.e., the ability to adapt to structural changes). While Web 2.0 allows sharing data (images, knowledge, CVs, micro-blogging, etc.) and SOA aims at increasing service reuse and interoperability, no process-sharing strategy has been developed. To overcome this limit, we propose to share processes as well, setting up a "Process 2.0" framework for sharing activities. This supports agile collaborative process enactment by searching for and composing services depending on the required business organization and service semantics. Coupled with the deployment opportunity offered by cloud computing, this strategy couples the Business, SaaS and PaaS levels more strongly. However, it challenges the management of security constraints in a dynamic environment. The development of security policies is usually based on a systematic risk analysis, reducing risks by adopting appropriate countermeasures. These approaches are complex and, as a consequence, difficult for end users to implement. Moreover, risks are assessed in a "closed" and static environment, so these methods do not fit the dynamic composition of business services, as services can be composed and run in different business contexts (including the functionality provided by each service, the organization (who does what?), the coordination between these services, and the kind of data (strategic or not) that is used and exchanged) and runtime environments (public vs. private platforms). By analyzing this contextual information, we can define security constraints specific to each business service, specify convenient security policies, and implement appropriate countermeasures. In addition, it is necessary to propagate the security policies throughout the process to ensure consistency and overall security during execution. To address these issues, we propose to define security policies by coupling Model-Driven Security with a pattern-based engineering approach, in order to generate and deploy convenient security policies and protection means depending on the (possibly untrusted) runtime environment. To this end, we propose a set of security patterns that meet business- and platform-related security needs. The selection and implementation of these security policies is achieved through context-based patterns. Simple to understand for non-specialists, these patterns are used by the model transformation process to generate policies in a Models@Runtime strategy, so that security services are selected and orchestrated at runtime to provide a constant quality of protection, independent of the deployment.
Makiou, Abdelhamid. "Sécurité des applications Web : Analyse, modélisation et détection des attaques par apprentissage automatique." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0084/document.
Web applications are the backbone of modern information systems. The Internet exposure of these applications continually generates new forms of threats that can jeopardize the security of the entire information system. To counter these threats, robust and feature-rich solutions exist, based on well-proven attack detection models, each with its own advantages and limitations. Our work integrates the capabilities of several models into a single solution in order to increase detection capacity. To achieve this objective, we first define a classification of threats adapted to the context of web applications. This classification also serves to solve some scheduling problems among analysis operations during the attack detection phase. In a second contribution, we propose a web application firewall architecture based on two analysis models: a behavioral analysis module and a signature inspection module. The main challenge addressed by this architecture is adapting the behavioral analysis model to the context of web applications. We respond to this challenge by modeling malicious behavior, so that each attack class gets its own model of abnormal behavior. To construct these models, we use classifiers based on supervised machine learning, which learn the deviant behaviors of each attack class from training datasets. A second obstacle, the availability of learning data, was thereby also addressed: in a final contribution, we defined and designed a platform for automatic generation of training datasets. The data generated by this platform is standardized and categorized for each attack class, and the generation model we developed is able to learn "from its own errors" continuously in order to produce higher-quality machine learning datasets.
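One building block of the behavioral module can be sketched with scikit-learn on toy data (the payloads, features and model choice below are ours, not the thesis's datasets or classifiers): a supervised classifier learns the character patterns of one attack class, here SQL injection.

```python
# Minimal sketch of one behavioral-analysis building block (toy data, not
# the thesis datasets): a supervised classifier learns the character
# patterns of one attack class (SQL injection) from labeled payloads.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

payloads = ["id=42", "page=home", "user=alice",
            "id=1 OR 1=1", "name=' UNION SELECT password FROM users--",
            "q=1; DROP TABLE logs"]
labels = [0, 0, 0, 1, 1, 1]          # 0 = benign, 1 = SQL injection

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(),
)
model.fit(payloads, labels)

print(model.predict(["item=7", "item=7' OR '1'='1"]))  # expect [0, 1]
```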
Al-Kassar, Feras. "Testability Tarpits - Navigating the Challenges of Static Tools in Web Applications." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS675.
The goal of this thesis was to evaluate the effectiveness of a combination of commercial and open-source security scanners. Through experimentation, we identified various code patterns that hinder the ability of state-of-the-art tools to analyze projects. By detecting these patterns during the software development lifecycle, our approach can offer valuable feedback to developers regarding the testability of their code. It also enables them to more accurately evaluate the residual risk that their code might still contain vulnerabilities even when static analyzers report no findings. Our approach further suggests alternative ways to transform the code and enhance its testability for SAST.
Boumlik, Laila. "Renforcement formel et automatique de politiques de sécurité dans la composition des services Web." Doctoral thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/69595.
The Web services orchestration approach described by the Web Services Business Process Execution Language (WS-BPEL) is now an integral part of the modern Web, including cloud computing, Big Data, the Internet of Things (IoT) and social networks. Indeed, it is at the center of many information systems in a variety of domains, such as e-commerce, financial institutions and healthcare systems, where sensitive data is shared, which creates significant security issues. WS-BPEL, also called BPEL, is the standard language for building complex Web services in a practical way. However, BPEL is not rigorously defined as a formal language, leading to ambiguity and confusion when interpreting it. Moreover, without a formal basis, no proof can be provided to guarantee that services function properly. This thesis addresses the formalization of BPEL and presents a formal approach based on program rewriting for enforcing security policies on this language. More precisely, given a composition of Web services specified in BPEL and a security policy described in a temporal logic such as LTL, our approach generates a new version of the Web service that respects the given security policy. The new version behaves exactly like the original except when the policy is about to be violated, in which case the process can take other actions or simply be stopped. The formalization of BPEL has also been translated into the K-Framework environment, which opens the door to its many formal tools, including a model checker, for the analysis of Web services.
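The enforcement idea can be conveyed with a toy runtime monitor, far simpler than the thesis's BPEL rewriting and LTL machinery (all names are illustrative): the process executes normally but is halted just before an action would violate the property "no ship before pay".

```python
# Toy illustration of enforcement by rewriting/monitoring (ours, far
# simpler than the BPEL/LTL machinery of the thesis): a wrapped process
# executes normally but is halted just before an action would violate the
# property "no 'ship' action may occur before a 'pay' action".
class PolicyViolation(Exception):
    pass

class MonitoredProcess:
    def __init__(self, actions):
        self.actions = actions
        self.paid = False

    def run(self):
        for action in self.actions:
            if action == "ship" and not self.paid:
                # Enforcement point: stop instead of violating the policy.
                raise PolicyViolation("'ship' attempted before 'pay'")
            if action == "pay":
                self.paid = True
            print("executed:", action)

MonitoredProcess(["order", "pay", "ship"]).run()       # runs to completion
try:
    MonitoredProcess(["order", "ship"]).run()
except PolicyViolation as e:
    print("stopped:", e)
```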
Mendes, Suzan. "Sécurité des Communications Ouvertes : Vers une Infrastructure Globale pour les Services d'Authentification." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 1995. http://tel.archives-ouvertes.fr/tel-00821147.
Rabhi, Issam. "Testabilité des services Web." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00738936.
Serme, Gabriel. "Modularisation de la sécurité informatique dans les systèmes distribués." Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0063.
Addressing security in the software development lifecycle is still an open issue today, especially in distributed software. Addressing security concerns requires specific know-how, which means that security experts must collaborate with application programmers to develop secure software. Object-oriented and component-based development is commonly used to support collaborative development and to improve scalability and maintenance in software engineering. Unfortunately, these programming styles do not lend themselves well to collaborative development in this context, as security is a cross-cutting concern that breaks object or component modules. In this thesis we investigated several modularization techniques that address these issues. We first introduce the use of aspect-oriented programming to support secure programming in a more automated fashion and to minimize the number of vulnerabilities introduced at the development phase. Our approach focuses in particular on the injection of security checks to protect against vulnerabilities such as input manipulation. We then discuss how to enforce security policies programmatically and modularly. We first focus on access control policies in web services, whose enforcement is achieved through instrumentation of the orchestration mechanism. We then address the enforcement of privacy protection policies through the expert-assisted weaving of privacy filters into software. We finally propose a new type of aspect-oriented pointcut capturing the information flow in distributed software, to unify the implementation of our different security modularization techniques.
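Python has no aspect weaver, so a decorator can stand in for the aspect in a hedged analogue of the first technique (the handler and advice below are our invention): the same input-sanitization advice is applied to every request handler without touching the handlers' own code.

```python
# Decorator standing in for an aspect (a hedged analogue of the thesis
# approach, which weaves checks into the code): the same input-sanitization
# "advice" is applied to every request handler without modifying it.
import functools, html

def sanitize_inputs(handler):
    """'Advice' escaping every string argument before the handler runs."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        clean_args = [html.escape(a) if isinstance(a, str) else a
                      for a in args]
        clean_kwargs = {k: html.escape(v) if isinstance(v, str) else v
                        for k, v in kwargs.items()}
        return handler(*clean_args, **clean_kwargs)
    return wrapper

@sanitize_inputs
def search(query):
    return f"<p>Results for {query}</p>"

print(search("<script>steal()</script>"))  # script tag arrives escaped
```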
Chalouf, Mohamed Aymen. "Offre de service dans les réseaux de nouvelle génération : négociation sécurisée d’un niveau de service de bout en bout couvrant la qualité de service et la sécurité." Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13905/document.
Based on IP technology, the next-generation network (NGN) must overcome the main drawbacks of this technology, namely the lack of quality of service (QoS), security and mobility management. To ensure a service offer in an NGN, a protocol for negotiating the service level can be used. However, most existing negotiation protocols establish a service level that covers only QoS; security and mobility are often left out of the negotiation and managed independently. Yet securing a service can degrade QoS, and a user's mobility can change the service needs in terms of QoS and security. We therefore need to manage QoS and security simultaneously while taking user mobility into account. In this context, we propose a signaling protocol that allows fixed and mobile users to negotiate a service level covering both QoS and security in a dynamic, automatic and secure manner. Our contribution proceeds in three steps. Initially, we build on a signaling protocol that performs QoS negotiation using web services, extending it to negotiate both security and QoS while taking into account the impact of security on QoS. Then, this negotiation is automated by basing it on a user profile, which allows adjusting the service level according to changes in the user's context. The service offer is thus more dynamic and can adapt to changes of access network resulting from user mobility. Finally, we propose to secure the negotiation flows in order to prevent attacks targeting the messages exchanged during a negotiation.
Canali, Davide. "Plusieurs axes d'analyse de sites web compromis et malicieux." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0009/document.
The incredible growth of the World Wide Web has allowed society to create new jobs and marketplaces, as well as new ways of sharing information and money. Unfortunately, however, the web also attracts miscreants who see it as a means of making money by abusing services and other people's property. In this dissertation, we perform a multidimensional analysis of attacks involving malicious or compromised websites, observing that, while web attacks can be very complex in nature, they generally involve four main actors: the attackers, the vulnerable websites hosted on the premises of hosting providers, the web users who end up being victims of attacks, and the security companies who scan the Internet trying to block malicious or compromised websites. In particular, we first analyze web attacks from a hosting provider's point of view, showing that, while simple and free security measures should allow simple signs of compromise on customers' websites to be detected, most hosting providers fail to do so. Second, we switch our point of view to the attackers, studying their modus operandi and their goals in a distributed experiment involving the collection of attacks performed against hundreds of vulnerable websites. Third, we observe the behavior of victims of web attacks, based on the analysis of their browsing habits. This allows us to understand whether it would be feasible to build risk profiles for web users, similar to what insurance companies do. Finally, we adopt the point of view of security companies and focus on finding an efficient solution for detecting web attacks that spread on compromised websites and infect thousands of web users every day.
Duchene, Fabien. "Detection of web vulnerabilities via model inference assisted evolutionary fuzzing." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM022/document.
Testing is a viable approach for detecting implementation bugs that have a security impact, a.k.a. vulnerabilities. When the source code is not available, it is necessary to use black-box testing techniques. We address the problem of automatically detecting a certain class of vulnerabilities (Cross-Site Scripting, a.k.a. XSS) in web applications in a black-box test context. We propose an approach for inferring models of web applications and fuzzing from such models and an attack grammar. We infer control-plus-taint-flow automata, from which we produce slices that narrow the fuzzing search space. Genetic algorithms are then used to schedule the malicious inputs sent to the application. We incorporate a test verdict by performing a double taint inference on the browser parse tree and combining this with taint-aware vulnerability patterns. Our implementations, LigRE and KameleonFuzz, outperform current open-source black-box scanners. We discovered 0-day XSS (i.e., previously unknown vulnerabilities) in web applications used by millions of users.
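The evolutionary loop can be outlined in a few lines of Python; this is a drastic simplification of KameleonFuzz, with a mock target and a made-up fitness function standing in for the taint-based verdict: candidate inputs are mutated and selected by how well their attack tokens survive, unescaped, in the response.

```python
# Skeleton of an evolutionary fuzzing loop in the spirit of KameleonFuzz
# (the mock target and fitness function are our simplifications): candidate
# XSS inputs are mutated and selected by how well their attack tokens
# survive, unescaped, in the application's response.
import random

MUTATIONS = ["<script>", "javascript:", "onerror=", "'>", '">', "<svg "]

def mock_application(payload: str) -> str:
    # Stand-in for the real target: reflects input but strips "<script>".
    return "<div>" + payload.replace("<script>", "") + "</div>"

def fitness(payload: str) -> int:
    response = mock_application(payload)
    # Reward payloads whose attack tokens reach the output intact.
    return sum(token in response for token in MUTATIONS)

def mutate(payload: str) -> str:
    return payload + random.choice(MUTATIONS)

population = ["probe"] * 8
for generation in range(20):
    # Mutate every individual, keep the fittest half, then refill.
    population = sorted((mutate(p) for p in population), key=fitness,
                        reverse=True)[:4] * 2
best = max(population, key=fitness)
print(best, "fitness:", fitness(best))
```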
Fonda, Maxime. "Protection obligatoire des serveurs d'applications Web : application aux processus métiers." Phd thesis, Université d'Orléans, 2014. http://tel.archives-ouvertes.fr/tel-01069411.
Vastel, Antoine. "Traçage versus sécurité : explorer les deux facettes des empreintes de navigateurs." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I097/document.
Nowadays, a wide range of devices can browse the web, from smartphones and desktop computers to connected TVs. To improve their browsing experience, users also customize settings in their browser, such as displaying the bookmark bar or their preferred languages. Customization and the diversity of devices are at the root of browser fingerprinting. Indeed, to manage this diversity, websites can access attributes of the device using JavaScript APIs, without asking for user consent. The combination of such attributes is called a browser fingerprint and has been shown to be highly unique, making fingerprinting a suitable tracking technique. Its stateless nature also makes it suitable for enhancing authentication or detecting bots. In this thesis, I report three contributions to the browser fingerprinting field: 1. I collect 122K fingerprints from 2,346 browsers and study their stability over more than 2 years, showing that, despite frequent changes in the fingerprints, a significant fraction of browsers can be tracked over a long period of time; 2. I design a test suite to evaluate fingerprinting countermeasures. I apply it to 7 countermeasures, some of them claiming to generate consistent fingerprints, and show that all of them can be identified, which can make their users more identifiable; 3. I explore the use of browser fingerprinting for crawler detection. I measure its use in the wild, as well as the main detection techniques. Since fingerprints are collected on the client side, I also evaluate its resilience against an adversarial crawler developer who tries to modify the crawler's fingerprints to bypass security checks.
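How a fingerprint is derived from JavaScript-readable attributes can be shown in a few lines (the attribute names below are typical examples, not the thesis's exact feature list): the attributes are canonicalized and hashed into a stateless identifier.

```python
# Minimal sketch of fingerprint derivation (attribute names are typical
# examples, not the thesis's feature list): attributes readable from
# JavaScript APIs are concatenated and hashed into an identifier that
# needs no cookie.
import hashlib

def fingerprint(attributes: dict) -> str:
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "userAgent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/68.0",
    "timezone": "Europe/Paris",
    "screen": "1920x1080x24",
    "languages": "fr-FR,en-US",
    "canvasHash": "a91fd3",
}
print(fingerprint(browser))   # stable as long as the attributes are stable
```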
Pellegrino, Giancarlo. "Détection d'anomalies logiques dans les logiciels d'entreprise multi-partis à travers des tests de sécurité." Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0064.
Multi-party business applications are distributed computer programs implementing collaborative business functions. These applications are one of the main targets of attackers, who exploit vulnerabilities in order to perform malicious activities. The most prevalent classes of vulnerabilities are the consequence of insufficient validation of user-provided input. However, the lesser-known class of logic vulnerabilities has recently attracted the attention of researchers. Depending on the availability of software documentation, two testing techniques can be used: design verification via model checking, and black-box security testing. However, the former offers no support for testing real implementations, and the latter lacks the sophistication to detect logic flaws. In this thesis, we present two novel security testing techniques for detecting logic flaws in multi-party business applications that tackle the shortcomings of the existing techniques. First, we present the verification via model checking of two security protocols. We then address the challenge of extending the results of the model checker to automatically test protocol implementations. Second, we present a novel black-box security testing technique that combines model inference, extraction of workflow and data flow patterns, and an attack-pattern-based test case generation algorithm. Finally, we discuss the application of the techniques developed in this thesis in an industrial setting. We used these techniques to discover previously unknown design errors in the SAML SSO and OpenID protocols, and ten logic vulnerabilities in eCommerce applications allowing an attacker to pay less or shop for free.
Hossen, Karim. "Inférence automatique de modèles d'applications Web et protocoles pour la détection de vulnérabilités." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM077/document.
In the last decade, model-based testing (MBT) approaches have shown their efficiency in the software testing domain, but a formal model of the system under test (SUT) is required and is, most of the time, unavailable for reasons of cost, time or rights. The goal of the SPaCIoS project is to develop a security testing tool using an MBT approach. The goal of this work, funded by the SPaCIoS project, is to develop and implement a model inference method for Web applications and protocols. From the inferred model, vulnerability detection can be done following the SPaCIoS model-checking method or methods we have developed. We developed an inference algorithm adapted to Web applications and their properties, which takes into account application data and its influence on the control flow. Using data mining algorithms, the inferred model is refined with optimized guards and output functions. We also worked on automating the inference. Active learning approaches require knowledge of the complete interface of the system in order to communicate with it. As this step can be time-consuming, it has been automated using a crawler and an interface extraction method optimized for inference; the crawler is also available standalone for third-party inference tools. In the complete inference algorithm, we merged the inference algorithm and the interface extraction into an automatic procedure. We present the free software SIMPA, containing the algorithms, and show some of the results obtained on SPaCIoS case studies and protocols.
Fonda, Maxime. "Protection obligatoire des serveurs d’applications Web : application aux processus métiers." Thesis, Orléans, 2014. http://www.theses.fr/2014ORLE2011/document.
This thesis focuses on mandatory access control in Web application servers. We present a novel approach to mandatory protection based on an abstract Web application model. Existing models of Web applications, such as SOA, fit our abstract model. Our mandatory protection uses a dedicated language for expressing the security requirements of a Web application; this protection language uses our Web application model to efficiently control the access of subjects to the objects of a Web application. We establish a method to automatically compute the required security policies, thus facilitating the administration of the mandatory protection. An implementation for Microsoft environments uses the IIS Web server and the .NET Framework. The solution is independent of the Web applications it protects, since it uses an application adaptor to interface our mandatory protection with the applications. This implementation fully runs on the workflow environments of the QualNet company, which co-funded this Ph.D. thesis. Experiments show that our mandatory protection supports large-scale environments, since the overhead is near 5% and decreases as the size of the application increases.
Chamoun, Maroun. "Intégration de l'Internet 3G au sein d'une plate-forme active." Phd thesis, Télécom ParisTech, 2006. http://pastel.archives-ouvertes.fr/pastel-00001749.
Varin, Annie. "Risques technologiques et sécurité sur Internet : production d'un outil pour favoriser la prévention et fournir des pistes de solution." Mémoire, Université de Sherbrooke, 2011. http://savoirs.usherbrooke.ca/handle/11143/2674.
Majorczyk, Frédéric. "Détection d'intrusions comportementale par diversification de COTS : application au cas des serveurs web." Phd thesis, Université Rennes 1, 2008. http://tel.archives-ouvertes.fr/tel-00355366.
Our work belongs essentially to the field of intrusion detection, and also provides a degree of intrusion tolerance. Unlike classical anomaly-based detection methods, for which it is necessary to define and build a reference model of the monitored entity's behavior, we followed a method drawn from dependable computing, based on N-version programming, in which the reference model is implicit and consists of the other software components of the architecture. We propose using COTS components in place of specifically developed versions, since developing N versions is costly and is reserved for safety-critical systems. Other works and projects have proposed architectures based on these ideas. Our contributions lie at several levels. We took into account, in our general intrusion detection model, the specificities of using COTS instead of specifically developed versions, and proposed two solutions to counter the problems these specificities induce. We proposed two intrusion detection approaches based on this architecture: one following a black-box approach and the other a grey-box approach. Our grey-box method can, moreover, help the security administrator perform a first diagnosis of the alerts. We implemented both approaches for web servers and evaluated in practice the relevance and reliability of these two IDSes.
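The architecture's detection principle can be sketched as simple majority voting over diversified COTS responses (a toy reduction of the approach, with invented response digests): the same request is served by N diverse web servers and any divergence from the majority raises an alert, with no explicit model of normal behavior.

```python
# Toy version of detection by COTS diversification (our simplification):
# the same request is served by N diverse web servers and a divergence in
# their essential responses raises an alert, without an explicit model of
# normal behavior.
from collections import Counter

def detect(responses):
    """responses: list of (server_name, status, body_digest) triples."""
    votes = Counter((status, digest) for _, status, digest in responses)
    majority, count = votes.most_common(1)[0]
    alerts = [name for name, status, digest in responses
              if (status, digest) != majority]
    return majority, alerts   # alerts = servers diverging from the majority

replies = [("apache", 200, "d41d8c"), ("nginx", 200, "d41d8c"),
           ("lighttpd", 500, "09f8e1")]   # lighttpd compromised or buggy
majority, alerts = detect(replies)
print("majority:", majority, "diverging:", alerts)
```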
Brassard-Gourdeau, Éloi. "Toxicité et sentiment : comment l'étude des sentiments peut aider la détection de toxicité." Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/37564.
Automatic toxicity detection of online content is a major research field nowadays. Moderators cannot manually filter all the messages posted every day, and users constantly find new ways to circumvent classic filters. In this master's thesis, I explore the benefits of sentiment detection for three major challenges of automatic toxicity detection: standard toxicity detection, making filters harder to circumvent, and predicting conversations at high risk of becoming toxic. The first two challenges are studied in the first article. Our main intuition is that it is harder for a malicious user to hide the toxic sentiment of a message than to change a few toxic keywords. To test this hypothesis, a sentiment detection tool is built and used to measure the correlation between sentiment and toxicity. Next, sentiment is used as a feature to train a toxicity detection model, and the model is tested in both a classic and a subversive context. The conclusion of these tests is that sentiment information helps toxicity detection, especially in the presence of subversion. The third challenge is the subject of the second paper, whose objective is to validate whether the sentiment of the first messages of a conversation can help predict whether it will derail into toxicity. The same sentiment detection tool is used, in addition to other features developed in previous related work. Our results show that sentiment helps improve that task as well.
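The first article's intuition admits a tiny illustration (the lexicon and feature set below are ours, much smaller than the real tool's): the sentiment feature is computed independently of the toxic keywords, so obfuscating a keyword evades the keyword filter but not the sentiment signal.

```python
# Hedged sketch of the first article's idea (tiny lexicon, ours): the
# sentiment score is computed independently of toxic keywords, so writing
# "st*pid" instead of "stupid" evades the keyword filter but not the
# sentiment feature.
NEGATIVE = {"hate", "awful", "worst", "garbage", "idiot"}
TOXIC_KEYWORDS = {"stupid", "idiot"}

def features(message: str):
    words = message.lower().split()
    # strip() only removes leading/trailing punctuation, so an obfuscating
    # character in the middle of a word defeats the keyword match below.
    sentiment = -sum(w.strip("*!.,") in NEGATIVE for w in words)
    keyword_hits = sum(w in TOXIC_KEYWORDS for w in words)
    return {"sentiment": sentiment, "keyword_hits": keyword_hits}

print(features("you are stupid and i hate this garbage"))
print(features("you are st*pid and i hate this garbage"))  # keyword evaded,
                                                           # sentiment remains
```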
Corre, Kevin. "User controlled trust and security level of Web real-time communications." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S029/document.
In this thesis, we propose three main contributions. In our first contribution we study the WebRTC identity architecture and more particularly its integration with existing authentication delegation protocols. This integration had not been studied before. To fill this gap, we implement components of the WebRTC identity architecture and comment on the issues encountered in the process. In order to answer RQ1, we then study this specification from a privacy perspective and identify new privacy considerations related to the central position of identity providers. On the Web, the norm is the silo architecture, of which users are captive. This is even more true of authentication delegation systems, where most of the time it is not possible to freely choose an identity provider. In order to answer RQ3, we conduct a survey of the top 500 websites according to Alexa.com to identify the reasons why users cannot choose their identity provider. Our results show that while the choice of an identity provider is possible in theory, the lack of implementation of existing standards by websites and identity providers prevents users from making this choice. In our second contribution, we aim at giving more control to users. To this end, and in order to answer RQ2, we extend the WebRTC specification to allow identity parameter negotiation. We present a prototype implementation of our proposition to validate it. It reveals some limits due to the WebRTC API, in particular preventing feedback on the other peer's authentication strength. We then propose a web API allowing users to choose their identity provider in order to authenticate on a third-party website, again answering RQ2. Our API reuses components of the WebRTC identity architecture in a client-server authentication scenario. Again, we validate our proposition with a prototype implementation of our API based on a Firefox extension. Finally, in our third contribution, we look back at RQ1 and propose a trust and security model of a WebRTC session. Our model integrates into a single metric the security parameters used in session establishment, the encryption parameters of the media streams, and trust in the actors of the communication setup as defined by the user. Its objective is to help non-expert users better understand the security of their WebRTC session. To validate our approach, we conduct a preliminary study on the comprehension of our model by non-expert users, based on a web survey inviting users to interact with a dynamic implementation of our model.
Kondratyeva, Olga. "Timed FSM strategy for optimizing web service compositions w.r.t. the quality and safety issues." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLL004/document.
Service-oriented architecture (SOA), together with the family of Everything-as-a-Service (XaaS) concepts, is nowadays used almost everywhere, and the proper organization of collaborative activities becomes an important challenge. With the goal of bringing the end user a safe and reliable service with a guaranteed level of quality, the verification and validation of service compositions are of high practical and theoretical interest. In related work, numerous models and techniques have been proposed, but they mostly focus on functional and non-functional issues in isolation; integrating these parameters within a unified formal framework remains an open problem, and it therefore became one of the core objectives of this thesis. In our work, we address the problems of web service composition verification and optimization with respect to functional, quality and safety properties of the composition. Finite state models have proven useful for testing and verification purposes as well as for service quality evaluation at each step of service development. We therefore propose to use the model of Finite State Machines with Timeouts (TFSM) for integrating functional service descriptions with time-related quality and safety parameters, and derive an extension of the model in order to adequately capture the significant nondeterminism arising from the lack of observability and control over third-party component services. For the purpose of component optimization in the composition, we propose a method for deriving the largest solution containing all allowed component service implementations, based on solving a TFSM parallel equation. Further, techniques for extracting restricted solutions with required properties (minimized/maximized time parameters, deadlock- and livelock-safety, similarity to the initially given component, etc.) are proposed. In cases where the specification of a composite service is provided as a set of functional requirements, possibly augmented with quality requirements, we propose a technique to minimize this set with respect to the component under optimization. Applying the obtained results to more efficient discovery and binding of candidate component services, along with extending the framework to more complex distributed modes of communication, are among the topics for future work.
Xydas, Ioannis. "Aide à la surveillance de l’application d’une politique de sécurité dans un réseau par prise de connaissance d’un graphe de fonctionnement du réseau." Limoges, 2007. https://aurore.unilim.fr/theses/nxfile/default/ba3a6a50-5708-4f1a-9d00-dca7fa1469cd/blobholder:0/2007LIMO4006.pdf.
In this thesis we study the possibility of applying visualization and visual analytics to data analysis for network security. In particular, we studied Internet web security and, by using an "intelligent" visual representation of web attacks, extracted knowledge from a network operation graph. To achieve this goal we designed and developed an intelligent prototype system. This system is a surveillance aid for the security and web analyst, offering a user-friendly visual tool for detecting anomalies in web requests by monitoring and exploring 3D graphs, for quickly understanding the kind of ongoing attack by means of colours, and for navigating into the payload of web requests, of either normal or malicious traffic, for further analysis and appropriate response. The fundamental parts of such a system are artificial intelligence and visualization. A hybrid expert system such as an evolutionary artificial neural network proved to be ideal for the classification of web attacks.
Delignat-Lavaud, Antoine. "On the security of authentication protocols on the web." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEE018/document.
As ever more private user data gets stored on the Web, ensuring proper protection of this data (in particular when it transits through untrusted networks, or when it is accessed by the user from her browser) becomes increasingly critical. However, in order to formally prove that, for instance, email from GMail can only be accessed by knowing the user's password, assuming some reasonable set of assumptions about what an attacker cannot do (e.g. he cannot break AES encryption), one must precisely understand the security properties of many complex protocols and standards (including DNS, TLS, X.509, HTTP, HTML, JavaScript), and more importantly, the composite security goals of the complete Web stack. In addition to this compositional security challenge, one must account for the powerful additional attacker capabilities that are specific to the Web, beyond the usual tampering of network messages. For instance, a user may browse a malicious page while keeping an active GMail session in a tab; this page is allowed to trigger arbitrary, implicitly authenticated requests to GMail using JavaScript (even though the isolation policy of the browser may prevent it from reading the response). An attacker may also inject himself into an honest page (for instance, as a malicious advertising script, or by exploiting a data sanitization flaw), get the user to click bad links, or try to impersonate other pages. Besides the attacker, the protocols and applications are themselves a lot more complex than typical examples from the protocol analysis literature. Logging into GMail already requires multiple TLS sessions and HTTP requests between (at least) three principals, representing dozens of atomic messages. Hence, ad hoc models and hand-written proofs do not scale to the complexity of Web protocols, mandating the use of advanced verification automation and modeling tools. Lastly, even assuming that the design of GMail is indeed secure against such an attacker, any single programming bug may completely undermine the security of the whole system. Therefore, in addition to modeling protocols based on their specification, it is necessary to evaluate implementations in order to achieve practical security. The goal of this thesis is to develop new tools and methods that can serve as the foundation of an extensive compositional Web security analysis framework that could be used to implement and formally verify applications against a reasonably extensive model of attacker capabilities on the Web. To this end, we investigate the design of Web protocols at various levels (TLS, HTTP, HTML, JavaScript) and evaluate their composition using a broad range of formal methods, including symbolic protocol models, type systems, model extraction, and type-based program verification. We also analyze current implementations and develop new verified versions to run tests against. We uncover a broad range of vulnerabilities in protocols and their implementations, and propose countermeasures that we formally verify, some of which have been implemented in browsers and by various websites. For instance, the Triple Handshake attack we discovered required a protocol fix (RFC 7627) and influenced the design of version 1.3 of the TLS protocol.
Subramanian, Deepak. "Information Flow Control for the Web Browser through a Mechanism of Split Addresses." Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017CSUP0006.
Full text
The modern world has evolved to the point where many services, such as banking and shopping, are provided through web applications. These web applications depend on server-side as well as client-side software, and because they handle sensitive services, their security is of pivotal importance. On the server side, the range of security threats includes attacks such as denial of service, security misconfiguration and injection of malicious code (e.g. SQL injection). On the client side, a major part of the security issues comes with the web browser, the interface between the users and the server-side application: like any software, it can be subject to attacks such as buffer overflows. However, it is not sufficient to prevent security threats on each side independently, because some security issues are intrinsic to the web applications themselves. For instance, the modern Internet consists of many webpages which are mashups: a mashup, in web development, is a web page, or web application, that uses content from more than one source to create a single new service displayed in a single graphical interface. More generally, the difficulty of web application security lies in the fact that exploiting a server-side vulnerability can have a client-side impact, and vice versa. It must be noted that many server-side vulnerabilities such as Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) have a direct impact on the web browser. In this thesis, we focus on the client-side security of web browsers, and limit ourselves to the context of JavaScript. We do not consider solving the vulnerabilities themselves but rather provide a mechanism whereby the user’s sensitive information is protected from disclosure (confidentiality) as well as unauthorized modification (integrity) even when a vulnerability is exploited. For that purpose, we affirm that vulnerabilities based on malicious scripts are characterized by illegal information flows. Hence, we propose an approach based on Information Flow Control (IFC). Indeed, IFC-based approaches are more encompassing in the scope of problems they solve and provide more streamlined solutions for handling information security in its entirety. Our approach is based on a practical IFC model, called Address Split Design (ASD), which consists in splitting any variable that contains sensitive data and maintaining a symbol table to protect accesses to the secret part of these variables. We have implemented our model on the Chromium V8 engine, a full-fledged JavaScript engine. Following the implementation, performance and conformance testing were done. The measured performance drop is significantly smaller than that of comparable approaches. We further showed that our implementation does not affect the general working of existing websites by testing it over top websites of the Internet. We have also verified that our model can be used to protect variables in several scenarios that would otherwise have caused disclosure of secret information.
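A toy Python model of the Address Split Design idea, in which a sensitive value is split and its secret part is reachable only through a symbol table that checks the reader’s origin; the actual mechanism lives inside V8’s variable handling, so the class and names below are purely illustrative:

    class SymbolTable:
        """Guards the secret halves of split variables."""
        def __init__(self):
            self._secrets = {}  # split name -> (secret value, owner origin)

        def store(self, name, value, owner):
            self._secrets[name] = (value, owner)

        def load(self, name, reader):
            value, owner = self._secrets[name]
            if reader != owner:  # an illegal information flow is denied
                raise PermissionError(f"{reader} may not read {name}")
            return value

    table = SymbolTable()
    # 'session_token' is split: the public half is an opaque placeholder,
    # the secret half is keyed in the symbol table under the owning origin.
    public_half = "<protected>"
    table.store("session_token#secret", "tok-1234", owner="https://bank.example")

    print(table.load("session_token#secret", reader="https://bank.example"))  # ok
    try:
        table.load("session_token#secret", reader="https://evil.example")
    except PermissionError as err:
        print("blocked:", err)  # an injected script cannot exfiltrate the secret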
Levointurier, Christophe. "De la nécessité d'une vision holistique du code pour l'analyse statique et la correction automatique des Applications Web." Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00688117.
Full text
Su, Ziyi. "Application de gestion des droits numériques au système d'information d'entreprise." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00737777.
Full text
Chiapponi, Elisa. "Detecting and Mitigating the New Generation of Scraping Bots." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS490.
Full text
Every day an invisible war for data takes place between e-commerce websites and web scrapers. E-commerce websites own the data at the heart of the conflict and would like to provide it only to genuine users. Web scrapers aim to have unlimited and continuous access to this data in order to capitalize on it. To achieve this goal, scrapers send large volumes of requests to e-commerce websites, causing them financial harm. This has led the security industry to engage in an arms race against scrapers to build better systems for detecting and mitigating their requests. At present the battle continues, but scrapers appear to have the upper hand, thanks to the use of Residential IP Proxies (RESIPs). In this thesis, we aim to shift the balance by introducing novel detection and mitigation techniques that overcome the limitations of current state-of-the-art methods. We propose a deceptive mitigation technique that lures scrapers into believing they have obtained their target data while they actually receive modified information. We present two new detection techniques, based on network measurements, that identify scraping requests proxied through RESIPs. Thanks to an ongoing collaboration with Amadeus IT Group, we validate our results on real-world operational data. Aware that scrapers will not stop looking for new ways to avoid detection and mitigation, this thesis provides additional contributions that can help in building the next defensive weapons against scrapers. We propose a comprehensive characterization of RESIPs, the strongest weapon currently at the disposal of scrapers. Moreover, we investigate the possibility of acquiring threat intelligence on scrapers by geolocating them when they send requests through a RESIP.
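One plausible network-measurement heuristic for spotting RESIP-proxied requests, shown here as a sketch: compare the round-trip time observed at the TCP layer (server to claimed client IP) with the delay of an application-level exchange that must travel onward to the real client behind the proxy. The threshold and timings are illustrative assumptions, not the exact measurements used in the thesis:

    def looks_like_resip(tcp_rtt_ms: float, app_rtt_ms: float,
                         gap_threshold_ms: float = 100.0) -> bool:
        # For a direct client both delays are close; a RESIP adds the
        # proxy-to-scraper leg only to the application-level round trip.
        return (app_rtt_ms - tcp_rtt_ms) > gap_threshold_ms

    print(looks_like_resip(tcp_rtt_ms=30.0, app_rtt_ms=35.0))   # direct user
    print(looks_like_resip(tcp_rtt_ms=30.0, app_rtt_ms=260.0))  # likely proxied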
Barquissau, Eric. "L’évaluation de la qualité de la relation client en ligne par les utilisateurs d’espaces clients de sites web : une application dans le secteur bancaire et dans le secteur de la téléphonie mobile." Thesis, Paris 10, 2013. http://www.theses.fr/2013PA100205.
Full text
The Internet has dramatically changed the way companies interact with their customers. Because of the importance of electronic customer relationship management, companies have to reconsider their strategies in terms of relationship marketing. The purpose of this research is to investigate the way users of customer accounts on websites evaluate e-relationship quality in two sectors: banking and mobile telephony. This research also deals with an important concept: appropriation. A qualitative study was conducted in order to build a research model and to create a measurement scale for the appropriation of a customer web account, and an online survey (N=534) was then conducted to test the hypotheses. The findings suggest that the appropriation of a customer web account is a mediating variable between perceived ease of use, perceived usability and relationship quality, in both the banking and the mobile phone sector. Likewise, privacy has a positive influence on e-relationship quality. Moreover, perceived interactivity has a positive influence on e-relationship quality, although that particular hypothesis is only partially validated. Finally, social presence does not have a positive influence on e-relationship quality.
Shbair, Wazen M. "Service-Level Monitoring of HTTPS Traffic." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0029/document.
Full text
In this thesis, we provide a privacy-preserving approach for monitoring HTTPS services. First, we investigate a recent technique for HTTPS service monitoring that is based on the Server Name Indication (SNI) field of the TLS handshake. We show that this method has many weaknesses, which can be used to cheat monitoring solutions. To mitigate this issue, we propose a novel DNS-based approach to validate the claimed value of the SNI; the evaluation shows its ability to overcome these weaknesses. Second, we propose a robust framework to identify the accessed HTTPS services from a traffic dump, relying neither on a header field nor on the payload content. Our evaluation based on real traffic shows that we can identify encrypted HTTPS services with high accuracy. Third, we improve our framework to monitor HTTPS services in real time: by extracting statistical features over the TLS handshake packets and a few application data packets, we can identify HTTPS services very early in the session. The obtained results and a prototype implementation show that our method offers good identification accuracy, high HTTPS flow-processing throughput, and a low overhead delay.
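A minimal sketch of the DNS-based SNI validation idea: resolve the hostname claimed in the SNI field and check that the flow’s destination IP appears among the DNS answers. The helper below is an illustration under simple assumptions; a real deployment must also handle CDNs and fast-changing DNS records:

    import socket

    def sni_is_consistent(claimed_sni: str, dest_ip: str) -> bool:
        try:
            answers = {info[4][0] for info in socket.getaddrinfo(claimed_sni, 443)}
        except socket.gaierror:
            return False  # an unresolvable SNI value is itself suspicious
        return dest_ip in answers

    # A flow claiming "example.com" but headed to an unrelated IP gets flagged.
    print(sni_is_consistent("example.com", "203.0.113.10"))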
Lalanne, Vincent. "Gestion des risques appliquée aux systèmes d’information distribués." Thesis, Pau, 2013. http://www.theses.fr/2013PAUU3052/document.
Full text
In this thesis we discuss the application of risk management to distributed information systems. We address problems of interoperability and securing of exchanges within DRM systems, and we propose an implementation of such a system for the enterprise: it must permit the distribution of self-protected content. We then present our participation in the creation of an innovative company with an emphasis on information security, in particular the management of risks through the ISO/IEC 27005:2011 standard. We present risks related to the use of services, highlighting in particular those that are not technological: we cover risks inherent in clouds (provider failure, etc.) but also the more insidious aspects of espionage and intrusion into personal data (the PRISM case, June 2013). In the last section, we present a concept of enterprise DRM which uses metadata to deploy settings in usage-control models. We propose a draft formalization of the metadata necessary for the implementation of a security policy that guarantees compliance with regulations and legislation.
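A toy sketch of the metadata-driven usage control mentioned at the end of this abstract: a content item carries metadata, and a policy check decides whether an action is permitted. The metadata fields are hypothetical and do not reproduce the thesis’s formalization:

    from datetime import date

    metadata = {
        "owner": "acme-corp",
        "classification": "confidential",
        "allowed_actions": {"view", "print"},
        "expires": date(2026, 1, 1),
    }

    def is_permitted(action: str, user_org: str, today: date) -> bool:
        # Usage is allowed only inside the owning organization, for listed
        # actions, and before the expiry date carried in the metadata.
        return (action in metadata["allowed_actions"]
                and user_org == metadata["owner"]
                and today <= metadata["expires"])

    print(is_permitted("view", "acme-corp", date(2025, 6, 1)))   # True
    print(is_permitted("share", "acme-corp", date(2025, 6, 1)))  # False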
Baina, Amine. "Controle d'accès pour les grandes infrastructures critiques. Application au réseau d'énergie électrique." Phd thesis, INSA de Toulouse, 2009. http://tel.archives-ouvertes.fr/tel-00432841.
Full text
Khéfifi, Rania. "Informations personnelles sensibles aux contextes : modélisation, interrogation et composition." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112194/document.
Full text
This thesis was conducted within the PIMI project, financed by the French National Research Agency (ANR). It concerns the modeling, querying and composition of personal information. We consider that the use of and access to personal information are context dependent (e.g., social, geographical). More particularly, the work aims to support the user when carrying out online administrative or personal procedures. In this setting, the tackled problems are the representation of heterogeneous information, the querying of context-aware personal information spaces, automatic form filling, and the automatic realization of procedures, defined at a high level of abstraction, by composition of services available online. To solve these problems, we developed several contributions. The first one concerns the management of the personal information space: we defined a model allowing the description of personal information using several domain ontologies. Our model can be instantiated on the user's personal information with several usability values depending on the context, each with a usability degree. We also proposed two contextual querying algorithms, SQE and FQE, which allow querying of the recorded information. The second contribution concerns the use of this information by several online services, through two use cases. In the case of automatic form filling, we proposed an algorithm that generates a semantic query from an annotated form representation; this query is evaluated using the two querying algorithms SQE and FQE. Then, in the case of realizing a user objective (an abstract procedure) by service composition, we extended the Graphplan algorithm to take into account the contextualization of the data and the access policy rules specified by the user. The latter allows the user to increase control over his information and to limit its leakage.
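A toy sketch in the spirit of the SQE and FQE algorithms: each stored value carries the context in which it is usable and a usability degree; a strict query keeps exact context matches, while a fuzzy query ranks values by degree. The data, threshold and function names are illustrative assumptions:

    records = [
        {"attr": "address", "value": "12 rue A", "context": "administrative", "degree": 0.9},
        {"attr": "address", "value": "Campus B", "context": "professional", "degree": 0.7},
        {"attr": "address", "value": "12 rue A", "context": "social", "degree": 0.4},
    ]

    def strict_query(attr, context):
        # SQE-like: only values usable in exactly the requested context.
        return [r["value"] for r in records
                if r["attr"] == attr and r["context"] == context]

    def fuzzy_query(attr, threshold=0.5):
        # FQE-like: any sufficiently usable value, ranked by usability degree.
        hits = [r for r in records if r["attr"] == attr and r["degree"] >= threshold]
        return [r["value"] for r in sorted(hits, key=lambda r: -r["degree"])]

    print(strict_query("address", "administrative"))  # ['12 rue A']
    print(fuzzy_query("address"))                     # ['12 rue A', 'Campus B']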
Baïna, Amine. "Contrôle d'Accès pour les Grandes Infrastructures Critiques : application au réseau d'énergie électrique." Toulouse, INSA, 2009. http://eprint.insa-toulouse.fr/archive/00000296/.
Full text
Because of its physical and logical vulnerabilities, a critical infrastructure (CI) may suffer failures, and because of the interdependencies between CIs, simple failures can have dramatic consequences for the entire infrastructure. In our work, we mainly focus on the information and communication systems (CII: Critical Information Infrastructure) dedicated to the electrical power grid. We proposed a new approach to address the security problems faced by a CII, particularly those related to access control and collaboration. The goal of this study is to give each organization belonging to the CII the opportunity to collaborate with the others while maintaining control over its data and its internal security policy. We modeled and developed PolyOrBAC, a platform for collaborative access control based on the access control model OrBAC and on Web Services technology. This platform is applicable in the context of critical infrastructures in general, and more particularly to an electrical power grid.
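A toy evaluation of an OrBAC-style rule, the model on which PolyOrBAC builds: abstract permissions relate an organization, a role, an activity, a view and a context, and a concrete request is allowed if its subject, action and object map into a matching abstract rule through the empower/consider/use relations. All data below are illustrative:

    permissions = [
        # (organization, role, activity, view, context)
        ("grid-operator", "dispatcher", "read", "load-measurements", "normal"),
        ("grid-operator", "dispatcher", "write", "breaker-commands", "emergency"),
    ]
    empower = {("grid-operator", "alice"): "dispatcher"}               # subject -> role
    consider = {("grid-operator", "GET /loads"): "read"}               # action -> activity
    use = {("grid-operator", "feeder-12/loads"): "load-measurements"}  # object -> view

    def is_permitted(org, subject, action, obj, context):
        abstract_request = (org, empower.get((org, subject)),
                            consider.get((org, action)), use.get((org, obj)), context)
        return abstract_request in permissions

    print(is_permitted("grid-operator", "alice", "GET /loads",
                       "feeder-12/loads", "normal"))     # True
    print(is_permitted("grid-operator", "alice", "GET /loads",
                       "feeder-12/loads", "emergency"))  # False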
Avanesov, Tigran. "Résolution de contraintes de déductibilité : application à la composition de services Web sécurisés." Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00641237.
Full text
Poitevin-Lavenu, François. "E-fiscalité : les règles fiscales à l'ère de la dématérialisation." Thesis, Paris 2, 2011. http://www.theses.fr/2011PA020043.
Full text
The dematerialization of business exchanges requires clear tax rules to preserve states' taxing rights and the security of transactions in global business within the framework of e-commerce growth. Both domestic law and international tax law must be examined, for direct as well as indirect taxes. The dematerialization process leads to a far-reaching transformation of the organization and prerogatives of tax authorities. Adaptations of the tax return, the collection of tax and the tax audit must be addressed, and procedures are deeply changed. Companies and private individuals have to adapt to the new technologies, whether or not they do e-business.
El, jaouhari Saad. "A secure design of WoT services for smart cities." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0120/document.
Full text
The richness and the versatility of WebRTC, a new peer-to-peer, real-time, browser-based communication technology, have enabled the design of new and innovative services. In this thesis, we analyzed the capabilities required to allow a participant in a WebRTC session to access the smart things belonging to his own environment as well as those of any other participant in the same session. Access to such an environment, which we call a “SmartSpace (SS)”, can be either passive, for example by monitoring the contextual information provided by the sensors, or active, by requesting the execution of commands by the actuators, or a mixture of both. This approach deserves attention because it allows various issues to be solved in an original way, such as allowing experts to exercise remotely and provide their expertise and/or know-how. From a technical point of view the issue is not trivial, because it requires a smooth and mastered articulation between two different technologies: WebRTC and the Internet of Things (IoT) / Web of Things (WoT). Hence, the first part of the problem studied in this thesis consists in analyzing the possibilities of extending WebRTC capabilities with the WoT, so as to provide any end-user involved in an ongoing WebRTC session with secure and privacy-respectful access to the various smart objects located in the immediate environment of another participant. This approach is then illustrated in the e-health domain and tested in a real smart home (a typical example of a smart space). Moreover, positioning our approach in the context of communication services operating in smart cities requires the ability to support a multiplicity of SSs, each with its own network and security policy. Hence, in order to allow a participant to access one of his own SSs or one belonging to another participant (through an access delegation process), it becomes necessary to dynamically identify, select, deploy, and enforce the SS's specific routing and security rules, so as to obtain effective, fast and secure access. The second part of the problem studied in this Ph.D. therefore consists in defining an efficient management of the routing and security issues raised by the possibility of having multiple SSs distributed over the entire network.
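A toy sketch of the access delegation step: the owner of a smart space grants a WebRTC session peer a scoped, time-limited right on one smart object. The structure and field names are illustrative assumptions, not the thesis’s actual design:

    import time

    delegations = []  # (owner, grantee, thing, scope, expires_at)

    def delegate(owner, grantee, thing, scope, ttl_s):
        delegations.append((owner, grantee, thing, scope, time.time() + ttl_s))

    def may_access(grantee, thing, scope):
        now = time.time()
        return any(g == grantee and t == thing and s == scope and now < exp
                   for (_, g, t, s, exp) in delegations)

    # Alice lets a remote doctor monitor her thermometer for one hour.
    delegate("alice", "dr-bob", "home/thermometer", "monitor", ttl_s=3600)
    print(may_access("dr-bob", "home/thermometer", "monitor"))  # True
    print(may_access("dr-bob", "home/door-lock", "actuate"))    # False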
Zheng, Lili. "Les antécédents et les conséquences des risques perçus dans les achats sur Internet en Chine et en France : une approche interculturelle." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENG011/document.
Full text
The perceived risks associated with online shopping have a critical effect on consumer decision making, and cultural values provide a good theoretical basis for understanding perceived risk. Given the rapid globalization of online shopping, understanding why perceived risk varies across cultures is crucial. The research question furnishing the main impetus for this study is: what are the significant differences in the effect of several determinants of the perceived risk of online clothing shopping, depending on the cultural differences between China and France? Structural equation models with the maximum likelihood estimation method are employed to test all the hypothesized relationships. The research puts forth several findings. First, it is interesting to note that both the Chinese and the French respondents perceive low levels of non-personal and personal risk regarding their online clothing purchases, but the Chinese respondents perceive higher non-personal and personal risk than the French respondents. The second key finding is that privacy concerns, security protection, and reputation have different effects on consumer perceptions of both non-personal and personal risk depending on cultural differences: reputation is more valued in collectivist cultures (China), while privacy concerns and security protection are more valued in individualist cultures (France). Additionally, for both the Chinese and the French samples, non-personal perceived risk significantly affects the intention to repurchase, whereas personal perceived risk has a significant effect only on Chinese consumers' intention to repurchase.
Duffort, Benoit. "Les politiques de défense française et britannique face à l'émergence de la politique européenne de sécurité et de défense [1991-2001]." Thesis, Paris 3, 2009. http://www.theses.fr/2009PA030048.
Full text
Half a century after the Dunkirk treaty, France and the United Kingdom signed in Saint-Malo a declaration on European defence of paramount historic significance. From this declaration originated the implementation, within the framework of the European Union, of the European Security and Defence Policy, which was declared “operational” in December 2001 at the Laeken Summit. As leading parties in this process, France and the United Kingdom had essential interests to safeguard in the conduct of the European and transatlantic negotiations which resulted in this historic compromise. Based on archival records that have either been recently released or were consulted by special dispensation, on discussions with leading figures of the period, and on parliamentary papers on the question, this thesis analyses the evolution of the French and British defence policies, in their fullest sense, before this process and from the entry into force of the Common Foreign and Security Policy (from which the ESDP originated), instituted at the end of the 1991 Intergovernmental Conference that led to the establishment of the European Union.