Dissertations on the topic "Analyse des logiciels malveillants"
Below are the top 50 dissertations for research on the topic "Analyse des logiciels malveillants".
Calvet, Joan. "Analyse Dynamique de Logiciels Malveillants." PhD thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00922384.
Thierry, Aurélien. "Désassemblage et détection de logiciels malveillants auto-modifiants." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0011/document.
This dissertation explores tactics for the analysis and disassembly of malware that uses obfuscation techniques such as self-modification and code overlapping. Most malware found in the wild uses self-modification in order to hide its payload from an analyst. We propose a hybrid analysis which uses an execution trace derived from a dynamic analysis. This analysis cuts the self-modifying binary into several non-self-modifying parts that we can examine through a static analysis using the trace as a guide. This second analysis circumvents further protection techniques such as code overlapping in order to recover the control flow graph of the studied binary. Moreover, we review a morphological malware detector which compares the control flow graph of the studied binary against those of known malware. We provide a formalization of this graph comparison problem, along with efficient algorithms that solve it and a use case in the software similarity field.
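To make the graph-comparison step concrete, here is a minimal sketch (illustrative Python only, not the thesis's morphological algorithm; the edge-set representation and the Jaccard score are simplifying assumptions):
# Illustrative sketch: compare two toy control flow graphs by the Jaccard
# similarity of their edge sets. Block names and both graphs are made up.
def cfg_similarity(cfg_a, cfg_b):
    """cfg_a, cfg_b: iterables of (source_block, destination_block) edges."""
    edges_a, edges_b = set(cfg_a), set(cfg_b)
    if not edges_a and not edges_b:
        return 1.0
    return len(edges_a & edges_b) / len(edges_a | edges_b)

known_sample = [("entry", "decrypt"), ("decrypt", "payload"), ("payload", "exit")]
suspect      = [("entry", "decrypt"), ("decrypt", "payload"), ("payload", "c2"), ("c2", "exit")]

print(cfg_similarity(known_sample, suspect))   # similarity score in [0, 1]
A real detector works on reduced, labelled graphs with far more robust matching; the point here is only that similarity is computed between whole control flow graphs rather than byte signatures.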
Palisse, Aurélien. "Analyse et détection de logiciels de rançon." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S003/document.
This PhD thesis takes a look at ransomware, presents an autonomous malware analysis platform and proposes countermeasures against these types of attacks. Our countermeasures run in real time and are deployed on end-hosts. In 2013, ransomware became a hot topic of discussion again, before becoming one of the biggest cyberthreats at the beginning of 2015. A detailed state of the art of existing countermeasures is included in this thesis; it helps evaluate the contribution of this thesis with regard to existing publications. We also present an autonomous malware analysis platform composed of bare-metal machines, whose aim is to avoid altering the behaviour of analysed samples. A first countermeasure based on the use of a cryptographic library is proposed; however, it can easily be bypassed, which is why we propose a second, generic and agnostic countermeasure. This time, indicators of compromise are used to analyse the behaviour of processes on the file system. We explain how we configured this countermeasure empirically to make it usable and effective. One of the challenges of this thesis is to reconcile performance, detection rate and a low number of false positives. Finally, results from a user experiment are presented; this experiment analyses the user's behaviour when faced with a threat. In the final part, I propose ways to enhance our contributions as well as other avenues that could be explored.
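A rough illustration of one kind of file-system indicator such countermeasures can rely on (an assumption made for illustration, not the thesis's actual detector): data written by an encrypting process tends to have a near-uniform byte distribution, which a Shannon-entropy test can flag.
# Minimal sketch: flag a written buffer as "possibly encrypted" when its byte
# distribution is near uniform (high Shannon entropy). The 7.5-bit threshold
# is an illustrative assumption; real countermeasures are more involved.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(buffer: bytes, threshold: float = 7.5) -> bool:
    return shannon_entropy(buffer) >= threshold   # 8.0 is the maximum for byte data

print(looks_encrypted(b"plain text " * 200))   # False: ordinary text
print(looks_encrypted(os.urandom(4096)))       # True: random data looks like ciphertext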
Beaucamps, Philippe. "Analyse de Programmes Malveillants par Abstraction de Comportements." PhD thesis, Institut National Polytechnique de Lorraine - INPL, 2011. http://tel.archives-ouvertes.fr/tel-00646395.
Lebel, Bernard. "Analyse de maliciels sur Android par l'analyse de la mémoire vive." Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/29851.
Mobile devices are at the core of modern society. Their versatility has allowed third-party developers to create a rich experience for the user through mobile apps of all types (e.g. productivity, games, communications). As mobile platforms have become connected devices that gather nearly all of our personal and professional information, they are seen as a lucrative market by malware developers. Android is an open-source operating system from Google targeting specifically the mobile market, and it has been targeted by malicious activity due to its widespread adoption by consumers. As Android malware threatens many consumers, it is essential that research in malware analysis addresses this mobile platform specifically. The work conducted during this Master's focuses on the analysis of malware on the Android platform. This was achieved through a literature review of current malware trends and of the static and dynamic analysis approaches that exist to mitigate them. It was also proposed to explore live memory forensics applied to the analysis of malware as a complement to existing methods. To demonstrate the applicability of the approach and its relevance to Android malware, a case study was proposed where an experimental malware was designed to express malicious behaviours that are difficult to detect through current methods. The approach explored is called differential live memory analysis. It consists of analyzing the difference in the content of live memory before and after the deployment of the malware. The results of the study have shown that this approach is promising and should be explored in future studies as a complement to current approaches.
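The differential idea can be sketched in a few lines of Python (illustrative only; the dump file names and the printable-string heuristic are assumptions, not the thesis's tooling):
# Illustrative sketch of differential live memory analysis: extract printable
# ASCII strings from two raw memory dumps and keep those that only appear in
# the post-infection dump. File names and the 4-byte minimum are assumptions.
import re

def strings_in_dump(path: str, min_len: int = 4) -> set:
    with open(path, "rb") as dump:
        data = dump.read()
    return {m.decode("ascii") for m in re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)}

def memory_diff(before_dump: str, after_dump: str) -> set:
    return strings_in_dump(after_dump) - strings_in_dump(before_dump)

# Hypothetical usage with dumps captured before and after running the sample:
# for artefact in sorted(memory_diff("before.raw", "after.raw")):
#     print(artefact)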
Puodzius, Cassius. "Data-driven malware classification assisted by machine learning methods." Electronic Thesis or Diss., Rennes 1, 2022. https://ged.univ-rennes1.fr/nuxeo/site/esupversions/3dabb48c-b635-46a5-bcbe-23992a2512ec.
Historically, malware (MW) analysis has heavily resorted to human savvy for manual signature creation to detect and classify MW. This procedure is very costly and time consuming, and thus unable to cope with the modern cyber-threat landscape. The solution is to widely automate MW analysis. Toward this goal, MW classification allows optimizing the handling of large MW corpora by identifying resemblances across similar instances. Consequently, MW classification figures as a key activity related to MW analysis, which is paramount in the operation of computer security as a whole. This thesis addresses the problem of MW classification taking an approach in which human intervention is spared as much as possible. Furthermore, we steer clear of the subjectivity inherent to human analysis by designing MW classification solely on data directly extracted from MW analysis, thus taking a data-driven approach. Our objective is to improve the automation of malware analysis and to combine it with machine learning methods that are able to autonomously spot and reveal unwitting commonalities within data. We phased our work in three stages. Initially we focused on improving MW analysis and its automation, studying new ways of leveraging symbolic execution in MW analysis and developing a distributed framework to scale up our computational power. Then we concentrated on the representation of MW behavior, with painstaking attention to its accuracy and robustness. Finally, we fixed attention on MW clustering, devising a methodology that has no restriction in the combination of syntactical and behavioral features and remains scalable in practice. As for our main contributions, we revamp the use of symbolic execution for MW analysis with special attention to the optimal use of SMT solver tactics and hyperparameter settings; we conceive a new evaluation paradigm for MW analysis systems; we formulate a compact graph representation of behavior, along with a corresponding function for pairwise similarity computation, which is accurate and robust; and we elaborate a new MW clustering strategy based on ensemble clustering that is flexible with respect to the combination of syntactical and behavioral features.
Nisi, Dario. "Unveiling and mitigating common pitfalls in malware analysis." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS528.
As the importance of computer systems in modern-day societies grows, so does the damage that malicious software causes. The security industry and malware authors are engaged in an arms race, in which the former create better detection systems while the latter try to evade them. In fact, any wrong assumption (no matter how subtle) in the design of an anti-malware tool may create new avenues for evading detection. This thesis focuses on two often overlooked aspects of modern malware analysis techniques: the use of API-level information to encode malicious behavior and the reimplementation of parsing routines for executable file formats in security-oriented tools. We show that taking advantage of these practices is possible on a large and automated scale. Moreover, we study the feasibility of fixing these problems at their roots, measuring the difficulties that anti-malware architects may encounter and providing strategies to solve them.
Reynaud, Daniel. "Analyse de codes auto-modifiants pour la sécurité logicielle." Thesis, Vandoeuvre-les-Nancy, INPL, 2010. http://www.theses.fr/2010INPL049N/document.
Self-modifying programs run in a very specific way: they are capable of rewriting their own code at runtime. Remarkably absent from theoretical computation models, they are present in every modern computer and operating system. Indeed, they are used by bootloaders, for just-in-time compilation or for dynamic optimizations. They are also massively used by malware authors in order to bypass antivirus signatures and to delay analysis. Finally, they are unintentionally present in every program, since we can model code injection vulnerabilities (such as buffer overflows) as the ability for a program to accidentally execute data. In this thesis, we propose a formal framework in order to characterize advanced self-modifying behaviors and code armoring techniques. A prototype, TraceSurfer, allows us to detect these behaviors by using fine-grained execution traces and to visualize them as self-reference graphs. Finally, we assess the performance and efficiency of the tool by running it on a large corpus of malware samples.
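The basic write-then-execute criterion behind such trace-based detection can be illustrated with a toy sketch (the trace format is a simplifying assumption, not TraceSurfer's):
# Sketch: detect self-modification in a fine-grained execution trace. Each
# trace entry is (instruction_address, set_of_addresses_written); executing an
# address that was written earlier is flagged. The trace format is an assumption.
def find_self_modification(trace):
    written = set()
    findings = []
    for step, (ip, writes) in enumerate(trace):
        if ip in written:
            findings.append((step, hex(ip)))   # executes a previously written byte
        written.update(writes)
    return findings

toy_trace = [
    (0x1000, {0x2000, 0x2001}),   # decryptor writes the future code
    (0x1005, set()),
    (0x2000, set()),              # control flow enters the freshly written code
]
print(find_self_modification(toy_trace))   # [(2, '0x2000')]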
Lemay, Frédérick. "Instrumentation optimisée de code pour prévenir l'exécution de code malicieux." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29030/29030.pdf.
Khoury, Raphaël. "Détection du code malicieux : système de type à effets et instrumentation du code." Thesis, Université Laval, 2005. http://www.theses.ulaval.ca/2005/23250/23250.pdf.
The purpose of this thesis is twofold. In the first place, it presents a comparative study of the advantages and drawbacks of several approaches to ensure software safety and security. It then focuses more particularly on combining static analyses and dynamic monitoring in order to produce a more powerful security architecture. The first chapters of the thesis present an analytical review of the various static, dynamic and hybrid approaches that can be used to secure potentially malicious code. The advantages and drawbacks of each approach are analyzed, and the class of security properties that can be enforced by using it is identified. The thesis then focuses on the possibility of combining static and dynamic analysis through a new hybrid approach. This approach consists in a code instrumentation that only alters those parts of a program where it is necessary to do so to ensure compliance with a user-defined security policy expressed as a set of modal μ-calculus properties. This instrumentation is guided by a static analysis based on a type and effect system. The effects represent the accesses made to protected system resources.
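The runtime half of such a hybrid scheme can be pictured as a tiny inline monitor (a sketch with a made-up policy and wrapper functions, not the thesis's μ-calculus machinery); checks of this kind would be inserted only around the operations the static analysis could not prove safe:
# Toy inline reference monitor: the made-up policy "no network send after a
# sensitive file has been read" is enforced by calls that the instrumentation
# would insert around the relevant operations. All names are hypothetical.
class PolicyViolation(Exception):
    pass

class Monitor:
    def __init__(self):
        self.read_sensitive = False

    def on_read(self, path: str):
        if path.startswith("/etc/secret"):
            self.read_sensitive = True

    def on_send(self):
        if self.read_sensitive:
            raise PolicyViolation("send after a sensitive read is forbidden")

monitor = Monitor()
monitor.on_read("/etc/secret/key")   # check inserted before the real read
try:
    monitor.on_send()                # check inserted before the real send
except PolicyViolation as violation:
    print("blocked:", violation)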
Freyssinet, Eric. "Lutte contre les botnets : analyse et stratégie." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066390/document.
Botnets, or networks of computers infected with malware and connected to a command and control system, are one of the main tools for criminal activities on the Internet today. They allow the development of a new type of crime: crime as a service (CaaS). They are a challenge for law enforcement: first, because of the importance of their impact on the security of networks and the commission of crimes on the Internet; next, with regard to the extremely international dimension of their dissemination and therefore the enhanced difficulty in conducting investigations; finally, through the large number of actors that may be involved (software developers, botnet masters, financial intermediaries, etc.). This thesis proposes a thorough study of botnets (components, operation, actors), the specification of a data collection method on botnet-related activities and finally the technical and organizational arrangements in the fight against botnets; it concludes with proposals on the strategy for this fight. The work carried out has confirmed the relevance, for the effective study of botnets, of a model encompassing all their components, including infrastructure and actors. Besides an effort in providing definitions, the thesis describes a complete model of the life cycle of a botnet and offers methods for the categorization of these objects. This work shows the need for a shared strategy which should include the detection elements, coordination between actors and the possibility or even the obligation for operators to implement mitigation measures.
Pektaş, Abdurrahman. "Behavior based malware classification using online machine learning." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM065/document.
Recently, malware, short for malicious software, has greatly evolved and become a major threat to home users, enterprises, and even governments. Despite the extensive use and availability of various anti-malware tools such as anti-viruses, intrusion detection systems, firewalls, etc., malware authors can readily evade these precautions by using obfuscation techniques. To mitigate this problem, malware researchers have proposed various data mining and machine learning approaches for detecting and classifying malware samples according to their static or dynamic feature sets. Although the proposed methods are effective over small sample sets, the scalability of these methods for large data sets is in question. Moreover, it is a well-known fact that the majority of malware are variants of previously known samples. Consequently, the volume of new variants created far outpaces the current capacity of malware analysis. Thus, developing malware classification that copes with the increasing number of malware is essential for the security community. The key challenge in identifying the family of a malware is to achieve a balance between the increasing number of samples and classification accuracy. To overcome this limitation, unlike existing classification schemes which apply machine learning algorithms to stored data, i.e., which are off-line, we propose a new malware classification system employing online machine learning algorithms that can provide an instantaneous update about a new malware sample as soon as it is introduced to the classification scheme. To achieve our goal, firstly we developed a portable, scalable and transparent malware analysis system called VirMon for the dynamic analysis of malware targeting Windows OS. VirMon collects the behavioral activities of analyzed samples at low kernel level through its mini-filter driver. Secondly, we set up a cluster of five machines for our online learning framework module (i.e. Jubatus), which allows handling large-scale data. This configuration allows each analysis machine to perform its tasks and deliver the obtained results to the cluster manager. Essentially, the proposed framework consists of three major stages. The first stage consists in extracting the behavior of the sample file under scrutiny and observing its interactions with the OS resources. At this stage, the sample file is run in a sandboxed environment. Our framework supports two sandbox environments: VirMon and Cuckoo. During the second stage, we apply feature extraction to the analysis report. The label of each sample is determined by using VirusTotal, an online multiple anti-virus scanner framework consisting of 46 engines. Then, at the final stage, the malware dataset is partitioned into training and testing sets. The training set is used to obtain a classification model and the testing set is used for evaluation purposes. To validate the effectiveness and scalability of our method, we have evaluated it on 18,000 recent malicious files including viruses, trojans, backdoors, worms, etc., obtained from VirusShare, and our experimental results show that our method performs malware classification with 92% accuracy.
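The online-learning ingredient — updating the classifier incrementally as labelled samples arrive instead of retraining on stored data — can be sketched with scikit-learn's partial_fit interface (a toy illustration with assumed feature hashing of behavioural reports, not the Jubatus-based deployment described above):
# Behavioural reports are reduced to bags of API-call tokens, hashed into
# fixed-size vectors, and the linear model is updated one mini-batch at a time.
# All reports, tokens and labels below are made up.
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDClassifier

hasher = FeatureHasher(n_features=2**12, input_type="string")
model = SGDClassifier()
families = ["trojan", "worm"]          # label set assumed known up front

batches = [
    (["CreateRemoteThread RegSetValue WriteFile", "connect send recv"], ["trojan", "worm"]),
    (["RegSetValue WriteFile CreateFile", "connect connect send recv"], ["trojan", "worm"]),
]
for reports, labels in batches:
    X = hasher.transform(report.split() for report in reports)
    model.partial_fit(X, labels, classes=families)   # incremental update, no retraining

X_new = hasher.transform(["recv send connect".split()])
print(model.predict(X_new))            # predicted family for a fresh sample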
Lespérance, Pierre-Luc. "Détection des variations d'attaques à l'aide d'une logique temporelle." Thesis, Université Laval, 2006. http://www.theses.ulaval.ca/2006/23481/23481.pdf.
Ta, Thanh Dinh. "Modèle de protection contre les codes malveillants dans un environnement distribué." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0040/document.
The thesis consists of two principal parts: the first one discusses message format extraction and the second one discusses the behavioral obfuscation of malware and its detection. In the first part, we study the problems of "binary code coverage" and "input message format extraction". For the first problem, we propose a new technique based on "smart" dynamic taint analysis and reverse execution. For the second one, we propose a new method based on the idea of classifying input message values by the corresponding execution traces obtained by executing the program with these input values. In the second part, we propose an abstract model for the system call interactions between malware and the operating system at a host. We show that, in many cases, the behaviors of a malicious program can imitate those of a benign program, and in these cases a behavioral detector cannot distinguish between the two programs.
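The idea of classifying input values by the execution traces they induce can be pictured with a toy parser (everything below is an illustrative assumption; a real implementation records traces at the instruction or basic-block level):
# Sketch: group input message values by the execution path they trigger in a
# toy parser. Inputs sharing a trace end up together, hinting at message fields.
from collections import defaultdict

def toy_parser(msg: bytes, trace: list):
    if msg[:1] == b"\x01":
        trace.append("type=command")
        if len(msg) > 4:
            trace.append("has_payload")
    else:
        trace.append("type=data")

def group_by_trace(messages):
    groups = defaultdict(list)
    for msg in messages:
        trace = []
        toy_parser(msg, trace)
        groups[tuple(trace)].append(msg)
    return dict(groups)

msgs = [b"\x01ab", b"\x01abcdef", b"\x02xy"]
for trace, members in group_by_trace(msgs).items():
    print(trace, members)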
Lacasse, Alexandre. "Approche algébrique pour la prévention d'intrusions." Thesis, Université Laval, 2006. http://www.theses.ulaval.ca/2006/23379/23379.pdf.
Ferrara, Pietro. "ANALYSE STATIQUE DE LOGICIELS MULTITÂCHES PAR INTERPRÉTATION ABSTRAITE." PhD thesis, Ecole Polytechnique X, 2009. http://tel.archives-ouvertes.fr/tel-00417502.
Multithreaded programs execute several tasks in parallel. These tasks communicate implicitly through shared memory and synchronize on monitors (the wait-notify primitives, etc.). A few years ago, dual-processor architectures started to become available on the market at low cost. Today, almost all computers have at least two cores, and the current market trend is to put more and more processors on each chip. This revolution also brings new challenges in terms of programming, since it requires developers to write multithreaded programs. Multithreading is natively supported by most mainstream programming languages, such as Java and C#.
The goal of static analysis is to compute information about the behaviour of a program in a conservative and automatic way. One application of static analysis is the development of tools that help with program debugging. Several static analysis methods have been proposed. We follow the framework of abstract interpretation, a mathematical theory that makes it possible to define sound approximations of program semantics. This method has already been used for a wide spectrum of programming languages.
The fundamental idea of generic static analyzers is to develop a tool that can be interfaced with different numerical domains and different properties. In recent years, a great deal of work has tackled this challenge, and it has been successfully applied to debug industrial software. The strength of these analyzers lies in the fact that a large part of the analysis can be reused to check several properties. The use of different numerical domains allows the development of analyses that are faster but less precise, or slower but more precise.
In this thesis, we present the design of a generic analyzer for multithreaded programs. First of all, we define the memory model, called the happens-before memory model. Then, we approximate this memory model into a computable semantics. Memory models define the behaviours allowed during the execution of a multithreaded program. Starting from the (informal) definition of this particular memory model, we define a semantics that builds all finite executions according to this memory model. An execution of a multithreaded program is described by a function that associates threads with sequences (or traces) of states. We show how to design a computable abstract semantics, and we formally prove the soundness of the results of this analysis.
Next, we define and approximate a new property that concerns the non-deterministic behaviours caused by multithreading, that is, the arbitrary interleavings during the execution of different read instructions. First of all, the non-determinism of a multithreaded program is defined as a difference between several executions. If two executions produce different behaviours due to the values that are read from or written to shared memory, then the program is non-deterministic. We approximate this property in two steps: first, we collect, for each thread, the (abstract) value that may be written to shared memory at a given program point; second, we summarize all the values that may be written in parallel, while keeping track of the set of threads that may have written them. At a first level of approximation, we introduce a new concept of weak determinism. We also propose other ways of weakening the determinism property, for instance by projecting traces and states, and we define a global hierarchy of these weakenings. We also study how the presence of data races may affect the determinism of the program.
We apply this theoretical framework to Java. In particular, we define a semantics of the object-oriented core of Java, following its specification. We then approximate this semantics in order to keep only the information that is necessary for the analysis of multithreaded programs. The core of this abstraction is an alias analysis that approximates references in order to identify threads, to check accesses to shared memory, and to detect when two threads share a common monitor so as to infer which parts of the code cannot be executed in parallel.
The generic analyzer described above has been fully implemented in a tool called Checkmate. Checkmate is thus the first generic analyzer for multithreaded Java programs. Experimental results are given and analyzed in detail. In particular, we study the precision of the analysis when applied to common concurrent programming patterns, as well as to other examples. We also examine the performance of the analysis when applied to an incremental application and to well-known benchmarks.
Another contribution of this thesis is the extension of an existing generic analyzer, called Clousot, to check the absence of buffer overruns. This analysis turns out to scale to industrial programs and to be precise. In summary, we present an application of an existing industrial generic static analyzer to detect and prove a property of practical interest, which shows the power of this approach in the development of tools that are useful for developers.
Chaabane, Rim. "Analyse et optimisation de patterns de code." Paris 8, 2011. http://www.theses.fr/2011PA084174.
Our work consists in analysing and optimising the source code of legacy-system applications. We worked in particular on a software product called GP3, which is developed and maintained by the finance company Sungard. This software has existed for more than twenty years; it is written in a proprietary, procedural, fourth-generation language called ADL (Application Development Language). It was initially developed under VMS and accessed previous-generation databases. For commercial reasons, it was ported to UNIX and now targets new-generation relational DBMSs: Oracle and Sybase. It was also extended to offer a web interface. This legacy system must now face new challenges, such as the growth in database size. Over the last 20 years, we have seen several companies merge, which implies merging their databases. These databases can exceed 1 terabyte of data, and even larger volumes are expected in the future. In this new context, the ADL language shows its limits in handling such a large amount of data. Code patterns, denoting database access structures, are suspected of being responsible for performance degradation. Our work consists in detecting all instances of these patterns in the source code, and then identifying the instances that consume the most execution time and the most calls. We developed a first tool, named Adlmap, based on static code analysis, which detects all database accesses in the source code; database accesses identified as code patterns are flagged. The second tool we developed, named Pmonitor, is based on a hybrid analysis, a combination of static and dynamic analyses. This tool allows us to measure the performance of the code patterns and thus to identify the least performant instances.
Millon, Etienne. "Analyse de sécurité de logiciels système par typage statique." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2014. http://tel.archives-ouvertes.fr/tel-01067475.
Marteau, Hubert. "Une méthode d'analyse de données textuelles pour les sciences sociales basée sur l'évolution des textes." Tours, 2005. http://www.theses.fr/2005TOUR4028.
This PhD thesis aims at bringing sociologists a data-processing tool which allows them to analyse semi-directed open interviews. The proposed tool performs in two steps: an indexation of the interviews followed by a classification. Usually, indexing methods rely on a general statistical analysis. Such methods are suited for texts having rich content and structure (literary texts, scientific texts, etc.). These texts have more vocabulary and structure than interviews (such interviews are limited to about 1000 words). On the basis of the assumption that sociological membership strongly shapes the form of the speech, we propose various methods to evaluate the structure and the evolution of the texts. The methods attempt to find new representations of texts (image, signal) and to extract values from these new representations. The selected classification is a tree-based classification (NJ). It has a low complexity and it respects distances, which makes it a good solution to assist classification.
Benayoun, Vincent. "Analyse de dépendances ML pour les évaluateurs de logiciels critiques." Phd thesis, Conservatoire national des arts et metiers - CNAM, 2014. http://tel.archives-ouvertes.fr/tel-01062785.
Babau, Jean-Philippe. "Etude du comportement temporel des applications temps réel à contraintes strictes basée sur une analyse d'ordonnançabilité." Poitiers, 1996. http://www.theses.fr/1996POIT2305.
Mordal, Karine. "Analyse et conception d'un modèle de qualité logiciel." Paris 8, 2012. http://www.theses.fr/2012PA084030.
We present a model and a prototype for analysing and assessing the quality of large industrial software: the Squash model, based on the empirical model developed by the Qualixo and Air France-KLM companies. We determine an evaluation methodology for both technical and functional quality principles, then formalize a quality model that takes into account the requirements of industrial applications from this methodology. The implementation of this model is being validated by Generali. From the domain of software development and functional validation, we formalized a model based on the hierarchical quality models of the ISO 9126 and SQuaRE standards. This model is divided into two levels: the conceptual quality level, which defines the main quality principles, and the technical quality level, which defines the basic technical rules and measures used to assess the top level. The main issue in quality model design is how to bridge the gap between metrics and quality principles: the meaning of metrics, often defined for individual components, cannot easily be transposed to higher abstraction levels. Furthermore, we need to highlight problems without hiding progress. To achieve this, we use combination and aggregation formulas to assess quality principles from metrics, and fuzzy logic to assess quality principles from non-metric measurements.
BARROS, SERGE. "Analyse a priori des consequences de la modification de systemes logiciels : de la theorie a la pratique." Toulouse 3, 1997. http://www.theses.fr/1997TOU30267.
Garoche, Pierre-Loïc Sallé Patrick. "Analyse statistique d'un calcul d'acteurs par interprétation abstraite Statistic analysis of an actor-based process calculus by abstract interpretation /." Toulouse : INP Toulouse, 2008. http://ethesis.inp-toulouse.fr/archive/00000629.
Irfan, Muhammad Naeem. "Analyse et optimisation d'algorithmes pour l'inférence de modèles de composants logiciels." PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00767894.
Nassar, Mahmoud Coulette Bernard. "Analyse/conception par points de vue le profil VUML /." Toulouse : INP Toulouse, 2005. http://ethesis.inp-toulouse.fr/archive/00000179.
Boudjema, El Habib. "Défense contre les attaques de logiciels." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1015/document.
At the beginning of the third millennium, we are witnessing a new age, characterized by the shift from an industrial economy to an economy based on information technology: the Information Age. Today, we rely on software in practically every aspect of our life. Information technology is used by all economic actors: manufacturers, governments, banks, universities, hospitals, retail stores, etc. A single software vulnerability can lead to devastating consequences and irreparable damage. The situation is worsened by software becoming larger and more complex, making the task of avoiding software flaws more and more difficult. Automated tools that find those vulnerabilities rapidly, before it is too late, are becoming a basic need for the software industry community. This thesis investigates security vulnerabilities occurring in C language applications. We searched for the sources of these vulnerabilities with a focus on calls to C library functions. We drew up a list of property checks to detect code portions leading to security vulnerabilities. These properties give, for a library function call, the conditions making this call a source of a security vulnerability. When these conditions are met, the corresponding call must be reported as vulnerable. These checks were implemented in the Carto-C tool and evaluated on the Juliet test suite and on real-life application sources. We also investigated the detection of exploitable vulnerabilities at the binary code level. We started by defining what the behavioral patterns of an exploitable vulnerability are. The focus was on the most exploited vulnerability classes, such as stack buffer overflow, heap buffer overflow and use-after-free. Then, a new method for searching for these patterns by exploring application execution paths is proposed. During the exploration, the necessary information is extracted and used to find the patterns of the searched vulnerabilities. This method was implemented in our Vyper tool and evaluated successfully on the Juliet test suite and on real-life application binaries.
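A drastically simplified flavour of such a property check (an illustration, not Carto-C) is a scan that flags calls to unbounded C copy functions; the real checks reason about the conditions that hold at the call site:
# Toy property check: report calls to C library functions that copy without a
# bound (strcpy, strcat, gets, sprintf). This regex-level scan is only an
# illustration; sound checks analyse the call context.
import re

UNBOUNDED_CALLS = re.compile(r"\b(strcpy|strcat|gets|sprintf)\s*\(")

def check_source(c_source: str):
    findings = []
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        for match in UNBOUNDED_CALLS.finditer(line):
            findings.append((lineno, match.group(1)))
    return findings

sample = """
int main(int argc, char **argv) {
    char buf[16];
    strcpy(buf, argv[1]);   /* potential stack buffer overflow */
    return 0;
}
"""
print(check_source(sample))   # [(4, 'strcpy')]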
Fontalbe, Patrick. "Evaluation partielle symbolique : une analyse semantique des programmes c en vue de leur verification." Paris 6, 1996. http://www.theses.fr/1996PA066147.
Aguila, Orieta Del. "Analyse et structuration des données dans les logiciels de CAO en électromagnétisme." Grenoble INPG, 1988. http://www.theses.fr/1988INPG0077.
Santana, Orieta. "Analyse et structuration des données dans les logiciels de CAO en électromagnétisme." Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb37613089w.
Ferlin, Antoine. "Vérification de propriétés temporelles sur des logiciels avioniques par analyse dynamique formelle." Thesis, Toulouse, ISAE, 2013. http://www.theses.fr/2013ESAE0024/document.
Software verification is decisive for embedded software. The different verification approaches can be classified into four categories: non-formal static analysis, formal static analysis, non-formal dynamic analysis and formal dynamic analysis. The main goal of this thesis is to verify temporal properties on real industrial applications with the help of formal dynamic analysis. This contribution has three parts. A language, well adapted to the properties we want to verify in the Airbus context, was defined. This language is grounded on linear temporal logic and also on a regular expression language. Verification of a temporal property is done on an execution trace generated from an existing test case. Generation also depends on the information required to verify the formalized property; static analysis is used to generate the trace depending on the formalized property. The thesis also proposes a pragmatic solution to the end-of-trace problem. In addition, specific adaptations and optimisations were defined to improve efficiency and user-friendliness and thus allow an industrial use of this approach. Two applications were implemented, and experiments were carried out on different Airbus software.
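Checking a response property over a finite execution trace — the core operation of formal dynamic analysis — can be sketched as follows (toy events; the property language of the thesis is far richer, and the end-of-trace convention below is just one pragmatic choice):
# Sketch: verify "every 'request' is eventually followed by a 'grant'" on a
# finite trace. Requests still pending at the end of the trace are reported
# as violations, which is one possible end-of-trace convention.
def check_response(trace, trigger="request", response="grant"):
    pending = []                      # positions of unanswered triggers
    for i, event in enumerate(trace):
        if event == trigger:
            pending.append(i)
        elif event == response:
            pending.clear()           # all earlier requests are now answered
    return pending                    # non-empty => property violated

ok_trace  = ["idle", "request", "work", "grant", "idle"]
bad_trace = ["request", "work", "request", "idle"]
print(check_response(ok_trace))       # []
print(check_response(bad_trace))      # [0, 2]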
Hourtolle, Catherine. "Conception de logiciels sûrs de fonctionnement : analyse de la sécurité des logiciels : mécanismes de décision pour la programmation en n-versions." Toulouse, INPT, 1987. http://www.theses.fr/1987INPT059H.
Hourtolle, Catherine. "Conception de logiciels sûrs de fonctionnement analyse de la sécurité des logiciels, mécanismes de décision pour la programmation en N-versions /." Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37605979b.
Bourmaud, Gaëtan Rabardel Pierre. "Les systèmes d' instruments méthodes d' analyse et perspectives de conception /." Saint-Denis : Université de Paris VIII, 2006. http://www.bu.univ-paris8.fr/web/collections/theses/bourmaud_gaetan.pdf.
Brejon, Paul. "Les logiciels d'energetique des batiments : developpement, evaluation technique, illustrations." Paris, ENMP, 1988. http://www.theses.fr/1988ENMP0110.
Errami, Mounir. "Analyse statistique des structures tridimensionnelles de protéines et validation de familles structurales à bas taux d'identité." Lyon 1, 2002. http://www.theses.fr/2002LYO10178.
El Hatib, Souad. "Une approche sémantique de détection de maliciel Android basée sur la vérification de modèles et l'apprentissage automatique." Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/66322.
The ever-increasing number of Android malware samples is accompanied by a deep concern about security issues in the mobile ecosystem. Unquestionably, Android malware detection has received much attention in the research community and has therefore become a crucial aspect of software security. Indeed, malware proliferation goes hand in hand with the sophistication and complexity of malware. To illustrate, more elaborate malware, such as polymorphic and metamorphic malware, makes use of code obfuscation techniques to build new variants that preserve the semantics of the original code but modify its syntax, and thus escape the usual detection methods. In the present work, we propose a model-checking-based approach that combines static analysis and machine learning. Mainly, from a given Android application we extract an abstract model expressed in LNT, a process algebra language. Afterwards, security-related Android behaviours specified by temporal logic formulas are checked against this model; the satisfaction of a specific formula is considered as a feature; finally, machine learning algorithms are used to classify the application as malicious or not.
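The combination of the two stages can be pictured as follows (a sketch with made-up formulas, verdicts and labels): each temporal formula checked against the application's model yields one boolean feature, and a standard classifier is trained on the resulting vectors.
# Sketch: the verdict of each temporal-logic formula on an app's model becomes
# one boolean feature (1 = satisfied); a classifier is then trained on these
# vectors. Formulas, verdicts and labels below are made up.
from sklearn.tree import DecisionTreeClassifier

formulas = ["sends_sms_after_boot", "reads_contacts_then_network", "asks_device_admin"]

X_train = [              # one row per analysed application, columns follow `formulas`
    [1, 1, 0],           # known malicious
    [0, 1, 1],           # known malicious
    [0, 0, 0],           # known benign
    [0, 1, 0],           # known benign
]
y_train = ["malware", "malware", "benign", "benign"]

classifier = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(classifier.predict([[1, 0, 0]]))   # verdict vector of a new application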
Josse, Sébastien. "Analyse et détection dynamique de code viraux dans un contexte cryptographique (et application à l'évaluation de logiciel antivirus)." Palaiseau, Ecole polytechnique, 2009. http://www.theses.fr/2009EPXX0019.
This thesis is devoted to the problem of anti-virus software evaluation. The end user of an anti-virus software package does not always know what confidence he can place in his product to counter the viral threat effectively. In order to answer this question, it is necessary to formulate the security issue which such a product must respond to, and to have tools and methodological, technical and theoretical criteria for assessing the robustness of security functions and the relevance of design choices against an identified viral threat. I concentrate my analysis of the viral threat on a few mechanisms suited to the white-box context, i.e. allowing a virus to protect the confidentiality of its critical data and to mask its operations in an environment completely controlled by the attacker. A first mandatory step before organizing any line of defense is to analyze viruses, in order to understand their operation and impact on the system. Software reverse-engineering methods and techniques occupy a central place here (alongside cryptanalysis methods). I basically focus on dynamic methods of information extraction based on an emulation of the hardware supporting the operating system. The evaluation of a detection engine based on objective criteria requires a modeling effort. I study a few models (formal grammars, abstract interpretation, statistics); each of these models makes it possible to formalize some aspects of the viral detection problem. I study the criteria it is possible to define in each of these formal frameworks and their use for part of the evaluation tasks of a detection engine. My work led me to implement a methodological approach and a testing platform for the robustness analysis of the functions and mechanisms of an anti-virus. I developed a tool to analyze viral code, called VxStripper, whose features are useful for several tests. The formal tools are used to qualify (on the basis of detailed design information) the compliance and effectiveness of a detection engine.
Zhioua, Abir. "Analyse de diverses distances en vérification formelle probabiliste." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/28874/28874.pdf.
Sims, Elodie-Jane. "Analyse statique de pointeurs et logique de séparation." Palaiseau, Ecole polytechnique, 2007. http://www.theses.fr/2007EPXX0038.
Blanchet, Christophe. "Logiciel MPSA et ressources bioinformatiques client-serveur web dédiés à l'analyse de séquences de protéine." Lyon 1, 1999. http://www.theses.fr/1999LYO10139.
Kriaa, Hassen. "Analyse et conception révisable par interaction entre agents : Une approche hybride." Pau, 2000. http://www.theses.fr/2000PAUU3011.
Han, Woo-Suck. "Analyse linéaire et non-linéaire de plaques et coques par éléments finis en statique et dynamique sur micro-ordinateur." Vandoeuvre-les-Nancy, INPL, 1989. http://www.theses.fr/1989NAN10177.
Garcia, Arnaud. "Analyse statistique et morphologique des images multivaluées : développements logiciels pour les applications cliniques." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2008. http://tel.archives-ouvertes.fr/tel-00422589.
Lambert, Thierry. "Réalisation d'un logiciel d'analyse de données." Paris 11, 1986. http://www.theses.fr/1986PA112274.
Chakkour, Tarik. "Développement, analyse et implémentation d'un modèle financier continu en temps." Thesis, Lorient, 2017. http://www.theses.fr/2017LORIS468/document.
SOFI is a software tool marketed by the company MGDIS. It is designed for public institutions, such as local communities, to set out multi-year budgets. SOFI is based on discrete financial modeling; consequently, a shortcoming of SOFI is its reliance on Excel-like tables. In this thesis, we build a new model using another paradigm. This new model is based on continuous-in-time modeling and uses mathematical tools such as convolution and integration. The continuous-in-time financial model relies on measure theory. We created measures and fields and used them in our modeling in order to show the consistency of the model. We showed in this modeling how some aspects of the functioning of a constant-rate loan could be inserted into the continuous-in-time model. We build a financial model with a variable rate which allows modeling a constant loan rate defined at borrowing time. We express the algebraic spending density in terms of the loan density by a mathematical operator. This operator is not always invertible. We study this inverse problem of the financial model on the space of square-integrable functions defined on a compact set, and on the space of Radon measures. We developed two industrial libraries to implement measures and fields for the continuous-in-time financial model, called LemfAN (Library Embedded Finance And Numerical Analysis) and Lemf (Library Embedded Finance). The integration and validation of these libraries in SOFI were successfully carried out.
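The role of convolution in such a continuous-in-time model can be illustrated with a discretized toy computation (the densities, the time grid and the uniform repayment pattern are made-up assumptions, not the model's actual equations or the SOFI implementation):
# Toy discretization of one relation of a continuous-in-time model: a repayment
# density obtained by convolving the loan density with a repayment pattern of
# total mass 1. The densities and the time grid are made-up illustrative data.
import numpy as np

dt = 0.25                                  # time step, in years
t = np.arange(0.0, 10.0, dt)

loan_density = np.zeros_like(t)            # borrow 100 per year during year 1
loan_density[(t >= 0.0) & (t < 1.0)] = 100.0

repayment_pattern = np.zeros_like(t)       # repay uniformly over the next 5 years
repayment_pattern[(t > 0.0) & (t <= 5.0)] = 1.0 / 5.0

repayment_density = np.convolve(loan_density, repayment_pattern)[: len(t)] * dt

print("total borrowed:", loan_density.sum() * dt)       # 100.0
print("total repaid  :", repayment_density.sum() * dt)  # ~100.0, the same mass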
Zinovieva-Leroux, Eléna. "Méthodes symboliques pour la génération de tests de systèmes réactifs comportant des données." Rennes 1, 2004. https://tel.archives-ouvertes.fr/tel-00142441.
Abdali, Abdelkebir. "Systèmes experts et analyse de données industrielles." Lyon, INSA, 1992. http://www.theses.fr/1992ISAL0032.
To analyse industrial process behaviour, many kinds of information are needed. As they are mostly numerical, statistical and data analysis methods are well suited to this activity. Their results must be interpreted with other knowledge about the analysis process. Our work falls within the framework of the application of Artificial Intelligence techniques to Statistics. Its aim is to study the feasibility and development of statistical expert systems in an industrial process setting. The prototype ALADIN is a knowledge-based system designed to be an intelligent assistant that helps a non-specialist user analyze data collected on industrial processes. Written in Turbo-Prolog, it is coupled with the statistical package MODULAD. The architecture of this system is flexible and combines knowledge about plants in general, the studied process and statistical methods. Its validation is performed on continuous manufacturing processes (cement and cast iron processes). At present, we have limited ourselves to principal component analysis problems.
Roume, Cyril. "Analyse et restructuration de hiérarchies de classes." Montpellier 2, 2004. http://www.theses.fr/2004MON20088.
Garoche, Pierre-Loïc. "Analyse statistique d'un calcul d'acteurs par interprétation abstraite." Toulouse, INPT, 2008. http://ethesis.inp-toulouse.fr/archive/00000629/.
The Actor model, introduced by Hewitt and Agha in the late 80s, describes a concurrent communicating system as a set of autonomous agents with non-uniform interfaces, communicating by the use of labeled messages. The CAP process calculus, proposed by Colaço, is based on this model and allows describing non-trivial realistic systems without the need for complex encodings. CAP is a higher-order calculus: messages can carry actor behaviors. Multiple works address the analysis of CAP properties, mainly by the use of inference-based type systems with behavioral types and sub-typing. Meanwhile, more recent works, by Venet and later Feret, propose the use of abstract interpretation to analyze process calculi. These approaches allow computing non-uniform properties; for example, they are able to differentiate recursive instances of the same thread. This thesis is at the crossroads of these two approaches, applying abstract interpretation to the analysis of CAP. Following the framework of Feret, CAP is first expressed in a non-standard form, easing its analysis. The set of reachable states is then over-approximated via a sound-by-construction representation within existing abstract domains. [...]