Dissertations / Theses on the topic 'Security, Fuzzing'


Consult the top 18 dissertations / theses for your research on the topic 'Security, Fuzzing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Sayed, Shereef. "Black-Box Fuzzing of the REDHAWK Software Communications Architecture." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/54566.

Full text
Abstract:
As the complexity of software increases, so does the complexity of software testing. This challenge is especially true for modern military communications as radio functionality becomes more digital than analog. The Software Communications Architecture was introduced to manage the increased complexity of software radios. But the challenge of testing software radios still remains. A common methodology of software testing is the unit test. However, unit testing of software assumes that the software under test can be decomposed into its fundamental units of work. The intention of such decomposition is to simplify the problem of identifying the set of test cases needed to demonstrate correct behavior. In practice, large software efforts can rarely be decomposed in simple and obvious ways. In this paper, we introduce the fuzzing methodology of software testing as it applies to software radios. Fuzzing is a methodology that acts only on the inputs of a system and iteratively generates new test cases in order to identify points of failure in the system under test. The REDHAWK implementation of the Software Communications Architecture is employed as the system under test by a fuzzing framework called Peach. Fuzz testing of REDHAWK identified a software bug within the Core Framework, along with a systemic flaw that leaves the system in an invalid state and open to malicious use. It is recommended that a form of Fault Detection be integrated into REDHAWK for collocated processes at a minimum, and distributed processes at best, in order to provide a more fault tolerant system.
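The abstract above describes feeding mutated inputs to a system under test and watching for failures. The following minimal Python sketch illustrates that black-box loop in general terms; it is not Peach or REDHAWK-specific, and the seed file and target command are hypothetical placeholders.

```python
# Minimal black-box fuzzing loop in the spirit of the approach described above.
# Illustrative sketch only: the seed file and target command are placeholders.
import random
import subprocess
import tempfile

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Randomly corrupt a few bytes of the seed input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def run_target(target_cmd, payload: bytes) -> int:
    """Write the payload to a temp file, hand it to the SUT, return its exit code."""
    with tempfile.NamedTemporaryFile(suffix=".bin") as f:
        f.write(payload)
        f.flush()
        try:
            return subprocess.run(target_cmd + [f.name], capture_output=True,
                                  timeout=10).returncode
        except subprocess.TimeoutExpired:
            return 0  # treat hangs as non-crashes in this simple sketch

if __name__ == "__main__":
    seed = open("seed_input.bin", "rb").read()      # hypothetical seed file
    target = ["./component_under_test"]             # hypothetical SUT command
    for i in range(1000):
        case = mutate(seed)
        rc = run_target(target, case)
        if rc < 0:                                   # terminated by a signal (e.g. SIGSEGV)
            open(f"crash_{i}.bin", "wb").write(case)
            print(f"iteration {i}: crash, signal {-rc}")
```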
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
2

Sletmo, Patrik. "Introducing probabilities within grey-box fuzzing." Thesis, Linköpings universitet, Databas och informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161893.

Full text
Abstract:
Over the recent years, the software industry has faced a steady increase in the number of exposed and exploited software vulnerabilities. With more software and devices being connected to the internet every day, the need for proactive security measures has never been more important. One promising new technology for making software more secure is fuzz testing. This automated testing technique is based around generating a large number of test cases with the intention of revealing dangerous bugs and vulnerabilities. In this thesis work, a new direction within grey-box fuzz testing is evaluated against previous work. The presented approach uses sampled probability data in order to guide the fuzz testing towards program states that are expected to be easy to reach and beneficial for the discovery of software vulnerabilities. Evaluation of the design shows that the suggested approach provides no obvious advantage over existing solutions, but also indicates that the performance advantage could be dependent on the structure of the system under test. However, analysis of the design itself highlights several design decisions that could benefit from more extensive research. While the design proposed in this thesis work is insufficient for replacing current state of the art fuzz testing software, it provides a solid foundation for future research within the field. With the many insights gained from the design and implementation work, this thesis work aims to both inspire others and showcase the challenges of creating a probability-based approach to grey-box fuzz testing.
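As a rough illustration of the probability-guided scheduling idea described above (not the thesis implementation), the sketch below weights seed selection by a sampled reachability estimate and by how often each seed has already been fuzzed; all names and numbers are illustrative assumptions.

```python
# Illustrative probability-weighted seed scheduling sketch: seeds whose program
# states are estimated to be easier to reach, and that are less explored,
# get a higher chance of being picked for mutation.
import random

class Seed:
    def __init__(self, data: bytes, reach_prob: float):
        self.data = data
        self.reach_prob = reach_prob   # sampled estimate of how reachable the state is
        self.times_fuzzed = 0

def pick_seed(corpus):
    # Weight favors reachable-but-underexplored seeds.
    weights = [s.reach_prob / (1 + s.times_fuzzed) for s in corpus]
    choice = random.choices(corpus, weights=weights, k=1)[0]
    choice.times_fuzzed += 1
    return choice

corpus = [Seed(b"GET /", 0.8), Seed(b"POST /admin", 0.2), Seed(b"PUT /x", 0.5)]
for _ in range(5):
    print(pick_seed(corpus).data)
```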
APA, Harvard, Vancouver, ISO, and other styles
3

McDonough, Kenton Robert. "Torpedo: A Fuzzing Framework for Discovering Adversarial Container Workloads." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/104159.

Full text
Abstract:
Over the last decade, container technology has fundamentally changed the landscape of commercial cloud computing services. In contrast to traditional VM technologies, containers theoretically provide the same process isolation guarantees with less overhead and additionally introduce finer grained options for resource allocation. Cloud providers have widely adopted container based architectures as the standard for multi-tenant hosting services and rely on underlying security guarantees to ensure that adversarial workloads cannot disrupt the activities of coresident containers on a given host. Unfortunately, recent work has shown that the isolation guarantees provided by containers are not absolute. Due to inconsistencies in the way cgroups have been added to the Linux kernel, there exist vulnerabilities that allow containerized processes to generate "out of band" workloads and negatively impact the performance of the entire host without being appropriately charged. Because of the relative complexity of the kernel, discovering these vulnerabilities through traditional static analysis tools may be very challenging. In this work, we present TORPEDO, a set of modifications to the SYZKALLER fuzzing framework that creates containerized workloads and searches for sequences of system calls that break process isolation boundaries. TORPEDO combines traditional code coverage feedback with resource utilization measurements to motivate the generation of "adversarial" programs based on user-defined criteria. Experiments conducted on the default docker runtime runC as well as the virtualized runtime gVisor independently reconfirm several known vulnerabilities and discover interesting new results and bugs, giving us a promising framework to conduct more research.
Master of Science
Over the last decade, container technology has fundamentally changed the landscape of commercial cloud computing services. By abstracting away many of the system details required to deploy software, developers can rapidly prototype, deploy, and take advantage of massive distributed frameworks when deploying new software products. These paradigms are supported with corresponding business models offered by cloud providers, who allocate space on powerful physical hardware among many potentially competing services. Unfortunately, recent work has shown that the isolation guarantees provided by containers are not absolute. Due to inconsistencies in the way containers have been implemented by the Linux kernel, there exist vulnerabilities that allow containerized programs to generate "out of band" workloads and negatively impact the performance of other containers. In general, these vulnerabilities are difficult to identify, but can be very severe. In this work, we present TORPEDO, a set of modifications to the SYZKALLER fuzzing framework that creates containerized workloads and searches for programs that negatively impact other containers. TORPEDO uses a novel technique that combines resource monitoring with code coverage approximations, and initial testing on common container software has revealed new interesting vulnerabilities and bugs.
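A minimal sketch of the kind of feedback combination described above: the score rewards both new coverage and resource consumption that the container was not charged for. The function name, weights, and CPU-accounting inputs are assumptions for illustration, not TORPEDO's actual implementation.

```python
# Sketch: score a generated workload by new code coverage plus the "out of band"
# resource use it caused relative to what its cgroup was charged. Illustrative only.
def score_workload(new_edges: int, host_cpu_seconds: float,
                   charged_cpu_seconds: float,
                   alpha: float = 1.0, beta: float = 10.0) -> float:
    uncharged = max(0.0, host_cpu_seconds - charged_cpu_seconds)
    return alpha * new_edges + beta * uncharged

# A program that consumed 4.0 s of host CPU while its cgroup was charged only
# 0.5 s scores highly even with a modest coverage gain.
print(score_workload(new_edges=3, host_cpu_seconds=4.0, charged_cpu_seconds=0.5))
```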
APA, Harvard, Vancouver, ISO, and other styles
4

Dutta, Rahul Kumar. "A Framework for Software Security Testing and Evaluation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121645.

Full text
Abstract:
Security in the automotive industry is a growing concern. As more smart electronic devices become connected to each other, this dependency is pushing us to connect them with moving objects such as cars, buses, and trucks. As such, safety and security issues related to automotive systems are becoming more relevant in the realm of internet-connected devices and objects. In this thesis, we emphasize certain factors that introduce security vulnerabilities in the implementation phase of the Software Development Life Cycle (SDLC). Improper input validation is one of them, and it is the one we address in our work. We implement a security evaluation framework that allows us to improve security in automotive software by identifying and removing software security vulnerabilities that arise from input validation flaws during the SDLC. We propose to use this framework in the implementation and testing phases so that critical security-by-design deficiencies of the software can be easily addressed and mitigated.
APA, Harvard, Vancouver, ISO, and other styles
5

Duchene, Fabien. "Detection of web vulnerabilities via model inference assisted evolutionary fuzzing." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM022/document.

Full text
Abstract:
Testing is a viable approach for detecting implementation bugs which have a security impact, a.k.a. vulnerabilities. When the source code is not available, it is necessary to use black-box testing techniques. We address the problem of automatically detecting a certain class of vulnerabilities (Cross Site Scripting, a.k.a. XSS) in web applications in a black-box test context. We propose an approach for inferring models of web applications and fuzzing from such models and an attack grammar. We infer control plus taint flow automata, from which we produce slices that narrow the fuzzing search space. Genetic algorithms are then used to schedule the malicious inputs which are sent to the application. We incorporate a test verdict by performing a double taint inference on the browser parse tree and combining this with taint-aware vulnerability patterns. Our implementations LigRE and KameleonFuzz outperform current open-source black-box scanners. We discovered 0-day XSS (i.e., previously unknown vulnerabilities) in web applications used by millions of users.
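To make the evolutionary step concrete, here is a minimal genetic-algorithm input-evolution sketch in the spirit of the approach above. It is not LigRE or KameleonFuzz: the send() stub, the token-based fitness function, and the attack alphabet are illustrative assumptions.

```python
# Minimal genetic-algorithm evolution of candidate XSS payloads: fitness rewards
# payloads whose attack-grammar tokens survive into the application's response.
import random

ALPHABET = list("<script>alert(1)</script>\"'=() abcxyz")

def send(payload: str) -> str:
    """Placeholder for submitting the payload to the web application and
    returning the resulting HTML page (here, a simple echoing stub)."""
    return f"<p>You searched for: {payload}</p>"

def fitness(payload: str) -> float:
    page = send(payload)
    tokens = ["<script", "onerror=", "alert("]
    return sum(1.0 for t in tokens if t in payload and t in page)

def mutate(p: str) -> str:
    i = random.randrange(len(p))
    return p[:i] + random.choice(ALPHABET) + p[i + 1:]

def crossover(a: str, b: str) -> str:
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

population = ["".join(random.choices(ALPHABET, k=20)) for _ in range(30)]
for gen in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(20)]
print("best candidate after evolution:", max(population, key=fitness))
```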
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Jin. "Detecting Server-Side Web Applications with Unrestricted File Upload Vulnerabilities." Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright163007760528389.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lone, Sang Fernand. "Protection des systèmes informatiques contre les attaques par entrées-sorties." Phd thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00863020.

Full text
Abstract:
Attacks targeting computer systems today go beyond simple malware and increasingly involve hardware components. This thesis focuses on this new class of attacks and deals, more specifically, with input/output (I/O) attacks, which divert legitimate hardware functionality, such as I/O mechanisms, for various malicious purposes. The objective is to study these attacks, which are extremely difficult to detect with conventional software techniques (since carrying them out does not require any processor involvement), in order to propose suitable countermeasures based on trusted, unavoidable hardware components. This manuscript concentrates on two cases: hardware components that are deliberately designed to be malicious and act in the same way as a program embedding a Trojan horse; and vulnerable hardware components that have been modified by an attacker, locally or over the network, to embed malicious functions (typically, a backdoor in their firmware). To identify I/O attacks, we began by developing an attack model that takes into account the different abstraction levels of a computer system. We then relied on this attack model to study the attacks using two complementary approaches: a traditional vulnerability analysis, consisting of identifying a vulnerability, developing proofs of concept, and proposing countermeasures; and a fuzzing-based vulnerability analysis on I/O buses, relying on a fault injection tool we designed, named IronHide, capable of simulating attacks from a malicious hardware component. The results obtained with each of these approaches are discussed, and several countermeasures to the identified vulnerabilities, based on existing hardware components, are proposed.
APA, Harvard, Vancouver, ISO, and other styles
8

Potnuru, Srinath. "Fuzzing Radio Resource Control messages in 5G and LTE systems : To test telecommunication systems with ASN.1 grammar rules based adaptive fuzzer." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294140.

Full text
Abstract:
5G telecommunication systems must be ultra-reliable to meet the needs of the next evolution in communication. The systems deployed must be thoroughly tested and must conform to their standards. Software and network protocols are commonly tested with techniques like fuzzing, penetration testing, code review, conformance testing. With fuzzing, testers can send crafted inputs to monitor the System Under Test (SUT) for a response. 3GPP, the standardization body for the telecom system, produces new versions of specifications as part of continuously evolving features and enhancements. This leads to many versions of specifications for a network protocol like Radio Resource Control (RRC), and testers need to constantly update the testing tools and the testing environment. In this work, it is shown that by using the generic nature of RRC specifications, which are given in Abstract Syntax Notation One (ASN.1) description language, one can design a testing tool to adapt to all versions of 3GPP specifications. This thesis work introduces an ASN.1 based adaptive fuzzer that can be used for testing RRC and other network protocols based on ASN.1 description language. The fuzzer extracts knowledge about ongoing RRC messages using protocol description files of RRC, i.e., RRC ASN.1 schema from 3GPP, and uses the knowledge to fuzz RRC messages. The adaptive fuzzer identifies individual fields, sub-messages, and custom data types according to specifications when mutating the content of existing messages. Furthermore, the adaptive fuzzer has identified a previously unidentified vulnerability in Evolved Packet Core (EPC) of srsLTE and openLTE, two open-source LTE implementations, confirming the applicability to robustness testing of RRC and other network protocols.
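To illustrate the grammar-aware mutation idea, the sketch below uses the third-party asn1tools Python package (an assumption of this sketch) with a toy ASN.1 schema: a message is decoded against the schema, one field is mutated within its declared constraints, and the result is re-encoded. The real tool works from the much larger 3GPP RRC ASN.1 schema.

```python
# Illustrative grammar-aware mutation with the asn1tools package and a toy schema.
import random
import asn1tools

TOY_SCHEMA = """
Toy DEFINITIONS AUTOMATIC TAGS ::= BEGIN
  Request ::= SEQUENCE {
    id        INTEGER (0..255),
    priority  INTEGER (0..7),
    name      UTF8String (SIZE (1..16))
  }
END
"""

spec = asn1tools.compile_string(TOY_SCHEMA, "uper")

def mutate_message(encoded: bytes) -> bytes:
    """Decode, mutate one field according to the schema constraints, re-encode."""
    msg = spec.decode("Request", encoded)
    field = random.choice(list(msg))
    if field == "name":
        msg[field] = "A" * random.randint(1, 16)
    else:
        bound = 255 if field == "id" else 7
        msg[field] = random.randint(0, bound)
    return spec.encode("Request", msg)

seed = spec.encode("Request", {"id": 1, "priority": 3, "name": "attach"})
print(mutate_message(seed).hex())
```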
APA, Harvard, Vancouver, ISO, and other styles
9

Ahmad, Abbas. "Model-Based Testing for IoT Systems : Methods and tools." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD008/document.

Full text
Abstract:
The Internet of Things (IoT) is nowadays a global means of innovation and transformation for many companies. Applications extend to a large number of domains, such as smart cities, smart homes, healthcare, etc. The Gartner Group estimates an increase to up to 21 billion connected things by 2020. The large span of "things" introduces problematic aspects, such as conformance and interoperability, due to the heterogeneity of communication protocols and the lack of a globally accepted standard. The large span of usages introduces problems regarding secure deployments and scalability of the network over large-scale infrastructures. This thesis deals with the problem of validating the Internet of Things to meet the challenges of IoT systems. For that, we propose an approach using the generation of tests from models (Model-Based Testing, MBT). We have confronted this approach through multiple experiments using real systems, thanks to our participation in international projects. The important effort that needs to be placed on the testing aspects reminds every IoT system developer that doing nothing is more expensive later on than doing it as you go.
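As a minimal illustration of model-based test generation, the sketch below enumerates stimulus sequences that are valid walks through a small behavioral model; the three-state "IoT device" model is an assumption for illustration, not one of the thesis case studies.

```python
# Minimal model-based test generation: derive abstract test sequences by
# walking a small state-machine model of the system under test.
from itertools import product

# transitions: (state, stimulus) -> next state
MODEL = {
    ("DISCONNECTED", "connect"):    "CONNECTED",
    ("CONNECTED",    "subscribe"):  "SUBSCRIBED",
    ("CONNECTED",    "disconnect"): "DISCONNECTED",
    ("SUBSCRIBED",   "publish"):    "SUBSCRIBED",
    ("SUBSCRIBED",   "disconnect"): "DISCONNECTED",
}
STIMULI = ["connect", "subscribe", "publish", "disconnect"]

def abstract_tests(length: int = 3):
    """Enumerate stimulus sequences that are valid paths through the model."""
    for seq in product(STIMULI, repeat=length):
        state, ok = "DISCONNECTED", True
        for stimulus in seq:
            nxt = MODEL.get((state, stimulus))
            if nxt is None:
                ok = False
                break
            state = nxt
        if ok:
            yield seq

for test in abstract_tests():
    print(" -> ".join(test))
```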
APA, Harvard, Vancouver, ISO, and other styles
10

(10746420), Hui Peng. "FUZZING HARD-TO-COVER CODE." Thesis, 2021.

Find full text
Abstract:
Fuzzing is a simple yet effective approach to discovering bugs by repeatedly testing the target system with randomly generated inputs. In this thesis, we identify several limitations in state-of-the-art fuzzing techniques: (1) the coverage wall issue: fuzzer-generated inputs cannot bypass complex sanity checks in the target programs and are therefore unable to cover code paths protected by such checks; (2) the inability to adapt to the interfaces through which fuzzer-generated inputs are injected; one important example is the software/hardware interface between drivers and their devices; (3) the dependency on code coverage feedback, which makes it hard to apply fuzzing to targets where code coverage collection is challenging (due to proprietary components or special software design).

To address the coverage wall issue, we propose T-Fuzz, a novel approach to overcome the issue from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the coverage wall is reached, a light-weight, dynamic tracing based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program.

By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, LAVA-M dataset and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, we found 4 new bugs in previously-fuzzed programs and libraries.
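The following toy Python stand-in only conveys the intuition of "negate the check the fuzzer keeps failing and keep fuzzing"; T-Fuzz itself transforms binaries, and the parser, checks, and inputs below are invented for illustration.

```python
# Toy illustration of removing a failing sanity check so fuzzing can reach
# the code behind it (the "coverage wall").
def parse(data: bytes, negate_checks=frozenset()):
    checks_hit = []
    def check(idx, cond):
        checks_hit.append(idx)
        return (not cond) if idx in negate_checks else cond
    if not check(0, data[:4] == b"MAGI"):      # hard-to-satisfy sanity check
        return "rejected", checks_hit
    if not check(1, len(data) > 8):
        return "rejected", checks_hit
    # deep logic guarded by the checks; this is what a fuzzer wants to reach
    return ("crash" if data[8] == 0xFF else "ok"), checks_hit

# A random fuzzer never passes check 0, so the coverage wall is at check 0.
status, hit = parse(b"\x00" * 16)
# Transform the program: negate the failing check and continue fuzzing.
status2, _ = parse(b"\x00" * 8 + b"\xff" * 8, negate_checks={0})
print(status, "->", status2)   # rejected -> crash
```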

To address the inability to adapt to interfaces, we propose USBFuzz. We target the USB interface, fuzzing across the software/hardware barrier. USBFuzz uses device emulation to inject fuzzer-generated input into drivers under test, and applies coverage-guided fuzzing to device drivers if code coverage collection is supported by the kernel. At its core, USBFuzz emulates a special USB device that provides data to the device driver (when it performs IO operations). This allows us to fuzz the input space of drivers from the device's perspective, an angle that is difficult to achieve with real hardware. USBFuzz discovered 53 bugs in Linux (out of which 37 are new, and 36 are memory bugs of high security impact, potentially allowing arbitrary read or write in the kernel address space), one bug in FreeBSD, four bugs (resulting in Blue Screens of Death) in Windows, and three bugs (two causing an unplanned restart, one freezing the system) in MacOS.

To break the dependency on code coverage feedback, we propose WebGLFuzzer, which targets the WebGL interface (a set of JavaScript APIs in browsers enabling high-performance, GPU-accelerated graphics rendering), where code coverage collection is challenging. WebGLFuzzer internally uses a log-guided fuzzing technique: it does not depend on code coverage feedback but instead uses the log messages emitted by browsers to guide its input mutation. Compared with coverage-guided fuzzing, this log-guided technique performs more meaningful mutations under the guidance of the log message. To this end, WebGLFuzzer uses static analysis to identify which argument to mutate or which API call to insert into the current program in order to fix the internal WebGL program state, given a log message emitted by the browser. WebGLFuzzer is under evaluation and so far has found 6 bugs, one of which is able to freeze the X-Server.
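A minimal sketch of the log-guided idea: map browser log messages to mutation or repair actions instead of relying on coverage. The message patterns and action names are assumptions, not WebGLFuzzer's actual rule set.

```python
# Sketch of log-guided mutation selection: the browser's log message, not a
# coverage map, decides what the fuzzer does next.
LOG_RULES = [
    ("INVALID_OPERATION: useProgram", "insert_link_program_call"),
    ("INVALID_VALUE",                 "mutate_numeric_argument"),
    ("INVALID_ENUM",                  "replace_enum_argument"),
]

def next_mutation(log_line: str) -> str:
    """Pick the mutation suggested by the first matching log pattern."""
    for pattern, action in LOG_RULES:
        if pattern in log_line:
            return action
    return "random_api_insertion"   # fall back to an unguided mutation

print(next_mutation("WebGL: INVALID_VALUE: texImage2D: width out of range"))
```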
APA, Harvard, Vancouver, ISO, and other styles
11

Barbosa, João Fernando da Costa Meireles. "Automated Repair of Security Vulnerabilities using Coverage-guided Fuzzing." Master's thesis, 2021. https://hdl.handle.net/10216/135943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Barbosa, João Fernando da Costa Meireles. "Automated Repair of Security Vulnerabilities using Coverage-guided Fuzzing." Dissertação, 2021. https://hdl.handle.net/10216/135943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Atlidakis, Evangelos. "Structure and Feedback in Cloud Service API Fuzzing." Thesis, 2021. https://doi.org/10.7916/d8-2bry-am81.

Full text
Abstract:
Over the last decade, we have witnessed an explosion in cloud services for hosting software applications (Software-as-a-Service), for building distributed services (Platform-as-a-Service), and for providing general computing infrastructure (Infrastructure-as-a-Service). Today, most cloud services are programmatically accessed through Application Programming Interfaces (APIs) that follow the REpresentational State Transfer (REST) software architectural style and cloud service developers use interface-description languages to describe and document their services. My thesis is that we can leverage the structured usage of cloud services through REST APIs and feedback obtained during interaction with such services in order to build systems that test cloud services in an automatic, efficient, and learning-based way through their APIs. In this dissertation, I introduce stateful REST API fuzzing and describe its implementation in RESTler: the first stateful REST API fuzzing system. Stateful means that RESTler attempts to explore latent service states that are reachable only with sequences of multiple interdependent API requests. I then describe how stateful REST API fuzzing can be extended with active property checkers that test for violations of desirable REST API security properties. Finally, I introduce Pythia, a new fuzzing system that augments stateful REST API fuzzing with coverage-guided feedback and learning-based mutations.
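To make "sequences of multiple interdependent API requests" concrete, here is a small sketch of one stateful sequence plus a simple property check, using the requests HTTP library; the base URL, endpoints, and response shape are hypothetical, and this is not RESTler's request generation.

```python
# Sketch of a stateful request sequence: later requests consume values
# produced by earlier ones, and simple security properties are checked.
import requests

BASE = "http://localhost:8080/api"   # hypothetical service under test

def execute_sequence(fuzzed_name: str):
    # Request 1: create a resource; its id is the dynamic object needed later.
    r1 = requests.post(f"{BASE}/items", json={"name": fuzzed_name}, timeout=5)
    if r1.status_code >= 500:
        return "bug", r1
    item_id = r1.json().get("id")   # assumes the service answers with JSON
    # Request 2: exercise a latent state only reachable with a valid id.
    r2 = requests.delete(f"{BASE}/items/{item_id}", timeout=5)
    # Property check: a deleted resource must no longer be readable,
    # and no request in the sequence may return a 5xx error.
    r3 = requests.get(f"{BASE}/items/{item_id}", timeout=5)
    if r2.status_code >= 500 or r3.status_code == 200:
        return "bug", (r2, r3)
    return "ok", None
```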
APA, Harvard, Vancouver, ISO, and other styles
14

(6640856), Sushant Dinesh. "Retrowrite: Statically Instrumenting COTS Binaries for Fuzzing and Sanitization." Thesis, 2019.

Find full text
Abstract:
End users of closed-source software currently cannot easily analyze the security of programs or patch them if flaws are found. Notably, end users can include developers who use third-party libraries. The current state of the art for coverage-guided binary fuzzing or binary sanitization is dynamic binary translation, which results in prohibitive overhead. Existing static rewriting techniques cannot fully recover symbolization information, and so have difficulty modifying binaries to track code coverage for fuzzing or to add security checks for sanitizers. The ideal solution for adding instrumentation is a static rewriter that can intelligently add the required instrumentation as if it were inserted at compile time. This requires analysis to statically disambiguate between references and scalars, a problem known to be undecidable in the general case. We show that recovering this information is possible in practice for the most common class of software and libraries: 64-bit, position-independent code. Based on our observation, we design a binary-rewriting instrumentation to support American Fuzzy Lop (AFL) and Address Sanitizer (ASan), and show that we achieve compiler levels of performance while retaining precision. Binaries rewritten for coverage-guided fuzzing using RetroWrite are identical in performance to compiler-instrumented binaries and outperform the default QEMU-based instrumentation by 7.5x while triggering more bugs. Our implementation of binary-only Address Sanitizer is 3x faster than Valgrind memcheck, the state-of-the-art binary-only memory checker, and detects 80% more bugs in our security evaluation.
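For context, the sketch below shows the well-known AFL-style edge-coverage bookkeeping that such instrumentation (whether inserted by a compiler or by a static binary rewriter) performs at each basic block; it is a conceptual Python rendering, not RetroWrite code.

```python
# Conceptual sketch of AFL-style edge coverage tracking added at every
# instrumented basic block; constants follow the well-known AFL design.
MAP_SIZE = 1 << 16
coverage_map = bytearray(MAP_SIZE)
prev_loc = 0

def on_basic_block(cur_loc: int):
    """Called at each instrumented block; records the (prev, cur) edge."""
    global prev_loc
    idx = (cur_loc ^ prev_loc) % MAP_SIZE
    coverage_map[idx] = (coverage_map[idx] + 1) & 0xFF
    prev_loc = (cur_loc >> 1) % MAP_SIZE

# Simulate executing a few blocks identified by per-block random IDs.
for block_id in (0x41F3, 0x9B21, 0x41F3):
    on_basic_block(block_id)
print(sum(1 for b in coverage_map if b), "edges hit")
```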
APA, Harvard, Vancouver, ISO, and other styles
15

Ho, Chia-Lun, and 何嘉倫. "Design and Implementation of a Fuzzing Tool for Enhancing the Security of RESTful Web Services." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/34gpn9.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
Academic year 106
Recently, it has become a trend to build websites using systematic development approaches and frameworks. Among these, the RESTful web service is one of the key development technologies. Many well-known web development frameworks (e.g., Laravel, Ruby on Rails) and websites (e.g., Twitter, LinkedIn, WordPress) provide RESTful APIs for ordinary user data access. Moreover, with the popularity of RESTful web services, the related security issues have become more diversified and more complex. In this thesis, we aim to accomplish two goals: first, to design and build a fuzzing tool that helps identify unknown potential vulnerabilities of a RESTful website under test; second, by inspecting the RESTful APIs of the website under check, to let the security team use the proposed fuzzing tool to identify the most likely potential vulnerability information. In this study, we propose a hybrid fuzzing scheme (i.e., fuzzing + genetic algorithm) and implement a prototype of the proposed scheme to help identify potential vulnerabilities of the RESTful APIs of a website under test. We elaborate as follows: 1. Scope of this study and its extended applications: To keep this thesis simple, we mainly focus on general web browsing behaviors (i.e., analyzing the URLs of RESTful web pages). In essence, with minor modifications, the same scheme could easily be adapted (and extended) to analyze common anomalies, such as invalid login account/password attempts. 2. Different test methodologies and application scopes: Most conventional fuzzing tools mainly target common vulnerabilities such as SQL injection and XSS by injecting specific input strings. Our proposed hybrid scheme (i.e., fuzzing + genetic algorithm, or GA), in contrast, is mainly designed to help identify unknown potential vulnerabilities of a RESTful website. 3. Fuzzing strategy: The design of any fuzzing scheme requires a fuzzing strategy. Since GAs are good candidates for most optimal-search problems, we chose a GA as the adjustment mechanism for implementing the fuzzing strategy. 4. Performance evaluation: In this study, we built a prototype of the proposed hybrid fuzzing scheme to help identify potential vulnerabilities of typical RESTful websites. Using the proposed fuzzing tool, with no more than 100,000 generations, a tester can complete the checking of a typical RESTful website within 10 minutes. Overall, 80% of the vulnerability cases in this study terminated the process within 50,000 generations, and some potential vulnerability instances even terminated the search within 10,000 generations.
APA, Harvard, Vancouver, ISO, and other styles
16

(9217391), Yuseok Jeon. "Practical Type and Memory Safety Violation Detection Mechanisms." Thesis, 2020.

Find full text
Abstract:
System programming languages such as C and C++ are designed to give the programmer full control over the underlying hardware. However, this freedom comes at the cost of type and memory safety violations, which may allow an attacker to compromise applications. In particular, type safety violation, also known as type confusion, is one of the major attack vectors used to corrupt modern C++ applications. In the past years, several type confusion detectors have been proposed, but they are severely limited by high performance overhead, low detection coverage, and high false positive rates. To address these issues, we propose HexType and V-Type. First, we propose HexType, a tool that provides low-overhead disjoint metadata structures, compiler optimizations, and handling of specific object allocation patterns. Compared to prior work, HexType thus significantly improves detection coverage and reduces performance overhead. In addition, HexType discovers new type confusion bugs in real-world programs such as Qt and Apache Xerces-C++. However, although HexType improves substantially on prior detectors, it still has considerable overhead from managing the disjoint metadata structure and tracking individual objects, and it has false positives from imprecise object tracking. To address these issues, we propose a further advanced mechanism, V-Type, which forcibly changes non-polymorphic types into polymorphic types to make sure all objects maintain type information. By doing this, V-Type removes the burden of tracking object allocation and deallocation and of managing a disjoint metadata structure, which reduces performance overhead and improves detection precision. Another major attack vector is memory safety violations, which attackers can exploit by accessing out-of-bounds or deleted memory. For memory safety violation detection, combining a fuzzer with sanitizers is a popular and effective approach. However, we find that the heavy metadata structures of current sanitizers hinder fuzzing effectiveness. We therefore introduce FuZZan to optimize sanitizer metadata structures for fuzzing. Consequently, FuZZan improves fuzzing throughput, which helps the tester discover more unique paths in the same amount of time and find bugs faster. In conclusion, my research aims to eliminate critical and common C/C++ memory and type safety violations through practical program analysis techniques. Toward this goal, through these three projects, I contribute effective ways to detect type and memory safety violations.
APA, Harvard, Vancouver, ISO, and other styles
17

(10716420), Taegyu Kim. "Cyber-Physical Analysis and Hardening of Robotic Aerial Vehicle Controllers." Thesis, 2021.

Find full text
Abstract:
Robotic aerial vehicles (RAVs) have been increasingly deployed in various areas (e.g., commercial, military, scientific, and entertainment). However, RAVs' security and safety issues can arise not only from the "cyber" domain (e.g., control software) or the "physical" domain (e.g., the vehicle control model) individually, but also from their interplay. Unfortunately, existing work has focused mainly on either "cyber-centric" or "control-centric" approaches. Such a single-domain focus can overlook the security threats caused by the interplay between the cyber and physical domains.
In this thesis, we present cyber-physical analysis and hardening to secure RAV controllers. Through a combination of program analysis and vehicle control modeling, we developed novel techniques to (1) connect the cyber and physical domains and then (2) analyze the individual domains and their interplay. Specifically, we describe how to detect bugs after RAV accidents using provenance (Mayday), how to proactively find bugs using fuzzing (RVFuzzer), and how to patch vulnerable firmware using binary patching (DisPatch). As a result, we have found 91 new bugs in modern RAV control programs; their developers have confirmed 32 cases and patched 11 cases.
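As a toy illustration of control-parameter fuzzing in the spirit of the fuzzing step above, the sketch below perturbs a single controller gain and flags values that destabilize a trivial one-dimensional vehicle model; the model, thresholds, and parameter range are assumptions for illustration.

```python
# Toy control-parameter fuzzing: perturb a controller gain and flag values
# that destabilize a simple simulated vehicle model.
import random

def simulate(kp: float, steps: int = 200) -> float:
    """Proportional controller driving altitude toward a setpoint; returns max |error|."""
    altitude, setpoint, worst = 0.0, 10.0, 0.0
    for _ in range(steps):
        error = setpoint - altitude
        altitude += kp * error            # overly large gains overshoot and diverge
        worst = max(worst, abs(error))
    return worst

for _ in range(20):
    kp = random.uniform(0.0, 3.0)         # fuzzed parameter value
    if simulate(kp) > 100.0:              # crude instability check
        print(f"candidate control-semantic bug: Kp={kp:.2f} destabilizes the model")
```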
APA, Harvard, Vancouver, ISO, and other styles
18

(6632954), Kyriakos K. Ispoglou. "INFERENCE OF RESIDUAL ATTACK SURFACE UNDER MITIGATIONS." Thesis, 2019.

Find full text
Abstract:
Despite the broad diversity of attacks and the many different ways an adversary can exploit a system, each attack can be divided into distinct phases. These phases include the discovery of a vulnerability in the system, its exploitation, and achieving persistence on the compromised system for (potential) further compromise and future access. Determining the exploitability of a system, and hence the success of an attack, remains a challenging, manual task, not only because the problem cannot be formally defined but also because advanced protections and mitigations further complicate the analysis and hence raise the bar for any successful attack. Nevertheless, it is still possible for an attacker to circumvent all of the existing defenses under certain circumstances.

In this dissertation, we define and infer the Residual Attack Surface of a system. That is, we expose the limitations of state-of-the-art mitigations by showing practical ways to circumvent them. This work is divided into four parts. It assumes an attack with three phases and proposes new techniques to infer the Residual Attack Surface at each stage.

For the first part, we focus on vulnerability discovery. We propose FuzzGen, a tool for automatically generating fuzzer stubs for libraries. The synthesized fuzzers are target-specific, thus resulting in high code coverage. This enables developers to expose and fix vulnerabilities (which reside deep in the code and require initializing a complex state to trigger) before they can be exploited. We then move to the vulnerability exploitation part and present a novel technique called Block Oriented Programming (BOP), which automates data-only attacks. Data-only attacks defeat advanced control-flow hijacking defenses such as Control Flow Integrity. Our framework, called BOPC, maps arbitrary exploit payloads into execution traces and encodes them as a set of memory writes. Therefore an attacker's intended execution "sticks" to the execution flow of the underlying binary and never departs from it. In the third part of the dissertation, we present an extension of BOPC that provides measurements giving strong indications of which types of exploit payloads are not possible to execute. BOPC therefore enables developers to test what data an attacker could compromise and enables evaluation of the Residual Attack Surface to assess an application's risk. Finally, for the last part, which addresses achieving persistence on the compromised system, we present a new technique to construct arbitrary malware that evades current dynamic and behavioral analysis. The desired malware is split into hundreds (or thousands) of little pieces, and each piece is injected into a different process. A special emulator coordinates and synchronizes the execution of all individual pieces, thus achieving a "distributed execution" under multiple address spaces. malWASH highlights weaknesses of current dynamic and behavioral analysis schemes and argues for full-system provenance.
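To ground the first phase, here is a minimal library-fuzzing harness of the general kind a stub generator emits, written with the third-party atheris Python fuzzer as a stand-in (an assumption of this sketch; FuzzGen targets C/C++ libraries with libFuzzer-style harnesses) and using json as a placeholder library under test.

```python
# Minimal fuzzing harness for a library entry point, using atheris as an
# assumed stand-in for a libFuzzer-style harness.
import sys
import atheris

with atheris.instrument_imports():
    import json   # placeholder "library under test"

def TestOneInput(data: bytes):
    fdp = atheris.FuzzedDataProvider(data)
    try:
        json.loads(fdp.ConsumeUnicodeNoSurrogates(1024))
    except ValueError:
        pass   # expected rejection of malformed input; uncaught errors are bugs

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```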

Our vision is to expose all the weaknesses of the deployed mitigations, protections, and defenses through the Residual Attack Surface. That way, we can help the research community reinforce the existing defenses or come up with new, more effective ones.
APA, Harvard, Vancouver, ISO, and other styles