
Dissertations / Theses on the topic 'Static Analysis Tool'


Consult the top 50 dissertations / theses for your research on the topic 'Static Analysis Tool.'


1

Morgenthaler, John David. "Static analysis for a software transformation tool /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1997. http://wwwlib.umi.com/cr/ucsd/fullcit?p9804509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Dutko, Adam M. "THE RELATIONAL DATABASE: A NEW STATIC ANALYSIS TOOL?" Cleveland State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=csu1313678735.

Full text
3

Baca, Dejan. "Automated static code analysis : A tool for early vulnerability detection." Licentiate thesis, Karlskrona : Department of Systems and Software Engineering, School of Engineering, Blekinge Institute of Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00429.

Full text
4

Gustafson, Christopher, and Sam Florin. "Qualification of Tool for Static Code Analysis : Processes and Requirements for Approval of Static Code Analysis in the Aviation Industry." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277941.

Full text
Abstract:
In the aviation industry, software development tools are not as easily adopted as in other industries. Due to the catastrophic consequences of software errors in airborne systems, software development processes have rigorous requirements. One of these requirements is that a code standard must be followed. Code standards are used to exclude code constructions that could result in unwanted behaviour. Manually ensuring conformance to a specific code standard can be costly. This process can be automated by a tool for static code analysis; however, this requires a formal qualification. This thesis evaluates the process of qualifying a tool for static code analysis in accordance with the requirements of the major aviation authorities, EASA and FAA. To describe the qualification process, a literature study was conducted. To further explain how an existing tool could be put through the qualification process, a case study of the existing tool Parasoft C/C++ test was conducted. The results of the literature study show which processes must be completed in order to qualify a static code analysis tool. Importantly, the study shows that no requirements are placed on the development process of the tool itself. This was an important takeaway, as it means that an existing tool can be qualified without any additional data from the tool's developer. The case study of Parasoft C/C++ test showed how the tool can be configured and verified to analyze code in accordance with a small set of code rules. Furthermore, three documents containing qualification data were produced, showing how the qualification process should be documented in order to communicate it to an authority. The results of the thesis do not provide the full picture of how a tool could be qualified, as considerations specific to the software the tool is used to develop still need to be taken into account. The thesis does, however, provide guidance on the majority of the applicable requirements. Future research could provide the complete picture of the qualification process, as well as what the process would look like for other types of tools.
5

Eads, Joshua Michael. "EtherAnnotate: a transparent malware analysis tool for integrating dynamic and static examination." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2010. http://scholarsmine.mst.edu/thesis/pdf/Eads_09007dcc807a2d75.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2010.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed May 4, 2010) Includes bibliographical references (p. 65-68).
6

Al, Awadi Wali. "An Assessment of Static and Dynamic malware analysis techniques for the android platform." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2015. https://ro.ecu.edu.au/theses/1635.

Full text
Abstract:
With smartphones becoming an increasingly important part of human life, the security of these devices is very much at stake. The versatility of these phones and their associated applications has fostered an increasing number of mobile malware attacks. The purpose of the research was to answer the following research questions: 1. What are the existing methods for analysing mobile malware? 2. How can methods for analysing mobile malware be evaluated? 3. What would comprise a suitable test bed(s) for analysing mobile malware? The research analyses and compares the various tools and methods available for compromising the Android OS and for observing malware activity before and after its installation onto an Android emulator. Among several available tools and methods, the approach used online scanning engines for pre-installation analysis of mobile malware and the AppUse (Android Pentest Platform Unified Standalone Environment) tool for post-installation analysis. Both approaches facilitate better analysis of mobile malware before and after it is installed onto the mobile device. Because malware is the root cause of many security breaches, the developed mobile malware analysis allows future security practitioners in this field to determine whether newly developed applications are malicious and, if so, what their effect on the target would be. In addition, the AppUse tool allows security practitioners to first establish the behaviour of malware installed on the Android emulator and then effectively eliminate the malware from individual systems as well as from the Google Play Store. Moreover, mobile malware analysis can support successful incident response, helping to mitigate the loss of intellectual property, personal information and other critical private data, and to limit the damage of a security breach or reduce the scope of damage of an attack.
The basic structure of the research work began with a dynamic analysis, followed by a static analysis: a) mobile malware was collected and downloaded from the Contagio website to compromise an Android emulator, b) mobile malware was uploaded to five online scanning engines for dynamic, pre-installation analysis, and c) the AppUse tool was used for static, post-installation analysis, making use of its Android emulator and the JD-GUI and Dex2Jar tools. The AppUse methodology used in the research was successful, but the outcome was not as anticipated: the malicious applications installed on the Android emulator did not generate the expected behavioural reports, only manifest files in XML format. To overcome this issue, the JD-GUI and Dex2Jar tools were used to manually generate the analysis results from the Android emulator and analyse malware behaviour. The key contribution of this research work is the proposal of a dynamic pre-installation and a static post-installation analysis of ten distinct Android malware samples. To our knowledge, no prior research has addressed post-installation mobile malware analysis, and this is the first research to use the AppUse tool for mobile malware analysis.
7

Hubert, Laurent. "Foundations and implementation of a tool bench for static analysis of Java bytecode programs." Rennes 1, 2010. http://www.theses.fr/2010REN1S122.

Full text
Abstract:
In this thesis we study the static analysis of Java bytecode and its semantic foundations. The initialization of an information system is a delicate operation during which security properties are enforced and invariants established. Initialization of fields, objects and classes in Java are difficult operations. These difficulties may lead to security breaches and bugs, and make the static verification of software more difficult. This thesis proposes static analyses to better master initialization in Java. First, we propose a null-pointer analysis that finely tracks the initialization of fields. It allows proving the absence of null-pointer dereferences (NullPointerException) and refining the intra-procedural control-flow graph. We present another analysis to refine the inter-procedural control flow due to class initialization; this analysis directly allows inferring more precise information about static fields. Finally, we propose a type system that enforces secure object initialization, offering a sound and automatic solution to a known security issue. We formalize these analyses and their semantic foundations, and prove their soundness. Furthermore, we provide implementations: we developed several tools from our analyses, with a strong focus on being sound but also efficient. To ease the adaptation of such analyses, which were formalized on idealized languages, to the full-featured Java bytecode, we developed a library that has been made available to the community and is now used in other research labs across Europe.
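The thesis targets Java bytecode; as a loose, hypothetical analogue in Python (mine, not the thesis's analysis), a flow-insensitive sketch of "read before initialization" checking over a function body might look like this:

```python
import ast
import builtins

def reads_before_assignment(func_source: str):
    """Rough, flow-insensitive sketch: flag names read in a function body
    before any earlier statement assigns them (parameters count as assigned).
    Assumes func_source contains a single top-level function definition."""
    func = ast.parse(func_source).body[0]
    assigned = {a.arg for a in func.args.args}  # parameters start out "initialized"
    issues = []
    for stmt in func.body:
        for node in ast.walk(stmt):
            if (isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)
                    and node.id not in assigned
                    and not hasattr(builtins, node.id)):
                issues.append((node.lineno, node.id))
            elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
    return issues

sample_fn = "def f(x):\n    y = total + x\n    total = 0\n    return y\n"
print(reads_before_assignment(sample_fn))  # [(2, 'total')]
```

A real analysis of this kind, like the one the thesis formalizes, would of course be path-sensitive and proven sound; this sketch only conveys the flavour of tracking initialization state.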
8

Gebhard, Gernot [Verfasser], and Reinhard [Akademischer Betreuer] Wilhelm. "Static timing analysis tool validation in the presence of timing anomalies / Gernot Gebhard. Betreuer: Reinhard Wilhelm." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2013. http://d-nb.info/1053679947/34.

Full text
9

Lerner, Harry 1969. "Static types to dynamic variables : re-assessing the methods of prehistoric Huron chipped stone tool documentation and analysis in Ontario." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33298.

Full text
Abstract:
An assemblage of prehistoric Huron chipped stone tools has been analyzed in terms of its inherently dynamic properties. It is hypothesized that the series of measurements and ratios that has been developed is more efficient than existing systems for gauging the changing nature of these implements over time. The statistical evaluation of the data revealed strong linear relationships between various pairs of variables, such as projectile point length and tip angle and end scraper bit edge angles and bit height. It was found that comparing these data to other attributes of these tools, such as use-wear traces and reduction techniques, can be very informative about how each category of tools changed through manufacture, use, and maintenance. The results of this analysis were then compared to those of a more traditional study of a contemporaneous collection of Huron stone tools (Poulton, 1985), demonstrating the utility of the techniques developed.
10

Hameed, Muhammad Muzaffar, and Muhammad Zeeshan ul Haq. "DefectoFix : An interactive defect fix logging tool." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5268.

Full text
Abstract:
Despite the large efforts made during the development phase to produce fault-free systems, most software implementations still require testing of the entire system. The main problem in software testing is automation that could verify the system without manual intervention. Recent work in software testing concerns automated fault injection using fault models from a repository. This requires a lot of effort, which adds to the complexity of the system. To address this issue, this thesis proposes the DefectoFix framework. DefectoFix is an interactive defect-fix logging tool that contains five components: a Version Control System (VCS), source code files, a differencing algorithm, Defect Fix Model (DFM) creation, and additional information (project name, class name, file name, revision number, diff model). The proposed differencing algorithm extracts detailed information by detecting differences in source code files, performing comparisons at the sub-tree level of the source files. The extracted differences, together with the additional information, are stored as a DFM in a repository. DFMs can later be used in the automated fault injection process. The DefectoFix framework was validated with a tool developed in the Ruby programming language. Our case study confirms that the proposed framework generates correct DFMs and is useful in automated fault injection and software validation activities.
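As a rough illustration of the sub-tree comparison idea (a sketch of mine, not DefectoFix's actual differencing algorithm), the following Python fragment compares two revisions of a source file function by function at the AST level:

```python
import ast

def subtree_signatures(source: str):
    """Map each top-level function name to a normalized dump of its AST subtree."""
    tree = ast.parse(source)
    return {node.name: ast.dump(node, include_attributes=False)
            for node in tree.body if isinstance(node, ast.FunctionDef)}

def diff_subtrees(before: str, after: str):
    """Report functions whose AST subtree changed between two revisions."""
    old, new = subtree_signatures(before), subtree_signatures(after)
    return sorted(name for name in old.keys() & new.keys() if old[name] != new[name])

buggy = "def area(r):\n    return 3.14 * r\n"
fixed = "def area(r):\n    return 3.14 * r * r\n"
print(diff_subtrees(buggy, fixed))  # ['area']
```

Because `ast.dump` normalizes away layout, whitespace-only edits produce no diff, which is the point of comparing sub-trees rather than text lines.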
11

Král, Benjamin. "Forenzní analýza malware." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385910.

Full text
Abstract:
This master's thesis describes methodologies used in malware forensic analysis, including methods of static and dynamic analysis. Based on those methods, a tool intended for Computer Security Incident Response Teams (CSIRTs) is designed to allow fast analysis of, and decisions about, malware samples in security incident investigations. The design of this tool is thoroughly described, along with the requirements on which the design is based. The resulting ForensIRT tool is implemented and then used to analyze the malware sample Cridex to demonstrate its capabilities. Finally, the analysis results are compared to those of other comparable malware forensics tools.
12

Homdim, Tchuenteu Joel Landry. "Analysis and dynamic modeling of intermediate distributors for balancing of production lines." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18626/.

Full text
Abstract:
The work carried out at the company Pulsar Engineering s.r.l., and discussed in this thesis, focuses on the construction of a model for the dynamic simulation of a machine for feeding and sorting/merging in the tissue sector, called the REDS INTERMEDIATE. The goal is to derive a powerful dynamic model that can simulate a large range of REDS INTERMEDIATE machines working in the different existing operating modes (DIVERTER, COMBINER and by-pass) and covering all existing operating strategies (REVOLVER and TETRIS). This was possible with the aid of a powerful simulation tool called PLS DYNAMIC / TISSUEPLS DYNAMIC. It is important to emphasize that we deal with a simplified production line, since we are interested only in obtaining the REDS INTERMEDIATE model. This model can be used to: obtain a realistic estimate of the parameters necessary for the design of a production line, and observe the behaviour of the PULSAR line in the 2D and 3D interfaces offered by the software. The following discussion reports the study in question and presents some results, starting from a general description of production lines and a static analysis of the REDS INTERMEDIATE.
13

Hellström, Patrik. "Tools for static code analysis: A survey." Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16658.

Full text
Abstract:

This thesis has investigated which tools for static code analysis exist, with an emphasis on security, and which of these could possibly be used in a project at Ericsson AB in Linköping in which a HIGA (Home IMS Gateway) is constructed. The HIGA is a residential gateway that opens up the possibility to extend an operator's IP Multimedia Subsystem (IMS) all the way to the user's home, and thereby lets the end user connect his/her non-compliant IMS devices, such as a media server, to an IMS network.

Static analysis is the process of examining the source code of a program and in that way testing it for various weaknesses without actually executing it (as opposed to dynamic analysis, such as testing).
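This distinction can be made concrete with a toy example (mine, not from the thesis): a checker that uses Python's `ast` module to flag calls to `eval`/`exec` without ever running the program under analysis.

```python
import ast

SUSPICIOUS_CALLS = {"eval", "exec"}  # calls commonly flagged by security checkers

def find_suspicious_calls(source: str):
    """Statically scan Python source and report risky calls, without running it."""
    findings = []
    tree = ast.parse(source)  # parse only; the code is never executed
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in SUSPICIOUS_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = input()\nresult = eval(x)\n"
print(find_suspicious_calls(sample))  # [(2, 'eval')]
```

Even this trivial checker illustrates the trade-off the thesis evaluates: it reports a line without knowing whether the tainted value ever reaches it at run time.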

As a complement to the regular testing that is performed in the HIGA project today, four different static analysis tools were evaluated to find out which one was best suited for use in the project. Two of them were open source tools and two were commercial.

All of the tools were evaluated in five areas: documentation, installation and integration procedure, usability, performance, and types of bugs found. Furthermore, all of the tools were later used to test two modules of the HIGA.

The evaluation showed many differences between the tools in all areas, and not surprisingly the two open source tools turned out to be far less mature than the commercial ones. The tools best suited for use in the HIGA project were Fortify SCA and Flawfinder.

As far as the evaluation of the HIGA code is concerned, some bugs that could have jeopardized the security and availability of its services were found.

14

Königsson, Niklas. "Limitations of static analysis tools : An evaluation of open source tools for C." Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-155299.

Full text
Abstract:
This paper contains an evaluation of common open source static analysis tools available for C. The tools' algorithms are examined and measured in a test environment designed for such benchmarks, to present their strengths and weaknesses. The examined tools represent different approaches to static analysis, to obtain good coverage of the algorithms that are commonly used. The test environment shows how many bugs are correctly reported by the tools, and also how many falsely reported bugs they produce. The revealed strengths and weaknesses are discussed in relation to the tools' algorithms to gain a deeper understanding of their limitations.
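The true/false-positive bookkeeping described above can be sketched in a few lines (hypothetical names and data; not the thesis's actual benchmark harness):

```python
def score_tool(reported: set, seeded: set):
    """Compare a tool's reported warning locations against known seeded bugs."""
    true_pos = reported & seeded           # correctly reported bugs
    false_pos = reported - seeded          # warnings with no real bug behind them
    false_neg = seeded - reported          # seeded bugs the tool missed
    precision = len(true_pos) / len(reported) if reported else 0.0
    recall = len(true_pos) / len(seeded) if seeded else 0.0
    return {"TP": len(true_pos), "FP": len(false_pos), "FN": len(false_neg),
            "precision": precision, "recall": recall}

# (file, line) pairs: bugs planted in the benchmark vs. warnings a tool emitted
seeded = {("buf.c", 12), ("ptr.c", 40), ("leak.c", 7)}
reported = {("buf.c", 12), ("ptr.c", 40), ("fmt.c", 3)}
print(score_tool(reported, seeded))
```

Precision/recall computed this way is exactly the kind of summary such benchmark environments report per tool.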
15

Ramos, Alexander. "Evaluating the ability of static code analysis tools to detect injection vulnerabilities." Thesis, Umeå universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-128302.

Full text
Abstract:
Identifying and eliminating security vulnerabilities in programs can be very time-consuming. One way to automate and speed up the process is to integrate static code analysis tools into the development process. Choosing a static code analysis tool for a project is not an easy task, since different tools have their own strengths and performance characteristics. One way of testing a tool's ability to find flaws is to run it against a test suite constructed for the specific purpose of static code analysis tool testing. In this paper the tools Visual Code Grepper, FindBugs and SonarQube are tested for their ability to detect SQL, OS command and LDAP injection vulnerabilities against the Juliet test suite v1.2 for Java, and the performance of the tools is evaluated. Since the tools have their own techniques for finding errors and vulnerabilities, diverse results are obtained in which the tools show their strengths and weaknesses, presented in tables and graphs. In general, the FindBugs tool seems to be the most suitable for detecting potential injections; however, further studies including more test cases should be conducted to cover more of what the tools are capable of detecting. To cover most of the vulnerabilities in a program, it would be ideal to use as many tools as possible to locate the maximum number of flaws.
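As a toy analogue of what such injection checkers look for (my sketch, not how Visual Code Grepper, FindBugs or SonarQube actually work), the following fragment flags `execute()` calls whose query is built by concatenation or f-string interpolation rather than bound parameters:

```python
import ast

def find_sql_injection_risks(source: str):
    """Flag execute() calls whose first argument is built by '+' concatenation
    or f-string formatting instead of being a plain query with bound parameters."""
    risky = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute" and node.args):
            query = node.args[0]
            if isinstance(query, (ast.BinOp, ast.JoinedStr)):  # '+', '%', or f-string
                risky.append(node.lineno)
    return risky

snippet = (
    "cur.execute('SELECT * FROM users WHERE name = ' + name)\n"
    "cur.execute('SELECT * FROM users WHERE name = %s', (name,))\n"
)
print(find_sql_injection_risks(snippet))  # [1]
```

Real tools add taint tracking across assignments and calls, which is precisely where their strengths and false-positive rates start to diverge.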
16

Mamun, Md Abdullah Al, and Aklima Khanam. "Concurrent Software Testing : A Systematic Review and an Evaluation of Static Analysis Tools." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4310.

Full text
Abstract:
Verification and validation is one of the most important concerns in software engineering on the path towards more reliable software development, so it is important to overcome the challenges of testing concurrent programs. The extensive use of concurrent systems warrants more attention to concurrent software testing, and the development of automated tools for it is receiving increased focus. The first part of this study presents a systematic review that aims to explore the state of the art of concurrent software testing. The systematic review reports on several issues, such as concurrent software characteristics, bugs, testing techniques and tools, test case generation techniques and tools, and benchmarks developed for the tools. The second part presents an evaluation of four commercial and open source static analysis tools that detect Java multithreading bugs. An empirical evaluation of the tools helps industry as well as academia learn more about the effectiveness of static analysis tools for concurrency bugs.
17

Busch, Benjamin C. "Cognitive bargaining model an analysis tool for third party incentives?" Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Dec/09Dec%5FBusch.pdf.

Full text
Abstract:
Thesis (M.A. in Security Studies (Defense Decision-Making))--Naval Postgraduate School, December 2009.
Thesis Advisor(s): Looney, Robert. Second Reader: Tsypkin, Mikhail. "December 2009." Description based on title screen as viewed on January 29, 2010. Author(s) subject terms: Inducements, bargaining, war, Ukraine, Russia, denuclearization, Prospect Theory, rational choice, cognitive, model, bargaining and war. Includes bibliographical references (p. 75-80). Also available in print.
18

Hunt, Andrew W. "Basic Expeditionary Airfield Resource (BEAR) Requirements Analysis Tool (BRAT)." Quantico, VA : Marine Corps Command and Staff College, 2008. http://handle.dtic.mil/100.2/ADA491134.

Full text
19

Pekari, Gregory Chivers Kurt Miles Erickson Brian G. Belcher Robert C. Kartashov Vitalii. "An analysis comparing Commander Submarine Force U.S. Pacific Fleet (CSP) current inventory management tool versus PACFLT Regional Inventory Stocking Model (PRISM) : a proposed demand-based management tool /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FPekari.pdf.

Full text
Abstract:
Thesis (M.B.A.)--Naval Postgraduate School, June 2003.
"MBA professional report"--Cover. Joint authors: Kurt Miles Chivers, Brian G. Erickson, Robert C. Belcher, Vitalii Kartashov. Thesis advisor(s): Raymond Franck, Keebom Kang, Dan Dolk. Includes bibliographical references (p. 119-120). Also available online.
20

Freeman, Wilma M. Milton Pamela. "Electronic Commerce : case analyses and tools utilized in the accomplishment of buying Defense /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sep%5FFreeman.pdf.

Full text
Abstract:
Thesis (M.S. in Contract Management)--Naval Postgraduate School, Sept. 2004.
Thesis advisor(s): Marshall Engelbeck, E. Cory Yoder. Includes bibliographical references (p. 57-61). Also available online.
21

Lee, Dave. "Informatics tools for the analysis and assignment of phosphorylation status in proteomics." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/informatics-tools-for-the-analysis-and-assignment-of-phosphorylation-status-in-proteomics(48d2cc82-5bb2-4f07-9cdd-670467db4378).html.

Full text
Abstract:
Presently, progress in the field of phosphoproteomics has been accelerated by mass spectrometry (MS). This is no surprise, owing not only to the accuracy, precision and high-throughput capabilities of MS but also to the support it receives from informaticians, who enable automated analysis, making the task of going from a complex sample to a statistically satisfactory set of phosphopeptides and corresponding site positions relatively easy. However, the process of identifying and subsequently pinpointing the phosphorylation moiety is not straightforward and remains a challenging task. Furthermore, it has been suggested that not all phosphorylation sites are of equal functional importance, to the extent that some may even lack function altogether. Clearly, such sites will confound efforts towards functional characterisation. The work in this thesis is aimed at these two issues: accurate site localisation and functional annotation. To address the first issue, I adopt a multi-tool approach for identification and site localisation, utilising the different underlying algorithms of each tool and thereby allowing an orthogonal perspective on the same tandem mass spectra. Doing so enhanced accuracy over any single tool by itself. The power of this multi-tool approach stemmed not from predicting more true positives but from removing false positives. For the second issue, I first investigated the hypothesis that sites of functional consequence exhibit stronger phosphorylation-characteristic features, such as the degree of conservation and disorder. Indeed, it was found that some features were enriched in the functional group. More surprisingly, some were also enriched in the less functional group, suggesting that their incorporation into a prediction algorithm would hinder functional prediction. With this in mind, I train and optimise several machine-learning algorithms, using different combinations of features, in an attempt to (separately) improve general phosphorylation prediction and functional prediction.
22

Fisch, Johan, and Carl Haglund. "Using the SEI CERT Secure Coding Standard to Reduce Vulnerabilities." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176409.

Full text
Abstract:
Security is a critical part of every piece of software developed today, and it will become even more important as more devices are connected to the internet. By striving to improve the quality of the code, in particular its security aspects, the number of vulnerabilities might be reduced and the software developed improved. By looking at past problems and studying the code in question to see whether it follows the SEI CERT secure coding standard, it is possible to tell whether compliance with this standard would help reduce future problems. In this thesis, an analysis of vulnerabilities written in C and C++ and reported in Common Vulnerabilities and Exposures (CVE) is performed to verify whether applying the SEI CERT secure coding standard helps reduce vulnerabilities. This study also evaluates the SEI CERT rule coverage of three different static analysis tools, Rosecheckers, PVS-Studio and CodeChecker, by executing them on these vulnerabilities and using three metrics: true positives, false negatives, and run time. The results of the study are promising, since they show that compliance with the SEI CERT standard does indeed reduce vulnerabilities: about 60% of the analyzed vulnerabilities could have been avoided if the standard had been followed. The results for the tools were also of great interest: although they did not perform as well as the manual analysis, all of them found some SEI CERT rule violations in different areas. Conclusively, a combination of manual analysis and these three static analysis tools would have resulted in the highest number of vulnerabilities avoided.
APA, Harvard, Vancouver, ISO, and other styles
23

Milton, Pamela, and Wilma M. Freeman. "Electronic Commerce : case analyses and tools utilized in the accomplishment of buying Defense." Thesis, Monterey, California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1430.

Full text
Abstract:
Approved for public release; distribution is unlimited.
This study examines the significant issues relative to Ecommerce and how it has resulted in protests, disputes and litigation in the Federal acquisition process. It examines how Ecommerce has evolved since the October 1993 mandate by former President Clinton, and in particular how it relates to the Department of Defense acquisition workforce. It specifically addresses the traditional acquisition process versus the contemporary one as it relates to Electronic Commerce, and the tools utilized by the acquisition workforce to accomplish their buying activities.
Civilian, PEO Aviation
Civilian, US Army Aviation and Missile Command Acquisition Center
APA, Harvard, Vancouver, ISO, and other styles
24

Rahaman, Sazzadur. "From Theory to Practice: Deployment-grade Tools and Methodologies for Software Security." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99849.

Full text
Abstract:
Following proper guidelines and recommendations is crucial in software security, but doing so is often obstructed by accidental human errors. Automatic screening tools have great potential to reduce the gap between theory and practice. However, the goal of scalable automated code screening is largely hindered by the practical difficulty of reducing false positives without compromising analysis quality. To enable compile-time security checking of cryptographic vulnerabilities, I developed highly precise static analysis tools (CryptoGuard and TaintCrypt) that developers can use routinely. The main technical enabler for CryptoGuard is a set of detection algorithms that refine program slices by leveraging language-specific insights, while TaintCrypt relies on symbolic execution-based path-sensitive analysis to reduce false positives. Both CryptoGuard and TaintCrypt uncovered numerous vulnerabilities in real-world software, which demonstrates their effectiveness. Oracle has implemented our cryptographic code screening algorithms for Java in its internal code analysis platform, Parfait, and detected numerous vulnerabilities that were previously unknown. I also designed a specification language named SpanL to easily express rules for automated code screening. SpanL enables domain experts to create domain-specific security checking. Unfortunately, tools and guidelines are not sufficient to ensure baseline security in internet-wide ecosystems. I found that the lack of proper compliance checking induced a huge gap in the payment card industry (PCI) ecosystem. I showed that none of the six PCI scanners we tested is fully compliant with the guidelines, issuing certificates to merchants that still have major vulnerabilities. Consequently, 86% (out of 1,203) of the e-commerce websites we tested are non-compliant.
To improve the testbeds in the light of our work, the PCI Security Council shared a copy of our PCI measurement paper with the dedicated companies that host, manage, and maintain the PCI certification testbeds.
Doctor of Philosophy
Automatic screening tools have great potential to reduce the gap between the theory and the practice of software security. However, the goal of scalable automated code screening is largely hindered by the practical difficulty of reducing false positives without compromising analysis quality. To enable compile-time security checking of cryptographic vulnerabilities, I developed highly precise static analysis tools (CryptoGuard and TaintCrypt) that developers can use routinely. Both CryptoGuard and TaintCrypt uncovered numerous vulnerabilities in real-world software, which demonstrates their effectiveness. Oracle has implemented our cryptographic code screening algorithms for Java in its internal code analysis platform, Parfait, and detected numerous vulnerabilities that were previously unknown. I also designed a specification language named SpanL to easily express rules for automated code screening. SpanL enables domain experts to create domain-specific security checking. Unfortunately, tools and guidelines are not sufficient to ensure baseline security in internet-wide ecosystems. I found that the lack of proper compliance checking induced a huge gap in the payment card industry (PCI) ecosystem. I showed that none of the six PCI scanners we tested is fully compliant with the guidelines, issuing certificates to merchants that still have major vulnerabilities. Consequently, 86% (out of 1,203) of the e-commerce websites we tested are non-compliant. To improve the testbeds in the light of our work, the PCI Security Council shared a copy of our PCI measurement paper with the dedicated companies that host the PCI certification testbeds.
APA, Harvard, Vancouver, ISO, and other styles
25

Base, Jessica. "Using International Trade as an Economic Development Tool: A Case Study Analysis and Applied Framework for Cleveland, Ohio." University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1277123604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Nshimiyimana, Jean Marie Mr, Oluwafeyisayo Oyeniyi, Mathew Mr Seiler, Kimberly Ms Hawkins, and Temitope Mr Adeyanju. "Development of Public Health Indicator Visualization Tool." Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/asrf/2019/schedule/32.

Full text
Abstract:
As the public and government officials become aware of the impact of public health on communities, it is important that relevant public health statistics be available for decision making. Existing web resources have limited visualization options, cannot visually compare a county to all others in the US, and cannot compare the counties in an arbitrary region to all others in the US. The College of Public Health Indicator Visualization Tool (CPHIVT) is a web application providing visualization and ranking for a county in the US in comparison to all counties for a specific health indicator. An iterative development methodology was used to complete major features and refine them over time, with features divided into small tasks that could be completed within two-week cycles. After the first version of the web application was completed and presented to the client, client feedback on the application was used to refine specifications and was incorporated into planning for future iterations. Iterative development was adopted with a focus on improving and expanding existing features and making the application publicly available online. A suite of automated user interface tests is being developed to verify the application's functions. Making a complete version of the application publicly available involves significant research and software configuration to deploy the web application in a secure and performant manner. The web application has two major components corresponding to its two major user groups. The first component allows authenticated users from the Department of Public Health to upload and manage sets of data for various health indicators. Tools are included to automatically process uploaded data points. This allows the information presented on the web site to be expanded and kept up to date over time with minimal effort. The second component is accessible to anyone and allows a user to choose a state or county with text search or hierarchical navigation.
The application then provides graphical charts showing that location’s standing for various health indicators compared to all other counties nationally. This is accomplished by applying percentile rankings to the counties and plotting the percentiles against the values for a selected indicator. A user can save a generated chart to a variety of export formats including PNG image or PDF document. The application is expected to serve as a tool for many community members. Staff and students at the College of Public Health will use this tool for presentations and research. County health departments will be able to use the tool when planning community programs. County government leaders can use this tool to determine areas of need in the community. Decision makers will have the ability to visualize their county or region as compared to the nation, not just to neighboring counties or within a state.
APA, Harvard, Vancouver, ISO, and other styles
27

Ramos, Cordoba Eloy. "Development of new tools for local electron distribution analysis." Doctoral thesis, Universitat de Girona, 2014. http://hdl.handle.net/10803/133376.

Full text
Abstract:
This thesis focuses on the development and application of new tools for the analysis of the electron distribution in molecules, with emphasis on the concepts of local spin and oxidation state. The thesis can be divided into three parts. The first deals with the formulation of a new atom-in-molecule definition that reproduces to some extent the results of the QTAIM (quantum theory of atoms in molecules) analysis at a much reduced computational cost. In the second part we propose a new methodology to obtain local spins from wave function analysis, and we relate local spins to the chemical bond and the radical character of molecules. Finally, we study the electron configurations of the atoms within the molecule and retrieve their oxidation states from a particular analysis of the effective atomic orbitals (eff-AOs).
APA, Harvard, Vancouver, ISO, and other styles
28

Palikareva, Hristina. "Techniques and tools for the verification of concurrent systems." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:fc2028e1-2a45-459a-afdd-70001893f3d8.

Full text
Abstract:
Model checking is an automatic formal verification technique for establishing correctness of systems. It has been widely used in industry for analysing and verifying complex safety-critical systems in application domains such as avionics, medicine and computer security, where manual testing is infeasible and even minor errors could have dire consequences. In our increasingly parallelised world, concurrency has become pivotal and seamlessly woven within programming paradigms, however, extremely challenging when it comes to modelling and establishing correctness of intended behaviour. Tools for model checking concurrent systems face severe limitations due to scalability problems arising from the need to examine all possible interleavings (schedules) of executions of parallel components. Moreover, concurrency poses additional challenges to model checking, giving rise to phenomena such as nondeterminism, deadlock, livelock, etc. In this thesis we focus on adapting and developing novel model-checking techniques for concurrent systems in the setting of the process algebra CSP and its primary model checker FDR. CSP allows for a compact modelling and precise analysis of event-based concurrency, grounded on synchronous message passing as a fundamental mechanism of inter-component communication. In particular, we investigate techniques based on symbolic model checking, static analysis and abstraction, all of them exploiting the compositionality inherent in CSP and targeting to increase the scale of systems that can be tractably analysed. Firstly, we investigate symbolic model-checking techniques based on Boolean satisfiability (SAT), which we adapt for the traces model of CSP. We tailor bounded model checking (BMC), that can be used for bug detection, and temporal k-induction, which aims at establishing inductiveness of properties and is capable of both bug finding and establishing the correctness of systems. 
Secondly, we propose a static analysis framework for establishing livelock freedom of CSP processes, with lessons for other concurrent formalisms. As opposed to traditional exhaustive state-space exploration, our framework employs a system of rules on the syntax of a process to calculate a sound approximation of its fair/co-fair sets of events. The rules either safely classify a process as livelock-free or report inconclusiveness, thereby trading accuracy for speed. Finally, we develop a series of abstraction/refinement schemes for the traces, stable-failures and failures-divergences models of CSP and embed them into a fully automated and compositional CEGAR framework. For each of those techniques we present an implementation and an experimental evaluation on a set of CSP benchmarks.
APA, Harvard, Vancouver, ISO, and other styles
29

Dheka, Gilbert. "A comparative analysis of community mediation as a tool of transformation in the litigation systems of South Africa and the United States of America." University of the Western Cape, 2016. http://hdl.handle.net/11394/5514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Blank, Malin, and Anna Maria Persson. "The Swedish food retail market : An econometric analysis of the competition on local food retail markets." Thesis, Linköping University, Department of Management and Economics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2521.

Full text
Abstract:

The Swedish food retail market consists of three major actors, ICA, KF and Axfood, together holding 75 percent of the total market share. The small number of retailing actors indicates that the Swedish food retail market is a highly concentrated oligopoly, a fact that has given rise to considerable discussion and argumentation concerning the market situation. But is the food retail market imperfect, and how do we reach a workable competition? Economic theory does not provide any clear answer to these questions, but is rather divided into two fundamentally different approaches to defining competition: the static and the dynamic perspective on competition.

In an attempt to examine the competition on local Swedish retail markets, the purpose of this study is to construct an econometric model estimating the situation. The model serves to explain the variation in ICA's performance, measured in terms of the turnover obtained in the company. The explanatory variables composing the model are divided into three separate groupings: degree of market concentration, store-specific factors and region-specific factors. Furthermore, in order to find out which of the competitive explanations best fits the reality, the regression results are interpreted from a static and a dynamic perspective of competition. In part, we also aim to compare the results with the outline of the Swedish competition law.

We found that the level of concentration obtained in our material is high and steadily increasing. We also found that stores do not, to any great extent, use price, service and quality as competitive methods. Thus, to gain competitive advantage, market actors must find other ways to carry out strategic market activities. The region-specific variables had either no influence or very little influence on ICA's turnover. According to these findings, neither the static nor the dynamic perspective of competition is solely able to produce an accurate method for reaching a state of workable competition. Instead, a combination of the static and the dynamic ideas may be regarded as the most advantageous way to generate suitable conditions for competition to be efficient. Therefore, in order to promote workable competition, the Swedish competition law must consist of a balance between the static and the dynamic view of competition.

APA, Harvard, Vancouver, ISO, and other styles
31

Brannon, Brittany Ann. "Faulty Measurements and Shaky Tools: An Exploration into Hazus and the Seismic Vulnerabilities of Portland, OR." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1410.

Full text
Abstract:
Events or forces of nature with catastrophic consequences, or "natural disasters," have increased in both frequency and force due to climate change and increased urbanization in climate-sensitive areas. To create capacity to face these dangers, an entity must first quantify the threat and translate scientific knowledge on nature into comprehensible estimates of cost and loss. These estimates equip those at risk with knowledge to enact policy, formulate mitigation plans, raise awareness, and promote preparedness in light of potential destruction. Hazards-United States, or Hazus, is one such tool created by the federal government to estimate loss from a variety of threats, including earthquakes, hurricanes, and floods. Private and governmental agencies use Hazus to provide information and support to enact mitigation measures, craft plans, and create insurance assessments; hence the results of Hazus can have lasting and irreversible effects once the hazard in question occurs. This thesis addresses this problem and sheds light on the obvious and deterministic failings of Hazus in the context of the probable earthquake in Portland, OR; stripping away the tool's black box and exposing the grim vulnerabilities it fails to account for. The purpose of this thesis is twofold. First, this thesis aims to examine the critical flaws within Hazus and the omitted vulnerabilities particular to the Portland region and likely relevant in other areas of study. Second and more nationally applicable, this thesis intends to examine the influence Hazus outputs can have in the framing of seismic risk by the non-expert public. Combining the problem of inadequate understanding of risk in Portland with the questionable faith in Hazus alludes to a larger, socio-technical situation in need of attention by the academic and hazard mitigation community. 
This thesis addresses those issues in scope and adds to the growing body of literature on defining risk, hazard mitigation, and the consequences of natural disasters to urban environments.
APA, Harvard, Vancouver, ISO, and other styles
32

Margolis, David. "An analysis of electronic surveillance in the USAPATRIOT act." Honors in the Major Thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/776.

Full text
Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf
Bachelors
Health and Public Affairs
Legal Studies
APA, Harvard, Vancouver, ISO, and other styles
33

Spong, Kaitlyn M. "“Your love is too thick”: An Analysis of Black Motherhood in Slave Narratives, Neo-Slave Narratives, and Our Contemporary Moment." ScholarWorks@UNO, 2018. https://scholarworks.uno.edu/td/2573.

Full text
Abstract:
In this paper, Kait Spong examines alternative practices of mothering that are strategic in nature, heavily analyzing Patricia Hill Collins' concepts of "othermothering" and "preservative love" as applied to Toni Morrison's 1987 novel, Beloved, and Harriet Jacobs's 1861 slave narrative, Incidents in the Life of a Slave Girl. Using literary analysis as a vehicle, Spong then applies these West African notions of motherhood to a modern context by evaluating contemporary social movements such as Black Lives Matter, where black mothers have played a prominent role in making public statements against systemic issues such as police brutality, heightened surveillance, and the prison industrial complex.
APA, Harvard, Vancouver, ISO, and other styles
34

Simmons, Stephanie Catherine. "Exploring Colonization and Ethnogenesis through an Analysis of the Flaked Glass Tools of the Lower Columbia Chinookans and Fur Traders." Thesis, Portland State University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1560956.

Full text
Abstract:

This thesis is an historical archaeological study of how Chinookan peoples at three villages and employees of the later multicultural Village at Fort Vancouver negotiated the processes of contact and colonization. Placed in the theoretical framework of practice theory, everyday ordinary activities are studied to understand how cultural identities are created, reinforced, and changed (Lightfoot et al. 1998; Martindale 2009; Voss 2008). Additionally, uneven power relationships are examined, in this case between the colonizer and the colonized, which could lead to subjugation but also resistance (Silliman 2001). In order to investigate these issues, this thesis studies how the new foreign material of vessel glass was and was not used during the everyday practice of tool production.

Archaeological studies have found that vessel glass, which has physical properties similar to obsidian, was used to create a variety of tool forms by cultures worldwide (Conte and Romero 2008). Modified glass studies (Harrison 2003; Martindale and Jurakic 2006) have demonstrated that such analyses can contribute important new insights into how cultures negotiated colonization. In this study, modified glass tools from three contact-period Chinookan sites: Cathlapotle, Meier, and Middle Village, and the later multiethnic Employee Village of Fort Vancouver were examined. Glass tool and debitage analysis based on lithic macroscopic analytical techniques was used to determine manufacturing techniques, tool types, and functions. Additionally, these data were compared to previous analyses of lithics and trade goods at the study sites.

This thesis demonstrates that Chinookans modified glass into tools, though there was variation between sites in the degree to which glass was modified and in the types of tools that were produced. Some of these differences are probably related to availability, how glass was conceptualized by Native Peoples, or other unidentified causes. This study suggests that in some ways glass was just another raw material, similar to stone, that was used to create tools that mirrored the existing lithic technology. However, at Cathlapotle at least, glass appears to have been relatively scarce and perhaps even valued as a status item, while at Middle Village glass (as opposed to stone) was being used about a third of the time to produce tools.

Glass tool technology at Cathlapotle, Meier, and Middle Village was very similar to the existing stone tool technology dominated by expedient/low-energy tools; however, novel bottle abraders do appear at Middle Village. This multifaceted response reflects how some traditional lifeways continued, while at the same time new materials and technology were recontextualized in ways that made sense to Chinookan peoples.

Glass tools increase at the Fort Vancouver Employee Village rather than decrease through time. This response appears to be a type of resistance to the HBC's economic hegemony and rigid social structure, though it is impossible to know whether such resistance was consciously acted on or was simply part of everyday activities that made sense in the economic climate of the time.

Overall, this thesis demonstrates how a mundane object such as vessel glass can provide a wealth of information about how groups like the Chinookans dealt with a changing world, and how the multiethnic community at Fort Vancouver dealt with the hegemony of the HBC. Chinookan peoples and the later inhabitants of the Fort Vancouver Employee Village responded to colonization in ways that made sense to their larger cultural system. These responses led to both continuity and change across time. (Abstract shortened by UMI.)

APA, Harvard, Vancouver, ISO, and other styles
35

Mendonça, Vinícius Rafael Lobo de. "Estudo, definição e implementação de um sistema de recomendação para priorizar os avisos gerados por ferramentas de análise estática." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4338.

Full text
Abstract:
Recommendation systems try to guide the user carrying out a task by providing useful information about it. Considering the context of software development, programs are ever increasing in size, making it difficult to carry out a detailed verification of the warnings generated by automatic static analyzers. In this work, we propose a recommendation system, called WarningsFIX, which aims to help developers handle the high number of warnings reported by automatic static analyzers. The back end of this system is composed of seven open-source static analysis tools collecting data, which subsequently are used for visualizing information through TreeMaps. The intention is to combine the outcomes of different static analyzers such that WarningsFIX recommends the analysis of warnings with the highest chance of being true positives. The information related to warnings is displayed at four levels of detail: program, package, class, and line. The nodes may be classified at the first three levels by: number of warnings, number of tools, and suspicion rate. An exploratory study was carried out, and the limitations, advantages and disadvantages of the proposed approach were discussed.
APA, Harvard, Vancouver, ISO, and other styles
36

Holmberg, Anna. "Jämförelse av statiska kodanalysverktyg : En fallstudie om statiska kodanalysverktygs förmåga att hitta sårbarheter i kod." Thesis, Högskolan Dalarna, Mikrodataanalys, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:du-35593.

Full text
Abstract:
Security deficiencies that occur in web applications can have major consequences. PHP is a language that is often used for web applications, and it places high demands on how the language is used to ensure it is safe. There are several features in PHP that should be handled with care to avoid security flaws. Static code analysis can help find vulnerabilities in code, but there are some drawbacks that can occur with static code analysis tools. One disadvantage is false positives, which means that the tool reports vulnerabilities that do not exist. There are also false negatives, which means the tool cannot find the vulnerability at all, which can lead to a false sense of security for the user of the tool. With the help of predefined test cases, three tools have been investigated in a case study to find out if the tools differ in their ability to avoid false positives and false negatives. The study also examines whether the tools' rules take the PHP language's vulnerable functions into consideration. To answer the research question, a document collection was conducted to obtain information about the tools and various vulnerabilities. The purpose of this study is to compare the ability of static code analysis tools to find PHP code vulnerabilities. The tools that were investigated were SonarQube, Visual Code Grepper (VCG) and Exakat. The study's analysis shows that VCG found the most vulnerabilities but failed to avoid false positives. Exakat had zero false positives but could not avoid false negatives to the same extent as VCG. SonarQube avoided all false positives but did not find any of the vulnerabilities tested in the test cases. According to the rules of the tools, VCG took the most consideration of the risky functions found in PHP. The study's results show that the tools differed in their ability to avoid false positives and false negatives, as well as in their adaptation to the PHP language's vulnerable functions.
APA, Harvard, Vancouver, ISO, and other styles
37

Lin, Ping-Hung, and 林炳宏. "Nonlinear Static Analysis of Machine Tool Spindle." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/53411161558941559449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Devaraj, Arvind. "A Static Slicing Tool for Sequential Java Programs." Thesis, 2007. http://etd.iisc.ernet.in/2005/3891.

Full text
Abstract:
A program slice consists of a subset of the statements of a program that can potentially affect values computed at some point of interest. Such a point of interest along with a set of variables is called a slicing criterion. Slicing tools are useful for several applications, such as program understanding, testing, program integration, and so forth. Slicing object oriented programs has some special problems that need to be addressed due to features like inheritance, polymorphism and dynamic binding. Alias analysis is important for precision of slices. In this thesis we implement a slicing tool for sequential Java programs in the Soot framework. Soot is a front-end for Java developed at McGill University and it provides several forms of intermediate code. We have integrated the slicer into the framework. We also propose an improved technique for intraprocedural points-to analysis. We have implemented this technique and compare the results of the analysis with those for a flow-insensitive scheme in Soot. Performance results of the slicer are reported for several benchmarks.
APA, Harvard, Vancouver, ISO, and other styles
39

Mao, Jun-Kai, and 毛俊凱. "The Development of Machine Tool Spindle Test Platform for Static and Dynamic Characteristics Analysis." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/05207158229000672678.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Mechanical Engineering
94 (ROC academic year)
In machine tools, the spindle is a key component: its performance determines machining quality and production efficiency. From a design standpoint, testing and analysis verify the quality of a design after it is completed; from an assembly standpoint, they verify the quality of the assembly and serve as a reference for improving product quality; from a customer standpoint, the analysis results serve as a reference for customer requirements. In this thesis, a single-spindle test platform was developed, and procedures for spindle testing and analysis were planned, with the goal of better understanding the static and dynamic characteristics of the spindle. Using the developed platform, static stiffness measurement, dynamic rotational accuracy measurement, running-in tests, and thermal displacement tests were carried out on spindles in two different configurations (one with front and rear bearing contact angles of 25° and reduced spindle preload, the other with a rear bearing contact angle of 15°). The results show that, in both radial and axial stiffness, the 15° rear-bearing spindle without reduced preload is stiffer than the 25° spindle with reduced preload. The dynamic rotational accuracy tests of the two spindles indicate that bearing contact angle and preload have little influence on rotational accuracy. Regarding thermal displacement, the spindle is less affected by ambient temperature when cooling liquid is supplied. Whereas running-in tests were previously performed in open air, the tests on this platform show that when cooling liquid is supplied to the sleeve, an even state of lubrication is reached within a shorter time.
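The static stiffness such a platform reports is conventionally the applied load divided by the measured displacement. A minimal sketch, with hypothetical measurement values rather than data from the thesis:

```python
# Static stiffness from a load-displacement measurement: k = F / delta.
# The force and displacement values are invented illustration numbers.

def static_stiffness(force_n: float, displacement_um: float) -> float:
    """Return stiffness in N/um given a load in N and displacement in um."""
    return force_n / displacement_um

radial = static_stiffness(force_n=500.0, displacement_um=2.5)  # stiffer axis
axial = static_stiffness(force_n=500.0, displacement_um=4.0)   # softer axis
print(f"radial: {radial} N/um, axial: {axial} N/um")
```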
APA, Harvard, Vancouver, ISO, and other styles
40

Peng, Zih-Yun, and 彭子芸. "Implementing a Worst-Case Execution Time ILP Memory Model for a Static Analysis Tool." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/36amd8.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
106 (ROC academic year)
Real-time systems impose deadlines on tasks. Hard real-time systems require a guarantee that no deadline is ever missed. Such a guarantee is impossible if a program's execution time ever exceeds its deadline. The challenge is therefore to determine the program's worst-case execution time (WCET), so the deadline can be set accordingly. Although it is impossible to know the true WCET of most nontrivial programs, an upper bound is sufficient to guarantee that the deadline is met. Such an upper bound can be derived through a static analysis that analyzes the worst-case execution time of each portion of the source code and the worst-case flow between these portions. One such static analyzer is the SWEdish Execution time Tool (SWEET). This thesis extends the work of previous students in our laboratory, who adapted SWEET to work with the ARM processor by fixing SWEET's machine model [2] and memory model [1]. Despite those previous efforts, it was not possible to release the code for public use, because [1] adapted the memory model to only one of SWEET's calculation methods (path-based); another, more commonly used method (IPET) was not supported. This thesis therefore solves the problems of supporting IPET for the ARM together with the sophisticated memory model of [1].
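IPET (the implicit path enumeration technique mentioned above) bounds the WCET by maximizing the sum of block cost times execution count, subject to flow constraints, normally via an ILP solver. A minimal sketch of the same objective, using brute force over a single loop bound instead of ILP (block costs and the bound are invented for illustration):

```python
# Minimal IPET-style WCET bound: maximize sum(cost[b] * count[b]) subject to
# simple flow constraints, solved by brute force instead of a real ILP solver.
# Block costs (in cycles) and the loop bound are invented illustration values.

costs = {"entry": 5, "loop_body": 20, "exit": 3}
LOOP_BOUND = 10  # flow fact: the loop body executes at most 10 times

best = 0
for loop_count in range(LOOP_BOUND + 1):
    # Structural constraints: entry and exit execute exactly once each.
    counts = {"entry": 1, "loop_body": loop_count, "exit": 1}
    total = sum(costs[b] * n for b, n in counts.items())
    best = max(best, total)

print(best)  # 5 + 20*10 + 3 = 208 cycles
```

Real IPET formulations add structural flow-conservation constraints for every edge of the control-flow graph, which is why an ILP solver is used in practice.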
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Ting-An, and 陳亭諳. "A Tool for Static WCET Analysis with Accurate Memory Modeling for ARM Programs that Use Scratchpad Memory." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/00562241414929200635.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
102 (ROC academic year)
To guarantee the reliability of a real-time system, each process must complete before its deadline; providing an accurate WCET to the scheduler is therefore a key factor. A WCET can be derived by two methods: measurement-based analysis or static analysis. Since measurement-based analysis cannot guarantee that the WCET is safe, we use static analysis in this thesis. We use SWEET (SWEdish Execution time Tool) to estimate the WCET for ARM. Because SWEET's memory module for ARM is out of date and cannot provide an accurate WCET, we propose a simplified architecture for analyzing the time costs of memory read and write accesses. This method can derive the memory access time not only of DRAM but also of SPM (scratchpad memory). Additionally, to prevent the allocator from over-optimizing the WCET, we also provide a more efficient way to generate nearly worst-case flow paths. Experimental results show that our memory module improves the WCET by 43%-46% compared to the situation in which every memory access is assumed to be worst-case.
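The gain from an SPM-aware memory model can be sketched as follows: accesses that fall in a scratchpad-allocated range get the fast, predictable SPM latency, while everything else is charged the DRAM worst case. Latencies, address ranges, and the trace are invented illustration values, not figures from the thesis:

```python
# Toy memory-access cost model distinguishing scratchpad memory (SPM) from DRAM.
# Latencies (cycles), ranges, and the trace are hypothetical illustration values.

SPM_LATENCY = 1    # on-chip scratchpad: fast and fully predictable
DRAM_LATENCY = 50  # off-chip DRAM: much slower worst case

def access_cost(address: int, spm_ranges: list[tuple[int, int]]) -> int:
    """Worst-case cost of one access: SPM if the address is in an SPM range."""
    for lo, hi in spm_ranges:
        if lo <= address < hi:
            return SPM_LATENCY
    return DRAM_LATENCY

# The allocator placed [0x1000, 0x2000) into SPM; three of four accesses hit it.
spm = [(0x1000, 0x2000)]
trace = [0x1000, 0x1FFC, 0x8000, 0x1234]

naive_bound = DRAM_LATENCY * len(trace)                  # every access worst-case
modeled_bound = sum(access_cost(a, spm) for a in trace)  # SPM-aware bound
print(naive_bound, modeled_bound)  # 200 vs 53
```

The gap between the two bounds is the kind of pessimism an accurate memory model removes.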
APA, Harvard, Vancouver, ISO, and other styles
42

Mirko, Staderini. "Towards the Assessment and the Improvement of Smart Contract Security." Doctoral thesis, 2022. http://hdl.handle.net/2158/1272428.

Full text
Abstract:
Blockchain technologies (hereafter called Blockchain) allow storing information while guaranteeing properties such as immutability, integrity and non-repudiation of data. Although Blockchain is not a panacea, the technology has evolved rapidly in recent years. The development of smart contracts (which automatically execute computerized transactions) has broadened the application areas of the Blockchain. One of the most important issues is security; the problem is even more critical considering that smart contracts cannot be patched once they are deployed to the Blockchain. Ethereum is one of the main platforms for smart contract development, and it offers Solidity as its primary (and Turing-complete) language. Solidity is a new language that evolves rapidly. As a result, vulnerability records are still sparse, and consequently the existing smart contract checking tools are still immature. On the other hand, Solidity is just another new programming language, reusing its central notions from traditional languages extended by Ethereum-specific elements. The most promising way to create a quality assurance process is therefore to adapt more general existing technologies to the peculiarities of Ethereum and, in particular, Solidity. Unfortunately, despite various studies and trials on the subject, no approach in the literature clearly solves the problems related to the vulnerability of smart contracts. To contribute to this active field, we propose a methodology to assess and improve smart contract security. First, we address the problem of keeping up with Solidity's rapid evolution through the definition of a set of 32 vulnerabilities and their language-independent classification into 10 categories. Then, we assess smart contract security by applying one of the most popular approaches to discovering vulnerabilities: static analysis (SA). After selecting static analysis tools, we identify categories of vulnerabilities that SA tools cannot cover.
The next step was to conduct an experimental campaign based on the analysis of contracts across the selected toolset. We found that processing smart contracts randomly extracted from Etherscan (a Blockchain explorer) with SA tools results in a large number of positives. We thus determined, overall and for each category of vulnerabilities, the best-built tools (with respect to their effectiveness against the subset of 4 vulnerabilities they target) and the most effective ones (with respect to the entire vulnerability set). We found a lack of coverage of vulnerabilities when using each tool individually. This lack led us to investigate possible approaches to improve the security of smart contracts. A first approach was to use several tools in a combined way to increase coverage; through this analysis we also determined the combinations with the highest coverage. Then we analyzed the vulnerabilities that escape detection, so as to provide an ordering for deciding which vulnerabilities should be addressed first when modifying static analysis tools to improve their coverage. As a last contribution, we investigated how to improve tool effectiveness by determining where vulnerabilities are most likely located.
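The idea of combining tools to raise coverage can be sketched as a small set-cover search over per-tool coverage sets. Tool names and the detected categories below are hypothetical, not the thesis's measurements:

```python
from itertools import combinations

# Hypothetical coverage: which vulnerability categories each tool detects.
coverage = {
    "toolA": {"reentrancy", "tx-origin", "overflow"},
    "toolB": {"overflow"},
    "toolC": {"reentrancy", "timestamp"},
}

def best_pair(cov: dict[str, set[str]]) -> tuple[tuple[str, str], set[str]]:
    """Exhaustively find the pair of tools whose combined coverage is largest."""
    best, best_set = None, set()
    for pair in combinations(sorted(cov), 2):
        union = cov[pair[0]] | cov[pair[1]]
        if len(union) > len(best_set):
            best, best_set = pair, union
    return best, best_set

pair, covered = best_pair(coverage)
print(pair, sorted(covered))  # toolA + toolC cover four categories
```

The same enumeration extends to triples and beyond, which is how the highest-coverage combinations can be identified once per-tool coverage is known.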
APA, Harvard, Vancouver, ISO, and other styles
43

"A Tool to Reduce Defects due to Dependencies between HTML5, JavaScript and CSS3." Master's thesis, 2016. http://hdl.handle.net/2286/R.I.39436.

Full text
Abstract:
abstract: One of the most common errors developers make is to provide incorrect string identifiers across the HTML5-JavaScript-CSS3 stack. The existing literature shows that a significant percentage of defects observed in real-world codebases belong to this category. Existing work focuses on semantic static analysis, while this thesis attempts to tackle the challenges that can be solved using syntactic static analysis. This thesis proposes a tool for quickly identifying defects at the time of injection due to dependencies between HTML5, JavaScript, and CSS3, specifically in syntactic errors in string identifiers. The proposed solution reduces the delta (time) between defect injection and defect discovery with the use of a dedicated just-in-time syntactic string identifier resolution tool. The solution focuses on modeling the nature of syntactic dependencies across the stack, and providing a tool that helps developers discover such dependencies. This thesis reports on an empirical study of the tool usage by developers in a realistic scenario, with the focus on defect injection and defect discovery times of defects of this nature (syntactic errors in string identifiers) with and without the use of the proposed tool. Further, the tool was validated against a set of real-world codebases to analyze the significance of these defects.
Dissertation/Thesis
Masters Thesis Computer Science 2016
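The just-in-time identifier check described in this abstract can be sketched as a purely syntactic cross-reference between the HTML and JavaScript sources. The inputs are invented examples, and the regexes stand in for the real parsing a production tool would do:

```python
import re

# Sketch of a syntactic cross-stack check: every id referenced from JavaScript
# via getElementById should be declared in the HTML. Inputs are invented.

html = '<div id="header"></div><input id="user-name">'
js = 'document.getElementById("header"); document.getElementById("usrname");'

declared = set(re.findall(r'id="([^"]+)"', html))
referenced = set(re.findall(r'getElementById\("([^"]+)"\)', js))

undefined = referenced - declared  # ids used in JS but never declared in HTML
print(sorted(undefined))  # ['usrname'] -- the typo a developer wants flagged
```

Running such a check on every save is what shrinks the delta between defect injection and defect discovery.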
APA, Harvard, Vancouver, ISO, and other styles
44

Wu, Yan-Hua, and 吳彥樺. "Static and Dynamic Analyses and Improved Design of Machine Tool Structures." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/99605839220892746410.

Full text
Abstract:
Master's thesis
Tungnan University
Graduate Institute of Mechanical Engineering
101 (ROC academic year)
This thesis investigates the structural vibration problems of machine tools and proposes improved designs of the machines in order to reduce the possibility of resonance and increase precision and stability during cutting operations. The CAD model of a machine tool was first imported into the finite element software ANSYS Workbench to perform a convergence analysis, after which a model with a proper number of nodes was selected. Then, static analysis, dynamic analysis and harmonic response analysis were performed on the finite element model with appropriate loadings and boundary conditions, yielding deformations, equivalent stresses, natural frequencies, mode shapes, and harmonic response functions of the structure. Hooke's law was also employed to calculate the dynamic stiffness. Based on the analyzed results, structural weaknesses were identified. By raising the fundamental natural frequency and reducing the structural mass, the dynamic characteristics of the machine tool can be enhanced. The design results show that the improved models produce higher natural frequencies and lower masses, which can effectively decrease the possibility of structural resonance in the machine tool.
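The design principle above, raising stiffness and cutting mass to push up the fundamental natural frequency, follows directly from the single-degree-of-freedom relation f = (1/2π)·sqrt(k/m). A minimal sketch with invented stiffness and mass values, not the thesis's FE results:

```python
import math

# Fundamental natural frequency of a 1-DOF spring-mass idealization:
# f = (1 / 2*pi) * sqrt(k / m). Numbers are invented illustration values.

def natural_frequency_hz(stiffness_n_per_m: float, mass_kg: float) -> float:
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

baseline = natural_frequency_hz(4.0e8, 1000.0)  # original structure
improved = natural_frequency_hz(5.0e8, 900.0)   # stiffer ribs, lighter walls
print(f"{baseline:.1f} Hz -> {improved:.1f} Hz")
```

Raising k by 25% and trimming m by 10% raises f by the square root of their ratio, which is why both levers appear together in the improved designs.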
APA, Harvard, Vancouver, ISO, and other styles
45

Aljawder, Dana. "Identifying unsoundness of call graphs in android static analysis tools." Thesis, 2016. https://hdl.handle.net/2144/17085.

Full text
Abstract:
Analysis techniques are used to test mobile applications for bugs or malicious activity. Market operators such as Google and Amazon use analysis tools to scan applications before deployment. Creating a call graph is a crucial step in many of the static analysis tools for Android applications. Each edge in a call graph is a method call in the application. A sound call graph is one that contains all method calls of an application. The soundness of the call graph is critical for accurate analysis. Unsoundness in the call graph would render analysis of the application flawed. Therefore, any conclusions drawn from an unsound call graph could be invalid. In this project, we analyze the soundness of static call graphs. We propose and develop a novel approach to automatically identify unsoundness. We create a dynamic call graph to examine the soundness of the static call graph. We map the edges of the two graphs. Any edge observed dynamically but not present in the static call graph is a witness for unsoundness. We show that there are edges in the dynamic call graph that are not contained in the static call graph. We analyze 92 applications to find a total of 19,653 edges missed by a state-of-the-art static analysis tool. To further analyze these edges, our tool categorizes them into groups that can help identify the type of method call that was missed by the static analysis tool. These categories pinpoint where further research efforts are necessary to improve current state-of-the-art static analysis capabilities.
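The core of the soundness check described here is a set difference over call-graph edges: any edge observed at run time but absent from the static graph is a witness of unsoundness. The edges below are invented examples, not data from the project:

```python
# Sketch of the soundness check: edges seen dynamically but missing from the
# static call graph witness unsoundness. Edge names are invented examples.

static_edges = {
    ("Activity.onCreate", "Helper.init"),
    ("Helper.init", "Logger.log"),
}
dynamic_edges = {
    ("Activity.onCreate", "Helper.init"),
    ("Button.onClick", "Helper.reload"),  # callback edge the static
}                                         # analysis failed to model

witnesses = dynamic_edges - static_edges  # each one proves unsoundness
print(sorted(witnesses))
```

Categorizing such witness edges (reflection, callbacks, native code, and so on) is what pinpoints where a static analyzer's call-graph construction falls short.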
APA, Harvard, Vancouver, ISO, and other styles
46

Alikhashashneh, Enas A. "Using Machine Learning Techniques to Improve Static Code Analysis Tools Usefulness." Thesis, 2019. http://hdl.handle.net/1805/19942.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
This dissertation proposes an approach that uses Machine Learning (ML) techniques to reduce, as much as possible, the cost of manually inspecting the large number of false positive warnings reported by Static Code Analysis (SCA) tools. The proposed approach neither assumes the use of particular SCA tools nor depends on the specific programming language used to write the target source code or application. To reduce the number of false positive warnings, we first evaluated a number of SCA tools in terms of software engineering metrics using a synthetic source code base, the Juliet test suite. From this evaluation, we concluded that SCA tools report plenty of false positive warnings that need manual inspection. We then generated a number of datasets from source code that forced the SCA tool to generate either true positive, false positive, or false negative warnings. The datasets were then used to train four ML classifiers to classify the warnings collected from the synthetic source code. From the experimental results, we observed that the classifier built using the Random Forests (RF) technique outperformed the rest of the classifiers. Lastly, using this classifier and an instance-based transfer learning technique, we ranked a number of warnings aggregated from various open-source software projects. The experimental results show that the proposed approach to reducing the cost of manually inspecting false positive warnings outperformed a random ranking algorithm and was highly correlated with the ranked list generated by the optimal ranking algorithm.
APA, Harvard, Vancouver, ISO, and other styles
47

(7013450), Enas Ahmad Alikhashashneh. "USING MACHINE LEARNING TECHNIQUES TO IMPROVE STATIC CODE ANALYSIS TOOLS USEFULNESS." Thesis, 2019.

Find full text
Abstract:

This dissertation proposes an approach that uses Machine Learning (ML) techniques to reduce, as much as possible, the cost of manually inspecting the large number of false positive warnings reported by Static Code Analysis (SCA) tools. The proposed approach neither assumes the use of particular SCA tools nor depends on the specific programming language used to write the target source code or application. To reduce the number of false positive warnings, we first evaluated a number of SCA tools in terms of software engineering metrics using a synthetic source code base, the Juliet test suite. From this evaluation, we concluded that SCA tools report plenty of false positive warnings that need manual inspection. We then generated a number of datasets from source code that forced the SCA tool to generate either true positive, false positive, or false negative warnings. The datasets were then used to train four ML classifiers to classify the warnings collected from the synthetic source code. From the experimental results, we observed that the classifier built using the Random Forests (RF) technique outperformed the rest of the classifiers. Lastly, using this classifier and an instance-based transfer learning technique, we ranked a number of warnings aggregated from various open-source software projects. The experimental results show that the proposed approach to reducing the cost of manually inspecting false positive warnings outperformed a random ranking algorithm and was highly correlated with the ranked list generated by the optimal ranking algorithm.

APA, Harvard, Vancouver, ISO, and other styles
48

Reynolds, Zachary P. "Identifying and documenting false positive patterns generated by static code analysis tools." Thesis, 2017. https://doi.org/10.7912/C22651.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Static code analysis tools are known to flag a large number of false positives. A false positive is a warning message generated by a static code analysis tool for a location in the source code that does not have any known problems. This thesis presents our approach and results in identifying and documenting false positives generated by static code analysis tools. The goal of our study was to understand the different kinds of false positives generated so we can (1) automatically determine if a warning message from a static code analysis tool truly indicates an error, and (2) reduce the number of false positives developers must triage. We used two open-source tools and one commercial tool in our study. Our approach led to a hierarchy of 14 core false positive patterns, with some patterns appearing in multiple variations. We implemented checkers to identify the code structures of false positive patterns and to eliminate them from the output of the tools. Preliminary results showed that we were able to reduce the number of warnings by 14.0%-99.9% with a precision of 94.2%-100.0% by applying our false positive filters in different cases.
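A pattern-based false-positive filter of the kind described here can be sketched as dropping warnings whose flagged source line matches a known benign code structure. The warnings and patterns below are invented examples, not the thesis's actual checkers:

```python
import re

# Sketch of a pattern-based false-positive filter: suppress warnings whose
# flagged line matches a known benign pattern. All inputs are invented.

warnings = [
    {"msg": "null dereference", "line": "if (x != null) x.run();"},
    {"msg": "null dereference", "line": "y.run();"},
    {"msg": "resource leak",    "line": "try (FileReader r = open()) { }"},
]

# Known false-positive patterns: a guarded dereference, try-with-resources.
benign = [re.compile(r"!=\s*null"), re.compile(r"try\s*\(")]

kept = [w for w in warnings if not any(p.search(w["line"]) for p in benign)]
reduction = 100 * (len(warnings) - len(kept)) / len(warnings)
print(kept, f"{reduction:.0f}% filtered")  # only the unguarded y.run() remains
```

Real checkers work on code structure rather than raw text, but the effect is the same: fewer warnings reach the developer while the true positives survive.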
APA, Harvard, Vancouver, ISO, and other styles
49

Chia-HsiuYeh and 葉家秀. "Volumetric Error Analysis for Five-Axis Virtual Machine Tools under Static Load." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/30362656835698934647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Kao, Mei-Chan, and 高美琴. "The Analysis of Mathematical Tools in Senior High School Physics Textbooks - The Statics Unit as an Example." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/s6j86k.

Full text
Abstract:
Master's thesis
Tamkang University
In-service Master's Program in Mathematics Teaching for Secondary School Teachers
101 (ROC academic year)
The main purpose of this study is to examine the structure of the teaching material, checked against the Curriculum Guideline issued by the Ministry of Education (MOE) in 2009, to analyze the representation of the statics unit in three versions of senior high school physics textbooks, and to describe how mathematical tools are used when the material is presented. Using concept maps as a tool, content analysis is applied to the teaching materials of the three existing versions of senior high school physics textbooks. The findings of this study are as follows. There are some differences in the sequence of concepts among the different versions of the statics unit, but all are consistent with the mapping in the Curriculum Guideline 2009. The concepts common to all versions comprise about 70 percent of the senior high statics unit concepts. In the statics unit, all three versions primarily use proportional relationships, the Pythagorean theorem, algebra, trigonometric functions and vectors, and the versions differ in the depth of calculation in which these tools are used. The statics unit is scheduled in the physics courses of the first term of Grade 11 and draws on many mathematical tools corresponding to the Curriculum Guideline: proportional relationships and the Pythagorean theorem from junior high school mathematics, the algebra of Σ notation from the second term of Grade 10, and trigonometric functions and vectors from the first term of Grade 11; part of the trigonometric function material, however, appears in the first term of Grade 12. Finally, suggestions based on the findings are made for the reference of textbook compilers, users and researchers.
APA, Harvard, Vancouver, ISO, and other styles