To see other types of publications on this subject, follow this link: Static analysis.

Theses on the subject "Static analysis"

Create an accurate reference in APA, MLA, Chicago, Harvard and several other styles

Choose a source:

Consult the top 50 theses for your research on the subject "Static analysis".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

SANTORO, MAURO. « Inference of behavioral models that support program analysis ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19514.

Full text
Abstract:
The use of models to study the behavior of systems is common to all fields. A behavioral model formalizes and abstracts the view of a system and gives insight into the behavior of the system being developed. In the software field, behavioral models can support software engineering tasks. In particular, relevant uses of behavioral models are found in all the main analysis and testing activities: models are used in program comprehension to complement the information available in specifications, in testing to ease test case generation, as oracles to verify the correctness of executions, and in failure detection to automatically identify anomalous behaviors. When behavioral models are not part of specifications, automated approaches can derive behavioral models from programs. The degree of completeness and soundness of the generated models depends on the kind of inferred model and the quality of the data available for the inference. When model inference techniques do not work well or the data available for the inference are poor, the many testing and analysis techniques based on these models will necessarily provide poor results. This PhD thesis concentrates on the problem of inferring Finite State Automata (likely the model most used to describe the behavior of software systems) that describe the behavior of programs and components and can be useful as support for testing and analysis activities. The thesis contributes to the state of the art by: (1) empirically studying the effectiveness of techniques for the inference of FSAs when a variable amount of information (from scarce to good) is available for the inference; (2) empirically comparing the effectiveness of techniques for the inference of FSAs and Extended FSAs; (3) proposing a white-box technique that infers FSAs from service-based applications by starting from a complete model and then refining it by incrementally removing inconsistencies; (4) proposing a black-box technique that infers FSAs by starting from a partial model and then incrementally producing additional information to increase the completeness of the model.
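To give a concrete flavour of the kind of inference discussed here, the following minimal Python sketch builds a prefix-tree acceptor from observed event traces and then merges states with identical outgoing events. It is an illustration of FSA inference in general, not of the specific techniques evaluated in the thesis, and the function names are ours.

```python
# Minimal sketch: infer a finite state automaton from execution traces by
# building a prefix-tree acceptor, then merging states with identical futures.
# Illustrative only; not the inference algorithms evaluated in the thesis.
from collections import defaultdict

def build_prefix_tree(traces):
    """Each trace is a sequence of observed events (e.g. method calls)."""
    transitions = {}          # (state, event) -> state
    next_state = [1]          # state 0 is the initial state
    for trace in traces:
        state = 0
        for event in trace:
            key = (state, event)
            if key not in transitions:
                transitions[key] = next_state[0]
                next_state[0] += 1
            state = transitions[key]
    return transitions

def merge_equivalent(transitions):
    """Merge states whose sets of outgoing events are identical (a crude
    generalisation step in the spirit of k-tails with k = 1)."""
    outgoing = defaultdict(set)
    for (state, event) in transitions:
        outgoing[state].add(event)
    rep, by_signature = {}, {}
    for state, events in outgoing.items():
        rep[state] = by_signature.setdefault(frozenset(events), state)
    merged = {}
    for (s, e), t in transitions.items():
        merged[(rep.get(s, s), e)] = rep.get(t, t)
    return merged

if __name__ == "__main__":
    traces = [["open", "read", "close"],
              ["open", "read", "read", "close"],
              ["open", "close"]]
    fsa = merge_equivalent(build_prefix_tree(traces))
    for (state, event), target in sorted(fsa.items()):
        print(f"q{state} --{event}--> q{target}")
```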
Styles: APA, Harvard, Vancouver, ISO, etc.
2

SHRESTHA, JAYESH. « Static Program Analysis ». Thesis, Uppsala universitet, Informationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-208293.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Jakobsson, Filip. « Static Analysis for BSPlib Programs ». Thesis, Orléans, 2019. http://www.theses.fr/2019ORLE2005.

Full text
Abstract:
The goal of scalable parallel programming is to program computer architectures composed of multiple processing units so that increasing the number of processing units leads to an increase in performance. Bulk Synchronous Parallel (BSP) is a widely used model for scalable parallel programming with predictable performance. BSPlib is a library for BSP programming in C. In BSPlib, parallel algorithms are expressed by intermingling instructions that control the global parallel structure and instructions that express the local computation of each processing unit. This lets the programmer fine-tune synchronization, but also implement programs whose diverging parallel control flow obscures the underlying BSP structure. In practice, however, the majority of BSPlib programs are textually aligned, a property that ensures parallel control flow convergence. We examine three core aspects of BSPlib programs through the lens of textual alignment: synchronization, performance and communication. First, we present a static analysis that identifies textually aligned statements and use it to verify safe synchronization. This analysis has been implemented in Frama-C and certified in Coq. Second, we exploit textual alignment to develop a static performance analysis for BSPlib programs, based on classic cost analysis for sequential programs. Third, we develop a textual-alignment-based sufficient condition for safe registration. Registration in BSPlib enables communication by Direct Remote Memory Access but is error prone. This development forms the basis for a future static analysis of registration.
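The textual-alignment property can be pictured with a small, hypothetical model: if every process reaches its barrier synchronisations at the same sequence of source lines, parallel control flow converges; a pid-guarded barrier breaks the property. The sketch below is our own simplification, not the Frama-C/Coq analysis developed in the thesis.

```python
# Toy model of textually aligned synchronisation (hypothetical, simplified):
# each process records the source line of every barrier it executes.
# If all processes synchronise at the same sequence of source lines, parallel
# control flow converges; otherwise we flag a mismatch.

def sync_trace(pid, nprocs):
    """A small BSPlib-like program, modelled as the sequence of source lines
    at which each process calls its barrier."""
    trace = []
    trace.append(10)              # line 10: barrier reached by every process
    if pid % 2 == 0:              # pid-dependent branch...
        pass                      # ...with no barrier inside: still aligned
    trace.append(20)              # line 20: barrier reached by every process
    if pid == 0:
        trace.append(30)          # line 30: barrier guarded by pid -> NOT aligned
    return trace

def check_alignment(nprocs):
    traces = [sync_trace(p, nprocs) for p in range(nprocs)]
    reference = traces[0]
    for pid, trace in enumerate(traces[1:], start=1):
        if trace != reference:
            return (f"sync mismatch: process 0 syncs at lines {reference}, "
                    f"process {pid} at lines {trace}")
    return "all processes synchronise at textually aligned barriers"

if __name__ == "__main__":
    print(check_alignment(nprocs=4))
```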
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Djoudi, Adel. « Binary level static analysis ». Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX093.

Full text
Abstract:
Automatic software verification methods have seen increasing success since the early 2000s, thanks to several industrial successes (Microsoft, Airbus, etc.). Static program analysis aims to automatically infer verified properties of programs from their descriptions. Standard static analysis techniques apply to the software source code, written for instance in C or Java. However, access to the source code is not possible for many security-related applications, either because the source code is not available (mobile code, computer viruses) or because the developer does not disclose it (off-the-shelf components, third-party certification). This dissertation is concerned with the design and development of a static binary analysis platform for security analysis. Our contributions are made at three levels: semantics, implementation and static analysis. First, the semantics of the analyzed binary programs is based on a generic, simple and concise formalism called DBA, which is extended in this dissertation with specification and abstraction mechanisms. A well-defined semantics of binary programs also requires an adequate memory model. We propose a new memory model adapted to binary-level requirements and inspired by recent work on low-level C. This new model combines the abstraction of the region-based memory model with the expressiveness of the flat model. Second, our binary code analysis platform BinSec offers three basic services: disassembly, simulation and static analysis. Each machine instruction is translated into a block of semantically equivalent DBA instructions. The platform handles a large part of the x86 instruction set. A simplification step eliminates useless intermediate calculations in order to ease further analyses; in particular, our simplifications eliminate up to 75% of flag updates. Finally, we developed a static analysis engine for binary programs based on abstract interpretation. Besides abstract domains specifically adapted to binary analysis, we focused on giving the user control over the trade-off between accuracy/correctness and efficiency. In addition, we propose an original approach for recovering high-level conditions from low-level ones in order to enhance analysis precision. The approach is sound, efficient, platform-independent and achieves very high recovery ratios.
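The flag-update simplification mentioned above can be illustrated with a small liveness-style pass over a toy intermediate representation (our own notation, not BinSec's actual DBA): a flag assignment that is overwritten before it is ever read can be dropped.

```python
# Sketch of dead flag-update elimination on a toy three-address IR
# (hypothetical syntax, not BinSec's DBA). A flag definition is dead if the
# flag is redefined before being read; a backward pass removes such updates.

FLAGS = {"ZF", "CF", "SF", "OF"}

def eliminate_dead_flag_updates(block):
    """block: list of (dest, sources) tuples, in program order."""
    live = set(FLAGS)            # conservatively assume flags live at block exit
    keep = []
    for dest, sources in reversed(block):
        if dest in FLAGS and dest not in live:
            continue             # overwritten before any read: drop it
        keep.append((dest, sources))
        live.discard(dest)
        live.update(s for s in sources if s in FLAGS)
    keep.reverse()
    return keep

if __name__ == "__main__":
    # An add followed by a cmp: the add's flag writes are all overwritten by
    # the cmp before being read, so they can be removed.
    block = [
        ("eax", ["eax", "ebx"]), ("ZF", ["eax"]), ("CF", ["eax"]), ("SF", ["eax"]),
        ("tmp", ["eax"]),        ("ZF", ["tmp"]), ("CF", ["tmp"]), ("SF", ["tmp"]),
    ]
    for dest, srcs in eliminate_dead_flag_updates(block):
        print(dest, "<-", srcs)
```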
Styles: APA, Harvard, Vancouver, ISO, etc.
5

TELESCA, ALESSIO. « ADVANCED MODELLING OF OVER-STROKE DISPLACEMENT CAPACITY FOR CURVED SURFACE SLIDER DEVICES ». Doctoral thesis, Università degli studi della Basilicata, 2022. http://hdl.handle.net/11563/153765.

Full text
Abstract:
This doctoral dissertation reports on the research work carried out and provides a contribution to the field of seismic base isolation. Since its introduction, the base isolation strategy has proved to be an effective solution for protecting structures and their components from earthquake-induced damage, enhancing their resilience and implying a significant decrease in repair time and cost compared to a conventional fixed-base structure. Sliding isolation devices feature some important characteristics over other devices that make them particularly suitable for retrofitting existing buildings, such as high displacement capacity combined with limited plan dimensions. Even though these devices have become more widespread worldwide in recent years, a full understanding of their performance and limits, as well as of their behaviour under real seismic excitations, has not yet been completely achieved. When Curved Surface Sliders reach their displacement capacity, they enter the so-called over-stroke sliding regime, which is characterized by an increase in stiffness and friction coefficient. In the over-stroke displacement regime, however, sliding isolators are still capable, up to certain threshold values, of preserving their ability to support gravity loads. In this doctoral dissertation, the influence of Curved Surface Slider devices on different structures and under different configurations is analysed, and a tool to help professionals in the design phase is provided. The main focuses of the research are: i) the numerical investigation of the influence of over-stroke displacement on base-isolated structures; ii) the numerical investigation of the influence of displacement-retaining elements on base-isolated structures; iii) the development of a mechanical model and an algebraic solution describing the over-stroke sliding regime and the associated limit displacements.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Fu, Zhoulai. « Static analysis of numerical properties in the presence of pointers ». Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00918593.

Full text
Abstract:
The fast and furious pace of change in computing technology has become an article of faith for many. The reliability of computer-based systems crucially depends on the correctness of their computing. Can man, who created the computer, be capable of preventing machine-made misfortune? The theory of static analysis strives to achieve this ambition. The analysis of numerical properties of programs has been an essential research topic for static analysis. These kinds of properties are commonly modeled and handled by the concept of numerical abstract domains. Unfortunately, lifting these domains to heap-manipulating programs is not obvious. On the other hand, points-to analyses have been intensively studied to analyze pointer behaviors, and some scale to very large programs, but without inferring any numerical properties. We propose a framework based on the theory of abstract interpretation that is able to combine existing numerical domains and points-to analyses in a modular way. The static numerical analysis is prototyped using the SOOT framework for pointer analyses and the PPL library for numerical domains. The implementation is able to analyze large Java programs within several minutes. The second part of this thesis consists of a theoretical study of the combination of the points-to analysis with another pointer analysis providing information called must-alias. Two pointer variables must alias at some program control point if they hold equal references whenever the control point is reached. We have developed an algorithm of quadruple complexity that sharpens points-to analysis using must-alias information. The algorithm is proved correct following a semantics-based formalization and the concept of bisimulation borrowed from game theory, model checking, etc.
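A deliberately naive sketch of the kind of combination described above (hypothetical names, far simpler than the thesis' framework): the points-to map tells the numerical domain which abstract locations an assignment through a pointer may touch, and aliasing forces weak updates (joins) instead of strong ones.

```python
# Naive sketch of combining a points-to analysis with an interval domain
# (hypothetical API; the thesis' framework is far more general).

def join(i1, i2):
    """Join of two intervals (lo, hi); None means bottom (no value yet)."""
    if i1 is None: return i2
    if i2 is None: return i1
    return (min(i1[0], i2[0]), max(i1[1], i2[1]))

class CombinedState:
    def __init__(self, points_to, intervals):
        self.points_to = points_to      # pointer var -> set of abstract locations
        self.intervals = intervals      # abstract location -> (lo, hi)

    def assign_through_pointer(self, ptr, value_interval):
        """*ptr = e, where e evaluates to value_interval."""
        targets = self.points_to.get(ptr, set())
        for loc in targets:
            if len(targets) == 1:
                # strong update: ptr points to exactly one location
                self.intervals[loc] = value_interval
            else:
                # weak update: the location may or may not be written, so join
                self.intervals[loc] = join(self.intervals.get(loc), value_interval)

if __name__ == "__main__":
    state = CombinedState(
        points_to={"p": {"x"}, "q": {"x", "y"}},
        intervals={"x": (0, 0), "y": (10, 10)},
    )
    state.assign_through_pointer("p", (5, 5))       # strong update of x
    state.assign_through_pointer("q", (100, 100))   # weak updates of x and y
    print(state.intervals)   # {'x': (5, 100), 'y': (10, 100)}
```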
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Agrawal, Akash. « Static Analysis to improve RTL Verification ». Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/75293.

Full text
Abstract:
Integrated circuits have traveled a long way from general-purpose microprocessors to application-specific circuits, and have become an integral part of the modern era of technology that we live in. As applications and their complexity increase rapidly every day, so do the sizes of these circuits. With the increase in design size, the associated testing effort to verify these designs also increases. The goal of this thesis is to leverage static analysis techniques to reduce the effort of testing and verification at the register transfer level. Studying a design at the register transfer level exposes relational information about the design which is inaccessible at the structural level. In this thesis, we present a way to generate a Data Dependency Graph and a Control Flow Graph from a register-transfer-level description of a circuit. Next, the generated graphs are used to perform relation mining to improve the test generation process in terms of speed, branch coverage and number of test vectors generated. The generated control flow graph gives valuable information about the flow of information through the circuit design. We use this information to create a framework that improves branch reachability analysis, mainly in terms of speed. We show the efficiency of our methods by running them on a suite of ITC'99 benchmark circuits.
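The control-flow-graph extraction step can be sketched as follows; the statement representation is a made-up stand-in for an RTL description, not the format used in the thesis.

```python
# Sketch of control-flow graph extraction from a linear, RTL-like statement
# list (hypothetical representation, not the thesis' actual RTL front end).
# Each statement may fall through to the next one and/or branch to a label.

def build_cfg(statements):
    """statements: list of dicts with keys 'label', 'branch', 'cond'."""
    label_to_index = {s["label"]: i for i, s in enumerate(statements) if s.get("label")}
    successors = {i: [] for i in range(len(statements))}
    for i, stmt in enumerate(statements):
        target = stmt.get("branch")
        if target is not None:
            successors[i].append(label_to_index[target])
        # conditional branches and plain statements fall through
        if (target is None or stmt.get("cond")) and i + 1 < len(statements):
            successors[i].append(i + 1)
    return successors

if __name__ == "__main__":
    prog = [
        {"label": "L0", "branch": None, "cond": False},   # x := a + b
        {"label": None, "branch": "L3", "cond": True},    # if (x > 0) goto L3
        {"label": None, "branch": None, "cond": False},   # y := 0
        {"label": "L3", "branch": None, "cond": False},   # out := y
    ]
    for node, succs in build_cfg(prog).items():
        print(f"stmt {node} -> {succs}")
```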
Master of Science
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Borchert, Thomas. « Code Profiling : Static Code Analysis ». Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-1563.

Full text
Abstract:

Capturing the quality of software and detecting sections that warrant further scrutiny are of high interest for industry as well as for education. Project managers request quality reports in order to evaluate the current status and to initiate appropriate improvement actions, and teachers need to detect students who require extra attention and help in certain programming aspects. By means of software measurement, software characteristics can be quantified and the produced measures analyzed to gain an understanding of the underlying software quality.

In this study, the technique of code profiling (the activity of creating a summary of distinctive characteristics of software code) was inspected, formalized and applied to a sample group of 19 industry and 37 student programs. When software projects are analyzed by means of software measurements, a considerable amount of data is produced. The task is to organize the data and draw meaningful information from the measures produced, quickly and without high expense.

The results of this study indicated that code profiling can be a useful technique for quick program comparisons and continuous quality observations with several application scenarios in both industry and education.

Styles: APA, Harvard, Vancouver, ISO, etc.
9

Lanaspre, Benoit. « Static analysis for distributed prograph ». Thesis, University of Southampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262726.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Ahmad, S. H. S. « Static analysis of masonry arches ». Thesis, University of Salford, 2017. http://usir.salford.ac.uk/43067/.

Full text
Abstract:
The aim of the present research was to provide a practical theoretical model, based on elementary statics, for the assessment of masonry arch bridges that benefits from the large-scale experimental programme at Salford University, together with insight gained from the Distinct Element numerical modelling work. The need for large-scale, laboratory-controlled load tests of physical models that may be reliably confined to a specific domain of behaviour with known parameters and modelling constraints was highlighted in chapter 2 with reference to the literature. Load tests with various distributions of surcharge were carried out and the mechanisms of failure observed. The numerical model was shown to agree with expected theoretical behaviour and showed good agreement with experimental results. A theoretical model was developed which benefitted from insight from the experimental and numerical work to provide a means of predicting the failure load of the arch-fill system for the loading arrangements used in the physical and numerical tests. The model provided predicted failure loads for a range of material variation within a reasonable expected range and showed promising resemblance to the physical modelling results.
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Mountjoy, Jon-Dean. « Static analysis of functional languages ». Thesis, Rhodes University, 1994. http://hdl.handle.net/10962/d1006690.

Full text
Abstract:
Static analysis is the name given to a number of compile-time analysis techniques used to automatically generate information which can lead to improvements in the execution performance of functional languages. This thesis provides an introduction to these techniques and their implementation. The abstract interpretation framework is an example of a technique used to extract information from a program by providing the program with an alternate semantics and evaluating it over a non-standard domain. The elements of this domain represent certain properties of interest. This framework is examined in detail, as well as various extensions and variants of it. The use of binary logical relations and program logics as alternative formulations of the framework, and of partial equivalence relations as an extension to it, is also examined. The projection analysis framework determines how much of a sub-expression can be evaluated by examining the context in which the expression is to be evaluated, and provides an elegant method for extracting particular types of information from data structures. This is also examined. The most costly operation in implementing an analysis is the computation of fixed points. Methods developed to make this process more efficient are examined as well. This leads to the final chapter, which highlights the dependencies and relationships between the different frameworks and their mathematical disciplines.
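One classic instance of the abstract interpretation framework surveyed here is strictness analysis over the two-point domain; the toy encoding below (our own, for first-order functions only) shows how evaluating a function's abstract version detects the arguments it must evaluate.

```python
# Minimal sketch of strictness analysis by abstract interpretation over the
# two-point domain: 0 means "definitely undefined", 1 means "may be defined".
# A function is strict in an argument if feeding 0 in that position
# (with 1 everywhere else) yields 0. Our own toy encoding of the framework.

def abs_plus(a, b):        # + needs both of its arguments
    return min(a, b)

def abs_if(c, t, e):       # if needs its condition, then either branch
    return min(c, max(t, e))

def abs_const(_):          # constants ignore the unused argument
    return 1

# f x y = if x == 0 then 1 else y      -- strict in x, not strict in y
def abs_f(x, y):
    return abs_if(abs_plus(x, abs_const(x)), abs_const(x), y)

def strict_positions(abs_fun, arity):
    strict = []
    for i in range(arity):
        args = [1] * arity
        args[i] = 0
        strict.append(abs_fun(*args) == 0)
    return strict

if __name__ == "__main__":
    print(strict_positions(abs_f, 2))   # [True, False]
```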
Styles: APA, Harvard, Vancouver, ISO, etc.
12

Keerthi, Rajasekhar. « STABILITY AND STATIC NOISE MARGIN ANALYSIS OF STATIC RANDOM ACCESS MEMORY ». Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1195600920.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Erling, Fredrik. « Static CFD analysis of a novel valve design for internal combustion engines ». Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-15521.

Full text
Abstract:
In this work, CFD was used to simulate the flow through a novel valve design for internal combustion engines. CFD is a numerical method for simulating the behaviour of systems involving flow processes. An FEM was used for solving the equations. Literature on the topic was studied to gain an understanding of the performance limiters of the internal combustion engine. This understanding was used to set up models that would better mimic physical phenomena compared to previous studies. The models gave plausible results for fluid velocities and in-cylinder flow patterns. Comsol Multiphysics 4.1 was used for the computations.
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Bastos, Camila Bianka Silva. « Estudo dos impactos de um sistema fotovoltaico conectado à rede elétrica utilizando análises QSTS ». Universidade do Estado de Santa Catarina, 2015. http://tede.udesc.br/handle/handle/2081.

Full text
Abstract:
This dissertation presents a study of the operation of two different three-phase grid-connected test grids with the connection of a 1 MWp photovoltaic system. Two analysis methods are used to evaluate the impacts of this photovoltaic system: conventional static analysis and the analysis known as Quasi-Static Time-Series (QSTS) analysis. Although every grid has unique characteristics, it is important to use test grids, which simulate real grid characteristics, to analyze the kinds of problems that can occur and then look for alternatives if necessary. The impacts evaluated are related to system losses, minimized through a study of the allocation of generation on the grid, the voltage profile, and the tap position curve when automatic load tap changers are used. It was verified that the photovoltaic system's interconnection point is the one most influenced by its connection to the grid. The Quasi-Static Time-Series analysis allows the load-generation interaction to be evaluated correctly, running the time-series power flow with estimated data for the load and irradiance curves over 168 hours. Conventional static analysis only considers critical operating conditions, such as minimum and maximum load with no generation or maximum generation, and does not evaluate the different scenarios that occur in reality. Photovoltaic systems can bring many advantages to electric systems, such as improving the voltage profile at the final consumer, reducing line losses, and reducing environmental impacts. However, with the increase of distributed photovoltaic generation on the electrical grid, it is necessary to be aware of the impacts that this may cause by performing interconnection studies.
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Zhou, Shuo. « Static timing analysis in VLSI design ». Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3207193.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed May 18, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 110-113).
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Zhang, Connie. « Static Conflict Analysis of Transaction Programs ». Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/1052.

Full text
Abstract:
Transaction programs are comprised of read and write operations issued against the database. In a shared database system, one transaction program conflicts with another if it reads or writes data that another transaction program has written. This thesis presents a semi-automatic technique for pairwise static conflict analysis of embedded transaction programs. The analysis predicts whether a given pair of programs will conflict when executed against the database. There are several potential applications of this technique, the most obvious being transaction concurrency control in systems where it is not necessary to support arbitrary, dynamic queries and updates. By analyzing transactions in such systems before the transactions are run, it is possible to reduce or eliminate the need for locking or other dynamic concurrency control schemes.
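The core of such a pairwise analysis can be pictured with read/write sets; the sketch below is a simplification, since real transaction programs require reasoning about predicates and parameters rather than plain item names.

```python
# Sketch of pairwise static conflict detection from read/write sets
# (a simplification of the analysis: embedded transaction programs require
# reasoning about predicates and parameters, not just item names).

def conflicts(tx_a, tx_b):
    """Each transaction is a dict {'read': set, 'write': set} of data items.
    Two transactions conflict if one writes an item the other reads or writes."""
    return bool(
        tx_a["write"] & (tx_b["read"] | tx_b["write"]) or
        tx_b["write"] & tx_a["read"]
    )

if __name__ == "__main__":
    deposit  = {"read": {"account"},        "write": {"account"}}
    audit    = {"read": {"account", "log"}, "write": set()}
    add_user = {"read": set(),              "write": {"users"}}
    print(conflicts(deposit, audit))     # True: deposit writes what audit reads
    print(conflicts(audit, add_user))    # False: disjoint items
```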
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Chapman, Roderick. « Static timing analysis and program proof ». Thesis, University of York, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261100.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
18

JARDIM, JORGE LUIZ DE ARAUJO. « ANALYSIS AND DESIGN OF STATIC EXCITERS ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1987. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9489@1.

Full text
Abstract:
Power systems are designed to operate with constant voltage and frequency, allowing small variations around their rated values. These quantities are controlled mainly by the excitation systems and the speed governors, respectively. This dissertation examines the design of modern excitation systems and establishes the desired characteristics of static exciter components. The main components (converter, firing circuit, starting circuit and voltage regulator) are implemented in an exciter prototype. The prototype's responses to small and large disturbances are also discussed.
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Yue, Hong. « Reliability analysis of static sealed joints ». Thesis, University of Leicester, 1995. http://hdl.handle.net/2381/34733.

Full text
Abstract:
Leaking, friction and wear of seals are concerns for machine designers and users everywhere. Although perfect sealing may be the general aim, in practice, considering apparently identical seals in the same application, some may seal while some may not. This is due, at least in part, to surface-related random phenomena. Therefore, the importance of considering the reliability of sealed joints cannot be overemphasized. Up to now, there has been no paper in the published literature on the reliability analysis of static sealed joints. All of these facts provide the motivation for the current research work. A computer simulation model for the leakage analysis of static sealed joints has been developed based on percolation theory. The features of the leakage simulation model can be summarized as follows: (1) It reveals the effect of the random properties of rough surfaces on sealing performance and makes it possible to apply statistical concepts in discussing the sealing reliability of static sealed joints; (2) It provides a much simpler and more economical tool for the statistical analysis of leakage by computer simulation than by experiments; (3) It makes it possible to describe the leakage phenomenon more accurately using the leakage path model instead of the clearance between surface centre-lines; (4) It eliminates the need for an individual asperity model of rough surfaces, because the actual digitized surface is used directly. The relationship between the leakage probability and the applied load, which is of great general interest to the designers of static sealed joints, has been predicted by the leakage simulation model. The simulated results show that, for a given leakage probability, the required load will increase as the RMS height increases or the correlation length decreases. It is confirmed that a certain value of contact ratio can be used as the criterion for identifying the reliability of static sealed joints with a certain confidence level. The contact ratio criterion provides a simple, inexpensive and useful tool to evaluate the effects of rough surfaces, material properties and applied load on the sealing reliability of static sealed joints. However, in order to be of practical use, experimental work is required to evaluate its validity.
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Carré, Jean-Loup. « Static analysis of embedded multithreaded programs ». Cachan, Ecole normale supérieure, 2010. https://theses.hal.science/tel-01199739.

Full text
Abstract:
This PhD thesis presents a static analysis algorithm for programs with threads. It generalizes abstract interpretation techniques used in the single-threaded case and allows the detection of runtime errors, e.g., invalid pointer dereferences, array overflows and integer overflows. We have implemented this algorithm; it analyzes a large industrial multithreaded code base (100K LOC) in a few hours. Our technique is modular: it can use any abstract domain designed for the single-threaded case. Furthermore, without any change in the fixpoint computation, some abstract domains allow the detection of data races or deadlocks. The technique does not assume sequential consistency (i.e., it does not assume that the execution of a parallel program is an interleaving of the executions of its threads), since in practice (Intel and SPARC processors, Java, ...) program execution is not sequentially consistent. For example, it works with the TSO (Total Store Ordering) and PSO (Partial Store Ordering) memory models.
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Henriksson, Oscar, et Michael Falk. « Static Vulnerability Analysis of Docker Images ». Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14794.

Full text
Abstract:
Docker is a popular tool for virtualization that allows fast and easy deployment of applications and has been growing increasingly popular among companies. Docker also includes a large library of images in the Docker Hub repository, which is mainly user-created and uncontrolled. This leads to a low frequency of updates, which results in vulnerabilities in the images. In this thesis we develop a tool for determining what vulnerabilities exist inside Docker images based on a Linux distribution. This is done by using our own tool for downloading and retrieving the necessary data from the images and then utilizing Outpost24's scanner for finding vulnerabilities in Linux packages. With the help of this tool we also publish vulnerability statistics for the most downloaded images on Docker Hub. The result is a tool that can successfully scan a Docker image for vulnerabilities in certain Linux distributions. A survey of the top 1000 Docker images also shows that the number of vulnerabilities has increased in comparison to earlier surveys of Docker images.
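The overall pipeline can be pictured as below. Both the package extraction and the vulnerability database here are hypothetical stand-ins: the thesis uses its own extraction tool together with Outpost24's scanner, whose interfaces are not reproduced.

```python
# Illustrative pipeline only: match packages extracted from a Docker image
# against a vulnerability database. Both inputs are hypothetical stand-ins;
# the thesis uses its own extraction tool and Outpost24's scanner instead.

def parse_dpkg_status(status_text):
    """Very rough parse of a dpkg status file into {package: version}."""
    packages, name = {}, None
    for line in status_text.splitlines():
        if line.startswith("Package: "):
            name = line.split(": ", 1)[1]
        elif line.startswith("Version: ") and name:
            packages[name] = line.split(": ", 1)[1]
            name = None
    return packages

def find_vulnerabilities(packages, vuln_db):
    """vuln_db: {package: [(affected_version, advisory_id), ...]} (made up)."""
    findings = []
    for pkg, version in packages.items():
        for affected, advisory in vuln_db.get(pkg, []):
            if version == affected:
                findings.append((pkg, version, advisory))
    return findings

if __name__ == "__main__":
    status = "Package: openssl\nVersion: 1.0.1f-1\n\nPackage: bash\nVersion: 4.3-6\n"
    fake_db = {"openssl": [("1.0.1f-1", "EXAMPLE-2014-0001")]}
    for pkg, ver, adv in find_vulnerabilities(parse_dpkg_status(status), fake_db):
        print(f"{pkg} {ver}: {adv}")
```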
Styles: APA, Harvard, Vancouver, ISO, etc.
22

SCOCCO, MAURA. « Analysis of static and perturbed posture ». Doctoral thesis, Università Politecnica delle Marche, 2008. http://hdl.handle.net/11566/242439.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Andreescu, Oana Fabiana. « Static analysis of functional programs with an application to the frame problem in deductive verification ». Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S047/document.

Full text
Abstract:
In the field of software verification, the frame problem refers to establishing the boundaries within which program elements operate. It has notoriously tedious consequences for the specification of frame properties, which indicate the parts of the program state that an operation is allowed to modify, as well as for their verification, i.e. proving that operations modify only what is specified by their frame properties. In the context of interactive formal verification of complex systems, such as operating systems, much effort is spent addressing these consequences and proving the preservation of the systems' invariants. However, most operations have a localized effect on the system and impact only a limited number of invariants at the same time. In this thesis we address the issue of identifying those invariants that are unaffected by an operation and we present a solution for automatically inferring their preservation. Our solution is meant to ease the proof burden for the programmer. It is based on static analysis and does not require any additional frame annotations. Our strategy consists in combining a dependency analysis and a correlation analysis. We have designed and implemented both static analyses for a strongly typed functional language that handles structures, variants and arrays. The dependency analysis computes a conservative approximation of the input fragments on which functional properties and operations depend. The correlation analysis computes a safe approximation of the parts of an input state that a function copies to its output state; it summarizes not only what is modified but also how it is modified and to what extent. By employing these two static analyses and subsequently reasoning on their combined results, an interactive theorem prover can automate the discharging of proof obligations for unmodified parts of the state. We have applied both of our static analyses to a functional specification of a micro-kernel, and the obtained results demonstrate both their precision and their scalability.
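The intent of the combined analyses can be illustrated on a toy state model (our own encoding, much simpler than the typed functional language analysed in the thesis): if an operation's summary says a field is copied unchanged and an invariant depends only on unchanged fields, its preservation follows immediately.

```python
# Toy illustration of frame inference (our own encoding, much simpler than the
# thesis' analyses): an operation is summarised by the record fields it may
# modify; an invariant by the fields it depends on. If the two sets are
# disjoint, the invariant is preserved without any further proof.

def preserved_invariants(modified_fields, invariants):
    """invariants: {name: set of fields the property depends on}."""
    return [name for name, deps in invariants.items()
            if not (deps & modified_fields)]

if __name__ == "__main__":
    # assumed summary: 'schedule_thread' only touches the ready queue;
    # everything else is copied unchanged to the output state.
    modified = {"ready_queue"}
    invariants = {
        "page_tables_well_formed": {"page_tables"},
        "ready_threads_are_runnable": {"ready_queue", "threads"},
        "irq_handlers_registered": {"irq_table"},
    }
    print(preserved_invariants(modified, invariants))
    # ['page_tables_well_formed', 'irq_handlers_registered']
```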
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Abbas, Abdullah. « Static analysis of semantic web queries with ShEx schema constraints ». Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM064/document.

Full text
Abstract:
Data structured according to the Resource Description Framework (RDF) are increasingly available in large volumes. This leads to a major need and research interest in novel methods for query analysis and compilation for making the most of RDF data extraction. SPARQL is the widely used and well-supported standard query language for RDF data. In parallel with query language evolutions, schema languages for expressing constraints on RDF datasets also evolve. Shape Expressions (ShEx) are increasingly used to validate RDF data and to communicate expected graph patterns. Schemas in general are important for static analysis tasks such as query optimisation and containment. Our purpose is to investigate the means and methodologies for SPARQL query static analysis and optimisation in the presence of ShEx schema constraints. Our contribution is divided into two main parts. In the first part we consider the problem of SPARQL query containment in the presence of ShEx constraints. We propose a sound and complete procedure for the problem of containment with ShEx, considering several SPARQL fragments. In particular, our procedure handles OPTIONAL query patterns, which turn out to be an important feature to study in the presence of schemas. We provide complexity bounds for the containment problem with respect to the language fragments considered. We also propose an alternative method for SPARQL query containment with ShEx by reduction to First-Order Logic satisfiability, which makes it possible to handle an extension of the SPARQL fragment covered by the first method. This is the first work addressing SPARQL query containment in the presence of ShEx constraints. In the second part of our contribution we propose an analysis method to optimise the evaluation of conjunctive SPARQL queries on RDF graphs by taking advantage of ShEx constraints. The optimisation is based on computing and assigning ranks to query triple patterns, dictating their order of execution. The presence of intermediate joins between the query triple patterns is the reason why ordering is important for efficiency. We define a set of well-formed ShEx schemas that possess interesting characteristics for SPARQL query optimisation. We then develop our optimisation method by exploiting information extracted from a ShEx schema. We finally report on evaluation results showing the advantages of applying our optimisation on top of an existing state-of-the-art query evaluation system.
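The ordering idea can be sketched with made-up selectivity estimates: triple patterns whose predicates the schema and statistics suggest match few triples are evaluated first, so intermediate joins stay small. The ranking function below is illustrative only, not the one defined in the thesis.

```python
# Illustrative ranking of conjunctive SPARQL triple patterns by an estimated
# selectivity derived from schema information (made-up numbers, not the
# ranking defined in the thesis). More selective patterns are evaluated first
# so that intermediate join results stay small.

def rank_triple_patterns(patterns, predicate_cardinality):
    """patterns: list of (subject, predicate, object) with '?x'-style variables.
    predicate_cardinality: estimated triples per predicate, as might be derived
    from ShEx shape constraints and dataset statistics (assumed input)."""
    def score(pattern):
        s, p, o = pattern
        bound_terms = sum(1 for t in (s, o) if not t.startswith("?"))
        card = predicate_cardinality.get(p, 10**6)
        return (card, -bound_terms)   # rare predicate first, then more bound terms
    return sorted(patterns, key=score)

if __name__ == "__main__":
    patterns = [
        ("?person", ":worksFor", "?org"),
        ("?person", ":email", '"alice@example.org"'),
        ("?org", ":locatedIn", ":France"),
    ]
    cards = {":worksFor": 50000, ":email": 1000, ":locatedIn": 200}
    for p in rank_triple_patterns(patterns, cards):
        print(p)
```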
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Hedlin, Johan, et Joakim Kahlström. « Detecting access to sensitive data in software extensions through static analysis ». Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162281.

Full text
Abstract:
Static analysis is a technique to automatically audit code without having to execute or manually read through it. It is highly effective and can scan large amounts of code or text very quickly. This thesis uses static analysis to find potential threats within a software's extension modules. These extensions are developed by third parties and should not be allowed to access information belonging to other extensions. However, due to the structure of the software, there is no easy way to restrict this and still keep the software's functionality intact. A static analysis tool could detect such threats by analyzing the code of an extension before it is published online, and therefore keep all current functionality intact. As the software is based on a lesser-known language and there is a specific threat by way of information disclosure, a new static analysis tool has to be developed. To achieve this, language-specific functionality and features available in C++ are combined to create an extensible tool which has the capability to detect cross-extension data access.
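A cross-extension check of this kind might look like the sketch below; the "ext.<name>." naming convention and the token pattern are assumptions made purely for illustration, not the analysed language's real syntax.

```python
# Sketch of a cross-extension check: flag identifiers that refer to another
# extension's namespace. The 'ext.<name>.' naming convention and the token
# pattern are assumptions made for this illustration, not the analysed
# language's real syntax.
import re

IDENTIFIER = re.compile(r"ext\.([A-Za-z_][A-Za-z0-9_]*)\.[A-Za-z_][A-Za-z0-9_]*")

def cross_extension_accesses(extension_name, source_code):
    findings = []
    for lineno, line in enumerate(source_code.splitlines(), start=1):
        for match in IDENTIFIER.finditer(line):
            owner = match.group(1)
            if owner != extension_name:
                findings.append((lineno, match.group(0), owner))
    return findings

if __name__ == "__main__":
    code = (
        "total = ext.sales.get_total()\n"
        "secret = ext.payroll.employee_salaries\n"   # access outside 'sales'
    )
    for lineno, ident, owner in cross_extension_accesses("sales", code):
        print(f"line {lineno}: {ident} touches data owned by '{owner}'")
```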
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Vera, Xavier. « Towards a static cache analysis for whole program analysis / ». Västerås : Mälardalen Univ, 2002. http://www.mrtc.mdh.se/publications/0382.pdf.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Cornilleau, Pierre-Emmanuel. « Certification of static analysis in many-sorted first-order logic ». Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00846347.

Full text
Abstract:
Static program analysis is a core technology for both verifying and finding errors in programs, but most static analyzers are complex pieces of software that are not without errors. A static analysis formalised as an abstract interpreter can be proved sound; however, such proofs are significantly harder to carry out on the actual implementation of an analyser. To alleviate this problem we propose to generate Verification Conditions (VCs, formulae valid only if the results of the analyser are correct) and to discharge them using an Automated Theorem Prover (ATP). We generate formulae in Many-Sorted First-Order Logic (MSFOL), a logic that has been successfully used in deductive program verification. MSFOL is expressive enough to describe the results of complex analyses and to formalise the operational semantics of object-oriented languages. Using the same logic for both tasks allows us to prove the soundness of the VC generator using deductive verification tools. To ensure that VCs can be automatically discharged for complex analyses of the heap, we introduce a VC calculus that produces formulae belonging to a decidable fragment of MSFOL. Furthermore, to be able to certify different analyses with the same calculus, we describe a family of analyses with a parametric concretisation function and instrumentation of the semantics. To improve the reliability of ATPs, we also studied result certification for Satisfiability Modulo Theories solvers, a family of ATPs dedicated to MSFOL. We propose a modular proof system and a modular proof verifier, programmed and proved correct in Coq, that rely on exchangeable verifiers for each of the underlying theories.
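The flavour of such verification conditions can be conveyed with a miniature example (ours, not the thesis' calculus): the claim that an interval result is correct for a single assignment is emitted as an SMT-LIB formula whose negation an SMT solver should report unsatisfiable.

```python
# Miniature example of verification-condition generation (our own toy, not the
# thesis' calculus): the analyser claims that if x is in [0, 10] before
# "y := x + 1", then y is in [1, 11] afterwards. We emit the negation of that
# claim in SMT-LIB; a solver reporting "unsat" certifies the analyser's result.

def interval_vc(var, lo, hi, out_lo, out_hi):
    return "\n".join([
        "(set-logic QF_LIA)",
        f"(declare-const {var} Int)",
        "(declare-const y Int)",
        f"(assert (and (<= {lo} {var}) (<= {var} {hi})))",        # precondition
        f"(assert (= y (+ {var} 1)))",                            # statement semantics
        f"(assert (not (and (<= {out_lo} y) (<= y {out_hi}))))",  # negated claim
        "(check-sat)",
    ])

if __name__ == "__main__":
    print(interval_vc("x", 0, 10, 1, 11))   # feed to an SMT solver; expect unsat
```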
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Hellström, Patrik. « Tools for static code analysis : A survey ». Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16658.

Full text
Abstract:

This thesis has investigated the different tools for static code analysis, with an emphasis on security, that exist and which of these could possibly be used in a project at Ericsson AB in Linköping in which a HIGA (Home IMS Gateway) is constructed. The HIGA is a residential gateway that opens up the possibility of extending an operator's Internet Multimedia Subsystem (IMS) all the way to the user's home and thereby lets the end user connect his/her non-compliant IMS devices, such as a media server, to an IMS network.

Static analysis is the process of examining the source code of a program and in that way testing it for various weaknesses without having to actually execute it (as opposed to dynamic analysis, such as testing).

As a complement to the regular testing that is performed in the HIGA project today, four different static analysis tools were evaluated to find out which one was best suited for use in the project. Two of them were open source tools and two were commercial.

All of the tools were evaluated in five different areas: documentation, installation and integration procedure, usability, performance, and types of bugs found. Furthermore, all of the tools were later used to test two modules of the HIGA.

The evaluation showed many differences between the tools in all areas, and, not surprisingly, the two open source tools turned out to be far less mature than the commercial ones. The tools best suited for use in the HIGA project were Fortify SCA and Flawfinder.

As far as the evaluation of the HIGA code is concerned, several bugs that could have jeopardized the security and availability of the services it provides were found.

Styles: APA, Harvard, Vancouver, ISO, etc.
29

Kvarnström, Olle. « Static Code Analysis of C++ in LLVM ». Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-128625.

Full text
Abstract:
Just like the release of the Clang compiler, the advent of LLVM in the field of static code analysis already shows great promise. When given the task of covering rules not ideally covered by a commercial contender, the end result is not only overwhelmingly positive; the implementation time is only a fraction of what was initially expected. While LLVM's support for sophisticated AST analysis is remarkable, and is the main reason for these positive results, its support for data flow analysis is not yet up to par. Despite this, as well as a lack of thorough documentation, LLVM should already be a strong rival to any commercial tool today.
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Nguyen, Phung Hua (Computer Science & Engineering, Faculty of Engineering, UNSW). « Static analysis for incomplete object-oriented programs ». Awarded by: University of New South Wales. School of Computer Science and Engineering, 2005. http://handle.unsw.edu.au/1959.4/24228.

Full text
Abstract:
Static analysis is significant since it provides information about the run-time behaviour of the analysed program. Such information has many applications in compiler optimisations and software engineering tools. Interprocedural analysis is a form of static analysis which can exploit information available across procedure boundaries. The analysis is traditionally designed as whole-program analysis, which processes the entire program. However, whole-program analysis is problematic when parts of the analysed program are not available to participate in the analysis. In this case, a whole-program analysis has to make conservative assumptions to be able to produce safe analysis results, at the expense of some possible precision loss. To improve analysis precision, an analysis can exploit the access control mechanism provided by the underlying programming language. This thesis introduces a points-to analysis technique for incomplete object-oriented programs, called completeness analysis, which exploits the access and modification properties of classes, methods and fields to enhance the analysis precision. Two variations of the technique, compositional and sequential completeness analysis, are described. This thesis also presents a mutability analysis (MA) and an MA-based side-effect analysis, which are based on the output of completeness analysis, to determine whether a variable is potentially modified by the execution of a program statement. The results of experiments carried out on a set of Java library packages are presented to demonstrate the improvement in analysis precision.
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Størkson, Knut Vilhelm. « Static Analysis of Fire Water Pump Module ». Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for konstruksjonsteknikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-19324.

Texte intégral
Résumé :
This thesis is based on a project from the previous semester (fall 2011) and was initiated by Sigve Gjerstad at Frank Mohn Flatøy AS. The thesis introduces the FWP system and different aspects of a static analysis of its main component, the FWP Module. It takes a brief look at different meshing techniques and other choices that need to be made in the early stages of an analysis. A series of simple analyses is carried out to show how shell elements are the best representation for a plate structure subjected to pressure. A series of simplified blast load analyses is presented where different choices within the Finite Element Method are compared. It is concluded that it is sufficient to consider only one of the two load steps to get the maximum values of stress and deformation, which saves computation time with no loss of accuracy. The analyses also conclude that a blast load analysis depends on a non-linear material model to give reasonable results: a linear material model assumes stress is proportional to strain even beyond the yield strength, which results in unrealistically high stresses. The implicit solver is compared with the explicit solver in the case of blast loading, which is a problem that requires short time increments. The results are similar, but the computational cost is much higher for the implicit solver. It is also shown that stainless steel is more beneficial than structural steel in blast load scenarios. Finally, model simplification is studied as yet another way to decrease the computation time; this implies simplifying solid models with mid-surface features, representing the model with shells.
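
To make the remark about material models concrete, a tiny numerical illustration (with assumed material values, not the thesis data) shows why a linear model overestimates stress beyond yield while an elastic-perfectly-plastic model caps it at the yield strength.

    # Illustrative, assumed values only.
    E = 200e9        # Young's modulus for steel [Pa] (typical value, assumed)
    f_y = 355e6      # yield strength [Pa] (assumed)
    strain = 0.004   # a strain level beyond yield

    stress_linear = E * strain               # 800 MPa: unrealistically high
    stress_plastic = min(E * strain, f_y)    # capped at 355 MPa by perfect plasticity
    print(stress_linear / 1e6, stress_plastic / 1e6)   # 800.0 355.0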
Styles APA, Harvard, Vancouver, ISO, etc.
32

Schwartz, Edward J. « Abstraction Recovery for Scalable Static Binary Analysis ». Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/336.

Texte intégral
Résumé :
Many source code tools help software programmers analyze programs as they are being developed, but such tools can no longer be applied once the final programs are shipped to the user. This greatly limits users, security experts, and anyone other than the programmer who wishes to perform additional testing and program analysis. This dissertation is concerned with the development of scalable techniques for statically analyzing binary programs, which can be employed by anyone who has access to the binary. Unfortunately, static binary analysis is often more difficult than static source code analysis because the abstractions that are the basis of source code programs, such as variables, types, functions, and control flow structure, are not explicitly present in binary programs. Previous approaches work around the lack of abstractions by reasoning about the program at a lower level, but this approach has not scaled as well as equivalent source code techniques that use abstractions. This dissertation investigates an alternative approach to static binary analysis which is called abstraction recovery. The premise of abstraction recovery is that since many binaries are actually compiled from an abstract source language which is more suitable for analysis, the first step of static binary analysis should be to recover such abstractions. Abstraction recovery is shown to be feasible in two real-world applications. First, C abstractions are recovered by a newly developed decompiler. The second application recovers gadget abstractions to automatically generate return-oriented programming (ROP) attacks. Experiments using the decompiler demonstrate that recovering C abstractions improves scalability over low-level analysis, with applications such as verification and detection of buffer overflows seeing an average of 17× improvement. Similarly, gadget abstractions speed up automated ROP attacks by 99×. Though some binary analysis problems do not lend themselves to abstraction recovery because they reason about low-level or syntactic details, abstraction recovery is an attractive alternative to conventional low-level analysis when users are interested in the behavior of the original abstract program from which a binary was compiled, which is often the case.
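
As a hedged illustration of what gadget recovery starts from (a naive byte scan for RET-terminated snippets, far simpler than the semantic gadget abstractions of the dissertation), the following sketch uses an invented byte string:

    def find_gadget_candidates(code: bytes, max_len: int = 5):
        # Collect byte sequences of up to max_len bytes that end in RET (0xC3).
        gadgets = []
        for i, b in enumerate(code):
            if b == 0xC3:                                    # RET opcode
                for start in range(max(0, i - max_len), i + 1):
                    gadgets.append(code[start:i + 1])        # raw bytes; disassembly omitted
        return gadgets

    sample = bytes.fromhex("4889d85bc3909058c3")   # invented code bytes for illustration
    for g in find_gadget_candidates(sample):
        print(g.hex())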
Styles APA, Harvard, Vancouver, ISO, etc.
33

Gustavsson, Andreas. « Static Execution Time Analysis of Parallel Systems ». Doctoral thesis, Mälardalens högskola, Inbyggda system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-31399.

Texte intégral
Résumé :
The past trend of increasing processor throughput by increasing the clock frequency and the instruction level parallelism is no longer feasible due to extensive power consumption and heat dissipation. Therefore, the current trend in computer hardware design is to expose explicit parallelism to the software level. This is most often done using multiple, relatively slow and simple, processing cores situated on a single processor chip. The cores usually share some resources on the chip, such as some level of cache memory (which means that they also share the interconnect, e.g., a bus, to that memory and also all higher levels of memory). To fully exploit this type of parallel processor chip, programs running on it will have to be concurrent. Since multi-core processors are the new standard, even embedded real-time systems will (and some already do) incorporate this kind of processor and concurrent code. A real-time system is any system whose correctness is dependent both on its functional and temporal behavior. For some real-time systems, a failure to meet the temporal requirements can have catastrophic consequences. Therefore, it is crucial that methods to derive safe estimations on the timing properties of parallel computer systems are developed, if at all possible. This thesis presents a method to derive safe (lower and upper) bounds on the execution time of a given parallel system, thus showing that such methods must exist. The interface to the method is a small concurrent programming language, based on communicating and synchronizing threads, that is formally (syntactically and semantically) defined in the thesis. The method is based on abstract execution, which is itself based on abstract interpretation techniques that have been commonly used within the field of timing analysis of single-core computer systems, to derive safe timing bounds in an efficient (although, over-approximative) way. The thesis also proves the soundness of the presented method (i.e., that the estimated timing bounds are indeed safe) and evaluates a prototype implementation of it.
Den strategi som historiskt sett använts för att öka processorers prestanda (genom ökad klockfrekvens och ökad instruktionsnivåparallellism) är inte längre hållbar på grund av den ökade energikonsumtion som krävs. Därför är den nuvarande trenden inom processordesign att låta mjukvaran påverka det parallella exekveringsbeteendet. Detta görs vanligtvis genom att placera multipla processorkärnor på ett och samma processorchip. Kärnorna delar vanligtvis på några av processorchipets resurser, såsom cache-minne (och därmed också det nätverk, till exempel en buss, som ansluter kärnorna till detta minne, samt alla minnen på högre nivåer). För att utnyttja all den prestanda som denna typ av processorer erbjuder så måste mjukvaran som körs på dem kunna delas upp över de tillgängliga kärnorna. Eftersom flerkärniga processorer är standard idag så måste även realtidssystem baseras på dessa och den nämnda typen av kod.  Ett realtidssystem är ett datorsystem som måste vara både funktionellt och tidsmässigt korrekt. För vissa typer av realtidssystem kan ett inkorrekt tidsmässigt beteende ha katastrofala följder. Därför är det ytterst viktigt att metoder för att analysera och beräkna säkra gränser för det tidsmässiga beteendet hos parallella datorsystem tas fram. Denna avhandling presenterar en metod för att beräkna säkra gränser för exekveringstiden hos ett givet parallellt system, och visar därmed att sådana metoder existerar. Gränssnittet till metoden är ett litet formellt definierat trådat programmeringsspråk där trådarna tillåts kommunicera och synkronisera med varandra. Metoden baseras på abstrakt exekvering för att effektivt beräkna de säkra (men ofta överskattade) gränserna för exekveringstiden. Abstrakt exekvering baseras i sin tur på abstrakta interpreteringstekniker som vida används inom tidsanalys av sekventiella datorsystem. Avhandlingen bevisar även korrektheten hos den presenterade metoden (det vill säga att de beräknade gränserna för det analyserade systemets exekveringstid är säkra) och utvärderar en prototypimplementation av den.
Worst-Case Execution Time Analysis of Parallel Systems
RALF3 - Software for Embedded High Performance Architectures
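
A minimal sketch of the flavour of abstract execution with intervals (a toy cost model invented for illustration, not the thesis language or its semantics): interval bounds on the trip count and the per-iteration cost give safe lower and upper bounds on the execution time of a loop.

    def loop_time_bounds(trip_count, body_cost, overhead):
        # All inputs are intervals (lo, hi); the result is a safe timing interval.
        lo_n, hi_n = trip_count          # interval for the number of iterations
        lo_c, hi_c = body_cost           # interval for the cost of one iteration
        return (lo_n * lo_c + overhead, hi_n * hi_c + overhead)

    # 8..12 iterations, each costing 5..7 time units, plus a fixed loop overhead of 3.
    print(loop_time_bounds((8, 12), (5, 7), 3))   # (43, 87): safe lower/upper bounds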
Styles APA, Harvard, Vancouver, ISO, etc.
34

Valente, Frederico Miguel Goulão. « Static analysis on embedded heterogeneous multiprocessor systems ». Master's thesis, Universidade de Aveiro, 2008. http://hdl.handle.net/10773/2180.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
35

Ehrhardt, Christian. « Static code analysis in multi-threaded environments ». [S.l. : s.n.], 2007. http://nbn-resolving.de/urn:nbn:de:bsz:289-vts-60825.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

Morgenthaler, John David. « Static analysis for a software transformation tool / ». Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1997. http://wwwlib.umi.com/cr/ucsd/fullcit?p9804509.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
37

Yang, Shengqian. « Static Analyses of GUI Behavior in Android Applications ». The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1443558986.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
38

Liu, Jiangchao. « Static analysis on numeric and structural properties of array contents ». Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE046/document.

Texte intégral
Résumé :
Dans cette thèse, nous étudions l'analyse statique par interprétation abstraites de programmes manipulant des tableaux, afin d'inférer des propriétés sur les valeurs numériques et les structures de données qui y sont stockées. Les tableaux sont omniprésents dans de nombreux programmes, et les erreurs liées à leur manipulation sont difficile à éviter en pratique. De nombreux travaux de recherche ont été consacrés à la vérification de tels programmes. Les travaux existants s'intéressent plus particulièrement aux propriétés concernant les valeurs numériques stockées dans les tableaux. Toutefois, les programmes bas-niveau (comme les systèmes embarqués ou les systèmes d'exploitation temps réel) utilisent souvent des tableaux afin d'y stocker des structures de données telles que des listes, de manière à éviter d'avoir recours à l'allocation de mémoire dynamique. Dans cette thèse, nous présentons des techniques permettant de vérifier par interprétation abstraite des propriétés concernant à la fois les données numériques ainsi que les structures composites stockées dans des tableaux. Notre première contribution est une abstraction qui permet de décrire des stores à valeurs numériques et avec valeurs optionnelles (i.e., lorsqu'une variable peut soit avoir une valeur numérique, soit ne pas avoir de valeur du tout), ou bien avec valeurs ensemblistes (i.e., lorsqu'une variable est associée à un ensemble de valeurs qui peut être vide ou non). Cette abstraction peut être utilisée pour décrire des stores où certaines variables ont un type option, ou bien un type ensembliste. Elle peut aussi servir à la construction de domaines abstraits pour décrire des propriétés complexes à l'aide de variables symboliques, par exemple, pour résumer le contenu de zones dans des tableaux. Notre seconde contribution est un domaine abstrait pour la description de tableaux, qui utilise des propriétés sémantiques des valeurs contenues afin de partitionner les cellules de tableaux en groupes homogènes. Ainsi, des cellules contenant des valeurs similaires sont décrites par les mêmes prédicats abstraits. De plus, au contraire des analyses de tableaux conventionnelles, les groupes ainsi formés ne sont pas nécessairement contigüs, ce qui contribue à la généralité de l'analyse. Notre analyse peut regrouper des cellules non-congitües, lorsque celles-ci ont des propriétés similaires. Ce domaine abstrait permet de construire des analyses complètement automatiques et capables d'inférer des invariants complexes sur les tableaux. Notre troisième contribution repose sur une combinaison de cette abstraction des tableaux avec différents domaines abstraits issus de l'analyse de forme des structures de données et reposant sur la logique de séparation. Cette combinaison appelée coalescence opère localement, et relie des résumés pour des structures dynamiques à des groupes de cellules du tableau. La coalescence permet de définir de manière locale des algorithmes d'analyse statique dans le domaine combiné. Nous l'utilisons pour relier notre domaine abstrait pour tableaux et une analyse de forme générique, dont la tâche est de décrire des structures chaînées. L'analyse ainsi obtenue peut vérifier à la fois des propriétés de sûreté et des propriétés de correction fonctionnelle. De nombreux programmes bas-niveau stockent des structures dynamiques chaînées dans des tableaux afin de n'utiliser que des zones mémoire allouées statiquement. 
La vérification de tels programmes est difficile, puisqu'elle nécessite à la fois de raisonner sur les tableaux et sur les structures chaînées. Nous construisons une analyse statique reposant sur ces trois contributions, et permettant d'analyser avec succés de tels programmes. Nous présentons des résultats d'analyse permettant la vérification de composants de systèmes d'exploitation et pilotes de périphériques
We study the static analysis of both numeric and structural properties of array contents in the framework of abstract interpretation. Since arrays are ubiquitous in most software systems, and software defects related to misuses of arrays are hard to avoid in practice, a lot of effort has been devoted to ensuring the correctness of programs manipulating arrays. Current verification of these programs by static analysis focuses on numeric content properties. However, in some low-level programs (like embedded systems or real-time operating systems), arrays often contain structural data (e.g., lists) without using dynamic allocation. In this manuscript, we present a series of techniques to verify both numeric and structural properties of array contents. Our first technique is used to describe properties of numerical stores with optional values (i.e., where some variables may have no value) or sets of values (i.e., where some variables may store a possibly empty set of values). Our approach lifts numerical abstract domains based on common linear inequalities into abstract domains describing stores with optional values and sets of values. This abstraction can be used in order to analyze languages with some form of option scalar type. It can also be applied to the construction of abstract domains to describe complex memory properties that introduce symbolic variables, e.g., in order to summarize unbounded memory blocks like arrays. Our second technique is an abstract domain which utilizes semantic properties to split array cells into groups. Cells with similar properties are packed into groups and abstracted together; additionally, groups are not necessarily contiguous. Compared to conventional array partitioning analyses, which split arrays into contiguous partitions to infer properties of sets of array cells, our analysis can group together non-contiguous cells when they have similar properties. Our abstract domain can infer complex array invariants in a fully automatic way. The third technique is used to combine different shape domains. This combination locally ties summaries in both abstract domains and is called a coalesced abstraction. Coalescing allows us to define efficient and precise static analysis algorithms in the combined domain. We utilize it to combine our array abstraction (i.e., our second technique) and a shape abstraction which captures linked structures with separation-logic-based inductive predicates. The product domain can verify both safety and functional properties of programs manipulating arrays storing dynamically linked structures, such as lists. Storing dynamic structures in arrays is a programming pattern commonly used in low-level systems, so as to avoid relying on dynamic allocation. The verification of such programs is very challenging as it requires reasoning both about the array structure with numeric indexes and about the linked structures stored in the array. Combining the three techniques that we have proposed, we can build an automatic static analysis for the verification of programs manipulating arrays storing linked structures. We report on the successful verification of several operating system kernel components and drivers.
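
A hedged sketch of the grouping idea only (the real domain uses semantic predicates and numeric relations, not this simple sign-based key): cells with similar properties are summarized together, and the resulting groups need not be contiguous.

    def summarize(array):
        # Group array indices by a toy property of their contents and keep a
        # per-group interval summary of the stored values.
        groups = {}
        for i, v in enumerate(array):
            key = "neg" if v < 0 else ("zero" if v == 0 else "pos")   # toy property
            idxs, (lo, hi) = groups.get(key, ([], (v, v)))
            groups[key] = (idxs + [i], (min(lo, v), max(hi, v)))
        return groups

    print(summarize([3, -1, 0, 7, -4, 0]))
    # {'pos': ([0, 3], (3, 7)), 'neg': ([1, 4], (-4, -1)), 'zero': ([2, 5], (0, 0))}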
Styles APA, Harvard, Vancouver, ISO, etc.
39

Heintz, Sofia. « Muscular forces from static optimization ». Licentiate thesis, KTH, Mechanics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3943.

Texte intégral
Résumé :

At every joint there is a redundant set of muscles activated during movement or loading of the system. Optimization techniques are needed to evaluate the individual forces in every muscle. The objective of this thesis was to use static optimization techniques to calculate individual muscle forces in the human extremities.

A cost function based on a performance criterion of the involved muscular forces was minimized together with constraints on the muscle forces, excluding negative and excessive values. Load sharing, load capacity and optimal forces of a system can be evaluated, based on a description of the muscle architectural properties, such as moment arm, physiological cross-sectional area, and peak isometric force.

The upper and lower extremities were modelled in two separate studies. The upper extremity was modelled as a two-link segment with fixed configurations. Load-sharing properties in a simplified model were analyzed. In a more complex model of the muscular forces of the elbow and shoulder joint system, the overall loading capacity was evaluated.

A lower limb model was then used and optimal forces during gait were evaluated. Gait analysis was performed with simultaneous electromyography (EMG). Gait kinematics and kinetics were used in the static optimization to evaluate optimal individual muscle forces. EMG recordings measure muscle activation. The raw EMG data was processed and a linear envelope of the signal was used to view the activation profile. A method described as the EMG-to-force method, which scales and transforms subject-specific EMG data, was used to compare the evaluated optimal forces.

Reasonably good correlation between calculated muscle forces from static optimization and EMG profiles was shown. Also, the possibility to view load-sharing properties of a musculoskeletal system demonstrates a promising complement to traditional motion analysis techniques. However, validation of the exact muscular forces is needed but not currently possible.

Future work is focused on adding more accurate settings for the muscle architectural properties, such as moment arms and physiological cross-sectional areas. Further perspectives for this mathematical modelling technique include analyzing pathological movement, for example in cerebral palsy and rheumatoid arthritis, where muscular weakness, pain and joint deformities are common. In these cases, a better understanding of muscular action and function is needed for better treatment.
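
A minimal sketch of static optimization for muscle load sharing, with assumed numbers for two elbow flexors (not the thesis models): a sum of squared muscle stresses is minimized subject to moment equilibrium and non-negative forces.

    import numpy as np
    from scipy.optimize import minimize

    moment_arm = np.array([0.04, 0.02])     # [m] (assumed)
    pcsa = np.array([12e-4, 6e-4])          # physiological cross-sectional areas [m^2] (assumed)
    target_moment = 20.0                    # required joint moment [N*m] (assumed)

    cost = lambda F: np.sum((F / pcsa) ** 2)                 # performance criterion
    cons = {"type": "eq", "fun": lambda F: moment_arm @ F - target_moment}
    res = minimize(cost, x0=np.array([100.0, 100.0]),
                   bounds=[(0, None), (0, None)], constraints=cons)
    print(res.x)   # optimal load sharing between the two muscles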

Styles APA, Harvard, Vancouver, ISO, etc.
40

Mosquera, Jenyfer. « Static and pseudo-static stability analysis of tailings storage facilities using deterministic and probabilistic methods ». Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=117155.

Texte intégral
Résumé :
Tailings facilities are vast man-made structures designed and built for the storage and management of mill effluents throughout the life of a mining project. There are different types of tailings storage facilities (TSF), classified according to the method of construction of the embankment and the mechanical properties of the tailings to be stored. The composition of tailings is determined by the mineral processing technique used to obtain the concentrate as well as the physical and chemical properties of the ore body. As a common denominator, TSFs are vulnerable to failure due to design or operational deficiencies, site-specific features, or random variables such as material properties, seismic events or unusual precipitation. As a result, long-term, risk-based stability assessment of mine waste storage facilities is necessary. The stability analyses of TSFs are traditionally conducted using the Limit Equilibrium Method (LEM). However, it has been demonstrated that relying exclusively on this approach may not provide a full understanding of the behaviour of the TSF, because the LEM neglects the stress-deformation constitutive relationships that ensure displacement compatibility. Furthermore, the intrinsic variability of the tailings properties is not taken into account either, because the LEM is basically a deterministic method. In order to overcome these limitations of the LEM, new methods and techniques have been proposed for slope stability assessment. The Strength Reduction Technique (SRT), based on the Finite Element Method (FEM), for instance, has been successfully applied for this purpose. Likewise, stability assessment with the probabilistic approach has gained increasing popularity in mining engineering because it offers a comprehensive and more realistic estimation of TSF performance. In the light of the advances in numerical modelling and geotechnical engineering applied to the mining industry, this thesis presents a stability analysis comparison between an upstream tailings storage facility (UTSF) and a water retention tailings dam (WRTD). First, the effect of the embankment/tailings height increase on the overall stability is evaluated under static and pseudo-static states. Second, the effect of the phreatic surface location in the UTSF and of the embankment-to-core permeability ratio in the WRTD is investigated. The analyses are conducted using rigorous and simplified LEMs and the FEM-SRT. In order to take into consideration the effect of the intrinsic variability of the tailings properties on stability, parametric analyses are conducted to identify the critical random variables of each TSF. Finally, the Monte Carlo Simulation (MCS) and the Point Estimate Method (PEM) are applied to recalculate the FOS and to estimate the probability of failure and reliability indices of each analysis. The results are compared against the minimum static and pseudo-static stability requirements and design guidelines applicable to mining operations in the Province of Quebec, Canada. Keywords: Tailings storage facilities (TSF), Limit Equilibrium Method (LEM), Strength Reduction Technique (SRT), pseudo-static seismic coefficient, probability of failure, Point Estimate Method (PEM), Reliability Index.
Les parcs à résidus miniers (PRMs) sont de vastes structures utilisées pour le stockage et la gestion des déchets pendant l'opération et après la clôture d'un site minier. Différentes techniques d'entreposage existent, dépendant principalement de la méthode de construction de la digue et des propriétés physiques, chimiques et mécaniques des résidus à stocker. La composition des résidus est déterminée par la technique utilisée pour extraire le minerai du gisement ainsi que par les propriétés physico-chimiques du gisement. De manière générale, les installations de stockage de résidus miniers sont dans une certaine mesure, sujettes à des ruptures. Celles-ci sont associées à des défauts de conception et d'exploitation, des conditions spécifiques au site, des facteurs environnementaux, ainsi que des variables aléatoires telles que les propriétés des matériaux, les événements sismiques, ou les précipitations inhabituelles. Par conséquent, la stabilité des PRMs à long terme est nécessaire sur la base de l'évaluation de risques.Les analyses de stabilité sont généralement effectuées à l'aide de la méthode d'équilibre limite (MEL), cependant, il a été prouvé que s'appuyer exclusivement sur les MELs n'est pas exact car la relation entre déformation et contrainte est négligée dans cette approche, tout comme le déplacement ayant lieu au pendant la construction et l'opération des PRMs. En outre, la variabilité spatiale intrinsèque des propriétés des résidus et autres matériaux utilisés pour la construction des PRMs n'est pas prise en compte. En conséquence, de nouvelles méthodes et techniques ont été développées pour surmonter les limites de la MEL. La méthode des éléments finis (MEF) et la Technique de réduction de cisaillement (TRC), par exemple, ont été appliquées avec succès pour l'analyse de la stabilité des PRMs. De même, l'approche probabiliste pour l'analyse de la stabilité des pentes a gagné en popularité car elle offre une simulation complète et plus réaliste de la performance des PRMs.À la lumière des progrès réalisés dans le domaine de la modélisation numérique et de la géotechnique pour l'industrie minière, cette thèse présente une comparaison entre une installation d'entreposage des résidus en amont et un barrage de stériles et d'eaux de décantation.En premier lieu, l'effet de l'augmentation de la hauteur des résidus sur la stabilité globale est évalué en vertu des états statiques et pseudo-statiques. En deuxième lieu, l'effet de l'emplacement de la nappe phréatique dans installation d'entreposage des résidus en amont et le rapport de perméabilité de remblai dans le barrage de stériles et d'eaux de décantation sont étudiés. Les analyses sont conduites en utilisant la modélisation numérique des MELs et la MEF – TRC.Des analyses paramétriques sont effectuées pour identifier les variables aléatoires critiques de chaque parc à résidus miniers. Finalement, pour évaluer, la simulation de Monte Carlo (MCS) et la méthode d'estimation ponctuelle (MEP) sont appliquées pour recalculer les facteurs de stabilité et pour estimer la probabilité de défaillance et les indices de fiabilité qui leur sont associées. Les résultats de chaque analyse sont comparés aux exigences minimales de stabilité des pentes applicables aux opérations minières dans la province de Québec, Canada.Mots-clés: Parcs à résidus miniers (PRMs), coefficient sismique, Technique de Réduction de Cisaillement (TRC), probabilité de défaillance, Méthode d'Estimation Ponctuelle (MEP), indice de fiabilité.
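
A hedged sketch of the Monte Carlo step with an invented placeholder FOS model and assumed input distributions (not the thesis geometry or soil model): the probability of failure is estimated as the fraction of samples whose factor of safety falls below one.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    phi = rng.normal(30.0, 3.0, n)          # friction angle [deg] (assumed distribution)
    c = rng.normal(15.0, 4.0, n)            # cohesion [kPa] (assumed distribution)

    fos = 0.03 * c + 0.032 * phi            # placeholder linear FOS model, for illustration only
    p_fail = np.mean(fos < 1.0)             # Monte Carlo estimate of P(FOS < 1)
    beta = (fos.mean() - 1.0) / fos.std()   # a simple reliability index
    print(p_fail, beta)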
Styles APA, Harvard, Vancouver, ISO, etc.
41

Sawin, Jason E. « Improving the Static Resolution of Dynamic Java Features ». The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1248652226.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Gustafson, Christopher, et Sam Florin. « Qualification of Tool for Static Code Analysis : Processes and Requirements for Approval of Static Code Analysis in the Aviation Industry ». Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277941.

Texte intégral
Résumé :
In the aviation industry, the use of software development tools is not as easily adopted as in other industries. Due to the catastrophic consequences of software errors in airborne systems, software development processes have rigorous requirements. One of these requirements is that a code standard must be followed. Code standards are used to exclude code constructions which could result in unwanted behaviour. The process of manually ensuring a specific code standard can be costly. This process could be automated by a tool for static code analysis; however, this requires a formal qualification. This thesis evaluates the process of qualifying a tool for static code analysis in accordance with the requirements of the major aviation authorities EASA and FAA. To describe the qualification process, a literature study was conducted. To further explain how an existing tool could be put through the qualification process, a case study of the existing tool Parasoft C/C++ test was conducted. The results of the literature study show what processes must be completed in order to qualify a static code analysis tool. Importantly, the study shows that no requirements are put on the development process of the tool itself. This was an important takeaway, as it meant that an existing tool could be qualified without any additional data from the developer of the tool. The case study of Parasoft C/C++ test showed how the tool could be configured and verified to analyze code in accordance with a small set of code rules. Furthermore, three documents including qualification data were produced, showing how the qualification process should be documented in order to communicate the process to an authority. The results of the thesis do not provide the full picture of how a tool could be qualified, as considerations that are specific to the software the tool is used to develop still need to be taken into account. The thesis does, however, provide guidance on the majority of the applicable requirements. Future research could be done to provide the complete picture of the qualification process, as well as how the process would look for other types of tools.
Inom flygindustrin är användandet av olika programmeringsverktyg inte lika självklart som inom andra industrier. På grund av de katastrofala konsekvenser som fel i mjukvaran i ett flygplan kan resultera i finns det rigorösa krav på mjukvaruutvecklingsprocessen. Ett av dessa krav är att en viss kodstandard måste upprätthållas. Kodstandarder används för att exkludera vissa strukturer i kod som kan leda till oönskat beteende. Upprätthållandet av en viss kodstandard är en långdragen process att genomföra manuellt, och kan därför automatiseras med hjälp av ett statiskt kodanalysverktyg. För att kunna använda ett sådant verktyg behövs däremot en formell verktygskvalificering. I denna uppsats kommer kvalificeringsprocessen av ett verktyg för statisk kodanalys att evalueras enligt de krav som de två stora flygmyndigheterna EASA och FAA ställer. För att förklara processen av att kvalificera ett sådant verktyg gjordes en litteraturstudie följt av en fallstudie av det existerande verktyget Parasoft C/C++ test. Resultaten av litteraturstudien beskriver de olika processerna som måste genomföras för att kvalificera ett statiskt kodanalysverktyg. Noterbart är att resultaten visar att inga krav ställs på utvecklingsprocessen av verktyget själv. Detta betyder att ett existerande kommersiellt verktyg kan kvalificeras utan att verktygsutvecklarna själva behöver bidra med extra information. Fallstudien visade hur verktyget Parasoft C/C++ test kan konfigureras och verifieras att följa en viss kodstandard. Vidare resulterade fallstudien i utkast av de nödvändiga dokumenten som behöver produceras för att kommunicera kvalificeringsprocessen till en myndighet. De resultat som presenteras i denna uppsats är i sig inte tillräckliga för beskriva hela kvalificeringsprocessen. Ytterligare överväganden som är specifika till den mjukvaran som verktyget ska användas till att utveckla måste göras för att en komplett kvalificering ska kunna genomföras. Uppsatsen bidrar däremot med riktlinjer och vägledning av majoriteten av de processerna som behöver genomföras. Ytterligare forskning kan göras för att bidra med den kompletta bilden av verktygskvalificering av ett statiskt kodanalysverktyg, samt hur kvalificering kan göras av andra typer av verktyg.
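
To make concrete what checking code against a small set of code rules involves, here is a toy single-rule checker (a banned-function rule, written from scratch for illustration; Parasoft C/C++ test itself is configured very differently).

    import re

    BANNED = {"strcpy": "use a bounded copy with an explicit length instead"}

    def check(source: str):
        # Report every call to a banned function, with line number and advice.
        findings = []
        for lineno, line in enumerate(source.splitlines(), start=1):
            for fn, advice in BANNED.items():
                if re.search(rf"\b{fn}\s*\(", line):
                    findings.append((lineno, fn, advice))
        return findings

    code = 'void f(char *d, const char *s) { strcpy(d, s); }'   # invented sample input
    print(check(code))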
Styles APA, Harvard, Vancouver, ISO, etc.
43

Saillard, Emmanuelle. « Static/Dynamic Analyses for Validation and Improvements of Multi-Model HPC Applications ». Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0176/document.

Texte intégral
Résumé :
L’utilisation du parallélisme des architectures actuelles dans le domaine du calcul hautes performances, oblige à recourir à différents langages parallèles. Ainsi, l’utilisation conjointe de MPI pour le parallélisme gros grain, à mémoire distribuée et OpenMP pour du parallélisme de thread, fait partie des pratiques de développement d’applications pour supercalculateurs. Des erreurs, liées à l’utilisation conjointe de ces langages de parallélisme, sont actuellement difficiles à détecter et cela limite l’écriture de codes, permettant des interactions plus poussées entre ces niveaux de parallélisme. Des outils ont été proposés afin de palier ce problème. Cependant, ces outils sont généralement focalisés sur un type de modèle et permettent une vérification dite statique (à la compilation) ou dynamique (à l’exécution). Pourtant une combinaison statique/- dynamique donnerait des informations plus pertinentes. En effet, le compilateur est en mesure de donner des informations relatives au comportement général du code, indépendamment du jeu d’entrée. C’est par exemple le cas des problèmes liés aux communications collectives du modèle MPI. Cette thèse a pour objectif de développer des analyses statiques/dynamiques permettant la vérification d’une application parallèle mélangeant plusieurs modèles de programmation, afin de diriger les développeurs vers un code parallèle multi-modèles correct et performant. La vérification se fait en deux étapes. Premièrement, de potentielles erreurs sont détectées lors de la phase de compilation. Ensuite, un test au runtime est ajouté pour savoir si le problème va réellement se produire. Grâce à ces analyses combinées, nous renvoyons des messages précis aux utilisateurs et évitons les situations de blocage
Supercomputing plays an important role in several innovative fields, speeding up prototyping or validating scientific theories. However, supercomputers are evolving rapidly, now with millions of processing units, which poses the question of their programmability. Despite the emergence of more widespread and functional parallel programming models, developing correct and effective parallel applications still remains a complex task. Although debugging solutions have emerged to address this issue, they often come with restrictions. Moreover, programming model evolutions stress the requirement for a convenient validation tool able to handle hybrid applications. Indeed, as current scientific applications mainly rely on the Message Passing Interface (MPI) parallel programming model, new hardware designed for Exascale, with higher node-level parallelism, clearly advocates for MPI+X solutions, with X a thread-based model such as OpenMP. But integrating two different programming models inside the same application can be error-prone, leading to complex bugs that are unfortunately mostly detected at runtime. In an MPI+X program, not only must the correctness of MPI be ensured but also its interactions with the multi-threaded model; for example, identical MPI collective operations cannot be performed by multiple non-synchronized threads. This thesis aims at developing a combination of static and dynamic analyses to enable an early verification of hybrid HPC applications. The first pass statically verifies the thread level required by an MPI+OpenMP application and outlines execution paths leading to potential deadlocks. Thanks to this analysis, the code is selectively instrumented, displaying an error and synchronously interrupting all processes if the actual scheduling leads to a deadlock situation.
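
A hedged sketch of the dynamic side of such a check (pure Python, no MPI): each process records the sequence of collective operations it issues, and a checker compares the traces; the real tool instruments the code selectively based on the static analysis and runs inside the MPI runtime.

    def check_collectives(traces):
        # traces: {rank: ["MPI_Bcast", "MPI_Allreduce", ...]}
        reference = next(iter(traces.values()))
        for rank, trace in traces.items():
            for step, (seen, expected) in enumerate(zip(trace, reference)):
                if seen != expected:
                    return f"rank {rank} mismatches at step {step}: {seen} vs {expected}"
            if len(trace) != len(reference):
                return f"rank {rank} issues {len(trace)} collective call(s), expected {len(reference)}"
        return "collective sequences are consistent"

    # Invented traces: rank 1 skips the barrier, which would deadlock rank 0.
    print(check_collectives({0: ["MPI_Bcast", "MPI_Barrier"],
                             1: ["MPI_Bcast"]}))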
Styles APA, Harvard, Vancouver, ISO, etc.
44

Karlsson, Filip. « Simulation driven design : An iterative approach for mechanical engineers with focus on static analysis ». Thesis, Högskolan i Halmstad, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-28919.

Texte intégral
Résumé :
This 15 hp thesis was carried out at Halmstad University in collaboration with Saab Dynamics in Linköping. Saab Dynamics is a company operating in the defence industry, where competition is tough. This necessitates new ways to increase efficiency in the company, which is the basis for this thesis. Saab Dynamics wants to introduce simulation driven design. Since Saab Dynamics' engineers have little experience of simulation, a user methodology with clear guidelines was required. Due to lack of time, the company chose to assign the task to students, which resulted in this thesis. The aim of the thesis is to develop a methodology in mechanical design where the designer uses FE analysis early in the design process to develop the structure's mechanical properties. The methodology should be seen as a guide and a source of information that enables an iterative approach with FE analysis, which is the basis of simulation driven design, an iterative process that can lead to reduced lead times and cost savings in the design process. Because of the scale of the project, the work was carried out between December 2014 and May 2015 by three students from the mechanical engineering programme, each with an individual focus area. The work followed a self-developed method: the project began with theoretical studies of the topic to gain an understanding of what has been done and what research exists on simulation driven design. An empirical study was then conducted at Saab Dynamics in Linköping in order to increase understanding of how the design process looks. Meanwhile, sustainable development and ethical aspects have been taken into account. Much time was devoted to investigating the possibilities and limitations of 3D Experience, which is Dassault Systèmes' latest platform for 3D modelling and simulation software. 3D Experience is the software on which the methodology is based. This thesis has resulted in a methodology for simulation at the designer level that the project team, in consultation with the supervisor at Saab Dynamics, managed to adapt to the company's requirements.
Styles APA, Harvard, Vancouver, ISO, etc.
45

Hovemeyer, David. « Simple and effective static analysis to find bugs ». College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2901.

Texte intégral
Résumé :
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Erdal, Feride. « Web Market Analysis : Static, Dynamic And Content Evaluation ». Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614694/index.pdf.

Texte intégral
Résumé :
The importance of web services increases as technology improves and the need for challenging e-commerce strategies grows. This thesis focuses on web market analysis of web sites by evaluating them from three perspectives: static, dynamic and content. Firstly, web site evaluation methods and web analytic tools are introduced. Then the evaluation methodology is described from these three perspectives. Finally, the results obtained from the evaluation of 113 web sites are presented, as well as their correlations.
Styles APA, Harvard, Vancouver, ISO, etc.
47

Lewis, Tim. « Static dependency analysis of recursive structures for parallelisation ». Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/24832.

Texte intégral
Résumé :
A new approach is presented for the analysis of dependencies in complex recursive data structures with pointer linkages that can be determined statically, at compile time. The pointer linkages in a data structure are described by the programmer in the form of two-variable Finite State Automata (2FSA), which supplement the code that operates over the data structure. Some flexibility is possible in that the linkages can specify a number of possible target nodes for a pointer. An analysis method is described to extract data dependency information, also in the form of 2FSAs, from the recursive code that operates over these structures. For restricted, simple forms of recursion these data dependencies are exact; in general, heuristics are presented which provide approximate information. This method uses a novel technique that approximates the transitive closure of these 2FSA relations. In the context of dependency analysis, approximations must be 'safe' in that they are permitted to overestimate dependencies, thereby including spurious ones, but none must be missed. A test is developed that permits the safety of these approximate solutions to be validated. When a recursive program is partitioned into a number of separate threads by the programmer, this dependency information can be used to synchronise the access to the recursive structure. This ensures that the threads execute correctly in parallel, enabling a multithreaded version of the code to be constructed. A multithreaded Micronet processor architecture was chosen as a target for this approach. The front- and back-ends of a compiler were developed to compile these multithreaded programs into executables for this architecture, which were then run on a simulation of the processor. The timing results for selected benchmarks are used to demonstrate that useful parallelism can be extracted.
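
A hedged sketch of the safety notion for approximate dependencies, using plain relations over nodes rather than 2FSAs (an illustrative simplification): the transitive closure is iterated to a fixed point, and an approximation is safe as long as it only adds dependencies, never drops them.

    def transitive_closure(rel):
        # Naive fixed-point computation of the transitive closure of a relation.
        closure = set(rel)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(closure):
                for (c, d) in list(closure):
                    if b == c and (a, d) not in closure:
                        closure.add((a, d)); changed = True
        return closure

    exact = transitive_closure({("root", "left"), ("left", "leaf")})
    approx = exact | {("root", "spurious")}       # an over-approximation (invented)
    print(exact <= approx)                        # True: safe (may add, never drop)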
Styles APA, Harvard, Vancouver, ISO, etc.
48

Mallakunta, Narendra. « Static and dynamic analysis of rectangular sandwich plates ». Thesis, University of Ottawa (Canada), 1990. http://hdl.handle.net/10393/5984.

Texte intégral
Résumé :
The static and dynamic characteristics of homogeneous rectangular plates and rectangular sandwich plates are studied by the finite element method using an 8-node isoparametric rectangular element. A computer code utilizing the finite element method is developed to generate solutions for the static and dynamic analysis of homogeneous plates and sandwich plates under conditions of plane stress, plane strain, bending, and combined stress and bending, for small deformation problems only. In this work, however, the scope is limited to bending problems. Further, in the static analysis only the values for the center deflection are generated, even though the code has the capability to generate the various stress components. In the dynamic analysis, the natural frequencies and the associated mode shapes are determined. The boundary conditions are taken as free, simply supported and clamped edge constraints and their combinations. Uniformly distributed loads, concentrated loads or a combination of both can be applied. This study concentrates on free vibration problems in the case of the dynamic analysis. The effect of considering a non-uniform shear distribution in the core of the sandwich plate is studied for both the static and the dynamic analysis. The impact of considering two different orders of numerical integration is also studied. (Abstract shortened by UMI.)
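
As a hedged companion, the classical thin-plate (Kirchhoff) closed-form frequencies of a simply supported rectangular plate, a benchmark often used when validating plate finite element codes; the material and geometry values below are assumed, not taken from the thesis.

    import math

    E, nu, rho = 210e9, 0.3, 7850.0     # steel properties (assumed)
    a, b, h = 1.0, 0.8, 0.01            # plate dimensions [m] (assumed)
    D = E * h**3 / (12 * (1 - nu**2))   # flexural rigidity

    def omega(m, n):
        # Angular natural frequency [rad/s] of mode (m, n) for a simply supported thin plate.
        return math.pi**2 * ((m / a)**2 + (n / b)**2) * math.sqrt(D / (rho * h))

    print([round(omega(m, n), 1) for (m, n) in [(1, 1), (1, 2), (2, 1)]])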
Styles APA, Harvard, Vancouver, ISO, etc.
49

Zhan, H. J. « Static and dynamic analysis of toroidal LPG tanks ». Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27611.

Texte intégral
Résumé :
Liquefied petroleum gas (LPG) is considered clean, safe and cheap, offering a viable alternative to conventional fuels. Nowadays, the use of LPG as a fuel source in motor vehicles is steadily increasing. The LPG tanks in motor vehicles can be of toroidal shape. Toroidal LPG tanks are generally of non-circular cross-section and may be supported at points, lines or patches on the surface. Among the mechanical properties of interest for toroidal LPG tanks are the static behavior under internal pressure, the vibration characteristics, the buckling and collapse loads, and the properties under impact loading arising from accident conditions. In the current work, a shell-theory finite element analysis of toroidal LPG tanks with non-circular cross-section is carried out. The analysis serves to determine the natural frequencies, the buckling and collapse pressures, and the deformation of impacted tanks. The differential quadrature method is used as an alternative means in the vibration analysis. A variety of support conditions are considered, including lines of support at the inner and outer equators of the tank. For validation, comparison is made with previously published results for stress, vibration and buckling of circular and elliptical toroidal shells, and for impact deformation of spherical and cylindrical shells. Finally, a parametric study is carried out to determine the influence of shell size, shell thickness, material properties, and support conditions on the natural frequency, the buckling and collapse pressures, and the deformation of the impacted tanks.
Styles APA, Harvard, Vancouver, ISO, etc.
50

Ackland, Patrik. « Fast and Scalable Static Analysis using Deterministic Concurrency ». Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210927.

Texte intégral
Résumé :
This thesis presents an algorithm for solving a subset of static analysis data flow problems known as Interprocedural Finite Distributive Subset (IFDS) problems. The algorithm, called IFDS-RA, is an implementation of the IFDS algorithm, which is an algorithm for solving such problems. IFDS-RA is implemented using Reactive Async, which is a deterministic, concurrent programming model. The scalability of IFDS-RA is compared to the state-of-the-art Heros implementation of the IFDS algorithm and evaluated using three different taint analyses on one to eight processing cores. The results show that IFDS-RA performs better than Heros when using multiple cores. Additionally, the results show that Heros does not take advantage of multiple cores even if there are multiple cores available on the system.
Detta examensarbete presenterar en algoritm för att lösa en klass av problem i statisk analys känd som Interprocedural Finite Distribute Subset problem.  Algoritmen, IFDS-RA, är en implementation av IFDS algoritmen som är utvecklad för att lösa denna typ av problem. IFDS-RA använder sig av Reactive Async som är en deterministisk programmeringsmodell för samtida exekvering av program.  Prestendan evalueras genom att mäta exekveringstid för tre stycken taint analyser med en till åtta processorkärnor och jämförs med state-of-the-art implementationen Heros. Resultaten visar att IFDS-RA presterar bättre än Heros när de använder sig av flera processorkärnor samt att Heros inte använder sig av flera processorkärnor även om de finns tillgängliga.
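
A minimal sketch of the flavour of dataflow facts such solvers compute (a tiny worklist taint propagation over an invented intraprocedural flow graph; the real IFDS-RA works interprocedurally and concurrently via Reactive Async):

    from collections import deque

    def propagate_taint(edges, seeds):
        # edges: {node: [successor, ...]}; facts: which variables are tainted at a node.
        facts = {n: set() for n in edges}
        facts.update({n: set(s) for n, s in seeds.items()})
        work = deque(seeds)
        while work:                          # standard worklist iteration
            n = work.popleft()
            for succ in edges.get(n, []):
                if not facts[n] <= facts[succ]:
                    facts[succ] |= facts[n]
                    work.append(succ)
        return facts

    cfg = {"entry": ["read"], "read": ["use"], "use": ["sink"], "sink": []}   # invented graph
    print(propagate_taint(cfg, {"read": {"user_input"}}))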
Styles APA, Harvard, Vancouver, ISO, etc.