Dissertations / Theses on the topic 'Automatic Static Analysis'

To see the other types of publications on this topic, follow the link: Automatic Static Analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Automatic Static Analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Sterner, Kenneth. "Automated checking of programming assignments using static analysis." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-53337.

Full text
Abstract:
Computer science and software engineering education usually contains programming courses that require writing code that is graded. These assignments are corrected through manual code review by teachers or course assistants. The large number of assignments motivates us to find ways to automatically correct certain parts of the assignments. One method to ensure that certain requirements of written code are fulfilled is static analysis, which analyzes code without executing it. We utilize Clang-tidy and Clang Static Analyzer, existing static analysis tools for C/C++, and extend their capabilities to automate requirement checking based on existing assignments, such as prohibiting certain language constructs and ensuring that certain function signatures match the ones provided in the instructions. We evaluate our forked version of the Clang tooling on actual student hand-ins to show that the tool is capable of automating some aspects that would otherwise require manual code review. We were able to find several errors, even in assignments that were considered complete. However, Clang Static Analyzer also failed to find a memory leak, which leads us to conclude that, despite the benefits, static analysis is best used as a complement to assist in finding errors.
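
To make the idea concrete, here is a small, hypothetical hand-in of the kind such tooling inspects (a sketch of ours, not code from the thesis); the checker name referenced is a stock Clang one, not one of the thesis's extensions:

```cpp
// Hypothetical student hand-in (illustration only): a defect that static
// analysis can flag without ever running the code.
#include <cstring>

char *duplicate(const char *s) {
    char *copy = new char[std::strlen(s) + 1];  // heap allocation
    std::strcpy(copy, s);
    return copy;                                // caller owns the memory
}

int main() {
    char *greeting = duplicate("hello");
    return 0;  // greeting never delete[]d: reported as a new/delete leak
}
```

Running, for example, `clang-tidy hand_in.cpp --checks=-*,clang-analyzer-cplusplus*` surfaces the leaked allocation via the cplusplus.NewDeleteLeaks check, with no execution of the student's program.
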
APA, Harvard, Vancouver, ISO, and other styles
2

Baca, Dejan. "Automated static code analysis : A tool for early vulnerability detection." Licentiate thesis, Karlskrona : Department of Systems and Software Engineering, School of Engineering, Blekinge Institute of Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Joni, Jeffry Hartono. "Quasi-static force analysis of an automated live-bird transfer system." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/16781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xie, Yichen. "Static detection of software errors: precise and scalable algorithms for automatic detection of software errors." Saarbrücken VDM, Müller, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2991792&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nimal, Vincent P. J. "Static analyses over weak memory." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:469907ec-6f61-4015-984e-7ca8757b992c.

Full text
Abstract:
Writing concurrent programs with shared memory is often not trivial. Correctly synchronising the threads and handling the non-determinism of executions require a good understanding of the interleaving semantics. Yet interleavings are not sufficient to correctly model the executions of modern, multicore processors. These executions follow rules that are weaker than those observed by the interleavings, often leading to reorderings in the sequence of updates and reads from memory; the executions are subject to a weaker memory consistency. Reorderings can produce executions that would not be observable with interleavings, and these possible executions also depend on the architecture that the processors implement. It is therefore necessary to locate and understand these reorderings in the context of a running program, or to prevent them in an automated way. In this dissertation, we aim to automate the reasoning behind weak memory consistency and perform transformations over the code so that developers need not consider all the specifics of the processors when writing concurrent programs. We claim that we can do automatic static analysis for axiomatically-defined weak memory models. The method that we designed also allows the re-use of automated verification tools like model checkers or abstract interpreters that were not designed for weak memory consistency, by modification of the input programs. We define in detail an abstraction that allows us to reason statically about weak memory models over programs. We locate the parts of the code where the semantics could be affected by the weak memory consistency. We then provide a method to explicitly reveal the resulting reorderings so that usual verification techniques can handle the program semantics under a weaker memory consistency. We also provide a technique that synthesises synchronisations so that the program behaves as if only interleavings were allowed. Finally, we test these approaches on artificial and real software. We justify our choice of an axiomatic model by the scalability of the approach and the runtime performance of the programs modified by our method.
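
The classic store-buffering litmus test makes the reordering problem concrete; the C++ sketch below is our illustration of the phenomenon the dissertation analyses, not code from it:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

// Store-buffering litmus test: under interleaving (sequentially consistent)
// semantics, at least one thread must observe the other's store, so the
// outcome r1 == 0 && r2 == 0 is impossible. Weak memory models (x86-TSO,
// ARM, ...) allow both loads to read 0, because each store may linger in a
// store buffer past the load that follows it.
std::atomic<int> x{0}, y{0};
int r1 = -1, r2 = -1;

int main() {
    std::thread t0([] {
        x.store(1, std::memory_order_relaxed);
        r1 = y.load(std::memory_order_relaxed);  // may read 0
    });
    std::thread t1([] {
        y.store(1, std::memory_order_relaxed);
        r2 = x.load(std::memory_order_relaxed);  // may read 0
    });
    t0.join(); t1.join();
    // Placing std::atomic_thread_fence(std::memory_order_seq_cst) between
    // each store/load pair -- the kind of synchronisation the dissertation
    // synthesises automatically -- restores the interleaving-only outcomes.
    std::printf("r1=%d r2=%d\n", r1, r2);
    return 0;
}
```
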
APA, Harvard, Vancouver, ISO, and other styles
6

Aung, Arkar Min. "Automatic Eye-Gaze Following from 2-D Static Images: Application to Classroom Observation Video Analysis." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/251.

Full text
Abstract:
In this work, we develop an end-to-end neural network-based computer vision system to automatically identify where each person within a 2-D image of a school classroom is looking ("gaze following"), as well as whom she/he is looking at. Automatic gaze following could help facilitate data-mining of the large datasets of classroom observation videos that are collected routinely in schools around the world, in order to understand social interactions between teachers and students. Our network is based on the architecture by Recasens et al. (2015) but is extended to (1) predict not only where, but also whom, the person is looking at; and (2) predict whether each person is looking at a target inside or outside the image. Since our focus is on classroom observation videos, we collect a gaze dataset (48,907 gaze annotations over 2,263 classroom images) for students and teachers in classrooms. Results of our experiments indicate that the proposed neural network can estimate the gaze target - either the spatial location or the face of a person - with substantially higher accuracy than several baselines.
APA, Harvard, Vancouver, ISO, and other styles
7

Diarra, Rokiatou. "Automatic Parallelization for Heterogeneous Embedded Systems." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS485.

Full text
Abstract:
Recent years have seen an increase in heterogeneous architectures combining multi-core CPUs with accelerators such as GPUs, FPGAs, and the Intel Xeon Phi. GPUs can achieve significant performance for certain categories of applications. Nevertheless, achieving this performance with low-level APIs (e.g. CUDA, OpenCL) requires rewriting the sequential code, having a good knowledge of the GPU architecture, and applying complex optimizations that are sometimes not portable. On the other hand, directive-based programming models (e.g. OpenACC, OpenMP) offer a high-level abstraction of the underlying hardware, thus simplifying code maintenance and improving productivity. They allow users to accelerate their sequential codes on the GPU by simply inserting directives. OpenACC/OpenMP compilers have the daunting task of applying the necessary optimizations from the user-provided directives and generating efficient code that takes advantage of the GPU architecture. Although OpenACC/OpenMP compilers are mature and able to apply some optimizations automatically, the generated code may not achieve the expected speedup, as the compilers do not have a full view of the whole application. Thus, there is generally a significant performance gap between codes accelerated with OpenACC/OpenMP and those hand-optimized with CUDA/OpenCL. To help programmers efficiently speed up their legacy sequential codes on GPUs with directive-based models, and to broaden the impact of OpenMP/OpenACC in both academia and industry, this dissertation addresses several research issues. We investigated the OpenACC and OpenMP programming models and proposed an effective application parallelization methodology for directive-based programming approaches. Our application porting experience revealed that it is insufficient to simply insert OpenMP/OpenACC offloading directives to inform the compiler that a particular code region must be compiled for GPU execution: it is essential to combine offloading directives with loop parallelization constructs. Although current compilers are mature and perform several optimizations, the user can provide them with more information through the clauses of loop parallelization constructs in order to obtain better-optimized code. We also revealed the challenge of choosing good loop schedules: the default loop schedule chosen by the compiler may not produce the best performance, so the user has to try different loop schedules manually to improve performance. We demonstrate that the OpenMP and OpenACC programming models can achieve better performance with less programming effort, but OpenMP/OpenACC compilers quickly reach their limits when the offloaded region is compute- or memory-bound and contains several nested loops. In such cases, low-level languages may have to be used. We also discuss the pointer aliasing problem in GPU codes and propose two static analysis tools that automatically perform type qualifier insertion and scalar promotion at source level to solve aliasing issues.
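
As a rough illustration of the two ingredients named above (our sketch, not the thesis's code): an offloading directive combined with a loop construct and clauses, plus a restrict-style qualifier of the kind the proposed tools insert to rule out pointer aliasing. `__restrict__` is a common compiler extension, and the clause values here are arbitrary:

```cpp
// SAXPY offloaded with OpenACC (sketch). The parallel directive alone only
// marks the region for the accelerator; the loop construct and its clauses
// give the compiler the parallelization information discussed above.
// __restrict__ promises the compiler that x and y do not alias, which is
// the property the thesis's static analysis tools establish automatically.
void saxpy(int n, float a,
           const float *__restrict__ x, float *__restrict__ y) {
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n]) vector_length(128)
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```
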
APA, Harvard, Vancouver, ISO, and other styles
8

Rungta, Neha Shyam. "Guided Testing for Automatic Error Discovery in Concurrent Software." Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3175.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wei, Ran. "An extensible static analysis framework for automated analysis, validation and performance improvement of model management programs." Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/14375/.

Full text
Abstract:
Model Driven Engineering (MDE) is a state-of-the-art software engineering approach which adopts models as first-class artefacts. In MDE, modelling tools and task-specific model management languages are used to reason about the system under development and to (automatically) produce software artefacts such as working code and documentation. Existing tools that provide state-of-the-art model management languages lack support for automatic static analysis, both for error detection (especially when models defined in various modelling technologies are involved within a multi-step MDE development process) and for performance optimisation (especially when very large models are involved in model management operations). This thesis investigates the hypothesis that static analysis of model management programs in the context of MDE can help with the detection of potential runtime errors and can also be used to achieve automated performance optimisation of such programs. To assess the validity of this hypothesis, a static analysis framework for the Epsilon family of model management languages is designed and implemented. The static analysis framework is evaluated in terms of its support for analysis of task-specific model management programs involving models defined in different modelling technologies, and its ability to improve the performance of model management programs operating on large models.
APA, Harvard, Vancouver, ISO, and other styles
10

de Carvalho Gomes, Pedro. "Automatic Extraction of Program Models for Formal Software Verification." Doctoral thesis, KTH, Teoretisk datalogi, TCS, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-176286.

Full text
Abstract:
In this thesis we present a study of the generation of abstract program models from programs in real-world programming languages that are employed in the formal verification of software. The thesis is divided into three parts, which cover distinct types of software systems, programming languages, verification scenarios, program models and properties. The first part presents an algorithm for the extraction of control flow graphs from sequential Java bytecode programs. The graphs are tailored for a compositional technique for the verification of temporal control flow safety properties. We prove that the extracted models soundly over-approximate the program behaviour w.r.t. sequences of method invocations and exceptions. Therefore, the properties that are established with the compositional technique over the control flow graphs also hold for the programs. We implement the algorithm as ConFlEx, and evaluate the tool on a number of test cases. The second part presents a technique to generate program models from incomplete software systems, i.e., programs where the implementation of at least one of the components is not available. We first define a framework to represent incomplete Java bytecode programs, and extend the algorithm presented in the first part to handle missing code. Then, we introduce refinement rules, i.e., conditions for instantiating the missing code, and prove that the rules preserve properties established over control flow graphs extracted from incomplete programs. We have extended ConFlEx to support the new definitions, and re-evaluate the tool, now over test cases of incomplete programs. The third part addresses the verification of multithreaded programs. We present a technique to prove the following property of synchronization with condition variables: "If every thread synchronizing under the same condition variables eventually enters its synchronization block, then every thread will eventually exit the synchronization". To support the verification, we first propose SyncTask, a simple intermediate language for specifying synchronized parallel computations. Then, we propose an annotation language for Java programs to assist the automatic extraction of SyncTask programs, and show that, for correctly annotated programs, the above-mentioned property holds if and only if the corresponding SyncTask program terminates. We reduce the termination problem into a reachability problem on Coloured Petri Nets. We define an algorithm to extract nets from SyncTask programs, and show that a program terminates if and only if its corresponding net always reaches a particular set of dead configurations. The extraction of SyncTask programs and their translation into Petri nets is implemented as the STaVe tool. We evaluate the technique by feeding annotated Java programs to STaVe, then verifying the extracted nets with a standard Coloured Petri Net analysis tool.

APA, Harvard, Vancouver, ISO, and other styles
11

Moriggl, Irene. "Intelligent Code Inspection using Static Code Features : An approach for Java." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4149.

Full text
Abstract:
Effective defect detection is still a hot issue when it comes to software quality assurance. Static source code analysis plays an important role here, since it offers the possibility of automated defect detection in early stages of development. As detecting defects can be seen as a classification problem, machine learning has recently been investigated for this purpose. This study presents a new model for automated defect detection by means of machine learners based on static Java code features. The model comprises the extraction of the necessary features as well as the application of suitable classifiers to them. It is realized by a prototype for the feature extraction and a study of the prototype's output in order to identify the most suitable classifiers. Finally, the overall approach is evaluated using an open source project. The suitability study and the evaluation show that several classifiers are suitable for the model and that the Rotation Forest, Multilayer Perceptron and JRip classifiers make the approach most effective. They detect defects with an accuracy higher than 96%. Although the approach comprises only a prototype, it shows the potential to become an effective alternative to today's defect detection methods.
APA, Harvard, Vancouver, ISO, and other styles
12

Tolubaeva, Munara. "Dosso - Automatic Detector Of Shared Objects In Multithreaded Java Programs." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610493/index.pdf.

Full text
Abstract:
In this thesis, we present a simple and efficient automated analysis tool called DoSSO that detects shared objects in multithreaded Java programs. DoSSO reports only the shared objects that are modified by at least one thread. Based on this tool, we propose a new approach to developing concurrent software in which programmers first implement the system without considering synchronization issues and then use an appropriate locking mechanism only for the objects reported by DoSSO. To evaluate the applicability of DoSSO, we have conducted a case study on a distributed and concurrent system with graphical user interfaces. The case study results showed that DoSSO is able to identify objects that become shared among explicitly defined threads and event threads, and objects that become shared through RMI.
APA, Harvard, Vancouver, ISO, and other styles
13

Mussot, Vincent. "Automates d'annotation de flot pour l'expression et l'intégration de propriétés dans l'analyse de WCET." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30247/document.

Full text
Abstract:
In the domain of critical systems, the analysis of program execution times is needed to schedule the various tasks as well as possible and, by extension, to dimension the whole system. The execution time of a program depends on multiple factors, such as its inputs or the targeted hardware. Yet this time variation is an issue in real-time systems, where each task must be allotted a precisely dimensioned share of processor time; for this purpose, we need to know the tasks' worst-case execution times. In the TRACES team at IRIT, we try to compute a safe upper bound on this worst-case execution time that is as precise as possible. To do so, we work on the control flow graph of a program, which represents an over-set of its possible executions, and we combine this structure with annotations on specific behaviours of the program that may reduce the over-approximation in our estimate. Tools designed to compute worst-case execution times usually support the expression and integration of annotations through dedicated annotation languages. Our proposal is to replace these languages with a type of automata named flow fact automata, so that not only the expression but also the integration of annotations in the analysis inherits the formal basis of automata. Based on these automata, enriched with constraints, variables and a hierarchy, we show how they support the various annotation types used in the worst-case execution time domain. Additionally, the integration of annotations into an analysis usually leads to associating numerical constraints with the control flow graph. The automata presented here support this method, but their expressiveness also offers new integration possibilities based on the partial unfolding of the control flow graph. We present experimental results from the comparison of these two methods, showing how graph unfolding can improve the precision of the analysis. Ultimately, this precision gain in the worst-case execution time estimate will allow better use of the hardware without exposing the user or the system to risk.
APA, Harvard, Vancouver, ISO, and other styles
14

Barcenas Patino, Ismael. "Raisonnement automatisé sur les arbres avec des contraintes de cardinalité." PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00569058.

Full text
Abstract:
Arithmetic constraints are widely used in formal languages such as regular expressions, tree grammars and regular paths. These constraints are used in the content models of types (XML Schemas) to impose bounds on the number of occurrences of nodes. In query languages (XPath, XQuery), these constraints make it possible to select nodes that have a bounded number of nodes reachable by a given path expression. Types and paths extended with counting constraints are the natural extension of their counting-free counterparts, which are already considered fundamental constructs in programming languages and type systems for XML. One of the major challenges in XML programming is to develop automated techniques for statically ensuring correct typing and optimisations of programs manipulating XML data. To this end, it is necessary to solve certain reasoning tasks that involve constructs such as XML types and XPath expressions with counting constraints. In the near future, compilers for XML programs will have to solve basic problems such as subtyping in order to ensure at compile time that a program can never generate invalid documents at run time. This thesis studies logics able to express counting constraints on tree structures. It was recently shown that the mu-calculus on graphs, when extended with counting constraints that apply exclusively to immediate successor nodes, is undecidable. In this thesis, we show that, on finite trees, the logic with counting constraints is decidable in exponential time. Moreover, this logic provides counting operators along more general paths: indeed, it can express numerical constraints on the number of descendant or even ancestor nodes. We also present linear translations of XPath expressions and XML types with counting constraints into the logic.
APA, Harvard, Vancouver, ISO, and other styles
15

Magill, Stephen. "Instrumentation Analysis: An Automated Method for Producing Numeric Abstractions of Heap-Manipulating Programs." Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/73.

Full text
Abstract:
A number of questions regarding programs involving heap-based data structures can be phrased as questions about numeric properties of those structures. A data structure traversal might terminate if the length of some path is eventually zero, or a function to remove n elements from a collection may only be safe if the collection has size at least n. In this thesis, we develop proof methods for reasoning about the connection between heap-manipulating programs and numeric programs. In addition, we develop an automatic method for producing numeric abstractions of heap-manipulating programs. These numeric abstractions are expressed as simple imperative programs over integer variables and have the feature that if a property holds of the numeric program, then it also holds of the original, heap-manipulating program. This is true for both safety and liveness. The abstraction procedure makes use of a shape analysis based on separation logic and has support for user-defined inductive data structures. We also discuss a number of applications of this technique. Numeric abstractions, once obtained, can be analyzed with a variety of existing verification tools. Termination provers can be used to reason about termination of the numeric abstraction, and thus termination of the original program. Safety checkers can be used to reason about assertion safety. And bound inference tools can be used to obtain bounds on the values of program variables. With small changes to the program source, bounds analysis also allows the computation of symbolic bounds on memory use and computational complexity.
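
A minimal example of the idea (ours, not drawn from the thesis): a heap traversal and a numeric program that soundly abstracts it.

```cpp
// A heap-manipulating loop and its numeric abstraction (illustration).
struct Node { Node *next; };

// Terminates on any acyclic list because the path from p to nullptr
// shrinks by one node per iteration.
void traverse(const Node *p) {
    while (p != nullptr)
        p = p->next;
}

// Numeric abstraction over integer variables: len tracks the length of
// the null-terminated path from p. If every run of this integer program
// terminates, so does traverse() -- the property transfers from the
// abstraction back to the original heap program.
void traverse_abs(int len) {  // precondition: len >= 0
    while (len > 0)
        len = len - 1;
}
```
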
APA, Harvard, Vancouver, ISO, and other styles
16

Alnaeli, Saleh M. "EMPIRICALLY EXAMINING THE ROADBLOCKS TO THE AUTOMATIC PARALLELIZATION AND ANALYSIS OF OPEN SOURCE SOFTWARE SYSTEMS." Kent State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=kent1429223400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Damouche, Nasrine. "Improving the Numerical Accuracy of Floating-Point Programs with Automatic Code Transformation Methods." Thesis, Perpignan, 2016. http://www.theses.fr/2016PERP0032/document.

Full text
Abstract:
Critical software based on floating-point arithmetic requires a rigorous verification and validation process to improve our confidence in its reliability and safety. Unfortunately, the available techniques for this task often provide overestimates of the round-off errors. We can cite Ariane 5 and the Patriot missile as well-known examples of disasters caused by computation errors. In recent years, several techniques have been proposed for transforming arithmetic expressions in order to improve their numerical accuracy and, in this work, we go one step further by automatically transforming larger pieces of code containing assignments, control structures and functions. We define a set of transformation rules allowing the generation, under certain conditions and in polynomial time, of larger expressions by performing limited formal computations, possibly across several iterations of a loop. These larger expressions are then re-parenthesized to find the form that best improves the numerical accuracy of the program results. We use abstract interpretation based static analysis techniques to over-approximate the round-off errors in programs and during the transformation of expressions. A tool has been implemented, and experimental results are presented concerning classical numerical algorithms and programs for embedded systems.
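
A worked micro-example of why re-parenthesizing matters (our numbers, not the thesis's): in IEEE-754 single precision, ulp(1.0e8f) = 8, so adding 3.0f to 1.0e8f is absorbed by rounding.

```cpp
#include <cstdio>

// (big + small) + small loses both small addends; big + (small + small)
// keeps their sum. The exact result is 100000006: the second
// parenthesization is off by 2, the first by 6.
int main() {
    const float big = 1.0e8f, small = 3.0f;
    const float left  = (big + small) + small;  // -> 100000000.0f
    const float right = big + (small + small);  // -> 100000008.0f
    std::printf("left = %.1f, right = %.1f\n", left, right);
    return 0;
}
```
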
APA, Harvard, Vancouver, ISO, and other styles
18

Steidl, Daniela. "Cost-Effective Quality Assurance For Long-Lived Software Using Automated Static Analysis." Supervisor: Manfred Broy; reviewers: Manfred Broy and Andy Zaidman. München: Universitätsbibliothek der TU München, 2016. http://d-nb.info/1085017516/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Rudraiah, Dakshinamurthy Amruth. "A Compiler-based Framework for Automatic Extraction of Program Skeletons for Exascale Hardware/Software Co-design." Master's thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5695.

Full text
Abstract:
The design of high-performance computing architectures requires performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. The process of performance analysis and benchmarking an application can be done in several ways with varying degrees of fidelity. One of the most cost-effective ways is to do a coarse-grained study of large-scale parallel applications through the use of program skeletons. The concept of a "program skeleton" that we discuss in this paper is an abstracted program that is derived from a larger program where source code that is determined to be irrelevant is removed for the purposes of the skeleton. In this work, we develop a semi-automatic approach for extracting program skeletons based on compiler program analysis. We demonstrate correctness of our skeleton extraction process by comparing details from communication traces, as well as show the performance speedup of using skeletons by running simulations in the SST/macro simulator. Extracting such a program skeleton from a large-scale parallel program requires a substantial amount of manual effort and often introduces human errors. We outline a semi-automatic approach for extracting program skeletons from large-scale parallel applications that reduces cost and eliminates errors inherent in manual approaches. Our skeleton generation approach is based on the use of the extensible and open-source ROSE compiler infrastructure that allows us to perform flow and dependency analysis on larger programs in order to determine what code can be removed from the program to generate a skeleton.
APA, Harvard, Vancouver, ISO, and other styles
20

Evangelisti, Gionata. "Analisi delle criticità e progettazione di un sistema di trasporto automatico inbound nel settore ceramico." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
The Italian ceramic tile industry is renowned worldwide and, to maintain its image, needs quality in all its processes. In recent years, to satisfy customer demands, logistics in the ceramic sector has adopted automated solutions, which, to be effective, require careful analysis and intelligent design. In this sense, the aim of the thesis is to observe and improve the logistics flows of a company in the ceramic sector that has recently equipped itself with an automated vertical warehouse (Magazzino Automatico Verticale, MAV). First, the flows within the MAV were mapped in order to identify its critical points. Subsequently, to address the problems that emerged, an inbound automated transport system was designed to connect production to the MAV. To understand and monitor the flows, it was important to study the particularities of the ceramic product and to collect data directly in the field. This preliminary work showed that the MAV replenishment process was the most critical one. The company therefore decided to tackle the problem by implementing an automated transport system from production to the MAV, divided into three parts. This thesis designs the first stretch, where the transshipment activity takes place. The route along which the goods can be moved was identified, and the LGV fleet and the number of batteries needed to carry out the transshipment were statically sized at six and 12, respectively. Together with the first stretch, the company designed the second, and has scheduled for 2022 the completion of the automated transport system, which will make it possible to replenish the MAV more efficiently.
APA, Harvard, Vancouver, ISO, and other styles
21

Croft, Elizabeth W. "Transaction fees in banking machine networks : a spatial and empirical analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0025/NQ38873.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Masi, Riccardo. "Software verification and validation methods with advanced design patterns and formal code analysis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
This thesis focuses on the description and improvement of the host company's software life cycle, with a focus on the verification and validation phase. The host company is an international group and a world leader in the supply of advanced technologies for the ceramic, metal and packaging industries, food and beverage, and the production of plastic containers and advanced materials. The software life cycle is an extremely important development process for building state-of-the-art software products, and it is a process that requires methodology, control and appropriate documentation. For companies, quality assurance in software development has become a very expensive activity from an economic point of view, and the verification and validation phase is essential to reduce these costs. The starting point of the thesis is the analysis and evaluation of the answers obtained through a company survey submitted to the software developers during the first phase of the internship. The thesis then describes typical software life cycle management, with particular attention to the verification and validation phase, explained through practical examples. Afterwards, we analyze in detail the different methodologies and strategies of the software verification and validation process, starting from static analysis, passing through classical methodologies of dynamic analysis, and concluding with innovative verification and validation solutions that automate the process. The main goal of the thesis is the optimization and standardization of the host company's automation software life cycle, proposing innovative solutions for every single phase of the process and possible future research and updates.
APA, Harvard, Vancouver, ISO, and other styles
23

Šohajek, Jiří. "Analýza vyvrtávacího procesu automatické horizontální vyvrtávačky." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2008. http://www.nusl.cz/ntk/nusl-228141.

Full text
Abstract:
The object of this dissertation is to analyse the drilling process of the SVD2 automatic horizontal boring machine used for drilling bearing holes. There were problems with vibrations during the drilling process that caused extreme noise, and this dissertation addresses them. First of all, it was necessary to analyse the vibrations, to examine the machining process and to measure the machine's dynamics and statics. Another task was to compare the measured results with mathematical models and, after analysing them, to propose structural and other solutions. The SVD2 machine was designed on the basis of a previous type, the SVD. Its conversion was based on adding six spindles; as a result, the number of drilled bodies was increased from one to four, so throughput was increased fourfold. That is why there was originally an intention to rebuild the other two spindle machines in the same way. Leaving the basic parts of the machine unmodified (forgoing, for example, reinforcement of the mount and a larger spacing of the side linear spindle guideway and the cross linear guideway of the support for clamping devices) proved to be an unsuitable solution, which was confirmed by measurement and subsequent analysis.
APA, Harvard, Vancouver, ISO, and other styles
24

Csallner, Christoph. "Combining over- and under-approximating program analyses for automatic software testing." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24764.

Full text
Abstract:
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Smaragdakis, Yannis; Committee Member: Dwyer, Matthew; Committee Member: Orso, Alessandro; Committee Member: Pande, Santosh; Committee Member: Rugaber, Spencer.
APA, Harvard, Vancouver, ISO, and other styles
25

Hamendi, Mohammed. "Automatically Testing Student Assignments." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-202110.

Full text
Abstract:
The freshman programming courses at the University of Economics in Prague offer a unique approach to learning the art of programming and software engineering. The introductory courses follow the Architecture First methodology, which gives students the opportunity to learn programming from the top down, without being constrained by the specifics and syntax of any one programming language. It teaches the thought processes needed to build programs, allowing the student to absorb the big ideas of computer programming. The average number of freshmen at the Faculty of Informatics and Statistics is around seven hundred students. The task of correcting programming assignments and preparing appropriate feedback would be a mammoth undertaking for teaching staff in most university settings worldwide that offer similar computing degrees. It is therefore quite often the case that the faculty provisions some sort of automated testing technology that can handle the volume and provide both the teaching staff and the students with the tools needed to manage the assignments. These automated tools and systems have been, and continue to be, the subject of much research across the world and continue to evolve as new technologies and teaching methods evolve. This study first introduces the theoretical background of automated assessment and grading tools and systems and then provides an analysis of the field's current state. Using that analysis as input, the study then designs and implements a custom-built system that enables automated testing of the structure and other aspects of student assignments. The main goal of the resulting system is to provide an intuitive and convenient way of declaring what needs to be tested for a given assignment and then providing the mechanism to run those tests automatically. The resulting system, DynoGrader, dynamically validates student assignments at runtime using Java runtime annotation processing mechanisms and the Java Reflection API.
APA, Harvard, Vancouver, ISO, and other styles
26

Parlea, Lorena Georgeta. "Towards Automating Structural Analysis of Complex RNA Molecules and Some Applications In Nanotechnology." Bowling Green State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1429316311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Hameed, Muhammad Muzaffar, and Muhammad Zeeshan ul Haq. "DefectoFix : An interactive defect fix logging tool." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5268.

Full text
Abstract:
Despite the large efforts made during the development phase to produce a fault-free system, most software implementations still require testing of the entire system. The main problem in software testing is automation that can verify the system without manual intervention. Recent work in software testing addresses automated fault injection using fault models from a repository. This requires a lot of effort, which adds to the complexity of the system. To solve this issue, this thesis suggests the DefectoFix framework. DefectoFix is an interactive defect fix logging tool that contains five components, namely a Version Control System (VCS), source code files, a differencing algorithm, Defect Fix Model (DFM) creation and additional information (project name, class name, file name, revision number, diff model). The proposed differencing algorithm extracts detailed information by detecting differences in source code files. This algorithm performs comparisons at the sub-tree level of source code files. The extracted differences, with additional information, are stored as DFMs in a repository. DFMs can later be used for the automated fault injection process. The validation of the DefectoFix framework is performed by a tool developed in the Ruby programming language. Our case study confirms that the proposed framework generates correct DFMs and is useful in automated fault injection and software validation activities.
APA, Harvard, Vancouver, ISO, and other styles
28

Menon, Malavika Vasudevan. "Parameter Estimation Technique for Models in PSS/E using Real-Time Data and Automation." ScholarWorks@UNO, 2017. https://scholarworks.uno.edu/td/2436.

Full text
Abstract:
The purpose of this thesis is to use automation to create appropriate models in PSS/E with data from Hardware-in-Loop (HIL) real-time simulations. With advances in power electronics, the use of High Voltage Direct Current technology and Flexible Alternating Current Transmission System devices in electrical power systems has increased tremendously. Static Var Compensators (SVCs) are widely used, and it is important to have accurate and reliable models for studies relating to power system planning and interaction. An automation method is proposed to find the parameters of an SVC model in PSS/E using data from the HIL real-time simulation of the SVC physical controller in Hypersim. The effects of the SVC on the system under steady-state and fault conditions are analyzed with HIL simulation of an SVC physical controller in Hypersim and its corresponding model in PSS/E on the IEEE 14-bus system. The parameters of the SVC model in PSS/E can be varied to bring its response closer to the response from the HIL simulations in Hypersim. An error function is used as a measure of the extent of the difference between the model and the physical controller.
APA, Harvard, Vancouver, ISO, and other styles
29

Letko, Zdeněk. "Dynamická detekce a léčení časově závislých chyb nad daty v prostředí Java." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235989.

Full text
Abstract:
Finding concurrency bugs in complex software is difficult. As a contribution to coping with this problem, the thesis proposes an architecture for fully automated dynamic detection and healing of data races and atomicity violations in Java. Two distinct algorithms for detecting data races are presented. One of them is a novel algorithm called AtomRace, which detects data races as a special case of atomicity violations. The healing is based on suppressing a recurrence of the detected problem and can be performed by introducing additional synchronization or by legally influencing the Java scheduler: basically, it forces certain parts of the code to be executed atomically. The proposed architecture uses bytecode instrumentation to track and influence the execution. The architecture and algorithms were implemented and tested on multiple case studies.
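
For illustration, here is the shape of the problem and of the healing in a C++ analogue (the thesis itself targets Java bytecode; this sketch is ours):

```cpp
#include <cstdio>
#include <mutex>
#include <thread>

int balance = 0;   // object shared by two threads
std::mutex heal;   // synchronisation a healing step would introduce

void deposit(int amount) {
    // Without the lock, the read-modify-write below is a data race and an
    // atomicity violation: both threads can read the same old balance.
    std::lock_guard<std::mutex> guard(heal);
    balance = balance + amount;  // now executed atomically
}

int main() {
    std::thread t1(deposit, 10), t2(deposit, 20);
    t1.join(); t2.join();
    std::printf("balance = %d\n", balance);  // always 30 once healed
    return 0;
}
```
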
APA, Harvard, Vancouver, ISO, and other styles
30

Bishop, Gary D. "Uncertainty analysis of runoff estimates from runoff-depth contour maps produced by five automated procedures for the northeastern United States." PDXScholar, 1991. https://pdxscholar.library.pdx.edu/open_access_etds/4313.

Full text
Abstract:
Maps of runoff-depth have been found to be useful tools in a variety of water resource applications. Producing such maps can be a challenging and expensive task. One of the standard methods of producing these maps is to use a manual procedure based on gaged runoff data, topographic and past runoff-depth maps, and the expert opinion of hydrologists. This thesis examined five new automated procedures for producing runoff-depth contour maps to see if the maps produced by these procedures had similar accuracy and characteristics when compared to the manual procedure. An uncertainty analysis was used to determine the accuracy of the automated procedure maps by withholding gaged runoff data from the creation of the contour maps and then interpolating estimated runoff back to these sites from the maps produced. Subtracting gaged runoff from estimated runoff produced interpolation error values. The mean interpolation error was used to define the accuracy of each map and was then compared to a similar study by Rochelle, et al., (1989) conducted on a manual procedure map.
APA, Harvard, Vancouver, ISO, and other styles
31

Beaucamps, Philippe. "Analyse de Programmes Malveillants par Abstraction de Comportements." Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2011. http://tel.archives-ouvertes.fr/tel-00646395.

Full text
Abstract:
Traditional behaviour analysis usually operates at the implementation level of the malicious behaviour. Yet it is mostly interested in identifying a given behaviour, independently of its technical implementation, and is therefore more naturally situated at a functional level. In this thesis, we define a form of program behaviour analysis that operates not on a program's elementary interactions with the system but on the function the program realizes. This function is extracted from the program's traces, a process we call abstraction. We define, in a simple, intuitive and formal way, the basic functionalities to abstract and the behaviours to detect; we then propose an abstraction mechanism applicable in both static and dynamic analysis settings, with practical algorithms of reasonable complexity; finally, we describe a behaviour analysis technique integrating this abstraction mechanism. Our method is particularly suited to the analysis of programs written in high-level languages or whose source code is known, for which static analysis is easier: programs designed for virtual machines such as Java or .NET, web scripts, browser extensions, off-the-shelf components. The formalism of behaviour analysis by abstraction that we propose relies on the theory of word and term rewriting, regular word and term languages, and model checking. It efficiently identifies functionalities in traces and thereby obtains a representation of the traces at a functional level; it defines functionalities and behaviours in a natural way, using temporal logic formulas, which guarantees their simplicity and flexibility and allows model-checking techniques to be used for detecting these behaviours; it operates on an arbitrary set of execution traces; it takes the data flow in the execution traces into account; and it allows, without loss of efficiency, uncertainty in the identification of functionalities to be taken into account. We validate our results with a set of experiments, carried out on existing malicious code, whose traces are obtained either by dynamic binary instrumentation or by static analysis.
APA, Harvard, Vancouver, ISO, and other styles
32

Namanya, Anitta P. "A Heuristic Featured Based Quantification Framework for Efficient Malware Detection. Measuring the Malicious intent of a file using anomaly probabilistic scoring and evidence combinational theory with fuzzy hashing for malware detection in Portable Executable files." Thesis, University of Bradford, 2016. http://hdl.handle.net/10454/15863.

Full text
Abstract:
Malware is still one of the most prominent vectors through which computer networks and systems are compromised. A compromised computer system or network provides data and/or processing resources to the world of cybercrime. With cybercrime projected to cost the world $6 trillion by 2021, malware is expected to continue being a growing challenge. Statistics around malware growth over the last decade support this theory, as malware numbers show an almost exponential increase over the period. Recent reports on the complexity of malware show that the fight against malware, as a means of building a more resilient cyberspace, is an evolving challenge. Compounding the problem is the lack of cyber-security expertise to handle the expected rise in incidents. This thesis proposes advancing automation of malware static analysis and detection to improve the decision-making confidence of a standard computer user with regard to a file's malicious status. Therefore, this work introduces a framework that relies on two novel approaches to score the malicious intent of a file. The first approach attaches a probabilistic score to heuristic anomalies to calculate an overall file malicious score, while the second approach uses fuzzy hashes and evidence combination theory for more efficient malware detection. The approaches' resultant quantifiable scores measure the malicious intent of the file. The designed schemes were validated using a dataset of "clean" and "malicious" files. The results obtained show that the framework achieves true-positive/false-positive detection-rate trade-offs for efficient malware detection.
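The first approach, probabilistic scoring of heuristic anomalies, admits a minimal sketch; the per-anomaly probabilities and the independence-based combination rule below are illustrative assumptions, not the thesis's exact scheme:

```python
# Hypothetical P(malicious | anomaly observed) values for PE-file heuristics.
ANOMALY_PROBS = {
    "entropy_packed_section": 0.80,
    "nonstandard_entry_point": 0.65,
    "suspicious_import": 0.55,
    "checksum_mismatch": 0.40,
}

def malicious_score(observed_anomalies):
    """Combine the evidence of each observed anomaly, assuming independence:
    score = 1 - prod(1 - p_i)."""
    benign = 1.0
    for a in observed_anomalies:
        benign *= 1.0 - ANOMALY_PROBS.get(a, 0.0)
    return 1.0 - benign

score = malicious_score(["entropy_packed_section", "suspicious_import"])
print(f"malicious score: {score:.2f}")   # 0.91
```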
APA, Harvard, Vancouver, ISO, and other styles
33

Boke, Tevfik Ali. "Dynamic Stability Analysis Of Modular, Self-reconfigurable Robotic Systems." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606016/index.pdf.

Full text
Abstract:
In this study, an efficient algorithm has been developed for the dynamic stability analysis of self-reconfigurable, modular robots. Such an algorithm is essential for the motion planning of self-reconfigurable robotic systems. The building block of the algorithm is the determination of the stability of a rigid body in contact with the ground when there exists Coulomb friction between the two bodies. This problem is linearized by approximating the friction cone with a pyramid and then solved efficiently using linear programming. The effects of changing the number of faces of the pyramid and the number of contact points are investigated. A novel definition of stability, called percentage stability, is introduced to counteract the adverse effects of the static indeterminacy problem between two contacting bodies. The algorithm developed for the dynamic stability analysis is illustrated via various case studies using the recently introduced self-reconfigurable robotic system called I-Cubes.
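The linearised feasibility check at the core of this approach can be sketched in a few lines. The example below is a 2-D simplification with made-up geometry and friction values, solved with scipy's linprog; in 3-D the friction cone would be replaced by the pyramid approximation mentioned in the abstract:

```python
import numpy as np
from scipy.optimize import linprog

mu, W = 0.5, 10.0                  # friction coefficient, body weight
contacts = [-0.3, 0.4]             # x-positions of the two contact points
cx = 0.05                          # x-position of the centre of mass

# Unknowns x = [n1, t1, n2, t2]: normal forces n_i >= 0, tangential t_i free.
# Equilibrium: sum t_i = 0, sum n_i = W, sum x_i * n_i = cx * W (torque).
A_eq = np.array([
    [0, 1, 0, 1],                          # tangential balance
    [1, 0, 1, 0],                          # normal balance
    [contacts[0], 0, contacts[1], 0],      # torque about the origin
])
b_eq = np.array([0.0, W, cx * W])

# Friction cone edges (2-D): t_i <= mu*n_i and -t_i <= mu*n_i.
A_ub = np.array([
    [-mu, 1, 0, 0], [-mu, -1, 0, 0],
    [0, 0, -mu, 1], [0, 0, -mu, -1],
])
b_ub = np.zeros(4)

bounds = [(0, None), (None, None), (0, None), (None, None)]
res = linprog(c=np.zeros(4), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("statically stable" if res.success else "unstable")
```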
APA, Harvard, Vancouver, ISO, and other styles
34

Driver, Linda C. "Cost-benefit analysis of bedside terminals." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/917026.

Full text
Abstract:
Bedside terminals are an approach to data entry that is maximally effective, due to capture at the point of care, to meet the challenges facing nursing today in relation to information documentation. The purpose of this evaluation research study was to determine if bedside terminals are justifiable through a cost-benefit analysis. The General Systems Theory, which was formulated by Ludwig von Bertalanffy in the late 1920s, was the theoretical framework used for this study (Putt, 1978). A non-standardized checklist of factors was developed by the researcher to evaluate the associated costs related to bedside terminals. The factors included patient census, acuity, lost charges, reimbursement denials, and medication errors. A convenience sample of one surgical nursing unit from a large midwestern metropolitan hospital was chosen for data collection. Based on the literature, monetary values were arbitrarily assigned to the factors. Costs were assigned based on projected figures for bedside terminal implementation in 1993 obtained from the literature. All participants were notified of their rights as human subjects and of the confidentiality of this study. This study was significant because the results will be added to the limited information on the justification of bedside terminals using a cost-benefit analysis available in the current literature. Projections of bedside terminal costs were limited due to the unwillingness of bedside terminal vendors to provide current costs to compare against the quantitative benefits collected in this study. Reimbursement denials were not obtained due to the accounting practices of the institution. Due to these limitations, a prospective rather than the retrospective approach used in this study for data collection would be recommended to ensure obtaining information on all data elements. The results of this study should be considered when contemplating purchase of bedside terminals. Based on the results of this cost-benefit analysis study, the purchase of bedside terminals is cost-justified. A favorable return on investment of a one-year payback was obtained.
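For readers unfamiliar with the payback criterion behind the "one-year payback" conclusion, here is a small worked example; all monetary figures are hypothetical:

```python
# Payback period = implementation cost / monthly benefit.
implementation_cost = 120_000.0          # projected bedside-terminal cost
monthly_benefits = {                     # hypothetical benefit categories
    "recovered_lost_charges": 6_500.0,
    "avoided_medication_errors": 2_000.0,
    "nursing_time_saved": 1_500.0,
}

monthly_benefit = sum(monthly_benefits.values())
payback_months = implementation_cost / monthly_benefit
print(f"payback period: {payback_months:.1f} months")   # 12.0 months
```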
School of Nursing
APA, Harvard, Vancouver, ISO, and other styles
35

Browning, Mary. "Cost-benefit analysis of bedside computers." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/917025.

Full text
Abstract:
Bedside computer terminals are an approach to data entry that is maximally effective to meet the challenges facing nursing today and to process hospital information. The purpose of this evaluation research study was to determine if bedside computers are justifiable through a cost-benefit analysis. A cost-benefit analysis was done to determine whether the benefits outweigh the costs involved in implementing a bedside computer system. The General Systems Theory formulated by Ludwig von Bertalanffy was the theoretical framework utilized for this project. A non-standardized checklist of factors was developed from the literature review. The factors included were patient census, acuity, lost charges, reimbursement denials, and medication errors. Interrater reliability was established by a panel of three experts on cost-benefit analysis. A convenience sample of one 42-bed nursing unit from a large metropolitan Midwestern hospital was chosen for data retrieval. One month of financial data was analyzed for this study. The procedures for the protection of human subjects were followed. The study showed that bedside computers are cost-justified based on the results of the cost-benefit analysis. The projection of costs versus benefits realized from the bedside terminals was limited due to the unwillingness of vendors to share actual cost information. Reimbursement denials could not be retrieved because of the financial accounting practices of the institution chosen. A prospective approach rather than the retrospective approach utilized in this study may produce the data necessary for reimbursement denials. Recognizing its limitations, the results of this study should be considered when contemplating the purchase of bedside terminals or the increasingly more advanced technology of hand-held and voice-activated computers.
School of Nursing
APA, Harvard, Vancouver, ISO, and other styles
36

Homdim, Tchuenteu Joel Landry. "Analysis and dynamic modeling of intermediate distributors for balancing of production lines." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18626/.

Full text
Abstract:
The work carried out at the company Pulsar Engineering s.r.l., and discussed in this thesis, focuses on the construction of a model for the dynamic simulation of the operations of a machine for feeding and sorting/merging in the tissue sector, called REDS INTERMEDIATE. The goal is to derive a powerful dynamic model that can simulate a large range of REDS INTERMEDIATE machines, working in the different existing operating modes (DIVERTER, COMBINER and By-pass) and covering all existing operating strategies (REVOLVER and TETRIS). This was possible with the aid of a powerful simulation tool called PLS DYNAMIC / TISSUEPLS DYNAMIC. It is important to emphasize that we deal with a simplified production line, since we are interested only in obtaining the REDS INTERMEDIATE model. This model can be used to obtain a realistic estimate of the parameters necessary for the design of a production line, and to observe the behaviour of the PULSAR line in the 2D and 3D interfaces provided by the software. The following discussion reports the study in question, presenting some results, starting from a general description of production lines and a static analysis of the REDS INTERMEDIATE.
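To make the operating modes concrete, here is a toy sketch of the data flow only, with round-robin lane assignment standing in for the far richer REVOLVER/TETRIS strategies of the real machine:

```python
from itertools import cycle

def diverter(stream, n_lanes):
    """DIVERTER mode: spread an incoming stream of packs across output lanes."""
    lanes = [[] for _ in range(n_lanes)]
    for pack, lane in zip(stream, cycle(range(n_lanes))):
        lanes[lane].append(pack)          # round-robin distribution
    return lanes

def combiner(lanes):
    """COMBINER mode: merge parallel lanes back into one stream."""
    merged = []
    for group in zip(*lanes):             # interleave the lanes
        merged.extend(group)
    return merged

lanes = diverter(range(6), 3)
print(lanes)            # [[0, 3], [1, 4], [2, 5]]
print(combiner(lanes))  # [0, 1, 2, 3, 4, 5]
```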
APA, Harvard, Vancouver, ISO, and other styles
37

Mirza, Qublai K. A. "A Cloud-Based Intelligent and Energy Efficient Malware Detection Framework. A Framework for Cloud-Based, Energy Efficient, and Reliable Malware Detection in Real-Time Based on Training SVM, Decision Tree, and Boosting using Specified Heuristics Anomalies of Portable Executable Files." Thesis, University of Bradford, 2017. http://hdl.handle.net/10454/16043.

Full text
Abstract:
The continuity of financial and other related losses due to cyber-attacks proves the substantial growth of malware and their lethal proliferation techniques. Every successful malware attack highlights the weaknesses in the defence mechanisms responsible for securing the targeted computer or network. Recent cyber-attacks reveal the presence of sophistication and intelligence in malware behaviour, with the ability to conceal their code and operate within the system autonomously. Conventional detection mechanisms not only fall short in their malware detection capabilities, they also consume a large amount of resources while scanning for malicious entities in the system. Many recent reports have highlighted this issue, along with the challenges faced by alternate solutions and studies conducted in the same area. There is an unprecedented need for a resilient and autonomous solution that takes a proactive approach against modern malware with stealth behaviour. This thesis proposes a multi-aspect solution comprising an intelligent malware detection framework and an energy-efficient hosting model. The malware detection framework is a combination of conventional and novel malware detection techniques. The proposed framework incorporates comprehensive feature heuristics of files generated by a bespoke static feature extraction tool. These comprehensive heuristics are used to train the machine learning algorithms (Support Vector Machine, Decision Tree, and Boosting) to differentiate between clean and malicious files. Both techniques, feature heuristics and machine learning, are combined to form a two-factor detection mechanism. This thesis also presents a cloud-based, energy-efficient and scalable hosting model, which combines multiple infrastructure components of Amazon Web Services to host the malware detection framework. This hosting model presents a client-server architecture, where the client is a lightweight service running on the host machine and the server is based in the cloud. The proposed framework and the hosting model were evaluated individually and in combination, by specifically designed experiments using separate repositories of clean and malicious files. The experiments were designed to evaluate the malware detection capabilities and energy efficiency while operating within a system. The proposed malware detection framework and the hosting model showed significant improvement in malware detection while consuming very low CPU resources during operation.
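The two-factor idea of static feature heuristics feeding several classifiers can be sketched briefly; the feature names and values below are hypothetical, and scikit-learn stands in for whatever training setup the thesis actually uses:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

# Rows: [section_entropy, num_suspicious_imports, has_valid_checksum]
X = np.array([[7.8, 12, 0], [6.9, 8, 0], [3.2, 1, 1],
              [2.9, 0, 1], [7.1, 9, 0], [3.5, 2, 1]])
y = np.array([1, 1, 0, 0, 1, 0])          # 1 = malicious, 0 = clean

models = [SVC(), DecisionTreeClassifier(), GradientBoostingClassifier()]
for m in models:
    m.fit(X, y)

sample = np.array([[7.5, 10, 0]])         # heuristics of an unseen file
votes = [int(m.predict(sample)[0]) for m in models]
print("malicious" if sum(votes) >= 2 else "clean")   # simple majority vote
```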
APA, Harvard, Vancouver, ISO, and other styles
38

Palikareva, Hristina. "Techniques and tools for the verification of concurrent systems." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:fc2028e1-2a45-459a-afdd-70001893f3d8.

Full text
Abstract:
Model checking is an automatic formal verification technique for establishing correctness of systems. It has been widely used in industry for analysing and verifying complex safety-critical systems in application domains such as avionics, medicine and computer security, where manual testing is infeasible and even minor errors could have dire consequences. In our increasingly parallelised world, concurrency has become pivotal and seamlessly woven within programming paradigms; it remains, however, extremely challenging when it comes to modelling and establishing correctness of intended behaviour. Tools for model checking concurrent systems face severe limitations due to scalability problems arising from the need to examine all possible interleavings (schedules) of executions of parallel components. Moreover, concurrency poses additional challenges to model checking, giving rise to phenomena such as nondeterminism, deadlock and livelock. In this thesis we focus on adapting and developing novel model-checking techniques for concurrent systems in the setting of the process algebra CSP and its primary model checker FDR. CSP allows for a compact modelling and precise analysis of event-based concurrency, grounded on synchronous message passing as a fundamental mechanism of inter-component communication. In particular, we investigate techniques based on symbolic model checking, static analysis and abstraction, all of them exploiting the compositionality inherent in CSP and aiming to increase the scale of systems that can be tractably analysed. Firstly, we investigate symbolic model-checking techniques based on Boolean satisfiability (SAT), which we adapt for the traces model of CSP. We tailor bounded model checking (BMC), which can be used for bug detection, and temporal k-induction, which aims at establishing inductiveness of properties and is capable of both bug finding and establishing the correctness of systems. Secondly, we propose a static analysis framework for establishing livelock freedom of CSP processes, with lessons for other concurrent formalisms. As opposed to traditional exhaustive state-space exploration, our framework employs a system of rules on the syntax of a process to calculate a sound approximation of its fair/co-fair sets of events. The rules either safely classify a process as livelock-free or report inconclusiveness, thereby trading accuracy for speed. Finally, we develop a series of abstraction/refinement schemes for the traces, stable-failures and failures-divergences models of CSP and embed them into a fully automated and compositional CEGAR framework. For each of these techniques we present an implementation and an experimental evaluation on a set of CSP benchmarks.
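The BMC step can be illustrated on a toy transition system. The sketch below unrolls a mod-8 counter for k steps and asks a solver for a path violating a safety property; z3 is used here purely for convenience (an assumption), whereas the thesis works with propositional SAT in the CSP/FDR setting:

```python
from z3 import Solver, Int, Or, sat

def bmc(bound):
    """Unroll the transition relation `bound` steps and look for a violation."""
    s = Solver()
    xs = [Int(f"x{i}") for i in range(bound + 1)]
    s.add(xs[0] == 0)                            # initial state
    for i in range(bound):
        s.add(xs[i + 1] == (xs[i] + 1) % 8)      # mod-8 counter transition
    s.add(Or([x == 5 for x in xs]))              # negated safety property x != 5
    return s.check() == sat

for k in range(1, 8):
    if bmc(k):
        print(f"property 'x != 5' violated within {k} steps")   # k = 5
        break
```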
APA, Harvard, Vancouver, ISO, and other styles
39

Surovič, Marek. "Statická detekce malware nad LLVM IR." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255427.

Full text
Abstract:
This thesis deals with methods for behavioural malware detection that employ techniques of formal analysis and verification. The core of the approach is the inference of tree automata from system-call dependency graphs, which are obtained by static analysis of LLVM IR. As part of the work, a prototype detector was implemented using the LLVM compiler infrastructure. For the experimental evaluation of the detector, a C/C++ compiler capable of generating malware mutations by means of obfuscating transformations is used. The results of preliminary experiments and possible future extensions of the detector are discussed in the conclusion of the thesis.
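The first step, recovering call information from LLVM IR to build a system-call dependency graph, can be roughly sketched as follows; the IR fragment is hypothetical and the naive sequential-edge heuristic is far simpler than the dependency analysis used in the thesis (the tree-automata inference is omitted entirely):

```python
import re
from collections import defaultdict

# A hypothetical textual LLVM IR fragment.
ir = """
define i32 @main() {
  %fd = call i32 @open(i8* %path, i32 0)
  %n  = call i64 @read(i32 %fd, i8* %buf, i64 64)
  call i32 @close(i32 %fd)
  ret i32 0
}
"""

CALL_RE = re.compile(r"call [^@]*@(\w+)\(")
deps = defaultdict(set)
calls = CALL_RE.findall(ir)
for caller, callee in zip(calls, calls[1:]):
    deps[caller].add(callee)          # naive sequential dependency edges

print(dict(deps))  # {'open': {'read'}, 'read': {'close'}}
```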
APA, Harvard, Vancouver, ISO, and other styles
40

Truong, Nghi Khue Dinh. "A web-based programming environment for novice programmers." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16471/1/Nghi_Truong_Thesis.pdf.

Full text
Abstract:
Learning to program is acknowledged to be difficult; programming is a complex intellectual activity and cannot be learnt without practice. Research has shown that first year IT students presently struggle with setting up compilers, learning how to use a programming editor and understanding abstract programming concepts. Large introductory class sizes pose a great challenge for instructors in providing timely, individualised feedback and guidance for students when they do their practice. This research investigates the problems and identifies solutions. An interactive and constructive web-based programming environment is designed to help beginning students learn to program in high-level, object-oriented programming languages such as Java and C#. The environment eliminates common starting hurdles for novice programmers and gives them the opportunity to successfully produce working programs at the earliest stage of their study. The environment allows students to undertake programming exercises anytime, anywhere, by "filling in the gaps" of a partial computer program presented in a web page, and enables them to receive guidance in getting their programs to compile and run. Feedback on quality and correctness is provided through a program analysis framework. Students learn by doing, receiving feedback and reflecting - all through the web. A key novel aspect of the environment is its capability in supporting small "fill in the gap" programming exercises. This type of exercise places a stronger emphasis on developing students' reading and code comprehension skills than the traditional approach of writing a complete program from scratch. It allows students to concentrate on critical dimensions of the problem to be solved and reduces the complexity of writing programs.
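The "fill in the gap" mechanic can be illustrated compactly. The sketch below splices a student fragment into a partial Python program and checks it by compiling and running against an expected output; the exercise, template and checker are all hypothetical (the environment itself targets Java and C#):

```python
import contextlib
import io

TEMPLATE = """
def average(xs):
    {gap}

print(average([2, 4, 6]))
"""

def check_submission(student_gap, expected_output="4.0"):
    program = TEMPLATE.format(gap=student_gap)
    try:
        compile(program, "<submission>", "exec")       # syntax feedback first
    except SyntaxError as e:
        return f"Does not compile: {e.msg} (line {e.lineno})"
    out = io.StringIO()
    with contextlib.redirect_stdout(out):
        exec(program, {})                              # run in a fresh namespace
    actual = out.getvalue().strip()
    return "Correct!" if actual == expected_output else f"Wrong output: {actual}"

print(check_submission("return sum(xs) / len(xs)"))    # Correct!
```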
APA, Harvard, Vancouver, ISO, and other styles
41

Truong, Nghi Khue Dinh. "A web-based programming environment for novice programmers." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16471/.

Full text
Abstract:
Learning to program is acknowledged to be difficult; programming is a complex intellectual activity and cannot be learnt without practice. Research has shown that first year IT students presently struggle with setting up compilers, learning how to use a programming editor and understanding abstract programming concepts. Large introductory class sizes pose a great challenge for instructors in providing timely, individualised feedback and guidance for students when they do their practice. This research investigates the problems and identifies solutions. An interactive and constructive web-based programming environment is designed to help beginning students learn to program in high-level, object-oriented programming languages such as Java and C#. The environment eliminates common starting hurdles for novice programmers and gives them the opportunity to successfully produce working programs at the earliest stage of their study. The environment allows students to undertake programming exercises anytime, anywhere, by "filling in the gaps" of a partial computer program presented in a web page, and enables them to receive guidance in getting their programs to compile and run. Feedback on quality and correctness is provided through a program analysis framework. Students learn by doing, receiving feedback and reflecting - all through the web. A key novel aspect of the environment is its capability in supporting small "fill in the gap" programming exercises. This type of exercise places a stronger emphasis on developing students' reading and code comprehension skills than the traditional approach of writing a complete program from scratch. It allows students to concentrate on critical dimensions of the problem to be solved and reduces the complexity of writing programs.
APA, Harvard, Vancouver, ISO, and other styles
42

Holm, Oscar. "Improving the Development of Safety Critical Software : Automated Test Case Generation for MC/DC Coverage using Incremental SAT-Based Model Checking." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161335.

Full text
Abstract:
The importance and requirements of certifying safety-critical software are today more apparent than ever. This study focuses on the standards and practices used within the avionics, automotive and medical domains when it comes to safety-critical software. We identify critical problems and trends in certifying safety-critical software and propose a proof-of-concept using static analysis, model checking and incremental SAT solving as a contribution towards solving the identified problems. We present quantitative execution times and code coverage results for our proposed solution. The proposed solution is developed under the assumptions of safety-critical software standards and compared to other studies proposing similar methods. Lastly, we discuss the issues and advantages of our proof-of-concept from the perspective of the software developer community.
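For readers unfamiliar with the MC/DC criterion targeted here, the sketch below enumerates, for an illustrative decision, the test pairs that demonstrate the independent effect of each condition; real tools, like the one proposed, derive such tests with SAT solvers rather than by enumeration:

```python
from itertools import product, combinations

def decision(a, b, c):
    """An illustrative decision over three conditions."""
    return (a and b) or c

tests = list(product([False, True], repeat=3))

def mcdc_pairs(condition_index):
    """All test pairs where only one condition changes and the outcome flips."""
    pairs = []
    for t1, t2 in combinations(tests, 2):
        others_equal = all(t1[i] == t2[i]
                           for i in range(3) if i != condition_index)
        flips = t1[condition_index] != t2[condition_index]
        if others_equal and flips and decision(*t1) != decision(*t2):
            pairs.append((t1, t2))
    return pairs

for i, name in enumerate(["a", "b", "c"]):
    print(name, mcdc_pairs(i))   # MC/DC needs at least one pair per condition
```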
APA, Harvard, Vancouver, ISO, and other styles
43

Santos, Kleber Tarcísio Oliveira. "Uma abordagem unificada para especificar e checar restrições em múltiplas linguagens de programação por meio de um analisador estático no contexto de um juiz on-line." Pós-Graduação em Ciência da Computação, 2018. http://ri.ufs.br/jspui/handle/riufs/10639.

Full text
Abstract:
The teaching and learning process of computer programming is a complex task which requires a lot of practice and creativity. Usually, there are numerous solutions to the same problem; therefore, the student needs his solutions to be evaluated quickly for faster and more effective learning. To face these challenges, teachers and students can rely on resources from the evolution of Information and Communication Technology. Virtual learning environments and online judge systems are attractive alternatives used in this context. This work presents a unified approach to specify and check source-code restrictions, supported by a static analyzer. Although current tools are able to indicate whether a program produced the expected output from a given input, not all are able to determine whether the student used (or not) a given programming-language construct, such as creating a function and using it in the program. Among those that are capable, there are problems that were solved in the approach proposed in this work, such as ease of use, a unified approach, and degree of flexibility. In addition, this work presents an analysis of the database of The Huxley with the purpose of discovering the main source-code restrictions used by teachers and met by students. This analysis was based on data obtained from the use of the static analyzer developed, in conjunction with a survey applied to teachers of introductory programming with the purpose of finding out the main restrictions they would use if they had a tool to specify and check restrictions.
São Cristóvão, SE
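A restriction of the kind discussed, for instance "the student must define a function and call it", can be checked statically over a syntax tree. The sketch below uses Python's ast module purely for illustration, with a hypothetical function name and rule format; the thesis's analyzer targets the multiple languages of the online judge:

```python
import ast

RULE = {"must_define": "media", "must_call": "media"}   # hypothetical spec

def check_restrictions(source, rule):
    """Return True if the source defines and calls the required function."""
    tree = ast.parse(source)
    defined = {n.name for n in ast.walk(tree)
               if isinstance(n, ast.FunctionDef)}
    called = {n.func.id for n in ast.walk(tree)
              if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
    return rule["must_define"] in defined and rule["must_call"] in called

student_code = """
def media(xs):
    return sum(xs) / len(xs)

print(media([7, 8, 9]))
"""
print(check_restrictions(student_code, RULE))   # True
```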
APA, Harvard, Vancouver, ISO, and other styles
44

Stålnacke, Olof. "Formativ feedback i programmering med tillämpning av statisk kodanalys : Utveckling av ett verktyg." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-63934.

Full text
Abstract:
Aim: Develop an IT artifact that provides formative feedback to students on their programming assignments. Background: One of the best ways to learn programming is by practice. Providing feedback to students is an important and valuable factor for improving learning, and it plays a vital part in the student's ability to enhance and improve their solutions. Software development courses have several assignments and each course instructs about 100 students. Assessing and providing feedback for all the students and every assignment demands considerable resources. In a survey conducted by TCO (2013), half of the respondents state that feedback is rarely or never given in reasonable time. Method: Action Design Research (ADR) was used to intervene in an organizational problem in parallel with building and evaluating an IT artifact. Conclusion: The results of the study were four generated design principles and a proposed solution for how to use existing static code analysis tools to provide formative feedback to students.
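The proposed use of existing static-analysis tools can be sketched as a thin wrapper that rephrases tool findings as formative hints. The example below runs flake8 as a subprocess (assuming it is installed) with a custom output format; the code-to-hint mapping is hypothetical:

```python
import subprocess

FRIENDLY = {   # hypothetical mapping from tool codes to formative messages
    "F841": "You assign a variable but never use it -- is it needed?",
    "E501": "This line is quite long; consider splitting it for readability.",
}

def feedback(path):
    """Run flake8 on the student's file and print friendlier hints."""
    result = subprocess.run(
        ["flake8", "--format=%(code)s:%(row)d", path],
        capture_output=True, text=True)
    for line in result.stdout.splitlines():
        code, row = line.split(":")
        hint = FRIENDLY.get(code, f"Check {code} reported by the analyser.")
        print(f"line {row}: {hint}")

feedback("submission.py")
```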
APA, Harvard, Vancouver, ISO, and other styles
45

Glanon, Philippe Anicet. "Deployment of loop-intensive applications on heterogeneous multiprocessor architectures." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG029.

Full text
Abstract:
Cyber-physical systems (CPSs) are distributed computing-intensive systems that integrate a wide range of software applications and heterogeneous processing resources, each interacting with the others through different communication resources to process a large volume of data sensed from physical, chemical or biological processes. An essential issue in the design stage of these systems is to predict the timing behaviour of software applications and to provide performance guarantees to these applications. In order to tackle this issue, efficient static scheduling strategies are required to deploy the computations of software applications on the processing architectures. These scheduling strategies should deal with several constraints, which include the loop-carried dependency constraints between the computational programs as well as the resource and communication constraints of the processing architectures intended to execute these programs. Actually, loops being one of the most time-critical parts of many computing-intensive applications, the optimal timing behaviour and performance of the applications depend on the optimal schedule of the loop structures enclosed in the computational programs executed by the applications. Therefore, to provide performance guarantees for the applications, the scheduling strategies should efficiently explore and exploit the parallelism embedded in the repetitive execution patterns of loops while ensuring respect of the resource and communication constraints of the processing architectures of CPSs. Scheduling a loop under resource and communication constraints is a complex problem. To solve it efficiently, heuristics are obviously necessary. However, to design efficient heuristics, it is important to characterize the set of optimal solutions for the scheduling problem. An optimal solution for a scheduling problem is a schedule that achieves an optimal performance goal. In this thesis, we tackle the study of resource-constrained and communication-constrained scheduling of loop-intensive applications on heterogeneous multiprocessor architectures, with the goal of optimizing throughput performance for the applications. In order to characterize the set of optimal scheduling solutions and to design efficient scheduling heuristics, we use the synchronous dataflow (SDF) model of computation to describe the loop structures specified in the computational programs of software applications, and we design software-pipelined scheduling strategies based on the structural and mathematical properties of the SDF model.
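The SDF balance equations that underpin such schedules admit a small worked example. For every edge from producer p to consumer c with rates prod and cons, a consistent graph satisfies r[p]*prod = r[c]*cons, and the smallest positive integer solution r is the repetition vector; the sketch below computes it for an illustrative two-actor graph:

```python
from fractions import Fraction
from math import gcd

def repetition_vector(edges, actors):
    """Propagate firing rates over edges of a connected, consistent SDF graph,
    then scale to the smallest positive integer solution."""
    r = {actors[0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for p, c, prod, cons in edges:
            if p in r and c not in r:
                r[c] = r[p] * prod / cons
                changed = True
            elif c in r and p not in r:
                r[p] = r[c] * cons / prod
                changed = True
    lcm_den = 1
    for f in r.values():
        lcm_den = lcm_den * f.denominator // gcd(lcm_den, f.denominator)
    return {a: int(f * lcm_den) for a, f in r.items()}

edges = [("A", "B", 3, 2)]     # A produces 3 tokens per firing; B consumes 2
print(repetition_vector(edges, ["A", "B"]))   # {'A': 2, 'B': 3}
```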
APA, Harvard, Vancouver, ISO, and other styles
46

Mayer, Wolfgang. "Static and hybrid analysis in model-based debugging." 2007. http://arrow.unisa.edu.au:8081/1959.8/29562.

Full text
Abstract:
Defects in computer programs have great social and economic impacts and should be eliminated as much as possible. Since testing and debugging are among the most costly and time consuming tasks in the software development life cycle, a variety of intelligent debugging aids have been proposed within the last three decades. Model-based software debugging (MBSD) is a particular technique that exploits discrepancies between a program execution and the intended behaviour to isolate program fragments that could potentially explain an observed misbehaviour. In contrast to other techniques, model-based debugging does not require a formal specification of a program's behaviour, making the approach suitable for developers without training in formal software engineering practices. A key aspect of model-based debugging is the transformation of the given program into a model suitable for debugging. In this thesis, several models for analysing programs written in an object-oriented language are investigated, with Java as concrete example. The aim of this work is to assess the suitability of value-based models and generalisations thereof for debugging of programs making use of dynamically allocated data structures, recursive methods and polymorphic method invocations.
APA, Harvard, Vancouver, ISO, and other styles
47

Arceri, Vincenzo. "Taming Strings in Dynamic Languages - An Abstract Interpretation-based Static Analysis Approach." Doctoral thesis, 2020. http://hdl.handle.net/11562/1016351.

Full text
Abstract:
In recent years, dynamic languages such as JavaScript, Python or PHP have found several fields of application, thanks to the multiple features they provide, the agility of deploying software and the apparent ease of learning such languages. Strings, in particular, play a central role in dynamic languages, as they can be implicitly converted to values of other types, used to access object properties or transformed at run-time into executable code. In particular, the possibility of dynamically generating code by transforming strings breaks the typical assumption in static program analysis that the code is an immutable, indeed static, object. This happens because the program's essential data structures, such as the control-flow graph and the system of equations associated with the program to analyze, are themselves dynamically mutating objects. In a sentence: "You can't check the code you don't see". For all these reasons, dynamic languages still pose a big challenge for static program analysis, making it drastically hard and imprecise. The goal of this thesis is to tackle the problem of statically analyzing dynamic code by treating the code as any other data structure that can be statically analyzed, and by treating the static analyzer as any other function that can be recursively called. Since, in dynamically generated code, the program code can be encoded as strings and then transformed into executable code, we first define a novel and suitable string abstraction, and the corresponding abstract semantics, able both to keep enough information to analyze string properties in general, and to keep enough information about the possible executable strings that may be converted to code. Such a string abstraction permits us to distill from a string abstract value the executable program expressed by it, allowing us to recursively call the static analyzer on the synthesized program. The final result of this thesis is an important first step towards a sound-by-construction abstract interpreter for real-world dynamic string-manipulation languages, analyzing also string-to-code statements, that is, the code that standard static analysis "can't see".
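The flavour of such a string abstraction can be conveyed with a toy domain in which an abstract value is either an exact string or a known prefix, so that abstract concatenation still reveals whether a dynamically built string may denote code. This is only a sketch, far coarser than the domain developed in the thesis:

```python
from dataclasses import dataclass

@dataclass
class AbsStr:
    text: str
    exact: bool    # True: exactly `text`; False: some string starting with `text`

def concat(a: AbsStr, b: AbsStr) -> AbsStr:
    """Abstract concatenation: an unknown suffix absorbs everything after it."""
    if a.exact:
        return AbsStr(a.text + b.text, b.exact)
    return a

def may_be_code(a: AbsStr) -> bool:
    """Crude check: could this abstract string denote an eval(...) call?"""
    return a.text.startswith("eval(") or (not a.exact and "eval(".startswith(a.text))

payload = concat(AbsStr("eval(", True), AbsStr("user_input", False))
print(payload, may_be_code(payload))   # AbsStr(text='eval(user_input', exact=False) True
```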
APA, Harvard, Vancouver, ISO, and other styles
48

ALFONSI, RAFFAELE. "CityMobil2 project. Users preferences for automation and analysis of the ARTS in Oristano." Doctoral thesis, 2015. http://hdl.handle.net/11573/962783.

Full text
Abstract:
The first part of the work reports on the results of investigations of users' attitudes towards ARTS and conventional buses, carried out within the CityMobil2 project in twelve cities through a common stated-preference questionnaire. The related econometric analysis is based on the estimation of a logit model with two alternatives considered: ARTS and minibus. Besides the contribution of the attributes of waiting time, riding time and fare, the alternative-specific constant (ASC) of the ARTS has been estimated; it represents the mean of all the unobserved attributes of the automated system affecting the choice. A positive value of the ASC, the observed attributes being equal, indicates a relatively higher preference for the ARTS. This is the situation for the cities where the ARTS is implemented inside a major facility. Instead, the effect of the socio-economic attributes of the users turns out to be heterogeneous across cities. The second part of the work reports on the technical and economic-financial analysis of an ARTS tested on a route on the seafront of Oristano (Sardinia, Italy). In particular, a cost-benefit analysis comparing the ARTS with a conventional minibus is conducted. The differential consumer's surplus is quantified employing site-specific values for waiting time, riding time and the ASC, derived from the estimation of the logit models. The results show the existence of several feasibility areas for the ARTS, depending on the potential demand level assumed.
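A small worked example shows how the ASC enters the binary logit choice probability; all coefficient and attribute values below are hypothetical, whereas the paper estimates them from the stated-preference data:

```python
from math import exp

def utility(asc, b_wait, wait, b_ride, ride, b_fare, fare):
    """Linear-in-parameters systematic utility of one alternative."""
    return asc + b_wait * wait + b_ride * ride + b_fare * fare

# Hypothetical estimates: travellers dislike waiting, riding time and fare.
b_wait, b_ride, b_fare, asc_arts = -0.10, -0.05, -0.50, 0.30

v_arts = utility(asc_arts, b_wait, 5, b_ride, 12, b_fare, 1.5)   # ARTS attributes
v_bus  = utility(0.0,      b_wait, 8, b_ride, 10, b_fare, 1.5)   # minibus attributes

p_arts = exp(v_arts) / (exp(v_arts) + exp(v_bus))
print(f"P(ARTS) = {p_arts:.2f}")   # > 0.5: the positive ASC and shorter wait
                                   # outweigh the longer ride in this example
```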
APA, Harvard, Vancouver, ISO, and other styles
49

Cochrane, Linda Louise Loomis. "Survey of collection analysis practices in public and academic libraries in the United States, and the effect of automation thereon." Thesis, 1989. http://books.google.com/books?id=9tbgAAAAMAAJ.

Full text
Abstract:
Thesis (Ph. D.)--Oregon State University, 1990.
Typescript (photocopy). Includes bibliography (leaves 107-117).
APA, Harvard, Vancouver, ISO, and other styles
50

Elshandidy, Tamer, I. Fraser, and K. Hussainey. "What drives mandatory and voluntary risk reporting variations across Germany, UK and US?" 2014. http://hdl.handle.net/10454/12862.

Full text
Abstract:
This paper utilises computerised textual analysis to explore the extent to which both firm and country characteristics influence mandatory and voluntary risk reporting (MRR and VRR) variations both within and between non-financial firms across Germany, the UK and the US, over the period from 2005 to 2010. We find significant variations in MRR and VRR between firms across the three countries. Further, we find, on average, that German firms tend to disclose significantly higher (lower) levels of risk information mandatorily than UK (US) firms. German firms, on average, tend to reveal considerably higher (lower) levels of VRR than US (UK) firms. Our results document that MRR and VRR variations are significantly influenced by systematic risk, the legal system and cultural values. We also find that country and firm characteristics have higher explanatory power over the observed variations in MRR than over those in VRR.
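The computerised textual analysis underpinning such measurements can be sketched as a keyword-based sentence counter; the word list below is illustrative only and far simpler than the paper's actual methodology:

```python
import re

RISK_WORDS = {"risk", "uncertainty", "exposure", "hedge", "volatility"}

def risk_sentence_count(text):
    """Count sentences containing at least one risk-related keyword."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return sum(1 for s in sentences
               if any(w in s.lower() for w in RISK_WORDS))

report = ("The Group faces currency risk on euro sales. "
          "We hedge material exposures quarterly. "
          "Revenue grew by nine percent.")
print(risk_sentence_count(report))   # 2
```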
APA, Harvard, Vancouver, ISO, and other styles