Dissertations / Theses on the topic 'Programs analysis'

Consult the top 50 dissertations / theses for your research on the topic 'Programs analysis.'

1

Nagulakonda, Vikram. "Assertion seeding: development of program instrumentation through iterative formal analysis." Morgantown, W. Va.: [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=1080.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains v, 80 p. : ill. Includes abstract. Includes bibliographical references (p. 33-35).
APA, Harvard, Vancouver, ISO, and other styles
2

Jakobsson, Filip. "Static Analysis for BSPlib Programs." Thesis, Orléans, 2019. http://www.theses.fr/2019ORLE2005.

Full text
Abstract:
The goal of scalable parallel programming is to program computer architectures composed of multiple processing units so that increasing the number of processing units leads to an increase in performance. Bulk Synchronous Parallel (BSP) is a widely used model for scalable parallel programming with predictable performance. BSPlib is a library for BSP programming in C. In BSPlib, parallel algorithms are expressed by intermingling instructions that control the global parallel structure, and instructions that express the local computation of each processing unit. This lets the programmer fine-tune synchronization, but also implement programs whose diverging parallel control flow obscures the underlying BSP structure. In practice, however, the majority of BSPlib programs are textually aligned, a property that ensures parallel control flow convergence. We examine three core aspects of BSPlib programs through the lens of textual alignment: synchronization, performance and communication. First, we present a static analysis that identifies textually aligned statements and use it to verify safe synchronization. This analysis has been implemented in Frama-C and certified in Coq. Second, we exploit textual alignment to develop a static performance analysis for BSPlib programs, based on classic cost analysis for sequential programs. Third, we develop a textual alignment-based sufficient condition for safe registration. Registration in BSPlib enables communication by Direct Remote Memory Access but is error prone. This development forms the basis for a future static analysis of registration.
APA, Harvard, Vancouver, ISO, and other styles
3

Armstrong, Alasdair. "Formal analysis of concurrent programs." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/13089/.

Full text
Abstract:
In this thesis, extensions of Kleene algebras are used to develop algebras for rely-guarantee style reasoning about concurrent programs. In addition to these algebras, detailed denotational models are implemented in the interactive theorem prover Isabelle/HOL. Formal soundness proofs link the algebras to their models. This follows a general algebraic approach for developing correct by construction verification tools within Isabelle. In this approach, algebras provide inference rules and abstract principles for reasoning about the control flow of programs, while the concrete models provide laws for reasoning about data flow. This yields a rapid, lightweight approach for the construction of verification and refinement tools. These tools are used to construct a popular example from the literature, via refinement, within the context of a general-purpose interactive theorem proving environment.
APA, Harvard, Vancouver, ISO, and other styles
4

Kim, Minjang. "Dynamic program analysis algorithms to assist parallelization." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45758.

Full text
Abstract:
All market-leading processor vendors have started to pursue multicore processors as an alternative to high-frequency single-core processors for better energy and power efficiency. With this transition to multicore processors, programmers no longer get the free performance gain once provided by increased clock frequency. Parallelization of existing serial programs has become the most powerful approach to improving application performance. Not surprisingly, parallel programming is still extremely difficult for many programmers, mainly because thinking in parallel is simply beyond human perception. However, we believe that software tools based on advanced analyses can significantly reduce this parallelization burden. Much active research and many tools exist for already parallelized programs, for example for finding concurrency bugs. Instead we focus on program analysis algorithms that assist the actual parallelization steps: (1) finding parallelization candidates, (2) understanding the parallelizability and profits of the candidates, and (3) writing parallel code. A few commercial tools are introduced for these steps. A number of researchers have proposed various methodologies and techniques to assist parallelization. However, many weaknesses and limitations still exist. In order to assist the parallelization steps more effectively and efficiently, this dissertation proposes Prospector, which consists of several new and enhanced program analysis algorithms. First, an efficient loop profiling algorithm is implemented. Frequently executed loops can be candidates for profitable parallelization targets. The detailed execution profiling for loops provides a guide for selecting initial parallelization targets. Second, an efficient and rich data-dependence profiling algorithm is presented. Data dependence is the most essential factor that determines parallelizability. Prospector exploits dynamic data-dependence profiling, which is an alternative and complementary approach to traditional static-only analyses. However, even state-of-the-art dynamic dependence analysis algorithms can only successfully profile a program with a small memory footprint. Prospector introduces an efficient data-dependence profiling algorithm to support large programs and inputs while providing highly detailed profiling information. Third, a new speedup prediction algorithm is proposed. Although the loop profiling can give a qualitative estimate of the expected profit, obtaining accurate speedup estimates needs more sophisticated analysis. Prospector introduces a new dynamic emulation method to predict parallel speedups from annotated serial code. Prospector also provides a memory performance model to predict speedup saturation due to increased memory traffic. Compared to the latest related work, Prospector significantly improves both prediction accuracy and coverage. Finally, Prospector provides algorithms that extract hidden parallelism and advice on writing parallel code. We present a number of case studies showing how Prospector assists manual parallelization in particular cases, including privatization, reduction, mutex, and pipelining.
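To make the dynamic data-dependence profiling idea above concrete, here is a minimal illustrative sketch (not Prospector's actual algorithm; the trace encoding and names are assumptions for the example) that records the last writing iteration of each address and reports loop-carried read-after-write dependences:

    # Minimal sketch of dynamic data-dependence profiling for a single loop:
    # remember which iteration last wrote each address, and report a
    # loop-carried flow (read-after-write) dependence when a later iteration
    # reads that address. Illustrative only, not Prospector's implementation.
    def profile_loop(trace):
        """trace: list of (iteration, op, address) with op in {'R', 'W'}."""
        last_write = {}      # address -> iteration of the most recent write
        dependences = set()  # (writer_iteration, reader_iteration, address)
        for it, op, addr in trace:
            if op == 'R' and addr in last_write and last_write[addr] != it:
                dependences.add((last_write[addr], it, addr))
            elif op == 'W':
                last_write[addr] = it
        return dependences

    # Iteration 1 reads what iteration 0 wrote, so the loop carries a dependence.
    trace = [(0, 'W', 0x100), (1, 'R', 0x100), (1, 'W', 0x104), (2, 'R', 0x104)]
    print(profile_loop(trace))  # {(0, 1, 256), (1, 2, 260)}, addresses in decimal

A real profiler additionally attributes dependences to source loops and instructions; the sketch only shows the core last-writer bookkeeping.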
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Connie. "Static Conflict Analysis of Transaction Programs." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/1052.

Full text
Abstract:
Transaction programs are comprised of read and write operations issued against the database. In a shared database system, one transaction program conflicts with another if it reads or writes data that another transaction program has written. This thesis presents a semi-automatic technique for pairwise static conflict analysis of embedded transaction programs. The analysis predicts whether a given pair of programs will conflict when executed against the database. There are several potential applications of this technique, the most obvious being transaction concurrency control in systems where it is not necessary to support arbitrary, dynamic queries and updates. By analyzing transactions in such systems before the transactions are run, it is possible to reduce or eliminate the need for locking or other dynamic concurrency control schemes.
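The conflict rule stated above (one program conflicts with another if it reads or writes data the other has written) can be illustrated with a simple read/write-set check; the sketch below is only a generic illustration over hypothetical item-level read and write sets, not the analysis developed in the thesis:

    # Generic pairwise conflict check over read/write sets of data items.
    # Two transaction programs may conflict if one writes an item that the
    # other reads or writes. The item names are hypothetical examples.
    def conflicts(prog1, prog2):
        """Each prog is a (read_set, write_set) pair of item names."""
        r1, w1 = prog1
        r2, w2 = prog2
        return bool(w1 & (r2 | w2)) or bool(w2 & r1)

    deposit = ({'balance'}, {'balance'})
    audit   = ({'balance', 'history'}, set())
    reindex = ({'history'}, {'history'})

    print(conflicts(deposit, audit))    # True: audit reads what deposit writes
    print(conflicts(deposit, reindex))  # False: the item sets are disjoint

A static analysis must conservatively approximate these sets from the program text, which is what makes the problem non-trivial.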
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, HaiYing. "Dynamic purity analysis for Java programs." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18481.

Full text
Abstract:
The pure methods in a program are those that exhibit functional or side effect free behaviour, a useful property of methods or code in the context of program optimization as well as program understanding. However, gathering purity data is not a trivial task, and existing purity investigations present primarily static results based on a compile-time analysis of program code. We perform a detailed examination of dynamic method purity in Java programs using a Java Virtual Machine (JVM) based analysis. We evaluate multiple purity definitions that range from strong to weak, consider purity forms specific to dynamic execution, and accommodate constraints imposed by an example consumer application of purity data, memoization. We show that while dynamic method purity is actually fairly consistent between programs, examining pure invocation counts and the percentage of the bytecode instruction stream contained within some pure method reveals great variation. We also show that while weakening purity definitions exposes considerable dynamic purity, consumer requirements can limit the actual utility of this information. A good understanding of which methods are "pure" and in what sense is an important contribution to understanding when, how, and what optimizations or properties a program may exhibit.
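Since memoization is the consumer application named above, a generic illustration of the connection may help: caching is only safe for calls that behave purely, i.e. whose result depends only on the arguments and that modify nothing observable. The sketch below uses Python's standard functools.lru_cache and is unrelated to the thesis's JVM-level analysis:

    # Generic illustration of why purity enables memoization: the result of a
    # side-effect-free function can be cached per argument and reused.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        """Pure: the result depends only on n, and nothing else is modified."""
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(30))           # 832040, each distinct argument computed once
    print(fib.cache_info())  # the cache hits come from the overlapping subcalls

If fib wrote to a log or read mutable global state, returning a cached result could change the program's behaviour, which is exactly why a purity analysis is needed before applying such a transformation.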
APA, Harvard, Vancouver, ISO, and other styles
7

Lin, Nai-Wei. "Automatic complexity analysis of logic programs." Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186287.

Full text
Abstract:
This dissertation describes research toward automatic complexity analysis of logic programs and its applications. Automatic complexity analysis of programs concerns the inference of the amount of computational resources consumed during program execution, and has been studied primarily in the context of imperative and functional languages. This dissertation extends these techniques to logic programs so that they can handle nondeterminism, namely, the generation of multiple solutions via backtracking. We describe the design and implementation of a (semi)-automatic worst-case complexity analysis system for logic programs. This system can conduct the worst-case analysis for several complexity measures, such as argument size, number of solutions, and execution time. This dissertation also describes an application of such analyses, namely, a runtime mechanism for controlling task granularity in parallel logic programming systems. The performance of parallel systems often starts to degrade when the concurrent tasks in the systems become too fine-grained. Our approach to granularity control is based on time complexity information. With this information, we can compare the execution cost of a procedure with the average process creation overhead of the underlying system to determine at runtime if we should spawn a procedure call as a new concurrent task or just execute it sequentially. Through experimental measurements, we show that this mechanism can substantially improve the performance of parallel systems in many cases. This dissertation also presents several source-level program transformation techniques for optimizing the evaluation of logic programs containing finite-domain constraints. These techniques are based on number-of-solutions complexity information. The techniques include planning the evaluation order of subgoals, reducing the domain of variables, and planning the instantiation order of variable values. This application allows us to solve a problem by starting with a more declarative but less efficient program, and then automatically transforming it into a more efficient program. Through experimental measurements we show that these program transformation techniques can significantly improve the efficiency of the class of programs containing finite-domain constraints in most cases.
APA, Harvard, Vancouver, ISO, and other styles
8

Mitchell, Neil. "Transformation and analysis of functional programs." Thesis, University of York, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.495901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Abu, Hashish Nabil. "Mutation analysis of dynamically typed programs." Thesis, University of Hull, 2013. http://hydra.hull.ac.uk/resources/hull:8444.

Full text
Abstract:
The increasing use of dynamically typed programming languages brings a new challenge to software testing. In these languages, types are not checked at compile-time. Type errors must be found by testing and in general, programs written in these languages require additional testing compared to statically typed languages. Mutation analysis (or mutation testing) has been shown to be effective in testing statically (or strongly) typed programs. In statically typed programs, the type information is essential to ensure only type-correct mutants are generated. Mutation analysis has not so far been fully used for dynamically typed programs. In dynamically typed programs, at compile-time, the types of the values held in variables are not known. Therefore, it is not clear if a variable should be mutated with number, Boolean, string, or object mutation operators. This thesis investigates and introduces new approaches for the mutation analysis of dynamically typed programs. The first approach is a static approach that employs the static type context of variables to determine, if possible, type information and generate mutants in the manner of traditional mutation analysis. With static mutation there is the danger that the type context does not allow the precise type to be determined and so type-mutations are produced. In a type-mutation, the original and mutant expressions have a different type. These mutants may be too easily killed and if they are then they represent incompetent mutants that do not force the tester to improve the test set. The second approach is designed to avoid type-mutations. This approach requires that the types of variables are discovered. The types of variables are discovered at run-time. Using type information, it is possible to generate only type-correct mutants. This dynamic approach, although more expensive computationally, is more likely to produce high quality, difficult to kill, mutants.
APA, Harvard, Vancouver, ISO, and other styles
10

Benton, Peter Nicholas. "Strictness analysis of lazy functional programs." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.281891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Carré, Jean-Loup. "Static analysis of embedded multithreaded programs." Cachan, Ecole normale supérieure, 2010. https://theses.hal.science/tel-01199739.

Full text
Abstract:
This PhD thesis presents a static analysis algorithm for programs with threads. It generalizes abstract interpretation techniques used in the single-threaded case and allows the detection of runtime errors, e.g., invalid pointer dereferences, array overflows, and integer overflows. We have implemented this algorithm. It analyzes a large industrial multithreaded code base (100K LOC) in a few hours. Our technique is modular: it can use any abstract domain designed for the single-threaded case. Furthermore, without any change in the fixpoint computation, some abstract domains allow the detection of data races or deadlocks. This technique does not assume sequential consistency, since, in practice (INTEL and SPARC processors, JAVA, ...), program execution is not sequentially consistent. For example, it works with the TSO (Total Store Ordering) and PSO (Partial Store Ordering) memory models.
APA, Harvard, Vancouver, ISO, and other styles
12

Hardwicke, Shannon Bragg. "An Analysis of Student Assistance Programs." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/27780.

Full text
Abstract:
The purpose of this study was to examine a sample of students who participated in a student assistance program in Southwest Virginia. Using existing data from a school system in Southwest Virginia, this sample was observed to measure changes in student academic performance. The sample was also examined to determine the extent to which the students instituted positive behaviors such as school attendance and reduction of disciplinary actions taken. In addition, this study assessed differences in demographic characteristics among student participants. Also studied was the extent to which gender differences related to academic performance and behavior. Significant changes in students' academic performance, attendance and disciplinary measures were established in the present study. A negative association was found between participation in the student assistance program and grade point average. Positive associations were found between SAP participation and both attendance and disciplinary measures. No significant differences were found when gender was compared to academic performance and behavior. SAP coordinators recorded that the majority of participants improved after referral to the program, and most completed or currently remained in the student assistance program. However, a small percentage of students actually entered treatment programs following recommendations made to parents by the student assistance program committee. This research assessed only the demographic and individual characteristics of gender, gifted or special education status, ethnicity and age. Therefore, other demographics such as socio-economic status may offer additional explanation of the academic and behavior outcomes of students involved in student assistance programs.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
13

Stewart, Jonathan A. "An Analysis of Bilingual Programs in the Context of a Schoolwide Reading Program." DigitalCommons@USU, 2004. https://digitalcommons.usu.edu/etd/6217.

Full text
Abstract:
There has been much controversy over the effectiveness of bilingual education in helping English language learning (ELL) students to become successful students. One variable overlooked in this literature has been the use of effective instruction in these programs. This investigation compared students in a schoolwide reading program that utilizes research-based practices, Success for All (SFA), and its Spanish counterpart Éxito Para Todos (EPT). Three groups of third-grade students were compared at 8-week intervals throughout the school year: English-speaking students in SFA, ELL students in SFA with ESL (English as a Second Language), and ELL students in EPT. All three groups experienced gains over the school year, with the gap between the EPT and SFA-only groups narrowing and no statistically significant differences discovered between the EPT and SFA + ESL groups.
APA, Harvard, Vancouver, ISO, and other styles
14

Scott, Christopher G. "Undergraduate leadership programs: a case study analysis of Marietta College's McDonough Leadership Program." Ohio: Ohio University, 2007. http://www.ohiolink.edu/etd/view.cgi?ohiou1187296643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Scott, Christopher G. "Undergraduate Leadership Programs: A Case Study Analysis of Marietta College’s McDonough Leadership Program." Ohio University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1187296643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

McGraw, Lora K. "Challenging masculinities: a program analysis of male-based university sexual violence prevention programs." Thesis, Kansas State University, 2017. http://hdl.handle.net/2097/35389.

Full text
Abstract:
Master of Arts
Department of Sociology, Anthropology, and Social Work
Nadia Shapkina
This study examines male-based sexual violence prevention programs on college campuses. In an effort to combat the widespread problem of sexual violence against college women, universities have implemented sexual assault prevention programs. While past programs have focused on risk-reduction strategies that target women, new programs are beginning to focus on approaching men to challenge hegemonic masculinity and gender social norms that are conducive to sexual violence. Thus far, the methods of these programs have not been studied in detail. This study uses interviews, observation, and document analysis to analyze the methods and messages of male-based sexual violence prevention programs at six universities in the United States. The research describes and analyzes the origins, goals, structures, strategies, success, and challenges of these programs. Their strengths and limitations are discussed, and suggestions and considerations for the programs are provided. As male-based violence prevention programs become more popular on college campuses, this research offers a deeper understanding of these programs that may inform and improve the effort to combat violence against college women.
APA, Harvard, Vancouver, ISO, and other styles
17

Hackett, James Simpson. "An economic analysis of multiple use forestry using FORPLAN-Version 2." Thesis, University of British Columbia, 1989. http://hdl.handle.net/2429/29033.

Full text
Abstract:
This thesis examines a mathematical programming model called FORPLAN as a planning tool for strategic analysis of forest management alternatives. This model uses economic efficiency as the objective of forest management planning. The dynamic theory of multiple use forestry is analyzed and expressed as a linear programming analogue in FORPLAN. The main weakness of this theory is that it focuses on single stand analysis. Even so, forest-wide constraints applied to certain FORPLAN formulations compensate for this weakness. A strata-based forest management problem is developed to show the economic implications of four forest management alternatives: (1) timber production; (2) timber production subject to a non-declining yield limitation; (3) timber and black-tailed deer (Odocoileus hemionus columbianus) production; and (4) timber and black-tailed deer production, again including a non-declining yield of timber. Demand curves for two analysis areas and a supply curve for deer winter range are developed using parametric analysis. The ability of FORPLAN to address economic implications of current forest management policies is discussed. Economic analysis of forest management alternatives would play a useful role in forest planning in British Columbia. The need for such evaluation is underlined by the ever-increasing number of resource conflicts caused by the dominance of the timber industry and the continually growing demand for other forest resources. Three conclusions are drawn from this study. First, FORPLAN has the technical capability to be an effective tool for analyzing strategic multiple use plans under economic efficiency criteria. It does not have the timber bias of earlier models, and the capability of FORPLAN to integrate area- and strata-based variables makes it a very powerful model. Second, parametric programming of FORPLAN solutions provides marginal analysis for inputs and outputs. Comparative examination of these curves and their elasticities provides information about the relative importance of different analysis areas. Lastly, managing for timber and hunting services for black-tailed deer by preserving old growth winter range is not an economically viable management option. The relative value of the timber is so much greater than that of the hunting services for the deer that it is just not worth managing for both.
Forestry, Faculty of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
18

Wu, Jerry. "Using dynamic analysis to infer Python programs and convert them into database programs." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/121643.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 195-196).
I present Nero, a new system that automatically infers and regenerates programs that access databases. The developer first implements a Python program that uses lists and dictionaries to implement the database functionality. Nero then instruments the Python list and dictionary implementations and uses active learning to generate inputs that enable it to infer the behavior of the program. The program can be implemented in any arbitrary style as long as it implements behavior expressible in the domain specific language that characterizes the behaviors that Nero is designed to infer. The regenerated program replaces the Python lists and dictionaries with database tables and contains all code required to successfully access the databases. Results from several inferred and regenerated applications highlight the ability of Nero to enable developers with no knowledge of database programming to obtain programs that successfully access databases.
by Jerry Wu.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
19

Rodriguez, Dulma. "Amortised resource analysis for object-oriented programs." Diss., lmu, 2012. http://nbn-resolving.de/urn:nbn:de:bvb:19-149832.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Bernhard, Joshua C. "An analysis of Company XYZ's insurance programs." Menomonie, WI : University of Wisconsin--Stout, 2005. http://www.uwstout.edu/lib/thesis/2005/2005bernhardj.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Nguyen, Phung Hua, Computer Science & Engineering, Faculty of Engineering, UNSW. "Static analysis for incomplete object-oriented programs." Awarded by: University of New South Wales, School of Computer Science and Engineering, 2005. http://handle.unsw.edu.au/1959.4/24228.

Full text
Abstract:
Static analysis is significant since it provides the information about the run-time behaviour of an analysed program. Such information has many applications in compiler optimisations and software engineering tools. Interprocedural analysis is a form of static analysis, which can exploit information available across procedure boundaries. The analysis is traditionally designed as whole-program analysis, which processes the entire program. However, whole-program analysis is problematic when parts of the analysed program are not available to participate in analysis. In this case, a whole-program analysis has to make conservative assumptions to be able to produce safe analysis results at the expense of some possible precision loss. To improve analysis precision, an analysis can exploit the access control mechanism provided by the underlying program language. This thesis introduces a points-to analysis technique for incomplete object-oriented programs, called completeness analysis, which exploits the access and modification properties of classes, methods and fields to enhance the analysis precision. Two variations of the technique, compositional and sequential completeness analysis, are described. This thesis also presents a mutability analysis (MA) and MA-based side-effect analysis, which are based on the output of completeness analysis, to determine whether a variable is potentially modified by the execution of a program statement. The results of experiments carried out on a set of Java library packages are presented to demonstrate the improvement in analysis precision.
APA, Harvard, Vancouver, ISO, and other styles
22

Novillo, Diego. "Analysis and optimization of explicitly parallel programs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0012/NQ60007.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Lehner, William D. "An analysis of Naval Officer accession programs." Monterey, Calif.: Naval Postgraduate School, 2008. http://bosun.nps.edu/uhtbin/hyperion-image.exe/08Mar%5FLehner.pdf.

Full text
Abstract:
Thesis (M.S. in Leadership and Human Resources Development)--Naval Postgraduate School, March 2008.
Thesis Advisor(s): Horner, Donald H., Jr. ; Mehay, Stephen L. "March 2008." Description based on title screen as viewed on May 1, 2008. Includes bibliographical references (p. 85-98). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
24

Lehner, William D. "An analysis of Naval Officer accession programs." Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://handle.dtic.mil/100.2/ADA479949.

Full text
Abstract:
Thesis (M.S. in Leadership and Human Resources Development)--Naval Postgraduate School, March 2008.
Thesis Advisor(s): Horner, Donald H., Jr. ; Mehay, Stephen L. "March 2008." Title from title page of PDF document (viewed on: Jul 3, 2008). Includes bibliographical references (p. 85-98).
APA, Harvard, Vancouver, ISO, and other styles
25

Sereni, Damien. "Termination analysis of higher-order functional programs." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.437001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Weiser, David A. "Hybrid analysis of multi-threaded Java programs." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1400971421&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Greenquist, Chad J. "An analysis of IABC Minnesota training programs." Online version, 1998. http://www.uwstout.edu/lib/thesis/1999/1999greenquistc.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Sands, David. "Calculi for time analysis of functional programs." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46536.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Balachandra, Lakshmi 1974. "Experimental learning programs : an analysis and review." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28687.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management, 2004.
"June 2004 -- revised October 2004."
Includes bibliographical references (leaves 47-48).
Experiential Learning programs have increasingly been included in corporate training programs. Today there is a wide range of experiential learning programs using a variety of methodologies. However, there is a surprising dearth of research on the effectiveness of such programs for learning in business. This thesis reviews and analyzes one form of experiential learning--a program that utilizes outdoor activities for leadership and teamwork training--to understand the value proposition of such education for corporate clients. From this, a framework for implementing a successful experiential learning program was suggested and then analyzed by the design and delivery of a new, original experiential training program utilizing improvisational theater techniques. Finally, a method to evaluate experiential learning programs both before and after purchase is suggested.
by Lakshmi Balachandra.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
30

Lamb, Andrew Allinson 1980. "Linear analysis and optimization of stream programs." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29668.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 123-127).
As more complex DSP algorithms are realized in practice, there is an increasing need for high-level stream abstractions that can be compiled without sacrificing efficiency. Toward this end, we present a set of aggressive optimizations that target linear sections of a stream program. Our input language is StreamIt, which represents programs as a hierarchical graph of autonomous filters. A filter is linear if each of its outputs can be represented as an affine combination of its inputs. Linearity is common in DSP components; examples include FIR filters, expanders, compressors, FFTs and DCTs. We demonstrate that several algorithmic transformations, traditionally handtuned by DSP experts, can be completely automated by the compiler. First, we present a linear extraction analysis that automatically detects linear filters from the C-like code in their work function. Then, we give a procedure for combining adjacent linear filters into a single filter, a specialized caching strategy to remove redundant computations, and a method for translating a linear filter to operate in the frequency domain. We also present an optimization selection algorithm, which finds the sequence of combination and frequency transformations that yields the maximal benefit. We have completed a fully-automatic implementation of the above techniques as part of the StreamIt compiler. Using a suite of benchmarks, we show that our optimizations remove, on average, 86% of the floating point instructions required. In addition, we demonstrate an average execution time decrease of 450% and an 800% decrease in the best case.
by Andrew Allinson Lamb.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
31

Rassa, Adam Omary. "A cost analysis of medicine donation programs to Tanzania’s neglected tropical diseases control program." University of the Western Cape, 2019. http://hdl.handle.net/11394/7055.

Full text
Abstract:
Masters of Public Health - see Magister Public Health
Overreliance on donor-supported health programs has crippled many African countries, and there is inadequate long-term planning for the future sustainability of health systems. In an age of uncertainty in global politics and the global economy, the future of these donor-funded programs is also uncertain. It is imperative for African nations to begin to take responsibility for their health programs. Inasmuch as the name "donation" suggests that something is given free of charge, in reality this may not be the case due to the hidden costs attached. In medicine access, the hidden costs are the supply chain costs, including costs for clearance, storage and distribution of such medicines, which are charged as a percentage of the claimed commodity costs on donors' or suppliers' invoices. Since the medicines donated are originators' brands, the invoiced prices are high and thus the supply chain costs are high as well. In some cases, it is thought that the hidden costs are higher than the cost of the medicines had they been sourced locally as generics. The aim of this research was to assess and determine the hidden supply chain costs associated with the four medicine donation programs supporting the Tanzania Neglected Tropical Diseases Program and to inform policy decisions on optimal financing options for the program. Methodology: The cost analysis of the two options was undertaken from a payer's perspective, which in this case is the Government of Tanzania (Ministry of Health). Data were collected on both product and supply chain cost drivers incurred in the medicine donation programs from July 2014 to June 2017. Costs of the current mechanism were obtained from the program's quantification reports and transaction data for the study period. Transactional data were obtained from shipment documents, including sales invoices, packing lists, proofs of delivery and goods receiving notes, which were evaluated for actual quantities shipped, commodity prices and other supply chain costs. To verify the actual supply chain costs charged by the program, both the official bills from the Medical Stores Department (MSD) to the program and the electronic bills available in the MSD electronic database covering the study period were studied.
APA, Harvard, Vancouver, ISO, and other styles
32

Hooker, Taylor. "Equine Assisted Programs for Military Service Members: A Program Evaluation Using Importance-Performance Analysis." Thesis, Clemson University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10792627.

Full text
Abstract:

Developing research, anecdotal evidence and a growing focus on non-pharmacological interventions for veterans with post-traumatic stress support the use of equine therapy as a therapeutic outlet; however, programmatic factors that contribute to veterans' desire to attend such programs are under-investigated. Furthermore, evaluative processes in equine therapy for this particular population are scarce and vary greatly from program to program. The Importance-Performance Analysis (IPA) tool, when applied to social services, yields direct, applicable feedback on program success and relevancy. In this study, interviews with the selected population informed the evaluation tool used to assess the importance, and subsequent performance, of various program factors in a national military-specific equine therapy program. Results of this study provided insight into key factors being sought in similar equine therapy programs to inform the development and maintenance of programs serving the veteran population. The application of the IPA, a consumer feedback tool typically reserved for market research, to the health and human services sector provides a new pathway for quality assurance and program analysis in the equine therapy field.

APA, Harvard, Vancouver, ISO, and other styles
33

Tinker, Audrey Kristen. "The Austin Green Building Program: an analysis of the program's effectiveness." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969.1/1492.

Full text
Abstract:
Current water shortages in the United States and Texas are expected to only worsen, so that by 2050 approximately 40% of both U.S. and Texas residents will live in areas of water scarcity (U.S. House Committee, 2003; Texas Water Development Board, 2003). In response to these grim projections, both lawmakers and environmentalists are calling for conservation measures so that future shortages or costly new supply initiatives are avoided. One area where substantial consumption decreases could be made is the municipal sector, which is projected to account for 35% of all water consumed in Texas by 2050 (Texas Water Development Board, 2002). Both organizations and voluntary programs have been established to reduce water consumption in this area. One of the largest and most innovative programs in the state is the Austin Green Building Program (AGBP). It was the first program of its kind in the U.S. that rates new homes and remodels with regard to five categories related to sustainability: energy efficiency, water efficiency, materials efficiency, health and safety, and community (City of Austin, 2001). This research identified the factors (weather, home size, lot size, appraised value, and existence of a pool) that affect water consumption for residences qualifying as "Austin Green Homes", and identified those green features or designs that had the greatest effect on water consumption, that were most commonly included, and the reasons why contractors incorporated them. Based on an analysis of R² values, non-green features such as temperature, rainfall, home and lot size, appraised value and a pool seemed to have the greatest impact on water consumption, each showing a positive relationship with use. When green features were investigated, findings showed that different features were effective in reducing water consumption for different builders, and in many cases water-conserving features actually led to increased use. Finally, results showed that large builders incorporated fewer water-related green features in their homes and achieved lower star ratings in general than small green builders.
APA, Harvard, Vancouver, ISO, and other styles
34

Nagapattinam, Ramamurthi Ragavendar. "Dynamic Trace-based Analysis of Vectorization Potential of Programs." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339733528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Zhan, Jin Ping. "Review and verification of marine riser analysis programs." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for marin teknikk, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11640.

Full text
Abstract:
The availability of diverse computation packages for marine risers eases structural assessment and reduces the cost of experiments and design. Successful industrial applications have proven that time domain analysis programs provide an effective solution for global response analysis. Nevertheless, a good command of the methodologies adopted by popular programs and awareness of the limitations of the different techniques are imperative if the analyst intends to make proper use of the computerized tools. Frequently the correctness of analysis results remains uncertain, due to the lack of a comparison approach and of a proficient understanding of structural behavior, even if the analysis is conducted as instructed by the software supplier. This thesis reviews the most popular computation programs. Comparison work is done to indicate their common features and particular characteristics. Further, a careful examination lists the limitations and uncertainties of the applied analysis technologies and explains the source of the problems. General guidance is provided on how to avoid these unsolved imperfections. The core part of this thesis is the verification of Riflex for the global analysis of catenary risers. Static validation is based on Faltinsen's catenary equations. MATLAB is employed to program a simple routine to calculate the static configuration of SCR and LWR. Dynamic validation refers to a boundary layer value method proposed by J.A.P. Aranha. A semi-independent comparison is performed to verify the dynamic bending moment at the TDA. Last, a parametric study is carried out to investigate the stability of numeric integration. The effects of time step setting and mesh density are studied. Besides, to better understand the structural behavior of catenary risers, the effects of water depth, riser wall thickness and arc length are tested on the basis of previous work from the last semester.
APA, Harvard, Vancouver, ISO, and other styles
36

Cain, Andrew Angus. "Dynamic data flow analysis for object oriented programs." Swinburne University of Technology, 2005. http://adt.lib.swin.edu.au./public/adt-VSWT20060904.161506.

Full text
Abstract:
There are many tools and techniques to help developers debug and test their programs. Dynamic data flow analysis is such a technique. Existing approaches for performing dynamic data flow analysis for object oriented programs have tended to be data focused and procedural in nature. An approach to dynamic data flow analysis that uses object-oriented principles would provide a more natural solution to analysing object oriented programs. Dynamic data flow analysis approaches consist of two primary aspects: a model of the data flow information, and a method for collecting action information from a running program. The model for data flow analysis presented in this thesis uses a meta-level object oriented approach. To illustrate the application of this meta-level model, a model for the Java programming language is presented. This provides an instantiation of the meta-level model. Finally, several methods are presented for collecting action information from Java programs. The meta-level model contains elements to represent both data items and scoping components (i.e. methods, blocks, objects, and classes). At runtime the model is used to create a representation of the executing program that is used to perform dynamic data flow analysis. The structure of the model is created in such a way that locating the appropriate meta-level entity follows the scoping rules of the language. In this way actions that are reported to the meta-model are routed through the model to their corresponding meta-level elements. The Java model presented contains classes that can be used to create the runtime representation of the program under analysis. Events from the program under analysis are then used to update the model. Using this information, developers are able to locate where data items are incorrectly used within their programs. Methods for collecting action information from Java programs include source code instrumentation, as used in earlier approaches, and approaches that use Java byte code transformation and the facilities of the Java Platform Debugger Architecture. While these approaches aimed to achieve a comprehensive analysis, there are several issues that could not be resolved using the approaches covered. Of the approaches presented, byte code transformation is the most practical.
APA, Harvard, Vancouver, ISO, and other styles
37

Baspaly, Dave. "Analysis of community mediation programs in North America." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ54546.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Weston, Nathan Philip. "Incremental data-flow analysis for aspect-oriented programs." Thesis, Lancaster University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543988.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Bernard, Amy Lynn. "A descriptive analysis of selected smoking cessation programs." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/774763.

Full text
Abstract:
The purpose of this research was to compare and contrast the components and characteristics of selected widely available smoking cessation programs. To reach this goal, an evaluation form was developed after an extensive review of the literature which addressed the structure, duration, techniques, issues which were discussed, success rates and availability of the programs. This form was tested for content validity by a jury of experts and was used to review each of thirteen selected smoking cessation programs. The reviews were conducted by the author using program materials received from the sponsoring organizations. Any questions which could not be answered with these materials were answered through a telephone interview with a representative of the sponsoring organization. Once the reviews were completed, the information was transferred to table form and to a database so that collective data could be generated. The following conclusions were drawn from the table and the data generated: the existing smoking cessation programs appear to have been developed utilizing suggestions offered in the literature, to use similar program techniques, and a great deal of variance exists in terms of success rates and cost.
Department of Physiology and Health Science
APA, Harvard, Vancouver, ISO, and other styles
40

Green, Kerrie L. "A descriptive analysis of cardiac rehabilitation education programs." Virtual Press, 2000. http://liblink.bsu.edu/uhtbin/catkey/1177976.

Full text
Abstract:
The purpose of this research was to obtain information on the content of education within cardiac rehabilitation programs, methods of administering education, what the barriers are to providing education and which professionals administer education. To reach this goal, a questionnaire was modified from a previous study and a pilot study was undertaken to establish reliability of the questionnaire. The questionnaire was then sent to a sample of 100 directors of cardiac rehabilitation programs belonging to The American Association of Cardiovascular and Pulmonary Rehabilitation (AACVPR). The questionnaire focused on 13 established areas of education within cardiac rehabilitation programs. Once the questionnaires were completed, the information was transferred to a table format based upon the 13 content areas. The following conclusions were drawn from the research and the data gathered: 11 of the 13 content areas are offered at least 84% of the time, the major barriers for the 13 content areas were lack of time and lack of interest on the patient's behalf, the most frequent methods of education for all 13 content areas were individual education, print materials, and group education, and the primary educator overall for all 13 content areas was the nurse, followed by the exercise physiologist and dietitian/nutritionist.
Department of Physiology and Health Science
APA, Harvard, Vancouver, ISO, and other styles
41

Nakade, Radha Vi. "Verification of Task Parallel Programs Using Predictive Analysis." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6176.

Full text
Abstract:
Task parallel programming languages provide a way of creating asynchronous tasks that can run concurrently. The advantage of using task parallelism is that the programmer can write code that is independent of the underlying hardware. The runtime determines the number of processor cores that are available and the most efficient way to execute the tasks. When two or more concurrently executing tasks access a shared memory location and at least one of the accesses is a write, a data race is observed in the program. Data races can introduce non-determinism in the program output, making it important to have data race detection tools. To detect data races in task parallel programs, a new sound and complete technique based on computation graphs is presented in this work. The data race detection algorithm runs in O(N²) time, where N is the number of nodes in the graph. A computation graph is a directed acyclic graph that represents the execution of the program. For detecting data races, the computation graph stores the shared heap locations accessed by the tasks. An algorithm for creating computation graphs augmented with the memory locations accessed by the tasks is also described here. This algorithm runs in O(N) time, where N is the number of operations performed in the tasks. This work also presents an implementation of this technique for the Java implementation of the Habanero programming model. The results of this data race detector are compared to Java Pathfinder's precise race detector extension and its permission-regions-based race detector extension. The results show a significant reduction in the time required for data race detection using this technique.
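As a rough illustration of the check described above, two nodes of a computation graph race when neither is ordered before the other and they access a common shared location with at least one write; the sketch below enumerates node pairs in the quadratic manner mentioned in the abstract. The graph encoding and access sets are assumptions for the example, not the thesis's Habanero/Java implementation:

    # Illustrative pairwise race check over a computation graph (a DAG whose
    # nodes carry the shared locations they access). Two unordered nodes race
    # if they touch the same location and at least one access is a write.
    from itertools import combinations

    def reachable(succ, a, b):
        """True if node b is reachable from node a along graph edges."""
        stack, seen = [a], set()
        while stack:
            n = stack.pop()
            if n == b:
                return True
            if n not in seen:
                seen.add(n)
                stack.extend(succ.get(n, ()))
        return False

    def races(succ, accesses):
        """accesses maps each node to a set of (location, 'R' or 'W')."""
        found = []
        for a, b in combinations(accesses, 2):
            if reachable(succ, a, b) or reachable(succ, b, a):
                continue  # ordered nodes cannot race
            for loc1, op1 in accesses[a]:
                for loc2, op2 in accesses[b]:
                    if loc1 == loc2 and 'W' in (op1, op2):
                        found.append((a, b, loc1))
        return found

    # Fork-join example: t1 and t2 run unordered between the fork and the join.
    succ = {'fork': ['t1', 't2'], 't1': ['join'], 't2': ['join'], 'join': []}
    accesses = {'fork': set(), 'join': set(),
                't1': {('x', 'W')}, 't2': {('x', 'R')}}
    print(races(succ, accesses))  # [('t1', 't2', 'x')]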
APA, Harvard, Vancouver, ISO, and other styles
42

Sălcianu, Alexandru D. (Alexandru Doru) 1975. "Pointer analysis and its applications for Java programs." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86781.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 135-137).
by Alexandru D. Sălcianu.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
43

Navaratnam, K. K. "Cost-benefit analysis of secondary vocational education programs." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/76461.

Full text
Abstract:
The purpose of this study was to propose and field test a cost-benefit analysis model to determine the profitability of secondary vocational education programs. The model consisted of costs, process, and benefits components. Instructional personnel, building, equipment, materials and supplies, administration, travel, services, utilities, and maintenance were the major components of the costs. Process implied the actual conduct of the program. Increased earnings from graduates' employment, earnings from cooperative placement, provision of services, and noneconomic benefits obtained by the graduates were the components of the benefits. Costs and benefits data for field testing the model were obtained from four programs from the four vocational service areas of trade and industrial, occupational home economics, business education, and marketing and distributive education selected from both a comprehensive high school and an area vocational education center in the Roanoke County School Division, Virginia. All graduates of 1983/84 of the four programs were surveyed to gather data on them. A 73.9% return was obtained from the survey. The difference between the graduates' current earnings and earnings determined by using the Federal minimum wage for the same number of work hours by employed graduates was considered as an income benefit. Actual differences between discounted benefits and the gross costs were used to determine the profitability of programs. The following conclusions were drawn from the findings of this study: 1. The trade and industrial, business education, and marketing and distributive education programs were economically profitable. 2. The occupational home economics program was not economically profitable. 3. Graduates in each program have obtained several noneconomic benefits. 4. The proposed cost-benefit analysis model was determined useable and transportable to other vocational education settings. Based on the findings and conclusions of this study, the following recommendations were drawn: 1. That local vocational administrative units use the concept of cost-benefit analysis as an evaluation technique for secondary vocational education programs. 2. That a research study be conducted to determine what other costs and benefits should be considered in the model. 3. That a research study be conducted to determine the economic value of noneconomic benefits. 4. That a longitudinal cost-benefit analysis is needed to determine economic earning and type of jobs held by graduates after graduation. 5. That a study be conducted using cost-benefit analysis with an appropriate comparison group to vocational graduates. 6. That an annual cost-benefit analysis of vocational programs be conducted for each school system to make comparative judgement of their programs. 7. That post-secondary vocational programs explore the possibility of using cost-benefit analysis for evaluating programs.
Ed. D.
APA, Harvard, Vancouver, ISO, and other styles
44

Fuller, Judith Ann. "A comparative analysis of two television reading programs /." The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487327695620589.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Naci, Sofiane. "Data analysis and optimisation of array-dominated programs." Thesis, University of Cambridge, 2008. https://www.repository.cam.ac.uk/handle/1810/252096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hayashi, Yasushi. "Shape-based cost analysis of skeletal parallel programs." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/14029.

Full text
Abstract:
This work presents an automatic cost-analysis system for an implicitly parallel skeletal programming language. Although deducing interesting dynamic characteristics of parallel programs (and in particular, run time) is well known to be an intractable problem in the general case, it can be alleviated by placing restrictions upon the programs which can be expressed. By combining two research threads, the “skeletal” and “shapely” paradigms which take this route, we produce a completely automated, computation and communication sensitive cost analysis system. This builds on earlier work in the area by quantifying communication as well as computation costs, with the former being derived for the Bulk Synchronous Parallel (BSP) model. We present details of our shapely skeletal language and its BSP implementation strategy together with an account of the analysis mechanism by which program behaviour information (such as shape and cost) is statically deduced. This information can be used at compile-time to optimise a BSP implementation and to analyse computation and communication costs. The analysis has been implemented in Haskell. We consider different algorithms expressed in our language for some example problems and illustrate each BSP implementation, contrasting the analysis of their efficiency by traditional, intuitive methods with that achieved by our cost calculator. The accuracy of cost predictions by our cost calculator against the run time of real parallel programs is tested experimentally. Previous shape-based cost analysis required all elements of a vector (our nestable bulk data structure) to have the same shape. We partially relax this strict requirement on data structure regularity by introducing new shape expressions in our analysis framework. We demonstrate that this allows us to achieve the first automated analysis of a complete derivation, the well known maximum segment sum algorithm of Skillicorn and Cai.
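For readers unfamiliar with the BSP cost model used here, a superstep is conventionally charged its maximum local computation time plus a communication term proportional to the largest number of words any processor sends or receives (the h-relation) and a fixed synchronization latency: w + h·g + l. The sketch below, with made-up machine parameters, only illustrates the kind of prediction the thesis's cost calculator automates; it is not the calculator itself.

```python
# Standard BSP cost of one superstep: w + h*g + l, summed over supersteps.
# The machine parameters g (gap, cost per word) and l (barrier latency)
# below are invented for illustration; in practice they are benchmarked
# per machine.

def superstep_cost(w, h, g, l):
    """w: max local work, h: max words sent/received by any processor."""
    return w + h * g + l

def program_cost(supersteps, g, l):
    return sum(superstep_cost(w, h, g, l) for w, h in supersteps)

# (work, h-relation) per superstep of a hypothetical skeletal program
supersteps = [(10_000, 256), (4_000, 1_024), (12_000, 0)]
print(program_cost(supersteps, g=4.1, l=1_500))   # predicted cost in abstract time units
```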
APA, Harvard, Vancouver, ISO, and other styles
47

Ohlsson, Henry. "Cost-benefit analysis of labor market programs : applied to a temporary program in northern Sweden." Doctoral thesis, Umeå universitet, Institutionen för nationalekonomi, 1988. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-65820.

Full text
Abstract:
The study's objective is to evaluate the relief works and special projects that were in effect during the period 1983-1986 due to the labor force reduction of almost 2 000 persons at the LKAB Mining Company in 1983. These reductions caused the Swedish Parliament to set up a special labor market policy organization, the Malmfältsdelegation, which besides initiating relief works and special projects also functioned as an employment exchange, an employability assessment center, and a training organizer. The study has three main chapters. In the first of these the welfare implications of public production in a disequilibrium model are analyzed. The background to this is that the traditional cost-benefit rules are not very well suited to this particular evaluation problem. The object of the chapter is to derive rules within the context of a model that, whilst simple, resembles in essential points the real-world situation within which the Malmfältsdelegation had to work. The delegation's relief works and special projects are represented, in the model, by production in public firms. The second main chapter is a descriptive account of the Malmfältsdelegation's relief works and special projects. The variables discussed are costs, subsidies, temporary employment, and permanent employment. In this chapter the distribution of these variables is accounted for according to type of subsidy receiver, location of the projects, branch, and occupational groups. Furthermore, the plans are compared with the outcomes. The actual evaluation can be found in the final main chapter. By way of introduction, there is a discussion of what the labor market situation in Malmfälten would have been like in the absence of the temporary organization. With this as a reference, the actual incomes of the former LKAB employees are compared with the incomes they would have had in two hypothetical alternative courses of events. The first of these implies that no extra labor market policy measures had been taken, and the other is based on the assumption that the former LKAB employees had been offered labor market services to the same extent as other job-seekers in the inland area of northern Sweden. An analysis of the welfare effects of the Malmfältsdelegation's relief works and special projects completes this chapter. This analysis is based on the cost-benefit rules presented in the first main chapter. The principal conclusions of the study are that labor market policy measures may give positive income and welfare effects in a region facing a situation similar to that faced by Malmfälten during the first half of the eighties. However, the Malmfältsdelegation's measures have not been more effective than those of the regular labor market policy organization.

Diss. Umeå : Umeå universitet, 1988
APA, Harvard, Vancouver, ISO, and other styles
48

Nailor, Michael G. "The economics of voluntary cost-share programs, an analysis of the rural water quality program." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0032/MQ47351.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Settenvini, Matteo. "Algorithmic Analysis of Name-Bounded Programs : From Java programs to Petri Nets via π-calculus." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3112.

Full text
Abstract:
Context. Name-bounded analysis is a type of static analysis that allows us to take a concurrent program, abstract away from it, and check for interesting properties such as deadlock-freedom, or watch the propagation of variables across different components or layers of the system. Objectives. In this study we investigate the difficulties of giving a representation of computer programs in a name-bounded variation of π-calculus. Methods. A preliminary literature review is conducted to assess the presence (or lack thereof) of other successful translations from real-world programming languages to π-calculus, as well as the presence of relevant prior art in the modelling of concurrent systems. Results. This thesis gives a novel translation going from a relevant subset of the Java programming language to its corresponding name-bounded π-calculus equivalent. In particular, the strengths of our translation are the ability to dispose of names representing inactive objects when there are no circular references, and the transparent handling of polymorphism and dynamic method resolution. The resulting processes can then be further transformed into their Petri net representation, enabling us to check for important properties, such as reachability and coverability of program states. Conclusions. We conclude that some important properties which are not, in general, easy to check for concurrent programs can in fact be feasibly determined by first giving a more constrained model in π-calculus, and then as Petri nets.
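As a rough illustration of the last step mentioned in this abstract — checking reachability on the Petri net obtained from the translation — the sketch below explores the markings of a small, hand-written place/transition net by breadth-first search. The net itself is invented and has no connection to the thesis's actual translation, which would produce the net from a π-calculus term derived from a Java program.

```python
# Minimal reachability check on a place/transition Petri net by BFS over
# markings. The example net below is hypothetical.
from collections import deque

# Each transition is (consume, produce): tokens removed from / added to places.
transitions = [
    ((1, 0, 0), (0, 1, 0)),   # t1: p0 -> p1
    ((0, 1, 0), (0, 0, 1)),   # t2: p1 -> p2
    ((0, 0, 1), (1, 0, 0)),   # t3: p2 -> p0
]

def fire(marking, consume, produce):
    if all(m >= c for m, c in zip(marking, consume)):
        return tuple(m - c + p for m, c, p in zip(marking, consume, produce))
    return None   # transition not enabled in this marking

def reachable(initial, target, limit=100_000):
    """BFS over the marking graph; guaranteed to terminate only for bounded nets."""
    seen, queue = {initial}, deque([initial])
    while queue and len(seen) < limit:
        marking = queue.popleft()
        if marking == target:
            return True
        for consume, produce in transitions:
            nxt = fire(marking, consume, produce)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable(initial=(1, 0, 0), target=(0, 0, 1)))   # True for this toy net
```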
APA, Harvard, Vancouver, ISO, and other styles
50

Maglaras, Dimitrios. "A formal mechanism for analysis and re-implementation of legacy programs." Thesis, University of South Wales, 2001. https://pure.southwales.ac.uk/en/studentthesis/a-formal-mechanism-for-analysis-and-reimplementation-of-legacy-programs(2173ac6f-5d70-4bf4-8efb-c4fe4ff7d17a).html.

Full text
Abstract:
The last ten years of this century have been characterized as a period of software crisis. The need for information is growing day by day, and traditional development tools have proven insufficient to serve this need. This has led software engineers to create new technologies that are more efficient at manipulating data and developing software systems. However, what will happen to all those large software applications that were developed in the past with traditional development tools such as 3rd GLs? In the real world a great many software applications developed using 3rd GLs are still running in business, and there are many reasons why these applications need to be modified in order to keep running effectively. The introduction of the EURO as a common currency in Europe is a well-known problem concerning these old applications. In most cases the documentation describing the requirements, design and implementation of legacy software systems does not exist or is too poor to be useful. This thesis provides a mechanism for recovering design and implementation information about a software system by examining its source code. The mechanism is based on a software tool that extracts useful information from the source code of an old application. The information concerning the design and implementation of the software system is modularized into smaller pieces, and this modularization defines the scheme used to retrieve, manipulate and finally present the information to users. The scheme treats the pieces of information gathered from the source code as separate objects related to each other. These objects, together with their relations, are stored in a semantic network (database). The contents of this database can be browsed in a way that provides critical and meaningful information about the implementation and design of the software system. A software module, called a parser, is developed that extracts pieces of information by parsing all the source files of the old application line by line. This information is stored in the semantic network, and a separate tool is configured to retrieve information from the semantic network and present it on screen through a GUI. In the first chapter an introduction to this research project is given. In the second chapter the documentation gathered on the research area of software analysis and reuse is studied and analyzed. In the third chapter the requirements and specifications of the proposed mechanism are set out. Chapters four and five present the design and implementation of the semantic network that contains the pieces of information gathered from the source code, and of the source code parser. In the sixth chapter the developed mechanism is tested against its specifications. Finally, in the seventh chapter a large industrial data-processing software application is analyzed.
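To make the parser/semantic-network pipeline described in this abstract concrete, here is a toy sketch that scans C-like source text line by line, records function definitions and call relationships as objects and relations in a small in-memory graph, and answers a simple "who calls whom" query. The regular expressions and the graph structure are invented for illustration and are far simpler than the thesis's mechanism.

```python
# Toy sketch: parse source text line by line, store the extracted facts as
# nodes and "calls" relations in a small in-memory semantic network, then
# query it. Purely illustrative.
import re
from collections import defaultdict

DEF_RE = re.compile(r"^\s*\w[\w\s\*]*\s+(\w+)\s*\([^;]*\)\s*\{")  # crude C function definition
CALL_RE = re.compile(r"(\w+)\s*\(")                               # crude call-site pattern

def build_network(source: str):
    calls = defaultdict(set)          # relation: caller -> {callees}
    current = None
    for line in source.splitlines():
        m = DEF_RE.match(line)
        if m:
            current = m.group(1)
            calls.setdefault(current, set())
            continue
        if current:
            for callee in CALL_RE.findall(line):
                if callee != current:
                    calls[current].add(callee)
    return calls

example = """
int helper(int x) {
    return x + 1;
}
int main(void) {
    int y = helper(2);
    printf("%d\\n", y);
    return 0;
}
"""
network = build_network(example)
print(network["main"])    # {'helper', 'printf'} (set order may vary)
```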
APA, Harvard, Vancouver, ISO, and other styles