Dissertations / Theses on the topic 'Random testing'


Consult the top 50 dissertations / theses for your research on the topic 'Random testing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Oftedal, Kristian. "Random Testing versus Partition Testing." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-13985.

Abstract:
The difference between Partition Testing and Random Testing has been thoroughly investigated theoretically. In this thesis we present a practical study of the differences between random testing and partition testing. The study is performed on the open-source project Buddi with JUnit and Randoop as test tools. The comparison is made with respect to coverage rate and fault rate. The results are discussed and analyzed. The observed differences are statistically significant at the 10% level with respect to coverage rate, in favour of partition testing, and not statistically significant at the 10% level with respect to the fault rate.
2

Pacheco, Carlos Ph D. Massachusetts Institute of Technology. "Directed random testing." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53297.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 155-162).
Random testing can quickly generate many tests, is easy to implement, scales to large software applications, and reveals software errors. But it tends to generate many tests that are illegal or that exercise the same parts of the code as other tests, thus limiting its effectiveness. Directed random testing is a new approach to test generation that overcomes these limitations, by combining a bottom-up generation of tests with runtime guidance. A directed random test generator takes a collection of operations under test and generates new tests incrementally, by randomly selecting operations to apply and finding arguments from among previously-constructed tests. As soon as it generates a new test, the generator executes it, and the result determines whether the test is redundant, illegal, error-revealing, or useful for generating more tests. The technique outputs failing tests pointing to potential errors that should be corrected, and passing tests that can be used for regression testing. The thesis also contributes auxiliary techniques that post-process the generated tests, including a simplification technique that transforms a failing test into a smaller one that better isolates the cause of failure, and a branch-directed test generation technique that aims to increase the code coverage achieved by the set of generated tests. Applied to 14 widely-used libraries (including the Java JDK and the core .NET framework libraries), directed random testing quickly reveals many serious, previously unknown errors in the libraries. And compared with other test generation tools (model checking, symbolic execution, and traditional random testing), it reveals more errors and achieves higher code coverage.
(cont.) In an industrial case study, a test team at Microsoft using the technique discovered in fifteen hours of human effort as many errors as they typically discover in a person-year of effort using other testing methods.
by Carlos Pacheco.
Ph.D.
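The generation loop described in this abstract can be sketched compactly. The following Python fragment is an illustrative sketch only, not Randoop itself: the classifier callbacks (is_illegal, is_failing) and the single-value argument pool are simplifying assumptions, and redundancy detection is omitted.

    import random

    def directed_random_generation(operations, budget, is_illegal, is_failing):
        """Feedback-directed random test generation (illustrative sketch)."""
        pool = [()]                 # argument tuples harvested from earlier tests
        failing, passing = [], []
        for _ in range(budget):
            op = random.choice(operations)        # randomly select an operation...
            args = random.choice(pool)            # ...and arguments from prior tests
            try:
                result = op(*args)                # execute the new test immediately
            except Exception as exc:
                result = exc
            if is_illegal(result):                # runtime feedback classifies the test
                continue                          # illegal: discard
            if is_failing(result):
                failing.append((op, args, result))    # error-revealing: report
            else:
                passing.append((op, args, result))    # passing: keep for regression
                pool.append((result,))                # ...and reuse to grow new tests
        return failing, passing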
3

Kuo, Fei-Ching. "On adaptive random testing." Swinburne University of Technology, 2006. http://adt.lib.swin.edu.au./public/adt-VSWT20061109.091517.

Abstract:
Adaptive random testing (ART) has been proposed as an enhancement to random testing for situations where failure-causing inputs are clustered together. The basic idea of ART is to evenly spread test cases throughout the input domain. It has been shown by simulations and empirical analysis that ART frequently outperforms random testing. However, there are some outstanding issues on the cost-effectiveness and practicality of ART, which are the main foci of this thesis. Firstly, this thesis examines the basic factors that have an impact on the fault-detection effectiveness of adaptive random testing, and identifies favourable and unfavourable conditions for ART. Our study concludes that favourable conditions for ART occur more frequently than unfavourable conditions. Secondly, since all previous studies allow duplicate test cases, there has been a concern whether adaptive random testing performs better than random testing because ART uses fewer duplicate test cases. This thesis confirms that it is the even spread rather than less duplication of test cases which makes ART perform better than RT. Given that the even spread is the main pillar of the success of ART, an investigation has been conducted to study the relevance and appropriateness of several existing metrics of even spreading. Thirdly, the practicality of ART has been challenged for non-numeric or high dimensional input domains. This thesis provides solutions that address these concerns. Finally, a new problem solving technique, namely, mirroring, has been developed. The integration of mirroring with adaptive random testing has been empirically shown to significantly increase the cost-effectiveness of ART. In summary, this thesis significantly contributes to both the foundation and the practical applications of adaptive random testing.
4

Kuo, Fei-Ching. "On adaptive random testing." Australasian Digital Thesis Program, 2006. http://adt.lib.swin.edu.au/public/adt-VSWT20061109.091517.

Abstract:
Thesis (Ph.D) - Swinburne University of Technology, Faculty of Information & Communication Technologies, 2006.
A thesis submitted for the degree of PhD, Faculty of Information and Communication Technologies, Swinburne University of Technology, 2006. Typescript. Bibliography: p. 126-133.
5

Mitran, Cosmin. "Guided random-based testing strategies." Zürich : ETH, Eidgenössische Technische Hochschule Zürich, Department of Computer Science, 2007. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=328.

6

Ciupa, Ilinca. "Strategies for random contract-based testing." Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=18143.

7

Ahmad, Mian Asbat. "New strategies for automated random testing." Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/7981/.

Abstract:
The ever increasing reliance on software-intensive systems is driving research to discover software faults more effectively and more efficiently. Despite intensive research, very few approaches have studied and used knowledge about fault domains to improve the testing or the feedback given to developers. The present thesis addresses this shortcoming: it leverages fault co-localization in a new random testing strategy called Dirt Spot Sweeping Random (DSSR), and it presents two new strategies: Automated Discovery of Failure Domain (ADFD) and Automated Discovery of Failure Domain+ (ADFD+). These improve the feedback given to developers by deducing more information about the failure domain (i.e. point, block, strip) in an automated way. The DSSR strategy adds the value causing the failure and its neighbouring values to the list of interesting values for exploring the underlying failure domain; a comparative evaluation showed significantly better performance of DSSR over the Random and Random+ strategies. The ADFD strategy finds failures and failure domains and presents the pass and fail domains in graphical form; the results obtained by evaluating error-seeded numerical programs indicated highly effective performance of the ADFD strategy. The ADFD+ strategy is an extended version of ADFD with respect to the algorithm and the graphical presentation of failure domains. In comparison with Randoop, the ADFD+ strategy successfully detected all failures and failure domains, while Randoop identified individual failures but could not detect failure domains. The ADFD and ADFD+ techniques were enhanced by integration of the automatic invariant detector Daikon, and the precision of identifying failure domains was determined through extensive experimental evaluation of real-world Java projects contained in the Qualitas Corpus database. The analyses of the results, cross-checked by manual testing, indicated that the ADFD and ADFD+ techniques are highly effective in providing assistance but are not an alternative to manual testing.
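As a rough illustration of the Dirt Spot Sweeping idea described above, the sketch below (with an assumed integer input domain and a made-up selection bias) adds each failing value and its neighbours to a list of interesting values, so that later tests sweep the surrounding failure domain:

    import random

    def dssr(test, low, high, runs, bias=0.5):
        """Dirt Spot Sweeping Random testing over an integer domain (sketch)."""
        interesting, failures = [], []
        for _ in range(runs):
            if interesting and random.random() < bias:
                x = random.choice(interesting)     # sweep near a known dirt spot
            else:
                x = random.randint(low, high)      # otherwise plain random testing
            if not test(x):                        # test() returns False on failure
                failures.append(x)
                # the failing value and its neighbours become interesting,
                # exposing contiguous point, block or strip failure domains
                interesting.extend((x - 1, x, x + 1))
        return failures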
8

Pesaresi, Emanuele. "Leptokurtic signals in random control vibration testing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Abstract:
In several industrial sectors, components are subjected to mechanical vibrations which may lead to premature failure. To ensure that they operate properly during their service life, qualification tests have become consolidated practice over the years. It is often required to carry out accelerated tests for obvious reasons such as feasibility and cost: the aim is to limit the duration of tests. The Test Tailoring procedure requires an appropriate definition of the vibratory test profiles to be used as an excitation, in terms of the motion generated by vibrating tables or shakers. The synthesis of such profiles requires that signals be measured in real environments and that their most important characteristics, in particular their spectral content and damage potential, then be reproduced in a laboratory. The conventional procedures permit the synthesis of an accelerated test profile in terms of a Power Spectral Density, which is characterized by a Gaussian distribution of the corresponding time-series values. Such a synthesis might fail to represent the real environment signal taken as a reference, owing to the latter's usual non-Gaussianity. As a consequence, reliability could be compromised, since the “nature” of the real signal is not preserved. Typical examples of non-Gaussian signals arising in real applications are the so-called leptokurtic signals, whose high-amplitude peaks give rise to a strongly non-Gaussian probability distribution. A parameter called kurtosis is often employed to represent the number and severity of the peaks of the signal. Reference is commonly made to “kurtosis control” whenever the synthesized and the measured signal are required to have not only the same spectral content but the same kurtosis value as well. In this work some novel Mission Synthesis algorithms are proposed, which generate test profiles by precisely controlling the kurtosis value while complying with the spectral content of the reference signal.
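For reference, the kurtosis mentioned here is the fourth standardized moment of the signal: it equals 3 for a Gaussian time series, and leptokurtic signals exceed that value. A minimal NumPy sketch:

    import numpy as np

    def kurtosis(x):
        """Fourth standardized moment: 3 for Gaussian data, above 3 if leptokurtic."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        return np.mean(x ** 4) / np.mean(x ** 2) ** 2

    rng = np.random.default_rng(0)
    print(kurtosis(rng.normal(size=100_000)))   # close to 3.0 (Gaussian)
    print(kurtosis(rng.laplace(size=100_000)))  # close to 6.0 (leptokurtic)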
9

Liu, Ning Lareina. "A study on improving adaptive random testing." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B36428061.

10

Merkel, Robert Graham. "Analysis and enhancements of adaptive random testing." Swinburne University of Technology, 2005. http://adt.lib.swin.edu.au./public/adt-VSWT20050804.144747.

Abstract:
Random testing is a standard software testing method. It is a popular method for reliability assessment, but its use for debug testing has been opposed by some authorities. Random testing does not use any information to guide test case selection, and so, it is argued, testing is less likely to be effective than other methods. Based on the observation that failures often cluster in contiguous regions, Adaptive Random Testing (ART) is a more effective random testing method. While retaining random selection of test cases, selection is guided by the idea that tests should be widely spread throughout the input domain. A simple way to implement this concept, FSCS-ART, involves randomly generating a number of candidates and choosing the candidate most widely spread from any already-executed test. This method has already been shown to be up to 50% more effective than random testing. This thesis examines a number of theoretical and practical issues related to ART. Firstly, a theoretical examination of the scope of adaptive methods to improve testing effectiveness is conducted. Our results show that the maximum possible improvement in failure-detection effectiveness is only 50%, so ART performs close to this limit on many occasions. Secondly, the statistical validity of the previous empirical results is examined. A mathematical analysis of the sampling distribution of the various failure-detection effectiveness measures shows that the measure preferred in previous studies follows a slightly unusual distribution known as the geometric distribution, and that it and other measures are likely to show high variance, requiring very large sample sizes for accurate comparisons. A potential limitation of current ART methods is the relatively high selection overhead. A number of methods to obtain lower overheads are proposed and evaluated, involving a less strict randomness or wide-spreading criterion. Two methods use dynamic, as-needed partitioning to divide the input domain, spreading test cases throughout the partitions as required. Another involves using a class of numeric sequences called quasi-random sequences. Next, a more efficient implementation of the existing FSCS-ART method is proposed using the mathematical structure known as the Voronoi diagram. Finally, the use of ART on programs whose input is non-numeric is examined. While existing techniques can be used to generate random non-numeric candidates, a criterion for 'wide spread' is required to perform ART effectively. It is proposed to use the notion of category-partition as such a criterion.
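The FSCS-ART scheme summarized above fits in a few lines. The sketch below assumes a one-dimensional numeric input domain and a test() predicate returning False on failure; it returns the F-measure (the number of tests needed to find the first failure) that the thesis analyzes:

    import random

    def fscs_art(test, max_tests, k=10, lo=0.0, hi=1.0):
        """Fixed-Size-Candidate-Set ART on a 1-D domain (sketch)."""
        executed = []
        for n in range(1, max_tests + 1):
            if not executed:
                x = random.uniform(lo, hi)        # first test is purely random
            else:
                candidates = [random.uniform(lo, hi) for _ in range(k)]
                # keep the candidate farthest from its nearest executed test,
                # spreading test cases widely across the input domain
                x = max(candidates,
                        key=lambda c: min(abs(c - e) for e in executed))
            executed.append(x)
            if not test(x):
                return n      # F-measure: tests used to find the first failure
        return None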
11

Yang, Xuejun. "Random testing of open source C compilers." Thesis, The University of Utah, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3704288.

Abstract:

Compilers are indispensable tools for developers. We expect them to be correct. However, compiler correctness is very hard to reason about, which can be partly explained by the daunting complexity of compilers.

In this dissertation, I explain how we constructed a random program generator, Csmith, and used it to find hundreds of bugs in mature open-source compilers such as the GNU Compiler Collection (GCC) and the LLVM Compiler Infrastructure (LLVM). The success of Csmith depends on its ability to be expressive and unambiguous at the same time. Csmith is composed of a code generator and a GTAV (Generation-Time Analysis and Validation) engine, which work interactively to produce expressive yet unambiguous random programs. The expressiveness of Csmith is attributed to the code generator, while the unambiguity is assured by GTAV, which efficiently performs program analyses, such as points-to analysis and effect analysis, to avoid ambiguities caused by undefined or unspecified behaviors.

During our 4.25 years of testing, Csmith found over 450 bugs in GCC and LLVM. We analyzed the bugs by putting them into categories, studying their root causes, finding their locations in the compilers' source code, and evaluating their importance. We believe the analysis results will be useful to future random testers as well as to compiler writers and users.
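In outline, the testing loop built around such a generator is differential testing: compile the same random program with several compilers (or optimization levels) and compare the checksum the program prints. A hedged sketch, assuming csmith, gcc and clang are installed and that CSMITH_INCLUDE points at the Csmith runtime headers:

    import os, subprocess, tempfile

    CSMITH_INCLUDE = "/usr/include/csmith"    # assumption: site-specific path

    def differential_round(compilers=(["gcc", "-O2"], ["clang", "-O2"]), timeout=30):
        """One round of differential compiler testing (sketch); True on mismatch."""
        source = subprocess.run(["csmith"], capture_output=True, text=True).stdout
        with tempfile.TemporaryDirectory() as work:
            c_file = os.path.join(work, "prog.c")
            with open(c_file, "w") as f:
                f.write(source)
            outputs = set()
            for i, cc in enumerate(compilers):
                exe = os.path.join(work, "prog%d" % i)
                subprocess.run(cc + ["-I", CSMITH_INCLUDE, c_file, "-o", exe],
                               check=True, timeout=timeout)  # compiler crash => bug
                run = subprocess.run([exe], capture_output=True, text=True,
                                     timeout=timeout)
                outputs.add(run.stdout)        # Csmith programs print a checksum
            return len(outputs) > 1            # disagreement => a compiler is wrong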

12

Liu, Ning Lareina, and 劉寧. "A study on improving adaptive random testing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B36428061.

13

Hansson, Bevin. "Random Testing of Code Generation in Compilers." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175852.

Abstract:
Compilers are a necessary tool for all software development. As modern compilers are large and complex systems, ensuring that the code they produce is accurate and correct is a vital but arduous task. Correctness of the code generation stage is important. Maintaining full coverage of test cases in a compiler is virtually impossible due to the large input and output domains. We propose that random testing is a highly viable method for testing a compiler. A method is presented to randomly generate a lower level code representation and use it to test the code generation stage of a compiler. This enables targeted testing of some of the most complex components of a modern compiler (register allocation, instruction scheduling) for the first time. The design is implemented in a state-of-the-art optimizing compiler, LLVM, to determine the effectiveness and viability of the method. Three distinct failures are observed during the evaluation phase. We analyze the causes behind these failures and conclude that the methods described in this work have the potential to uncover compiler defects which are not observable with other testing approaches.
14

Towey, David Peter. "Studies of different variations of Adaptive Random Testing." Thesis, View the Table of Contents & Abstract, 2006. http://sunzi.lib.hku.hk/hkuto/record/B3551212X.

15

Castañeda, Lozano Roberto. "Constraint Programming for Random Testing of a Trading System." Thesis, KTH, Programvaru- och datorsystem, SCS, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-44908.

Abstract:
Financial markets use complex computer trading systems whose failures can cause serious economic damage, making reliability a major concern. Automated random testing has been shown to be useful in finding defects in these systems, but its inherent test oracle problem (automatic generation of the expected system output) is a drawback that has typically prevented its application on a larger scale. Two main tasks have been carried out in this thesis as a solution to the test oracle problem. First, an independent model of a real trading system based on constraint programming, a method for solving combinatorial problems, has been created. Then, the model has been integrated as a true test oracle in automated random tests. The test oracle maintains the expected state of an order book throughout a sequence of random trade order actions, and provides the expected output of every auction triggered in the order book by generating a corresponding constraint program that is solved with the aid of a constraint programming system. Constraint programming has allowed the development of an inexpensive, yet reliable test oracle. In 500 random test cases, the test oracle detected two system failures. These failures correspond to defects that had been present for several years without being discovered, either by less complete oracles or by the application of more systematic testing approaches. The main contributions of this thesis are: (1) empirical evidence of both the suitability of applying constraint programming to solve the test oracle problem and the effectiveness of true test oracles in random testing, and (2) a first attempt, as far as the author is aware, to model a non-theoretical continuous double auction using constraint programming.
Winner of the Swedish AI Society's prize for the best AI Master's Thesis 2010.
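The oracle idea above, a slow but independent reference model that recomputes what the system under test should output, can be illustrated with a toy uncrossing rule. The rule below (pick the price maximizing executable volume) and the data layout are assumptions for illustration; the thesis encodes the venue's actual auction rules as a constraint program instead:

    def auction_price(bids, asks):
        """Toy call-auction oracle: price maximizing executable volume (sketch).

        bids, asks: lists of (price, quantity) for buy and sell orders.
        """
        best_volume, best_price = 0, None
        for p in sorted({price for price, _ in bids + asks}):
            demand = sum(q for price, q in bids if price >= p)  # buyers accept p
            supply = sum(q for price, q in asks if price <= p)  # sellers accept p
            if min(demand, supply) > best_volume:
                best_volume, best_price = min(demand, supply), p
        return best_price, best_volume

    # Usage: compare against the trading system's own auction result.
    print(auction_price([(10.1, 5), (10.0, 3)], [(9.9, 4), (10.1, 6)]))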
16

Hay, Neil Conway. "The simulation of random environments for structural dynamics testing." Thesis, Edinburgh Napier University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328063.

17

Sweeney, Erin. "Random Student Drug Testing: Perceptions of Superintendents and Parents." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1575293312844071.

18

Alves, Gonçalo Filipe Rodrigues. "Testing the random walk hypothesis with technical trading rules." Master's thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/10939.

Abstract:
Master's in Finance
This paper investigates the efficiency of the eighteen stocks that constitute the main Portuguese stock index, the PSI-20 of the Lisbon Stock Exchange, using daily and monthly data from January 1999 to May 2015. The tests employed are the Augmented Dickey-Fuller (ADF) test, Choi's automatic variance ratio test, and the individual and multiple variance ratio tests of Lo and MacKinlay and of Chow and Denning. The ADF test examines the null hypothesis that a series has a unit root, while the variance ratio tests examine the random walk hypothesis. Based on these tests, the results provide mixed evidence against the random walk hypothesis: the unit root tests do not reject the efficient market hypothesis for the entire sample, while the variance ratio tests reject it for some of the stocks and for the PSI-20 index, although the number of rejections tends to decrease with monthly data.
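For reference, the Lo-MacKinlay variance ratio underlying these tests compares the variance of q-period returns with q times the one-period variance. In LaTeX notation (a standard result, stated here for orientation):

    VR(q) = \frac{\operatorname{Var}(r_t + r_{t-1} + \cdots + r_{t-q+1})}{q \, \operatorname{Var}(r_t)}
          = 1 + 2 \sum_{k=1}^{q-1} \left(1 - \frac{k}{q}\right) \hat{\rho}(k)

where \hat{\rho}(k) is the lag-k autocorrelation of returns. Under the random walk hypothesis VR(q) = 1 for every q; the Lo-MacKinlay and Chow-Denning statistics standardize VR(q) - 1 and test its departure from zero.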
19

Lains, João Luís da Silva. "Testing the random walk hypothesis with variance ratio statistics." Master's thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/11801.

Abstract:
Master's in Finance
This dissertation tests the random walk hypothesis on the yield curve of United States Treasury bonds for the period between 1980 and 2014. To this end, after a review of the literature, the variance ratio and unit root tests considered most suitable and most powerful were carried out. The data were collected from a study by the U.S. Federal Reserve, which computes yields from 1961 to the present. The method chosen for the unit root results was the Augmented Dickey-Fuller test; for the variance tests, the Chow-Denning (1993) multiple variance test, the joint Wright multiple version of Wright's rank and sign tests, and Choi's (1999) automatic variance ratio test were used. The sample includes more than 8,000 observations for each of the yields studied (1-, 5-, 10- and 20-year zero-coupon and par yields) over a period of 34 years. The results revealed several periods in which the random walk in U.S. Treasury yields holds, as well as others in which it does not. To that end, we carried out a comparative analysis between the variance test results and landmark events in the American economy, highlighting three periods: the 1980s, the economic expansion from the 1990s to the beginning of the 21st century, and the period after the 2008 crisis, in which quantitative easing was implemented.
The random walk hypothesis in the U.S. Treasury yield curve has not been studied previously, and it is surprising that researchers have not filled that void by testing it. Yet the U.S. Treasury securities market is a benchmark, as U.S. Treasuries are considered risk-free: the benchmark is used to forecast economic developments, to analyse securities in other markets, to price other fixed-income securities, and to hedge positions taken in other markets. This study applies the Chow-Denning (1993) multiple variance test, the joint Wright multiple version of Wright's rank and sign tests, and Choi's (1999) automatic variance ratio test, and also uses the well-known Augmented Dickey-Fuller unit root test, in order to define the methodology used in the study. The database permits the estimation of relative daily variation in the U.S. Treasury yield curve from January 1980 to December 2014. We hope that this analysis provides useful information to traders and investors and contributes to understanding the pattern and behaviour of yield movements.
20

Xu, Xiaoke (許小珂). "Benchmarking the power of empirical tests for random number generators." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41508464.

21

Tso, Chi-wai (曹志煒). "Stringency of tests for random number generators." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29748367.

22

Mattioli, Federico. "Testing a Random Number Generator: formal properties and automotive application." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18187/.

Abstract:
The thesis analyzes a validation method for random number generators (RNGs), which are used to guarantee the security of modern automotive systems. The first chapter provides an overview of the communication structure of modern vehicles through electronic control units (ECUs): the main access points to a car are presented, together with possible types of hacking; the use of random numbers in cryptography is then described, with particular reference to the cryptography used in vehicles. The second chapter presents the probability foundations needed to approach the statistical tests used for validation, and surveys the main theoretical approaches to the problem of randomness. The two central chapters describe the probabilistic and entropic methods for the analysis of the real data used in the tests, and then describe and study the 15 statistical tests proposed by the National Institute of Standards and Technology (NIST). After the first tests, based on very simple properties of random sequences, more sophisticated tests are presented, based on the Fourier transform (to detect periodic behaviour), on entropy (closely connected with the compressibility of the sequence), or on random paths. Two further tests make it possible to assess the proper functioning of the generator itself, and not only of the individual generated sequences. Finally, the fifth chapter is devoted to the implementation of the tests in order to test the TRNG of the ECUs.
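The first and simplest of the 15 NIST SP 800-22 tests mentioned above, the frequency (monobit) test, illustrates the pattern all of them follow: reduce the sequence to a statistic with a known distribution and report a p-value. A minimal sketch of the published formula:

    import math

    def monobit_pvalue(bits):
        """NIST SP 800-22 frequency (monobit) test: p-value for a 0/1 sequence."""
        n = len(bits)
        s = sum(2 * b - 1 for b in bits)         # map 0 -> -1, 1 -> +1, then sum
        s_obs = abs(s) / math.sqrt(n)            # scaled partial-sum magnitude
        return math.erfc(s_obs / math.sqrt(2))   # p < 0.01: reject randomness

    print(monobit_pvalue([1] * 1000))            # ~0.0: constant sequence fails
    print(monobit_pvalue([0, 1] * 500))          # 1.0: perfectly balanced sequence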
23

Liu, Huai. "On even spread of test cases in adaptive random testing." Swinburne Research Bank, 2008. http://hdl.handle.net/1959.3/40129.

Abstract:
Thesis (Ph.D) - Swinburne University of Technology, Faculty of Information & Communication Technologies, 2008.
A thesis submitted for the degree of Doctor of Philosophy, Faculty of Information and Communication Technologies, Swinburne University of Technology, 2008. Typescript. Bibliography: p. 107-123.
24

Pareschi, Fabio <1976>. "Chaos-based random number generators: monolithic implementation, testing and applications." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2007. http://amsdottorato.unibo.it/467/.

25

Petrie, Craig Steven. "An integrated random bit generator for applications in cryptography." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13699.

26

Jones, Tammi Lynn. "Policies, Practices and Constituent Perceptions of Random, Suspicionless Drug Testing in Pennsylvania's Public Schools." Diss., Temple University Libraries, 2009. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/26268.

Abstract:
Educational Administration
Ed.D.
The purpose of this study was to examine the policies and practices of school districts with random drug testing policies in Pennsylvania. Specifically, this study was intended to help administrators understand the phenomenon of drug testing as one available means of substance use prevention. In response to rising drug use in our schools, random drug testing has increasingly become one of the many possible solutions used to prevent student drug use. To date, drug testing programs have been examined in the workplace and in intercollegiate athletics; however, very little evaluative research has been conducted on whether school districts are satisfied with their random drug testing policies and practices. The researcher anticipates making a significant contribution to school administrators as they strive to create drug-free schools. The literature review presented in this research study examines the historical perspective of drug use in our nation and the events and perceptions that led up to the job-related drug testing that began in the military and the workplace. The role values play in the policymaking process is discussed, as well as the conflicts that arise from diversity in those values. The costs and benefits of a random drug testing policy are also presented. For this study, random drug testing was examined in the context of a range of school districts within Pennsylvania that have implemented similar policies. Statistical data were collected and analyzed to capture superintendents' perspectives on, and satisfaction with, random drug testing programs in order to increase the overall understanding of drug testing as a strategy for prevention. Parents, teachers, coaches, administrators and communities may benefit from this detailed study by way of the recommendations provided for future school leaders and for the various stakeholders considering the adoption of a random drug testing policy.
Temple University--Theses
27

Almowanes, Abdullah. "Generating Random Shapes for Monte Carlo Accuracy Testing of Pairwise Comparisons." Thesis, Laurentian University of Sudbury, 2013. https://zone.biblio.laurentian.ca/dspace/handle/10219/2097.

Abstract:
This thesis shows highly encouraging results, as the gain in accuracy reached 18.4% when the pairwise comparisons method was used instead of the direct method for comparing random shapes. The thesis describes a heuristic for generating random but nice shapes, called placated shapes. Random but visually nice shapes are often needed for cognitive experiments and processes. These shapes are produced by applying the Gaussian blur to randomly generated polygons. Afterwards, a threshold is applied to transform pixels from different shades of gray to black and white. This transformation produces placated shapes, whose areas are easier to estimate. Randomly generated placated shapes are used in the Monte Carlo method to test the accuracy of cognitive processes by using pairwise comparisons. An on-line questionnaire was implemented, and participants were asked to estimate the areas of five shapes using a provided unit of measure. They were also asked to compare the shapes in pairs. Such a Monte Carlo experiment has never been conducted for the 2D case. The results obtained are of considerable importance.
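The heuristic as described (random polygon, Gaussian blur, threshold) can be sketched with NumPy, SciPy and matplotlib's Path for rasterization; all parameter values here are illustrative assumptions:

    import numpy as np
    from matplotlib.path import Path
    from scipy.ndimage import gaussian_filter

    def placated_shape(size=256, vertices=8, sigma=6.0, threshold=0.5, seed=None):
        """Random 'placated' shape: polygon -> Gaussian blur -> b/w threshold."""
        rng = np.random.default_rng(seed)
        pts = rng.uniform(0.15 * size, 0.85 * size, size=(vertices, 2))
        d = pts - pts.mean(axis=0)
        pts = pts[np.argsort(np.arctan2(d[:, 1], d[:, 0]))]  # order into a simple polygon
        yy, xx = np.mgrid[0:size, 0:size]
        inside = Path(pts).contains_points(np.column_stack([xx.ravel(), yy.ravel()]))
        mask = inside.reshape(size, size).astype(float)      # rasterized random polygon
        return gaussian_filter(mask, sigma) > threshold      # blur, then re-binarize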
28

Lian, Guinan. "Testing Primitive Polynomials for Generalized Feedback Shift Register Random Number Generators." Diss., Brigham Young University, 2005. http://contentdm.lib.byu.edu/ETD/image/etd1131.pdf.

29

Alice, Reinaudo. "Empirical testing of pseudo random number generators based on elliptic curves." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-44875.

Abstract:
An introduction to random numbers, their history and applications is given, along with explanations of the different methods currently used to generate them. Such generators can be of different kinds; in particular, they can be based on physical systems or on algorithmic procedures. The latter type of procedure gives rise to pseudo-random number generators. Specifically, several such generators based on elliptic curves are examined. To ease understanding, a basic primer on elliptic curves over fields and the operations arising from their group structure is also provided. Empirical tests to verify the randomness of generated sequences are then considered, followed by statistical considerations and observations about theoretical properties of the generators at hand, useful for employing them optimally. Finally, several randomly generated curves are created and used to produce pseudo-random sequences, which are then checked by means of the previously described tests. In the end, an analysis of the results is attempted and some final considerations are made.
30

White, John D. H. "A random signal ultrasonic test system for highly attenuating media." Thesis, Keele University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315234.

31

Abeyratne, Anura T. "Comparison of k-Weibull populations under random censoring." University of Missouri, 1996. http://wwwlib.umi.com/cr/mo/fullcit?p9737910.

32

Suzuki, Satoshi. "The Development of Embedded DRAM Statistical Quality Models at Test and Use Conditions." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/341.

Abstract:
Today, the use of embedded Dynamic Random Access Memory (eDRAM) is increasing in electronics that require large memories, such as gaming consoles and computer network routers. Unlike external DRAMs, eDRAMs are embedded inside ASICs for faster read and write operations. Until recently, eDRAM required high manufacturing costs; recent process technology developments have enabled the manufacturing of eDRAM at competitive cost. Unlike SRAM, eDRAM exhibits retention-time bit fails caused by defects and capacitor leakage current: a retention-time fail causes a memory bit to lose its stored value before refresh. A small portion of the memory bits are also known to fail at random retention times. If, at test conditions more stringent than use conditions, all possible retention-time fail bits are detected and replaced, there will be no additional fail bits during use. However, detecting all the retention-time fails takes a long time and also rejects bits that would not fail at use conditions. This research seeks to maximize the detection of eDRAM fail bits during test by determining effective test conditions, and models the failure rate of eDRAM retention time at use conditions.
33

Angeli, Andrea. "Mission synthesis of sine-on-random excitations for accelerated vibration qualification testing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/9759/.

Abstract:
In most real-life environments, mechanical or electronic components are subjected to vibrations. Some of these components may have to pass qualification tests to verify that they can withstand the fatigue damage they will encounter during their operational life. In order to conduct a reliable test, the environmental excitations can be taken as a reference to synthesize the test profile: this procedure is referred to as “test tailoring”. Due to cost and feasibility reasons, accelerated qualification tests are usually performed: the duration of the original excitation, which acts on the component for its entire life-cycle (typically hundreds or thousands of hours), is reduced. In particular, the “Mission Synthesis” procedure makes it possible to quantify the induced damage of the environmental vibration through two functions: the Fatigue Damage Spectrum (FDS) quantifies the fatigue damage, while the Maximum Response Spectrum (MRS) quantifies the maximum stress. A new random Power Spectral Density (PSD) can then be synthesized, with the same amount of induced damage but a specified duration, in order to conduct accelerated tests. In this work, the Mission Synthesis procedure is applied in the case of so-called Sine-on-Random vibrations, i.e. excitations composed of random vibrations superimposed on deterministic contributions in the form of sine tones, typically due to some rotating parts of the system (e.g. helicopters, engine-mounted components, …). In fact, a proper test tailoring should not only preserve the accumulated fatigue damage, but also the “nature” of the excitation (in this case the sinusoidal components superimposed on the random process) in order to obtain reliable results. The classic time-domain approach is taken as a reference for the comparison of different methods for the FDS calculation in the presence of Sine-on-Random vibrations. Then, a methodology to compute a Sine-on-Random specification based on a mission FDS is presented.
34

Johansson, Viktor, and Alexander Vallén. "Random testing with sanitizers to detect concurrency bugs in embedded avionics software." Thesis, Linköpings universitet, Programvara och system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153310.

Abstract:
Fuzz testing is a random testing technique that is effective at finding bugs in large software programs and protocols. We investigate whether the technology can be used to find bugs in multi-threaded applications by fuzzing a real-time embedded avionics platform together with a tool specialized in finding data races between threads. We choose to fuzz an API of the platform that is available to the applications executing on top of it. This thesis evaluates aspects of integrating a fuzzing program, AFL, and a sanitizer, ThreadSanitizer, with an embedded system. We investigate the modifications needed to create a correct run-time environment for the system, including supplying test data in a safe manner, and we discuss hardware dependencies. We present a setup in which the tools can be used to find planted data races; however, the slowdown introduced by the tools is significant, and the fuzzer only managed to find very simple planted data races during the test runs. Our findings also indicate what appear to be conflicts in instrumentation between the fuzzer and the sanitizer.
35

Fang, Jing. "On testing for the Cox model using resampling methods." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/HKUTO/record/B39558356.

36

Fang, Jing (方婧). "On testing for the Cox model using resampling methods." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39558356.

37

Rathbun, Shelia E. "A qualitative case study of student perceptions of a random drug testing policy." Diss., Wichita State University, 2011. http://hdl.handle.net/10057/3936.

Abstract:
This qualitative study involved high school students in focus groups and individual interviews who shared their perceptions of a random drug testing policy and its implementation in the fall of 2007 at their suburban high school. Student voices were captured and shared, along with data regarding student responses on the Communities That Care Survey, which is given yearly to all sophomores and seniors. Students held strong perceptions regarding the implementation of random drug testing and shared these perceptions openly and often forcefully. However, students were not well informed as to why the policy had been implemented, nor about the random drug testing procedures and the consequences of testing positive. Students' voices were heard, but until policy makers and decision makers in schools begin working alongside students to teach them how to have a voice, students' voices may remain ignored. Students were able to offer a coherent and effective critique regarding some of the issues; however, they lacked a clear understanding of the policy. The study used micropolitics and student voice as its theoretical framework, researched random drug testing policies and practices in schools, and offers valuable recommendations and implications for policy makers who are contemplating instituting new policies that affect those at the bottom of the hierarchy in schools: the students.
Dissertation (Ed.D.)--Wichita State University, College of Education, Dept. of Educational Leadership
38

Bamps, Cédric. "Self-Testing and Device-Independent Quantum Random Number Generation with Nonmaximally Entangled States." Doctoral thesis, Universite Libre de Bruxelles, 2018. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/266954.

Abstract:
The generation of random number sequences, that is, of unpredictable sequences free from any structure, has found numerous applications in the field of information technologies. One of the most sensitive applications is cryptography, whose modern practice makes use of secret keys that must indeed be unpredictable for any potential adversary. This type of application demands highly secure randomness generators. This thesis contributes to the device-independent approach to quantum random number generation (DIRNG, for Device-Independent Random Number Generation). These methods of randomness generation exploit the fundamental unpredictability of the measurement of quantum systems. In particular, the security of device-independent methods does not appeal to a specific model of the device itself, which is treated as a black box. This approach therefore stands in contrast to more traditional methods whose security rests on a precise theoretical model of the device, which may lead to vulnerabilities caused by hardware malfunctions or tampering by an adversary. Our contributions are the following. We first introduce a family of robust self-testing criteria for a class of quantum systems that involve partially entangled qubit pairs. This powerful form of inference allows us to certify that the contents of a quantum black box conform to one of those systems, on the sole basis of macroscopically observable statistical properties of the black box. That result leads us to introduce and prove the security of a protocol for randomness generation based on such partially entangled black boxes. The advantage of this method resides in its low shared-entanglement cost, which allows the use of quantum resources (both entanglement and quantum communication) to be reduced compared to existing DIRNG protocols. We also present a protocol for randomness generation based on an original estimation of the black-box correlations. Contrary to existing DIRNG methods, which summarize the accumulated measurement data into a single quantity (the violation of a single Bell inequality), our method exploits a complete, multidimensional description of the black-box correlations that allows it to certify more randomness from the same number of measurements. We illustrate our results on a numerical simulation of the protocol using partially entangled states.
Doctorate in Sciences
39

Vahlberg, Mikael. "Verification of Risk Algorithm Implementations in a Clearing System Using a Random Testing Framework." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-139544.

Abstract:
Clearing is keeping track of transactions until they are settled. Standardized derivatives such as options and futures can be cleared through a clearinghouse by clearing members. The clearinghouse steps in as an intermediary between trades and manages all occurring counterparty risk. To keep track of all transactions and monitor members' risk exposure, a clearinghouse uses advanced clearing software. Counterparty risk is mainly handled by collecting collateral from each clearing member; the initial collateral that a clearinghouse requires from a member trading in derivatives is called initial margin. Initial margin is calculated by a risk algorithm incorporated in the clearing software. Cinnober Financial Technology delivers clearing solutions to clearinghouses worldwide, and software providers to the financial industry face high demands on software quality. Ensuring high software quality can be done by performing various types of software testing. The goal of this thesis is to implement an extendable random testing framework that can test risk algorithm implementations that are part of a clearing system under development by Cinnober. By using the implemented framework, we aim to verify whether the risk algorithm SPAN calculates fair initial margin amounts. We also intend to increase the quality assurance of the risk domain that is responsible for all risk calculations. In this thesis we implement a random testing framework suitable for testing risk algorithms, together with a framework extension for SPAN that is used to test the SPAN algorithm's initial margin calculations. The implementation consists of two main parts: the first is a random generation entity that feeds the clearing system with randomized input data; the second is a verification entity, called a test oracle, responsible for verifying the SPAN algorithm's calculation results. The random testing framework for risk algorithms was successfully implemented. By running the SPAN extension of the framework, we managed to find four issues related to the accuracy of the SPAN algorithm. This discovery led to the conclusion that the current SPAN algorithm implementation does not calculate fair initial margin. It also led to an immediate increase in quality assurance, because the issues will be corrected. As a result of the framework's extensible characteristics, long-term quality also increases.
40

Juneja, Lokesh Kumar. "Multiaxial fatigue damage model for random amplitude loading histories." Thesis, Virginia Tech, 1992. http://hdl.handle.net/10919/41522.

Abstract:
In spite of the many multiaxial fatigue life prediction methods proposed over decades of research, no universally accepted approach yet exists. A multiaxial fatigue damage model developed for approximately proportional random amplitude loading is proposed in this study. A normal strain based analysis incorporating the multiaxial state of stress is conducted along a critical orientation, assuming a constant strain ratio. The dominant deformation direction is chosen to be the critical orientation, which is selected with the help of a principal strain histogram generated from the given multiaxial loading history. The uniaxial cyclic stress-strain curve is modified for the biaxial state of stress present along the critical orientation under plane stress conditions. Modified versions of Morrow's and of Smith, Watson, and Topper's (SWT) mean-stress models are used to incorporate mean stresses. A maximum shear strain based analysis is, in addition, conducted to check for the possibility of shear-dominant fatigue crack growth along the critical direction. The most damaging maximum shear strain is chosen after analyzing the in-plane and the two out-of-plane shear strains.

The minimum of the two life values obtained from the SWT model and the shear strain model is compared with the life estimated by the proposed model with the modified Morrow's mean-stress model. The former is essentially the life predicted by Socie. The results of the proposed model, as reduced to the uniaxial case, are also compared with the experimental data obtained by conducting one-channel random amplitude loading history experiments.
Master of Science

41

Brown, Stephanie N. "A New Era of Educational Assessment: The Use of Stratified Random Sampling in High Stakes Testing." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc407797/.

Abstract:
Although sampling techniques have been used effectively in education research and practice, it is not clear how stratified random sampling techniques apply to high-stakes testing in the current educational environment. The present study focused on representative sampling as a possible means of reducing the quantity of state-administered tests in Texas public education. The purpose of this study was two-fold: (1) to determine if stratified random sampling is a viable option for reducing the number of students participating in Texas state assessments, and (2) to determine which sampling rate provides consistent estimates of the actual test results among the population of students. The study examined students' scaled scores, percent of students passing, and student growth over a three-year period on state-mandated assessments in reading, mathematics, science, and social studies. Four sampling rates were considered (10%, 15%, 20%, & 25%) when analyzing student performance across demographic variables, including population estimates by socioeconomic status, limited English proficiency, and placement in special education classes. The data set for this study included five school districts and 68,641 students. Factorial ANOVAs were used initially to examine the effects of sampling rate on bias in reading and mathematics scores and bias in the percentage of students passing these tests; 95% confidence intervals (CIs) and effect sizes for each model were also examined to aid in the interpretation of the results. The results showed main effects for sampling rate and campus, as well as a two-way interaction between these variables. The results indicated that a 20% sampling rate would closely approximate the parameter values for the mean TAKS reading and mathematics scale scores and the percentage of students passing these assessments. However, as population size decreases, the sampling rate may have to be increased; for subgroups with 30 or fewer students, it is recommended that all students be included in the testing program. This study, situated in one state, contributes to the growing body of research being conducted internationally on sample-based educational assessments.
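Proportional stratified random sampling of the kind evaluated here is straightforward to express with pandas. In the sketch below, the whole-stratum rule for subgroups of 30 or fewer students follows the study's recommendation, while the column names ('campus', 'ses', 'lep') and everything else are illustrative assumptions:

    import pandas as pd

    def stratified_sample(df, strata_cols, rate=0.20, min_stratum=30, seed=1):
        """Proportional stratified random sample of student records (sketch)."""
        def take(group):
            if len(group) <= min_stratum:
                return group                  # small subgroups are tested in full
            return group.sample(frac=rate, random_state=seed)
        return df.groupby(strata_cols, group_keys=False).apply(take)

    # e.g. stratified_sample(scores, ["campus", "ses", "lep"], rate=0.20)
    # 'scores' and the column names are hypothetical, for illustration only.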
APA, Harvard, Vancouver, ISO, and other styles
42

Hart, Susan. "Organisational barriers and facilitators to the effective operation of Random Breath Testing (RBT) in Queensland." Queensland University of Technology, 2004. http://eprints.qut.edu.au/16451/.

Full text
Abstract:
Random breath testing (RBT) is one of the most successful drink driving countermeasures employed by police in Australia. Its success over the years has been evidenced by reductions in drink driving behaviour, reductions in alcohol-related and fatal crashes, and a corresponding community-wide increase in the disapproval of drink driving. Although a great deal of research has highlighted the relationship between increased police enforcement and road safety benefits, little is known about the organisational factors that assist or hinder the management and operation of RBT. The purpose of this thesis is to explore the perceived barriers and facilitators to the effective operation of RBT in the Queensland Police Service (QPS). The findings have human resource implications for the QPS and highlight areas that are currently functioning effectively.

Study One involved 22 semi-structured interviews with 36 QPS managers involved in the day-to-day organisation and delivery of RBT operations. Managers were recruited with assistance from members of the QPS's State Traffic Support Branch. The interviews were approximately one hour long and explored the perceptions of managers involved in the planning and delivery of RBT operations, using the concept of organisational alignment to structure the interviews. The results revealed that RBT management activity is facilitated by a range of factors, including: the belief in the importance of RBT; the belief that RBT serves both a deterrent function and a detection function; the increasing use of intelligence to guide RBT strategies; the increasing use of RBT to support other crime reduction strategies; and a genuine desire to improve the current state of affairs. However, a number of apparent barriers to the effective operation of RBT were identified. These included concern about the 1:1 testing strategy (i.e., conducting the equivalent of one test per licensed driver per annum), a misunderstanding of the role of general and specific deterrence, and a lack of feedback on the success of RBT.

The second study involved a questionnaire distributed to a random sample of 950 operational police, stratified across the regions, who are responsible for undertaking RBT on a regular basis. There were 421 questionnaires returned, a response rate of 44%. The questionnaires were also based on the concepts and constructs of organisational alignment and explored the perceptions, beliefs and self-reported behaviour of officers. The results revealed that facilitating factors included a belief in QPS ownership of the RBT program, agreement that the RBT vision includes both road safety and apprehension goals, and overall motivation, support and belief in officers' capability to carry out RBT duties. Barriers included perceived strain related to the 1:1 testing strategy, the lack of feedback on the success of RBT, misunderstanding about the role of deterrence, and a lack of rewards for participating in RBT duties.

The results of both studies have implications for the planning and operation of RBT in the QPS. While the findings revealed that many aspects of the RBT program were aligned with best practice guidelines, there are areas of misalignment. In particular, the main areas of misalignment were concern about the strain caused by the current 1:1 testing strategy, a lack of feedback about the success of RBT, and a lack of education about the nature and role of deterrence in road safety and in RBT operations in particular.
APA, Harvard, Vancouver, ISO, and other styles
43

Lineburg, Mark Young. "An Analysis of Random Student Drug Testing Policies and Patterns of Practice In Virginia Public Schools." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/26340.

Full text
Abstract:
There were two purposes to this study. First, the study was designed to determine which Virginia public school districts have articulated policies governing random drug testing of students and whether those policies aligned with U.S. Supreme Court standards and Virginia statutes. The second purpose was to ascertain the patterns of practice in selected Virginia school districts that currently conduct random drug testing of students. This included identifying which student groups were being tested and for which drugs. It was also of interest to learn how school districts monitor the testing program and whether drug testing practices were aligned with the policies that govern them. Data were gathered by examining student handbooks and district policies to determine which school districts had drug testing policies. These policies were then analyzed using a legal framework constructed from U.S. Supreme Court standards that have emerged from case law governing search and seizure in schools. Finally, data on patterns of practice were collected through in-depth interviewing and observation of the individuals responsible for implementing student drug testing in districts with such programs. The analyses revealed that the current policies and patterns of practice in random drug testing programs in Virginia public schools comply with Supreme Court standards and state statutes. Student groups subject to testing in Virginia public schools include student athletes and students in extracurricular activities in grades eight through twelve. Monitoring systems in the school districts implementing random drug testing were not consistent. There is evidence that the school districts implementing random drug testing programs have strong community support for the program.
Ed. D.
APA, Harvard, Vancouver, ISO, and other styles
44

Marks, Anthony Michael. "Random question sequencing in computer-based testing (CBT) assessments and its effect on individual student performance." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-06042008-083644/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Mehrmand, Arash. "A Factorial Experiment on Scalability of Search-based Software Testing." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4224.

Full text
Abstract:
Software testing is an expensive but vital process in industry. Test-data construction accounts for the major share of that cost, so knowing which method to use to generate the test data is very important. This paper discusses the performance of search-based algorithms (primarily a genetic algorithm) versus random testing in software test-data generation. A factorial experiment is designed so that each experiment involves more than one factor. Although much research has been done in the area of automated software testing, this research differs in the sample programs (SUTs) used: since program generation is automated as well, Grammatical Evolution is used to guide it, and the programs are not goal-based but are generated according to a provided grammar at different levels of complexity. The genetic algorithm is applied to the programs first, followed by random testing. Based on the results, the paper recommends which method to use for software testing when the SUT has characteristics similar to those in this study. The SUTs differ from the sample programs used in other studies because they are generated from a grammar.
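A minimal sketch of the kind of comparison described above, assuming a toy SUT with a single hard-to-satisfy branch; the actual study used grammar-generated SUTs, and all names here are hypothetical. The evaluation budget is counted per generated input, as an approximation.

```python
import random

def sut(x, y):
    """Toy system under test: takes a narrow branch only for x*2 == y + 1000."""
    return x * 2 == y + 1000

def fitness(ind):
    """Branch distance: smaller means closer to taking the target branch."""
    x, y = ind
    return abs(x * 2 - (y + 1000))

def random_testing(budget, rng):
    for i in range(budget):
        if sut(rng.randint(-5000, 5000), rng.randint(-5000, 5000)):
            return i + 1  # number of inputs tried before covering the branch
    return None

def genetic_algorithm(budget, rng, pop_size=50):
    pop = [(rng.randint(-5000, 5000), rng.randint(-5000, 5000))
           for _ in range(pop_size)]
    evals = 0
    while evals < budget:
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:
            return evals  # target branch covered
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size):
            x1, _ = rng.choice(parents)   # crossover: x from one parent,
            _, y2 = rng.choice(parents)   # y from another
            children.append((x1 + rng.randint(-10, 10),   # small mutation
                             y2 + rng.randint(-10, 10)))
            evals += 1
        pop = children
    return None

rng = random.Random(1)
print("random testing covered branch after:", random_testing(100_000, rng))
print("GA covered branch after approx.:", genetic_algorithm(100_000, rng))
```

The branch-distance fitness is the standard ingredient that lets the search outperform random testing when the target branch is rarely hit by chance; when branches are easy to hit, random testing's lower overhead tends to win, which is the kind of trade-off such an experiment measures.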
APA, Harvard, Vancouver, ISO, and other styles
46

Hulme, Charles A. "Testing and evaluation of the configurable fault tolerant processor (CFTP) for space-based application." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FHulme.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, December 2003.
Thesis advisor(s): Herschel H. Loomis, Jr., Alan A. Ross. Includes bibliographical references (p. 241-243). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
47

Chitenderu, Tafadzwa Thelmah. "Testing random walk hypothesis in the stock market prices: evidence from South Africa's stock exchange (2000- 2011)." Thesis, University of Fort Hare, 2013. http://hdl.handle.net/10353/d1006931.

Full text
Abstract:
The Johannesburg Stock Exchange market was tested for the existence of the random walk hypothesis using the All Share Index (ALSI) time series for the period 2000 to 2011. The traditional methods, unit root tests and an autocorrelation test, were employed first, and both confirmed that during the period under consideration the JSE price index followed a random walk process. In addition, an ARIMA model was built, and ARIMA(1,1,1) was found to fit the data best. Furthermore, residual tests were conducted to determine whether the residuals of the estimated equation exhibit a random walk process. The ALSI was found to resemble a series that follows the random walk hypothesis; the forecasting tests provided strong evidence of the RWH, showing wide variance between forecasted and actual values and hence little or no forecasting strength in the series. To further validate these findings, the variance ratio test was conducted under heteroscedasticity, and it also strongly corroborated that the existence of a random walk process cannot be rejected in the JSE. It was concluded that since the returns follow the random walk hypothesis, the JSE is efficient at the weak-form level of the EMH; opportunities to make excess returns by out-performing the market are therefore ruled out, and doing so is merely a game of chance. In other words, it is of no use to choose stocks based on information about recent trends in stock prices.
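A minimal sketch of the testing pipeline the abstract describes, using a synthetic random-walk series in place of the ALSI data (an assumption; the thesis used actual JSE index data):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, acf
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for the ALSI: a random walk with drift in log prices.
rng = np.random.default_rng(0)
log_price = np.cumsum(rng.normal(0.0003, 0.01, size=3000)) + 8.0

# 1. Unit root test on log prices: failing to reject the null of a unit
#    root is consistent with a random walk.
adf_stat, p_value, *_ = adfuller(log_price)
print(f"ADF p-value on log prices: {p_value:.3f}")

# 2. Autocorrelation of returns: a random walk implies returns that are
#    approximately uncorrelated at every lag.
returns = np.diff(log_price)
print("return autocorrelations, lags 1-5:",
      np.round(acf(returns, nlags=5)[1:], 3))

# 3. ARIMA(1,1,1) on log prices, the specification the thesis found to
#    fit the ALSI best.
fit = ARIMA(log_price, order=(1, 1, 1)).fit()
print(fit.summary().tables[1])
```

Failing to reject the ADF null together with near-zero return autocorrelations is the weak-form-efficiency pattern the thesis reports for the JSE.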
APA, Harvard, Vancouver, ISO, and other styles
48

O’Donnell, John. "SOME PRACTICAL CONSIDERATIONS IN THE USE OF PSEUDO-RANDOM SEQUENCES FOR TESTING THE EOS AM-1 RECEIVER." International Foundation for Telemetering, 1998. http://hdl.handle.net/10150/609651.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California
There are well-known advantages in using pseudo-random sequences for testing data communication links. The sequences, also called pseudo-noise (PN) sequences, approximate random data very well, especially for sequences thousands of bits long. They are easy to generate and are widely used for bit error rate testing because it is easy to synchronize a slave pattern generator to a received PN stream for bit-by-bit comparison. There are other aspects of PN sequences, however, that are not as widely known or applied. This paper points out how some of the less familiar characteristics of PN sequences can be put to practical use in the design of a Digital Test Set and other special-built test equipment used for checkout of the EOS AM-1 Space Data Receiver. The paper also shows how knowledge of these PN sequence characteristics can simplify troubleshooting the digital sections of the Space Data Receiver. Finally, the paper addresses the sufficiency of PN data testing in characterizing the performance of a receiver/data recovery system.
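As an illustration of how easily PN sequences are generated (a generic PRBS7 linear-feedback shift register, not the specific sequence used for the EOS AM-1 equipment):

```python
def prbs7(seed=0x7F):
    """Generate one full period (127 bits) of the PRBS7 sequence
    defined by the primitive polynomial x^7 + x^6 + 1."""
    state = seed & 0x7F
    bits = []
    for _ in range(127):
        new_bit = ((state >> 6) ^ (state >> 5)) & 1  # taps at stages 7 and 6
        bits.append(state & 1)
        state = ((state << 1) | new_bit) & 0x7F
    return bits

seq = prbs7()
# A maximal-length sequence from a 7-stage register has period 2^7 - 1 = 127
# and contains 64 ones and 63 zeros -- one of the balance properties that
# makes PN data a good approximation of random traffic.
print(len(seq), sum(seq))  # 127 64
```

Because the next state is fully determined by the current seven bits, a receiver can load its own register from seven error-free received bits and then run in step with the transmitter, which is what makes bit-by-bit comparison for error counting straightforward.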
APA, Harvard, Vancouver, ISO, and other styles
49

Prasai, Nilam. "Testing Criterion Validity of Benefit Transfer Using Simulated Data." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/34685.

Full text
Abstract:
The purpose of this thesis is to investigate how differences between the study and policy sites impact the performance of benefit function transfer. For this purpose, simulated data are created in which all information necessary to conduct the benefit function transfer is available. We consider six cases of difference between the study and policy sites: scale parameter, substitution possibilities, observable characteristics, population preferences, measurement error in variables, and a case of preference heterogeneity at the study site with fixed preferences at the policy site. These cases of difference were considered one at a time and their impact on the quality of transfer was investigated. A RUM model based on revealed preference was used for this analysis. The function estimated at the study site is transferred to the policy site, and willingness to pay (WTP) for five different cases of policy changes is calculated at the study site. The WTP so calculated is compared with the true WTP to evaluate the performance of benefit function transfer. When the study and policy sites differ only in the scale parameter, equality of estimated and true expected WTP is not rejected in 89.7% or more of cases when the sample size is 1000. Similarly, equality of estimated and true preference coefficients is not rejected in 88.8% or more of cases. In this study, we find that benefit transfer performs better in only one direction: when the function is estimated at a lower scale and transferred to a policy site with a higher scale, the transfer error is smaller in magnitude than when the function is estimated at a higher scale and transferred to a policy site with a lower scale. This study also finds that transfer error is smaller when a function from a study site having more site substitutes is transferred to a policy site having fewer site substitutes, whenever the sites differ in substitution possibilities. Transfer error is magnified when measurement error is involved in any of the variables. This study does not suggest function transfer whenever the study site's model is missing one of the important variables at the policy site, or whenever data on the variables included in the study site's model are not available at the policy site for the benefit transfer application. This study also suggests the use of a large representative sample with sufficient variation to minimize transfer error in benefit transfer.
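For context, WTP in a logit RUM framework is conventionally computed with the log-sum formula; the thesis's exact specification may differ, so this is background rather than the author's model.

```latex
% Expected WTP for a policy change that shifts site utilities from
% V_j^0 to V_j^1, with \alpha the marginal utility of income:
E[\mathrm{WTP}] = \frac{1}{\alpha}\left[
    \ln \sum_{j} e^{V_j^{1}} - \ln \sum_{j} e^{V_j^{0}}
\right]
```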
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
50

Banadaki, Davood Dehgan, Sunay Sami Durmush, and Sharif Zahiri. "Statistical Assessment of Uncertainties Pertaining to Uniaxial Vibration Testing and Required Test Margin for Fatigue Life Verification." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2147.

Full text
Abstract:
In the automotive industry, uniaxial vibration testing is a common method used to predict the lifetime of components. In reality, truck components work under multiaxial loads, meaning that the excitation is multiaxial. A common way to account for the multiaxial effect is to apply a safety margin to the uniaxial test results. The aim of this work is to find a safety margin between uniaxial and multiaxial testing by means of virtual vibration testing and statistical methods. In addition to the safety margin, the effect of the fixture's stiffness on the resulting stress in the components was also investigated.
APA, Harvard, Vancouver, ISO, and other styles