
Doctoral dissertations on the topic "TESTING DATA"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Check out the 50 best scholarly doctoral dissertations on the topic "TESTING DATA".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the work's abstract online, if the relevant parameters are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile appropriate bibliographies.

1

Araujo, Roberto Paulo Andrioli de. "Scalable data-flow testing". Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-14112014-155259/.

Full text of the source
Abstract:
Data-flow (DF) testing was introduced more than thirty years ago aiming at verifying a program by extensively exploring its structure. It requires tests that traverse paths in which the assignment of a value to a variable (a definition) and its subsequent reference (a use) are verified. This relationship is called definition-use association (dua). While control-flow (CF) testing tools have been able to tackle systems composed of large and long running programs, DF testing tools have failed to do so. This situation is in part due to the costs associated with tracking duas at run-time. Recently, an algorithm, called Bitwise Algorithm (BA), which uses bit vectors and bitwise operations for tracking intra-procedural duas at run-time, was proposed. This research presents the implementation of BA for programs compiled into Java bytecodes. Previous DF approaches were able to deal with small to medium size programs with high penalties in terms of execution time and memory. Our experimental results show that by using BA we are able to tackle large systems with more than 250 KLOCs and 300K required duas. Furthermore, for several programs the execution penalty was comparable with that imposed by a popular CF testing tool.
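To make the bit-vector idea concrete, here is a minimal Python sketch of run-time dua tracking in the spirit of the Bitwise Algorithm described above; the program facts (definition sites and the dua table) and the event hooks are invented for illustration, and the thesis's actual implementation instruments Java bytecode rather than Python.

```python
# Sketch: bit-vector tracking of definition-use associations (duas).
# Each definition site owns one bit; `covered` accumulates exercised duas.

def_sites = {"x": [0, 1], "y": [2]}       # hypothetical: bit index per definition site
duas = {("x", 3): 0b001,                  # hypothetical: (var, use_site) -> bits of the
        ("y", 3): 0b100}                  # definitions that form a dua with that use

live = 0      # bit b set => definition b is the most recent definition of its variable
covered = 0   # bits of definitions whose dua with some use has been exercised

def on_def(var, site_bit):
    """A new definition of `var` kills previous definitions of `var` and becomes live."""
    global live
    kill = 0
    for b in def_sites[var]:
        kill |= 1 << b
    live = (live & ~kill) | (1 << site_bit)

def on_use(var, use_site):
    """A use covers every live definition that forms a dua with this use."""
    global covered
    covered |= live & duas.get((var, use_site), 0)

# Trace: def x at site 0, def y at site 2, then uses of x and y at site 3.
on_def("x", 0); on_def("y", 2)
on_use("x", 3); on_use("y", 3)
print(bin(covered))  # -> 0b101: both duas were exercised
```

The appeal of the scheme is that kills and coverage updates cost only a handful of AND/OR operations per event, which is what lets the approach scale to hundreds of thousands of required duas.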
APA, Harvard, Vancouver, ISO, and other styles
2

McGaughey, Karen J. "Variance testing with data depth /". Search for this dissertation online, 2003. http://wwwlib.umi.com/cr/ksu/main.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Khan, M. Shahan Ali, and Ahmad ElMadi. "Data Warehouse Testing: An Exploratory Study". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4767.

Full text of the source
Abstract:
Context. The use of data warehouses, a specialized class of information systems, by organizations all over the globe has recently experienced a dramatic increase. A Data Warehouse (DW) serves organizations for various important purposes, such as reporting and strategic decision making. Maintaining the quality of such systems is a difficult task, as DWs are much more complex than ordinary operational software applications; therefore, conventional methods of software testing cannot be applied to DW systems. Objectives. The objectives of this thesis study were to investigate the current state of the art in DW testing, to explore various DW testing tools and techniques and the challenges in DW testing, and to identify improvement opportunities for the DW testing process. Methods. This study consists of an exploratory and a confirmatory part. In the exploratory part, a Systematic Literature Review (SLR) followed by the Snowball Sampling Technique (SST), a case study at a Swedish government organization, and interviews were conducted. For the SLR, a number of article sources were used, including Compendex, Inspec, IEEE Xplore, ACM Digital Library, SpringerLink, Science Direct, and Scopus. References in selected studies and citation databases were used for performing backward and forward SST, respectively. 44 primary studies were identified as a result of the SLR and SST. For the case study, interviews with 6 practitioners were conducted. The case study was followed by 9 additional interviews with practitioners from different organizations in Sweden and from other countries. The exploratory phase was followed by a confirmatory phase, where the challenges identified during the exploratory phase were validated by conducting 3 more interviews with industry practitioners. Results. In this study we identified various challenges that are faced by industry practitioners as well as various tools and testing techniques that are used for testing DW systems. 47 challenges and a number of testing tools and techniques were found in the study. The challenges were classified, and improvement suggestions were made to address them in order to reduce their impact. Only 8 of the challenges were found to be common to the industry and the literature studies. Conclusions. Most of the identified challenges were related to test data creation and to the need for tools for various purposes of DW testing. The rising trend of DW systems requires a standardized testing approach and tools that can help to save time by automating the testing process. While tools for operational software testing are available commercially as well as from the open source community, there is a lack of such tools for DW testing. It was also found that a number of challenges are related to management activities, such as lack of communication and challenges in DW testing budget estimation. We also identified a need for a comprehensive framework for testing data warehouse systems and tools that can help to automate the testing tasks. Moreover, it was found that the impact of management factors on the quality of DW systems should be measured.
APA, Harvard, Vancouver, ISO, and other styles
4

Andersson, Johan, and Mats Burberg. "Testing For Normality of Censored Data". Thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-253889.

Full text of the source
Abstract:
In order to make statistical inference, that is, drawing conclusions from a sample to describe a population, it is crucial to know the correct distribution of the data. This paper focused on censored data from the normal distribution. The purpose of this paper was to answer whether we can test if data come from a censored normal distribution. We did this by using normality tests and tests designed for censored data, and investigated whether these tests attain the correct size. This was carried out with simulations in the program R for left-censored data. The results indicated that with increasing censoring, normality tests failed to accept normality in a sample. On the other hand, the censoring tests met the requirements with increasing censoring level, which was the most important conclusion of this paper.
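As a rough illustration of the simulation design described above, the sketch below left-censors a normal sample and applies a standard normality test; the censoring level, sample size, and choice of the Shapiro-Wilk test are assumptions for illustration (the original study was carried out in R).

```python
# Sketch: apply a normality test to left-censored normal data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=200)  # truly normal sample

c = np.quantile(x, 0.3)          # assumed 30% left-censoring level
x_cens = np.where(x < c, c, x)   # censored observations are recorded at the threshold

# A plain normality test ignores the censoring mechanism and, as the
# censoring level grows, increasingly rejects normality.
stat, p = stats.shapiro(x_cens)
print(f"Shapiro-Wilk p-value: {p:.4f}")
```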
APA, Harvard, Vancouver, ISO, and other styles
5

Sestok, Charles K. (Charles Kasimer). "Data selection in binary hypothesis testing". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/16613.

Full text of the source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004.
Includes bibliographical references (p. 119-123).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Traditionally, statistical signal processing algorithms are developed from probabilistic models for data. The design of the algorithms and their ultimate performance depend upon these assumed models. In certain situations, collecting or processing all available measurements may be inefficient or prohibitively costly. A potential technique to cope with such situations is data selection, where a subset of the measurements that can be collected and processed in a cost-effective manner is used as input to the signal processing algorithm. Careful evaluation of the selection procedure is important, since the probabilistic description of distinct data subsets can vary significantly. An algorithm designed for the probabilistic description of a poorly chosen data subset can lose much of the potential performance available to a well-chosen subset. This thesis considers algorithms for data selection combined with binary hypothesis testing. We develop models for data selection in several cases, considering both random and deterministic approaches. Our considerations are divided into two classes depending upon the amount of information available about the competing hypotheses. In the first class, the target signal is precisely known, and data selection is done deterministically. In the second class, the target signal belongs to a large class of random signals, selection is performed randomly, and semi-parametric detectors are developed.
by Charles K. Sestok, IV.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Yan. "Multiple Testing in Discrete Data Setting". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1276747166.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Clements, Nicolle. "Multiple Testing in Grouped Dependent Data". Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/253695.

Full text of the source
Abstract:
Statistics
Ph.D.
This dissertation is focused on multiple testing procedures to be used in data that are naturally grouped or possess a spatial structure. We propose a `Two-Stage' procedure to control the False Discovery Rate (FDR) in situations where one-sided hypothesis testing is appropriate, such as astronomical source detection. Similarly, we propose a `Three-Stage' procedure to control the mixed directional False Discovery Rate (mdFDR) in situations where two-sided hypothesis testing is appropriate, such as vegetation monitoring in remote sensing NDVI data. The Two- and Three-Stage procedures have provable FDR/mdFDR control under certain dependence situations. We also present the Adaptive versions, which are examined under simulation studies. The `Stages' refer to testing hypotheses both group-wise and individually, which is motivated by the belief that the dependencies among the p-values associated with the spatially oriented hypotheses occur more locally than globally. Thus, these `Staged' procedures test hypotheses in groups that incorporate the local, unknown dependencies of neighboring p-values. If a group is found significant, the individual p-values within that group are investigated further. For the vegetation monitoring data, we extend the investigation by providing some spatio-temporal models and forecasts for regions where significant change was detected through the multiple testing procedure.
Temple University--Theses
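The sketch below illustrates the staged idea in miniature, not the authors' exact Two-Stage procedure: groups are screened first, and only significant groups are examined at the individual level. The Benjamini-Hochberg step, the crude Bonferroni group p-values, and the alpha level are assumptions for illustration.

```python
# Sketch: group-wise screening followed by within-group testing.
import numpy as np

def bh_reject(pvals, alpha):
    """Benjamini-Hochberg step-up: boolean mask of rejected hypotheses."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

groups = [np.array([0.001, 0.02, 0.3]),    # a 'locally dependent' group of p-values
          np.array([0.6, 0.7, 0.9])]
group_p = np.array([min(p.min() * len(p), 1.0) for p in groups])  # crude Bonferroni group p

for g, significant in zip(groups, bh_reject(group_p, alpha=0.05)):
    if significant:  # stage 2: look inside the significant group only
        print("individual rejections:", bh_reject(g, alpha=0.05))
```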
APA, Harvard, Vancouver, ISO, and other styles
8

Chandorkar, Chaitrali Santosh. "Data Driven Feed Forward Adaptive Testing". PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1049.

Full text of the source
Abstract:
Test cost is a critical component in the overall cost of the product, and test cost varies in direct proportion to test time. This thesis introduces a data driven feed forward adaptive technique for reducing test time at wafer sort while maintaining the product defect level. Test data from the first insertion of a wafer are statistically analyzed to make a decision about the adaptive test flow at subsequent insertions. The data driven feed forward technique uses a statistical screen to analyze test data from the first probe of a wafer and provides recommendations for test elimination at the second insertion. At the second insertion, dies are subjected to only the optimum number of tests in a reduced test flow. This technique is applicable only to products which are tested at two or more insertions. The statistical screen identifies the dies for reduced test flow based upon the correlation of tests across insertions: tests which are repeated at both insertions and are highly correlated are candidates for elimination at the second insertion. The feed forward technique is applied to a mixed signal analog product and figures of merit are evaluated. Application of the technique to production data shows that there is an average 55% test time reduction when a single site is tested per touchdown, and up to 10% when 16 sites are tested in parallel per touchdown.
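A toy version of the statistical screen described above might look like the following; the correlation threshold and the synthetic per-die data are assumptions, and the thesis's screen is more involved.

```python
# Sketch: flag tests whose first- and second-insertion results correlate
# strongly across dies; such repeated tests are elimination candidates.
import numpy as np

rng = np.random.default_rng(0)
n_dies, n_tests = 500, 6
first = rng.normal(size=(n_dies, n_tests))                    # first-insertion results
second = 0.98 * first + rng.normal(scale=0.05, size=(n_dies, n_tests))

CORR_THRESHOLD = 0.95  # assumed cutoff for 'highly correlated'
drop = [t for t in range(n_tests)
        if np.corrcoef(first[:, t], second[:, t])[0, 1] >= CORR_THRESHOLD]

print("tests recommended for elimination at the second insertion:", drop)
```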
APA, Harvard, Vancouver, ISO, and other styles
9

Lam, Yuk-ming (林旭明). "Automation in soil testing". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1990. http://hub.hku.hk/bib/B31209774.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Hu, Zongliang. "New developments in multiple testing and multivariate testing for high-dimensional data". HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/534.

Full text of the source
Abstract:
This thesis aims to develop novel methods for advancing multivariate testing and multiple testing for high-dimensional, small-sample-size data. In Chapter 2, we propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not need the requirement that the covariance matrices follow a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and readily applicable in practice. Monte Carlo simulations and a real data analysis are also carried out to demonstrate the advantages of the proposed methods. In Chapter 3, we propose a pairwise Hotelling's method for testing high-dimensional mean vectors. The new test statistics make a compromise between using all the correlations and completely abandoning them. To achieve the goal, we perform a screening procedure, pick up the paired covariates with strong correlations, and construct a classical Hotelling's statistic for each pair. For the individual covariates without strong correlations with others, we apply squared t-statistics to account for their respective contributions to the multivariate testing problem. As a consequence, our proposed test statistics involve a combination of the collected pairwise Hotelling's test statistics and squared t-statistics. The asymptotic normality of our test statistics under the null and local alternative hypotheses is also derived under some regularity conditions. Numerical studies and two real data examples demonstrate the efficacy of our pairwise Hotelling's test. In Chapter 4, we propose a regularized t distribution and also explore its applications in multiple testing. The motivation of this topic dates back to microarray studies, where the expression levels of thousands of genes are measured simultaneously by the microarray technology. To identify genes that are differentially expressed between two or more groups, one needs to conduct a hypothesis test for each gene. However, as microarray experiments often have a small number of replicates, Student's t-tests using the sample means and standard deviations may suffer from low power for detecting differentially expressed genes. To overcome this problem, we first propose a regularized t distribution and derive its statistical properties, including the probability density function and the moments. The noncentral regularized t distribution is also introduced for the power analysis. To demonstrate the usefulness of the proposed test, we apply the regularized t distribution to the gene expression detection problem. Simulation studies and two real data examples show that the regularized t-test outperforms the existing tests, including Student's t-test and the Bayesian t-test, in a wide range of settings, in particular when the sample size is small.
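The flavor of the Chapter 2 statistic can be sketched as follows for the one-sample diagonal case: a sum of log-transformed squared t-statistics rather than a direct sum of the squared t-statistics. The exact form of the log-transform below is an assumption; the thesis derives the precise statistic.

```python
# Sketch: summing log-transformed squared coordinate-wise t-statistics.
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 100                                   # small n, large p
X = rng.normal(size=(n, p))

t = np.sqrt(n) * X.mean(axis=0) / X.std(axis=0, ddof=1)  # coordinate-wise t-statistics
T = np.sum(np.log1p(t**2 / (n - 1)))             # assumed log-transform, not the thesis's exact one
print(T)                                         # large values indicate departure from the null
```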
APA, Harvard, Vancouver, ISO, and other styles
11

Devine, Timothy Andrew. "Fusing Modeling and Testing to Enhance Environmental Testing Approaches". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/101685.

Full text of the source
Abstract:
A proper understanding of the dynamics of a mechanical system is crucial to ensure the highest levels of performance. The understanding is frequently determined through modeling and testing of components. Modeling provides a cost effective method for rapidly developing knowledge of the system; however, the model is incapable of accounting for fluctuations that occur in physical spaces. Testing, when performed properly, provides a near exact understanding of how a part or assembly functions, but can be expensive both fiscally and temporally. Often, practitioners of the two disciplines work in parallel, never bothering to intersect with the other group. Further advancement into ways to fuse modeling and testing together can produce a more comprehensive understanding of dynamic systems while remaining inexpensive in terms of computation, financial cost, and time. Due to this, the goal of the presented work is to develop ways to merge the two branches to include test data in models for operational systems. This is done through a series of analytical and experimental tasks examining the boundary conditions of various systems. The first venue explored was an attempt at modeling unknown boundary conditions from an operational environment by modeling the same system in known configurations using a controlled environment, such as what is seen in a laboratory test. An analytical beam was studied under applied environmental loading with grounding stiffnesses added to simulate an operational condition, and an attempt was made to match its response with a free-boundary beam and a reduced number of excitation points. Due to the properties of the inverse problem approach taken, the response between the two systems matched at control locations; however, at non-control locations the responses showed a large degree of variation. From the mismatch in mechanical impedance, it is apparent that improperly representing boundary conditions can have drastic effects on the accuracy of models and re-creation tests. With the progression now directed towards modeling and testing of boundary conditions, methods were explored to combine the two approaches working together in harmony. The second portion of this work focuses on modeling an unknown boundary connection using a collection of similar testable boundary conditions to parametrically interpolate to the unknown configuration. This was done by using data-driven models of the known systems as the interpolating functions, with system boundary stiffness being the varied parameter. This approach yielded near identical parametric model response to the original system response in analytical systems and showed some early signs of promise for an experimental beam. After the two conducted studies, the potential for extending a parametric data-driven model approach to other systems is discussed, along with improvements to the approach and the benefits it brings.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
12

Larsen, Fredrik Lied. "Conformance testing of Data Exchange Set implementations". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9258.

Full text of the source
Abstract:

Product information exchange has been described by a number of standards. The “Standard for the Exchange of Product model data” (STEP) is published by ISO as an international standard to cover this exchange. “Product Life Cycle Support” (PLCS) is a standard developed as an extension to STEP, covering the complete life cycle information needs for products. PLCS uses Data Exchange Sets (DEXs) to exchange information. A DEX is a subset of the PLCS structure applicable for product information exchange. A DEX is specified in a separate document from the PLCS standard, and is published under OASIS. The development of DEXs is ongoing and changing; nine DEXs have been identified and are being developed within the Organization for the Advancement of Structured Information Standards (OASIS). Each of the nine DEXs covers a specific business concept. Implementations based on the DEX specifications are necessary in order to send and receive populated DEXs with product information. The implementations add contents to a DEX structure in a neutral file format which can be exchanged. Interoperability between senders and receivers of DEXs cannot be guaranteed; however, conformance testing of implementations can help increase the chances of interoperability. Conformance testing is the process of testing an implementation against a set of requirements stated in a specification or standard used to develop the implementation. Conformance testing is performed by sending inputs to the implementation and observing the output. The output is then analysed with respect to the expected output. STEP dedicates a whole section of the standard to conformance testing of STEP implementations. This section describes how implementations of STEP shall be tested and analysed. PLCS is an extension of STEP, and DEXs are subsets of PLCS. Conformance testing for STEP is used as a basis for DEX conformance testing, because of the similarities between PLCS and STEP. A testing methodology based on STEP conformance testing and DEX specifications is developed. The testing methodology explains how conformance testing can be achieved on DEX implementations, exemplified with a test example on a specific DEX. The thesis develops a proposed set of test methods for conformance testing of DEX adapter implementations. Conformance testing of Export adapters tests the adapter's ability to populate and output a correct DEX according to the specifications in the applicable DEX specification. Conformance testing of the Import adapter verifies that the content of the populated input DEX is retrievable in the receiving computer system. A specific DEX, “Identify a part and its constituent parts”, is finally used as an example of how to test a specific DEX specification. Test cases are derived from a set of test requirements identified from the DEX specification, and testing of these requirements is explained explicitly.

APA, Harvard, Vancouver, ISO, and other styles
13

Curry, Diarmuid. "Data Acquisition Blasts Off - Space Flight Testing". International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606142.

Full text of the source
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
In principle, the requirements for a flight test data acquisition system for space testing (launch vehicles, orbiters, satellites and International Space Station (ISS) installations) are very similar to those for more earth-bound applications. In practice, there are important environmental and operational differences that present challenges for both users and vendors of flight test equipment. Environmental issues include the severe vibration and shock experienced on take-off, followed by a very sharp thermal shock, culminating (for orbital vehicles) in a low temperature, low pressure, high radiation operating environment. Operational issues can include the need to dynamically adapt to changing configurations (for example, when an instrumented stage is released) and the difficulty of telemetering data during the initial launch stage from a vehicle that may not be recoverable, and therefore does not offer the option of an on-board recorder. Addressing these challenges requires simple, rugged and flexible solutions. Traditionally these solutions have been bespoke, specifically designed equipment. In an increasingly cost-conscious environment, engineers are now looking to commercial off-the-shelf solutions. This paper discusses these solutions and highlights the issues that instrumentation engineers need to consider when designing or selecting flight test equipment.
APA, Harvard, Vancouver, ISO, and other styles
14

Liu, Zheng. "Studies on Data Fusion of Nondestructive Testing". Kyoto University, 2000. http://hdl.handle.net/2433/180956.

Full text of the source
Abstract:
Author: Liu Zheng (刘征). Kyoto University (京都大学), Graduate School of Engineering, Department of Resources Engineering; Doctor of Engineering, 2000.
APA, Harvard, Vancouver, ISO, and other styles
15

McClellan, Griffin David. "Weakest Pre-Condition and Data Flow Testing". PDXScholar, 1995. https://pdxscholar.library.pdx.edu/open_access_etds/5200.

Full text of the source
Abstract:
Current data flow testing criteria cannot be applied to test array elements for two reasons: 1. The criteria are defined in terms of graph theory which is insufficiently expressive to investigate array elements. 2. Identifying input data which test a specified array element is an unsolvable problem. We solve the first problem by redefining the criteria without graph theory. We address the second problem with the invention of the wp_du method, which is based on Dijkstra's weakest pre-condition formalism. This method accomplishes the following: Given a program, a def-use pair and a variable (which can be an array element), the method computes a logical expression which characterizes all the input data which test that def-use pair with respect to that variable. Further, for any data flow criterion, this method can be used to construct a logical expression which characterizes all test sets which satisfy that data flow criterion. Although the wp_du method cannot avoid unsolvability, it does confine the presence of unsolvability to the final step in constructing a test set.
APA, Harvard, Vancouver, ISO, and other styles
16

Lamphear, Eric, Alfredo J. Berard, and Lorin D. Klein. "ACCEPTANCE TESTING PROCEDURE (ATP) COMPLIANCE TESTING OF IRIG-106 CHAPTER 10 RECORDERS". International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604915.

Full text of the source
Abstract:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
The Range Commanders Council (RCC) Inter-Range Instrumentation Group (IRIG) 106 Chapter 10 (CH 10) solid state recording standard has made large scale interoperability between ranges, test and operational communities, and maintenance a reality. The standard allows software and hardware playback/analysis tools to be created that will work seamlessly with any IRIG-106 CH 10 compliant recorder. Incorporation of the standard also allows the same recorder to record video and audio as well as data from MIL-STD-1553 buses and instrumentation data (PCM, UART, etc.). The IRIG-106 CH 10 standard provides enormous benefits for its users, but without a fully compliant IRIG-106 CH 10 recorder, these benefits cannot be realized.
APA, Harvard, Vancouver, ISO, and other styles
17

Cousins, Michael Anthony. "Automated structural test data generation". Thesis, University of Portsmouth, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261234.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Barwary, Sara, and Tina Abazari. "Preprocessing Data: A Study on Testing Transformations for Stationarity of Financial Data". Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254301.

Full text of the source
Abstract:
In this thesis within Industrial Economics and Applied Mathematics, in cooperation with Svenska Handelsbanken, given transformations were examined in order to assess their ability to make a given time series stationary. In addition, a parameter α belonging to each of the transformation formulas was to be decided. To do this, an extensive study of previous research was conducted, and two different hypothesis tests were used to confirm the output. A result was concluded where a value or interval for α was chosen for each transformation. Moreover, the first difference transformation is shown to have a positive effect on the stationarity of financial data.
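A minimal sketch of the first-difference transformation examined above, checked with an augmented Dickey-Fuller test; the simulated series and the use of statsmodels are assumptions for illustration (the thesis relied on its own pair of hypothesis tests).

```python
# Sketch: a random walk is non-stationary in levels but stationary in
# first differences, which an ADF test should reflect.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
prices = 100.0 + np.cumsum(rng.normal(size=500))  # random-walk 'price' series

for name, series in [("levels", prices), ("first difference", np.diff(prices))]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF p-value = {pvalue:.4f}")  # small p-value -> reject unit root
```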
APA, Harvard, Vancouver, ISO, and other styles
19

Lee, Sang Han. "Estimating and testing of functional data with restrictions". College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1626.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
20

Zhao, Chong. "Essays on unit root testing in panel data". Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/4946/.

Full text of the source
Abstract:
This thesis discusses some issues in unit root testing in panel data. It first examines intra-China price convergence by employing panel unit root tests that take cross-sectional dependence into account. In contrast to the existing literature, where tests assuming independence are employed and PPP is found in the vast majority of goods/services prices, our study finds mixed evidence in favor of PPP. Mixed panels with both I(1) and I(0) units are then considered, and a large scale simulation study is undertaken. The size and power of panel unit root tests are examined under a variety of DGPs. A battery of procedures designed for mixed panels is employed, and their performance is examined by simulation. An application to intra-China PPP shows that, on average, only a small proportion of stationary units can be found in relative price panels. We then consider fractionally integrated processes and propose two different types of panel fractional integration test: a Fisher-type test and a multiple testing procedure that controls the false discovery rate (FDR) and classifies units into null and alternative. Simulation evidence is provided. The empirical application shows that, in our intra-China PPP study, strong evidence can be found against the unit root null.
APA, Harvard, Vancouver, ISO, and other styles
21

Monteiro, Vitor Borges. "Infrastructure and growth: testing data in three panel". Universidade Federal do Ceará, 2011. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=8284.

Full text of the source
Abstract:
The thesis consists of three chapters that have in common estimation models for panel data. The first chapter, titled "Energy Consumption, GDP per capita and Exports: Evidence of long-term causality in a panel for the Brazilian States", analyzes the order of causality between the variables and then checks the long-term elasticities using the FMOLS methodology. It shows that GDP per capita is caused by its own past realizations, by the consumption of electricity and by exports, while the consumption of electricity and exports are not caused by GDP per capita. Through the FMOLS model, the long-term elasticities were estimated: a 1% increase in energy consumption and in exports raises GDP per capita by 0.07% and 0.04%, respectively. The second chapter, entitled "Sustainability of Health Expenditure and Sanitation in Brazil: an analysis with Panel Data for the period 1985 to 2005", examines the sustainability of the health and sanitation expenditure of the states and the Federal District of Brazil during the period 1985 to 2005. For this, we use the ratio of Expenditure by Function (Health and Sanitation) to GDP. The unit root tests for panel data reject the null hypothesis of a unit root (i.e., the stochastic process is stationary) at the 5% significance level. Accordingly, we can infer that the policy of health expenditure as a proportion of GDP remained almost stable (i.e., sustainable) over the period in question. The third chapter, entitled "Formation of Convergence Clubs and Analysis of the Determinants of Economic Growth", supports the formation of 10 convergence clubs for a sample of 112 countries with per capita GDP data from 1980 to 2014 using the Phillips and Sul (2007) methodology. Having identified the clubs, a panel was estimated through the Arellano and Bond (1991) model to investigate the impact of macroeconomic variables on the dynamics of the economic growth rate, showing that: i) inflation impacts the growth rate negatively, with a greater effect for clubs that converge to a higher level of per capita income; ii) imports as a proportion of GDP have a positive relationship with the growth rate of per capita income for countries belonging to intermediate clubs, and a negative effect for the other clubs; iii) exports as a proportion of GDP have a positive effect for all clubs, but it is more pronounced for clubs that converge to a lower level of income; and iv) international reserves have a positive effect for clubs that converge to high levels of income and a negative effect for clubs that converge to low levels of income.
A tese à composta por trÃs capÃtulos que possuem em comum modelos de estimaÃÃo para dados em painel. O primeiro capÃtulo intitulado âConsumo de Energia ElÃtrica, PIB per capita e ExportaÃÃo: Uma evidÃncia de causalidade de longo prazo em um painel para os Estados brasileirosâ analisa a o ordem de causalidade entre as variÃveis e posteriormente verifica as elasticidades de longo prazo atravÃs da metodologia FMOLS. Evidencia-se que o PIB per capita à causado pelas suas prÃprias realizaÃÃes passadas, pelo consumo de energia elÃtrica e pelas exportaÃÃes. Jà o consumo de energia elÃtrica e as exportaÃÃes, apenas nÃo sÃo causados pelo PIB per capita. AtravÃs do modelo FMOLS, estimaram-se as elasticidades de longo prazo. O aumento de 1% no consumo de energia e exportaÃÃes aumenta respectivamente 0,07% e 0,04% no PIB per capita. O segundo capÃtulo, intitulado âSustentabilidade dos Gasto com SaÃde e Saneamento no Brasil: uma anÃlise com Dados em Painel para o perÃodo de 1985 a 2005â examina a sustentabilidade dos gastos com saÃde e saneamento dos Estados e do Distrito Federal brasileiro, durante o perÃodo de 1985 a 2005. Para isso, utiliza-se da razÃo entre a Despesa por FunÃÃo (SaÃde e Saneamento) e o PIB. Os testes de raiz unitÃria para dados em painel refutam a hipÃtese nula de presenÃa de raiz de raiz unitÃria (i.e., o processo estocÃstico à estacionÃrio) ao nÃvel de 5% de significÃncia. Nestes termos, pode-se inferir que a polÃtica de gastos com saÃde como proporÃÃo do PIB praticamente permaneceu estÃvel (i.e., sustentÃvel) ao longo do perÃodo em questÃo. O terceiro capÃtulo intitulado âFormaÃÃo de Clubes de ConvergÃncia e AnÃlise dos Determinantes do Crescimento EconÃmicoâ sustenta a formaÃÃo de 10 clubes de convergÃncia para uma amostra de 112 paÃses com dados do PIB per capita de 1980 a 2014 atravÃs da metodologia Phillips e Sul (2007). Identificados os clubes e estimado um painel para verificar o impacto de variÃveis macroeconÃmicas na dinÃmica da taxa de crescimento econÃmico atravÃs do modelo Arellano e Bond (1991), evidenciou-se: i) A inflaÃÃo impacta a taxa de crescimento de forma negativa, com efeito maior para clubes que convergem para um nÃvel de renda per capita mais elevado; ii) As importaÃÃes como proporÃÃo do PIB possuem relaÃÃo positiva com a taxa de crescimento da renda per capita para os paÃses pertencentes a clubes intermediÃrios, e efeito negativo para os clubes do extremo; iii) As exportaÃÃes como proporÃÃo do PIB possuem efeito positivo para todos os clubes, porÃm à mais acentuado para clubes que convergem para um nÃvel de renda mais baixo e; iv) As reservas internacionais possuem efeito positivo para clubes que convergem para elevados nÃveis de renda e efeito negativo para os clubes que convergem para baixos nÃveis de renda.
APA, Harvard, Vancouver, ISO, and other styles
22

Stenberg, Erik. "SEQUENTIAL A/B TESTING USING PRE-EXPERIMENT DATA". Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385253.

Full text of the source
Abstract:
This thesis bridges the gap between two popular methods of achieving more efficient online experiments, sequential tests and variance reduction with pre-experiment data. Through simulations, it is shown that there is efficiency to be gained in using control-variates sequentially along with the popular mixture Sequential Probability Ratio Test. More efficient tests lead to faster decisions and smaller sample sizes required. The technique proposed is also tested using empirical data on users from the music streaming service Spotify. An R package which includes the main tests applied in this thesis is also presented.
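A compact sketch of the combination described above: observations are first adjusted with a control variate built from pre-experiment data, then fed to a mixture Sequential Probability Ratio Test. The data-generating process, tau^2, and the one-shot evaluation (rather than a truly sequential loop) are assumptions for illustration.

```python
# Sketch: control-variate variance reduction followed by an mSPRT decision.
import numpy as np

def msprt_lr(y, sigma2, tau2):
    """Mixture-SPRT likelihood ratio for H0: mean = 0, normal data, known sigma^2."""
    n, ybar = len(y), np.mean(y)
    s = sigma2 + n * tau2
    return np.sqrt(sigma2 / s) * np.exp(n**2 * tau2 * ybar**2 / (2.0 * sigma2 * s))

rng = np.random.default_rng(4)
n = 400
pre = rng.normal(size=n)                              # pre-experiment covariate
y = 0.2 + 0.8 * pre + rng.normal(scale=0.5, size=n)   # in-experiment metric

theta = np.cov(y, pre)[0, 1] / np.var(pre)            # control-variate coefficient
y_cv = y - theta * (pre - pre.mean())                 # variance-reduced observations

alpha = 0.05
lr = msprt_lr(y_cv, sigma2=np.var(y_cv, ddof=1), tau2=1.0)
print("reject H0" if lr > 1.0 / alpha else "continue sampling")
```

The control-variate step shrinks the effective sigma^2, which lets a genuine effect push the likelihood ratio over its boundary with fewer samples; that is the efficiency gain the thesis quantifies.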
APA, Harvard, Vancouver, ISO, and other styles
23

Cintura, Manuel. "An Embedded Data Logger for In-Vehicle Testing". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23841/.

Full text of the source
Abstract:
This thesis describes an embedded data logger project, composed of a software part (in C++) and a hardware part (Raspberry Pi). The whole procedure is illustrated, from the start of the project with requirements to the end with the experimental results and the validation phase. The device is able to acquire, in a synchronous way, video, CAN, and serial logs from the vehicle under test.
APA, Harvard, Vancouver, ISO, and other styles
24

Yung, Wing Ka Angela. "Collecting Normative Data For Video Head Impulse Testing". Thesis, The University of Arizona, 2017. http://hdl.handle.net/10150/625327.

Full text of the source
Abstract:
The semicircular canals are involved in the coding of angular acceleration of the head and body. Presently, video-nystagmography (VNG), and specifically caloric testing, is the gold standard for evaluation of semicircular canal function. Caloric irrigation via VNG can only evaluate horizontal semicircular canal function; with this test, there is no way to evaluate the function of the anterior and posterior vertical semicircular canals. The video Head Impulse Test (vHIT) is a relatively new protocol that has the capability to test the function of the horizontal, anterior vertical, and posterior vertical semicircular canals. Because the vHIT system is newly available to clinicians, there is a need to collect normative data, particularly for the vertical semicircular canals. For this study, data were collected from 12 participants with no complaint or history of balance difficulty. Additionally, we compared our data with normative data collected in an earlier study to determine consistency. Lateral average velocity gain measurements were consistent; however, a comparison of RALP and LARP velocity gain measurements showed inconsistency.
APA, Harvard, Vancouver, ISO, and other styles
25

Becker, Ralf. "Testing for nonlinear structure in time-series data". Thesis, Queensland University of Technology, 2001.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
26

Hull, Roy T. Jr. "TELEMETRY IN TESTING OF UNDERSEAS WEAPONS". International Foundation for Telemetering, 1991. http://hdl.handle.net/10150/612893.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The performance testing of underseas weapons involves many of the same challenges as for other “smart” systems. Data sets on the order of gigabytes must be extracted, processed, analyzed, and stored. A few kilobytes of significant information must be efficiently identified and accessed for analysis out of the great mass of data. Data from various sources must be time correlated and fused together to allow full analysis of the complex interactions which lead to a given test result. The fact that the various sources all use different formats and media just adds to the fun. Testing of underseas weapons also involves some unique problems. Since real time data transmission is not practical, the vast bulk of the test data is recorded and then recovered with the vehicle at the end of the test. Acoustics are relied on for identification and ranging. As systems continue to get smarter, the rates, capacities, and “smarts” of the equipment and software used to process test data must similarly increase. The NUWES telemetry capabilities developed to test and analyze underseas weapons could be of use on other government related projects. Key words: telemetry, data processing, data analysis, undersea weapons, smart weapons, torpedoes, performance testing.
APA, Harvard, Vancouver, ISO, and other styles
27

Tian, Xuwen (田旭文). "Data-driven textile flaw detection methods". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hdl.handle.net/10722/196091.

Full text of the source
Abstract:
This research develops three efficient textile flaw detection methods to facilitate automated textile inspection for the textile-related industries. Their novelty lies in detecting flaws with knowledge directly extracted from textile images, unlike existing methods which detect flaws with empirically specified texture features. The first two methods treat textile flaw detection as a texture classification problem, and consider that defect-free images of a textile fabric normally possess common latent images, called basis-images. The inner product of a basis-image and an image acquired from this fabric is a feature value of this fabric image. As the defect-free images are similar, their feature values gather in a cluster, whose boundary can be determined by using the feature values of known defect-free images. A fabric image is considered defect-free, if its feature values lie within this boundary. These methods extract the basis-images from known defect-free images in a training process, and require less consideration than existing methods on the degree of matching of a textile to the texture features specified for the textile. One method uses matrix singular value decomposition (SVD) to extract these basis-images containing the spatial relationship of pixels in rows or in columns. The alternative method uses tensor decomposition to find the relationship of pixels in both rows and columns within each training image and the common relationship of these training images. Tensor decomposition is found to be superior to matrix SVD in finding the basis-images needed to represent these defect-free images, because extracting and decomposing the tri-lateral relationship usually generates better basis-images. The third method solves the textile flaw detection problem by means of texture segmentation, and is suitable for online detection because it does not require texture features specified by experience or found from known defect-free images. The method detects the presence of flaws by using the contrast between regions in the feature images of a textile image. These feature images are the output of a filter bank consisting of Gabor filters with scales and rotations. This method selects the feature image with maximal image contrast, and partitions this image into regions with morphological watershed transform to facilitate faster searching of defect-free regions and to remove isolated pixels with exceptional feature values. Regions with no flaws have similar statistics, e.g. similar means. Regions with significantly dissimilar statistics may contain flaws and are removed iteratively from the set which initially contains all regions. Removing regions uses the thresholds determined by using Neyman-Pearson criterion and updated along with the remaining regions in the set. This procedure continues until the set only contains defect-free regions. The occurrence of the removed regions indicates the presence of flaws whose extents are decided by pixel classification using the thresholds derived from the defect-free regions. A prototype textile inspection system is built to demonstrate the automatic textile inspection process. The developed methods are proved reliable and effective by testing them with a variety of defective textile images. These methods also have several advantages, e.g. less empirical knowledge of textiles is needed for selecting texture features.
Industrial and Manufacturing Systems Engineering; Doctor of Philosophy
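A minimal numpy sketch of the basis-image idea behind the first method: the image sizes, the number of basis-images kept, and the crude min/max cluster boundary are assumptions for illustration (the thesis derives the boundary from the feature values of known defect-free images).

```python
# Sketch: SVD basis-images from defect-free samples; a new image whose
# feature values (inner products with the basis-images) fall outside the
# defect-free cluster is flagged as possibly flawed.
import numpy as np

rng = np.random.default_rng(5)
h, w, n_train = 16, 16, 40
texture = rng.normal(size=(h, w))                       # stand-in for the fabric texture
train = np.stack([(texture + rng.normal(scale=0.1, size=(h, w))).ravel()
                  for _ in range(n_train)])             # defect-free images

_, _, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:3]                                          # leading basis-images

feats = train @ basis.T                                 # feature values of training images
lo, hi = feats.min(axis=0), feats.max(axis=0)           # crude cluster boundary

test = (texture + rng.normal(scale=0.8, size=(h, w))).ravel()  # distorted 'flawed' image
f = basis @ test
print("defect-free" if np.all((f >= lo) & (f <= hi)) else "possible flaw")
```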
APA, Harvard, Vancouver, ISO, and other styles
28

Gros, X. E. "Fusion of multiprobe NDT data". Thesis, Robert Gordon University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294936.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
29

Tsang, Wai-ming Peter (曾偉明). "Computer aided ultrasonic flaw detection and characterization". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B31231007.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
30

Kumar, Dharmendra. "A COMPUTATIONALLY EFFICIENT METHOD OF ANALYZING THE PARAMETRIC SUBSTRUCTURES". Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275395.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Goyal, Shalabh. "Efficient Testing of High-Performance Data Converters Using Low-Cost Test Instrumentation". Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14552.

Full text of the source
Abstract:
Test strategies were developed to reduce the overall production testing cost of high-performance data converters. A static linearity testing methodology, aimed at reducing the test time of A/D converters, was developed; the architectural information of A/D converters was used, and specific codes were measured. To test high-performance A/D converters using low-performance, low-cost test equipment, a dynamic testing methodology involving post-processing of measurement data was developed. The effect of ground bounce on the accuracy of specification measurement was analyzed, and a test strategy was developed to estimate A/D converter specifications more accurately in the presence of ground bounce noise. The proposed test strategies were simulated using behavioral modeling techniques and were implemented on commercially available A/D converter devices. The hardware experiments validated the proposed test strategies. A test cost analysis was done; it suggests that a significant reduction in cost can be obtained by using the proposed test methodologies for data converter production testing.
APA, Harvard, Vancouver, ISO, and other styles
32

Caprioli, Peter. "AMQP Standard Validation and Testing". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277850.

Full text of the source
Abstract:
As large-scale applications (such as the Internet of Things) become more common, the need to scale applications over multiple physical servers increases. One way of doing so is by utilizing middleware, a technique that breaks down a larger application into specific parts that each can run independently. Different middleware solutions use different protocols and models. One such solution, AMQP (the Advanced Message Queueing Protocol), has become one of the most used middleware protocols as of late, and multiple open-source implementations of both the server and client side exist. In this thesis, a security and compatibility analysis of the wire-level protocol is performed against five popular AMQP libraries. Compatibility with the official AMQP specification and variances between different implementations are investigated. Multiple differences between the libraries and the formal AMQP specification were found. Many of these differences are the same in all of the tested libraries, suggesting that they were developed empirically rather than by following the specification. While these differences were found to be subtle and generally do not pose any critical security, safety or stability risks, it was also shown that in some circumstances it is possible to use these differences to perform a data injection attack, allowing an adversary to arbitrarily modify some aspects of the protocol. The protocol testing is performed using a software tester, AMQPTester. The tester is released alongside the thesis and allows for easy decoding/encoding of the protocol. Until the release of this thesis, no other dedicated AMQP testing tools existed. As such, future research will be made significantly easier.
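To give a feel for the wire-level checking such a tester performs, the sketch below parses the AMQP 0-9-1 general frame format (a type octet, a channel short, a payload size long, the payload, and the frame-end octet 0xCE) and flags malformed frames. It is not AMQPTester's actual code; the frame layout is the one fixed by the 0-9-1 specification.

```python
# Sketch: validate AMQP 0-9-1 framing (type, channel, size, payload, 0xCE).
import struct

FRAME_END = 0xCE

def parse_frame(buf: bytes):
    """Return (frame_type, channel, payload, remainder) or raise on a bad frame."""
    if len(buf) < 8:                      # 7-byte header + at least the frame-end octet
        raise ValueError("incomplete frame")
    ftype, channel, size = struct.unpack_from("!BHI", buf, 0)
    end = 7 + size
    if len(buf) <= end:
        raise ValueError("truncated payload")
    if buf[end] != FRAME_END:
        raise ValueError("missing frame-end octet 0xCE")  # a specification violation
    return ftype, channel, buf[7:end], buf[end + 1:]

# A hand-built heartbeat frame: type 8, channel 0, empty payload.
frame = struct.pack("!BHI", 8, 0, 0) + bytes([FRAME_END])
print(parse_frame(frame)[:2])  # -> (8, 0)
```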
APA, Harvard, Vancouver, ISO, and other styles
33

Zarneh, A. T. "Instrumentation for on-line autonomic function testing". Thesis, University of Bradford, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371492.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
34

Wenzel, Robert Joseph. "Multigigahertz digital test system electronics and high frequency data path modeling". Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13334.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
35

Tsai, Bor-Yuan. "A hybrid object-oriented class testing method: based on state-based and data-flow testing". Thesis, University of Sunderland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311294.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
36

Bittencourt, Marcelo Corrêa de. "Comparing different and inverter graph data structure". Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/185987.

Full text of the source
Abstract:
This document presents a performance analysis of four different And-Inverter Graph (AIG) implementations. AIG is a data structure commonly used in programs for digital circuit design. Different implementations of the same data structure can affect performance, as demonstrated by previous works that evaluate the performance of different Binary Decision Diagram (BDD) packages, another data structure widely used in logic synthesis. We have implemented four distinct AIG data structures using a choice of unidirectional or bidirectional graphs in which the references to nodes are made using pointers or indexed using non-negative integers. Using these different AIG data structures, we measure how different implementation aspects affect performance when running a basic algorithm.
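A minimal sketch of one of the four variants compared above: an index-based AIG in which node references are non-negative integers and the low bit of an edge encodes the inverter. The node layout and API are assumptions for illustration, not the document's actual code.

```python
# Sketch: index-based And-Inverter Graph; edge = node_index * 2 (+1 if inverted).
class AIG:
    def __init__(self):
        self.nodes = [(0, 0)]              # node 0: constant false

    def new_input(self):
        self.nodes.append(None)            # primary inputs have no fanins
        return (len(self.nodes) - 1) * 2

    def new_and(self, a, b):
        self.nodes.append((a, b))          # AND node stores its two fanin edges
        return (len(self.nodes) - 1) * 2

    @staticmethod
    def invert(edge):
        return edge ^ 1                    # flip the complement bit

    def eval(self, edge, values):
        """Evaluate an edge given {input_edge: 0 or 1}."""
        node, neg = edge >> 1, edge & 1
        if node * 2 in values:             # primary input
            v = values[node * 2]
        elif node == 0:                    # constant
            v = 0
        else:                              # AND node
            a, b = self.nodes[node]
            v = self.eval(a, values) & self.eval(b, values)
        return v ^ neg

g = AIG()
x, y = g.new_input(), g.new_input()
f = g.invert(g.new_and(x, g.invert(y)))    # f = NOT(x AND NOT y) = (NOT x) OR y
print(g.eval(f, {x: 1, y: 0}))             # -> 0
```

Indices keep all nodes in one contiguous array, which tends to be friendlier to the cache than pointer chasing; that trade-off is exactly the kind of implementation aspect the study measures.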
APA, Harvard, Vancouver, ISO, and other styles
37

Holmeros, Linus. "Data acquisition system for rocket engine hot fire testing". Thesis, KTH, Maskinkonstruktion (Inst.), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-99495.

Full text of the source
Abstract:
ECAPS has developed a unique propellant with a rocket engine which can be used to control satellites and replace Hydrazin which today is the most common fuel onboard on satellites. Hydrazin is extremely toxic and cancerogenic. The new propellant offers 6 % better specific impulse and 30 % better density impulse compared to hydrazine. ECPAS´s propellant also provides significant lower risks for both man and environment. The report includes a literature study about rocket engines which can be used on satellites and how the test environment is arranged where ECAPS develops their engines. The rocket engine is first generally described and then complemented a theoretical derivation of common concepts. For further development of new rocket engines the present engine test system has too few sensor channels and limited sampling capability (2 kHz). The operator interface and software can be upgraded and the number of channels needs to increase. This report treats the implementation of a new test system which is written in Labview 8.6 and has improved for example performance, stability and interface. The sampling frequency is now 10 kHz on 24 channels with a margin for up to 40 channels, alarm functions exists on both temperature and multiple choice sensors, the user interface is logic and more ergonomic together with increased traceability for different types of tests which are saved in unique logs.
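The actual system is written in LabVIEW 8.6; purely as an illustration of the alarm-and-logging behaviour described above, here is a hypothetical Python sketch in which `read_channels` and the channel layout are invented stand-ins.

```python
# Hypothetical sketch of the alarm logic described above; the real system is
# a LabVIEW 8.6 application, and read_channels() is an invented stand-in for
# one hardware-driven 24-channel sample frame.
import time

ALARM_LIMITS = {0: 900.0, 1: 650.0}   # channel index -> alarm threshold

def read_channels() -> list:
    """Stand-in for one 24-channel frame; a real system reads DAQ hardware."""
    return [0.0] * 24

def acquisition_loop(n_frames, log):
    for _ in range(n_frames):
        frame = read_channels()
        log.append((time.time(), frame))   # every run kept in a unique log
        for ch, limit in ALARM_LIMITS.items():
            if frame[ch] > limit:
                print(f"ALARM: channel {ch} = {frame[ch]:.1f} exceeds {limit}")
```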
Style APA, Harvard, Vancouver, ISO itp.
38

Suljkic, Jasmin, i Eric Molin. "Testing the predictability of stock markets on real data". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157691.

Pełny tekst źródła
Streszczenie:
Stock trading is one of the most common economic activities in the world, and the values of stocks change quickly over time. Some traders are able to turn great profits while others take great losses, so being able to predict changes could be of great help in maximising the chances of profitability. In this report we evaluate the predictability of stock markets using Artificial Neural Network (ANN) models, Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Autoregressive Moving-Average (ARMA) models. The markets used are the Stockholm, Korea, and Barcelona stock exchanges. We use two test scenarios: one in which the initial 25 days of training data are extended by 5 days at a time until the end of the stock year, and one in which a 25-day training window is moved forward 5 days at a time until the stock year is over. Our results show what the predictions look like on the Stockholm market for all methods and test scenarios, along with graphs of the error rate (percentage) of all methods and test cases in each of the markets, and a table of the average error of the methods and test cases, used to evaluate which one performs best. Our results show that ANFIS with at least 50 days of training performs best.
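The two test scenarios are easy to state precisely in code. A sketch, assuming a daily price series and a generic `fit_predict` standing in for any of the three models:

```python
# Sketch of the two windowing scenarios described above; `fit_predict` is an
# assumed stand-in for any of the compared models (ANN, ANFIS, ARMA).

def expanding_windows(n_days, init=25, step=5):
    """Scenario 1: grow the training set from 25 days by 5 days at a time."""
    end = init
    while end + step <= n_days:
        yield range(0, end), range(end, end + step)   # train, test indices
        end += step

def sliding_windows(n_days, width=25, step=5):
    """Scenario 2: move a fixed 25-day training window forward 5 days at a time."""
    start = 0
    while start + width + step <= n_days:
        yield range(start, start + width), range(start + width, start + width + step)
        start += step

def error_rate(prices, windows, fit_predict):
    """Mean absolute percentage error over all test windows."""
    errs = []
    for train_idx, test_idx in windows:
        pred = fit_predict([prices[i] for i in train_idx], len(test_idx))
        errs += [abs(p - prices[i]) / prices[i] for p, i in zip(pred, test_idx)]
    return 100.0 * sum(errs) / len(errs)
```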
Style APA, Harvard, Vancouver, ISO itp.
39

Carter, Jason W. "Testing effectiveness of genetic algorithms for exploratory data analysis". Thesis, Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/9065.

Pełny tekst źródła
Streszczenie:
Approved for public release; distribution is unlimited
Heuristic methods of solving exploratory data analysis problems suffer from one major weakness: uncertainty regarding the optimality of the results. The developers of DaMI (Data Mining Initiative), a genetic algorithm designed to mine the CCEP (Comprehensive Clinical Evaluation Program) database in the search for a Persian Gulf War syndrome, proposed a method to overcome this weakness: reproducibility -- the conjecture that consistent convergence on the same solutions is both necessary and sufficient to ensure a genetic algorithm has effectively searched an unknown solution space. We demonstrate the weakness of this conjecture in light of accepted genetic algorithm theory. We then test the conjecture by inserting an interesting solution of known quality into the CCEP database and performing a discovery session using DaMI on this modified database. The necessity of reproducibility as a terminating condition is falsified by the algorithm finding the optimal solution without yielding strong reproducibility. The sufficiency of reproducibility as a terminating condition is analyzed by manual examination of the CCEP database in which strong reproducibility was experienced. Ex post facto knowledge of the solution space is used to prove that DaMI had not found the optimal solutions even though it gave strong reproducibility, causing us to reject the conjecture that strong reproducibility is a sufficient terminating condition.
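The reproducibility conjecture under test can be phrased in a few lines: run the algorithm repeatedly and check whether all runs converge on the same best solution. A toy sketch on a OneMax problem (an assumption for illustration, not the CCEP/DaMI setting):

```python
# Toy sketch of the reproducibility check discussed above: repeated GA runs
# are compared on their best-found solution. OneMax fitness (count of 1-bits)
# is an illustrative stand-in for the data mining objective.
import random

def ga_run(seed, n_bits=20, pop=30, gens=50):
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=sum, reverse=True)          # OneMax fitness
        parents = popn[: pop // 2]                # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.05:
                child[rng.randrange(n_bits)] ^= 1  # single-bit mutation
            children.append(child)
        popn = parents + children
    return max(popn, key=sum)

best = [tuple(ga_run(seed)) for seed in range(10)]
print("strong reproducibility:", len(set(best)) == 1)
```

The thesis's point is exactly that such agreement is neither necessary nor sufficient evidence that the solution space has been searched effectively.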
Style APA, Harvard, Vancouver, ISO itp.
40

Mastrippolito, Luigi. "NETWORKED DATA ACQUISITION DEVICES AS APPLIED TO AUTOMOTIVE TESTING". International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/606740.

Pełny tekst źródła
Streszczenie:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The US Army Aberdeen Test Center (ATC) is acquiring, transferring, and databasing data during all phases of automotive testing using networked data acquisition devices. The devices are small, ruggedized, computer-based systems programmed with specific data acquisition tasks and then networked together with other devices in order to share information within a test item or vehicle. One of the devices is also networked to a ground station for monitoring, control, and data transfer of any of the devices on the net. Application of these devices has varied from single-vehicle tests in a single geographical location up to a 100-vehicle nationwide test. Each device has a primary task, such as acquiring data from vehicular data buses (MIL-STD-1553, SAE J1708, SAE J1939, RS-422 serial, etc.), GPS (time and position), analog sensors, and video with audio. Each device has programmable options, maintained in a configuration file, that define the specific recording methods, real-time algorithms to be performed, data rates, and triggering parameters. The programmability of the system and bi-directional communications allow the configuration file to be modified remotely after the system is fielded. The primary data storage medium of each device is onboard solid-state flash disk; therefore, a continuous communication link is not critical to data gathering. Data are gathered, quality checked, and loaded into a database for analysis. The configuration file, as an integral part of the database, ensures configuration identity and management. A web-based graphical user interface provides preprogrammed query options for viewing, summarizing, graphing, and consolidating data. The database can also be queried for more detailed analyses. The architecture for this networked approach to field data acquisition was developed under the Aberdeen Test Center program Versatile Information System Integrated On-Line (VISION). This paper describes how merging data acquisition systems with network communications and information management tools provides a powerful resource for system engineers, analysts, evaluators, and acquisition personnel.
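As an illustration of the configuration-file idea the abstract describes, a hypothetical sketch follows; the field names are invented, not the actual VISION format.

```python
# Hypothetical illustration of a per-device configuration file of the kind
# described above; all field names are invented for this sketch.
import json

config = {
    "device_id": "veh042-bus-logger",
    "sources": [
        {"bus": "SAE J1939", "record": "continuous", "rate_hz": 100},
        {"bus": "MIL-STD-1553", "record": "triggered", "rate_hz": 50},
        {"sensor": "gps", "record": "continuous", "rate_hz": 1},
    ],
    "triggers": [{"channel": "engine_rpm", "above": 2500.0}],
    "storage": "onboard-flash",   # survives loss of the network link
}

with open("device_config.json", "w") as f:
    json.dump(config, f, indent=2)   # kept with the data for traceability
```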
Style APA, Harvard, Vancouver, ISO itp.
41

Prasai, Nilam. "Testing Criterion Validity of Benefit Transfer Using Simulated Data". Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/34685.

Pełny tekst źródła
Streszczenie:
The purpose of this thesis is to investigate how differences between the study and policy sites impact the performance of benefit function transfer. For this purpose, simulated data are created in which all information necessary to conduct the benefit function transfer is available. We consider six cases of difference between the study and policy sites: the scale parameter, substitution possibilities, observable characteristics, population preferences, measurement error in variables, and a case of preference heterogeneity at the study site with fixed preferences at the policy site. These cases of difference were considered one at a time, and their impact on the quality of transfer investigated. A RUM model based on revealed preference was used for this analysis. The function estimated at the study site is transferred to the policy site, and willingness to pay (WTP) is calculated for five different cases of policy changes. The WTP so calculated is compared with true WTP to evaluate the performance of benefit function transfer. When the study and policy sites differ only in the scale parameter, equality of estimated and true expected WTP is not rejected in 89.7% or more of cases when the sample size is 1000; similarly, equality of estimated and true preference coefficients is not rejected in 88.8% or more. We find that benefit transfer performs better in only one direction: when the function is estimated at a lower scale and transferred to a policy site with a higher scale, the transfer error is smaller in magnitude than when the function is estimated at a higher scale and transferred to a policy site with a lower scale. This study also finds that, whenever site substitution possibilities differ, transfer error is smaller when a function from a study site with more site substitutes is transferred to a policy site with fewer site substitutes. Transfer error is magnified when measurement error is involved in any of the variables. The study does not recommend function transfer whenever the study site's model is missing a variable that is important at the policy site, or whenever data on the variables included in the study site's model are not available at the policy site. It also suggests the use of a large representative sample with sufficient variation to minimize transfer error in benefit transfer applications.
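The core comparison reduces to a few lines once the RUM is linear in quality and cost. A simplified sketch with illustrative coefficient values (the thesis's simulation design is far richer):

```python
# Simplified sketch of the transfer-error calculation: in a linear RUM with
# utility V = b_q * quality - b_c * cost, WTP for a quality change dq is
# (b_q / b_c) * dq. All coefficient values below are illustrative assumptions.

def wtp(b_q, b_c, dq):
    return (b_q / b_c) * dq          # marginal WTP times the policy change

true_b_q, true_b_c = 0.80, 0.20      # policy-site "truth" in the simulation
est_b_q, est_b_c = 0.74, 0.21        # coefficients estimated at the study site

dq = 1.0                             # policy change of one quality unit
true_wtp = wtp(true_b_q, true_b_c, dq)
transferred = wtp(est_b_q, est_b_c, dq)
print(f"transfer error: {100 * abs(transferred - true_wtp) / true_wtp:.1f}%")
```

The thesis repeats comparisons of this kind over many simulated samples while varying the scale parameter, substitution possibilities, and the other cases of difference.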
Master of Science
Style APA, Harvard, Vancouver, ISO itp.
42

Lu, Ruijin. "Scalable Estimation and Testing for Complex, High-Dimensional Data". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/93223.

Pełny tekst źródła
Streszczenie:
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolution, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanism, one needs a fast and reliable analytical approach to extract useful information from the wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach is applied to two problems: estimating mutation rates of a generalized birth-death process based on fluctuation experimental data and estimating the parameters of targets based on foliage echoes. The second part focuses on functional testing. We consider using multiple testing in basis space via p-value-guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power as compared to point-wise testing in the data domain or basis-space testing without compression. The effectiveness of the proposed procedure is demonstrated through two applications: the detection of regions of spectral curves associated with pre-cancer using 1-dimensional fluorescence spectroscopy data and the detection of disease-related regions using 3-dimensional Alzheimer's Disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during their early development, where subjects are measured at misaligned time markers. In this analysis, we examine the asymmetric patterns and increasing/decreasing trends in the monkeys' brains across time.
Doctor of Philosophy
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions. We will show nice properties of our approach. The effectiveness of this testing approach will be demonstrated using two applications: the detection of regions of the spectrum that are related to pre-cancer using fluorescence spectroscopy data and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys’ brains during early development. Subjects are measured on misaligned time-markers. By using functional data estimation and testing approach, we are able to: (1) identify asymmetric regions between their right and left brains across time, and (2) identify spatial regions on the cortical surface that reflect increase or decrease in cortical measurements over time.
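As an illustration of the basis-space randomization test described above, here is a minimal sketch of a Westfall-Young max-type procedure on basis coefficients; the one-sample sign-flipping scheme is an illustrative choice under a symmetric null, not necessarily the dissertation's exact setup.

```python
# Minimal sketch of a Westfall-Young max-type randomization test applied to
# basis-space coefficients (e.g. wavelet coefficients of each curve).
import numpy as np

def westfall_young(coeffs, n_perm=1000, seed=0):
    """coeffs: (n_subjects, n_basis) array; returns adjusted p-values."""
    rng = np.random.default_rng(seed)
    n, p = coeffs.shape
    t_obs = np.abs(coeffs.mean(0)) / (coeffs.std(0, ddof=1) / np.sqrt(n))
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))  # sign-flip each subject
        x = coeffs * signs
        t = np.abs(x.mean(0)) / (x.std(0, ddof=1) / np.sqrt(n))
        max_null[b] = t.max()                         # max over coefficients
    # adjusted p-value: share of permutation maxima exceeding each statistic
    return (1 + (max_null[None, :] >= t_obs[:, None]).sum(1)) / (1 + n_perm)
```

Compressing first, i.e. testing only the coefficients that survive a p-value-guided screen, is what the dissertation shows improves power over point-wise testing in the data domain.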
Style APA, Harvard, Vancouver, ISO itp.
43

Darken, Patrick Fitzgerald. "Testing for Changes in Trend in Water Quality Data". Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/28936.

Pełny tekst źródła
Streszczenie:
Time series of water quality variables typically possess several characteristics which complicate analysis. Of interest to researchers is often the trend over time of the water quality variable. However, sometimes water quality variable levels appear to increase or decrease monotonically for a period of time and then switch direction after some intervention affects the factors which have a causal relationship with the level of the variable. Naturally, when analyzed for trend as a whole, these time series usually do not provide significant results. The problem of testing for a change in trend is addressed, and a method for performing this test based on a test of equivalence of two modified Kendall's Tau nonparametric correlation coefficients (neither necessarily equal to zero) is presented. The test is made valid for use with serially correlated data by use of a new bootstrap method titled the effective sample size bootstrap. Further issues involved in applying this test to water quality variables are also addressed.
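The comparison at the heart of the method can be sketched as follows; note that the naive i.i.d. bootstrap below is only a placeholder, since the dissertation's contribution is precisely the effective sample size bootstrap that keeps the test valid under serial correlation.

```python
# Sketch of the core test: compare Kendall's tau of (time, level) before and
# after a candidate changepoint. The naive i.i.d. bootstrap is a stand-in for
# the effective sample size bootstrap developed in the dissertation.
import numpy as np
from scipy.stats import kendalltau

def tau_change_test(t, y, change, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    pre, post = t < change, t >= change
    tau1, _ = kendalltau(t[pre], y[pre])
    tau2, _ = kendalltau(t[post], y[post])
    d_obs = tau1 - tau2
    boot = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, pre.sum(), pre.sum())      # resample pairs jointly
        j = rng.integers(0, post.sum(), post.sum())
        b1, _ = kendalltau(t[pre][i], y[pre][i])
        b2, _ = kendalltau(t[post][j], y[post][j])
        boot[b] = b1 - b2
    # two-sided p-value against H0: no difference in trend at `change`
    p = np.mean(np.abs(boot - d_obs) >= np.abs(d_obs))
    return d_obs, p
```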
Ph. D.
Style APA, Harvard, Vancouver, ISO itp.
44

Sechidis, Konstantinos. "Hypothesis testing and feature selection in semi-supervised data". Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/hypothesis-testing-and-feature-selection-in-semisupervised-data(97f5f950-f020-4ace-b6cd-49cb2f88c730).html.

Pełny tekst źródła
Streszczenie:
A characteristic of most real-world problems is that collecting unlabelled examples is easier and cheaper than collecting labelled ones. As a result, learning from partially labelled data is a crucial and demanding area of machine learning, and extending techniques from fully to partially supervised scenarios is a challenging problem. Our work focuses on two types of partially labelled data that can occur in binary problems: semi-supervised data, where the labelled set contains both positive and negative examples, and positive-unlabelled data, a more restricted version of partial supervision where the labelled set consists of only positive examples. In both settings, it is very important to explore a large number of features in order to derive useful and interpretable information about our classification task, and to select a subset of features that contains most of the useful information. In this thesis, we address three fundamental and tightly coupled questions concerning feature selection in partially labelled data; all three relate to the highly controversial issue of when additional unlabelled data improve performance in partially labelled learning environments and when they do not. The first question is: what are the properties of statistical hypothesis testing in such data? Second, given the widespread criticism of significance testing, what can we do in terms of effect size estimation, that is, quantifying how strong the dependency is between a feature X and the partially observed label Y? Finally, in the context of feature selection, how well can features be ranked by estimated measures when the population values are unknown? The answers to these questions provide a comprehensive picture of feature selection in partially labelled data. Interesting applications include the estimation of mutual information quantities, structure learning in Bayesian networks, and the investigation of how human-provided prior knowledge can overcome the restrictions of partial labelling. One direct contribution of our work is to enable valid statistical hypothesis testing and estimation in positive-unlabelled data. Focusing on a generalised likelihood ratio test and on estimating mutual information, we provide five key contributions. (1) We prove that assuming all unlabelled examples are negative cases is sufficient for independence testing, but not for power analysis activities. (2) We suggest a new methodology that compensates for this and enables power analysis, allowing sample size determination for observing an effect with a desired power by incorporating the user's prior knowledge over the prevalence of positive examples. (3) We show a new capability, supervision determination, which can determine a priori the number of labelled examples the user must collect before being able to observe a desired statistical effect. (4) We derive an estimator of the mutual information in positive-unlabelled data, and its asymptotic distribution. (5) Finally, we show how to rank features with and without prior knowledge. We also derive extensions of these results to semi-supervised data. In another extension, we investigate how we can use our results for Markov blanket discovery in partially labelled data. While there are many different algorithms for deriving the Markov blanket of fully supervised nodes, the partially labelled problem is far more challenging, and there is a lack of principled approaches in the literature.
Our work constitutes a generalization of the conditional tests of independence for partially labelled binary target variables, which can handle the two main partially labelled scenarios: positive-unlabelled and semi-supervised. The result is a significantly deeper understanding of how to control false negative errors in Markov blanket discovery procedures and how unlabelled data can help. Finally, we present how our results can be used for information-theoretic feature selection in partially labelled data. Our work naturally extends feature selection criteria suggested for fully supervised data to partially labelled scenarios. These criteria can capture both the relevancy and redundancy of the features and can be used for semi-supervised and positive-unlabelled data.
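Contribution (1) above is easy to demonstrate in code: a plug-in mutual information computed as if every unlabelled example were negative still yields a valid independence test via the G-statistic, even though the MI value itself is biased. A sketch for binary features (the variable names are assumptions):

```python
# Sketch of contribution (1): treating all unlabelled examples as negatives
# still gives a valid independence test via G = 2 * N * MI (MI in nats),
# although the MI estimate itself is biased in positive-unlabelled data.
import numpy as np
from scipy.stats import chi2

def mi_nats(x, s):
    """Plug-in mutual information (in nats) between two binary vectors."""
    mi = 0.0
    for xv in (0, 1):
        for sv in (0, 1):
            pxy = np.mean((x == xv) & (s == sv))
            px, ps = np.mean(x == xv), np.mean(s == sv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * ps))
    return mi

def pu_independence_test(x, s_labelled):
    """s_labelled: 1 for labelled positives, 0 for unlabelled examples,
    which are all treated as negatives."""
    x, s = np.asarray(x), np.asarray(s_labelled)
    g = 2 * len(x) * mi_nats(x, s)      # G-statistic
    return chi2.sf(g, df=1)             # p-value under H0: independence
```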
Style APA, Harvard, Vancouver, ISO itp.
45

Loudermilk, Margaret Susan. "Estimation and testing in dynamic, nonlinear panel data models". Diss., Connect to online resource - MSU authorized users, 2006.

Znajdź pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
46

May, Peter S. "Test data generation : two evolutionary approaches to mutation testing". Thesis, University of Kent, 2007. https://kar.kent.ac.uk/24023/.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Xu, Yanhui. "Large Scale Multiple Testing for High-Dimensional Nonparanormal Data". Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/553215.

Pełny tekst źródła
Streszczenie:
Statistics
Ph.D.
False discovery control in high-dimensional multiple testing is frequently encountered in scientific research. Under the multivariate normal distribution assumption, Fan et al. (2012) proposed an approximate expression for the false discovery proportion (FDP) in large-scale multiple testing when a common threshold is used, and provided a consistent estimate of the realized FDP when the covariance matrix is known; they further extended their study to the case where the covariance matrix is unknown (Fan et al., 2017). In reality, however, the multivariate normal assumption is often violated. In this paper, we relax the normal assumption by developing a testing procedure for the nonparanormal distribution, which extends the Gaussian family to a much larger population. The nonparanormal distribution is in fact a high-dimensional Gaussian copula with nonparametric marginals. Estimating the underlying monotone functions is key to good FDP approximation. Our procedure achieved minimal mean error in approximating the FDP compared with other methods in simulation studies. We give theoretical investigations regarding the performance of the estimated covariance matrix and false rejections. On a real dataset, our method was able to detect more differentiated genes while still maintaining the FDP below a small level. This thesis provides an important tool for approximating the FDP in a given experiment where the normal assumption may not hold. We also develop a dependence-adjusted procedure which provides more power than the fixed-threshold method. Our procedure also shows robustness for heavy-tailed data under a variety of distributions in numerical studies.
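The monotone-transform step at the heart of a nonparanormal model is the standard rank-based Gaussianization; a sketch follows, using the usual Winsorization constant, and is not necessarily the thesis's exact estimator.

```python
# Sketch of the rank-based step behind a nonparanormal model: each marginal
# is mapped through its empirical CDF and the normal quantile function, after
# which Gaussian-based multiple-testing machinery can be applied.
import numpy as np
from scipy.stats import norm, rankdata

def nonparanormal_transform(X, delta=None):
    """X: (n, p) data matrix; returns marginally Gaussianized data."""
    n, p = X.shape
    if delta is None:
        # common truncation level for the Winsorized estimator
        delta = 1.0 / (4 * n**0.25 * np.sqrt(np.pi * np.log(n)))
    U = rankdata(X, axis=0) / (n + 1)    # empirical CDF values in (0, 1)
    U = np.clip(U, delta, 1 - delta)     # Winsorize extreme ranks
    return norm.ppf(U)                   # monotone map to normal scores
```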
Temple University--Theses
Style APA, Harvard, Vancouver, ISO itp.
48

Pettit, Philip Anthony. "From data-informed to data-led?: School leadership within the context of external testing". Thesis, Australian Catholic University, 2009. https://acuresearchbank.acu.edu.au/download/5072fa500129086e08572c81ccacc91a9268e5a552303086ac01034c495ad040/2684723/65049_downloaded_stream_274.pdf.

Pełny tekst źródła
Streszczenie:
Schools now have access to an enormous range of data that can be used to improve student achievement. These data can include classroom-based assessment information together with individually tailored results from literacy and numeracy testing programs and from other sources. Also, there is an expectation at system and national policy levels that data on student achievement are collected for the purposes of program accountability and for improving student learning. However, there is evidence that schools are not effectively utilising such data for this purpose. This research explored how the experience of external literacy and numeracy testing and data utilisation affects attitudes to the tests, teaching practice and school leadership. This is a new area for research in Australia, given the relatively recent government emphasis on accountability, transparency and public reporting of student achievement. The research investigated the nature of and relationship between the themes of student achievement, the nature of educational change and school improvement and the consequent impact on the perceptions, by teachers and principals, of the efficacy of external testing within the wider context of educational accountability. With the research grounded in a Constructivist epistemology using a Pragmatist theoretical perspective, the emphasis was on understanding the nature of the research problem and on finding a way forward for planned action. Symbolic Interactionism was employed as the interpretivist lens through which to view how the actions of teachers and school principals reflect their understandings of, and their approaches to, the applicability of external testing programs to student learning, teaching practices and leadership within the school. The methodology for the research was based on case study using 'mixed methods' to collect and analyse data. Following the initial phase of meetings with school principals, three further research phases utilising survey, semi-structured interviews and focus group instruments employed a mix of qualitative and quantitative data collection methods designed, firstly, to generate themes for questionnaire design and implementation, and then to obtain rich information from one-on-one interviews of selected participants from a range of schools. The final phase of the research considered the perceptions of key system leaders about the results of the school-based research for their support of teachers and principals in the use of literacy and numeracy testing data to enhance student achievement. The research findings produced four themes for analysis to explain the factors affecting how literacy and numeracy testing data are being used and led in schools. These themes are: 'Attitudes towards External Testing', 'Leadership in Using Testing Data', 'Effective Data Analysis', and the 'Impact on Teaching Practices'. The study found that differences in perceptions of the value of data from external testing exist within and between schools. Accountability for testing results was viewed according to their perceived purpose, and the role of leadership in data analysis was seen as critical, but often missing. Further, differences were found in the way that leadership in data analysis and use is perceived within the school, particularly in relation to staff involvement in data analysis and whole-school planning using testing results.
Finally, linking external testing data with classroom-based assessment was seen to have value, but was not necessarily operationalised in any systematic way across the school system. The lack of explicit leadership within the school was found to inhibit the potential effectiveness of data analysis and use. The associated low levels of access and engagement of teachers in this process further affected the ability and willingness of teachers to incorporate the testing feedback information into classroom teaching practices. The findings from this study demonstrate the importance of the perceived value of such data in informing decisions about student outcomes, and the central role of evidence-based leadership at the school level in utilising such evidence of learning. The concept of 'Professional Purpose' was developed from the research findings as a possible framework to explain the relationship between the value one places on external testing and the link between data analysis and use in an operational sense. This involves the interplay among three elements related to the use of external testing: its moral purpose, practical purpose and public purpose. Within the context of increasing policy interest in measuring and reporting student achievement in Australia, the central role of data leadership at the government, system and school level has been placed in sharper focus. The findings from this research advocate the crucial role of leadership in the analysis, use and reporting of data from national tests of literacy and numeracy as an element within the wider context of evidence-based leadership. For schools and systems to be 'data-informed' is not sufficient; to be 'data-led' suggests the need for an understanding of the 'professional purpose' of such data and its relationship with other performance information to effect improvements in student achievement.
Style APA, Harvard, Vancouver, ISO itp.
49

Offutt, Andrew Jefferson VI. "Automatic test data generation". Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/9167.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
50

Zhou, Xinwei. "Reachability relations in selective regression testing". Thesis, London South Bank University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265281.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.