To view the other types of publications on this topic, follow the link: Data-based testing.

Dissertations on the topic "Data-based testing"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Data-based testing."

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference to the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication in PDF format and read its online annotation, provided the relevant details are available in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Tsai, Bor-Yuan. „A hybrid object-oriented class testing method : based on state-based and data-flow testing“. Thesis, University of Sunderland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311294.

2

Li, Dongmei. „Resampling-based Multiple Testing with Applications to Microarray Data Analysis“. The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1243993319.

3

Noor, Tanzeem Bin. „A Similarity-based Test Case Quality Metric using Historical Failure Data“. IEEE, 2015. http://hdl.handle.net/1993/31045.

Annotation:
A test case is a set of input data and expected output, designed to verify whether the system under test satisfies all requirements and works correctly. An effective test case reveals a fault when the actual output differs from the expected output (i.e., the test case fails). The effectiveness of test cases is estimated using quality metrics, such as code coverage, size, and historical fault detection. Prior studies have shown that previously failing test cases are highly likely to fail again in the next releases; therefore, they are ranked higher. However, in practice, a failing test case may not be exactly the same as a previously failed test case, but quite similar. In this thesis, I have defined a metric that estimates test case quality using its similarity to the previously failing test cases. Moreover, I have evaluated the effectiveness of the proposed test quality metric through a detailed empirical study.
February 2016
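As a rough illustration of the idea behind such a similarity-based quality metric (a minimal sketch with invented step names, not the metric defined in the thesis), a candidate test case can be scored by its maximum Jaccard similarity to the test cases that failed in earlier releases:

```python
# Minimal sketch (not the thesis's metric): score a test case by its maximum
# Jaccard similarity to historically failing test cases. Step names are invented.

def jaccard(a, b):
    """Jaccard similarity of two sets of test steps/tokens."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def similarity_quality(test_steps, historical_failures):
    """Higher score = closer to a test case that failed before."""
    return max((jaccard(test_steps, failed) for failed in historical_failures), default=0.0)

if __name__ == "__main__":
    failures = [{"login", "open_report", "export_pdf"}, {"login", "delete_user"}]
    candidate = {"login", "open_report", "export_csv"}
    print(similarity_quality(candidate, failures))  # 0.5 -> moderately similar to a past failure
```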
4

Nordholm, Johan. „Model-Based Testing: An Evaluation“. Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5188.

Annotation:

Testing is a critical activity in the software development process in order to obtain systems of high quality. Tieto typically develops complex systems, which are currently tested through a large number of manually designed test cases. Recent development within software testing has resulted in methods and tools that can automate the test case design, the generation of test code and the test result evaluation based on a model of the system under test. This testing approach is called model-based testing (MBT).

This thesis is a feasibility study of the model-based testing concept and has been performed at the Tieto office in Karlstad. The feasibility study included the use and evaluation of the model-based testing tool Qtronic, developed by Conformiq, which automatically designs test cases given a model of the system under test as input. The experiments for the feasibility study were based on the incremental development of a test object, which was the client protocol module of a simplified model for an ATM (Automated Teller Machine) client-server system. The experiments were evaluated both individually and by comparison with the previous experiment since they were based on incremental development. For each experiment the different tasks in the process of testing using Qtronic were analyzed to document the experience gained as well as to identify strengths and weaknesses.

The project has shown the promise inherent in using a model-based testing approach. The application of model-based testing and the project results indicate that the approach should be further evaluated since experience will be crucial if the approach is to be adopted within Tieto’s organization.

5

Lima, Lucas Albertins de. „Test case prioritization based on data reuse for Black-box environments“. Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1922.

Annotation:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Albertins de Lima, Lucas; Cezar Alves Sampaio, Augusto. Test case prioritization based on data reuse for Black-box environments. 2009. Dissertação (Mestrado). Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2009.
6

Tracey, Nigel James. „A search-based automated test-data generation framework for safety-critical software“. Thesis, University of York, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325796.

7

Woldeselassie, Tilahun. „A simple microcomputer-based nuclear medicine data processing system design and performance testing“. Thesis, University of Aberdeen, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316066.

Annotation:
This thesis investigates the feasibility of designing a simple nuclear medicine data processing system based on an inexpensive microcomputer system, which is affordable to small hospitals and to developing countries where resources are limited. Since the main need for a computer is to allow dynamic studies to be carried out, the relevant criteria for choosing the computer are its speed and memory capacity. The benchmark chosen for these criteria is renography, one of the commonest nuclear medicine procedures. The Acorn Archimedes model 310 microcomputer was found to meet these requirements, and a suitable camera-computer interface has been designed. Because of the need for ensuring that the gain and offset controls of the interface are set optimally before connecting to the camera, it was necessary to design a circuit which produces a test pattern on the screen for use during this operation. Having also developed and tested the data acquisition and image display software successfully, attention was concentrated on finding ways of characterising and measuring the performance of the computer interface and the display device, two important areas which have been largely neglected in the quality control of camera-computer systems. One of the characteristics of the interface is its deadtime. A procedure has been outlined for measuring this by means of a variable frequency pulse generator and also for interpreting the data correctly. A theoretical analysis of the way in which the interface deadtime affects the overall count rate performance of the system has also been provided. The spatial linearity, resolution and uniformity characteristics of the interface are measured using a special dual staircase generator circuit designed to simulate the camera position and energy signals. The test pattern set up on the screen consists of an orthogonal grid of points which can be used for a visual assessment of linearity, while analysis of the data in memory enables performance indices for resolution, linearity and uniformity to be computed. The thesis investigates the performance characteristics of display devices by means of radiometric measurements of screen luminance. These reveal that the relationship between screen luminance and display grey level value can be taken as quadratic. Characterisation of the display device in this way enables software techniques to be employed to ensure that screen luminance is a linear function of display grey level value; screen luminance measurements, coupled with film density measurements, are also used to optimise the settings of the display controls for using the film in the linear range of its optical densities. This in turn ensures that film density is a linear function of grey level value. An alternative approach for correcting for display nonlinearity is by means of an electronic circuit described in this thesis. Intensity coding schemes for improving the quality of grey scale images can be effective only if distortion due to the display device is corrected for. The thesis also draws attention to significant variations in film density which may have their origins in nonuniformities in the display screen, the recording film, or in the performance of the film processor. The work on display devices has been published in two papers.
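For orientation only (a standard textbook relation, not a result quoted from the thesis), the non-paralysable dead-time model commonly used when interpreting such count-rate measurements relates the observed count rate m to the true rate n and the dead time τ:

```latex
% Non-paralysable dead-time model (standard form, given here for context only)
m = \frac{n}{1 + n\tau}
\qquad\Longleftrightarrow\qquad
n = \frac{m}{1 - m\tau}
```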
8

Mairhofer, Stefan. „Search-based software testing and complex test data generation in a dynamic programming language“. Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4340.

Annotation:
Manually creating test cases is time-consuming and error-prone. Search-based software testing (SBST) can help automate this process and thus reduce time and effort and increase quality by automatically generating relevant test cases. Previous research has mainly focused on static programming languages with simple test data inputs such as numbers. In this work we present an approach to search-based software testing for dynamic programming languages that can generate test scenarios as well as both simple and more complex test data. This approach is implemented as a tool in and for the dynamic programming language Ruby. It uses an evolutionary algorithm to search for tests that give structural code coverage. We have evaluated the system in an experiment on a number of code examples that differ in complexity and in the type of input data they require. We compare our system with the results obtained by a random test case generator. The experiment shows that the presented approach can compete with random testing and, in many situations, finds tests and data that give higher structural code coverage more quickly.
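A much-simplified sketch of this kind of coverage-driven search is shown below; the thesis targets Ruby and a full evolutionary algorithm, whereas this illustration uses Python, a hand-instrumented toy function standing in for the system under test, and a (1+1)-style mutation search:

```python
# Simplified illustration of search-based test suite generation: mutate a small
# suite of integer inputs and keep changes that do not reduce branch coverage
# of a hand-instrumented toy function (a stand-in for the real system under test).
import random

def instrumented_triangle(a, b, c, covered):
    """Toy system under test: classify a triangle and record which branch fired."""
    if a <= 0 or b <= 0 or c <= 0:
        covered.add("non_positive"); return "invalid"
    if a + b <= c or b + c <= a or a + c <= b:
        covered.add("violates_inequality"); return "invalid"
    if a == b == c:
        covered.add("equilateral"); return "equilateral"
    if a == b or b == c or a == c:
        covered.add("isosceles"); return "isosceles"
    covered.add("scalene"); return "scalene"

def coverage(suite):
    covered = set()
    for a, b, c in suite:
        instrumented_triangle(a, b, c, covered)
    return covered

def search(generations=500, suite_size=6, seed=1):
    rng = random.Random(seed)
    suite = [tuple(rng.randint(-5, 20) for _ in range(3)) for _ in range(suite_size)]
    for _ in range(generations):
        candidate = list(suite)
        i = rng.randrange(suite_size)
        candidate[i] = tuple(x + rng.randint(-3, 3) for x in candidate[i])
        if len(coverage(candidate)) >= len(coverage(suite)):  # keep non-worsening mutations
            suite = candidate
    return suite, coverage(suite)

if __name__ == "__main__":
    suite, covered = search()
    print(covered)   # branches reached by the evolved suite
    print(suite)     # the generated test inputs
```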
9

Ndashimye, Maurice. „Accounting for proof test data in Reliability Based Design Optimization“. Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/97108.

Annotation:
Thesis (MSc)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: Recent studies have shown that considering proof test data in a Reliability Based Design Optimization (RBDO) environment can result in design improvement. Proof testing involves the physical testing of each and every component before it enters into service. Considering the proof test data as part of the RBDO process allows for improvement of the original design, such as weight savings, while preserving high reliability levels. Composite Over-Wrapped Pressure Vessels (COPV) is used as an example application of achieving weight savings while maintaining high reliability levels. COPVs are light structures used to store pressurized fluids in space shuttles, the international space station and other applications where they are maintained at high pressure for extended periods of time. Given that each and every COPV used in spacecraft is proof tested before entering service and any weight savings on a spacecraft results in significant cost savings, this thesis put forward an application of RBDO that accounts for proof test data in the design of a COPV. The method developed in this thesis shows that, while maintaining high levels of reliability, significant weight savings can be achieved by including proof test data in the design process. Also, the method enables a designer to have control over the magnitude of the proof test, making it possible to also design the proof test itself depending on the desired level of reliability for passing the proof test. The implementation of the method is discussed in detail. The evaluation of the reliability was based on the First Order Reliability Method (FORM) supported by Monte Carlo Simulation. Also, the method is implemented in a versatile way that allows the use of analytical as well as numerical (in the form of finite element) models. Results show that additional weight savings can be achieved by the inclusion of proof test data in the design process.
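A minimal Monte Carlo sketch of the underlying idea (all distributions and numbers below are invented for illustration and are not taken from the thesis): passing a proof load truncates the low tail of the strength distribution, which reduces the estimated in-service probability of failure:

```python
# Monte Carlo illustration of accounting for a proof test: only components whose
# strength exceeds the proof load enter service, so the conditional failure
# probability drops. Distributions and values are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
strength = rng.normal(100.0, 10.0, n)      # hypothetical component strength (MPa)
service_load = rng.normal(60.0, 12.0, n)   # hypothetical in-service load (MPa)
proof_load = 85.0                          # hypothetical proof test level (MPa)

pf_no_proof = np.mean(strength < service_load)
passed = strength > proof_load             # only proof-test survivors enter service
pf_with_proof = np.mean(strength[passed] < service_load[passed])

print(f"P(failure) without proof test:      {pf_no_proof:.2e}")
print(f"P(failure) given proof test passed: {pf_with_proof:.2e}")
```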
10

Bayley, Gwain. „PC-based bit error rate analyser for a 2 Mbps data link“. Thesis, University of Cape Town, 1988. http://hdl.handle.net/11427/23153.

11

Moyers, Kevin Keith. „A microcomputer-based data acquisition system for diagnostic monitoring and control of high-speed electric motors“. Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/41576.

Annotation:
A microcomputer-based data acquisition and control system was designed for the diagnostic monitoring and control of high-speed electric motors. The system was utilized in high-speed bearing life-testing, using an electric motor as a test vehicle.

Bearing vibration and outer race temperature were continuously monitored for each ball bearing in the motor. In addition, the stator winding and motor casing temperature were monitored.

The monitoring system was successful in detecting an unbalance in the rotor caused by the loss of a small piece of balancing putty. The motor was shut down before any further damage occurred. In a separate test, excessive clearance between a bearing outer race and the motor caused high vibration readings. The motor was monitored until the condition began to deteriorate and the bearing outer race began to spin significantly. Again, the monitoring system powered down the motor before any significant damage occurred.

The speed of the motor tested is controlled by a PWM (pulse width modulation) technique. The resulting voltage and current waveforms are asymmetrical and contain high frequency components. Special circuitry was designed and constructed to interface sensors for measuring the voltage and current inputs to a spectrum analyzer. Using frequency and order analysis techniques, the real and reactive power inputs to the three-phase motor were measured.


Master of Science
12

Jury, Owen T. „The Design of Telemetry Acquisition and Analysis Vans for Testing Construction and Mining Equipment“. International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/607566.

Annotation:
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Caterpillar Inc. has over 25 years of experience using instrument vans equipped with telemetry to support product testing. These vans provide the capability to instrument the product, to acquire telemetered data, and to analyze the data. They are being used in tests performed on construction and mining equipment at Caterpillar’s proving grounds and at customer job sites throughout North America. This paper presents a design summary of the newest-generation vans. It starts with an overview of the major subsystems and concentrates on the Caterpillar-developed software that tightly integrates the various hardware and software components. This software greatly enhances the productivity of the system and makes it possible for the van to perform a large variety and quantity of tests required by our internal customers.
13

Meyer, Mark J. „Understanding the challenges in HEV 5-cycle fuel economy calculations based on dynamometer test data“. Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/35648.

Annotation:
EPA testing methods for calculation of fuel economy label ratings, which were revised beginning in 2008, use equations that weight the contributions of fuel consumption results from multiple dynamometer tests to synthesize city and highway estimates that reflect average U.S. driving patterns. The equations incorporate effects with varying weightings into the final fuel consumption, which are explained in this thesis, including illustrations from testing. Some of the test results used in the computation come from individual phases within the certification driving cycles. This methodology causes additional complexities for hybrid electric vehicles, because although they are required to have charge-balanced batteries over the course of a full drive cycle, they may have net charge or discharge within the individual phases. The fundamentals of studying battery charge-balance are discussed, followed by a detailed investigation of the implications of per-phase charge correction that was undertaken through testing of a 2010 Toyota Prius at Argonne National Laboratory's vehicle dynamometer test facility. Using the charge-correction curves obtained through testing shows that phase fuel economy can be significantly skewed by natural charge imbalance, although the end effect on the fuel economy label is not as large. Finally, the characteristics of the current 5-cycle fuel economy testing method are compared to previous methods through a vehicle simulation study which shows that the magnitude of impact from mass and aerodynamic parameters varies between labeling methods and vehicle types.
Master of Science
14

Moore, Albert W. „A computer-based training course for assessing material safety data sheet comprehension“. Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06232009-063332/.

15

Nandi, Shinjini. „Multiple Testing Procedures for One- and Two-Way Classified Hypotheses“. Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/580415.

Annotation:
Statistics
Ph.D.
Multiple testing literature contains ample research on controlling false discoveries for hypotheses classified according to one criterion, which we refer to as 'one-way classified hypotheses'. However, one often encounters the scenario of 'two-way classified hypotheses', where hypotheses can be partitioned into two sets of groups via two different criteria. Associated multiple testing procedures that incorporate such structural information are potentially more effective than their one-way classified or non-classified counterparts. To the best of our knowledge, very little research has been pursued in this direction. This dissertation proposes two types of multiple testing procedures for two-way classified hypotheses. In the first part, we propose a general methodology for controlling the false discovery rate (FDR) using the Benjamini-Hochberg (BH) procedure based on weighted p-values. The weights can be appropriately chosen to reflect the one- or two-way classified structure of hypotheses, producing novel multiple testing procedures for two-way classified hypotheses. Newer results for one-way classified hypotheses have been obtained in this process. Our proposed procedures control the false discovery rate (FDR) non-asymptotically in their oracle forms under positive regression dependence on a subset of null p-values (PRDS) and in their data-adaptive forms for independent p-values. Simulation studies demonstrate that our proposed procedures can be considerably more powerful than some contemporary methods in many instances and that our data-adaptive procedures can non-asymptotically control the FDR under certain dependent scenarios. The proposed two-way adaptive procedure is applied to a data set from a microbial abundance study, for which it makes more discoveries than an existing method. In the second part, we propose a local false discovery rate (Lfdr) based multiple testing procedure for two-way classified hypotheses. The procedure has been developed in its oracle form under a model-based framework that isolates the effects due to two-way grouping from the significance of an individual hypothesis. Simulation studies show that our proposed procedure successfully controls the average proportion of false discoveries, and is more powerful than existing methods.
Temple University--Theses
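As a hedged illustration of the generic weighted-p-value idea that this line of work builds on (the standard weighted Benjamini-Hochberg step, not the specific one- and two-way procedures developed in the dissertation), weights normalised to mean one rescale the p-values before the usual BH cut-off is applied; the example weights below are invented:

```python
# Generic weighted BH sketch: group-informed weights (mean 1) rescale p-values,
# then the ordinary Benjamini-Hochberg step is applied to p_i / w_i.
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Return a boolean rejection mask for the weighted BH procedure."""
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.mean()                        # normalise weights to mean 1
    q = p / w                               # weighted p-values
    m = len(q)
    order = np.argsort(q)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = q[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Example: hypotheses in a 'promising' group receive twice the weight.
pvals = [0.001, 0.009, 0.04, 0.06, 0.20, 0.70]
weights = [2, 2, 2, 1, 1, 1]
print(weighted_bh(pvals, weights))          # [ True  True False False False False]
```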
16

Stewart, Patrick. „Statistical Inferences on Inflated Data Based on Modified Empirical Likelihood“. Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1590455262157706.

17

Kalaji, Abdul Salam. „Search-based software engineering : a search-based approach for testing from extended finite state machine (EFSM) models“. Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4575.

Annotation:
The extended finite state machine (EFSM) is a powerful modelling approach that has been applied to represent a wide range of systems. Despite its popularity, testing from an EFSM is a substantial problem for two main reasons: path feasibility and path test case generation. The path feasibility problem concerns generating transition paths through an EFSM that are feasible and satisfy a given test criterion. In an EFSM, guards and assignments in a path's transitions may cause some selected paths to be infeasible. The problem of path test case generation is to find a sequence of inputs that can exercise the transitions in a given feasible path. However, the transitions' guards and assignments in a given path can impose difficulties when producing such data, narrowing the range of acceptable inputs down to a possibly tiny range. While search-based approaches have proven efficient in automating aspects of testing, they have received little attention when testing from EFSMs. This thesis proposes an integrated search-based approach to automatically test from an EFSM. The proposed approach generates paths through an EFSM that are potentially feasible and satisfy a test criterion. Then, it generates test cases that can exercise the generated feasible paths. The approach is evaluated by being used to test from five EFSM case studies. The achieved experimental results demonstrate the value of the proposed approach.
18

Kurin, Erik, und Adam Melin. „Data-driven test automation : augmenting GUI testing in a web application“. Thesis, Linköpings universitet, Programvara och system, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-96380.

Annotation:
For many companies today, it is highly valuable to collect and analyse data in order to support decision making and functions of various sorts. However, this kind of data-driven approach is seldom applied to software testing, and there is often a lack of verification that the testing performed is relevant to how the system under test is used. Therefore, the aim of this thesis is to investigate the possibility of introducing a data-driven approach to test automation by extracting user behaviour data and curating it to form input for testing. A prestudy was initially conducted in order to collect and assess different data sources for augmenting the testing. After suitable data sources were identified, the required data, including data about user activity in the system, was extracted. This data was then processed, and three prototypes were built on top of it. The first prototype augments model-based testing by automatically creating models of the most common user behaviour using data mining algorithms. The second prototype tests the most frequently occurring client actions. The last prototype visualises which features of the system are not covered by automated regression testing. The data extracted and analysed in this thesis facilitates the understanding of the behaviour of the users in the system under test. The three prototypes implemented with this data as their foundation can be used to assist other testing methods by visualising test coverage and executing regression tests.
19

Hathaway, Drew Aaron. „The use of immersive technologies to improve consumer testing: the impact of multiple immersion levels on data quality and panelist engagement for the evaluation of cookies under a preparation-based scenario“. The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1448994162.

20

Kindstedt, Jonas. „Antibiotic resistance among European strains of Pseudomonas aeruginosa : A study based on resistance data, published articles, and susceptibility testing methods“. Thesis, Umeå universitet, Kemiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-101862.

21

Salles, Lucio Salles de. „Short continuously reinforced concrete pavement design recommendations based on non-destructive ultrasonic data and stress simulation“. Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3138/tde-20102017-082704/.

Annotation:
Four sections of continuously reinforced concrete pavement (CRCP) were constructed at the University of São Paulo campus in order to introduce this kind of pavement structure to Brazil's technical transportation community. The sections were designed as 50 m long concrete slabs, short in comparison to traditional CRCP, in order to simulate bus stops and terminals - locations of critical interest for public infrastructure. The thesis presented herein concludes this research project initiated in 2010. As the initial goal of this study was the development of coherent, reliable and intuitive design recommendations for the use of CRCP technology in Brazil, a profound understanding of its structural and performance peculiarities was needed. For that, the cracking process of the experimental CRCP sections was recorded over a span of seven years. Due to the sections' short length and lack of anchorage, the experimental "short" CRCP presented a cracking behavior quite different from traditional CRCP. There were far fewer visible cracks than expected. To address this issue, a novel technology in ultrasonic non-destructive testing of concrete structures was applied. Through ultrasonic signal interpretation it was possible to discover several incipient non-visible cracks within the slabs - many of these became apparent on the slab surface in later crack surveys - and to characterize visible and non-visible cracks regarding crack depth. The updated crack map with non-visible cracks showed similarities with traditional CRCP. Additionally, the ultrasonic data analysis provided important information on thickness variation, reinforcement location and concrete condition that was applied in theoretical simulations (finite element software) of the short CRCP. Simulations were attempted considering different slab geometries, firstly with transverse cracks as joints with high load transfer efficiency (LTE) and secondly with a continuous slab without cracks or joints. The latter simulation was more accurate, reaching a shift factor between field and simulated stresses in the order of 0.7 to 1.0. Deflection data and LTE analysis from cracks and panels in between cracks further attested the continuous behavior of the slabs, which contradicts current CRCP design models and performance predictors. Furthermore, critical traffic and environmental loading conditions concerning Brazil's climate and bus traffic characteristics were investigated and related using a selected fatigue model, resulting in design recommendations in a chart format for the short CRCP aimed at long-term projects with over 20 years of operation. The design chart was successfully applied to investigate three failures presented by the experimental short CRCP due to thickness deficiencies pointed out by the ultrasonic testing.
22

Naňo, Andrej. „Automatické generování testovacích dat informačních systémů“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445520.

Annotation:
ISAGEN is a tool for the automatic generation of structurally complex test inputs that imitate real communication in the context of modern information systems. Complex, typically tree-structured data currently represents the standard means of transmitting information between nodes in distributed information systems. The automatic generator ISAGEN is founded on the methodology of data-driven testing and uses concrete data from the production environment as the primary characteristic and specification that guides the generation of new similar data for test cases satisfying given combinatorial adequacy criteria. The main contribution of this thesis is a comprehensive proposal of automated data generation techniques together with an implementation, which demonstrates their usage. The created solution enables testers to create more relevant testing data, representing production-like communication in information systems.
23

Persson, Jon. „Deterministisk Komprimering/Dekomprimering av Testvektorer med Hjälp av en Inbyggd Processor och Faxkodning“. Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2855.

Annotation:

Modern semiconductor design methods make it possible to design increasingly complex systems-on-a-chip (SOCs). Testing such SOCs becomes highly expensive due to the rapidly increasing test data volumes, with longer test times as a result. Several approaches exist that compress the test stimuli and add hardware for decompression. This master’s thesis presents a test data compression method based on a modified facsimile code. An embedded processor on the SOC is used to decompress and apply the data to the cores of the SOC. The use of already existing hardware reduces the need for additional hardware.

Test data may be rearranged in certain ways, which will affect the compression ratio. Several modifications are discussed and tested. To be realistic, a decompression algorithm has to be able to run on a system with limited resources. With an assembler implementation it is shown that the proposed method can be effectively realized in such environments. Experimental results, where the proposed method is applied to benchmark circuits, show that the method compares well with similar methods.

A method of including the response vector is also presented. This approach makes it possible to abort a test as soon as an error is discovered, while still compressing the data used. To correctly compare the test response with the expected one, the data needs to include don’t-care bits. The technique uses a mask vector to mark the don’t-care bits. The test vector, response vector and mask vector are merged in four different ways to find the optimal arrangement.

24

Hensen, Bernadette. „Increasing men's uptake of HIV-testing in sub-Saharan Africa : a systematic review of interventions and analyses of population-based data from rural Zambia“. Thesis, London School of Hygiene and Tropical Medicine (University of London), 2016. http://researchonline.lshtm.ac.uk/2531234/.

Annotation:
Men's uptake of HIV-testing and counselling services across sub-Saharan Africa is inadequate relative to universal access targets. A better understanding of the effectiveness of available interventions to increase men's HIV-testing and of men's HIV-testing behaviours is required to inform the development of strategies to increase men's levels and frequency of HIV-testing. My thesis aims to fill this gap. To achieve this, I combine a systematic review of randomised trials of interventions to increase men's uptake of HIV-testing in sub-Saharan Africa with analyses of two population-based surveys from Zambia, through which I investigate the levels of and factors associated with HIV-testing behaviours. I also conduct an integrated analysis to explore whether the scale-up of voluntary medical male circumcision (VMMC) services between 2009 and 2013 contributed to increasing men's population levels of HIV-testing. In the systematic review I find that strategies to increase men's HIV-testing are available. Health facility-based strategies, including reaching men through their pregnant partners, reach a high proportion of men attending facilities; however, they have a low reach overall. Community-based mobile HIV-testing is effective at reaching a high proportion of men, reaching 44% of men in Tanzania and 53% in Zimbabwe compared to 9% and 5% in clinic-based communities, respectively. In the population-based surveys, HIV-testing increased with time: 52% of men ever-tested in 2011/12 compared to 61% in 2013. Less than one-third of men reported a recent test in both surveys, and 35% reported multiple lifetime HIV-tests. Having a spouse who ever-tested and markers of socioeconomic position were associated with HIV-testing outcomes, and a history of TB with ever-testing. The scale-up of VMMC provided men who opt for circumcision with access to HIV-testing services: 86% of circumcised men ever-tested for HIV compared to 59% of uncircumcised men. However, there was little evidence that VMMC services contributed to increasing HIV-testing among men in this rural Zambian setting. Existing strategies to increase men's uptake of HIV-testing are effective. Over half the men in two population-based surveys reported ever-testing for HIV in rural Zambia. Nonetheless, some 40% of men never tested. Men's frequency of HIV-testing was low relative to recommendations that individuals with continued risk of HIV-infection retest annually for HIV. Innovative strategies are required to provide never-testers with access to available services and to increase men's frequency of HIV-testing.
25

Doungsa-ard, Chartchai. „Generation of Software Test Data from the Design Specification Using Heuristic Techniques. Exploring the UML State Machine Diagrams and GA Based Heuristic Techniques in the Automated Generation of Software Test Data and Test Code“. Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5380.

Annotation:
Software testing is a tedious and very expensive undertaking. Automatic test data generation is, therefore, proposed in this research to help testers reduce their work as well as ascertain software quality. The concept of test-driven development (TDD) has become increasingly popular during the past several years. According to TDD, test data should be prepared before the beginning of code implementation. Therefore, this research asserts that the test data should be generated from the software design documents which are normally created prior to software code implementation. Among such design documents, the UML state machine diagrams are selected as a platform for the proposed automated test data generation mechanism. Such diagrams are selected because they show the behaviours of a single object in the system. A genetic algorithm (GA) based approach has been developed and applied in the process of searching for the right amount of quality test data. Finally, the generated test data have been used together with UML class diagrams for JUnit test code generation. The GA-based test data generation methods have been enhanced to take care of parallel path and loop problems of the UML state machines. In addition, the proposed GA-based approach also targets diagrams with parameterised triggers. As a result, the proposed framework generates test data from the basic state machine diagram and the basic class diagram without any additional nonstandard information, while most other approaches require additional information or the generation of test data from other formal languages. The transition coverage values for the approach introduced here are also high; therefore, the generated test data can cover most of the behaviour of the system.
26

Souza, Francisco Carlos Monteiro. „Uma abordagem para geração de dados de teste para o teste de mutação utilizando técnicas baseadas em busca“. Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-28092017-162339/.

Annotation:
Mutation testing is a powerful test criterion to detect faults and measure the effectiveness of a test data set. However, it is a computationally expensive testing technique. The high cost comes mainly from the effort to generate adequate test data to kill the mutants and from the existence of equivalent mutants. In this thesis, an approach called Reach, Infect and Propagation to Mutation Testing (RIP-MuT) is presented to generate test data and to suggest equivalent mutants. The approach is composed of two modules: (i) automated test data generation using hill climbing and a fitness scheme according to the Reach, Infect, and Propagate (RIP) conditions; and (ii) a method to suggest equivalent mutants based on the analysis of the RIP conditions during the process of test data generation. Experiments were conducted to evaluate the effectiveness of the RIP-MuT approach, together with a comparative study against a genetic algorithm and random testing. The RIP-MuT approach achieved a mean mutation score 18.25% higher than the GA and 35.93% higher than random testing. The proposed method for detecting equivalent mutants proved feasible for reducing the cost of this activity, since it obtained a precision of 75.05% in suggesting equivalent mutants. Therefore, the results indicate that the approach produces effective test data able to strongly kill the majority of mutants in C programs, and that it can also assist in correctly suggesting equivalent mutants.
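A deliberately tiny sketch of the reach/infect intuition behind such search-based test data generation (an invented toy example, not the RIP-MuT implementation): a relational-operator mutant is killed by hill climbing on a distance-based fitness that guides the input towards the point where the original program and the mutant diverge:

```python
# Toy example: the mutant changes ">=" to ">", so only x == 10 makes the outputs
# differ. Hill climbing minimises a distance-based fitness until the mutant is killed.
def original(x):
    return "high" if x >= 10 else "low"

def mutant(x):
    return "high" if x > 10 else "low"    # relational-operator mutation

def fitness(x):
    """0 when the mutant is killed; otherwise a distance guiding the search."""
    return 0 if original(x) != mutant(x) else abs(x - 10)

def hill_climb(start=997, max_steps=1_000):
    x = start
    for _ in range(max_steps):
        if fitness(x) == 0:
            return x                       # killing input found
        best = min((x - 10, x - 1, x + 1, x + 10), key=fitness)
        if fitness(best) >= fitness(x):
            return None                    # stuck in a local optimum
        x = best
    return None

print("input that kills the mutant:", hill_climb())   # expected: 10
```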
27

Ramasamy, Kandasamy Manimozhian. „Efficient state space exploration for parallel test generation“. Thesis, [Austin, Tex. : University of Texas, 2009. http://hdl.handle.net/2152/ETD-UT-2009-05-131.

Annotation:
Report (M.S. in Engineering)--University of Texas at Austin, 2009.
Title from PDF title page (University of Texas Digital Repository, viewed on August 10, 2009). Includes bibliographical references.
28

Lindahl, John, und Douglas Persson. „Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method“. Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43498.

Annotation:
Large and complex software that is frequently changed leads to testing challenges. It is well established that the later a fault is detected in software development, the more it costs to fix. This thesis aims to research and develop a method of generating relevant and non-redundant test cases for a regression test suite, to catch bugs as early in the development process as possible. The research was executed at Axis Communications AB with their products and systems in mind. The approach utilizes user data to dynamically generate a Markov chain model and, with a Markov chain Monte Carlo method, strengthens that model. The model generates test case proposals, detects test gaps, and identifies redundant test cases based on the user data and data from a test suite. The sampling in the Markov chain Monte Carlo method can be modified to bias the model for test coverage or relevancy. The model is generated generically and can therefore be implemented in other API-driven systems. The model was designed with scalability in mind and further implementations can be made to increase the complexity and further specialize the model for individual needs.
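A small sketch of the core idea (the session data and action names below are invented; the thesis additionally strengthens the model with a Markov chain Monte Carlo step and can bias the sampling towards coverage or relevancy): estimate a Markov chain over user actions from recorded sessions, then sample action sequences from it as test case proposals:

```python
# Estimate transition probabilities P(next | current) from recorded user
# sessions, then sample action sequences as candidate test cases.
import random
from collections import Counter, defaultdict

sessions = [
    ["login", "list_devices", "open_stream", "logout"],
    ["login", "list_devices", "change_settings", "logout"],
    ["login", "open_stream", "open_stream", "logout"],
]

counts = defaultdict(Counter)
for s in sessions:
    for current, nxt in zip(["START"] + s, s + ["END"]):
        counts[current][nxt] += 1

def sample_test_case(rng, max_len=20):
    state, case = "START", []
    while state != "END" and len(case) < max_len:
        nxt_states, freqs = zip(*counts[state].items())
        state = rng.choices(nxt_states, weights=freqs)[0]
        if state != "END":
            case.append(state)
    return case

rng = random.Random(0)
for _ in range(3):
    print(sample_test_case(rng))   # e.g. ['login', 'list_devices', 'open_stream', 'logout']
```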
29

Neves, Vânia de Oliveira. „Automatização do teste estrutural de software de veículos autônomos para apoio ao teste de campo“. Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-15092015-090805/.

Annotation:
An intelligent autonomous vehicle (or just autonomous vehicle - AV) is a type of embedded system that integrates physical (hardware) and computational (software) components. Its main feature is the ability to move and operate partially or fully autonomously. Autonomy grows with the ability to perceive and move within the environment, robustness, and the ability to solve and perform tasks dealing with different situations (intelligence). Autonomous vehicles represent an important research topic that has a direct impact on society. However, as this field progresses, some secondary problems arise, such as how to know whether these systems have been sufficiently tested. One of the testing phases of an AV is field testing, where the vehicle is taken to a loosely controlled environment and must freely execute the mission for which it was programmed. It is generally used to ensure that autonomous vehicles show the intended behavior, but it usually does not take the code structure into consideration. The vehicle (hardware and software) could pass the field testing even though important parts of the code have never been executed. During field testing, the input data are collected in logs that can later be analyzed to evaluate the test results and to perform other types of offline tests. This thesis presents a set of proposals to support the analysis of field testing from the point of view of structural testing. The approach is composed of a class model in the context of field testing, a tool that implements this model, and a genetic algorithm to generate test data. It also presents heuristics to reduce the data set contained in a log without substantially reducing the coverage obtained, as well as the combination and mutation strategies used in the algorithm. Case studies conducted to evaluate the heuristics and strategies are also presented and discussed.
30

Enderlin, Ivan. „Génération automatique de tests unitaires avec Praspel, un langage de spécification pour PHP“. Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2067/document.

Annotation:
The work presented in this thesis concerns the validation of PHP programs through a new specification language, along with its tools. It follows three axes: a specification language, automatic test data generation, and automatic unit test generation. The first contribution is Praspel, a new specification language for PHP based on Design by Contract. Praspel specifies data with realistic domains, which are new structures allowing data to be validated and generated. Based on a contract, we are able to perform contract-based testing, i.e. using contracts to automatically generate unit tests. The second contribution is about test data generation. For booleans, integers and floating-point numbers, uniform random generation is used. For arrays, a dedicated constraint solver has been implemented and used. For strings, a grammar description language along with an LL(⋆) compiler compiler and several data generation algorithms are used. Finally, object generation is supported. The third contribution defines contract coverage criteria, which provide test objectives. All these contributions have been implemented and evaluated in tools distributed to the PHP community.
31

Olsson, Jakob. „Measuring the Technical and Process Benefits of Test Automation based on Machine Learning in an Embedded Device“. Thesis, KTH, Programvaruteknik och datorsystem, SCS, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231785.

Annotation:
Learning-based testing (LBT) is a testing paradigm that combines model-based testing with machine learning algorithms to automate the modeling of the SUT, test case generation, test case execution and verdict construction. A tool that implements LBT, called LBTest, has been developed at the CSC school at KTH. LBTest combines machine learning algorithms with off-the-shelf equivalence and model checkers, and models user requirements in propositional linear temporal logic. In this study, it is investigated whether LBT is suitable for testing a micro bus architecture within an embedded telecommunication device. Furthermore, ideas for further automating the testing process by designing a data model to automate user requirement generation are explored.
32

Vitale, Raffaele. „Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation“. Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90442.

Der volle Inhalt der Quelle
Annotation:
The present Ph.D. thesis, primarily conceived to support and reinforce the relation between academic and industrial worlds, was developed in collaboration with Shell Global Solutions (Amsterdam, The Netherlands) in the endeavour of applying and possibly extending well-established latent variable-based approaches (i.e. Principal Component Analysis - PCA - Partial Least Squares regression - PLS - or Partial Least Squares Discriminant Analysis - PLSDA) for complex problem solving not only in the fields of manufacturing troubleshooting and optimisation, but also in the wider environment of multivariate data analysis. To this end, novel efficient algorithmic solutions are proposed throughout all chapters to address very disparate tasks, from calibration transfer in spectroscopy to real-time modelling of streaming flows of data. The manuscript is divided into the following six parts, focused on various topics of interest: Part I - Preface, where an overview of this research work, its main aims and justification is given together with a brief introduction on PCA, PLS and PLSDA; Part II - On kernel-based extensions of PCA, PLS and PLSDA, where the potential of kernel techniques, possibly coupled to specific variants of the recently rediscovered pseudo-sample projection, formulated by the English statistician John C. Gower, is explored and their performance compared to that of more classical methodologies in four different application scenarios: segmentation of Red-Green-Blue (RGB) images, discrimination of on-/off-specification batch runs, monitoring of batch processes and analysis of mixture designs of experiments; Part III - On the selection of the number of factors in PCA by permutation testing, where an extensive guideline on how to accomplish the selection of PCA components by permutation testing is provided through the comprehensive illustration of an original algorithmic procedure implemented for such a purpose; Part IV - On modelling common and distinctive sources of variability in multi-set data analysis, where several practical aspects of two-block common and distinctive component analysis (carried out by methods like Simultaneous Component Analysis - SCA - DIStinctive and COmmon Simultaneous Component Analysis - DISCO-SCA - Adapted Generalised Singular Value Decomposition - Adapted GSVD - ECO-POWER, Canonical Correlation Analysis - CCA - and 2-block Orthogonal Projections to Latent Structures - O2PLS) are discussed, a new computational strategy for determining the number of common factors underlying two data matrices sharing the same row- or column-dimension is described, and two innovative approaches for calibration transfer between near-infrared spectrometers are presented; Part V - On the on-the-fly processing and modelling of continuous high-dimensional data streams, where a novel software system for rational handling of multi-channel measurements recorded in real time, the On-The-Fly Processing (OTFP) tool, is designed; Part VI - Epilogue, where final conclusions are drawn, future perspectives are delineated, and annexes are included.
Vitale, R. (2017). Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90442
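The permutation-testing idea for choosing the number of PCA components (Part III above) can be sketched as follows: retain a component only while its eigenvalue exceeds what independently permuted columns would produce. This is a simplified variant for illustration, not the specific procedure developed in the thesis; the toy data set is invented.

    import numpy as np

    def pca_eigenvalues(X):
        """Eigenvalues of the covariance of column-centred data, via SVD."""
        Xc = X - X.mean(axis=0)
        s = np.linalg.svd(Xc, compute_uv=False)
        return s**2 / (X.shape[0] - 1)

    def n_components_by_permutation(X, n_perm=200, alpha=0.05, seed=0):
        """Keep leading components whose eigenvalue exceeds the (1 - alpha)
        quantile of eigenvalues obtained after independently permuting every
        column (which destroys the correlation structure)."""
        rng = np.random.default_rng(seed)
        real = pca_eigenvalues(X)
        null = np.empty((n_perm, len(real)))
        for p in range(n_perm):
            Xp = np.column_stack([rng.permutation(col) for col in X.T])
            null[p] = pca_eigenvalues(Xp)
        threshold = np.quantile(null, 1 - alpha, axis=0)
        keep = 0
        for lam, thr in zip(real, threshold):
            if lam > thr:
                keep += 1
            else:
                break
        return keep

    # Toy data: two informative latent directions plus noise.
    rng = np.random.default_rng(1)
    scores = rng.normal(size=(100, 2))
    loadings = rng.normal(size=(2, 10))
    X = scores @ loadings + 0.1 * rng.normal(size=(100, 10))
    print("components retained:", n_components_by_permutation(X))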
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Johann, Matthew A. „Fire-Robust Structural Engineering: A Framework Approach to Structural Design for Fire Conditions“. Link to electronic thesis, 2002. http://www.wpi.edu/Pubs/ETD/Available/etd-1219102-155849.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: structural engineering; fire safety; framework approach; performance-based design; information management; finite element; lumped-parameter; laboratory tests; steel; beam; restrained; plastic analysis. Includes bibliographical references (p. 180-182).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Su, Weizhe. „Bayesian Hidden Markov Model in Multiple Testing on Dependent Count Data“. University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613751403094066.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Olofsson, Louise. „Sustainable and scalable testing strategy in a multilayered service-based architecture“. Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-80134.

Der volle Inhalt der Quelle
Annotation:
This thesis examines whether the quality of a software project can be measured, and introduces a metric intended to express how well each test performed during software development. The subject is examined because it can be hard to judge how well a project and all its parts performed, both during implementation and after it is done; this thesis provides a possible solution to that need. To answer these questions, a prototype has been developed. The prototype is automated and runs through all the software development tests selected for this project. It sums up the test results and translates them, with the help of a metric, into a quality grade. The metric is calculated with an arbitrary formula developed for this thesis. Once the metric is computed, the development team has an overview of how well each test area is performing and how good the project's end result was. With the help of this metric it is also easier to see whether the achieved quality meets the company's standards and the customer's wishes. The prototype aims to be sustainable, both because the solution should last in the long term and because sustainability means a smoother and more efficient way for developers and other people involved to work with the prototype, since little extra work is required when updates or other necessary implementations are needed. The prototype is applied to a second project, larger and more advanced than the project created for this thesis, to get a better and more accurate understanding of whether the implementation is correct and whether the metric can be used as a value to describe a project. The metric results are compared and evaluated. The results of this thesis constitute a proof of concept and can be seen as a first step in a longer evaluation process for determining the quality of tests; the conclusion is that more parameters, and more weighting of each test's importance, are needed in order to achieve a reliable metric result. The tool is meant to help developers quickly reach a conclusion about how good the work is. It could also be beneficial for a company focused on web development and IT solutions, as it becomes easier to follow and set a standard for the services they provide.
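Since the metric is described only as an arbitrary weighted formula over per-area test results, the following Python sketch merely illustrates that kind of aggregation; the test areas, weights and grading thresholds are invented, not those used in the thesis.

    # Hypothetical per-area test results: (passed, total) for each test area.
    results = {
        "unit":        (148, 150),
        "integration": (37, 40),
        "gui":         (18, 25),
    }

    # Hypothetical importance weights for each area (they sum to 1 here).
    weights = {"unit": 0.5, "integration": 0.3, "gui": 0.2}

    def quality_metric(results, weights):
        """Weighted average of per-area pass rates, in [0, 1]."""
        return sum(weights[a] * passed / total for a, (passed, total) in results.items())

    def grade(score):
        """Translate the numeric metric into a coarse quality grade."""
        if score >= 0.95:
            return "high"
        if score >= 0.80:
            return "acceptable"
        return "below standard"

    score = quality_metric(results, weights)
    print(f"quality metric: {score:.2f} ({grade(score)})")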
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Emer, Maria Claudia Figueiredo Pereira. „Abordagem de teste baseada em defeitos para esquemas de dados“. [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261002.

Der volle Inhalt der Quelle
Annotation:
Advisors: Mario Jino, Silvia Regina Vergilio
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Data are manipulated in several software applications involving critical operations, and in such applications ensuring the quality of the manipulated data is fundamental. Data schemas define the logical structure of and the relationships among data. Testing schemas by means of specific testing approaches, criteria and tools has been little explored as a way to ensure the quality of data defined by schemas. This work proposes a testing approach based on fault classes commonly identified in data schemas. A data metamodel is defined to specify the schemas that can be tested and the constraints on the data in those schemas. The faults that can be revealed are those related to incorrect or absent definitions of constraints on the data in the schema. The approach includes the automatic generation of a test set containing data instances and queries over these instances; the data instances and queries are generated according to patterns defined in each fault class. Experiments in the contexts of Web and database applications were carried out to illustrate the application of the approach.
Doctorate
Computer Engineering
Doctor of Electrical Engineering
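A toy sketch of the fault-class idea described above: for a small hypothetical schema, each fault class produces data instances that deliberately violate one declared constraint, paired with a query a tester could run to see whether the schema actually enforces it. The schema, fault classes and query are invented; the thesis works at the level of a data metamodel with automated tooling.

    import random

    # Hypothetical schema fragment: column constraints for a "product" table.
    schema = {
        "name":  {"type": str, "nullable": False},
        "price": {"type": float, "min": 0.0},
        "stock": {"type": int, "min": 0, "max": 10_000},
    }

    def valid_instance():
        return {"name": "item", "price": round(random.uniform(0, 100), 2),
                "stock": random.randint(0, 10_000)}

    # Fault classes: each yields an instance violating one declared constraint.
    fault_classes = {
        "absent_not_null": lambda: {**valid_instance(), "name": None},
        "below_minimum":   lambda: {**valid_instance(), "price": -1.0},
        "above_maximum":   lambda: {**valid_instance(), "stock": 10_001},
    }

    def test_set(n_per_class=5):
        """Generate (fault class, instance, query) triples; the query is what a
        tester would run after attempting to insert the instance."""
        tests = []
        query = ("SELECT COUNT(*) FROM product "
                 "WHERE name IS NULL OR price < 0 OR stock > 10000")
        for name, gen in fault_classes.items():
            for _ in range(n_per_class):
                tests.append((name, gen(), query))
        return tests

    for fault, inst, query in test_set(1):
        print(fault, inst)
        print("  check with:", query)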
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

CARVALHO, Gustavo Henrique Porto de. „NAT2TEST: generating test cases from natural language requirements based on CSP“. Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/17929.

Der volle Inhalt der Quelle
Annotation:
High trustworthiness levels are usually required when developing critical systems, and model-based testing (MBT) techniques play an important role in generating test cases from specification models. For critical systems, these models are usually created using formal or semi-formal notations. Moreover, it is also desirable to state clearly and formally the conditions necessary to guarantee that an implementation is correct with respect to its specification, by means of a conformance relation that can be used to prove that the test generation strategy is sound. Despite the benefits of MBT, those who are not familiar with the syntax and semantics of the models may be reluctant to adopt these formalisms. Furthermore, most of these models are not available at the very beginning of a project, when usually only natural-language requirements exist, so the use of MBT is postponed. Here, we propose an MBT strategy for generating test cases from controlled natural-language (CNL) requirements: NAT2TEST. It spares the user from having to know the syntax and semantics of the underlying notations and allows early use of MBT via natural-language processing techniques; the formal and semi-formal models used internally by the strategy are generated automatically from the natural-language requirements. The approach is tailored to data-flow reactive systems: a class of embedded systems whose inputs and outputs are always available as signals. These systems can also have timed behaviour, which may be discrete or continuous. The NAT2TEST strategy comprises a number of phases. Initially, the requirements are syntactically analysed according to a CNL we propose for describing data-flow reactive systems. Then, the informal semantics of the requirements are characterised based on case grammar theory. Afterwards, we derive a formal representation of the requirements considering a model of data-flow reactive systems we defined. Finally, this formal model is translated into communicating sequential processes (CSP) to provide means for generating test cases. We prove that our test generation strategy is sound with respect to our CSP-based timed input-output conformance relation, csptio. Besides CSP, we explore the generation of other target notations (SCR and IMR), from which test cases can be generated using commercial tools (T-VEC and RT-Tester, respectively). The whole process is fully automated by the NAT2TEST tool. Our strategy was evaluated on examples from the literature and from the aerospace (Embraer) and automotive (Mercedes) industries. We analysed performance and the ability to detect defects generated via mutation. In general, our strategy outperformed the considered baseline, random testing; we also compared it with relevant commercial tools.
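A toy illustration of turning a requirement for a data-flow reactive system into test cases (plain Python, none of the CNL, case-grammar or CSP machinery of NAT2TEST): one invented requirement is encoded as a check over input/output signals, a reference model supplies expected outputs, and random input signals become test vectors.

    import random

    # Hypothetical requirement: "when the temperature is above 90, the alarm
    # output shall be on in the same cycle, otherwise it shall be off."
    def requirement_holds(temps, alarms):
        return all((t > 90) == a for t, a in zip(temps, alarms))

    def reference_model(temps):
        """Expected behaviour derived directly from the requirement."""
        return [t > 90 for t in temps]

    def generate_test_cases(n_cases=10, length=8, seed=0):
        """Each test case is an input signal plus its expected output signal."""
        rng = random.Random(seed)
        cases = []
        for _ in range(n_cases):
            temps = [rng.randint(60, 120) for _ in range(length)]
            cases.append((temps, reference_model(temps)))
        return cases

    def run_against(sut, cases):
        """Execute the system under test on every case and give a verdict."""
        for temps, expected in cases:
            actual = sut(temps)
            ok = requirement_holds(temps, actual) and actual == expected
            print("pass" if ok else "fail", temps, actual)

    # A deliberately faulty SUT for demonstration (uses >= instead of >).
    run_against(lambda temps: [t >= 90 for t in temps], generate_test_cases())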
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

ITO, Hideo, und Gang ZENG. „Low-Cost IP Core Test Using Tri-Template-Based Codes“. Institute of Electronics, Information and Communication Engineers, 2007. http://hdl.handle.net/2237/15029.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Sadeghzadeh, Seyedehsaloumeh. „Optimal Data-driven Methods for Subject Classification in Public Health Screening“. Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/101611.

Der volle Inhalt der Quelle
Annotation:
Biomarker testing, wherein the concentration of a biochemical marker is measured to predict the presence or absence of a certain binary characteristic (e.g., a disease) in a subject, is an essential component of public health screening. For many diseases, the concentration of disease-related biomarkers may exhibit a wide range, particularly among the disease positive subjects, in part due to variations caused by external and/or subject-specific factors. Further, a subject's actual biomarker concentration is not directly observable by the decision maker (e.g., the tester), who has access only to the test's measurement of the biomarker concentration, which can be noisy. In this setting, the decision maker needs to determine a classification scheme in order to classify each subject as test negative or test positive. However, the inherent variability in biomarker concentrations and the noisy test measurements can increase the likelihood of subject misclassification. We develop an optimal data-driven framework, which integrates optimization and data analytics methodologies, for subject classification in disease screening, with the aim of minimizing classification errors. In particular, our framework utilizes data analytics methodologies to estimate the posterior disease risk of each subject, based on both subject-specific and external factors, coupled with robust optimization methodologies to derive an optimal robust subject classification scheme, under uncertainty on actual biomarker concentrations. We establish various key structural properties of optimal classification schemes, show that they are easily implementable, and develop key insights and principles for classification schemes in disease screening. As one application of our framework, we study newborn screening for cystic fibrosis in the United States. Cystic fibrosis is one of the most common genetic diseases in the United States. Early diagnosis of cystic fibrosis can substantially improve health outcomes, while a delayed diagnosis can result in severe symptoms of the disease, including fatality. We demonstrate our framework on a five-year newborn screening data set from the North Carolina State Laboratory of Public Health. Our study underscores the value of optimization-based approaches to subject classification, and shows that substantial reductions in classification error can be achieved through the use of the proposed framework over current practices.
Doctor of Philosophy
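A minimal sketch of the underlying classification question (not the robust-optimization framework of the dissertation): given noisy biomarker measurements for disease-negative and disease-positive subjects, choose the cut-off that minimizes a weighted misclassification cost, with false negatives weighted more heavily. All distributions and costs below are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical noisy biomarker measurements (arbitrary units).
    negatives = rng.normal(loc=1.0, scale=0.4, size=5000)   # disease-negative subjects
    positives = rng.normal(loc=2.0, scale=0.8, size=500)    # disease-positive subjects

    def expected_cost(threshold, cost_fn=5.0, cost_fp=1.0):
        """Weighted misclassification cost; false negatives are costlier
        because a missed case delays diagnosis."""
        fn = np.mean(positives < threshold)   # positives classified test-negative
        fp = np.mean(negatives >= threshold)  # negatives classified test-positive
        return cost_fn * fn + cost_fp * fp

    candidates = np.linspace(0.0, 4.0, 401)
    costs = [expected_cost(t) for t in candidates]
    best = candidates[int(np.argmin(costs))]
    print(f"best threshold ~ {best:.2f}, expected cost {min(costs):.3f}")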
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

馮可達 und Ho-tat Fung. „Soil property determination through a knowledge-based system with emphasis on undrained shear strength“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31236868.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Strålfors, Annika. „Making test automation sharable: The design of a generic test automation framework for web based applications“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240981.

Der volle Inhalt der Quelle
Annotation:
The validation approach for assuring software quality often includes the conduct of tests. Software testing covers a wide range of methodology depending on the system level and the component under test. Graphical user interface (GUI) testing consists of high-level tests that assert that functions and design elements in user interfaces work as expected. The research conducted in this paper focused on GUI testing of web-based applications and the movement towards automated testing within the software industry. The question which formed the basis for the study was the following: How should a generic test automation framework be designed in order to allow maintenance between developers and non-developers? The study was conducted at a Swedish consultancy that provides e-commerce web solutions. A work strategy approach for automated testing was identified and an automation framework prototype was produced. The framework was evaluated through a pilot study in which testers participated by creating a test suite for a specific GUI testing area. Time estimates were collected as well as qualitative measurements through a follow-up survey. This paper presents a work strategy proposal for automated tests together with a description of the framework's system design. The results are presented with a subsequent discussion about the benefits and complexity of creating and introducing automated tests within large-scale systems. Suggestions for future work are also addressed, together with an account of the framework's usefulness for testing areas other than GUI testing.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Zhang, Zhidong 1957. „Cognitive assessment in a computer-based coaching environment in higher education : diagnostic assessment of development of knowledge and problem-solving skill in statistics“. Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102853.

Der volle Inhalt der Quelle
Annotation:
Diagnostic cognitive assessment (DCA) was explored using Bayesian networks and evidence-centred design (ECD) in a statistics learning domain (ANOVA). The assessment environment simulates problem solving activities that occurred in a web-based statistics learning environment. The assessment model is composed of assessment constructs, and evidence models. Assessment constructs correspond to components of knowledge and procedural skill in a cognitive domain model and are represented as explanatory variables in the assessment model. Explanatory variables represent specific aspects of student's performance of assessment problems. Bayesian networks are used to connect the explanatory variables to the evidence variables. These links enable the network to propagate evidential information to explanatory model variables in the assessment model. The purpose of DCA is to infer cognitive components of knowledge and skill that have been mastered by a student. These inferences are realized probabilistically using the Bayesian network to estimate the likelihood that a student has mastered specific components of knowledge or skill based on observations of features of the student's performance of an assessment task.
The objective of this study was to develop a Bayesian assessment model that implements DCA in a specific domain of statistics, and evaluate it in relation to its potential to achieve the objectives of DCA. This study applied a method for model development to the ANOVA score model domain to attain the objectives of the study. The results documented: (a) the process of model development in a specific domain; (b) the properties of the Bayesian assessment model; (c) the performance of the network in tracing students' progress towards mastery by using the model to successfully update the posterior probabilities; (d) the use of estimates of log odds ratios of likelihood of mastery as a measure of "progress toward mastery;" (e) the robustness of diagnostic inferences based on the network; and (f) the use of the Bayesian assessment model for diagnostic assessment with a sample of 20 students who completed the assessment tasks. The results indicated that the Bayesian assessment network provided valid diagnostic information about specific cognitive components, and was able to track development towards achieving mastery of learning goals.
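The kind of inference such an assessment network performs can be illustrated with a single-skill Bayes update: each observed task outcome moves the probability that the student has mastered the skill, and the running posterior traces "progress toward mastery". The prior and the conditional probabilities below are invented placeholders; the thesis uses full Bayesian networks over many knowledge and evidence variables.

    def update_mastery(prior, correct, p_correct_given_mastery=0.85,
                       p_correct_given_no_mastery=0.25):
        """Posterior P(mastery | observation) after one scored task (Bayes' rule)."""
        if correct:
            like_m, like_n = p_correct_given_mastery, p_correct_given_no_mastery
        else:
            like_m, like_n = 1 - p_correct_given_mastery, 1 - p_correct_given_no_mastery
        numerator = like_m * prior
        return numerator / (numerator + like_n * (1 - prior))

    # Trace progress toward mastery over a sequence of task outcomes.
    p = 0.5                                   # neutral prior
    for step, outcome in enumerate([True, True, False, True, True], start=1):
        p = update_mastery(p, outcome)
        print(f"after task {step} ({'correct' if outcome else 'wrong'}): P(mastery) = {p:.3f}")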
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Trondman, Anna-Kari. „Pollen-based quantitative reconstruction of land-cover change in Europe from 11,500 years ago until present - A dataset suitable for climate modelling“. Doctoral thesis, Linnéuniversitetet, Institutionen för biologi och miljö (BOM), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-40775.

Der volle Inhalt der Quelle
Annotation:
The major objective of this thesis was to produce descriptions of the land vegetation cover in Europe for selected time windows of the Holocene (6000, 3000, 500, 200, and 50 calendar years before present (BP=1950)) that can be used in climate modelling. Land vegetation is part of the climate system; its changes influence climate through biogeophysical and biogeochemical processes. Land use such as deforestation is one of the external forcings of climate change. Reliable descriptions of vegetation cover in the past are needed to study land cover-climate interactions and understand the possible effects of present and future land-use changes on future climate. We tested and applied the REVEALS (Regional Estimates of VEgetation Abundance from Large Sites) model to estimate past vegetation in percentage cover over Europe using pollen records from lake sediments and peat bogs. The model corrects for the biases of pollen data due to intraspecific differences in pollen productivity and pollen dispersion and deposition in lakes and bogs. For the land-cover reconstructions in Europe and the Baltic Sea catchment we used 636 (grouped by 1˚x1˚ grid cells) and 339 (grouped by biogeographical regions) pollen records, respectively. The REVEALS reconstructions were performed for 25 tree, shrub and herb taxa. The grid-based REVEALS reconstructions were then interpolated using a set of statistical spatial models. We show that the choice of input parameters for the REVEALS application does not affect the ranking of the REVEALS estimates significantly, except when entomophilous taxa are included. We demonstrate that pollen data from multiple small sites provide REVEALS estimates that are comparable to those obtained with pollen data from large lakes, albeit with larger error estimates. The distance between the small sites does not influence the results significantly as long as the sites are at a sufficient distance from vegetation zone boundaries. The REVEALS estimates of open land for Europe and the Baltic Sea catchment indicate that the degree of landscape openness during the Holocene was significantly higher than previously interpreted from pollen percentages. The relationship between Pinus and Picea, and between evergreen and summer-green taxa, may also differ strongly depending on whether it is based on REVEALS percentage cover or on pollen percentages. These results provide entirely new insights into Holocene vegetation history and help in understanding questions related to resource management by humans and biodiversity in the past. The statistical spatial models provide for the first time pollen-based descriptions of past land cover that can be used in climate modelling and studies of land cover-climate interactions in the past.
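The core of the pollen-productivity correction can be caricatured in a few lines: raw pollen proportions over-represent high pollen producers, so counts are deflated by relative pollen productivity estimates before being normalised to cover. This is only a crude caricature of the bias REVEALS corrects (the real model also accounts for dispersal, deposition and basin size), and the taxa, counts and productivity values below are invented.

    # Hypothetical pollen counts from one site and relative pollen productivity
    # estimates (PPEs); both are placeholders, not real data.
    pollen_counts = {"Pinus": 620, "Picea": 180, "Poaceae": 150, "Calluna": 50}
    ppe           = {"Pinus": 6.0, "Picea": 2.0, "Poaceae": 1.0, "Calluna": 0.8}

    def naive_cover_estimate(counts, productivity):
        """Deflate counts by productivity, then normalise to percentage cover."""
        adjusted = {taxon: counts[taxon] / productivity[taxon] for taxon in counts}
        total = sum(adjusted.values())
        return {taxon: 100 * value / total for taxon, value in adjusted.items()}

    raw_pct = {t: 100 * c / sum(pollen_counts.values()) for t, c in pollen_counts.items()}
    corrected = naive_cover_estimate(pollen_counts, ppe)
    for taxon in pollen_counts:
        print(f"{taxon:8s} pollen {raw_pct[taxon]:5.1f}%  ->  estimated cover {corrected[taxon]:5.1f}%")

With these made-up numbers the herb and shrub taxa gain cover at the expense of the heavy pollen producers, which is the qualitative effect behind the higher landscape openness reported above.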
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Moussa, Ahmed S. „On learning and visualizing lexicographic preference trees“. UNF Digital Commons, 2019. https://digitalcommons.unf.edu/etd/882.

Der volle Inhalt der Quelle
Annotation:
Preferences are very important in research fields such as decision making, recommender systems and marketing. The focus of this thesis is on preferences over combinatorial domains, which are domains of objects configured with categorical attributes. For example, the domain of cars includes car objects that are constructed with values for attributes such as ‘make’, ‘year’, ‘model’, ‘color’, ‘body type’ and ‘transmission’. Different values can instantiate an attribute; for instance, the attribute ‘make’ can take the values Honda, Toyota, Tesla or BMW, and the attribute ‘transmission’ can be automatic or manual. To this end, this thesis studies problems of preference visualization and learning for lexicographic preference trees, graphical preference models that are often compact over complex domains of objects built from categorical attributes. Visualizing preferences is essential to provide users with insights into the process of decision making, while learning preferences from data is practically important, as it is ineffective to elicit preference models directly from users. The results obtained in this thesis fall into two parts: 1) for preference visualization, a web-based system is created that visualizes various types of lexicographic preference tree models learned by a greedy learning algorithm; 2) for preference learning, a genetic algorithm, called GA, is designed and implemented that learns a restricted type of lexicographic preference tree, called unconditional importance and unconditional preference trees, or UIUP trees for short. Experiments show that GA achieves higher accuracy compared to the greedy algorithm at the cost of more computational time. Moreover, a Dynamic Programming Algorithm (DPA) was devised and implemented that computes an optimal UIUP tree model in the sense that it satisfies as many examples as possible in the dataset. This novel exact algorithm (DPA) was used to evaluate the quality of models computed by GA, and it was found to reduce the factorial time complexity of the brute-force algorithm to exponential. The major contribution of this thesis to the field of machine learning and data mining is the novel exact learning algorithm DPA, which finds the best UIUP tree model in the huge search space, i.e. the model that correctly classifies the largest number of examples in the training dataset; such a model is referred to as the optimal model in this thesis. Finally, using datasets produced from randomly generated UIUP trees, this thesis presents experimental results on the performance (e.g., accuracy and computational time) of GA compared to the existing greedy algorithm and DPA.
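What a UIUP-style lexicographic model expresses can be shown directly: attributes are ranked by importance, each attribute has a fixed preference order over its values, and two objects are compared on the most important attribute where they differ. The car attributes and orders below are invented for illustration; the thesis's greedy, GA and DPA algorithms learn such models from example data.

    # Attribute importance order (most important first) and, per attribute, the
    # preferred order of values (better values first).
    importance = ["make", "transmission", "color"]
    value_order = {
        "make":         ["Tesla", "Toyota", "Honda", "BMW"],
        "transmission": ["automatic", "manual"],
        "color":        ["black", "white", "red"],
    }

    def prefers(a, b):
        """Return True if object a is lexicographically preferred to object b."""
        for attr in importance:
            rank_a = value_order[attr].index(a[attr])
            rank_b = value_order[attr].index(b[attr])
            if rank_a != rank_b:
                return rank_a < rank_b   # decided by the most important differing attribute
        return False                     # identical on all attributes: no strict preference

    car1 = {"make": "Toyota", "transmission": "manual",    "color": "black"}
    car2 = {"make": "Toyota", "transmission": "automatic", "color": "red"}
    print(prefers(car2, car1))   # True: same make, but automatic beats manual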
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Rubbestad, Gustav, und William Söderqvist. „Hacking a Wi-Fi based drone“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299887.

Der volle Inhalt der Quelle
Annotation:
Unmanned Aerial Vehicles, often called drones or abbreviated as UAVs, have been popularised and used by civilians for recreational purposes since the early 2000s. A majority of the entry-level commercial drones on the market are based on a WiFi connection to a controller, usually a smart phone. This makes them vulnerable to various WiFi attacks, which are evaluated and tested in this thesis, specifically on the Ryze Tello drone. Several threats were identified through threat modelling, and a subset of them was selected for penetration testing. This was done in order to answer the research question: How vulnerable is the Ryze Tello drone against WiFi-based attacks? The answer is that the Ryze Tello drone is relatively safe, with the exception that its network has no password by default. A password was set for the network, but it was still recovered through a dictionary attack. This enabled attacks such as injecting flight instructions, as well as gaining access to the drone's video feed while simultaneously controlling it through commands in a terminal.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Lindberg, Tobias. „A/B-testing for web design: A comparative study of response times between MySQL and PostgreSQL : Implementation of a web based tool for design comparisons with stored images“. Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15409.

Der volle Inhalt der Quelle
Annotation:
Web development is a challenging task and it is easy to neglect feedback from users during development. That is why the aim is to create a tool which helps the communication between developers and users by using A/B testing. The idea is to let developers release two choices containing images, which represent the intended design changes. By letting the users vote for the preferred option, they are able to provide feedback for the developers. Response time becomes a critical factor for the tool's overall success. Therefore, this study compares MySQL and PostgreSQL through a technical experiment to see which database would be the better option for the image processing involved. The experiment indicated that PostgreSQL would be the better alternative, as it had the most responsive processing of images. The prototype provides a good foundation for a potentially useful system that could be implemented in future work.
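A sketch of how such a response-time comparison can be timed from the application side. The `fetch_image` argument stands in for whatever query the tool issues (e.g. reading an image BLOB by id) and would be backed by the usual MySQL and PostgreSQL client libraries; the stand-in below only sleeps, and nothing here reproduces the thesis's actual benchmark.

    import statistics
    import time

    def measure(fetch_image, image_ids, repeats=5):
        """Time repeated fetches; report mean and 95th-percentile latency in ms."""
        samples = []
        for _ in range(repeats):
            for image_id in image_ids:
                start = time.perf_counter()
                fetch_image(image_id)                    # hypothetical query call
                samples.append((time.perf_counter() - start) * 1000.0)
        return statistics.mean(samples), statistics.quantiles(samples, n=20)[18]

    # Stand-in 'database' that just sleeps; in the real tool this would be
    # something like fetch_image_mysql(conn, image_id) / fetch_image_postgres(...),
    # and the two result pairs would be compared side by side.
    mean_ms, p95_ms = measure(lambda image_id: time.sleep(0.001), range(10))
    print(f"mean {mean_ms:.2f} ms, p95 {p95_ms:.2f} ms")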
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

De, Voir Christopher S. „Wavelet Based Feature Extraction and Dimension Reduction for the Classification of Human Cardiac Electrogram Depolarization Waveforms“. PDXScholar, 2005. https://pdxscholar.library.pdx.edu/open_access_etds/1740.

Der volle Inhalt der Quelle
Annotation:
An essential task for a pacemaker or implantable defibrillator is the accurate identification of rhythm categories so that the correct electrotherapy can be administered. Because some rhythms cause a rapid dangerous drop in cardiac output, it is necessary to categorize depolarization waveforms on a beat-to-beat basis to accomplish rhythm classification as rapidly as possible. In this thesis, a depolarization waveform classifier based on the Lifting Line Wavelet Transform is described. It overcomes problems in existing rate-based event classifiers; namely, (1) they are insensitive to the conduction path of the heart rhythm and (2) they are not robust to pseudo-events. The performance of the Lifting Line Wavelet Transform based classifier is illustrated with representative examples. Although rate-based methods of event categorization have served well in implanted devices, these methods suffer in sensitivity and specificity when atrial and ventricular rates are similar. Human experts differentiate rhythms by morphological features of strip chart electrocardiograms. The wavelet transform is a simple approximation of this human expert analysis function because it correlates distinct morphological features at multiple scales. The accuracy of implanted rhythm determination can then be improved by using human-appreciable time domain features enhanced by time scale decomposition of depolarization waveforms. The purpose of the present work was to determine the feasibility of implementing such a system on a limited-resolution platform. 78 patient recordings were split into equal segments of reference, confirmation, and evaluation sets. Each recording had a sampling rate of 512 Hz and contained a significant change in rhythm. The wavelet feature generator implemented in Matlab performs anti-alias pre-filtering, quantization, and threshold-based event detection to produce indications of events to submit to wavelet transformation. The receiver operating characteristic curve was used to rank the discriminating power of the features, accomplishing dimension reduction. Accuracy was used to confirm the feature choice. Evaluation accuracy was greater than or equal to 95% over the IEGM recordings.
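The lifting idea behind such transforms can be shown with the simplest case, a Haar-like lifting step: split the samples into even and odd halves, predict the odd samples from the even ones (detail), then update the even ones so they carry the pairwise mean (approximation). The thesis's Lifting Line Wavelet Transform for limited-resolution hardware is more elaborate; this is only a generic illustration with a made-up waveform.

    def haar_lifting_step(signal):
        """One Haar lifting step: returns (approximation, detail) coefficients."""
        even = signal[0::2]
        odd = signal[1::2]
        detail = [o - e for o, e in zip(odd, even)]          # predict step
        approx = [e + d / 2 for e, d in zip(even, detail)]   # update step (pairwise mean)
        return approx, detail

    def multiscale(signal, levels):
        """Repeatedly transform the approximation to expose features at coarser scales."""
        details = []
        approx = list(signal)
        for _ in range(levels):
            approx, d = haar_lifting_step(approx)
            details.append(d)
        return approx, details

    # A toy 'depolarization waveform' (8 samples) decomposed over 3 scales.
    waveform = [0, 1, 4, 9, 4, 1, 0, 0]
    approx, details = multiscale(waveform, 3)
    print("approximation:", approx)
    print("details per scale:", details)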
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Luo, Dan, und Yajing Ran. „Micro Drivers behind the Changes of CET1 Capital Ratio : An empirical analysis based on the results of EU-wide stress test“. Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-44140.

Der volle Inhalt der Quelle
Annotation:
Background: Stress tests have been increasingly used as part of the supervisory toolkit by national regulators after the financial crisis; they support authorities' supervision in determining bank capital levels and assessing the health of a bank. Purpose: The main purpose of this study is to assess whether certain micro factors play important roles in the changes of the Common Equity Tier 1 (CET1) capital ratio, i.e. the difference between the bank's accounting value and the stress-test results under the adverse scenarios. A secondary purpose is to investigate whether the empirical results can provide theoretical suggestions to regulators when they exercise stress tests. Method: An empirical analysis using panel data, introducing a GARCH model to measure volatility. Empirical foundation: The results of EU-wide stress tests and bank financial statements. Conclusion: The coefficient associated with non-performing loans to total loans is significantly positive and the coefficient associated with bank size is significantly negative. In addition, the financial systems of strong banks are better able to absorb financial shocks. These findings are useful: since banks are a reflection of the financial stability of an economic entity, they give another reason to pay attention to the process of stress testing rather than just the stress-test results.
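A minimal sketch of the GARCH(1,1) recursion used to turn a return series into a conditional volatility series. The parameters are fixed by hand purely for illustration; in a study like this they would be estimated (e.g. by maximum likelihood) before the volatility series enters the panel regression, and the return series below is simulated.

    import random

    def garch_volatility(returns, omega=1e-6, alpha=0.08, beta=0.90):
        """Conditional variance recursion:
           sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
        sigma2 = [omega / (1 - alpha - beta)]     # start at the unconditional variance
        for r in returns[:-1]:
            sigma2.append(omega + alpha * r**2 + beta * sigma2[-1])
        return [s**0.5 for s in sigma2]

    # Toy daily return series standing in for a bank's equity returns.
    random.seed(0)
    returns = [random.gauss(0.0, 0.01) for _ in range(250)]
    vol = garch_volatility(returns)
    print("last annualised volatility estimate:", vol[-1] * 252**0.5)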
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Jara-Almonte, J. „Extraction of eigen-pairs from beam structures using an exact element based on a continuum formulation and the finite element method“. Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54300.

Der volle Inhalt der Quelle
Annotation:
Studies of numerical methods to decouple structure and fluid interaction have reported the need for more precise approximations of higher structure eigenvalues and eigenvectors than are currently available from standard finite elements. The purpose of this study is to investigate hybrid finite element models composed of standard finite elements and exact-elements for the prediction of higher structure eigenvalues and eigenvectors. An exact beam-element dynamic-stiffness formulation is presented for a plane Timoshenko beam with rotatory inertia. This formulation is based on a converted continuum transfer matrix and is incorporated into a typical finite element program for eigenvalue/vector problems. Hybrid models using the exact-beam element generate transcendental, nonlinear eigenvalue problems. An eigenvalue extraction technique for this problem is also implemented. Also presented is a post-processing capability to reconstruct the mode shape of each exact element at as many discrete locations along the element as desired. The resulting code has advantages over both the standard transfer matrix method and the standard finite element method. The advantage over the transfer matrix method is that complicated structures may be modeled with the converted continuum transfer matrix without having to use branching techniques. The advantage over the finite element method is that fewer degrees of freedom are necessary to obtain good approximations for the higher eigenvalues. The reduction is achieved because the incorporation of an exact-beam-element is tantamount to the dynamic condensation of an infinity of degrees of freedom. Numerical examples are used to illustrate the advantages of this method. First, the eigenvalues of a fixed-fixed beam are found with purely finite element models, purely exact-element models, and a closed-form solution. Comparisons show that purely exact-element models give, for all practical purposes, the same eigenvalues as a closed-form solution. Next, a Portal Arch and a Verdeel Truss structure are modeled with hybrid models, purely finite element, and purely exact-element models. The hybrid models do provide precise higher eigenvalues with fewer degrees of freedom than the purely finite element models. The purely exact-element models were the most economical for obtaining higher structure eigenvalues. The hybrid models were more costly than the purely exact-element models, but not as costly as the purely finite element models.
Ph. D.
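The kind of transcendental eigenvalue problem an exact dynamic-stiffness element produces can be illustrated with a much simpler classical case: the clamped-clamped Euler-Bernoulli beam, whose dimensionless eigenvalues x = beta*L satisfy cos(x)cosh(x) = 1. The sketch below brackets sign changes on a grid and refines them by bisection; the thesis's Timoshenko formulation and hybrid models are considerably more involved.

    import math

    def characteristic(x):
        """Clamped-clamped Euler-Bernoulli beam: cos(x)*cosh(x) - 1 = 0."""
        return math.cos(x) * math.cosh(x) - 1.0

    def bisect(f, lo, hi, tol=1e-10):
        """Simple bisection on a bracket with a sign change."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    def first_roots(n_roots=4, x_max=20.0, step=0.01):
        """Scan a grid for sign changes, then refine each bracket."""
        roots, x = [], step
        while x < x_max and len(roots) < n_roots:
            if characteristic(x) * characteristic(x + step) < 0:
                roots.append(bisect(characteristic, x, x + step))
            x += step
        return roots

    # Dimensionless eigenvalues beta*L of the frequency equation; natural
    # frequencies then follow as omega_i = (beta_i*L)**2 * sqrt(E*I/(rho*A)) / L**2.
    for i, bl in enumerate(first_roots(), start=1):
        print(f"mode {i}: beta*L = {bl:.4f}")   # expected: 4.7300, 7.8532, 10.9956, 14.1372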
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Potnuru, Srinath. „Fuzzing Radio Resource Control messages in 5G and LTE systems : To test telecommunication systems with ASN.1 grammar rules based adaptive fuzzer“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294140.

Der volle Inhalt der Quelle
Annotation:
5G telecommunication systems must be ultra-reliable to meet the needs of the next evolution in communication. The systems deployed must be thoroughly tested and must conform to their standards. Software and network protocols are commonly tested with techniques like fuzzing, penetration testing, code review and conformance testing. With fuzzing, testers can send crafted inputs to monitor the System Under Test (SUT) for a response. 3GPP, the standardization body for the telecom system, produces new versions of specifications as part of continuously evolving features and enhancements. This leads to many versions of specifications for a network protocol like Radio Resource Control (RRC), and testers need to constantly update the testing tools and the testing environment. In this work, it is shown that by using the generic nature of RRC specifications, which are given in the Abstract Syntax Notation One (ASN.1) description language, one can design a testing tool that adapts to all versions of the 3GPP specifications. This thesis introduces an ASN.1-based adaptive fuzzer that can be used for testing RRC and other network protocols based on the ASN.1 description language. The fuzzer extracts knowledge about ongoing RRC messages using the protocol description files of RRC, i.e., the RRC ASN.1 schema from 3GPP, and uses this knowledge to fuzz RRC messages. The adaptive fuzzer identifies individual fields, sub-messages, and custom data types according to the specifications when mutating the content of existing messages. Furthermore, the adaptive fuzzer has identified a previously unknown vulnerability in the Evolved Packet Core (EPC) of srsLTE and openLTE, two open-source LTE implementations, confirming its applicability to robustness testing of RRC and other network protocols.
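A toy sketch of the schema-aware mutation idea in plain Python. The field descriptions below are a hand-written stand-in for what the adaptive fuzzer derives automatically from the ASN.1 schema (the names and ranges are invented, not real RRC fields); each test case mutates one field of an otherwise well-formed message, including deliberately out-of-range values.

    import random

    # Hand-written stand-in for field descriptions that a real tool would derive
    # from the protocol's ASN.1 schema.
    message_schema = {
        "transactionId": {"type": "int", "min": 0, "max": 3},
        "cause":         {"type": "enum", "values": ["reconfiguration", "handover", "other"]},
        "payload":       {"type": "bytes", "max_len": 32},
    }

    def mutate_field(name, spec, rng):
        """Produce a value that is either boundary-valid or intentionally invalid."""
        if spec["type"] == "int":
            return rng.choice([spec["min"] - 1, spec["min"], spec["max"], spec["max"] + 1,
                               rng.randint(-2**31, 2**31 - 1)])
        if spec["type"] == "enum":
            return rng.choice(spec["values"] + ["__undefined_choice__"])
        if spec["type"] == "bytes":
            length = rng.choice([0, spec["max_len"], spec["max_len"] * 4])
            return bytes(rng.getrandbits(8) for _ in range(length))
        raise ValueError(f"unknown field type {spec['type']}")

    def fuzz_cases(base_message, n_cases=5, seed=0):
        """Each case mutates exactly one field of an otherwise well-formed message."""
        rng = random.Random(seed)
        cases = []
        for _ in range(n_cases):
            target = rng.choice(list(message_schema))
            mutated = dict(base_message)
            mutated[target] = mutate_field(target, message_schema[target], rng)
            cases.append((target, mutated))
        return cases

    base = {"transactionId": 1, "cause": "handover", "payload": b"\x00" * 4}
    for field, msg in fuzz_cases(base):
        print("mutated", field, "->", msg)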
APA, Harvard, Vancouver, ISO und andere Zitierweisen