Dissertations / Theses on the topic 'Validation'


Consult the top 50 dissertations / theses for your research on the topic 'Validation.'


1

Yuan, Changzheng. "Nutrient Validation In Women's Lifestyle Validation Study." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:16121152.

Full text
Abstract:
Nutritional factors have been intensively studied as important determinants of many diseases. Food frequency questionnaires (FFQs), dietary records, 24-hour dietary recalls, and nutrient biomarkers are important dietary assessment methods, and all are subject to various sources of measurement error. Given the limitations of these methods, much effort has been devoted to refining them and evaluating their ability to measure diet. This dissertation evaluated the performance of a semi-quantitative FFQ (SFFQ), multiple web-based automated self-administered 24-hour recalls (ASA24), 7-day dietary records (7DDR) and biochemical indicators in assessing nutrient intakes among women. The intraclass correlation coefficient, the Spearman correlation coefficient, and the validity coefficient calculated by the method of triads were used to evaluate the reproducibility and validity of each dietary method. The first paper evaluated the performance of a 152-item SFFQ by comparing nutrient intakes estimated by the SFFQ with those measured by the average of two 7DDRs, and of four ASA24s, kept over a one-year period. The study SFFQ performed consistently well when compared with multiple diet records, indicating that modifications to the questionnaire over time have adequately taken into account the changes in the food supply and eating patterns that have occurred since 1980. Multiple ASA24s can provide estimates of validity similar to those of dietary records if day-to-day variation is taken into account. The second paper explored the validity of long-term intakes of energy, protein, sodium and potassium assessed by the SFFQ and ASA24s, using recovery biomarkers and the 7DDR as standards. The study SFFQ and averaged ASA24s are reasonably valid measurements of energy-adjusted protein, sodium and potassium when compared to multiple recovery biomarkers or dietary records. Recovery biomarkers, however, should not be considered free of error, including systematic within-person error.
Finally, the third paper evaluated the validity of nutrients assessed by the SFFQ and ASA24s compared with intakes from the 7DDR and with plasma levels of fatty acids, carotenoids, retinol, tocopherols and folate. Again, the study SFFQ provides reasonably valid measurements of specific fatty acids, most carotenoids, alpha-tocopherol and folate compared to concentration biomarkers or dietary records. Compared to the SFFQ, almost all nutrients estimated by averaged ASA24s had relatively low correlations with biomarkers, 7DDRs and the estimated 'true' underlying intakes.
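The Spearman correlation used throughout this abstract is the Pearson correlation computed on ranks. A minimal, self-contained sketch with made-up sodium intakes (all values are illustrative, not from the study):

```python
def rank(values):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, converted to 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical sodium intakes (mg/day) from an FFQ and from diet records:
ffq     = [2100, 3400, 1800, 2900, 2500]
records = [2300, 3100, 1700, 2600, 2800]
print(round(spearman(ffq, records), 2))  # 0.9
```

With no ties this matches the textbook formula 1 - 6*sum(d^2)/(n*(n^2-1)) over the rank differences d.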
Nutrition
APA, Harvard, Vancouver, ISO, and other styles
2

Koepke, Hoyt Adam. "Bayesian cluster validation." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1496.

Abstract:
We propose a novel framework based on Bayesian principles for validating clusterings and present efficient algorithms for use with centroid- or exemplar-based clustering solutions. Our framework treats the data as fixed and introduces perturbations into the clustering procedure. In our algorithms, we scale the distances between points by a random variable whose distribution is tuned against a baseline null dataset. The random variable is integrated out, yielding a soft assignment matrix that gives the behavior under perturbation of the points relative to each of the clusters. From this soft assignment matrix, we are able to visualize inter-cluster behavior, rank clusters, and give a scalar index of clustering stability. In a large test on synthetic data, our method matches or outperforms other leading methods at predicting the correct number of clusters. We also present a theoretical analysis of our approach, which suggests that it is useful for high-dimensional data.
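A toy sketch of the perturbation idea (not the thesis's exact algorithm, which integrates the random variable out analytically): repeatedly scale point-to-centroid distances by multiplicative noise, record how often each point lands in each cluster to build a soft assignment matrix, and summarize it as a scalar stability index.

```python
import random

def soft_assignments(points, centroids, trials=500, noise=0.2, seed=0):
    """Monte Carlo soft assignment matrix under distance perturbation."""
    rng = random.Random(seed)
    counts = [[0] * len(centroids) for _ in points]
    for _ in range(trials):
        for i, p in enumerate(points):
            dists = [abs(p - c) * rng.uniform(1 - noise, 1 + noise)
                     for c in centroids]
            counts[i][dists.index(min(dists))] += 1
    return [[c / trials for c in row] for row in counts]

def stability_index(soft):
    # Mean of each point's strongest membership: 1.0 = perfectly stable.
    return sum(max(row) for row in soft) / len(soft)

points = [0.1, 0.2, 0.25, 5.0, 5.1, 5.3]   # two well-separated 1-D groups
centroids = [0.2, 5.1]
soft = soft_assignments(points, centroids)
print(stability_index(soft))  # close to 1.0 for well-separated clusters
```

A poor clustering (e.g. two centroids inside one tight group) would yield rows spread across clusters and a noticeably lower index.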
3

Fraher, Patrick M. A. "Environmental sensor validation." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308651.

4

Govereau, Paul. "Denotational Translation Validation." Thesis, Harvard University, 2011. http://dissertations.umi.com/gsas.harvard:10045.

Abstract:
In this dissertation we present a simple and scalable system for validating the correctness of low-level program transformations. Proving that program transformations are correct is crucial to the development of security-critical software tools. We achieve a simple and scalable design by compiling sequential low-level programs to synchronous data-flow programs. These data-flow programs are a denotation of the original programs, representing all of the relevant aspects of the program semantics. We then check that the two denotations are equivalent, which implies that the program transformation is semantics-preserving. Our denotations are computed by means of symbolic analysis. In order to achieve our design, we have extended symbolic analysis to arbitrary control-flow graphs. To this end, we have designed an intermediate language called Synchronous Value Graphs (SVG), which is capable of representing our denotations for arbitrary control-flow graphs; we have built an algorithm for computing SVG from normal assembly language; and we have given a formal model of SVG which allows us to simplify and compare denotations. Finally, we report on our experiments with LLVM M.D., a prototype denotational translation validator for the LLVM optimization framework.
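The core idea can be illustrated on a much smaller scale than SVG: give two straight-line instruction sequences a symbolic denotation (expression trees for each register, with constant folding and a simplification rule), then compare the denotations. Everything below is a hypothetical miniature, not the dissertation's system.

```python
def denote(program):
    """Symbolically execute straight-line code; map registers to expressions."""
    env = {}
    for dest, op, a, b in program:
        va = env.get(a, a)  # an operand is a register name or a literal
        vb = env.get(b, b)
        if isinstance(va, int) and isinstance(vb, int):
            env[dest] = va + vb if op == "add" else va * vb  # constant fold
        elif op == "add" and vb == 0:
            env[dest] = va              # algebraic simplification: x + 0 = x
        else:
            env[dest] = (op, va, vb)    # residual symbolic expression
    return env

# An "optimization" removes the redundant add; validation compares denotations.
original  = [("r1", "add", "x", 0), ("r2", "mul", "r1", 4)]
optimized = [("r2", "mul", "x", 4)]
print(denote(original)["r2"] == denote(optimized)["r2"])  # True
```

If the transformation changed behavior (say, multiplying by 5 instead of 4), the two denotations would differ structurally and the check would fail.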
Engineering and Applied Sciences
5

Rizk, Raya. "Big Data Validation." Thesis, Uppsala universitet, Informationssystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353850.

Abstract:
With the explosion in usage of big data, stakes are high for companies to develop workflows that translate the data into business value. Those data transformations are continuously updated and refined in order to meet the evolving business needs, and it is imperative to ensure that a new version of a workflow still produces the correct output. This study focuses on the validation of big data in a real-world scenario, and implements a validation tool that compares two databases that hold the results produced by different versions of a workflow in order to detect and prevent potential unwanted alterations, with row-based and column-based statistics being used to validate the two versions. The tool was shown to provide accurate results in test scenarios, providing leverage to companies that need to validate the outputs of the workflows. In addition, by automating this process, the risk of human error is eliminated, and it has the added benefit of improved speed compared to the more labour-intensive manual alternative. All this allows for a more agile way of performing updates on the data transformation workflows by improving on the turnaround time of the validation process.
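A minimal sketch of the row- and column-based checks the abstract describes, assuming the two workflow outputs fit in memory as lists of dicts (the actual tool compared databases; the names below are illustrative):

```python
def validate_outputs(old, new, numeric_cols, tol=1e-9):
    """Compare two versions of a workflow's output table; return issues found."""
    issues = []
    if len(old) != len(new):
        issues.append(f"row count changed: {len(old)} -> {len(new)}")
    for col in numeric_cols:
        mean_old = sum(r[col] for r in old) / len(old)
        mean_new = sum(r[col] for r in new) / len(new)
        if abs(mean_old - mean_new) > tol:
            issues.append(f"mean({col}) changed: {mean_old} -> {mean_new}")
    return issues

v1 = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 20.0}]
v2 = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 21.0}]
print(validate_outputs(v1, v2, ["amount"]))  # flags the changed column mean
```

An empty issue list means the new workflow version reproduced the old statistics; in practice one would add more statistics (min/max, null counts, per-row diffs keyed on id) in the same pattern.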
6

Engel, Isabelle. "Validation des autoclaves." Paris 5, 1991. http://www.theses.fr/1991PA05P071.

7

Beaudin, Guy. "Croyance partagée en l'efficacité groupale, validation prédictive et validation de construit." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ26636.pdf.

8

Elder, Samuel Scott. "Reliable validation : new perspectives on adaptive data analysis and cross-validation." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120660.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 107-109).
Validation refers to the challenge of assessing how well a learning algorithm performs after it has been trained on a given data set. It forms an important step in machine learning, as such assessments are then used to compare and choose between algorithms and provide reasonable approximations of their accuracy. In this thesis, we provide new approaches for addressing two common problems with validation. In the first half, we assume a simple validation framework, the holdout set, and address an important question of how many algorithms can be accurately assessed using the same holdout set, in the particular case where these algorithms are chosen adaptively. We do so by first critiquing the initial approaches to building a theory of adaptivity, then offering an alternative approach and preliminary results within this approach, all geared towards characterizing the inherent challenge of adaptivity. In the second half, we address the validation framework itself. Most common practice does not just use a single holdout set, but averages results from several, a family of techniques known as cross-validation. In this work, we offer several new cross-validation techniques with the common theme of utilizing training sets of varying sizes. This culminates in hierarchical cross-validation, a meta-technique for using cross-validation to choose the best cross-validation method.
by Samuel Scott Elder.
Ph. D.
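The cross-validation framework discussed above starts from fold generation. A minimal sketch of k-fold index splitting (the thesis's variants, such as hierarchical cross-validation, would then evaluate on training sets of varying sizes built from these folds; this is only the baseline mechanism):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]     # k roughly equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(10, 5))
print(len(splits))                            # 5 folds
print(sorted(splits[0][0] + splits[0][1]))    # every index appears exactly once
```

Each index lands in the test set of exactly one fold, so averaging a model's error over the k test folds uses every observation once for assessment.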
9

Tohme, Tony. "The Bayesian validation metric : a framework for probabilistic model calibration and validation." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126919.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 109-114).
In model development, model calibration and validation play complementary roles toward learning reliable models. In this thesis, we propose and develop the "Bayesian Validation Metric" (BVM) as a general model validation and testing tool. We show that the BVM can represent all the standard validation metrics (square error, reliability, probability of agreement, frequentist, area, probability density comparison, statistical hypothesis testing, and Bayesian model testing) as special cases, while improving, generalizing and further quantifying their uncertainties. In addition, the BVM assists users and analysts in designing and selecting their models by allowing them to specify their own validation conditions and requirements. Further, we expand the BVM framework to a general calibration and validation framework by inverting the validation mathematics into a method for generalized Bayesian regression and model learning. We perform Bayesian regression based on a user's definition of model-data agreement. This allows for model selection on any type of data distribution, unlike Bayesian and standard regression techniques, which can "fail" in some cases. We show that our tool is capable of representing and combining Bayesian regression, standard regression, and likelihood-based calibration techniques in a single framework while being able to generalize aspects of these methods. The tool also offers new insights into the interpretation of the predictive envelopes in Bayesian regression, standard regression, and likelihood-based methods, while giving the analyst more control over these envelopes.
by Tony Tohme.
S.M.
S.M. Massachusetts Institute of Technology, Computation for Design and Optimization Program
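One of the special cases the BVM generalizes, probability of agreement, can be sketched very simply: estimate the probability that a model output and a data value agree to within a user-chosen tolerance, from samples of each. This is only a hedged illustration of that one metric, not the BVM itself.

```python
import random

def probability_of_agreement(model_samples, data_samples, tol):
    """Fraction of (model, data) sample pairs agreeing to within tol."""
    hits = sum(1 for m in model_samples for d in data_samples
               if abs(m - d) <= tol)
    return hits / (len(model_samples) * len(data_samples))

# Illustrative samples: model and data drawn from the same distribution.
rng = random.Random(42)
model = [rng.gauss(1.0, 0.1) for _ in range(200)]
data  = [rng.gauss(1.0, 0.1) for _ in range(200)]
print(probability_of_agreement(model, data, tol=0.05))
```

The user's choice of `tol` encodes the validation requirement: a tighter tolerance demands closer model-data agreement before the model is declared valid.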
10

Wang, Nan. "Validating relationships between Performance Shaping Factors within a Science-Based HRA validation framework." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398900568.

11

Lalani, Nisar. "Validation of Internet Applications." Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-449.

Abstract:

Today, applications for the Internet (web sites and other applications) are typically verified using proprietary test solutions: an application is developed, and a second application is developed to test the first. The Test Competence Centre at Ericsson AB has expertise in testing telecom applications using the TTCN-2 and TTCN-3 notations. These notations have considerable potential and are used for testing in various areas, but so far little work has been done on using TTCN notations to test Internet applications. This thesis was a step towards determining the capabilities and possibilities of the TTCN notations in web testing.

The thesis presents the results of an investigation of three test technologies/tools (TTCN-2, TTCN-3, and PureTest, a proprietary free tool) to determine which is best suited for testing Internet applications and what drawbacks and benefits each technology has.

The background topics include a brief introduction to software testing and web testing, a short introduction to the TTCN language and its versions 2 and 3, a description of the tool set representing the chosen technologies, a conceptual view of how the tools work, a short description of the HTTP protocol, and a description of the HTTP adapter (test port).

Benefits and drawbacks were found in all three technologies, but at the moment a proprietary test solution (PureTest in this case) remains the best tool for testing Internet applications. It scores over the other two technologies (TTCN-2 and TTCN-3) for reasons such as flexibility, cost effectiveness, user friendliness, and short lead times for competence development. TTCN-3 is more of a programming language and is certainly more flexible than TTCN-2. TTCN-3 is still evolving and holds promise: some features vital for testing Internet applications are missing, but it is better than TTCN-2.

12

Gatla, Goutham. "Validation of ModelicaML models." Thesis, Linköpings universitet, Programvara och system, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-86364.

Abstract:
In the world of modeling, model validation plays a crucial role: a model editor is not complete without validation. ModelicaML is a modeling language extended from a subset of UML and SysML, developed under the OpenModelica project. It is defined to provide time-discrete and time-continuous models. The Papyrus model editor is extended to support ModelicaML through the ModelicaML Eclipse plug-in, which comes with a Modelica code generator. Previously, the ModelicaML plug-in had a validation prototype that provided only batch-mode validation, performed by the Modelica compiler after the code generation phase. Each time the user tried to validate the model, Modelica code was first generated and then validated. This type of validation misses certain validation rules due to the conversion from the ModelicaML model to Modelica code. The goal of this thesis is to implement model validation at the model editor level, with both batch- and live-mode validation. This is done by developing an Eclipse plug-in that performs the model validation, using the EMF Validation framework to implement the constraints and validation on ModelicaML models.
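The batch-mode versus live-mode distinction can be sketched generically (this is not the EMF Validation API, only an illustration of the two validation modes the thesis contrasts): constraints either checked on demand over the whole model, or immediately whenever an attribute changes.

```python
class ValidatedModel:
    """Toy model object with registered constraints and two validation modes."""
    def __init__(self, constraints, live=False):
        self._constraints = constraints  # list of (check, error message)
        self._live = live
        self._attrs = {}

    def set(self, name, value):
        self._attrs[name] = value
        if self._live:                   # live mode: validate on every edit
            errors = self.validate()
            if errors:
                raise ValueError(errors[0])

    def validate(self):                  # batch mode: validate on demand
        return [msg for check, msg in self._constraints
                if not check(self._attrs)]

constraints = [
    (lambda a: a.get("period", 1) > 0, "period must be positive"),
]
m = ValidatedModel(constraints)          # batch mode
m.set("period", -1)                      # accepted silently
print(m.validate())                      # ['period must be positive']
```

In live mode the same `set` call would raise immediately, which is the editor-level feedback the thesis adds on top of the old generate-then-compile workflow.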
13

Austli, Viktor, and Elin Hernborg. "Standardization of Bug Validation." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-34801.

Abstract:
The Internet is widely used all over the world in a number of contexts, which creates a demand for security that sustains the integrity of data. This thesis presents a service that can be used to identify various web vulnerabilities in order to regulate them and thereby prevent exploitation. As technical implementations increase, so does the number of security flaws, and organizations may have to increase their resource financing in an effort to counter them. But what if a tremendous amount of work could be automated, sparing organizations from spending enormous sums validating security flaws reported to them? What if these flaws could be validated more effectively? With this tool established, an individual no longer requires advanced technical knowledge to identify whether a web vulnerability is present; instead, an automated test performs the procedure for them.
14

Fry, Andrew J. "Aspects of measurement validation." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343358.

15

Bradley, Michael Ian. "Quantitative bioprocess containment validation." Thesis, University College London (University of London), 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395529.

16

Hernandez, Tony. "Test Procedure Validation Process." Digital Commons at Loyola Marymount University and Loyola Law School, 2011. https://digitalcommons.lmu.edu/etd/401.

Abstract:
In System Test, there is currently no common test procedure validation process across commercial, civil and government programs. Enterprise processes simply state that verification and validation must be performed; the method of satisfying that requirement is left to each individual organization and/or program. The process is kept generic in order to support an Enterprise of products ranging from military weapons to commercial satellites. A common test procedure validation process, platform- and program-independent, can be adopted to provide more guidance than the top-level Enterprise process. Allowing each System Test program to define its own validation process results in inconsistencies between programs, and important lessons learned that could improve the validation process of a particular program are not captured. Costly rework is incurred without standardization. The root causes of test procedure rework fall into three common categories, as shown in Table 1. Program X is an example of a program that does not follow a standard validation process; its process is therefore incomplete and results in significant rework. This document limits its scope to standardizing Program X's process to the validation process currently used by the majority of programs in System Test. The goal is to develop a standard procedure and in turn reduce the number of validation process issues (rework) currently plaguing Program X. During this process, techniques from Lean and Quality courses are utilized.
17

Valový, Marcel. "Bean Validation in JAXB." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-192401.

Abstract:
Currently, there is no solution providing automatic validation of objects when addressing the Object-to-XML impedance mismatch. The author chose the Java SE specification JAXB for Object-to-XML mapping and the Java EE specification Bean Validation for validation of JavaBean objects. This thesis focuses on the interconnection of the two specifications and the creation of a new specification, Bean Validation in JAXB, providing automatic validation at the object level during marshalling and unmarshalling. The specification also provides means for mapping XML restrictions and facets to Bean Validation constraints. The author presents the design of the Bean Validation in JAXB facility specification, its reference implementation written by the author, and a user's and programmer's guide.
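The idea of validating during unmarshalling, rather than as a separate pass, can be sketched outside Java (the actual work targets JAXB and Bean Validation; the constraints below are hypothetical stand-ins for annotations such as @Min/@Max or XML facets like minLength):

```python
import xml.etree.ElementTree as ET

# Hypothetical constraint table, playing the role of Bean Validation
# annotations / XML facets attached to the mapped fields.
CONSTRAINTS = {
    "age":  lambda v: 0 <= int(v) <= 150,   # analogue of @Min(0)/@Max(150)
    "name": lambda v: len(v) > 0,           # analogue of a minLength facet
}

def unmarshal(xml_text):
    """Build an object from XML, checking constraints as fields are read."""
    root = ET.fromstring(xml_text)
    obj = {}
    for child in root:
        check = CONSTRAINTS.get(child.tag)
        if check and not check(child.text or ""):
            raise ValueError(f"constraint violated on <{child.tag}>")
        obj[child.tag] = child.text
    return obj

print(unmarshal("<person><name>Ada</name><age>36</age></person>"))
```

Invalid input (say, an age of 200) fails inside `unmarshal` itself, so no invalid object is ever constructed — the property the thesis's automatic validation provides for JAXB-mapped JavaBeans.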
18

Cominetti, Matteo. "Project validation using BIM." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5413/.

19

SALES, T. P. "Ontology Validation for Managers." Universidade Federal do Espírito Santo, 2014. http://repositorio.ufes.br/handle/10/4273.

Abstract:
Ontology-driven conceptual modeling focuses on accurately representing a domain of interest, instead of making information fit an arbitrary set of constructs. It may be used for different purposes, such as achieving semantic interoperability (Nardi, Falbo and Almeida, 2013), developing knowledge representation models (Guizzardi and Zamborlini, 2012) and evaluating languages (Santos, Almeida and Guizzardi, 2010). Regardless of its final application, a model must be accurately defined in order for it to be a successful solution. This new branch of conceptual modeling improves traditional techniques by taking into consideration ontological properties, such as rigidity, identity and dependence, which are derived from a foundational ontology. The increasing interest in more expressive languages for conceptual modeling is shown by OMG's request for language proposals for the Semantic Information Model Federation (SIMF) (OMG, 2011). OntoUML (Guizzardi, 2005) is an example of a language designed for that purpose. Its metamodel (Carraretto, 2010) is designed to comply with the Unified Foundational Ontology (UFO). It focuses on structural aspects of individuals and universals. Grounded in human cognition and linguistics, it aims to provide the most basic categories in which humans understand and classify the things around them. In (Guizzardi, 2010), Guizzardi quotes Dijkstra's famous lecture on the humble programmer and draws an analogy entitled "the humble ontologist". He argues that the task of ontology-driven conceptual modeling is extremely complex and that modelers should therefore surround themselves with as many tools as possible to aid in the development of the ontology. These complexities arise from different sources. Some come from the foundational ontology itself: its modal nature, which requires modelers to deal with possibilities, and the many different restrictions of each ontological category.
But they also come from the need to accurately define instance-level constraints, which require additional rules outside the language's graphical notation. To help modelers develop high-quality OntoUML models, a number of tools have been proposed to aid in different phases of conceptual modeling: from the construction of the models themselves using design-pattern questions (Guizzardi et al., 2011), to automatic syntax verification (Benevides, 2010) and model validation through simulation (Benevides et al., 2010). The importance of domain specifications that accurately capture the intended conceptualization has been recognized by both the traditional conceptual modeling community (Moody et al., 2003) and the ontology community (Vrandečić, 2009). In this research we want to improve on the initiative of (Benevides et al., 2010), but focus exclusively on the validation of ontology-driven conceptual models, not on verification. With the complexity of the modeling activity in mind, we want to help modelers systematically produce high-quality ontologies, improving the precision and coverage (Gangemi et al., 2005) of the models. We intend to make the simulation-based approach available to users who are not experts in the formal method, relieving them of the need to learn yet another language solely for the purpose of validating their models.
20

Dyer, Matthias. "Distributed embedded systems : validation strategies /." Aachen : Shaker Verlag, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17189.

21

Trullols, Soler Esther. "Validation of qualitative analytical methods." Doctoral thesis, Universitat Rovira i Virgili, 2006. http://hdl.handle.net/10803/9004.

Abstract:
The chemical information about the composition of a sample can be of different nature: which species are in the sample, their concentration or if they are structurally related, etc.
In order to fit any of these requirements, either a qualitative or a quantitative analytical method may be used. If the aim is to identify species, a qualitative method will suit the problem at hand. These types of methods have been recently studied and nowadays are being increasingly used in several fields of analysis. For example, it is common to use qualitative methods as far as food analysis is concerned.
On the contrary, if the aim is to quantify one or more analytes of a sample, a quantitative method will be very useful.

This thesis has focused on qualitative analytical methods because they provide several advantages and they are being increasingly used. These types of methods can screen samples according to the presence or absence of certain analytes with regard to a pre-set level of concentration. That is to say, they are used as a step before the quantitative method and results in lower analysis time and costs because analyte quantification is not required in all situations.
There are some particular analysis fields where qualitative methods are used as routine methods. Therefore, analyte quantification is not always necessary.

Moreover, it is also important to provide reliable results, that is to say, to assure that the method performs with reliability. Any analytical method must have its requirements and its analytical properties previously defined, and their values must be proven. To confirm that the requirements and the analytical properties are the right ones and to confirm that they have the right values is to validate the analytical method. This is a necessary condition to use an analytical method. In this sense, the reliability of the results given to the clients or to the users is assured. Moreover, the ISO Standard 17025 strongly encourages method validation.
Method validation has focused on quantitative methods. Therefore and as a result, there are more standards or guidelines addressed to quantitative methods validation. These guidelines are commonly used by several communities of practitioners. However, there is no generally accepted standard or validation procedure addressed to qualitative methods. In this sense, this thesis aims to contribute with the development of several validation procedures.

The starting point is an overview, resulting from a bibliographic search, of qualitative method validation. This overview covers the existing criteria for classifying qualitative methods as well as the institutions committed to the validation of these methods. A classification of these methods is then suggested, and the performance parameters most relevant to the validation process are defined.
The subsequent practical applications describe the intrinsic characteristics of the corresponding qualitative analytical method. The performance parameters that best fit the requirements and characteristics of the method are then defined and, finally, a validation strategy is proposed. Bear in mind that the strategy takes the intrinsic characteristics of the analytical method into account.

The overview, covering relevant aspects such as the classification of qualitative methods, definitions of performance parameters and the institutions committed to qualitative method validation, among others, is presented as two publications included in the thesis. The three practical applications are presented as three accepted papers.
APA, Harvard, Vancouver, ISO, and other styles
22

Rolland, Jean-François. "Développement et validation d'architectures dynamiques." Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00367994.

Full text
Abstract:
This thesis studies the development and validation of systems in an asynchronous real-time context. The AADL language was chosen for its features inherited from avionics, a domain close to the space sector, and for the precision with which its execution model is described. The work divides into two main axes: on the one hand, the use of AADL in the development of flight software is studied; on the other hand, a reduced version of AADL is presented, together with a formal definition of its execution model in the TLA+ language. The objective of the first part is to consider the use of AADL within an existing development process in the space domain. In this part, we sought to identify recurring design patterns in flight software, and we study how the various elements of this development process can be expressed in AADL. The second part defines a mini AADL sufficient to express most of the concepts of parallelism, communication and synchronisation that characterise AADL. The formalisation is necessary in order to verify dynamic properties: the formal definition of the execution model describes the expected behaviour of AADL models. Once this model is defined, a model checker can be used to animate an AADL model or to verify dynamic properties. This study was also carried out in the context of the standardisation of the AADL language.
APA, Harvard, Vancouver, ISO, and other styles
23

Davis, Robert Andrew. "Model validation for robust control." Thesis, University of Cambridge, 1995. https://www.repository.cam.ac.uk/handle/1810/251990.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Alghadhi, Mostafa. "Validation of vehicle fuel consumption." Thesis, University of Huddersfield, 2015. http://eprints.hud.ac.uk/id/eprint/24697/.

Full text
Abstract:
The state of environmental degradation demands that factors contributing to it be looked into. A chief cause of environmental degradation is exhaust emissions from vehicles, especially passenger cars. This thesis attempts to quantify the relationship between vehicle fuel emissions and the various factors that contribute to them, such as speed, acceleration and throttle position. The central contention was to come up with an empirical correlation that could be used to reliably tabulate the fuel consumption of a passenger vehicle. The derivation of an empirical correlation between vehicle fuel consumption and the factors contributing to it would allow an optimisation of vehicle fuel consumption to reduce greenhouse gas emissions. Using a comparison of different driving cycles, the New European Driving Cycle (NEDC) was taken as the basic framework for testing. The research was carried out in two different phases, i.e. laboratory testing and real-life drive tests. Laboratory testing was utilised to identify the major parameters that affected vehicle fuel consumption. This was then used to derive an empirical correlation that was subsequently tested in the field to determine its validity. The proposed empirical correlation was tested against real-life driving conditions, which proved its reliability. A number of different driving conditions were simulated, including urban driving, extra-urban driving and highway driving. The varied testing scheme ensured that the empirical correlation was valid for various driving situations at the same time. The derivation of such an empirical correlation through this work removed one of the chief defects of different driving cycles, which was the lack of standardisation for testing. With the application of this tested model it would be easier and more convenient to control pollution considerably through additional research in the future.
APA, Harvard, Vancouver, ISO, and other styles
25

Graham, Robert S. "The need for social validation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq24380.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Hyjek, James. "Automation of object behavior validation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ26979.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Wibling, Oskar. "Ad hoc routing protocol validation." Licentiate thesis, Uppsala : Department of Information Technology, Uppsala university, 2005. http://www.it.uu.se/research/reports/lic/2005-004/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Mohan, Babu Diana. "Informative SNP Selection and Validation." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_theses/48.

Full text
Abstract:
The search for genetic regions associated with complex diseases, such as cancer or Alzheimer's disease, is an important challenge that may lead to better diagnosis and treatment. The existence of millions of DNA variations, primarily single nucleotide polymorphisms (SNPs), may allow the fine dissection of such associations. However, studies seeking disease association are limited by the cost of genotyping SNPs. Therefore, it is essential to find a small subset of informative SNPs (tag SNPs) that may be used as good representatives of the rest of the SNPs. Several informative SNP selection methods have been developed. Our experiments compare favorably with all the prediction and statistical methods by selecting the fewest informative SNPs. We also proposed algorithms for faster prediction, which yielded an acceptable trade-off. We validated our results using the k-fold test and its many variations.
APA, Harvard, Vancouver, ISO, and other styles
29

Fairchild, Carol J. "Prescription refill compliance validation study." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=19465.

Full text
Abstract:
Numerous studies have demonstrated that noncompliance causes increased morbidity and mortality from a wide variety of illnesses. Solutions to the problem require systematic evaluation of the determinants of noncompliance and identification of subgroups of patients who are vulnerable so appropriate interventions can be designed. Population-based research methods are needed to evaluate this problem.
APA, Harvard, Vancouver, ISO, and other styles
30

Ibarguengoytia, P. H. "Any time probabilistic sensor validation." Thesis, University of Salford, 1997. http://usir.salford.ac.uk/2061/.

Full text
Abstract:
Many applications of computing, such as those in medicine and the control of manufacturing and power plants, utilize sensors to obtain information. Unfortunately, sensors are prone to failures. Even with the most sophisticated instruments and control systems, a decision based on faulty data could lead to disaster. This thesis develops a new approach to sensor validation. The thesis proposes a layered approach to the use of sensor information where the lowest layer validates sensors and provides information to the higher layers that model the process. The approach begins with a Bayesian network that defines the dependencies between the sensors in the process. Probabilistic propagation is used to estimate the value of a sensor based on its related sensors. If this estimated value differs from the actual value, then a potential fault is detected. The fault is only potential since it may be that the estimated value was based on a faulty reading. This process can be repeated for all the sensors resulting in a set of potentially faulty sensors. The real faults are isolated from the apparent ones by using a lemma whose proof is based on the properties of a Markov blanket. In order to perform in a real time environment, an any time version of the algorithm has been developed. That is, the quality of the answer returned by the algorithm improves continuously with time. The approach is compared and contrasted with other methods of sensor validation and an empirical evaluation of the sensor validation algorithm is carried out. The empirical evaluation presents the results obtained when the algorithm is applied to the validation of temperature sensors in a gas turbine of a power plant.
APA, Harvard, Vancouver, ISO, and other styles
31

Whurr, Renata. "The validation of aphasia tests." Thesis, Birkbeck (University of London), 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247714.

Full text
Abstract:
A brief history is provided of aphasia testing. The test batteries published since 1947 are described under the following formal headings: theory and classification, organisation, quantification, charting and interpretation, and standardisation and factor analyses. A component analysis of the forty aphasia test batteries is conducted. The most frequently used subtests are identified and divided into subgroups: tests of auditory comprehension, oral language, naming, reading comprehension, speech production, written language and non-language tasks. The fifty most frequently used subtests are then applied to a group of normal elderly people to establish baseline norms. Then 108 diagnosed aphasic patients are subjected to these same tests, and the results analysed. The patient data, the normal data and the tests themselves are examined. Methods of analysis include factor analysis, discriminant analysis and cluster analysis. Analysis of variance is applied to the elderly normal subjects and the aphasic patients, as well as to the aphasic patients on repeated measures. Conclusions are then drawn about how effective the most frequently used subtests in aphasia batteries are in identifying the degree and type of language disturbance in aphasic patients.
APA, Harvard, Vancouver, ISO, and other styles
32

Batarfi, Omar Abdullah. "Certificate validation in untrusted domains." Thesis, University of Newcastle Upon Tyne, 2007. http://hdl.handle.net/10443/1983.

Full text
Abstract:
Authentication is a vital part of establishing secure, online transactions, and Public Key Infrastructure (PKI) plays a crucial role in this process for a relying party. A PKI certificate provides proof of identity for a subject, and it inherits its trustworthiness from the fact that its issuer is a known (trusted) Certification Authority (CA) that vouches for the binding between a public key and a subject's identity. Certificate Policies (CPs) are the regulations recognized by PKI participants, and they are used as a basis for the evaluation of the trust embodied in PKI certificates. However, CPs are written in natural language, which can lead to ambiguities, spelling errors, and a lack of consistency when describing the policies. This makes it difficult to compare different CPs. This thesis offers a solution to the problems that arise when there is no trusted CA to vouch for the trust embodied in a certificate. With the worldwide increase in the number of online transactions over the Internet, it is highly desirable to find a method for authenticating subjects in untrusted domains. The process of formalisation for CPs described in this thesis allows their semantics to be captured. The formalisation relies on the XML language for describing the structure of the CP, and the formalisation process passes through three stages, with the outcome of the last stage being 27 applicable criteria. These criteria become a tool assisting a relying party to decide the level of trust that he/she can place in a subject certificate. The criteria are applied to the CP of the issuer of the subject certificate. To test their validity, the criteria developed have been examined against the UNCITRAL Model Law on Electronic Signatures, and they are able to handle the articles of the UNCITRAL law. Finally, a case study is conducted in order to show the applicability of the criteria. Real CPs have been used to prove their applicability and convergence.
This shows that the criteria can adequately handle the correspondence activities defined in real CPs.
APA, Harvard, Vancouver, ISO, and other styles
33

Jarratt, Jason Aldrin. "Validation of chemical speciation models." Thesis, Manchester Metropolitan University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Render, Neil. "The validation of pharmaceutical buildings." Thesis, Northumbria University, 2006. http://nrl.northumbria.ac.uk/288/.

Full text
Abstract:
The construction, commissioning and hand-over of pharmaceutical manufacturing buildings have become increasingly controlled by the requirements of regulatory agencies. Legislation requires that the process of validation is undertaken to establish that the facility is constructed in line with the principles of pharmaceutical Good Manufacturing Practice (GMP). The validation process acts to ensure that the construction and building services systems are designed, installed and operate as intended and do not affect the quality of the manufactured product. A central objective of this thesis is to examine the sequential validation process and the influencing factors that contribute to the facility attaining agency approval. A comprehensive review of the available literature indicates that projects regularly fail to meet their regulatory objectives due to the building provider's and client's differing understanding and views of the validation process and of GMP. From this literature a validation model is derived, which proposes that the design, installation and operation stages of the validation activity are time-series-dependent sub-processes controlled through sensing, feedback and comparison. The research was largely qualitative, case-study based and used an interpretivist approach to analysis, which relied on participant observation and grounded theory techniques. Additional, external validation of the model was sought by collecting and analysing empirical data from an industry questionnaire. The results of the study demonstrate that significant deviations between the model and the data exist, and that measures to construct compliant pharmaceutical buildings are often underdeveloped and result in unsuccessful project outcomes. The criteria by which the success of any construction project is judged are normally time, cost and quality.
Time and cost are readily measurable, but the meaning of quality, in relation to the validation activity, can be more elusive and this is at the root of the problem of successful validation of pharmaceutical buildings.
APA, Harvard, Vancouver, ISO, and other styles
35

Morrell, David. "Validation of identified turbogenerator models." Thesis, Queen's University Belfast, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.254217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Pantziarka, P. "Machine learning and data validation." Thesis, University of Surrey, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, Tsorng-Ming. "Design validation of digital systems." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lievin-Lieven, Nicholas Andrew John. "Validation of structural dynamic models." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Xu, Cheng. "Scalable Validation of Data Streams." Doctoral thesis, Uppsala universitet, Avdelningen för datalogi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-291530.

Full text
Abstract:
In manufacturing industries, sensors are often installed on industrial equipment, generating high volumes of data in real time. To shorten machine downtime and reduce maintenance costs, it is critical to analyze such streams efficiently in order to detect abnormal behavior of equipment. For validating data streams to detect anomalies, a data stream management system called SVALI was developed. Based on requirements from the application domain, different stream window semantics are explored and an extensible set of window-forming functions is implemented, where dynamic registration of window aggregations allows incremental evaluation of aggregate functions over windows. To facilitate stream validation at a high level, the system provides two second-order system validation functions, model-and-validate and learn-and-validate. Model-and-validate allows the user to define mathematical models based on physical properties of the monitored equipment, while learn-and-validate builds statistical models by sampling the stream in real time as it flows. To validate geographically distributed equipment with short response time, SVALI is a distributed system where many SVALI instances can be started and run in parallel on board the equipment. Central analyses are made at a monitoring center, where streams of detected anomalies are combined and analyzed on a cluster computer. SVALI is an extensible system where functions can be implemented using external libraries written in C, Java, and Python without any modifications of the original code. The system and the developed functionality have been applied in several applications, both industrial and for sports analytics.
APA, Harvard, Vancouver, ISO, and other styles
40

Xiong, Wei. "Verification and validation of JavaScript." Thesis, Durham University, 2013. http://etheses.dur.ac.uk/7326/.

Full text
Abstract:
JavaScript is a prototype-based, dynamically typed language with scope chains and higher-order functions. Third party web applications embedded in web pages rely on JavaScript to run inside every browser. Because of its dynamic nature, a JavaScript program is easily exploited by malicious manipulations and safety breach attacks. Therefore, it is highly desirable when developing a JavaScript application to be able to verify that it meets its expected specification and that it is safe. One of the challenges in achieving this objective is that it is hard to statically keep track of a heap-manipulating JavaScript program due to the mutability of data structures. This thesis focuses on developing a verification framework for both functional correctness and safety of JavaScript programs that involve heap-based data structures. Two automated inference-based verification frameworks are constructed based upon a variant of separation logic. The first framework defines a suitable subset of JavaScript, together with a set of operational semantics rules, a specification language and a set of inference rules. Furthermore, an axiomatic framework is presented to discover the pre/post-conditions of a JavaScript program as a Hoare-style specification {Pre} prog {Post}, where the program prog contains the language statements. The problem of verifying the program can be reduced to the problem of proving that the execution of the statements meets the derived specification. The second framework increases the expressiveness of the subset language to include the this construct, which can cause safety issues in JavaScript programs. It revises the operational rules and inference rules to handle the newly added feature. Furthermore, a safety verification algorithm is defined. Both verification frameworks have been proved sound, and the results obtained from evaluations validate the feasibility and precision of the proposed approaches.
The outcomes of this thesis confirm that it is possible to analyse heap-manipulating JavaScript programs automatically and precisely to discover unsafe programs.
APA, Harvard, Vancouver, ISO, and other styles
41

Ruddle, Alastair Richmond. "Validation of automotive electromagnetic models." Thesis, Loughborough University, 2002. https://dspace.lboro.ac.uk/2134/35592.

Full text
Abstract:
The problems of modelling the electromagnetic characteristics of vehicles and the experimental validation of such models are considered. The validity of the measurement methods that are applied in model validation exercises is of particular concern. A philosophy for approaching the validation of automotive electromagnetic models of realistic complexity is presented. Mathematical modelling of the key elements of the measurement processes is proposed as the only reliable mechanism for addressing these issues. Areas considered include: basic elements of numerical models; geometrical fidelity requirements for model elements; calibration and use of experimental transducers; the inclusion of cables in electromagnetic models; essential content for vehicle models. A number of practical measurement processes are also investigated using numerical methods, leading to recommendations for improved practices in: calibration of transducers for current measurement at high frequencies; measurement of radiated emissions from vehicles; identification of range requirements for simple methods of determining antenna gain and related characteristics in EMC test facilities. The impact of such measures on the success of model validation studies for automotive applications is demonstrated. It is concluded that experimental results are no less in need of validation than the numerical results that are, more conventionally, judged against them.
APA, Harvard, Vancouver, ISO, and other styles
42

Caprioli, Peter. "AMQP Standard Validation and Testing." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277850.

Full text
Abstract:
As large-scale applications (such as the Internet of Things) become more common, the need to scale applications over multiple physical servers increases. One way of doing so is by utilizing middleware, a technique that breaks down a larger application into specific parts that can each run independently. Different middleware solutions use different protocols and models. One such solution, AMQP (the Advanced Message Queueing Protocol), has become one of the most used middleware protocols as of late, and multiple open-source implementations of both the server and client side exist. In this thesis, a security and compatibility analysis of the wire-level protocol is performed against five popular AMQP libraries. Compatibility with the official AMQP specification and variances between different implementations are investigated. Multiple differences between libraries and the formal AMQP specification were found. Many of these differences are the same in all of the tested libraries, suggesting that they were developed empirically rather than by following the specification. While these differences were found to be subtle and generally do not pose any critical security, safety or stability risks, it was also shown that in some circumstances it is possible to use these differences to perform a data injection attack, allowing an adversary to arbitrarily modify some aspects of the protocol. The protocol testing is performed using a software tester, AMQPTester. The tester is released alongside the thesis and allows for easy decoding/encoding of the protocol. Until the release of this thesis, no other dedicated AMQP testing tools existed. As such, future research will be made significantly easier.
APA, Harvard, Vancouver, ISO, and other styles
43

Corwin, Paul S. "Incremental Validation of Formal Specifications." DigitalCommons@CalPoly, 2009. https://digitalcommons.calpoly.edu/theses/71.

Full text
Abstract:
This thesis presents a tool for the mechanical validation of formal software specifications. The tool is based on a novel approach to incremental validation. In this approach, small-scale aspects of a specification are validated, as part of the stepwise refinement of a formal model. The incremental validation technique can be considered a form of "lightweight" model checking. This is in contrast to a "heavyweight" approach, wherein an entire large-scale model is validated en masse. The validation tool is part of a formal modeling and specification language (FMSL), used in software engineering instruction. A lightweight, incremental approach to validation is beneficial in this context. Such an approach can be used to elucidate specification concepts in a step-by-step manner. A heavyweight approach to model checking is more difficult to use in this way. The FMSL model checker has itself been validated by evaluating portions of a medium-scale specification example. The example has been used in software engineering courses for a number of years, but has heretofore been validated only by human inspection. Evidence for the utility of the validation tool is provided by its performance during the example validation. In particular, use of the tool led to the discovery of a specification flaw that had gone undiscovered by manual validation alone.
APA, Harvard, Vancouver, ISO, and other styles
44

Lindberg, Mimmi. "Forensic Validation of 3D models." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171159.

Full text
Abstract:
3D reconstruction can be used in forensic science to reconstruct crime scenes and objects so that measurements and further information can be acquired off-site. It is desirable to use image based reconstruction methods but there is currently no procedure available for determining the uncertainty of such reconstructions. In this thesis the uncertainty of Structure from Motion is investigated. This is done by exploring the literature available on the subject and compiling the relevant information in a literary summary. Also, Monte Carlo simulations are conducted to study how the feature position uncertainty affects the uncertainty of the parameters estimated by bundle adjustment. The experimental results show that poses of cameras that contain few image correspondences are estimated with higher uncertainty. The poses of such cameras are estimated with lesser uncertainty if they have feature correspondences in cameras that contain a higher number of projections.
APA, Harvard, Vancouver, ISO, and other styles
45

Badayos, Noah Garcia. "Machine Learning-Based Parameter Validation." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/47675.

Full text
Abstract:
As power system grids continue to grow in order to support increasing energy demand, the system's behavior evolves accordingly, continuing to challenge designs for maintaining security. It has become apparent in the past few years that accurate simulations are as critical as discovering vulnerabilities in the power network. This study explores a classification method for validating simulation models, using disturbance measurements from phasor measurement units (PMUs). The technique employs the Random Forest learning algorithm to find a correlation between specific model parameter changes and the variations in the dynamic response. The measurements used for building and evaluating the classifiers were characterized using Prony decomposition. The generator model, consisting of an exciter, governor, and its standard parameters, has been validated using short circuit faults. Single-error classifiers were tested first, comparing the accuracies of classifiers built using positive, negative, and zero sequence measurements. The negative sequence measurements consistently produced the best classifiers, with the majority of the parameter classes attaining F-measure accuracies greater than 90%. A multiple-parameter error technique for validation has also been developed and tested on standard generator parameters. Only a few target parameter classes had good accuracies in the presence of multiple parameter errors, but the results were enough to permit a sequential process of validation, in which eliminating a highly detectable error improves the accuracy for suspect errors that depend on its removal, continuing the procedure until all corrections are covered.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
46

Lu, Ching-sung. "Automated validation of communication protocols /." The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu148726702499786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

McCaul, Courtney Ann. "Dot Counting Test cross-validation." Thesis, Alliant International University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10249120.

Full text
Abstract:

The purpose of this study was to determine the reliability and validity of the Dot Counting Test as a measure of feigned cognitive performance. Archival neuropsychological test data from a “real world” sample of 147 credible and 328 non-credible patients were compared. The Dot Counting Test E-score cutoff of ≥ 17 continued to show excellent specificity (93%). However, sensitivity dropped from approximately 74% documented in 2002 to 51% in the current sample. When the cutoff was lowered to ≥ 15, adequate specificity was maintained (90%) and sensitivity rose to 61%. However, a third of credible patients with borderline IQ failed the test using this E-score cutoff, indicating that the test should be used cautiously with individuals who likely have borderline intelligence.
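The cutoff trade-off described above can be sketched in a few lines: lowering the E-score cutoff can only raise sensitivity and lower specificity. The score distributions below are synthetic and purely illustrative (not the study's data), and `sens_spec` is a hypothetical helper name.

```python
import numpy as np

def sens_spec(e_noncredible, e_credible, cutoff):
    """Sensitivity/specificity of an E-score cutoff: a score at or above
    the cutoff counts as a failed (feigned) performance."""
    sensitivity = np.mean(e_noncredible >= cutoff)  # non-credible correctly flagged
    specificity = np.mean(e_credible < cutoff)      # credible correctly passed
    return sensitivity, specificity

rng = np.random.default_rng(1)
credible = rng.normal(10, 3, size=500)      # illustrative credible E-scores
noncredible = rng.normal(17, 4, size=500)   # illustrative non-credible E-scores

sens17, spec17 = sens_spec(noncredible, credible, 17)
sens15, spec15 = sens_spec(noncredible, credible, 15)
```

With overlapping distributions, any cutoff choice buys sensitivity at the price of specificity, which is exactly the ≥ 17 versus ≥ 15 comparison the study reports.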

APA, Harvard, Vancouver, ISO, and other styles
48

Foures, Damien. "Validation de modèles de simulation." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30071/document.

Full text
Abstract:
This work focuses on the validity of simulation models in the development of complex and critical systems. An analysis of the systems engineering approach, and especially of its modeling and simulation (M&S) aspect, showed that it is impossible to establish the validity of a simulation model directly. Many factors cause this, such as poorly formulated simulation objectives, implementation inconsistencies, limits of the simulation engine, etc. Since the validity of a simulation model is defined with respect to a simulation objective, it seemed important to provide a global M&S approach combining a set of tools able to detect inconsistencies between the simulation objectives and the models of the system of interest. These tools, intended for the simulation user, improve the level of confidence in the simulation model and thus in the simulation results. Our study is based on the theory of M&S as proposed by B.P. Zeigler. Using the concept of the experimental frame introduced there, we propose a methodological framework for expressing simulation objectives clearly. This framework lets us study the applicability and accommodation problems of M&S, which we group under the issue of compatibility. Our first objective was therefore to propose an approach able to measure the inconsistency between the simulation objectives and the model of the system. Drawing on formal methods and automata theory, we establish a set of metrics that measure the degree of dynamic compatibility between an experimental frame and the model of the system of interest. To do so, we first study dynamic compatibility between interface automata using tree decomposition.
After showing the limits of this approach, we study compatibility between DEVS models using the generation of reachability graphs. This formal study of compatibility lets us propose a set of good properties of the simulation. Finally, we propose a methodology that guides the simulation user in developing metrics to measure this level of compatibility. Building on model-driven engineering concepts, we propose a simulation-specific language that helps users assess the validity of simulation models.
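The compatibility idea sketched in the abstract — is every stimulus the experimental frame can generate acceptable to the model in every jointly reachable state? — can be illustrated with a breadth-first exploration of reachable state pairs. This is a deliberately simplified stand-in (plain transition tables, a hypothetical `compatible` function), not the thesis's interface-automata or DEVS reachability-graph machinery.

```python
from collections import deque

def compatible(model, frame, m0, f0):
    """Explore jointly reachable (model, frame) state pairs; the pair is
    compatible if, in every reachable pair, the model accepts every event
    the experimental frame can produce."""
    seen, queue = set(), deque([(m0, f0)])
    while queue:
        m, f = queue.popleft()
        if (m, f) in seen:
            continue
        seen.add((m, f))
        for event, f_next in frame[f].items():
            if event not in model[m]:
                return False  # frame drives the model outside its input set
            queue.append((model[m][event], f_next))
    return True

# Transition tables: state -> {event: next_state}
model = {"idle": {"start": "run"}, "run": {"stop": "idle"}}
frame_ok = {"a": {"start": "b"}, "b": {"stop": "a"}}    # stays within model's inputs
frame_bad = {"a": {"start": "b"}, "b": {"reset": "a"}}  # emits an unsupported event
```

The exploration is essentially a reachability-graph construction over the product of the two automata, which is why generating reachability graphs is the natural tool for deciding this kind of dynamic compatibility.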
APA, Harvard, Vancouver, ISO, and other styles
49

Johansson, Fredrik, and Oskar Dahl. "Autonomous Validation through Visual Inspection." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-34366.

Full text
Abstract:
The industrial testing phase for graphical user interfaces and screen behaviour still involves manual tests with human interaction. This type of testing is particularly difficult and time-consuming to perform manually because of the time-sensitive messages and information used within these interfaces. This thesis addresses the issue by introducing an approach that automates the process using high-grade machine vision cameras and existing algorithm implementations from OpenCV 3.2.0. Because the expected graphical representation is known in advance, the actual outcome can be compared against this expectation by applying image-processing algorithms. The approach achieves an Equal Error Rate of 6% while maintaining satisfactory time performance relative to the timeframe requirement of these time-sensitive messages. Accuracy and time performance are profoundly affected by the hardware equipment, partly due to the immense amount of image processing involved.
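The core comparison step — validating a captured frame against a known expected representation — can be sketched with a pixel-tolerance check. The thesis uses OpenCV 3.2.0 implementations; this numpy-only stand-in, with the hypothetical names `matches_expected`, `pixel_tol`, and `fail_fraction`, only illustrates the idea.

```python
import numpy as np

def matches_expected(actual, expected, pixel_tol=0.02, fail_fraction=0.01):
    """Pass the screen if almost all pixels lie within an intensity
    tolerance of the pre-recorded reference frame."""
    diff = np.abs(actual.astype(float) - expected.astype(float)) / 255.0
    return np.mean(diff > pixel_tol) < fail_fraction

rng = np.random.default_rng(2)
reference = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
# Slight sensor noise should still pass; a corrupted region should fail
noisy = np.clip(reference.astype(int) + rng.integers(-2, 3, size=(64, 64)),
                0, 255).astype(np.uint8)
corrupted = reference.copy()
corrupted[:32, :] = 0
```

The two tolerances encode the accept/reject trade-off that shows up in the abstract as an Equal Error Rate: tightening them rejects more genuine captures, loosening them accepts more faulty ones.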
APA, Harvard, Vancouver, ISO, and other styles
50

Newlin, Matthew Philip Doyle John Comstock. "Model validation, control, and computation /." Diss., Pasadena, Calif. : California Institute of Technology, 1996. http://resolver.caltech.edu/CaltechETD:etd-01032008-090000.

Full text
APA, Harvard, Vancouver, ISO, and other styles