Doctoral dissertations on the topic "Validation techniques"


Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the 50 best doctoral dissertations for your research on the topic "Validation techniques".

Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the work's metadata.

Browse doctoral dissertations from a wide range of disciplines and compile the corresponding bibliographies.

1

Jonoud, Sima. "Validation of steady-state upscaling techniques". Thesis, Imperial College London, 2006. http://hdl.handle.net/10044/1/7812.

2

PEDDIREDDY, SANTOSH KUMAR REDDY, and SRI RAM NIDAMANURI. "Requirements Validation Techniques : Factors Influencing them". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21152.

Abstract:
Context: Requirements validation is the phase of the software development life cycle in which requirements are validated to remove inconsistency and incompleteness. Stakeholders are involved in the validation process to make the requirements suitable for the product, and requirements validation techniques are used to perform this validation. Selecting a validation technique in light of the factors that need to be considered while validating requirements makes the validation process better. This thesis examines the factors that influence the selection of requirements validation techniques and analyses the most critical ones. Objectives: Our aim is to find the factors influencing the selection of requirements validation techniques and to evaluate the critical factors among them. To achieve this goal we follow these objectives: to obtain a list of validation techniques currently used by organizations, and to enumerate the factors that influence the choice of requirements validation technique. Methods: To identify the factors influencing the selection of requirements validation techniques and to evaluate the critical factors, we conducted both a literature review and a survey. Results: From the literature review, two articles were taken as our starter set and, through snowball sampling, a total of fifty-four articles were found relevant to the study. From the results of the literature review we formulated a questionnaire and conducted a survey, from which a total of thirty-three responses were gathered. The survey yielded the factors influencing the choice of requirements validation techniques. Conclusions: The factors obtained from the survey present a mixed view, as each factor has its own criticality in different aspects of validation, so selecting a single critical factor is not possible. We therefore shortlisted the critical factors that have the greatest influence on the selection of requirements validation techniques.
3

Coleby, Dawn Elizabeth. "Assessment of techniques for electromagnetic modelling validation". Thesis, De Montfort University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406028.

4

Shen, Jian. "Effective techniques for processor validation and test /". Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

5

Ewanchyna, Theodore J. "Techniques for specification and validation of complex protocols". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq26014.pdf.

6

Kufel, Jedrzej. "Techniques and validation for protection of embedded processors". Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/381185/.

Abstract:
Advances in technology scaling and miniaturization of on-chip structures have caused an increasing complexity of modern devices. Due to immense time-to-market pressures, the reusability of intellectual property (IP) sub-systems has become a necessity. With the resulting high risks involved with such a methodology, securing IP has become a major concern. Despite a number of proposed IP protection (IPP) techniques being available, securing an IP in the register transfer level (RTL) is not a trivial task, with many of the techniques presenting a number of shortfalls or design limitations. The most prominent and the least invasive solution is the integration of a digital watermark into an existing IP. In this thesis new techniques are proposed to address the implementation difficulties in constrained embedded IP processor cores. This thesis establishes the parameters of sequences used for digital watermarking and the tradeoffs between the hardware implementation cost, detection performance and robustness against IP tampering. A new parametric approach is proposed which can be implemented with any watermarking sequence. MATLAB simulations and experimental results of two fabricated silicon ASICs with a watermark circuit embedded in an ARM Cortex-M0 IP core and an ARM Cortex-A5 IP core demonstrate the tradeoffs between various sequences based on the final design application. The thesis further focuses on minimization of hardware costs of a watermark circuit implementation. A new clock-modulation based technique is proposed and reuses the existing circuit of an IP core to generate a watermark signature. Power estimation and experimental results demonstrate a significant area and power overhead reduction, when compared with the existing techniques. To further minimize the costs of a watermark implementation, a new technique is proposed which allows a non-deterministic and sporadic generation of a watermark signature. The watermark was embedded in an ARM Cortex-A5 IP core and was fabricated in silicon. Experimental silicon results have validated the proposed technique and have demonstrated the negligible hardware implementation costs of an embedded watermark.
7

Kolluri, Murali Mohan. "Developing a validation metric using image classification techniques". University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1406819893.

8

Grisot, Giorgia. "Validation of dMRI techniques for mapping brain pathways". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122194.

Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 201-222).
Diffusion magnetic resonance imaging (dMRI) tractography is the only non-invasive tool for studying the connectional architecture of the brain in vivo. By measuring the diffusion of water molecules dMRI provides unique information about white matter pathways and their integrity, making it an invaluable neuroimaging tool that has improved our understanding of the human brain and how it is affected by disease. A major roadblock to its acceptance into clinical practice has been the difficulty in assessing its anatomical accuracy and reliability. In fact, obtaining a map of brain pathways is a multi-step process with numerous variables, assumptions and approximations that can influence the veracity of the generated pathways. Validation is, thus, necessary and yet challenging because there is no gold standard which dMRI can be compared to, since the configuration of human brain connections is largely unknown. Which aspects of tractography processing have the greatest effect on its performance? How do mapping methods compare? Which one is the most anatomically accurate? We tackle these questions with a multi-modal approach that capitalizes on the complementary strengths of available validation strategies to probe dMRI performance on different scales and across a wide range of acquisition and analysis parameters. The outcome is a multi-layered validation of dMRI tractography that 1) quantifies dMRI tractography accuracy both on the level of brain connections and tissue microstructure; 2) highlights the strengths and weaknesses of different modeling and tractography approaches, offering guidance on the issues that need to be resolved to achieve a more accurate mapping of the human brain.
9

Thieffry, Roland. "Etude et Validation de Systèmes d'Aide à l'Autonomie des Personnes Handicapées". Paris 6, 2004. http://www.theses.fr/2004PA066320.

10

Slesinger, Nathan Avery. "Thermal modeling validation techniques for thermoset polymer matrix composites". Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/26641.

Abstract:
Process modeling is becoming a widely-accepted tool to reduce the time, cost, and risk in producing increasingly large and complicated composite structures. Process modeling reduces the need for physical parts, as it is not practical or economical to design and fabricate large composite structures using a trial-and-error approach. The foundation of the composite manufacturing process, and thus of process models, is the thermal history of the composite part during cure. Improperly curing the composite part will compromise its mechanical properties. Consequently, proper validation of the thermal model input parameters is critical, since the simulation output depends on the accuracy of the input. However, there are no standard methods to validate thermal process model input parameters. In this work, repeatable and robust methods were developed to isolate and validate the conductive heat transfer, thermochemical, and convective heat transfer sub-models. By validating the sub-models, the uncertainty of the complete thermal simulation was significantly reduced. Conductive and thermochemical material models were validated by comparing the thermal response of a material surrounded by rubber bricks to a 1-D simulation of the same materials. Four composite prepreg systems and their respective material models were tested, with agreement ranging from excellent (errors less than 1.0 °C) to poor (errors greater than 5.0 °C). Calorimetry, visual monitoring, and CFD were used to characterize the convective heat transfer environment inside the UBC autoclave. The validation methods were also used to better understand the capabilities and limitations of the autoclave. Local variations in airflow patterns and heat transfer coefficients showed that heat transfer can be highly variable in an individual piece of equipment. Simple procedures for characterization of an autoclave or oven were demonstrated. The developed methods can be used individually, or in combination, to validate thermal models and reduce uncertainties associated with the cure of composites. With further refinement, the demonstrated methods can be developed into validation standards for thermal modeling of composite materials.
11

Magraoui, Mohamed. "Validation de techniques de commande d'un filtre actif parallèle". Mémoire, École de technologie supérieure, 2007. http://espace.etsmtl.ca/226/1/MAGRAOUl_Mohamed.pdf.

Abstract:
The current harmonics generated by nonlinear loads degrade the quality of the electrical waveform. Our research project proposes three approaches to counter the harmful effect of harmonics through the use of an active power filter. The approach reduces the harmonic distortion of the source current and, in addition, allows compensation of the reactive power at the fundamental frequency. The proposed algorithm uses indirect and direct linear control, and direct nonlinear control, to generate the reference currents of the active filter. A proportional-integral regulator is used to keep the DC-bus voltage constant and, secondly, to force the filter current to follow the reference current generated by these controls. The simulation and experimental results presented demonstrate the static and dynamic performance of the control schemes studied. The proposed methods are validated in simulation with Matlab Simulink SimPower and experimentally using dSPACE in the GREPCI laboratory.
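As a rough illustration of the proportional-integral DC-bus regulation mentioned in this abstract, the following minimal Python sketch implements a discrete PI loop; the gains, sample time and signal names are illustrative assumptions and are not taken from the thesis.

```python
# Minimal discrete PI regulator sketch for holding a DC-bus voltage constant.
# Gains, sample time and signal names are illustrative assumptions only.
def make_pi_controller(kp: float, ki: float, dt: float):
    integral = 0.0
    def step(v_ref: float, v_dc: float) -> float:
        nonlocal integral
        error = v_ref - v_dc                  # DC-bus voltage error
        integral += error * dt                # accumulate the integral term
        return kp * error + ki * integral     # current amplitude command
    return step

pi = make_pi_controller(kp=0.5, ki=20.0, dt=1e-4)
print(pi(400.0, 395.0))                       # one control step
```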
12

Magraoui, Mohamed. "Validation de techniques de commande d'un filtre actif parallèle /". Thèse, Montréal : École de technologie supérieure, 2007. http://proquest.umi.com/pqdweb?did=1459914731&sid=1&Fmt=2&clientId=46962&RQT=309&VName=PQD.

Abstract:
Thesis (M.Ing.)--École de technologie supérieure, Montréal, 2007.
"Thesis submitted to the École de technologie supérieure in partial fulfilment of the requirements for the master's degree in electrical engineering." Bibliography: leaves [150]-155. Also available in electronic form.
13

Saqi, Saqib Bashir, and Sheraz Ahmed. "Requirements Validation Techniques practiced in industry : Studies of six companies". Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3157.

Abstract:
Requirements validation is a critical phase of the requirements engineering process which makes sure that requirements are correct, consistent, complete and accurate. Requirements validation is used to determine the right requirements, while verification determines that the implementation is correct with respect to its requirements. The main objective of validation is to certify that the requirements specification document is an acceptable description of the system that is going to be implemented. Requirements validation techniques (RVTs) play a pivotal role in detecting possible defects in the requirements, and can help projects to be completed within the given schedule and budget and with the desired functionality. Studies of six companies regarding requirements validation are presented in this thesis. The study explores the requirements validation techniques that are presented in academia and practiced in industry. Interview studies were conducted in two countries in an attempt to establish the usage of requirements validation techniques in both. The pros and cons of the identified RVTs are discussed, and a comparison of the different RVTs with respect to their satisfaction levels in terms of catching defects, time/schedule and cost is presented as well.
14

Kurnianto, Kristedjo. "Application of machine learning techniques to power plant sensor validation /". [St. Lucia, Qld.], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe17689.pdf.

15

Sulehri, Latif. "Comparative Selection of Requirements Validation Techniques Based on Industrial Survey". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1178.

Abstract:
In software engineering, requirements validation is of core importance. Requirements validation is very helpful for judging whether the software requirements specification (SRS) is complete and free of bugs; it provides assurance that the software requirements document is free of unwanted requirements and fully consistent. Requirements validation is a key factor in removing inconsistency, detecting defects and making the software requirements document fully functional. The requirements validation techniques available in academia, such as requirements reviews, requirements prototyping, requirements testing and viewpoint-oriented requirements validation, are explained in this thesis report. In a readable and understandable way, the thesis presents the pros and cons of these requirements validation techniques as practiced in different software companies in Sweden and as described in academia, and it explains the advantages and issues related to each RVT. To judge the performance of these RVTs and to compare them, a survey questionnaire was designed with the help of colleagues and a literature review. To make the comparison, interviews were conducted and the questionnaire was sent to people working in requirements engineering departments at different software companies in Sweden. Finally, the satisfaction levels of the different software companies with these requirements validation techniques are presented in this thesis report; variables such as defect detection, time and cost are used to measure the satisfaction levels.
16

Al-Takrouri, Saleh Othman Saleh, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Robust state estimation and model validation techniques in computer vision". Publisher: University of New South Wales, Electrical Engineering & Telecommunications, 2008. http://handle.unsw.edu.au/1959.4/41002.

Abstract:
The main objective of this thesis is to apply ideas and techniques from modern control theory, especially from robust state estimation and model validation, to various important problems in computer vision. Robust model validation is used in texture recognition, where new approaches for classifying texture samples and segmenting textured images are developed. Also, a new model validation approach to motion primitive recognition is demonstrated by considering the motion segmentation problem for a mobile wheeled robot. A new approach to image inpainting based on robust state estimation is proposed, where the implementation presented here concerns recovering corrupted frames in video sequences. Another application addressed in this thesis based on robust state estimation is video-based tracking. A new tracking system is proposed to follow connected regions in video frames representing the objects in consideration. The system accommodates tracking multiple objects and is designed to be robust towards occlusions. To demonstrate the performance of the proposed solutions, examples are provided where the developed methods are applied to various gray-scale images, colored images, gray-scale videos and colored videos. In addition, a new algorithm is introduced for motion estimation via inverse polynomial interpolation. Motion estimation plays a primary role within the video-based tracking system proposed in this thesis. The proposed motion estimation algorithm is also applied to medical image sequences. Motion estimation results presented in this thesis include pairs of images from an echocardiography video and a robot-assisted surgery video.
17

Baylis, Charles Passant. "Improved techniques for nonlinear electrothermal FET modeling and measurement validation". [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0001989.

18

Weaver, Simon. "A Comparison of Data Reduction Techniques for Zone Model Validation". University of Canterbury. Civil Engineering, 2000. http://hdl.handle.net/10092/8299.

Abstract:
To validate zone models against experimental data, the experimental data need to be reduced to a form which is compatible with the zone model. Two parameters to which experimental data need to be reduced before comparison are the interface height and the upper and lower zone temperatures. Fire experiments with three different fire sizes were conducted in a full-sized double-room enclosure, with thermocouple trees located in the corner and on the centreline of each compartment. During the analysis of the experimental data, three interface height prediction methods were used. A commonly used empirical method known as the N% method did not perform well in the fire compartment. The maximum slope method, which estimates the interface height as the point where the temperature change over height is maximal, worked very well for the data reduction, as did Quintiere's method, which uses two integral identities to solve for the interface height. The interface heights determined by these methods could be used successfully with temperature averaging techniques; however, they would not be sufficient to validate the zone model's interface height calculation. The interface height does not represent any physical occurrence; rather, a layer of mixed gases appears in the room between the zones. For a conservative comparison with a zone model, the height at the bottom of this interface layer should be used, while for determining average temperatures the height directly in the middle of the interface layer is best. Six temperature averaging techniques were investigated. All predicted the lower zone temperature accurately. Quintiere's method was the most successful in accurately predicting the upper layer temperature, as it was not affected by thermocouple readings in the interface layer. Emmons' method ranged from slightly over-predicting the upper zone temperature to greatly over-predicting it. Spatial averaging, averaging based on the equation of state, and the Janssens and Tran method all slightly underestimated the upper layer temperature. These methods work more successfully if averaging takes place not from the boundary specified at the interface height but from the heights above and below the interface layer.
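As a rough illustration of the maximum slope method described in this abstract, the short Python sketch below takes the interface height as the height where the vertical temperature gradient is largest; the thermocouple heights and temperatures are invented, and this is not the author's data reduction code.

```python
import numpy as np

def interface_height_max_slope(z, temps):
    """Maximum slope method: take the interface as the height where the
    temperature change over height is largest."""
    z = np.asarray(z, dtype=float)        # thermocouple heights [m], ascending
    temps = np.asarray(temps, dtype=float)
    dT_dz = np.gradient(temps, z)         # vertical temperature gradient
    return z[np.argmax(np.abs(dT_dz))]

# invented thermocouple tree: cool lower layer, hot upper layer
z = np.linspace(0.1, 2.4, 12)
temps = np.where(z < 1.5, 25.0, 180.0) + np.random.normal(0.0, 2.0, z.size)
print(interface_height_max_slope(z, temps))
```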
19

Howard-Johnson, Julia A. "Training Program Content Validation: A Practical Application of Educative Techniques". TopSCHOLAR®, 1993. http://digitalcommons.wku.edu/theses/1429.

Abstract:
A McDonald's training program for the positions of grill and counter was evaluated in order to identify recommendations for curriculum refinement or enhancements. The methodological approaches developed by Ford and Wroten (1984) and Bownas, Bosshardt, and Donnelly (1985) were applied. Three evaluation assessment inventories were developed: the Job Task Inventory, the Training Emphasis Inventory, and the Training Effectiveness Inventory. These inventories were constructed with the assistance of 49 managers, trainers, and employees with six or more months of service. Four managers, seven trainers, and 22 recent training graduates responded to the appropriate inventory, and these ratings were used in the content validity evaluation. Scale reliability was evaluated for each inventory using Cronbach's coefficient alpha and Kuder-Richardson 21. Descriptive statistics were calculated for training requirements, training emphasis and training effectiveness measures. A plotting matrix was developed and correlation analyses were performed to assess content validity. Results of the analyses indicate: (a) that the three inventories are reliable, (b) that the overall grill training program reflects job tasks needed for successful job performance with the exception of a single content domain, (c) that counter managers and trainers differ in their perception of the importance of job tasks and the training emphasis needed, (d) that recent grill graduates find the training curriculum effective while counter graduates do not, and (e) that managers and trainers for both positions perceive task importance differently. The results call for slight grill training enhancements for the Secondary Duties content domain. Additionally it is indicated that the counter training program needs significant adjustments in terms of curriculum content and training emphasis.
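The scale reliability step mentioned in this abstract uses Cronbach's coefficient alpha; as a reminder of the standard formula, a minimal Python sketch is given below with invented ratings rather than the thesis data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's coefficient alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

# invented ratings: 5 respondents x 4 inventory items
ratings = [[4, 5, 4, 4], [3, 4, 3, 3], [5, 5, 4, 5], [2, 3, 2, 2], [4, 4, 5, 4]]
print(round(cronbach_alpha(ratings), 3))
```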
20

Weitz, Noah. "Analysis of Verification and Validation Techniques for Educational CubeSat Programs". DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1854.

Abstract:
Since their creation, CubeSats have become a valuable educational tool for university science and engineering programs. Unfortunately, while aerospace companies invest resources to develop verification and validation methodologies based on larger-scale aerospace projects, university programs tend to focus resources on spacecraft development. This paper looks at two different types of methodologies in an attempt to improve CubeSat reliability: generating software requirements and utilizing system and software architecture modeling. Both the Consortium Requirements Engineering (CoRE) method for software requirements and the Monterey Phoenix modeling language for architecture modeling were tested for usability in the context of PolySat, Cal Poly's CubeSat research program. In the end, neither CoRE nor Monterey Phoenix provided the desired results for improving PolySat's current development procedures. While a modified version of CoRE discussed in this paper does allow for basic software requirements to be generated, the resulting specification does not provide any more granularity than PolySat's current institutional knowledge. Furthermore, while Monterey Phoenix is a good tool to introduce students to model-based systems engineering (MBSE) concepts, the resulting graphs generated for a PolySat specific project were high-level and did not find any issues previously discovered through trial and error methodologies. While neither method works for PolySat, the aforementioned results do provide benefits for university programs looking to begin developing CubeSats.
21

Simmons, Amanda Lee 1970. "An experimental validation of resistance monitoring techniques for Arizona whitefly". Thesis, The University of Arizona, 1996. http://hdl.handle.net/10150/292013.

Abstract:
Three resistance monitoring methods, leaf disk, sticky trap, and vial, were tested to evaluate their relative reliability, discriminating ability, convenience, and practicality for monitoring insecticide resistance in Arizona whiteflies. Each method was evaluated against two field populations divergent in susceptibility using a mixture of fenpropathrin + acephate and two single chemicals, endosulfan and fenpropathrin. Correlations between field efficacy and leaf-disk bioassay resistance estimates were examined for the Yuma population and a comparatively resistant Maricopa population. At each location egg, immature, and adult whitefly densities were monitored before and after fenpropathrin + acephate treatments. The three methods had advantages and disadvantages. The leaf disk method had the greatest discriminating ability, the vial method was the most practical, and the sticky trap method was good at discriminating populations that had large differences in susceptibility. The field efficacy trials indicated good concordance between the leaf disk assay results and field performance.
22

Baylis, Charles Passant II. "Improved Techniques for Nonlinear Electrothermal FET Modeling and Measurement Validation". Scholar Commons, 2007. https://scholarcommons.usf.edu/etd/620.

Abstract:
Accurate transistor models are important in wireless and microwave circuit design. Large-signal field-effect transistor (FET) models are generally extracted from current-voltage (IV) characteristics, small-signal S-parameters, and large-signal measurements. This dissertation describes improved characterization and measurement validation techniques for FET models that correctly account for thermal and trapping effects. Demonstration of a customized pulsed-bias, pulsed-RF S-parameter system constructed by the author using a traditional vector network analyzer is presented, along with the design of special bias tees to allow pulsing of the bias voltages. Pulsed IV and pulsed-bias S-parameter measurements can provide results that are electrodynamically accurate; that is, thermal and trapping effects in the measurements are similar to those of radio-frequency or microwave operation at a desired quiescent bias point. The custom pulsed S-parameter system is benchmarked using passive devices and advantages and tradeoffs of pulsed S-parameter measurements are explored. Pulsed- and continuous-bias measurement results for a high-power transistor are used to validate thermal S-parameter correction procedures. A new implementation of the steepest-ascent search algorithm for load-pull is presented. This algorithm provides for high-resolution determination of the maximum power and associated load impedance using a small number of measured or simulated reflection-coefficient states. To perform a more thorough nonlinear model validation, it is often desired to find the impedance providing maximum output power or efficiency over variations of a parameter such as drain voltage, input power, or process variation. The new algorithm enables this type of validation that is otherwise extremely tedious or impractical with traditional load-pull. A modified nonlinear FET model is presented in this work that allows characterization of both thermal and trapping effects. New parameters and equation terms providing a trapping-related quiescent-bias dependence have been added to a popular nonlinear ("Angelov") model. A systematic method for fitting the quiescent-dependence parameters, temperature coefficients, and thermal resistance is presented, using a GaN high electron-mobility transistor as an example. The thermal resistance providing a good fit in the modeling procedure is shown to correspond well with infrared measurement results.
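The steepest-ascent load-pull search summarised above can be pictured with a generic hill-climbing sketch over the reflection-coefficient plane; the step sizes and the toy power surface below are assumptions for illustration, not the author's algorithm or measured data.

```python
import numpy as np

def steepest_ascent_loadpull(measure_power, gamma0=0j, step=0.05, delta=0.02, iters=30):
    """Generic steepest-ascent search for the load reflection coefficient that
    maximises output power, using finite-difference gradient estimates built
    from a few nearby measured (or simulated) reflection-coefficient states."""
    gamma = complex(gamma0)
    for _ in range(iters):
        p0 = measure_power(gamma)
        g_re = (measure_power(gamma + delta) - p0) / delta
        g_im = (measure_power(gamma + 1j * delta) - p0) / delta
        grad = complex(g_re, g_im)
        if abs(grad) < 1e-6:
            break
        gamma += step * grad / abs(grad)   # move along the steepest-ascent direction
        if abs(gamma) >= 1.0:              # keep the point inside the Smith chart
            gamma *= 0.99 / abs(gamma)
    return gamma, measure_power(gamma)

# toy power surface (dBm) with a maximum near gamma = 0.3 + 0.2j
toy_power = lambda g: 30.0 - 40.0 * abs(g - (0.3 + 0.2j)) ** 2
print(steepest_ascent_loadpull(toy_power))
```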
23

Landin, Per. "On radio frequency behavioral modeling: measurement techniques, devices and validation aspects". Licentiate thesis, Stockholm : Skolan för elektro- och systemteknik, Kungliga Tekniska högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11678.

24

Cadafalch, Rabasa Jordi. "Numerical Simulation of Turbulent Flows. Multiblock Techniques. Verification and Experimental Validation". Doctoral thesis, Universitat Politècnica de Catalunya, 2002. http://hdl.handle.net/10803/6681.

Abstract:
Work here presented is the result of basic research in key aspects of the currently available engineering tools and methodologies for the design, optimisation and development of thermal systems and equipment: turbulence modelling, high performance computing and quality tests and procedures so as to assess credibility to the numerical solutions (verification and validation).

The thesis comprises six main chapters written in a paper format. Two of them have already been published in international journals, one in the proceedings of a Spanish conference and two in proceedings of international conferences on Computational Fluid Dynamics and heat transfer. The last chapter has recently been submitted for publication to an international journal. Therefore, all the chapters are written so as to be self-contained, complete and concise. As a consequence, some contents of the chapters, such as those describing the governing equations or the verification procedure used to assess the credibility of the numerical solutions, are repeated in several of them. Furthermore, as only minor changes have been introduced in the chapters with respect to the original papers, each of them reflects the know-how of the CTTC (Heat and Mass Transfer Technological Centre, where the research has been carried out) when they were published.

Papers presented in chapters 1 and 2 deal with turbulence modelling. A general overview is given on the formulation and numerical techniques of the different levels of turbulence modelling: Direct Numerical Simulation (DNS), Large Eddy Simulation (LES) and Reynolds Averaged Navier-Stokes Simulation (RANS). Main attention is focussed on the eddy viscosity two-equation RANS models. Their formulation is presented in more detail, and numerical solutions of the most extended benchmark problems on turbulence modelling are given and compared to the available experimental data.

Chapters 3 and 4 focus on the use of the multiblock method (domain decomposition method), as a numerical technique that combined with the parallel computing may allow reducing the demanding computational time and memory (high performance computing). The multiblock approach used is based on the conservation of all the physical quantities (fully conservative method) and on an explicit information exchange between the different blocks of the domain. The goal of the work presented in these two chapters is to verify that such a multiblock approach does not introduce additional uncertainty in the numerical solutions.

Chapter 5 presents a tool that has been developed at the CTTC for the verification of finite volume computations. In fact, this tool is also partially used and described in the results presented in the previous chapters. Here, it is described and discussed in detail and it is applied to a set of different CFD and heat transfer problems in two and three dimensions, with free and forced convection, with reactive and non-reactive flows and with laminar and turbulent flows.

The last chapter shows a complete study for the development of a credible heat transfer relation for the heat evacuated from a ventilation channel. Such study comprises all the different steps that have to be accomplished so as to develop credible and applicable results in mechanical engineering. It comprises a description of the mathematical model to represent the physical phenomena in the channel, the numerical model to solve the set of coupled differential equations of the mathematical model, the construction and testing of an ad-hoc experimental set-up, and a verification and validation (V&V) test that guarantees that the numerical solution is an accurate enough approximation of the mathematical model (verification), and that it properly predicts the reality (validation).
25

Landin, Per N. "On Radio Frequency Behavioral Modeling : Measurement Techniques, Devices and Validation Aspects". Licentiate thesis, KTH, Signalbehandling, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11678.

Abstract:
Radio frequency (RF) power amplifiers (PA) are still the most troublesome part of a wireless system due to their inherent nonlinearity, low power efficiency and high distortions. Better tools are needed to understand and correct the undesirable behavior. Some of these tools are behavioral models. A behavioral model is often thought of as a black box with some inputs and some outputs. In the case here these inputs are sampled signals, which means that the modeling amounts to finding a mathematical relationship between the input signal(s) and the output signal(s). This thesis considers some requirements for behavioral modeling of said systems by presenting methods for general performance evaluation and improvement by considering a frequency weighted error criterion. A high performance measurement system is also needed. The performance of the available system is compared to the performance of a well recognized system, the large-signal network analyzer (LSNA). The results show that the existing measurement system can extract behavioral models with the same performance as the LSNA and can give lower performance validation errors. Still the need for higher bandwidths drives the measurement systems to the limits, especially the digital parts. By utilizing the so called Zhu-Frank generalized sampling theorem, behavioral modeling of a PA is done by using data acquired at a sampling rate lower than the Nyquist rate. Models of a PA are extracted and the performance is evaluated using the normalized mean-square error (NMSE) criterion. For prediction and correction of the output signals the stability of the models regarding changes must be investigated. One such study considering controlled variations on the output load of the PA is done and both the predictive and corrective capabilities of the models are evaluated. The predictive capability gets up to 7 dB worse measured as adjacent channel error power ratio (ACEPR) and the corrective, as digital predistortion, gets up to 13 dB worse measured as adjacent channel power ratio (ACPR).
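The NMSE criterion named in this abstract has a standard definition (error energy normalised by the energy of the measured signal, expressed in dB); a minimal Python sketch with an invented signal is:

```python
import numpy as np

def nmse_db(measured, modelled):
    """Normalised mean-square error, in dB, between a measured complex baseband
    output and a behavioral-model prediction of it."""
    measured = np.asarray(measured)
    modelled = np.asarray(modelled)
    err = np.sum(np.abs(measured - modelled) ** 2)
    ref = np.sum(np.abs(measured) ** 2)
    return 10.0 * np.log10(err / ref)

x = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 256))   # invented signal
print(nmse_db(x, 0.98 * x))                            # model with a small gain error
```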
26

Li, Lun. "Integrated techniques for the formal verification and validation of digital systems". Ann Arbor, Mich. : ProQuest, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3214772.

Abstract:
Thesis (Ph.D. in Computer Engineering)--S.M.U.
Title from PDF title page (viewed July 10, 2007). Source: Dissertation Abstracts International, Volume: 67-04, Section: B, page: 2151. Adviser: Mitchell A. Thornton. Includes bibliographical references.
27

Volkmer, Frank [Verfasser]. "Applying Live Job Monitoring Techniques to Monte Carlo Validation / Frank Volkmer". Wuppertal : Universitätsbibliothek Wuppertal, 2016. http://d-nb.info/1124474196/34.

28

Abouarboub, Ahmed Ali Mohamed. "In-situ validation of three-phase flowmeters using capacitance sensing techniques". Thesis, University of Derby, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396508.

29

Chilenski, Mark Alan. "Experimental data analysis techniques for validation of Tokamak impurity transport simulations". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112393.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages [234]-309).
This thesis presents two new techniques for analyzing data from impurity transport experiments in magnetically confined plasmas, with specific applications to the Alcator C-Mod tokamak. The objective in developing these new techniques is to improve the quality of the experimental results used to test simulations of turbulent transport: better characterization of the uncertainty in the experimental results will yield a better test of the simulations. Transport codes are highly sensitive to the gradients of the background temperature and density profiles, so the first half of this thesis presents a new approach to fitting tokamak profiles using nonstationary Gaussian process regression. This powerful technique overcomes many of the shortcomings of previous spline-based data smoothing techniques, and can even handle more complicated cases such as line-integrated measurements, computation of second derivatives, and 2d fitting of spatially- and temporally-resolved measurements. The second half of this thesis focuses on experimental measurements of impurity transport coefficients. It is shown that there are considerable shortcomings in existing point estimates of these quantities. Next, a linearized model of impurity transport data is constructed and used to estimate diagnostic requirements for impurity transport measurements. It is found that spatial resolution is more important than temporal resolution. Finally, a fully Bayesian approach to inferring experimental impurity transport coefficient profiles which overcomes the shortcomings of the previous approaches through use of multimodal nested sampling is developed and benchmarked using synthetic data. These tests reveal that uncertainties in the transport coefficient profiles previously attributed to uncertainties in the temperature and density profiles are in fact entirely explained by changes in the spline knot positions. Appendices are provided describing the extensive work done to determine the derivatives of stationary and nonstationary covariance kernels and the open source software developed as part of this thesis work. The techniques developed here will enable more rigorous benchmarking of turbulent transport simulations, with the ultimate goal of developing a predictive capability.
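As background to the profile-fitting part of this abstract, the sketch below shows plain Gaussian process regression with a stationary squared-exponential kernel in Python; the thesis uses a nonstationary variant, so this is only the baseline idea, and the data and hyperparameters are invented.

```python
import numpy as np

def gp_regress(x_train, y_train, x_test, ell=0.1, sigma_f=1.0, sigma_n=0.05):
    """Gaussian process regression with a stationary squared-exponential kernel;
    returns the posterior mean profile and a 1-sigma uncertainty band."""
    def kernel(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f ** 2 * np.exp(-0.5 * (d / ell) ** 2)
    K = kernel(x_train, x_train) + sigma_n ** 2 * np.eye(len(x_train))
    Ks = kernel(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = kernel(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

r = np.linspace(0.0, 1.0, 20)                                 # invented radial grid
y = np.exp(-3.0 * r) + np.random.normal(0.0, 0.05, r.size)    # noisy "profile" data
mean, std = gp_regress(r, y, np.linspace(0.0, 1.0, 100))
```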
30

Falcone, Yliès Carlo. "Etude et mise en oeuvre de techniques de validation à l'exécution". Université Joseph Fourier (Grenoble), 2009. http://www.theses.fr/2009GRE10239.

Abstract:
This thesis deals with three dynamic validation techniques: runtime verification (monitoring), runtime enforcement, and testing from a property. We consider these approaches in the absence of a complete behavioral specification of the system under scrutiny. Our study is done in the context of the Safety-Progress classification of properties. This framework offers several advantages for specifying properties on systems. We adapt the results of this classification, initially dedicated to infinite sequences, to take into account finite sequences. Those sequences may be considered as abstract representations of a system execution. Relying on this general framework, we study the applicability of dynamic validation methods. We characterize the classes of monitorable, enforceable, and testable properties. Then we propose three generic approaches for runtime verification, enforcement, and testing. We show how it is possible to obtain, from a property expressed in the Safety-Progress framework, verification, enforcement, and testing mechanisms for the property under consideration. Finally, we propose the tools j-VETO and j-POST implementing all the aforementioned results on Java programs.
31

Jayaraman, Dheepakkumaran. "Optimization Techniques for Performance and Power Dissipation in Test and Validation". OpenSIUC, 2012. https://opensiuc.lib.siu.edu/dissertations/473.

Abstract:
The high cost of chip testing makes testability an important aspect of any chip design. Two important testability considerations are addressed, namely power consumption and test quality. The power consumption during shift is reduced by efficiently adding control logic to the design. Test quality is studied by determining the sensitization characteristics of a path to be tested; path delay fault models have been used for the purpose of studying this problem. Another important aspect of chip design is performance validation, which is increasingly perceived as the major bottleneck in integrated circuit design. Given synthesizable HDL code, the proposed technique efficiently identifies infeasible paths and subsequently determines the worst-case execution time (WCET) of the HDL code.
32

Green, Andrew. "Computational techniques for fast Monte Carlo validation of proton therapy treatment plans". Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/computational-techniques-for-fast-monte-carlo-validation-of-proton-therapy-treatment-plans(96ab69f6-9ec3-44e5-ba13-c3021bfa4d59).html.

Abstract:
Proton therapy is an established radiotherapy technique for the treatment of complex cancers. However, problems exist in the planning of treatments where the use of inaccurate dose modelling may lead to treatments being delivered which are not optimal. Most of the problems with dose modelling tools used in proton therapy treatment planning lie in their treatment of processes such as multiple Coulomb scattering; therefore, a technique that accurately models such effects is preferable. Monte Carlo simulation alleviates many of the problems in current dose models but, at present, well-validated full-physics Monte Carlo simulations require more time than is practical in clinical use. Using the well-known and well-validated Monte Carlo toolkit Geant4, an application, called PTMC, has been developed for the simulation of proton therapy treatment plans. Using PTMC, several techniques to improve throughput were developed and evaluated, including changes to the tracking algorithm in Geant4 and application of large scale parallelism using novel computing architectures such as the Intel Xeon Phi co-processor. In order to quantify any differences in the dose-distributions simulated when applying these changes, a new dose comparison tool was also developed which is more suited than current techniques for use with Monte Carlo simulated dose distributions. Using an implementation of the Woodcock algorithm developed in this work, it is possible to track protons through a water phantom up to eight times faster than using the PRESTA algorithm present in Geant4, with negligible loss of accuracy. When applied to a patient simulation, the Woodcock algorithm increases throughput by up to thirty percent, though step limitation was necessary to preserve simulation accuracy. Parallelism was implemented on an Intel Xeon Phi co-processor card, where PTMC was tested with up to 244 concurrent threads. Difficulties imposed by the limited RAM available were overcome through the modification of the Geant4 toolkit and through the use of a novel dose collation technique. Using a single Xeon Phi co-processor, it is possible to validate a proton therapy treatment plan in two hours; with two co-processors that simulation time is halved. For the treatment plan tested, two Xeon Phi co-processors were roughly equivalent to a single 48-core AMD Opteron machine. The relative costs of Xeon Phi co-processors and traditional machines have also been investigated; at present the Intel Xeon Phi co-processor is not cost competitive with standard hardware, costing around twice as much as an AMD machine with comparable performance. Distributed parallelism was also implemented through the use of the Google Compute Engine (GCE). A tool, called PYPE, has been developed which allows users to launch large clusters in the GCE to perform arbitrary compute-intensive work. PYPE was used with PTMC to perform rapid treatment plan validation in the GCE. Using a large cluster, it is possible to validate a proton therapy treatment plan in ten minutes at a cost of roughly $10; the same plan computed locally on a 24-thread Intel Xeon machine required five hours. As an example calculation using PYPE and PTMC, a robustness study is undertaken for a proton therapy treatment plan; this robustness study shows the usefulness of Monte Carlo when computing dose distributions for robustness studies, and the utility of the PYPE tool to make numerous full physics Monte Carlo simulations quickly.
Using the tools developed in this work, a complete treatment plan robustness study can be performed in around 26 hours for a cost of less than $500, while using full-physics Monte Carlo for dose distribution calculations.
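The Woodcock (delta-tracking) algorithm referred to in this abstract is a standard Monte Carlo technique; the one-dimensional Python sketch below uses an invented piecewise cross-section and is not the PTMC implementation.

```python
import numpy as np

def woodcock_site(x, sigma_majorant, sigma_of, rng):
    """Sample the next real interaction site along a 1-D ray by Woodcock
    (delta) tracking: draw free paths from the constant majorant cross-section
    and accept a collision with probability sigma(x) / sigma_majorant."""
    while True:
        x += -np.log(rng.random()) / sigma_majorant   # free path from the majorant
        if rng.random() < sigma_of(x) / sigma_majorant:
            return x                                  # real (non-virtual) collision

rng = np.random.default_rng(0)
sigma = lambda x: 0.2 if x < 5.0 else 0.8             # invented medium [1/cm]
print([round(woodcock_site(0.0, 0.8, sigma, rng), 2) for _ in range(5)])
```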
33

Li, Li. "Student Misbehaviors and Teacher Techniques in Online Classrooms: Instrument Development and Validation". Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1343414806.

34

Moundanos, Konstantinos. "A unified validation framework for VLSI circuits using formal and abstraction techniques /". Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

35

Madden, Ryan J. "Development of Robust Control Techniques towards Damage Identification". Cleveland State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=csu1460986638.

36

Weathers, James Boyd. "VERIFICATION AND VALIDATION USING STATE OF THE ART MEASURES AND MODULAR UNCERTAINTY TECHNIQUES". MSSTATE, 2008. http://sun.library.msstate.edu/ETD-db/theses/available/etd-03282008-111215/.

Abstract:
As quantitative validation measures have become available, so has the controversy regarding the construction of such measures. The complexity of the physical processes involved is compounded by uncertainties introduced due to model inputs, experimental errors, and modeling assumptions just to name a few. Also, how these uncertainties are treated is of major importance. In this dissertation, the issues associated with several state of the art quantitative validation metrics are discussed in detail. Basic Verification and Validation (V&V) framework is introduced outlining areas where some agreement has been reached in the engineering community. In addition, carefully constructed examples are used to shed light on differences among the state of the art validation metrics. The results show that the univariate validation metric fails to account for correlation structure due to common systematic error sources in the comparison error results. Also, the confidence interval metric is an inadequate measure of the noise level of the validation exercise. Therefore, the multivariate validation metric should be utilized whenever possible. In addition, end-to-end examples of the V&V effort are provided using the multivariate and univariate validation metrics. Methodology is introduced using Monte Carlo analysis to construct the covariance matrix used in the multivariate validation metric when non-linear sensitivities exist. Also, the examples show how multiple iterations of the validation exercise can lead to a successful validation effort. Finally, modular uncertainty techniques are introduced for the uncertainty analysis of large systems where many data reduction equations or models are used to examine multiple outputs of interest. In addition, the modular uncertainty methodology was shown to be an equivalent method to the traditional propagation of errors approach with a drastic reduction in computational effort. The modular uncertainty technique also has the advantage in that insight is given into the relationship between the uncertainties of the quantities of interest being examined. An extension of the modular uncertainty methodology to cover full scale V&V exercises is also introduced.
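As a rough illustration of using Monte Carlo analysis to build the covariance matrix of several outputs when nonlinear sensitivities exist, the sketch below propagates assumed input uncertainties through a toy nonlinear model; the model and the numbers are invented and are not from the dissertation.

```python
import numpy as np

def mc_output_covariance(model, input_means, input_stds, n_samples=5000, seed=0):
    """Monte Carlo estimate of the covariance matrix of a model's outputs,
    propagating independent normal input uncertainties through a (possibly
    nonlinear) model; model(inputs) must return a vector of outputs."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(input_means, input_stds, size=(n_samples, len(input_means)))
    outputs = np.array([model(d) for d in draws])
    return np.cov(outputs, rowvar=False)

# toy nonlinear model producing two correlated outputs
toy_model = lambda u: np.array([u[0] * u[1], u[0] ** 2 + np.sin(u[1])])
print(mc_output_covariance(toy_model, input_means=[2.0, 1.0], input_stds=[0.1, 0.05]))
```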
37

Dalne, Katja. "Validation Techniques for Credit Risk Models - Applying New Methods on Nordea’s Corporate Portfolio". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-129067.

Pełny tekst źródła
Streszczenie:
Nordea, the largest banking group of its kind in Northern Europe, has a great need to evaluate its customers' ability to repay debt as well as their probability of bankruptcy. The evaluation is done with statistically derived internal rating models based on logistic regression. The models have been developed from historical data and attain good predictive power when a large amount of observational data is available for each customer. In order to improve the rating models, Nordea wants to implement two new validation methods recommended by the reputable credit rating agency Moody's: information entropy and accuracy ratio with simulated defaults. A default is a customer that is either close to or in bankruptcy. Information entropy measures how much information is contained in a given variable, while accuracy ratio with simulated defaults validates the ability of the model to discriminate between "good" and "bad" customers when default data are simulated. Simulation is used when sufficient default data do not exist, which is the case for large corporates. After these validation methods were implemented on the same data set that Moody's had been given, the results Moody's presented could be confirmed with the chosen implementation. The method was then used to analyse a larger, general data set, and it could be concluded that each of the validation methods recommended by Moody's would improve the validation of the model.
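A minimal Python sketch of the two recommended measures follows: Shannon entropy estimated from a histogram of a rating variable, and the accuracy ratio computed from a CAP curve built on defaults simulated from the model's PD estimates. The bin count, portfolio size and PD range are illustrative assumptions, not Nordea's or Moody's actual settings.

    import numpy as np

    def information_entropy(values, bins=10):
        """Shannon entropy (bits) of a variable, estimated from a histogram."""
        counts, _ = np.histogram(values, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def accuracy_ratio(scores, defaults):
        """Accuracy ratio (Gini) from the CAP curve: area between the model CAP
        and the random model, divided by the same area for a perfect model."""
        order = np.argsort(-np.asarray(scores))          # riskiest obligors first
        d = np.asarray(defaults)[order]
        x = np.arange(0, len(d) + 1) / len(d)            # cumulative share of obligors
        cap = np.concatenate(([0.0], np.cumsum(d) / d.sum()))
        area_model = np.sum((cap[1:] + cap[:-1]) / 2 * np.diff(x)) - 0.5
        area_perfect = 0.5 * (1.0 - d.mean())
        return area_model / area_perfect

    # Simulated defaults: Bernoulli draws from the model's PD estimates
    rng = np.random.default_rng(0)
    pd_estimates = rng.uniform(0.001, 0.05, size=5000)
    simulated_defaults = rng.binomial(1, pd_estimates)
    print(information_entropy(pd_estimates), accuracy_ratio(pd_estimates, simulated_defaults))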
Style APA, Harvard, Vancouver, ISO itp.
38

Overbeck, Wiebke [Verfasser]. "Validation of three diagnostic techniques to diagnose subclinical endometritis in mares / Wiebke Overbeck". Berlin : Freie Universität Berlin, 2014. http://d-nb.info/1050709594/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Sulaimalebbe, Aslam. "Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles". Thesis, Cardiff University, 2009. http://orca.cf.ac.uk/54904/.

Pełny tekst źródła
Streszczenie:
In the last decade, the study of nanoparticle (NP) systems has become a large and interesting research area due to their novel properties and functionalities, which differ from those of the bulk materials, and their potential applications in different fields. It is vital to understand the behaviour and properties of nano-materials in order to implement nanotechnology, control their behaviour and design new material systems with superior performance. Physical characterisation of NPs falls into two main categories, property and structure analysis, and the properties of the NPs cannot be studied without knowledge of their size and structure. The direct measurement of the electrical properties of metal NPs presents a key challenge and necessitates the use of innovative experimental techniques. There have been numerous reports of two- and four-point resistance measurements of NP films, and of the electrical conductivity of NP films measured with the interdigitated microarray (IDA) electrode. However, using microwave techniques such as the open-ended coaxial probe (OCP) and the microwave dielectric resonator (DR) for the electrical characterisation of metallic NPs is much more accurate and effective than these traditional techniques, because they are inexpensive, convenient, non-destructive, contactless, hazardless (i.e. at low power) and require no special sample preparation. This research is the first attempt to determine the microwave properties of Pt and Au NP films, which are appealing materials for nano-scale electronics, using the aforementioned microwave techniques. The ease of synthesis, relatively low cost, unique catalytic activities, and control over size and shape were the main considerations in choosing Pt and Au NPs for the present study. The initial phase of this research was to implement and validate the aperture admittance model for the OCP measurement through experiments and 3D full-wave simulation using the commercially available Ansoft High Frequency Structure Simulator (HFSS), followed by the electrical characterisation of synthesised Pt NP films using the novel miniature fabricated OCP technique. The results obtained from this technique provided the inspiration to synthesise and evaluate the microwave properties of Au NPs, and the motivation to characterise both the Pt and Au NP films using the DR technique. Unlike the OCP technique, the DR method is highly sensitive, but the achievable measurement accuracy is limited since it does not have the broadband frequency capability of the OCP method. The results obtained from the DR technique show good agreement with the theoretical prediction. In the last phase of this research, a further validation of the aperture admittance models for different types of OCP (i.e. RG-405 and RG-402 cables and an SMA connector) was carried out using the developed 3D full-wave models in HFSS, followed by the development of universal models for the aforementioned OCPs based on the same 3D full-wave models.
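To make the probe model concrete, the sketch below (Python) uses the simple capacitive aperture admittance model often quoted for open-ended coaxial probes, Y = jω(Cf + εr·C0), with the reflection coefficient following from the mismatch against the line admittance. The capacitance values, line impedance and test frequency are placeholder assumptions, not the aperture admittance models or parameters developed in the thesis.

    import numpy as np

    def reflection_from_permittivity(eps_r, freq, c0=2.5e-14, cf=1.0e-14, z0=50.0):
        """Reflection coefficient of an open-ended coaxial probe under the
        simple capacitive aperture model Y = j*omega*(Cf + eps_r*C0)."""
        omega = 2 * np.pi * freq
        y = 1j * omega * (cf + eps_r * c0)
        y0 = 1.0 / z0
        return (y0 - y) / (y0 + y)

    def permittivity_from_reflection(gamma, freq, c0=2.5e-14, cf=1.0e-14, z0=50.0):
        """Invert the capacitive model to recover the complex relative permittivity."""
        omega = 2 * np.pi * freq
        y = (1.0 / z0) * (1 - gamma) / (1 + gamma)
        return (y / (1j * omega) - cf) / c0

    # Round trip at 5 GHz for a lossy dielectric sample
    eps = 10.0 - 2.0j
    gamma = reflection_from_permittivity(eps, 5e9)
    print(permittivity_from_reflection(gamma, 5e9))   # recovers ~10 - 2j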
Style APA, Harvard, Vancouver, ISO itp.
40

Meneely, Julie Patricia. "Development and validation of rapid analytical techniques for the determination of trichothecine mycotoxins". Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.534609.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Chen, Sean Jy-Shyang. "Development and validation of vascular image processing techniques for image-guided neurovascular surgery". Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123108.

Pełny tekst źródła
Streszczenie:
The effectiveness of image guided neurosurgery (IGNS) depends on the presentation of accurate image data to a neurosurgeon for surgical planning and guidance. The blood vessels supplying the brain are of particular importance in IGNS, because they densely surround brain lesions and tumours, may themselves be the sites of pathologies, and need to be carefully considered during surgery. Given the importance of visualizing and identifying vasculature for diagnosis, planning, and guidance, there is a strong need for automated vessel enhancement and registration techniques. Furthermore, tools for the characterization and validation of developed image processing methods are needed. This thesis presents the development of three separate techniques to address the above-stated needs: (1) a vessel-based intraoperative image registration technique, (2) a technique for producing anatomically realistic multimodality imaging phantoms, and (3) a non-local estimator based vessel structure enhancement technique. For intraoperative registration, where preoperative images are aligned to the patient on the operating room table, we developed a hybrid non-linear vessel-based registration algorithm. Our technique combines the benefits of feature-based and intensity-based vessel registration methods. Raw volumetric images are processed through feature enhancement to produce a set of image intensity maps for registration using cross-correlation. By not explicitly extracting discrete vessel features, we can be assured that removal of important registration information is minimized. We extensively validated our registration method for robustness and accuracy using a large number of synthetic images, real physical phantom images, and real clinical patient images. In validating our registration technique we realized a need for improved physical phantoms. As such, we developed a multimodal anthropomorphic brain phantom for inter-modality image processing validation. The brain phantom (1) has the mechanical properties and anatomical structures found in live human brain and (2) was made from polyvinyl alcohol cryogel. Marker spheres and inflatable catheters were also implanted to enable good registration comparisons and to simulate tissue deformation, respectively. Multiple sets of multimodal data were then acquired from this phantom and made freely available to the image processing community. Based on our vessel registration work, we also found a need for improved vessel enhancement methods. Therefore, we developed a technique that extends Frangi's vessel enhancement method to improve background suppression. To do this, we account for larger vessel geometries over an extended area rather than solely using information from a small local region. Validation of the technique was performed on 3D synthetic images, and 2D and 3D clinical images. The results revealed that by analyzing larger image regions to improve background suppression and identify vessel-like structures, our method can effectively enhance and improve retention of thin and lower contrast vessels in comparison to Frangi's method. The automated vessel enhancement and vessel-based image registration techniques developed in this thesis can be used to improve the effectiveness of surgical workflows in IGNS. Our anthropomorphic phantom can be used to validate and characterize novel image processing methods.
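For reference, the sketch below (Python) implements the standard single-scale 2D Frangi vesselness measure that the thesis extends; the non-local, extended-area modification itself is not reproduced. The scale, the β and c parameters and the synthetic test image are illustrative choices.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def frangi_vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
        """Single-scale 2D Frangi vesselness: bright tubular structures score high."""
        # Hessian components from Gaussian derivatives at scale sigma
        hxx = gaussian_filter(image, sigma, order=(0, 2))
        hyy = gaussian_filter(image, sigma, order=(2, 0))
        hxy = gaussian_filter(image, sigma, order=(1, 1))
        # Eigenvalues of the symmetric 2x2 Hessian, ordered so that |l1| <= |l2|
        tmp = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
        l1 = 0.5 * (hxx + hyy + tmp)
        l2 = 0.5 * (hxx + hyy - tmp)
        swap = np.abs(l1) > np.abs(l2)
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        rb = np.abs(l1) / (np.abs(l2) + 1e-12)      # blob-versus-line measure
        s = np.sqrt(l1 ** 2 + l2 ** 2)              # second-order structureness
        v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
        return np.where(l2 < 0, v, 0.0)             # keep bright-on-dark ridges only

    # Synthetic test image containing one bright horizontal vessel-like line
    img = np.zeros((64, 64))
    img[32, 10:54] = 1.0
    vesselness = frangi_vesselness_2d(img, sigma=2.0)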
Style APA, Harvard, Vancouver, ISO itp.
42

Thornton, Jenna Louise. "Ice particle size and roughness from novel techniques : in situ measurements and validation". Thesis, University of Hertfordshire, 2016. http://hdl.handle.net/2299/17644.

Pełny tekst źródła
Streszczenie:
The roughness of ice crystals in high-altitude cloud, defined by small-scale surface roughness and large-scale complexity, has been studied because of its important influence on the radiative properties of ice cloud. The Small Ice Detector 3 (SID-3), created at the University of Hertfordshire, was used to measure the characteristics of individual ice crystals in situ. These measurements are supplemented by a range of in situ meteorological measurements, including temperature, relative humidity, and wind velocity, to investigate the influence of atmospheric conditions on ice crystal roughness/complexity. Since the method of roughness retrieval was novel for atmospheric ice particles, laboratory experiments were set up to test and improve the characterization techniques. As a result of the laboratory experiments, criteria were set that the data were expected to meet in order to be deemed reliable. These criteria and techniques were applied to data collected in situ on research aircraft. A range of degrees of ice crystal roughness was observed over five flights from two campaigns based out of Scotland in 2012 and 2015 (PIKNMIX and CIRCCREX). When all the flights were combined, the majority of particles (51%) were categorised as lightly rough; the second most common roughness type was moderately rough (39%). Smooth particles made up 10% of the total, and < 0.02% were classed as severely rough. When a wave-cloud case was considered separately, a similar range of roughness values was seen; however, smooth particles were only observed at the cloud leading edge, where nucleation was expected to occur, during the only straight-and-level run of the aircraft to probe this region. During the same wave-cloud flight, smooth particles were more common in supersaturated regions and moderately rough crystals were more common in subsaturated regions, suggesting that crystals tend towards rougher values when observed in subsaturated environments (a statistical t-test showed this difference to be significant). Owing to the limitations of instantaneous measurements, it was challenging to observe how ice particle roughness evolved in situ, since the history of the individual crystals was unknown in most cases. Orographic cloud, however, was found to provide a more robust estimate of crystal evolution because it has sharp leading edges where nucleation events are expected to occur; since crystals then follow streamlines, the distance from the leading edge can act as a proxy for time since nucleation.
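The saturation dependence reported above rests on a simple two-sample comparison; a minimal Python sketch of such a test is shown below. The roughness values, group sizes and the use of Welch's unequal-variance t-test are hypothetical placeholders, not data or settings from the thesis.

    import numpy as np
    from scipy import stats

    # Hypothetical roughness values grouped by local saturation state
    # (illustrative numbers only, not measurements from the campaigns)
    rng = np.random.default_rng(1)
    rough_subsaturated = rng.normal(0.55, 0.10, size=200)
    rough_supersaturated = rng.normal(0.45, 0.10, size=150)

    # Welch's two-sample t-test for a difference in mean roughness
    t_stat, p_value = stats.ttest_ind(rough_subsaturated, rough_supersaturated,
                                      equal_var=False)
    print(t_stat, p_value)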
Style APA, Harvard, Vancouver, ISO itp.
43

Tanguay, Michelle. "Validation d'un test de vocabulaire américain par un procédé multi-techniques de traduction". Master's thesis, Université Laval, 1989. http://hdl.handle.net/20.500.11794/37375.

Pełny tekst źródła
Streszczenie:
"La traduction de toute méthode et de tout instrument de mesure développés par et pour un autre milieu culturel, doit être vérifiée afin d’éliminer l’erreur causée par la contamination des données de départ. La version de langue française des items 40 à 79 de la forme A du test de vocabulaire "P.P.V.T.” a été validée par une procédure multi-technigues. Après une technique de retraduction réalisée par deux groupes de traducteurs, une autre équipe de traducteurs a analysé les différentes versions obtenues pour choisir la meilleure traduction à chacun des items. Deux groupes de sujets anglophone et francophone ont passé ce test ou sa version et, après analyse de leurs résultats, leurs moyennes se sont avérées être comparables. La version en langue française de cette partie du *’P . P . V . T ., comparée au test original, est donc fidèle (selon le coefficient alpha de l’analyse de la fidélité) et valide (selon une analyse de la covariance)."
Québec Université Laval, Bibliothèque 2019
Style APA, Harvard, Vancouver, ISO itp.
44

Rapin, Nicolas. "Validation de spécifications à base d'automates par des techniques de dépliage et d'exécution". Evry-Val d'Essonne, 2004. http://www.theses.fr/2004EVRY0016.

Pełny tekst źródła
Streszczenie:
A notable part of the faulty behaviour of industrial systems is due to specification errors. Validation denotes the set of techniques that aim at preventing such errors. This thesis develops a methodology, together with its theoretical foundations, for the validation of specifications based on communicating automata. The idea is that a specification can be rewritten in a form that reveals its behaviours, so that the engineer can assess its correctness. More technically, the rewritten specification is a graph built from the consistent control paths derived from the original specification. It is shown that the rewritten specification describes the same system and that it can be substituted for the original specification when the latter plays the role of a component within a larger system. This result supports a validation methodology based on the pairing of unit validation and integration validation.
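The rewriting idea described above, in which the behaviours of a communicating-automata specification are made explicit as a graph built from consistent control paths, can be illustrated with a breadth-first unfolding of a small labelled transition system. This is a minimal Python sketch of the general unfolding idea, not the algorithm of the thesis; the states, actions and depth bound are invented for illustration.

    from collections import deque

    def unfold(initial_state, transitions, max_depth=5):
        """Breadth-first unfolding of a labelled transition system into an explicit
        graph: (state, depth) -> list of (action, next_state, depth + 1)."""
        graph = {}
        queue = deque([(initial_state, 0)])
        seen = {(initial_state, 0)}
        while queue:
            state, depth = queue.popleft()
            if depth >= max_depth:
                continue
            for action, nxt in transitions.get(state, []):
                graph.setdefault((state, depth), []).append((action, nxt, depth + 1))
                if (nxt, depth + 1) not in seen:
                    seen.add((nxt, depth + 1))
                    queue.append((nxt, depth + 1))
        return graph

    # Toy automaton fragment: states with labelled transitions
    transitions = {
        "idle": [("request", "waiting")],
        "waiting": [("grant", "busy"), ("timeout", "idle")],
        "busy": [("release", "idle")],
    }
    behaviour_graph = unfold("idle", transitions, max_depth=3)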
Style APA, Harvard, Vancouver, ISO itp.
45

Linton, Nick William Fox. "Intracardiac mapping of atrial arrhythmias in humans : development and validation of new techniques". Thesis, King's College London (University of London), 2015. http://kclpure.kcl.ac.uk/portal/en/theses/intracardiac-mapping-of-atrial-arrhythmias-in-humans-development-and-validation-of-new-techniques(66e47520-881e-4325-b7c9-a407919cbe7e).html.

Pełny tekst źródła
Streszczenie:
In patients undergoing catheter ablation procedures for atrial tachycardia, successful ablation requires the mechanism and location of the tachycardia to be correctly determined. This thesis explores the integration of engineering and computational methods with electrophysiological principles for mapping atrial tachycardias. The first objective of the thesis is to re-evaluate activation mapping. Ripple Mapping was created for this purpose. This is a method that displays each recorded electrogram as a bar on the shell that represents the cardiac surface: the length of the bar varies with time according to the electrogram voltage-time relationship. A proof-of-concept study evaluates Ripple Mapping in a small number of patients with a variety of different arrhythmias. After further development of the method, it is evaluated in patients with atrial tachycardia. Benefits include avoiding the need to annotate each electrogram with a Local Activation Time and also avoiding the need to select a Window of Interest. The second objective is to investigate how macro-reentry tachycardias are detected. The classic entrainment criteria can be difficult to apply in the clinical setting of atrial tachycardia (particularly after prior ablation). A new entrainment criterion is described that utilises the response to entrainment from multiple locations. This can also detect double loop reentry from two entrainment manoeuvres. The theoretical basis for the criterion is developed within a mathematical framework. Clinical testing is performed in patients with typical flutter, left atrial macroreentry, and also analysis of previously published reports of double-loop reentry. The criterion is also incorporated into the overdrive pacing analysis software described below. The final objective was to integrate information from overdrive pacing manoeuvres in combination with the electroanatomic information from 3D mapping systems. A theoretical basis for this has been developed and incorporated into a computer program. Initial clinical evaluation is presented from patients with simulated focal tachycardias as well as clinical localised reentrant and macroreentrant tachycardias.
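The Ripple Mapping display described above turns each electrogram into a bar whose length follows the voltage-time relationship. A minimal Python sketch of that scaling step is given below; the linear normalisation to a maximum bar length, the electrode count and the synthetic signals are assumptions for illustration, not the implementation used in the thesis.

    import numpy as np

    def ripple_bar_lengths(electrograms, max_length_mm=5.0):
        """Scale each electrode's voltage-time signal into a time-varying bar
        length (mm), to be drawn normal to the chamber shell at that electrode."""
        electrograms = np.asarray(electrograms, dtype=float)
        peak = np.abs(electrograms).max() + 1e-12
        return max_length_mm * np.abs(electrograms) / peak

    # Hypothetical electrograms: 3 electrodes x 200 time samples
    rng = np.random.default_rng(3)
    signals = rng.normal(0.0, 0.1, size=(3, 200))
    signals[1, 80:90] += 1.5                     # a sharp local activation
    bars = ripple_bar_lengths(signals)           # frame-by-frame bar lengths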
Style APA, Harvard, Vancouver, ISO itp.
46

Park, June-Woo. "Development and validation of novel molecular techniques to elucidate mechanisms of endocrine disruption". Diss., Connect to online resource - MSU authorized users, 2008.

Znajdź pełny tekst źródła
Streszczenie:
Thesis (Ph. D.)--Michigan State University. Dept. of Zoology-Environmental Toxicology, 2008.
Title from PDF t.p. (viewed on Mar. 30, 2009). Includes bibliographical references. Also issued in print.
Style APA, Harvard, Vancouver, ISO itp.
47

Flohrer, Claudia. "Mutual validation of satellite-geodetic techniques and its impact on GNSS orbit modeling /". Bern : [s.n.], 2008. http://www.ub.unibe.ch/content/bibliotheken_sammlungen/sondersammlungen/dissen_bestellformular/index_ger.html.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Woo, Mei-sum Becky, i 胡美心. "Validation and calibration of a digital subtraction radiography system for quantitative assessment of alveolar bone changes". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31954169.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
49

Alphonce, Magori. "Use of software verification & validation (V&V) techniques for software process improvement". Thesis, University West, Department of Technology, Mathematics and Computer Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-574.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
50

Isaza, Narvaez Claudia Victoria. "Diagnostic par techniques d'apprentissage floues: concept d'une méthode de validation et d'optimisation des partitions". Phd thesis, INSA de Toulouse, 2007. http://tel.archives-ouvertes.fr/tel-00190884.

Pełny tekst źródła
Streszczenie:
This work lies in the field of process diagnosis, defined as the identification of a process's functional states. When obtaining a precise model of the process is difficult or impossible, knowledge about the system can be extracted from signals recorded during normal or abnormal operation by including learning mechanisms. This knowledge is organised as a partition of the data space into classes (representing the states of the system). Among learning techniques, those incorporating fuzzy logic have the advantage of expressing the membership of an individual in several classes, which gives a better picture of the real situation of the system and allows changes towards failure states to be anticipated. Despite their adequate performance, their strong dependence on initialisation parameters is a difficulty for learning. This thesis is concerned with improving these techniques; in particular, our objective is the development of a method for automatically validating and adapting the partition of the data space obtained by a fuzzy classification technique. It automatically finds a partition that is optimal in terms of class compactness and separation, using only the matrix of membership degrees obtained from a previous classification. The method is therefore an important aid to the process expert for establishing the functional states when implementing a monitoring technique for a complex process. Its application is illustrated on academic examples and on the diagnosis of three chemical processes.
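In the spirit of the compactness/separation assessment described above, the sketch below (Python) computes validity indices that use only the fuzzy membership matrix. Bezdek's partition coefficient and partition entropy are used as stand-ins; the thesis's own validation and adaptation method is not reproduced, and the example membership matrix is invented.

    import numpy as np

    def partition_coefficient(u):
        """Bezdek's partition coefficient: mean of squared memberships.
        Values close to 1 indicate a crisp, well-separated partition."""
        u = np.asarray(u, dtype=float)
        return np.sum(u ** 2) / u.shape[1]

    def partition_entropy(u, eps=1e-12):
        """Bezdek's partition entropy: low values indicate a compact partition."""
        u = np.asarray(u, dtype=float)
        return -np.sum(u * np.log(u + eps)) / u.shape[1]

    def best_partition(membership_matrices):
        """Pick the candidate partition with the highest partition coefficient,
        using only the membership degrees of each candidate."""
        return max(membership_matrices, key=partition_coefficient)

    # Membership matrix: rows are classes (functional states), columns are samples
    u = np.array([[0.9, 0.8, 0.1, 0.2],
                  [0.1, 0.2, 0.9, 0.8]])
    print(partition_coefficient(u), partition_entropy(u))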
Style APA, Harvard, Vancouver, ISO itp.