Dissertations / Theses on the topic 'Statistical techniques'




Consult the top 50 dissertations / theses for your research on the topic 'Statistical techniques.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Traiger, Elizabeth A. "Statistical Techniques for flood estimation." Thesis, University of Oxford, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.504614.

2

Whitehead, Christopher David. "Statistical techniques in credit scoring." Thesis, Lancaster University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443518.

Abstract:
The credit industry requires continued development and application of new statistical methodology that can improve aspects of the business. The first part of the thesis suggests a new diagnostic, derived from Kalman filtering, to assess model performance. It allows systematic updating of tracked statistics over time by incorporating new observations with the previous best estimates. It has benefits that current industry practices do not possess, and we illustrate its worth on a mortgage application database. The second part of the thesis is concerned with regression analysis of financial data. To aid in the understanding of financial data, quantile regression and a variable transformation are applied to a 'missed payments' database, resulting in a greater understanding and a more accurate description of the data. A less standard sampling and modelling approach is also employed, which may give increased predictive power on independent data not used for model construction. The third part of this thesis is concerned with regression modelling in situations where the dimensionality is large. Latent variable modelling of explanatory and binary response variables is suggested, whose likelihood can be maximised using an EM algorithm. Less progress than anticipated has been accomplished in this area. The first two parts of this thesis suggest novel statistical methodology that can provide benefits over current industry practices, and both are adapted to real credit scoring applications.
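To make the quantile-regression step concrete, here is a hedged sketch (ours, not the thesis code) that fits several conditional quantiles to a synthetic 'missed payments' style variable; all variable names and data are invented.

```python
# Illustrative only: quantile regression on simulated, heteroscedastic data,
# loosely mirroring the kind of analysis the abstract describes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
balance = rng.gamma(shape=2.0, scale=1.5, size=n)        # synthetic covariate
# Spread grows with the covariate, so conditional quantiles fan out.
missed = 1.0 + 0.5 * balance + rng.normal(0, 0.3 * balance, size=n)

X = sm.add_constant(balance)
for q in (0.25, 0.50, 0.90):
    fit = sm.QuantReg(missed, X).fit(q=q)
    print(f"q={q}: intercept={fit.params[0]:.3f}, slope={fit.params[1]:.3f}")
```

Because the noise is heteroscedastic, the fitted slope differs across quantiles, which is the kind of structure a single least-squares line would average away.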
3

Pickard, Lesley Margaret. "Statistical techniques and project monitoring." Thesis, City University London, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241438.

4

Wu, Qin. "Reliable techniques for survey with sensitive question." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1496.

5

Wang, Qun. "Bootstrap techniques for statistical pattern recognition." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0027/MQ52407.pdf.

6

Kempson, C. N. "Statistical techniques for digital modulation recognition." Thesis, Cranfield University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277938.

Abstract:
Automatic modulation recognition is an important part of communications electronic monitoring and surveillance systems, where it is used for signal sorting and receiver switching. This thesis introduces a novel application of multivariate statistical techniques to the problem of automatic modulation classification. The classification technique uses modulation features derived from time-domain parameters of instantaneous signal envelope, frequency and phase. Principal component analysis (PCA) is employed for data reduction, and multivariate analysis of variance (MANOVA) is used to investigate the data and to construct a discriminant function to enable the classification of modulation type. MANOVA is shown to offer advantages over the techniques already used for modulation recognition, even when simple features are used. The technique is used to construct a universal discriminator which is independent of the unknown signal-to-noise ratio (SNR) of the received signal. The universal discriminator is shown to extend the range of SNRs over which discrimination is possible, being effective over an SNR range of 0-40 dB. Development of discriminant functions using MANOVA is shown to be an extensible technique, capable of application to more complex problems.
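The shape of this pipeline can be sketched with linear discriminant analysis standing in for the MANOVA-derived discriminant function; the three synthetic envelope/frequency/phase features and class centres below are our assumptions, not the thesis's.

```python
# Hedged sketch: PCA for data reduction, then a linear discriminant over
# synthetic time-domain modulation features (envelope, frequency, phase).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_per_class = 200
# One feature-space cluster per (hypothetical) modulation type.
centres = {"AM": [2.0, 0.1, 0.1], "FM": [0.2, 2.0, 0.5], "PSK": [0.2, 0.3, 2.0]}
X = np.vstack([c + rng.normal(0, 0.4, (n_per_class, 3)) for c in centres.values()])
y = np.repeat(list(centres), n_per_class)

clf = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```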
7

Jubock, Z. H. "Statistical models and techniques for dendrochronology." Thesis, University of Nottingham, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381088.

8

Petrova, E. N. "Statistical techniques in software reliability quantification." Thesis, University of Liverpool, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.283677.

9

Iloni, Karen. "Biplot graphical display techniques." Master's thesis, University of Cape Town, 1991. http://hdl.handle.net/11427/17119.

Abstract:
Includes bibliography.
The thesis deals with graphical display techniques based on the singular value decomposition. These techniques, known as biplots, are used to find low-dimensional representations of multidimensional data matrices. The aim of the thesis is to provide a review of biplots for a practical statistician who is not familiar with the area. It therefore focuses on the underlying theory, assuming a standard statistician's knowledge of matrix algebra, and on the interpretation of the various plots. The topic falls in the realm of descriptive statistics. As such, the methods are chiefly exploratory. They are a means of summarising the data. The data matrix is represented in a reduced number of dimensions, usually two, for simplicity of display. The aim is to summarise the information in the matrix and to present a visual representation of this information. The aim in using graphical display techniques is that the "gain in interpretability far exceeds the loss in information" (Greenacre, 1984). A graphical description is often easier to understand than a numerical one. Histograms and pie charts are familiar forms of data representation to many people with no other, or very rudimentary, statistical understanding. These are applicable to univariate data. For multivariate data sets, univariate methods do not reveal interesting relationships in the data set as a whole. In addition, a biplot can be presented in a manner which can be readily understood by non-statistically minded individuals. Greenacre (1984) comments that only in recent years has the value of statistical graphics been recognised. Young (1989) notes that recently there has been a shift in emphasis among statisticians towards exploratory data analysis methods. This school of thought was given momentum by the publication of the book "Exploratory Data Analysis" (Tukey, 1977). The trend has been facilitated by advances in computer technology which have increased both the power and the accessibility of computers. Biplot techniques include the popular correspondence analysis. The original proponents of correspondence analysis (among them Benzecri) reject probabilistic modelling. At the other extreme, some view graphical display techniques as a mere preliminary to the more traditional statistical approaches. Under the latter view, graphical display techniques are used to suggest models and hypotheses. The emphasis in exploratory data techniques such as graphical displays is on 'getting a feel' for the data rather than on building models and testing hypotheses. These methods do not replace model building and hypothesis testing, but supplement them. The essence of the philosophy is that models are suggested by the data, rather than the frequently followed route of first fitting a model. Some work has gone into developing inferential methods, with hypothesis tests and associated p-values, for biplot-type techniques (Lebart et al., 1984; Greenacre, 1984). However, this aspect is not important if the techniques are viewed merely as exploratory. Chapter Two provides the mathematical concepts necessary for understanding biplots. Chapter Three explains exactly what a biplot is, and lays the theoretical framework for the biplot techniques that follow. The goal of this chapter is to provide a framework in which biplot techniques can be classified and described. Correlation biplots are described in Chapter Four. Chapter Five discusses the principal component biplot, and the link between these and principal component analysis is drawn. In Chapter Six, correspondence analysis is presented. In Chapter Seven practical issues such as choice of centre are discussed. Practical examples are presented in Chapter Eight. The aim is that these examples illustrate techniques commonly applicable in practice. Evaluation and choice of biplot is discussed in Chapter Nine.
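Since every biplot in the review rests on the singular value decomposition, a minimal sketch of the construction may help; the alpha = 1 (row-principal) factorisation and the synthetic matrix below are our illustrative choices.

```python
# Minimal principal-component biplot built directly on the SVD (illustrative).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 4))   # synthetic data matrix
Xc = X - X.mean(axis=0)                                  # column-centre first

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
G = U[:, :2] * s[:2]      # row (observation) markers: U * S
H = Vt[:2].T              # column (variable) markers: V

fig, ax = plt.subplots()
ax.scatter(G[:, 0], G[:, 1], s=10)                       # observations
for j, (hx, hy) in enumerate(H):
    ax.arrow(0, 0, hx, hy, head_width=0.05)              # variable axes
    ax.annotate(f"var{j}", (hx, hy))
plt.show()
```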
10

Wiltshire, S. E. "Statistical techniques for regional flood-frequency analysis." Thesis, University of Newcastle Upon Tyne, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378267.

11

Zafirakou, Antigoni Koulouris. "Statistical analysis techniques in water resources engineering." Thesis, Connect to Dissertations & Theses @ Tufts University, 2000.

Abstract:
Thesis (Ph. D.)--Tufts University, 2000.
Adviser: Richard M. Vogel. Submitted to the Dept. of Civil and Environmental Engineering. Includes bibliographical references (leaves 206-214). Access restricted to members of the Tufts University community. Also available via the World Wide Web.
12

HaKong, L. "Expert systems techniques for statistical data analysis." Thesis, London South Bank University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381956.

13

Stark, J. Alex. "Statistical model selection techniques for data analysis." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390190.

14

Lakshminarayanan, S. "Process characterization and control using multivariate statistical techniques." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/nq21588.pdf.

15

Chapman, Geoffrey S. "Statistical estimation techniques for use in robotic tracking." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0020/MQ58021.pdf.

16

Fairbanks, James Paul. "Graph analysis combining numerical, statistical, and streaming techniques." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54972.

Abstract:
Graph analysis uses graph data collected on physical, biological, or social phenomena to shed light on the underlying dynamics and behavior of the agents in those systems. Many fields contribute to this topic, including graph theory, algorithms, statistics, machine learning, and linear algebra. This dissertation advances a novel framework for dynamic graph analysis that combines numerical, statistical, and streaming algorithms to provide deep understanding of evolving networks. For example, one may be interested in how the influence structure changes over time. These disparate techniques each contribute a fragment to understanding the graph; their combination, however, allows us to understand dynamic behavior and graph structure. Spectral partitioning methods rely on eigenvectors for solving data analysis problems such as clustering. Eigenvectors of large sparse systems must be approximated with iterative methods. This dissertation analyzes how data analysis accuracy depends on the numerical accuracy of the eigensolver. This leads to new bounds on the residual tolerance necessary to guarantee correct partitioning. We present a novel stopping criterion for spectral partitioning guaranteed to satisfy the Cheeger inequality, along with an empirical study of the performance on real-world networks such as web, social, and e-commerce networks. This work bridges the gap between numerical analysis and computational data analysis.
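A hedged sketch of the setting studied: approximate the Fiedler vector of a normalized Laplacian with an iterative eigensolver whose tolerance mimics early stopping, then partition by sign and check the cut's conductance. The graph parameters and tolerance are our choices, not the dissertation's.

```python
# Illustrative spectral bipartition of a two-block random graph.
import numpy as np
import networkx as nx
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

G = nx.planted_partition_graph(2, 100, p_in=0.1, p_out=0.01, seed=3)
L = sp.csr_matrix(nx.normalized_laplacian_matrix(G))

# Smallest eigenpairs of L via the largest of 2I - L (ARPACK converges much
# faster in 'LM' mode); `tol` plays the role of the eigensolver residual
# tolerance whose effect on partition accuracy the dissertation analyses.
M = 2 * sp.identity(L.shape[0]) - L
vals, vecs = eigsh(M, k=2, which="LM", tol=1e-4)
fiedler = vecs[:, np.argmin(vals)]        # corresponds to lambda_2 of L

side = {n for n, v in zip(G.nodes, fiedler) if v >= 0}   # sign partition
print("cut conductance:", nx.conductance(G, side))
```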
17

Silva, Jesús, Naveda Alexa Senior, Guliany Jesús García, Núñez William Niebles, and Palma Hugo Hernández. "Forecasting Electric Load Demand through Advanced Statistical Techniques." Institute of Physics Publishing, 2020. http://hdl.handle.net/10757/652142.

Abstract:
Traditional forecasting models have been widely used for decision-making in production, finance and energy. Such is the case of ARIMA models, developed in the 1970s by George Box and Gwilym Jenkins [1], which incorporate characteristics of past values of the same series, according to their autocorrelation. This work compares advanced statistical methods for determining the demand for electricity in Colombia, including SARIMA, econometric and Bayesian methods.
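As a hedged sketch of the kind of SARIMA fit being compared (the order, the synthetic monthly series, and the library choice below are our illustrative assumptions, not the paper's specification):

```python
# Fit a seasonal ARIMA to a synthetic monthly demand series and forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)
t = np.arange(120)
demand = 100 + 0.3 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)
y = pd.Series(demand, index=pd.date_range("2010-01", periods=120, freq="MS"))

model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
res = model.fit(disp=False)
print(res.forecast(steps=12))            # one-year-ahead demand forecast
```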
18

Balıklı, Umut Başak Tokatlı Figen. "Use of multivariate statistical techniques in HACCP programs." [s.l.]: [s.n.], 2003. http://library.iyte.edu.tr/tezler/master/gidamuh/T000292.rar.

19

Bracho, Belkys Yasmin. "Application of statistical techniques to modeling crop growth." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0010109.

20

Hackl, Peter, and Michaela Denk. "Data Integration: Techniques and Evaluation." Austrian Statistical Society, 2004. http://epub.wu.ac.at/5631/1/435%2D1317%2D1%2DSM.pdf.

Abstract:
Within the DIECOFIS framework, ec3, the Division of Business Statistics from the Vienna University of Economics and Business Administration and ISTAT worked together to find methods to create a comprehensive database of enterprise data required for taxation microsimulations via integration of existing disparate enterprise data sources. This paper provides an overview of the broad spectrum of investigated methodology (including exact and statistical matching as well as imputation) and related statistical quality indicators, and emphasises the relevance of data integration, especially for official statistics, as a means of using available information more efficiently and improving the quality of a statistical agency's products. Finally, an outlook on an empirical study comparing different exact matching procedures in the maintenance of Statistics Austria's Business Register is presented.
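As a toy illustration of the simplest technique in that spectrum, exact matching, the sketch below links two invented enterprise files on a shared identifier and flags the records that would fall to statistical matching or imputation:

```python
# Exact matching of two toy enterprise files on a common key (illustrative).
import pandas as pd

register = pd.DataFrame({"ent_id": [1, 2, 3, 4], "employees": [10, 250, 40, 7]})
tax_file = pd.DataFrame({"ent_id": [2, 3, 5], "tax_paid": [9.1, 1.2, 0.4]})

linked = register.merge(tax_file, on="ent_id", how="outer", indicator=True)
print(linked)
# Records found in only one source are candidates for statistical matching
# or imputation rather than exact linkage.
print(linked[linked["_merge"] != "both"])
```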
21

Nashimoto, Kane. "Multiple comparison techniques for order restricted models /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144445.

22

Tonini, Enrico. "General Profile Monitoring Through Nonparametric Techniques." Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3423081.

Abstract:
This Ph.D. thesis is devoted to Statistical Process Control (SPC) methods for monitoring over time the stability of a relation between two variables (a profile). Very often in the literature the functional form of the relation is assumed to be known, whereas in this work we concentrate on generic and unknown relations which have to be estimated with the usual nonparametric regression techniques. The original contributions are two, presented in Chapters 2 and 3 respectively. In Chapter 1 we give a brief overview of the topic in order to familiarise the reader with these specific problems of SPC applications, and we introduce the original parts of this work. In Chapter 2 we develop and compare five new control charts for the on-line monitoring of unknown general, and not only linear, relations among variables over time under the assumption of normality of the errors; these charts combine in an original way the following techniques: self-starting methods, useful to drop the distinction between Phase I and Phase II of the analysis; well-known multivariate charting schemes such as MEWMA and CUSCORE; and nonparametric testing techniques such as wavelet methods and kernel linear smoothing. In Chapter 3, instead, we construct a test statistic useful for checking, with a completely nonparametric procedure, the stability of a process retrospectively, thus off-line. Both the second and third chapters are structured in the following way: brief literature review; framework and model considered in our study; simulation study; a section with some useful complements on the topics and the related research carried out; conclusions and suggestions for future research.
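A stripped-down sketch of the flavour of on-line monitoring described here: estimate a reference profile with a kernel smoother, then track an EWMA of each new profile's residual mean square. The hand-rolled smoother, chart constants, and shift size are all our illustrative choices, not the thesis's charts.

```python
# Toy profile-monitoring loop: kernel-smoothed reference + EWMA of residuals.
import numpy as np

def kernel_smooth(x, y, grid, h=0.1):
    """Nadaraya-Watson estimate of y over `grid` with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 50)
reference = kernel_smooth(x, np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 50), x)

lam, ewma, limit = 0.2, 0.0, 0.05        # illustrative chart constants
for t in range(30):
    shift = 0.3 if t >= 20 else 0.0      # a real mean shift enters at t = 20
    profile = np.sin(2 * np.pi * x) + shift + rng.normal(0, 0.1, 50)
    stat = np.mean((profile - reference) ** 2)
    ewma = lam * stat + (1 - lam) * ewma
    if ewma > limit:
        print(f"out-of-control signal at t={t}, EWMA={ewma:.3f}")
```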
23

Tuyiragize, Richard. "Multi-objective optimization techniques in electricity generation planning." Doctoral thesis, University of Cape Town, 2011. http://hdl.handle.net/11427/10720.

Abstract:
The objective of this research is to develop a framework of multi-objective optimization (MOO) models that are better capable of providing decision support for long-term electricity generation planning (EGP) in the context of insufficient electricity capacity, and to apply it to the electricity system of a developing country. The problem that motivated this study was a lack of EGP models in developing countries that keep pace with those countries' socio-economic and demographic dynamics. This research focused on two approaches: mathematical programming (MP) and system dynamics (SD). Detailed model descriptions, formulations, and implementation results are presented in the thesis, along with the observations and insights obtained during the course of this research.
24

Vumbukani, Bokang C. "Comparison of ridge and other shrinkage estimation techniques." Master's thesis, University of Cape Town, 2006. http://hdl.handle.net/11427/4364.

Abstract:
Includes bibliographical references.
Shrinkage estimation is an increasingly popular class of biased parameter estimation techniques, vital when the columns of the matrix of independent variables X exhibit dependencies or near dependencies. These dependencies often lead to serious problems in least squares estimation: inflated variances and mean squared errors of estimates, unstable coefficients, imprecision and improper estimation. Shrinkage methods allow for a little bias and at the same time introduce smaller mean squared errors and variances for the biased estimators, compared to those of unbiased estimators. However, shrinkage methods are based on the shrinkage factor, whose estimation depends on unknown values, often computed from the OLS solution. We argue that the instability of OLS estimates may have an adverse effect on the performance of shrinkage estimators. Hence a new method for estimating the shrinkage factors is proposed and applied to ridge and generalized ridge regression. We propose that the new shrinkage factors should be based on the principal components instead of the unstable OLS estimates.
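For readers new to the area: the ridge estimator at the heart of this discussion is beta(k) = (X'X + kI)^(-1) X'y, and a minimal sketch on a deliberately near-collinear synthetic design shows the stabilising effect of the shrinkage constant k (all numbers invented).

```python
# Ridge shrinkage on a near-collinear design; k = 0 recovers unstable OLS.
import numpy as np

rng = np.random.default_rng(6)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.01, n)         # nearly a copy of x1
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + rng.normal(0, 0.5, n)

for k in (0.0, 0.1, 1.0):
    beta = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)
    print(f"k={k}: beta={beta.round(2)}")
```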
25

Jansson, Mattias, and Jimmy Johansson. "Interactive Visualization of Statistical Data using Multidimensional Scaling Techniques." Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1716.

Abstract:

This study has been carried out in cooperation with Unilever and partly with the EC-funded project Smartdoc IST-2000-28137.

In areas of statistics and image processing, both the amount of data and the dimensions are increasing rapidly and an interactive visualization tool that lets the user perform real-time analysis can save valuable time. Real-time cropping and drill-down considerably facilitate the analysis process and yield more accurate decisions.

In the Smartdoc project, there has been a request for a component used for smart filtering in multidimensional data sets. As the Smartdoc project aims to develop smart, interactive components to be used on low-end systems, the implementation of the self-organizing map algorithm proposes which dimensions to visualize.

Together with Dr. Robert Treloar at Unilever, the SOM Visualizer - an application for interactive visualization and analysis of multidimensional data - has been developed. The analytical part of the application is based on Kohonen’s self-organizing map algorithm. In cooperation with the Smartdoc project, a component has been developed that is used for smart filtering in multidimensional data sets. Microsoft Visual Basic and components from the graphics library AVS OpenViz are used as development tools.
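A tiny self-organizing map in plain NumPy sketches the algorithm behind the application (a hedged toy, not the SOM Visualizer's code; the grid size, rates, and data are our choices).

```python
# Minimal Kohonen SOM: competitive learning with a shrinking neighbourhood.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(size=(500, 5))          # 500 observations, 5 dimensions
rows, cols = 8, 8
grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
weights = rng.normal(size=(rows * cols, 5))

for t in range(2000):
    lr = 0.5 * np.exp(-t / 1000)          # decaying learning rate
    radius = 3.0 * np.exp(-t / 1000)      # shrinking neighbourhood radius
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))    # best-matching unit
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)           # grid distances
    h = np.exp(-d2 / (2 * radius ** 2))   # Gaussian neighbourhood function
    weights += lr * h[:, None] * (x - weights)

hits = [np.argmin(((weights - v) ** 2).sum(axis=1)) for v in data]
print(np.bincount(hits, minlength=rows * cols).reshape(rows, cols))
```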

26

Li, Qiao. "Data mining and statistical techniques applied to genetic epidemiology." Thesis, University of East Anglia, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.533716.

Abstract:
Genetic epidemiology is the study of the joint action of genes and environmental factors in determining the phenotypes of diseases. The twin study is a classic and important epidemiological tool, which can help to separate the underlying effects of genes and environment on phenotypes. Twin data have been widely examined using traditional genetic epidemiological methods. However, they provide a rich source of information related to many complex phenotypes that has the potential to be further explored and exploited. This thesis focuses on two major genetic epidemiological approaches, familial aggregation analysis and linkage analysis, using twin data from the TwinsUK Registry. Structural equation modelling (SEM) is a conventional method used in familial aggregation analysis, and is applied in this research to discover the underlying genetic and environmental influences on two complex phenotypes: coping strategies and osteoarthritis. However, SEM is a confirmatory method and relies on prior biomedical hypotheses. A new exploratory method, named MDS-C, combining multidimensional scaling and clustering, is developed in this thesis. It does not rely on prior hypothetical models and is applied to uncover underlying genetic determinants of bone mineral density (BMD). The results suggest that the genetic influence on BMD is site-specific. Haseman-Elston (H-E) regression is a conventional linkage analysis approach using the identity-by-descent (IBD) information between twins to detect quantitative trait loci (QTLs) which regulate the quantitative phenotype. However, it only considers the genetic effect from individual loci. Two new approaches, a pair-wise H-E regression (PWH-E) and a feature screening approach (FSA), are proposed in this research to detect QTLs allowing for gene-gene interaction. Simulation studies demonstrate that PWH-E and FSA have greater power to detect QTLs with interactions. Application to real-world BMD data identifies a set of potential QTLs, including 7 chromosomal loci consistent with previous genome-wide studies.
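A hedged sketch of the classical Haseman-Elston step that the thesis extends: regress the squared trait difference of twin pairs on IBD sharing, where a significantly negative slope suggests a QTL. The pairs below are simulated, not TwinsUK data.

```python
# Classical H-E regression on simulated twin pairs (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n_pairs = 400
ibd = rng.choice([0.0, 0.5, 1.0], size=n_pairs, p=[0.25, 0.5, 0.25])
# Pairs sharing more alleles IBD at a true QTL have more similar phenotypes.
trait_diff_sq = 2.0 - 1.2 * ibd + rng.gamma(2.0, 0.5, n_pairs)

fit = sm.OLS(trait_diff_sq, sm.add_constant(ibd)).fit()
print("slope:", fit.params[1], "p-value:", fit.pvalues[1])   # expect slope < 0
```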
27

Beadles, Joseph W., and Lee W. Schonenberg. "Statistical process control techniques for the telecommunications systems manager." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/38538.

Abstract:
Approved for public release; distribution is unlimited.
The purpose of this thesis is to provide personnel, who are undergoing Total Quality Leadership (TQL) implementation at their telecommunications-related command, an understanding of Statistical Process Controls (SPCs) and their potential application to telecommunications issues. Basic SPC tools common to most Total Quality programs are discussed. Advanced SPC methods including Analysis of Means (ANOM), Analysis of Variance (ANOVA), Weibull analysis and Taguchi Methods are also presented. Selected SPC training plans for both naval telecommunication commands and commercial telecommunication industry are examined. Finally, a case study of a telecommunications-related issue is provided to demonstrate an integrated approach to the use of SPCs.
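Of the basic tools discussed, the X-bar chart is the most common; a hedged sketch on simulated subgroup data follows (the call-time scenario is invented; A2 = 0.577 is the standard tabulated constant for subgroups of five).

```python
# X-bar control chart limits from subgroup means and the average range.
import numpy as np

rng = np.random.default_rng(9)
samples = rng.normal(loc=4.0, scale=0.5, size=(25, 5))   # 25 subgroups of 5

xbar = samples.mean(axis=1)
rbar = (samples.max(axis=1) - samples.min(axis=1)).mean()
A2 = 0.577                         # control-chart constant for subgroup size 5
centre = xbar.mean()
ucl, lcl = centre + A2 * rbar, centre - A2 * rbar
print(f"CL={centre:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
print("out-of-control subgroups:", np.where((xbar > ucl) | (xbar < lcl))[0])
```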
28

Hardy, Rebecca Jane. "Meta-analysis techniques in medical research : a statistical perspective." Thesis, London School of Hygiene and Tropical Medicine (University of London), 1995. http://researchonline.lshtm.ac.uk/682268/.

Abstract:
Meta-analysis is now commonly used in medical research. However, there are statistical issues relating to the subject that require investigation, and some are considered here from both a methodological and a practical perspective. Each of the fixed effect and the random effects models for meta-analysis is based on certain assumptions, and the validity of these is investigated. A formal test of the homogeneity assumption made in the fixed effect model may be performed. Since the test has low power, simulation was used to investigate the power under various conditions. The random effects model incorporates a between-study component of variance into the model. A likelihood based method was used to obtain a confidence interval for this variance and also to provide an interval for the overall treatment effect which takes into account the fact that the between-study variance is estimated, rather than assuming it to be known. In order to obtain confidence intervals for the treatment effect for both the fixed effect and the random effects models, distributional assumptions of normality are usually made. Such assumptions may be checked using q-q plots of the residuals obtained for each trial in the meta-analysis. In both meta-analysis models it is assumed that the weight allocated to each study is known, when in fact it must be estimated from the data. The effect of estimating the weights on the overall treatment effect estimate, its confidence intervals, the between-study variance estimate and the test statistic for homogeneity is investigated by both analytic and simulation methods. It is shown how meta-analysis methods may be used to analyse multicentre trials of a paired cluster randomised design. Meta-analysis techniques are found to be preferable to previously published methods specifically developed for the analysis of such designs, which produce biased and potentially misleading results when a large treatment effect is present.
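The two models and the homogeneity test can be sketched in a few lines; the inverse-variance fixed effect, Cochran's Q, and the DerSimonian-Laird between-study variance below are standard textbook formulae, while the six trial effects are invented.

```python
# Fixed-effect pooling, Q homogeneity test, and DerSimonian-Laird random effects.
import numpy as np
from scipy import stats

theta = np.array([0.10, 0.30, 0.25, 0.05, 0.40, 0.20])   # trial effect estimates
var = np.array([0.02, 0.05, 0.03, 0.04, 0.06, 0.03])     # within-trial variances

w = 1 / var
fixed = (w * theta).sum() / w.sum()
Q = (w * (theta - fixed) ** 2).sum()
p_hom = stats.chi2.sf(Q, df=len(theta) - 1)               # homogeneity test

tau2 = max(0.0, (Q - (len(theta) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
w_re = 1 / (var + tau2)                                   # random-effects weights
random_eff = (w_re * theta).sum() / w_re.sum()
print(f"fixed={fixed:.3f}, Q p={p_hom:.3f}, tau2={tau2:.4f}, random={random_eff:.3f}")
```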
29

To, Hing-yan. "Statistical Analysis and Design Techniques for Analog VLSI Circuits." The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487928649989917.

30

Chinea, Ríos Mara. "Advanced techniques for domain adaptation in Statistical Machine Translation." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/117611.

Abstract:
Statistical Machine Translation is a subfield of computational linguistics that investigates how to use computers in the process of translating a text from one human language to another. Statistical machine translation is the most popular approach used to build these automatic translation systems. The quality of such systems depends to a large extent on the translation examples employed during the training and adaptation of the models. The data sets employed are obtained from a wide variety of sources, and in many cases the most suitable data for a specific domain may not be at hand. Given this problem of data scarcity, the main idea for solving it is to find the data sets that are most suitable for training or adapting a translation system. In this sense, this thesis proposes a set of data selection techniques that identify the bilingual data most relevant to a task, extracted from a larger data collection. As a first step in this thesis, the data selection techniques are applied to improve the translation quality of systems under the phrase-based paradigm. These techniques are based on the concept of continuous representations of words or sentences in a vector space. The experimental results show that the techniques used are effective for different languages and domains. The Neural Machine Translation paradigm was also applied in this thesis. Within this paradigm, we investigate how the data selection techniques previously validated in the phrase-based paradigm can be applied. The work focused on two different system adaptation tasks. On the one hand, we investigate how to increase the translation quality of the system by enlarging the size of the training set. On the other hand, the data selection method was used to create a synthetic data set. The experiments were carried out for different domains, and the translation results obtained are convincing for both tasks. Finally, it should be noted that the techniques developed and presented throughout this thesis can easily be implemented within a real translation scenario.
Chinea Ríos, M. (2019). Advanced techniques for domain adaptation in Statistical Machine Translation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/117611
31

Beadles, Joseph W. Schonenberg Lee W. "Statistical process control techniques for the telecommunications systems manager." Monterey, Calif. : Naval Postgraduate School, 1992. http://handle.dtic.mil/100.2/ADA249122.

Abstract:
Thesis (M.S. in Telecommunications Systems Management)--Naval Postgraduate School, March 1992.
Thesis Advisors: Boger, Dan C.; Sessions, Sterling D. "March, 1992." Includes bibliographical references (p. 96-98). Also available in print.
32

Alden, Kieran. "Simulation and statistical techniques to explore lymphoid tissue organogenesis." Thesis, University of York, 2012. http://etheses.whiterose.ac.uk/3220/.

Abstract:
Secondary lymphoid organs have a key role in the initiation of adaptive immune responses to infection. Organogenesis occurs in foetal development, and the use of genetic tools, imaging technologies, and ex vivo culture systems has provided significant insights into the cellular components and associated signalling pathways that are involved. However, such approaches tend to be reductionist and descriptive, focusing on the contribution of individual components, and cannot fully explain how lymphoid organs develop through interaction between biological components. In this study, a set of simulation and statistical tools has been developed that provides further insights into the molecular and biophysical mechanisms of lymphoid tissue organogenesis. Specifically, the formation of Peyer's Patches, gut-associated secondary lymphoid organs, is examined. In collaboration with experimental immunologists, a structured process of design and calibration of a computer simulation of the biological process has been conducted, leading to the development of a publicly accessible scientific tool in which cell behaviour emerges that is statistically similar to that observed in ex vivo culture. Robust biological hypotheses can be generated through use of the tool to perform in silico experimentation that simulates different physiological conditions. A lack of available statistical tools to analyse in silico simulation results has been addressed through the development and release of the spartan toolkit, a set of techniques that can suggest the influence that pathways and components have on simulation behaviour, offering valuable biological insight into the system being explored. An analysis of simulation results using spartan suggests that the influence of biological pathways on tissue formation changes during development, in contrast to hypotheses in the literature that suggest the process is chemokine-driven. The data presented suggest the development period is biphasic, with cell adhesion the key factor early in development and chemokine expression influential at a later point. Through novel application of the statistical tools in spartan to perform a time-lapse analysis of cell behaviour, it is suggested this change in phase occurs between hours 24 and 36. Novel in silico experimentation has suggested the key biological factors in causing cell aggregation, and suggested a role for LTin cells in limiting the size and number of Peyer's Patches. A range of potential laboratory investigations has been suggested that could validate whether these simulation-derived hypotheses hold.
33

Zadeh, Pooneh Bagheri. "Statistical and perceptual based image and video compression techniques." Thesis, Glasgow Caledonian University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.688254.

34

Cimarelli, Andrea <1983&gt. "Statistical analysis and simulation techniques in wall-bounded turbulence." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3821/1/Cimarelli_Andrea_tesi.pdf.

Abstract:
The present work is devoted to the assessment of the physics of the energy fluxes in the space of scales and in the physical space of wall-turbulent flows. The generalized Kolmogorov equation is applied to DNS data of a turbulent channel flow in order to describe the energy flux paths from production to dissipation in the augmented space of wall-turbulent flows. This multidimensional description is shown to be crucial for understanding the formation and sustainment of the turbulent fluctuations fed by the energy fluxes coming from the near-wall production region. An unexpected behavior of the energy fluxes emerges from this analysis, consisting of spiral-like paths in the combined physical/scale space, where the controversial reverse energy cascade plays a central role. The observed behavior conflicts with the classical notion of the Richardson/Kolmogorov energy cascade and may have strong repercussions on both theoretical and modeling approaches to wall-turbulence. To this aim, a new relation stating the leading physical processes governing the energy transfer in wall-turbulence is suggested and shown to be able to capture most of the rich dynamics of the shear-dominated region of the flow. Two dynamical processes are identified as driving mechanisms for the fluxes, one in the near-wall region and a second one further away from the wall. The former, stronger one is related to the dynamics involved in the near-wall turbulence regeneration cycle. The second suggests an outer self-sustaining mechanism which is asymptotically expected to take place in the log-layer and could explain the debated mixed inner/outer scaling of the near-wall statistics. The same approach is applied for the first time to a filtered velocity field. A generalized Kolmogorov equation specialized for filtered velocity fields is derived and discussed. The results show what effects the subgrid scales have on the resolved motion in both physical and scale space, singling out the prominent role of the filter length compared to the cross-over scale between the production-dominated scales and the inertial range, lc, and the reverse energy cascade region, lb. The systematic characterization of the resolved and subgrid physics as functions of the filter scale and of the wall distance is shown to be instrumental for a correct use of LES models in the simulation of wall-turbulent flows. Taking inspiration from the new relation for the energy transfer in wall turbulence, a new class of LES models is also proposed. Finally, the generalized Kolmogorov equation specialized for filtered velocity fields is shown to be a helpful statistical tool for the assessment of LES models and for the development of new ones. As an example, some classical purely dissipative eddy viscosity models are analyzed via an a priori procedure.
35

Cimarelli, Andrea <1983&gt. "Statistical analysis and simulation techniques in wall-bounded turbulence." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3821/.

Abstract:
The present work is devoted to the assessment of the physics of the energy fluxes in the space of scales and in the physical space of wall-turbulent flows. The generalized Kolmogorov equation is applied to DNS data of a turbulent channel flow in order to describe the energy flux paths from production to dissipation in the augmented space of wall-turbulent flows. This multidimensional description is shown to be crucial for understanding the formation and sustainment of the turbulent fluctuations fed by the energy fluxes coming from the near-wall production region. An unexpected behavior of the energy fluxes emerges from this analysis, consisting of spiral-like paths in the combined physical/scale space, where the controversial reverse energy cascade plays a central role. The observed behavior conflicts with the classical notion of the Richardson/Kolmogorov energy cascade and may have strong repercussions on both theoretical and modeling approaches to wall-turbulence. To this aim, a new relation stating the leading physical processes governing the energy transfer in wall-turbulence is suggested and shown to be able to capture most of the rich dynamics of the shear-dominated region of the flow. Two dynamical processes are identified as driving mechanisms for the fluxes, one in the near-wall region and a second one further away from the wall. The former, stronger one is related to the dynamics involved in the near-wall turbulence regeneration cycle. The second suggests an outer self-sustaining mechanism which is asymptotically expected to take place in the log-layer and could explain the debated mixed inner/outer scaling of the near-wall statistics. The same approach is applied for the first time to a filtered velocity field. A generalized Kolmogorov equation specialized for filtered velocity fields is derived and discussed. The results show what effects the subgrid scales have on the resolved motion in both physical and scale space, singling out the prominent role of the filter length compared to the cross-over scale between the production-dominated scales and the inertial range, lc, and the reverse energy cascade region, lb. The systematic characterization of the resolved and subgrid physics as functions of the filter scale and of the wall distance is shown to be instrumental for a correct use of LES models in the simulation of wall-turbulent flows. Taking inspiration from the new relation for the energy transfer in wall turbulence, a new class of LES models is also proposed. Finally, the generalized Kolmogorov equation specialized for filtered velocity fields is shown to be a helpful statistical tool for the assessment of LES models and for the development of new ones. As an example, some classical purely dissipative eddy viscosity models are analyzed via an a priori procedure.
36

Batidzirai, Jesca Mercy. "Randomization in a two armed clinical trial: an overview of different randomization techniques." Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/395.

Abstract:
Randomization is the key element of any sensible clinical trial. It is the only way we can be sure that the patients have been allocated into the treatment groups without bias and that the treatment groups are almost similar before the start of the trial. The randomization schemes used to allocate patients into the treatment groups play a role in achieving this goal. This study uses SAS simulations to perform categorical data analysis and to compare two main randomization schemes in dental studies, where samples are small: unrestricted randomization (simple randomization) and restricted randomization (the minimization method). Results show that minimization produces almost equally sized treatment groups, whereas simple randomization is weak at balancing prognostic factors. Nevertheless, simple randomization can also produce balanced groups, even in small samples, by chance. Statistical power is also higher when minimization is used than when simple randomization is used, but bigger samples might be needed to boost the power.
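A minimal deterministic minimization sketch in the Pocock-Simon spirit (real implementations usually add a biased coin; the prognostic factors and levels here are hypothetical):

```python
# Each new patient goes to the arm that minimises prognostic-factor imbalance.
import numpy as np

rng = np.random.default_rng(10)
factors = {"age": ["young", "old"], "severity": ["mild", "severe"]}
counts = {(f, lvl, arm): 0 for f in factors for lvl in factors[f] for arm in (0, 1)}

def assign(patient):
    """Return the arm whose assignment leaves the smaller total imbalance."""
    scores = []
    for arm in (0, 1):
        scores.append(sum(abs((counts[(f, patient[f], arm)] + 1)
                              - counts[(f, patient[f], 1 - arm)])
                          for f in factors))
    arm = int(np.argmin(scores))          # ties go to arm 0 (deterministic)
    for f in factors:
        counts[(f, patient[f], arm)] += 1
    return arm

arms = [assign({f: str(rng.choice(lv)) for f, lv in factors.items()})
        for _ in range(40)]
print("arm sizes:", arms.count(0), arms.count(1))
```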
37

Meng, Xiaojun. "Batch process monitoring using multiway techniques." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250189.

38

Meinhardt, Llopis Enric. "Morphological and statistical techniques for the analysis of 3D images." Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/22719.

Abstract:
This thesis proposes a tree data structure to encode the connected components of level sets of 3D images. This data structure is applied as a main tool in several proposed applications: 3D morphological operators, medical image visualization, analysis of color histograms, object tracking in videos and edge detection. Motivated by the problem of edge linking, the thesis contains also an study of anisotropic total variation denoising as a tool for computing anisotropic Cheeger sets. These anisotropic Cheeger sets can be used to find global optima of a class of edge linking functionals. They are also related to some affine invariant descriptors which are used in object recognition, and this relationship is laid out explicitly.
39

Adamakis, Sotiris. "Application of statistical analysis techniques to solar and stellar phenomena." Thesis, University of Central Lancashire, 2009. http://clok.uclan.ac.uk/20908/.

Abstract:
Currently, solar observers are investigating spectroscopic images of the Sun's outermost atmosphere (the corona), which are challenging long-held views on the density and temperature structure of this environment. The corona is "filled" with magnetic strands, but determining their precise nature is not straightforward. One way of revealing the nature of the coronal heating mechanism is by comparing simple theoretical one-dimensional hydrostatic loop models with observations of the temperature and/or density structure along these features. The most well-known method for dealing with comparisons like this is the chi-squared approach. In this research we consider the restrictions imposed by this approach and present an alternative way of making model comparisons using Bayesian statistics. In order to quantify our beliefs we use Bayes factors and information criteria such as AIC and BIC. Three simulated data-sets are analysed in order to validate the procedure and assess the effects of varying error bar size. Another three data-sets (Ugarte-Urra et al., 2005; Priest et al., 2000; Young et al., 2007) are analysed using the method described above. For the Ugarte-Urra et al. and Young et al. data-sets, we conclude apex-dominant heating is the likely heating candidate, whereas the Priest et al. data-set implies basal heating. Note that these new results (regarding the Ugarte-Urra et al. and Priest et al. data-sets) differ from those obtained using the chi-squared statistic. The second research project involves extensive model comparison against observed cooling curves of solar flare plasma. After a solar flare erupts, flare loops form which cool over thousands of seconds. How the plasma cools over time is investigated. In this case, we test the adequacy of the zero-dimensional EBTEL (Enthalpy-Based Thermal Evolution of Loops) model as introduced by Klimchuk, Patsourakos, and Cargill (2008). An interesting approach here is to define the form of the non-thermal heating input to the system and compare it with the thermal heating input. For the data-set under investigation (Raftery et al., 2009) a full-Gaussian energy profile is proposed. Also, from the data it is not possible to distinguish which of the thermal or non-thermal heat flux is more dominant, so both can be equally considered for the temperature, density and pressure evolution of the system. Finally, the last part of this research is dedicated to recurrent nova outbursts. RS Ophiuchi is a nova produced by a white dwarf star and a red giant. In this case the white dwarf steadily accretes gas onto its surface from the red giant's outer atmosphere. About every twenty years, enough material is accreted onto the white dwarf's surface to produce an eruption. Over the past one hundred years at least five such outbursts have been observed. As another application of Bayesian model comparison techniques, curve-fitting models are tested against light curves of RS Ophiuchi outbursts in order to decide upon the one that best describes the data. Furthermore, the magnitude of the star is analysed using wavelet analysis techniques. Ways of deriving the Cone of Influence are presented. An outcome of this analysis is that we can quantitatively confirm that an outburst occurred around November 26, 1945, which was not recorded due to seasonal gaps in observation. This was originally proposed by Oppenheimer and Mattei (1993) but was never accepted as a confirmed outburst. Also, this method reveals a pre-outburst signal in the light curve, and the way in which wavelet analysis can be beneficial for future outburst predictions is presented.
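The information-criterion comparison used throughout can be sketched on synthetic data; the Gaussian BIC formula below is standard, and the two polynomial 'curve-fitting models' are stand-ins for those tested against the light curves.

```python
# Compare two curve models by BIC on a synthetic cooling-style curve.
import numpy as np

rng = np.random.default_rng(11)
t = np.linspace(0, 1, 60)
y = 5 * np.exp(-3 * t) + rng.normal(0, 0.1, 60)          # synthetic data

def bic(y, yhat, k):
    """Gaussian BIC up to a constant: n*log(RSS/n) + k*log(n)."""
    n = len(y)
    rss = ((y - yhat) ** 2).sum()
    return n * np.log(rss / n) + k * np.log(n)

linear = np.polyval(np.polyfit(t, y, 1), t)              # 2-parameter model
cubic = np.polyval(np.polyfit(t, y, 3), t)               # 4-parameter model
print(f"BIC linear={bic(y, linear, 2):.1f}, cubic={bic(y, cubic, 4):.1f}")
# The model with the lower BIC is preferred; the penalty k*log(n) guards
# against the extra parameters of the more flexible fit.
```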
40

Ali, Qazi Mazhar. "Statistical classification techniques in the analysis of remotely sensed images." Thesis, University of Oxford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335840.

41

Thompson, Paul. "Statistical techniques for extreme wave condition analysis in coastal design." Thesis, University of Plymouth, 2009. http://hdl.handle.net/10026.1/2636.

Abstract:
The study of the behaviour of the extreme values of a variable such as wave height is very important in engineering applications such as flood risk assessment and coastal design. Storm wave modelling usually adopts a univariate extreme value theory approach, essentially identifying the extreme observations of one variable and fitting a standard extreme value distribution to these values. Often it is of interest to understand how extremes of a variable such as wave height depend on a covariate such as wave direction. An important associated concept is that of the return level, a value that is expected to be exceeded once in a certain time period. The main areas of research discussed in this thesis involve making improvements to the way that extreme observations are identified and to the use of quantile regression as an alternative methodology for understanding the dependence of extreme values on a covariate. Both areas of research provide developments to existing return level methodology, thus enhancing the accuracy of predicted future storm wave events. We illustrate the methodology that we have developed using both coastal and offshore wave data sets. In particular, we present an automated and computationally inexpensive method to select the threshold used to identify observations for extreme value modelling. Our method is based on the distribution of model parameter estimates across a range of thresholds. We also assess the effect of the uncertainty associated with threshold selection on return level estimation by using a bootstrap procedure. Furthermore, we extend our approach so that the selection of the threshold can also depend on the value of a covariate such as wave direction. As a by-product of our methodological development we have improved existing techniques for estimating and making inference about the parameters of a standard extreme value distribution. We also present a new technique that extends existing Bayesian quantile regression methodology by modelling the dependence of a quantile of one variable on the values of another using a natural cubic spline. Inference is based on the posterior density of the spline and an associated smoothing parameter, and is performed by means of a specially tuned Markov chain Monte Carlo algorithm. We show that our nonparametric methodology provides more flexible modelling than the current polynomial-based approach for a range of examples.
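The threshold-stability idea behind the automated selection method can be sketched as follows: fit a generalized Pareto distribution to exceedances over a range of candidate thresholds and look for where the shape estimate settles down. The Gumbel stand-in for wave heights is our assumption, not the thesis's data.

```python
# GPD fits to exceedances over increasing thresholds (parameter stability).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(12)
waves = rng.gumbel(loc=2.0, scale=0.8, size=5000)        # stand-in wave heights

for u in np.quantile(waves, [0.90, 0.93, 0.95, 0.97, 0.99]):
    excess = waves[waves > u] - u
    shape, _, scale = genpareto.fit(excess, floc=0)      # location fixed at 0
    print(f"u={u:.2f}: n={len(excess):4d}, shape={shape:+.3f}, scale={scale:.3f}")
```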
42

Dunsmore, William. "The application of statistical techniques to mechanical design and redesign." Thesis, University of Southampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299608.

43

Loh, M. J. "Application of Statistical Pattern Recognition techniques to analysis of thermograms." Thesis, University of Cambridge, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382226.

44

Choudary, Omar-Salim. "Efficient multivariate statistical techniques for extracting secrets from electronic devices." Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708342.

45

Honnor, Thomas R. "Some spatial statistical techniques with applications to cellular imaging data." Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/97940/.

Abstract:
The aim of this thesis is to provide techniques for the analysis of a variety of types of spatial data, each corresponding to one of three biological questions on the function of the protein TACC3 during mitosis. A starting point in each investigation is the interpretation of the biological question and an understanding of the form of the available data, from which a mathematical representation of the data and a corresponding statistical problem are developed. The thesis begins with a description of a methodology for application to two collections of (marked) point patterns to determine the significance of differences in their structure, achieved through comparison of summary statistics and quantification of the significance of such differences by permutation tests. A methodology is then proposed for application to a pair of spatio-temporal processes to estimate their individual temporal evolutions, including ideas from optimal transportation theory, and a test of dependence between such estimators. The thesis concludes with a proposed model for line data, designed to approximate the mitotic spindle structure using trajectories on the surface of spheroids, and a comparison score to compare model fit between models and/or observations. The results of the methodologies when applied to simulated data are presented as part of investigations into their validity and power. Application to biological data indicates that TACC3 influences microtubule structure during mitosis at a range of scales, supporting and extending previous investigations. Each of the methodologies is designed to require minimal assumptions and numbers of parameters, resulting in techniques which may be applied more widely to similar biological data from additional experiments or to data arising from other fields.
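The permutation-test idea in the first methodology can be sketched with a simple summary statistic, mean nearest-neighbour distance; the two groups of simulated patterns and the statistic itself are our illustrative choices, not the thesis's.

```python
# Permutation test comparing a point-pattern summary between two groups.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(13)

def mean_nn(points):
    """Mean nearest-neighbour distance of a single point pattern."""
    d, _ = cKDTree(points).query(points, k=2)   # k=2: first non-self neighbour
    return d[:, 1].mean()

group_a = [rng.uniform(0, 1, (50, 2)) for _ in range(20)]             # spread out
group_b = [np.clip(rng.normal(0.5, 0.15, (50, 2)), 0, 1) for _ in range(20)]
stats_all = np.array([mean_nn(p) for p in group_a + group_b])
observed = stats_all[:20].mean() - stats_all[20:].mean()

null = []
for _ in range(2000):                           # shuffle the group labels
    idx = rng.permutation(40)
    null.append(stats_all[idx[:20]].mean() - stats_all[idx[20:]].mean())
pval = (np.abs(null) >= abs(observed)).mean()
print(f"observed difference={observed:.4f}, permutation p={pval:.4f}")
```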
46

Lam, Remi Roger Alain Paul. "Surrogate modeling based on statistical techniques for multi-fidelity optimization." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90673.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 71-74).
Designing and optimizing complex systems generally requires the use of numerical models. However, it is often too expensive to evaluate these models at each step of an optimization problem. Instead, surrogate models can be used to explore the design space, as they are much cheaper to evaluate. Constructing a surrogate becomes challenging when different numerical models are used to compute the same quantity, but with different levels of fidelity (i.e., different levels of uncertainty in the models). In this work, we propose a method based on statistical techniques to build such a multi-fidelity surrogate. We introduce a new definition of fidelity in the form of a variance metric. This variance is characterized by expert opinion and can vary across the design space. Gaussian processes are used to create an intermediate surrogate for each model. The uncertainty of each intermediate surrogate is then characterized by a total variance, combining the posterior variance of the Gaussian process and the fidelity variance. Finally, a single multi-fidelity surrogate is constructed by fusing all the intermediate surrogates. One of the advantages of the approach is the multi-fidelity surrogate's capability to integrate models whose fidelity changes over the design space, thus relaxing the common assumption of hierarchical relationships among models. The proposed approach is applied to two aerodynamic examples: the computation of the lift coefficient of a NACA 0012 airfoil in the subsonic regime and of a biconvex airfoil in both the subsonic and supersonic regimes. In these examples, the multi-fidelity surrogate mimics the behavior of the higher-fidelity samples where available, and uses the lower-fidelity points elsewhere. The proposed method is also able to quantify the uncertainty of the multi-fidelity surrogate and identify whether the fidelity or the sampling is the principal source of this uncertainty.
by Rémi Lam.
S.M.
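The fusion step can be sketched with off-the-shelf Gaussian processes: one surrogate per fidelity level, each posterior variance inflated by an expert-assigned fidelity variance, and predictions combined by inverse total variance. The models, sample sizes, and variance values below are invented stand-ins, not the thesis's aerodynamic cases.

```python
# Toy multi-fidelity fusion of two GP surrogates by inverse total variance.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_hi(x):            # expensive, accurate model (stand-in)
    return np.sin(3 * x)

def f_lo(x):            # cheap, biased model (stand-in)
    return np.sin(3 * x) + 0.3 * x

x_hi = np.array([[0.1], [0.9]])                  # few high-fidelity samples
x_lo = np.linspace(0, 1, 15)[:, None]            # many low-fidelity samples
gp_hi = GaussianProcessRegressor(kernel=RBF(0.3)).fit(x_hi, f_hi(x_hi).ravel())
gp_lo = GaussianProcessRegressor(kernel=RBF(0.3)).fit(x_lo, f_lo(x_lo).ravel())

xq = np.linspace(0, 1, 5)[:, None]
mu_h, sd_h = gp_hi.predict(xq, return_std=True)
mu_l, sd_l = gp_lo.predict(xq, return_std=True)
var_h = sd_h ** 2 + 1e-4     # total variance: posterior + small fidelity term
var_l = sd_l ** 2 + 0.05     # the cheap model gets a larger fidelity variance
fused = (mu_h / var_h + mu_l / var_l) / (1 / var_h + 1 / var_l)
print(np.round(fused, 3))
```

Near the two high-fidelity samples the fused prediction follows the high-fidelity surrogate (its total variance is small there) and falls back on the low-fidelity surrogate elsewhere, mirroring the behaviour the abstract reports.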
47

De, Bonet Jeremy S. "Novel statistical multiresolution techniques for image synthesis, discrimination, and recognition." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10428.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (p. 194-201).
by Jeremy S. De Bonet.
M.S.
48

Cassady, Charles Richard. "Statistical quality control techniques using multilevel discrete product quality measures." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-06062008-151120/.

49

Pan, Jianjia. "Image segmentation based on the statistical and contour information." HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/1004.

50

何添賢 and Tim Yin Timothy Ho. "Forecasting with smoothing techniques for inventory control." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B42574286.
