
Dissertations / Theses on the topic 'Engineering - Statistical methods'



Consult the top 50 dissertations / theses for your research on the topic 'Engineering - Statistical methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Marco Almagro, Lluís. "Statistical methods in Kansei engineering studies." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/85059.

Full text
Abstract:
This PhD thesis deals with Kansei Engineering (KE), a technique for translating emotions elicited by products into technical parameters, and statistical methods that can benefit the discipline. The basic purpose of KE is discovering in which way some properties of a product convey certain emotions to its users. It is a quantitative method, and data are typically collected using questionnaires. Conclusions are reached when analyzing the collected data, normally using some kind of regression analysis. Kansei Engineering can be placed under the more general area of research of emotional design. The thesis starts by justifying the importance of emotional design. As the range of techniques used under the name of Kansei Engineering is rather vast and not very clear, the thesis develops a detailed definition of KE that serves the purpose of delimiting its scope. A model for conducting KE studies is then suggested. The model includes spanning the semantic space – the whole range of emotions the product can elicit – and the space of properties – the technical variables that can be modified in the design phase. After the data collection, the synthesis phase links both spaces; that is, discovers how several properties of the product elicit certain emotions. Each step of the model is explained in detail using a KE study specially performed for this thesis: the fruit juice experiment. The initial model is progressively improved during the thesis and data from the experiment are reanalyzed using the new proposals. Many practical concerns arise when looking at the above-mentioned model for KE studies (among many others, how many participants are used and how the data collection session is conducted). An extensive literature review is done with the aim of answering these and other questions. The most common applications of KE are also depicted, together with comments on particularly interesting ideas from several papers. The literature review also serves to list which are the most common tools used in the synthesis phase. The central part of the thesis focuses precisely on tools for the synthesis phase. Statistical tools such as quantification theory type I and ordinal logistic regression are studied in detail, and several improvements are suggested. In particular, a new graphical way to represent results from an ordinal logistic regression is proposed. An automatic learning technique, rough sets, is introduced and a discussion is included on its adequacy for KE studies. Several sets of simulated data are used to assess the behavior of the suggested statistical techniques, leading to some useful recommendations. No matter the analysis tools used in the synthesis phase, conclusions are likely to be flawed when the design matrix is not appropriate. A method to evaluate the suitability of design matrices used in KE studies is proposed, based on the use of two new indicators: an orthogonality index and a confusion index. The commonly forgotten role of interactions in KE studies is studied and a method to include an interaction in KE studies is suggested, together with a way to represent it graphically. Finally, the untreated topic of variability in KE studies is tackled in the last part of the thesis. A method (based on cluster analysis) for finding segments among subjects according to their emotional responses and a way to rank subjects based on their coherence when rating products (using an intraclass correlation coefficient) are proposed.
As many users of Kansei Engineering are not specialists in the interpretation of the numerical output from statistical techniques, visual representations for these two new proposals are included to aid understanding.
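Ordinal logistic regression is one of the synthesis-phase tools named in this abstract. The following minimal Python sketch (using statsmodels' OrderedModel) shows how such a model links product properties to an ordinal Kansei rating; the column names, factor levels and ratings are hypothetical, and this is not the thesis' own analysis.

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical Kansei data: two design properties and an ordinal emotion rating (1-3).
df = pd.DataFrame({
    "sweetness": [1, 2, 3, 1, 3, 2, 1, 3],
    "colour":    [0, 1, 0, 1, 1, 0, 1, 0],
    "refreshing": [1, 2, 3, 2, 2, 2, 1, 3],
})
df["refreshing"] = pd.Categorical(df["refreshing"], categories=[1, 2, 3], ordered=True)

# Fit a proportional-odds (ordinal logistic) model: rating ~ product properties.
model = OrderedModel(df["refreshing"], df[["sweetness", "colour"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())   # coefficients indicate how each property shifts the rating

In a real Kansei study the exogenous columns would be the factors of the design matrix and the response one word of the semantic scale.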
APA, Harvard, Vancouver, ISO, and other styles
2

Molaro, Mark Christopher. "Computational statistical methods in chemical engineering." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/111286.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 175-182).
Recent advances in theory and practice have introduced a wide variety of tools from machine learning that can be applied to data-intensive chemical engineering problems. This thesis covers applications of statistical learning spanning a range of relative importance of data versus existing detailed theory. In each application, the quantity and quality of data available from experimental systems are used in conjunction with an understanding of the theoretical physical laws governing system behavior, to the extent they are available. A detailed generative parametric model for optical spectra of multicomponent mixtures is introduced. The application of interest is the quantification of uncertainty associated with estimating the relative abundance of mixtures of carbon nanotubes in solution. This work describes a detailed analysis of sources of uncertainty in estimating the relative abundance of chemical species in solution from optical spectroscopy. In particular, the quantification of uncertainty in mixtures with parametric uncertainty in the pure component spectra is addressed. Markov chain Monte Carlo methods are utilized to quantify uncertainty in these situations, and the inaccuracy and potential for error in simpler methods are demonstrated. Strategies to improve estimation accuracy and reduce uncertainty in practical experimental situations are developed, including cases where multiple measurements are available and where data arrive sequentially. The utilization of computational Bayesian inference in chemometric problems shows great promise in a wide variety of practical experimental applications. A related deconvolution problem is addressed in which a detailed physical model is not available, but the objective of the analysis is to map from a measured vector-valued signal to a sum of an unknown number of discrete contributions. The data analyzed in this application are electrical signals generated from a free surface electro-spinning apparatus. In this information-poor system, MAP estimation is used to reduce the variance in estimates of the physical parameters of interest. The formulation of the estimation problem in a probabilistic context allows for the introduction of prior knowledge to compensate for a high-dimensional, ill-conditioned inverse problem. The estimates from this work are used to develop a productivity model, expanding on previous work and showing how the uncertainty from estimation impacts system understanding. A new machine learning based method for monitoring for anomalous behavior in production oil wells is reported. The method entails a transformation of the available time series of measurements into a high-dimensional feature space representation. This transformation yields results which can be treated as static independent measurements. A new method for feature selection in one-class classification problems is developed based on approximate knowledge of the state of the system. An extension of feature space transformation methods on time series data is introduced to handle multivariate data in large, computationally burdensome domains by using sparse feature extraction methods. As a whole, these projects demonstrate the application of modern statistical modeling methods to achieve superior results in data-driven chemical engineering challenges.
by Mark Christopher Molaro.
Ph. D.
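A minimal sketch of the kind of Markov chain Monte Carlo uncertainty quantification described above, assuming a simple linear mixing model y = A·c + noise with synthetic Gaussian "pure component" spectra; it is an illustration only, not the thesis' generative model.

import numpy as np

rng = np.random.default_rng(0)
wav = np.linspace(0.0, 1.0, 200)
# Hypothetical pure-component spectra (Gaussian peaks) and a synthetic mixture measurement.
A = np.column_stack([np.exp(-(wav - m) ** 2 / 0.01) for m in (0.3, 0.5, 0.7)])
true_c = np.array([0.5, 0.3, 0.2])
y = A @ true_c + rng.normal(0.0, 0.02, wav.size)

def log_post(c, sigma=0.02):
    if np.any(c < 0):                 # nonnegative abundances
        return -np.inf
    r = y - A @ c
    return -0.5 * np.sum(r ** 2) / sigma ** 2

c = np.full(3, 1.0 / 3.0)
lp = log_post(c)
samples = []
for i in range(20000):                # random-walk Metropolis-Hastings
    prop = c + rng.normal(0.0, 0.01, 3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        c, lp = prop, lp_prop
    if i >= 5000:                     # discard burn-in
        samples.append(c)
samples = np.array(samples)
rel = samples / samples.sum(axis=1, keepdims=True)     # posterior relative abundances
print(rel.mean(axis=0), rel.std(axis=0))               # estimates with their uncertainty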
APA, Harvard, Vancouver, ISO, and other styles
3

Chang, Chia-Jung. "Statistical and engineering methods for model enhancement." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44766.

Full text
Abstract:
Models which describe the performance of a physical process are essential for quality prediction, experimental planning, process control and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancy between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely based on the data generated from the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based models and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies include the following two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring and decision optimization. Among data-driven enhancement approaches, the Gaussian process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model by a linear regression model and then applying simultaneous variable selection to the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Different from enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications to demonstrate the proposed methodologies. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymers, which quantitatively represents the nanomaterial quality presented through image data. The model parameters are estimated through Bayesian MCMC techniques to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes.
In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impacts on modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and further applied to various applications. These research activities developed engineering-compliant models for adequate system predictions based on observational data with complex variable relationships and uncertainty, which facilitate process planning, monitoring, and real-time control.
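The "Minimal Adjustment" idea, approximating the GP correction by a linear model with variable selection, can be caricatured in a few lines. In this sketch the physics model, the data and the use of a Lasso penalty for sparse adjustment are all assumptions for illustration, not the procedure of Chapter 2.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, (40, 3))                # process settings (hypothetical)

def physics_model(x):                             # stand-in engineering model
    return 2.0 * x[:, 0] + 0.5 * x[:, 1]

# "Reality" contains an effect the physics model misses, plus noise.
y = physics_model(x) + 0.8 * x[:, 2] + rng.normal(0.0, 0.05, 40)

# Fit a sparse linear adjustment to the residuals so the physical model is changed minimally.
adjust = Lasso(alpha=0.01).fit(x, y - physics_model(x))
print(adjust.coef_)                               # ideally only the third coefficient is non-zero
y_enhanced = physics_model(x) + adjust.predict(x) # enhanced (physics + data-driven) prediction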
APA, Harvard, Vancouver, ISO, and other styles
4

Walls, Frederick George 1976. "Topic detection through statistical methods." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80244.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (p. 77-79).
by Frederick George Walls.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
5

Maas, Luis C. (Luis Carlos). "Statistical methods in ultrasonic tissue characterization." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/36456.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 88-93).
by Luis Carlos Maas III.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
6

Yu, Huan. "New Statistical Methods for Simulation Output Analysis." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4931.

Full text
Abstract:
This thesis makes three main contributions to the ranking and selection problem in discrete-event simulation. Ranking and selection is important when one wants to select the single best design, or the several best designs, from a pool of alternatives. There are two types of discrete-event simulation: terminating simulation and steady-state simulation. In steady-state simulation, the output exhibits an initial transient before it settles into steady state unless the simulation can be started in steady state, and this initial trend must be removed before the data are used to estimate the steady-state mean. The first contribution concerns eliminating this initial trend (initialization bias). We present a novel solution, motivated by offline change-detection methods, that monitors the cumulative absolute bias from the estimated steady-state mean. Experiments comparing our procedure with existing methods show that it is never worse and in some cases much better. After the initialization bias is removed, a ranking and selection procedure can be applied to the steady-state simulation output. There are two main approaches to the ranking and selection problem: subset selection and indifference-zone selection. By employing a directed graph, some single-best ranking and selection methods can also be extended to the multi-best selection problem, which is what our method addresses. In Chapter 3, a procedure for ranking and selection in terminating simulation is extended based on the fully sequential idea, meaning that the sample means of all systems still in contention are compared at each stage. We also add a technique that pre-selects superior systems while eliminating inferior ones, which accelerates identifying the desired number of best systems. Experiments demonstrate that the pre-selection technique saves a significant number of observations compared with the procedure without it, and that our procedure saves a significant number of observations compared with existing methods. We also explore the effect of common random numbers; using them in the simulation saves further observations. The third contribution extends the procedure of Chapter 3 to steady-state simulation. Asymptotic variance is employed in this case, and we justify the procedure from an asymptotic point of view. Extensive experiments demonstrate that the procedure works in most cases when the sample size is finite.
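As an illustration of removing the initial transient before estimating a steady-state mean, here is a sketch of an MSER-style truncation rule in Python on synthetic output; it is a standard heuristic, not the offline change-detection procedure proposed in the thesis.

import numpy as np

def mser_truncation(y):
    """Pick the warm-up truncation point minimizing the MSER statistic
    (variance of the retained tail divided by its length)."""
    best_d, best_stat = 0, np.inf
    for d in range(len(y) // 2):          # search at most half the run
        tail = y[d:]
        stat = tail.var() / len(tail)
        if stat < best_stat:
            best_d, best_stat = d, stat
    return best_d

rng = np.random.default_rng(2)
n = 2000
bias = 5.0 * np.exp(-np.arange(n) / 150.0)        # decaying initialization bias
y = 10.0 + bias + rng.normal(0.0, 1.0, n)         # simulated steady-state output
d = mser_truncation(y)
print(d, y[d:].mean())                            # truncation point and steady-state mean estimate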
APA, Harvard, Vancouver, ISO, and other styles
7

Betschart, Willie. "Applying intelligent statistical methods on biometric systems." Thesis, Blekinge Tekniska Högskola, Avdelningen för signalbehandling, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1694.

Full text
Abstract:
This master's thesis work was performed at Optimum Biometric Labs (OBL), located in Karlskrona, Sweden. Optimum Biometric Labs performs independent scenario evaluations for companies that develop biometric devices. The company has a product, Optimum preCon™, which is a surveillance and diagnosis tool for biometric systems. The objective of this thesis work was to develop a conceptual model and implement it as an additional layer above the biometric layer, with intelligence about the biometric users. The layer is influenced by the general procedure of biometrics in a multimodal, behavioural way and operates in an unsupervised manner. While biometric systems are increasingly adopted, the technologies have some inherent problems, such as false matches and false non-matches. In practice, a rejected user cannot simply be interpreted as an impostor, since the user might merely have problems using his/her biometric feature. The methods proposed in this project deal with these problems when analysing biometric usage at runtime. Another fact which may give rise to false rejections is template aging, a phenomenon where the enrolled user's template is too old compared to the user's current biometric feature. A theoretical approach to template aging was known; however, since the analysis of template aging detection is confounded with potential system flaws such as device defects, or with human-generated risks such as impostor attacks, the task is difficult to solve in an unsupervised system. When the strict definition of template aging is set aside, however, the detection of similar effects becomes possible. One of the objectives of this project was to detect template aging in a predictive sense; this task could not be carried out because of the absence of a basis for performing such predictions. The developed program performs abnormality detection on each incoming event from a biometric system. Each verification attempt is assumed to come from a genuine user unless a deviation from the user's history, an abnormality, is found. The possibility of an impostor attack depends on the degree of the abnormality. The application makes relative decisions between the possibility of fraud and the possibility that the genuine user caused the deviations, and presents the result as an alarm with a degree of impostor possibility. This intelligent layer has increased Optimum preCon's capacity as a surveillance tool for biometrics, an efficient complement to biometric systems in a steadily growing worldwide market.
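A toy version of the abnormality-detection idea, scoring each new verification attempt against the user's own history, might look like the sketch below; the match scores are hypothetical and the z-score rule is a simplification of the behavioural layer described in the thesis.

import numpy as np

def abnormality(history_scores, new_score):
    """Deviation of a new verification score from the user's history (z-score)."""
    mu = np.mean(history_scores)
    sd = np.std(history_scores) + 1e-9
    return abs(new_score - mu) / sd

history = [0.82, 0.79, 0.85, 0.81, 0.78, 0.83]    # hypothetical genuine match scores
print(abnormality(history, 0.80))                  # small deviation: consistent with the user
print(abnormality(history, 0.45))                  # large deviation: raise an alarm (possible
                                                   # impostor attack or template-aging-like drift)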
APA, Harvard, Vancouver, ISO, and other styles
8

Chandrasekaran, Venkat. "Convex optimization methods for graphs and statistical modeling." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66002.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 209-220).
An outstanding challenge in many problems throughout science and engineering is to succinctly characterize the relationships among a large number of interacting entities. Models based on graphs form one major thrust in this thesis, as graphs often provide a concise representation of the interactions among a large set of variables. A second major emphasis of this thesis are classes of structured models that satisfy certain algebraic constraints. The common theme underlying these approaches is the development of computational methods based on convex optimization, which are in turn useful in a broad array of problems in signal processing and machine learning. The specific contributions are as follows: -- We propose a convex optimization method for decomposing the sum of a sparse matrix and a low-rank matrix into the individual components. Based on new rank-sparsity uncertainty principles, we give conditions under which the convex program exactly recovers the underlying components. -- Building on the previous point, we describe a convex optimization approach to latent variable Gaussian graphical model selection. We provide theoretical guarantees of the statistical consistency of this convex program in the high-dimensional scaling regime in which the number of latent/observed variables grows with the number of samples of the observed variables. The algebraic varieties of sparse and low-rank matrices play a prominent role in this analysis. -- We present a general convex optimization formulation for linear inverse problems, in which we have limited measurements in the form of linear functionals of a signal or model of interest. When these underlying models have algebraic structure, the resulting convex programs can be solved exactly or approximately via semidefinite programming. We provide sharp estimates (based on computing certain Gaussian statistics related to the underlying model geometry) of the number of generic linear measurements required for exact and robust recovery in a variety of settings. -- We present convex graph invariants, which are invariants of a graph that are convex functions of the underlying adjacency matrix. Graph invariants characterize structural properties of a graph that do not depend on the labeling of the nodes; convex graph invariants constitute an important subclass, and they provide a systematic and unified computational framework based on convex optimization for solving a number of interesting graph problems. We emphasize a unified view of the underlying convex geometry common to these different frameworks. We describe applications of both these methods to problems in financial modeling and network analysis, and conclude with a discussion of directions for future research.
by Venkat Chandrasekaran.
Ph.D.
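The first contribution, decomposing a matrix into sparse and low-rank parts by convex optimization, can be prototyped directly with cvxpy. The sketch below uses the common nuclear-norm-plus-l1 formulation with a generic solver and synthetic data; it is an assumed illustration, not the thesis' algorithm or recovery conditions.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
L_true = np.outer(rng.normal(size=30), rng.normal(size=30))        # low-rank part (rank 1)
S_true = np.zeros((30, 30))
S_true[rng.random((30, 30)) < 0.05] = 5.0                          # sparse corruptions
M = L_true + S_true

L = cp.Variable((30, 30))
S = cp.Variable((30, 30))
lam = 1.0 / np.sqrt(30)                                            # usual sparsity weight
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                  [L + S == M])
prob.solve()
print(np.linalg.matrix_rank(L.value, tol=1e-3),                    # should be low
      int(np.sum(np.abs(S.value) > 1e-3)))                         # roughly the number of corruptions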
APA, Harvard, Vancouver, ISO, and other styles
9

Lingg, Andrew James. "Statistical Methods for Image Change Detection with Uncertainty." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1357249370.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ranger, Jeremy. "Adaptive image magnification using edge-directed and statistical methods." Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/26753.

Full text
Abstract:
This thesis provides a comparison of two adaptive image magnification algorithms selected from the literature. Following implementation and experimentation with the algorithms, a number of improvements are proposed. The first selected algorithm takes an edge-directed approach by using an estimation of the edge map of the high-resolution image to guide the interpolation process. It was found that this algorithm suffered from certain inaccuracies in the edge detection stage. The proposed improvements focus on methods for increasing the accuracy of edge detection. The second selected algorithm takes a statistical approach by modelling the high-resolution image as a Gibbs-Markov random field and solving with the maximum a posteriori estimation technique. It was found that this algorithm suffered from blurring caused by the general way in which the clique potentials are applied to every sample. The proposed improvements introduce a set of weights to prevent smoothing across discontinuities. The two selected algorithms are compared to the enhanced versions to demonstrate the merit of the proposed improvements. Results have shown significant improvements in the quality of the magnified test images. In particular, blurring was reduced and edge sharpness was enhanced.
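The proposed improvement, weighting the clique potentials so that smoothing is not applied across discontinuities, can be illustrated with a tiny energy function; the particular weight 1/(1 + g^2) is an assumption for illustration, not the exact form used in the thesis.

import numpy as np

def weighted_smoothness_energy(img, beta=1.0):
    """Gibbs-style smoothness prior whose weights shrink across strong edges,
    so discontinuities are penalized less than gradual variations."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    wx = 1.0 / (1.0 + gx ** 2)        # small weight across a strong edge
    wy = 1.0 / (1.0 + gy ** 2)
    return beta * (np.sum(wx * gx ** 2) + np.sum(wy * gy ** 2))

img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # a step edge
print(weighted_smoothness_energy(img))  # the edge contributes little energy, so it is preserved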
APA, Harvard, Vancouver, ISO, and other styles
11

Farhat, Hikmat. "Studies in computational methods for statistical mechanics of fluids." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0026/NQ50157.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Laporte, Catherine. "Statistical methods for out-of-plane ultrasound transducer motion estimation." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86597.

Full text
Abstract:
Freehand 3D ultrasound imaging usually involves moving a conventional tracked 2D ultrasound probe over a subject and combining the images into a volume to be interpreted for medical purposes. Tracking devices can be cumbersome; thus, there is interest in inferring the trajectory of the transducer based on the images themselves. This thesis focuses on new methods for the recovery of the out-of-plane component of the transducer trajectory using the predictive relationship between the elevational decorrelation of ultrasound speckle patterns and transducer displacement. To resolve the directional ambiguities associated with this approach, combinatorial optimisation techniques and robust statistics are combined to recover non-monotonic motion and frame intersections. In order to account for the variability of the sample correlation coefficient between corresponding image patches of fully developed speckle, a new probabilistic speckle decorrelation model is developed. This model can be used to quantify the uncertainty of any displacement estimate, thereby facilitating the use of a new maximum likelihood out-of-plane trajectory estimation approach which fully exploits the information available from multiple redundant and noisy correlation measurements collected in imagery of fully developed speckle. To generalise the applicability of these methods to the case of imagery of real tissue, a new data-driven method is proposed for locally estimating elevational correlation length based on statistical features collected within the image plane. In this approach, the relationship between the image features and local elevational correlation length is learned by sparse Gaussian process regression using a training set of synthetic ultrasound image sequences. The synthetic imagery used for learning is created via a new statistical model for the spatial distribution of ultrasound scatterers which maps realisations of a 1D generalised Poisson point process to a 3D Hilbert space-filling curve.
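A common starting point for speckle-based out-of-plane tracking is a Gaussian elevational decorrelation curve, inverted to turn a measured patch correlation into a displacement estimate. The sketch below assumes that model and a hypothetical correlation length; the thesis goes further by modelling the variability of the sample correlation itself and the resulting uncertainty.

import numpy as np

def elevational_displacement(rho, corr_length):
    """Invert rho(d) = exp(-d**2 / (2 * w**2)) to estimate out-of-plane displacement d
    from a measured speckle correlation rho (w = elevational correlation length)."""
    rho = np.clip(rho, 1e-6, 1.0)
    return corr_length * np.sqrt(-2.0 * np.log(rho))

print(elevational_displacement(0.8, corr_length=0.5))   # e.g. mm, for a hypothetical transducer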
APA, Harvard, Vancouver, ISO, and other styles
13

Kim, Junmo 1976. "Nonparametric statistical methods for image segmentation and shape analysis." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/30352.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Page 131 blank.
Includes bibliographical references (p. 125-130).
Image segmentation, the process of decomposing an image into meaningful regions, is a fundamental problem in image processing and computer vision. Recently, image segmentation techniques based on active contour models with level set implementation have received considerable attention. The objective of this thesis is in the development of advanced active contour-based image segmentation methods that incorporate complex statistical information into the segmentation process, either about the image intensities or about the shapes of the objects to be segmented. To this end, we use nonparametric statistical methods for modeling both the intensity distributions and the shape distributions. Previous work on active contour-based segmentation considered the class of images in which each region can be distinguished from others by second order statistical features such as the mean or variance of image intensities of that region. This thesis addresses the problem of segmenting a more general class of images in which each region has a distinct arbitrary intensity distribution. To this end, we develop a nonparametric information-theoretic method for image segmentation. In particular, we cast the segmentation problem as the maximization of the mutual information between the region labels and the image pixel intensities. The resulting curve evolution equation is given in terms of nonparametric density estimates of intensity distributions, and the segmentation method can deal with a variety of intensity distributions in an unsupervised fashion. The second component of this thesis addresses the problem of estimating shape densities from training shapes and incorporating such shape prior densities into the image segmentation process.
To this end, we propose nonparametric density estimation methods in the space of curves and the space of signed distance functions. We then derive a corresponding curve evolution equation for shape-based image segmentation. Finally, we consider the case in which the shape density is estimated from training shapes that form multiple clusters. This case leads to the construction of complex, potentially multi-modal prior densities for shapes. As compared to existing methods, our shape priors can: (a) model more complex shape distributions; (b) deal with shape variability in a more principled way; and (c) represent more complex shapes.
by Junmo Kim.
Ph.D.
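The segmentation criterion described above, mutual information between region labels and pixel intensities, can be estimated nonparametrically in a few lines; this sketch uses a coarse histogram estimate on synthetic data rather than the kernel density estimates and curve evolution of the thesis.

import numpy as np

def mutual_information(labels, intensities, bins=32):
    """Histogram estimate of I(labels; intensities) for a binary segmentation."""
    joint, _, _ = np.histogram2d(labels.ravel(), intensities.ravel(), bins=(2, bins))
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

rng = np.random.default_rng(4)
labels = np.zeros((64, 64))
labels[:, 32:] = 1.0                                   # candidate segmentation
intensities = rng.normal(2.0 * labels, 1.0)            # two regions, different distributions
print(mutual_information(labels, intensities))         # higher when labels explain the intensities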
APA, Harvard, Vancouver, ISO, and other styles
14

Guyader, Andrew C. "A statistical approach to equivalent linearization with application to performance-based engineering." Pasadena: California Institute of Technology, Earthquake Engineering Research Laboratory, 2004. http://caltecheerl.library.caltech.edu.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Strong, Mark J. (Mark Joseph). "Statistical methods for process control in automobile body assembly." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10922.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1996, and Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1996.
Includes bibliographical references (p. 117-120).
by Mark J. Strong.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
16

Wright, Christopher M. "Using Statistical Methods to Determine Geolocation Via Twitter." TopSCHOLAR®, 2014. http://digitalcommons.wku.edu/theses/1372.

Full text
Abstract:
With the ever-expanding usage of social media websites such as Twitter, it is possible to use statistical inquiries to estimate the geographic location of a person using solely the content of their tweets. In a study done in 2010, Zhiyuan Cheng was able to detect the location of a Twitter user within 100 miles of their actual location 51% of the time. While this may seem like a significant finding, the study was done while Twitter was still finding its footing: in 2010, Twitter had 75 million unique registered users, whereas as of March 2013 it has around 500 million unique users. In this thesis, my own dataset was collected and, using Excel macros, my results are compared to Cheng's to see whether the results have changed over the three years since his study. If Cheng's 51% can be reproduced more efficiently using a simpler methodology, this could have a significant impact on Homeland Security and cyber security measures.
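Content-based geolocation of the kind Cheng studied is often framed as scoring candidate cities by the log-probability of a tweet's words. The sketch below is a naive-Bayes-style toy with invented word counts; it is not Cheng's model nor the Excel-macro workflow used in this thesis.

from collections import Counter
import math

# Hypothetical per-city word counts learned from tweets with known locations.
city_word_counts = {
    "Nashville": Counter({"grits": 40, "y'all": 120, "ferry": 2}),
    "Seattle":   Counter({"grits": 3, "y'all": 5, "ferry": 80}),
}

def city_scores(tweet_words):
    """Score each candidate city by the summed log-probability of the tweet's words
    (add-one smoothing keeps unseen words from zeroing out a city)."""
    scores = {}
    for city, counts in city_word_counts.items():
        total, vocab = sum(counts.values()), len(counts)
        scores[city] = sum(math.log((counts[w] + 1) / (total + vocab)) for w in tweet_words)
    return scores

print(city_scores(["y'all", "grits"]))   # Nashville should score higher for this tweet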
APA, Harvard, Vancouver, ISO, and other styles
17

Sun, Felice (Felice Tzu-yun) 1976. "Integrating statistical and knowledge-based methods for automatic phonemic segmentation." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Stunes, Michael R. "Statistical methods for locating performance problems in multi-tier applications." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77017.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 59-60).
This thesis describes an algorithm developed to aid in solving the problem of performance diagnosis, by automatically identifying the specific component in a multicomponent application system responsible for a performance problem. The algorithm monitors the system, collecting load and latency information from each component, searches the data for patterns indicative of performance saturation using statistical methods, and uses a machine learning classifier to interpret those results. The algorithm was tested with two test applications in several configurations, with different performance problems synthetically introduced. The algorithm correctly located these problems as much as 90% of the time, indicating that this is a good approach to the problem of automatic performance problem location. Also, the experimentation demonstrated that the algorithm can locate performance problems in environments different from those for which it was designed and from that on which it was trained.
by Michael R. Stunes.
M.Eng.
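The statistical search for saturation patterns can be illustrated by two simple per-component features, the load-latency correlation and the latency slope at high load, which could then feed a classifier as in the thesis; the synthetic data and these specific features are assumptions for illustration.

import numpy as np

def saturation_features(load, latency):
    """Features suggesting saturation: overall load-latency correlation and the
    latency growth rate over the top 20% of observed loads."""
    corr = np.corrcoef(load, latency)[0, 1]
    top = np.argsort(load)[-max(3, len(load) // 5):]
    slope = np.polyfit(load[top], latency[top], 1)[0]
    return corr, slope

rng = np.random.default_rng(5)
load = rng.uniform(0.0, 100.0, 200)
latency = 5.0 + np.where(load > 80.0, (load - 80.0) ** 2, 0.0) + rng.normal(0.0, 1.0, 200)
print(saturation_features(load, latency))   # high values point to the saturated component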
APA, Harvard, Vancouver, ISO, and other styles
19

Sharma, Vikas. "A new modeling methodology combining engineering and statistical modeling methods : a semiconductor manufacturing application." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kang, Bei. "STATISTICAL CONTROL USING NEURAL NETWORK METHODS WITH HIERARCHICAL HYBRID SYSTEMS." Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/122303.

Full text
Abstract:
Electrical Engineering
Ph.D.
The goal of an optimal control algorithm is to improve the performance of a system. For a stochastic system, a typical optimal control method minimizes the mean (first cumulant) of the cost function. However, there are other statistical properties of the cost function, such as the variance (second cumulant) and skewness (third cumulant), which also affect system performance. In this dissertation, work on statistical optimal control is presented, which extends the traditional optimal control method by using cost cumulants to shape the system performance. Statistical optimal control allows more design freedom to achieve better performance. The solutions of statistical control involve solving a partial differential equation known as the Hamilton-Jacobi-Bellman equation. A numerical method based on neural networks is employed to find solutions of the Hamilton-Jacobi-Bellman partial differential equation. Furthermore, a complex problem such as multiple-satellite control has both continuous and discrete dynamics. Thus, a hierarchical hybrid architecture is developed in this dissertation in which discrete event systems handle the discrete dynamics and statistical control is applied to the continuous dynamics. The application of a multiple-satellite navigation system is then analyzed using the hierarchical hybrid architecture. Through this dissertation, it is shown that statistical control theory is a flexible optimal control method which improves performance, and that the hierarchical hybrid architecture allows control and navigation of a complex system which contains continuous and discrete dynamics.
Temple University--Theses
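The idea of shaping performance with cost cumulants can be conveyed by a Monte Carlo toy: choose a feedback gain that minimizes a weighted sum of the mean and variance of a quadratic cost for a scalar stochastic system. The system, weights and gain grid are assumptions; the dissertation itself solves the corresponding Hamilton-Jacobi-Bellman equations with neural networks.

import numpy as np

def cumulant_objective(gain, weights=(1.0, 0.5), n_runs=2000, horizon=50, seed=6):
    """Weighted sum of the first two cost cumulants (mean and variance), estimated by
    Monte Carlo for x[k+1] = x[k] + u[k] + w[k] with feedback u[k] = -gain * x[k]."""
    rng = np.random.default_rng(seed)
    costs = np.empty(n_runs)
    for r in range(n_runs):
        x, cost = 1.0, 0.0
        for _ in range(horizon):
            u = -gain * x
            cost += x ** 2 + 0.1 * u ** 2
            x = x + u + rng.normal(0.0, 0.2)
        costs[r] = cost
    return weights[0] * costs.mean() + weights[1] * costs.var()

gains = np.linspace(0.2, 1.2, 6)
print(min(gains, key=cumulant_objective))   # gain trading mean cost against cost variability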
APA, Harvard, Vancouver, ISO, and other styles
21

Mastin, Dana Andrew. "Statistical methods for 2D-3D registration of optical and LIDAR images." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55123.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 121-123).
Fusion of 3D laser radar (LIDAR) imagery and aerial optical imagery is an efficient method for constructing 3D virtual reality models. One difficult aspect of creating such models is registering the optical image with the LIDAR point cloud, which is a camera pose estimation problem. We propose a novel application of mutual information registration which exploits statistical dependencies in urban scenes, using variables such as LIDAR elevation, LIDAR probability of detection (pdet), and optical luminance. We employ the well known downhill simplex optimization to infer camera pose parameters. Utilization of OpenGL and graphics hardware in the optimization process yields registration times on the order of seconds. Using an initial registration comparable to GPS/INS accuracy, we demonstrate the utility of our algorithms with a collection of urban images. Our analysis begins with three basic methods for measuring mutual information. We demonstrate the utility of the mutual information measures with a series of probing experiments and registration tests. We improve the basic algorithms with a novel application of foliage detection, where the use of only non-foliage points improves registration reliability significantly. Finally, we show how the use of an existing registered optical image can be used in conjunction with foliage detection to achieve even more reliable registration.
by Dana Andrew Mastin.
S.M.
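Mutual-information registration driven by downhill simplex can be prototyped in 2D with scipy's Nelder-Mead; the smooth synthetic "LIDAR" surface, the two-parameter translation-only pose, and the histogram MI estimate are simplifying assumptions, not the thesis' camera model or OpenGL pipeline.

import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

yy, xx = np.mgrid[0:128, 0:128]
lidar = np.sin(xx / 9.0) + np.cos(yy / 13.0)                   # smooth stand-in elevation image
rng = np.random.default_rng(7)
optical = 2.0 * np.roll(lidar, (3, -2), axis=(0, 1)) + rng.normal(0.0, 0.1, (128, 128))

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

def cost(pose):                                                # pose = (dy, dx), translation only
    moved = nd_shift(optical, shift=(-pose[0], -pose[1]), order=1, mode="nearest")
    return -mutual_information(moved, lidar)

res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead",      # downhill simplex
               options={"initial_simplex": [[0.0, 0.0], [1.5, 0.0], [0.0, 1.5]]})
print(res.x)                                                   # should move toward the offset (3, -2)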
APA, Harvard, Vancouver, ISO, and other styles
22

Ritchie, Paul Andrew 1960. "A systematic, experimental methodology for design optimization." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276698.

Full text
Abstract:
Much attention has been directed at off-line quality control techniques in recent literature. This study is a refinement of and an enhancement to one technique, the Taguchi Method, for determining the optimum setting of design parameters in a product or process. In place of the signal-to-noise ratio, the mean square error (MSE) for each quality characteristic of interest is used. Polynomial models describing mean response and variance are fit to the observed data using statistical methods. The settings for the design parameters are determined by minimizing a statistical model. The model uses a multicriterion objective consisting of the MSE for each quality characteristic of interest. Minimum bias central composite designs are used during the data collection step to determine the settings of the parameters where observations are to be taken. Included is the development of minimum bias designs for various cases. A detailed example is given.
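The core recipe (fit polynomial models for the mean response and the variance, then minimize the mean square error about a target) fits in a short sketch; the single design parameter, the replicated data and the target value are hypothetical, and no minimum bias design is constructed here.

import numpy as np

# Replicated observations of one quality characteristic at five settings of a design parameter.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
reps = np.array([[8.2, 8.5, 8.1],
                 [9.0, 9.4, 9.1],
                 [9.9, 10.3, 10.1],
                 [10.6, 11.4, 11.0],
                 [11.1, 12.3, 11.9]])
target = 10.0

mean_fit = np.polyfit(x, reps.mean(axis=1), 2)        # polynomial model of the mean response
var_fit = np.polyfit(x, reps.var(axis=1, ddof=1), 2)  # polynomial model of the variance

grid = np.linspace(0.0, 2.0, 201)
mse = (np.polyval(mean_fit, grid) - target) ** 2 + np.polyval(var_fit, grid)
print(grid[np.argmin(mse)])                           # setting minimizing MSE = bias^2 + variance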
APA, Harvard, Vancouver, ISO, and other styles
23

Man, Peter Lau Weilen. "Statistical methods for computing sensitivities and parameter estimates of population balance models." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Muller, Cole. "Reliability analysis of the 4.5 roller bearing." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FMuller.pdf.

Full text
Abstract:
Thesis (M.S. in Applied Science (Operations Research))--Naval Postgraduate School, June 2003.
Thesis advisor(s): David H. Olwell, Samuel E. Buttrey. Includes bibliographical references (p. 65). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
25

Capaci, Francesca. "Contributions to the Use of Statistical Methods for Improving Continuous Production." Licentiate thesis, Luleå tekniska universitet, Industriell Ekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-66256.

Full text
Abstract:
Complexity of production processes, high computing capabilities, and massive datasets characterize today's manufacturing environments, such as those of continuous and batch production industries. Continuous production has spread gradually across different industries, covering a significant part of today's production. Common consumer goods such as food, drugs, and cosmetics, and industrial goods such as iron, chemicals, oil, and ore come from continuous processes. To stay competitive in today's market requires constant process improvements in terms of both effectiveness and efficiency. Statistical process control (SPC) and design of experiments (DoE) techniques can play an important role in this improvement strategy. SPC attempts to reduce process variation by eliminating assignable causes, while DoE is used to improve products and processes by systematic experimentation and analysis. However, special issues emerge when applying these methods in continuous process settings. Highly automated and computerized processes provide an exorbitant amount of serially dependent and cross-correlated data, which may be difficult to analyze simultaneously. Time series data, transition times, and closed-loop operation are examples of additional challenges that the analyst faces. The overall objective of this thesis is to contribute to the use of statistical methods, namely SPC and DoE methods, to improve continuous production. Specifically, this research serves two aims: [1] to explore, identify, and outline potential challenges when applying SPC and DoE in continuous processes, and [2] to propose simulation tools and new or adapted methods to overcome the identified challenges. The results are summarized in three appended papers. Through a literature review, Paper A outlines SPC and DoE implementation challenges for managers, researchers, and practitioners. For example, problems due to process transitions, the multivariate nature of data, serial correlation, and the presence of engineering process control (EPC) are discussed. Paper B further explores one of the DoE challenges identified in Paper A. Specifically, Paper B describes issues and potential strategies when designing and analyzing experiments in processes operating under closed-loop control. Two simulated examples in the Tennessee Eastman (TE) process simulator show the benefits of using DoE techniques to improve and optimize such industrial processes. Finally, Paper C provides guidelines, using flow charts, on how to use the continuous process simulator, "The revised TE process simulator," run with a decentralized control strategy, as a test bed for developing SPC and DoE methods in continuous processes. Simulated SPC and DoE examples are also discussed.
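As a concrete example of the SPC side, here is a basic EWMA control chart in Python on synthetic data with a mean shift; for the serially dependent data typical of continuous processes the thesis discusses why such textbook charts need adaptation (e.g. residual-based charts or adjusted limits), so treat this only as the baseline tool.

import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, phase1=50):
    """EWMA statistic with time-varying control limits, using the first `phase1`
    observations (assumed in control) to estimate the process mean and sigma."""
    mu, sigma = x[:phase1].mean(), x[:phase1].std(ddof=1)
    z = np.empty(len(x))
    prev = mu
    for t in range(len(x)):
        prev = lam * x[t] + (1.0 - lam) * prev
        z[t] = prev
    i = np.arange(1, len(x) + 1)
    hw = L * sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * i)))
    return z, mu - hw, mu + hw

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 50)])  # shift at t = 100
z, lcl, ucl = ewma_chart(x)
print(int(np.argmax((z > ucl) | (z < lcl))))          # index of the first out-of-control signal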
APA, Harvard, Vancouver, ISO, and other styles
26

Lin, Daming (林達明). "Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B3123429X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Lomangino, F. Paul. "Grammar- and optimization-based mechanical packaging." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/15848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Kentwell, D. J. "Fractal relationships and spatial distribution of ore body modelling." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 1997. https://ro.ecu.edu.au/theses/882.

Full text
Abstract:
The nature of spatial distributions of geological variables such as ore grades is of primary concern when modelling ore bodies and mineral resources. The aim of any mineral resource evaluation process is to determine the location, extent, volume and average grade of that resource by a trade-off between maximum confidence in the results and minimum sampling effort. The principal aim of almost every geostatistical modelling process is to predict the spatial variation of one or more geological variables in order to estimate values of those variables at locations that have not been sampled. From the spatial analysis of these variables, in conjunction with the physical geology of the region of interest, one can determine the location, extent and volume, or series of discrete volumes, whose average ore grade exceeds a specific ore grade cut-off value determined by economic parameters. Of interest are not only the volume and average grade of the material but also the degree of uncertainty associated with each of these. Geostatistics currently provides many methods of assessing spatial variability. Fractal dimensions also give us a measure of spatial variability and have been found to model many natural phenomena successfully (Mandelbrot 1983, Burrough 1981), but until now fractal modelling techniques have not been able to match the versatility and accuracy of geostatistical methods. Fractal ideas and use of the fractal dimension may in certain cases provide a better understanding of the way in which spatial variability manifests itself in geostatistical situations. This research proposes and investigates a new application of fractal simulation methods to spatial variability and spatial interpolation techniques as they relate to ore body modelling. The results show some advantages over existing techniques of geostatistical simulation.
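One standard bridge between fractals and geostatistics is estimating a fractal dimension from the log-log slope of the semivariogram (after Burrough 1981). The sketch below does this for a 1D profile; the Brownian test signal and the D = 2 - slope/2 relation for self-affine profiles are the assumptions, and this is not the simulation method proposed in the thesis.

import numpy as np

def variogram_fractal_dimension(z, max_lag=20):
    """Fractal dimension of a 1D profile from the log-log slope of its semivariogram,
    using D = 2 - slope / 2 for a self-affine profile."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    slope = np.polyfit(np.log(lags), np.log(gamma), 1)[0]
    return 2.0 - slope / 2.0

rng = np.random.default_rng(9)
walk = np.cumsum(rng.normal(size=5000))        # Brownian profile, theoretical D = 1.5
print(variogram_fractal_dimension(walk))       # should come out close to 1.5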
APA, Harvard, Vancouver, ISO, and other styles
29

Patil, Vidyaangi. "Different forms of modularity in trunk muscles in the rat revealed by various statistical methods." Philadelphia, Pa.: Drexel University, 2007. http://hdl.handle.net/1860/1563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Hongshu. "Sampling-based Bayesian latent variable regression methods with applications in process engineering." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1189650596.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Srinivasan, Raghuram. "Monte Carlo Alternate Approaches to Statistical Performance Estimation in VLSI Circuits." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1396531763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tosi, Riccardo. "Towards stochastic methods in CFD for engineering applications." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/673389.

Full text
Abstract:
Recent developments of high performance computing capabilities allow solving modern science problems employing sophisticated computational techniques. However, it is necessary to ensure the efficiency of state of the art computational methods to fully take advantage of modern technology capabilities. In this thesis we propose uncertainty quantification and high performance computing strategies to solve fluid dynamics systems characterized by uncertain conditions and unknown parameters. We verify that such techniques allow us to take decisions faster and ensure the reliability of simulation results. Different sources of uncertainties can be relevant in computational fluid dynamics applications. For example, we consider the shape and time variability of boundary conditions, as well as the randomness of external forces acting on the system. From a practical point of view, one has to estimate statistics of the flow, and a failure probability convergence criterion must be satisfied by the statistical estimator of interest to assess reliability. We use hierarchical Monte Carlo methods as uncertainty quantification strategy to solve stochastic systems. Such algorithms present three levels of parallelism: over levels, over realizations per level, and on the solution of each realization. We propose an improvement by adding a new level of parallelism, between batches, where each batch has its independent hierarchy. These new methods are called asynchronous hierarchical Monte Carlo, and we demonstrate that such techniques take full advantage of concurrency capabilities of modern high performance computing environments, while preserving the same reliability of state of the art methods. Moreover, we focus on reducing the wall clock time required to compute statistical estimators of chaotic incompressible flows. Our approach consists in replacing a single long-term simulation with an ensemble of multiple independent realizations, which are run in parallel with different initial conditions. The error analysis of the statistical estimator leads to the identification of two error contributions: the initialization bias and the statistical error. We propose an approach to systematically detect the burn-in time to minimize the initialization bias, accompanied by strategies to reduce the simulation cost. Finally, we propose an integration of Monte Carlo and ensemble averaging methods for reducing the wall clock time required for computing statistical estimators of time-dependent stochastic turbulent flows. A single long-term Monte Carlo realization is replaced by an ensemble of multiple independent realizations, each characterized by the same random event and different initial conditions. We consider different systems, relevant in the computational fluid dynamics engineering field, as realistic wind flowing around high-rise buildings or compressible potential flow problems. By solving such numerical examples, we demonstrate the accuracy, efficiency, and effectiveness of our proposals.
Civil engineering
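To make the ensemble-averaging idea in the abstract above concrete, the following Python sketch replaces one long-running realization with several shorter, independently initialized runs whose burn-in portions are discarded before averaging. It uses a simple AR(1) surrogate process in place of a turbulent-flow solver, and all parameter values (relaxation coefficient, burn-in length, ensemble size) are illustrative assumptions, not values from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n_steps, x0, phi=0.95, mu=2.0, sigma=0.5, rng=rng):
        """AR(1) surrogate for a chaotic flow quantity; its stationary mean is mu."""
        x = np.empty(n_steps)
        x[0] = x0
        for t in range(1, n_steps):
            x[t] = mu + phi * (x[t - 1] - mu) + sigma * rng.standard_normal()
        return x

    # Single long-term realization (reference estimator).
    long_run = simulate(200_000, x0=10.0)
    mean_long = long_run.mean()

    # Ensemble averaging: many shorter, independent runs started from
    # different (deliberately biased) initial conditions, run "in parallel".
    n_members, n_steps, burn_in = 50, 4_000, 500
    members = [simulate(n_steps, x0=10.0 + k) for k in range(n_members)]
    # Discard the burn-in portion of each member to reduce initialization bias,
    # then average over both time and ensemble members.
    mean_ensemble = np.mean([m[burn_in:].mean() for m in members])

    print(f"long-run estimate: {mean_long:.4f}")
    print(f"ensemble estimate: {mean_ensemble:.4f}  (true mean = 2.0)")

In this toy setting the ensemble estimator reaches a comparable accuracy while each member is short enough to run concurrently, which is the wall-clock advantage the thesis exploits.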
APA, Harvard, Vancouver, ISO, and other styles
33

Knopp, Jeremy Scott. "Modern Statistical Methods and Uncertainty Quantification for Evaluating Reliability of Nondestructive Evaluation Systems." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1395942220.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Togiti, Varun. "Pattern Recognition of Power System Voltage Stability using Statistical and Algorithmic Methods." ScholarWorks@UNO, 2012. http://scholarworks.uno.edu/td/1488.

Full text
Abstract:
In recent years, power demand around the world, and particularly in North America, has increased rapidly due to growth in customer demand, while development of the transmission system has been comparatively slow. This stresses the existing transmission system, and voltage stability therefore becomes an important issue. Pattern recognition in conjunction with voltage stability analysis could be an effective tool to address this problem. In this thesis, a methodology to detect voltage instability ahead of time is presented. The dynamic simulation software PSS/E is used to simulate voltage-stable and voltage-unstable cases, and these cases are used to train and test the pattern recognition algorithms. Both statistical and algorithmic pattern recognition methods are used. The proposed method is tested on the IEEE 39-bus system. Finally, pattern recognition models to predict the voltage stability of the system are developed.
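As a rough illustration of the pattern-recognition step described above, the Python sketch below trains and tests a classifier on synthetic feature vectors labeled stable/unstable. The features, labels, and the choice of a random forest are illustrative assumptions standing in for features extracted from PSS/E dynamic simulations of the IEEE 39-bus system; scikit-learn is assumed to be available.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)

    # Synthetic stand-in for features extracted from dynamic simulations,
    # e.g. post-disturbance bus voltage magnitudes (per unit).
    n_cases, n_features = 600, 10
    X = rng.normal(1.0, 0.05, size=(n_cases, n_features))
    # Label a case "unstable" (1) when the weakest bus voltage sags badly.
    y = (X.min(axis=1) < 0.93).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("hold-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))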
APA, Harvard, Vancouver, ISO, and other styles
35

Fox, Marshall Edward. "Identifying opportunities to reduce emergency service calls in hematology manufacturing using statistical methods." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104308.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Thesis: S.M. in Engineering Systems, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 38-39).
The main goal of this project is to identify opportunities to improve the reliability of the DxH™ product line, an automated hematology instrument for analyzing patient blood samples. The product was developed by Beckman Coulter Diagnostics, a division of a Danaher operating company with principal manufacturing and support operations based near Miami, Florida. A critical business metric used to reflect reliability is the Emergency Service Call (ESC) rate, defined for an instrument as the number of unscheduled, on-site technician visits during the one-year warranty period. Though Beckman Coulter already deploys an extremely robust quality control system, ESCs can still occur for a wide variety of other reasons, with a resulting impact on reliability. Any tools that support the reduction of ESCs may help generate positive perceptions among customers, since their instruments will have greater up-time. This project entails an evaluation of a new initiative called "Reliability Statistical Process Control" (R-SPC). R-SPC is a form of manufacturing process control developed internally, consisting of an electronic tool that collects raw instrument data during manufacturing. Unusual measurements are automatically sent to a cross-functional team, which examines the potential trend in more detail. If an abnormal trend is identified, the examination could generate a lasting improvement in the manufacturing process. Currently, the success of R-SPC is measured by the extent to which it reduces ESCs. Because an unusual measurement triggers further actions to investigate an instrument, it is desirable to show with empirical evidence that the measurement is linked to reliability. To assess whether particular measurements were systematically related to the ESC rate, relevant data were analyzed via the Pearson chi-squared test. The tests revealed that some of the variables now monitored do not appear to affect the ESC rate for the range of values studied. In contrast, several proposed "derived" parameters may serve as better indicators of an instrument's ESC rate. Moreover, the chi-squared methodology described can be used to investigate the relationships between other variables and the ESC rate. The thesis concludes by offering several specific recommendations to help refine the R-SPC initiative.
by Marshall Edward Fox.
M.B.A.
S.M. in Engineering Systems
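The following Python sketch shows the kind of Pearson chi-squared test of independence described in the abstract above, applied to hypothetical data: a binned manufacturing measurement cross-tabulated against whether an instrument later generated an emergency service call. The data, bin labels, and probabilities are invented for illustration; only scipy's chi2_contingency is assumed.

    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(2)

    # Hypothetical data: one binned manufacturing measurement per instrument
    # ("low" / "nominal" / "high") and whether the instrument later needed an
    # emergency service call (ESC) during its warranty year.
    n = 400
    measurement_bin = rng.choice(["low", "nominal", "high"], size=n, p=[0.2, 0.6, 0.2])
    # Simulate a weak association: "high" readings carry a slightly higher ESC risk.
    p_esc = np.where(measurement_bin == "high", 0.25, 0.15)
    had_esc = rng.random(n) < p_esc

    # Contingency table: rows = measurement bin, columns = ESC yes / no.
    bins = ["low", "nominal", "high"]
    table = np.array([[np.sum((measurement_bin == b) & had_esc),
                       np.sum((measurement_bin == b) & ~had_esc)] for b in bins])

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p-value = {p_value:.3f}")
    # A small p-value would suggest the monitored variable is related to the ESC rate.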
APA, Harvard, Vancouver, ISO, and other styles
36

Pookhao, Naruekamol. "Statistical Methods for Functional Metagenomic Analysis Based on Next-Generation Sequencing Data." Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/320986.

Full text
Abstract:
Metagenomics is the study of the collective microbial genetic content recovered directly from natural (e.g., soil, ocean, and freshwater) or host-associated (e.g., human gut, skin, and oral) environmental communities that contain microorganisms, i.e., microbiomes. Rapid technological developments in next-generation sequencing (NGS), which can produce tens or hundreds of millions of short DNA fragments (reads) in a single run, facilitate the study of the many microorganisms living in such communities. Metagenomics, a relatively new but fast-growing field, allows us to understand the diversity of microbes, their functions, cooperation, and evolution in a particular ecosystem, and it helps us identify significantly different metabolic potentials in different environments. In particular, metagenomic analysis based on functional features (e.g., pathways, subsystems, functional roles) makes it possible to relate the genomic content of microbes to human health, and to understand how microbes affect health by analyzing metagenomic data from two or more populations with different clinical phenotypes (e.g., diseased and healthy, or different treatments). Metagenomic analysis currently has a substantial impact not only in genetic and environmental areas, but also in clinical applications. In this study, we focus on the development of computational and statistical methods for the functional metagenomic analysis of sequencing data obtained from various environmental microbial samples/communities.
APA, Harvard, Vancouver, ISO, and other styles
37

Aradhye, Hrishikesh Balkrishna. "Anomaly Detection Using Multiscale Methods." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu989701610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Longmire, Pamela. "Nonparametric statistical methods applied to the final status decommissioning survey of Fort St. Vrain's prestressed concrete reactor vessel." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1407398430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Uddin, Mohammad Moin. "ROBUST STATISTICAL METHODS FOR NON-NORMAL QUALITY ASSURANCE DATA ANALYSIS IN TRANSPORTATION PROJECTS." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/153.

Full text
Abstract:
The American Association of State Highway and Transportation Officials (AASHTO) and the Federal Highway Administration (FHWA) require the use of statistically based quality assurance (QA) specifications for construction materials. As a result, many state highway agencies (SHAs) have implemented QA specifications for highway construction. In these statistically based QA specifications, the quality characteristics of most construction materials are assumed to be normally distributed; however, the normality assumption can be violated in several ways: the distribution of the data can be skewed, affected by excess kurtosis, or bimodal. If the process shows evidence of a significant departure from normality, the quality measures calculated may be erroneous. In this research study, an extended QA data analysis model is proposed which significantly improves the Type I error and power of the F-test and t-test, and removes bias in Percent Within Limit (PWL) based pay factor calculations. For the F-test, three alternative tests are proposed when the sampling distribution is non-normal: 1) Levene's test; 2) Brown and Forsythe's test; and 3) O'Brien's test. One alternative method is proposed for the t-test, the nonparametric Wilcoxon-Mann-Whitney test. For the PWL-based pay factor calculation when lot data are non-normal, three schemes were investigated: 1) simple transformation methods, 2) the Clements method, and 3) a modified Box-Cox transformation using the golden section search method. The Monte Carlo simulation study revealed that both Levene's test and Brown and Forsythe's test are robust alternative tests for variances when the underlying sample population distribution is non-normal. Between the t-test and the Wilcoxon test, the t-test was found to be robust even when the sample population distribution was severely non-normal. Among the data transformations for the PWL-based pay factor, the modified Box-Cox transformation using the golden section search method was found to be the most effective in minimizing or removing pay bias. Field QA data were analyzed to validate the model, and a Microsoft Excel macro-based software tool was developed that can adjust any pay consequences due to non-normality.
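As a small, hedged illustration of the robust alternatives discussed in this abstract, the Python sketch below applies Levene's and Brown-Forsythe's tests, the Wilcoxon-Mann-Whitney test, and a Box-Cox transformation to skewed synthetic QA data. The lognormal samples and sample sizes are invented; note that scipy's boxcox chooses lambda by maximum likelihood rather than the golden section search used in the thesis.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical QA data: skewed (lognormal) measurements from contractor
    # and agency samples of the same construction material property.
    contractor = rng.lognormal(mean=3.0, sigma=0.35, size=30)
    agency = rng.lognormal(mean=3.05, sigma=0.50, size=10)

    # The classical F-test assumes normality; Levene (center="mean") and
    # Brown-Forsythe (center="median") are the robust alternatives discussed.
    print("Levene        :", stats.levene(contractor, agency, center="mean"))
    print("Brown-Forsythe:", stats.levene(contractor, agency, center="median"))

    # Nonparametric alternative to the two-sample t-test for comparing locations.
    print("Wilcoxon-Mann-Whitney:", stats.mannwhitneyu(contractor, agency))

    # Box-Cox transformation toward normality before a PWL-type calculation.
    transformed, lam = stats.boxcox(contractor)
    print(f"Box-Cox lambda = {lam:.3f}")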
APA, Harvard, Vancouver, ISO, and other styles
40

El Hayek, Mustapha, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW. "Optimizing life-cycle maintenance cost of complex machinery using advanced statistical techniques and simulation." Awarded by: University of New South Wales, School of Mechanical and Manufacturing Engineering, 2006. http://handle.unsw.edu.au/1959.4/24955.

Full text
Abstract:
Maintenance is constantly challenged with increasing productivity by maximizing up-time and reliability while at the same time reducing expenditure and investment. In the last few years it has become evident through the development of maintenance concepts that maintenance is more than just a non-productive support function; it is a profit-generating function. In the past decades, hundreds of models that address maintenance strategy have been presented. The vast majority of those models rely purely on mathematical modeling to describe the maintenance function. Due to the complex nature of the maintenance function and its complex interaction with other functions, it is almost impossible to model maintenance accurately with mathematical modeling alone without resorting to infeasible simplifications and assumptions that sacrifice accuracy and validity. Analysis presented as part of this thesis shows that stochastic simulation offers a viable alternative and a powerful technique for tackling maintenance problems. Stochastic simulation is a method of modeling a system or process (on a computer) based on random events generated by the software, so that system performance can be evaluated without experimenting or interfering with the actual system. The methodology developed as part of this thesis addresses most of the shortcomings found in the literature, specifically by allowing the modeling of most of the complexities of an advanced maintenance system, such as one employed in the airline industry. This technique also allows sensitivity analysis to be carried out, resulting in an understanding of how critical variables may affect the maintenance and asset management decision-making process. In many heavy industries (e.g. airline maintenance) where high utilization is essential for the success of the organization, subsystems are often of a rotable nature, i.e. they rotate among different systems throughout their life-cycle. This causes a system to be composed of a number of subsystems of different ages, and therefore different reliability characteristics, which makes it difficult for analysts to estimate its reliability behavior and may result in a less-than-optimal maintenance plan. Traditional reliability models are based on detailed statistical analysis of individual component failures. For complex machinery, especially machinery involving many rotable parts, such analyses are difficult and time consuming. In this work, a model is proposed that combines the well-established Weibull method with discrete simulation to estimate the reliability of complex machinery with rotable subsystems or modules. Each module is characterized by an empirically derived failure distribution. The simulation model consists of a number of stages, including operational up-time, maintenance down-time, and a user interface allowing decisions on maintenance and replacement strategies as well as inventory levels and logistics. This enables the optimization of a maintenance plan by comparing different maintenance and removal policies using the Cost per Unit Time (CPUT) measure as the decision variable. Five removal strategies were tested: on-failure replacement, block replacement, time-based replacement, condition-based replacement, and a combination of time-based and condition-based strategies. Initial analyses performed on aircraft gas-turbine data yielded an optimal combination of modules out of a pool of multiple spares, resulting in an increased machine up-time of 16%.
In addition, it was shown that condition-based replacement is a cost-effective strategy; however, it was noted that the combination of time-based and condition-based strategies can produce slightly better results. Furthermore, a sensitivity analysis was performed to optimize the decision variables (module soft-times) and to provide insight into the level of accuracy with which they have to be estimated. It is imperative as part of the overall reliability and life-cycle cost program to focus not only on reducing levels of unplanned (i.e. breakdown) maintenance through preventive and predictive maintenance tasks, but also on optimizing the management of spare parts inventory, sometimes called float hardware. It is well known that the unavailability of a spare part may result in loss of revenue, which is associated with an increase in system downtime. On the other hand, increasing the number of spares will lead to an increase in capital investment and holding cost. The results obtained from the simulation model were used in a discounted NPV (Net Present Value) analysis to determine the optimal number of spare engines. The benefits of this methodology are that it is capable of providing reliability trends and forecasts in a short time frame based on available data. In addition, it takes into account the rotable nature of many components by tracking the life and service history of individual parts and allowing the user to simulate different combinations of rotables, operating scenarios, and replacement strategies. It is also capable of optimizing stock and spares levels as well as other related key parameters such as the average waiting time, unavailability cost, and the number of maintenance events that result in extensive durations due to the unavailability of spare parts. Importantly, as more data become available or as greater accuracy is demanded, the model or database can be updated or expanded, thereby approaching the results obtainable by pure statistical reliability analysis.
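To illustrate the Weibull-plus-simulation idea and the Cost Per Unit Time (CPUT) decision measure described in this abstract, here is a minimal Python sketch comparing an on-failure policy with several age-based (time-based) replacement ages for a single module. The Weibull parameters and costs are invented, and the renewal-reward simulation is far simpler than the rotable, multi-module model developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(4)

    # Weibull life model for one module (shape > 1 implies wear-out behaviour).
    shape, scale = 2.5, 1000.0              # hours; illustrative values only
    c_planned, c_failure = 10_000, 50_000   # cost of planned vs unplanned removal

    def cost_per_unit_time(replace_age, n_cycles=200_000):
        """Monte Carlo estimate of cost per operating hour for an age-based policy.

        replace_age = np.inf corresponds to a pure on-failure policy.
        """
        lives = scale * rng.weibull(shape, size=n_cycles)
        failed = lives < replace_age
        durations = np.where(failed, lives, replace_age)
        costs = np.where(failed, c_failure, c_planned)
        return costs.sum() / durations.sum()   # renewal-reward estimate (CPUT)

    for age in [np.inf, 1500, 1000, 750, 500, 250]:
        label = "on-failure" if np.isinf(age) else f"replace at {age} h"
        print(f"{label:>18}: {cost_per_unit_time(age):8.2f} $/h")

Running the loop shows how an intermediate replacement age minimizes CPUT, which is the kind of policy comparison the simulation model supports.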
APA, Harvard, Vancouver, ISO, and other styles
41

Zibdeh, Hazim S. "Environmental thermal stresses as a first passage problem." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/49971.

Full text
Abstract:
Due to changes in the thermal environment, thermal stresses are produced in structures. Two approaches based on stochastic process theory are used to describe this phenomenon. The structure is idealized as a long hollow viscoelastic cylinder. Two sites are considered: Barrow (AK) and Yuma (AZ). First-passage concepts are applied to characterize the reliability of the system. Crossings are assumed to follow either a Poisson process or a Markov process. In both cases, the distribution of the time to first passage is taken to be exponential. Because the material is viscoelastic, statistically varying and time-varying barriers (strengths) with normal, lognormal, or Weibull distributions are considered. Degradation of the barriers by aging and cumulative damage is incorporated in the analysis.
Ph. D.
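As a brief numerical illustration of the Poisson-crossing assumption mentioned above, the Python sketch below evaluates the first-passage reliability R(t) = exp(-nu_b * t), with the barrier-crossing rate nu_b taken from Rice's formula for a stationary, zero-mean Gaussian stress process. All numbers are invented, and the barrier is held constant here, whereas the thesis treats random, degrading barriers.

    import numpy as np

    # First-passage reliability under the Poisson-crossing assumption:
    # R(t) = exp(-nu_b * t), where nu_b is the mean rate of upcrossings of the
    # barrier b. For a stationary, zero-mean Gaussian stress process, Rice's
    # formula gives nu_b = nu_0 * exp(-b**2 / (2 * sigma**2)).
    nu_0 = 0.5      # rate of zero upcrossings per day (illustrative)
    sigma = 10.0    # RMS thermal stress, MPa (illustrative)
    barrier = 45.0  # strength barrier, MPa (illustrative, held constant here)

    nu_b = nu_0 * np.exp(-barrier**2 / (2.0 * sigma**2))

    for t_days in [30, 365, 10 * 365]:
        reliability = np.exp(-nu_b * t_days)
        print(f"t = {t_days:>5} days: R(t) = {reliability:.6f}")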
APA, Harvard, Vancouver, ISO, and other styles
42

Rose, Michael Benjamin. "Statistical Methods for Launch Vehicle Guidance, Navigation, and Control (GN&C) System Design and Analysis." DigitalCommons@USU, 2012. https://digitalcommons.usu.edu/etd/1278.

Full text
Abstract:
A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical formulations that are discussed are applicable to ascent on Earth or other planets as well as other rocket-powered systems such as sounding rockets and ballistic missiles.
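The contrast between Monte Carlo dispersion analysis and linear covariance analysis described in this abstract can be sketched, in Python, on a toy linear system: the covariance is either propagated directly or estimated from a cloud of sampled trajectories, and the two results should agree. The two-state dynamics, noise levels, and sample sizes are illustrative assumptions, not the launch-vehicle GN&C models of the thesis.

    import numpy as np

    rng = np.random.default_rng(5)

    # Toy 2-state linear system (position, velocity) with random disturbances.
    dt = 0.1
    A = np.array([[1.0, dt],
                  [0.0, 1.0]])
    Q = np.diag([1e-4, 1e-3])          # process-noise covariance
    P0 = np.diag([1.0, 0.01])          # initial dispersion covariance
    n_steps, n_mc = 100, 5000

    # Linear covariance analysis: propagate P directly (fast, no sampling).
    P = P0.copy()
    for _ in range(n_steps):
        P = A @ P @ A.T + Q

    # Monte Carlo: propagate a cloud of dispersed trajectories.
    X = np.linalg.cholesky(P0) @ rng.standard_normal((2, n_mc))
    for _ in range(n_steps):
        W = np.linalg.cholesky(Q) @ rng.standard_normal((2, n_mc))
        X = A @ X + W
    P_mc = np.cov(X)

    print("LinCov final covariance:\n", P)
    print("Monte Carlo estimate:\n", P_mc)

The direct propagation needs only a handful of matrix products per step, which mirrors the large speed advantage of linear covariance analysis reported in the abstract.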
APA, Harvard, Vancouver, ISO, and other styles
43

Chan, Shu-hei, and 陳樹禧. "Statistical distribution of forces in random packings of spheres and honeycomb structures." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29545365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Mahadevan, Sankaran. "Stochastic finite element-based structural reliability analysis and optimization." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/19517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Tang, Philip Kwok Fan. "Stochastic Hydrologic Modeling in Real Time Using a Deterministic Model (Streamflow Synthesis and Reservoir Regulation Model), Time Series Model, and Kalman Filter." PDXScholar, 1991. https://pdxscholar.library.pdx.edu/open_access_etds/4580.

Full text
Abstract:
The basic concepts of hydrologic forecasting using the Streamflow Synthesis And Reservoir Regulation (SSARR) Model of the U.S. Army Corps of Engineers, autoregressive moving-average (ARMA) time series models (including Green's functions, inverse functions, autocovariance functions, and the model estimation algorithm), and the Kalman filter (including state-space modeling, system uncertainty, and the filter algorithm) were explored. A computational experiment was conducted in which the Kalman filter was applied to update the Mehama local basin model (Mehama is a 227-square-mile watershed located on the North Santiam River near Salem, Oregon), a typical SSARR basin model, to streamflow measurements as they became available in simulated real time. Among the candidate AR and ARMA models, an ARMA(1,1) time series model was selected as the best-fit model to represent the residual of the basin model. It was used to augment the streamflow forecasts created by the local basin model in simulated real time. Despite the limitations imposed by the quality of the moisture input forecast and the design and calibration of the basin model, the experiment shows that the new stochastic methods are effective in significantly improving the flood forecast accuracy of the SSARR model.
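A minimal Python sketch of the residual-updating idea in this abstract is given below: the forecast error of a deterministic model is represented as an ARMA(1,1) process in state-space form and filtered with a hand-written Kalman filter, so that each one-step-ahead residual prediction could be added to the deterministic forecast in simulated real time. The ARMA coefficients, noise variances, and simulated residual series are illustrative assumptions, not values estimated in the thesis.

    import numpy as np

    rng = np.random.default_rng(6)

    # Residual (observed flow minus a deterministic SSARR-type forecast) modeled
    # as ARMA(1,1): e_t = phi*e_{t-1} + a_t + theta*a_{t-1}, with a_t ~ N(0, s2).
    phi, theta, s2 = 0.8, 0.3, 4.0

    # State-space form with state alpha_t = [e_t, theta*a_t]^T.
    T = np.array([[phi, 1.0],
                  [0.0, 0.0]])
    R = np.array([[1.0], [theta]])
    Z = np.array([[1.0, 0.0]])
    Q = s2 * (R @ R.T)   # state-noise covariance
    H = 1e-6             # tiny observation noise: the residual is observed directly

    def simulate_residuals(n):
        e, a_prev = np.zeros(n), 0.0
        for t in range(1, n):
            a = rng.normal(0.0, np.sqrt(s2))
            e[t] = phi * e[t - 1] + a + theta * a_prev
            a_prev = a
        return e

    y = simulate_residuals(300)

    # Kalman filter: one-step-ahead residual prediction, then update on arrival.
    alpha, P = np.zeros((2, 1)), np.eye(2) * 10.0
    pred = []
    for obs in y:
        alpha = T @ alpha                     # predict state
        P = T @ P @ T.T + Q
        pred.append((Z @ alpha).item())       # forecast correction for this step
        v = obs - (Z @ alpha).item()          # innovation
        S = (Z @ P @ Z.T).item() + H
        K = (P @ Z.T) / S                     # Kalman gain
        alpha = alpha + K * v                 # update with the observed residual
        P = P - K @ Z @ P

    pred = np.array(pred)
    print("RMSE, deterministic model only :", np.sqrt(np.mean(y[1:] ** 2)))
    print("RMSE, with ARMA(1,1) + Kalman  :", np.sqrt(np.mean((y[1:] - pred[1:]) ** 2)))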
APA, Harvard, Vancouver, ISO, and other styles
46

Kapur, Loveena. "Investigation of artificial neural networks, alternating conditional expectation, and Bayesian methods for reservoir characterization." Digital version accessible at: http://wwwlib.umi.com/cr/utexas/main, 1998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ayllón, David. "Methods for Cole Parameter Estimation from Bioimpedance Spectroscopy Measurements." Thesis, Högskolan i Borås, Institutionen Ingenjörshögskolan, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-19843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Shenoi, Sangeetha Chandra. "A Comparative Study on Methods for Stochastic Number Generation." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1511881394773194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lucas, Tamara J. H. "Formulation and solution of hierarchical decision support problems." Thesis, Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/17291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Hoi, Ka In. "Enhancement of efficiency and robustness of Kalman filter based statistical air quality models by using Bayesian approach." Thesis, University of Macau, 2010. http://umaclib3.umac.mo/record=b2488003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
