Dissertations / Theses on the topic 'Estimation of probability density function'


Consult the top 50 dissertations / theses for your research on the topic 'Estimation of probability density function.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Joshi, Niranjan Bhaskar. "Non-parametric probability density function estimation for medical images." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:ebc6af07-770b-4fee-9dc9-5ebbe452a0c1.

Full text
Abstract:
The estimation of probability density functions (PDF) of intensity values plays an important role in medical image analysis. Non-parametric PDF estimation methods have the advantage of generality in their application. The two most popular estimators used in image analysis to perform the non-parametric PDF estimation task are the histogram and the kernel density estimator. But these popular estimators crucially need to be ‘tuned’ by setting a number of parameters and may be either computationally inefficient or need a large amount of training data. In this thesis, we critically analyse and further develop a recently proposed non-parametric PDF estimation method for signals, called the NP windows method. We propose three new algorithms to compute PDF estimates using the NP windows method. One of these algorithms, called the log-basis algorithm, provides an easier and faster way to compute the NP windows estimate, and allows us to compare the NP windows method with the two existing popular estimators. Results show that the NP windows method is fast and can estimate PDFs with a significantly smaller amount of training data. Moreover, it does not require any additional parameter settings. To demonstrate the utility of the NP windows method in image analysis we consider its application to image segmentation. To do this, we first describe the distribution of intensity values in the image with a mixture of non-parametric distributions. We estimate these distributions using the NP windows method. We then use this novel mixture model to evolve curves with the well-known level set framework for image segmentation. We also take into account the partial volume effect that assumes importance in medical image analysis methods. In the final part of the thesis, we apply our non-parametric mixture model (NPMM) based level set segmentation framework to segment colorectal MR images. The segmentation of colorectal MR images is made challenging by the sparsity and ambiguity of features, the presence of various artifacts, and the complex anatomy of the region. We propose to use the monogenic signal (local energy, phase, and orientation) to overcome the first difficulty, and the NPMM to overcome the remaining two. Results improve substantially on those reported previously. We also present various ways to visualise clinically useful information obtained with our segmentations in a 3-dimensional manner.
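For readers unfamiliar with the two baseline estimators this abstract compares against, here is a minimal Python sketch of both. The NP windows method itself is specific to the thesis and is not reproduced; the sample data and the Silverman bandwidth rule below are illustrative assumptions.

```python
import numpy as np

def histogram_pdf(samples, bins=32):
    """Histogram estimate of a 1-D PDF (first baseline)."""
    counts, edges = np.histogram(samples, bins=bins, density=True)
    return counts, edges

def kernel_pdf(samples, x, h=None):
    """Gaussian kernel density estimate at points x (second baseline)."""
    n = len(samples)
    if h is None:                          # Silverman's rule-of-thumb bandwidth
        h = 1.06 * np.std(samples) * n ** (-1 / 5)
    u = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
intensities = rng.normal(100.0, 15.0, size=500)  # stand-in for image intensities
x = np.linspace(40.0, 160.0, 200)
print(kernel_pdf(intensities, x).max())
```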
2

Kharoufeh, Jeffrey P. "Density estimation for functions of correlated random variables." Ohio : Ohio University, 1997. http://www.ohiolink.edu/etd/view.cgi?ohiou1177097417.

Full text
3

Hao, Wei-Da. "Waveform Estimation with Jitter Noise by Pseudo Symmetrical Probability Density Function." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4587.

Full text
Abstract:
A new method for handling jitter noise in the estimation of high-frequency waveforms is proposed. It reduces the bias of the estimate at those points where all other methods fail to do so. It provides preliminary models for estimating percentiles of the Normal and Exponential probability density functions. Based on the model for the Normal probability density function, a model for an arbitrary probability density function is derived. The resulting percentiles, in turn, are used as estimates of the amplitude of the waveform. Simulation results show satisfactory accuracy.
4

Sadeghi, Mohammad T. "Automatic architecture selection for probability density function estimation in computer vision." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/843248/.

Full text
Abstract:
In this thesis, the problem of probability density function estimation using finite mixture models is considered. Gaussian mixture modelling is used to provide a semi-parametric density estimate for a given data set. The fundamental problem with this approach is that the number of mixtures required to adequately describe the data is not known in advance. In this work, a predictive validation technique [91] is studied and developed as a useful, operational tool that automatically selects the number of components for Gaussian mixture models. The predictive validation test approves a candidate model if, for the set of events it tries to predict, the predicted frequencies derived from the model match the empirical ones derived from the data set. A model selection algorithm, based on the validation test, is developed which prevents both over-fitting and under-fitting. We investigate the influence of the various parameters in the model selection method in order to develop it into a robust operational tool. The capability of the proposed method in real-world applications is examined on the problem of face image segmentation for automatic initialisation of lip tracking systems. A segmentation approach is proposed which is based on Gaussian mixture modelling of the pixels' RGB values using the predictive validation technique. The lip region segmentation is based on the estimated model. First a grouping of the model components is performed using a novel approach. The resulting groups are then the basis of a Bayesian decision making system which labels the pixels in the mouth area as lip or non-lip. The experimental results demonstrate the superiority of the method over conventional clustering approaches. In order to improve the method computationally, an image sampling technique is applied which is based on Sobol sequences. Also, the image modelling process is strengthened by incorporating spatial contextual information using two different methods: a Neighbourhood Expectation Maximisation technique and a spatial clustering method based on a Gibbs/Markov random field modelling approach. Both methods are developed within the proposed modelling framework. The results obtained on the lip segmentation application suggest that spatial context is beneficial.
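The validation idea above (accept a model whose predicted bin frequencies match the empirical ones) can be illustrated in one dimension. This sketch substitutes a plain chi-square comparison for the exact test of reference [91], which is not reproduced here; the data set, bin count, significance level and degrees-of-freedom handling are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, chi2
from sklearn.mixture import GaussianMixture

def predictive_validation(data, k, n_bins=20, alpha=0.05):
    """Accept a k-component GMM if predicted bin frequencies match empirical ones."""
    gm = GaussianMixture(n_components=k, random_state=0).fit(data[:, None])
    edges = np.linspace(data.min(), data.max(), n_bins + 1)
    observed, _ = np.histogram(data, bins=edges)
    # Predicted bin probabilities from the fitted mixture CDF.
    w, mu = gm.weights_, gm.means_.ravel()
    sd = np.sqrt(gm.covariances_.ravel())
    mix_cdf = lambda v: float(np.sum(w * norm.cdf(v, mu, sd)))
    p = np.diff([mix_cdf(e) for e in edges])
    expected = len(data) * p
    stat = np.sum((observed - expected) ** 2 / np.maximum(expected, 1e-9))
    return bool(stat < chi2.ppf(1 - alpha, df=n_bins - 1)), stat

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
for k in (1, 2, 3):
    accepted, stat = predictive_validation(data, k)
    print(k, accepted, round(stat, 1))
```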
5

Phillips, Kimberly Ann. "Probability Density Function Estimation Applied to Minimum Bit Error Rate Adaptive Filtering." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/33280.

Full text
Abstract:
It is known that a matched filter is optimal for a signal corrupted by Gaussian noise. In a wireless environment, the received signal may be corrupted by Gaussian noise and a variety of other channel disturbances: cochannel interference, multiple access interference, large and small-scale fading, etc. Adaptive filtering is the usual approach to mitigating this channel distortion. Existing adaptive filtering techniques usually attempt to minimize the mean square error (MSE) of some aspect of the received signal, with respect to the desired aspect of that signal. Adaptive minimization of MSE does not always guarantee minimization of bit error rate (BER). The main focus of this research involves estimation of the probability density function (PDF) of the received signal; this PDF estimate is used to adaptively determine a solution that minimizes BER. To this end, a new adaptive procedure called the Minimum BER Estimation (MBE) algorithm has been developed. MBE shows improvement over the Least Mean Squares (LMS) algorithm for most simulations involving interference and in some multipath situations. Furthermore, the new algorithm is more robust than LMS to changes in algorithm parameters such as stepsize and window width.
Master of Science
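The LMS baseline against which the MBE algorithm is compared is standard and easy to sketch. The MBE algorithm itself is not reproduced here; the toy channel, step size and tap count below are illustrative assumptions.

```python
import numpy as np

def lms_equalizer(x, d, n_taps=8, mu=0.01):
    """Standard LMS adaptive filter: minimises MSE between output and desired d."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]      # most recent samples first
        y[n] = w @ u
        w += mu * (d[n] - y[n]) * u    # gradient step on the squared error
    return w, y

rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=2000)    # BPSK training sequence
channel = np.array([1.0, 0.4, -0.2])            # toy multipath channel
received = np.convolve(symbols, channel)[:len(symbols)]
received += 0.1 * rng.normal(size=len(symbols))
w, y = lms_equalizer(received, symbols)
print("post-equalizer symbol error rate:",
      np.mean(np.sign(y[8:]) != symbols[8:]))
```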
6

Esterhuizen, Gerhard. "Generalised density function estimation using moments and the characteristic function." Thesis, Link to the online version, 2003. http://hdl.handle.net/10019.1/1001.

Full text
7

Santos, André Duarte dos. "Implied probability density functions: Estimation using hypergeometric, spline and lognormal functions." Master's thesis, Instituto Superior de Economia e Gestão, 2011. http://hdl.handle.net/10400.5/3372.

Full text
Abstract:
Master of Science in Finance
This thesis examines the stability and accuracy of three different methods to estimate Risk-Neutral Density functions (RNDs) using European options. These methods are the Double-Lognormal Function (DLN), the Smoothed Implied Volatility Smile (SML) and the Density Functional Based on Confluent Hypergeometric function (DFCH). These methodologies were used to obtain the RNDs from the option prices with the underlying USDBRL (price of US dollars in terms of Brazilian reals) for different maturities (1, 3 and 6 months), and then tested in order to analyze which method best fits a simulated "true" world as estimated through the Heston model (accuracy measure) and which model has a better performance in terms of stability. We observed that in the majority of the cases the SML outperformed the DLN and DFCH in capturing the "true" implied skewness. The DFCH and DLN methods were better than the SML model at estimating the "true" kurtosis. However, due to the higher sensitivity of the skewness and kurtosis measures to the tails of the distribution (all the information outside the available strike prices is extrapolated and the probability masses outside this range can have infinite forms), we also compared the tested models using the root mean integrated squared error (RMISE), which is less sensitive to the tails of the distribution. We observed that under the RMISE criterion, the DFCH outperformed the other methods as an estimator of the "true" RND. Besides testing which model best captured the "true" world's expectations, we analyzed the historical summary statistics of the RNDs obtained from the FX options on the USDBRL for the period between June 2006 (before the start of the subprime crisis) and February 2010 (seven months before the Brazilian general election).
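Two of the ingredients above, the double-lognormal mixture and the RMISE criterion, are simple to write down. A minimal sketch, with entirely invented parameters standing in for the thesis's Heston-calibrated "true" density:

```python
import numpy as np
from scipy.stats import lognorm

def dln_pdf(x, w, mu1, s1, mu2, s2):
    """Double-lognormal RND: a weighted mixture of two lognormal densities."""
    f1 = lognorm.pdf(x, s=s1, scale=np.exp(mu1))
    f2 = lognorm.pdf(x, s=s2, scale=np.exp(mu2))
    return w * f1 + (1 - w) * f2

def rmise(f_hat, f_true, x):
    """Root mean integrated squared error on a uniform grid."""
    dx = x[1] - x[0]
    return np.sqrt(np.sum((f_hat - f_true) ** 2) * dx)

x = np.linspace(1e-3, 5.0, 2000)
true_rnd = dln_pdf(x, 0.5, 0.30, 0.25, 0.70, 0.15)   # stand-in "true" density
est_rnd  = dln_pdf(x, 0.55, 0.28, 0.27, 0.70, 0.14)  # a candidate estimate
print("RMISE:", rmise(est_rnd, true_rnd, x))
```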
8

Calatayud, Gregori Julia. "Computational methods for random differential equations: probability density function and estimation of the parameters." Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/138396.

Full text
Abstract:
Mathematical models based on deterministic differential equations do not take into account the inherent uncertainty of the physical phenomenon (in a wide sense) under study. In addition, inaccuracies in the collected data often arise due to errors in the measurements. It thus becomes necessary to treat the input parameters of the model as random quantities, in the form of random variables or stochastic processes. This gives rise to the study of random ordinary and partial differential equations. The computation of the probability density function of the stochastic solution is important for uncertainty quantification of the model output. Although such computation is a difficult objective in general, certain stochastic expansions for the model coefficients allow faithful representations for the stochastic solution, which permits approximating its density function. In this regard, Karhunen-Loève and generalized polynomial chaos expansions become powerful tools for the density approximation. Also, methods based on discretizations from finite difference numerical schemes permit approximating the stochastic solution, therefore its probability density function. The main part of this dissertation aims at approximating the probability density function of important mathematical models with uncertainties in their formulation. Specifically, in this thesis we study, in the stochastic sense, the following models that arise in different scientific areas: in Physics, the model for the damped pendulum; in Biology and Epidemiology, the models for logistic growth and Bertalanffy, as well as epidemiological models; and in Thermodynamics, the heat partial differential equation. We rely on Karhunen-Loève and generalized polynomial chaos expansions and on finite difference schemes for the density approximation of the solution. These techniques are only applicable when we have a forward model in which the input parameters have certain probability distributions already set. When the model coefficients are estimated from collected data, we have an inverse problem. The Bayesian inference approach allows estimating the probability distribution of the model parameters from their prior probability distribution and the likelihood of the data. Uncertainty quantification for the model output is then carried out using the posterior predictive distribution. In this regard, the last part of the thesis shows the estimation of the distributions of the model parameters from experimental data on bacteria growth. To do so, a hybrid method that combines Bayesian parameter estimation and generalized polynomial chaos expansions is used.
This work has been supported by the Spanish Ministerio de Econom´ıa y Competitividad grant MTM2017–89664–P.
Calatayud Gregori, J. (2020). Computational methods for random differential equations: probability density function and estimation of the parameters [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138396
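The thesis approximates solution densities with Karhunen-Loève and polynomial chaos expansions; those constructions are too long to sketch here, but the logistic growth model it studies has a closed-form solution, so the density of the solution can at least be approximated by brute-force sampling. Everything below (input distributions, time point, initial condition) is an illustrative assumption, not the thesis's setup.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Random logistic growth x' = r x (1 - x/K) with random r and K; the
# closed-form solution lets us approximate the PDF of x(t) by sampling
# the random inputs and smoothing with a KDE.
rng = np.random.default_rng(3)
r = rng.normal(0.5, 0.05, size=20_000)   # illustrative input distributions
K = rng.normal(10.0, 0.5, size=20_000)
x0, t = 0.5, 4.0
xt = K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1))

pdf = gaussian_kde(xt)                   # density of the solution at time t
grid = np.linspace(xt.min(), xt.max(), 5)
print(pdf(grid))
```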
9

Rahikainen, I. (Ilkka). "Direct methodology for estimating the risk neutral probability density function." Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201404241289.

Full text
Abstract:
The target of the study is to find out whether the direct methodology can provide the same information about the parameters of the risk-neutral probability density function (RND) as the reference RND methodologies. The direct methodology defines the parameters of the RND from the underlying asset by using futures contracts and only a few at-the-money (ATM) and/or close-to-ATM options on the asset. To enable an analysis of the feasibility of the direct methodology, reference RNDs must be estimated from the option data; the results of estimating the parameters by the direct methodology are then compared to the results of the selected reference methodologies in order to understand whether the direct methodology can be used for recovering the key parameters of the RND. The study is based on S&P 500 index option data from 2008 for estimating the reference RNDs and defining the reference moments, on S&P 500 futures contract data for the expectation estimate of the direct methodology, and on a few ATM and/or close-to-ATM options from the same option data for its standard deviation estimate. Both parametric and non-parametric methods were implemented for defining the reference RNDs; their estimation results are presented so that the methodologies can be compared with each other, and the moments of the reference RNDs were calculated so that the moments of the direct methodology can be compared to them. In the direct methodology, the implied volatility is calculated from the prices of the ATM and/or close-to-ATM options only, and the standard deviation is then obtained directly using time-scaling equations; skewness and kurtosis are calculated from the estimated expectation and standard deviation under the assumption of a lognormal distribution. Based on the results, the direct methodology is acceptable for obtaining the expectation estimate (using the futures contract value directly instead of the expectation computed from the RND of the full option data) and the standard deviation estimate if and only if the time to maturity is relatively short. Skewness and kurtosis, however, could not be reliably estimated this way, because the lognormal distribution is not a correct generic assumption for RND distributions.
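The lognormal step in the abstract is elementary to reproduce: given a futures price as the mean and an ATM implied volatility, time-scaling gives the log-variance, and the standard lognormal moment formulas give the rest. The numbers below are illustrative, and, as the abstract notes, skewness and kurtosis produced this way turned out not to be trustworthy.

```python
import numpy as np

def lognormal_rnd_moments(F, sigma_atm, tau):
    """Moments of a lognormal RND with mean F and ATM implied vol sigma_atm.

    s^2 = sigma^2 * tau is the variance of log S_T (the time-scaling step
    mentioned in the abstract)."""
    s2 = sigma_atm ** 2 * tau
    mean = F                                    # futures price as expectation
    std = F * np.sqrt(np.exp(s2) - 1.0)
    skew = (np.exp(s2) + 2.0) * np.sqrt(np.exp(s2) - 1.0)
    ex_kurt = np.exp(4 * s2) + 2 * np.exp(3 * s2) + 3 * np.exp(2 * s2) - 6.0
    return mean, std, skew, ex_kurt

print(lognormal_rnd_moments(F=1300.0, sigma_atm=0.25, tau=30 / 365))
```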
10

Heinemann, Christian [author]. "Estimation and regularization of probability density functions in image processing / Christian Heinemann." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2014. http://d-nb.info/1058851497/34.

Full text
11

Ericok, Ozlen. "Uncertainty Assessment In Reserv Estimation Of A Naturally Fractured Reservoir." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605713/index.pdf.

Full text
Abstract:
Reservoir performance prediction and reserve estimation depend on various petrophysical parameters whose values are uncertain given the available technology. For proper and economical field development, these parameters must be determined by taking into consideration their uncertainty level and probable data ranges. To implement an uncertainty assessment on the estimation of the original oil in place (OOIP), a naturally fractured carbonate field, Field-A, was chosen. Since field information is obtained by drilling and testing wells throughout the field, uncertainty about the true ranges of the reservoir parameters arises because it is impossible to drill every location in an area. This study defines probability distributions for the uncertain variables in reserve estimation and evaluates the probable reserve amount using the Monte Carlo simulation method. Probabilistic reserve estimation gives the whole range of probable original oil in place of a field; the results are summarised by their likelihood of occurrence as P10, P50 and P90 reserves. In the study, the reserves of Field-A, in Southeast Turkey, are estimated by probabilistic methods for three producing zones: the Karabogaz Formation, the Kbb-C Member of the Karababa Formation and the Derdere Formation. Probability density functions of the petrophysical parameters are used as inputs to the volumetric reserve estimation method, and probable reserves are calculated with the @Risk software, which implements the Monte Carlo method. The simulation showed that Field-A has P50 reserves of 11.2 MMstb in the matrix and 2.0 MMstb in the fractures of the Karabogaz Formation, 15.7 MMstb in the matrix and 3.7 MMstb in the fractures of the Kbb-C Member, and 10.6 MMstb in the matrix and 1.6 MMstb in the fractures of the Derdere Formation. Sensitivity analysis of the inputs showed that matrix porosity, net thickness and fracture porosity are significant for the Karabogaz Formation and Kbb-C Member reserve estimates, while water saturation and fracture porosity are most significant for the Derdere Formation reserves.
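The probabilistic volumetric method described above is easy to sketch with NumPy in place of the @Risk spreadsheet tool: sample each petrophysical input from its distribution, apply the standard volumetric OOIP formula, and read percentiles off the simulated distribution. The input distributions below are invented, not the Field-A data.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000
# Illustrative input distributions (not the Field-A data).
area = rng.triangular(800, 1000, 1200, N)   # acres
h    = rng.triangular(30, 50, 70, N)        # net thickness, ft
phi  = rng.normal(0.10, 0.015, N)           # matrix porosity, fraction
sw   = rng.normal(0.35, 0.05, N)            # water saturation, fraction
bo   = rng.normal(1.2, 0.05, N)             # formation volume factor, rb/stb

# Volumetric OOIP in stock-tank barrels (7758 bbl per acre-ft).
ooip = 7758.0 * area * h * phi * (1.0 - sw) / bo

# Exceedance convention: P10 = value exceeded with 10% probability (high case).
p10, p50, p90 = np.percentile(ooip, [90, 50, 10])
print(f"P10 {p10/1e6:.1f} MMstb, P50 {p50/1e6:.1f} MMstb, P90 {p90/1e6:.1f} MMstb")
```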
12

Van der Walt, Christiaan Maarten. "Maximum-likelihood kernel density estimation in high-dimensional feature spaces / C.M. van der Walt." Thesis, North-West University, 2014. http://hdl.handle.net/10394/10635.

Full text
Abstract:
With the advent of the internet and advances in computing power, the collection of very large high-dimensional datasets has become feasible; understanding and modelling high-dimensional data has thus become a crucial activity, especially in the field of pattern recognition. Since non-parametric density estimators are data-driven and do not require or impose a pre-defined probability density function on data, they are very powerful tools for probabilistic data modelling and analysis. Conventional non-parametric density estimation methods, however, originated from the field of statistics and were not originally intended to perform density estimation in high-dimensional feature spaces, as is often encountered in real-world pattern recognition tasks. Therefore we address the fundamental problem of non-parametric density estimation in high-dimensional feature spaces in this study. Recent advances in maximum-likelihood (ML) kernel density estimation have shown that kernel density estimators hold much promise for estimating nonparametric probability density functions in high-dimensional feature spaces. We therefore derive two new iterative kernel bandwidth estimators from the maximum-likelihood (ML) leave-one-out objective function and also introduce a new non-iterative kernel bandwidth estimator (based on the theoretical bounds of the ML bandwidths) for the purpose of bandwidth initialisation. We name the iterative kernel bandwidth estimators the minimum leave-one-out entropy (MLE) and global MLE estimators, and name the non-iterative kernel bandwidth estimator the MLE rule-of-thumb estimator. We compare the performance of the MLE rule-of-thumb estimator and conventional kernel density estimators on artificial data with data properties that are varied in a controlled fashion and on a number of representative real-world pattern recognition tasks, to gain a better understanding of the behaviour of these estimators in high-dimensional spaces and to determine whether these estimators are suitable for initialising the bandwidths of iterative ML bandwidth estimators in high dimensions. We find that there are several regularities in the relative performance of conventional kernel density estimators across different tasks and dimensionalities and that the Silverman rule-of-thumb bandwidth estimator performs reliably across most tasks and dimensionalities of the pattern recognition datasets considered, even in high-dimensional feature spaces. Based on this empirical evidence and the intuitive theoretical motivation that the Silverman estimator optimises the asymptotic mean integrated squared error (assuming a Gaussian reference distribution), we select this estimator to initialise the bandwidths of the iterative ML kernel bandwidth estimators compared in our simulation studies. We then perform a comparative simulation study of the newly introduced iterative MLE estimators and other state-of-the-art iterative ML estimators on a number of artificial and real-world high-dimensional pattern recognition tasks. We illustrate with artificial data (guided by theoretical motivations) under what conditions certain estimators should be preferred and we empirically confirm on real-world data that no estimator performs optimally on all tasks and that the optimal estimator depends on the properties of the underlying density function being estimated.
We also observe an interesting case of the bias-variance trade-off where ML estimators with fewer parameters than the MLE estimator perform exceptionally well on a wide variety of tasks; however, for the cases where these estimators do not perform well, the MLE estimator generally performs well. The newly introduced MLE kernel bandwidth estimators prove to be a useful contribution to the field of pattern recognition, since they perform optimally on a number of real-world pattern recognition tasks investigated and provide researchers and practitioners with two alternative estimators to employ for the task of kernel density estimation.
PhD (Information Technology), North-West University, Vaal Triangle Campus, 2014
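The leave-one-out ML objective from which the thesis derives its iterative estimators can be written down directly. A minimal sketch that scores candidate bandwidths by the leave-one-out log-likelihood and compares the winner with the Silverman initialiser discussed above; grid search stands in for the thesis's iterative schemes, and the data are illustrative.

```python
import numpy as np

def loo_log_likelihood(data, h):
    """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h."""
    n = len(data)
    d2 = (data[:, None] - data[None, :]) ** 2
    K = np.exp(-0.5 * d2 / h ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)                 # leave each point out
    f_loo = K.sum(axis=1) / (n - 1)
    return np.sum(np.log(f_loo + 1e-300))

rng = np.random.default_rng(5)
data = rng.normal(size=200)
hs = np.linspace(0.05, 1.0, 40)
best = hs[np.argmax([loo_log_likelihood(data, h) for h in hs])]
silverman = 1.06 * data.std() * len(data) ** (-1 / 5)   # common initialiser
print(f"ML-LOO bandwidth {best:.3f} vs Silverman {silverman:.3f}")
```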
13

Balabdaoui, Fadoua. "Nonparametric estimation of a k-monotone density : a new asymptotic distribution theory /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/8964.

Full text
14

Johnson, Paul E. "Uncertainties in Oceanic Microwave Remote Sensing: The Radar Footprint, the Wind-Backscatter Relationship, and the Measurement Probability Density Function." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/71.

Full text
Abstract:
Oceanic microwave remote sensing provides the data necessary for the estimation of significant geophysical parameters such as the near-surface vector wind. To obtain accurate estimates, a precise understanding of the measurements is critical. This work clarifies and quantifies specific uncertainties in the scattered power measured by an active radar instrument. While there are many sources of uncertainty in remote sensing measurements, this work concentrates on three significant, yet largely unstudied effects. With a theoretical derivation of the backscatter from an ocean-like surface, results from this dissertation demonstrate that the backscatter decays with surface roughness with two distinct modes of behavior, affected by the size of the footprint. A technique is developed and scatterometer data analyzed to quantify the variability of spaceborne backscatter measurements for given wind conditions; the impact on wind retrieval is described in terms of bias and the Cramer-Rao lower bound. The probability density function of modified periodogram averages (a spectral estimation technique) is derived in generality and for the specific case of power estimates made by the NASA scatterometer. The impact on wind retrieval is quantified.
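The distribution of periodogram averages can be checked numerically in the simplest case. The dissertation derives the PDF of modified (windowed) periodogram averages in generality; the sketch below verifies only the textbook unwindowed case, where an interior bin of a white-Gaussian-noise periodogram is exponential with mean equal to the noise variance, so a K-segment average follows a Gamma(K, sigma^2/K) law. Signal length, bin index and segment count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(6)
sigma2, N, K, trials = 1.0, 256, 8, 4000
bin_idx = 40                                  # an interior frequency bin

est = np.empty(trials)
for t in range(trials):
    x = rng.normal(0.0, np.sqrt(sigma2), size=(K, N))
    I = np.abs(np.fft.rfft(x, axis=1)) ** 2 / N   # periodograms of K segments
    est[t] = I[:, bin_idx].mean()                 # Bartlett-style average

# Gamma(K, sigma2/K): mean sigma2, variance sigma2**2 / K.
print("sample mean", est.mean(), "theory", sigma2)
print("sample var ", est.var(), "theory", sigma2 ** 2 / K)
```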
15

Silase, Geletu Biruk. "Modeling the Behavior of an Electronically Switchable Directional Antenna for Wireless Sensor Networks." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3026.

Full text
Abstract:
Reducing power consumption is among the top concerns in Wireless Sensor Networks, as the lifetime of a Wireless Sensor Network depends on its power consumption. Directional antennas help achieve this goal contrary to the commonly used omnidirectional antennas that radiate electromagnetic power equally in all directions, by concentrating the radiated electromagnetic power only in particular directions. This enables increased communication range at no additional energy cost and reduces contention on the wireless medium. The SPIDA (SICS Parasitic Interference Directional Antenna) prototype is one of the few real-world prototypes of electronically switchable directional antennas for Wireless Sensor Networks. However, building several prototypes of SPIDA and conducting real-world experiments using them may be expensive and impractical. Modeling SPIDA based on real-world experiments avoids these expenses by enabling simulation of large networks equipped with SPIDA. Such a model would then allow researchers to develop new algorithms and protocols that take advantage of the provided directional communication on existing Wireless Sensor Network simulators. In this thesis, a model of SPIDA for Wireless Sensor Networks is built based on thoroughly designed real-world experiments. The thesis builds a probabilistic model that accounts for variations in measurements, imperfections in the prototype construction, and fluctuations in experimental settings that affect the values of the measured metrics. The model can be integrated into existing Wireless Sensor Network simulators to foster the research of new algorithms and protocols that take advantage of directional communication. The model returns the values of signal strength and packet reception rate from a node equipped with SPIDA at a certain point in space given the two-dimensional distance coordinates of the point and the configuration of SPIDA as inputs.
16

Manomaiphiboon, Kasemsan. "Estimation of Emission Strength and Air Pollutant Concentrations by Lagrangian Particle Modeling." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5141.

Full text
Abstract:
A Lagrangian particle model was applied to estimating emission strength and air pollutant concentrations, specifically for the short-range dispersion of an air pollutant in the atmospheric boundary layer. The model performance was evaluated with experimental data. The model was then used as the platform for a parametric uncertainty analysis, in which the effects of uncertainties in five parameters of the model (Monin-Obukhov length, friction velocity, roughness height, mixing height, and the universal constant of the random component) on mean ground-level concentrations were examined under slightly and moderately stable conditions. The analysis was performed in a probabilistic framework using Monte Carlo simulations with Latin hypercube sampling and linear regression modeling. In addition, four studies related to Lagrangian particle modeling are included: an alternative technique for formulating joint probability density functions of velocity for atmospheric turbulence based on the Koehler-Symanowski technique; an analysis of local increments in a multidimensional single-particle Lagrangian particle model using the algebra of Ito integrals and the Wagner-Platen formula; an analogy between the diffusion limit of Lagrangian particle models and the classical theory of turbulent diffusion; and an evaluation of some proposed forms of the Lagrangian velocity autocorrelation of turbulence.
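Latin hypercube sampling, used above to drive the Monte Carlo uncertainty analysis, takes only a few lines of NumPy: stratify each dimension into n intervals and draw one point per interval per dimension, in random order. The five parameter ranges below are placeholders for the five model parameters named in the abstract, not the values used in the study.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube sample on [0, 1)^d: one point per stratum per dim."""
    u = rng.random((n_samples, n_dims))
    perms = np.argsort(rng.random((n_samples, n_dims)), axis=0)
    return (perms + u) / n_samples

rng = np.random.default_rng(7)
unit = latin_hypercube(200, 5, rng)
# Map the unit-cube sample to five uncertain parameters
# (illustrative ranges: L, u*, z0, zi, C0).
lo = np.array([20.0, 0.1, 0.01, 200.0, 2.0])
hi = np.array([200.0, 0.5, 0.50, 1000.0, 6.0])
params = lo + unit * (hi - lo)
print(params.shape, params.min(axis=0), params.max(axis=0))
```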
17

Kolomiiets, Anton Ihorovych. "Investigation of kernel estimation of the probability density of acoustic signals." Bachelor's thesis, Igor Sikorsky Kyiv Polytechnic Institute, 2019. https://ela.kpi.ua/handle/123456789/28339.

Full text
Abstract:
The aim of this work is the study and analysis of kernel estimation of the probability density of acoustic signals. The thesis reviews the basic notions of probability theory concerning random variables and their probabilistic characteristics. The probability density makes it possible to solve measurement problems for random processes, to classify signals, to study functional transformations, and so on. Kernel estimates of the probability density were computed for generated acoustic signals using the following distribution laws: the normal distribution, Student's t-distribution, and the Laplace distribution. Comparing the theoretical calculations with the experimental results shows that the experimental values approach the theoretical ones as the sample size increases.
18

Servien, Rémi. "Estimation de régularité locale." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2010. http://tel.archives-ouvertes.fr/tel-00730491.

Full text
Abstract:
The objective of this thesis is to study the local behaviour of a probability measure, in particular through a local regularity index. In the first part, we establish the asymptotic normality of the kn-nearest-neighbour estimator of the density and of the histogram. In the second, we define a mode estimator under weakened assumptions. We show that the regularity index plays a role in both problems. Finally, in a third part, we construct several estimators of the regularity index from estimators of the distribution function, of which we provide a literature review.
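The kn-nearest-neighbour density estimator whose asymptotic normality the first part establishes has a one-line form in 1-D: f(x) is estimated by k / (2 n R_k(x)), where R_k(x) is the distance from x to its k-th nearest observation. A minimal sketch; the sample and the choice of k are arbitrary.

```python
import numpy as np

def knn_density(x, data, k):
    """k-nearest-neighbour density estimate in 1-D: f(x) = k / (2 n R_k(x))."""
    r_k = np.sort(np.abs(data - x))[k - 1]   # distance to k-th nearest point
    return k / (2.0 * len(data) * r_k)

rng = np.random.default_rng(8)
sample = rng.normal(size=1000)
for x in (-1.0, 0.0, 2.0):
    print(x, knn_density(x, sample, k=30))   # compare to the N(0,1) density
```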
19

Becvar, Martin. "Estimating typical sediment concentration probability density functions for European rivers." Thesis, Cranfield University, 2005. http://hdl.handle.net/1826/1016.

Full text
Abstract:
Sediment in rivers is linked with qualitative and quantitative water problems throughout Europe. Sediment supply and transfer are part of a natural process of minimising gradients in the landscape. However, since human activities have started to affect the equilibrium, sediment supply is often out of balance with the river system. Cases of either low or high concentration often indicate an instability which may cause severe problems. It is therefore highly important to gain knowledge about sediment patterns in catchments as a part of catchment management. This study was undertaken in order to improve sediment modelling in the GREAT-ER point source pollution river modelling package, which currently uses a suspended sediment concentration of 15 mg/l for all rivers in Europe, an obvious oversimplification. This thesis has three aims: first, to investigate the range of suspended sediment yields from major European catchments (44 catchments investigated); second, to verify sediment delivery equations; and third, to develop a methodology to predict suspended sediment concentration from sediment yield in these rivers. Coarse sediment and bed load are not investigated in this study. Monitored river sediment concentration data were analysed and compared to sediment yields obtained using the well-established sediment delivery ratio (SDR) approach. Several SDR equations were tested; equations where the area of the catchment is used as the sole variable provide the best results. In addition, sediment yields were estimated based on the recent PESERA soil erosion map for Europe. Annual sediment yields were finally predicted using three relationships between observed yields and catchment characteristics. A method to predict sediment concentration at different flow exceedance rates was successfully developed and provides satisfactory results. The basic principle of the method is the redistribution of the annual sediment yield into the annual water volume using flow characteristics at the point of interest. Further investigations with an emphasis on sediment data and on refining the methodology were suggested in order to improve concentration modelling.
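The two computational steps described above, an area-only SDR equation and the redistribution of annual yield into annual flow volume, take a few lines. The power-law coefficients and catchment numbers below are illustrative assumptions, not the equations the thesis actually tested.

```python
def sdr(area_km2, a=0.41, b=0.3):
    """Generic power-law sediment delivery ratio, SDR = a * A**-b.
    Coefficients are illustrative; the thesis tests several published forms."""
    return a * area_km2 ** -b

def concentration_mg_per_l(yield_t_per_yr, runoff_m3_per_yr):
    """Mean concentration from redistributing annual yield into annual flow:
    1 tonne = 1e9 mg and 1 m3 = 1e3 l, so t/m3 * 1e6 gives mg/l."""
    return yield_t_per_yr * 1e6 / runoff_m3_per_yr

gross_erosion_t = 5e5        # e.g. from a PESERA-style erosion estimate
area = 12_000.0              # catchment area, km2
sediment_yield = gross_erosion_t * sdr(area)
annual_runoff = 3.0e9        # m3/yr
print(f"{concentration_mg_per_l(sediment_yield, annual_runoff):.1f} mg/l")
```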
20

Ternynck, Camille. "Contributions à la modélisation de données spatiales et fonctionnelles : applications." Thesis, Lille 3, 2014. http://www.theses.fr/2014LIL30062/document.

Full text
Abstract:
In this dissertation, we are interested in nonparametric modeling of spatial and/or functional data, more specifically based on the kernel method. In general, the samples we consider for establishing the asymptotic properties of the proposed estimators are constituted of dependent variables. The specificity of the studied methods lies in the fact that the estimators take into account the dependence structure of the data considered. In a first part, we study real-valued, spatially dependent variables. We propose a new kernel approach to estimating the spatial probability density and regression functions as well as the mode. The distinctive feature of this approach is that it allows taking into account both the proximity between observations and that between sites. We study the asymptotic behaviour of the proposed estimators as well as their application to simulated and real data. In a second part, we are interested in modeling data valued in a space of infinite dimension, so-called "functional data". As a first step, we adapt the nonparametric regression model introduced in the first part to the framework of spatially dependent functional data, and obtain asymptotic as well as numerical results. Then we study a time series regression model in which the explanatory variables are functional and the innovation process is autoregressive, and propose a procedure which allows us to take into account the information contained in the error process. After showing the asymptotic behaviour of the proposed kernel estimator, we study its performance on simulated and real data. The third part is devoted to applications. First of all, we present unsupervised classification results for simulated and real (multivariate) spatial data. The classification method considered is based on the estimation of the spatial mode, obtained from the estimator of the spatial density function introduced in the first part of this thesis. Then, we apply this mode-based classification method, as well as other unsupervised classification methods from the literature, to hydrological data of a functional nature. Lastly, this classification of the hydrological data led us to apply change-point detection tools to these functional data.
21

Buchman, Susan. "High-Dimensional Adaptive Basis Density Estimation." Research Showcase @ CMU, 2011. http://repository.cmu.edu/dissertations/169.

Full text
Abstract:
In the realm of high-dimensional statistics, regression and classification have received much attention, while density estimation has lagged behind. Yet there are compelling scientific questions which can only be addressed via density estimation using high-dimensional data, such as the paths of North Atlantic tropical cyclones. If we cast each track as a single high-dimensional data point, density estimation allows us to answer such questions via integration or Monte Carlo methods. In this dissertation, I present three new methods for estimating densities and intensities for high-dimensional data, all of which rely on a technique called diffusion maps. This technique constructs a mapping for high-dimensional, complex data into a low-dimensional space, providing a new basis that can be used in conjunction with traditional density estimation methods. Furthermore, I propose a reordering of importance sampling in the high-dimensional setting. Traditional importance sampling estimates high-dimensional integrals with the aid of an instrumental distribution chosen specifically to minimize the variance of the estimator. In many applications, the integral of interest is with respect to an estimated density. I argue that in the high-dimensional realm, performance can be improved by reversing the procedure: instead of estimating a density and then selecting an appropriate instrumental distribution, begin with the instrumental distribution and estimate the density with respect to it directly. The variance reduction follows from the improved density estimate. Lastly, I present some initial results in using climatic predictors such as sea surface temperature as spatial covariates in point process estimation.
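Diffusion maps, the dimension-reduction device underlying all three methods in this dissertation, can be sketched briefly: build Gaussian affinities, normalise the rows into a Markov kernel, and use the leading non-trivial eigenvectors as low-dimensional coordinates in which ordinary density estimators can then be applied. The toy data set and scale parameter below are illustrative assumptions.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Basic diffusion map: Gaussian affinities, row-normalised Markov
    kernel, and the leading non-trivial eigenvectors as new coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)      # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial eigenvector (eigenvalue 1, constant).
    idx = order[1:n_coords + 1]
    return vecs.real[:, idx] * vals.real[idx]

rng = np.random.default_rng(9)
theta = rng.uniform(0, 2 * np.pi, 300)        # noisy circle embedded in 10-D
X = np.zeros((300, 10))
X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)
X += 0.05 * rng.normal(size=X.shape)
emb = diffusion_map(X, eps=0.5)
print(emb.shape)                              # (300, 2): low-dimensional basis
```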
22

Pai, Madhusudan Gurpura. "Probability density function formalism for multiphase flows." [Ames, Iowa : Iowa State University], 2007.

Find full text
23

Sardo, Lucia. "Model selection in probability density estimation using Gaussian mixtures." Thesis, University of Surrey, 1997. http://epubs.surrey.ac.uk/842833/.

Full text
Abstract:
This thesis proposes Gaussian Mixtures as a flexible semiparametric tool for density estimation and addresses the problem of model selection for this class of density estimators. First, a brief introduction to various techniques for model selection proposed in the literature is given. The most commonly used techniques are cross-validation and methods based on data reuse, and they all are either computationally very intensive or extremely demanding in terms of training set size. Another class of methods, known as information criteria, allows model selection at a much lower computational cost and for any sample size. The main objective of this study is to develop a technique for model selection that is not too computationally demanding, while capable of delivering an acceptable performance on a range of problems of various dimensionality. Another important issue addressed is the effect of the sample size. Large data sets are often difficult and costly to obtain, hence keeping the sample size within reasonable limits is also very important. Nevertheless, sample size is central to the problem of density estimation and one cannot expect good results with extremely limited samples. Information criteria are the most suitable candidates for a model selection procedure fulfilling these requirements. The well-known Schwarz Bayesian Information Criterion (BIC) has been analysed and its deficiencies when used with data of large dimensionality are noted. A modification that improves on the BIC criterion is proposed and named the Maximum Penalised Likelihood (MPL) criterion. This criterion has the advantage that it can be adapted to the data, and its satisfactory performance is demonstrated experimentally. Unfortunately, all information criteria, including the proposed MPL, suffer from a major drawback: a strong assumption of simplicity of the density to be estimated. This can lead to badly underfitted estimates, especially for small sample size problems. As a solution to such deficiencies, a procedure for validating the different models, based on an assessment of the model's predictive performance, is proposed. The optimality criterion for model selection can be formulated as follows: if a model is able to predict the observed data frequencies within the statistical error, it is an acceptable model; otherwise it is rejected. An attractive feature of such a measure of goodness is the fact that it is an absolute measure, rather than a relative one, which would only provide a ranking between candidate models.
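Information-criterion selection as discussed above costs one EM fit per candidate model, which is why it scales to any sample size. The thesis's MPL criterion is not available in library form, so this sketch scores candidates with the classical BIC via scikit-learn; the two-component data set is invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
data = np.concatenate([rng.normal(0, 1.0, 400),
                       rng.normal(5, 0.7, 300)])[:, None]

# One EM fit per candidate component count; smaller BIC = preferred model.
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(data)
    print(k, round(gm.bic(data), 1))
```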
24

Aguirre-Saldivar, Rina Guadalupe. "Two scalar probability density function models for turbulent flames." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38213.

Full text
25

Louloudi, Sofia. "Transported probability density function : modelling of turbulent jet flames." Thesis, Imperial College London, 2003. http://hdl.handle.net/10044/1/8007.

Full text
26

Hulek, Tomas. "Modelling of turbulent combustion using transported probability density function methods." Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339223.

Full text
27

Somé, Sobom Matthieu. "Estimations non paramétriques par noyaux associés multivariés et applications." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2030/document.

Full text
Abstract:
This work is about a nonparametric approach using multivariate mixed associated kernels for estimating densities, probability mass functions and regressions whose supports are partially or totally discrete and continuous. Some key aspects of kernel estimation using multivariate continuous (classical) and univariate (discrete and continuous) associated kernels are recalled. Problems of support are also revisited, as well as a resolution of boundary effects for univariate associated kernels. The multivariate associated kernel is then defined and a construction by the multivariate mode-dispersion method is provided. This leads to an illustration of the bivariate beta kernel with Sarmanov's correlation structure in the continuous case. Properties of these estimators, such as the bias, variances and mean squared errors, are studied. An algorithm for reducing the bias is proposed and illustrated on this bivariate beta kernel. Simulation studies and applications are then performed with the bivariate beta kernel. Three types of bandwidth matrices, namely full, Scott and diagonal, are used. Furthermore, appropriate multiple associated kernels are used in a practical discriminant analysis task; these are the binomial, categorical, discrete triangular, gamma and beta. Thereafter, associated kernels with or without correlation structure are used in multiple regression. In addition to the previous univariate associated kernels, bivariate beta kernels with or without correlation structure are taken into account. Simulation studies show the performance of the choice of associated kernels with full or diagonal bandwidth matrices. Then, (discrete and continuous) associated kernels are combined to define mixed univariate associated kernels, and using the tools of unification of discrete and continuous analysis, the properties of the mixed associated kernel estimators are shown. This is followed by an R package, created for the univariate case, for density, probability mass function and regression estimation; several smoothing parameter selection methods are implemented via an easy-to-use interface. Throughout this work, bandwidth matrix selection is generally performed by cross-validation and sometimes by Bayesian methods. Finally, some additional information on the normalizing constants of associated kernel estimators is presented for densities and probability mass functions.
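An associated kernel adapts its shape to the support of the data rather than being translated across it. The simplest continuous instance, the univariate beta kernel of Chen (1999) for data on [0, 1], gives the flavour of the multivariate constructions studied here; the sample and bandwidth below are arbitrary.

```python
import numpy as np
from scipy.stats import beta

def beta_kernel_pdf(x, data, b):
    """Beta (associated) kernel density estimate on [0, 1], after Chen (1999):
    at target x the smoothing kernel is Beta(x/b + 1, (1-x)/b + 1), so its
    support never leaves [0, 1] and boundary bias is avoided."""
    return beta.pdf(data[None, :],
                    x[:, None] / b + 1,
                    (1 - x[:, None]) / b + 1).mean(axis=1)

rng = np.random.default_rng(11)
data = rng.beta(2, 5, size=500)      # data supported on [0, 1]
x = np.linspace(0.0, 1.0, 11)
print(beta_kernel_pdf(x, data, b=0.05))
```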
APA, Harvard, Vancouver, ISO, and other styles
28

Jawhar, Nizar Sami. "Adaptive Density Estimation Based on the Mode Existence Test." DigitalCommons@USU, 1996. https://digitalcommons.usu.edu/etd/7129.

Full text
Abstract:
The kernel persists as the most useful tool for density estimation. Although fixed kernel estimates have, in general, proven superior to the results of available variable kernel estimators, Minnotte's mode tree and mode existence test give newfound hope of producing a useful adaptive kernel estimator that succeeds where fixed kernel methods fail: in multimodal distributions where the sizes of the modes are unequal and the degree of separation between modes varies. These conditions present a serious challenge to the best fixed kernel density estimators. Capitalizing on Minnotte's work on detecting multimodality adaptively, we found it possible to determine the bandwidth h adaptively in an original fashion and to estimate mixtures of normals adaptively, using the normal kernel, with encouraging results.
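The mode existence test itself is beyond a short sketch, but the general shape of the sample-point adaptive estimator that such a test would tune can be illustrated. The sketch below uses Abramson's inverse-square-root rule for the local bandwidths; that rule is an assumption standing in for the thesis's mode-based choice.

import numpy as np

def adaptive_kde(x_grid, data, h0):
    """Sample-point adaptive KDE with Gaussian kernels.

    A pilot fixed-bandwidth estimate sets local bandwidths
    h_i = h0 * (pilot(X_i) / g)^(-1/2) (Abramson's rule), so the
    estimator smooths more where the data are sparse.
    """
    def fixed_kde(points, centers, h):
        u = (points[:, None] - centers[None, :]) / h
        return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

    pilot = fixed_kde(data, data, h0)
    g = np.exp(np.mean(np.log(pilot)))          # geometric mean of pilot
    h_i = h0 * np.sqrt(g / pilot)               # local bandwidths
    u = (x_grid[:, None] - data[None, :]) / h_i[None, :]
    k = np.exp(-0.5 * u**2) / (h_i[None, :] * np.sqrt(2 * np.pi))
    return k.mean(axis=1)

# Two unequal, well-separated modes -- the regime the thesis targets.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 800), rng.normal(6, 0.3, 200)])
grid = np.linspace(-4, 8, 400)
fhat = adaptive_kde(grid, data, h0=0.4)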
APA, Harvard, Vancouver, ISO, and other styles
29

Weerasinghe, Weerasinghe Mudalige Sujith Rohitha. "Application of Lagrangian probability density function approach to turbulent reacting flows." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392476.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kakhi, M. "The transported probability density function approach for predicting turbulent combusting flows." Thesis, Imperial College London, 1994. http://hdl.handle.net/10044/1/8729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Sarajedini, Amir. "Probability density estimation with neural networks and its application to blind signal processing /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9906494.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Yeon, Ji Youn. "Travel time estimation as a function of the probability of breakdown." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0015666.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Burot, Daria. "Transported probability density function for the numerical simulation of flames characteristic of fire." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0026/document.

Full text
Abstract:
The simulation of fire scenarios requires the numerical modeling of various complex processes, particularly the gaseous combustion of hydrocarbons, including soot production and radiative transfer, in a turbulent flow. The turbulent nature of the flow induces interactions between these processes that need to be taken into account accurately. The purpose of this thesis is to implement a transported probability density function method to model these interactions precisely. In conjunction with a flamelet model, the Lindstedt model and a wide-band correlated-k model, the composition joint-PDF transport equation is solved using the Stochastic Eulerian Fields method. The model is validated by simulating 12 turbulent jet flames covering a large range of Reynolds numbers and fuel sooting propensities; model predictions are found to be in reasonable agreement with experimental data. Second, the effects of turbulence-radiation interactions (TRI) on soot emission are studied in detail, showing that TRI tends to increase soot radiative emission because of temperature fluctuations, but that this increase is smaller for higher Reynolds numbers and higher soot loads. This is due to the negative correlation between the soot absorption coefficient and the Planck function. Finally, the effect of the correlation between mixture fraction and enthalpy defect on flame structure and radiative characteristics is studied on an ethylene flame, showing that it has little effect on the mean flame structure but tends to inhibit both temperature fluctuations and radiative loss.
APA, Harvard, Vancouver, ISO, and other styles
34

Jornet, Sanz Marc. "Mean square solutions of random linear models and computation of their probability density function." Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/138394.

Full text
Abstract:
This thesis concerns the analysis of differential equations with uncertain input parameters, in the form of random variables or stochastic processes with any type of probability distribution. In modeling, the input coefficients are set from experimental data, which often involve uncertainties from measurement errors. Moreover, the behavior of the physical phenomenon under study does not follow strict deterministic laws. It is thus more realistic to consider mathematical models with randomness in their formulation. The solution, considered in the sample-path or the mean square sense, is a smooth stochastic process whose uncertainty has to be quantified. Uncertainty quantification is usually performed by computing the main statistics (expectation and variance) and, if possible, the probability density function. In this dissertation, we study random linear models based on ordinary differential equations, with and without delay, and on partial differential equations. The linear structure of the models makes it possible to seek certain probabilistic solutions and even approximate their probability density functions, which is a difficult goal in general. A very important part of the dissertation is devoted to random second-order linear differential equations, where the coefficients of the equation are stochastic processes and the initial conditions are random variables. The study of this class of differential equations in the random setting is mainly motivated by their important role in mathematical physics. We start by solving the randomized Legendre differential equation in the mean square sense, which allows the approximation of the expectation and the variance of the stochastic solution. The methodology is extended to general random second-order linear differential equations with analytic (expressible as random power series) coefficients by means of the Frobenius method. A comparative case study is performed with spectral methods based on polynomial chaos expansions. On the other hand, the Frobenius method together with Monte Carlo simulation is used to approximate the probability density function of the solution. Several variance reduction methods based on quadrature rules and multilevel strategies are proposed to speed up the Monte Carlo procedure. The last part on random second-order linear differential equations is devoted to a random diffusion-reaction Poisson-type problem, where the probability density function is approximated using a finite difference numerical scheme. The thesis also studies random ordinary differential equations with discrete constant delay. We study the linear autonomous case, where the coefficient of the non-delay component and the parameter of the delay term are both random variables while the initial condition is a stochastic process. It is proved that the deterministic solution constructed with the method of steps, which involves the delayed exponential function, is a probabilistic solution in the Lebesgue sense. Finally, the last chapter is devoted to the linear advection partial differential equation, subject to a stochastic velocity field and initial condition. We solve the equation in the mean square sense and provide new expressions for the probability density function of the solution, even in the non-Gaussian velocity case.
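The Monte Carlo approximation of a solution density can be illustrated on a deliberately simple stand-in problem. The sketch below randomizes a first-order linear equation x'(t) = -A x(t) (not the second-order Legendre problem of the thesis), samples the random inputs, evaluates the exact solution, and smooths the resulting samples with a kernel density estimate.

import numpy as np
from scipy.stats import gaussian_kde

# Random linear ODE x'(t) = -A x(t), x(0) = X0, with random A and X0.
# The sample-path solution is x(t) = X0 * exp(-A t); Monte Carlo draws
# of (A, X0) give draws of x(t), whose density a KDE then approximates.
rng = np.random.default_rng(42)
n = 20_000
A = rng.uniform(0.5, 1.5, size=n)        # random decay rate
X0 = rng.normal(1.0, 0.1, size=n)        # random initial condition
t = 1.0
samples = X0 * np.exp(-A * t)

pdf = gaussian_kde(samples)              # density of x(t) at time t
grid = np.linspace(samples.min(), samples.max(), 200)
density_at_t = pdf(grid)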
This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P. I acknowledge the doctorate scholarship granted by Programa de Ayudas de Investigación y Desarrollo (PAID), Universitat Politècnica de València.
Jornet Sanz, M. (2020). Mean square solutions of random linear models and computation of their probability density function [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138394
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Xing. "Time Dependent Kernel Density Estimation: A New Parameter Estimation Algorithm, Applications in Time Series Classification and Clustering." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6425.

Full text
Abstract:
The Time Dependent Kernel Density Estimation (TDKDE) developed by Harvey & Oryshchenko (2012) is a kernel density estimate adjusted by the Exponentially Weighted Moving Average (EWMA) weighting scheme. The Maximum Likelihood Estimation (MLE) procedure for estimating the parameters proposed by Harvey & Oryshchenko (2012) is easy to apply but has two inherent problems. In this study, we evaluate the performance of the probability density estimation in terms of the uniformity of Probability Integral Transforms (PITs) for various kernel functions combined with different preset numbers. Furthermore, we develop a new estimation algorithm, which can be conducted using Artificial Neural Networks, to eliminate the inherent problems of the MLE method and to improve the estimation performance as well. Based on the new estimation algorithm, we develop the TDKDE-based Random Forests time series classification algorithm, which is significantly superior to the commonly used statistical-feature-based Random Forests method as well as the Kernel Density Estimation (KDE)-based Random Forests approach. Furthermore, the proposed TDKDE-based Self-Organizing Map (SOM) clustering algorithm is demonstrated to be superior to the widely used Discrete-Wavelet-Transform (DWT)-based SOM method in terms of the Adjusted Rand Index (ARI).
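The core of the TDKDE construction, a kernel density estimate whose observation weights decay exponentially with age, can be sketched briefly. This is a simplified reading of the Harvey & Oryshchenko estimator: the decay rate omega and the bandwidth h are treated as given here, rather than estimated by MLE or by the neural-network procedure described above.

import numpy as np

def tdkde(x_grid, series, h, omega):
    """Time dependent KDE: Gaussian KDE with EWMA weights.

    Observation i in a series of length T receives weight proportional
    to omega**(T - 1 - i), so recent observations dominate.
    """
    T = len(series)
    w = omega ** np.arange(T - 1, -1, -1.0)
    w /= w.sum()                                   # normalize weights
    u = (x_grid[:, None] - series[None, :]) / h
    k = np.exp(-0.5 * u**2) / (h * np.sqrt(2 * np.pi))
    return k @ w                                   # weighted kernel mixture

# Usage: density "now" for a series whose level drifts over time.
rng = np.random.default_rng(7)
series = np.cumsum(rng.normal(0, 0.1, 500)) + rng.normal(0, 1, 500)
grid = np.linspace(series.min() - 1, series.max() + 1, 300)
fhat_now = tdkde(grid, series, h=0.5, omega=0.99)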
APA, Harvard, Vancouver, ISO, and other styles
36

Hörmann, Wolfgang, and Onur Bayar. "Modelling Probability Distributions from Data and its Influence on Simulation." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/612/1/document.pdf.

Full text
Abstract:
Generating random variates as a generalisation of a given sample is an important task for stochastic simulations. The three main methods suggested in the literature are: fitting a standard distribution, constructing an empirical distribution that approximates the cumulative distribution function, and generating variates from the kernel density estimate of the data. The last method is practically unknown in the simulation literature, although it is as simple as the other two. The comparison of the theoretical performance of the methods and the results of three small simulation studies show that a variance-corrected version of kernel density estimation performs best and should be used for generating variates directly from a sample. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
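The variance-corrected kernel method recommended above is essentially the smoothed bootstrap with a shrinkage factor, as in Silverman's classic recipe; the sketch below is written under that reading rather than taken from the paper itself.

import numpy as np

def kde_variates(data, n_out, h, rng):
    """Generate variates from a Gaussian-kernel density estimate of `data`,
    with a variance correction so the output variance matches the sample
    variance instead of being inflated by h**2.
    """
    xbar = data.mean()
    s2 = data.var()
    j = rng.integers(0, len(data), size=n_out)     # resample the data
    eps = rng.normal(0.0, 1.0, size=n_out)         # kernel noise
    y = data[j] + h * eps                          # plain smoothed bootstrap
    return xbar + (y - xbar) / np.sqrt(1.0 + h**2 / s2)  # shrink back

rng = np.random.default_rng(3)
sample = rng.gamma(2.0, 1.5, size=200)             # the "given sample"
h = 1.06 * sample.std() * len(sample) ** (-0.2)    # rule-of-thumb bandwidth
variates = kde_variates(sample, 10_000, h, rng)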
APA, Harvard, Vancouver, ISO, and other styles
37

Khalil, M. A. M. "On the estimation of the mixing density function in the mixture of exponentials." Thesis, City University London, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Elbahloul, Salem A. "Modelling of turbulent flames with transported probability density function and rate-controlled constrained equilibrium methods." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/30826.

Full text
Abstract:
In this study, turbulent diffusion flames have been modelled using the Transported Probability Density Function (PDF) method and chemistry reduction with the Rate-Controlled Constrained Equilibrium (RCCE). RCCE is a systematic method of chemistry reduction which is employed to simulate the evolution of the chemical composition with a reduced number of species. It is based on the principle of chemical time-scale separation and is formulated in a generalised and systematic manner that allows a reduced mechanism to be derived given a set of constraint species. The transported scalar PDF method was coupled with RANS turbulence modelling and this PDF-RANS methodology was exploited to simulate several turbulent diffusion flames with detailed and RCCE-reduced chemistry. The phenomena of extinction and reignition, soot formation and thermal radiation in these flames are explored. Sandia Flames D, E and F have been simulated with both the detailed GRI-3.0 mechanism and RCCE reduced mechanisms. Scatter plots show that PDF methods with simple mixing models are able to reproduce different degrees of local extinction in Sandia piloted flames. The PDF-RCCE results are compared with PDF simulations with the detailed mechanism and with measurements of Sandia flames. The RCCE method predicted the three flames with the same level of accuracy of the detailed mechanism. The methodology has also been applied to sooting flames with radiative heat transfer. Semi-empirical soot model and Optically-thin radiation model have been combined with the PDF-RCCE method to compute these flames. Methane flames measured by Brooks and Moss [26] have been predicted using several RCCE mechanisms with good agreement with measurements. The propane flame with preheated air [162] has also been simulated with the PDF-RCCE methodology. Gaseous species profiles of the propane flame compare reasonably with measurements but soot and temperature predictions in this flame were weak and improvements are still needed.
APA, Harvard, Vancouver, ISO, and other styles
39

Olsen, Maren Kjøstvedt. "Estimation of annual probability of mooring line failure as a function of safety factors." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for marin teknikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-15735.

Full text
Abstract:
In Chapter 2, the procedure for designing a mooring line under Norwegian jurisdiction is discussed. The procedure is given by the Norwegian Maritime Directorate through the ANCHORING REGULATION 09 and ISO 19901-7 (2005). The difference in the required safety factors and the environmental return period between ISO 19901-7 (2005) Annex B and the ANCHORING REGULATION 09 is evaluated, and the effect of this difference on the annual probability of mooring line failure is investigated in Chapter 4. In Chapter 3, two ways of estimating the annual probability of mooring line failure are discussed. One of them is the environmental contour line method, which is used for the probability estimates performed in the following chapters. In Chapter 4, the annual probability of mooring line failure is estimated based on model test results for the Midgard platform model. In Chapter 5, SIMO is used to analyze the line tensions of the Midgard platform; from these line tensions, the annual probability of mooring line failure is estimated for various safety factors, and the results are compared with those found in Chapter 4. The chapter also discusses the joint occurrence of environmental storm values. In Chapter 6, the effect of water depth on the annual probability of mooring line failure is examined using SIMO. Chapter 7 addresses problems raised in the previous chapters: the determination of the sea state that produces the largest line tensions, first raised in Chapter 2, is discussed further; the effect of changing the 90% fractiles used in the environmental contour line method on the annual probability estimate is investigated using the model test results; and the effect of changing the wave heading on the annual probability estimates is investigated using SIMO, since there were not enough model test results for the worst sea state in Chapter 4. The SIMO results found in Chapter 5 for the 100-year environment do not match the results found in Chapter 4, nor does the increase in values found by extending the return period to 10,000 years; Chapter 7 therefore closes with a discussion of this discrepancy.
APA, Harvard, Vancouver, ISO, and other styles
40

Minsker, Stanislav. "Non-asymptotic bounds for prediction problems and density estimation." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44808.

Full text
Abstract:
This dissertation investigates learning scenarios where a high-dimensional parameter has to be estimated from a given sample of fixed size, often smaller than the dimension of the problem. The first part answers some open questions for the binary classification problem in the framework of active learning. Given a random couple (X,Y) with unknown distribution P, the goal of binary classification is to predict a label Y based on the observation X. A prediction rule is constructed from a sequence of observations sampled from P. The concept of active learning can be informally characterized as follows: on every iteration, the algorithm is allowed to request a label Y for any instance X which it considers to be the most informative. The contribution of this work consists of two parts: first, we provide minimax lower bounds for the performance of active learning methods. Second, we propose an active learning algorithm which attains nearly optimal rates over a broad class of underlying distributions and is adaptive with respect to the unknown parameters of the problem. The second part of this thesis is related to sparse recovery in the framework of dictionary learning. Let (X,Y) be a random couple with unknown distribution P. Given a collection of functions H, the goal of dictionary learning is to construct a prediction rule for Y given by a linear combination of the elements of H. The problem is sparse if there exists a good prediction rule that depends on a small number of functions from H. We propose an estimator of the unknown optimal prediction rule based on a penalized empirical risk minimization algorithm. We show that the proposed estimator is able to take advantage of the possible sparse structure of the problem by providing probabilistic bounds for its performance.
APA, Harvard, Vancouver, ISO, and other styles
41

Hansen, Elizabeth Ann. "Penalized likelihood estimation of a fixed-effect and a mixed-effect transfer function model." Diss., University of Iowa, 2006. http://ir.uiowa.edu/etd/58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Amezziane, Mohamed. "SMOOTHING PARAMETER SELECTION IN NONPARAMETRIC FUNCTIONAL ESTIMATION." Doctoral diss., University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3488.

Full text
Abstract:
This study develops new techniques for obtaining completely data-driven choices of the smoothing parameter in functional estimation under minimal assumptions. The focus of the study is the estimation of the distribution function, the density function and their multivariate extensions, along with some of their functionals such as the location and the integrated squared derivatives.
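One standard instance of such a data-driven choice is least-squares cross-validation for a kernel density estimate; the sketch below is a generic textbook version of that criterion, not a technique taken from this dissertation.

import numpy as np

def lscv_score(h, data):
    """Least-squares cross-validation score for a Gaussian-kernel KDE.

    Estimates the integrated squared error of the KDE (up to a constant):
    integral of fhat^2 minus twice the mean leave-one-out density.
    """
    n = len(data)
    d = data[:, None] - data[None, :]
    phi = lambda x, s: np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))
    int_f2 = phi(d, h * np.sqrt(2)).sum() / n**2        # integral of fhat^2
    loo = (phi(d, h).sum() - n * phi(0.0, h)) / (n * (n - 1))
    return int_f2 - 2.0 * loo

# Usage: pick the bandwidth minimizing the score over a grid.
rng = np.random.default_rng(5)
data = rng.normal(0, 1, 300)
hs = np.linspace(0.05, 1.0, 60)
h_star = hs[np.argmin([lscv_score(h, data) for h in hs])]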
APA, Harvard, Vancouver, ISO, and other styles
43

Rashid, Muhammad Asim. "An LTE implementation based on a road traffic density model." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-101990.

Full text
Abstract:
The increase in vehicular traffic has created new challenges in determining the performance of data transfer and safety measures in traffic. Traffic signals at intersections are used as cost-effective and time-saving tools for traffic management in urban areas, but signalized intersections in congested urban areas are also a key source of high traffic density and slow traffic. High traffic density lowers the network data rate between vehicles and between vehicles and infrastructure. Among the emerging technologies, LTE takes the lead, with good packet delivery and resilience to changes in the network due to vehicular movement and density. This thesis analyzes an LTE implementation based on a road traffic density model. The aim is to use a probability distribution function to calculate density values and to develop a realistic traffic scenario in an LTE network from those values. In order to analyze the traffic behavior, the Aimsun simulator was used to represent a realistic traffic density situation at a model intersection, with field measurements used as input data. After calibration and validation, close-to-reality results were extracted, and a logistic curve of the probability distribution function was used to determine the density on each part of the intersection. Similar traffic scenarios were then implemented on a MATLAB-based LTE system-level simulator. Results cover the whole 90-second traffic scenario, with throughput calculated at every traffic signal phase and section. It is evident from the results that the LTE system adapts dynamically to changes in traffic behavior and allocates more bandwidth where it is most needed.
APA, Harvard, Vancouver, ISO, and other styles
44

Yi, Jianwen. "Large eddy probability density function (LEPDF) simulations for turbulent reactive channel flows and hybrid rocket combustion investigations." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187273.

Full text
Abstract:
A new numerical simulation methodology, Large Eddy Probability Density Function (LEPDF), and a corresponding numerical code have been developed for turbulent reactive flow systems. In LEPDF, the large scales of turbulent motion are resolved accurately, while the small scales are handled by a modified Smagorinsky subgrid-scale model. Chemical reaction terms are resolved exactly, without modeling. A numerical scheme to generate inflow boundary conditions has been proposed for spatial simulations of turbulent flows, and a Monte Carlo scheme is used to solve the filtered PDF (Probability Density Function) evolution equation. The present turbulent simulation code has been successfully applied to transpired and non-transpired fully developed turbulent channel flows, which it predicts more accurately than the existing temporal simulation code with only 27% of that code's grid size. It has been shown that "ejection" and "sweep" are the two dominant events in the wall region of turbulent channel flows: they are responsible for about 120% of the total turbulent production, while their interactions contribute negatively, keeping the total at 100%. Counter-rotating vortices are a major mechanism responsible for turbulent production in the boundary layer. It has also been shown that injection from the channel side walls increases the boundary layer thickness and turbulence intensities but decreases the wall friction and heat transfer; suction has the opposite effects. A state-of-the-art hybrid rocket research laboratory has been established. Lab-scale hybrid rockets with fuel port diameters ranging from 0.5 to 4.0 inches have been designed and constructed. Rocket testing facilities for routine measurements and advanced combustion diagnostic techniques, such as infrared imaging and gas chromatography, are well developed, and a computerized data acquisition/control system has been designed and built. A new Cu⁺⁺-based catalyst is identified which can improve the burning rate of a general HTPB-based hybrid rocket fuel by 15%. Scale-up principles are developed through a series of experimental tests on different sizes of hybrid rockets. A polymer (rocket fuel) degradation model that accounts for the catalytic effects of a small concentration of oxidizer near the fuel surface is developed; its numerical predictions are in very good agreement with experimental data.
APA, Harvard, Vancouver, ISO, and other styles
45

Baig, Arif Marza. "Prediction of passive scalar in a mixing layer using vortex-in-cell and probability density function methods." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6343.

Full text
Abstract:
The transport equation for the probability density function (p.d.f.) of a scalar is applied, in conjunction with the vortex-in-cell (VIC) method developed by Abdolhosseini and Milane (1998), to predict the passive scalar field in a two-dimensional, spatially growing mixing layer. The VIC method predicts the instantaneous velocity field, from which the turbulent flow characteristics, such as the mean velocity, the root-mean-square (r.m.s.) longitudinal and lateral velocity fluctuations, and the Reynolds shear stress, are calculated. The scalar field is represented through the transport equation for the scalar p.d.f., which is solved using the Monte Carlo technique. In the p.d.f. equation, turbulent diffusion is modeled using the gradient transport model, wherein the eddy diffusivity is computed from Boussinesq's postulate using the Reynolds shear stress and the gradient of mean velocity from the VIC solution. The molecular mixing term is closed by a modified Curl model, and the convection term uses the mean velocity from the VIC solution. The computational results are compared with available two-dimensional experimental results. (Abstract shortened by UMI.)
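The modified Curl closure mentioned above is a concrete particle algorithm and can be illustrated compactly. In the sketch below, random particle pairs relax toward their pair mean by a uniformly distributed amount, which decays the scalar variance in the way molecular mixing is meant to; the particle count, mixing constant, and time step are illustrative choices, not values from the thesis.

import numpy as np

def modified_curl_step(phi, c_phi, omega_t, dt, rng):
    """One modified Curl mixing step on particle scalar values `phi`.

    A fraction of the ensemble (set by the mixing constant c_phi, the
    turbulence frequency omega_t, and the time step dt) is paired at
    random; each pair relaxes toward its mean by a random extent
    a ~ U(0, 1), which decays scalar variance while conserving the mean.
    """
    n = len(phi)
    n_pairs = int(0.5 * c_phi * omega_t * dt * n)      # pairs to mix
    idx = rng.permutation(n)[: 2 * n_pairs].reshape(-1, 2)
    a = rng.uniform(0.0, 1.0, size=n_pairs)            # partial mixing extent
    p, q = phi[idx[:, 0]], phi[idx[:, 1]]
    mean = 0.5 * (p + q)
    phi[idx[:, 0]] = p + a * (mean - p)
    phi[idx[:, 1]] = q + a * (mean - q)
    return phi

rng = np.random.default_rng(11)
phi = rng.choice([0.0, 1.0], size=10_000)   # unmixed two-stream scalar
for _ in range(200):
    phi = modified_curl_step(phi, c_phi=2.0, omega_t=1.0, dt=0.01, rng=rng)
# The scalar variance decays from 0.25 toward 0 as mixing proceeds.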
APA, Harvard, Vancouver, ISO, and other styles
46

Yee, Paul V. "Regularized radial basis function networks: theory and applications to probability estimation, classification, and time series prediction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0006/NQ42774.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Bunn, Wendy Jill. "Sensitivity to Distributional Assumptions in Estimation of the ODP Thresholding Function." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/953.

Full text
Abstract:
Recent technological advances in fields like medicine and genomics have produced high-dimensional data sets and a challenge to correctly interpret experimental results. The Optimal Discovery Procedure (ODP) (Storey 2005) builds on the framework of Neyman-Pearson hypothesis testing to optimally test thousands of hypotheses simultaneously. The method relies on the assumption of normally distributed data; however, many applications of this method will violate this assumption. This thesis investigates the sensitivity of this method to detection of significant but nonnormal data. Overall, estimation of the ODP with the method described in this thesis is satisfactory, except when the nonnormal alternative distribution has high variance and expectation only one standard deviation away from the null distribution.
APA, Harvard, Vancouver, ISO, and other styles
48

Pokhrel, Keshav Prasad. "Statistical Analysis and Modeling of Brain Tumor Data: Histology and Regional Effects." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4746.

Full text
Abstract:
Comprehensive statistical models for non-normally distributed cancerous tumor sizes are of prime importance in epidemiological studies, and long-term forecasting models can help reduce the complications and uncertainties of medical progress. Statistical forecasting models are critical for a better understanding of the disease and for supplying appropriate treatments. In addition, such models can be used for budget allocation, planning, control, and evaluation of ongoing efforts in prevention and early detection of the disease. In the present study, we investigate the effects of age, demography, and race on primary brain tumor sizes using quantile regression methods to obtain a better understanding of malignant brain tumor sizes. The study reveals that the effects of risk factors, together with the probability distributions of malignant brain tumor sizes, play a significant role in understanding the rate of change of tumor sizes. The data on which our analysis and modeling are based were obtained from the Surveillance, Epidemiology, and End Results (SEER) program of the United States. We also analyze the discretely observed brain cancer mortality rates using functional data analysis models, a novel approach to modeling time series data, to obtain more accurate and relevant forecasts of the mortality rates for the US. We relate cancer registries, race, age, and gender to age-adjusted brain cancer mortality rates and compare the variations of these rates over the period during which the data were collected. Finally, we have developed an effective statistical model for heterogeneous and high-dimensional data that forecasts hazard rates with a high degree of accuracy, which will be very helpful in addressing subject health problems at present and in the future.
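Quantile regression on a skewed response can be illustrated in a few lines. The data below are hypothetical stand-ins (the SEER variables of the study are not reproduced), and the model formula is an assumption for illustration; the point is that, for non-normal responses, the fitted slopes typically differ across quantiles.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data: a skewed "size" response against age.
rng = np.random.default_rng(13)
age = rng.uniform(20, 85, size=1000)
size = 5 + 0.3 * age + rng.gamma(2.0, 4.0, size=1000)   # skewed noise
df = pd.DataFrame({"size": size, "age": age})

# Fit several conditional quantiles and compare the age effect.
fits = {q: smf.quantreg("size ~ age", df).fit(q=q) for q in (0.25, 0.5, 0.9)}
for q, res in fits.items():
    print(q, res.params["age"])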
APA, Harvard, Vancouver, ISO, and other styles
49

Kontak, Max [Verfasser]. "Novel algorithms of greedy-type for probability density estimation as well as linear and nonlinear inverse problems / Max Kontak." Siegen : Universitätsbibliothek der Universität Siegen, 2018. http://d-nb.info/1157094554/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lee, Chee Sing. "Simultaneous localization and mapping using single cluster probability hypothesis density filters." Doctoral thesis, Universitat de Girona, 2015. http://hdl.handle.net/10803/323637.

Full text
Abstract:
The majority of research in feature-based SLAM builds on the legacy of foundational work using the EKF, a single-object estimation technique. Because feature-based SLAM is an inherently multi-object problem, this has led to a number of suboptimalities in popular solutions. We develop an algorithm using the SC-PHD filter, a multi-object estimator modeled on cluster processes. This algorithm hosts capabilities not typically seen in feature-based SLAM solutions, such as principled handling of clutter measurements and missed detections, and navigation with a mixture of stationary and moving landmarks. We present experiments with the SC-PHD SLAM algorithm on both synthetic and real datasets using an autonomous underwater vehicle. We compare our method to the Rao-Blackwellized PHD (RB-PHD) SLAM algorithm, showing that ours requires fewer approximations in its derivation and thus achieves superior performance.
APA, Harvard, Vancouver, ISO, and other styles