Dissertations / Theses on the topic 'Forecast probability density function'




Consult the top 50 dissertations / theses for your research on the topic 'Forecast probability density function.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Саркісян, Анна Оганесівна. "Методи і моделі прогнозування актуарних ризиків [Methods and models for forecasting actuarial risks]." Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2021. https://ela.kpi.ua/handle/123456789/45229.

Full text
Abstract:
Bachelor's thesis: 169 pp., 21 tables, 48 figures, 2 appendices, 29 sources. Object of study: actuarial risks, investigated with Bayesian methodology as a powerful tool for combining prior and sample information to refine the distribution law of a random variable, together with the problem of forecasting the amount of insurance payouts, a complex problem of actuarial mathematics that can be solved by Bayesian means. Purpose: to study whether Bayesian methodology can improve the quality of loss forecasts in actuarial problems; to review problems whose solution can be improved by Bayesian methods; and to analyse the problem of forecasting the distribution of insurance payouts as one that can be solved successfully with Bayesian analysis. Models: Bayesian methods of analysis were studied as a tool for incorporating sample and prior information and modifying the proposed models on that basis, with actuarial risks treated as a promising application area for Bayesian uncertainty research, refinement of model structure, and improvement of the models' predictive adequacy. Results: a model for analysing and forecasting an insurer's payouts and a model of the number of claims submitted to the insurer were built. Directions for further development: generalising the proposed method to the various types of random-variable distributions encountered in insurance, studying the model's accuracy as a function of the choice of normalising coefficient, and modifying known methods of insurance risk analysis and management using the Bayesian approach.
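As a pointer to the kind of Bayesian updating the abstract describes, here is a minimal sketch of a textbook conjugate Poisson-Gamma model for claim counts (the data, prior, and model choice are illustrative assumptions, not taken from the thesis):

```python
import numpy as np
from scipy.stats import nbinom

# Prior: claim rate lambda ~ Gamma(a0, rate=b0); data: yearly claim counts.
a0, b0 = 2.0, 1.0
claims = np.array([3, 5, 4, 6, 2])          # hypothetical observed counts

# Conjugate update: posterior is Gamma(a0 + sum(claims), rate=b0 + n).
a_post, b_post = a0 + claims.sum(), b0 + len(claims)
print("posterior mean rate:", a_post / b_post)

# Posterior predictive for next year's count is negative binomial.
pred = nbinom(a_post, b_post / (b_post + 1.0))
print("P(next year > 8 claims):", 1 - pred.cdf(8))
```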
2

Pai, Madhusudan Gurpura. "Probability density function formalism for multiphase flows." [Ames, Iowa : Iowa State University], 2007.

Find full text
3

Aguirre-Saldivar, Rina Guadalupe. "Two scalar probability density function models for turbulent flames." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38213.

Full text
4

Joshi, Niranjan Bhaskar. "Non-parametric probability density function estimation for medical images." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:ebc6af07-770b-4fee-9dc9-5ebbe452a0c1.

Full text
Abstract:
The estimation of probability density functions (PDFs) of intensity values plays an important role in medical image analysis. Non-parametric PDF estimation methods have the advantage of generality in their application. The two most popular estimators used in image analysis for non-parametric PDF estimation are the histogram and the kernel density estimator, but both need to be 'tuned' by setting a number of parameters and may be either computationally inefficient or demanding of training data. In this thesis, we critically analyse and further develop a recently proposed non-parametric PDF estimation method for signals, called the NP windows method. We propose three new algorithms to compute PDF estimates using the NP windows method. One of these algorithms, called the log-basis algorithm, provides an easier and faster way to compute the NP windows estimate, and allows us to compare the NP windows method with the two existing popular estimators. Results show that the NP windows method is fast and can estimate PDFs with a significantly smaller amount of training data. Moreover, it does not require any additional parameter settings. To demonstrate the utility of the NP windows method in image analysis we consider its application to image segmentation. To do this, we first describe the distribution of intensity values in the image with a mixture of non-parametric distributions, estimated using the NP windows method. We then use this novel mixture model to evolve curves with the well-known level set framework for image segmentation. We also take into account the partial volume effect, which is important in medical image analysis. In the final part of the thesis, we apply our non-parametric mixture model (NPMM) based level set segmentation framework to segment colorectal MR images. The segmentation of colorectal MR images is challenging owing to the sparsity and ambiguity of features, the presence of various artifacts, and the complex anatomy of the region. We propose to use the monogenic signal (local energy, phase, and orientation) to overcome the first difficulty, and the NPMM to overcome the remaining two. Results improve substantially on those reported previously. We also present various ways to visualise clinically useful information obtained with our segmentations in three dimensions.
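For orientation, a minimal sketch of the two baseline estimators the abstract compares against, showing the tuning parameters (bin count, bandwidth) that the NP windows method is said to avoid; the data and parameter values are illustrative assumptions, and the NP windows method itself is not reproduced here:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=200)   # stand-in for image intensities
xs = np.linspace(-4, 4, 401)

# Histogram estimator: requires choosing the number of bins.
density_hist, edges = np.histogram(data, bins=20, density=True)

# Kernel density estimator: requires choosing a bandwidth.
kde = gaussian_kde(data, bw_method=0.3)
density_kde = kde(xs)

print(density_hist.max(), density_kde.max())
```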
5

Louloudi, Sofia. "Transported probability density function : modelling of turbulent jet flames." Thesis, Imperial College London, 2003. http://hdl.handle.net/10044/1/8007.

Full text
6

Hulek, Tomas. "Modelling of turbulent combustion using transported probability density function methods." Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339223.

Full text
7

Rahikainen, I. (Ilkka). "Direct methodology for estimating the risk neutral probability density function." Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201404241289.

Full text
Abstract:
The aim of the study is to determine whether a direct methodology can provide the same information about the parameters of the risk-neutral probability density function (RND) as the reference RND methodologies. The direct methodology defines the parameters of the RND from the underlying asset by using futures contracts and only a few at-the-money (ATM) or close-to-ATM options on the asset. To assess its feasibility, reference RNDs must first be estimated from option data; the parameters obtained by the direct methodology are then compared with those obtained by the selected reference methodologies. The study is based on S&P 500 index option data from 2008 for estimating the reference RNDs and their moments. S&P 500 futures contract data provide the expectation-value estimate for the direct methodology, while only a few ATM or close-to-ATM options are needed for its standard-deviation estimate. Both parametric and non-parametric methods were implemented to define the reference RNDs, and their estimation results are presented so that the methodologies can be compared with each other. The moments of the reference RNDs were calculated from the estimation results so that the moments of the direct methodology could be compared against them. In the direct methodology, the implied volatility is calculated from ATM or close-to-ATM option prices only; from the implied volatility the standard deviation follows directly via time-scaling equations, and skewness and kurtosis are then computed from the estimated expectation value and standard deviation under a lognormal assumption. Based on the results, the direct methodology is acceptable for estimating the expectation value (using the futures contract value directly in place of the expectation value computed from the RND of the full option data) if and only if the time to maturity is relatively short; the same holds for the standard-deviation estimate from a few ATM or close-to-ATM options. Skewness and kurtosis, however, could not be estimated this way, because the lognormal distribution turns out not to be a correct generic assumption for RNDs.
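Under the lognormal assumption discussed above, the direct methodology's moment estimates reduce to closed forms; a minimal sketch (the function name and example inputs are illustrative, not the thesis's code):

```python
import math

def lognormal_rnd_moments(futures_price, atm_implied_vol, maturity_years):
    """Approximate RND moments under a lognormal assumption.

    futures_price   : expectation of the RND, taken directly from the futures quote
    atm_implied_vol : implied volatility of an ATM option (annualised)
    maturity_years  : time to maturity T
    """
    s2 = atm_implied_vol ** 2 * maturity_years           # total log-variance
    mean = futures_price                                  # E[S_T] approximated by F
    std = mean * math.sqrt(math.exp(s2) - 1.0)            # lognormal standard deviation
    skew = (math.exp(s2) + 2.0) * math.sqrt(math.exp(s2) - 1.0)
    excess_kurt = (math.exp(4 * s2) + 2 * math.exp(3 * s2)
                   + 3 * math.exp(2 * s2) - 6.0)
    return mean, std, skew, excess_kurt

print(lognormal_rnd_moments(1300.0, 0.25, 30 / 365))
```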
8

Hao, Wei-Da. "Waveform Estimation with Jitter Noise by Pseudo Symmetrical Probability Density Function." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4587.

Full text
Abstract:
A new method is proposed for handling jitter noise when estimating high-frequency waveforms. It reduces the estimation bias at points where all other methods fail to do so, and it provides preliminary models for estimating percentiles of the Normal and Exponential probability density functions. Based on the model for the Normal probability density function, a model for an arbitrary probability density function is derived. The resulting percentiles, in turn, serve as estimates of the waveform amplitude. Simulation results show satisfactory accuracy.
9

Weerasinghe, Weerasinghe Mudalige Sujith Rohitha. "Application of Lagrangian probability density function approach to turbulent reacting flows." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392476.

Full text
10

Kakhi, M. "The transported probability density function approach for predicting turbulent combusting flows." Thesis, Imperial College London, 1994. http://hdl.handle.net/10044/1/8729.

Full text
11

Sadeghi, Mohammad T. "Automatic architecture selection for probability density function estimation in computer vision." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/843248/.

Full text
Abstract:
In this thesis, the problem of probability density function estimation using finite mixture models is considered. Gaussian mixture modelling is used to provide a semi-parametric density estimate for a given data set. The fundamental problem with this approach is that the number of mixtures required to adequately describe the data is not known in advance. In this work, a predictive validation technique [91] is studied and developed as a useful, operational tool that automatically selects the number of components for Gaussian mixture models. The predictive validation test approves a candidate model if, for the set of events it tries to predict, the predicted frequencies derived from the model match the empirical ones derived from the data set. A model selection algorithm, based on the validation test, is developed which prevents both over-fitting and under-fitting. We investigate the influence of the various parameters in the model selection method in order to develop it into a robust operational tool. The capability of the proposed method in real world applications is examined on the problem of face image segmentation for automatic initialisation of lip tracking systems. A segmentation approach is proposed which is based on Gaussian mixture modelling of the pixels' RGB values using the predictive validation technique. The lip region segmentation is based on the estimated model. First, a grouping of the model components is performed using a novel approach. The resulting groups are then the basis of a Bayesian decision making system which labels the pixels in the mouth area as lip or non-lip. The experimental results demonstrate the superiority of the method over conventional clustering approaches. To improve the method computationally, an image sampling technique based on Sobol sequences is applied. Also, the image modelling process is strengthened by incorporating spatial contextual information using two different methods, a Neighbourhood Expectation Maximisation technique and a spatial clustering method based on a Gibbs/Markov random field modelling approach. Both methods are developed within the proposed modelling framework. The results obtained on the lip segmentation application suggest that spatial context is beneficial.
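The predictive validation test itself is specific to the thesis, but the surrounding model-selection loop can be sketched as follows; this minimal sketch uses scikit-learn with BIC as a stand-in acceptance criterion rather than the thesis's predictive validation test:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Stand-in data: two Gaussian clusters of RGB-like 3-D pixel values.
data = np.vstack([rng.normal(0.2, 0.05, (300, 3)),
                  rng.normal(0.7, 0.10, (300, 3))])

best_model, best_score = None, np.inf
for n_components in range(1, 8):
    gmm = GaussianMixture(n_components=n_components, n_init=3,
                          random_state=0).fit(data)
    score = gmm.bic(data)            # acceptance criterion (stand-in)
    if score < best_score:
        best_model, best_score = gmm, score

print(best_model.n_components)
```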
12

Phillips, Kimberly Ann. "Probability Density Function Estimation Applied to Minimum Bit Error Rate Adaptive Filtering." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/33280.

Full text
Abstract:
It is known that a matched filter is optimal for a signal corrupted by Gaussian noise. In a wireless environment, the received signal may be corrupted by Gaussian noise and a variety of other channel disturbances: cochannel interference, multiple access interference, large- and small-scale fading, etc. Adaptive filtering is the usual approach to mitigating this channel distortion. Existing adaptive filtering techniques usually attempt to minimize the mean square error (MSE) of some aspect of the received signal with respect to the desired aspect of that signal. Adaptive minimization of MSE does not always guarantee minimization of the bit error rate (BER). The main focus of this research is estimation of the probability density function (PDF) of the received signal; this PDF estimate is used to adaptively determine a solution that minimizes BER. To this end, a new adaptive procedure called the Minimum BER Estimation (MBE) algorithm has been developed. MBE shows improvement over the Least Mean Squares (LMS) algorithm in most simulations involving interference and in some multipath situations. Furthermore, the new algorithm is more robust than LMS to changes in algorithm parameters such as stepsize and window width.
Master of Science
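A rough illustration of the core idea, estimating the PDF of the decision statistic and evaluating the error probability directly, might look as follows (a minimal sketch with a kernel density estimator and a zero-threshold detector; the MBE algorithm's actual update rule and window-width handling are in the thesis, not here):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 5000) * 2 - 1              # +/-1 symbols
received = bits + 0.6 * rng.normal(size=bits.size)   # noisy channel output

# Estimate the conditional PDFs of the decision statistic with a KDE.
pdf_pos = gaussian_kde(received[bits == +1])
pdf_neg = gaussian_kde(received[bits == -1])

# BER estimate for a zero-threshold detector: probability mass on the wrong side.
ber = 0.5 * (pdf_pos.integrate_box_1d(-np.inf, 0.0)
             + pdf_neg.integrate_box_1d(0.0, np.inf))
print(ber)
```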
13

Esterhuizen, Gerhard. "Generalised density function estimation using moments and the characteristic function." Thesis, Link to the online version, 2003. http://hdl.handle.net/10019.1/1001.

Full text
14

Burot, Daria. "Transported probability density function for the numerical simulation of flames characteristic of fire." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0026/document.

Full text
Abstract:
The simulation of fire scenarios requires numerical modelling of various complex processes, particularly gaseous hydrocarbon combustion, including soot production and radiative transfer in a turbulent flow. The turbulent nature of the flow induces interactions between these processes that need to be taken into account accurately. The purpose of this thesis is to implement a transported probability density function method to model these interactions precisely. In conjunction with a flamelet model, the Lindstedt model, and a wide-band correlated-k model, the composition joint PDF transport equation is solved using the stochastic Eulerian fields method. The model is validated by simulating 12 turbulent jet flames covering a large range of Reynolds numbers and fuel sooting propensities; predictions are found to be in reasonable agreement with experimental data. Second, the effects of turbulence-radiation interactions (TRI) on soot emission are studied in detail, showing that TRI tends to increase soot radiative emission because of temperature fluctuations, but that this increase is smaller at higher Reynolds numbers and higher soot loads, owing to the negative correlation between the soot absorption coefficient and the Planck function. Finally, the effect of accounting for the correlation between mixture fraction and enthalpy defect is studied for an ethylene flame, showing that it has a weak effect on the mean flame structure but tends to inhibit both temperature fluctuations and radiative loss.
15

Kharoufeh, Jeffrey P. "Density estimation for functions of correlated random variables." Ohio : Ohio University, 1997. http://www.ohiolink.edu/etd/view.cgi?ohiou1177097417.

Full text
16

Jornet, Sanz Marc. "Mean square solutions of random linear models and computation of their probability density function." Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/138394.

Full text
Abstract:
[EN] This thesis concerns the analysis of differential equations with uncertain input parameters, in the form of random variables or stochastic processes with any type of probability distributions. In modeling, the input coefficients are set from experimental data, which often involve uncertainties from measurement errors. Moreover, the behavior of the physical phenomenon under study does not follow strict deterministic laws. It is thus more realistic to consider mathematical models with randomness in their formulation. The solution, considered in the sample-path or the mean square sense, is a smooth stochastic process, whose uncertainty has to be quantified. Uncertainty quantification is usually performed by computing the main statistics (expectation and variance) and, if possible, the probability density function. In this dissertation, we study random linear models, based on ordinary differential equations with and without delay and on partial differential equations. The linear structure of the models makes it possible to seek for certain probabilistic solutions and even approximate their probability density functions, which is a difficult goal in general. A very important part of the dissertation is devoted to random second-order linear differential equations, where the coefficients of the equation are stochastic processes and the initial conditions are random variables. The study of this class of differential equations in the random setting is mainly motivated because of their important role in Mathematical Physics. We start by solving the randomized Legendre differential equation in the mean square sense, which allows the approximation of the expectation and the variance of the stochastic solution. The methodology is extended to general random second-order linear differential equations with analytic (expressible as random power series) coefficients, by means of the so-called Fröbenius method. A comparative case study is performed with spectral methods based on polynomial chaos expansions. On the other hand, the Fröbenius method together with Monte Carlo simulation are used to approximate the probability density function of the solution. Several variance reduction methods based on quadrature rules and multilevel strategies are proposed to speed up the Monte Carlo procedure. The last part on random second-order linear differential equations is devoted to a random diffusion-reaction Poisson-type problem, where the probability density function is approximated using a finite difference numerical scheme. The thesis also studies random ordinary differential equations with discrete constant delay. We study the linear autonomous case, when the coefficient of the non-delay component and the parameter of the delay term are both random variables while the initial condition is a stochastic process. It is proved that the deterministic solution constructed with the method of steps that involves the delayed exponential function is a probabilistic solution in the Lebesgue sense. Finally, the last chapter is devoted to the linear advection partial differential equation, subject to stochastic velocity field and initial condition. We solve the equation in the mean square sense and provide new expressions for the probability density function of the solution, even in the non-Gaussian velocity case.
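The Monte Carlo route to the probability density function of a solution can be illustrated on a toy random linear ODE (a minimal sketch: the first-order equation x' = -a x and the input distributions are deliberately simple stand-ins for the second-order models treated in the thesis):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
n_samples, t = 20000, 1.0

# Random inputs: decay rate a and initial condition x0.
a = rng.gamma(shape=2.0, scale=1.0, size=n_samples)
x0 = rng.normal(loc=1.0, scale=0.2, size=n_samples)

# Exact sample-path solution of x'(t) = -a x(t), x(0) = x0.
x_t = x0 * np.exp(-a * t)

# Kernel estimate of the probability density function of x(t).
pdf = gaussian_kde(x_t)
xs = np.linspace(x_t.min(), x_t.max(), 200)
print(xs[np.argmax(pdf(xs))])   # mode of the approximated density
```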
This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P. I acknowledge the doctorate scholarship granted by Programa de Ayudas de Investigación y Desarrollo (PAID), Universitat Politècnica de València.
Jornet Sanz, M. (2020). Mean square solutions of random linear models and computation of their probability density function [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138394
17

Calatayud, Gregori Julia. "Computational methods for random differential equations: probability density function and estimation of the parameters." Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/138396.

Full text
Abstract:
[EN] Mathematical models based on deterministic differential equations do not take into account the inherent uncertainty of the physical phenomenon (in a wide sense) under study. In addition, inaccuracies in the collected data often arise due to errors in the measurements. It thus becomes necessary to treat the input parameters of the model as random quantities, in the form of random variables or stochastic processes. This gives rise to the study of random ordinary and partial differential equations. The computation of the probability density function of the stochastic solution is important for uncertainty quantification of the model output. Although such computation is a difficult objective in general, certain stochastic expansions for the model coefficients allow faithful representations for the stochastic solution, which permits approximating its density function. In this regard, Karhunen-Loève and generalized polynomial chaos expansions become powerful tools for the density approximation. Also, methods based on discretizations from finite difference numerical schemes permit approximating the stochastic solution, therefore its probability density function. The main part of this dissertation aims at approximating the probability density function of important mathematical models with uncertainties in their formulation. Specifically, in this thesis we study, in the stochastic sense, the following models that arise in different scientific areas: in Physics, the model for the damped pendulum; in Biology and Epidemiology, the models for logistic growth and Bertalanffy, as well as epidemiological models; and in Thermodynamics, the heat partial differential equation. We rely on Karhunen-Loève and generalized polynomial chaos expansions and on finite difference schemes for the density approximation of the solution. These techniques are only applicable when we have a forward model in which the input parameters have certain probability distributions already set. When the model coefficients are estimated from collected data, we have an inverse problem. The Bayesian inference approach allows estimating the probability distribution of the model parameters from their prior probability distribution and the likelihood of the data. Uncertainty quantification for the model output is then carried out using the posterior predictive distribution. In this regard, the last part of the thesis shows the estimation of the distributions of the model parameters from experimental data on bacteria growth. To do so, a hybrid method that combines Bayesian parameter estimation and generalized polynomial chaos expansions is used.
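The inverse-problem step in the final part of the abstract can be sketched with a plain Metropolis sampler for a logistic growth rate (a minimal sketch under strong assumptions: synthetic data, a Gaussian likelihood, and a single unknown parameter stand in for the thesis's hybrid Bayesian/polynomial-chaos method):

```python
import numpy as np

rng = np.random.default_rng(4)

def logistic(t, r, K=1.0, x0=0.05):
    """Logistic growth curve with carrying capacity K and initial value x0."""
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

t_obs = np.linspace(0.0, 10.0, 15)
data = logistic(t_obs, r=0.8) + 0.02 * rng.normal(size=t_obs.size)

def log_post(r, sigma=0.02):
    if r <= 0:                        # prior support: r > 0
        return -np.inf
    resid = data - logistic(t_obs, r)
    return -0.5 * np.sum(resid**2) / sigma**2

samples, r = [], 0.5
for _ in range(20000):                # Metropolis random walk
    prop = r + 0.05 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(r):
        r = prop
    samples.append(r)

print(np.mean(samples[5000:]), np.std(samples[5000:]))
```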
This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P.
Calatayud Gregori, J. (2020). Computational methods for random differential equations: probability density function and estimation of the parameters [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138396
Prize-winning thesis.
18

Elbahloul, Salem A. "Modelling of turbulent flames with transported probability density function and rate-controlled constrained equilibrium methods." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/30826.

Full text
Abstract:
In this study, turbulent diffusion flames have been modelled using the transported Probability Density Function (PDF) method and chemistry reduction with Rate-Controlled Constrained Equilibrium (RCCE). RCCE is a systematic method of chemistry reduction employed to simulate the evolution of the chemical composition with a reduced number of species. It is based on the principle of chemical time-scale separation and is formulated in a generalised, systematic manner that allows a reduced mechanism to be derived for a given set of constraint species. The transported scalar PDF method was coupled with RANS turbulence modelling, and this PDF-RANS methodology was used to simulate several turbulent diffusion flames with detailed and RCCE-reduced chemistry. The phenomena of extinction and reignition, soot formation, and thermal radiation in these flames are explored. Sandia Flames D, E and F have been simulated with both the detailed GRI-3.0 mechanism and RCCE-reduced mechanisms. Scatter plots show that PDF methods with simple mixing models are able to reproduce the different degrees of local extinction in the Sandia piloted flames. The PDF-RCCE results are compared with PDF simulations using the detailed mechanism and with measurements of the Sandia flames; the RCCE method predicted the three flames with the same level of accuracy as the detailed mechanism. The methodology has also been applied to sooting flames with radiative heat transfer, combining a semi-empirical soot model and an optically thin radiation model with the PDF-RCCE method. Methane flames measured by Brooks and Moss [26] have been predicted using several RCCE mechanisms in good agreement with measurements. A propane flame with preheated air [162] has also been simulated with the PDF-RCCE methodology: gaseous species profiles compare reasonably with measurements, but soot and temperature predictions in this flame were weak and improvements are still needed.
19

Rashid, Muhammad Asim. "An LTE implementation based on a road traffic density model." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-101990.

Full text
Abstract:
The increase in vehicular traffic has created new challenges in determining the performance of data transmission and safety measures in traffic. Traffic signals at intersections are used as cost-effective and time-saving tools for traffic management in urban areas, but signalized intersections in congested urban areas are a key source of high traffic density and slow traffic. High traffic density in turn lowers the data rate of network traffic between vehicles and between vehicles and infrastructure. Among emerging technologies, LTE takes the lead, with good packet delivery and resilience to network changes caused by vehicular movement and density. This thesis analyses an LTE implementation based on a road traffic density model. The work uses a probability distribution function to calculate density values and develops a realistic traffic scenario in an LTE network from those values. To analyse traffic behaviour, the Aimsun simulator was used to represent a realistic traffic-density situation on a model intersection, with field measurements used as input data. After calibration and validation, close-to-reality results were extracted, and a logistic curve of the probability distribution function was used to determine the density on each part of the intersection. Similar traffic scenarios were implemented in a MATLAB-based LTE system-level simulator. Results were obtained for a whole 90-second traffic scenario, calculating the throughput at every traffic-signal time and section. The results make it quite evident that the LTE system adapts dynamically to changes in traffic behaviour and allocates more bandwidth where it is most needed.
20

Yi, Jianwen. "Large eddy probability density function (LEPDF) simulations for turbulent reactive channel flows and hybrid rocket combustion investigations." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187273.

Full text
Abstract:
A new numerical simulation methodology, Large Eddy Probability Density Function (LEPDF), and a corresponding numerical code have been developed for turbulent reactive flow systems. In LEPDF, the large scales of turbulent motion are resolved accurately, the small scales are handled by a modified Smagorinsky subgrid-scale model, and the chemical reaction terms are resolved exactly without modeling. A numerical scheme to generate inflow boundary conditions has been proposed for spatial simulations of turbulent flows, and a Monte Carlo scheme is used to solve the filtered PDF (Probability Density Function) evolution equation. The present turbulent simulation code has been successfully applied to transpired and non-transpired fully developed turbulent channel flows. It predicts turbulent channel flows more accurately than the existing temporal simulation code with only 27% of that code's grid size. It has been shown that "ejection" and "sweep" are the two dominant events in the wall region of turbulent channel flows: they are responsible for about 120% of the total turbulent production, while their interactions contribute negatively, keeping the total at 100%. Counter-rotating vortices are a major mechanism responsible for turbulent production in the boundary layer. It has also been shown that injection from the channel side walls increases the boundary layer thickness and turbulence intensities but decreases the wall friction and heat transfer; suction has the opposite effects. A state-of-the-art hybrid rocket research laboratory has been established. Lab-scale hybrid rockets with fuel port diameters ranging from 0.5 to 4.0 inches have been designed and constructed. Rocket testing facilities for routine measurements and advanced combustion diagnostic techniques, such as infrared imaging and gas chromatography, are well developed, and a computerized data acquisition/control system has been designed and built. A new Cu⁺⁺-based catalyst is identified that improves the burning rate of a typical HTPB-based hybrid rocket fuel by 15%. Scale-up principles are developed through a series of experimental tests on different sizes of hybrid rockets. A polymer (rocket fuel) degradation model accounting for the catalytic effects of a small concentration of oxidizer near the fuel surface is developed; its numerical predictions are in very good agreement with experimental data.
21

Baig, Arif Marza. "Prediction of passive scalar in a mixing layer using vortex-in-cell and probability density function methods." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6343.

Full text
Abstract:
The transport equation for the probability density function (p.d.f.) of a scalar is applied in conjunction with the vortex-in-cell (VIC) method developed by Abdolhosseini and Milane (1998), to predict the passive scalar field in a two-dimensional spatially growing mixing layer. The VIC method predicts the instantaneous velocity field. Then the turbulent flow characteristics such as mean velocity, the root-mean-square (r.m.s.) longitudinal and lateral velocity fluctuations and the Reynolds shear stress are calculated. The scalar field is represented through the transport equation for the scalar p.d.f. and is solved using the Monte Carlo technique. In the p.d.f. equation, turbulent diffusion is modeled using the gradient transport model, wherein the eddy diffusivity is computed using Boussinesq's postulate and using the Reynolds shear stress and gradient of mean velocity from the VIC solution. The molecular mixing term is closed by a modified Curl model, and the convection term uses the mean velocity from the VIC solution. The computational results were compared with available two-dimensional experimental results. (Abstract shortened by UMI.)
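The modified Curl model named in the abstract mixes randomly chosen particle pairs toward their common mean, with a random mixing extent per pair; a minimal sketch of that mechanism on a standalone particle ensemble (particle counts and the scalar initialisation are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def curl_mixing_step(phi, n_pairs, modified=True):
    """One mixing step: pick particle pairs, relax each pair toward its mean.

    phi      : 1-D array of scalar values carried by Monte Carlo particles
    n_pairs  : number of pairs mixed this step (sets the mixing rate)
    modified : if True, use a random mixing extent per pair (modified Curl)
    """
    idx = rng.choice(phi.size, size=(n_pairs, 2), replace=False)
    a = rng.uniform(size=n_pairs) if modified else np.ones(n_pairs)
    mean = 0.5 * (phi[idx[:, 0]] + phi[idx[:, 1]])
    phi[idx[:, 0]] += a * (mean - phi[idx[:, 0]])
    phi[idx[:, 1]] += a * (mean - phi[idx[:, 1]])
    return phi

phi = rng.choice([0.0, 1.0], size=10000)   # initially unmixed scalar
for _ in range(50):
    phi = curl_mixing_step(phi, n_pairs=1000)
print(phi.var())                           # variance decays as mixing proceeds
```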
22

Pokhrel, Keshav Prasad. "Statistical Analysis and Modeling of Brain Tumor Data: Histology and Regional Effects." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4746.

Full text
Abstract:
Comprehensive statistical models for non-normally distributed cancerous tumor sizes are of prime importance in epidemiological studies, while long-term forecasting models can help reduce the complications and uncertainties of medical prognosis. Statistical forecasting models are critical for a better understanding of the disease and for supplying appropriate treatments; such models can also be used for budget allocation, planning, control, and evaluation of ongoing prevention and early-detection efforts. In the present study, we investigate the effects of age, demography, and race on primary brain tumor sizes using quantile regression methods, to obtain a better understanding of malignant brain tumor sizes. The study reveals that the effects of risk factors, together with the probability distributions of malignant brain tumor sizes, play a significant role in understanding the rate of change of tumor sizes. The data on which our analysis and modeling are based were obtained from the Surveillance, Epidemiology, and End Results (SEER) program of the United States. We also analyze discretely observed brain cancer mortality rates using functional data analysis models, a novel approach to modeling time series data, to obtain a more accurate and relevant forecast of mortality rates for the US. We relate cancer registries, race, age, and gender to age-adjusted brain cancer mortality rates and compare the variation of these rates during the study period. Finally, we develop an effective statistical model for heterogeneous, high-dimensional data that forecasts hazard rates with a high degree of accuracy, which will be very helpful in addressing related health problems now and in the future.
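Quantile regression of the kind described above can be run with statsmodels; a minimal sketch on synthetic data (the single covariate and the response model are illustrative assumptions, not the SEER analysis):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
age = rng.uniform(20, 80, 500)
# Synthetic skewed "tumor size": spread grows with age.
size = 5 + 0.05 * age + rng.gamma(2.0, 0.02 * age)

X = sm.add_constant(age)
for q in (0.25, 0.5, 0.75):                 # different conditional quantiles
    fit = sm.QuantReg(size, X).fit(q=q)
    print(q, fit.params)                    # intercept and age slope per quantile
```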
23

辻, 義之, Yoshiyuki TSUJI, 圭. 宮地, Kei MIYACHI, 育雄 中村, and Ikuo NAKAMURA. "平板乱流境界層対数速度分布領域における変動速度確率密度関数の特性 (第2報, レイノルズ数依存性について) [Characteristics of the probability density function of fluctuating velocity in the logarithmic-law region of a flat-plate turbulent boundary layer (2nd report: Reynolds number dependence)]." 日本機械学会 [The Japan Society of Mechanical Engineers], 2002. http://hdl.handle.net/2237/9103.

Full text
24

Johnson, Paul E. "Uncertainties in Oceanic Microwave Remote Sensing: The Radar Footprint, the Wind-Backscatter Relationship, and the Measurement Probability Density Function." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/71.

Full text
Abstract:
Oceanic microwave remote sensing provides the data necessary for the estimation of significant geophysical parameters such as the near-surface vector wind. To obtain accurate estimates, a precise understanding of the measurements is critical. This work clarifies and quantifies specific uncertainties in the scattered power measured by an active radar instrument. While there are many sources of uncertainty in remote sensing measurements, this work concentrates on three significant, yet largely unstudied, effects. With a theoretical derivation of the backscatter from an ocean-like surface, results from this dissertation demonstrate that the backscatter decays with surface roughness in two distinct modes of behavior, affected by the size of the footprint. A technique is developed, and scatterometer data analyzed, to quantify the variability of spaceborne backscatter measurements for given wind conditions; the impact on wind retrieval is described in terms of bias and the Cramér-Rao lower bound. The probability density function of modified periodogram averages (a spectral estimation technique) is derived in generality and for the specific case of power estimates made by the NASA scatterometer, and the impact on wind retrieval is quantified.
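The "modified periodogram averages" in the abstract correspond to what SciPy implements as Welch's method; a minimal sketch checking the approximate chi-square behaviour of the estimate for white Gaussian noise (non-overlapping rectangular-window segments are assumed so the chi-square approximation is clean; this is not the dissertation's scatterometer-specific derivation):

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import chi2

rng = np.random.default_rng(7)
fs, nperseg, n = 1000.0, 256, 256 * 8          # 8 non-overlapping segments
trials = np.array([
    welch(rng.normal(size=n), fs=fs, nperseg=nperseg, noverlap=0,
          window="boxcar")[1][10]               # PSD estimate at one interior bin
    for _ in range(2000)
])

# For K independent periodograms of Gaussian noise, the average at an
# interior bin is approximately (true PSD) * chi2(2K) / (2K).
K = 8
true_psd = 2.0 / fs                  # one-sided PSD of unit-variance white noise
print(trials.mean(), true_psd)       # empirical mean vs. theoretical value
print(trials.var(), true_psd**2 * chi2(2 * K).var() / (2 * K)**2)
```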
25

Jesch, Dávid [author], Johannes [academic supervisor] Janicka, and Michael [academic supervisor] Schäfer. "Large Eddy Simulation of Turbulent Combustion: A Novel Multivariate Probability Density Function Approach / Dávid Jesch. Betreuer: Johannes Janicka ; Michael Schäfer." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2016. http://d-nb.info/1112269606/34.

Full text
26

Forgo, Vincent Z. Mr. "A Distribution of the First Order Statistic When the Sample Size is Random." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3181.

Full text
Abstract:
Statistical distributions, also known as probability distributions, are used to model a random experiment. Probability distributions are described by probability density functions (pdf) and cumulative distribution functions (cdf). They are widely used in engineering, actuarial science, computer science, the biological sciences, physics, and other applicable areas of study, and statistics are used to draw conclusions about a population through probability models. Sample statistics such as the minimum, first quartile, median, third quartile, and maximum, referred to as the five-number summary, are examples of order statistics; the minimum and maximum observations are important in extreme value theory. This paper focuses on the probability distribution of the minimum observation, also known as the first order statistic, when the sample size is random.
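For orientation, the object of study admits a compact standard identity (stated here for context, not quoted from the thesis): if the random sample size \(N\) has probability generating function \(G_N\) and the observations are i.i.d. with distribution function \(F\), the first order statistic satisfies

```latex
P\bigl(X_{(1)} > x\bigr) \;=\; \sum_{n\ge 1} P(N=n)\,\bigl(1-F(x)\bigr)^{n} \;=\; G_N\bigl(1-F(x)\bigr).
```

For example, if \(N\) is geometric on \(\{1,2,\dots\}\) with success probability \(p\) and \(X\) is exponential with rate \(\lambda\), this gives \(P(X_{(1)}>x) = p\,e^{-\lambda x}/\bigl(1-(1-p)e^{-\lambda x}\bigr)\).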
27

辻, 義之, Yoshiyuki TSUJI, 圭. 宮地, Kei MIYACHI, 孝裕 鈴木, Takahiro SUZUKI, 育雄 中村, and Ikuo NAKAMURA. "平板乱流境界層対数速度分布領域における変動速度確率密度関数の特性 (第3報, 対数法則領域における整構造の役割) [Characteristics of the probability density function of fluctuating velocity in the logarithmic-law region of a flat-plate turbulent boundary layer (3rd report: the role of organized structures in the logarithmic-law region)]." 日本機械学会 [The Japan Society of Mechanical Engineers], 2004. http://hdl.handle.net/2237/9097.

Full text
28

Mo, Eirik. "Nonlinear stochastic dynamics and chaos by numerical path integration." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1786.

Full text
Abstract:

The numerical path integration method for solving stochastic differential equations is extended to solve systems with up to six spatial dimensions, angular variables, and highly nonlinear systems, including systems that result in discontinuities in the response probability density function. Novel methods to stabilize the numerical method and increase computation speed are presented and discussed, including the use of the fast Fourier transform (FFT) and some new spline interpolation methods. Some sufficient criteria for the path integration theory to be applicable are also presented. The development of complex numerical code is made possible through automatic code generation by scripting. The resulting code is applied to chaotic dynamical systems by adding a Gaussian noise term to the deterministic equation. Various methods and approximations for computing the largest Lyapunov exponent of these systems are presented, illustrated, and compared. Finally, it is shown that the location and size of the additive noise term affect the results: additive noise can make specific non-chaotic systems chaotic, and chaotic systems non-chaotic.
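In one dimension, numerical path integration iterates the Chapman-Kolmogorov equation with the Gaussian one-step kernel of an Euler-Maruyama discretisation; a minimal sketch for the Ornstein-Uhlenbeck equation dx = -x dt + sigma dW (the grid, step size, and test equation are illustrative; the thesis's FFT and spline accelerations are not shown):

```python
import numpy as np

sigma, dt = 1.0, 0.01
x = np.linspace(-4, 4, 401)                     # state grid
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)      # initial density guess

# One-step transition kernel from an Euler-Maruyama step:
# x_{n+1} | x_n = y  ~  Normal(y - y*dt, sigma^2 * dt)
mean = x - x * dt
var = sigma**2 * dt
kernel = np.exp(-(x[:, None] - mean[None, :])**2 / (2 * var)) \
         / np.sqrt(2 * np.pi * var)

for _ in range(2000):                            # iterate Chapman-Kolmogorov
    p = kernel @ p * dx
    p /= p.sum() * dx                            # renormalise on the grid

# Stationary density of this OU process is Normal(0, sigma^2 / 2).
print(p[200], 1 / np.sqrt(np.pi * sigma**2))     # compare at x = 0
```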

APA, Harvard, Vancouver, ISO, and other styles
29

Gaddam, Purna Chandra Srinivas Kumar, and Prathik Sunkara. "Advanced Image Processing Using Histogram Equalization and Android Application Implementation." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13735.

Full text
Abstract:
Images are often captured under conditions of near-zero visibility for the human eye, typically because of a lack of clarity caused by atmospheric effects such as haze, fog, and other daylight effects. Useful information captured under such scenarios should be enhanced and made clear so that objects and other details can be recognized. Many image processing algorithms have been developed to deal with such degradations caused by low light or by haze affecting the imaging device, and they also provide nonlinear contrast enhancement to some extent. We took existing algorithms, namely SMQT (Successive Mean Quantization Transform), the V-transform, and histogram equalization, to improve the visual quality of digital pictures of wide-range scenes under irregular lighting conditions. These algorithms were implemented in two different ways and tested on different images affected by low light and color change, and succeeded in producing enhanced images. They provide various enhancements of color and contrast, and very accurate results on low-light images. The histogram equalization technique is implemented by interpreting the histogram of the image as a probability density function: the cumulative distribution function is applied to the image to obtain accumulated histogram values, and the pixel values are then changed based on their probability and spread over the histogram. Among these algorithms we chose histogram equalization; MATLAB code was taken as a reference and modified for implementation as an API (Application Program Interface) in Java, and we confirmed that the application works properly with a reduction in execution time.
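The CDF remapping described in the abstract fits in a few lines; the sketch below is a generic 8-bit grayscale version (the 256-level, uint8 assumption is ours, and the SMQT and V-transform variants are not shown).

```python
import numpy as np

def histogram_equalize(img):
    """Histogram equalization of an 8-bit grayscale image: interpret the
    normalised histogram as a PDF, accumulate it into a CDF, and remap
    each pixel through the scaled CDF to spread values over the range."""
    hist = np.bincount(img.ravel(), minlength=256)
    pdf = hist / img.size                      # histogram as a probability density
    cdf = np.cumsum(pdf)                       # cumulative distribution function
    lut = np.round(255 * cdf).astype(np.uint8) # lookup table for the remapping
    return lut[img]
```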
APA, Harvard, Vancouver, ISO, and other styles
30

Phillips, Michael James. "A random matrix model for two-colour QCD at non-zero quark density." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/5084.

Full text
Abstract:
We solve a random matrix ensemble called the chiral Ginibre orthogonal ensemble, or chGinOE. This non-Hermitian ensemble has applications to modelling particular low-energy limits of two-colour quantum chromodynamics (QCD). In particular, the matrices model the Dirac operator for quarks in the presence of a gluon gauge field of fixed topology, with an arbitrary number of flavours of virtual quarks and a non-zero quark chemical potential. We derive the joint probability density function (JPDF) of eigenvalues for this ensemble for finite matrix size N, which we then write in a factorised form. We then present two different methods for determining the correlation functions, resulting in compact expressions involving Pfaffians containing the associated kernel. We determine the microscopic large-N limits at strong and weak non-Hermiticity (required for physical applications) for both the real and complex eigenvalue densities. Various other properties of the ensemble are also investigated, including the skew-orthogonal polynomials and the fraction of eigenvalues that are real. A number of the techniques that we develop have more general applicability within random matrix theory, some of which we also explore in this thesis.
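One of the quantities listed, the fraction of real eigenvalues, is easy to probe numerically. The sketch below does so for the plain real Ginibre ensemble, a simpler cousin of the chiral ensemble solved in the thesis (the matrix size, trial count, and tolerance are illustrative; for real Ginibre the expected fraction behaves like sqrt(2/(pi*N)) for large N).

```python
import numpy as np

def real_eigenvalue_fraction(n, trials=200, tol=1e-9, seed=0):
    """Monte Carlo estimate of the fraction of real eigenvalues of an
    n x n real Ginibre matrix (i.i.d. standard Gaussian entries)."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(trials):
        ev = np.linalg.eigvals(rng.standard_normal((n, n)))
        count += np.sum(np.abs(ev.imag) < tol)  # eigenvalues on the real axis
    return count / (trials * n)

print(real_eigenvalue_fraction(50))             # roughly sqrt(2/(50*pi)) ~ 0.11
```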
APA, Harvard, Vancouver, ISO, and other styles
31

Curto, Filipa Madeira Lopes Esteves. "Estimação da aversão ao risco através do cálculo da função densidade de probabilidade subjetiva para o caso das opções do petróleo." Master's thesis, Instituto Superior de Economia e Gestão, 2020. http://hdl.handle.net/10400.5/20856.

Full text
Abstract:
Master's degree in Mathematical Finance
Oil is one of the most important commodities traded worldwide and is the primary source of energy production. It can be traded at its spot price or through financial derivatives contracts, oil option trading being an example of the latter. Studying this variable is relevant in order to understand its behaviour and estimate possible forecasts, and such a study can be conducted through a probability density function. Because of its failure to embrace all the hypotheses that portray the common investor, the analysis of a risk-neutral probability density function calls for the study of a smoothed/subjective probability density function. Considering only a risk-neutral probability density function for the analysis of oil option prices would amount to assuming that all investors are indifferent to risk, whatever the circumstance, meaning that they would base their investment decisions solely on the possible outcomes associated with the investment. However, this does not match reality, which calls for the computation and analysis of investors' risk-aversion values with respect to the variable at play. The present essay concludes that risk aversion is rather high in periods of great volatility and uncertainty, and considerably smaller in periods of greater economic and financial stability.
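For context, a risk-neutral density like the essay's starting point is commonly extracted from option quotes via the Breeden-Litzenberger relation, f(K) = e^{rT} d²C/dK²; below is a finite-difference sketch on a hypothetical uniform strike grid (real quotes would need smoothing and arbitrage-free interpolation first).

```python
import numpy as np

def risk_neutral_density(strikes, call_prices, r, T):
    """Breeden-Litzenberger: the risk-neutral PDF of the terminal price is
    the discounted second strike-derivative of the call price, estimated
    here with central finite differences on a uniform strike grid."""
    dk = strikes[1] - strikes[0]
    d2c = (call_prices[2:] - 2 * call_prices[1:-1] + call_prices[:-2]) / dk ** 2
    return strikes[1:-1], np.exp(r * T) * d2c
```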
APA, Harvard, Vancouver, ISO, and other styles
32

Ericok, Ozlen. "Uncertainty Assessment In Reserve Estimation Of A Naturally Fractured Reservoir." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605713/index.pdf.

Full text
Abstract:
UNCERTAINTY ASSESSMENT IN RESERVE ESTIMATION OF A NATURALLY FRACTURED RESERVOIR. ERIÇOK, Özlen. M.S., Department of Petroleum and Natural Gas Engineering. Supervisor: Prof. Dr. Fevzi GÜMRAH. December 2004, 169 pages.
Reservoir performance prediction and reserve estimation depend on various petrophysical parameters which have uncertainties due to the available technology. For proper and economical field development, these parameters must be determined by taking into consideration their uncertainty level and probable data ranges. To implement an uncertainty assessment of the original oil in place (OOIP) estimate of a field, a naturally fractured carbonate field, Field-A, was chosen. Since field information is obtained by drilling and testing wells throughout the field, uncertainty in the true ranges of the reservoir parameters arises because it is impossible to drill at every location in an area. This study is based on defining the probability distributions of the uncertain variables in reserve estimation and evaluating the probable reserve amount using the Monte Carlo simulation method. Probabilistic reserve estimation gives the whole range of the probable original oil in place of a field; the results are summarized by their likelihood of occurrence as P10, P50 and P90 reserves. In this study, the reserves of Field-A, in the southeast of Turkey, are estimated by probabilistic methods for three producing zones: the Karabogaz Formation, the Kbb-C Member of the Karababa Formation, and the Derdere Formation. Probability density functions of the petrophysical parameters are used as inputs to the volumetric reserve estimation method, and probable reserves are calculated with the @Risk software, which implements the Monte Carlo method. The simulation showed that Field-A has P50 reserves of 11.2 MMstb in the matrix and 2.0 MMstb in the fractures of the Karabogaz Formation, 15.7 MMstb in the matrix and 3.7 MMstb in the fractures of the Kbb-C Member, and 10.6 MMstb in the matrix and 1.6 MMstb in the fractures of the Derdere Formation. Sensitivity analysis of the inputs showed that matrix porosity, net thickness and fracture porosity are significant in the Karabogaz Formation and Kbb-C Member reserve estimates, while water saturation and fracture porosity are most significant in the Derdere Formation estimate.
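The probabilistic workflow summarised above (implemented in the thesis with @Risk) reduces to sampling a volumetric relation and reading off percentiles. Below is a minimal sketch with purely illustrative input distributions, not Field-A's calibrated parameters, using the standard volumetric OOIP formula in oilfield units.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# Illustrative input distributions; not the Field-A data.
area = rng.triangular(800, 1000, 1200, n)        # drainage area, acres
h = rng.triangular(30, 50, 80, n)                # net thickness, ft
phi = rng.normal(0.12, 0.02, n).clip(0.01)       # porosity, fraction
sw = rng.normal(0.35, 0.05, n).clip(0.0, 0.99)   # water saturation, fraction
bo = 1.2                                         # formation volume factor, rb/stb

ooip = 7758 * area * h * phi * (1 - sw) / bo     # stb; 7758 bbl per acre-ft
# Exceedance convention: P10 is the value exceeded with 10% probability.
p90, p50, p10 = np.percentile(ooip, [10, 50, 90])
```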
APA, Harvard, Vancouver, ISO, and other styles
33

Falcone, Jessica Dominique. "Validation of high density electrode arrays for cochlear implants: a computational and structural approach." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39563.

Full text
Abstract:
Creating high-resolution, or high-density, electrode arrays may be the key to improving cochlear implant users' speech perception in noise, comprehension of lexical languages, and music appreciation. Contemporary electrode arrays use multipolar stimulation techniques such as current steering (shifting the spread of neural excitation between two physical electrodes) and current focusing (narrowing the neural spread of excitation) to increase resolution and target the neural population more specifically. Another approach to increasing resolution incorporates microelectromechanical systems (MEMS) fabrication to create a thin-film microelectrode (TFM) array with a series of high-density electrodes. Validating the benefits of high-density electrode arrays requires a systems-level approach. This hypothesis is tested computationally via cochlea and auditory nerve simulations, and in vitro studies provide structural proof of concept. By employing Rattay's activating function and entering it into Litvak's neural probability model, a first-order estimation model of the auditory nerve's response to electrical stimulation was obtained. Two different stimulation scenarios were evaluated: current steering vs. a high-density electrode, and current focusing of contemporary electrodes vs. current focusing of high-density electrodes. The results revealed that a high-density electrode is more localized than current steering and requires less current. A second-order estimation model was also created in COMSOL, which provided the resulting potential and current flow when the electrodes were electrically stimulated. The structural tests were conducted to provide a proof of concept for the TFM arrays' ability to contour to the shape of the cochlea. The TFM arrays were integrated with a standard insertion platform (IP), and in vitro tests were performed on human cadaver cochleae using the TFM/IP devices. Fluoroscopic images recorded the insertion, and post-analysis 3D CT scans and histology were conducted on the specimens. Only three of the ten implanted TFM/IPs suffered severe delamination; this statistic for scala vestibuli excursion is not an outlier when compared to previously recorded data for contemporary cochlear electrode arrays.
APA, Harvard, Vancouver, ISO, and other styles
34

Silase, Geletu Biruk. "Modeling the Behavior of an Electronically Switchable Directional Antenna for Wireless Sensor Networks." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3026.

Full text
Abstract:
Reducing power consumption is among the top concerns in Wireless Sensor Networks, as the lifetime of a Wireless Sensor Network depends on its power consumption. Directional antennas help achieve this goal contrary to the commonly used omnidirectional antennas that radiate electromagnetic power equally in all directions, by concentrating the radiated electromagnetic power only in particular directions. This enables increased communication range at no additional energy cost and reduces contention on the wireless medium. The SPIDA (SICS Parasitic Interference Directional Antenna) prototype is one of the few real-world prototypes of electronically switchable directional antennas for Wireless Sensor Networks. However, building several prototypes of SPIDA and conducting real-world experiments using them may be expensive and impractical. Modeling SPIDA based on real-world experiments avoids the expenses incurred by enabling simulation of large networks equipped with SPIDA. Such a model would then allow researchers to develop new algorithms and protocols that take advantage of the provided directional communication on existing Wireless Sensor Network simulators. In this thesis, a model of SPIDA for Wireless Sensor Networks is built based on thoroughly designed real-world experiments. The thesis builds a probabilistic model that accounts for variations in measurements, imperfections in the prototype construction, and fluctuations in experimental settings that affect the values of the measured metrics. The model can be integrated into existing Wireless Sensor Network simulators to foster the research of new algorithms and protocols that take advantage of directional communication. The model returns the values of signal strength and packet reception rate from a node equipped with SPIDA at a certain point in space given the two-dimensional distance coordinates of the point and the configuration of SPIDA as inputs.
APA, Harvard, Vancouver, ISO, and other styles
35

Shahraeeni, Mohammad Sadegh. "Inversion of seismic attributes for petrophysical parameters and rock facies." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/4754.

Full text
Abstract:
Prediction of rock and fluid properties such as porosity, clay content, and water saturation is essential for the exploration and development of hydrocarbon reservoirs. Rock and fluid property maps obtained from such predictions can be used for the optimal selection of well locations for reservoir development and production enhancement. Seismic data are usually the only source of information available throughout a field that can be used to predict the 3D distribution of properties with appropriate spatial resolution. The main challenge in inferring properties from seismic data is the ambiguous nature of geophysical information. Therefore, any estimate of rock and fluid property maps derived from seismic data must also represent its associated uncertainty. In this study we develop a computationally efficient mathematical technique based on neural networks to integrate measured data and a priori information in order to reduce the uncertainty in rock and fluid properties in a reservoir. The post-inversion (a posteriori) information about rock and fluid properties is represented by the joint probability density function (PDF) of porosity, clay content, and water saturation. In this technique the a posteriori PDF is modeled by a weighted sum of Gaussian PDFs. A so-called mixture density network (MDN) estimates the weights, mean vector, and covariance matrix of the Gaussians given any measured data set. We solve several inverse problems with the MDN, compare the results with the Monte Carlo (MC) sampling solution, and show that the MDN inversion technique provides a good estimate of the MC sampling solution; however, the computational cost of training and using the neural network is much lower than that of the MC sampling solution (by more than a factor of 10^4 in some cases). We also discuss the design, implementation, and training procedure of the MDN, and its limitations in estimating the solution of an inverse problem. In this thesis we focus on data from a deep offshore field in Africa. Our goal is to apply the MDN inversion technique to obtain maps of petrophysical properties (i.e., porosity, clay content, water saturation) and petrophysical facies from 3D seismic data. Petrophysical facies (i.e., non-reservoir, oil- and brine-saturated reservoir facies) are defined probabilistically based on geological information and the values of the petrophysical parameters. First, we investigate the relationship (i.e., the petrophysical forward function) between compressional- and shear-wave velocity and the petrophysical parameters. The petrophysical forward function depends on different properties of rocks and varies from one rock type to another. Therefore, after acquisition of well logs or seismic data from a geological setting, the petrophysical forward function must be calibrated with data and observations. The uncertainty of the petrophysical forward function comes from uncertainty in measurements and uncertainty about the type of facies. We present a method to construct the petrophysical forward function with its associated uncertainty from both sources above. The results show that introducing uncertainty in facies improves the accuracy of the petrophysical forward function predictions. Then, we apply the MDN inversion method to solve four different petrophysical inverse problems. In particular, we invert P- and S-wave impedance logs for the joint PDF of porosity, clay content, and water saturation using a calibrated petrophysical forward function.
Results show that the posterior PDF of the model parameters provides reasonable estimates of the measured well logs. Errors in the posterior PDF are mainly due to errors in the petrophysical forward function. Finally, we apply the MDN inversion method to predict 3D petrophysical properties from attributes of seismic data. In this application, the inversion objective is to estimate the joint PDF of porosity, clay content, and water saturation at each point in the reservoir from the compressional- and shear-wave impedance obtained from the inversion of AVO seismic data. Uncertainty in the a posteriori PDF of the model parameters is due to different sources such as variations in effective pressure, bulk modulus and density of hydrocarbon, uncertainty of the petrophysical forward function, and random noise in recorded data. Results show that the standard deviations of all model parameters are reduced after inversion, which shows that the inversion process provides information about all parameters. We also applied the result of the petrophysical inversion to estimate the 3D probability maps of non-reservoir facies and brine- and oil-saturated reservoir facies. The accuracy of the predicted oil-saturated facies at the well location is good, but due to errors in the petrophysical inversion the predicted non-reservoir and brine-saturated facies are ambiguous. Although the accuracy of results may vary due to different sources of error in different applications, the fast, probabilistic method of solving non-linear inverse problems developed in this study can be applied to invert well logs and large seismic data sets for petrophysical parameters in different applications.
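The heart of an MDN is a head that turns network outputs into mixture weights, means, and spreads, and a mixture density evaluated from them. Below is a minimal one-dimensional evaluation sketch (the thesis works with a multivariate joint PDF and full covariance matrices; the parameterisation here is an assumption, and the training loop and network body are omitted).

```python
import numpy as np

def mdn_pdf(y, weight_logits, means, log_sigmas):
    """Evaluate an MDN's output density at y: a weighted sum of Gaussians
    whose parameters the network predicts for a given input."""
    w = np.exp(weight_logits - weight_logits.max())
    w /= w.sum()                               # softmax -> mixture weights
    sig = np.exp(log_sigmas)                   # positivity via log-parameterisation
    comps = np.exp(-0.5 * ((y - means) / sig) ** 2) / (np.sqrt(2 * np.pi) * sig)
    return float(np.sum(w * comps))
```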
APA, Harvard, Vancouver, ISO, and other styles
36

Hasan, Abeer. "A Study of non-central Skew t Distributions and their Applications in Data Analysis and Change Point Detection." Bowling Green State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1371055538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Van, der Walt Christiaan Maarten. "Maximum-likelihood kernel density estimation in high-dimensional feature spaces /| C.M. van der Walt." Thesis, North-West University, 2014. http://hdl.handle.net/10394/10635.

Full text
Abstract:
With the advent of the internet and advances in computing power, the collection of very large high-dimensional datasets has become feasible; understanding and modelling high-dimensional data has thus become a crucial activity, especially in the field of pattern recognition. Since non-parametric density estimators are data-driven and do not require or impose a pre-defined probability density function on data, they are very powerful tools for probabilistic data modelling and analysis. Conventional non-parametric density estimation methods, however, originated from the field of statistics and were not originally intended to perform density estimation in high-dimensional feature spaces, as is often encountered in real-world pattern recognition tasks. Therefore we address the fundamental problem of non-parametric density estimation in high-dimensional feature spaces in this study. Recent advances in maximum-likelihood (ML) kernel density estimation have shown that kernel density estimators hold much promise for estimating nonparametric probability density functions in high-dimensional feature spaces. We therefore derive two new iterative kernel bandwidth estimators from the maximum-likelihood (ML) leave-one-out objective function and also introduce a new non-iterative kernel bandwidth estimator (based on the theoretical bounds of the ML bandwidths) for the purpose of bandwidth initialisation. We name the iterative kernel bandwidth estimators the minimum leave-one-out entropy (MLE) and global MLE estimators, and name the non-iterative kernel bandwidth estimator the MLE rule-of-thumb estimator. We compare the performance of the MLE rule-of-thumb estimator and conventional kernel density estimators on artificial data with data properties that are varied in a controlled fashion, and on a number of representative real-world pattern recognition tasks, to gain a better understanding of the behaviour of these estimators in high-dimensional spaces and to determine whether these estimators are suitable for initialising the bandwidths of iterative ML bandwidth estimators in high dimensions. We find that there are several regularities in the relative performance of conventional kernel density estimators across different tasks and dimensionalities, and that the Silverman rule-of-thumb bandwidth estimator performs reliably across most tasks and dimensionalities of the pattern recognition datasets considered, even in high-dimensional feature spaces. Based on this empirical evidence and the intuitive theoretical motivation that the Silverman estimator optimises the asymptotic mean integrated squared error (assuming a Gaussian reference distribution), we select this estimator to initialise the bandwidths of the iterative ML kernel bandwidth estimators compared in our simulation studies. We then perform a comparative simulation study of the newly introduced iterative MLE estimators and other state-of-the-art iterative ML estimators on a number of artificial and real-world high-dimensional pattern recognition tasks. We illustrate with artificial data (guided by theoretical motivations) under what conditions certain estimators should be preferred, and we empirically confirm on real-world data that no estimator performs optimally on all tasks and that the optimal estimator depends on the properties of the underlying density function being estimated.
We also observe an interesting case of the bias-variance trade-off where ML estimators with fewer parameters than the MLE estimator perform exceptionally well on a wide variety of tasks; however, for the cases where these estimators do not perform well, the MLE estimator generally performs well. The newly introduced MLE kernel bandwidth estimators prove to be a useful contribution to the field of pattern recognition, since they perform optimally on a number of real-world pattern recognition tasks investigated and provide researchers and practitioners with two alternative estimators to employ for the task of kernel density estimation.
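The leave-one-out objective behind these estimators can be written down directly; below is a brute-force one-dimensional sketch alongside the Silverman initialiser (a grid search stands in for the thesis's iterative estimators, and the high-dimensional machinery is not reproduced).

```python
import numpy as np

def loo_log_likelihood(h, x):
    """Leave-one-out log-likelihood of a Gaussian-kernel KDE with
    bandwidth h: each point is scored by the density built from the rest."""
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2
    k = np.exp(-d2 / (2 * h ** 2)) / (np.sqrt(2 * np.pi) * h)
    np.fill_diagonal(k, 0.0)                   # leave the point itself out
    return np.sum(np.log(k.sum(axis=1) / (n - 1)))

x = np.random.default_rng(2).standard_normal(300)
hs = np.linspace(0.05, 1.0, 60)
h_ml = hs[np.argmax([loo_log_likelihood(h, x) for h in hs])]
h_silverman = 1.06 * x.std() * len(x) ** (-1 / 5)   # Gaussian rule-of-thumb
```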
PhD (Information Technology), North-West University, Vaal Triangle Campus, 2014
APA, Harvard, Vancouver, ISO, and other styles
38

Денисов, Станіслав Іванович, Станислав Иванович Денисов, Stanislav Ivanovych Denysov, V. V. Reva, and O. O. Bondar. "Generalized Fokker-Planck Equation for the Nanoparticle Magnetic Moment Driven by Poisson White Noise." Thesis, Sumy State University, 2012. http://essuir.sumdu.edu.ua/handle/123456789/35373.

Full text
Abstract:
We derive the generalized Fokker-Planck equation for the probability density function of the nanoparticle magnetic moment driven by Poisson white noise. Our approach is based on the reduced stochastic Landau-Lifshitz equation in which this noise is included in the effective magnetic field. We take into account that the magnetic moment under the noise action can change its direction instantaneously, and we show that the generalized equation has an integro-differential form.
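For orientation, the integro-differential structure referred to is the hallmark of Poisson-driven dynamics. For a scalar caricature dx/dt = f(x) + ξ(t), with ξ(t) Poisson white noise of rate λ and jump-magnitude density w(z), the density obeys the standard Kolmogorov-Feller form (the paper's equation for the magnetic moment on the sphere is the analogous but more involved statement):

```latex
\frac{\partial P(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[f(x)\,P(x,t)\bigr]
    + \lambda \int w(z)\,P(x - z, t)\,dz
    - \lambda\,P(x,t).
```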
APA, Harvard, Vancouver, ISO, and other styles
39

Mahmood, Khalid. "Constrained linear and non-linear adaptive equalization techniques for MIMO-CDMA systems." Thesis, De Montfort University, 2013. http://hdl.handle.net/2086/10203.

Full text
Abstract:
Researchers have shown that combining multiple-input multiple-output (MIMO) techniques with CDMA can attain higher gains in capacity, reliability and data transmission speed. A major drawback of MIMO-CDMA systems, however, is multiple access interference (MAI), which can reduce capacity and increase the bit error rate (BER), so statistical analysis of MAI becomes a very important factor in the performance analysis of these systems. In this thesis, a detailed analysis of MAI is performed for binary phase-shift keying (BPSK) signals with random signature sequences in a Rayleigh fading environment, and closed-form expressions for the probability density function of MAI, and of MAI with noise, are derived. Further, the probability of error is derived for the maximum-likelihood receiver. These derivations are verified through simulations and are found to reinforce the theoretical results. Since the performance of MIMO suffers significantly from MAI and inter-symbol interference (ISI), equalization is needed to mitigate these effects. It is well known from the theory of constrained optimization that the learning speed of any adaptive filtering algorithm can be increased by adding a constraint to it, as in the case of the normalized least mean squares (NLMS) algorithm. Thus, in this work both linear and non-linear decision feedback (DFE) equalizers for MIMO systems have been designed with a least mean squares (LMS) based constrained stochastic gradient algorithm. More specifically, an LMS algorithm has been developed that is equipped with knowledge of the number of users, the spreading sequence (SS) length, the additive noise variance, and MAI with noise (a new constraint); it is named the MIMO-CDMA MAI-with-noise-constrained (MNCLMS) algorithm. Convergence and tracking analyses of the proposed algorithm are carried out for interference- and noise-limited scenarios, and simulation results are presented to compare the performance of the MIMO-CDMA MNCLMS algorithm with other adaptive algorithms.
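The abstract's premise, that a constraint can speed up LMS-type adaptation, is most familiar from the NLMS normalisation; a generic sketch is below (the MNCLMS constraints on the number of users, spreading-sequence length, and noise statistics are specific to the thesis and are not reproduced here).

```python
import numpy as np

def nlms(x, d, n_taps=8, mu=0.5, eps=1e-6):
    """Normalized LMS adaptive filter: the step size is scaled by the
    instantaneous regressor power, a constraint that speeds convergence."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]              # regressor, most recent sample first
        y[n] = w @ u
        e = d[n] - y[n]                        # error against the desired signal
        w += mu * e * u / (eps + u @ u)        # normalised update
    return w, y
```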
APA, Harvard, Vancouver, ISO, and other styles
40

Gasper, Rebecca Elizabeth. "Action potentials in the peripheral auditory nervous system : a novel PDE distribution model." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1321.

Full text
Abstract:
Auditory physiology is nearly unique in the human body because of its small-diameter neurons. When considering a single node on one neuron, the number of channels is very small, so ion fluxes exhibit randomness. Hodgkin and Huxley, in 1952, set forth a system of Ordinary Differential Equations (ODEs) to track the flow of ions in a squid motor neuron, based on a circuit analogy for electric current. This formalism for modeling is still in use today and is useful because coefficients can be directly measured. To measure auditory properties of Firing Efficiency (FE) and Post Stimulus Time (PST), we can simply measure the depolarization, or "upstroke," of a node. Hence, we reduce the four-dimensional squid neuron model to a two-dimensional system of ODEs. The stochastic variable m for sodium activation is allowed a random walk in addition to its normal evolution, and the results are drastic. The diffusion coefficient, for spreading, is inversely proportional to the number of channels; for 130 ion channels, D is closer to 1/3 than 0 and cannot be called negligible. A system of Partial Differential Equations (PDEs) is derived in these pages to model the distribution of states of the node with respect to the (nondimensionalized) voltage v and the sodium activation gate m. Initial conditions describe a distribution of (v,m) states; in most experiments, this would be a curve with mode at the resting state. Boundary conditions are Robin (Natural) boundary conditions, which give conservation of the population. Evolution of the PDE has a drift term for the mean change of state and a diffusion term, the random change of state. The phase plane is broken into fired and resting regions, which form basins of attraction for fired and resting-state fixed points. If a stimulus causes ions to flow from the resting region into the fired region, this rate of flux is approximately the firing rate, analogous to clinically measuring when the voltage crosses a threshold. This gives a PST histogram. The FE is an integral of the population over the fired region at a measured stop time after the stimulus (since, in the reduced model, when neurons fire they do not repolarize). This dissertation also includes useful generalizations and methodology for turning other ODEs into PDEs. Within the HH modeling, parameters can be switched for other systems of the body, and may present a similar firing and non-firing separatrix (as in Chapter 3). For any system of ODEs, an advection model can show a distribution of initial conditions or the evolution of a given initial probability density over a state space (Chapter 4); a system of Stochastic Differential Equations can be modeled with an advection-diffusion equation (Chapter 5). As computers increase in speed and as the ability of software to create adaptive meshes and step sizes improves, modeling with a PDE becomes more and more efficient relative to its ODE counterpart.
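Schematically (our notation, not the dissertation's), the correspondence used in Chapters 4 and 5 is: an ODE dx/dt = a(x) transports a density of initial conditions by pure advection, and adding white noise of strength sqrt(2D) adds a diffusion term:

```latex
\frac{\partial p}{\partial t} + \nabla \cdot \bigl[a(x)\,p\bigr] = 0
\qquad\text{and}\qquad
\frac{\partial p}{\partial t} + \nabla \cdot \bigl[a(x)\,p\bigr] = D\,\nabla^{2} p .
```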
APA, Harvard, Vancouver, ISO, and other styles
41

Коломієць, Антон Ігорович. "Дослідження ядерного оцінювання щільності імовірності акустичних сигналів." Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2019. https://ela.kpi.ua/handle/123456789/28339.

Full text
Abstract:
The aim of this work is the study and analysis of kernel estimation of the probability density of acoustic signals. The thesis reviews basic notions from probability theory concerning random variables and their probabilistic characteristics. The probability density makes it possible to solve measurement problems for random processes, to classify signals, to study functional transformations, and so on. In the course of the work, kernel density estimates were computed for generated acoustic signals using the following distribution laws: the normal (Gaussian) distribution, Student's t distribution, and the Laplace distribution. Comparing the theoretical calculations with the experimental ones yields the following result: the experimental values approach the theoretical ones as the sample size increases.
APA, Harvard, Vancouver, ISO, and other styles
42

Oliveira, Natália Lombardi de. "Distribuições preditiva e implícita para ativos financeiros." Universidade Federal de São Carlos, 2017. https://repositorio.ufscar.br/handle/ufscar/9077.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
We present two different approaches to obtain a probability density function for the stock's future price: a predictive distribution, based on a Bayesian time series model, and the implied distribution, based on Black & Scholes option pricing formula. Considering the Black & Scholes model, we derive the necessary conditions to obtain the implied distribution of the stock price on the exercise date. Based on predictive densities, we compare the market implied model (Black & Scholes) with a historical based approach (Bayesian time series model). After obtaining the density functions, it is simple to evaluate probabilities of one being bigger than the other and to make a decision of selling/buying a stock. Also, as an example, we present how to use these distributions to build an option pricing formula.
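For reference, under the Black & Scholes assumptions the implied distribution of the price at the exercise date T is the risk-neutral lognormal law (standard notation: spot S_0, risk-free rate r, volatility σ; this is the textbook form, not the dissertation's own derivation):

```latex
f_{S_T}(s) = \frac{1}{s\,\sigma\sqrt{2\pi T}}
  \exp\!\left[ -\,\frac{\bigl(\ln(s/S_0) - (r - \sigma^{2}/2)\,T\bigr)^{2}}{2\,\sigma^{2} T} \right],
\qquad s > 0 .
```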
APA, Harvard, Vancouver, ISO, and other styles
43

Alagbe, Solomon Oluyemi. "Experimental and numerical investigation of high viscosity oil-based multiphase flows." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/10495.

Full text
Abstract:
Multiphase flows are of great interest to a large variety of industries because flows of two or more immiscible liquids are encountered in a diverse range of processes and equipment. However, the advent of high viscosity oil requires further investigation to support good design of transportation systems and forestall the production difficulties inherent to it. Experimental and numerical studies were conducted on water-sand, oil-water and oil-water-sand flows in a 1-in ID, 5 m long horizontal pipe. The densities of the CYL680 and CYL1000 oils employed are 917 and 916.2 kg/m³, while their viscosities are 1.830 and 3.149 Pa·s at 25 °C, respectively. The solid-phase concentration ranged from 2.15e-04 to 10% v/v, with a mean particle diameter of 150 µm and a material density of 2650 kg/m³. Experimentally, the observed flow patterns are Water Assist Annular (WA-ANN), Dispersed Oil in Water (DOW/OF), Oil Plug in Water (OPW/OF) with an oil film on the wall, and Water Plug in Oil (WPO). These configurations were identified through visualisation and through the trend and probability density function (PDF) of the pressure signals, along with their statistical moments. Injecting water to assist high viscosity oil transport reduced the pressure gradient by an order of magnitude. No significant differences were found between the pressure gradients of oil-water and oil-water-sand flows; however, an increase in sand concentration led to an increase in the pressure losses in oil-water-sand flow. Numerically, the WA-ANN, DOW/OF, OPW/OF and WPO flow patterns were successfully obtained by imposing a concentric inlet condition at the inlet of the horizontal pipe, coupled with a newly developed turbulent kinetic energy budget equation coded as a user-defined function hooked up to the turbulence models. These modifications aided satisfactory predictions.
APA, Harvard, Vancouver, ISO, and other styles
44

Moore, Natalie. "The Effect of Receiver Nonlinearity and Nonlinearity Induced Interference on the Performance of Amplitude Modulated Signals." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/84899.

Full text
Abstract:
All wireless receivers have some degree of nonlinearity that can negatively impact performance. Two major effects of this nonlinearity are power compression, which leads to amplitude and phase distortions in the received signal, and desensitization caused by a high-powered interfering signal in an adjacent channel. As the RF spectrum becomes more crowded, the interference caused by these adjacent signals will become a more significant problem for receiver design. Therefore, having bit and symbol error rate expressions that take the receiver nonlinearity into account allows the linearity requirements of a receiver to be determined. This thesis examines the modeling of the probability density functions of M-PAM and M-QAM signals through an AWGN channel, taking into account the impact of receiver nonlinearity. A change-of-variables technique is used to relate the pdf of these signals under a linear receiver to the pdf under a nonlinear receiver. Additionally, theoretical bit and symbol error rates are derived from the pdf expressions. Finally, this approach is extended by deriving pdf and error rate expressions for these signals when nearby blocking signals cause desensitization of the signal of interest. MATLAB simulation shows that the derived expressions for a nonlinear receiver have the same accuracy as the accepted expressions for linear receivers.
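The change-of-variables technique named here is the standard one: if the receiver's characteristic g is monotone on the region of interest and Y = g(X), then

```latex
f_Y(y) = f_X\bigl(g^{-1}(y)\bigr)\,\left|\frac{\mathrm{d}}{\mathrm{d}y}\,g^{-1}(y)\right| ,
```

from which bit and symbol error rates follow by integrating f_Y over the decision regions.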
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
45

GAJJELA, VENKATA SARATH, and SURYA DEEPTHI DUPATI. "Mobile Application Development with Image Applications Using Xamarin." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15838.

Full text
Abstract:
Image enhancement improves an image's appearance by increasing the dominance of some features or by decreasing the ambiguity between different regions of the image. Image enhancement techniques have been widely used in many applications of image processing where the subjective quality of images is important for human interpretation. In many cases images lack clarity or are affected by fog, low light and other daylight effects, so images captured under these scenarios should be enhanced and made clear enough to recognize the objects in them. The histogram-based image enhancement technique is mainly based on equalizing the histogram of the image and increasing the dynamic range corresponding to the image. The histogram equalization algorithm was implemented and tested using different images affected by low light, fog and colour contrast, and succeeded in producing enhanced images. The technique is implemented by interpreting the averaged histogram values as a probability density function. Initially we worked with MATLAB code for histogram equalization and then modified it to implement an Application Program Interface (API) using the Xamarin software. The mobile application developed using Xamarin works efficiently and has a shorter execution time than the equivalent application developed in Android Studio. Debugging of the application was successfully done in both the Android and iOS versions. The focus of this thesis is to develop a mobile application for image enhancement of low-light and foggy images using Xamarin.
APA, Harvard, Vancouver, ISO, and other styles
46

Oliveira, Natália Lombardi de. "Distribuição preditiva do preço de um ativo financeiro: abordagens via modelo de série de tempo Bayesiano e densidade implícita de Black & Scholes." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/104/104131/tde-13092017-083037/.

Full text
Abstract:
We present two different approaches to obtain a probability density function for the stock's future price: a predictive distribution, based on a Bayesian time series model, and the implied distribution, based on Black & Scholes option pricing formula. Considering the Black & Scholes model, we derive the necessary conditions to obtain the implied distribution of the stock price on the exercise date. Based on predictive densities, we compare the market implied model (Black & Scholes) with a historical based approach (Bayesian time series model). After obtaining the density functions, it is simple to evaluate probabilities of one being bigger than the other and to make a decision of selling/buying a stock. Also, as an example, we present how to use these distributions to build an option pricing formula.
APA, Harvard, Vancouver, ISO, and other styles
47

Navarro, Quiles Ana. "COMPUTATIONAL METHODS FOR RANDOM DIFFERENTIAL EQUATIONS: THEORY AND APPLICATIONS." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/98703.

Full text
Abstract:
Ever since the early contributions by Isaac Newton, Gottfried Wilhelm Leibniz, and Jacob and Johann Bernoulli in the XVII century, difference and differential equations have demonstrated their capability to successfully model complex problems of interest in Engineering, Physics, Chemistry, Epidemiology, Economics, etc. From a practical standpoint, however, applying difference or differential equations requires setting their inputs (coefficients, source term, initial and boundary conditions) using sampled data, which contain uncertainty stemming from measurement errors. In addition, some random external factors can affect the system under study. It is therefore more advisable to consider the input data as random variables or stochastic processes rather than as deterministic constants or functions, respectively. Under this consideration, random difference and differential equations appear. This thesis solves, from a probabilistic point of view, different types of random difference and differential equations, fundamentally by applying the Random Variable Transformation method. This technique is a useful tool for obtaining the probability density function of a random vector that results from mapping another random vector whose probability density function is known. In short, the goal of this dissertation is the computation of the first probability density function of the solution stochastic process in different problems based on random difference or differential equations. The interest in determining the first probability density function is justified because this deterministic function characterizes the one-dimensional probabilistic information (mean, variance, asymmetry, kurtosis, etc.) of the solution of the corresponding random difference or differential equation. It also allows one to determine the probability of a certain event of interest that involves the solution. In addition, in some cases, the theoretical study is completed by showing its application to modelling problems with real data, where the problem of estimating parametric statistical distributions of the inputs is addressed in the context of random difference and differential equations.
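Below is a minimal worked instance of the Random Variable Transformation method, with an assumed toy problem rather than one from the thesis: for the random decay equation x'(t) = -a x(t), x(0) = x0 with random rate a, the solution X(t) = x0 e^{-at} is a monotone map of a for each fixed t > 0, so its first probability density follows from f_a and the Jacobian of the inverse map.

```python
import numpy as np

def first_pdf_decay(x, t, x0, pdf_a):
    """First PDF of X(t) = x0*exp(-a*t) with random decay rate a, by the
    Random Variable Transformation method:
    a = -ln(x/x0)/t  and  f_X(x,t) = f_a(a(x)) * |da/dx| = f_a(a(x))/(x*t)."""
    a = -np.log(x / x0) / t                    # inverse of the solution map
    return pdf_a(a) / (x * t)                  # Jacobian factor of the inverse

# Illustrative: a ~ Uniform(1, 2), x0 = 1, evaluated at t = 0.5.
pdf_a = lambda a: ((a >= 1.0) & (a <= 2.0)).astype(float)
x = np.linspace(np.exp(-1.0), np.exp(-0.5), 200)   # support of X(0.5)
fx = first_pdf_decay(x, t=0.5, x0=1.0, pdf_a=pdf_a)
```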
Navarro Quiles, A. (2018). COMPUTATIONAL METHODS FOR RANDOM DIFFERENTIAL EQUATIONS: THEORY AND APPLICATIONS [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/98703
APA, Harvard, Vancouver, ISO, and other styles
48

Manomaiphiboon, Kasemsan. "Estimation of Emission Strength and Air Pollutant Concentrations by Lagrangian Particle Modeling." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5141.

Full text
Abstract:
A Lagrangian particle model was applied to estimating emission strength and air pollutant concentrations, specifically for the short-range dispersion of an air pollutant in the atmospheric boundary layer. The model performance was evaluated with experimental data. The model was then used as the platform for a parametric uncertainty analysis, in which the effects of uncertainties in five model parameters (Monin-Obukhov length, friction velocity, roughness height, mixing height, and the universal constant of the random component) on mean ground-level concentrations were examined under slightly and moderately stable conditions. The analysis was performed in a probabilistic framework using Monte Carlo simulations with Latin hypercube sampling and linear regression modeling. In addition, four studies related to Lagrangian particle modeling were included: an alternative technique for formulating joint probability density functions of velocity for atmospheric turbulence based on the Koehler-Symanowski technique; an analysis of local increments in a multidimensional single-particle Lagrangian particle model using the algebra of Ito integrals and the Wagner-Platen formula; an analogy between the diffusion limit of Lagrangian particle models and the classical theory of turbulent diffusion; and an evaluation of some proposed forms of the Lagrangian velocity autocorrelation of turbulence.
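The single-particle models referred to evolve a particle's velocity by a Langevin equation whose stationary statistics match the turbulence; below is a minimal homogeneous one-dimensional sketch with illustrative parameters (real boundary-layer applications make sigma_u and T_L height-dependent and add well-mixed drift corrections).

```python
import numpy as np

def lagrangian_dispersion(n_particles=10_000, n_steps=2_000, dt=0.05,
                          sigma_u=0.5, t_l=10.0, seed=3):
    """1-D single-particle Langevin model
    du = -(u/T_L) dt + sqrt(2 sigma_u^2 / T_L) dW,
    which keeps the velocity variance near sigma_u^2; particle positions
    at the end trace out the dispersing concentration profile."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma_u, n_particles)  # start in the stationary law
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_particles)
        u += -(u / t_l) * dt + np.sqrt(2 * sigma_u ** 2 / t_l) * dw
        x += u * dt
    return x                                   # histogram x for concentrations
```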
APA, Harvard, Vancouver, ISO, and other styles
49

Bates, Lakesha. "ANALYSIS OF TIME SYNCHRONIZATION ERRORS IN HIGH DATA RATE ULTRAWIDEBAN." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2582.

Full text
Abstract:
Emerging Ultra Wideband (UWB) Orthogonal Frequency Division Multiplexing (OFDM) systems hold the promise of delivering wireless data at high speeds, exceeding hundreds of megabits per second over typical distances of 10 meters or less. The purpose of this thesis is to estimate the timing accuracies required in such systems to achieve bit error rates (BER) on the order of 10^-12, and thereby to keep the number of irreducible bit errors caused by misaligned timing small in absolute terms, so that the error-correction stage is not overloaded at data rates of hundreds of megabits per second. Our research approach involves managing bit error rates by identifying the maximum tolerable timing synchronization errors. The research goal is thus to determine the timing accuracies required to avoid operating communication systems within the asymptotic region of BER flaring at low BERs in the resulting BER curves. We propose pushing physical-layer bit error rates below 10^-12 before applying forward error correction (FEC) codes. This way, the maximum reserve is maintained for the FEC hardware to correct burst as well as recurring bit errors caused by factors other than timing synchronization errors.
M.S.E.E.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
50

Rosch, Jan, Thijs Heus, Marc Salzmann, Johannes Mülmenstädt, Linda Schlemmer, and Johannes Quaas. "Analysis of diagnostic climate model cloud parameterisations using large-eddy simulations." Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-202452.

Full text
Abstract:
Current climate models often predict fractional cloud cover on the basis of a diagnostic probability density function (PDF) describing the subgrid-scale variability of the total water specific humidity, qt, favouring schemes with limited complexity. Standard shapes are uniform or triangular PDFs whose width is assumed to scale with the grid-box mean qt or the grid-box mean saturation specific humidity, qs. In this study, the qt variability is analysed from large-eddy simulations for two stratocumulus cases, two shallow cumulus cases, and one deep convective case. We find that in most cases triangles are a better approximation to the simulated PDFs than uniform distributions. In two of the 24 slices examined, the actual distributions were so strongly skewed that the simple symmetric shapes could not capture the PDF at all. The distribution width for either shape scales acceptably well with both the mean value of qt and qs, the former being a slightly better choice. The qt variance is underestimated by the fitted PDFs but overestimated by the existing parameterisations. While the cloud fraction is in general relatively well diagnosed from fitted or parameterised uniform or triangular PDFs, the diagnosis fails to capture cases with small partial cloudiness, and in 10-30% of the cases it misdiagnoses clouds in clear skies or vice versa. The results suggest choosing a parameterisation with a triangular shape, where the distribution width scales with the grid-box mean qt using a scaling factor of 0.076. This, however, is subject to the caveat that the reference simulations examined here were partly for rather small domains and driven by idealised boundary conditions.
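The diagnostic step evaluated in the paper has a closed form: with a symmetric triangular PDF of qt centred on the grid-box mean and with half-width delta, the cloud fraction is the PDF mass above saturation qs. A sketch follows, reading the suggested 0.076 factor as the half-width scaling (whether it multiplies the half or the full width is our assumption).

```python
def cloud_fraction_triangular(qt_mean, qs, scale=0.076):
    """Cloud fraction from a symmetric triangular subgrid PDF of total
    water q_t with half-width delta = scale * qt_mean: the fraction of
    the PDF mass lying above the saturation specific humidity q_s."""
    delta = scale * qt_mean
    if qs <= qt_mean - delta:
        return 1.0                             # entire grid box saturated
    if qs >= qt_mean + delta:
        return 0.0                             # entirely clear
    if qs >= qt_mean:                          # saturation in the upper tail
        return (qt_mean + delta - qs) ** 2 / (2 * delta ** 2)
    return 1.0 - (qs - (qt_mean - delta)) ** 2 / (2 * delta ** 2)
```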
APA, Harvard, Vancouver, ISO, and other styles