Dissertations / Theses on the topic 'Empirical distribution function test'


Consult the top 48 dissertations / theses for your research on the topic 'Empirical distribution function test.'


1

Steele, Michael C. "The Power of Categorical Goodness-Of-Fit Statistics." Griffith University. Australian School of Environmental Studies, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20031006.143823.

Abstract:
The relative power of goodness-of-fit test statistics has long been debated in the literature. Chi-Square type test statistics to determine 'fit' for categorical data are still dominant in the goodness-of-fit arena. Empirical Distribution Function type goodness-of-fit test statistics are known to be relatively more powerful than Chi-Square type test statistics for restricted types of null and alternative distributions. In many practical applications researchers who use a standard Chi-Square type goodness-of-fit test statistic ignore the rank of ordinal classes. This thesis reviews literature in the goodness-of-fit field, with major emphasis on categorical goodness-of-fit tests. The continued use of an asymptotic distribution to approximate the exact distribution of categorical goodness-of-fit test statistics is discouraged. It is unlikely that an asymptotic distribution will produce a more accurate estimation of the exact distribution of a goodness-of-fit test statistic than a Monte Carlo approximation with a large number of simulations. Due to their relatively higher powers for restricted types of null and alternative distributions, several authors recommend the use of Empirical Distribution Function test statistics over nominal goodness-of-fit test statistics such as Pearson's Chi-Square. In-depth power studies confirm the views of other authors that categorical Empirical Distribution Function type test statistics do not have higher power for some common null and alternative distributions. Because of this, it is not sensible to make a conclusive recommendation to always use an Empirical Distribution Function type test statistic instead of a nominal goodness-of-fit test statistic. Traditionally the recommendation to determine 'fit' for multivariate categorical data is to treat categories as nominal, an approach which precludes any gain in power which may accrue from a ranking, should one or more variables be ordinal. The presence of multiple criteria through multivariate data may result in partially ordered categories, some of which have equal ranking. This thesis proposes a modification to the currently available Kolmogorov-Smirnov test statistics for ordinal and nominal categorical data to account for situations of partially ordered categories. The new test statistic, called the Combined Kolmogorov-Smirnov, is relatively more powerful than Pearson's Chi-Square and the nominal Kolmogorov-Smirnov test statistic for some null and alternative distributions. A recommendation is made to use the new test statistic with higher power in situations where some benefit can be achieved by incorporating an Empirical Distribution Function approach, but the data lack a complete natural ordering of categories. The new and established categorical goodness-of-fit test statistics are demonstrated in the analysis of categorical data with brief applications as diverse as familiarity of defence programs, the number of recruits produced by the Merlin bird, a demographic problem, and DNA profiling of genotypes. The results from these applications confirm the recommendations associated with specific goodness-of-fit test statistics throughout this thesis.
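
As a rough illustration of the Monte Carlo approach the abstract recommends over asymptotic chi-square tables, here is a minimal sketch; this is not the author's code, and the category counts, null probabilities and simulation size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson_chi2(counts, probs):
    """Pearson's chi-square statistic for observed category counts."""
    expected = counts.sum() * probs
    return ((counts - expected) ** 2 / expected).sum()

# Hypothetical data: observed counts over 4 ordinal categories,
# and the null distribution being tested.
observed = np.array([18, 30, 35, 17])
null_probs = np.array([0.25, 0.25, 0.25, 0.25])
n = observed.sum()

stat_obs = pearson_chi2(observed, null_probs)

# Monte Carlo approximation of the exact null distribution:
# simulate multinomial samples under H0 and recompute the statistic.
n_sim = 100_000
sims = rng.multinomial(n, null_probs, size=n_sim)
stats = ((sims - n * null_probs) ** 2 / (n * null_probs)).sum(axis=1)

p_value = (stats >= stat_obs).mean()
print(f"chi2 = {stat_obs:.3f}, Monte Carlo p-value = {p_value:.4f}")
```

For small samples, this simulated p-value tracks the exact distribution of the statistic more closely than the asymptotic chi-square approximation.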
2

Steele, Michael C. "The Power of Categorical Goodness-Of-Fit Statistics." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/366717.

Thesis (PhD Doctorate), Doctor of Philosophy (PhD), Australian School of Environmental Studies. The abstract is identical to that of the preceding record.
3

Haluzová, Dana. "Uplatnění statistických metod pro zkoumání vlastností nejprodávanějších přípravků na ochranu rostlin a vztahů mezi nimi" [Application of statistical methods to examine the properties of the best-selling plant protection products and the relationships between them]. Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2018. http://www.nusl.cz/ntk/nusl-377387.

Abstract:
This diploma thesis focuses on the statistical examination of the properties of plant protection products at Agro-Artikel, s.r.o. Using the empirical distribution function, it examines the sales price and the shelf life of the products, and it tests hypotheses about the properties of the products and the dependencies between them. The thesis also explores the results of a questionnaire survey and offers recommendations for the introduction of new products.
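
A minimal sketch of the empirical distribution function of the kind the thesis applies to sales prices and shelf life; the price values below are hypothetical placeholders:

```python
import numpy as np
import matplotlib.pyplot as plt

prices = np.array([129.0, 85.5, 240.0, 99.9, 310.0, 145.0, 85.5, 199.0])  # hypothetical sales prices

x = np.sort(prices)
y = np.arange(1, len(x) + 1) / len(x)   # F_n(x) = (# observations <= x) / n

plt.step(x, y, where="post")
plt.xlabel("sales price")
plt.ylabel("empirical distribution function")
plt.show()
```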
4

Tsuyuguchi, Aline Barbosa. "Testes de bondade de ajuste para a distribuição Birnbaum-Saunders" [Goodness-of-fit tests for the Birnbaum-Saunders distribution]. Universidade Federal de Campina Grande, 2012. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1333.

Abstract:
In this work we study goodness-of-fit tests for the Birnbaum-Saunders distribution. We consider classical tests based on the empirical distribution function (Anderson-Darling, Cramér-von Mises and Kolmogorov-Smirnov) and tests based on the empirical characteristic function. We limit the study to the case in which the vector of parameters is unknown and, therefore, must be estimated. We present simulation studies to verify the performance of the test statistics under study. In addition, we propose Monte Carlo simulation studies of goodness-of-fit tests for the Birnbaum-Saunders distribution using type-II censored data.
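
Because the parameter vector is estimated, the standard Kolmogorov-Smirnov tables are not valid, and the null distribution of the statistic is typically approximated by simulation. A hedged sketch of that idea using SciPy's fatiguelife family (the Birnbaum-Saunders distribution); the sample size, parameter values and bootstrap size are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical failure-time data.
data = stats.fatiguelife.rvs(c=0.8, scale=2.0, size=100, random_state=rng)

def ks_with_estimation(x):
    c, loc, scale = stats.fatiguelife.fit(x, floc=0)   # estimate parameters
    d = stats.kstest(x, stats.fatiguelife(c, loc, scale).cdf).statistic
    return d, (c, loc, scale)

d_obs, (c_hat, loc_hat, scale_hat) = ks_with_estimation(data)

# Parametric bootstrap: resample from the fitted model, re-estimate,
# and recompute the statistic to approximate its null distribution.
n_boot = 500
d_boot = np.empty(n_boot)
for b in range(n_boot):
    xb = stats.fatiguelife.rvs(c_hat, loc=loc_hat, scale=scale_hat,
                               size=len(data), random_state=rng)
    d_boot[b], _ = ks_with_estimation(xb)

print(f"KS = {d_obs:.4f}, bootstrap p-value = {(d_boot >= d_obs).mean():.3f}")
```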
5

Huang, Yen-Chin. "Empirical distribution function statistics, speed of convergence, and p-variation." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12017.

6

Yu, Jun. "Empirical characteristic function in time series estimation and a test statistic in financial modelling." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ31169.pdf.

7

Kwon, Yeil. "Nonparametric Empirical Bayes Simultaneous Estimation for Multiple Variances." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/495491.

Abstract:
Shrinkage estimation has proven to be very useful when dealing with a large number of mean parameters. In this dissertation, we consider the problem of simultaneous estimation of multiple variances and construct a shrinkage-type, non-parametric estimator. We take the non-parametric empirical Bayes approach by starting with an arbitrary prior on the variances. Under an invariant loss function, the resultant Bayes estimator relies on the marginal cumulative distribution function of the sample variances. Replacing the marginal cdf by the empirical distribution function, we obtain a Non-parametric Empirical Bayes estimator for multiple Variances (NEBV). The proposed estimator converges to the corresponding Bayes version uniformly over a large set. Consequently, the NEBV works well in a post-selection setting. We then apply the NEBV to construct confidence intervals for mean parameters in a post-selection setting. It is shown that the intervals based on the NEBV are the shortest among all intervals which guarantee a desired coverage probability. Through real data analysis, we further show that the NEBV-based intervals lead to the smallest number of discordances, a desirable property when we are faced with the current "replication crisis".
Ph.D., Statistics, Temple University.
8

Chen, Yuanyuan. "Continuity and compositions of operators with kernels in ultra-test function and ultra-distribution spaces." Doctoral thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-58076.

Abstract:
In this thesis we consider continuity and positivity properties of pseudo-differential operators in Gelfand-Shilov and Pilipović spaces and their distribution spaces. We also investigate composition properties of pseudo-differential operators with symbols in quasi-Banach modulation spaces. We prove that positive elements with respect to the twisted convolution, possessing Gevrey regularity of a certain order at the origin, belong to the Gelfand-Shilov space of the same order. We apply this result to positive semi-definite pseudo-differential operators, and show that the strongest Gevrey irregularity of kernels of positive semi-definite operators appears at the diagonal. We also prove that any linear operator with kernel in a Pilipović or Gelfand-Shilov space can be factorized into two operators in the same class. We give links to numerical approximations for such compositions and apply these composition rules to deduce estimates of singular values and establish Schatten-von Neumann properties for such operators. Furthermore, we derive sufficient and necessary conditions for continuity of the Weyl product with symbols in quasi-Banach modulation spaces.
9

Ngunkeng, Grace. "Statistical Analysis of Skew Normal Distribution and its Applications." Bowling Green State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1370958073.

10

Chopping, M. J. "Linear semi-empirical kernel-driven bidirectional reflectance distribution function models in monitoring semi-arid grasslands from space." Thesis, University of Nottingham, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262949.

11

Yang, Guangyuan. "The Energy Goodness-of-fit Test for Univariate Stable Distributions." Bowling Green State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1339476355.

12

Onaran, Özlem, and Engelbert Stockhammer. "Do profits affect investment and employment? An empirical test based on the Bhaduri-Marglin model." Inst. für Volkswirtschaftstheorie und -politik, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1534/1/document.pdf.

Abstract:
In this study, a Kaleckian-Post-Keynesian macroeconomic model, which is an extended version of the Bhaduri and Marglin (1990) model, serves as the starting point. The merit of a Kaleckian model for our purposes is that it highlights the dual function of wages as a component of aggregate demand as well as a cost item, as opposed to mainstream economics, which perceives wages merely as a cost item. Depending on the relative magnitude of these two effects, Kaleckian models distinguish between profit-led and wage-led regimes, where the latter is defined as a low rate of accumulation being caused by a high profit share. Are actual economies wage-led or profit-led? Current orthodoxy implicitly assumes that they are profit-led, and thus supports the neoliberal policy agenda. The purpose of the paper is to carry this discussion into the empirical terrain, and to test whether accumulation and employment are profit-led in two groups of countries. We do so by means of a structural vector autoregression (VAR) model. The model is estimated for the USA, the UK and France to represent the major developed countries, and for Turkey and Korea to represent developing countries. The latter are chosen since they represent two different export-oriented growth experiences. The results of the adjustment experiences of both countries are in striking contrast to orthodox theory; however, they also present counter-examples to each other in terms of their ways of integrating into the world economy. (author's abstract)
Series: Working Papers Series "Growth and Employment in Europe: Sustainability and Competitiveness"
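
A minimal sketch of the kind of VAR estimation described above, using statsmodels; the file name and the column names (profit_share, accumulation, employment) are hypothetical stand-ins, not the paper's actual dataset:

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical dataset: annual series for one country.
df = pd.read_csv("macro_series.csv",
                 usecols=["profit_share", "accumulation", "employment"])

model = VAR(df)
results = model.fit(maxlags=2, ic="aic")   # lag order chosen by AIC
print(results.summary())

# Impulse responses trace how a profit-share shock feeds through
# to accumulation and employment.
irf = results.irf(10)
irf.plot(impulse="profit_share")
```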
13

Gottfridsson, Anneli. "Likelihood ratio tests of separable or double separable covariance structure, and the empirical null distribution." Thesis, Linköpings universitet, Matematiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69738.

Abstract:
The focus of this thesis is on the calculation of an empirical null distribution for likelihood ratio tests testing either separable or double separable covariance matrix structures versus an unstructured covariance matrix. These calculations have been performed for various dimensions and sample sizes, and are compared with the asymptotic χ²-distribution that is commonly used as an approximate null distribution. Tests of separable structures are of particular interest in cases where data are collected such that more than one relation between the components of the observation is suspected. For instance, if there are both a spatial and a temporal aspect, a hypothesis of two covariance matrices, one for each aspect, is reasonable.
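
The covariance structures tested in the thesis are more elaborate, but the idea of an empirical null distribution for a likelihood ratio test can be sketched with a simpler example: testing a diagonal covariance matrix in a multivariate normal model, where -2 log Λ = -n log|R| with R the sample correlation matrix and the asymptotic reference is χ² with p(p-1)/2 degrees of freedom. All sizes below are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p = 25, 4                  # small sample: asymptotics may be poor
df_chi2 = p * (p - 1) // 2    # degrees of freedom of the chi2 approximation

def lrt_stat(x):
    """-2 log likelihood ratio for H0: diagonal covariance (normal data)."""
    r = np.corrcoef(x, rowvar=False)
    return -x.shape[0] * np.log(np.linalg.det(r))

# Empirical null: simulate data under H0 many times.
null_stats = np.array([lrt_stat(rng.standard_normal((n, p)))
                       for _ in range(20_000)])

# Compare the empirical critical value with the asymptotic chi2 one.
print("empirical 95% quantile  :", np.quantile(null_stats, 0.95))
print("asymptotic chi2 quantile:", stats.chi2.ppf(0.95, df_chi2))
```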
14

Stewart, Michael Ian. "Asymptotic methods for tests of homogeneity for finite mixture models." University of Sydney. Mathematics and Statistics, 2002. http://hdl.handle.net/2123/855.

Abstract:
We present limit theory for tests of homogeneity for finite mixture models. More specifically, we derive the asymptotic distribution of certain random quantities used for testing that a mixture of two distributions is in fact just a single distribution. Our methods apply to cases where the mixture component distributions come from one of a wide class of one-parameter exponential families, both continuous and discrete. We consider two random quantities, one related to testing simple hypotheses, the other composite hypotheses. For simple hypotheses we consider the maximum of the standardised score process, which is itself a test statistic. For composite hypotheses we consider the maximum of the efficient score process, which is itself not a statistic (it depends on the unknown true distribution) but is asymptotically equivalent to certain common test statistics. We show that we can approximate both quantities with the maximum of a certain Gaussian process depending on the sample size and the true distribution of the observations, which when suitably normalised has a limiting distribution of the Gumbel extreme value type. Although the limit theory is not practically useful for computing approximate p-values, we use Monte Carlo simulations to show that another method suggested by the theory, involving a Studentised version of the maximum-score statistic and simulation of a Gaussian process to compute approximate p-values, is remarkably accurate and uses a fraction of the computing resources that a straight Monte Carlo approximation would.
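
A hedged sketch of the simulation idea mentioned at the end of the abstract: approximate the distribution of the maximum of a Gaussian process by Monte Carlo. The covariance kernel below is an arbitrary stand-in, not the score-process covariance derived in the thesis, and the observed value is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Evaluate the Gaussian process on a grid; the covariance matrix here is a
# placeholder (squared-exponential kernel).
grid = np.linspace(0.0, 1.0, 200)
cov = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / 0.1 ** 2)
chol = np.linalg.cholesky(cov + 1e-10 * np.eye(len(grid)))

# Monte Carlo distribution of the maximum of the process.
n_sim = 20_000
paths = chol @ rng.standard_normal((len(grid), n_sim))
max_dist = paths.max(axis=0)

observed_max = 2.7   # hypothetical value of the maximum-score statistic
p_value = (max_dist >= observed_max).mean()
print(f"approximate p-value: {p_value:.4f}")
```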
15

Schwartz, Amy K. "Divergent natural selection and the parallel evolution of mating preferences : a model and empirical test for the origins of reproductive isolation." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82432.

Abstract:
Ecological speciation involves the evolution of reproductive isolation (RI) as a by-product of adaptation to different selective environments. Parallel patterns of non-random mating by environment type provide strong evidence that ecological speciation has occurred. The processes involved in the origins of RI are more difficult to detect, however. One mechanism involves the correlated evolution of mating preferences and sexually selected traits. I developed a conceptual model for detecting RI under various scenarios of mate preference evolution. The model predicts that RI will not evolve if preferences are evolutionarily constrained relative to the preferred traits, but is detectable as long as preferences evolve in parallel. I then applied this framework to an empirical system with populations of guppies (Poecilia reticulata) adapted to low- and high-predation environments. I measured female mate preferences for male colour and size, traits which are divergent between the two environment types. Preference functions for colour also diverged in the predicted direction. The parallel pattern of preference divergence suggests that divergent natural selection from predators may be contributing to RI between guppy populations.
16

Eger, Karl-Heinz, and Evgeni Borisovich Tsoy. "Robustness of Sequential Probability Ratio Tests in Case of Nuisance Parameters." Universitätsbibliothek Chemnitz, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-201000949.

Abstract:
This paper deals with the computation of the OC and ASN functions of sequential probability ratio tests in the multi-parameter case. Generalizing the method of conjugated parameter pairs, Wald-like approximations are presented for the OC and ASN functions. These characteristics can be used to describe robustness properties of a sequential test in the presence of nuisance parameters. As examples, tests for the mean and the variance of a normal distribution are considered.
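
A minimal Wald SPRT for the mean of a normal distribution, the first of the paper's examples. The error probabilities and hypothesized means below are invented, and the boundaries use Wald's classical approximations:

```python
import numpy as np

rng = np.random.default_rng(4)

alpha, beta = 0.05, 0.10            # error probabilities of the two kinds
mu0, mu1, sigma = 0.0, 0.5, 1.0     # H0: mu = mu0 vs H1: mu = mu1

log_A = np.log((1 - beta) / alpha)  # upper (accept H1) boundary
log_B = np.log(beta / (1 - alpha))  # lower (accept H0) boundary

llr = 0.0
n = 0
while log_B < llr < log_A:
    x = rng.normal(mu1, sigma)      # the true mean is mu1 in this simulation
    # log likelihood ratio increment for one normal observation
    llr += (x - mu0) ** 2 / (2 * sigma**2) - (x - mu1) ** 2 / (2 * sigma**2)
    n += 1

print("decision:", "accept H1" if llr >= log_A else "accept H0", "after", n, "observations")
```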
17

Loukrati, Hicham. "Tail Empirical Processes: Limit Theorems and Bootstrap Techniques, with Applications to Risk Measures." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37594.

Abstract:
In recent years, important changes in the insurance and finance industries have drawn increasing attention to the need for a standardized framework for risk measurement. Recently, there has been growing interest among insurance experts in the conditional tail expectation (CTE), because it possesses properties considered desirable and applicable in a variety of situations. In particular, it satisfies the requirements of a "coherent" risk measure in the sense of Artzner [2]. This thesis contributes to statistical inference by developing tools, based on the convergence of functional integrals, for the estimation of the CTE, which are of considerable interest to actuarial science. First, we develop a tool for estimating the conditional mean E[X|X > x]; we then construct estimators of the CTE, develop the asymptotic theory needed for these estimators, and use that theory to build confidence intervals. For the first time, the nonparametric bootstrap approach is explored in this thesis through new results applicable to the value at risk (VaR) and the CTE. Simulation studies illustrate the performance of the bootstrap technique.
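
A minimal sketch of the nonparametric bootstrap for the CTE; the loss distribution and confidence level are invented, and the asymptotic theory the thesis develops is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical loss sample.
losses = rng.lognormal(mean=1.0, sigma=0.8, size=1000)
p = 0.95

def var_cte(x, p):
    var = np.quantile(x, p)
    cte = x[x > var].mean()   # E[X | X > VaR_p], the conditional tail expectation
    return var, cte

var_hat, cte_hat = var_cte(losses, p)

# Nonparametric bootstrap: resample with replacement, recompute the CTE.
n_boot = 2000
cte_boot = np.array([var_cte(rng.choice(losses, size=len(losses), replace=True), p)[1]
                     for _ in range(n_boot)])

lo, hi = np.quantile(cte_boot, [0.025, 0.975])
print(f"CTE_0.95 = {cte_hat:.3f}, 95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
```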
18

Mžourek, Zdeněk. "Statistická charakteristická funkce a její využití pro zpracování signálu" [The statistical characteristic function and its use in signal processing]. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220664.

Abstract:
The aim of this thesis is to provide basic information about the characteristic function used in statistics and to compare its properties with those of the Fourier transform used in engineering applications. The first part of the thesis is theoretical: it discusses the basic concepts, their properties and their mutual relations. The second part is devoted to possible applications, for example normality testing of data or the use of the characteristic function in independent component analysis. The first chapter gives an introduction to probability theory, to unify terminology, and the concepts introduced there are used to demonstrate interesting properties of the characteristic function. The second chapter describes the Fourier transform, the definition of the characteristic function and their comparison. In the applied part of the text, the empirical characteristic function is analysed as an estimate of the characteristic function of the examined data, and a simple normality test is described as an example application. The last part deals with more advanced applications of the characteristic function in methods such as independent component analysis.
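
A minimal sketch of the empirical characteristic function and the simple normality check described above; the sample and the grid of arguments are arbitrary:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
x = rng.normal(size=500)                         # sample whose normality we inspect

t = np.linspace(-5, 5, 201)
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)   # phi_n(t) = mean of exp(i t X_j)
cf_normal = np.exp(-t**2 / 2)                    # CF of N(0,1) for comparison

plt.plot(t, ecf.real, label="Re ECF")
plt.plot(t, cf_normal, "--", label="CF of N(0,1)")
plt.legend(); plt.xlabel("t"); plt.show()
```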
19

He, Bin. "Application of the Empirical Likelihood Method in Proportional Hazards Model." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4384.

Abstract:
In survival analysis, the proportional hazards model is the most commonly used, and the Cox model is the most popular. These models were developed to facilitate statistical analyses frequently encountered in medical research or reliability studies. In analyzing real data sets, checking the validity of the model assumptions is a key component. However, the presence of complicated types of censoring, such as double censoring and partly interval-censoring, in survival data makes model assessment difficult, and the existing tests for goodness-of-fit do not extend directly to these complicated types of censored data. In this work, we use the empirical likelihood (Owen, 1988) approach to construct goodness-of-fit tests and provide estimates for the Cox model with various types of censored data. Specifically, the problems under consideration are the two-sample Cox model and the stratified Cox model with right-censored data, doubly censored data and partly interval-censored data. Related computational issues are discussed, and some simulation results are presented. The procedures developed in this work are applied to several real data sets with some discussion.
Ph.D., Department of Mathematics.
20

Trönnberg, Filip. "Empirical evaluation of a Markovian model in a limit order market." Thesis, Uppsala universitet, Matematiska institutionen, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-176726.

Abstract:
A stochastic model for the dynamics of a limit order book is evaluated and tested on empirical data. Arrivals of limit, market and cancellation orders are described in terms of a Markovian queuing system with exponentially distributed occurrences. In this model, several key quantities can be calculated analytically, such as the distribution of times between price moves, the price volatility and the probability of an upward price move, all conditional on the state of the order book. We show that the exponential distribution fits the occurrences of order book events poorly, and further show that little resemblance exists between the analytical formulas of this model and the empirical data. The log-normal and Weibull distributions are suggested as replacements, as they appear to fit the empirical data better.
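
A hedged sketch of the distributional comparison described above, fitting exponential, Weibull and log-normal models to simulated inter-event times and comparing KS distances. With estimated parameters the KS p-values would not be valid without further work, so only the distances are compared informally; all data and parameters are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical inter-arrival times of order book events.
times = stats.weibull_min.rvs(0.6, scale=0.01, size=5000, random_state=rng)

for name, dist, fit_kwargs in [("exponential", stats.expon, {"floc": 0}),
                               ("Weibull", stats.weibull_min, {"floc": 0}),
                               ("log-normal", stats.lognorm, {"floc": 0})]:
    params = dist.fit(times, **fit_kwargs)
    ks = stats.kstest(times, dist(*params).cdf).statistic
    print(f"{name:12s} KS distance = {ks:.4f}")
```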
21

Pesee, Chatchai. "Stochastic modelling of financial processes with memory and semi-heavy tails." Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16057/2/Chatchai%20Pesee%20Thesis.pdf.

Abstract:
This PhD thesis aims to study financial processes which have semi-heavy-tailed marginal distributions and may exhibit memory. The traditional Black-Scholes model is expanded to incorporate memory via an integral operator, resulting in a class of market models which still preserve the completeness and arbitrage-free conditions needed for replication of contingent claims. This approach is used to estimate the implied volatility of the resulting model. The first part of the thesis investigates the semi-heavy-tailed behaviour of financial processes. We treat these processes as continuous-time random walks characterised by a transition probability density governed by a fractional Riesz-Bessel equation. This equation extends the Feller fractional heat equation which generates α-stable processes. These latter processes have heavy tails, while those processes generated by the fractional Riesz-Bessel equation have semi-heavy tails, which are more suitable to model financial data. We propose a quasi-likelihood method to estimate the parameters of the fractional Riesz-Bessel equation based on the empirical characteristic function. The second part considers a dynamic model of complete financial markets in which the prices of European calls and puts are given by the Black-Scholes formula. The model has memory and can distinguish between historical volatility and implied volatility. A new method is then provided to estimate the implied volatility from the model. The third part of the thesis considers the problem of classification of financial markets using high-frequency data. The classification is based on the measure representation of high-frequency data, which is then modelled as a recurrent iterated function system. The new methodology developed is applied to some stock prices, stock indices, foreign exchange rates and other financial time series of some major markets. In particular, the models and techniques are used to analyse the SET index, the SET50 index and the MAI index of the Stock Exchange of Thailand.
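
The thesis's quasi-likelihood method targets the fractional Riesz-Bessel family; as a stand-in, here is a sketch of parameter estimation by matching the empirical characteristic function to a model CF, using a normal CF for simplicity since the Riesz-Bessel CF is beyond a short example:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
x = rng.normal(loc=1.0, scale=2.0, size=2000)    # stand-in data

t = np.linspace(0.1, 2.0, 20)                    # grid of CF arguments
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)   # empirical characteristic function

def objective(theta):
    mu, sigma = theta
    model_cf = np.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)  # normal CF
    return np.sum(np.abs(ecf - model_cf) ** 2)

res = minimize(objective, x0=[0.0, 1.0], method="Nelder-Mead")
print("estimated (mu, sigma):", res.x)
```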
22

Pesee, Chatchai. "Stochastic Modelling of Financial Processes with Memory and Semi-Heavy Tails." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16057/.

The abstract is identical to that of the preceding record.
23

Bjellerup, Mårten. "Essays on Consumption : - Aggregation, Asymmetry and Asset Distributions." Doctoral thesis, Växjö universitet, Ekonomihögskolan, EHV, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-406.

Abstract:
The dissertation consists of four self-contained essays on consumption. Essays 1 and 2 consider different measures of aggregate consumption, and Essays 3 and 4 consider how the distributions of income and wealth affect consumption from a macro and micro perspective, respectively. Essay 1 considers the empirical practice of seemingly interchangeable use of two measures of consumption; total consumption expenditure and consumption expenditure on nondurable goods and services. Using data from Sweden and the US in an error correction model, it is shown that consumption functions based on the two measures exhibit significant differences in several aspects of econometric modelling. Essay 2, coauthored with Thomas Holgersson, considers derivation of a univariate and a multivariate version of a test for asymmetry, based on the third central moment. The logic behind the test is that the dependent variable should correspond to the specification of the econometric model; symmetric with linear models and asymmetric with non-linear models. The main result in the empirical application of the test is that orthodox theory seems to be supported for consumption of both nondurable and durable consumption. The consumption of durables shows little deviation from symmetry in the four-country sample, while the consumption of nondurables is shown to be asymmetric in two out of four cases, the UK and the US. Essay 3 departs from the observation that introducing income uncertainty makes the consumption function concave, implying that the distributions of wealth and income are omitted variables in aggregate Euler equations. This implication is tested through estimation of the distributions over time and augmentation of consumption functions, using Swedish data for 1963-2000. The results show that only the dispersion of wealth is significant, the explanation of which is found in the marked changes of the group of households with negative wealth; a group that according to a concave consumption function has the highest marginal propensity to consume. Essay 4 attempts to empirically specify the nature of the alleged concavity of the consumption function. Using grouped household level Swedish data for 1999-2001, it is shown that the marginal propensity to consume out of current resources, i.e. current income and net wealth, is strictly decreasing in current resources and net wealth, but approximately constant in income. Also, an empirical reciprocal to the stylized theoretical consumption function is estimated, and shown to bear a close resemblance to the theoretical version.
24

Erguven, Sait. "Path Extraction of Low SNR Dim Targets from Grayscale 2-D Image Sequences." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607723/index.pdf.

Abstract:
In this thesis, an algorithm for visually detecting and tracking very low SNR targets, i.e. dim targets, is developed. Image processing of a single frame in time cannot be used for this aim due to the closeness of the intensity spectra of the background and the target. Therefore, change detection of super pixels, groups of pixels that have sufficient statistics for likelihood ratio testing, is proposed. Super pixels that are determined to be transition points are marked in a binary difference matrix and grouped by the 4-Connected Labeling method. Each label is processed to find its movement vector in the next frame by the Label Destruction and Centroids Mapping techniques. Candidate centroids are passed to the Distribution Density Function Maximization and Maximum Histogram Size Filtering methods to find the target-related motion vectors. Noise-related mappings are eliminated by Range and Maneuver Filtering. The geometrical centroids obtained in each frame are used as the observed target path, which is fed into the Optimum Decoding Based Smoothing Algorithm to smooth and estimate the real target path. The Optimum Decoding Based Smoothing Algorithm is based on quantization of the possible states, i.e. the observed target path centroids, and the Viterbi Algorithm. According to the system and observation models, metric values of all possible target paths are computed using observation and transition probabilities. The path which results in the maximum metric value at the last frame is decided to be the estimated target path.
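
The smoothing step rests on the Viterbi algorithm. A generic sketch of Viterbi decoding follows; the states and the transition, emission and initial probabilities below are toy values, not the thesis's quantized path states:

```python
import numpy as np

def viterbi(log_trans, log_emit, log_init):
    """Most probable state path given log transition, emission and initial
    probabilities; a generic version of the metric maximisation described above.

    log_trans: (S, S) log P(state_j at t+1 | state_i at t)
    log_emit : (T, S) log P(observation at t | state_j)
    log_init : (S,)   log P(state_j at t=0)
    """
    T, S = log_emit.shape
    metric = log_init + log_emit[0]        # best log score ending in each state
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = metric[:, None] + log_trans # (S, S): previous state x next state
        back[t] = cand.argmax(axis=0)
        metric = cand.max(axis=0) + log_emit[t]
    # Backtrack from the state with the maximum metric at the last frame.
    path = [int(metric.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example with 2 states and 4 observations (all values hypothetical).
lt = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
le = np.log(np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9], [0.2, 0.8]]))
print(viterbi(lt, le, np.log([0.5, 0.5])))
```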
25

Povalač, Karel. "Sledování spektra a optimalizace systémů s více nosnými pro kognitivní rádio" [Spectrum sensing and optimization of multicarrier systems for cognitive radio]. Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-233577.

Abstract:
The doctoral thesis deals with spectrum sensing and the subsequent use of the frequency spectrum by a multicarrier communication system whose parameters are set on the basis of an optimization technique. The adaptation settings can be made with respect to several requirements as well as the state and occupancy of the individual communication channels. The system characterized above is often referred to as a cognitive radio. Equipment operating on cognitive radio principles will be widely used in the near future because the frequency spectrum is limited. One of the main contributions of the work is the novel use of the Kolmogorov-Smirnov statistical test as an alternative detection of primary user signal presence. A new fitness function for Particle Swarm Optimization (PSO) is introduced, and the Error Vector Magnitude (EVM) parameter is used in the adaptive greedy algorithm and in the PSO optimization. The dissertation also incorporates information about the reliability of the frequency spectrum sensing into the modified greedy algorithm. The proposed methods are verified by simulations, and the frequency-domain energy detection is implemented on a development board with an FPGA.
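
A toy sketch of Kolmogorov-Smirnov-based detection of a primary user signal, assuming Gaussian noise with known parameters; the signal model, amplitude and sample sizes are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Received samples: noise only vs. noise plus a (strong, for illustration)
# primary-user signal.
noise_only = rng.normal(0.0, 1.0, size=512)
with_signal = rng.normal(0.0, 1.0, size=512) + 1.0 * np.sin(np.arange(512) * 0.3)

# One-sample KS test of the received samples against the known noise CDF;
# rejection suggests a primary user is present.
for label, x in [("noise only", noise_only), ("with signal", with_signal)]:
    res = stats.kstest(x, stats.norm(loc=0.0, scale=1.0).cdf)
    print(f"{label:12s} D = {res.statistic:.3f}, p = {res.pvalue:.4f}")
```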
26

Madani, Soffana. "Contributions à l'estimation à noyau de fonctionnelles de la fonction de répartition avec applications en sciences économiques et de gestion" [Contributions to the kernel estimation of functionals of the distribution function with applications in economics and management]. Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1183/document.

Abstract:
The income distribution of a population, the distribution of failure times of a system and the evolution of the surplus in with-profit policies, all studied in economics and management, are related to continuous functions belonging to the class of functionals of the distribution function. This thesis covers the kernel estimation of some functionals of the distribution function, with applications in economics and management. In the first chapter, we propose local polynomial estimators, in the i.i.d. case, of two functionals of the distribution function, denoted LF and TF, which are useful for producing smooth estimators of the Lorenz curve and the scaled total time on test transform. The estimation method is described in Abdous, Berlinet and Hengartner (2003), and we prove the good asymptotic behaviour of the local polynomial estimators. Until now, Gastwirth (1972) and Barlow and Campo (1975) had defined continuous piecewise estimators of the Lorenz curve and the scaled total time on test transform, which do not respect the continuity of the original curves. Illustrations on simulated and real data are given. The second chapter provides smooth estimators, in the i.i.d. case, of the derivatives of the two functionals of the distribution function presented in the first chapter. Apart from the estimation of the first derivative of the function TF, which is handled with a smooth estimate of the distribution function, the estimation method is the local polynomial approximation of functionals of the distribution function detailed in Berlinet and Thomas-Agnan (2004). Various types of convergence and asymptotic normality are obtained, including for the probability density function and its derivatives. Simulations are presented and discussed. The starting point of the third chapter is the Parzen-Rosenblatt estimator (Rosenblatt (1956), Parzen (1964)) of the probability density function. We first improve the bias of this estimator and of its derivatives by using higher-order kernels (Berlinet (1993)). We then establish the corresponding conditions for the asymptotic normality of these estimators. Finally, we build a method for correcting the boundary effects of the estimators of the probability density function and its derivatives, thanks to higher-order derivatives. The final chapter is devoted to the hazard rate function which, unlike the two functionals of the distribution function explored in the first chapter, is not a ratio of two linear functionals of the distribution function. In the i.i.d. case, kernel estimators of the hazard rate and its derivatives are produced from the kernel estimators of the probability density function and its derivatives, and the asymptotic normality of the former follows from that of the latter. We then work in the multiplicative intensity model, a more general framework that includes censored and dependent data. We carry out the procedure of Ramlau-Hansen (1983) to obtain good asymptotic properties of the estimators of the hazard rate and its derivatives, and we apply the local polynomial approximation in this context. The surplus accumulation rate in with-profit policies can then be estimated nonparametrically, since it depends on the transition rates (hazard rates from one state to another) of a Markov chain (Ramlau-Hansen (1991), Norberg (1999)).
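
A minimal sketch of the Parzen-Rosenblatt kernel density estimator that the third chapter starts from, with a Gaussian kernel and a rule-of-thumb bandwidth; the higher-order kernels and boundary corrections developed in the thesis are not shown:

```python
import numpy as np

rng = np.random.default_rng(10)
x = rng.normal(size=300)                 # sample
grid = np.linspace(-4, 4, 200)
h = 1.06 * x.std() * len(x) ** (-1 / 5)  # Silverman-type bandwidth

# Parzen-Rosenblatt estimator with a Gaussian kernel:
# f_n(u) = (1 / (n h)) * sum_i K((u - X_i) / h)
u = (grid[:, None] - x[None, :]) / h
f_hat = np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))
```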
27

Santos, Mariana Faria dos. "Modelling claim counts of homogeneous risk groups using copulas." Master's thesis, Instituto Superior de Economia e Gestão, 2010. http://hdl.handle.net/10400.5/2932.

Abstract:
Master's in Actuarial Science.
Over the years, modelling the dependence between random variables has been a challenge in many areas, such as insurance and finance. Recently, with the new capital requirement regime for the European insurance business, this subject has been gaining importance since, according to Solvency II, the insurer's risks should be modelled separately and then aggregated following some dependence structure. The challenge of this framework is to achieve an accurate way of joining dependent risks in order not to over- or underestimate the capital requirements. The aim of this thesis is to give a practical application of a multivariate model based on copulas, as well as the most important theoretical concepts related to the theory of copulas. Although the definition of a copula dates from 1959, only recently have authors such as Clemen and Reilly (1999), Daul et al. (2003), Dias (2004), Frees and Valdez (1998) and Embrechts et al. (2003) applied the copula framework to finance and insurance data. In the meantime, estimation procedures and goodness-of-fit tests have been developed in the literature. In this thesis we introduce the theory of copulas together with a study of copula models for insurance data. First, the definition and general properties of copulas, as well as some families of copulas discussed in the literature, are introduced. We present methods to estimate the parameters and a goodness-of-fit test for copulas. Afterwards, we present a summary of Solvency II and a multivariate model to fit claim counts for three homogeneous risk groups that are the core of the automobile business: third-party liability property damage, third-party liability bodily injury and own material damage. The methodology is based on copula models, and the procedure is carried out in two steps. First, we model the marginal distributions of each risk group and test the goodness-of-fit of each distribution. We propose two models for the marginal distributions: a discrete model and an approximating continuous model. In the first model we test Poisson and negative binomial distributions, whereas in the second we test gamma and normal approximations. In spite of being the natural approach for claim counts, the discrete model has some limitations, since copulas have serious restrictions when the marginals are discrete; a continuous model is therefore proposed as an alternative that avoids these limitations. Finally, we fit different copula families, estimating their parameters through procedures presented along the thesis, and we evaluate the goodness-of-fit using a statistical test based on the empirical copula concept.
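
A minimal sketch of the pseudo-observations and the empirical copula on which such goodness-of-fit statistics are based; the two dependent count series below are simulated stand-ins, not the thesis's three risk groups:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(11)

# Hypothetical paired claim counts for two risk groups, made dependent
# through a shared Poisson component z.
n = 500
z = rng.poisson(5, size=n)
counts = np.column_stack([z + rng.poisson(3, size=n),
                          z + rng.poisson(4, size=n)])

# Pseudo-observations: ranks scaled to (0, 1).
u = rankdata(counts, axis=0) / (n + 1)

def empirical_copula(u, point):
    """C_n(point) = proportion of pseudo-observations <= point componentwise."""
    return np.mean(np.all(u <= point, axis=1))

print(empirical_copula(u, np.array([0.5, 0.5])))
```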
28

Tejkal, Martin. "Vybrané transformace náhodných veličin užívané v klasické lineární regresi" [Selected transformations of random variables used in classical linear regression]. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318798.

Abstract:
Classical linear regression and the hypothesis tests derived from it are based on the assumptions of normality and homogeneity of variance of the dependent variables. When the normality assumptions are violated, transformations of the dependent variables are usually applied. The first part of this thesis deals with variance-stabilizing transformations. Considerable attention is given to random variables with Poisson and negative binomial distributions, for which generalized variance-stabilizing transformations containing additional parameters in the argument are studied, and optimal values of these parameters are determined. The goal of the second part is to compare the transformations presented in the first part with other commonly used transformations. The comparison is carried out within the framework of analysis of variance, testing the hypothesis of equality of the means of p independent random samples by means of the F test. In this part, the properties of the F test are first studied under the assumptions of equal and unequal variances across the samples. Subsequently, the power functions of the F test are compared for p samples from the Poisson distribution transformed by the square-root, logarithmic and Yeo-Johnson transformations, and from the negative binomial distribution transformed by the inverse hyperbolic sine, logarithmic and Yeo-Johnson transformations.
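
A minimal sketch of the two families of transformations studied in the first part; the Anscombe-type constants 3/8 and 3/4 are the classical choices of the extra parameters in the argument, and the simulated distribution parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(12)

lam = 4.0
x = rng.poisson(lam, size=10_000)

# Square-root (Anscombe-type) transformation for Poisson data.
y = 2.0 * np.sqrt(x + 3.0 / 8.0)

# Inverse hyperbolic sine (Anscombe-type) transformation for
# negative binomial data with r successes.
r, p_nb = 5, 0.4
z = rng.negative_binomial(r, p_nb, size=10_000)
w = np.arcsinh(np.sqrt((z + 3.0 / 8.0) / (r - 3.0 / 4.0)))

# After transformation the variance is approximately constant in the
# underlying parameter (close to 1 in the Poisson case).
print("Poisson: var before/after:", x.var(), y.var())
print("NegBin : var before/after:", z.var(), w.var())
```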
29

Berglund, Filip. "Asymptotics of beta-Hermite Ensembles." Thesis, Linköpings universitet, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171096.

Abstract:
In this thesis we present results on some eigenvalue statistics of the beta-Hermite ensembles, first in the classical cases corresponding to beta = 1, 2, 4, that is, the Gaussian orthogonal ensemble (consisting of real symmetric matrices), the Gaussian unitary ensemble (consisting of complex Hermitian matrices) and the Gaussian symplectic ensemble (consisting of quaternionic self-dual matrices), respectively. We also look at the less explored general beta-Hermite ensembles (consisting of real symmetric tridiagonal matrices). Specifically, we look at the empirical distribution function and two different scalings of the largest eigenvalue. The results we present for these statistics are the convergence of the empirical distribution function to the semicircle law, the convergence of the scaled largest eigenvalue to the Tracy-Widom distribution, and, with a different scaling, the convergence of the largest eigenvalue to 1. We also use simulations to illustrate these results. For the Gaussian unitary ensemble, we present an expression for its level density. To aid in understanding the Gaussian symplectic ensemble, we present properties of the eigenvalues of quaternionic matrices. Finally, we prove a theorem about the symmetry of the order statistic of the eigenvalues of the beta-Hermite ensembles.
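
A minimal sketch of the first result, for beta = 1 via a dense GOE matrix rather than the tridiagonal beta-ensemble model: the empirical distribution function of the scaled eigenvalues is compared with the semicircle CDF on [-2, 2]:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(13)

# GOE (beta = 1) sample: symmetrize a Gaussian matrix.
n = 1000
g = rng.standard_normal((n, n))
h = (g + g.T) / np.sqrt(2.0)
eig = np.linalg.eigvalsh(h) / np.sqrt(n)   # scaling under which the ESD converges

# Empirical distribution function vs. the semicircle CDF:
# F(t) = 1/2 + t*sqrt(4 - t^2)/(4*pi) + arcsin(t/2)/pi on [-2, 2].
x = np.sort(eig)
F_emp = np.arange(1, n + 1) / n
t = np.clip(x, -2, 2)
F_semi = 0.5 + t * np.sqrt(4 - t**2) / (4 * np.pi) + np.arcsin(t / 2) / np.pi

plt.step(x, F_emp, where="post", label="empirical")
plt.plot(x, F_semi, "--", label="semicircle CDF")
plt.legend(); plt.show()
```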
30

Alencastro Junior, José Vianney Mendonça de. "Conformidade à lei de Newcomb-Benford de grandezas astronômicas segundo a medida de Kolmogorov-Smirnov" [Conformity of astronomical quantities to the Newcomb-Benford law according to the Kolmogorov-Smirnov measure]. Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18354.

Abstract:
The Newcomb-Benford law, also known as the significant-digit law, was first described by Simon Newcomb and was given a statistical basis only 57 years later by the physicist Frank Benford. The law governs naturally random quantities and has been used in many fields as a way of selecting and validating various types of data. Our first goal was to propose a substitute for the chi-square method, currently the method most commonly used in the literature to verify conformity to the Newcomb-Benford law. In data sets with a large number of samples, the chi-square method tends to suffer from a statistical problem known as excess power, which produces false-negative results. We therefore propose replacing it with the Kolmogorov-Smirnov method, based on the empirical distribution function, for the analysis of global conformity: this method is more robust, does not suffer from excess power, and is more faithful to the formal definition of Benford's law, since it works with mantissas rather than isolated digits. We also investigate a confidence interval for the Kolmogorov-Smirnov statistic based on a bootstrapped chi-square that does not suffer from excess power. In two recently published papers, exoplanet data were analyzed and some quantities were declared to conform to Benford's law; the authors suggest that knowledge of this conformity could be applied to lists of candidate objects and thus help identify new exoplanets. A further goal of this work was therefore to explore several astronomical databases and catalogues in search of quantities whose conformity to the significant-digit law is not yet known, in order to propose practical applications for the astronomical sciences.
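The mantissa-based view of conformity described above is easy to sketch. In the example below (a hedged illustration, not the dissertation's code; the wide lognormal sample is just a Benford-like stand-in), a Kolmogorov-Smirnov test is applied to the fractional part of log10 of the data, which is uniform on [0, 1) under Benford's law.

```python
# A minimal sketch of mantissa-based Benford testing.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.lognormal(mean=0.0, sigma=4.0, size=2000)   # Benford-like stand-in

mantissa = np.log10(x) % 1.0
print(stats.kstest(mantissa, "uniform"))            # D statistic and p-value

# First-digit check against the classical Benford frequencies.
digits = (x / 10 ** np.floor(np.log10(x))).astype(int)
benford = np.log10(1 + 1 / np.arange(1, 10))
observed = np.bincount(digits, minlength=10)[1:] / len(x)
print(np.round(observed - benford, 3))
```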
APA, Harvard, Vancouver, ISO, and other styles
31

ARAÚJO, Raphaela Lima Belchior de. "Família composta Poisson-Truncada: propriedades e aplicações." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/16315.

Full text
Abstract:
This work analyzes properties of the Compound N family of probability distributions and proposes the Compound Poisson-Truncated sub-family as a means of composing probability distributions. Its properties were studied and a new distribution was investigated: the Compound Poisson-Truncated Normal distribution. This distribution has three parameters and is flexible enough to model multimodal data. We show that its density is an infinite mixture of normal densities in which the weights are given by the truncated-Poisson probability mass function. Among the properties explored are the characteristic function and expressions for the moments. Three estimation methods were analyzed for the parameters of the Compound Poisson-Truncated Normal distribution: the method of moments, the empirical characteristic function (ECF) method and maximum likelihood (ML) via the EM algorithm. Simulations comparing these three methods were performed and, finally, to illustrate the potential of the proposed distribution, numerical results from modeling real data are presented.
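The infinite-mixture form of the density can be written down directly. The sketch below assumes the compound sum is S = X_1 + ... + X_N with i.i.d. Normal(mu, sigma^2) terms and a zero-truncated Poisson(lam) count; parameter names, values, and the series truncation point n_max are our choices, not the thesis's.

```python
# A minimal sketch of the truncated-Poisson-weighted normal mixture density.
import numpy as np
from scipy import stats

def ctp_normal_pdf(x, lam, mu, sigma, n_max=100):
    n = np.arange(1, n_max + 1)
    # Zero-truncated Poisson weights: P(N = n | N >= 1).
    w = stats.poisson.pmf(n, lam) / (1.0 - np.exp(-lam))
    # Component n is Normal(n*mu, n*sigma^2), the law of a sum of n terms.
    comps = stats.norm.pdf(x[:, None], loc=n * mu, scale=sigma * np.sqrt(n))
    return comps @ w

xs = np.linspace(-5, 25, 7)
print(ctp_normal_pdf(xs, lam=3.0, mu=2.0, sigma=1.0))  # multimodal for small sigma
```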
APA, Harvard, Vancouver, ISO, and other styles
32

Saleem, Rashid. "Towards an end-to-end multiband OFDM system analysis." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/towards-an-endtoend-multiband-ofdm-system-analysis(e711f32f-1ac6-4b48-8f4e-58309c0482d3).html.

Full text
Abstract:
Ultra Wideband (UWB) communication has recently drawn considerable attention from academia and industry, mainly owing to the ultra-high speeds and cognitive features it can offer. The employability of UWB in numerous areas, including but not limited to Wireless Personal Area Networks (WPANs), Body Area Networks (BANs), radar and medical imaging, has opened several avenues of research and development. However, there is still disagreement on the standardization of UWB. The two contesting radios for UWB are Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) and Direct Sequence Ultra Wideband (DS-UWB). As nearly all of the reported research on UWB has addressed a very narrow or specific area of the communication system, this thesis looks at the end-to-end performance of an MB-OFDM approach. The overall aim of the project has been to first treat three aspects of an MB-OFDM system individually, namely interference, antenna and propagation aspects, and then to present a holistic, end-to-end system analysis. In the first phase of the project the author investigated the performance of an MB-OFDM system under the effect of his proposed generic, technology-non-specific interference. Avoiding the conventional Gaussian approximation, the author employed an advanced stochastic method. Two approaches are presented in this phase. The first is indirect: it uses the Moment Generating Functions (MGFs) of the Signal-to-Interference-plus-Noise Ratio (SINR) and the Probability Density Function (pdf) of the SINR to calculate the average probabilities of error of an MB-OFDM system under the proposed generic interference, assuming a specific two-dimensional Poisson spatial placement of interferers around the victim MB-OFDM receiver. The second is a direct approach that extends the first by employing a wider class of generic interference. In the second phase of the work the author designed, simulated, prototyped and tested novel compact planar monopole antennas for UWB applications. These designs employ low-loss Rogers duroid substrates and are fed by coplanar waveguides. The antennas have a proposed transition region between the feed line and the main radiating element, formed by a special step-generating function set called the "Inverse Parabolic Step Sequence" (IPSS). The IPSS-based antennas were simulated, prototyped and then tested in an anechoic chamber. An empirical approach aimed at further miniaturizing IPSS-based antennas was also derived in this phase and applied to the design of a further miniaturized antenna; moreover, an electrical miniaturization limit was established for IPSS-based antennas. The third phase of the project investigated the effect of indoor furnishing on the distribution of the elevation Angle-of-Arrival (AOA) of rays at the receiver. Constant distributions for the elevation AOA had previously been reported; this phase proposes that the AOA distribution is not fixed, and the author establishes that indoor elevation AOA distributions depend on discrete levels of furnishing. A joint time-angle-furnishing channel model is presented, and two vectorial (any-direction) AOA distributions for indoor UWB environments are proposed. Finally, the last phase of the thesis presents an end-to-end MB-OFDM system analysis. The interference analysis of the first phase is revisited to recalculate the probability of bit error with realistic, measured path-loss exponents reported in the existing literature; here, Gaussian Quadrature Rule based approximations are computed for the average probability of bit error. Last, an end-to-end, comprehensive system equation and impulse response is presented that covers more aspects of an indoor UWB system than reported in the existing literature.
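As a generic illustration of quadrature-based average error probabilities (not the thesis's interference model), the sketch below computes the average BER of BPSK over Rayleigh fading by Gauss-Laguerre quadrature and checks it against the known closed form; the mean SNR is an assumed value.

```python
# A generic sketch: average BER by Gauss-Laguerre quadrature.
import numpy as np
from scipy.special import erfc, roots_laguerre

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2))

gbar = 10.0                              # mean SNR (assumed)
nodes, weights = roots_laguerre(40)
# E[Q(sqrt(2g))] with g ~ Exp(gbar) becomes, after t = g/gbar,
# int_0^inf Q(sqrt(2*gbar*t)) e^{-t} dt, a Gauss-Laguerre integral.
pe_quad = np.sum(weights * qfunc(np.sqrt(2 * gbar * nodes)))
pe_exact = 0.5 * (1 - np.sqrt(gbar / (1 + gbar)))   # known Rayleigh/BPSK result
print(pe_quad, pe_exact)
```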
APA, Harvard, Vancouver, ISO, and other styles
33

McGarry, Gregory John. "Model-based mammographic image analysis." Thesis, Queensland University of Technology, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
34

Chou, Cheng-Chieh, and 周正杰. "Empirical-Distribution-Function Tests for the Beta-Binomial Model." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/63290669580577465856.

Full text
Abstract:
Master's thesis. Tamkang University, Master's Program, Department of Mathematics, academic year 93 (2004-05).
Empirical-distribution-function (EDF) goodness-of-fit tests are considered for the beta-binomial model, and testing procedures based on EDF statistics are given. A Monte Carlo study is conducted to investigate the accuracy and power of the tests against various alternative distributions. Our method is found to produce considerably greater power than that of Garren et al. (2001) in most cases. The tests are applied to data sets from environmental toxicity studies.
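A parametric-bootstrap version of such an EDF test is easy to sketch; the thesis's exact procedure may differ, and the crude moment-matching estimator below is our simplification.

```python
# A hedged sketch: KS-type EDF test for the beta-binomial via parametric bootstrap.
import numpy as np
from scipy import stats

def ks_distance(x, n, a, b):
    k = np.arange(n + 1)
    F_model = stats.betabinom.cdf(k, n, a, b)
    F_emp = np.searchsorted(np.sort(x), k, side="right") / len(x)
    return np.max(np.abs(F_emp - F_model))

def fit_moments(x, n):
    m, v = x.mean(), x.var(ddof=1)
    p = m / n
    # Method of moments: Var = n*p*(1-p)*(1 + (n-1)*rho), rho = 1/(a+b+1).
    rho = max(1e-6, min(0.99, (v / (n * p * (1 - p)) - 1) / (n - 1)))
    s = (1 - rho) / rho
    return max(p * s, 1e-3), max((1 - p) * s, 1e-3)

rng = np.random.default_rng(3)
n = 10
x = stats.betabinom.rvs(n, 2.0, 5.0, size=200, random_state=rng)
a, b = fit_moments(x, n)
d_obs = ks_distance(x, n, a, b)

d_boot = []                               # bootstrap null distribution of D
for _ in range(500):
    xb = stats.betabinom.rvs(n, a, b, size=len(x), random_state=rng)
    d_boot.append(ks_distance(xb, n, *fit_moments(xb, n)))
print("D =", d_obs, " bootstrap p =", np.mean(np.array(d_boot) >= d_obs))
```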
APA, Harvard, Vancouver, ISO, and other styles
35

Freitas, Luísa Maria Ribeiro de. "Testes de ajustamento baseados na função característica ponderada." Master's thesis, 2016. http://hdl.handle.net/10316/48129.

Full text
Abstract:
Master's dissertation in Mathematics presented to the Faculty of Sciences and Technology of the University of Coimbra.
In 2014, Meintanis, Swanepoel and Allison proposed a goodness-of-fit test based on a new function they introduced, called the probability weighted characteristic function because it generalizes the usual characteristic function. The study of this test is the main goal of this dissertation. We start by establishing the main properties of the new function and then examine in detail the goodness-of-fit test that depends on it. The starting point is the simple goodness-of-fit test for a fixed distribution, followed by the composite goodness-of-fit test for a family of distributions. For both, the corresponding test statistics, limiting null distributions and consistency results are presented. Finally, we show the results of a simulation study carried out to analyze the power of the composite goodness-of-fit test, taking a test recommended in the literature as the reference.
APA, Harvard, Vancouver, ISO, and other styles
36

Shao, Su-chun, and 邵淑君. "Distribution Systems and Asymmetric Information:An Empirical Test on the Individual Health Insurance." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/86699194427951835465.

Full text
Abstract:
Master's thesis. Feng Chia University, Graduate Institute of Risk Management and Insurance, academic year 96 (2007-08).
As the insurance market becomes increasingly multi-channel, different distribution systems market through different access patterns, giving rise to different types of agency problems. While much related research addresses marketing strategies and development trends of distribution channels, this study empirically analyzes, from several angles, how contract quality and losses differ across distribution systems. Five dimensions are discussed: (1) commission and examination systems, (2) product complexity, (3) insurance professionalism, (4) marketing contracting mode and (5) after-sales service. Using sample data from a life insurance company, with the insured's age, gender, insured amount and family policy as independent variables, a Tobit model is used to analyze the relationship between distribution channel (direct writer, bancassurance, telemarketing) and contract losses. The study has two empirical findings. First, in terms of loss frequency, the bancassurance and telemarketing channels are significantly negatively correlated relative to direct writers. Second, in terms of loss ratio, bancassurance and telemarketing are likewise significantly negatively correlated relative to direct writers. The main reason is that, for the sake of sales volume and commissions, the gathering of important information and the maintenance of contract quality are neglected; in other words, the negative effects of the agency problem are most conspicuous for direct writers. Each distribution system has its own channel attributes. While expanding their business, insurers should support product differentiation with correct underwriting mechanisms to reduce asymmetric information and agency costs, and should also attend to cost efficiency, profit efficiency and the maintenance of contract quality.
APA, Harvard, Vancouver, ISO, and other styles
37

Hsu, Pi-chun, and 徐碧君. "A Limiting Distribution Function Approximation to K-S Statistics for Normality Test." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/78424314561879835833.

Full text
Abstract:
Master's thesis. Providence University, Graduate Institute of Applied Mathematics, academic year 98 (2009-10).
The main purpose of this study is to explore the limiting distribution of the Kolmogorov-Smirnov statistic in the normality test. Based on the Gumbel extreme value theorem, an Empirical Limiting Distribution (ELD) is developed by the Monte Carlo method and compared with the Dallal and Wilkinson (DW, 1986) approximation. The two curves are found to be close to each other, but the ELD curve does not produce unreasonable probability values (p > 1) as the D-W curve does. When applied to the normality test, the type I error probability is estimated to be close to the significance level. For most alternative distributions, the power of the test reaches 1 when n > 100. However, if the alternative distribution is close to normal, such as Beta(2,2), it is not easily detected: at the 0.05 significance level, even with n = 300, both methods attain a power of only about 0.6. Finally, the two methods are applied to fitting normal and lognormal distributions to raindrop size data.
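The Monte Carlo side of this construction can be sketched as follows; the ELD refinement via extreme-value theory is not reproduced, and the sample size and the Beta(2,2) alternative below are our illustrative choices for the plain Lilliefors-type simulation.

```python
# A minimal sketch: simulated null distribution of the KS statistic
# with estimated mean and variance (the Lilliefors setting).
import numpy as np
from scipy import stats

def ks_stat_estimated(x):
    z = np.sort((x - x.mean()) / x.std(ddof=1))
    F = stats.norm.cdf(z)
    n = len(x)
    return max(np.max(np.arange(1, n + 1) / n - F),
               np.max(F - np.arange(0, n) / n))

rng = np.random.default_rng(11)
n, reps = 50, 20000
null = np.sort([ks_stat_estimated(rng.standard_normal(n)) for _ in range(reps)])

x = rng.beta(2, 2, size=n) * 6 - 3        # symmetric, near-normal alternative
d = ks_stat_estimated(x)
print("D =", d, " Monte Carlo p =", np.mean(null >= d))
```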
APA, Harvard, Vancouver, ISO, and other styles
38

Wu, Yu-Tien, and 吳昱瑱. "Evaluation of Semi-empirical Bidirectional Reflectance Distribution Function Models for Terrestrial Laser Scanning Data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/58899631706766623159.

Full text
Abstract:
Master's thesis. National Cheng Kung University, Department of Geomatics, academic year 104 (2015-16).
Terrestrial Laser Scanning (TLS) intensity data can be regarded as the relative reflectance of the surface material. TLS intensity is a function of the angle of incidence of the laser beam, the range between the laser and the surface, and the surface type. The Bidirectional Reflectance Distribution Function (BRDF) is used to describe the relation between the angle of incidence and TLS intensity. Five semi-empirical BRDF models from remote sensing, including the Rahman-Pinty-Verstraete (RPV) model and four kernel-driven models derived from Ross (1981), were evaluated using twelve flat sample materials. The goodness of fit of the BRDF models to the TLS intensity is evaluated by visual examination of the fitted trends and the corresponding coefficients of determination. The results show that the Rossthin-Roujean model and the RPV model are the most suitable for TLS data.
APA, Harvard, Vancouver, ISO, and other styles
39

Su, Lo-Shin, and 蘇樂生. "An Empirical Test of Hedging Model of Futures Markets - In the Condition of Different Basis Distribution." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/36367920550202306413.

Full text
Abstract:
Master's thesis. Ming Chuan University, Graduate Institute of Finance, academic year 85 (1996-97).
The purpose of this study is to find time-varying optimal hedge ratios that improve hedging performance and reduce transaction costs. The functions of a futures market are risk transference, price discovery and speculation, of which risk transference is the most important, and the hedge ratio is its critical factor. An optimal hedge ratio is usually defined as the proportion of the cash position that should be covered with an opposite position in the futures market. In effect, hedging replaces price risk with basis risk (the spot price minus the futures price), so the distribution of the basis has a material effect on the estimation of hedge ratios. This study attempts to bring the basis distribution close to the normal distribution and thereby obtain time-varying optimal hedge ratios. Traditional hedge ratios are determined under different assumptions about the basis distribution. In the first model, the basis change is assumed constant, and a linear regression yields a single hedge ratio for the whole hedging period; this model ignores time-varying information in the market. In the second model, the basis change is assumed to follow a random walk, and a generalized autoregressive conditional heteroscedasticity (GARCH) model yields time-varying hedge ratios. An empirical comparison of the two approaches by Myers (1991) found the hedging performance of the GARCH model to be only marginally better than that of the linear regression model; moreover, because GARCH adjusts the hedge ratio too sensitively, the ratio changes violently and induces heavy transaction costs. We argue that when the basis distribution can be brought close to normal, a time-varying optimal hedge ratio can be obtained that is superior to both the traditional linear regression and the GARCH hedge models: it addresses the regression model's neglect of market information as well as the heavy transaction costs induced by GARCH, and it is worth applying to improve hedging performance and reduce transaction costs in futures markets.
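For concreteness, the two baseline hedge-ratio estimators discussed above can be sketched as follows; the data are simulated, and the rolling window is a crude stand-in for GARCH-based conditional covariances, which would need a dedicated package such as `arch`.

```python
# A hedged sketch: constant OLS hedge ratio vs. a rolling time-varying ratio.
import numpy as np

rng = np.random.default_rng(5)
T = 500
df = rng.standard_normal(T)                    # futures price changes (simulated)
ds = 0.9 * df + 0.3 * rng.standard_normal(T)   # correlated spot price changes

# Constant (regression) hedge ratio: h = Cov(ds, df) / Var(df).
h_ols = np.cov(ds, df)[0, 1] / np.var(df, ddof=1)
print("constant hedge ratio:", h_ols)

# Time-varying ratio from a rolling window.
w = 60
h_t = np.array([np.cov(ds[t - w:t], df[t - w:t])[0, 1] / np.var(df[t - w:t], ddof=1)
                for t in range(w, T)])
print("rolling ratio: mean", h_t.mean(), "std", h_t.std())
```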
APA, Harvard, Vancouver, ISO, and other styles
40

Wu, Yu-Hsin, and 吳裕新. "The Relationship between Local Economic Growth and Income Distribution in China-An Empirical Test of Kuznets Hypothesis." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/30946106689987302876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Lam, Wai Yan Wendy. "An empirical examination of the impact of item parameters on IRT information functions in mixed format tests." 2012. https://scholarworks.umass.edu/dissertations/AAI3498358.

Full text
Abstract:
IRT, also referred to as "modern test theory", offers many advantages over CTT-based methods in test development. Specifically, an IRT information function makes it possible to build a test with the desired precision of measurement at any point of a defined proficiency scale, provided a sufficient number of test items are available. This feature is extremely useful when the information is used for decision making, for instance, deciding whether an examinee attains a certain mastery level. Computerized adaptive testing (CAT) is one of many examples using IRT information functions in test construction. The purposes of this study were as follows: (1) to examine the consequences of improving test quality through the addition of more discriminating items with different item formats; (2) to examine the effect of a test whose difficulty does not align with the ability level of the intended population; (3) to investigate changes in decision consistency and decision accuracy; and (4) to understand changes in expected information when test quality is either improved or degraded, using both empirical and simulated data. The main findings were as follows: (1) increasing the discriminating power of any type of item generally increased the level of information, though it sometimes had adverse effects at the extreme ends of the ability continuum; (2) it was important to have more items targeted at the population of interest; otherwise, no matter how good the quality of the items, they were of little value in test development when not targeted to the distribution of candidate ability or at the cutscores; (3) decision consistency (DC), the Kappa statistic and decision accuracy (DA) increased with better quality items; (4) DC and Kappa were negatively affected when the difficulty of the test did not match the ability of the intended population, though the effect was less severe when the test was easier than needed; (5) tests with more better-quality items lowered the false positive (FP) and false negative (FN) rates at the cutscores; (6) when test difficulty did not match the ability of the target examinees, both FP and FN rates generally increased; (7) polytomous items tended to yield more information than dichotomously scored items, regardless of the discrimination parameter and difficulty of the item; and (8) the more score categories an item had, the more information it could provide. These findings should help testing agencies and practitioners better understand the effect of item parameters on item and test information functions. This understanding is crucial for improving item bank quality and, ultimately, for building tests that provide more accurate proficiency classifications. At the same time, item writers should bear in mind that the item information function is merely a statistical tool for building a good test; other criteria should also be considered, for example content balancing and content validity.
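For the dichotomous 2PL case, the information machinery referred to above reduces to a few lines; the item parameters below are illustrative assumptions, not values from the dissertation.

```python
# A small sketch: 2PL item and test information functions.
import numpy as np

def p_2pl(theta, a, b):
    # Probability of a correct response under the 2PL model.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_info(theta, a, b):
    # Item information: I(theta) = a^2 * P * (1 - P).
    p = p_2pl(theta, a, b)
    return a**2 * p * (1 - p)

theta = np.linspace(-4, 4, 9)
items = [(1.8, -0.5), (1.0, 0.0), (2.2, 1.0)]    # (a, b) pairs, assumed
test_info = sum(item_info(theta, a, b) for a, b in items)

print(np.round(test_info, 2))
print(np.round(1 / np.sqrt(test_info), 2))       # conditional SEM = 1/sqrt(I)
```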
APA, Harvard, Vancouver, ISO, and other styles
42

Koné, Fangahagnian. "Test d'adéquation à la loi de Poisson bivariée au moyen de la fonction caractéristique." Thèse, 2016. http://hdl.handle.net/1866/18776.

Full text
Abstract:
Our aim in this thesis is to apply the goodness-of-fit test based on the empirical characteristic function proposed by Jiménez-Gamero et al. (2009) to the bivariate Poisson distribution. We first evaluate the test's behaviour for the univariate Poisson distribution and find that the estimated type I error probabilities are close to the nominal values. Next, we extend it to the bivariate case and compare its power with the dispersion index test, Crockett's quick test and the two families of tests proposed by Novoa-Muñoz and Jiménez-Gamero (2014) for the bivariate Poisson. Simulation results show that the probability of type I error is close to the claimed level but that the test is generally less powerful than the other tests. We also find that the dispersion index test should be two-sided, whereas it rejects only for large values of the test statistic. Finally, the p-values of all these tests are computed on a real soccer data set. With a p-value of 0.009, our test rejects the hypothesis that the data come from a bivariate Poisson distribution, while the tests proposed by Novoa-Muñoz and Jiménez-Gamero (2014) lead to a different conclusion.
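The two-sidedness point about the dispersion index test is easy to demonstrate in the univariate case; a minimal sketch with simulated data follows.

```python
# A minimal sketch: two-sided dispersion index test for the Poisson.
# Under H0, D = (n-1) * s^2 / xbar is approximately chi-square(n-1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.poisson(4.0, size=100)

n = len(x)
d = (n - 1) * x.var(ddof=1) / x.mean()
cdf = stats.chi2.cdf(d, df=n - 1)
p_two_sided = 2 * min(cdf, 1 - cdf)      # rejects for both small and large D
print("D =", d, " two-sided p =", p_two_sided)
```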
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Yu-Lun, and 劉宇倫. "The Interaction among Taiwan, American, China, South Korea, and Hong Kong Stock Market: An Empirical Analysis on Cointegration Approach Rank Test and Impulse Response Function." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/36765127780391741802.

Full text
Abstract:
Master's thesis. National Kaohsiung First University of Science and Technology, Graduate Institute of Finance, academic year 99 (2010-11).
This paper uses the nonlinear cointegration rank test of Breitung (2001) to analyze the cointegration relations and long-run equilibrium among the Taiwan, US, China, South Korea and Hong Kong stock markets during the subprime mortgage crisis, from January 4, 2007 through December 31, 2009. We also use impulse response functions to detect changes in the co-movement between the Taiwan and the other stock markets as exogenous shocks occur, taking the Taiwan weighted stock index as the benchmark for investors' portfolio decisions. The empirical findings are as follows. 1. During the research period the subprime crisis shifts the relationship between the Taiwan index and the other indices, and there are long-run cointegration relationships between Taiwan and the other national stock markets. 2. The US stock market at trading day T+15 is significantly related to the Taiwan stock market at trading day T, and likewise the China stock market at trading day T+30 is significantly related to the Taiwan stock market at trading day T. 3. Taiwan's response to South Korea is more significant than its response to Hong Kong, with the South Korean market at trading day T+30 significantly related to the Taiwan market at trading day T; Taiwan's response to its own shocks is more pronounced before the subprime mortgage crisis. 4. Taiwan's largest impulse response is to the US market, after which the responses gradually converge, while the Hong Kong and South Korean markets also affect the Taiwan market. Before the subprime mortgage crisis the Taiwan stock market was affected to a greater degree by the US, and afterwards to a greater degree by China.
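The impulse-response part of this workflow can be sketched with statsmodels; Breitung's rank test has no standard library implementation, and the series below are simulated stand-ins for the five index series, so only the mechanics are illustrated.

```python
# A hedged sketch: VAR estimation and impulse responses on simulated indices.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(9)
T = 750                                   # roughly 2007-2009 in trading days
common = np.cumsum(rng.standard_normal(T))
data = pd.DataFrame({m: common + np.cumsum(0.5 * rng.standard_normal(T))
                     for m in ["TW", "US", "CN", "KR", "HK"]})

returns = data.diff().dropna()
res = VAR(returns).fit(maxlags=15, ic="aic")
irf = res.irf(30)                         # responses up to 30 trading days
print(irf.irfs.shape)                     # (31, 5, 5): horizon x response x impulse
```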
APA, Harvard, Vancouver, ISO, and other styles
44

Hu, Keng-Hao, and 胡耿豪. "New weighted mean test and interval estimation for the shape parameter of the lifetime distribution with bathtub-shaped or increasing failure rate function under the censored sample." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/50022998230919898829.

Full text
Abstract:
Master's thesis. Tamkang University, Master's Program, Department of Statistics, academic year 94 (2005-06).
In real life we often encounter censored samples: because of restrictions of time and cost, or human error, we cannot obtain all observations. Such data arise frequently in reliability and survival studies, and traditional statistical inference cannot be applied to them directly. This thesis therefore considers statistical inference under multiply type II censored samples. We study the shape parameter of a lifetime distribution with bathtub-shaped or increasing failure rate function under multiply type II censoring. First, we provide 12 unweighted and weighted pivotal quantities for testing the shape parameter and for constructing its confidence interval. Second, we identify the best test statistic among them, namely the one with the highest power, and we obtain the pivotal quantities yielding the shortest confidence length. Finally, two examples and a Monte Carlo simulation are given to assess the behaviour of these pivotal quantities, including the power of the tests at a given significance level and the length of the confidence intervals at a given confidence coefficient.
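The abstract does not name the lifetime model; a standard example of a two-parameter distribution with bathtub-shaped (beta < 1) or increasing (beta >= 1) failure rate is Chen's (2000) distribution, whose hazard is sketched below on the assumption that this is the intended family.

```python
# An illustrative sketch of Chen's (2000) hazard:
# h(t) = lam * beta * t**(beta - 1) * exp(t**beta).
import numpy as np

def chen_hazard(t, lam, beta):
    return lam * beta * t**(beta - 1) * np.exp(t**beta)

t = np.linspace(0.05, 2.5, 6)
print(np.round(chen_hazard(t, lam=0.5, beta=0.4), 3))   # bathtub: falls, then rises
print(np.round(chen_hazard(t, lam=0.5, beta=1.5), 3))   # monotonically increasing
```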
APA, Harvard, Vancouver, ISO, and other styles
45

Loureiro, Inês Alexandra Gonçalves. "O teste de uma hipótese univariada de normalidade por combinação de testes baseados na função característica empírica." Master's thesis, 2014. http://hdl.handle.net/10316/33687.

Full text
Abstract:
Master's dissertation in Mathematics presented to the Faculty of Sciences and Technology of the University of Coimbra.
In 1983, Epps and Pulley proposed a family of tests for assessing univariate normality based on a weighted L2 distance between the empirical characteristic function of the scaled residuals and the characteristic function of the standard normal distribution, whose weight function depends on a real parameter β. We consider a new multiple test procedure which combines a finite set of Epps and Pulley test statistics, including extreme (β ∈ {0, 1}) and non-extreme (β ∈ ]0, 1[) choices of the tuning parameter β. For each of the statistics involved in the combination, as well as for the multiple test itself, we study the main properties under the null hypothesis of normality and establish the convergence of the associated tests. Finally, we present the results of a simulation study carried out to analyze the power of the proposed multiple test and compare it with other highly recommended normality tests.
APA, Harvard, Vancouver, ISO, and other styles
46

Augustyniak, Maciej. "Une famille de distributions symétriques et leptocurtiques représentée par la différence de deux variables aléatoires gamma." Thèse, 2008. http://hdl.handle.net/1866/8192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Odintsov, Kirill. "Rozhodovací úlohy a empirická data; aplikace na nové typy úloh." Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-328310.

Full text
Abstract:
This thesis concentrates on different approaches to solving decision-making problems with an element of randomness. The basic methodologies for converting stochastic optimization problems into deterministic ones are described. The proximity of the solution of a problem to the solution of its empirical counterpart is shown; the empirical counterpart is used when the distribution of the random elements of the original problem is unknown. Heavy-tailed distributions, stable distributions and their relationship are described. Stochastic dominance and the possibility of defining problems with stochastic dominance constraints are introduced. The proximity of the solution of a problem with second-order stochastic dominance to the solution of its empirical counterpart is proven. A portfolio management problem with second-order stochastic dominance is solved by solving the equivalent empirical problem.
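The "empirical counterpart" idea can be made concrete with the newsvendor problem, whose solution is a quantile of the demand distribution; when the distribution is unknown, the empirical problem replaces it with a sample quantile. The demand model and costs below are our assumptions.

```python
# A small sketch: empirical counterpart of a stochastic program (newsvendor).
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
cu, co = 4.0, 1.0                         # underage / overage costs (assumed)
demand = rng.gamma(shape=2.0, scale=10.0, size=1000)

# True problem: q* = F^{-1}(cu/(cu+co)); empirical: the sample quantile.
q_emp = np.quantile(demand, cu / (cu + co))
q_true = stats.gamma.ppf(cu / (cu + co), a=2.0, scale=10.0)
print("empirical:", q_emp, " true:", q_true)

# Sanity check: the empirical solution minimizes the sample-average cost.
qs = np.linspace(q_emp - 10, q_emp + 10, 201)
cost = [np.mean(cu * np.maximum(demand - q, 0) + co * np.maximum(q - demand, 0))
        for q in qs]
print("grid argmin:", qs[int(np.argmin(cost))])
```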
APA, Harvard, Vancouver, ISO, and other styles
48

Σπυρόπουλος, Πέτρος. "Κλιματικοί δείκτες και επεξεργασία χρονοσειρών βροχόπτωσης στην Δυτική Ελλάδα." Thesis, 2009. http://nemertes.lis.upatras.gr/jspui/handle/10889/2409.

Full text
Abstract:
This work deals with the processing of annual and seasonal precipitation series from 12 stations in Western Greece for the 30-year period 1975-2004; for 8 of the 12 stations, where data allowed, a 50-year period (1956-2005) is also processed. Using the annual precipitation height as a climatic index, the twelve stations are generally characterized by a combination of semi-humid and humid climatic types. Applying the nonparametric Mann-Kendall test to ascertain the existence of trends over time, no significant trend appears for the period 1975-2004, with the exception of the annual precipitation of Pyrgos, which shows a significantly negative trend. For the period 1956-2005, significantly negative trends occur on both a seasonal basis (mainly in spring) and an annual basis for half of the eight stations examined. The Gamma distribution is the statistical distribution that best describes precipitation height, and in our case its parameters are determined per station for the period 1975-2004, on both a seasonal and an annual basis, by the maximum likelihood method. Within the framework of spectral analysis, to verify whether periodicity exists in the variance of the seasonal and annual precipitation values, the 7 stations with sufficient measurements are examined for the 50-year period 1955-2004 using the Blackman-Tukey method. It follows that the precipitation series show no clear periodicity in variance during autumn and spring. In contrast, for winter and for the annual values, the spectra of the stations reveal three quite similar regions of periodic frequencies. This reflects the fact that the stations of Western Greece are generally influenced by roughly the same barometric systems, so it is natural for them to exhibit similar periodicity components in the variances of their precipitation values.
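The Mann-Kendall test used above takes only a few lines; the sketch below uses the no-ties variance formula, and the series is a simulated stand-in for a precipitation record.

```python
# A minimal sketch of the Mann-Kendall trend test (no ties assumed).
import numpy as np
from scipy import stats

def mann_kendall(x):
    n = len(x)
    # S = sum of sign(x_j - x_i) over all pairs i < j.
    s = np.sum(np.sign(x[None, :] - x[:, None])[np.triu_indices(n, k=1)])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return z, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(6)
years = np.arange(50)
rain = 800 - 2.0 * years + rng.normal(0, 60, size=50)   # declining series
print(mann_kendall(rain))   # negative Z, small p -> significant negative trend
```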
APA, Harvard, Vancouver, ISO, and other styles
