Journal articles on the topic "Poisson-Distributed observations"

To see the other types of publications on this topic, follow the link: Poisson-Distributed observations.

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Consult the top 50 journal articles for your research on the topic "Poisson-Distributed observations".

Next to every source in the list of references, there is an "Add to bibliography" button. Press it, and the bibliographic reference to the chosen work will be formatted automatically in the required citation style: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in PDF format and read its online annotation whenever the relevant details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Ades, M., P. E. Caines, and R. P. Malhame. "Stochastic optimal control under Poisson-distributed observations". IEEE Transactions on Automatic Control 45, no. 1 (2000): 3–13. http://dx.doi.org/10.1109/9.827351.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Hakulinen, Timo, and Tadeusz Dyba. "Precision of incidence predictions based on poisson distributed observations". Statistics in Medicine 13, no. 15 (August 15, 1994): 1513–23. http://dx.doi.org/10.1002/sim.4780131503.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Kirmani, S. N. U. A., and Jacek Wesołowski. "Time spent below a random threshold by a Poisson driven sequence of observations". Journal of Applied Probability 40, no. 3 (September 2003): 807–14. http://dx.doi.org/10.1239/jap/1059060907.

Full text of the source
Annotation:
The mean and the variance of the time S(t) spent by a system below a random threshold until t are obtained when the system level is modelled by the current value of a sequence of independent and identically distributed random variables appearing at the epochs of a nonhomogeneous Poisson process. In the case of the homogeneous Poisson process, the asymptotic distribution of S(t)/t as t → ∞ is derived.
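As a quick illustration of the quantity studied here, the following sketch (an illustrative construction, not the authors' code) estimates S(t)/t by Monte Carlo in the homogeneous case: the system level equals the most recent of a sequence of i.i.d. draws appearing at the epochs of a homogeneous Poisson process, and the threshold is an independent random draw. The initial level before the first epoch and all distributional choices are assumptions made only to close the simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def fraction_below_threshold(t, rate, level_sampler, threshold_sampler):
    """Simulate S(t)/t: fraction of [0, t] the current level spends below a random threshold.
    Levels are redrawn i.i.d. at the epochs of a homogeneous Poisson process of given rate."""
    n = rng.poisson(rate * t)                      # number of epochs in [0, t]
    epochs = np.sort(rng.uniform(0.0, t, size=n))  # given the count, epochs are uniform on [0, t]
    times = np.concatenate(([0.0], epochs, [t]))   # segment boundaries
    levels = level_sampler(n + 1)                  # level on each segment (first one is an assumed initial draw)
    threshold = threshold_sampler()
    durations = np.diff(times)
    return np.sum(durations[levels < threshold]) / t

# Example: exponential(1) levels, uniform(0, 2) threshold, rate 5, horizon t = 100.
samples = [fraction_below_threshold(100.0, 5.0,
                                    lambda k: rng.exponential(1.0, size=k),
                                    lambda: rng.uniform(0.0, 2.0))
           for _ in range(2000)]
print("mean of S(t)/t:", np.mean(samples), "variance:", np.var(samples))
```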
APA, Harvard, Vancouver, ISO, and other citation styles
4

Kirmani, S. N. U. A., and Jacek Wesołowski. "Time spent below a random threshold by a Poisson driven sequence of observations". Journal of Applied Probability 40, no. 3 (September 2003): 807–14. http://dx.doi.org/10.1017/s0021900200019756.

Full text of the source
Annotation:
The mean and the variance of the time S(t) spent by a system below a random threshold until t are obtained when the system level is modelled by the current value of a sequence of independent and identically distributed random variables appearing at the epochs of a nonhomogeneous Poisson process. In the case of the homogeneous Poisson process, the asymptotic distribution of S(t)/t as t → ∞ is derived.
APA, Harvard, Vancouver, ISO, and other citation styles
5

Taylor, Greg. "EXISTENCE AND UNIQUENESS OF CHAIN LADDER SOLUTIONS". ASTIN Bulletin 47, no. 1 (August 12, 2016): 1–41. http://dx.doi.org/10.1017/asb.2016.23.

Full text of the source
Annotation:
The cross-classified chain ladder has a number of versions, depending on the distribution to which observations are subject. The simplest case is that of Poisson distributed observations, and then maximum likelihood estimates of parameters are explicit. Most other cases, however, including Bayesian chain ladder models, lead to implicit MAP (Bayesian) or MLE (non-Bayesian) solutions for these parameter estimates, raising questions as to their existence and uniqueness. The present paper investigates these questions in the case where observations are distributed according to some member of the exponential dispersion family.
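To make the remark about explicit maximum likelihood estimates in the Poisson case concrete, here is a small sketch (a toy triangle of my own, not taken from the paper) that fits the cross-classified model with cell means α_i β_j by alternating the Poisson likelihood equations over the observed cells; the fitted future cells then give the familiar chain ladder reserve.

```python
import numpy as np

# A tiny incremental run-off triangle (np.nan marks unobserved future cells); numbers are made up.
Y = np.array([[120., 60., 30.],
              [130., 70., np.nan],
              [150., np.nan, np.nan]])
obs = (~np.isnan(Y)).astype(float)     # 1.0 for observed cells, 0.0 otherwise
Y0 = np.where(obs == 1.0, Y, 0.0)      # observed increments with zeros elsewhere

alpha = np.ones(Y.shape[0])            # row (accident-period) parameters
beta = Y0.sum(axis=0) / Y0.sum()       # crude starting values for column (development) parameters

# Alternate the Poisson likelihood equations: observed row and column totals
# must equal the fitted totals over the observed cells.
for _ in range(500):
    alpha = Y0.sum(axis=1) / (obs @ beta)
    beta = Y0.sum(axis=0) / (obs.T @ alpha)

fitted = np.outer(alpha, beta)
reserve = fitted[obs == 0.0].sum()     # predicted future increments (the chain ladder reserve)
print(np.round(fitted, 2), "reserve:", round(reserve, 2))
```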
APA, Harvard, Vancouver, ISO, and other citation styles
6

Li, Li. "The GLR Chart for Poisson Process with Individual Observations". Advanced Materials Research 542-543 (June 2012): 42–46. http://dx.doi.org/10.4028/www.scientific.net/amr.542-543.42.

Full text of the source
Annotation:
A GLR (generalized likelihood ratio) chart for Poisson distributed process with individual observations is proposed and the design procedure of the GLR chart is discussed. The performance of the GLR charts is compared to the exponentially weighted moving average (EWMA) chart and the GWMA chart. The numerical experiments show that the GLR chart has comparable performance as the other two charts. However, the GLR chart is much easier to design and implement since there are more design parameters in these two charts.
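A minimal sketch of a generalized likelihood ratio statistic for individual Poisson observations may help readers unfamiliar with GLR charts; the in-control mean, the restriction to upward shifts, and all numbers below are assumptions for illustration, not the chart design proposed in the paper.

```python
import numpy as np

def poisson_glr(counts, lam0):
    """GLR statistic at the latest time point for an upward shift in a Poisson mean.
    Maximizes the log likelihood ratio over the unknown change point and shifted mean."""
    counts = np.asarray(counts, dtype=float)
    t = len(counts)
    best = 0.0
    for k in range(t):                     # candidate change point: shift starts at index k
        seg = counts[k:]
        lam_hat = max(seg.mean(), lam0)    # restrict to upward shifts
        llr = seg.sum() * np.log(lam_hat / lam0) - len(seg) * (lam_hat - lam0)
        best = max(best, llr)
    return best

rng = np.random.default_rng(0)
x = np.concatenate([rng.poisson(4.0, 30), rng.poisson(6.0, 10)])  # mean shifts from 4 to 6
print(poisson_glr(x, lam0=4.0))   # compare against a control limit chosen by simulation
```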
APA, Harvard, Vancouver, ISO, and other citation styles
7

Collings, Bruce J., and Barry H. Margolin. "Testing Goodness of Fit for the Poisson Assumption When Observations are Not Identically Distributed". Journal of the American Statistical Association 80, no. 390 (June 1985): 411–18. http://dx.doi.org/10.1080/01621459.1985.10478132.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Bülow, Tanja, Ralf-Dieter Hilgers, and Nicole Heussen. "Confidence interval comparison: Precision of maximum likelihood estimates in LLOQ affected data". PLOS ONE 18, no. 11 (November 2, 2023): e0293640. http://dx.doi.org/10.1371/journal.pone.0293640.

Full text of the source
Annotation:
When data is derived under a single or multiple lower limits of quantification (LLOQ), estimation of distribution parameters as well as precision of these estimates appear to be challenging, as the way to account for unquantifiable observations due to LLOQs needs particular attention. The aim of this investigation is to characterize the precision of censored sample maximum likelihood estimates of the mean for normal, exponential and Poisson distribution affected by one or two LLOQs using confidence intervals (CI). In a simulation study, asymptotic and bias-corrected accelerated bootstrap CIs for the location parameter mean are compared with respect to coverage proportion and interval width. To enable this examination, we derived analytical expressions of the maximum likelihood location parameter estimate for the assumption of exponentially and Poisson distributed data, where the censored sample method and simple imputation method are used to account for LLOQs. Additionally, we vary the proportion of observations below the LLOQs. When based on the censored sample estimate, the bootstrap CI led to higher coverage proportions and narrower interval width than the asymptotic CI. The results differed by underlying distribution. Under the assumption of normality, the CI’s coverage proportion and width suffered most from high proportions of unquantifiable observations. For exponentially and Poisson distributed data, both CI approaches delivered similar results. To derive the CIs, the point estimates from the censored sample method are preferable, because the point estimate of the simple imputation method leads to higher bias for all investigated distributions. This biased simple imputation estimate impairs the coverage proportion of the respective CI. The bootstrap CI surpassed the asymptotic CIs with respect to coverage proportion for the investigated choice of distributional assumptions. The variety of distributions for which the methods are suitable gives the applicant a widely usable tool to handle LLOQ affected data with appropriate approaches.
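The censored-sample likelihood idea described above is easy to sketch for the exponential case with a single LLOQ (the construction below is illustrative, not the authors' code): quantifiable values contribute density terms, while values below the LLOQ contribute only the probability of falling below it.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
true_mean, lloq = 2.0, 1.0
x = rng.exponential(true_mean, size=200)
observed = x[x >= lloq]              # quantifiable values
n_cens = np.sum(x < lloq)            # values only known to lie below the LLOQ

def neg_log_lik(mean):
    lam = 1.0 / mean
    ll_obs = np.sum(np.log(lam) - lam * observed)        # density terms
    ll_cens = n_cens * np.log1p(-np.exp(-lam * lloq))    # P(X < LLOQ) terms
    return -(ll_obs + ll_cens)

fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded")
print("censored-sample ML estimate of the mean:", fit.x)
```

A bootstrap confidence interval of the kind compared in the paper would simply repeat this fit on resampled data.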
APA, Harvard, Vancouver, ISO, and other citation styles
9

Lauderdale, Benjamin E. "Compound Poisson-Gamma Regression Models for Dollar Outcomes That Are Sometimes Zero". Political Analysis 20, no. 3 (2012): 387–99. http://dx.doi.org/10.1093/pan/mps018.

Full text of the source
Annotation:
Political scientists often study dollar-denominated outcomes that are zero for some observations. These zeros can arise because the data-generating process is granular: The observed outcome results from aggregation of a small number of discrete projects or grants, each of varying dollar size. This article describes the use of a compound distribution in which each observed outcome is the sum of a Poisson-distributed number of gamma distributed quantities, a special case of the Tweedie distribution. Regression models based on this distribution estimate loglinear marginal effects without either the ad hoc treatment of zeros necessary to use a log-dependent variable regression or the change in quantity of interest necessary to use a tobit or selection model. The compound Poisson-gamma regression is compared with commonly applied approaches in an application to data on high-speed rail grants from the United States federal government to the states, and against simulated data from several data-generating processes.
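The compound distribution described in this abstract is straightforward to simulate; in the sketch below (parameter values are arbitrary), each outcome is the sum of a Poisson number of gamma-distributed amounts, so exact zeros occur precisely when the Poisson count is zero.

```python
import numpy as np

rng = np.random.default_rng(7)

def compound_poisson_gamma(lam, shape, scale, size):
    """Each outcome is the sum of N ~ Poisson(lam) independent Gamma(shape, scale) amounts."""
    n_projects = rng.poisson(lam, size=size)
    return np.array([rng.gamma(shape, scale, k).sum() for k in n_projects])

y = compound_poisson_gamma(lam=1.2, shape=2.0, scale=5.0, size=10_000)
print("share of exact zeros:", np.mean(y == 0.0))      # P(N = 0) = exp(-1.2), roughly 0.30
print("mean:", y.mean(), "vs lam*shape*scale =", 1.2 * 2.0 * 5.0)
```

Outcomes generated this way could then be handed to any Tweedie or compound Poisson-gamma regression routine of the kind the article compares.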
APA, Harvard, Vancouver, ISO, and other citation styles
10

Gnedin, Alexander V. "Optimal Stopping with Rank-Dependent Loss". Journal of Applied Probability 44, no. 4 (December 2007): 996–1011. http://dx.doi.org/10.1017/s0021900200003697.

Full text of the source
Annotation:
For τ, a stopping rule adapted to a sequence of n independent and identically distributed observations, we define the loss to be E[q(Rτ)], where Rj is the rank of the jth observation and q is a nondecreasing function of the rank. This setting covers both the best-choice problem, with q(r) = 1(r > 1), and Robbins' problem, with q(r) = r. As n tends to ∞, the stopping problem acquires a limiting form which is associated with the planar Poisson process. Inspecting the limit we establish bounds on the stopping value and reveal qualitative features of the optimal rule. In particular, we show that the complete history dependence persists in the limit; thus answering a question asked by Bruss (2005) in the context of Robbins' problem.
APA, Harvard, Vancouver, ISO, and other citation styles
11

Bruss, F. Thomas, and Yvik C. Swan. "A Continuous-Time Approach to Robbins' Problem of Minimizing the Expected Rank". Journal of Applied Probability 46, no. 1 (March 2009): 1–18. http://dx.doi.org/10.1239/jap/1238592113.

Full text of the source
Annotation:
Let X1, X2, …, Xn be independent random variables uniformly distributed on [0,1]. We observe these sequentially and have to stop on exactly one of them. No recall of preceding observations is permitted. What stopping rule minimizes the expected rank of the selected observation? What is the value of the expected rank (as a function of n) and what is the limit of this value when n goes to ∞? This full-information expected selected-rank problem is known as Robbins' problem of minimizing the expected rank, and its general solution is unknown. In this paper we provide an alternative approach to Robbins' problem. Our model is similar to that of Gnedin (2007). For this, we consider a continuous-time version of the problem in which the observations follow a Poisson arrival process on ℝ+ × [0,1] of homogeneous rate 1. Translating the previous optimal selection problem in this setting, we prove that, under reasonable assumptions, the corresponding value function w(t) is bounded and Lipschitz continuous. Our main result is that the limiting value of the Poisson embedded problem exists and is equal to that of Robbins' problem. We prove that w(t) is differentiable and also derive a differential equation for this function. Although we have not succeeded in using this equation to improve on bounds on the optimal limiting value, we argue that it has this potential.
APA, Harvard, Vancouver, ISO, and other citation styles
12

Bruss, F. Thomas, and Yvik C. Swan. "A Continuous-Time Approach to Robbins' Problem of Minimizing the Expected Rank". Journal of Applied Probability 46, no. 1 (March 2009): 1–18. http://dx.doi.org/10.1017/s0021900200005192.

Full text of the source
Annotation:
Let X1, X2, …, Xn be independent random variables uniformly distributed on [0,1]. We observe these sequentially and have to stop on exactly one of them. No recall of preceding observations is permitted. What stopping rule minimizes the expected rank of the selected observation? What is the value of the expected rank (as a function of n) and what is the limit of this value when n goes to ∞? This full-information expected selected-rank problem is known as Robbins' problem of minimizing the expected rank, and its general solution is unknown. In this paper we provide an alternative approach to Robbins' problem. Our model is similar to that of Gnedin (2007). For this, we consider a continuous-time version of the problem in which the observations follow a Poisson arrival process on ℝ+ × [0,1] of homogeneous rate 1. Translating the previous optimal selection problem in this setting, we prove that, under reasonable assumptions, the corresponding value function w(t) is bounded and Lipschitz continuous. Our main result is that the limiting value of the Poisson embedded problem exists and is equal to that of Robbins' problem. We prove that w(t) is differentiable and also derive a differential equation for this function. Although we have not succeeded in using this equation to improve on bounds on the optimal limiting value, we argue that it has this potential.
APA, Harvard, Vancouver, ISO, and other citation styles
13

Olanrewaju, Rasaki Olawale. "Integer-valued Time Series Model via Generalized Linear Models Technique of Estimation". International Annals of Science 4, no. 1 (April 29, 2018): 35–43. http://dx.doi.org/10.21467/ias.4.1.35-43.

Full text of the source
Annotation:
The paper authenticated the need for separate positive integer time series model(s). This was done from the standpoint of a proposal for both mixtures of continuous and discrete time series models. Positive integer time series data are time series data subjected to a number of events per constant interval of time that relatedly fits into the analogy of conditional mean and variance which depends on immediate past observations. This includes dependency among observations that can be best described by Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model with Poisson distributed error term due to its positive integer defined range of values. As a result, an integer GARCH model with Poisson distributed error term was formed in this paper and called Integer Generalized Autoregressive Conditional Heteroscedasticity (INGARCH). Iterative Reweighted Least Square (IRLS) parameter estimation technique type of the Generalized Linear Models (GLM) was adopted to estimate parameters of the two spilt models; Linear and Log-linear INGARCH models deduced from the identity link function and logarithmic link function, respectively. This resulted from the log-likelihood function generated from the GLM via the random component that follows a Poisson distribution. A study of monthly successful bids of auction from 2003 to 2015 was carried out. The Probabilistic Integral Transformation (PIT) and scoring rule pinpointed the uniformity of the linear INGARCH than that of the log-linear INGARCH in describing first order autocorrelation, serial dependence and positive conditional effects among covariates based on the immediate past. The linear INGARCH model outperformed the log-linear INGARCH model with (AIC = 10514.47, BIC = 10545.01, QIC = 34128.56) and (AIC = 37588.83, BIC = 37614.28, QIC = 37587.3), respectively.
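For readers unfamiliar with the INGARCH recursion, a minimal simulation of the linear INGARCH(1,1) model may help; the parameter values are illustrative assumptions, not estimates from the auction data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ingarch(omega, alpha, beta, n):
    """Linear INGARCH(1,1): X_t | past ~ Poisson(lam_t), lam_t = omega + alpha*X_{t-1} + beta*lam_{t-1}."""
    lam = np.empty(n)
    x = np.empty(n, dtype=int)
    lam[0] = omega / (1.0 - alpha - beta)      # start at the stationary mean
    x[0] = rng.poisson(lam[0])
    for t in range(1, n):
        lam[t] = omega + alpha * x[t - 1] + beta * lam[t - 1]
        x[t] = rng.poisson(lam[t])
    return x, lam

x, lam = simulate_ingarch(omega=2.0, alpha=0.3, beta=0.4, n=5000)
print("sample mean:", x.mean(), "vs omega/(1-alpha-beta) =", 2.0 / 0.3)
print("overdispersion (var/mean):", x.var() / x.mean())
```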
APA, Harvard, Vancouver, ISO, and other citation styles
14

Gnedin, Alexander V. "Optimal Stopping with Rank-Dependent Loss". Journal of Applied Probability 44, no. 4 (December 2007): 996–1011. http://dx.doi.org/10.1239/jap/1197908820.

Full text of the source
Annotation:
For τ, a stopping rule adapted to a sequence of n independent and identically distributed observations, we define the loss to be E[q(Rτ)], where Rj is the rank of the jth observation and q is a nondecreasing function of the rank. This setting covers both the best-choice problem, with q(r) = 1(r > 1), and Robbins' problem, with q(r) = r. As n tends to ∞, the stopping problem acquires a limiting form which is associated with the planar Poisson process. Inspecting the limit we establish bounds on the stopping value and reveal qualitative features of the optimal rule. In particular, we show that the complete history dependence persists in the limit; thus answering a question asked by Bruss (2005) in the context of Robbins' problem.
APA, Harvard, Vancouver, ISO, and other citation styles
15

Mielenz, Norbert, Joachim Spilke, and Eberhard von Borell. "Analysis of correlated count data using generalised linear mixed models exemplified by field data on aggressive behaviour of boars". Archives Animal Breeding 57, no. 1 (January 29, 2015): 1–19. http://dx.doi.org/10.5194/aab-57-26-2015.

Full text of the source
Annotation:
Population-averaged and subject-specific models are available to evaluate count data when repeated observations per subject are present. The latter are also known in the literature as generalised linear mixed models (GLMM). In GLMM repeated measures are taken into account explicitly through random animal effects in the linear predictor. In this paper the relevant GLMMs are presented based on conditional Poisson or negative binomial distribution of the response variable for given random animal effects. Equations for the repeatability of count data are derived assuming normal distribution and logarithmic gamma distribution for the random animal effects. Using count data on aggressive behaviour events of pigs (barrows, sows and boars) in mixed-sex housing, we demonstrate the use of the Poisson »log-gamma intercept«, the Poisson »normal intercept« and the »normal intercept« model with negative binomial distribution. Since not all count data can definitely be seen as Poisson or negative-binomially distributed, questions of model selection and model checking are examined. Emanating from the example, we also interpret the least squares means, estimated on the link as well as the response scale. Options provided by the SAS procedure NLMIXED for estimating model parameters and for estimating marginal expected values are presented.
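The conditional-Poisson-with-random-animal-effect structure can be illustrated in a few lines; the sketch below (synthetic data, gamma-distributed multiplicative effect, i.e. a log-gamma intercept on the log link) shows why the marginal counts are overdispersed relative to a plain Poisson model.

```python
import numpy as np

rng = np.random.default_rng(11)

n_animals, obs_per_animal = 200, 5
mu = 3.0                                   # marginal mean per observation
shape = 2.0                                # gamma shape of the multiplicative animal effect (mean 1)
u = rng.gamma(shape, 1.0 / shape, size=n_animals)          # log(u) acts as the random intercept
counts = rng.poisson(mu * np.repeat(u, obs_per_animal))    # conditional Poisson given the animal effect

# Marginally the counts are negative binomial; the variance exceeds the Poisson variance.
print("mean:", counts.mean(), "variance:", counts.var())
print("theoretical variance mu + mu^2/shape =", mu + mu**2 / shape)
```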
APA, Harvard, Vancouver, ISO, and other citation styles
16

Mielenz, Norbert, Joachim Spilke, and Eberhard von Borell. "Analysis of correlated count data using generalised linear mixed models exemplified by field data on aggressive behaviour of boars". Archives Animal Breeding 57, no. 1 (January 29, 2015): 1–19. http://dx.doi.org/10.7482/0003-9438-57-026.

Full text of the source
Annotation:
Abstract. Population-averaged and subject-specific models are available to evaluate count data when repeated observations per subject are present. The latter are also known in the literature as generalised linear mixed models (GLMM). In GLMM repeated measures are taken into account explicitly through random animal effects in the linear predictor. In this paper the relevant GLMMs are presented based on conditional Poisson or negative binomial distribution of the response variable for given random animal effects. Equations for the repeatability of count data are derived assuming normal distribution and logarithmic gamma distribution for the random animal effects. Using count data on aggressive behaviour events of pigs (barrows, sows and boars) in mixed-sex housing, we demonstrate the use of the Poisson »log-gamma intercept«, the Poisson »normal intercept« and the »normal intercept« model with negative binomial distribution. Since not all count data can definitely be seen as Poisson or negative-binomially distributed, questions of model selection and model checking are examined. Emanating from the example, we also interpret the least squares means, estimated on the link as well as the response scale. Options provided by the SAS procedure NLMIXED for estimating model parameters and for estimating marginal expected values are presented.
APA, Harvard, Vancouver, ISO, and other citation styles
17

Cruces, M., L. G. Spitler, P. Scholz, R. Lynch, A. Seymour, J. W. T. Hessels, C. Gouiffés, G. H. Hilmarsson, M. Kramer, and S. Munjal. "Repeating behaviour of FRB 121102: periodicity, waiting times, and energy distribution". Monthly Notices of the Royal Astronomical Society 500, no. 1 (October 16, 2020): 448–63. http://dx.doi.org/10.1093/mnras/staa3223.

Full text of the source
Annotation:
ABSTRACT Detections from the repeating fast radio burst FRB 121102 are clustered in time, noticeable even in the earliest repeat bursts. Recently, it was argued that the source activity is periodic, suggesting that the clustering reflected a not-yet-identified periodicity. We performed an extensive multiwavelength campaign with the Effelsberg telescope, the Green Bank telescope, and the Arecibo Observatory to shadow the Gran Telescope Canaria (optical), NuSTAR (X-ray) and INTEGRAL (γ-ray). We detected 36 bursts with Effelsberg, one with a pulse width of 39 ms, the widest burst ever detected from FRB 121102. With one burst detected during simultaneous NuSTAR observations, we place a 5σ upper limit of 5 × 1047 erg on the 3–79 keV energy of an X-ray burst counterpart. We tested the periodicity hypothesis using 165 h of Effelsberg observations and find a periodicity of 161 ± 5 d. We predict the source to be active from 2020 July 9 to October 14 and subsequently from 2020 December 17 to 2021 March 24. We compare the wait times between consecutive bursts within a single observation to Weibull and Poisson distributions. We conclude that the strong clustering was indeed a consequence of a periodic activity and show that if the few events with millisecond separation are excluded, the arrival times are Poisson distributed. We model the bursts’ cumulative energy distribution with energies from ∼1038–1039 erg and find that it is well described by a power law with slope of γ = −1.1 ± 0.2. We propose that a single power law might be a poor descriptor of the data over many orders of magnitude.
APA, Harvard, Vancouver, ISO, and other citation styles
18

Cament, Leonardo, Martin Adams, and Pablo Barrios. "Space Debris Tracking with the Poisson Labeled Multi-Bernoulli Filter". Sensors 21, no. 11 (May 26, 2021): 3684. http://dx.doi.org/10.3390/s21113684.

Full text of the source
Annotation:
This paper presents a Bayesian filter based solution to the Space Object (SO) tracking problem using simulated optical telescopic observations. The presented solution utilizes the Probabilistic Admissible Region (PAR) approach, which is an orbital admissible region that adheres to the assumption of independence between newborn targets and surviving SOs. These SOs obey physical energy constraints in terms of orbital semi-major axis length and eccentricity within a range of orbits of interest. In this article, Low Earth Orbit (LEO) SOs are considered. The solution also adopts the Partially Uniform Birth (PUB) intensity, which generates uniformly distributed births in the sensor field of view. The measurement update then generates a particle SO distribution. In this work, a Poisson Labeled Multi-Bernoulli (PLMB) multi-target tracking filter is proposed, using the PUB intensity model for the multi-target birth density, and a PAR for the spatial density to determine the initial orbits of SOs. Experiments are demonstrated using simulated SO trajectories created from real Two-Line Element data, with simulated measurements from twelve telescopes located in observatories, which form part of the Falcon telescope network. Optimal Sub-Pattern Assignment (OSPA) and CLEAR MOT metrics demonstrate encouraging multi-SO tracking results even under very low numbers of observations per SO pass.
APA, Harvard, Vancouver, ISO, and other citation styles
19

Astuti, Cindy Cahyaning, and Angga Dwi Mulyanto. "Estimation Parameters And Modelling Zero Inflated Negative Binomial". CAUCHY 4, no. 3 (November 30, 2016): 115. http://dx.doi.org/10.18860/ca.v4i3.3656.

Full text of the source
Annotation:
Regression analysis is used to determine relationship between one or several response variable (Y) with one or several predictor variables (X). Regression model between predictor variables and the Poisson distributed response variable is called Poisson Regression Model. Since, Poisson Regression requires an equality between mean and variance, it is not appropriate to apply this model on overdispersion (variance is higher than mean). Poisson regression model is commonly used to analyze the count data. On the count data type, it is often to encounteredd some observations that have zero value with large proportion of zero value on the response variable (zero Inflation). Poisson regression can be used to analyze count data but it has not been able to solve problem of excess zero value on the response variable. An alternative model which is more suitable for overdispersion data and can solve the problem of excess zero value on the response variable is Zero Inflated Negative Binomial (ZINB). In this research, ZINB is applied on the case of Tetanus Neonatorum in East Java. The aim of this research is to examine the likelihood function and to form an algorithm to estimate the parameter of ZINB and also applying ZINB model in the case of Tetanus Neonatorum in East Java. Maximum Likelihood Estimation (MLE) method is used to estimate the parameter on ZINB and the likelihood function is maximized using Expectation Maximization (EM) algorithm. Test results of ZINB regression model showed that the predictor variable have a partial significant effect at negative binomial model is the percentage of pregnant women visits and the percentage of maternal health personnel assisted, while the predictor variables that have a partial significant effect at zero inflation model is the percentage of neonatus visits.
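A small sketch of the zero-inflated negative binomial probability mass function that the paper estimates by EM may be useful; the mixing proportion and negative binomial parameters below are arbitrary, and the parameterisation is the one used by scipy.stats.nbinom.

```python
import numpy as np
from scipy.stats import nbinom

def zinb_pmf(k, pi, n, p):
    """Zero-inflated negative binomial: with probability pi an excess zero,
    otherwise a draw from NB(n, p) as parameterized in scipy.stats.nbinom."""
    pmf = (1.0 - pi) * nbinom.pmf(k, n, p)
    return np.where(k == 0, pi + pmf, pmf)

k = np.arange(6)
print(zinb_pmf(k, pi=0.3, n=2.0, p=0.4))
print("total mass up to large k:", zinb_pmf(np.arange(200), 0.3, 2.0, 0.4).sum())
```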
APA, Harvard, Vancouver, ISO, and other citation styles
20

Jones, G. H., and J. E. Vincent. "Meiosis in allopolyploid Crepis capillaris. II. Autotetraploids". Genome 37, no. 3 (June 1, 1994): 497–505. http://dx.doi.org/10.1139/g94-069.

Full text of the source
Annotation:
Meiotic chromosome pairing of autotetraploid Crepis capillaris was analysed by electron microscopy of surface-spread prophase I nuclei and compared with light microscopic observations of metaphase I chromosome configurations. Prophase I quadrivalent frequencies are high in all three tetrasomes. (A, D, and C) and partially dependent on chromosome size. At metaphase I quadrivalent frequencies are much lower and strongly dependent on chromosome size. There is no evidence for multivalent elimination during prophase I in this system, and the reduction in multivalent frequency at metaphase I can be explained by an insufficiency of appropriately placed chiasmata. The high frequencies of prophase I quadrivalents far exceed the two-thirds expected on a simple model with two terminal independent pairing initiation sites per tetrasome, suggesting that multiple pairing initiation occurs. Direct observations reveal relatively high frequencies of pairing partner switches (PPSs) at prophase I, which confirms this suggestion. The numbers of PPSs per tetrasome show a good fit to the Poisson distribution, and their positional distribution along chromosomes is random and nonlocalized. These observations favour a model of pairing initiation based on a large number of evenly distributed autonomous pairing sites each with a uniform and low probability of generating a PPS.Key words: autotetraploid, meiosis, Crepis capillaris, multivalent, pairing partner switch.
APA, Harvard, Vancouver, ISO, and other citation styles
21

Wang, L., C. Onof, and C. Maksimovic. "Reconstruction of sub-daily rainfall sequences using multinomial multiplicative cascades". Hydrology and Earth System Sciences Discussions 7, no. 4 (August 3, 2010): 5267–97. http://dx.doi.org/10.5194/hessd-7-5267-2010.

Full text of the source
Annotation:
Abstract. This work aims to develop a semi-deterministic multiplicative cascade method for producing reliable short-term (sub-daily) rainfall sequences. The scaling feature of sub-daily rainfall sequences is analysed over the timescales of interest (i.e., 5 min to hourly in this research) to help derive the crucial parameters, i.e., the fragmentation ratios, for the proposed method. These derived ratios are then further used to stochastically disaggregate hourly rainfall sequences to 5-min using the multiplicative cascade process. The log-Poisson distributed cascade method is involved in this work to validate the proposed methodology by comparing certain statistics of the generated rainfall sequences over a specific range of timescales. The results demonstrate that the proposed methodology in general has the ability to reproduce the patterns of sub-daily rainfall observations from Greenwich raingauge station in London.
APA, Harvard, Vancouver, ISO, and other citation styles
22

Uijlenhoet, R., J. M. Porrà, D. Sempere Torres, and J. D. Creutin. "Edge effect causes apparent fractal correlation dimension of uniform spatial raindrop distribution". Nonlinear Processes in Geophysics 16, no. 2 (April 9, 2009): 287–97. http://dx.doi.org/10.5194/npg-16-287-2009.

Full text of the source
Annotation:
Abstract. Lovejoy and Schertzer (1990a) presented a statistical analysis of blotting paper observations of the (two-dimensional) spatial distribution of raindrop stains. They found empirical evidence for the fractal scaling behavior of raindrops in space, with potentially far-reaching implications for rainfall microphysics and radar meteorology. In particular, the fractal correlation dimensions determined from their blotting paper observations led them to conclude that "drops are (hierarchically) clustered" and that "inhomogeneity in rain is likely to extend down to millimeter scales". Confirming previously reported Monte Carlo simulations, we demonstrate analytically that the claims based on this analysis need to be reconsidered, as fractal correlation dimensions similar to the ones reported (i.e. smaller than the value of two expected for uniformly distributed raindrops) can result from instrumental artifacts (edge effects) in otherwise homogeneous Poissonian rainfall. Hence, the results of the blotting paper experiment are not statistically significant enough to reject the Poisson homogeneity hypothesis in favor of a fractal description of the discrete nature of rainfall. Our analysis is based on an analytical expression for the expected overlap area between a circle and a square, when the circle center is randomly (uniformly) distributed inside the square. The derived expression (πr² − 8r³/3 + r⁴/2, where r denotes the ratio between the circle radius and the side of the square) can be used as a reference curve against which to test the statistical significance of fractal correlation dimensions determined from spatial point patterns, such as those of raindrops and rainfall cells.
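The reference curve quoted at the end of this abstract is easy to verify numerically: for 0 ≤ r ≤ 1, the expression πr² − 8r³/3 + r⁴/2 also equals the probability that two independent uniform points in the unit square lie within distance r of each other, which coincides with the expected circle-square overlap described above. A short, self-contained Monte Carlo check (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(2024)

def empirical_cdf_of_distance(r, n=1_000_000):
    """P(|X - C| <= r) for two independent uniform points in the unit square."""
    x = rng.uniform(size=(n, 2))
    c = rng.uniform(size=(n, 2))
    d = np.linalg.norm(x - c, axis=1)
    return np.mean(d <= r)

for r in (0.05, 0.1, 0.2):
    formula = np.pi * r**2 - 8.0 * r**3 / 3.0 + r**4 / 2.0
    print(r, empirical_cdf_of_distance(r), formula)
```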
APA, Harvard, Vancouver, ISO, and other citation styles
23

Jaenada, María, and Leandro Pardo. "Robust Statistical Inference in Generalized Linear Models Based on Minimum Renyi's Pseudodistance Estimators". Entropy 24, no. 1 (January 13, 2022): 123. http://dx.doi.org/10.3390/e24010123.

Full text of the source
Annotation:
Minimum Renyi’s pseudodistance estimators (MRPEs) enjoy good robustness properties without a significant loss of efficiency in general statistical models, and, in particular, for linear regression models (LRMs). In this line, Castilla et al. considered robust Wald-type test statistics in LRMs based on these MRPEs. In this paper, we extend the theory of MRPEs to Generalized Linear Models (GLMs) using independent and nonidentically distributed observations (INIDO). We derive asymptotic properties of the proposed estimators and analyze their influence function to asses their robustness properties. Additionally, we define robust Wald-type test statistics for testing linear hypothesis and theoretically study their asymptotic distribution, as well as their influence function. The performance of the proposed MRPEs and Wald-type test statistics are empirically examined for the Poisson Regression models through a simulation study, focusing on their robustness properties. We finally test the proposed methods in a real dataset related to the treatment of epilepsy, illustrating the superior performance of the robust MRPEs as well as Wald-type tests.
APA, Harvard, Vancouver, ISO, and other citation styles
24

Predić, Bratislav, Nevena Radosavljević, and Aleksandar Stojčić. "TIME SERIES ANALYSIS: FORECASTING SALES PERIODS IN WHOLESALE SYSTEMS". Facta Universitatis, Series: Automatic Control and Robotics 18, no. 3 (January 27, 2020): 177. http://dx.doi.org/10.22190/fuacr1903177p.

Full text of the source
Annotation:
The main goal of time series analysis is explaining the correlation and the main features of the data in chronological order by using appropriate statistical models. It is being used in various aspects of life and work, as well as in forecasting future product demands, service demands, etc. The most common type of time series data is the one whose observations are taken in equally distributed time intervals (daily, weekly, monthly, etc.). However, in this paper, we analyze a different kind of time series which represents product purchase moments. Thus, since there are not any regular observation periods, this irregular time series must be transformed in some way before traditional methods of analysis can be applied. After the data transformation is complete, the next step is modeling the nonstationary time series using commonly known models such as ARIMA and PNBD, which have been chosen for their fairly easy and successful forecasting processes. The goal of this analysis is timely product advertising to a customer in order to increase sales.Unlike some other models that consider the relationship between two or more different phenomena, time series models, including ARIMA, Pareto/NBD and Poisson models, examine the impact of historical values of a single phenomenon on its present and future value. This approach enables the study of the behavior of a given phenomenon over time and produces good results, especially if a large amount of historical data is available.
APA, Harvard, Vancouver, ISO, and other citation styles
25

Brizzi, Maurizio, Daniele Nani, and Lucietta Betti. "Poisson distribution and process as a well-fitting pattern for counting variables in biologic models". International Journal of High Dilution Research - ISSN 1982-6206 11, no. 40 (December 21, 2021): 126–27. http://dx.doi.org/10.51910/ijhdr.v11i40.586.

Full text of the source
Annotation:
One of the major criticisms directed to basic research on high dilution effects is the lack of a steady statistical approach; therefore, it seems crucial to fix some milestones in statistical analysis of this kind of experimentation. Since plant research in homeopathy has been recently developed and one of the mostly used models is based on in vitro seed germination, here we propose a statistical approach focused on the Poisson distribution, that satisfactorily fits the number of non-germinated seeds. Poisson distribution is a discrete-valued model often used in statistics when representing the number X of specific events (telephone calls, industrial machine failures, genetic mutations etc.) that occur in a fixed period of time, supposing that instant probability of occurrence of such events is constant. If we denote with λ the average number of events that occur within the fixed period, the probability of observing exactly k events is: P(k) = e^(−λ) λ^k / k!, k = 0, 1, 2, … This distribution is commonly used when dealing with rare effects, in the sense that it has to be almost impossible to have two events at the same time. Poisson distribution is the basic model of the so-called Poisson process, which is a counting process N(t), where t is a time parameter, having these properties: - The process starts with zero: N(0) = 0; - The increments are independent; - The number of events that occur in a period of time d(t) follows a Poisson distribution with parameter proportional to d(t); - The waiting time, i.e. the time between an event and another one, follows an exponential distribution. In a series of experiments performed by our research group ([1], [2], [3], [4]) we tried to apply this distribution to the number X of non-germinated seeds out of a fixed number N* of seeds in a Petri dish (usually N* = 33 or N* = 36). The goodness-of-fit was checked by different tests (Kolmogorov distance and chi-squared), as well as with the Poissonness plot proposed by Hoaglin [5]. The goodness-of-fit of Poisson distribution allows to use specific tests, like the global Poisson test (based on a chi-squared statistic) and the comparison of two Poisson parameters, based on the statistic z = (X1 − X2) / (X1 + X2)^(1/2), which is, for large samples (at least 20 observations), approximately standard normally distributed. A very clear review of these tests based on Poisson distribution is given in [6]. This good fit of Poisson distribution suggests that the whole process of germination of wheat seeds may be considered as a non-homogeneous Poisson process, where the germination rate is not constant but changes over time. Keywords: Poisson process, counting variable, goodness-of-fit, wheat germination References [1] L.Betti, M.Brizzi, D.Nani, M.Peruzzi. A pilot statistical study with homeopathic potencies of Arsenicum Album in wheat germination as a simple model. British Homeopathic Journal; 83: 195-201. [2] M.Brizzi, L.Betti (1999), Using statistics for evaluating the effectiveness of homeopathy. Analysis of a large collection of data from simple plant models. III Congresso Nazionale della SIB (Società Italiana di Biometria) di Roma, Abstract Book, 74-76. [3] M.Brizzi, D.Nani, L.Betti, M.Peruzzi. Statistical analysis of the effect of high dilutions of Arsenic in a large dataset from a wheat germination model. British Homeopathic Journal, 2000; 89: 63-67. [4] M.Brizzi, L.Betti (2010), Statistical tools for alternative research in plant experiments. "Metodološki Zvezki – Advances in Methodology and Statistics", 7, 59-71.
[5] D.C.Hoaglin (1980), A Poissonness plot. “The American Statistician”, 34, 146-149. [6] L.Sachs (1984) Applied statistics. A handbook of techniques. Springer Verlag, 186-189.
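The two formulas quoted in this abstract are simple to reproduce; the sketch below evaluates the Poisson probabilities P(k) and the normal-approximation comparison of two Poisson counts, using toy numbers rather than the wheat germination data.

```python
import numpy as np
from math import exp, factorial, sqrt
from scipy.stats import norm, poisson

lam = 3.5
k = np.arange(8)
print(poisson.pmf(k, lam))                                # library version
print([exp(-lam) * lam**j / factorial(j) for j in k])     # P(k) = e^(-lam) lam^k / k!, written out

# Comparing two Poisson counts X1 and X2 (e.g. non-germinated seeds under two treatments):
x1, x2 = 41, 58
z = (x1 - x2) / sqrt(x1 + x2)                             # approximately standard normal for large counts
print("z =", z, "two-sided p-value:", 2 * norm.sf(abs(z)))
```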
APA, Harvard, Vancouver, ISO, and other citation styles
26

Watson, Kaitlyn E., Kyle M. Gardiner, and Judith A. Singleton. "The impact of extreme heat events on hospital admissions to the Royal Hobart Hospital". Journal of Public Health 42, no. 2 (April 9, 2019): 333–39. http://dx.doi.org/10.1093/pubmed/fdz033.

Full text of the source
Annotation:
Abstract Background Extreme heat (EH) events are increasing in frequency and duration and cause more deaths in Australia than any other extreme weather event. Consequently, EH events lead to an increase in the number of patient presentations to hospitals. Methods Climatic observations for Hobart’s region and Royal Hobart Hospital (RHH) emergency department admissions data were collected retrospectively for the study period of 2003–2010. A distributed lag non-linear model (DLNM) was fitted using a generalized linear model with quasi-Poisson family to obtain adjusted estimates for the relationship between temperature and the relative risk of being admitted to the RHH. Results The model demonstrated that relative to the annual mean temperature of 14°C, the relative risk of being admitted to the RHH for the years 2003–2010 was significantly higher for all temperatures above 27°C (P < 0.05 in all cases). The peak effect upon admission was noted on the same day as the EH event, however, the model suggests that a lag effect exists, increasing the likelihood of admission to the RHH for a further 14 days. Conclusions To relieve the added burden on emergency departments during these events, adaptation strategies adopted by public health organizations could include preventative health initiatives.
APA, Harvard, Vancouver, ISO, and other citation styles
27

Albrecht, Peter. "An Evolutionary Credibility Model for Claim Numbers". ASTIN Bulletin 15, no. 1 (April 1985): 1–17. http://dx.doi.org/10.2143/ast.15.1.2015029.

Full text of the source
Annotation:
This paper considers a particular credibility model for the claim numbers N1, N2, …, Nn, … of a single risk within a collective in successive periods 1, 2, …, n, … In the terminology of Jewell (1975) the model is an evolutionary credibility model, which means that the underlying risk parameter Λ is allowed to vary in successive periods (the structure function is allowed to be time dependent). Evolutionary credibility models for claim amounts have been studied by Bühlmann (1969, pp. 164–165), Gerber and Jones (1975), Jewell (1975, 1976), Taylor (1975), Sundt (1979, 1981, 1983) and Kremer (1982). Again in Jewell's terminology the considered model is on the other hand stationary, in the sense that the conditional distribution of Ni given the underlying risk parameter does not vary with i.The computation of the credibility estimate of Nn+1 involves the considerable labor of inverting an n × n covariance matrix (n is the number of observations). The above mentioned papers have therefore typically looked for model structures for which this inversion is unnecessary and instead a recursive formula for the credibility forecast can be obtained. Typically nth order stationary a priori sequences (e.g., ARMA (p, q)-processes) lead to an nth order recursive scheme. In this paper we impose the restriction that the conditional distribution of Ni is Poisson (which by the way leads to a model identical to the so called “doubly stochastic Poisson sequences” considered in the theory of stochastic point processes). What we gain is a recursive formula for the coefficients of the credibility estimate (not for the estimate itself!) in case of an arbitrary weakly stationary a priori sequence. In addition to this central result the estimation of the structural parameters is considered in this case and some more special models are analyzed. Among them are EARMA-processes (which are positive-valued stationary sequences possessing exponentially distributed marginals and the same autocorrelation structure as ARMA-processes) as a priori sequence and models which can be considered as (discrete) generalizations of the Pólya process.
APA, Harvard, Vancouver, ISO, and other citation styles
28

Bright, Tess, Islay Mactaggart, Min Kim, Jennifer Yip, Hannah Kuper, and Sarah Polack. "Rationale for a Rapid Methodology to Assess the Prevalence of Hearing Loss in Population-Based Surveys". International Journal of Environmental Research and Public Health 16, no. 18 (September 13, 2019): 3405. http://dx.doi.org/10.3390/ijerph16183405.

Full text of the source
Annotation:
Data on the prevalence and causes of hearing loss is lacking from many low and middle-income countries, in part, because all-age population-based surveys of hearing loss can be expensive and time consuming. Restricting samples to older adults would reduce the sample size required, as hearing loss is more prevalent in this group. Population-based surveys of hearing loss require clinicians to be involved in the data collection team and reducing the duration of the survey may help to minimise the impact on service delivery. The objective of this paper was to identify the optimal age-group for conduct of population-based surveys of hearing loss, balancing sample size efficiencies, and expected response rates with ability to make inferences to the all-age population. Methods: Between 2013–2014, two all aged population-based surveys of hearing loss were conducted in one district each of India and Cameroon. Secondary data analysis was conducted to determine the proportion of hearing loss (moderate or greater) in people aged 30+, 40+ and 50+. Poisson regression models were developed to predict the expected prevalence of hearing loss in the whole population, based on the prevalence in people aged 30+, 40+, and 50+, which was compared to the observed prevalence. The distribution of causes in these age groups was also compared to the all-age population. Sample sizes and response rates were estimated to assess which age cut-off is most rapid. Results: Of 160 people in India and 131 in Cameroon with moderate or greater hearing loss, over 70% were older than 50 in both settings. For people aged 30+ (90.6% India; 76.3% Cameroon), 40+ (81% India; 75% Cameroon) and 50+ (73% India; 73% Cameroon) the proportions were higher. Prediction based on Poisson distributed observations the predicted prevalence based on those aged 30+, 40+, and 50+ fell within the confidence intervals of the observed prevalence. The distribution of probable causes of hearing loss in the older age groups was statistically similar to the total population. Sample size calculations and an analysis of response rates suggested that a focus on those aged 50+ would minimise costs the most by reducing the survey duration. Conclusion: Restricting the age group included in surveys of hearing loss, in particular to people aged 50+, would still allow inferences to be made to the total population, and would mean that the required sample size would be smaller, thus reducing the duration of the survey and costs.
APA, Harvard, Vancouver, ISO, and other citation styles
29

Jansson, Moritz K., and Shelby Yamamoto. "The effect of temperature, humidity, precipitation and cloud coverage on the risk of COVID-19 infection in temperate regions of the USA—A case-crossover study". PLOS ONE 17, no. 9 (September 15, 2022): e0273511. http://dx.doi.org/10.1371/journal.pone.0273511.

Full text of the source
Annotation:
Background Observations based on the spread of SARS-CoV-2 early into the COVID-19 pandemic have suggested a reduced burden in tropical regions leading to the assumption of a dichotomy between cold and dry and wet and warm climates. Objectives Analyzing more than a whole year of COVID-19 infection data, this study intents to refine the understanding of meteorological variables (temperature, humidity, precipitation and cloud coverage) on COVID-19 transmission in settings that experience distinct seasonal changes. Methods and findings A time stratified case-crossover design was adopted with a conditional Poisson model in combination with a distributed lag nonlinear model to assess the short-term impact of mentioned meteorological factors on COVID-19 infections in five US study sites (New York City (NYC); Marion County, Indiana (MCI); Baltimore and Baltimore County, Maryland (BCM); Franklin County, Ohio (FCO); King County, Washington (KCW)). Higher-than-average temperatures were consistently associated with a decreased relative risk (RR) of COVID-19 infection in four study sites. At 20 degrees Celsius COVID-19 infection was associated with a relative risk of 0.35 (95%CI: 0.20–0.60) in NYC, 1.03 (95%CI:0.57–1.84) in MCI, 0.34 (95%CI: 0.20–0.57) in BCM, 0.52 (95%CI: 0.31–0.87) in FCO and 0.21 (95%CI: 0.10–0.44) in KCW. Higher-than-average humidity levels were associated with an increased relative risk of COVID-19 infection in four study sites. Relative to their respective means, at a humidity level of 15 g/kg (specific humidity) the RR was 5.83 (95%CI: 2.05–16.58) in BCM, at a humidity level of 10 g/kg the RR was 3.44 (95%CI: 1.95–6.01) in KCW. Conclusions The results of this study suggest opposed effects for higher-than-average temperature and humidity concerning the risk of COVID-19 infection. While a distinct seasonal pattern of COVID-19 has not yet emerged, warm and humid weather should not be generally regarded as a time of reduced risk of COVID-19 infections.
APA, Harvard, Vancouver, ISO, and other citation styles
30

Safitri, Nurul Indah, I. Wayan Mangku, and Hadi Sumarno. "A Study on the Estimator Distribution for the Expected Value of a Compound Periodic Poisson Process with Power Function Trend". InPrime: Indonesian Journal of Pure and Applied Mathematics 4, no. 2 (November 4, 2022): 82–90. http://dx.doi.org/10.15408/inprime.v4i2.25104.

Full text of the source
Annotation:
This article discusses the estimation for the expected value, also called the mean function, of a compound periodic Poisson process with a power function trend. The aims of our study are, first, to modify the existing estimator to produce a new estimator that is normally distributed, and, second, to determine the smallest observation interval size such that our proposed estimator is still normally distributed. Basically, we formulate the estimator using the moment method. We use Monte Carlo simulation to check the distribution of our new estimator. The result shows that a new estimator for the expected value of a compound periodic Poisson process with a power function trend is normally distributed and the simulation result shows that the distribution of the new estimator is already normally distributed at the length of 100 observation interval for a period of 1 unit. This interval is the smallest size of the observation interval. The Anderson-Darling test shows that when the period is getting larger, the p-value is also getting bigger. Therefore, the larger period requires a wider observation interval to ensure that the estimator still has a normal distribution. Keywords: moment method; normal distribution; Poisson process; the smallest observation interval. 2020 MSC: 62E17.
APA, Harvard, Vancouver, ISO, and other citation styles
31

Dunson, J. C., T. E. Bleier, S. Roth, J. Heraud, C. H. Alvarez, and A. Lira. "The Pulse Azimuth effect as seen in induction coil magnetometers located in California and Peru 2007–2010, and its possible association with earthquakes". Natural Hazards and Earth System Sciences 11, no. 7 (July 29, 2011): 2085–105. http://dx.doi.org/10.5194/nhess-11-2085-2011.

Full text of the source
Annotation:
Abstract. The QuakeFinder network of magnetometers has recorded geomagnetic field activity in California since 2000. Established as an effort to follow up observations of ULF activity reported from before and after the M = 7.1 Loma Prieta earthquake in 1989 by Stanford University, the QuakeFinder network has over 50 sites, fifteen of which are high-resolution QF1005 and QF1007 systems. Pairs of high-resolution sites have also been installed in Peru and Taiwan. Increases in pulse activity preceding nearby seismic events are followed by decreases in activity afterwards in the three cases that are discussed here. In addition, longer term data is shown, revealing a rich signal structure not previously known in QuakeFinder data, or by many other authors who have reported on pre-seismic ULF phenomena. These pulses occur as separate ensembles, with demonstrable repeatability and uniqueness across a number of properties such as waveform, angle of arrival, amplitude, and duration. Yet they appear to arrive with exponentially distributed inter-arrival times, which indicates a Poisson process rather than a periodic, i.e., stationary process. These pulses were observed using three-axis induction coil magnetometers that are buried 1–2 m under the surface of the Earth. Our sites use a Nyquist frequency of 16 Hertz (25 Hertz for the new QF1007 units), and they record these pulses at amplitudes from 0.1 to 20 nano-Tesla with durations of 0.1 to 12 s. They are predominantly unipolar pulses, which may imply charge migration, and they are stronger in the two horizontal (north-south and east-west) channels than they are in the vertical channels. Pulses have been seen to occur in bursts lasting many hours. The pulses have large amplitudes and study of the three-axis data shows that the amplitude ratios of the pulses taken from pairs of orthogonal coils is stable across the bursts, suggesting a similar source. This paper presents three instances of increases in pulse activity in the 30 days prior to an earthquake, which are each followed by steep declines after the event. The pulses are shown, methods of detecting the pulses and calculating their azimuths is developed and discussed, and then the paper is closed with a brief look at future work.
APA, Harvard, Vancouver, ISO, and other citation styles
32

de Villiers, Didier, Marc Schleiss, Marie-Claire ten Veldhuis, Rolf Hut, and Nick van de Giesen. "Something fishy going on? Evaluating the Poisson hypothesis for rainfall estimation using intervalometers: results from an experiment in Tanzania". Atmospheric Measurement Techniques 14, no. 8 (August 17, 2021): 5607–23. http://dx.doi.org/10.5194/amt-14-5607-2021.

Full text of the source
Annotation:
Abstract. A new type of rainfall sensor (the intervalometer), which counts the arrival of raindrops at a piezo electric element, is implemented during the Tanzanian monsoon season alongside tipping bucket rain gauges and an impact disdrometer. The aim is to test the validity of the Poisson hypothesis underlying the estimation of rainfall rates using an experimentally determined raindrop size distribution parameterisation based on Marshall and Palmer (1948)'s exponential one. These parameterisations are defined independently of the scale of observation and therefore implicitly assume that rainfall is a homogeneous Poisson process. The results show that 28.3 % of the total intervalometer observed rainfall patches can reasonably be considered Poisson distributed and that the main reasons for Poisson deviations of the remaining 71.7 % are non-compliance with the stationarity criterion (45.9 %), the presence of correlations between drop counts (7.0 %), particularly at higher arrival rates (ρa > 500 m⁻² s⁻¹), and failing a χ² goodness-of-fit test for a Poisson distribution (17.7 %). Our results show that whilst the Poisson hypothesis is likely not strictly true for rainfall that contributes most to the total rainfall amount, it is quite useful in practice and may hold under certain rainfall conditions. The parameterisation that uses an experimentally determined power law relation between N₀ and rainfall rate results in the best estimates of rainfall amount compared to co-located tipping bucket measurements. Despite the non-compliance with the Poisson hypothesis, estimates of total rainfall amount over the entire observational period derived from disdrometer drop counts are within 4 % of co-located tipping bucket measurements. Intervalometer estimates of total rainfall amount overestimate the co-located tipping bucket measurement by 12 %. The intervalometer principle shows potential for use as a rainfall measurement instrument.
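One elementary check behind statements like these is the index-of-dispersion test for Poisson counts; the sketch below applies it to synthetic drop counts per fixed interval (not the Tanzanian data), contrasting genuinely Poisson counts with clustered, overdispersed ones.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)

def dispersion_test(counts):
    """Index-of-dispersion test: under an i.i.d. Poisson model,
    (n - 1) * s^2 / xbar is approximately chi-square with n - 1 degrees of freedom."""
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    stat = (n - 1) * counts.var(ddof=1) / counts.mean()
    p = 2 * min(chi2.cdf(stat, n - 1), chi2.sf(stat, n - 1))   # two-sided
    return stat, p

poisson_counts = rng.poisson(12.0, size=120)                    # consistent with the Poisson hypothesis
clustered_counts = rng.poisson(rng.gamma(4.0, 3.0, size=120))   # overdispersed (clustered) arrivals
print(dispersion_test(poisson_counts))
print(dispersion_test(clustered_counts))
```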
APA, Harvard, Vancouver, ISO, and other citation styles
33

Vantika, Sandy, Mokhammad Ridwan Yudhanegara, and Karunia Eka Lestari. "POISSON REGRESSION MODELLING OF AUTOMOBILE INSURANCE USING R". BAREKENG: Jurnal Ilmu Matematika dan Terapan 16, no. 4 (December 15, 2022): 1399–410. http://dx.doi.org/10.30598/barekengvol16iss4pp1399-1410.

Full text of the source
Annotation:
Automobile insurance benefits are protecting the vehicle and minimizing customer losses. Insurance companies must provide funds to pay customer claims if a claim occurs. Insurance claims can be modelled by Poisson regression. Poisson regression is used to analyze the count data with Poisson distributed data responses. In this paper, the data model of sample is automobile insurance claims from the companies in one year (in 2021) of observation which contains three types of insurance products, i.e., Total Loss Only (TLO), All Risk, and Comprehensive. The results of data analysis show that the highest number of claims comes from Comprehensive insurance products, especially if the premium value gets more extensive. In contrast, the least comes from TLO insurance products.
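The paper fits the model in R; an equivalent Poisson regression on synthetic claim counts in Python could look like the sketch below, where the covariate names, product coding, and coefficients are all illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 500
premium = rng.uniform(1.0, 10.0, n)                    # hypothetical premium value
product = rng.integers(0, 3, n)                        # 0 = TLO, 1 = All Risk, 2 = Comprehensive
X = np.column_stack([premium, product == 1, product == 2]).astype(float)
X = sm.add_constant(X)

true_beta = np.array([-1.0, 0.15, 0.4, 0.9])           # intercept, premium, All Risk, Comprehensive
y = rng.poisson(np.exp(X @ true_beta))                 # Poisson-distributed claim counts

model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.summary())                                 # coefficients are on the log-mean scale
```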
APA, Harvard, Vancouver, ISO und andere Zitierweisen
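As an illustration of the kind of model the paper fits (the paper itself works in R), here is a Poisson GLM sketch in Python with statsmodels. The data, column names, and coefficients are fabricated for the example and do not come from the paper's claim data set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Fabricated claim-count data: product type and premium as covariates.
rng = np.random.default_rng(3)
n = 300
product = rng.choice(["TLO", "AllRisk", "Comprehensive"], size=n)
premium = rng.uniform(1.0, 10.0, size=n)                 # illustrative units
base = {"TLO": 0.2, "AllRisk": 0.5, "Comprehensive": 0.8}
mu = np.exp(pd.Series(product).map(base).to_numpy() + 0.15 * premium)
claims = rng.poisson(mu)
df = pd.DataFrame({"claims": claims, "product": product, "premium": premium})

# Poisson GLM with log link: log E[claims] = b0 + product effects + b1 * premium.
model = smf.glm("claims ~ C(product) + premium", data=df,
                family=sm.families.Poisson()).fit()
print(model.summary())
```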
34

Ridhawati, Ridhawati, Suyitno Suyitno and Wasono Wasono. "Model Geographically Weighted Poisson Regression (GWPR) dengan Fungsi Pembobot Adaptive Gaussian". EKSPONENSIAL 12, No. 2 (30.12.2021): 143. http://dx.doi.org/10.30872/eksponensial.v12i2.807.

The full text of the source
Annotation:
The Geographically Weighted Poisson Regression (GWPR) model is a regression model developed from Poisson regression, i.e., a local form of Poisson regression. The GWPR model generates local parameter estimates at each observation location where the data are collected and assumes the data are Poisson distributed. The GWPR model parameters are estimated using an adaptive Gaussian weighting function, with the optimum bandwidth determined by the GCV criterion. Based on the GWPR model, the factors that influence the maternal mortality rate (MMR) in the 24 districts (cities) of East Kalimantan and West Kalimantan are the percentage of pregnant women receiving Fe3 tablets, pregnant women with obstetric complications, and the number of hospitals. These three variables produce four groups of GWPR models. Based on the GCV values, the best model is the GWPR model because it has the smallest GCV value.
APA, Harvard, Vancouver, ISO and other citation styles
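A generic sketch of the adaptive Gaussian weighting used in GWPR-type models is shown below: each location gets its own bandwidth, taken here as the distance to its k-th nearest neighbour. This is a textbook-style illustration, not the authors' implementation; the coordinates are synthetic and the fixed k is only a placeholder for the GCV-based bandwidth selection described in the annotation.

```python
import numpy as np

def adaptive_gaussian_weights(coords, bandwidth_k):
    """Generic GWPR-style weights: for each location i, the adaptive
    bandwidth b_i is the distance to its k-th nearest neighbour, and
    w_ij = exp(-0.5 * (d_ij / b_i)**2)."""
    coords = np.asarray(coords, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # k-th nearest neighbour distance per location (column 0 is the point itself).
    b = np.sort(d, axis=1)[:, bandwidth_k]
    return np.exp(-0.5 * (d / b[:, None]) ** 2)

# Toy usage with hypothetical district centroids.
rng = np.random.default_rng(4)
coords = rng.uniform(0, 100, size=(24, 2))
W = adaptive_gaussian_weights(coords, bandwidth_k=8)
# Row i of W would be used as observation weights when fitting the local
# Poisson regression for district i (e.g., via a weighted GLM).
print(W.shape, W[0, :5])
```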
35

Zhang, Zhimin, Eric C. K. Cheung and Hailiang Yang. "ON THE COMPOUND POISSON RISK MODEL WITH PERIODIC CAPITAL INJECTIONS". ASTIN Bulletin 48, No. 1 (28.09.2017): 435–77. http://dx.doi.org/10.1017/asb.2017.22.

The full text of the source
Annotation:
Abstract. The analysis of capital injection strategies in the literature of insurance risk models (e.g. Pafumi, 1998; Dickson and Waters, 2004) typically assumes that whenever the surplus becomes negative, the amount of the shortfall is injected so that the company can continue its business forever. Recently, Nie et al. (2011) proposed an alternative model in which capital is immediately injected to restore the surplus level to a positive level b when the surplus falls between zero and b, and the insurer is still subject to a positive ruin probability. Inspired by the idea of randomized observations in Albrecher et al. (2011b), in this paper we further generalize the model of Nie et al. (2011) by assuming that capital injections are only allowed at a sequence of time points with inter-capital-injection times being Erlang distributed (so that deterministic time intervals can be approximated using the Erlangization technique in Asmussen et al. (2002)). When the claim amount is distributed as a combination of exponentials, explicit formulas for the Gerber–Shiu expected discounted penalty function (Gerber and Shiu, 1998) and the expected total discounted cost of capital injections before ruin are obtained. The derivations rely on a resolvent density associated with an Erlang random variable, which is shown to admit an explicit expression that is of independent interest as well. We provide numerical examples, including an application to pricing a perpetual reinsurance contract that makes the capital injections and a demonstration of how to minimize the ruin probability via reinsurance. Minimization of the expected discounted capital injections plus a penalty applied at ruin, with respect to the frequency of injections and the critical level b, is also illustrated numerically.
APA, Harvard, Vancouver, ISO and other citation styles
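The paper's results are analytic; purely for intuition, the following crude Monte Carlo sketch simulates one reading of the surplus dynamics described above: a compound Poisson risk process with exponential claims, capital injections permitted only at Erlang-distributed review times (restoring the surplus to b when it lies in (0, b)), and ruin when the surplus turns negative. All parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def ruin_probability(u=10.0, b=5.0, c=1.5, lam=1.0, claim_mean=1.0,
                     erlang_shape=2, erlang_rate=1.0, horizon=100.0, n_paths=5000):
    """Crude finite-horizon ruin-probability estimate for one reading of the
    model above (not the paper's analytic solution)."""
    ruined = 0
    for _ in range(n_paths):
        t, x = 0.0, u
        next_claim = rng.exponential(1 / lam)
        next_review = rng.gamma(erlang_shape, 1 / erlang_rate)
        while t < horizon:
            if next_claim <= next_review:
                x += c * (next_claim - t)        # premiums earned until the claim
                x -= rng.exponential(claim_mean)  # claim payment
                t = next_claim
                next_claim = t + rng.exponential(1 / lam)
                if x < 0:
                    ruined += 1
                    break
            else:
                x += c * (next_review - t)
                t = next_review
                next_review = t + rng.gamma(erlang_shape, 1 / erlang_rate)
                if 0 < x < b:
                    x = b                        # periodic capital injection
    return ruined / n_paths

print("estimated finite-horizon ruin probability:", ruin_probability())
```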
36

Faller, Andreas, and Ludger Rüschendorf. "On approximative solutions of optimal stopping problems". Advances in Applied Probability 43, No. 4 (December 2011): 1086–108. http://dx.doi.org/10.1239/aap/1324045700.

The full text of the source
Annotation:
In this paper we establish an extension of the method of approximating optimal discrete-time stopping problems by related limiting stopping problems for Poisson-type processes. This extension allows us to apply this method to a larger class of examples, such as those arising, for example, from point process convergence results in extreme value theory. Furthermore, we develop new classes of solutions of the differential equations which characterize optimal threshold functions. As a particular application, we give a fairly complete discussion of the approximative optimal stopping behavior of independent and identically distributed sequences with discount and observation costs.
APA, Harvard, Vancouver, ISO and other citation styles
37

Faller, Andreas, and Ludger Rüschendorf. "On approximative solutions of optimal stopping problems". Advances in Applied Probability 43, No. 04 (December 2011): 1086–108. http://dx.doi.org/10.1017/s0001867800005310.

The full text of the source
Annotation:
In this paper we establish an extension of the method of approximating optimal discrete-time stopping problems by related limiting stopping problems for Poisson-type processes. This extension allows us to apply this method to a larger class of examples, such as those arising, for example, from point process convergence results in extreme value theory. Furthermore, we develop new classes of solutions of the differential equations which characterize optimal threshold functions. As a particular application, we give a fairly complete discussion of the approximative optimal stopping behavior of independent and identically distributed sequences with discount and observation costs.
APA, Harvard, Vancouver, ISO and other citation styles
38

Niewiadomska, Ewa, Małgorzata Kowalska, Adam Niewiadomski, Michał Skrzypek and Michał A. Kowalski. "Assessment of Risk Hospitalization due to Acute Respiratory Incidents Related to Ozone Exposure in Silesian Voivodeship (Poland)". International Journal of Environmental Research and Public Health 17, No. 10 (20.05.2020): 3591. http://dx.doi.org/10.3390/ijerph17103591.

The full text of the source
Annotation:
The main aim of this work is the estimation of health risks arising from exposure to ozone or other air pollutants using different statistical models that take delayed health effects into account. This paper presents the risk of hospitalization due to bronchitis and asthma exacerbation in adult inhabitants of Silesian Voivodeship from 1 January 2016 to 31 August 2017. Data were obtained from the daily register of hospitalizations for acute bronchitis (codes J20–J21, International Classification of Diseases, Tenth Revision, ICD-10) and asthma (J45–J46), which is administered by the National Health Fund. Meteorological data and data on tropospheric ozone concentrations were obtained from the regional environmental monitoring database of the Provincial Inspector of Environmental Protection in Katowice. The paper uses descriptive and analytical statistical methods for the estimation of health risk with a delayed effect: the Almon Distributed Lag Model, the Poisson Distributed Lag Model, and the Distributed Lag Non-Linear Model (DLNM). A significant relationship was confirmed only by the DLNM, for bronchitis and a relatively short period (1–3 days) after exposure above the limit value (120 µg/m³). The relative risk was RR = 1.15 (95% CI 1.03–1.28) for a 2-day lag. However, conclusive findings require the continuation of the study over longer observation periods.
APA, Harvard, Vancouver, ISO and other citation styles
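A minimal sketch of an unconstrained Poisson distributed-lag regression of the type referred to above, using synthetic daily ozone and admission series; the lag structure, effect sizes, and variable names are invented for the illustration and are unrelated to the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic daily series: ozone concentration and admission counts, with a
# true effect placed at lag 2 (wrap-around at the start is ignored here).
rng = np.random.default_rng(6)
days = 600
ozone = 60 + 40 * np.abs(np.sin(np.arange(days) / 30)) + rng.normal(0, 10, days)
mu = np.exp(1.0 + 0.004 * ozone + 0.006 * np.roll(ozone, 2))
admissions = rng.poisson(mu)

# Unconstrained distributed-lag design matrix with lags 0..5.
max_lag = 5
lagged = pd.DataFrame({f"ozone_lag{l}": pd.Series(ozone).shift(l)
                       for l in range(max_lag + 1)})
data = lagged.assign(admissions=admissions).dropna()

X = sm.add_constant(data[[f"ozone_lag{l}" for l in range(max_lag + 1)]])
fit = sm.GLM(data["admissions"], X, family=sm.families.Poisson()).fit()
# Exponentiated lag coefficients approximate relative risks per unit ozone.
print(np.exp(fit.params))
```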
39

Huelsenbeck, John P., Bret Larget and David Swofford. "A Compound Poisson Process for Relaxing the Molecular Clock". Genetics 154, No. 4 (01.04.2000): 1879–92. http://dx.doi.org/10.1093/genetics/154.4.1879.

The full text of the source
Annotation:
Abstract The molecular clock hypothesis remains an important conceptual and analytical tool in evolutionary biology despite the repeated observation that the clock hypothesis does not perfectly explain observed DNA sequence variation. We introduce a parametric model that relaxes the molecular clock by allowing rates to vary across lineages according to a compound Poisson process. Events of substitution rate change are placed onto a phylogenetic tree according to a Poisson process. When an event of substitution rate change occurs, the current rate of substitution is modified by a gamma-distributed random variable. Parameters of the model can be estimated using Bayesian inference. We use Markov chain Monte Carlo integration to evaluate the posterior probability distribution because the posterior probability involves high dimensional integrals and summations. Specifically, we use the Metropolis-Hastings-Green algorithm with 11 different move types to evaluate the posterior distribution. We demonstrate the method by analyzing a complete mtDNA sequence data set from 23 mammals. The model presented here has several potential advantages over other models that have been proposed to relax the clock because it is parametric and does not assume that rates change only at speciation events. This model should prove useful for estimating divergence times when substitution rates vary across lineages.
APA, Harvard, Vancouver, ISO and other citation styles
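The rate process described in the annotation is easy to simulate along a single branch; the sketch below places rate-change events according to a Poisson process and multiplies the current substitution rate by a gamma draw at each event. The parameter values and the scaling of the gamma multiplier are illustrative assumptions and may differ from the paper's exact parameterisation.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_rate_path(branch_length=1.0, event_rate=3.0, gamma_shape=2.0,
                       gamma_mean=1.0, r0=1.0):
    """Compound-Poisson rate-change sketch: events arrive at rate event_rate
    along the branch; at each event the current rate is multiplied by a gamma
    variable with the given shape, scaled to have mean gamma_mean."""
    t, rate = 0.0, r0
    times, rates = [0.0], [r0]
    while True:
        t += rng.exponential(1.0 / event_rate)
        if t > branch_length:
            break
        rate *= rng.gamma(gamma_shape, gamma_mean / gamma_shape)
        times.append(t)
        rates.append(rate)
    return np.array(times), np.array(rates)

times, rates = simulate_rate_path()
# Expected number of substitutions on the branch = integral of the rate path.
edges = np.r_[times, 1.0]
expected_substitutions = np.sum(rates * np.diff(edges))
print(times.round(3), rates.round(3), round(expected_substitutions, 3))
```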
40

Liu, Yuequn, Ruichu Cai, Wei Chen, Jie Qiao, Yuguang Yan, Zijian Li, Keli Zhang and Zhifeng Hao. "TNPAR: Topological Neural Poisson Auto-Regressive Model for Learning Granger Causal Structure from Event Sequences". Proceedings of the AAAI Conference on Artificial Intelligence 38, No. 18 (24.03.2024): 20491–99. http://dx.doi.org/10.1609/aaai.v38i18.30033.

The full text of the source
Annotation:
Learning Granger causality from event sequences is a challenging but essential task across various applications. Most existing methods rely on the assumption that event sequences are independent and identically distributed (i.i.d.). However, this i.i.d. assumption is often violated due to the inherent dependencies among the event sequences. Fortunately, in practice, we find these dependencies can be modeled by a topological network, suggesting a potential solution to the non-i.i.d. problem by introducing the prior topological network into Granger causal discovery. This observation prompts us to tackle two ensuing challenges: 1) how to model the event sequences while incorporating both the prior topological network and the latent Granger causal structure, and 2) how to learn the Granger causal structure. To this end, we devise a unified topological neural Poisson auto-regressive model with two processes. In the generation process, we employ a variant of the neural Poisson process to model the event sequences, considering influences from both the topological network and the Granger causal structure. In the inference process, we formulate an amortized inference algorithm to infer the latent Granger causal structure. We encapsulate these two processes within a unified likelihood function, providing an end-to-end framework for this task. Experiments on simulated and real-world data demonstrate the effectiveness of our approach.
APA, Harvard, Vancouver, ISO and other citation styles
41

Siswantining, Titin, Muhammad Ihsan, Saskya Mary Soemartojo, Devvi Sarwinda, Herley Shaori Al-Ash and Ika Marta Sari. "MULTIPLE IMPUTATION FOR ORDINARY COUNT DATA BY NORMAL DISTRIBUTION APPROXIMATION". MEDIA STATISTIKA 14, No. 1 (24.06.2021): 68–78. http://dx.doi.org/10.14710/medstat.14.1.68-78.

The full text of the source
Annotation:
Missing values are a problem often encountered in various fields and must be addressed to obtain good statistical inference, such as parameter estimation. Missing values can be found in any type of data, including count data that is Poisson distributed. One solution to this problem is to apply multiple imputation techniques. The multiple imputation technique for count data consists of three main stages, namely imputation, analysis, and parameter pooling. The use of the normal distribution relies on the sampling distribution given by the central limit theorem for discrete distributions. The study also includes numerical simulations that compare accuracy based on the resulting bias. The proposed solution for handling missing values in count data yields satisfactory results, as indicated by the small bias of the parameter estimates. However, the bias tends to increase as the percentage of missing observations increases and when the parameter values are small.
APA, Harvard, Vancouver, ISO and other citation styles
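A compact sketch of the three stages described above (imputation by a normal approximation, analysis, and pooling by Rubin's rules) applied to synthetic Poisson counts. It is deliberately simplified: a proper implementation would also propagate the uncertainty of the imputation parameters, and all numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic Poisson counts with values missing completely at random.
y_full = rng.poisson(lam=6.0, size=200).astype(float)
y = y_full.copy()
y[rng.random(y.size) < 0.2] = np.nan            # roughly 20% missing

M = 20                                          # number of imputations
obs = y[~np.isnan(y)]
estimates, variances = [], []
for _ in range(M):
    # Imputation stage: draw missing counts from N(mean, sd) of the observed
    # data, then round and truncate at zero.
    draws = rng.normal(obs.mean(), obs.std(ddof=1), size=int(np.isnan(y).sum()))
    imputed = y.copy()
    imputed[np.isnan(y)] = np.clip(np.round(draws), 0, None)
    # Analysis stage: estimate the Poisson mean and the variance of that estimate.
    lam_hat = imputed.mean()
    estimates.append(lam_hat)
    variances.append(lam_hat / imputed.size)    # Var of the mean for Poisson data

# Pooling stage (Rubin's rules): total variance = within + (1 + 1/M) * between.
q_bar = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / M) * between
print(f"pooled lambda = {q_bar:.3f}, pooled SE = {np.sqrt(total_var):.3f}")
```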
42

Roukema, Boudewijn F. "Anti-clustering in the national SARS-CoV-2 daily infection counts". PeerJ 9 (27.08.2021): e11856. http://dx.doi.org/10.7717/peerj.11856.

The full text of the source
Annotation:
The noise in daily infection counts of an epidemic should be super-Poissonian due to intrinsic epidemiological and administrative clustering. Here, we use this clustering to classify the official national SARS-CoV-2 daily infection counts and check for infection counts that are unusually anti-clustered. We adopt a one-parameter model of $\phi_i^{\prime}$ infections per cluster, dividing any daily count $n_i$ into $n_i/\phi_i^{\prime}$ ‘clusters’, for ‘country’ i. We assume that $n_i/\phi_i^{\prime}$ on a given day j is drawn from a Poisson distribution whose mean is robustly estimated from the four neighbouring days, and calculate the inferred Poisson probability $P_{ij}^{\prime}$ of the observation. The $P_{ij}^{\prime}$ values should be uniformly distributed. We find the value $\phi_i$ that minimises the Kolmogorov–Smirnov distance from a uniform distribution. We investigate the $(\phi_i, N_i)$ distribution, for total infection count $N_i$. We consider consecutive count sequences above a threshold of 50 daily infections. We find that most of the daily infection count sequences are inconsistent with a Poissonian model. Most are found to be consistent with the $\phi_i$ model. The 28-, 14- and 7-day least noisy sequences for several countries are best modelled as sub-Poissonian, suggesting a distinct epidemiological family. The 28-day least noisy sequence of Algeria has a preferred model that is strongly sub-Poissonian, with $\phi_i^{28} < 0.1$. Tajikistan, Turkey, Russia, Belarus, Albania, United Arab Emirates and Nicaragua have preferred models that are also sub-Poissonian, with $\phi_i^{28} < 0.5$. A statistically significant ($P_{\tau} < 0.05$) correlation was found between the lack of media freedom in a country, as represented by a high Reporters sans frontieres Press Freedom Index (PFI2020), and the lack of statistical noise in the country's daily counts. The $\phi_i$ model appears to be an effective detector of suspiciously low statistical noise in the national SARS-CoV-2 daily infection counts.
APA, Harvard, Vancouver, ISO and other citation styles
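The φ-model procedure summarised above can be sketched as follows, with a plain (non-robust) neighbouring-day mean standing in for the paper's robust estimator and a coarse grid search over φ; the toy data are constructed to be clustered, so a φ of roughly the cluster size should be preferred.

```python
import numpy as np
from scipy import stats

def phi_probabilities(counts, phi):
    """Divide daily counts into counts/phi 'clusters', estimate each day's
    expected cluster count from the four neighbouring days (simple mean here,
    not the paper's robust estimator), and return Poisson CDF probabilities."""
    scaled = np.asarray(counts, float) / phi
    probs = []
    for j in range(2, len(scaled) - 2):
        neighbours = np.r_[scaled[j - 2:j], scaled[j + 1:j + 3]]
        mu = neighbours.mean()
        probs.append(stats.poisson.cdf(scaled[j], mu))
    return np.array(probs)

def best_phi(counts, grid=np.geomspace(0.1, 100, 60)):
    """Choose phi minimising the KS distance of the probabilities from uniform."""
    dists = [stats.kstest(phi_probabilities(counts, p), "uniform").statistic
             for p in grid]
    return grid[int(np.argmin(dists))]

# Toy data: strongly clustered counts (super-Poissonian noise, clusters of ~5).
rng = np.random.default_rng(9)
daily = 5 * rng.poisson(20, size=120)
print("preferred phi on toy data:", round(best_phi(daily), 2))
```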
43

Butkevičius, Jonas, and Algimantas Juozapavičius. "THE METHODOLOGY OF MODELLING TAXI RANK SERVICE". TRANSPORT 18, No. 4 (30.06.2003): 153–56. http://dx.doi.org/10.3846/16483840.2003.10414087.

The full text of the source
Annotation:
The operation of a taxi rank as a passenger transportation service has not previously been researched by scientific methods. The authors of this article suggest a methodology for the analysis and evaluation of a taxi rank as a passenger transportation system in which passengers pick up a taxi at taxi ranks. Based on the analysis of the collected data, the assumption has been made that the number of passengers coming to a taxi rank, as well as the number of taxis arriving to take passengers at a rank, follow a Poisson distribution with specific parameter values. This observation enables us to consider a taxi rank as a queueing system whose behaviour is controlled by two parameters: the expected number of taxis arriving at the rank in a time period and the expected number of passengers picking up a taxi at the rank in the same period. Usually the values of these parameters do not coincide, which makes the modelling of a taxi rank as a queueing system worthwhile. The theoretical considerations and practical formulas of this article lead to a methodology for modelling the demand for taxis at a taxi rank. The suggested methodology makes it possible to estimate the efficiency of taxi rank operation from different points of view. A taxi company may evaluate the costs related to the idle time of a taxi, as well as the number of taxis at a rank that satisfies the passengers' demand for transportation.
APA, Harvard, Vancouver, ISO and other citation styles
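For intuition only, a toy double-ended queue can mimic the situation described above: taxis and passengers arrive by independent Poisson processes with unequal rates, and a finite rank capacity keeps the taxi side bounded. This is an illustrative simulation, not the authors' model, and every parameter value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(10)

def simulate_taxi_rank(taxi_rate=10.0, passenger_rate=8.0, rank_capacity=5,
                       hours=2000.0):
    """Toy taxi-rank queue: a passenger is matched immediately with a waiting
    taxi, otherwise queues; a taxi finding the rank full drives away."""
    n_taxi = rng.poisson(taxi_rate * hours)
    n_pax = rng.poisson(passenger_rate * hours)
    taxi_times = np.sort(rng.uniform(0, hours, n_taxi))
    pax_times = np.sort(rng.uniform(0, hours, n_pax))
    events = sorted([(t, "taxi") for t in taxi_times] +
                    [(t, "pax") for t in pax_times])

    taxis_at_rank, turned_away = 0, 0
    pax_queue, pax_waits = [], []
    for t, kind in events:
        if kind == "taxi":
            if pax_queue:
                pax_waits.append(t - pax_queue.pop(0))   # serve the first waiter
            elif taxis_at_rank < rank_capacity:
                taxis_at_rank += 1
            else:
                turned_away += 1
        else:
            if taxis_at_rank > 0:
                taxis_at_rank -= 1
                pax_waits.append(0.0)
            else:
                pax_queue.append(t)
    return np.mean(pax_waits), turned_away / max(n_taxi, 1)

mean_wait, loss_fraction = simulate_taxi_rank()
print(f"mean passenger wait = {mean_wait:.3f} h, taxis turned away = {loss_fraction:.1%}")
```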
44

Puigcorbé, Susanna, Joan R. Villalbí, Xisca Sureda, Marina Bosque-Prous, Ester Teixidó-Compañó, Manuel Franco, Montserrat Bartroli and Albert Espelt. "Assessing the association between tourism and the alcohol urban environment in Barcelona: a cross-sectional study". BMJ Open 10, No. 9 (September 2020): e037569. http://dx.doi.org/10.1136/bmjopen-2020-037569.

The full text of the source
Annotation:
Objectives: Alcohol availability and promotion are not distributed equally in the urban context. Evidence shows that socioeconomic level seems to influence the amount of alcohol-related elements in an area. Some studies suggest that tourism could also affect the distribution of these elements. Using a valid instrument in a large city, we explore whether there is an association between high tourism pressure and a greater presence of alcohol-related elements in the urban environment. Design: Observational ecological study. Setting: The study was conducted in Barcelona during 2017–2018. Participants: We assessed urban exposure to alcohol by performing social systematic observation using the OHCITIES Instrument in a stratified random sample of 170 census tracts within the city's 73 neighbourhoods. Primary and secondary outcome measures: For each census tract we calculated the density of alcohol premises and of promotion in public places per 1000 residents. We estimated tourism pressure using the number of tourist beds per 1000 residents in each neighbourhood and calculated quartiles. To assess the relationship between rate ratios of elements of the alcohol urban environment and tourism pressure, we calculated Spearman correlations and fitted Poisson regression models with robust error variance. Results: The median densities obtained were 8.18 alcohol premises and 7.59 alcohol advertising and promotion elements visible from the public space per 1000 population. Census tracts with the highest tourism pressure had 2.5 (95% CI: 1.85–3.38) times more outlets and 2.3 (95% CI: 1.64–3.23) times more promotion elements per 1000 residents than those in the lowest tourism pressure quartile. Conclusions: We observed a strong association between tourism pressure and alcohol exposure in the city of Barcelona.
APA, Harvard, Vancouver, ISO and other citation styles
45

Findayani, Sitti Rahmawati, Asrul Sani and Muh Kabil Djafar. "Analisis Sistem Antrian pada Pelayanan Customer Service (Studi Kasus: PT Bank BRI Cabang Raha)". JOSTECH Journal of Science and Technology 3, No. 2 (02.10.2023): 109–21. http://dx.doi.org/10.15548/jostech.v3i2.5719.

The full text of the source
Annotation:
The purpose of this study is to determine the queueing model, using a multi-channel, single-phase queue structure, for customer service at PT Bank BRI Raha Branch, and to show how that queueing model is solved. The research was conducted through direct observation of customer service at Bank BRI Raha Branch. From the data obtained, a steady-state test and a Chi-Square goodness-of-fit test were carried out on the arrival and service patterns. The queueing model was then solved using the multi-channel, single-phase structure. The results of the analysis show that 630 customers arrived at Bank BRI Raha Branch, with an arrival rate of 2 customers per hour and a service rate of 5 customers per hour. The customer service queueing system at Bank BRI Raha Branch follows a model in which arrivals are Poisson distributed and service times are exponentially distributed, there are two parallel servers (2 customer service desks with 1 queue line), the queue discipline is First Come First Served (customers who arrive first are served first), the number of customers entering the queueing system is unlimited, and the population at the input source is infinite. The probability of a busy period, the average number of customers waiting in the queue, the average number of customers in the system, the average time a customer spends waiting, and the average time spent in the system including service were also obtained.
APA, Harvard, Vancouver, ISO and other citation styles
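With Poisson arrivals, exponential service, and two parallel servers, the system described above is the standard M/M/c queue, whose steady-state quantities follow from the Erlang C formula. The sketch below evaluates those textbook formulas at the rates reported in the annotation (2 arrivals and 5 services per hour, 2 servers); it is a generic calculation, not code from the paper.

```python
import math

def mmc_metrics(lam, mu, c):
    """Standard M/M/c steady-state formulas (Erlang C)."""
    a = lam / mu                      # offered load
    rho = a / c                       # server utilisation (must be < 1)
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    lq = p0 * a**c * rho / (math.factorial(c) * (1 - rho) ** 2)
    wq = lq / lam                     # Little's law
    return {"P0": p0, "Lq": lq, "L": lq + a, "Wq": wq, "W": wq + 1 / mu}

# Rates reported in the annotation: 2 arrivals/hour, 5 services/hour, 2 servers.
print(mmc_metrics(lam=2.0, mu=5.0, c=2))
```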
46

Melton, Colin, Pranav Singh, Oliver Venn, Earl Hubbell, Samuel Gross, Yasushi Saito, Joshua Newman et al. "Tumor methylation patterns to measure tumor fraction in cell-free DNA." Journal of Clinical Oncology 38, No. 15_suppl (20.05.2020): 3052. http://dx.doi.org/10.1200/jco.2020.38.15_suppl.3052.

The full text of the source
Annotation:
3052 Background: Cell-free DNA (cfDNA) tumor fraction (TF), the proportion of tumor molecules in a cfDNA sample, is a direct measurement of signal for cfDNA cancer applications. The Circulating Cell-free Genome Atlas study (CCGA; NCT02889978) is a prospective, multi-center, observational, case-control study designed to support development of a methylation-based, multi-cancer detection test in which a classifier is trained to distinguish cancer from non-cancer. Here we leveraged CCGA data to examine the relationship between cfDNA containing tumor DNA methylation patterns, TF, and cancer classification performance. Methods: The CCGA classifier was trained on whole-genome bisulfite sequencing (WGBS) and targeted methylation (TM) sequencing data to detect cancer versus non-cancer. 822 samples had biopsy WGBS performed; of those, 231 also had cfDNA targeted methylation (TM) and cfDNA whole-genome sequencing (WGS). Biopsy WGBS identified somatic single nucleotide variants (SNV) and methylation variants (MV; defined as methylation patterns in sequenced DNA fragments observed commonly in biopsy but rarely [< 1/10,000] in the cfDNA of non-cancer controls [n = 898]). Observed tumor fragment counts (SNV in WGS; MV in TM) were modeled as a Poisson process with rate dependent on TF. TF and classifier limits of detection (LOD) were each assessed using Bayesian logistic regression. Results: Across biopsy samples, a median of 2,635 MV was distributed across the genome, with a median of 86.8% shared with ≥1 participant, and a median of 69.3% targeted by the TM assay. TF LOD from MV was 0.00050 (95% credible interval [CI]: 0.00041 - 0.00061); MV and SNV estimates were concordant (Spearman’s Rho: 0.820). MV TF estimates explained classifier performance (Spearman’s Rho: 0.856) and allowed determination of the classifier LOD (0.00082 [95% CI: 0.00057 - 0.00115]). Conclusions: These data demonstrate the existence of methylation patterns in tumor-derived cfDNA fragments that are rarely found in individuals without cancer; their abundance directly measured TF and was a major factor influencing classification performance. Finally, the low classifier LOD (~0.1%) motivates further clinical development of a methylation-based assay for cancer detection. Clinical trial information: NCT02889978.
APA, Harvard, Vancouver, ISO and other citation styles
47

Hayman, Matthew, Robert A. Stillwell, Josh Carnes, Grant J. Kirchhoff, Scott M. Spuler and Jeffrey P. Thayer. "2D signal estimation for sparse distributed target photon counting data". Scientific Reports 14, No. 1 (06.05.2024). http://dx.doi.org/10.1038/s41598-024-60464-1.

The full text of the source
Annotation:
Abstract. In this study, we explore the utilization of penalized likelihood estimation for the analysis of sparse photon counting data obtained from distributed target lidar systems. Specifically, we adapt the Poisson Total Variation processing technique to cater to this application. By assuming a Poisson noise model for the photon count observations, our approach yields denoised estimates of backscatter photon flux and related parameters. This facilitates the processing of raw photon counting signals with exceptionally high temporal and range resolutions (demonstrated here to 50 Hz and 75 cm resolutions), including data acquired through time-correlated single photon counting, without significant sacrifice of resolution. Through examination involving both simulated and real-world 2D atmospheric data, our method consistently demonstrates superior accuracy in signal recovery compared to the conventional histogram-based approach commonly employed in distributed target lidar applications.
APA, Harvard, Vancouver, ISO and other citation styles
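A heavily simplified 1D illustration of the idea behind penalized Poisson likelihood estimation with a total-variation-style penalty is given below; the authors' Poisson Total Variation method is 2D and considerably more sophisticated, so this is only a conceptual sketch with synthetic photon counts and an arbitrary penalty weight.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_tv_denoise(counts, alpha=3.0, eps=1e-3):
    """1D sketch: minimise the negative Poisson log-likelihood plus a smoothed
    total-variation penalty on the log-rate (not the authors' exact algorithm)."""
    counts = np.asarray(counts, dtype=float)

    def objective(x):                       # x = log of the Poisson rate
        lam = np.exp(x)
        nll = np.sum(lam - counts * x)      # Poisson NLL up to a constant
        dx = np.diff(x)
        tv = np.sum(np.sqrt(dx**2 + eps))   # smooth surrogate for sum |x_{i+1}-x_i|
        return nll + alpha * tv

    x0 = np.log(counts + 1.0)               # simple initial guess
    res = minimize(objective, x0, method="L-BFGS-B")
    return np.exp(res.x)                    # denoised rate estimate

# Toy usage: piecewise-constant photon flux observed through Poisson noise.
rng = np.random.default_rng(0)
true_rate = np.r_[np.full(50, 2.0), np.full(50, 12.0)]
obs = rng.poisson(true_rate)
est = poisson_tv_denoise(obs)
print(est[:5].round(2), est[-5:].round(2))
```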
48

Basrak, Bojan, Nikolina Milinčević and Petra Žugec. "On extremes of random clusters and marked renewal cluster processes". Journal of Applied Probability, 09.12.2022, 1–15. http://dx.doi.org/10.1017/jpr.2022.52.

The full text of the source
Annotation:
Abstract. This article describes the limiting distribution of the extremes of observations that arrive in clusters. We start by studying the tail behaviour of an individual cluster, and then we apply the developed theory to determine the limiting distribution of $\max\{X_j \,:\, j=0,\ldots, K(t)\}$, where K(t) is the number of independent and identically distributed observations $(X_j)$ arriving up to time t according to a general marked renewal cluster process. The results are illustrated in the context of some commonly used Poisson cluster models such as the marked Hawkes process.
APA, Harvard, Vancouver, ISO and other citation styles
49

Wang, Zhongzhen, Petros Dellaportas and Ioannis Kosmidis. "Bayesian tensor factorisations for time series of counts". Machine Learning, 27.12.2023. http://dx.doi.org/10.1007/s10994-023-06441-7.

The full text of the source
Annotation:
Abstract. We propose a flexible nonparametric Bayesian modelling framework for multivariate time series of count data based on tensor factorisations. Our models can be viewed as infinite state space Markov chains of known maximal order with non-linear serial dependence through the introduction of appropriate latent variables. Alternatively, our models can be viewed as Bayesian hierarchical models with conditionally independent Poisson distributed observations. Inference about the important lags and their complex interactions is achieved via MCMC. When the observed counts are large, we deal with the resulting computational complexity of Bayesian inference via a two-step inferential strategy based on an initial analysis of a training set of the data. Our methodology is illustrated using simulation experiments and analysis of real-world data.
APA, Harvard, Vancouver, ISO and other citation styles
50

Yu, Miaomiao, Zhijun Wang and Chunjie Wu. "Online detection of the incidence via transfer learning". Naval Research Logistics (NRL), 03.05.2024. http://dx.doi.org/10.1002/nav.22191.

The full text of the source
Annotation:
Abstract. Counting processes have abundant applications in practice, and Poisson process monitoring has received extensive attention and research. However, conventional methods perform poorly when shifts appear early and only a small number of historical observations in Phase I can be used for estimation. To overcome this, we propose a new online monitoring algorithm under the transfer learning framework, which utilizes information from observations of additional data sources so that the target process can be described better. By making the most of somewhat correlated data from other domains, whose relatedness is measured by a bivariate Gamma distributed statistic that we present, the explicit properties (e.g., posterior probability mass function, posterior expectation, and posterior variance) are also rigorously derived. Furthermore, based on the above theoretical results, we design two computationally efficient control schemes for Phase II: a control chart based on the cumulative distribution function for large shifts and an exponentially weighted moving average (EWMA) control chart for small shifts. For better practical applicability and transferability, we provide some optimal values for parameter setting. Extensive numerical simulations and a case study of skin cancer incidence in America verify the superiority of our approach.
APA, Harvard, Vancouver, ISO and other citation styles
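The EWMA control chart mentioned above is, in its textbook form, straightforward to sketch; the version below monitors Poisson counts against asymptotic control limits and is a generic illustration rather than the paper's transfer-learning-based scheme. The in-control mean, smoothing weight, and shift in the toy stream are assumptions.

```python
import numpy as np

def poisson_ewma_chart(counts, mu0, weight=0.2, L=3.0):
    """Textbook EWMA chart for Poisson counts: z_t = w*x_t + (1-w)*z_{t-1},
    with asymptotic limits mu0 +/- L*sqrt(w/(2-w)*mu0)."""
    z = mu0
    sigma = np.sqrt(weight / (2 - weight) * mu0)
    ucl, lcl = mu0 + L * sigma, max(mu0 - L * sigma, 0.0)
    for t, x in enumerate(counts, start=1):
        z = weight * x + (1 - weight) * z
        if not (lcl <= z <= ucl):
            return t, z            # first out-of-control signal
    return None, z

# Toy Phase II stream: in-control mean 4, upward shift to 7 after day 30.
rng = np.random.default_rng(11)
stream = np.r_[rng.poisson(4.0, 30), rng.poisson(7.0, 40)]
print(poisson_ewma_chart(stream, mu0=4.0))
```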