Journal articles on the topic "Poisson log-normal model"

Consult the top 50 journal articles for your research on the topic "Poisson log-normal model".

Browse journal articles on a wide range of disciplines and compile your bibliography correctly.

1. Trinh, Giang, Cam Rungie, Malcolm Wright, Carl Driesener, and John Dawes. "Predicting future purchases with the Poisson log-normal model". Marketing Letters 25, No. 2 (August 3, 2013): 219–34. http://dx.doi.org/10.1007/s11002-013-9254-1.

2. Gallopin, Mélina, Andrea Rau, and Florence Jaffrézic. "A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data". PLoS ONE 8, No. 10 (October 17, 2013): e77503. http://dx.doi.org/10.1371/journal.pone.0077503.

3. Pescim, Rodrigo R., Edwin M. M. Ortega, Adriano K. Suzuki, Vicente G. Cancho, and Gauss M. Cordeiro. "A new destructive Poisson odd log-logistic generalized half-normal cure rate model". Communications in Statistics - Theory and Methods 48, No. 9 (April 27, 2018): 2113–28. http://dx.doi.org/10.1080/03610926.2018.1459709.

4. Sileshi, G. "Selecting the right statistical model for analysis of insect count data by using information theoretic measures". Bulletin of Entomological Research 96, No. 5 (October 2006): 479–88. http://dx.doi.org/10.1079/ber2006449.

Abstract:
Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided a better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
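
The model screening the paper describes can be made concrete in a few lines of code. Below is a minimal Python sketch, using statsmodels, of comparing Poisson, negative binomial, and zero-inflated fits by AIC/BIC; the data are simulated stand-ins for insect counts, not the paper's datasets, and the default settings may need tuning in practice.

```python
# Sketch: compare count models for overdispersed, zero-heavy data by AIC/BIC.
# Data are simulated and illustrative, not the paper's insect counts.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (
    ZeroInflatedPoisson, ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(1)
n = 200
X = sm.add_constant(rng.normal(size=(n, 1)))
# simulate overdispersed counts with excess zeros
mu = np.exp(0.5 + 0.8 * X[:, 1] + rng.normal(scale=0.7, size=n))
y = rng.poisson(mu) * (rng.random(n) > 0.4)   # extra structural zeros

fits = {
    "Poisson": sm.Poisson(y, X).fit(disp=0),
    "NegBin":  sm.NegativeBinomial(y, X).fit(disp=0),
    "ZIP":     ZeroInflatedPoisson(y, X).fit(disp=0),
    "ZINB":    ZeroInflatedNegativeBinomialP(y, X).fit(disp=0),
}
for name, res in fits.items():
    print(f"{name:8s} AIC={res.aic:8.1f}  BIC={res.bic:8.1f}")
```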

5. Choi, Yoonha, Marc Coram, Jie Peng, and Hua Tang. "A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data". Journal of Computational Biology 24, No. 7 (July 2017): 721–31. http://dx.doi.org/10.1089/cmb.2017.0053.

6. Sunandi, Etis, Khairil Anwar Notodiputro, and Bagus Sartono. "A Study of Generalized Linear Mixed Model for Count Data Using Hierarchical Bayes Method". Media Statistika 14, No. 2 (December 12, 2021): 194–205. http://dx.doi.org/10.14710/medstat.14.2.194-205.

Abstract:
The Poisson log-normal model is a hierarchical mixed model that can be used for count data, and several estimation methods can be used to estimate its parameters. The first objective of this study was to examine, by simulation, the performance of the parameter estimators and the model built using the hierarchical Bayes method via Markov chain Monte Carlo (MCMC). The second objective was to apply the Poisson log-normal model to data on illiteracy cases in West Java, sourced from the Susenas data of March 2019; illiteracy was a very rare occurrence in West Java Province in 2019, which makes it a suitable application for this study. The simulation results showed that the hierarchical Bayes estimator via MCMC has the smallest root mean squared error of prediction (RMSEP), with absolute bias broadly similar to that of the maximum likelihood (ML) and penalized quasi-likelihood (PQL) methods. The empirical results showed that households whose respondents have at most an elementary-school education carry the greatest risk of illiteracy, and that the variation between census blocks significantly affects illiteracy cases in West Java in 2019.
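
As a companion to the abstract, here is a minimal sketch of the Poisson log-normal mixed model fitted by MCMC, assuming the PyMC library is available; the data, priors, and variable names are illustrative, not the study's.

```python
# Sketch: Poisson log-normal mixed model, y_ij ~ Poisson(exp(b0 + b1*x + u_j)),
# u_j ~ Normal(0, sigma^2), fitted by MCMC. Assumes PyMC; data are simulated.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_groups, n_per = 30, 10
group = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
u_true = rng.normal(scale=0.5, size=n_groups)
y = rng.poisson(np.exp(0.3 + 0.6 * x + u_true[group]))

with pm.Model():
    b0 = pm.Normal("b0", 0, 2)
    b1 = pm.Normal("b1", 0, 2)
    sigma = pm.HalfNormal("sigma", 1.0)
    u = pm.Normal("u", 0, sigma, shape=n_groups)   # log-scale random effect
    pm.Poisson("y", mu=pm.math.exp(b0 + b1 * x + u[group]), observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(float(idata.posterior["sigma"].mean()))      # posterior mean of sigma
```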

7. Oflaz, Zarina Nukeshtayeva, Ceylan Yozgatligil, and A. Sevtap Selcuk-Kestel. "Aggregate Claim Estimation Using Bivariate Hidden Markov Model". ASTIN Bulletin 49, No. 1 (November 29, 2018): 189–215. http://dx.doi.org/10.1017/asb.2018.29.

Abstract:
In this paper, we propose an approach for modeling claim dependence, with the assumption that the claim numbers and the aggregate claim amounts are mutually and serially dependent through an underlying hidden state that can be characterized by a hidden finite-state Markov chain, using a bivariate hidden Markov model (BHMM). We construct three different BHMMs, namely Poisson–Normal HMM, Poisson–Gamma HMM, and Negative Binomial–Gamma HMM, stemming from the distributions most commonly used in insurance studies. The Expectation Maximization algorithm is implemented, and for the maximization of the state-dependent part of the log-likelihood of the BHMMs the estimates are derived analytically. To illustrate the proposed model, motor third-party liability claims in Istanbul, Turkey, are employed in the framework of the Poisson–Normal HMM under different numbers of states. In addition, we derive the forecast distribution, calculate state predictions, and determine the most likely sequence of states. The results indicate that the dependence under indirect factors can be captured in terms of different states, namely low, medium, and high states.
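
A short simulation of the generative process behind such a bivariate HMM shows how a shared hidden state induces dependence between claim counts and aggregate amounts; all parameter values below are illustrative, not the paper's estimates.

```python
# Sketch: two-state bivariate HMM in which claim counts are Poisson and
# aggregate amounts Normal, both conditioned on the same hidden Markov state.
import numpy as np

rng = np.random.default_rng(42)
P = np.array([[0.9, 0.1],        # state transition matrix (low, high)
              [0.2, 0.8]])
lam = np.array([5.0, 20.0])      # Poisson claim-count rate per state
mu, sd = np.array([10.0, 60.0]), np.array([3.0, 15.0])  # Normal amounts

T, s = 200, 0
counts, amounts = [], []
for _ in range(T):
    s = rng.choice(2, p=P[s])               # hidden state evolves first
    counts.append(rng.poisson(lam[s]))      # count and amount share the state,
    amounts.append(rng.normal(mu[s], sd[s]))  # which induces their dependence

print(np.corrcoef(counts, amounts)[0, 1])   # dependence via the hidden chain
```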

8. Mielenz, Norbert, Joachim Spilke, and Eberhard von Borell. "Analysis of correlated count data using generalised linear mixed models exemplified by field data on aggressive behaviour of boars". Archives Animal Breeding 57, No. 1 (January 29, 2015): 1–19. http://dx.doi.org/10.5194/aab-57-26-2015.

Abstract:
Population-averaged and subject-specific models are available to evaluate count data when repeated observations per subject are present. The latter are also known in the literature as generalised linear mixed models (GLMM). In GLMM repeated measures are taken into account explicitly through random animal effects in the linear predictor. In this paper the relevant GLMMs are presented based on conditional Poisson or negative binomial distribution of the response variable for given random animal effects. Equations for the repeatability of count data are derived assuming normal distribution and logarithmic gamma distribution for the random animal effects. Using count data on aggressive behaviour events of pigs (barrows, sows and boars) in mixed-sex housing, we demonstrate the use of the Poisson "log-gamma intercept", the Poisson "normal intercept" and the "normal intercept" model with negative binomial distribution. Since not all count data can definitely be seen as Poisson or negative-binomially distributed, questions of model selection and model checking are examined. Emanating from the example, we also interpret the least squares means, estimated on the link as well as the response scale. Options provided by the SAS procedure NLMIXED for estimating model parameters and for estimating marginal expected values are presented.
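
For reference, the marginal moments of the Poisson "normal intercept" model follow from standard log-normal identities (a textbook derivation, not quoted from the paper); with linear predictor $\eta_{ij}$ and random intercept $u_i \sim N(0, \sigma^2)$:

```latex
\begin{align*}
\mathbb{E}[y_{ij}] &= \mu_{ij} = e^{\eta_{ij} + \sigma^{2}/2}, \\
\operatorname{Var}(y_{ij}) &= \mu_{ij} + \mu_{ij}^{2}\bigl(e^{\sigma^{2}} - 1\bigr), \\
\operatorname{Cov}(y_{ij}, y_{ik}) &= \mu_{ij}\,\mu_{ik}\bigl(e^{\sigma^{2}} - 1\bigr), \qquad j \neq k,
\end{align*}
```

so that for a constant mean $\mu$ the repeatability (within-animal correlation of counts) is $\rho = \mu(e^{\sigma^{2}}-1)\big/\bigl(1 + \mu(e^{\sigma^{2}}-1)\bigr)$.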

9. Mielenz, Norbert, Joachim Spilke, and Eberhard von Borell. "Analysis of correlated count data using generalised linear mixed models exemplified by field data on aggressive behaviour of boars". Archives Animal Breeding 57, No. 1 (January 29, 2015): 1–19. http://dx.doi.org/10.7482/0003-9438-57-026.

Abstract:
Population-averaged and subject-specific models are available to evaluate count data when repeated observations per subject are present. The latter are also known in the literature as generalised linear mixed models (GLMM). In GLMM repeated measures are taken into account explicitly through random animal effects in the linear predictor. In this paper the relevant GLMMs are presented based on conditional Poisson or negative binomial distribution of the response variable for given random animal effects. Equations for the repeatability of count data are derived assuming normal distribution and logarithmic gamma distribution for the random animal effects. Using count data on aggressive behaviour events of pigs (barrows, sows and boars) in mixed-sex housing, we demonstrate the use of the Poisson "log-gamma intercept", the Poisson "normal intercept" and the "normal intercept" model with negative binomial distribution. Since not all count data can definitely be seen as Poisson or negative-binomially distributed, questions of model selection and model checking are examined. Emanating from the example, we also interpret the least squares means, estimated on the link as well as the response scale. Options provided by the SAS procedure NLMIXED for estimating model parameters and for estimating marginal expected values are presented.

10. Kunakh, O. N., S. S. Kramarenko, A. V. Zhukov, A. S. Kramarenko, and N. V. Yorkina. "Fitting competing models and evaluation of model parameters of the abundance distribution of the land snail Vallonia pulchella (Pulmonata, Valloniidae)". Regulatory Mechanisms in Biosystems 9, No. 2 (April 25, 2018): 198–202. http://dx.doi.org/10.15421/021829.

Abstract:
This paper summarizes the mechanisms behind the patterning of the intra-population abundance distribution of the land snail Vallonia pulchella (Müller, 1774). The molluscs were collected in recultivated soil formed on red-brown clays (Pokrov, Ukraine). Data obtained in this study reveal that V. pulchella population abundance ranges from 1 to 13 individuals per 100 g of soil sample. To estimate the mean, three models were used: the arithmetic mean, the Poisson model, and a log-normal model. The arithmetic mean occurrence of this species during the study period was 1.84 individuals/sample. The average number of molluscs per sample calculated using the Poisson model is lower, at 1.40 individuals/sample. The distribution of the number of individuals in the population was described by rank–abundance plots, in which the individual sample plot sites with molluscs may be regarded as equivalents of individual species in a community. For the analysis, the following models were used: the broken-stick model, niche preemption model, log-normal model, Zipf model, and Zipf–Mandelbrot model. Applying the log-normal distribution gives a lower estimate of the mean density, at 1.28 individuals/sample; the median and mode are estimated at 1.00 individuals/sample. The Zipf–Mandelbrot model was shown to be the most adequate for describing the distribution of the V. pulchella population within the study area. The Zipf–Mandelbrot model belongs to the family of so-called non-Gaussian distributions. This means that the sample statistics do not possess asymptotic properties: as the sample size increases they tend to infinity rather than approaching the values of the general population. Therefore, the average value of a random variable that follows a non-Gaussian distribution has no statistical meaning. From an environmental point of view, this means that within the study area the capacity of the habitat is large, and for some combinations of environmental conditions rapid growth of the abundance of this species is possible.

11. Wu, Jun, Xiaodong Zhao, Zongli Lin, and Zhifeng Shao. "An effective differential expression analysis of deep-sequencing data based on the Poisson log-normal model". Journal of Bioinformatics and Computational Biology 13, No. 02 (April 2015): 1550001. http://dx.doi.org/10.1142/s0219720015500018.

Abstract:
A tremendous amount of deep-sequencing data has unprecedentedly improved our understanding of biomedical science through digital sequence reads. To mine useful information from such data, a proper distribution for modeling the full range of the count data and accurate parameter estimation are required. In this paper, we propose a method, called "DEPln," for differential expression analysis based on the Poisson log-normal (PLN) distribution with an accurate parameter estimation strategy, which aims to overcome the inconvenience in the mathematical analysis of the traditional PLN distribution. The performance of our proposed method is validated by both synthetic and real data. Experimental results indicate that our method outperforms traditional methods in terms of discrimination ability and results in a good tradeoff between the recall rate and the precision. Thus, our work provides a new approach for gene expression analysis and has strong potential in deep-sequencing based research.

12. Almuqrin, Muqrin A. "Bayesian and non-Bayesian inference for the compound Poisson log-normal model with application in finance". Alexandria Engineering Journal 90 (March 2024): 24–43. http://dx.doi.org/10.1016/j.aej.2024.01.031.

13. Sinclair, David, and Giles Hooker. "Sparse inverse covariance estimation for high-throughput microRNA sequencing data in the Poisson log-normal graphical model". Journal of Statistical Computation and Simulation 89, No. 16 (August 23, 2019): 3105–17. http://dx.doi.org/10.1080/00949655.2019.1657116.

14. Cataño Velez, Elkin. "Robustez estadística". Lecturas de Economía, No. 24 (January 26, 2011): 85–99. http://dx.doi.org/10.17533/udea.le.n24a7769.

Abstract:
The use of exact stochastic parametric models such as the normal, log-normal, exponential, Poisson, and gamma is nowadays deeply entrenched in statistical practice, because such models permit an approximate representation of a set of data that can be easily described and interpreted. Nevertheless, it is well known that the real world does not behave as these models describe it. Recently, a statistical technique has emerged that also employs parametric models but carries out inference over a neighbourhood of the assumed model; that is, although it employs parametric models, the procedures it builds do not depend fundamentally on the hypotheses inherent in them.

15. Huang, Guo Xiang, Supapan Chaiprapat, and Kriangkrai Waiyagan. "A Probabilistic Model of Wood Defects". Applied Mechanics and Materials 799-800 (October 2015): 217–21. http://dx.doi.org/10.4028/www.scientific.net/amm.799-800.217.

Abstract:
Although widely used in construction and industrial applications, wood is more prone to defects of different kinds than other materials. These defects are unpredictable and differ randomly from plank to plank. This uncertain nature of the defects complicates the establishment of manufacturing plans. In this study, a probabilistic model of wood defects was constructed as a function of three variables: quantity of defects, position of defects, and size of defects. Kolmogorov-Smirnov hypothesis tests on the distributional forms of these variables were carried out. Results showed that Poisson, uniform, and log-normal distributions were suitable to represent the variables statistically. Knowing how the defects are distributed on a plank will be of benefit in the profitability justification of a cutting plan.
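
A minimal sketch of the kind of Kolmogorov-Smirnov check described here, using SciPy; the defect sizes are simulated, and note that estimating parameters from the same sample makes the plain KS p-value only approximate (a Lilliefors-type correction would be stricter).

```python
# Sketch: KS goodness-of-fit check of a log-normal model for defect sizes.
# Data are simulated stand-ins, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sizes = rng.lognormal(mean=1.0, sigma=0.4, size=150)   # stand-in defect sizes

shape, loc, scale = stats.lognorm.fit(sizes, floc=0)   # fit log-normal
D, p = stats.kstest(sizes, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {D:.3f}, p = {p:.3f}")

# Defect counts per plank could be checked against a Poisson model
# analogously, e.g. with a chi-square test on binned counts.
```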

16. Yahia, H., N. Schneider, S. Bontemps, L. Bonne, G. Attuel, S. Dib, V. Ossenkopf-Okada, et al. "Description of turbulent dynamics in the interstellar medium: multifractal-microcanonical analysis". Astronomy & Astrophysics 649 (May 2021): A33. http://dx.doi.org/10.1051/0004-6361/202039874.

Abstract:
Observations of the interstellar medium (ISM) show a complex density and velocity structure, which is in part attributed to turbulence. Consequently, the multifractal formalism should be applied to observation maps of the ISM in order to characterize its turbulent and multiplicative cascade properties. However, the multifractal formalism, even in its more advanced and recent canonical versions, requires a large number of realizations of the system, which usually cannot be obtained in astronomy. We present a self-contained introduction to the multifractal formalism in a “microcanonical” version, which allows us, for the first time, to compute precise turbulence characteristic parameters from a single observational map without the need for averages in a grand ensemble of statistical observables (e.g., a temporal sequence of images). We compute the singularity exponents and the singularity spectrum for both observations and magnetohydrodynamic simulations, which include key parameters to describe turbulence in the ISM. For the observations we focus on the 250 μm Herschel map of the Musca filament. Scaling properties are investigated using spatial 2D structure functions, and we apply a two-point log-correlation magnitude analysis over various lines of the spatial observation, which is known to be directly related to the existence of a multiplicative cascade under precise conditions. It reveals a clear signature of a multiplicative cascade in Musca with an inertial range from 0.05–0.65 pc. We show that the proposed microcanonical approach provides singularity spectra that are truly scale invariant, as required to validate any method used to analyze multifractality. The obtained singularity spectrum of Musca, which is sufficiently precise for the first time, is clearly not as symmetric as usually observed in log-normal behavior. We claim that the singularity spectrum of the ISM toward Musca features a more log-Poisson shape. Since log-Poisson behavior is claimed to exist when dissipation is stronger for rare events in turbulent flows, in contrast to more homogeneous (in volume and time) dissipation events, we suggest that this deviation from log-normality could trace enhanced dissipation in rare events at small scales, which may explain, or is at least consistent with, the dominant filamentary structure in Musca. Moreover, we find that subregions in Musca tend to show different multifractal properties: While a few regions can be described by a log-normal model, other regions have singularity spectra better fitted by a log-Poisson model. This strongly suggests that different types of dynamics exist inside the Musca cloud. We note that this deviation from log-normality and these differences between subregions appear only after reducing noise features, using a sparse edge-aware algorithm, which have the tendency to “log-normalize” an observational map. Implications for the star formation process are discussed. Our study establishes fundamental tools that will be applied to other galactic clouds and simulations in forthcoming studies.

17. Santolino, Miguel. "Should Selection of the Optimum Stochastic Mortality Model Be Based on the Original or the Logarithmic Scale of the Mortality Rate?" Risks 11, No. 10 (September 28, 2023): 170. http://dx.doi.org/10.3390/risks11100170.

Abstract:
Stochastic mortality models seek to forecast future mortality rates; thus, it is apparent that the objective variable should be the mortality rate expressed in the original scale. However, the performance of stochastic mortality models—in terms, that is, of their goodness-of-fit and prediction accuracy—is often based on the logarithmic scale of the mortality rate. In this article, we examine whether the same forecast outcomes are obtained when the performance of mortality models is assessed based on the original and log scales of the mortality rate. We compare four different stochastic mortality models: the original Lee–Carter model, the Lee–Carter model with (log)normal distribution, the Lee–Carter model with Poisson distribution and the median Lee–Carter model. We show that the preferred model will depend on the scale of the objective variable, the selection criteria measure and the range of ages analysed.

18. Levi, Charles, and Christian Partrat. "Statistical Analysis of Natural Events in the United States". ASTIN Bulletin 21, No. 2 (November 1991): 253–76. http://dx.doi.org/10.1017/s051503610000458x.

Abstract:
A statistical analysis is performed on natural events which can cause significant damage to insurers. The analysis is based on hurricanes observed in the United States between 1954 and 1986. First, independence between the number and the amount of the losses is examined. Different distributions (Poisson and negative binomial for frequency; exponential, Pareto, and log-normal for severity) are tested. Alongside classical tests such as chi-square, Kolmogorov-Smirnov, and non-parametric tests, a test that weights the upper tail of the distribution is used: the Anderson-Darling test. Confidence intervals for the probability of occurrence of a claim and the expected frequency for different potential levels of claims are derived. The Poisson log-normal model gives a very good fit to the data.
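
The severity part of such an analysis can be sketched in a few lines. SciPy's Anderson-Darling test covers the normal family, so log-normality of loss amounts can be checked by testing the log-losses for normality (a standard device); the data below are simulated, not the hurricane losses.

```python
# Sketch: Poisson frequency / log-normal severity screening for loss data.
# Log-normality is tested by applying Anderson-Darling to log-losses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
losses = rng.lognormal(mean=17.0, sigma=1.2, size=60)  # simulated loss amounts

ad = stats.anderson(np.log(losses), dist="norm")
print("A^2 =", round(ad.statistic, 3))
print("5% critical value:", ad.critical_values[2])     # reject if A^2 exceeds it

# Frequency side: Poisson rate with a rough normal-approximation CI.
n_events, n_years = 60, 33
lam = n_events / n_years
ci = lam + np.array([-1, 1]) * 1.96 * np.sqrt(lam / n_years)
print(f"events/year = {lam:.2f}, 95% CI ~ ({ci[0]:.2f}, {ci[1]:.2f})")
```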

19. Parreira da Silva, Guilherme, Henrique Aparecido Laureano, Ricardo Rasmussen Petterle, Paulo Justiniano Ribeiro Júnior, and Wagner Hugo Bonat. "Multivariate Generalized Linear Mixed Models for Count Data". Austrian Journal of Statistics 53, No. 1 (January 15, 2024): 44–69. http://dx.doi.org/10.17713/ajs.v53i1.1574.

Abstract:
Univariate regression models for count data have a rich literature; however, this is not the case for multivariate count data. We therefore present the multivariate generalized linear mixed models framework, which deals with a multivariate set of responses, measuring the correlation between them through random effects that follow a multivariate normal distribution. This model is based on a GLMM with a random intercept, and the estimation process remains the same as for a standard GLMM, with random effects integrated out via the Laplace approximation. We efficiently implemented this model through the TMB package available in R, using Poisson, negative binomial (NB), and COM-Poisson distributions. To assess the estimator properties, we conducted a simulation study considering four different sample sizes and three different correlation values for each distribution. We achieved unbiased and consistent estimators for the Poisson and NB distributions; for COM-Poisson the estimators were consistent but biased, especially the dispersion, variance, and correlation parameter estimators. These models were applied to two datasets. The first concerns a sample from 30 different sites in Australia recording the number of times each of 41 different ant species was registered, so that an impressive 820 variance-covariance and 41 dispersion parameters are estimated simultaneously, in addition to the regression parameters. The second is from the Australia Health Survey, with 5 response variables and 5190 respondents. These datasets can be considered overdispersed by the generalized dispersion index. The COM-Poisson model outperformed the other two competitors on three goodness-of-fit indexes (AIC, BIC, and maximized log-likelihood) and, as a result, estimated parameters with smaller standard errors and a greater number of significant correlation coefficients. The proposed model is therefore capable of dealing with multivariate count data, whether under-, equi-, or overdispersed, and of measuring any kind of correlation between the responses while taking the effects of the covariates into account.

20. Prastiwi, Dina Asti, Sugito, and Puspita Kartikasari. "Analysis non-poisson systems cases of queuing passenger aircraft at Ahmad Yani Airport". Natural Science: Journal of Science and Technology 10, No. 1 (May 31, 2021): 01–05. http://dx.doi.org/10.22487/25411969.2021.v10.i1.15452.

Abstract:
Aircraft are an effective means of transportation compared to land and sea transport, so the demand for movement of both passengers and aircraft increases in every time period; the problem that arises, however, is one of capacity. A queueing system often encountered in daily life is the transportation service system, for example the aircraft queue at Ahmad Yani International Airport. Observations showed that boarding does not follow the schedule because aircraft arrivals are not always on time. This can leave the airport apron full or busy and can hold up arriving aircraft. Applying queueing methods can address these difficulties with aircraft parking facilities at Ahmad Yani Airport in Semarang. The analysis identifies the passenger aircraft queue system at Ahmad Yani International Airport as (G/G/8):(GD/∞/∞): a non-Poisson model in which inter-arrival times follow a log-normal distribution and service times a logistic distribution, with 8 servers, FIFO queue discipline, unlimited system capacity, and an unlimited calling source.

21. Bai, Xue, Yang Zhao, Jian Ma, Yunxi Liu, and Qiwu Wang. "Grain-Size Distribution Effects on the Attenuation of Laser-Generated Ultrasound in α-Titanium Alloy". Materials 12, No. 1 (December 29, 2018): 102. http://dx.doi.org/10.3390/ma12010102.

Abstract:
Average grain size is usually used to describe a polycrystalline medium; however, many investigations demonstrate that the grain-size distribution has a measurable effect on most mechanical properties. This paper addresses the experimental quantification of the effects of grain-size distribution on attenuation in α-titanium alloy by laser ultrasonics. Microstructures with different mean grain sizes of 26–49 μm were obtained via annealing at 800 °C for different holding times, having an approximately log-normal distribution of grain sizes. Experimental measurements were examined using two different theoretical models: (i) the classical Rokhlin model, considering a single mean grain size, and (ii) the improved Turner model, incorporating a log-normal distribution of grain sizes in the attenuation evaluation. Quantitative agreement between the experiment and the latter model was found in the Rayleigh and the Rayleigh-to-stochastic transition regions. The attenuation level was larger than the classical theoretical prediction based on a single mean grain size, and the frequency dependence of attenuation was reduced from the classical fourth power to an approximately second power due to a greater probability of large grains than under the assumed Poisson statistics. These results support the use of laser ultrasound technology for the non-destructive evaluation of grain-size distribution in polycrystalline materials.

22. Bajdik, Chris D., and David C. Schneider. "Models of the Fish Yield from Lakes: Does the Random Component Matter?" Canadian Journal of Fisheries and Aquatic Sciences 48, No. 4 (April 1, 1991): 619–22. http://dx.doi.org/10.1139/f91-079.

Abstract:
Generalized linear models were used to investigate the sensitivity of parameter estimates to the choice of the random error assumption in models of fisheries data. We examined models of fish yield from lakes as a function of (i) Ryder's morphoedaphic index, (ii) lake area, lake depth, and concentration of dissolved solids, and (iii) fishing effort. Models were fit using a normal, log-normal, gamma, or Poisson distribution to generate the random error. Plots of standardized Pearson residuals and standardized deviance residuals were used to evaluate the distributional assumptions. For each data set, observations were found to be consistent with several distributions; however, some distributions were shown to be clearly inappropriate. Inappropriate distributional assumptions produced substantially different parameter estimates. Generalized linear models allow a variety of distributional assumptions to be incorporated in a model, and thereby let us study their effects.
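
A minimal statsmodels sketch of the paper's idea: keep one linear predictor and swap the random component (family), then inspect deviance and residuals. Variables are simulated stand-ins for the morphoedaphic-index regression.

```python
# Sketch: one linear predictor for fish yield under several random components.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 80
mei = rng.lognormal(2.0, 0.6, size=n)          # morphoedaphic index (MEI)
X = sm.add_constant(np.log(mei))
y = rng.gamma(shape=4.0, scale=np.exp(0.2 + 0.5 * np.log(mei)) / 4.0)

families = {
    "normal":  sm.families.Gaussian(),
    "gamma":   sm.families.Gamma(sm.families.links.Log()),
    "Poisson": sm.families.Poisson(),   # quasi-likelihood view for continuous y
}
for name, fam in families.items():
    res = sm.GLM(y, X, family=fam).fit()
    print(f"{name:8s} slope={res.params[1]:+.3f}  deviance={res.deviance:.1f}")
    # res.resid_pearson / res.resid_deviance support the residual diagnostics
    # used in the paper; a log-normal fit is the Gaussian family on log(y).
```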

23. Joyce, Toby, Bahman Honari, Simon Wilson, John Donovan, and Oonagh Gaffney. "Models for Optimization of Production Environmental Stress Testing on Electronic Circuit Packs". International Journal of Reliability, Quality and Safety Engineering 15, No. 06 (December 2008): 555–79. http://dx.doi.org/10.1142/s0218539308003222.

Abstract:
The problem of optimizing accelerated production testing is a pressing one in most electronic manufacturing facilities. Yet, practical models are scarce in the literature, especially for testing high volumes of electronic circuit packs in failure-accelerating environments. In this paper, we develop both a log-linear and linear model, based initially on the Weibull distribution. The models developed are suitable for modeling accelerated production testing data from a temperature-cycled environment. The model is "piecewise" in that the failures in each discrete "piece" of the temperature cycle are modeled as if the testing was in parallel rather than sequential mode. An extra covariate is introduced to indicate age at the start of each piece. The failures in a piece then depend on the stress in the piece itself and the time elapsed to the start of the piece. This last dependence captures the influence of reliability growth and has the result of providing an alternative linear model to the log-linear one. The paper demonstrates a simpler use of Poisson regression. An application, using actual production data, is described. Uses of the Loglogistic, Logistic, Lognormal and Normal distributions are also illustrated.

24. Ebid, Abdel Hameed I. M., Sara M. Abdel Motaleb, Mahmoud I. Mostafa, and Mahmoud M. A. Soliman. "Novel nomogram-based integrated gonadotropin therapy individualization in in vitro fertilization/intracytoplasmic sperm injection: A modeling approach". Clinical and Experimental Reproductive Medicine 48, No. 2 (June 1, 2021): 163–73. http://dx.doi.org/10.5653/cerm.2020.03909.

Abstract:
Objective: This study aimed to characterize a validated model for predicting oocyte retrieval in controlled ovarian stimulation (COS) and to construct model-based nomograms for assistance in clinical decision-making regarding the gonadotropin protocol and dose. Methods: This observational, retrospective, cohort study included 636 women with primary unexplained infertility and a normal menstrual cycle who were attempting assisted reproductive therapy for the first time. The enrolled women were split into an index group (n=497) for model building and a validation group (n=139). The primary outcome was absolute oocyte count. The dose-response relationship was tested using modified Poisson, negative binomial, hybrid Poisson-Emax, and linear models. The validation group was similarly analyzed, and its results were compared to those of the index group. Results: The Poisson model with the log-link function demonstrated superior predictive performance and precision (Akaike information criterion, 2,704; λ=8.27; relative standard error (λ)=2.02%). The covariate analysis included women's age (p<0.001), antral follicle count (p<0.001), basal follicle-stimulating hormone level (p<0.001), gonadotropin dose (p=0.042), and protocol type (p=0.002 and p<0.001 for short and antagonist protocols, respectively). The estimates from 500 bootstrap samples were close to those of the original model. The validation group (n=139) showed model assessment metrics comparable to the index model. Based on the fitted model, a static nomogram was built to improve visualization. In addition, a dynamic electronic tool was created for convenience of use. Conclusion: Based on our validated model, nomograms were constructed to help clinicians individualize the stimulation protocol and gonadotropin doses in COS cycles.

25. Sagar, Anil Kumar, and D. K. Lobiyal. "Fault Tolerant Coverage and Connectivity in Presence of Channel Randomness". Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/818135.

Abstract:
Some applications of wireless sensor networks require K-coverage and K-connectivity to ensure the system is fault tolerant and more reliable, which makes coverage and connectivity an important issue in wireless sensor networks. In this paper, we propose K-coverage and K-connectivity models for wireless sensor networks. In both models, nodes are distributed according to a Poisson distribution in the sensor field. To make the proposed model more realistic, we used the log-normal shadowing path loss model to capture radio irregularities and studied its impact on K-coverage and K-connectivity. The value of K can differ for different types of applications. Further, we also analyzed the problem of node failure for the K-coverage model. In the simulation section, results clearly show that coverage and connectivity of a wireless sensor network depend on the node density and on shadowing parameters such as the path loss exponent and standard deviation.
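
A Monte Carlo sketch of the central ingredient, assuming a log-distance path loss model with log-normal shadowing (normal in dB); all numbers are illustrative, not the paper's settings.

```python
# Sketch: probability that the field center is K-covered when sensors follow
# a Poisson point process and links suffer log-normal shadowing.
import numpy as np

rng = np.random.default_rng(5)
K, density, side = 2, 0.02, 100.0          # nodes per m^2, field side (m)
p_tx, pl0, d0, n_exp, sigma = 0.0, 40.0, 1.0, 3.0, 6.0   # dB quantities
thresh = -90.0                             # receiver sensitivity (dBm)

hits, trials = 0, 2000
for _ in range(trials):
    n_nodes = rng.poisson(density * side**2)
    xy = rng.uniform(0, side, size=(n_nodes, 2))
    d = np.linalg.norm(xy - side / 2, axis=1).clip(d0)   # distance to center
    # log-distance path loss plus log-normal shadowing (normal in dB)
    p_rx = (p_tx - (pl0 + 10 * n_exp * np.log10(d / d0))
            + rng.normal(0, sigma, n_nodes))
    hits += (np.sum(p_rx >= thresh) >= K)  # center covered by >= K nodes

print("P(K-coverage) ~", hits / trials)
```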

26. Nugraha, Dwita Safira, Dwi Susanti, and Sukono Sukono. "Determination of the Contribution of the Reserve Fund for Flood Natural Disaster Management in the DKI Jakarta Region". International Journal of Global Operations Research 2, No. 4 (November 3, 2021): 162–67. http://dx.doi.org/10.47194/ijgor.v2i4.88.

Abstract:
Floods are natural disasters that are quite difficult to predict; as a result, they cause many material and moral losses and even cost lives. In Indonesia, one of the areas that experiences flooding the most is DKI Jakarta, and in early 2020 flooding was the biggest cause of loss in the region. The role of the people of DKI Jakarta is very important in collecting contributions to the reserve fund for disaster emergency response. This study therefore aims to estimate the reserve fund contributions for community-based flood disaster management in the DKI Jakarta area using the collective risk model approach, with Poisson and log-normal distributions, including estimates of their parameters, resulting in an estimate of the expected magnitude of the risk of loss. Based on these expectations, the contribution amount can be calculated using the individual and collective risk models. The result of this research is a fund contribution calculated according to the expected value principle.
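
The collective risk model used here has a convenient closed-form mean that a simulation can verify. A minimal sketch with illustrative parameters (not the study's estimates):

```python
# Sketch: collective risk model S = X_1 + ... + X_N with N ~ Poisson(lam)
# and X_i ~ log-normal(mu, sigma), plus an expected-value-principle premium.
import numpy as np

rng = np.random.default_rng(9)
lam, mu, sigma = 2.5, 20.0, 1.0      # floods/year; log-loss location/scale
theta = 0.2                          # safety loading

# closed form: E[S] = lam * E[X], with E[X] = exp(mu + sigma^2 / 2)
ES = lam * np.exp(mu + sigma**2 / 2)

# Monte Carlo check of the aggregate-loss mean
S = np.array([rng.lognormal(mu, sigma, rng.poisson(lam)).sum()
              for _ in range(20000)])
print(f"E[S] closed form: {ES:.3e}, simulated: {S.mean():.3e}")
print(f"expected-value premium (1+theta)*E[S]: {(1 + theta) * ES:.3e}")
```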

27. Park, Man Sik, Jin Ki Eom, Jungsoon Choi, and Tae-Young Heo. "Analysis of the Railway Accident-Related Damages in South Korea". Applied Sciences 10, No. 24 (December 8, 2020): 8769. http://dx.doi.org/10.3390/app10248769.

Abstract:
Railway accidents are critical issues characterized by a large number of injuries and fatalities per accident due to massive public transport systems. This study proposes a new approach for evaluating the damages resulting from railway accidents using two-part models (TPMs): the zero-inflated Poisson regression model (ZIP model) and the zero-inflated negative-binomial regression model (ZINB model) for the non-negative count measurements, and the zero-inflated gamma regression model (ZIG model) and the zero-inflated log-normal regression model (ZILN model) for the semi-continuous measurements. The models are employed to evaluate railway accidents on Korea Railroad for the period 2008 to 2016, considering accident damages such as train delay time, the number of trains delayed, and cost, together with the accident count responses. From the results obtained, we found that human-related factors, the high-speed railway system or Korea Train Express (KTX), and the number of casualties are the main cost-escalating factors. The number of trains delayed and the amount of delay time tend to increase both the probability of incurring costs and the amount of cost. For better evaluation, the railway accident data should contain accurate information with less recurrence of zeros.

28. Subedi, Sanjeena. "Clustering Matrix Variate Longitudinal Count Data". Analytics 2, No. 2 (May 5, 2023): 426–37. http://dx.doi.org/10.3390/analytics2020024.

Abstract:
Matrix variate longitudinal discrete data can arise in transcriptomics studies when the data are collected for N genes at r conditions over t time points, and thus, each observation Yn for n=1,…,N can be written as an r×t matrix. When dealing with such data, the number of parameters in the model can be greatly reduced by considering the matrix variate structure. The components of the covariance matrix then also provide a meaningful interpretation. In this work, a mixture of matrix variate Poisson-log normal distributions is introduced for clustering longitudinal read counts from RNA-seq studies. To account for the longitudinal nature of the data, a modified Cholesky-decomposition is utilized for a component of the covariance structure. Furthermore, a parsimonious family of models is developed by imposing constraints on elements of these decompositions. The models are applied to both real and simulated data, and it is demonstrated that the proposed approach can recover the underlying cluster structure.

29. Liu, David. "Markov modulated jump-diffusions for currency options when regime switching risk is priced". International Journal of Financial Engineering 06, No. 04 (December 2019): 1950038. http://dx.doi.org/10.1142/s2424786319500385.

Abstract:
In the current literature, regime-switching risk is not priced in Markov-modulated jump-diffusion models for currency options. We therefore develop a hidden Markov-modulated jump-diffusion model under a regime-switching economy where the regime-switching risk is priced. In the model, the dynamics of the spot foreign exchange rate capture both the rare events and the time-inhomogeneity in the fluctuating currency market. In particular, the rare events are described by a compound Poisson process with log-normal jump amplitude, and the time-varying rates are formulated by a continuous-time finite-state Markov chain. Unlike previous research, the proposed model can price regime-switching risk, in addition to diffusion risk and jump risk, based on the Esscher transform conditional on a single initial regime of the economy. Numerical experiments are conducted, and their results reveal that the impact of pricing regime-switching risk on currency option prices does not seem significant, in contradiction to the findings of Siu and Yang [Siu, TK and H Yang (2009). Option Pricing When The Regime-Switching Risk is Priced. Acta Mathematicae Applicatae Sinica, English Series, Vol. 25, No. 3, pp. 369–388].
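
A sketch of the generative side of such a model: a two-state Markov chain modulates drift, volatility, and jump intensity, with compound Poisson jumps of log-normal amplitude (normal in log space). Parameters are illustrative, and the Esscher-transform pricing step described in the paper is not shown.

```python
# Sketch: one exchange-rate path from a Markov-modulated jump-diffusion.
import numpy as np

rng = np.random.default_rng(2024)
dt, T = 1 / 252, 252
P = np.array([[0.99, 0.01], [0.03, 0.97]])     # regime transitions per step
mu, sig, lam = [0.02, -0.03], [0.08, 0.25], [1.0, 8.0]  # per regime, annualized
jm, js = 0.0, 0.05                             # mean / std of normal log-jumps

s, logS = 0, [0.0]
for _ in range(T):
    s = rng.choice(2, p=P[s])
    dN = rng.poisson(lam[s] * dt)              # number of jumps this step
    jump = rng.normal(jm, js, dN).sum()        # normal log-jumps, i.e.
                                               # log-normal jump amplitudes
    logS.append(logS[-1] + (mu[s] - 0.5 * sig[s]**2) * dt
                + sig[s] * np.sqrt(dt) * rng.normal() + jump)

print(f"final rate multiple: {np.exp(logS[-1]):.3f}")
```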

30. John, O. T., Jibasen, D., Abdulkadir, S. S., and Singla, S. "Modelling Service Times Using Some Beta-Based Compound Distribution". African Journal of Mathematics and Statistics Studies 7, No. 4 (October 28, 2024): 105–21. http://dx.doi.org/10.52589/ajmss-z8zxx03l.

Abstract:
The design of a queueing model involves modelling the arrival and service processes of the system. Conventionally, the arrival process is assumed to follow a Poisson distribution while service times are assumed to be exponentially distributed. Other distributions such as the Weibull, uniform, and log-normal have been used to model service times; however, generalized distributions have not been used in this regard. In recent times, attention has shifted to generalized families of distributions, including the beta generalized family, which led to the development of beta-based distributions. Distributions generated from a mixture of beta random variables are quite numerous in the literature, with little or no application to service times data. In this study, six beta-based compound distributions - the Beta-log-logistic (BLlogD), Beta-Weibull (BWeiD), Beta-Lomax (BLomD), Beta-exponential (BExD), Beta-Gompertz (BGomD), and Beta-log-normal (BLnormD) distributions - were compared with the classical service times model on four service times datasets. Maximum likelihood estimation was employed for the parameters of the selected models, while the Akaike Information Criterion (AIC), Consistent Akaike Information Criterion (CAIC), Bayesian Information Criterion (BIC), and Hannan-Quinn information criterion (HQIC) were employed to select the best model. CDFs, PDFs, and PP-plots were used to assess the fit of the suggested models. Results show that the Beta-exponential distribution (BExD) performed best for dataset I (AIC=640.3, CAIC=640.5, BIC=648.1, HQIC=643.4), the Beta-Weibull distribution (BWeiD) for datasets II and III (AIC=204.2, 2142.4; CAIC=204.2, 2142.7; BIC=212.8, 2154.9; HQIC=207.6, 2147.5), and the Beta-log-logistic distribution (BLlogD) for dataset IV (AIC=2275.3, CAIC=2275.5, BIC=2289.3, HQIC=2280.9). The findings reveal some useful beta-based compound distributions which performed better than the conventional service time model, and we recommend that researchers investigate them further in queueing theory.

31. Whitaker, Thomas, Francis Giesbrecht, and Jeremy Wu. "Suitability of Several Statistical Models to Simulate Observed Distribution of Sample Test Results in Inspections of Aflatoxin-Contaminated Peanut Lots". Journal of AOAC INTERNATIONAL 79, No. 4 (July 1, 1996): 981–88. http://dx.doi.org/10.1093/jaoac/79.4.981.

Abstract:
The acceptability of 10 theoretical distributions to simulate observed distribution of sample aflatoxin test results was evaluated by using 2 parameter estimation methods and 3 goodness of fit (GOF) tests. All theoretical distributions were compared with 120 observed distributions of aflatoxin test results of farmers' stock peanuts. For a given parameter estimation method and GOF test, the negative binomial distribution had the highest percentage of statistically acceptable fits. The log normal and Poisson-gamma (gamma shape parameter = 0.5) distributions had slightly fewer but an almost equal percentage of acceptable fits. For the 3 most acceptable statistical models, the negative binomial had the greatest percentage of best or closest fits. Both the parameter estimation method and the GOF test had an influence on which theoretical distribution had the largest number of acceptable fits. All theoretical distributions, except the negative binomial distribution, had more acceptable fits when model parameters were determined by the maximum likelihood method. The negative binomial had slightly more acceptable fits when model parameters were estimated by the method of moments. The results also demonstrated the importance of using the same GOF test for comparing the acceptability of several theoretical distributions.

32. Ötting, Marius, Roland Langrock, and Christian Deutscher. "Integrating multiple data sources in match-fixing warning systems". Statistical Modelling 18, No. 5-6 (November 18, 2018): 483–504. http://dx.doi.org/10.1177/1471082x18804933.

Abstract:
Recent years have seen several match-fixing scandals in soccer. In order to avoid match-fixing, existing literature and fraud detection systems primarily focus on analysing betting odds provided by bookmakers. In our work, we suggest to not only analyse odds but also total volume placed on bets, thereby making use of more of the information available. As a case study for our method, we consider the second division in Italian soccer, Serie B, since for this league it has effectively been proven that some matches were fixed, such that to some extent we can ground truth our approach. For the betting volume data, we use a flexible generalized additive model for location, scale and shape (GAMLSS), with log-normal response, to account for the various complex patterns present in the data. For the betting odds, we use a GAMLSS with bivariate Poisson response to model the number of goals scored by both teams, and to subsequently derive the corresponding odds. We then conduct outlier detection in order to flag suspicious matches. Our results indicate that monitoring both betting volumes and betting odds can lead to more reliable detection of suspicious matches.

33. Dunn, C. E., B. Rowlingson, R. S. Bhopal, and P. Diggle. "Meteorological conditions and incidence of Legionnaires' disease in Glasgow, Scotland: application of statistical modelling". Epidemiology and Infection 141, No. 4 (June 12, 2012): 687–96. http://dx.doi.org/10.1017/s095026881200101x.

Abstract:
This study investigated the relationships between Legionnaires' disease (LD) incidence and weather in Glasgow, UK, by using advanced statistical methods. Using daily meteorological data and 78 LD cases with known exact date of onset, we fitted a series of Poisson log-linear regression models with explanatory variables for air temperature, relative humidity, wind speed and year, and sine-cosine terms for within-year seasonal variation. Our initial model showed an association between LD incidence and 2-day lagged humidity (positive, P = 0.0236) and wind speed (negative, P = 0.033). However, after adjusting for year-by-year and seasonal variation in cases there were no significant associations with weather. We also used normal linear models to assess the importance of short-term, unseasonable weather values. The most significant association was between LD incidence and air temperature residual lagged by 1 day prior to onset (P = 0.0014). The contextual role of unseasonably high air temperatures is worthy of further investigation. Our methods and results have further advanced understanding of the role which weather plays in risk of LD infection.
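
The regression structure described here is easy to reproduce in outline: a Poisson log-linear model with weather covariates, year effects, and sine-cosine terms for within-year seasonality. A minimal statsmodels sketch on simulated data (all names and values are illustrative):

```python
# Sketch: Poisson log-linear regression of daily case counts on weather,
# with harmonic seasonal terms and year effects. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
n = 3 * 365
doy = np.arange(n) % 365
df = pd.DataFrame({
    "temp": 10 + 8 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 2, n),
    "humid": rng.uniform(60, 95, n),        # stand-in for lagged humidity
    "wind": rng.gamma(2.0, 2.0, n),
    "sin1": np.sin(2 * np.pi * doy / 365),  # within-year seasonal terms
    "cos1": np.cos(2 * np.pi * doy / 365),
    "year": np.arange(n) // 365,
})
df["cases"] = rng.poisson(np.exp(-3 + 0.02 * df["humid"] + 0.3 * df["sin1"]))

fit = smf.glm("cases ~ temp + humid + wind + sin1 + cos1 + C(year)",
              data=df, family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])
```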

34. Wu, Honghong, Shiyong Huang, Xin Wang, Liping Yang, and Zhigang Yuan. "Distribution and Anisotropy of the Energy Transfer Rate in the Solar Wind Turbulence". Astrophysical Journal 977, No. 1 (December 1, 2024): 94. https://doi.org/10.3847/1538-4357/ad90b2.

Abstract:
The distribution of the energy transfer rate is critical for the interpretation of the intermittent energy cascade in solar wind turbulence. However, the true observational distribution of the energy transfer rate in the solar wind and its anisotropy remain unknown. Here, we use a 7 day interval measured by Wind in the fast solar wind and investigate the distribution and anisotropy of the energy transfer rate based on the log-Poisson model. We find that the probability density distribution consists of two parts. The majority part lies at smaller values and is consistent with the log-normal distribution. The estimated mean value and standard deviation of the logarithmic energy transfer rate for the majority are both smaller in the direction parallel to the local mean magnetic field than in the perpendicular direction. The mean value displays a power-law shape with respect to the scale, with a flatter index in the parallel direction and a steeper index in the perpendicular direction. The minority part lies at larger values and expands as the scale decreases, indicating growing intermittency toward smaller scales. The flatness for the parallel logarithmic energy transfer rate is larger than that for the perpendicular, and it rises as the scale decreases in all directions, demonstrating the relatively longer tail of the distribution with decreasing scale. Our results provide new insight to help interpret the intermittent energy cascade process in solar wind turbulence.

35. Kenup, Caio Fittipaldi, Raíssa Sepulvida, Catharina Kreischer, and Fernando A. S. Fernandez. "Walking on their own legs: unassisted population growth of the agouti Dasyprocta leporina, reintroduced to restore seed dispersal in an Atlantic Forest reserve". Oryx 52, No. 3 (February 15, 2017): 571–78. http://dx.doi.org/10.1017/s0030605316001149.

Abstract:
Reintroduction of locally extirpated species is an increasingly popular conservation tool. However, few initiatives focus on the restoration of ecological processes. In addition, many reintroductions fail to conduct post-release monitoring, hampering both assessment of their success and implementation of adaptive management actions. In 2009 a reintroduction effort was initiated to re-establish a population of the red-rumped agouti Dasyprocta leporina, a scatter-hoarding rodent known to be an important disperser of large seeds, with the aim of restoring ecological processes at Tijuca National Park, south-east Brazil. To assess whether this reintroduced population established successfully we monitored it using mark–resighting during November 2013–March 2015. Population size and survival were estimated using a robust design Poisson-log normal mixed-effects mark–resight model. By March 2015 the number of wild-born individuals fluctuated around 30 and overall growth of the population was positive. As the reintroduced population is capable of unassisted growth, we conclude that the reintroduction has been successful in the medium term. We recommend the cessation of releases, with efforts redirected to continued monitoring, investigation and management of possible threats to the species' persistence, and to quantification of the re-establishment of ecological processes. Reintroduction of D. leporina populations can be a cost-effective tool to restore ecological processes, especially seed dispersal, in Neotropical forests.

36. Vinnik, Ilana, and Tatjana Miljkovic. "Modeling the Inter-Arrival Time Between Severe Storms in the United States Using Finite Mixtures". Risks 13, No. 2 (January 21, 2025): 19. https://doi.org/10.3390/risks13020019.

Abstract:
When inter-arrival times between events follow an exponential distribution, this implies a Poisson frequency of events, as both models assume events occur independently and at a constant average rate. However, these assumptions are often violated in real-insurance applications. When the rate at which events occur changes over time, the exponential distribution becomes unsuitable. In this paper, we study the distribution of inter-arrival times of severe storms, which exhibit substantial variability, violating the assumption of a constant average rate. A new approach is proposed for modeling severe storm recurrence patterns using a finite mixture of log-normal distributions. This approach effectively captures both frequent, closely spaced storm events and extended quiet periods, addressing the inherent variability in inter-event durations. Parameter estimation is performed using the Expectation–Maximization algorithm, with model selection validated via the Bayesian information criterion (BIC). To complement the parametric approach, Kaplan–Meier survival analysis was employed to provide non-parametric insights into storm-free intervals. Additionally, a simulation-based framework estimates storm recurrence probabilities and assesses financial risks through probable maximum loss (PML) calculations. The proposed methodology is applied to the Billion-Dollar Weather and Climate Disasters dataset, compiled by the U.S. National Oceanic and Atmospheric Administration (NOAA). The results demonstrate the model’s effectiveness in predicting severe storm recurrence intervals, offering valuable tools for managing risk in the property and casualty insurance industry.
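
Because a log-normal mixture on the original scale is a Gaussian mixture on the log scale, the EM-plus-BIC recipe described here can be sketched with scikit-learn; the inter-arrival times below are simulated stand-ins for the NOAA intervals.

```python
# Sketch: finite mixture of log-normals for storm inter-arrival times,
# fitted by EM on log(times), with components chosen by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(21)
times = np.concatenate([rng.lognormal(1.0, 0.5, 300),    # short gaps
                        rng.lognormal(4.0, 0.8, 100)])   # long quiet periods
X = np.log(times).reshape(-1, 1)

fits = {k: GaussianMixture(k, n_init=5, random_state=0).fit(X)
        for k in range(1, 5)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
gm = fits[best_k]
print("components chosen by BIC:", best_k)
print("weights:", gm.weights_.round(2),
      "log-scale means:", gm.means_.ravel().round(2))
```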

37. Tavakoli, Sahar, and Shibu Yooseph. "Learning a mixture of microbial networks using minorization–maximization". Bioinformatics 35, No. 14 (July 2019): i23–i30. http://dx.doi.org/10.1093/bioinformatics/btz370.

Abstract:
Motivation: The interactions among the constituent members of a microbial community play a major role in determining the overall behavior of the community and the abundance levels of its members. These interactions can be modeled using a network whose nodes represent microbial taxa and edges represent pairwise interactions. A microbial network is typically constructed from a sample-taxa count matrix that is obtained by sequencing multiple biological samples and identifying taxa counts. From large-scale microbiome studies, it is evident that microbial community compositions and interactions are impacted by environmental and/or host factors. Thus, it is not unreasonable to expect that a sample-taxa matrix generated as part of a large study involving multiple environmental or clinical parameters can be associated with more than one microbial network. However, to our knowledge, microbial network inference methods proposed thus far assume that the sample-taxa matrix is associated with a single network. Results: We present a mixture model framework to address the scenario when the sample-taxa matrix is associated with K microbial networks. This count matrix is modeled using a mixture of K multivariate Poisson log-normal distributions and parameters are estimated using a maximum likelihood framework. Our parameter estimation algorithm is based on the minorization–maximization principle combined with gradient ascent and block updates. Synthetic datasets were generated to assess the performance of our approach on absolute count data, compositional data and normalized data. We also addressed the recovery of sparse networks based on an l1-penalty model. Availability and implementation: MixMPLN is implemented in R and is freely available at https://github.com/sahatava/MixMPLN. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO and other citation styles
38

Hirsch, Katharina, Andreas Wienke, and Oliver Kuss. "Log-normal frailty models fitted as Poisson generalized linear mixed models". Computer Methods and Programs in Biomedicine 137 (December 2016): 167–75. http://dx.doi.org/10.1016/j.cmpb.2016.09.009.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
39

Tehreem, Zara, Zulfiqar Ali, Nadhir Al-Ansari, Rizwan Niaz, Ijaz Hussain, and Saad Sh Sammen. "A Novel Appraisal Protocol for Spatiotemporal Patterns of Rainfall by Reconnaissance the Precipitation Concentration Index (PCI) with Global Warming Context". Mathematical Problems in Engineering 2022 (04.08.2022): 1–9. http://dx.doi.org/10.1155/2022/3012100.

Full text of the source
Annotation:
In a global warming context, the continuing rise in temperature triggers environmental, economic, and ecological challenges, with severe effects on energy, agriculture, and socioeconomic structures. Moreover, the strong correlation between temperature and changing rainfall patterns greatly influences the natural water cycle. It is therefore necessary to examine the spatiotemporal variation of precipitation to improve precipitation monitoring systems, which in turn supports planning for flood control and water resource management. Given the importance of spatiotemporal assessment of precipitation, this study proposes a new method, the regional contextual precipitation concentration index (RCPCI), to analyze spatial-temporal patterns of annual rainfall intensity by revisiting the precipitation concentration index (PCI) in the global warming context. The study modifies the existing PCI by incorporating temperature as auxiliary information. Based on spatial and non-spatial correlation analyses, the study compares the performance of RCPCI and PCI at 45 meteorological stations in Pakistan. Tjøstheim's coefficient and the modified t-test are used to test and estimate the spatial correlation between the two indices. In addition, a Poisson log-normal spatial model is used to assess the spatial distribution of each rainfall pattern. The results show that the proposed method is an efficient substitute for PCI under global warming when temperature data are available, and can help uncover yearly rainfall patterns in support of accurate climate and precipitation mitigation policies.
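For reference, the classical PCI that this paper extends is a one-line computation over the twelve monthly precipitation totals; the temperature-augmented RCPCI itself is not reproduced here. A sketch of the standard index (Oliver's annual formulation, which I take to be the baseline the authors modify):

```python
import numpy as np

def pci(monthly_precip):
    """Classical precipitation concentration index (Oliver, 1980):
    PCI = 100 * sum(p_i^2) / (sum(p_i))^2 over the 12 monthly totals.
    A value near 8.3 indicates uniform rainfall across the year;
    larger values indicate increasingly concentrated rainfall."""
    p = np.asarray(monthly_precip, dtype=float)
    return 100.0 * np.sum(p**2) / np.sum(p)**2

print(pci([50] * 12))            # uniform rainfall -> 8.33
print(pci([0] * 10 + [300, 300]))  # rain in two months only -> 50.0
```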
APA, Harvard, Vancouver, ISO and other citation styles
40

Seo, Eun-Young, Tae-Seok Ahn, and Young-Gun Zo. "Agreement, Precision, and Accuracy of Epifluorescence Microscopy Methods for Enumeration of Total Bacterial Numbers". Applied and Environmental Microbiology 76, no. 6 (22.01.2010): 1981–91. http://dx.doi.org/10.1128/aem.01724-09.

Full text of the source
Annotation:
To assess the interchangeability of estimates of bacterial abundance obtained by different epifluorescence microscopy methods, total bacterial numbers (TBNs) determined by the most widely accepted protocols were statistically compared. Bacteria in a set of distinctive samples were stained with acridine orange (AO), 4′-6-diamidino-2-phenylindole (DAPI), and BacLight and enumerated by visual counting (VC) and supervised image analysis (IA). Model II regression and Bland-Altman analysis showed general agreement between the IA and VC methods, although IA counts tended to be lower than VC counts by 7% on a logarithmic scale. Distributions of cells and latex beads on polycarbonate filters were best fitted by negative binomial models rather than by Poisson or log-normal models. The fitted models revealed higher precision of TBNs by the IA method than by the VC method. In pairwise comparisons of the staining methods, TBNs by AO and BacLight staining showed good agreement with each other, but DAPI staining tended to underestimate. Although the precisions of the three staining methods were comparable to one another (intraclass correlation coefficients, 0.97 to 0.98), the accuracy of the DAPI staining method was called into question by disproportionate TBNs between pairs of samples that carried 2-fold different volumes of identical cell suspensions. It was concluded that the TBN values estimated by AO and BacLight staining are relatively accurate and interchangeable for quantitative interpretation and that IA provides better precision than VC. As a prudent measure, it is suggested that DAPI staining be avoided in comparative studies investigating the accuracy of novel cell-counting methods.
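The distribution-fitting step described here (negative binomial versus Poisson fits to per-field counts, ranked by an information criterion) can be sketched as follows. The data are synthetic, and the grid-profiled likelihood is a simplification of a full maximum-likelihood fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic per-field cell counts on a filter (overdispersed on purpose).
counts = rng.negative_binomial(n=5, p=0.2, size=400)

# Poisson log-likelihood at the MLE (lambda = sample mean).
lam = counts.mean()
ll_pois = stats.poisson.logpmf(counts, lam).sum()

# Negative binomial: profile the size parameter r over a grid, fixing the
# success probability so the NB mean matches the sample mean.
def nb_ll(r):
    return stats.nbinom.logpmf(counts, r, r / (r + lam)).sum()

ll_nb = max(nb_ll(r) for r in np.linspace(0.1, 50, 500))

print("AIC Poisson:", 2 * 1 - 2 * ll_pois)       # one free parameter
print("AIC neg. binomial:", 2 * 2 - 2 * ll_nb)   # two free parameters
```

With overdispersed counts like these, the negative binomial's AIC comes out clearly lower, mirroring the paper's finding that cell distributions on filters are better described by negative binomial than by Poisson or log-normal models.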
APA, Harvard, Vancouver, ISO and other citation styles
41

Wu, Hao, Xinwei Deng, and Naren Ramakrishnan. "Sparse estimation of multivariate Poisson log-normal models from count data". Statistical Analysis and Data Mining: The ASA Data Science Journal 11, no. 2 (10.01.2018): 66–77. http://dx.doi.org/10.1002/sam.11370.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
42

Pipa, Gordon, Sonja Grün, and Carl van Vreeswijk. "Impact of Spike Train Autostructure on Probability Distribution of Joint Spike Events". Neural Computation 25, no. 5 (May 2013): 1123–63. http://dx.doi.org/10.1162/neco_a_00432.

Full text of the source
Annotation:
The discussion of whether temporally coordinated spiking activity really exists and whether it is relevant has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that have been suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability of false positives if a process type other than the one actually present is assumed. We also determine how manipulations of such spike trains, here dithering as used for the generation of surrogate data, change the distribution of coincident events and influence significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed structure of the spike trains, as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
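A Monte Carlo estimate in the spirit of this study can be sketched: simulate pairs of independent gamma renewal processes with a chosen ISI coefficient of variation and tabulate the distribution (and Fano factor) of coincident spike events. The firing rate, CV, bin width, and trial length below are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 10.0  # trial duration in seconds

def gamma_renewal_train(rate, cv):
    """Spike times from a gamma renewal process with a given firing rate
    and ISI coefficient of variation (shape = 1/CV^2; CV = 1 is Poisson)."""
    shape = 1.0 / cv**2
    scale = 1.0 / (rate * shape)  # mean ISI = 1/rate
    isis = rng.gamma(shape, scale, size=int(3 * rate * T))
    spikes = np.cumsum(isis)
    return spikes[spikes < T]

def coincidences(a, b, bin_width=0.005):
    """Number of 5 ms bins occupied in both trains (joint spike events)."""
    edges = np.arange(0.0, T + bin_width, bin_width)
    ca, _ = np.histogram(a, edges)
    cb, _ = np.histogram(b, edges)
    return int(np.sum((ca > 0) & (cb > 0)))

# Coincidence-count distribution for independent non-Poisson (CV=0.5) trains.
n_coinc = np.array([coincidences(gamma_renewal_train(20.0, 0.5),
                                 gamma_renewal_train(20.0, 0.5))
                    for _ in range(2000)])
print("mean:", n_coinc.mean(), "Fano factor:", n_coinc.var() / n_coinc.mean())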
APA, Harvard, Vancouver, ISO and other citation styles
43

Harnau, Jonas. "Misspecification Tests for Log-Normal and Over-Dispersed Poisson Chain-Ladder Models". Risks 6, no. 2 (23.03.2018): 25. http://dx.doi.org/10.3390/risks6020025.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
44

Spratt, Belinda, Erhan Kozan, and Michael Sinnott. "Analysis of uncertainty in the surgical department: durations, requests and cancellations". Australian Health Review 43, no. 6 (2019): 706. http://dx.doi.org/10.1071/ah18082.

Full text of the source
Annotation:
Objective Analytical techniques are being implemented with increasing frequency to improve the management of surgical departments and to ensure that decisions are well informed. Often these analytical techniques rely on the validity of underlying statistical assumptions, including those around choice of distribution when modelling uncertainty. The aim of the present study was to determine a set of suitable statistical distributions and provide recommendations to assist hospital planning staff, based on three full years of historical data. Methods Statistical analysis was performed to determine the most appropriate distributions and models in a variety of surgical contexts. Data from 2013 to 2015 were collected from the surgical department at a large Australian public hospital. Results A log-normal distribution approximation of the total duration of surgeries in an operating room is appropriate when considering the probability of overtime. Surgical requests can be modelled as a Poisson process with rate dependent on urgency and day of the week. Individual cancellations could be modelled as Bernoulli trials, with the probability of patient-, staff- and resource-based cancellations provided herein. Conclusions The analysis presented herein can be used to ensure that assumptions surrounding planning and scheduling in the surgical department are valid. Understanding the stochasticity in the surgical department may result in the implementation of more realistic decision models. What is known about the topic? Many surgical departments rely on crude estimates and general intuition to predict surgical duration, surgical requests (both elective and non-elective) and cancellations. What does this paper add? This paper describes how statistical analysis can be performed to validate common assumptions surrounding surgical uncertainty. The paper also provides a set of recommended distributions and associated parameters that can be used to model uncertainty in a large public hospital's surgical department. What are the implications for practitioners? The insights on surgical uncertainty provided here will prove valuable for administrative staff who want to incorporate uncertainty in their surgical planning and scheduling decisions.
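The recommended distributions translate directly into planning calculations. A minimal sketch with invented parameters (the paper reports hospital-specific fits, which are not reproduced here):

```python
import numpy as np
from scipy import stats

# Illustrative parameters only; the paper's fitted values are not shown here.
mu, sigma = np.log(400), 0.35   # log-normal total theatre time (minutes)
session_len = 480               # scheduled session length (minutes)

# P(overtime) = P(total duration > session length) under the log-normal.
p_over = stats.lognorm.sf(session_len, s=sigma, scale=np.exp(mu))
print(f"P(overtime) = {p_over:.3f}")

# Surgical requests as a Poisson process with day-of-week dependent rate.
rates = {"Mon": 12.0, "Tue": 10.5, "Wed": 11.0}  # requests/day, illustrative
rng = np.random.default_rng(4)
requests = {d: rng.poisson(lam) for d, lam in rates.items()}
print(requests)

# Cancellations as Bernoulli trials with cause-specific probabilities.
p_cancel = {"patient": 0.05, "staff": 0.02, "resource": 0.03}
cancelled = {c: rng.binomial(n=requests["Mon"], p=p) for c, p in p_cancel.items()}
print(cancelled)
```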
APA, Harvard, Vancouver, ISO and other citation styles
45

Ricciardo Lamonica, Giuseppe. "The Log Normal and the Poisson Gravity Models in the Analysis of Interactions Phenomena". American Journal of Theoretical and Applied Statistics 4, no. 4 (2015): 291. http://dx.doi.org/10.11648/j.ajtas.20150404.19.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
46

Park, Ariel, Bhaskar Thakur, and Safia Khan. "0738 Association of socio-demographic determinants, smoking, and obesity with sleep trouble among women of reproductive age". SLEEP 46, Supplement_1 (01.05.2023): A325. http://dx.doi.org/10.1093/sleep/zsad077.0738.

Full text of the source
Annotation:
Introduction While sleep quality in women is well known to be affected by hormonal factors such as the menstrual cycle, pregnancy, and the postpartum period, the connections between their sleep and socio-demographic characteristics are not widely studied. Our objective was to examine self-reported sleep problems among women of reproductive age in the US and their association with socio-demographic determinants of health as well as other health-related factors, using the National Health and Nutrition Examination Survey (NHANES). Methods We analyzed secondary data on 5,610 female participants aged 15 to 49 years from the NHANES conducted during 2013–2018, examining the prevalence of self-reported sleep trouble, socio-demographic determinants (age, ethnicity, marital status, education, and income), and the presence of smoking and obesity. A design-based F-test and a survey-based generalized linear model with Poisson family and log link function were used to estimate adjusted and unadjusted prevalence ratios (PRs) and the statistical significance of associated factors. Results The likelihood of trouble sleeping for those aged 41 to 49 was significantly higher than for those aged 15 to 20 (PR 1.77 [95% CI 1.14-2.77]). Minority women, including non-Hispanic Asian (PR 0.46 [95% CI 0.32-0.67]), Hispanic (PR 0.63 [95% CI 0.46-0.86]), and non-Hispanic Black (PR 0.73 [95% CI 0.54-0.99]) women, were significantly less likely to report trouble sleeping than the non-Hispanic multiracial group. With respect to marital status, married individuals reported less trouble sleeping (PR 0.83 [95% CI 0.80-0.99]) than those who never married. Smoking was also associated with more trouble sleeping (PR 1.43 [95% CI 1.27-1.61]). The prevalence of reported trouble sleeping was significantly higher for obese women than for those with normal weight (PR 1.25 [95% CI 1.07-1.46]). There were no statistically significant differences in reported sleep trouble by education status or income. Conclusion Using this large-scale population-based cross-sectional survey, we found that sleep disturbance was independently and positively associated with higher age, being unmarried, smoking, and obesity among women of reproductive age.
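The analysis described here, a generalized linear model with Poisson family and log link fitted to a binary outcome, yields prevalence ratios directly. A minimal sketch with synthetic data and invented variable names (the survey weights, strata, and design-based testing of the actual NHANES analysis are omitted; robust errors stand in for the Poisson variance misfit on binary data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 2000
smoker = rng.binomial(1, 0.20, n)
obese = rng.binomial(1, 0.35, n)
p_true = 0.20 + 0.08 * smoker + 0.05 * obese   # true prevalences, illustrative
df = pd.DataFrame({"trouble_sleep": rng.binomial(1, p_true),
                   "smoker": smoker, "obese": obese})

# Poisson family with log link on a binary outcome estimates prevalence
# ratios; HC-robust ("sandwich") standard errors correct the variance.
model = smf.glm("trouble_sleep ~ smoker + obese", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(model.params))        # prevalence ratios
print(np.exp(model.conf_int()))    # 95% CIs on the PR scale
```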
APA, Harvard, Vancouver, ISO and other citation styles
47

Yong, Benny, Farah Kristiani, Robyn Irawan, and Marcellus. "Bayesian Poisson-gamma and Log-normal models for estimating the relative risks of health insurance claims due to dengue disease in Bandung". Journal of Statistics and Management Systems 23, no. 8 (05.08.2020): 1497–512. http://dx.doi.org/10.1080/09720510.2020.1741224.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
48

Al-Ameri, Nagham Jasim. "Perforation location optimization through 1-D mechanical earth model for high-pressure deep formations". Journal of Petroleum Exploration and Production Technology 11, no. 12 (04.10.2021): 4243–52. http://dx.doi.org/10.1007/s13202-021-01314-y.

Full text of the source
Annotation:
Selecting the optimum perforation location is important for improving well production and hence for the reservoir development process, especially for unconventional high-pressure formations such as those under study. Reservoir geomechanics is one of the key factors in finding the optimal perforation location. This study aims to detect the optimum perforation location by investigating changes in geomechanical properties and wellbore stress for high-pressure formations and by studying how the behavior of different stress types differs between normally and abnormally pressured formations. The calculations are achieved by building a one-dimensional mechanical earth model using data from four deep abnormal wells located in southern Iraqi oil fields. The magnitudes of the different stress types and the geomechanical properties were estimated from well-log data using the Techlog software. The directions of the horizontal stresses were determined in the studied wells using image-log formation micro-imager (FMI) and caliper logs. In terms of rock mechanical properties, the results showed a reduction in Poisson's ratio, Young's modulus, and bulk modulus near the high-pressure zones compared with the normal-pressure zones, owing to the presence of anhydrite, salt cycles, and shales. Low maximum and minimum horizontal stress values were also observed in the high-pressure zones compared with the normal-pressure zones, indicating the effect of geomechanical properties on horizontal stress estimation. Around the wellbores of the studied wells, formation breakouts are the most likely scenario according to the wellbore stress state (effective vertical stress (σzz) > effective tangential stress (σθθ) > effective radial stress (σrr)).
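Of the quantities discussed, the dynamic Poisson's ratio has a compact closed form from sonic logs. A sketch of the standard elastic relation (the velocities below are illustrative, not values from the study's wells, and Techlog computes this internally):

```python
import numpy as np

def dynamic_poissons_ratio(vp, vs):
    """Dynamic Poisson's ratio from compressional (vp) and shear (vs)
    sonic velocities: nu = (vp^2 - 2*vs^2) / (2*(vp^2 - vs^2))."""
    vp, vs = np.asarray(vp, float), np.asarray(vs, float)
    return (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))

# Illustrative velocities in m/s; real inputs come from the well logs.
print(dynamic_poissons_ratio([4500, 3800], [2600, 2400]))  # ~0.25, ~0.17
```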
APA, Harvard, Vancouver, ISO and other citation styles
50

Irawan, R., B. Yong, and F. Kristiani. "Non-Spatial Analysis of Relative Risk of Dengue Disease in Bandung Using Poisson-gamma and Log-normal Models: A Case Study of Dengue Data from Santo Borromeus Hospital in 2013". Journal of Physics: Conference Series 812 (February 2017): 012034. http://dx.doi.org/10.1088/1742-6596/812/1/012034.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles