To see the other types of publications on this topic, follow the link: Estimates.

Journal articles on the topic 'Estimates'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Estimates.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Little, Roderick J., and Roger J. Lewis. "Estimands, Estimators, and Estimates." JAMA 326, no. 10 (September 14, 2021): 967. http://dx.doi.org/10.1001/jama.2021.2886.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Beauducel, André, Christopher Harms, and Norbert Hilger. "Reliability Estimates for Three Factor Score Estimators." International Journal of Statistics and Probability 5, no. 6 (October 26, 2016): 94. http://dx.doi.org/10.5539/ijsp.v5n6p94.

Full text
Abstract:
Estimates for the reliability of Thurstone’s regression factor score estimator, Bartlett’s factor score estimator, and McDonald’s factor score estimator were proposed. Moreover, conditions for equal reliability of the factor score estimators were presented and the reliability estimates were compared by means of simulation studies. Under conditions inducing unequal reliabilities, reliability estimates were largest for the regression score estimator and lowest for McDonald’s factor score estimator. We provide an R-script and an SPSS-script for the computation of the respective reliability estimates.
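For context only (these are not the reliability estimates proposed in the paper), a standard related quantity is the factor score determinacy of Thurstone's regression estimator, i.e. the squared multiple correlation of each factor with the observed variables. A minimal sketch under the usual standardized-factor assumptions, with a hypothetical one-factor model:

```python
import numpy as np

def regression_score_determinacy(loadings, factor_corr, uniquenesses):
    """Squared multiple correlation of factors with observed variables.

    Textbook determinacy of Thurstone's regression factor score estimator:
    diag(Phi Lambda' Sigma^-1 Lambda Phi), with Sigma the model-implied
    covariance matrix. A related standard quantity, not the reliability
    estimates proposed by Beauducel, Harms, and Hilger.
    """
    L = np.asarray(loadings, dtype=float)        # p x q loading matrix
    Phi = np.asarray(factor_corr, dtype=float)   # q x q factor correlations
    Psi = np.diag(uniquenesses)                  # p x p unique variances
    Sigma = L @ Phi @ L.T + Psi                  # model-implied covariance
    M = Phi @ L.T @ np.linalg.solve(Sigma, L) @ Phi
    return np.diag(M)

# Hypothetical one-factor model with six indicators.
load = [[0.7], [0.7], [0.6], [0.6], [0.5], [0.5]]
uniq = [1 - l[0] ** 2 for l in load]
print(regression_score_determinacy(load, [[1.0]], uniq).round(3))
```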
APA, Harvard, Vancouver, ISO, and other styles
3

Zeh, Judith E., and Andre E. Punt. "Updated 1978-2001 abundance estimates and their correlations for the Bering-Chukchi-Beaufort Seas stock of bowhead whales." J. Cetacean Res. Manage. 7, no. 2 (March 15, 2023): 169–75. http://dx.doi.org/10.47536/jcrm.v7i2.750.

Full text
Abstract:
The method of Cooke (1996) and Punt and Butterworth (1999) for computing abundance estimates for bowhead whales of the Bering-Chukchi-Beaufort Seas stock is reviewed. These abundance estimates are computed from estimates N4 of the number of whales that passed within the 4 km visual range of the observation ‘perch’ from which the whales are counted, the estimated proportions P4 of the whales that passed within this range and the estimated standard errors (SE) of N4 and P4. Errors discovered while assembling the data used in developing previous estimates were corrected, and new estimated detection probabilities, N4 and P4 values and SEs were computed using the corrected data. The method of Cooke (1996) and Punt and Butterworth (1999) was then applied. The resulting 2001 abundance estimate was 10,545 (95% confidence interval 8,200 to 13,500), extremely close to the 2001 N4/P4 abundance estimate of 10,470 (95% confidence interval 8,100 to 13,500) (George et al., 2004). The estimated rate of increase of this population from 1978 to 2001 was 3.4% per year (95% confidence interval 1.7% to 5%).
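The N4/P4 estimate described above is a ratio whose uncertainty reflects the standard errors of both components. As a rough illustration only (not the Cooke/Punt–Butterworth method itself), a delta-method sketch of the ratio and a log-normal style confidence interval, assuming N4 and P4 are independent and using hypothetical inputs:

```python
import math

def ratio_abundance(n4, se_n4, p4, se_p4, z=1.96):
    """Illustrative N4/P4 abundance estimate with a delta-method CI.

    Assumes independence of N4 and P4 and builds a log-normal interval
    from the combined CV; a sketch, not the authors' estimation procedure.
    """
    estimate = n4 / p4
    cv = math.sqrt((se_n4 / n4) ** 2 + (se_p4 / p4) ** 2)  # combined CV
    c = math.exp(z * math.sqrt(math.log(1.0 + cv ** 2)))   # log-normal CI factor
    return estimate, estimate / c, estimate * c

# Hypothetical inputs, purely for illustration.
est, lo, hi = ratio_abundance(n4=8600, se_n4=600, p4=0.82, se_p4=0.05)
print(f"abundance ~ {est:.0f} (95% CI {lo:.0f}-{hi:.0f})")
```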
APA, Harvard, Vancouver, ISO, and other styles
4

Mead, J. L. "Discontinuous parameter estimates with least squares estimators." Applied Mathematics and Computation 219, no. 10 (January 2013): 5210–23. http://dx.doi.org/10.1016/j.amc.2012.11.067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Adler, Robert F., Guojun Gu, and George J. Huffman. "Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)." Journal of Applied Meteorology and Climatology 51, no. 1 (January 2012): 84–99. http://dx.doi.org/10.1175/jamc-d-11-052.1.

Full text
Abstract:
A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within ±50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation σ of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (σ/μ, where μ is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%–15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (σ) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet’s mean precipitation.
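As a loose illustration of the selection-and-spread idea in this abstract (not the GPCP team's actual processing), one could include, in each zone, only the products whose values fall within ±50% of the GPCP estimate, then report their standard deviation σ as the bias error and σ/μ as the relative error:

```python
import numpy as np

def bias_error_estimate(gpcp, products, tol=0.5):
    """Sketch of a multi-product bias-error estimate.

    gpcp:     1-D array of zonal-mean precipitation from the base product.
    products: 2-D array (n_products, n_zones) of other estimates.
    A product is included where it lies within +/-50% of the GPCP value;
    the standard deviation of the included values is the estimated bias error.
    """
    gpcp = np.asarray(gpcp, dtype=float)
    products = np.asarray(products, dtype=float)
    within = np.abs(products - gpcp) <= tol * gpcp      # inclusion mask
    masked = np.where(within, products, np.nan)
    sigma = np.nanstd(masked, axis=0)                    # estimated bias error
    return sigma, sigma / gpcp                           # sigma and sigma/mu

# Hypothetical zonal means (mm/day), purely illustrative.
sigma, rel = bias_error_estimate([3.0, 2.0], [[2.8, 2.4], [3.4, 1.7], [5.2, 2.1]])
print(sigma.round(3), rel.round(3))
```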
APA, Harvard, Vancouver, ISO, and other styles
6

Luo, Yong, and Dimiter M. Dimitrov. "A Short Note on Obtaining Point Estimates of the IRT Ability Parameter With MCMC Estimation in Mplus: How Many Plausible Values Are Needed?" Educational and Psychological Measurement 79, no. 2 (May 29, 2018): 272–87. http://dx.doi.org/10.1177/0013164418777569.

Full text
Abstract:
Plausible values can be used to either estimate population-level statistics or compute point estimates of latent variables. While it is well known that five plausible values are usually sufficient for accurate estimation of population-level statistics in large-scale surveys, the minimum number of plausible values needed to obtain accurate latent variable point estimates is unclear. This is especially relevant when an item response theory (IRT) model is estimated with MCMC (Markov chain Monte Carlo) methods in Mplus and point estimates of the IRT ability parameter are of interest, as Mplus only estimates the posterior distribution of each ability parameter. In order to obtain point estimates of the ability parameter, a number of plausible values can be drawn from the posterior distribution of each individual ability parameter and their mean (the posterior mean ability estimate) can be used as an individual ability point estimate. In this note, we conducted a simulation study to investigate how many plausible values were needed to obtain accurate posterior mean ability estimates. The results indicate that 20 is the minimum number of plausible values required to obtain point estimates of the IRT ability parameter that are comparable to marginal maximum likelihood estimation (MMLE)/expected a posteriori (EAP) estimates. A real dataset was used to demonstrate the comparison between MMLE/EAP point estimates and posterior mean ability estimates based on different numbers of plausible values.
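A minimal sketch of the idea, outside of Mplus and with hypothetical posterior draws: average M plausible values drawn from each examinee's ability posterior and compare how stable that mean is as M grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws of theta for three examinees (rows), e.g. 5000
# retained MCMC iterations per examinee; illustrative only, not Mplus output.
posterior_draws = rng.normal(loc=[[-0.5], [0.1], [1.2]], scale=0.4, size=(3, 5000))

for m in (5, 10, 20, 40):
    # Draw m plausible values per examinee; their mean is the point estimate.
    pv = posterior_draws[:, rng.integers(0, posterior_draws.shape[1], size=m)]
    theta_hat = pv.mean(axis=1)
    print(m, np.round(theta_hat, 3))
```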
APA, Harvard, Vancouver, ISO, and other styles
7

OMAR, MH, and AM MEHANNA. "Comparison of measured and estimated crop evapotranspiration over Egypt." MAUSAM 37, no. 2 (April 11, 2022): 153–58. http://dx.doi.org/10.54302/mausam.v37i2.2216.

Full text
Abstract:
Mehanna (1976) estimated potential evapotranspiration (PE) for a number of meteorological stations in Egypt, using Penman's method with adjustment of the constants of the radiation term and the aerodynamic term, such that they would agree with measurements of radiation in Egypt and with estimates by Omar (1971) of PE in a large field at Giza. Omar and Mehanna (1984) compared seasonal measurements of PE at Bahtim (near Cairo) using potential evapotranspirometers with Mehanna's estimates of PE at Bahtim and with estimates by the methods given in the FAO Irrigation and Drainage Paper No. 24 on "Crop Water Requirements" by Doorenbos and Pruitt (1977). The main features of the comparisons were that Mehanna's and three of the FAO estimates (Blaney-Criddle, radiation, and pan evaporation) are within ±10% of the measurements, while the Penman estimate was 15% higher. Mehanna's estimates of PE were used to calculate ET crop (as defined in the FAO paper) for 4 main crops in Egypt [cotton, maize, wheat and berseem (clover)] at 9 meteorological stations, using crop coefficients given in the FAO paper. The estimated ET crop values at meteorological stations enabled calculation of ET crop at a number of agricultural research stations. Estimates of ET crop were compared with measurements of crop evapotranspiration in conditions similar to those of ET crop, and also with measurements in all conditions including those of ET crop. The average ratio, for the four crops, of measured to estimated evapotranspiration was 0.95 and 0.80 respectively. Average ratios were also given corresponding to cases when the FAO Blaney-Criddle, radiation and Penman methods were used to estimate PE. It is concluded that the comparisons probably confirm the reliability of applying Mehanna's estimates of PE with the crop coefficients given in the FAO paper to estimate ET crop over Egypt.
APA, Harvard, Vancouver, ISO, and other styles
8

Bowen, W. D., R. A. Myers, and K. Hay. "Abundance Estimation of a Dispersed, Dynamic Population: Hooded Seals (Cystophora cristata) in the Northwest Atlantic." Canadian Journal of Fisheries and Aquatic Sciences 44, no. 2 (February 1, 1987): 282–95. http://dx.doi.org/10.1139/f87-037.

Full text
Abstract:
Pup production of hooded seals (Cystophora cristata) in the Northwest Atlantic was estimated by aerial survey. Simultaneous surveys and the collection of ground-truth data were conducted in March 1984 in both major whelping areas, namely the floe ice in the Davis Strait and off northeastern Newfoundland (the Front). Abundance estimates were obtained from both fixed-wing photographic and helicopter sighting surveys using a strip survey method for unequal-sized sampling units. These abundance estimates were corrected to account for pups which had left the ice and those pups which had yet to be born in each area. A maximum likelihood method was used to combine estimates of abundance from several surveys with estimates of the number of pups in each developmental stage to obtain an estimate of total production. This method weighted each survey point estimate of abundance by the estimated sampling variance and each estimate of the proportion of pups on the ice in each stage by the sample size corrected for loss of degrees of freedom associated with the sampling design. Total production at the Front was estimated to be 62 400 with 95% confidence limits of 43 700 to 89 400 and in the Davis Strait was 19 000 with a 95% confidence interval of 14 000 to 23 000. Total pup production estimates for the Front and Davis Strait are likely underestimates for several reasons, but are substantially higher than those previously assumed for the Northwest Atlantic.
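The weighting idea in this abstract, combining several survey estimates with weights proportional to the inverse of their estimated sampling variances, can be sketched generically as follows (a plain inverse-variance combination, not the authors' full maximum likelihood model, which additionally handles developmental-stage proportions):

```python
import math

def combine_estimates(estimates, std_errors):
    """Inverse-variance weighted combination of independent survey estimates.

    Generic precision weighting: weight_i = 1 / SE_i^2; the combined SE is
    sqrt(1 / sum(weights)). Inputs below are hypothetical.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    total_w = sum(weights)
    combined = sum(w * est for w, est in zip(weights, estimates)) / total_w
    combined_se = math.sqrt(1.0 / total_w)
    return combined, combined_se

# Hypothetical pup production estimates from two surveys and their SEs.
print(combine_estimates([60000, 66000], [9000, 12000]))
```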
APA, Harvard, Vancouver, ISO, and other styles
9

Rahman, Mohammad Lutfor, Steven G. Gilmour, Peter J. Zemroch, and Pauline R. Ziman. "Bayesian analysis of fuel economy experiments." Journal of Statistical Research 54, no. 1 (August 25, 2020): 43–63. http://dx.doi.org/10.47302/jsr.2020540103.

Full text
Abstract:
Statistical analysts can encounter difficulties in obtaining point and interval estimates for fixed effects when sample sizes are small and there are two or more error strata to consider. Standard methods can lead to certain variance components being estimated as zero which often seems contrary to engineering experience and judgement. Shell Global Solutions (UK) has encountered such challenges and is always looking for ways to make its statistical techniques as robust as possible. In this instance, the challenge was to estimate fuel effects and confidence limits from small-sample fuel economy experiments where both test-to-test and day-to-day variation had to be taken into account. Using likelihood-based methods, the experimenters estimated the day-to-day variance component to be zero which was unrealistic. The reason behind this zero estimate is that the data set is not large enough to estimate it reliably. The experimenters were also unsure about the fixed parameter estimates obtained by likelihood methods in linear mixed models. In this paper, we looked for an alternative to compare the likelihood estimates against and found the Bayesian platform to be appropriate. Bayesian methods assuming some non-informative and weakly informative priors enable us to compare the parameter estimates and the variance components. Profile likelihood and bootstrap based methods verified that the Bayesian point and interval estimates were not unreasonable. Also, simulation studies have assessed the quality of likelihood and Bayesian estimates in this study.
APA, Harvard, Vancouver, ISO, and other styles
10

Frehlich, Rod, and Larry Cornman. "Estimating Spatial Velocity Statistics with Coherent Doppler Lidar." Journal of Atmospheric and Oceanic Technology 19, no. 3 (March 1, 2002): 355–66. http://dx.doi.org/10.1175/1520-0426-19.3.355.

Full text
Abstract:
The spatial statistics of a simulated turbulent velocity field are estimated using radial velocity estimates from simulated coherent Doppler lidar data. The structure functions from the radial velocity estimates are processed to estimate the energy dissipation rate ε and the integral length scale Li, assuming a theoretical model for isotropic wind fields. The performance of the estimates is described by their bias, standard deviation, and percentiles. The estimates of ε^(2/3) are generally unbiased and robust. The distribution of the estimates of Li is highly skewed; however, the median of the distribution is generally unbiased. The effects of the spatial averaging by the atmospheric movement transverse to the lidar beam during the dwell time of each radial velocity estimate are determined, as well as the error scaling as a function of the dimensions of the total measurement region. Accurate estimates of Li require very large measurement domains in order to observe a large number of independent samples of the spatial scales that define Li.
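The core quantity is a structure function of the radial velocity estimates, fit against the Kolmogorov two-thirds law to recover ε^(2/3). A simplified one-dimensional sketch follows, assuming evenly spaced range gates, ideal inertial-range behaviour, and no correction for estimation noise or beam averaging (which the paper does handle):

```python
import numpy as np

def structure_function(v, max_lag):
    """D(s) = <(v(x+s) - v(x))^2> for lags 1..max_lag (in gate units)."""
    return np.array([np.mean((v[s:] - v[:-s]) ** 2) for s in range(1, max_lag + 1)])

def dissipation_from_sf(v, gate_spacing, max_lag=20, ck=2.0):
    """Rough eps^(2/3) estimate from D(r) ~ ck * eps^(2/3) * r^(2/3).

    ck is the longitudinal structure-function constant (~2); noise and
    averaging corrections are ignored in this sketch.
    """
    lags = np.arange(1, max_lag + 1) * gate_spacing
    d = structure_function(np.asarray(v, dtype=float), max_lag)
    return np.mean(d / (ck * lags ** (2.0 / 3.0)))

# Hypothetical radial velocity profile (m/s) at 72 m gate spacing.
rng = np.random.default_rng(0)
v = np.cumsum(rng.normal(0, 0.2, size=200))
print(dissipation_from_sf(v, gate_spacing=72.0))
```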
APA, Harvard, Vancouver, ISO, and other styles
11

Stenson, G. B., R. A. Myers, M. O. Hammill, I. H. Ni, W. G. Warren, and M. C. S. Kingsley. "Pup Production of Harp Seals, Phoca groenlandica, in the Northwest Atlantic." Canadian Journal of Fisheries and Aquatic Sciences 50, no. 11 (November 1, 1993): 2429–39. http://dx.doi.org/10.1139/f93-267.

Full text
Abstract:
Northwest Atlantic harp seal, Phoca groenlandica, pup production was estimated from aerial surveys flown off eastern Newfoundland ("Front") and in the Gulf of St. Lawrence ("Gulf") during March 1990. One visual and two independent photographic estimates were obtained at the Front; a single photographic estimate was obtained in the Gulf. Photographic estimates were corrected for misidentified pups by comparing black-and-white photographs with ultraviolet imagery. Estimates were also corrected for pups absent from the ice at the time of the survey using distinct age-related developmental stages. Stage durations in the Gulf appeared consistent with previous studies but were increased by 30% to improve the fit to staging data collected at the Front. The best estimate of pup production at the Front was obtained from the visual surveys. A total of 467 000 (SE = 31 000) pups were born in three whelping concentrations. The photographic estimates were comparable. Pup production estimates for the southern (Magdalen Island) and northern (Mecatina) Gulf whelping patches were 106 000 (SE = 23 000) and 4 400 (SE = 1300), respectively. Thus, total pup production was estimated to be 578 000 (SE = 39 000).
APA, Harvard, Vancouver, ISO, and other styles
12

Hone, Jim, and Tony Buckmaster. "How many are there? The use and misuse of continental-scale wildlife abundance estimates." Wildlife Research 41, no. 6 (2014): 473. http://dx.doi.org/10.1071/wr14059.

Full text
Abstract:
The number of individuals in a wildlife population is often estimated and the estimates used for wildlife management. The scientific basis of published continental-scale estimates of individuals in Australia of feral cats and feral pigs is reviewed and contrasted with estimation of red kangaroo abundance and the usage of the estimates. We reviewed all papers on feral cats, feral pigs and red kangaroos found in a Web of Science search and in Australian Wildlife Research and Wildlife Research, and related Australian and overseas scientific and ‘grey’ literature. The estimated number of feral cats in Australia has often been repeated without rigorous evaluation of the origin of the estimate. We propose an origin. The number of feral pigs in Australia was estimated and since then has sometimes been quoted correctly and sometimes misquoted. In contrast, red kangaroo numbers in Australia have been estimated by more rigorous methods and the relevant literature demonstrates active refining and reviewing of estimation procedures and management usage. We propose four criteria for acceptable use of wildlife abundance estimates in wildlife management. The criteria are: use of appropriate statistical or mathematical analysis; precision estimated; original source cited; and age (current or out-of-date) of an estimate evaluated. The criteria are then used here to assess the strength of evidence of the abundance estimates and each has at least one deficiency (being out-of-date). We do know feral cats, feral pigs and red kangaroos occur in Australia but we do not know currently how many feral cats or feral pigs are in Australia. Our knowledge of red kangaroo abundance is stronger at the state than the continental scale, and is also out-of-date at the continental scale. We recommend greater consideration be given to whether abundance estimates at the continental scale are needed and to their use, and not misuse, in wildlife management.
APA, Harvard, Vancouver, ISO, and other styles
13

Montague*, Thayne. "Water Loss Estimates of Three Containerized Landscape Tree Species Using Thermal Dissipation Probes and Load Cells." HortScience 39, no. 4 (July 2004): 790E—790. http://dx.doi.org/10.21273/hortsci.39.4.790e.

Full text
Abstract:
Granier style thermal dissipation probes (TDP) have been used to estimate whole plant water loss on a variety of tree and vine species. However, studies using TDPs to investigate water loss of landscape tree species are rare. This research compared containerized tree water loss estimates of three landscape tree species using TDPs with containerized tree water loss estimates as measured by load cells. Over a three-year period, established, 5.0 cm caliper Bradford pear (Pyrus calleryana `Bradford'), English oak (Quercus robur), and sweetgum (Liquidambar styraciflua `Rotundiloba') trees in 75 L containers were placed on load cells, and water loss was measured for a 60-d period. One 3.0 cm TDP was placed into the north side of each trunk 30 cm above soil level. To reduce evaporation, container growing media was covered with plastic. Each night, plants were irrigated to soil field capacity and allowed to drain. To provide thermal insulation, TDPs and tree trunks (up to 30 cm) were covered with aluminum foil coated bubble wrap. Hourly TDP water loss estimates for each species over a three-day period indicate TDP estimated water loss followed a similar trend as load cell estimated water loss. However, TDP estimates were generally less, especially during peak transpiration periods. In addition, mean total daily water loss estimates for each species were less for TDP estimated water loss when compared to load cell estimated water loss. Although TDP estimated water loss has been verified for several plant species, these data suggest potential errors can arise when using TDPs to estimate water loss of select landscape tree species. Additional work is likely needed to confirm estimated sap flow using TDPs for many tree species.
APA, Harvard, Vancouver, ISO, and other styles
14

Gurry, Peter J. "The Number of Variants in the Greek New Testament: A Proposed Estimate." New Testament Studies 62, no. 1 (November 20, 2015): 97–121. http://dx.doi.org/10.1017/s0028688515000314.

Full text
Abstract:
Since the publication of John Mill's Greek New Testament in 1707, scholars have shown repeated interest in the number of textual variants in our extant witnesses. Past estimates, however, have failed to tell who estimated, how the estimate was derived, or even what was being estimated. This study addresses all three problems and so offers an up-to-date estimate based on the most extensive collation data available. The result is a higher number than almost all previous estimates. Proper use shows that the number reflects the frequency with which scribes copied more than their infidelity in doing so.
APA, Harvard, Vancouver, ISO, and other styles
15

Kimura, Daniel K. "Variability, Tuning, and Simulation for the Doubleday–Deriso Catch-at-Age Model." Canadian Journal of Fisheries and Aquatic Sciences 46, no. 6 (June 1, 1989): 941–49. http://dx.doi.org/10.1139/f89-121.

Full text
Abstract:
This paper presents a method for estimating the variances of biomass estimates made using the nonlinear least squares catch-at-age model. The estimates begin with the standard covariance matrix of parameters estimated using nonlinear least squares and are therefore an extension of usual methods. In this paper the catch-at-age model is tuned to independent survey estimates of biomass. If the variances of these survey estimates are known, it is possible to estimate an optimal value for the weighting variable λ in the nonlinear least squares. The methods are applied to the walleye pollock (Theragra chalcogramma) fishery in the Eastern Bering Sea. Results from this study generally support the theoretical interpretation of λ as a ratio of variances. When the model is constrained to recent survey biomass estimates, estimates of the coefficient of variation of biomasses in earlier years are large. Also, the results largely depend on what constraints on selectivities are assumed. A simulation was performed assuming catches-at-age are distributed as lognormal random variables and that selectivities are all estimated. Results from the simulation indicate that modelled biomass estimates for early years are questionable, and probably seriously biased. However, the simulation generally validated the analytic estimates of the variance of modelled biomass estimates.
APA, Harvard, Vancouver, ISO, and other styles
16

Höglund-Isaksson, L. "Global anthropogenic methane emissions 2005–2030: technical mitigation potentials and costs." Atmospheric Chemistry and Physics Discussions 12, no. 5 (May 3, 2012): 11275–315. http://dx.doi.org/10.5194/acpd-12-11275-2012.

Full text
Abstract:
This paper presents estimates of current and future global anthropogenic methane emissions, their technical mitigation potential and associated costs for the period 2005 to 2030. The analysis uses the GAINS model framework to estimate emissions, mitigation potentials and costs for all major sources of anthropogenic methane for 83 countries/regions, which are aggregated to produce global estimates. Global anthropogenic methane emissions are estimated at 323 Mt methane in 2005, with an expected increase to 414 Mt methane in 2030. Major uncertainty sources in emission estimates are identified and discussed. Mitigation costs are estimated under two different cost perspectives: the social planner cost perspective and the private investor cost perspective.
APA, Harvard, Vancouver, ISO, and other styles
17

Greene, M. "Human estimates of object frequency are frequently over-estimated." Journal of Vision 14, no. 10 (August 22, 2014): 1128. http://dx.doi.org/10.1167/14.10.1128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

POKORNY, KIAN, and DILEEP SULE. "UNCERTAINTY ESTIMATES IN THE FUZZY-PRODUCT-LIMIT ESTIMATOR." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 13, no. 01 (February 2005): 11–26. http://dx.doi.org/10.1142/s021848850500328x.

Full text
Abstract:
The Fuzzy-Product-Limit Estimator (FPLE) is a method for estimating a survival curve and the mean survival time when very few data are available and a high proportion of the data are censored. Considering censored times as vague failure times, the censored values are represented by fuzzy membership functions that represent a belief of continued survival of the associated unit. Associated with any estimate is uncertainty. With the FPLE two distinct types of uncertainty exist in the estimate, the uncertainty due to the randomness in the recorded times and the vague uncertainty in the failure of the censored units. This paper addresses the problem of providing confidence bounds and estimates of uncertainty for the FPLE. Several methods for estimating the vague uncertainty in the estimator are suggested. Among them are the use of Efron's Bootstrap that obtains a confidence interval of the FPLE to quantify random uncertainty and produces an empirical distribution that is used to quantify properties of the vague uncertainty. Also, a method to obtain a graphical representation of the random and vague uncertainties is developed. The new methods provide confidence intervals that quantify statistical uncertainty as well as the vague uncertainty in the estimates. Finally, results of simulations are provided to demonstrate the efficacy of the estimator and uncertainty in the estimates.
APA, Harvard, Vancouver, ISO, and other styles
19

de Araujo, Marcelo R. A., and B. E. Coulman. "Parent-offspring regression in meadow bromegrass (Bromus riparius Rehm.): Evaluation of two methodologies on heritability estimates." Canadian Journal of Plant Science 84, no. 1 (January 1, 2004): 125–27. http://dx.doi.org/10.4141/p02-119.

Full text
Abstract:
To determine the nature and extent of inflation of estimates of heritabilities by parent-offspring regression methods, 40 clones of meadow bromegrass (Bromus riparius Rehm.) and their half-sib progenies were studied in completely randomized block design trials, with six replications in Saskatoon and Melfort, Canada. Clones and progenies were evaluated for dry matter yield, seed yield, plant height, fertility index and harvest index. The results of the analysis showed a consistent inflation of heritability estimates derived from the simple parent-offspring regression, when compared to the regression estimate by variance-covariance analysis. The two methods successfully removed the environmental covariances from the estimates. However, in the simple regression analysis, error covariance was not removed from the numerator; therefore, heritabilities estimated by this methodology were higher than those estimated by the variance-covariance method. It was concluded that estimates derived from variance-covariance analysis provide less biased estimates of heritability. Key words: Regression analysis, heritability, meadow bromegrass
APA, Harvard, Vancouver, ISO, and other styles
20

Wilson, David, Simon Hulka, and Leon Bennun. "A review of raptor carcass persistence trials and the practical implications for fatality estimation at wind farms." PeerJ 10 (November 15, 2022): e14163. http://dx.doi.org/10.7717/peerj.14163.

Full text
Abstract:
Bird and bat turbine collision fatalities are a principal biodiversity impact at wind energy facilities. Raptors are a group at particular risk and often the focus of post-construction fatality monitoring programs. To estimate fatalities from detected carcasses requires correction for biases, including for carcasses that are removed or decompose before the following search. This is addressed through persistence trials, where carcasses are monitored until no longer detectable or the trial ends. Sourcing sufficient raptor carcasses for trials is challenging and surrogates that are typically used often have shorter persistence times than raptors. We collated information from raptor carcass persistence trials to evaluate consistencies between trials and assess the implications of using persistence values from other studies in wind facility fatality estimates. We compiled individual raptor carcass persistence times from published sources along with information on methods and location, estimated carcass persistence using GenEst and ran full fatality estimates using the carcass persistence estimates and mock datasets for other information. We compiled results from 22 trials from 17 sites across four terrestrial biomes, with trials lasting between 7 and 365 days and involving between 11 and 115 carcasses. Median carcass persistence was estimated at 420 days (90% confidence interval (CI) of 290 to 607 days) for the full dataset. Persistence time varied significantly between trials (trial-specific persistence estimates of 14 (5–42) days to 1,586 (816–3,084) days) but not between terrestrial biomes. We also found no significant relationship between either the number of carcasses in the trial or trial duration and estimated carcass persistence. Using a mock dataset with 12 observed fatalities, we estimated annual fatalities of 25 (16–33) or 26 (17–36) individuals using a 14- or 28-day search interval respectively, using the global dataset. When using trial-specific carcass persistence estimates and the same mock dataset, estimated annual fatalities ranged from 22 (14–30) to 37 (21–63) individuals for a 14-day search interval, and from 22 (15–31) to 47 (26–84) individuals for a 28-day search interval. The different raptor carcass persistence rates between trials translated to small effects on fatality estimates when using recommended search frequencies, since persistence rates were generally much longer than the search interval. When threatened raptor species, or raptors of particular concern to stakeholders, are present, and no site-specific carcass persistence estimates are available, projects should use the lowest median carcass persistence estimate from this study to provide precautionary estimates of fatalities. At sites without threatened species, or where the risk of collision to raptors is low, the global median carcass persistence estimate from this review could be used to provide a plausible estimate for annual raptor fatalities.
APA, Harvard, Vancouver, ISO, and other styles
21

Dale, Nancy M., Mark Myatt, Claudine Prudhon, and André Briend. "Using cross-sectional surveys to estimate the number of severely malnourished children needing to be enrolled in specific treatment programmes." Public Health Nutrition 20, no. 8 (January 24, 2017): 1362–66. http://dx.doi.org/10.1017/s1368980016003578.

Full text
Abstract:
Objective: When planning severe acute malnutrition (SAM) treatment services, estimates of the number of children requiring treatment are needed. Prevalence surveys, used with population estimates, can directly estimate the number of prevalent cases but not the number of subsequent incident cases. Health managers often use a prevalence-to-incidence conversion factor (J) derived from two African cohort studies to estimate incidence and add the expected number of incident cases to prevalent cases to estimate expected SAM caseload for a given period. The present study aimed to estimate J empirically in different contexts. Design: Observational study, with J estimated by correlating expected numbers of children to be treated, based on prevalence surveys, population estimates and assumed coverage, with the observed numbers of SAM patients treated. Setting: Survey and programme data from six African and Asian countries. Subjects: Twenty-four data sets including prevalence surveys and programme admissions data for 5 months following the survey. Results: A statistically significant relationship between the number of SAM cases admitted to SAM treatment services and the estimated burden of SAM from prevalence surveys was found. Estimate for the slope (intercept forced to be zero) was 2.17 (95% CI 1.33, 3.79). Estimates for the prevalence-to-incidence conversion factor J varied from 2.81 to 11.21, assuming programme coverage of 100% and 38%, respectively. Conclusions: Estimation of expected caseload from prevalence may require revision of the currently used prevalence-to-incidence conversion factor J of 1.6. Appropriate values for J may vary between different locations.
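The planning arithmetic behind the conversion factor J is simple: expected caseload over a period is the prevalent cases at the start plus the incident cases that J converts prevalence into, scaled by expected programme coverage. A hedged sketch with hypothetical numbers (not the paper's data):

```python
def expected_sam_caseload(population_u5, prevalence, j_factor, coverage):
    """Expected SAM admissions over the planning period.

    caseload = coverage * prevalence * population * (1 + J), where J converts
    prevalent cases into the additional incident cases expected over the
    period. All numbers below are purely illustrative.
    """
    prevalent = prevalence * population_u5
    incident = j_factor * prevalent
    return coverage * (prevalent + incident)

# Traditional J = 1.6 versus the paper's empirical range (2.81 to 11.21).
for j in (1.6, 2.81, 11.21):
    print(j, round(expected_sam_caseload(100_000, 0.01, j, coverage=0.5)))
```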
APA, Harvard, Vancouver, ISO, and other styles
22

Smallwood, K. Shawn. "USA Wind Energy-Caused Bat Fatalities Increase with Shorter Fatality Search Intervals." Diversity 12, no. 3 (March 12, 2020): 98. http://dx.doi.org/10.3390/d12030098.

Full text
Abstract:
Wind turbine collision fatalities of bats have likely increased with the rapid expansion of installed wind energy capacity in the USA since the last national-level fatality estimates were generated in 2012. An assumed linear increase of fatalities with installed capacity would expand my estimate of bat fatalities across the USA from 0.89 million in 2012 to 1.11 million in 2014 and to 1.72 million in 2019. However, this assumed linear relationship could have been invalidated by shifts in turbine size, tower height, fatality search interval during monitoring, and regional variation in bat fatalities. I tested for effects of these factors in fatality monitoring reports through 2014. I found no significant relationship between bat fatality rates and wind turbine size. Bat fatality rates increased with increasing tower height, but this increase mirrored the increase in fatality rates with shortened fatality search intervals that accompanied the increase in tower heights. Regional weighting of mean project-level bat fatalities increased the national-level estimate 17% to 1.3 (95% CI: 0.15–3.0) million. After I restricted the estimate’s basis to project-level fatality rates that were estimated from fatality search intervals <10 days, my estimate increased by another 71% to 2.22 (95% CI: 1.77–2.72) million bat fatalities in the USA’s lower 48 states in 2014. Project-level fatality estimates based on search intervals <10 days were, on average, eight times higher than estimates based on longer search intervals. Shorter search intervals detected more small-bodied species, which contributed to a larger all-bat fatality estimate.
APA, Harvard, Vancouver, ISO, and other styles
23

Shanks, Janet E., Richard H. Wilson, and Nancy K. Cambron. "Multiple Frequency Tympanometry." Journal of Speech, Language, and Hearing Research 36, no. 1 (February 1993): 178–85. http://dx.doi.org/10.1044/jshr.3601.178.

Full text
Abstract:
Three methods for compensating multiple frequency acoustic admittance measurements for ear canal volume were studied in 26 men with normal middle ear transmission systems. Peak compensated static acoustic admittance (|Y|) and phase angle (φ) were calculated from sweep frequency tympanograms (226–1243 Hz in 113 Hz increments). Of the procedures used to compensate for volume in rectangular form, the ear canal pressure used to estimate volume had the largest effect on the estimate of middle ear resonance. Median resonance was 800 Hz for admittance measurements compensated at 200 daPa versus 1100 Hz for measurements compensated at –350 daPa. The remaining two methods, compensation of susceptance only versus both susceptance and conductance and compensation using the minimum volume versus separate volumes at each frequency, did not affect estimates of middle ear resonance. Estimates of middle ear resonance from compensated phase angle measurements also were compared with estimates of resonance from admittance and phase difference curves. Although resonance could not be estimated from the phase difference curve, resonance estimated from the admittance difference curve agreed with the estimate from compensated phase angle.
APA, Harvard, Vancouver, ISO, and other styles
24

Ukachukwu, Omoyemeh Jennifer, Lindsey Smart, Justyna Jeziorska, Helena Mitasova, and John S. King. "Active Remote Sensing Assessment of Biomass Productivity and Canopy Structure of Short-Rotation Coppice American Sycamore (Platanus occidentalis L.)." Remote Sensing 16, no. 14 (July 15, 2024): 2589. http://dx.doi.org/10.3390/rs16142589.

Full text
Abstract:
The short-rotation coppice (SRC) culture of trees provides a sustainable form of renewable biomass energy, while simultaneously sequestering carbon and contributing to the regional carbon feedstock balance. To understand the role of SRC in carbon feedstock balances, field inventories with selective destructive tree sampling are commonly used to estimate aboveground biomass (AGB) and canopy structure dynamics. However, these methods are resource intensive and spatially limited. To address these constraints, we examined the utility of publicly available airborne Light Detection and Ranging (LiDAR) data and easily accessible imagery from Unmanned Aerial Systems (UASs) to estimate the AGB and canopy structure of an American sycamore SRC in the piedmont region of North Carolina, USA. We compared LiDAR-derived AGB estimates to field estimates from 2015, and UAS-derived AGB estimates to field estimates from 2022 across four planting densities (10,000, 5000, 2500, and 1250 trees per hectare (tph)). The results showed significant effects of planting density treatments on LIDAR- and UAS-derived canopy metrics and significant relationships between these canopy metrics and AGB. In the 10,000 tph, the field-estimated AGB in 2015 (7.00 ± 1.56 Mg ha−1) and LiDAR-derived AGB (7.19 ± 0.13 Mg ha−1) were comparable. On the other hand, the UAS-derived AGB was overestimated in the 10,000 tph planting density and underestimated in the 1250 tph compared to the 2022 field-estimated AGB. This study demonstrates that the remote sensing-derived estimates are within an acceptable level of error for biomass estimation when compared to precise field estimates, thereby showing the potential for increasing the use of accessible remote-sensing technology to estimate AGB of SRC plantations.
APA, Harvard, Vancouver, ISO, and other styles
25

Leightner, Jonathan, Tomoo Inoue, and Pierre Lafaye de Micheaux. "Variable Slope Forecasting Methods and COVID-19 Risk." Journal of Risk and Financial Management 14, no. 10 (October 3, 2021): 467. http://dx.doi.org/10.3390/jrfm14100467.

Full text
Abstract:
There are many real-world situations in which complex interacting forces are best described by a series of equations. Traditional regression approaches to these situations involve modeling and estimating each individual equation (producing estimates of “partial derivatives”) and then solving the entire system for reduced form relationships (“total derivatives”). We examine three estimation methods that produce “total derivative estimates” without having to model and estimate each separate equation. These methods produce a unique total derivative estimate for every observation, where the differences in these estimates are produced by omitted variables. A plot of these estimates over time shows how the estimated relationship has evolved over time due to omitted variables. A moving 95% confidence interval (constructed like a moving average) means that there is only a five percent chance that the next total derivative would lie outside that confidence interval if the recent variability of omitted variables does not increase. Simulations show that two of these methods produce much less error than ignoring the omitted variables problem does when the importance of omitted variables noticeably exceeds random error. In an example, the spread rate of COVID-19 is estimated for Brazil, Europe, South Africa, the UK, and the USA.
APA, Harvard, Vancouver, ISO, and other styles
26

Ocampo, Alex, Joseph J. Valadez, Bethany Hedt-Gauthier, and Marcello Pagano. "How to estimate health service coverage in 58 districts of Benin with no survey data: Using hybrid estimation to fill the gaps." PLOS Global Public Health 2, no. 5 (May 25, 2022): e0000178. http://dx.doi.org/10.1371/journal.pgph.0000178.

Full text
Abstract:
The global movement to use routine information for managing health systems to achieve the Sustainable Development Goals relies on administrative data, which have inherent biases when used to estimate coverage with health services. Health policies and interventions planned with incorrect information can have detrimental impacts on communities. Statistical inferences using administrative data can be improved when they are combined with random probability survey data. Sometimes, survey data are only available for some districts. We present new methods for extending combined estimation techniques to all districts by combining additional data sources. Our study uses data from a probability survey (n = 1786) conducted during 2015 in 19 of Benin’s 77 communes and administrative count data from all of them for a national immunization day (n = 2,792,803). Communes are equivalent to districts. We extend combined-data estimation from 19 to 77 communes by estimating denominators using the survey data and then building a statistical model using population estimates from different sources to estimate denominators in adjacent districts. By dividing administrative numerators by the model-estimated denominators we obtain extrapolated hybrid prevalence estimates. Framing the problem in the Bayesian paradigm guarantees estimated prevalence rates fall within the appropriate ranges and conveniently incorporates a sensitivity analysis. Our new methodology estimated Benin’s polio vaccination rates for 77 communes. We leveraged probability survey data from 19 communes to formulate estimates for the 58 communes with administrative data alone; polio vaccination coverage estimates in the 58 communes decreased to ranges consistent with those from the probability surveys (87%, standard deviation = 0.09) and more credible than the administrative estimates. Combining probability survey and administrative data can be extended beyond the districts in which both are collected to estimate coverage in an entire catchment area. These more accurate results will better inform health policy-making and intervention planning to reduce waste and improve health in communities.
APA, Harvard, Vancouver, ISO, and other styles
27

Koots, Kenneth R., and John P. Gibson. "Realized Sampling Variances of Estimates of Genetic Parameters and the Difference Between Genetic and Phenotypic Correlations." Genetics 143, no. 3 (July 1, 1996): 1409–16. http://dx.doi.org/10.1093/genetics/143.3.1409.

Full text
Abstract:
A data set of 1572 heritability estimates and 1015 pairs of genetic and phenotypic correlation estimates, constructed from a survey of published beef cattle genetic parameter estimates, provided a rare opportunity to study realized sampling variances of genetic parameter estimates. The distribution of both heritability estimates and genetic correlation estimates, when plotted against estimated accuracy, was consistent with random error variance being some three times the sampling variance predicted from standard formulae. This result was consistent with the observation that the variance of estimates of heritabilities and genetic correlations between populations was about four times the predicted sampling variance, suggesting few real differences in genetic parameters between populations. Except where there was a strong biological or statistical expectation of a difference, there was little evidence for differences between genetic and phenotypic correlations for most trait combinations or for differences in genetic correlations between populations. These results suggest that, even for controlled populations, estimating genetic parameters specific to a given population is less useful than commonly believed. A serendipitous discovery was that, in the standard formula for theoretical standard error of a genetic correlation estimate, the heritabilities refer to the estimated values and not, as seems generally assumed, the true population values.
APA, Harvard, Vancouver, ISO, and other styles
28

Babakus, Emin, Carl E. Ferguson, and Karl G. Jöreskog. "The Sensitivity of Confirmatory Maximum Likelihood Factor Analysis to Violations of Measurement Scale and Distributional Assumptions." Journal of Marketing Research 24, no. 2 (May 1987): 222–28. http://dx.doi.org/10.1177/002224378702400209.

Full text
Abstract:
A large-scale simulation design was used to study the sensitivity of maximum likelihood (ML) factor analysis to violations of measurement scale and distributional assumptions in the input data. Product-moment, polychoric, Spearman's rho, and Kendall's tau-b correlations computed from ordinal data were used to estimate a single-factor model. The resulting ML estimates were compared on the bases of convergence rates and improper solutions, accuracy of the loading estimates, fit statistics, and estimated standard errors. The LISREL maximum likelihood solution algorithm was used to estimate model parameters. The polychoric correlation procedure was found to provide the most accurate estimates of pairwise correlations and factor loadings but performed worst on all goodness-of-fit criteria. LISREL overestimated all standard errors, thus not reflecting the effects of standardization as previously assumed. When the data were categorized, the polychoric correlations led to the best estimates of the standard errors.
APA, Harvard, Vancouver, ISO, and other styles
29

Mariasin, Sean, Andrew Coles, Erez Karpas, Wheeler Ruml, Solomon Eyal Shimony, and Shahaf Shperberg. "Evaluating Distributional Predictions of Search Time: Put Up or Shut Up Games (Extended Abstract)." Proceedings of the International Symposium on Combinatorial Search 17 (June 1, 2024): 277–78. http://dx.doi.org/10.1609/socs.v17i1.31579.

Full text
Abstract:
Metareasoning can be a helpful technique for controlling search in situations where computation time is an important resource, such as real-time planning and search, algorithm portfolios, and concurrent planning and execution. Metareasoning often involves an estimate of the remaining search time of a running algorithm, and several ways to compute such estimates have been presented in the literature. In this paper, we argue that many applications actually require a full estimated probability distribution over the remaining time, rather than just a point estimate of expected search time. We study several methods for estimating such distributions, including some novel adaptations of existing schemes. To properly evaluate the estimates, we introduce `put-up or shut-up games', which probe the distributional estimates without requiring infeasible computation. Our experimental evaluation reveals that estimates that are more accurate in expected value do not necessarily deliver better distributions, yielding worse scores in the game.
APA, Harvard, Vancouver, ISO, and other styles
30

Govindarajulu, Z., and A. Dmitrienko. "estimates." Annals of Statistics 28, no. 5 (October 2000): 1472–501. http://dx.doi.org/10.1214/aos/1015957403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Jahns, Lisa, Lenore Arab, Alicia Carriquiry, and Barry M. Popkin. "The use of external within-person variance estimates to adjust nutrient intake distributions over time and across populations." Public Health Nutrition 8, no. 1 (February 2005): 69–76. http://dx.doi.org/10.1079/phn2004671.

Full text
Abstract:
Objective: To examine the utility of using external estimates of within-person variation to adjust usual nutrient intake distributions. Design: Analyses of the prevalence of inadequate intake of an example nutrient by the Estimated Average Requirement (EAR) cut-point method using three different methods of statistical adjustment of the usual intake distribution of a single 24-hour recall in Russian children in 1996, using the Iowa State University method for adjustment of the distribution. First, adjusting the usual intake distribution with day 2 recalls from the same 1996 sample (the correct method); second, adjusting the distribution using external variance estimates derived from US children in 1996; and third, adjusting the distribution using external estimates derived from Russian children of the same age in 2000. We also present prevalence estimates based on naïve statistical analysis of the unadjusted distribution of intakes. Setting/subjects: Children drawn from the Russia Longitudinal Monitoring Survey in 1996 and 2000 and from the 1996 Continuing Survey of Food Intakes by Individuals. Results: When the EAR cut-point method is applied to a single recall, the resulting prevalence estimate in this study is inflated by 100–1300%. When the intake distribution is adjusted using an external variance estimate, the prevalence estimate is much less biased, suggesting that any adjustment may give less biased estimates than no adjustment. Conclusions: In moderately large samples, adjusting distributions with external estimates of variances results in more reliable prevalence estimates than using 1-day data.
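The underlying adjustment shrinks each single-recall observation toward the group mean by the ratio of between-person to total variance, so that the adjusted distribution has the spread of usual intakes rather than of single days. A simplified sketch of that shrinkage step using an external within-person variance (the full Iowa State University method also handles skewness via transformation, which is omitted here; all values are hypothetical):

```python
import numpy as np

def adjust_single_recall(intakes, within_var_external):
    """Shrink single 24-hour recalls toward the mean to approximate usual intake.

    between_var is estimated as the observed variance minus the (external)
    within-person variance; adjusted_i = mean + (x_i - mean) * sqrt(ratio).
    Simplified sketch; ignores the transformation steps of the ISU method.
    """
    x = np.asarray(intakes, dtype=float)
    total_var = x.var(ddof=1)
    between_var = max(total_var - within_var_external, 0.0)
    ratio = between_var / total_var if total_var > 0 else 0.0
    return x.mean() + (x - x.mean()) * np.sqrt(ratio)

# Hypothetical intakes (mg/day) and an external within-person variance.
adjusted = adjust_single_recall([4.2, 7.9, 10.5, 6.3, 12.8], within_var_external=6.0)
prevalence_inadequate = np.mean(adjusted < 7.0)  # EAR cut-point on adjusted values
print(adjusted.round(2), prevalence_inadequate)
```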
APA, Harvard, Vancouver, ISO, and other styles
32

Miratrix, Luke W., Michael J. Weiss, and Brit Henderson. "An Applied Researcher’s Guide to Estimating Effects from Multisite Individually Randomized Trials: Estimands, Estimators, and Estimates." Journal of Research on Educational Effectiveness 14, no. 1 (January 2, 2021): 270–308. http://dx.doi.org/10.1080/19345747.2020.1831115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Qian, Zhongmin. "Gradient estimates and heat kernel estimates." Proceedings of the Royal Society of Edinburgh: Section A Mathematics 125, no. 5 (1995): 975–90. http://dx.doi.org/10.1017/s0308210500022599.

Full text
Abstract:
In the first part of this paper, Yau's estimates for positive L-harmonic functions and Li and Yau's gradient estimates for the positive solutions of a general parabolic heat equation on a complete Riemannian manifold are obtained by the use of Bakry and Emery's theory. In the second part we establish a heat kernel bound for a second-order differential operator which has a bounded and measurable drift, using Girsanov's formula.
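For orientation, the classical Li–Yau estimate that this line of work builds on, stated here for the Ric ≥ 0 case as a standard reference point rather than as a result quoted from the paper, reads:

```latex
% Classical Li-Yau gradient estimate for a positive solution u of the heat
% equation \partial_t u = \Delta u on a complete n-manifold with Ric >= 0.
\[
  \frac{|\nabla u|^{2}}{u^{2}} - \frac{\partial_t u}{u} \;\le\; \frac{n}{2t},
  \qquad t > 0 .
\]
```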
APA, Harvard, Vancouver, ISO, and other styles
34

Bolen, Rebecca M., and Maria Scannapieco. "Can Valid Estimates Be High Estimates?" Social Service Review 75, no. 1 (March 2001): 159–66. http://dx.doi.org/10.1086/591887.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Utsumi, Nobuyuki, Hyungjun Kim, F. Joseph Turk, and Ziad S. Haddad. "Improving Satellite-Based Subhourly Surface Rain Estimates Using Vertical Rain Profile Information." Journal of Hydrometeorology 20, no. 5 (May 1, 2019): 1015–26. http://dx.doi.org/10.1175/jhm-d-18-0225.1.

Full text
Abstract:
Quantifying time-averaged rain rate, or rain accumulation, on subhourly time scales is essential for various application studies requiring rain estimates. This study proposes a novel idea to estimate subhourly time-averaged surface rain rate based on the instantaneous vertical rain profile observed from low-Earth-orbiting satellites. Instantaneous rain estimates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) are compared with 1-min surface rain gauges in North America and Kwajalein atoll for the warm seasons of 2005–14. Time-lagged correlation analysis between PR rain rates at various height levels and surface rain gauge data shows that the peak of the correlations tends to be delayed for PR rain at higher levels up to around 6-km altitude. PR estimates for low to middle height levels have better correlations with time-delayed surface gauge data than the PR’s estimated surface rain rate product. This implies that rain estimates for lower to middle heights may have skill to estimate the eventual surface rain rate that occurs 1–30 min later. Therefore, in this study, the vertical profiles of TRMM PR instantaneous rain estimates are averaged between the surface and various heights above the surface to represent time-averaged surface rain rate. It was shown that vertically averaged PR estimates up to middle heights (~4.5 km) exhibit better skill, compared to the PR estimated instantaneous surface rain product, to represent subhourly (~30 min) time-averaged surface rain rate. These findings highlight the merit of additional consideration of vertical rain profiles, not only instantaneous surface rain rate, to improve subhourly surface estimates of satellite-based rain products.
APA, Harvard, Vancouver, ISO, and other styles
36

Raphael, Steven. "Estimating the Union Earnings Effect Using a Sample of Displaced Workers." ILR Review 53, no. 3 (April 2000): 503–21. http://dx.doi.org/10.1177/001979390005300308.

Full text
Abstract:
This paper improves on past longitudinal estimates of the union earnings effect by using a sample of workers for whom the error in measuring changes in union status is minimized. The author uses a sample of workers displaced by plant closings from the 1994 and 1996 Current Population Survey Displaced Workers Supplement files to estimate the effects of union membership on weekly earnings. When models are estimated using the entire sample of displaced workers, longitudinal estimates of the union earnings effect are quite similar in magnitude to estimates from cross-sectional regressions. In models estimated separately by skill group, the author finds some evidence of positive selection into unions among workers with low observed skills and negative selection into unions among workers with high observed skills.
APA, Harvard, Vancouver, ISO, and other styles
37

Asiedu, Edward, and Thanasis Stengos. "An Empirical Estimation of the Underground Economy in Ghana." Economics Research International 2014 (July 21, 2014): 1–14. http://dx.doi.org/10.1155/2014/891237.

Full text
Abstract:
The main aim of this paper is to estimate the size of the underground economy in Ghana during the period 1983–2003. There is no agreement on the appropriate estimation approach to adopt to measure the size of the underground activities. To this end, we employ the well-applied currency demand approach in our measurement. Parameter estimates from the estimated currency demand equation are used in quantifying the ratio of “underground” to “measured” output/income for the Ghanaian economy. The estimated long-run average size of the underground economy to GDP for Ghana over the period is 40%. The underground economy is found to vary from a high of 54% in 1985 to a low of 25% in 1999. Estimates may represent lower bound estimates.
APA, Harvard, Vancouver, ISO, and other styles
38

Montague, Thayne, and Roger Kjelgren. "Use of Thermal Dissipation Probes to Estimate Water Loss of Containerized Landscape Trees." Journal of Environmental Horticulture 24, no. 2 (June 1, 2006): 95–104. http://dx.doi.org/10.24266/0738-2898-24.2.95.

Full text
Abstract:
Granier style thermal dissipation probes (TDPs) have been used to estimate whole plant water use on a variety of tree and vine species. However, studies using TDPs and load cells (gravimetric water loss) to estimate water use of landscape tree species are rare. This research compared gravimetric water loss (estimated with load cells) of four containerized landscape tree species with water loss estimated with TDPs. Over a 66 day period, an experiment compared water loss of three established, 5.0 cm (2 in) caliper poplar (Populus nigra ‘Italica’) trees in 75-liter (20 gal) containers on load cells to TDP estimated water loss. Each tree had a single 30 mm (1.2 inch) TDP inserted into the trunk at four heights above soil level (15, 30, 45, and 60 cm (6, 12, 18, and 24 in, respectively)). Data revealed TDP estimated water loss was less than load cell estimated water loss regardless of TDP height, but TDP estimated water loss at the 30 cm height was closest to actual load cell estimated tree water loss. Over the next three years, similar sized Bradford pear (Pyrus calleryana ‘Bradford’), English oak (Quercus robur x Q. bicolor ‘Asjes’), poplar (Populus deltoides ‘Siouxland’), and sweetgum (Liquidambar styraciflua ‘Rotundiloba’) trees in containers were placed on load cells and one 30 mm TDP was placed into the trunk of each tree 30 cm above soil level. Over an extended time period, tree water loss was estimated using load cells and TDPs. Hourly TDP water loss estimates for each species over a three day period indicate TDP estimated water loss followed similar trends as load cell estimated water loss. However, TDP estimates were generally less than load cell estimates, especially during peak transpiration periods. For each species, mean total daily water loss estimates were less for TDP estimated water loss when compared to load cell estimated water loss. Although TDP estimated water loss has been correlated with actual tree water loss for many species, these data suggest errors may arise when using TDPs to estimate water loss of small, containerized landscape tree species.
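Granier-style probes convert the temperature difference between a heated and an unheated probe into sap flux density through an empirical calibration; a commonly cited form of that calibration (Granier 1987) is sketched below. Whether that calibration holds for a given landscape species is exactly the uncertainty this study raises, so treat the constants as assumptions rather than values from the paper:

```python
def granier_sap_flow(delta_t, delta_t_max, sapwood_area_cm2):
    """Whole-tree sap flow from thermal dissipation probe readings.

    K = (dT_max - dT) / dT, where dT_max is the temperature difference at
    zero flow. Sap flux density (m3 m-2 s-1) ~ 118.99e-6 * K**1.231 per
    Granier's empirical calibration; multiplying by sapwood area gives
    whole-tree flow. Constants and inputs here are assumed, for illustration.
    """
    k = (delta_t_max - delta_t) / delta_t
    flux_density = 118.99e-6 * k ** 1.231        # m^3 per m^2 sapwood per s
    sapwood_area_m2 = sapwood_area_cm2 * 1e-4
    return flux_density * sapwood_area_m2 * 3600 * 1000   # litres per hour

# Hypothetical midday reading for a 5 cm caliper tree (~15 cm2 sapwood).
print(round(granier_sap_flow(delta_t=8.0, delta_t_max=10.0, sapwood_area_cm2=15.0), 3))
```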
APA, Harvard, Vancouver, ISO, and other styles
39

Suárez-Menéndez, Marcos, Martine Bérubé, Fabrício Furni, Vania E. Rivera-León, Mads-Peter Heide-Jørgensen, Finn Larsen, Richard Sears, et al. "Wild pedigrees inform mutation rates and historic abundance in baleen whales." Science 381, no. 6661 (September 2023): 990–95. http://dx.doi.org/10.1126/science.adf2160.

Full text
Abstract:
Phylogeny-based estimates suggesting a low germline mutation rate (μ) in baleen whales have influenced research ranging from assessments of whaling impacts to evolutionary cancer biology. We estimated μ directly from pedigrees in four baleen whale species for both the mitochondrial control region and nuclear genome. The results suggest values higher than those obtained through phylogeny-based estimates and similar to pedigree-based values for primates and toothed whales. Applying our estimate of μ reduces previous genetic-based estimates of preexploitation whale abundance by 86% and suggests that μ cannot explain low cancer rates in gigantic mammals. Our study shows that it is feasible to estimate μ directly from pedigrees in natural populations, with wide-ranging implications for ecological and evolutionary research.
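Pedigree (trio) based mutation-rate estimation reduces to counting de novo mutations in offspring and dividing by the number of transmitted site copies. The sketch below shows that arithmetic only; the counts and the callable-site figure are hypothetical and are not the study's data.

    # Trio-based germline mutation rate: de novo mutations per offspring
    # divided by two copies of the callable genome per offspring.
    def mutation_rate(de_novo_counts, callable_sites):
        total_dnm = sum(de_novo_counts)
        total_site_copies = 2 * callable_sites * len(de_novo_counts)
        return total_dnm / total_site_copies

    mu = mutation_rate(de_novo_counts=[62, 55, 71],   # hypothetical offspring
                       callable_sites=2.4e9)          # hypothetical callable sites
    print(f"{mu:.2e} per site per generation")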
APA, Harvard, Vancouver, ISO, and other styles
40

Davis, Paul M., David D. Jackson, and Yan Y. Kagan. "The longer it has been since the last earthquake, the longer the expected time till the next?" Bulletin of the Seismological Society of America 79, no. 5 (October 1, 1989): 1439–56. http://dx.doi.org/10.1785/bssa0790051439.

Full text
Abstract:
We adopt a lognormal distribution for earthquake interval times, and we use a locally determined rather than a generic coefficient of variation, to estimate the probability of occurrence of characteristic earthquakes. We extend previous methods in two ways. First, we account for the aseismic period since the last event (the “seismic drought”) in updating the parameter estimates. Second, in calculating the earthquake probability we allow for uncertainties in the mean recurrence time and its variance by averaging over their likelihood. Both extensions can strongly influence the calculated earthquake probabilities, especially for long droughts in regions with few documented earthquakes. As time passes, the recurrence time and variance estimates increase if no additional events occur, leading eventually to an affirmative answer to the question in the title. The earthquake risk estimate begins to drop when the drought exceeds the estimated recurrence time. For the Parkfield area of California, the probability of a magnitude 6 event in the next 5 years is about 34 per cent, much lower than previous estimates. Furthermore, the estimated 5-year probability will decrease with every uneventful year after 1988. For the Coachella Valley segment of the San Andreas Fault, the uncertainties are large, and we estimate the probability of a large event in the next 30 years to be 9 per cent, again much smaller than previous estimates. On the Mojave (Pallett Creek) segment the catalog includes 10 events, and the present drought is just approaching the recurrence interval, so the estimated risk is revised very little by our methods.
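The drought-conditional probability described here can be written as P(t < T ≤ t + Δt | T > t) for a lognormal recurrence time T. The sketch below evaluates that expression for illustrative parameters; it omits the paper's averaging over the likelihood of the mean and variance, which is the step that drives the reported reductions.

    # Conditional probability of an event in the next dt years given a
    # drought of t years, under a lognormal recurrence-time model.
    import numpy as np
    from scipy.stats import lognorm

    def conditional_prob(t, dt, median_recurrence, cov):
        sigma = np.sqrt(np.log(1.0 + cov**2))          # shape from coefficient of variation
        dist = lognorm(s=sigma, scale=median_recurrence)
        return (dist.cdf(t + dt) - dist.cdf(t)) / dist.sf(t)

    print(conditional_prob(t=22.0, dt=5.0, median_recurrence=22.0, cov=0.4))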
APA, Harvard, Vancouver, ISO, and other styles
41

Kumah, Kingsley K., Joost C. B. Hoedjes, Noam David, Ben H. P. Maathuis, H. Oliver Gao, and Bob Z. Su. "The MSG Technique: Improving Commercial Microwave Link Rainfall Intensity by Using Rain Area Detection from Meteosat Second Generation." Remote Sensing 13, no. 16 (August 19, 2021): 3274. http://dx.doi.org/10.3390/rs13163274.

Full text
Abstract:
Commercial microwave links (MWLs) used by mobile telecom operators for data transmission can provide hydro-meteorologically valid rainfall estimates, according to studies over the past decade. For the first time, this study investigated a new method, the MSG technique, that uses Meteosat Second Generation (MSG) satellite data to improve MWL rainfall estimates. The investigation, conducted during daytime, used MSG optical (VIS0.6) and near-IR (NIR1.6) data to estimate rain areas along a 15 GHz, 9.88 km MWL, to classify the MWL signal into wet and dry periods, and to estimate the baseline level. Additionally, the MSG technique estimated a new parameter, the wet path length, representing the length of the MWL that was wet during wet periods. Finally, MWL rainfall intensity estimates from the new MSG technique and the conventional technique were compared to rain gauge estimates. The results show that the MSG technique is robust and can produce rainfall estimates comparable to gauge values. Over the entire evaluation period, the three-hourly evaluation scores (RMSD, relative bias, and r²) were 2.61 mm h−1, 0.47, and 0.81 for the MSG technique, compared with 2.09 mm h−1, 0.04, and 0.84 for the conventional technique. For convective rain events with high-intensity, spatially varying rainfall, the results show that the MSG technique may approximate the actual mean rainfall better than the conventional technique.
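The three evaluation scores quoted above (RMSD, relative bias, and r²) are standard comparison statistics between estimated and reference rainfall. The sketch below shows one conventional way to compute them; the sample arrays are invented and the exact definitions used in the paper may differ slightly.

    # RMSD, relative bias, and r^2 between rainfall estimates and gauge values.
    import numpy as np

    def evaluate(estimated, observed):
        estimated, observed = np.asarray(estimated, float), np.asarray(observed, float)
        rmsd = np.sqrt(np.mean((estimated - observed) ** 2))
        rel_bias = (estimated.sum() - observed.sum()) / observed.sum()
        r2 = np.corrcoef(estimated, observed)[0, 1] ** 2
        return rmsd, rel_bias, r2

    gauge = [0.0, 1.2, 4.5, 9.8, 2.3]   # mm h^-1, hypothetical
    mwl   = [0.1, 1.6, 5.2, 8.1, 3.0]   # mm h^-1, hypothetical
    print(evaluate(mwl, gauge))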
APA, Harvard, Vancouver, ISO, and other styles
42

Wendelboe, Aaron M., Janis Campbell, Micah McCumber, Kai Ding, Dale Bratzler, Michele Beckman, Nimia Reyes, and Gary E. Raskob. "Incidence of Venous Thromboembolism Estimated Using Hospital Discharge Data: Differences Between Event-Based Estimates and Patient-Based Estimates." Blood 124, no. 21 (December 6, 2014): 3508. http://dx.doi.org/10.1182/blood.v124.21.3508.3508.

Full text
Abstract:
Background: Hospital discharge data have been used to estimate the burden of venous thromboembolism (VTE) disease. However, most of these databases are de-identified, which limits their utility for estimating VTE incidence because multiple hospitalizations for the same VTE event cannot be identified and first-time events cannot be distinguished from recurrent events.
Objective: We aimed to estimate the magnitude of error in estimates of VTE incidence derived from hospital discharge data by comparing the results obtained when patient identifying information is included (enabling removal of duplicate patient events and stratification by first-time and recurrent VTE events) with the estimates obtained using only de-identified data.
Methods: In collaboration with the Centers for Disease Control and Prevention (CDC) and the Oklahoma State Department of Health (OSDH), we established a pilot surveillance system for VTE events in Oklahoma County, OK during 2012–2014. The OSDH Commissioner of Health made VTE events reportable conditions from 2010 to 2015, which facilitated our acquisition of hospital discharge data with patient identifiers for 2010–2012 from the OSDH. The data included inpatient, outpatient surgical, and ambulatory surgery center discharges. A deep vein thrombosis diagnosis was defined as the presence of any of the ICD-9-CM codes 451.1x, 451.81, 451.83, 453.2, 453.4x, 671.3x, and 671.4x. A pulmonary embolism diagnosis was defined as the presence of either of the ICD-9-CM codes 415.1x and 673.2x. Data were de-duplicated and linked across datasets using Link Plus software incorporating patient identifying variables. Duplicate events for the same person caused by hospital transfers were defined a priori as a second hospital admission with a VTE diagnosis code occurring within 72 hours of the previous discharge date, with a VTE present-on-admission (POA) code for the second admission of “Yes” or “Unknown.” Potentially recurrent events were defined as two hospital admissions of the same patient ≥72 hours apart, each with a VTE diagnosis. Census Bureau estimates for 2010–2012 were used to define the population at risk in Oklahoma County. Incidence rates (IR) and 95% confidence intervals (CI) were calculated using the Poisson distribution and reported as events per 100,000 population per year. Rate differences and excess fractions were calculated to account for the contribution of recurrent and duplicate events to overall estimates and to differentiate between event-based and patient-based incidence estimates.
Results: We identified 3,299 unique patients with VTE events. The overall event-based IR for VTE events was 249 (95% CI: 241–257). The IR for potentially recurrent events was 35 (95% CI: 32–38), and for duplicate events caused by patient transfers it was 13 (95% CI: 11–14). Thus, the rate difference between event-based and patient-based estimates was 48 (95% CI: 44–51), giving a patient-based IR for first-time events of 201 (95% CI: 194–208). The excess fraction was 19.2% (95% CI: 17.8%–20.5%), of which 14.1% (95% CI: 12.9%–15.2%) is attributed to potential recurrent events and 5.1% (95% CI: 4.4%–5.8%) to duplicate events caused by patient transfers.
Conclusions: Using event-based estimates for VTE disease over-estimated the incidence rate of first-time VTE events by up to 20%. Included in this excess estimate is the burden caused by potential recurrent events (14%) and duplicate events caused by patient transfers (5%). We designed our case definitions to accurately measure first-time events and to capture all duplicate events and potential recurrent events. Assuming these data are representative of national trends, applying these excess fractions to estimates from de-identified data may improve the validity of measuring the incidence of first-time VTE events from de-identified hospital discharge data.
Disclosures: Bratzler: Centers for Disease Control and Prevention: Consultancy; Sanofi Pasteur: Consultancy. Raskob: Bayer Healthcare: Consultancy, Honoraria; BMS: Consultancy, Honoraria; Daiichi Sankyo: Consultancy, Honoraria; Janssen Pharmaceuticals: Consultancy, Honoraria; Pfizer: Consultancy, Honoraria; ISIS Pharmaceuticals: Consultancy, Honoraria.
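The rate and excess-fraction arithmetic in the abstract follows the standard pattern of events per 100,000 population with exact Poisson confidence limits. The sketch below reproduces that pattern with invented counts and person-years; it is not the study's data.

    # Incidence rates per 100,000 with exact Poisson confidence limits, and
    # the excess fraction of event-based over patient-based rates.
    from scipy.stats import chi2

    def poisson_rate_ci(events, person_years, per=100_000, alpha=0.05):
        lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
        hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
        scale = per / person_years
        return events * scale, lo * scale, hi * scale

    event_based, *_ = poisson_rate_ci(events=4500, person_years=1_800_000)     # hypothetical
    patient_based, *_ = poisson_rate_ci(events=3650, person_years=1_800_000)   # hypothetical
    excess_fraction = (event_based - patient_based) / event_based
    print(event_based, patient_based, f"excess fraction = {excess_fraction:.1%}")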
APA, Harvard, Vancouver, ISO, and other styles
43

Kiely, Belinda E., Andrew J. Martin, Martin H. N. Tattersall, Anna K. Nowak, David Goldstein, Nicholas R. C. Wilcken, David K. Wyld, et al. "The Median Informs the Message: Accuracy of Individualized Scenarios for Survival Time Based on Oncologists' Estimates." Journal of Clinical Oncology 31, no. 28 (October 1, 2013): 3565–71. http://dx.doi.org/10.1200/jco.2012.44.7821.

Full text
Abstract:
Purpose: To determine the accuracy and usefulness of oncologists' estimates of survival time in individual patients with advanced cancer.
Patients and Methods: Twenty-one oncologists estimated the “median survival of a group of identical patients” for each of 114 patients with advanced cancer. Accuracy was defined by the proportions of patients with an observed survival time bounded by prespecified multiples of their estimated survival time. We expected 50% to live longer (or shorter) than their oncologist's estimate (calibration), 50% to live from half to double their estimate (typical scenario), 5% to 10% to live ≤ one quarter of their estimate (worst-case scenario), and 5% to 10% to live three or more times their estimate (best-case scenario). Estimates within 0.67 to 1.33 times observed survival were deemed precise. Discriminative value was assessed with Harrell's C-statistic and prognostic significance with proportional hazards regression.
Results: Median survival time was 11 months. Oncologists' estimates were relatively well-calibrated (61% shorter than observed), imprecise (29% from 0.67 to 1.33 times observed), and moderately discriminative (Harrell's C-statistic 0.63; P = .001). The proportion of patients with an observed survival half to double their oncologist's estimate was 63%, ≤ one quarter of their oncologist's estimate was 6%, and three or more times their oncologist's estimate was 14%. Independent predictors of observed survival were the oncologist's estimate (hazard ratio [HR] = 0.92; P = .004), dry mouth (HR = 5.1; P < .0001), alkaline phosphatase more than 101 U/L (HR = 2.8; P = .0002), Karnofsky performance status ≤ 70 (HR = 2.3; P = .007), prostate primary (HR = 0.23; P = .002), and steroid use (HR = 2.4; P = .02).
Conclusion: Oncologists' estimates of survival time were relatively well-calibrated, moderately discriminative, independently associated with observed survival, and a reasonable basis for estimating worst-case, typical, and best-case scenarios for survival.
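The accuracy definitions above translate directly into simple proportions of the ratio of observed to estimated survival. The sketch below computes them for a small invented data set; it only illustrates the definitions, not the study's results.

    # Scenario accuracy measures for survival estimates: calibration, the
    # typical (half-to-double), worst-case and best-case scenario proportions,
    # and precision (estimate within 0.67-1.33 times observed survival).
    import numpy as np

    estimated = np.array([6, 12, 9, 24, 3, 18, 10, 15])   # months, hypothetical
    observed  = np.array([4, 20, 9, 5, 11, 16, 7, 50])    # months, hypothetical
    ratio = observed / estimated

    calibration = np.mean(estimated < observed)           # ~0.5 if well calibrated
    typical = np.mean((ratio >= 0.5) & (ratio <= 2.0))
    worst   = np.mean(ratio <= 0.25)
    best    = np.mean(ratio >= 3.0)
    precise = np.mean((estimated / observed >= 0.67) & (estimated / observed <= 1.33))
    print(calibration, typical, worst, best, precise)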
APA, Harvard, Vancouver, ISO, and other styles
44

Lichtenstein, Paul, Camilla Björk, Christina M. Hultman, Edward Scolnick, Pamela Sklar, and Patrick F. Sullivan. "Recurrence risks for schizophrenia in a Swedish National Cohort." Psychological Medicine 36, no. 10 (July 25, 2006): 1417–25. http://dx.doi.org/10.1017/s0033291706008385.

Full text
Abstract:
Objective: Recurrence risk estimates for schizophrenia are fundamental to our understanding of this complex disease. Widely cited estimates are from small/older samples. If these estimates are biased upwards, then the rationale for molecular genetic studies of schizophrenia may not be as solid.
Method: We created a population-based, Swedish national cohort by linking two Swedish national registers into a relational database (the Swedish Hospital Discharge Register and the Multi-Generation Register). Affection was defined as the lifetime presence of at least two in-patient hospitalizations with a core schizophrenia diagnosis.
Results: Merging the Swedish national registers created a population-based cohort of 7,739,202 individuals of known parentage. The lifetime prevalence of the narrow definition of schizophrenia was 0.407% and we estimated that one in every 79 extended Swedish families had been impacted by schizophrenia. The proportion of affected families with multiple affected members was 3.81%. Recurrence risk estimates for all relative types were strikingly similar to those reported in smaller and older studies. For example, we estimated λsibs at 8.55 [95% confidence interval (CI) 7.86–9.57] compared with a literature estimate of 8.6.
Conclusions: In the largest and most comprehensive sample yet studied, we confirm the accepted estimates of recurrence risks for schizophrenia, and provide more accurate estimates of recurrence risks of schizophrenia in relatives, an estimate of the familial impact of schizophrenia, and the multiplex proportion (essential for gauging the generalizability of findings from multiplex pedigrees). These data may be valuable for planning and interpreting genetic studies of schizophrenia.
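The recurrence risk ratio λsibs reported above is the risk to siblings of an affected individual divided by the population lifetime prevalence, so the abstract's two headline numbers imply an absolute sibling risk of roughly 3.5%. A one-line check, using only figures quoted in the abstract:

    # lambda_sibs = sibling risk / population prevalence, so
    # sibling risk = lambda_sibs * prevalence.
    prevalence = 0.00407        # lifetime prevalence, 0.407%
    lambda_sibs = 8.55          # reported estimate
    print(f"implied sibling risk ~ {lambda_sibs * prevalence:.1%}")   # ~3.5%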
APA, Harvard, Vancouver, ISO, and other styles
45

Gayeski, Nick, George Pess, and Tim Beechie. "A life-table model estimation of the parr capacity of a late 19th century Puget Sound steelhead population." FACETS 1, no. 1 (March 1, 2017): 83–104. http://dx.doi.org/10.1139/facets-2015-0010.

Full text
Abstract:
An age-structured life-cycle model of steelhead (Oncorhynchus mykiss) for the Stillaguamish River in Puget Sound, Washington, USA, was employed to estimate the number of age-1 steelhead parr that could have produced the estimated adult return of 69 000 in 1895. We then divided the estimated parr numbers by the estimated area of steelhead rearing habitat in the Stillaguamish River basin in 1895 and under current conditions to estimate density of rearing steelhead then and now. Scaled to estimates of total wetted area of tributary and mainstem shallow shoreline habitat, our historic estimates averaged 0.39–0.49 parr·m−2, and ranged from 0.24 to 0.7 parr·m−2. These values are significantly greater than current densities in the Stillaguamish (mainstem average: 0.15 parr·m−2, tributaries: 0.07 parr·m−2), but well within the range of recent estimates of steelhead parr rearing densities in high-quality habitats. Our results indicate that modest improvement in the capacity of mainstem and tributary rearing habitat in Puget Sound rivers will yield large recovery benefits if realized in a large proportion of the area of river basins currently accessible to steelhead.
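The density comparison rests on simple arithmetic: estimated parr abundance divided by estimated wetted rearing area. The sketch below shows that step with invented inputs chosen only to land near the reported range; they are not the paper's estimates.

    # Parr rearing density = estimated age-1 parr / estimated wetted area (m^2).
    def parr_density(parr_estimate, wetted_area_m2):
        return parr_estimate / wetted_area_m2

    print(parr_density(parr_estimate=1_200_000, wetted_area_m2=2_800_000))   # ~0.43 parr per m^2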
APA, Harvard, Vancouver, ISO, and other styles
46

Imbayarwo-Chikosi, E. V., S. M. Makuza, C. B. A. Wollny, and J. W. Banda. "Genetic and phenotypic parameters for individual cow somatic cell counts in Zimbabwean Holstein-Friesian cattle." Archives Animal Breeding 44, no. 2 (October 10, 2001): 129–38. http://dx.doi.org/10.5194/aab-44-129-2001.

Full text
Abstract:
Genetic and phenotypic parameters for lactation-average individual cow somatic cell counts (SCC) in Holstein cattle were estimated. Records from the Zimbabwe Dairy Services Association for the period 1994 to 1998 included a total of 7912 lactation records, of which 1453 were first-lactation, 2211 second-lactation, and 4248 third- and later-lactation records. SCC were transformed through a base-2 logarithm. Genetic parameters were estimated with the AIREML programme. A univariate mixed animal model was used to estimate heritabilities and repeatabilities. The heritability estimates for log2SCC were 0.10, 0.12 and 0.14 for first, second, and third and subsequent lactations, respectively. The estimate of repeatability for log2SCC was 0.17. Genetic and phenotypic correlations were estimated with a multivariate mixed animal model. Genetic correlation estimates between log2SCC and the production traits were low to medium and negative (−0.05 to −0.55), whilst the phenotypic correlation estimates were low and negative (−0.04 to −0.22). As individual cow SCC data collection continues in Zimbabwe, there are opportunities for genetic re-evaluation of this trait with a larger data set. This could possibly include udder type traits, most of which have been reported to be associated with SCC.
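The reported heritabilities and repeatability are ratios of variance components from a repeated-records animal model applied to log2-transformed counts. The sketch below shows those definitions only; the variance components are hypothetical, not the values estimated from the Zimbabwean data.

    # log2 transform of SCC and variance-component definitions of
    # heritability and repeatability for a repeated-records model.
    import math

    def heritability(v_additive, v_perm_env, v_residual):
        return v_additive / (v_additive + v_perm_env + v_residual)

    def repeatability(v_additive, v_perm_env, v_residual):
        return (v_additive + v_perm_env) / (v_additive + v_perm_env + v_residual)

    print(math.log2(200_000))                  # SCC of 200,000 -> log2SCC ~17.6
    print(heritability(0.12, 0.08, 0.80))      # ~0.12
    print(repeatability(0.12, 0.08, 0.80))     # ~0.20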
APA, Harvard, Vancouver, ISO, and other styles
47

Hammitt, James K., Jin-Tan Liu, and Jin-Long Liu. "Contingent valuation of a Taiwanese wetland." Environment and Development Economics 6, no. 2 (May 2001): 259–68. http://dx.doi.org/10.1017/s1355770x01000146.

Full text
Abstract:
Wetlands provide a variety of important environmental services including flood control, wildlife habitat, waste treatment, and recreational opportunities. Because most of these services are public goods, the value of wetland preservation cannot be directly obtained from market prices but may be estimated using revealed-preference or stated-preference methods. We estimate the value to local residents of protecting the Kuantu wetland in Taiwan using contingent valuation. Estimates are sensitive to question format, with estimates using a double-bounded dichotomous-choice format about three times larger than estimates using a single open-ended question. Using the open-ended format, the estimated annual mean household willingness to pay to preserve the Kuantu wetland is about US$21. Using the dichotomous-choice questions, the value is about US$65. These estimates suggest the total present-value willingness to pay to preserve Kuantu wetland is about US$200 million to US$1.2 billion (discounted at 5–10%).
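The order of magnitude of the reported present-value range can be reproduced by treating the annual household willingness to pay as a perpetuity discounted at 5–10% and scaling by the number of affected households. The household count in the sketch below is an assumption for illustration; it is not given in the abstract.

    # Total present value of wetland preservation: annual WTP per household,
    # treated as a perpetuity at rate r, times the number of households.
    def total_present_value(annual_wtp, discount_rate, households):
        return annual_wtp / discount_rate * households

    print(total_present_value(21, 0.10, 1_000_000))   # open-ended WTP, 10%: ~US$0.21 billion
    print(total_present_value(65, 0.05, 1_000_000))   # dichotomous-choice WTP, 5%: ~US$1.3 billion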
APA, Harvard, Vancouver, ISO, and other styles
48

Shige, Shoichi, Satoshi Kida, Hiroki Ashiwake, Takuji Kubota, and Kazumasa Aonashi. "Improvement of TMI Rain Retrievals in Mountainous Areas." Journal of Applied Meteorology and Climatology 52, no. 1 (January 2013): 242–54. http://dx.doi.org/10.1175/jamc-d-12-074.1.

Full text
Abstract:
Heavy rainfall associated with shallow orographic rainfall systems has been underestimated by passive microwave radiometer algorithms owing to weak ice scattering signatures. The authors improve the performance of estimates made using a passive microwave radiometer algorithm, the Global Satellite Mapping of Precipitation (GSMaP) algorithm, from data obtained by the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) for orographic heavy rainfall. An orographic/nonorographic rainfall classification scheme is developed on the basis of orographically forced upward vertical motion and the convergence of surface moisture flux estimated from ancillary data. Lookup tables derived from orographic precipitation profiles are used to estimate rainfall for an orographic rainfall pixel, whereas those derived from original precipitation profiles are used to estimate rainfall for a nonorographic rainfall pixel. Rainfall estimates made using the revised GSMaP algorithm are in better agreement with estimates from data obtained by the radar on the TRMM satellite and by gauge-calibrated ground radars than are estimates made using the original GSMaP algorithm.
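The orographic/nonorographic classification described above amounts to thresholding two diagnostics, orographically forced upward motion and surface moisture-flux convergence, and switching lookup tables accordingly. The sketch below is a schematic of that decision only; the threshold values and variable names are hypothetical, not those of the GSMaP algorithm.

    # Schematic orographic / nonorographic rain classification: both the
    # orographically forced vertical motion and the surface moisture-flux
    # convergence must exceed (hypothetical) thresholds.
    def classify_rain_pixel(w_orographic, moisture_flux_convergence,
                            w_threshold=0.01, q_threshold=0.0):
        if w_orographic > w_threshold and moisture_flux_convergence > q_threshold:
            return "orographic"       # use lookup tables from orographic profiles
        return "nonorographic"        # use lookup tables from original profiles

    print(classify_rain_pixel(w_orographic=0.05, moisture_flux_convergence=1e-4))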
APA, Harvard, Vancouver, ISO, and other styles
49

Schwarz, Carl J., Richard E. Bailey, James R. Irvine, and Frank C. Dalziel. "Estimating Salmon Spawning Escapement Using Capture–Recapture Methods." Canadian Journal of Fisheries and Aquatic Sciences 50, no. 6 (June 1, 1993): 1181–97. http://dx.doi.org/10.1139/f93-135.

Full text
Abstract:
We describe a method of estimating the spawning escapement of coho salmon (Oncorhynchus kisutch) from capture–recapture data. Traditional capture–recapture analyses do not directly provide estimates of escapements; however, we show how simple modifications to the Jolly–Seber method can estimate the total number of fish returning to a river including those that enter and die between sampling occasions. Spawning runs of Pacific salmon were simulated and their escapements estimated using capture–recapture. The performance of the maximum likelihood estimators (MLEs), the censored MLEs, the constrained MLEs, and less-biased estimators in estimating the run sizes and providing estimates of precision were evaluated. Simulation results indicated that constrained MLEs provided the most appropriate estimates of escapement and that standard errors be computed using the large-sample variance formulae evaluated at these estimates. These methods were used to estimate the escapements of coho salmon to a small river on Vancouver Island in 1989 and 1990.
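The paper's estimator is a modified Jolly–Seber method; as a much simpler illustration of the underlying mark–recapture logic, the sketch below uses the two-sample Chapman (Lincoln–Petersen) estimator with hypothetical counts. It is not the authors' method and does not handle entries and deaths between sampling occasions.

    # Two-sample Chapman estimator of abundance, with its standard error.
    def chapman_estimate(marked_first, caught_second, recaptured):
        n_hat = (marked_first + 1) * (caught_second + 1) / (recaptured + 1) - 1
        var = ((marked_first + 1) * (caught_second + 1) *
               (marked_first - recaptured) * (caught_second - recaptured) /
               ((recaptured + 1) ** 2 * (recaptured + 2)))
        return n_hat, var ** 0.5

    n_hat, se = chapman_estimate(marked_first=150, caught_second=200, recaptured=30)
    print(round(n_hat), round(se))   # ~978 +/- ~142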
APA, Harvard, Vancouver, ISO, and other styles
50

Emhjellen, Magne, Kjetil Emhjellen, and Petter Osmundsen. "Cost Estimation Overruns in the North Sea." Project Management Journal 34, no. 1 (March 2003): 23–29. http://dx.doi.org/10.1177/875697280303400104.

Full text
Abstract:
Recently, a Norwegian government report on cost overruns in North Sea projects was presented (NOU 1999:11). It concluded that there was a 25% increase in development costs from project sanction (POD, Plan for Operation and Development) to the last CCE (Capital Cost Estimate) for the 11 oil field projects investigated. Reasons given for the overruns included unclear project assumptions in the early phase, optimistic interpolation of previous project assumptions, optimistic estimates, and underestimation of uncertainty. In this paper we highlight the possibility that the cost overruns are not necessarily all due to the reasons given, but partly to an error in the estimation and reporting of the capital expenditure cost (CAPEX). Usually the CAPEX is given by a single cost figure, with some indication of its probability distribution. The oil companies report, and are required by government authorities to report, the estimated 50/50 (median) cost estimate instead of the estimated expected value cost estimate. We demonstrate how the practice of using a 50/50 (median) CAPEX estimate for the 11 projects, when the cost uncertainty distributions are asymmetric, may explain at least part of the “overruns.” Hence, we advocate changing the practice of using 50/50 cost estimates to using expected value cost estimates for project management and decision purposes.
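The mechanism the authors describe is easy to see with a right-skewed cost distribution: the 50/50 (median) estimate lies below the expected value, so projects reported at the median will on average appear to overrun. The sketch below uses a lognormal cost distribution with illustrative parameters.

    # Median versus expected value of a lognormal CAPEX distribution.
    import math

    median_capex = 100.0    # 50/50 estimate, arbitrary units
    sigma = 0.4             # log-scale spread controlling the asymmetry

    mu = math.log(median_capex)                    # median = exp(mu)
    expected_capex = math.exp(mu + sigma**2 / 2)   # mean = exp(mu + sigma^2/2)
    print(expected_capex)                          # ~108.3, i.e. ~8% above the median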
APA, Harvard, Vancouver, ISO, and other styles
