Journal articles on the topic 'Total Expected Error'

Consult the top 50 journal articles for your research on the topic 'Total Expected Error.'

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Aldelgawy, Mohammed. "Evaluation of Cadastral Work Done Using Total Station Instrument." Academic Perspective Procedia 1, no. 1 (November 9, 2018): 115–29. http://dx.doi.org/10.33793/acperpro.01.01.24.

Abstract:
The total station has become the main tool in most engineering work; accordingly, evaluating that work has gained significant importance. A methodology for evaluating the precision of cadastral work done using a total station is presented here. The technique is based on propagating the random errors of the quantities measured by the total station, i.e., the distance and the horizontal and vertical angles. The random error in distance is produced by the EDM unit integrated into the total station, whereas the random errors in the horizontal and vertical angles are produced by the integrated theodolite unit. Moreover, the backsight process conducted in the field introduces an additional random error into the horizontal angles. This research studies how the above errors affect the resulting rectangular coordinates measured by the total station for each observed point. Experiments were done using both simulated and real datasets. Results showed that the calculated errors were close to the expected errors and did not exceed the allowable ones.
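Since the method described here is, at bottom, first-order propagation of distance and angle errors through the polar-to-rectangular conversion, a compact sketch may help. The forward model, the assumption of uncorrelated errors, and the instrument specification below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def coordinate_sigmas(s, v, h, sigma_s, sigma_v, sigma_h):
    """First-order propagation of random errors in slope distance s [m] and
    vertical/horizontal angles v, h [rad] into (E, N, U) coordinate errors.
    Measurement errors are assumed uncorrelated (a common simplification)."""
    # Forward model: E = s cos v sin h, N = s cos v cos h, U = s sin v
    J = np.array([
        [np.cos(v)*np.sin(h), -s*np.sin(v)*np.sin(h),  s*np.cos(v)*np.cos(h)],
        [np.cos(v)*np.cos(h), -s*np.sin(v)*np.cos(h), -s*np.cos(v)*np.sin(h)],
        [np.sin(v),            s*np.cos(v),            0.0],
    ])
    cov = J @ np.diag([sigma_s**2, sigma_v**2, sigma_h**2]) @ J.T
    return np.sqrt(np.diag(cov))   # 1-sigma errors of E, N, U

# Example: 2 mm + 2 ppm EDM at 100 m, 5-arcsecond angular accuracy
arcsec = np.pi / (180 * 3600)
print(coordinate_sigmas(100.0, np.radians(5), np.radians(40),
                        0.002 + 2e-6 * 100, 5 * arcsec, 5 * arcsec))
```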
2

Karon, Brad S., James C. Boyd, and George G. Klee. "Glucose Meter Performance Criteria for Tight Glycemic Control Estimated by Simulation Modeling." Clinical Chemistry 56, no. 7 (July 1, 2010): 1091–97. http://dx.doi.org/10.1373/clinchem.2010.145367.

Abstract:
Background: Glucose meter analytical performance criteria required for safe and effective management of patients on tight glycemic control (TGC) are not currently defined. We used simulation modeling to relate glucose meter performance characteristics to insulin dosing errors during TGC. Methods: We used 29 920 glucose values from patients on TGC at 1 institution to represent the expected distribution of glucose values during TGC, and we used 2 different simulation models to relate glucose meter analytical performance to insulin dosing error using these 29 920 initial glucose values and assuming 10%, 15%, or 20% total allowable error (TEa) criteria. Results: One-category insulin dosing errors were common under all error conditions. Two-category insulin dosing errors occurred more frequently when either 20% or 15% TEa was assumed compared with 10% total error. Dosing errors of 3 or more categories, those most likely to result in hypoglycemia and thus patient harm, occurred infrequently under all error conditions with the exception of 20% TEa. Conclusions: Glucose meter technologies that operate within a 15% total allowable error tolerance are unlikely to produce large (≥3-category) insulin dosing errors during TGC. Increasing performance to 10% TEa should reduce the frequency of 2-category insulin dosing errors, although additional studies are necessary to determine the clinical impact of such errors during TGC. Current criteria that allow 20% total allowable error in glucose meters may not be optimal for patient management during TGC.
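To make the simulation idea concrete: the sketch below pairs each "true" glucose value with a noisy meter reading whose spread is set by a TEa criterion, then counts how often readings cross insulin dosing-category boundaries. The category edges, the noise model (Gaussian with 1.96σ = TEa), and all numbers are assumptions for illustration; the paper's two models and dosing protocol are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def dose_category(glucose):
    """Map glucose (mg/dL) to an insulin dosing category.
    Hypothetical category edges, not the paper's protocol."""
    return np.digitize(glucose, [80, 110, 150, 200])

def category_error_rates(true_glucose, tea, n_rep=100):
    """Simulate meter readings under a total allowable error criterion `tea`
    (fraction), modeled as zero-mean Gaussian noise with 1.96*sigma = tea,
    and tabulate insulin dosing-category discrepancies of size 0..3."""
    g = np.repeat(true_glucose, n_rep)
    measured = g * (1 + rng.normal(0, tea / 1.96, g.size))
    diff = np.abs(dose_category(measured) - dose_category(g))
    return np.bincount(diff, minlength=4)[:4] / diff.size

true_values = rng.uniform(60, 250, 5000)  # stand-in for the 29 920 TGC values
for tea in (0.10, 0.15, 0.20):
    print(f"TEa={tea:.0%}:", category_error_rates(true_values, tea).round(4))
```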
3

Rochon, Yves J., Peyman Rahnama, and Ian C. McDade. "Satellite Measurement of Stratospheric Winds and Ozone Using Doppler Michelson Interferometry. Part II: Retrieval Method and Expected Performance." Journal of Atmospheric and Oceanic Technology 23, no. 6 (June 1, 2006): 770–84. http://dx.doi.org/10.1175/jtech1882.1.

Abstract:
This paper concerns the retrieval of horizontal wind and ozone number density from measurement simulations for the Stratospheric Wind Interferometer for Transport Studies (SWIFT). This instrument relies on the concept of imaging Doppler Michelson interferometry applied to thermal infrared emission originating from the stratosphere. The instrument and measurement simulations are described in detail in the first of this series of two papers. In this second paper, a summary of the measurement simulations and a data retrieval method suited to these measurements are first presented. The inversion method consists of the maximum a posteriori solution approach with added differential regularization and, when required, iterations performed with the Gauss–Newton method. Inversion characterization and an error analysis have been performed. Retrieval noise estimates have been obtained both from derived covariance matrices and from sample inversions. Retrieval noise levels for wind and ozone number density of ∼1–3 m s⁻¹ and <1% have been obtained over the altitude range of 20–45 km, with Backus–Gilbert resolving lengths of ∼1.5 km. Retrieval noise levels over the extended altitude range of 15–55 km are less than 10 m s⁻¹ and 2%. The sensitivity to other error sources has been examined through a few sample realizations. The contributions from these other errors can be as important as, or more important than, retrieval noise. An error budget identifying the wind and ozone error levels contributing to total errors of 5 m s⁻¹ and 5% for altitudes of 20–45 km has been prepared, relying on the retrieval errors and knowledge of the instrument design.
4

Wee, Nam-Sook. "Optimal Maintenance Schedules of Computer Software." Probability in the Engineering and Informational Sciences 4, no. 2 (April 1990): 243–55. http://dx.doi.org/10.1017/s026996480000156x.

Abstract:
We present a decision procedure to determine the optimal maintenance intervals of computer software throughout its operational phase. Our model accounts for the average cost of each maintenance activity and the damage cost per failure, with future costs discounted. Our decision policy is optimal in the sense that it minimizes the expected total cost. Our model assumes that the total number of errors in the software has a Poisson distribution with known mean λ and that each error causes failures independently of other errors at a known constant failure rate. We study the structure of the optimal policy in terms of λ and present efficient numerical algorithms to compute the optimal maintenance time intervals, the optimal total number of maintenances, and the minimal total expected cost throughout the maintenance phase.
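A Monte Carlo sketch of this kind of objective may clarify what "expected total discounted cost" means here. The code below evaluates a toy variant loosely following the abstract (Poisson-many latent errors, constant per-error failure rate, an error removed at the first maintenance after it fails); all parameter values, and the simplification that every failed error is fixed at the next maintenance, are illustrative assumptions rather than the paper's model or algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_cost(times, lam=50.0, phi=0.02, c_m=10.0, c_f=1.0,
                  r=0.001, n_sim=2000):
    """Monte Carlo estimate of expected discounted total cost for maintenance
    at the given times: Poisson(lam) latent errors, each producing failures
    as a Poisson process of rate phi until the first maintenance after a
    failure removes it; c_m per maintenance, c_f per failure, continuous
    discounting at rate r. Values are illustrative, not from the paper."""
    total = 0.0
    for _ in range(n_sim):
        cost = sum(c_m * np.exp(-r * t) for t in times)
        for _ in range(rng.poisson(lam)):
            t_prev = 0.0
            for t_next in times:
                t = t_prev + rng.exponential(1 / phi)
                failed = False
                while t <= t_next:          # failures in (t_prev, t_next]
                    cost += c_f * np.exp(-r * t)
                    failed = True
                    t += rng.exponential(1 / phi)
                if failed:
                    break                   # error removed at maintenance
                t_prev = t_next
        total += cost
    return total / n_sim

# Compare a few equally spaced schedules over a horizon of 100 time units
for k in (2, 4, 8):
    print(k, round(expected_cost(np.linspace(100 / k, 100, k)), 1))
```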
5

Indriani, Silvia. "Students’ Errors in Using the Simple Present Tense at Polytechnic ATI Padang." Lingua Cultura 13, no. 3 (September 27, 2019): 217. http://dx.doi.org/10.21512/lc.v13i3.5840.

Abstract:
The research aimed at analyzing errors in the use of the simple present tense at the Logistics Management of Agro-Industry department of Polytechnic ATI Padang. A qualitative method with a descriptive approach was applied. The sample comprised 15% of the 153 total students, i.e., 23 students. Data were collected through a writing test, namely a descriptive essay. The results show that many students commit errors in using the simple present tense. The errors are classified into four types: omission, addition, misinformation, and misordering. Of the 107 errors in total, omission was the most frequent (61 errors, or 57%). Misinformation was in second place with 29 errors (27.1%). Addition accounted for 11.2%, with 12 errors. The least frequent error was misordering, at 4.7% with only five errors. In conclusion, the most dominant error made by the students is omission (57%), and misordering is the least common (4.7%). Lecturers are therefore expected to improve their strategies for teaching the simple present tense to reduce the number of students' errors.
6

Mehtätalo, Lauri, and Annika Kangas. "An approach to optimizing field data collection in an inventory by compartments." Canadian Journal of Forest Research 35, no. 1 (January 1, 2005): 100–112. http://dx.doi.org/10.1139/x04-139.

Abstract:
This study presents models for the expected error of the total volume and saw timber volume due to sampling errors of stand measurements. The measurements considered are horizontal point sample plots, stem numbers from circular plots, sample tree heights, sample order statistics (i.e., quantile trees), and sample tree heights from the previous inventory. Different measurement strategies were constructed by systematically varying the numbers of these measurements. A model system developed for this study was used in a data set of 170 stands to predict the total volume and saw timber volume of each stand with each measurement strategy. The errors of these volumes were modeled using stand characteristics and the numbers of measurements as predictors. The most important factors affecting the error in the total volume were the number of horizontal point sample plots and height sample trees. In addition, the number of quantile trees had a strong effect on the error of saw timber volume. The errors were slightly reduced when an old height measurement was used. There were significant interactions between stand characteristics and measurement strategies. Thus, the optimal measurement strategy varies between stands. A demonstration is provided of how constrained optimization can be used to find the optimal strategy for any one stand.
7

Verhoelst, T., J. Granville, F. Hendrick, U. Köhler, C. Lerot, J. P. Pommereau, A. Redondas, M. Van Roozendael, and J. C. Lambert. "Metrology of ground-based satellite validation: co-location mismatch and smoothing issues of total ozone comparisons." Atmospheric Measurement Techniques 8, no. 12 (December 2, 2015): 5039–62. http://dx.doi.org/10.5194/amt-8-5039-2015.

Abstract:
Comparisons with ground-based correlative measurements constitute a key component in the validation of satellite data on atmospheric composition. The error budget of these comparisons contains not only the measurement errors but also several terms related to differences in sampling and smoothing of the inhomogeneous and variable atmospheric field. A versatile system for Observing System Simulation Experiments (OSSEs), named OSSSMOSE, is used here to quantify these terms. Based on the application of pragmatic observation operators onto high-resolution atmospheric fields, it allows a simulation of each individual measurement, and consequently, also of the differences to be expected from spatial and temporal field variations between both measurements making up a comparison pair. As a topical case study, the system is used to evaluate the error budget of total ozone column (TOC) comparisons between GOME-type direct fitting (GODFITv3) satellite retrievals from GOME/ERS2, SCIAMACHY/Envisat, and GOME-2/MetOp-A, and ground-based direct-sun and zenith-sky reference measurements such as those from Dobson, Brewer, and zenith-scattered light (ZSL-)DOAS instruments, respectively. In particular, the focus is placed on the GODFITv3 reprocessed GOME-2A data record vs. the ground-based instruments contributing to the Network for the Detection of Atmospheric Composition Change (NDACC). The simulations are found to reproduce the actual measurements almost to within the measurement uncertainties, confirming that the OSSE approach and its technical implementation are appropriate. This work reveals that many features of the comparison spread and median difference can be understood as due to metrological differences, even when using strict co-location criteria. In particular, sampling difference errors exceed measurement uncertainties regularly at most mid- and high-latitude stations, with values up to 10 % and more in extreme cases. Smoothing difference errors only play a role in the comparisons with ZSL-DOAS instruments at high latitudes, especially in the presence of a polar vortex due to the strong TOC gradient it induces. At tropical latitudes, where TOC variability is lower, both types of errors remain below about 1 % and consequently do not contribute significantly to the comparison error budget. The detailed analysis of the comparison results, including the metrological errors, suggests that the published random measurement uncertainties for GODFITv3 reprocessed satellite data are potentially overestimated, and adjustments are proposed here. This successful application of the OSSSMOSE system, which closes the error budget of TOC comparisons for the first time, bodes well for potential future applications, which are briefly touched upon.
8

Seoane, Fernando, Shirin Abtahi, Farhad Abtahi, Lars Ellegård, Gudmundur Johannsson, Ingvar Bosaeus, and Leigh C. Ward. "Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations." BioMed Research International 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/656323.

Abstract:
For several decades, electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopy (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as methods. Even the limits of agreement produced from the Bland–Altman analysis indicated that the performance of the single-frequency Sun prediction equations at the population level was close to the performance of both BIS methods; however, when comparing the mean absolute percentage error between the single-frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both the population and individual levels, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing fluid overload in dialysis. To reach expected errors below 4–5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health-monitoring applications.
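The headline metric in this comparison, mean absolute percentage error, is simple to compute; the short sketch below shows the calculation on made-up total body water values in litres. The function name and all numbers are illustrative only, not data from the study.

```python
import numpy as np

def mape(predicted, reference):
    """Mean absolute percentage error between predicted and reference TBW."""
    predicted, reference = np.asarray(predicted), np.asarray(reference)
    return 100 * np.mean(np.abs(predicted - reference) / reference)

# Hypothetical TBW values in litres: dilution reference vs. two predictors
ref  = np.array([36.1, 42.5, 31.8, 48.2])
bis  = np.array([35.2, 43.9, 30.9, 49.5])   # BIS-style prediction
sf50 = np.array([34.0, 44.8, 33.6, 50.9])   # 50 kHz regression prediction
print(f"BIS MAPE: {mape(bis, ref):.1f}%   50 kHz MAPE: {mape(sf50, ref):.1f}%")
```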
9

Schouten, S. M., M. E. van de Velde, G. J. L. Kaspers, L. B. Mokkink, I. M. van der Sluis, C. van den Bos, A. Hartman, F. C. H. Abbink, and M. H. van den Berg. "Measuring vincristine-induced peripheral neuropathy in children with cancer: validation of the Dutch pediatric–modified Total Neuropathy Score." Supportive Care in Cancer 28, no. 6 (November 16, 2019): 2867–73. http://dx.doi.org/10.1007/s00520-019-05106-3.

Abstract:
Purpose: The aims were to evaluate the construct validity and reliability of the Dutch version of the pediatric-modified Total Neuropathy Score (ped-mTNS) for assessing vincristine-induced peripheral neuropathy (VIPN) in Dutch pediatric oncology patients aged 5–18 years. Methods: Construct validity (primary aim) of the ped-mTNS was determined by testing hypotheses about the expected correlation between scores of the ped-mTNS (range: 0–32) and the Common Terminology Criteria for Adverse Events (CTCAE) (range: 0–18) for patients and healthy controls, and by comparing patients and controls regarding their total ped-mTNS scores and the proportion of children identified with VIPN. Inter-rater and intra-rater reliability and measurement error (secondary aims) were assessed in a subgroup of study participants. Results: Among the 112 children (56 patients and 56 age- and gender-matched healthy controls) evaluated, the correlation between CTCAE and ped-mTNS scores was moderate (r = 0.60), as expected. Moreover, as expected, patients had significantly higher ped-mTNS scores and more frequent symptoms of VIPN compared with controls (both p < .001). Reliability as measured within the intra-rater group (n = 10) (intra-class correlation coefficient ICC_agreement = 0.64, standard error of measurement SEM_agreement = 2.92, and smallest detectable change SDC_agreement = 8.1) and within the inter-rater subgroup (n = 10) (ICC_agreement = 0.63, SEM_agreement = 3.7, and SDC_agreement = 10.26) indicates insufficient reliability. Conclusion: The Dutch version of the ped-mTNS appears to have good construct validity for assessing VIPN in a Dutch pediatric oncology population, whereas reliability appears to be insufficient and measurement error high. To improve the standardization of VIPN assessment in children, future research aimed at evaluating and further optimizing the psychometric characteristics of the ped-mTNS is needed.
10

Verhoelst, T., J. Granville, F. Hendrick, U. Köhler, C. Lerot, J. P. Pommereau, A. Redondas, M. Van Roozendael, and J. C. Lambert. "Metrology of ground-based satellite validation: co-location mismatch and smoothing issues of total ozone comparisons." Atmospheric Measurement Techniques Discussions 8, no. 8 (August 4, 2015): 8023–82. http://dx.doi.org/10.5194/amtd-8-8023-2015.

Abstract:
Comparisons with ground-based correlative measurements constitute a key component in the validation of satellite data on atmospheric composition. The error budget of these comparisons contains not only the measurement uncertainties but also several terms related to differences in sampling and smoothing of the inhomogeneous and variable atmospheric field. A versatile system for Observing System Simulation Experiments (OSSEs), named OSSSMOSE, is used here to quantify these terms. Based on the application of pragmatic observation operators onto high-resolution atmospheric fields, it allows a simulation of each individual measurement, and consequently also of the differences to be expected from spatial and temporal field variations between both measurements making up a comparison pair. As a topical case study, the system is used to evaluate the error budget of total ozone column (TOC) comparisons between, on the one hand, GOME-type direct fitting (GODFITv3) satellite retrievals from GOME/ERS2, SCIAMACHY/Envisat, and GOME-2/MetOp-A, and, on the other hand, direct-sun and zenith-sky reference measurements such as those from Dobson, Brewer, and zenith-scattered light (ZSL-)DOAS instruments, respectively. In particular, the focus is placed on the GODFITv3 reprocessed GOME-2A data record vs. the ground-based instruments contributing to the Network for the Detection of Atmospheric Composition Change (NDACC). The simulations are found to reproduce the actual measurements almost to within the measurement uncertainties, confirming that the OSSE approach and its technical implementation are appropriate. This work reveals that many features of the comparison spread and median difference can be understood as due to metrological differences, even when using strict co-location criteria. In particular, sampling difference errors exceed measurement uncertainties regularly at most mid- and high-latitude stations, with values up to 10 % and more in extreme cases. Smoothing difference errors only play a role in the comparisons with ZSL-DOAS instruments at high latitudes, especially in the presence of a polar vortex. At tropical latitudes, where TOC variability is lower, both types of errors remain below about 1 % and consequently do not contribute significantly to the comparison error budget. The detailed analysis of the comparison results, now including the metrological errors, suggests that the published random measurement uncertainties for GODFITv3 reprocessed satellite data are potentially overestimated, and adjustments are proposed here. This successful application of the OSSSMOSE system, which closes the error budget of TOC comparisons for the first time, bodes well for potential future applications, which are briefly touched upon.
11

Cohn, Joseph V., Paul DiZio, and James R. Lackner. "Reaching During Virtual Rotation: Context Specific Compensations for Expected Coriolis Forces." Journal of Neurophysiology 83, no. 6 (June 1, 2000): 3230–40. http://dx.doi.org/10.1152/jn.2000.83.6.3230.

Abstract:
Subjects who are in an enclosed chamber rotating at constant velocity feel physically stationary but make errors when pointing to targets. Reaching paths and endpoints are deviated in the direction of the transient inertial Coriolis forces generated by their arm movements. By contrast, reaching movements made during natural, voluntary torso rotation seem to be accurate, and subjects are unaware of the Coriolis forces generated by their movements. This pattern suggests that the motor plan for reaching movements uses a representation of body motion to prepare compensations for impending self-generated accelerative loads on the arm. If so, stationary subjects who are experiencing illusory self-rotation should make reaching errors when pointing to a target. These errors should be in the direction opposite the Coriolis accelerations their arm movements would generate if they were actually rotating. To determine whether such compensations exist, we had subjects in four experiments make visually open-loop reaches to targets while they were experiencing compelling illusory self-rotation and displacement induced by rotation of a complex, natural visual scene. The paths and endpoints of their initial reaching movements were significantly displaced leftward during counterclockwise illusory rotary displacement and rightward during clockwise illusory self-displacement. Subjects reached in a curvilinear path to the wrong place. These reaching errors were opposite in direction to the Coriolis forces that would have been generated by their arm movements during actual torso rotation. The magnitude of path curvature and endpoint errors increased as the speed of illusory self-rotation increased. In successive reaches, movement paths became straighter and endpoints more accurate despite the absence of visual error feedback or tactile feedback about target location. When subjects were again presented a stationary scene, their initial reaches were indistinguishable from pre-exposure baseline, indicating a total absence of aftereffects. These experiments demonstrate that the nervous system automatically compensates in a context-specific fashion for the Coriolis forces associated with reaching movements.
12

Chen, Chung Ho. "Dodge-Romig LTPD Single Sampling Plan under Quality Investment and Inspection Error." Applied Mechanics and Materials 284-287 (January 2013): 3591–96. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.3591.

Abstract:
In this study, the author proposes an economic design of quality investment for a Dodge–Romig single sampling inspection plan with inspection error. The optimal sampling inspection plan and quality investment level are jointly determined by minimizing the expected total cost of the product under the specified consumer's risk. Finally, a comparison of the solutions for the models with and without inspection error is provided for illustration.
13

Helmy, Naeder, Mai Lan Dao Trong, and Stefanie P. Kühnel. "Accuracy of Patient Specific Cutting Blocks in Total Knee Arthroplasty." BioMed Research International 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/562919.

Abstract:
Background. Long-term survival of total knee arthroplasty (TKA) is mainly determined by optimal positioning of the components and prosthesis alignment. Implant positioning can be optimized by computer-assisted surgery (CAS). Patient-specific cutting blocks (PSCBs) seem to have the potential to improve component alignment compared to the conventional technique and to be comparable to CAS. Methods. 113 knees were selected for PSCB use and included in this study. The pre- and postoperative mechanical axis, represented by the hip–knee angle (HKA), the proximal tibial angle (PTA), the distal femoral angle (DFA), and the tibial slope (TS), were measured, and the deviation from the expected ideal values was calculated. Results. With a margin of error of ±3°, success rates were 81.4% for the HKA, 92.0% for the PTA, and 94.7% for the DFA. With the margin of error for alignments extended to ±4°, we obtained a success rate of 92.9% for the HKA, 98.2% for the PTA, and 99.1% for the DFA. The TS showed postoperative results of 2.86 ± 2.02° (mean change 1.76 ± 2.85°). Conclusion. PSCBs for TKA seem to restore the overall leg alignment. Our data suggest that each individual component can be implanted accurately and that the results are comparable to those achieved with CAS.
14

van Beesten, E. Ruben, and Ward Romeijnders. "Convex approximations for two-stage mixed-integer mean-risk recourse models with conditional value-at-risk." Mathematical Programming 181, no. 2 (September 9, 2019): 473–507. http://dx.doi.org/10.1007/s10107-019-01428-6.

Abstract:
In traditional two-stage mixed-integer recourse models, the expected value of the total costs is minimized. In order to address risk-averse attitudes of decision makers, we consider a weighted mean-risk objective instead. Conditional value-at-risk is used as our risk measure. Integrality conditions on decision variables make the model non-convex and hence hard to solve. To tackle this problem, we derive convex approximation models and corresponding error bounds, which depend on the total variations of the density functions of the random right-hand-side variables in the model. We show that the error bounds converge to zero if these total variations go to zero. In addition, for the special cases of totally unimodular and simple integer recourse models, we derive sharper error bounds.
15

Kalinowski, Steven T. "Genetic polymorphism and mixed-stock fisheries analysis." Canadian Journal of Fisheries and Aquatic Sciences 61, no. 7 (July 1, 2004): 1075–82. http://dx.doi.org/10.1139/f04-060.

Abstract:
Genetic data can be used to estimate the stock composition of mixed-stock fisheries. Designing efficient strategies for estimating mixture proportions is important, but several aspects of study design remain poorly understood, particularly the relationship between genetic polymorphism and estimation error. In this study, computer simulation was used to investigate how the following variables affect expected squared error of mixture estimates: the number of loci examined, the number of alleles at those loci, and the size of baseline data sets. This work showed that (i) loci with more alleles produced estimates of stock proportions that had a lower expected squared error than less polymorphic loci, (ii) highly polymorphic loci did not require larger samples than less polymorphic loci, and (iii) the total number of independent alleles examined is a reasonable indicator of the quality of estimates of stock proportions.
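The paper's central quantity, the expected squared error of a mixture-proportion estimate as a function of polymorphism, can be illustrated with a stripped-down simulation: one locus, two stocks, baselines known exactly, and maximum-likelihood estimation on a grid. Everything below (Dirichlet baselines, sample sizes, true proportion) is an assumed toy setup, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)

def expected_squared_error(n_alleles, n_mix=200, n_rep=500, p_true=0.3):
    """Expected squared error of the estimated proportion of stock 1 in a
    two-stock mixture, for one locus with n_alleles alleles. Baseline allele
    frequencies are drawn from a flat Dirichlet; the proportion is estimated
    by maximum likelihood on a grid."""
    grid = np.linspace(0.001, 0.999, 999)
    errs = []
    for _ in range(n_rep):
        f1, f2 = rng.dirichlet(np.ones(n_alleles), size=2)
        counts = rng.multinomial(n_mix, p_true * f1 + (1 - p_true) * f2)
        # log-likelihood of each candidate proportion on the grid
        loglik = counts @ np.log(np.outer(f1, grid) + np.outer(f2, 1 - grid))
        errs.append((grid[np.argmax(loglik)] - p_true) ** 2)
    return np.mean(errs)

# More alleles per locus should drive the expected squared error down
for k in (2, 4, 8, 16):
    print(k, round(expected_squared_error(k), 5))
```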
16

Manene, M. M. "Step-Wise Group Screening Designs with Unequal A-Priori Probabilities and Errors in Observations." Sultan Qaboos University Journal for Science [SQUJS] 8, no. 2 (June 1, 2003): 153. http://dx.doi.org/10.24200/squjs.vol8iss2pp153-165.

Abstract:
The performance of step-wise group screening with unequal a-priori probabilities is considered in terms of the expected number of runs and the expected maximum number of incorrect decisions. A method of obtaining optimal step-wise designs with unequal a-priori probabilities is presented for the case in which the direction of each defective factor is assumed to be known a priori and observations are subject to error. An appropriate cost function is introduced, and the value of the group size that minimizes the expected total cost is obtained.
17

Director, Hannah M., Adrian E. Raftery, and Cecilia M. Bitz. "Improved Sea Ice Forecasting through Spatiotemporal Bias Correction." Journal of Climate 30, no. 23 (December 2017): 9493–510. http://dx.doi.org/10.1175/jcli-d-17-0185.1.

Abstract:
A new method, called contour shifting, is proposed for correcting the bias in forecasts of contours, such as the boundary of the region where sea ice concentration exceeds a given threshold. Retrospective comparisons of observations and dynamical model forecasts are used to build a statistical spatiotemporal model of how predicted contours typically differ from observed contours. Forecasted contours from a dynamical model are then adjusted to correct for expected errors in their location. The statistical model changes over time to reflect the changing error patterns that result from reducing sea ice cover in the satellite era in both models and observations. For an evaluation period from 2001 to 2013, these bias-corrected forecasts are on average more accurate than the unadjusted dynamical model forecasts for all forecast months in the year at four different lead times. The total area that is incorrectly categorized as containing sea ice or not is reduced by 3.3 × 10⁵ km² (or 21.3%) on average. The root-mean-square error of forecasts of total sea ice area is also reduced for all lead times.
18

Cho, Sung Ho, and Eun Soo Lee. "A Development of 3-Dimensional Coordinates Monitoring System for Underground Pipeline Using IMU Sensor." Applied Mechanics and Materials 204-208 (October 2012): 2749–52. http://dx.doi.org/10.4028/www.scientific.net/amm.204-208.2749.

Abstract:
A three-dimensional coordinate monitoring system for underground pipelines using an IMU (inertial measurement unit) sensing technique was developed. Three-dimensional coordinates obtained from the developed system were compared with those obtained using total stations and levels. In this comparison, the maximum errors of the horizontal and vertical positions were 7 cm and 14 cm, respectively. In our country, the tolerance for underground utility surveying is ±30 cm. Therefore, the developed system is expected to be useful for underground pipeline location surveying.
19

Klipp, Telmo dos Santos, Adriano Petry, Jonas Rodrigues de Souza, Eurico Rodrigues de Paula, Gabriel Sandim Falcão, and Haroldo Fraga de Campos Velho. "Ionosonde total electron content evaluation using International Global Navigation Satellite System Service data." Annales Geophysicae 38, no. 2 (March 18, 2020): 347–57. http://dx.doi.org/10.5194/angeo-38-347-2020.

Abstract:
In this work, a period of 2 years (2016–2017) of ionospheric total electron content (ITEC) from ionosondes operating in Brazil is compared to International GNSS (Global Navigation Satellite System) Service (IGS) vertical total electron content (vTEC) data. Sounding instruments from the National Institute for Space Research (INPE) provided the ionograms used, which were filtered based on confidence score (CS) and C-Level flag evaluation. Differences between vTEC from IGS maps and ionosonde TEC were accumulated in terms of root mean squared error (RMSE). As expected, we noticed that the ITEC values provided by ionosondes are systematically underestimated, which is attributed to a limitation in the electron density modeling for the ionogram topside: a fixed scale height is assumed, which makes density values decay too rapidly above ∼800 km, whereas IGS takes into account electron density from GNSS stations up to the satellite network orbits. The topside density profiles covering the plasmasphere were re-modeled using two different approaches: an optimization of the adapted α-Chapman exponential decay, which includes a transition function between the F2 layer and the plasmasphere, and a corrected version of the NeQuick topside formulation. The electron density integration height was extended to 20 000 km to compute TEC. Chapman parameters for the F2 layer were extracted from each ionogram, and the plasmaspheric scale height was set to 10 000 km. A criterion to optimize the proportionality coefficient used to calculate the plasmaspheric basis density was introduced in this work. The NeQuick variable scale height was calculated using empirical parameters determined with data from Swarm satellites. The mean RMSE for the whole period using the adapted α-Chapman optimization reached a minimum of 5.32 TECU, i.e., 23% lower than the initial ITEC errors, while for the NeQuick topside formulation the error was reduced by 27%.
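For context on the topside modeling, the α-Chapman profile mentioned here has the closed form Ne(h) = NmF2 · exp(½(1 − z − e^(−z))) with z = (h − hmF2)/H. The sketch below integrates such a profile up to the 20 000 km ceiling to obtain TEC. The fixed scale height and all parameter values are illustrative, and the paper's adapted version adds a transition function toward a plasmaspheric term that is not reproduced here.

```python
import numpy as np

def alpha_chapman(h, NmF2, hmF2, H):
    """Alpha-Chapman electron density profile [el/m^3], fixed scale height H."""
    z = (h - hmF2) / H
    return NmF2 * np.exp(0.5 * (1 - z - np.exp(-z)))

# Integrate the profile up to 20 000 km to get TEC (trapezoidal rule).
h = np.linspace(100e3, 20_000e3, 400_000)                   # altitude [m]
ne = alpha_chapman(h, NmF2=1e12, hmF2=300e3, H=60e3)
tec = np.sum(0.5 * (ne[1:] + ne[:-1]) * np.diff(h)) / 1e16  # 1 TECU = 1e16 el/m^2
print(f"TEC = {tec:.1f} TECU")  # roughly 25 TECU for these assumed values
```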
20

Kolaitis, Phokion G., Lucian Popa, and Kun Qian. "Knowledge Refinement via Rule Selection." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2886–94. http://dx.doi.org/10.1609/aaai.v33i01.33012886.

Abstract:
In several different applications, including data transformation and entity resolution, rules are used to capture aspects of knowledge about the application at hand. Often, a large set of such rules is generated automatically or semi-automatically, and the challenge is to refine the encapsulated knowledge by selecting a subset of rules based on the expected operational behavior of the rules on available data. In this paper, we carry out a systematic complexity-theoretic investigation of the following rule selection problem: given a set of rules specified by Horn formulas, and a pair of an input database and an output database, find a subset of the rules that minimizes the total error, that is, the number of false positive and false negative errors arising from the selected rules. We first establish computational hardness results for the decision problems underlying this minimization problem, as well as upper and lower bounds for its approximability. We then investigate a bi-objective optimization version of the rule selection problem in which both the total error and the size of the selected rules are taken into account. We show that testing for membership in the Pareto front of this bi-objective optimization problem is DP-complete. Finally, we show that a similar DP-completeness result holds for a bi-level optimization version of the rule selection problem, where one minimizes first the total error and then the size.
21

Parvin, Curtis A., and Ann M. Gronowski. "Effect of analytical run length on quality-control (QC) performance and the QC planning process." Clinical Chemistry 43, no. 11 (November 1, 1997): 2149–54. http://dx.doi.org/10.1093/clinchem/43.11.2149.

Abstract:
The performance measure traditionally used in the quality-control (QC) planning process is the probability of rejecting an analytical run when an out-of-control error condition exists. A shortcoming of this performance measure is that it does not allow comparison of QC strategies that define analytical runs differently. Accommodating different analytical run definitions is straightforward if QC performance is measured in terms of the average number of patient samples to error detection, or the average number of patient samples containing an analytical error that exceeds total allowable error. Using these performance measures to investigate the impact of different analytical run definitions on QC performance demonstrates that, during routine QC monitoring, the length of the interval between QC tests can have a major influence on the expected number of unacceptable results produced during the existence of an out-of-control error condition.
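A minimal way to see why the QC-testing interval drives the expected number of affected patient samples: assume the out-of-control condition begins at a uniformly random point within an interval of `run_length` patient samples and that each subsequent QC event detects it with a fixed probability. The model and all numbers below are illustrative assumptions, not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_patients_to_detection(run_length, p_detect, n_sim=100_000):
    """Average number of patient samples reported between onset of an
    out-of-control condition and its detection, when QC is run once per
    `run_length` patient samples and each QC event detects the condition
    with probability `p_detect`."""
    onset = rng.uniform(0, run_length, n_sim)       # onset within an interval
    extra_qc = rng.geometric(p_detect, n_sim) - 1   # QC events that miss it
    return np.mean((run_length - onset) + extra_qc * run_length)

# Longer intervals between QC tests -> more potentially unacceptable results
for m in (20, 50, 100):
    print(m, round(mean_patients_to_detection(m, p_detect=0.5), 1))
```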
22

Birchall, J. "Reduction of the effects of transverse polarization in a measurement of parity violation in proton–proton scattering at 230 MeV." Canadian Journal of Physics 66, no. 6 (June 1, 1988): 530–33. http://dx.doi.org/10.1139/p88-088.

Abstract:
An overview is given of some of the sources of systematic error expected in a new measurement of parity violation in proton–proton scattering at 230 MeV. The experiment involves the measurement of an angular distribution of the longitudinal analyzing power, Az(θ), as well as of the more conventionally measured total analyzing power, Az.
23

Verma, Shreeya, Julia Marshall, Mark Parrington, Anna Agustí-Panareda, Sebastien Massart, Martyn P. Chipperfield, Christopher Wilson, and Christoph Gerbig. "Extending methane profiles from aircraft into the stratosphere for satellite total column validation using the ECMWF C-IFS and TOMCAT/SLIMCAT 3-D model." Atmospheric Chemistry and Physics 17, no. 11 (June 7, 2017): 6663–78. http://dx.doi.org/10.5194/acp-17-6663-2017.

Abstract:
Abstract. Airborne observations of greenhouse gases are a very useful reference for validation of satellite-based column-averaged dry air mole fraction data. However, since the aircraft data are available only up to about 9–13 km altitude, these profiles do not fully represent the depth of the atmosphere observed by satellites and therefore need to be extended synthetically into the stratosphere. In the near future, observations of CO2 and CH4 made from passenger aircraft are expected to be available through the In-Service Aircraft for a Global Observing System (IAGOS) project. In this study, we analyse three different data sources that are available for the stratospheric extension of aircraft profiles by comparing the error introduced by each of them into the total column and provide recommendations regarding the best approach. First, we analyse CH4 fields from two different models of atmospheric composition – the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System for Composition (C-IFS) and the TOMCAT/SLIMCAT 3-D chemical transport model. Secondly, we consider scenarios that simulate the effect of using CH4 climatologies such as those based on balloons or satellite limb soundings. Thirdly, we assess the impact of using a priori profiles used in the satellite retrievals for the stratospheric part of the total column. We find that the models considered in this study have a better estimation of the stratospheric CH4 as compared to the climatology-based data and the satellite a priori profiles. Both the C-IFS and TOMCAT models have a bias of about −9 ppb at the locations where tropospheric vertical profiles will be measured by IAGOS. The C-IFS model, however, has a lower random error (6.5 ppb) than TOMCAT (12.8 ppb). These values are well within the minimum desired accuracy and precision of satellite total column XCH4 retrievals (10 and 34 ppb, respectively). In comparison, the a priori profile from the University of Leicester Greenhouse Gases Observing Satellite (GOSAT) Proxy XCH4 retrieval and climatology-based data introduce larger random errors in the total column, being limited in spatial coverage and temporal variability. Furthermore, we find that the bias in the models varies with latitude and season. Therefore, applying appropriate bias correction to the model fields before using them for profile extension is expected to further decrease the error contributed by the stratospheric part of the profile to the total column.
24

Berend, Daniel, Shlomi Dolev, and Ariel Hanemann. "Graph Degree Sequence Solely Determines the Expected Hopfield Network Pattern Stability." Neural Computation 27, no. 1 (January 2015): 202–10. http://dx.doi.org/10.1162/neco_a_00685.

Abstract:
We analyze the effect of network topology on the pattern stability of the Hopfield neural network in the case of general graphs. The patterns are randomly selected from a uniform distribution. We start the Hopfield procedure from some pattern v. An error in an entry e of v is the situation where, if the procedure is started at e, the value of e flips. Such an entry is an instability point. Note that we disregard the value at e by the end of the procedure, as well as what happens if we start the procedure from another pattern v′ or another entry e′ of v. We measure the instability of the system by the expected total number of instability points of all the patterns. Our main result is that the instability of the system does not depend on the exact topology of the underlying graph, but rather only on its degree sequence. Moreover, for a large number of nodes, the instability can be approximated by a closed-form expression involving Φ, the standard normal distribution function, and the degrees d₁, …, dₙ of the nodes.
25

Maggioni, Viviana, Mathew R. P. Sapiano, Robert F. Adler, Yudong Tian, and George J. Huffman. "An Error Model for Uncertainty Quantification in High-Time-Resolution Precipitation Products." Journal of Hydrometeorology 15, no. 3 (June 1, 2014): 1274–92. http://dx.doi.org/10.1175/jhm-d-13-0112.1.

Abstract:
Abstract This study proposes a new framework, Precipitation Uncertainties for Satellite Hydrology (PUSH), to provide time-varying, global estimates of errors for high-time-resolution, multisatellite precipitation products using a technique calibrated with high-quality validation data. Errors are estimated for the widely used Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product at daily/0.25° resolution, using the NOAA Climate Prediction Center (CPC) Unified gauge dataset as the benchmark. PUSH estimates the probability distribution of reference precipitation given the satellite observation, from which the error can be computed as the difference (or ratio) between the satellite product and the estimated reference. The framework proposes different modeling approaches for each combination of rain and no-rain cases: correct no-precipitation detection (both satellite and gauges measure no precipitation), missed precipitation (satellite records a zero, but the gauges detect precipitation), false alarm (satellite detects precipitation, but the reference is zero), and hit (both satellite and gauges detect precipitation). Each case is explored and explicitly modeled to create a unified approach that combines all four scenarios. Results show that the estimated probability distributions are able to reproduce the probability density functions of the benchmark precipitation, in terms of both expected values and quantiles of the distribution. The spatial pattern of the error is also adequately reproduced by PUSH, and good agreement between observed and estimated errors is observed. The model is also able to capture missed precipitation and false detection uncertainties, whose contribution to the total error can be significant. The resulting error estimates could be attached to the corresponding high-resolution satellite precipitation products.
26

Lim, K. B., and H. L. Pardue. "Error-compensating kinetic method for enzymatic determination of DNAs." Clinical Chemistry 39, no. 9 (September 1, 1993): 1850–56. http://dx.doi.org/10.1093/clinchem/39.9.1850.

Abstract:
Abstract We describe the adaptation and evaluation of an error-compensating method for kinetic determinations of deoxyribonucleic acids (DNAs). The DNA is first reacted with ethidium bromide to produce a fluorescent intercalation complex. Subsequent treatment of the complex with DNase catalyzes hydrolysis of the DNA, causing a time-dependent decrease in fluorescence, which is monitored. A model for two-component parallel first-order processes is fit to the decay curve to predict the total change in fluorescence expected if the process were monitored to equilibrium. The predicted change in fluorescence response varies linearly with DNA concentration with an intercept corresponding to 0.13 mg/L DNA. Results by the predictive method are 47-, 58-, and 250-fold less dependent on DNase activity, temperature, and ethidium bromide concentration, respectively, than are results for an initial-rate method utilizing the same data. Moreover, the predictive method yields a significantly wider linear range than the initial-rate method, and is much less affected by blank fluorescence and RNA interference than is an equilibrium method based on the reaction of DNA with ethidium bromide alone.
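The predictive step described here, fitting a two-component parallel first-order model to a decay curve and extrapolating the total fluorescence change expected at equilibrium, can be sketched with a standard nonlinear least-squares fit. The model parameterization, synthetic data, and all values below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, dF1, k1, dF2, k2, F_inf):
    """Fluorescence vs. time for two parallel first-order decay processes."""
    return F_inf + dF1 * np.exp(-k1 * t) + dF2 * np.exp(-k2 * t)

# Synthetic decay curve (made-up amplitudes, rate constants, and noise)
t = np.linspace(0, 300, 120)                       # seconds
y = decay(t, 40.0, 0.05, 25.0, 0.006, 10.0) \
    + np.random.default_rng(4).normal(0, 0.3, t.size)

popt, _ = curve_fit(decay, t, y, p0=(30, 0.02, 30, 0.002, 5))
dF_total = popt[0] + popt[2]   # predicted total change, dF1 + dF2
print(f"predicted total fluorescence change: {dF_total:.1f}")
```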
27

Prager, Michael H., and Alec D. MacCall. "Sensitivities and Variances of Virtual Population Analysis As Applied to the Mackerel, Scomber japonicus." Canadian Journal of Fisheries and Aquatic Sciences 45, no. 3 (March 1, 1988): 539–47. http://dx.doi.org/10.1139/f88-063.

Abstract:
Virtual population analysis (VPA) is widely used in fish stock assessment. However, VPA results are generally presented as point estimates, without error variance. Using numerical methods, we estimated the total variance of historical (1929–65) biomass estimates of mackerel, Scomber japonicus, off southern California. In the years before 1940, coefficients of variation (CV's) approached 100%; later, when weights at age and the age structure of the catch were better known, the CV's were about 25%. Most of the variability derives from uncertainties in estimates of natural mortality (M) and of weights at age. We also developed dimensionless coefficients (sensitivities) to examine the effects of errors in the inputs on the VPA biomass estimates. The largest sensitivities were to M and the total catch and varied substantially from year to year. As expected, sensitivity to M decreased with increasing exploitation, and sensitivity to catch increased with increasing exploitation. Using such sensitivities, one could estimate the error in a biomass estimate for a past year when M (or any other input) was thought to be unusually high or low. Thus, retrospective corrections can be made. Also, such sensitivities form an analytic tool for examining the properties of VPA, or any quantitative model.
28

Forrester, Mathias B. "Pattern of oseltamivir ingestions reported to Texas poison centers." Human & Experimental Toxicology 29, no. 2 (December 16, 2009): 137–40. http://dx.doi.org/10.1177/0960327109357219.

Abstract:
During serious influenza outbreaks, the number of oseltamivir exposures reported to poison centers might be expected to increase. This investigation describes the pattern of oseltamivir ingestions reported to Texas poison centers during 2000–2008. Of 298 total ingestions, 91.9% occurred in December–March, 76.8% involved patients aged 0–19 years, 72.5% resulted from therapeutic error, 90.0% were managed on-site, and 80.0% had no effect. The most frequently reported adverse clinical effects were vomiting (7.5%), nausea (3.8%), and abdominal pain (3.8%). Oseltamivir ingestions were reported to Texas poison centers primarily during periods of influenza outbreak. Most involved children, resulted from therapeutic error, and were managed on-site without serious outcome.
29

Kay, Melissa C., Emily W. Duffy, Lisa J. Harnack, Andrea S. Anater, Joel C. Hampton, Alison L. Eldridge, and Mary Story. "Development and Application of a Total Diet Quality Index for Toddlers." Nutrients 13, no. 6 (June 5, 2021): 1943. http://dx.doi.org/10.3390/nu13061943.

Abstract:
For the first time, the 2020–2025 Dietary Guidelines for Americans include recommendations for infants and toddlers under 2 years old. We aimed to create a diet quality index based on a scoring system for ages 12 to 23.9 months, the Toddler Diet Quality Index (DQI), and evaluate its construct validity using 24 h dietary recall data collected from a national sample of children from the Feeding Infants and Toddlers Study (FITS) 2016. The mean (standard error) Toddler DQI was 49 (0.6) out of 100 possible points, indicating room for improvement. Toddlers under-consumed seafood, greens and beans, and plant proteins and over-consumed refined grains and added sugars. Toddler DQI scores were higher among children who were ever breastfed, lived in households with higher incomes, and who were Hispanic. The Toddler DQI performed as expected and offers a measurement tool to assess the dietary quality of young children in accordance with federal nutrition guidelines. This is important for providing guidance that can be used to inform public health nutrition policies, programs, and practices to improve diets of young children.
30

Eissa, Fathy H., Shuo-Jye Wu, and Hamid H. Ahmed. "Estimation of the Parameters and Expected Test Time of Exponentiated Weibull Lifetimes Under Type II Progressive Censoring Scheme With Random Removals." International Journal of Statistics and Probability 8, no. 2 (February 11, 2019): 124. http://dx.doi.org/10.5539/ijsp.v8n2p124.

Abstract:
Based on a progressive type-II censored sample with random removals, point and interval estimation for the shape parameters of the exponentiated Weibull distribution is discussed. Computational formulas for the expected total test time are derived for different sampling-plan situations, which is useful in planning a life-test experiment. The efficiencies of the estimators are compared in terms of the root mean square error, the variance, and the coverage probability of the corresponding confidence intervals. A simulation study is presented for several values of the removal probability and different values of the failure percentage. Numerical applications are also conducted to illustrate and compare the usefulness of the different sampling plans in terms of expected test times for different patterns of failure rates.
31

Ruiz-Arias, J. A., J. Dudhia, C. A. Gueymard, and D. Pozo-Vázquez. "Assessment of the Level-3 MODIS daily aerosol optical depth in the context of surface solar radiation and numerical weather modeling." Atmospheric Chemistry and Physics 13, no. 2 (January 18, 2013): 675–92. http://dx.doi.org/10.5194/acp-13-675-2013.

Abstract:
Abstract. The daily Level-3 MODIS aerosol optical depth (AOD) product is a global daily spatial aggregation of the Level-2 MODIS AOD (10-km spatial resolution) into a regular grid with a resolution of 1° × 1°. It offers interesting characteristics for surface solar radiation and numerical weather modeling applications. However, most of the validation efforts so far have focused on Level-2 products and only rarely on Level 3. In this contribution, we compare the Level-3 Collection 5.1 MODIS AOD dataset from the Terra satellite available since 2000 against observed daily AOD values at 550 nm from more than 500 AERONET ground stations around the globe. Overall, the mean error of the dataset is 0.03 (17%, relative to the mean ground-observed AOD), with a root mean square error of 0.14 (73%, relative to the same), but these errors are also found highly dependent on geographical region. We propose new functions for the expected error of the Level-3 AOD, as well as for both its mean error and its standard deviation. Additionally, we investigate the role of pixel count vis-à-vis the reliability of the AOD estimates, and also explore to what extent the spatial aggregation from Level 2 to Level 3 influences the total uncertainty in the Level-3 AOD. Finally, we use a radiative transfer model to investigate how the Level-3 AOD uncertainty propagates into the calculated direct normal and global horizontal irradiances.
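Expected-error functions for MODIS AOD are conventionally written as an envelope of the form ±(a + b·AOD), and validation typically reports the fraction of retrievals falling inside it. The sketch below uses the commonly quoted over-land Level-2 coefficients (0.05, 0.15) purely as stand-ins; the paper fits its own functions for the Level-3 product, which are not reproduced here.

```python
import numpy as np

def within_expected_error(aod_sat, aod_ground, a=0.05, b=0.15):
    """Fraction of retrievals inside an expected-error envelope of the
    common MODIS form +/-(a + b*AOD_ground)."""
    aod_sat, aod_ground = np.asarray(aod_sat), np.asarray(aod_ground)
    envelope = a + b * aod_ground
    return np.mean(np.abs(aod_sat - aod_ground) <= envelope)

# Illustrative satellite vs. AERONET values at 550 nm
print(within_expected_error([0.21, 0.35, 0.08], [0.18, 0.30, 0.12]))
```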
32

Sellitto, P., G. Dufour, M. Eremenko, J. Cuesta, P. Dauphin, G. Forêt, B. Gaubert, M. Beekmann, V. H. Peuch, and J. M. Flaud. "Analysis of the potential of one possible instrumental configuration of the next generation of IASI instruments to monitor lower tropospheric ozone." Atmospheric Measurement Techniques 6, no. 3 (March 8, 2013): 621–35. http://dx.doi.org/10.5194/amt-6-621-2013.

Abstract:
Abstract. To evaluate the added value brought by the next generation of IASI (Infrared Atmospheric Sounder Interferometer) instruments to monitor lower tropospheric (LT) ozone, we developed a pseudo-observation simulator, including a direct simulator of thermal infrared spectra and a full inversion scheme to retrieve ozone concentration profiles. We based our simulations on the instrumental configuration of IASI and of an IASI-like instrument, with a factor 2 improvement in terms of spectral resolution and radiometric noise. This scenario, that will be referred to as IASI/2, is one possible configuration of the IASI-NG (New Generation) instrument (the configuration called IASI-NG/IRS2) currently designed by CNES (Centre National d'Études Spatiales). IASI-NG is expected to be launched in the 2020 timeframe as part of the EPS-SG (EUMETSAT Polar System-Second Generation, formerly post-EPS) mission. We produced one month (August 2009) of tropospheric ozone pseudo-observations based on these two instrumental configurations. We compared the pseudo-observations and we found a clear improvement of LT ozone (up to 6 km altitude) pseudo-observations quality for IASI/2. The estimated total error is expected to be more than 35% smaller at 5 km, and 20% smaller for the LT ozone column. The total error on the LT ozone column is, on average, lower than 10% for IASI/2. IASI/2 is expected to have a significantly better vertical sensitivity (monthly average degrees of freedom surface–6 km of 0.70) and to be sensitive at lower altitudes (more than 0.5 km lower than IASI, reaching nearly 3 km). Vertical ozone layers of 4 to 5 km thickness are expected to be resolved by IASI/2, while IASI has a vertical resolution of 6–8 km. According to our analyses, IASI/2 is expected to have the possibility of effectively separate lower from upper tropospheric ozone information even for low sensitivity scenarios. In addition, IASI/2 is expected to be able to better monitor LT ozone patterns at local spatial scale and to monitor abrupt temporal evolutions occurring at timescales of a few days, thus bringing an expected added value with respect to IASI for the monitoring of air quality.
33

Lau, F. L., and J. Sejvar. "Computerized Dose Estimates for Maintenance." Journal of Engineering for Gas Turbines and Power 110, no. 4 (October 1, 1988): 666–69. http://dx.doi.org/10.1115/1.3240189.

Abstract:
The minimization and control of radiation exposure usually start with the determination of the total expected dose. Manual determination of an accurate value can require extensive, error-prone data manipulation. By computerizing the model for this determination, the individual task and total dose values can be identified much more rapidly and accurately than by manual methods. Also, high-dose operations can be easily evaluated for ways to reduce exposure. The microcomputer also allows instantaneous updating of estimates as changes are made, both in the planning stage and in the performance of the job.
34

Cesaroni, Claudio, Luca Spogli, and Giorgiana De Franceschi. "IONORING: Real-Time Monitoring of the Total Electron Content over Italy." Remote Sensing 13, no. 16 (August 19, 2021): 3290. http://dx.doi.org/10.3390/rs13163290.

Abstract:
IONORING (IONOspheric RING) is a tool capable of providing real-time monitoring and modeling of the ionospheric total electron content (TEC) over Italy, in the latitudinal and longitudinal ranges of 35°N–48°N and 5°E–20°E, respectively. IONORING exploits the Global Navigation Satellite System (GNSS) data acquired by the RING (Rete Integrata Nazionale GNSS) network, managed by the Istituto Nazionale di Geofisica e Vulcanologia (INGV). The system provides real-time TEC maps with a very fine spatial resolution (0.1° latitude × 0.1° longitude), a refresh time of 10 min, and a typical latency below one minute. The TEC values estimated at the ionospheric piercing points of about 40 RING stations, equally distributed over the Italian territory, are interpolated using locally weighted scatterplot smoothing (LOWESS). Validation is performed by comparing the IONORING TEC maps (in real time) with independent products: (i) the Global Ionospheric Maps (GIM, final product) provided by the International GNSS Service (IGS), and (ii) the European TEC maps from the Royal Observatory of Belgium. The validation results are satisfactory, with a root mean square error (RMSE) between 2 and 3 TECu for both comparisons. The potential of IONORING in depicting the daily and seasonal TEC variations is analyzed over 3 years, from May 2017 to April 2020, as well as its capability to account for the effect of the disturbed geospace on the mid-latitude ionosphere. The IONORING response to the X9.3 flare event of September 2017 highlights a sudden TEC increase over Italy of about 20%, with a small, expected dependence on latitude, i.e., on the distance from the subsolar point. Subsequent large regional TEC variations were observed in response to the related follow-on geomagnetic storms. This storm is also used as a case event to demonstrate the potential of IONORING in improving the accuracy of GNSS single point positioning. When data are processed in kinematic mode with the Klobuchar model providing the ionospheric correction, the resulting horizontal positioning error is 4.3 m, lowering to 3.84 m when GIM maps are used. If IONORING maps are used as the reference ionosphere, the error is as low as 2.5 m. Real-time applications and services in which IONORING is currently integrated are also described in the concluding remarks.
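The interpolation step named here, locally weighted regression onto a fine grid, can be sketched as a local weighted linear fit at each grid node. The Gaussian kernel, bandwidth, coarser 0.5° demo grid, and synthetic piercing-point data below are all assumptions for illustration; IONORING's actual implementation details are not given in the abstract.

```python
import numpy as np

def lowess_grid(lat, lon, tec, grid_lat, grid_lon, bandwidth=2.0):
    """Locally weighted linear regression (LOWESS-style) of vertical TEC
    estimates onto a regular grid; Gaussian kernel, bandwidth in degrees."""
    out = np.empty((grid_lat.size, grid_lon.size))
    for i, la in enumerate(grid_lat):
        for j, lo in enumerate(grid_lon):
            d2 = ((lat - la) ** 2 + (lon - lo) ** 2) / bandwidth ** 2
            sw = np.exp(-0.25 * d2)          # sqrt of Gaussian weights (WLS)
            X = np.column_stack([np.ones_like(lat), lat - la, lon - lo])
            beta, *_ = np.linalg.lstsq(X * sw[:, None], tec * sw, rcond=None)
            out[i, j] = beta[0]              # local fit value at the node
    return out

# Synthetic piercing-point observations over Italy (illustrative values)
rng = np.random.default_rng(5)
lat, lon = rng.uniform(35, 48, 300), rng.uniform(5, 20, 300)
tec = 20 + 0.5 * (48 - lat) + rng.normal(0, 1, 300)   # TECu, N-S gradient
grid = lowess_grid(lat, lon, tec, np.arange(35, 48.01, 0.5),
                   np.arange(5, 20.01, 0.5))
print(grid.shape)
```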
APA, Harvard, Vancouver, ISO, and other styles
35

Jamri, M. Saifuzam, Muhammad Nizam Kamarudin, and Mohd Luqman Mohd Jamil. "Total power deficiency estimation of isolated power system network using full-state observer method." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 3 (September 1, 2021): 1249. http://dx.doi.org/10.11591/ijeecs.v23.i3.pp1249-1257.

Full text
Abstract:
An isolated electrical network with an independent local distributed generator is very sensitive to imbalances between load demand and supply. Although such a network has little structural complexity, its stability is crucial because of its stand-alone operating condition. The total power deficit in the network gives important information about the dynamic frequency response, which may directly affect the system's stability. In this paper, an approach to estimating the total power deficiency of an isolated electrical network using the Luenberger observer method is presented. Although the power deficit is not a state variable in the network's mathematical model, the estimation problem becomes feasible by introducing a new variable through an additional dummy system. The simulation was carried out in the MATLAB/Simulink environment, and the designed estimator was verified using a variety of load demand changes. The results show that the estimated signal successfully tracked the expected actual signal with minimal error.
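
The dummy-system trick the abstract describes can be sketched in a few lines: augment a one-state swing-equation model with a constant power-deficit state (whose dynamics are zero) and estimate it with a Luenberger observer. The model, parameters and observer poles below are illustrative assumptions, not the paper's network.

```python
# Luenberger observer with an augmented (dummy) disturbance state:
# x = [frequency deviation, power deficit], only frequency is measured.
import numpy as np
from scipy.signal import place_poles

M, D = 10.0, 1.0                      # inertia and damping (p.u.), assumed
A = np.array([[-D / M, -1.0 / M],     # swing equation driven by the deficit
              [0.0,     0.0]])        # dummy dynamics: deficit is constant
C = np.array([[1.0, 0.0]])

L = place_poles(A.T, C.T, [-2.0, -2.5]).gain_matrix.T  # observer gain

dt, T = 0.01, 10.0
x = np.array([0.0, 0.5])              # true state: 0.5 p.u. deficit
xh = np.zeros(2)                      # observer starts with no knowledge
for _ in range(int(T / dt)):
    y = C @ x
    x = x + dt * (A @ x)
    xh = xh + dt * (A @ xh + (L @ (y - C @ xh)).ravel())
print(f"estimated deficit: {xh[1]:.3f} (true 0.500)")
```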
APA, Harvard, Vancouver, ISO, and other styles
36

McMahon, Samuel E., Paul Magill, Daniel P. Bopf, and David E. Beverland. "A device to make the pelvic sagittal plane horizontal and reduce error in cup inclination during total hip arthroplasty: a validation study." HIP International 28, no. 5 (September 2018): 473–77. http://dx.doi.org/10.1177/1120700017752615.

Full text
Abstract:
Introduction: Radiological inclination (RI) is determined in part by operative inclination (OI), which is defined as the angle between the cup axis or handle and the sagittal plane. In lateral decubitus, the theatre floor becomes a surrogate for the pelvic sagittal plane. Critically, at the time of cup insertion, if the pelvic sagittal plane is not parallel to the floor, either because the upper hemi-pelvis is internally rotated or adducted, RI can be much greater than expected. We have developed a simple Pelvic Orientation Device (POD) to help achieve a horizontal pelvic sagittal plane. Methods: A model representing the posterior aspect of the pelvis was created. This permitted known movement in 2 planes to simulate internal rotation and adduction of the upper hemi-pelvis, with 15 known pre-set positions. Twenty participants tested the POD in 5 random, blinded position combinations, providing 200 readings. The accuracy was measured by subtracting each reading from the known value. Results: Two statistical outliers were identified and removed from analysis. The mean adduction error was 0.73°. For internal rotation, the mean error was −0.03°. Accuracy within 2.0° was achieved in 176 of 190 (93%) readings. The maximum error was 3.6° for internal rotation and 3.1° for adduction. Conclusion: In a model pelvis the POD provided an accurate and reproducible method of achieving a horizontal sagittal plane. Applied clinically, this simple tool has the potential to reduce the high values of RI sometimes seen following THA in lateral decubitus.
APA, Harvard, Vancouver, ISO, and other styles
37

Hansen, Michael J., Louise Chavarie, Andrew M. Muir, Kimberly L. Howland, and Charles C. Krueger. "Variation in Fork-to-Total Length Relationships of North American Lake Trout Populations." Journal of Fish and Wildlife Management 11, no. 1 (February 17, 2020): 263–72. http://dx.doi.org/10.3996/102019-jfwm-096.

Full text
Abstract:
Length of fish species with forked tails, such as the Lake Trout Salvelinus namaycush, can be measured as total (TL), fork (FL), or standard (SL) length, although individual studies of such species often rely on only one measurement, which hinders comparisons among studies. To determine if variation in the relationship between FL and TL among Lake Trout populations affected estimates of FL from TL, we compared length relationships within Lake Trout populations sampled in multiple years, among multiple locations within lakes, among lakes, and from all samples from across the species' range. Samples were from across the geographic range of the species and a wide range of lake sizes (1.31–82,100 km²) to represent the full range of variation in abiotic and biotic variables expected to influence the FL:TL relationship. The functional relationship for estimating FL (mm) from TL (mm) was FL = 0.91 × TL − 8.28, and that for TL from FL was TL = 1.09 × FL + 9.05. Error induced by length conversion was less when using a length relationship from a different year in the same lake than from a different area in the same lake or from a different lake. Estimation error was lowest when using an overall length conversion from across the species' range, which suggests the overall relationship could be used whenever a more accurate length conversion is not available for a population of interest. Our findings should be useful in providing a standardized model for converting FL to TL (and TL to FL) for Lake Trout, such as when comparing published findings that use different measurement units, when agencies or institutions change sampling methods over time, or in programs that use different sampling methods among areas.
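
The two range-wide relationships quoted in the abstract can be wrapped directly as conversion helpers, as in the minimal sketch below. Note that the two regressions were fitted separately and are not exact algebraic inverses, which the round-trip in the example makes visible.

```python
# The abstract's range-wide functional relationships as helpers.
def fl_from_tl(tl_mm: float) -> float:
    """Estimate fork length (mm) from total length (mm)."""
    return 0.91 * tl_mm - 8.28

def tl_from_fl(fl_mm: float) -> float:
    """Estimate total length (mm) from fork length (mm)."""
    return 1.09 * fl_mm + 9.05

# Example: a 500 mm TL Lake Trout. The back-conversion lands near,
# but not exactly at, 500 mm because the two fits are independent.
fl = fl_from_tl(500.0)
print(f"FL = {fl:.1f} mm, back-converted TL = {tl_from_fl(fl):.1f} mm")
```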
APA, Harvard, Vancouver, ISO, and other styles
38

Nikitovic, Vladimir. "Demographic future of Serbia from a different angle." Stanovnistvo 51, no. 2 (2013): 53–81. http://dx.doi.org/10.2298/stnv1302053n.

Full text
Abstract:
Based on an assessment of the empirical errors in the official population forecasts of Serbia, the paper shows why forecast users might want a change of the current official concept. The article consists of three parts. The first gives a brief chronological overview of the methods and hypotheses in the official population forecasts of Serbia during the last 60 years. The second quantifies past forecast errors in projecting the total fertility rate, life expectancy at birth and total population, aiming at an assessment of the empirical variability. The third part presents a probabilistic population forecast of Serbia based on Bayesian hierarchical models of the vital components, as implemented in the 2012 revision of the United Nations World Population Prospects. The empirical error served as an evaluation tool for the probabilistic distributions of total population. In spite of the increased availability and quality of input data and the development of advanced projection techniques over the period, no obvious improvement has been noted to date in either the accuracy of the official population forecasts of Serbia or their expression of the uncertainty inherent in forecasting. In general, fertility has been overestimated while improvements in mortality have been underestimated. It has been shown that accuracy largely depends on the stability of demographic processes throughout the projection horizon, which confirms findings from similar studies in other countries. The uncertainty in demographic trends remains a major challenge for forecasters. A typical judgment that the smallest error will be made if a recently observed trend is assumed to continue has been linked to the low fertility variant in past Serbian forecasts. The target level of the medium fertility variant, interpreted as the "most likely" outcome, was firmly bound to replacement fertility until recently, thus reflecting a desirable rather than a realistic future. Therefore, the reversal in the trend of the total population of Serbia came as a surprise to the forecasters, or at least much earlier than expected. The probabilistic population forecast of Serbia provides results that users can clearly understand and use along with attached information on error magnitude. Before running the projection model, it was necessary to adjust the 2012 UN estimates for Serbia to suit the current official estimates and recent relevant studies on demographic trends in the country. The comparison of the probabilistic hypotheses and results with the current official projection aims to highlight the key benefits of the new approach in terms of reduced subjectivity, improved accuracy and quantified uncertainty. The latter could be particularly relevant for decision makers, allowing them to calculate the expected costs involved in wrong decisions. From the perspective of the forecast based on the "UN model", the strong optimism of the current official projection appears to be groundless. Besides, the empirical evaluation of the probabilistic distributions of total population suggests that the model fully reflects the pattern of observed uncertainty in past forecasts of the Serbian population.
APA, Harvard, Vancouver, ISO, and other styles
39

Lawrence, J. P., R. J. Leigh, and P. S. Monks. "The impact of surface reflectance variability on total column differential absorption LiDAR measurements of atmospheric CO2." Atmospheric Measurement Techniques Discussions 3, no. 1 (January 11, 2010): 147–84. http://dx.doi.org/10.5194/amtd-3-147-2010.

Full text
Abstract:
The remote sensing technique of total column differential absorption LiDAR (TC-DIAL) has been proposed in a number of feasibility studies as a suitable method for making total column measurements of atmospheric CO2 from space. Among the sources of error associated with TC-DIAL retrievals from space is an undefined modulation of the received signals resulting from the variability of the Earth's surface reflectance between the LiDAR pulses. This source of uncertainty is investigated from a satellite perspective by applying a computer model of spaceborne TC-DIAL instruments. The simulations are carried out over Europe and South America using modified MODIS surface reflectance maps and a DIAL configuration similar to that suggested for the proposed ESA A-SCOPE mission. A positive bias of 0.01 ppmv in both continental test sets is observed using a 10 Hz pulse repetition frequency and a 200 km integration distance. This bias is a consequence of non-linearity in the DIAL equation, and in particular regions such as the Alps and over certain coastlines it contributes to positive errors of between 0.05 and 0.16 ppmv for 200 and 50 km integration distances. These retrieval errors are lower-bound estimates owing to the likely resolution difference between the surface reflectance data and the expected surface heterogeneity observed by a DIAL instrument.
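
The non-linearity mentioned in the abstract can be demonstrated with a toy calculation: the DIAL retrieval takes a logarithm of pulse-energy ratios, so pulse-to-pulse reflectance variability biases the retrieved optical depth even when the received energies themselves are unbiased. The sketch below is a deliberately simplified illustration; the 10% reflectance variability and the two averaging orders are assumptions, not the paper's simulator.

```python
# Toy demonstration of the DIAL-equation non-linearity: the retrieval
# 0.5 * ln(E_off / E_on) is biased when surface reflectance varies
# between the on- and off-line pulses.
import numpy as np

rng = np.random.default_rng(1)
tau_true = 0.5                    # true one-way differential optical depth
n_pairs = 100_000

rho_off = rng.uniform(0.1, 0.5, n_pairs)           # off-line footprint
rho_on = rho_off * rng.normal(1.0, 0.1, n_pairs)   # 10% pulse-to-pulse change

e_off = rho_off                                    # received energy ~ reflectance
e_on = rho_on * np.exp(-2.0 * tau_true)

tau_a = 0.5 * np.mean(np.log(e_off / e_on))        # average of log ratios
tau_b = 0.5 * np.log(np.mean(e_off / e_on))        # log of averaged ratios
print(f"bias, mean of logs: {tau_a - tau_true:+.4f}")  # small positive bias
print(f"bias, log of means: {tau_b - tau_true:+.4f}")  # roughly twice as large
```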
APA, Harvard, Vancouver, ISO, and other styles
40

Mustafa, Mohammad Z., Ashraf A. Khan, Harry Bennett, Andrew J. Tatham, and Mark Wright. "Accuracy of biometric formulae in hypermetropic patients undergoing cataract surgery." European Journal of Ophthalmology 29, no. 5 (October 1, 2018): 510–15. http://dx.doi.org/10.1177/1120672118803509.

Full text
Abstract:
Purpose: To audit and analyse the accuracy of current biometric formulae for refractive outcomes following cataract surgery in patients with axial length less than 22 mm. Methods: A total of 84 eyes from 84 patients with axial length <22 mm were identified retrospectively from consecutive patients undergoing cataract surgery at a single university hospital. All subjects had biometry using the IOLMaster (Carl Zeiss Meditec, Inc., Dublin, CA, USA) and a Sensar AR40 intraocular lens implant (Abbott Medical Optics, CA, USA). One eye from each patient was randomly selected for inclusion. Prediction errors were calculated by comparing the expected refraction from optimized formulae (SRK/T, Hoffer Q, Haigis and Holladay 1) to the postoperative refraction. A national survey of ophthalmologists was conducted to ascertain biometric formula preference for small eyes. Results: The mean axial length was 21.00 ± 0.55 mm. The mean error was greatest for Hoffer Q at −0.57 dioptres. There was no significant difference in mean absolute error between formulae. SRK/T achieved the highest percentage of outcomes within 0.5 dioptres (45.2%) and 1 dioptre (76.2%) of target. A shallower anterior chamber depth was associated with a higher mean absolute error for SRK/T (p = 0.028), Hoffer Q (p = 0.003) and Haigis (p = 0.016) but not Holladay 1 (p = 0.111). Conclusion: SRK/T had the highest proportion of patients achieving refractive results close to predicted outcomes. However, there was a significant association between a shallower anterior chamber depth and a higher mean absolute error for all formulae except Holladay 1. This suggests that anterior chamber depth should be considered alongside axial length when counselling patients about refractive outcome.
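
The audit arithmetic is simple enough to sketch: prediction error is achieved minus predicted refraction, summarized as mean error, mean absolute error, and the share of eyes within 0.5 and 1.0 dioptres of target. The refraction values below are invented for illustration.

```python
# Sketch of the audit arithmetic with made-up refraction data (dioptres).
import numpy as np

predicted = np.array([-0.50, -0.25, 0.00, -1.00, -0.75])
achieved = np.array([-1.20, -0.10, 0.40, -1.30, -0.60])

err = achieved - predicted
print(f"mean error: {err.mean():+.2f} D")             # systematic bias
print(f"mean absolute error: {np.abs(err).mean():.2f} D")
print(f"within 0.5 D: {np.mean(np.abs(err) <= 0.5):.0%}")
print(f"within 1.0 D: {np.mean(np.abs(err) <= 1.0):.0%}")
```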
APA, Harvard, Vancouver, ISO, and other styles
41

Nugraha, Dimas Pramita, and Inayah Inayah. "Gambaran Farmakoterapi Pasien Common Cold Di Puskesmas Pekanbaru." Jurnal Ilmu Kedokteran 10, no. 1 (December 29, 2017): 63. http://dx.doi.org/10.26891/jik.v10i1.2016.63-66.

Full text
Abstract:
The common cold is still the disease with the largest number of cases in Indonesia and in Riau province among outpatients visiting primary health centers (Puskesmas). However, in primary health care, such as Puskesmas and private practice, pharmacotherapy of the common cold is suspected to be not rational, and medication error is a common problem. The purpose of this study was to determine the pattern of pharmacotherapy in patients with the common cold in Puskesmas Pekanbaru. This was a descriptive observational study with a total sample of 4,602 people who met the specified criteria. The results showed that 70.2% of common cold patients used symptomatic analgesic-antipyretic drugs. However, the percentage of patients given antibiotics for the common cold was still quite high (36%), as was the use of corticosteroids (17.9%), which indicates medication error. The pattern of common cold pharmacotherapy in Puskesmas Pekanbaru was relatively good but needs improvement.
APA, Harvard, Vancouver, ISO, and other styles
42

Pandey, Aakriti, Arun Kaushik, Sanjay K. Singh, and Umesh Singh. "Statistical Analysis for Generalized Progressive Hybrid Censored Data from Lindley Distribution under Step-Stress Partially Accelerated Life Test Model." Austrian Journal of Statistics 50, no. 1 (February 3, 2021): 105–20. http://dx.doi.org/10.17713/ajs.v50i1.1004.

Full text
Abstract:
The aim of this paper is to present the estimation procedure for the step-stress partially accelerated life test model under the generalized progressive hybrid censoring scheme. The lifetimes are assumed to be governed by a Lindley distribution. The problem of point and interval estimation of the parameters, as well as of the acceleration factor, using the maximum likelihood approach for the step-stress partially accelerated life test model has been considered. A simulation study is conducted to monitor the performance of the estimators on the basis of the mean squared error under the considered censoring scheme. The expected total time of the test under an accelerated condition is computed to examine the effects of the parameters on the duration of the test. In addition, a graph of the expected total time of the test under accelerated and un-accelerated conditions is provided to highlight the effect due to acceleration. One real data set has been analysed for illustrative purposes.
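
As a building block for the paper's setting, the sketch below simulates complete (uncensored) Lindley samples via the distribution's exponential/gamma mixture representation and recovers the parameter with the closed-form maximum likelihood estimator. The censoring scheme and acceleration step of the paper are not reproduced.

```python
# Simulate Lindley(theta) lifetimes and recover theta by ML.
# Lindley is a mixture: Exp(theta) w.p. theta/(1+theta), else Gamma(2, theta).
import numpy as np

rng = np.random.default_rng(2)
theta, n = 1.5, 2000

is_exp = rng.random(n) < theta / (1.0 + theta)
x = np.where(is_exp,
             rng.exponential(1.0 / theta, n),
             rng.gamma(2.0, 1.0 / theta, n))

# Closed-form MLE: positive root of xbar*t^2 + (xbar - 1)*t - 2 = 0.
xbar = x.mean()
theta_hat = (-(xbar - 1.0) + np.sqrt((xbar - 1.0) ** 2 + 8.0 * xbar)) / (2.0 * xbar)
print(f"theta_hat = {theta_hat:.3f} (true {theta})")
```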
APA, Harvard, Vancouver, ISO, and other styles
43

Sonnewald, Maike, Carl Wunsch, and Patrick Heimbach. "Linear Predictability: A Sea Surface Height Case Study." Journal of Climate 31, no. 7 (April 2018): 2599–611. http://dx.doi.org/10.1175/jcli-d-17-0142.1.

Full text
Abstract:
A benchmark of the linear predictability of sea surface height (SSH) globally is presented, complementing more complicated studies of SSH predictability. Twenty years of the Estimating the Circulation and Climate of the Ocean (ECCOv4) state estimate (1992–2011) are used, fitting autoregressive moving average [ARMA(p, q)] models whose orders are chosen by the Akaike information criterion (AIC). Up to 50% of the ocean SSH variability is dominated by the seasonal signal. The variance accounted for by the nonseasonal SSH is particularly distinct in the Southern and Pacific Oceans, containing >95% of the total SSH variance, and the expected prediction error takes a few months of growth to reach a threshold of 1 cm. Isolated regions take 12 months or more to cross an accuracy threshold of 1 cm. Including the trend significantly increases the time taken to reach the threshold, particularly in the South Pacific. With annual averaging, the expected prediction error takes a few years of growth to reach a threshold of 1 cm. Including the trend mainly increases the time taken to reach the threshold, but the time series is short and noisy.
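
The order-selection step can be sketched with standard tools: fit candidate ARMA(p, q) models and keep the one with the lowest AIC. The snippet below uses a synthetic AR(2) series and statsmodels; the order grid is an arbitrary choice, not the paper's.

```python
# AIC-based ARMA(p, q) order selection on a synthetic series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):                      # true AR(2) for illustration
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t]

best = None
for p in range(4):
    for q in range(4):
        if p == q == 0:
            continue
        res = ARIMA(y, order=(p, 0, q)).fit()
        if best is None or res.aic < best[0]:
            best = (res.aic, p, q)
print(f"AIC-selected order: ARMA{best[1:]}, AIC = {best[0]:.1f}")
```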
APA, Harvard, Vancouver, ISO, and other styles
44

Sultana, Mahmuda, Md Sazzad Hossain, Iffat Ara, and Jobaida Sultana. "Medical Errors and Patient Safety Education: Views of Intern Doctors." Bangladesh Medical Research Council Bulletin 44, no. 2 (November 22, 2018): 82–88. http://dx.doi.org/10.3329/bmrcb.v44i2.38701.

Full text
Abstract:
Medical errors and patient safety have become increasingly important areas of medical research in recent years. The World Health Organization and other international committees have long recommended the early integration of education about errors and patient safety into undergraduate and graduate medical education. To integrate patient safety education into the existing curriculum, the views of doctors towards patient safety education are an important consideration. This descriptive cross-sectional study was carried out to explore the views of intern doctors regarding medical error and patient safety education in undergraduate medical education in Bangladesh. The study was carried out in seven (three public and four private) medical colleges of Bangladesh over the period from July 2014 to June 2015. The study population was 400 intern doctors. Data were collected by a self-administered structured questionnaire. The existing curriculum was also reviewed to identify patient safety issues. The study revealed that the topics of medical error and patient safety were mostly neglected in the curriculum, but the intern doctors had a positive attitude towards patient safety education. A total of 84.8% of the intern doctors, with a high average score of 4.24, agreed that teaching students about patient safety should be a priority in medical student training, while 87.8% agreed that learning about patient safety before graduation from medical college would produce more effective doctors. Among the respondents, 76.6% expected more training on patient safety. About half of the participants (52.3%) reported that they had been assigned to tasks for which they were not trained, or where medical errors could have happened easily (57.5%). From this study it can be concluded that there is a distinct need for more education and training in the field of medical error and patient safety among intern doctors.
APA, Harvard, Vancouver, ISO, and other styles
45

Kotivuori, Eetu, Matti Maltamo, Lauri Korhonen, Jacob L. Strunk, and Petteri Packalen. "Prediction error aggregation behaviour for remote sensing augmented forest inventory approaches." Forestry: An International Journal of Forest Research 94, no. 4 (March 24, 2021): 576–87. http://dx.doi.org/10.1093/forestry/cpab007.

Full text
Abstract:
In this study we investigated the behaviour of aggregate prediction errors in a forest inventory augmented with multispectral Airborne Laser Scanning and airborne imagery. We compared an Area-Based Approach (ABA), Edge-tree corrected ABA (EABA) and Individual Tree Detection (ITD). The study used 109 large 30 × 30 m sample plots, which were divided into four 15 × 15 m subplots. Four different levels of aggregation were examined: all four subplots (quartet), two diagonal subplots (diagonal), two edge-adjacent subplots (adjacent) and subplots without aggregation. We noted that the errors at aggregated levels depend on the selected predictor variables; therefore, this effect was studied by repeating the variable selection 200 times. At the subplot level, EABA provided the lowest mean of root mean square error ($\overline{\mathrm{RMSE}}$) values over the 200 repetitions for total stem volume (EABA 21.1 percent, ABA 23.5 percent, ITD 26.2 percent). EABA also fared best for diagonal and adjacent aggregation ($\overline{\mathrm{RMSE}}$: 17.6 percent, 17.4 percent), followed by ABA ($\overline{\mathrm{RMSE}}$: 19.3 percent, 18.2 percent) and ITD ($\overline{\mathrm{RMSE}}$: 21.8, 21.9 percent). Adjacent-subplot errors of ABA were less correlated than errors of diagonal subplots, which also resulted in clearly lower RMSEs for adjacent subplots. This appears to result from edge-tree effects, where omission and commission errors cancel for trees leaning from one subplot into the other. The best aggregate performance was achieved at the quartet level, as expected from fundamental properties of variance. ABA and EABA had similar RMSEs at the quartet level ($\overline{\mathrm{RMSE}}$ 15.5 and 15.3 percent), with poorer ITD performance ($\overline{\mathrm{RMSE}}$ 19.4 percent).
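
The "fundamental properties of variance" argument is easy to make concrete: the variance of a two-subplot total is var1 + var2 + 2*cov, so weaker error correlation yields a smaller aggregated RMSE. A minimal Monte Carlo check, with made-up variances and correlations:

```python
# RMSE of summed subplot errors as a function of error correlation rho.
import numpy as np

rng = np.random.default_rng(4)
n, sigma = 100_000, 1.0

def paired_rmse(rho):
    """RMSE of the summed errors of two subplots with correlation rho."""
    cov = [[sigma**2, rho * sigma**2], [rho * sigma**2, sigma**2]]
    e = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.sqrt(np.mean(e.sum(axis=1) ** 2))

for rho in (0.0, 0.3, 0.6):
    print(f"rho = {rho:.1f}: aggregated RMSE = {paired_rmse(rho):.3f}")
# Analytic check: sqrt(2 * sigma**2 * (1 + rho)).
```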
APA, Harvard, Vancouver, ISO, and other styles
46

Tjhai, Chandra, and Kyle O’Keefe. "Using Step Size and Lower Limb Segment Orientation from Multiple Low-Cost Wearable Inertial/Magnetic Sensors for Pedestrian Navigation." Sensors 19, no. 14 (July 17, 2019): 3140. http://dx.doi.org/10.3390/s19143140.

Full text
Abstract:
This paper demonstrates the use of multiple low-cost inertial/magnetic sensors as a pedestrian navigation system for indoor positioning. This research looks at the problem of pedestrian navigation in a practical manner by investigating dead-reckoning methods using low-cost sensors. This work uses the estimated sensor orientation angles to compute the step size from the kinematics of a skeletal model. The orientations of limbs are represented by the tilt angles estimated from the inertial measurements, especially the pitch angle. In addition, different step size estimation methods are compared. A sensor data logging system is developed in order to record all motion data from every limb segment using a single platform and similar types of sensors. A skeletal model of five segments is chosen to model the forward kinematics of the lower limbs. A treadmill walk experiment with an optical motion capture system is conducted for algorithm evaluation. The mean error of the estimated orientation angles of the limbs is less than 6 degrees. The results show that the step length mean error is 3.2 cm, the left stride length mean error is 12.5 cm, and the right stride length mean error is 9 cm. The expected positioning error is less than 5% of the total distance travelled.
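
A simplified planar version of the skeletal-model step computation can be sketched from segment pitch angles alone: each leg's horizontal hip-to-ankle reach is the sum of segment length × sin(pitch). Segment lengths and angles below are assumptions, and the paper's five-segment model and sensor fusion are not reproduced.

```python
# Planar forward-kinematics sketch: step size from thigh and shank
# pitch angles of the leading and trailing legs.
import numpy as np

L_THIGH, L_SHANK = 0.45, 0.43       # segment lengths (m), assumed

def horizontal_reach(thigh_pitch_deg, shank_pitch_deg):
    """Horizontal hip-to-ankle distance for one leg (m)."""
    t, s = np.radians([thigh_pitch_deg, shank_pitch_deg])
    return L_THIGH * np.sin(t) + L_SHANK * np.sin(s)

# Leading leg pitched forward, trailing leg pitched backward.
step = horizontal_reach(20.0, 10.0) + horizontal_reach(15.0, 25.0)
print(f"estimated step size: {step:.2f} m")
```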
APA, Harvard, Vancouver, ISO, and other styles
47

Bédard, Joël, Jean-François Caron, Mark Buehner, Seung-Jong Baek, and Luc Fillion. "Hybrid Background Error Covariances for a Limited-Area Deterministic Weather Prediction System." Weather and Forecasting 35, no. 3 (May 6, 2020): 1051–66. http://dx.doi.org/10.1175/waf-d-19-0069.1.

Full text
Abstract:
This study introduces an experimental regional assimilation configuration for a 4D ensemble–variational (4D-EnVar) deterministic weather prediction system. A total of 16 assimilation experiments covering July 2014 are presented to assess both experimental regional climatological background error covariances and updates in the treatment of flow-dependent error covariances. The regional climatological background error covariances are estimated using statistical correlations between variables instead of using balance operators. These error covariance estimates allow the analyses to fit the assimilated observations more closely than when using the lower-resolution global background error covariances (due to shorter correlation scales), and the ensuing forecasts are significantly improved. The use of ensemble-based background error covariances is also improved by reducing the vertical and horizontal localization length scales for the flow-dependent background error covariance component. Also, reducing the number of ensemble members employed in the deterministic analysis (from 256 to 128) cut computational costs by half without degrading the accuracy of analyses and forecasts. The impact of the relative contributions of the climatological and flow-dependent background error covariance components is also examined. Results show that the experimental regional system benefits from giving a lower (higher) weight to the climatological (flow-dependent) error covariances. When compared with the operational assimilation configuration of the continental prediction system, the proposed modifications to the background error covariances improve both surface and upper-air RMSE scores by nearly 1%. Still, the use of a higher-resolution ensemble to estimate flow-dependent background error covariances does not yet provide added value, although it is expected to allow for a better use of dense observations in the future.
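
The hybrid combination itself can be sketched compactly: a weighted sum of a climatological covariance and an ensemble covariance localized by a Schur (element-wise) product. The matrices, weights and Gaussian localization below are placeholders, not the operational settings.

```python
# Hybrid background error covariance: weighted climatological B plus a
# localized ensemble B (Schur product with a localization matrix).
import numpy as np

rng = np.random.default_rng(5)
n, n_ens = 50, 20
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

B_clim = np.exp(-(dist / 10.0) ** 2)            # stand-in climatological B

X = rng.normal(size=(n, n_ens))                 # ensemble perturbations
Xp = X - X.mean(axis=1, keepdims=True)
B_ens = (Xp @ Xp.T) / (n_ens - 1)               # raw (noisy) ensemble B

loc = np.exp(-(dist / 5.0) ** 2)                # localization matrix
w_clim, w_ens = 0.25, 0.75                      # lower climatological weight
B_hybrid = w_clim * B_clim + w_ens * (loc * B_ens)
print(f"hybrid B is symmetric: {np.allclose(B_hybrid, B_hybrid.T)}")
```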
APA, Harvard, Vancouver, ISO, and other styles
48

Ruiz-Arias, J. A., J. Dudhia, C. A. Gueymard, and D. Pozo-Vázquez. "Assessment of the Level-3 MODIS daily aerosol optical depth in the context of surface solar radiation and numerical weather modeling." Atmospheric Chemistry and Physics Discussions 12, no. 9 (September 7, 2012): 23219–60. http://dx.doi.org/10.5194/acpd-12-23219-2012.

Full text
Abstract:
The Level-3 MODIS aerosol optical depth (AOD) product offers interesting features for surface solar radiation and numerical weather modeling applications. Remarkably, the Collection 5.1 dataset extends over more than a decade and provides daily values of AOD over a global regular grid of 1° × 1° spatial resolution. However, most of the validation efforts so far have focused on Level-2 products (10 km, at original resolution) and only rarely on Level-3 (at the aggregated spatial resolution of 1° × 1°). In this contribution, we compare the Level-3 Collection 5.1 MODIS AOD dataset available since 2000 against observed daily AOD values at 550 nm from more than 500 AERONET ground stations around the globe. One aim of this study is to check the suitability of this MODIS dataset for surface shortwave solar radiation calculations using numerical weather models. Overall, the mean error of the dataset is 0.03 (17%, relative to the mean ground-observed AOD), with a root mean square error of 0.14 (73%, relative to the same), although these values are highly dependent on geographical region. For AOD values below about 0.3, the expected error is found to be very similar to that of the Level-2 product. However, for larger AOD values, higher errors are found. Consequently, we propose new functions for the expected error of the Level-3 AOD, as well as for both its mean error and its standard deviation. Additionally, we investigate the role of pixel count vis-à-vis the reliability of the AOD estimates. Our results show that a higher pixel count does not necessarily translate into a more reliable AOD estimate. Therefore, we recommend verifying this assumption in the dataset at hand if the pixel count is meant to be used. We also explore to what extent the spatial aggregation from Level-2 to Level-3 influences the total uncertainty in the Level-3 AOD; in particular, we find that roughly half of the error might be attributable to Level-3 AOD sub-pixel variability. Finally, we use a radiative transfer model to investigate how the Level-3 AOD uncertainty propagates into the calculated direct normal (DNI) and global horizontal (GHI) irradiances. Overall, results indicate that, for Level-3 AODs smaller than 0.5, the induced uncertainty in DNI due to the AOD uncertainty alone is below 15% on average, and below 5% for GHI (for a solar zenith angle of 30°). However, the uncertainty in AOD is highly spatially variable, and so is that in irradiance.
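
A typical expected-error check can be sketched as follows, using the widely cited MODIS Level-2 over-land envelope EE = ±(0.05 + 0.15·AOD) rather than the new Level-3 functions proposed in the paper (which the abstract does not state); the matchup data are synthetic.

```python
# Fraction of synthetic MODIS/AERONET matchups inside the standard
# Level-2 over-land expected-error envelope +/-(0.05 + 0.15*AOD).
import numpy as np

rng = np.random.default_rng(6)
aeronet = rng.gamma(2.0, 0.1, 5000)                  # "truth" AOD at 550 nm
modis = aeronet + rng.normal(0.03, 0.05 + 0.5 * aeronet**2, 5000)

ee = 0.05 + 0.15 * aeronet
within = np.abs(modis - aeronet) <= ee
print(f"fraction within EE envelope: {within.mean():.2f}")

# Errors grow with AOD, mirroring the paper's finding for Level-3.
hi = aeronet > 0.3
print(f"within EE, AOD <= 0.3: {within[~hi].mean():.2f}")
print(f"within EE, AOD  > 0.3: {within[hi].mean():.2f}")
```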
APA, Harvard, Vancouver, ISO, and other styles
49

Bookbinder, M. J., and K. J. Panosian. "Correct and incorrect estimation of within-day and between-day variation." Clinical Chemistry 32, no. 9 (September 1, 1986): 1734–37. http://dx.doi.org/10.1093/clinchem/32.9.1734.

Full text
Abstract:
Between-day variance is an ambiguous term representing either total variance or pure between-day variance. In either case, it is often incorrectly calculated even though analysis of variance (ANOVA) and other excellent methods of estimation are available. We used statistical theory to predict the magnitude of error expected from using several intuitive approaches to the estimation of variance components. We also evaluated the impact of estimating the total population variance instead of pure between-day variance and the impact of using biased estimators. We found that estimates of variance components could be systematically biased by several hundred percent. On the basis of these results, we make recommendations to remove these biases and to standardize precision estimates.
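
The unbiased route the abstract alludes to is the one-way random-effects ANOVA decomposition: pure between-day variance is (MSB − MSW)/n and total variance is its sum with the within-day component. A minimal sketch with simulated replicates (the true component values are assumptions of the simulation):

```python
# One-way random-effects ANOVA variance components for a balanced design.
import numpy as np

rng = np.random.default_rng(7)
days, n = 20, 5                                   # 20 days, 5 replicates/day
sd_between, sd_within = 2.0, 3.0
x = 100 + rng.normal(0, sd_between, (days, 1)) + rng.normal(0, sd_within, (days, n))

day_means = x.mean(axis=1)
msw = x.var(axis=1, ddof=1).mean()                # mean square within
msb = n * day_means.var(ddof=1)                   # mean square between
var_between = max((msb - msw) / n, 0.0)           # pure between-day variance
print(f"within-day variance:  {msw:.2f} (true {sd_within**2})")
print(f"between-day variance: {var_between:.2f} (true {sd_between**2})")
print(f"total variance:       {msw + var_between:.2f}")
```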
APA, Harvard, Vancouver, ISO, and other styles
50

Vishwakarma, Pradeep Kumar, Arun Kaushik, Aakriti Pandey, Umesh Singh, and Sanjay Kumar Singh. "Bayesian Estimation for Inverse Weibull Distribution Under Progressive Type-II Censored Data With Beta-Binomial Removals." Austrian Journal of Statistics 47, no. 1 (January 30, 2018): 77–94. http://dx.doi.org/10.17713/ajs.v47i1.578.

Full text
Abstract:
This paper deals with the estimation procedure for the inverse Weibull distribution under progressive type-II censored samples when removals follow a beta-binomial probability law. To estimate the unknown parameters, the maximum likelihood and Bayes estimators are obtained under the progressive censoring scheme mentioned above. Bayes estimates are obtained using the Markov chain Monte Carlo (MCMC) technique under the squared error loss function and compared with the corresponding MLEs. Further, the expected total time on test is obtained under the considered censoring scheme. Finally, a real data set has been analysed to check the validity of the study.
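
The sampling scheme is straightforward to sketch: draw inverse Weibull lifetimes, then at each observed failure withdraw a beta-binomially distributed number of survivors. The parameterization F(x) = exp(−λx^−β) and the numbers below are assumptions for illustration; the paper's MLE and MCMC estimation steps are not reproduced.

```python
# Generate one progressive type-II censored sample from an inverse
# Weibull distribution with beta-binomial removals.
import numpy as np

rng = np.random.default_rng(8)
beta_shape, lam = 2.0, 1.0        # inverse Weibull: F(x) = exp(-lam * x**-beta)
n, m = 30, 10                     # units on test, failures to observe
a, b = 2.0, 3.0                   # beta-binomial removal parameters

# Inverse-transform sampling of inverse Weibull lifetimes.
lifetimes = (lam / -np.log(rng.random(n))) ** (1.0 / beta_shape)
alive = list(np.sort(lifetimes))

failures, removals = [], []
for i in range(m):
    failures.append(alive.pop(0))             # next observed failure
    max_r = len(alive) - (m - 1 - i)          # keep enough units for m failures
    p = rng.beta(a, b)                        # beta-binomial removal count
    r = rng.binomial(max_r, p) if max_r > 0 else 0
    for _ in range(r):                        # withdraw r survivors at random
        alive.pop(rng.integers(len(alive)))
    removals.append(r)

print("failure times:", np.round(failures, 3))
print("removals R_i: ", removals)
```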
APA, Harvard, Vancouver, ISO, and other styles