Academic literature on the topic 'Total Expected Error'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Total Expected Error.'


Journal articles on the topic "Total Expected Error"

1

Aldelgawy, Mohammed. "Evaluation of Cadastral Work Done Using Total Station Instrument." Academic Perspective Procedia 1, no. 1 (November 9, 2018): 115–29. http://dx.doi.org/10.33793/acperpro.01.01.24.

Abstract:
The total station has become the main tool in most engineering work; accordingly, evaluating this work has gained significant importance. A methodology for evaluating the precision of cadastral work done using a total station is presented here. The technique is based on propagation of the random errors of the quantities measured by the total station, i.e., distance and both horizontal and vertical angles. Random error in distance is produced by the EDM unit integrated into the total station, whereas random errors in the horizontal and vertical angles are produced by the integrated theodolite unit. Moreover, the backsight process conducted in the field introduces an additional random error in horizontal angles. This research studies how the above errors affect the resulting rectangular coordinates measured by the total station for each observed point. Experiments were done using both simulated and real datasets. Results showed that the calculated errors were close to the expected errors and did not exceed the allowable ones.
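The propagation the abstract describes can be sketched as a first-order (Jacobian) variance propagation from the measured slope distance and angles to the rectangular coordinates. A minimal sketch, assuming a zenith-angle convention and invented instrument specifications (the paper's exact formulation may differ):

```python
import numpy as np

def point_error(d, H, V, sd, sH, sV):
    """First-order propagation of random errors in slope distance d (m),
    horizontal angle H and zenith angle V (rad) into the rectangular
    coordinates x = d sinV sinH, y = d sinV cosH, z = d cosV.
    sd, sH, sV are the standard deviations of d, H, V."""
    J = np.array([
        [np.sin(V)*np.sin(H),  d*np.sin(V)*np.cos(H),  d*np.cos(V)*np.sin(H)],
        [np.sin(V)*np.cos(H), -d*np.sin(V)*np.sin(H),  d*np.cos(V)*np.cos(H)],
        [np.cos(V),            0.0,                   -d*np.sin(V)],
    ])                                      # Jacobian d(x,y,z)/d(d,H,V)
    S = np.diag([sd**2, sH**2, sV**2])      # observations assumed uncorrelated
    C = J @ S @ J.T                         # coordinate covariance matrix
    return np.sqrt(np.diag(C))              # sigma_x, sigma_y, sigma_z

# hypothetical specs: 2 mm EDM noise, 5-arc-second angles, 100 m shot
arc = np.deg2rad(5/3600)
print(point_error(100.0, np.deg2rad(40), np.deg2rad(85), 0.002, arc, arc))
```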
2

Karon, Brad S., James C. Boyd, and George G. Klee. "Glucose Meter Performance Criteria for Tight Glycemic Control Estimated by Simulation Modeling." Clinical Chemistry 56, no. 7 (July 1, 2010): 1091–97. http://dx.doi.org/10.1373/clinchem.2010.145367.

Abstract:
Background: Glucose meter analytical performance criteria required for safe and effective management of patients on tight glycemic control (TGC) are not currently defined. We used simulation modeling to relate glucose meter performance characteristics to insulin dosing errors during TGC. Methods: We used 29 920 glucose values from patients on TGC at 1 institution to represent the expected distribution of glucose values during TGC, and we used 2 different simulation models to relate glucose meter analytical performance to insulin dosing error using these 29 920 initial glucose values and assuming 10%, 15%, or 20% total allowable error (TEa) criteria. Results: One-category insulin dosing errors were common under all error conditions. Two-category insulin dosing errors occurred more frequently when either 20% or 15% TEa was assumed compared with 10% total error. Dosing errors of 3 or more categories, those most likely to result in hypoglycemia and thus patient harm, occurred infrequently under all error conditions with the exception of 20% TEa. Conclusions: Glucose meter technologies that operate within a 15% total allowable error tolerance are unlikely to produce large (≥3-category) insulin dosing errors during TGC. Increasing performance to 10% TEa should reduce the frequency of 2-category insulin dosing errors, although additional studies are necessary to determine the clinical impact of such errors during TGC. Current criteria that allow 20% total allowable error in glucose meters may not be optimal for patient management during TGC.
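The simulation idea can be reproduced in miniature with a Monte Carlo sketch. The dosing bands, the uniform multiplicative error model bounded by TEa, and the synthetic glucose distribution below are all assumptions standing in for the study's institutional TGC protocol and its 29 920 measured values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dosing bands (mg/dL); the study's actual TGC protocol differs.
bands = np.array([80, 110, 140, 180, 250])

def category(glucose):
    return np.digitize(glucose, bands)

def dosing_error_rates(true_glucose, tea, n_rep=100):
    """Simulate meter readings with total allowable error TEa (fraction),
    modeled here as uniform multiplicative error, and tally how often the
    dosing category differs from the truth by >= 1, 2, 3 categories."""
    truth = np.tile(true_glucose, n_rep)
    meter = truth * (1 + rng.uniform(-tea, tea, truth.size))
    jump = np.abs(category(meter) - category(truth))
    return [(jump >= k).mean() for k in (1, 2, 3)]

glucose = rng.normal(120, 35, 5000).clip(40, 400)   # stand-in glucose values
for tea in (0.10, 0.15, 0.20):
    print(tea, dosing_error_rates(glucose, tea))
```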
3

Rochon, Yves J., Peyman Rahnama, and Ian C. McDade. "Satellite Measurement of Stratospheric Winds and Ozone Using Doppler Michelson Interferometry. Part II: Retrieval Method and Expected Performance." Journal of Atmospheric and Oceanic Technology 23, no. 6 (June 1, 2006): 770–84. http://dx.doi.org/10.1175/jtech1882.1.

Abstract:
This paper is about the retrieval of horizontal wind and ozone number density from measurement simulations for the Stratospheric Wind Interferometer for Transport Studies (SWIFT). This instrument relies on the concept of imaging Doppler Michelson interferometry applied to thermal infrared emission originating from the stratosphere. The instrument and measurement simulations are described in detail in the first of this series of two papers. In this second paper, a summary of the measurement simulations and a data retrieval method suited to these measurements are first presented. The inversion method consists of the maximum a posteriori solution approach with added differential regularization and, when required, iterations performed with the Gauss–Newton method. Inversion characterization and an error analysis have been performed. Retrieval noise estimates have been obtained both from derived covariance matrices and sample inversions. Retrieval noise levels for wind and ozone number density of ∼1–3 m s⁻¹ and <1% have been obtained over the altitude range of 20–45 km with Backus–Gilbert resolving lengths of ∼1.5 km. Retrieval noise levels over the extended altitude range of 15–55 km are less than 10 m s⁻¹ and 2%. The sensitivity to other error sources has been examined through a few sample realizations. The contributions from these other errors can be as important as or more so than retrieval noise. An error budget identifying contributing wind and ozone error levels to total errors of 5 m s⁻¹ and 5% for altitudes of 20–45 km has been prepared relying on the retrieval errors and knowledge of the instrument design.
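The retrieval step named in the abstract, a maximum a posteriori solution iterated with Gauss-Newton, has a compact generic form worth sketching. This is the textbook optimal-estimation formulation; the paper's added differential regularization could be folded into the prior term Sa⁻¹ (for example as λLᵀL with a derivative operator L). All demo values are invented:

```python
import numpy as np

def map_retrieval(y, F, K, xa, Sa, Se, n_iter=10):
    """Gauss-Newton iteration of the maximum a posteriori solution
    x_{i+1} = xa + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(x_i) + K(x_i - xa)).
    F: forward model, K(x): its Jacobian, xa/Sa: prior state and covariance,
    Se: measurement-noise covariance. Returns the state and its
    retrieval-noise covariance (K^T Se^-1 K + Sa^-1)^-1."""
    Sa_inv, Se_inv = np.linalg.inv(Sa), np.linalg.inv(Se)
    x = xa.copy()
    for _ in range(n_iter):
        Ki = K(x)
        A = Ki.T @ Se_inv @ Ki + Sa_inv
        b = Ki.T @ Se_inv @ (y - F(x) + Ki @ (x - xa))
        x = xa + np.linalg.solve(A, b)
    S_hat = np.linalg.inv(K(x).T @ Se_inv @ K(x) + Sa_inv)
    return x, S_hat

# tiny linear demo: 2-element state retrieved from 3 noisy measurements
K0 = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
xa, Sa, Se = np.zeros(2), np.eye(2), 0.01 * np.eye(3)
y = K0 @ np.array([1.0, -0.5])
x_hat, S_hat = map_retrieval(y, lambda x: K0 @ x, lambda x: K0, xa, Sa, Se)
print(x_hat, np.sqrt(np.diag(S_hat)))   # state estimate and retrieval noise
```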
4

Wee, Nam-Sook. "Optimal Maintenance Schedules of Computer Software." Probability in the Engineering and Informational Sciences 4, no. 2 (April 1990): 243–55. http://dx.doi.org/10.1017/s026996480000156x.

Abstract:
We present a decision procedure to determine the optimal maintenance intervals of computer software throughout its operational phase. Our model accounts for the average cost per maintenance activity and the damage cost per failure, with future costs discounted. Our decision policy is optimal in the sense that it minimizes the expected total cost. Our model assumes that the total number of errors in the software has a Poisson distribution with known mean λ and that each error causes failures independently of other errors at a known constant failure rate. We study the structure of the optimal policy in terms of λ and present efficient numerical algorithms to compute the optimal maintenance time intervals, the optimal total number of maintenances, and the minimal total expected cost throughout the maintenance phase.
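The flavor of the optimization can be shown with a simplified stand-in model (not Wee's actual formulation): errors surface at a constant per-error rate, maintenances are evenly spaced, and the expected discounted cost is minimized by grid search over the interval and the number of maintenances. Every parameter value is invented:

```python
import numpy as np

def expected_cost(T, n, lam=50, phi=0.05, c_m=1.0, c_f=0.3, r=0.1):
    """Expected discounted cost of n maintenances at interval T in a
    simplified model: lam errors on average, each surfacing at rate phi;
    failures between maintenances cost c_f each, every maintenance costs
    c_m, and everything is discounted continuously at rate r.
    Illustrative stand-in only."""
    cost = 0.0
    for k in range(1, n + 1):
        t0, t1 = (k - 1) * T, k * T
        # expected first manifestations in (t0, t1]
        fails = lam * (np.exp(-phi * t0) - np.exp(-phi * t1))
        cost += c_f * fails * np.exp(-r * t0) + c_m * np.exp(-r * t1)
    return cost

best = min(((expected_cost(T, n), T, n)
            for T in np.linspace(1, 40, 40)
            for n in range(1, 15)), key=lambda z: z[0])
print("min expected total cost %.2f at interval %.1f with %d maintenances" % best)
```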
5

Indriani, Silvia. "Students’ Errors in Using the Simple Present Tense at Polytechnic ATI Padang." Lingua Cultura 13, no. 3 (September 27, 2019): 217. http://dx.doi.org/10.21512/lc.v13i3.5840.

Abstract:
The research aimed at analyzing errors in using the simple present tense at the Logistics Management of Agro-Industry department of Polytechnic ATI Padang. A qualitative method with a descriptive approach was applied. The sample was 15% of the 153 total students, or 23 students. Data were collected through a writing test, namely a descriptive essay. The results show that many students commit errors in using the simple present tense. The errors are classified into four types: omission, addition, misinformation, and misordering. There are 107 errors in total, with omission the most frequent (61 errors, or 57%). Misinformation is in second place with 29 errors (27.1%). Addition accounts for 11.2% with 12 errors. The least frequent error is misordering, at 4.7% with only five errors. In conclusion, the most dominant error made by the students is omission (57%) and misordering is the least frequent (4.7%). The lecturers are therefore expected to improve their strategies for teaching the simple present tense to reduce the number of students' errors.
6

Mehtätalo, Lauri, and Annika Kangas. "An approach to optimizing field data collection in an inventory by compartments." Canadian Journal of Forest Research 35, no. 1 (January 1, 2005): 100–112. http://dx.doi.org/10.1139/x04-139.

Abstract:
This study presents models for the expected error of the total volume and saw timber volume due to sampling errors of stand measurements. The measurements considered are horizontal point sample plots, stem numbers from circular plots, sample tree heights, sample order statistics (i.e., quantile trees), and sample tree heights from the previous inventory. Different measurement strategies were constructed by systematically varying the numbers of these measurements. A model system developed for this study was used in a data set of 170 stands to predict the total volume and saw timber volume of each stand with each measurement strategy. The errors of these volumes were modeled using stand characteristics and the numbers of measurements as predictors. The most important factors affecting the error in the total volume were the number of horizontal point sample plots and height sample trees. In addition, the number of quantile trees had a strong effect on the error of saw timber volume. The errors were slightly reduced when an old height measurement was used. There were significant interactions between stand characteristics and measurement strategies. Thus, the optimal measurement strategy varies between stands. A demonstration is provided of how constrained optimization can be used to find the optimal strategy for any one stand.
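The closing demonstration, constrained optimization of a measurement strategy, can be mimicked in a few lines: minimize a modeled volume error over the numbers of measurements subject to a field-cost budget. The error model's shape (error falling with the square root of the number of measurements) and every coefficient below are invented for illustration:

```python
import numpy as np

def rmse_total_volume(n_plots, n_heights, a=2.0, b=14.0, c=6.0):
    """Invented error model of the same general shape as the paper's:
    relative volume error (%) falls with the numbers of horizontal point
    sample plots and height sample trees measured."""
    return a + b / np.sqrt(n_plots) + c / np.sqrt(n_heights)

cost_plot, cost_height, budget = 10.0, 2.0, 120.0   # hypothetical field costs

best = min(
    ((rmse_total_volume(p, h), p, h)
     for p in range(1, 13) for h in range(1, 40)
     if p * cost_plot + h * cost_height <= budget),   # budget constraint
    key=lambda z: z[0])
print("RMSE %.2f%% with %d point-sample plots and %d height trees" % best)
```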
7

Verhoelst, T., J. Granville, F. Hendrick, U. Köhler, C. Lerot, J. P. Pommereau, A. Redondas, M. Van Roozendael, and J. C. Lambert. "Metrology of ground-based satellite validation: co-location mismatch and smoothing issues of total ozone comparisons." Atmospheric Measurement Techniques 8, no. 12 (December 2, 2015): 5039–62. http://dx.doi.org/10.5194/amt-8-5039-2015.

Abstract:
Comparisons with ground-based correlative measurements constitute a key component in the validation of satellite data on atmospheric composition. The error budget of these comparisons contains not only the measurement errors but also several terms related to differences in sampling and smoothing of the inhomogeneous and variable atmospheric field. A versatile system for Observing System Simulation Experiments (OSSEs), named OSSSMOSE, is used here to quantify these terms. Based on the application of pragmatic observation operators onto high-resolution atmospheric fields, it allows a simulation of each individual measurement, and consequently also of the differences to be expected from spatial and temporal field variations between the two measurements making up a comparison pair. As a topical case study, the system is used to evaluate the error budget of total ozone column (TOC) comparisons between GOME-type direct fitting (GODFITv3) satellite retrievals from GOME/ERS2, SCIAMACHY/Envisat, and GOME-2/MetOp-A, and ground-based direct-sun and zenith-sky reference measurements such as those from Dobson, Brewer, and zenith-scattered-light (ZSL-)DOAS instruments, respectively. In particular, the focus is placed on the GODFITv3 reprocessed GOME-2A data record vs. the ground-based instruments contributing to the Network for the Detection of Atmospheric Composition Change (NDACC). The simulations are found to reproduce the actual measurements almost to within the measurement uncertainties, confirming that the OSSE approach and its technical implementation are appropriate. This work reveals that many features of the comparison spread and median difference can be understood as due to metrological differences, even when using strict co-location criteria. In particular, sampling difference errors regularly exceed measurement uncertainties at most mid- and high-latitude stations, with values up to 10% and more in extreme cases. Smoothing difference errors only play a role in the comparisons with ZSL-DOAS instruments at high latitudes, especially in the presence of a polar vortex, due to the strong TOC gradient it induces. At tropical latitudes, where TOC variability is lower, both types of errors remain below about 1% and consequently do not contribute significantly to the comparison error budget. The detailed analysis of the comparison results, including the metrological errors, suggests that the published random measurement uncertainties for GODFITv3 reprocessed satellite data are potentially overestimated, and adjustments are proposed here. This successful application of the OSSSMOSE system to close for the first time the error budget of TOC comparisons bodes well for potential future applications, which are briefly touched upon.
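The error-budget closure exercise can be sketched as a variance decomposition: the observed spread of satellite-minus-ground differences is compared with the quadratic sum of the reported measurement uncertainties and the OSSE-simulated sampling and smoothing difference terms. The synthetic numbers below are placeholders, not the paper's data:

```python
import numpy as np

def budget_closure(sat, grd, u_sat, u_grd, d_sampling, d_smoothing):
    """Check closure of a comparison error budget: the observed spread of
    satellite-minus-ground differences against the quadratic sum of the
    reported 1-sigma measurement uncertainties and the simulated sampling
    and smoothing difference terms (all in the same units)."""
    observed = np.std(sat - grd)
    explained = np.sqrt(u_sat**2 + u_grd**2 +
                        np.std(d_sampling)**2 + np.std(d_smoothing)**2)
    return observed, explained

rng = np.random.default_rng(1)
n = 500
truth = rng.normal(300, 30, n)        # TOC in Dobson units
samp = rng.normal(0, 4, n)            # sampling difference term
smoo = rng.normal(0, 1, n)            # smoothing difference term
sat = truth + samp + rng.normal(0, 3, n)
grd = truth + smoo + rng.normal(0, 2, n)

# If observed >> explained, a term is missing; if observed < explained,
# a reported uncertainty (the paper suggests the satellite one) may be
# overestimated.
print(budget_closure(sat, grd, 3.0, 2.0, samp, smoo))
```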
8

Seoane, Fernando, Shirin Abtahi, Farhad Abtahi, Lars Ellegård, Gudmundur Johannsson, Ingvar Bosaeus, and Leigh C. Ward. "Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations." BioMed Research International 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/656323.

Abstract:
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopy (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at the population level was close to the performance of both BIS methods; however, comparing the mean absolute percentage error between the single-frequency prediction equations and the BIS methods yielded a significant difference, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both the population and individual level, the magnitude of the improvement was small. Such a slight improvement in accuracy is suggested to be insufficient to warrant the clinical use of BIS methods where the most accurate predictions of TBW are required, for example, when assessing fluid overload in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications.
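The two accuracy metrics the study leans on, mean absolute percentage error for single-measurement accuracy and Bland-Altman limits of agreement for population-level agreement, are short to state. The toy TBW values below are invented:

```python
import numpy as np

def mape(reference, predicted):
    """Mean absolute percentage error of a TBW prediction against the
    reference values: the expected single-measurement accuracy metric."""
    reference, predicted = np.asarray(reference), np.asarray(predicted)
    return 100 * np.mean(np.abs(predicted - reference) / reference)

def bland_altman_loa(reference, predicted):
    """Bland-Altman bias and 95% limits of agreement."""
    d = np.asarray(predicted) - np.asarray(reference)
    bias, s = d.mean(), d.std(ddof=1)
    return bias, bias - 1.96 * s, bias + 1.96 * s

# toy numbers in litres: reference (e.g. dilution) TBW vs a prediction
ref = [34.1, 41.0, 52.3, 38.7, 45.9]
bis = [33.2, 42.1, 51.0, 39.9, 44.8]
print(mape(ref, bis), bland_altman_loa(ref, bis))
```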
9

Schouten, S. M., M. E. van de Velde, G. J. L. Kaspers, L. B. Mokkink, I. M. van der Sluis, C. van den Bos, A. Hartman, F. C. H. Abbink, and M. H. van den Berg. "Measuring vincristine-induced peripheral neuropathy in children with cancer: validation of the Dutch pediatric–modified Total Neuropathy Score." Supportive Care in Cancer 28, no. 6 (November 16, 2019): 2867–73. http://dx.doi.org/10.1007/s00520-019-05106-3.

Abstract:
Purpose: The aims were to evaluate the construct validity and reliability of the Dutch version of the pediatric-modified Total Neuropathy Score (ped-mTNS) for assessing vincristine-induced peripheral neuropathy (VIPN) in Dutch pediatric oncology patients aged 5–18 years. Methods: Construct validity (primary aim) of the ped-mTNS was determined by testing hypotheses about the expected correlation between scores of the ped-mTNS (range: 0–32) and the Common Terminology Criteria for Adverse Events (CTCAE) (range: 0–18) for patients and healthy controls and by comparing patients and controls regarding their total ped-mTNS scores and the proportion of children identified with VIPN. Inter-rater and intra-rater reliability and measurement error (secondary aims) were assessed in a subgroup of study participants. Results: Among the 112 children (56 patients and 56 age- and gender-matched healthy controls) evaluated, the correlation between CTCAE and ped-mTNS scores was as expected (moderate; r = 0.60). Moreover, as expected, patients had significantly higher ped-mTNS scores and more frequent symptoms of VIPN compared with controls (both p < .001). Reliability as measured within the intra-rater group (n = 10) (intra-class correlation coefficient ICC_agreement = 0.64, standard error of measurement SEM_agreement = 2.92, and smallest detectable change SDC_agreement = 8.1) and within the inter-rater subgroup (n = 10) (ICC_agreement = 0.63, SEM_agreement = 3.7, and SDC_agreement = 10.26) indicates insufficient reliability. Conclusion: The Dutch version of the ped-mTNS appears to have good construct validity for assessing VIPN in a Dutch pediatric oncology population, whereas reliability appears to be insufficient and measurement error high. To improve standardization of VIPN assessment in children, future research aimed at evaluating and further optimizing the psychometric characteristics of the ped-mTNS is needed.
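The reported reliability figures are internally consistent with the standard relation SDC = 1.96 · √2 · SEM for a test-retest design, which reproduces the paper's numbers:

```python
import math

def smallest_detectable_change(sem):
    """SDC = 1.96 * sqrt(2) * SEM: the smallest within-patient change that
    exceeds measurement error with 95% confidence in a test-retest design."""
    return 1.96 * math.sqrt(2) * sem

# Reproduces the paper's figures on the 0-32 ped-mTNS scale:
# SEM 2.92 -> SDC ~8.1 (intra-rater), SEM 3.7 -> SDC ~10.26 (inter-rater)
print(smallest_detectable_change(2.92), smallest_detectable_change(3.7))
```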
10

Verhoelst, T., J. Granville, F. Hendrick, U. Köhler, C. Lerot, J. P. Pommereau, A. Redondas, M. Van Roozendael, and J. C. Lambert. "Metrology of ground-based satellite validation: co-location mismatch and smoothing issues of total ozone comparisons." Atmospheric Measurement Techniques Discussions 8, no. 8 (August 4, 2015): 8023–82. http://dx.doi.org/10.5194/amtd-8-8023-2015.

Abstract:
Comparisons with ground-based correlative measurements constitute a key component in the validation of satellite data on atmospheric composition. The error budget of these comparisons contains not only the measurement uncertainties but also several terms related to differences in sampling and smoothing of the inhomogeneous and variable atmospheric field. A versatile system for Observing System Simulation Experiments (OSSEs), named OSSSMOSE, is used here to quantify these terms. Based on the application of pragmatic observation operators onto high-resolution atmospheric fields, it allows a simulation of each individual measurement, and consequently also of the differences to be expected from spatial and temporal field variations between the two measurements making up a comparison pair. As a topical case study, the system is used to evaluate the error budget of total ozone column (TOC) comparisons between, on the one hand, GOME-type direct fitting (GODFITv3) satellite retrievals from GOME/ERS2, SCIAMACHY/Envisat, and GOME-2/MetOp-A, and, on the other hand, direct-sun and zenith-sky reference measurements such as those from Dobson, Brewer, and zenith-scattered-light (ZSL-)DOAS instruments, respectively. In particular, the focus is placed on the GODFITv3 reprocessed GOME-2A data record vs. the ground-based instruments contributing to the Network for the Detection of Atmospheric Composition Change (NDACC). The simulations are found to reproduce the actual measurements almost to within the measurement uncertainties, confirming that the OSSE approach and its technical implementation are appropriate. This work reveals that many features of the comparison spread and median difference can be understood as due to metrological differences, even when using strict co-location criteria. In particular, sampling difference errors regularly exceed measurement uncertainties at most mid- and high-latitude stations, with values up to 10% and more in extreme cases. Smoothing difference errors only play a role in the comparisons with ZSL-DOAS instruments at high latitudes, especially in the presence of a polar vortex. At tropical latitudes, where TOC variability is lower, both types of errors remain below about 1% and consequently do not contribute significantly to the comparison error budget. The detailed analysis of the comparison results, now including the metrological errors, suggests that the published random measurement uncertainties for GODFITv3 reprocessed satellite data are potentially overestimated, and adjustments are proposed here. This successful application of the OSSSMOSE system to close for the first time the error budget of TOC comparisons bodes well for potential future applications, which are briefly touched upon.

Dissertations / Theses on the topic "Total Expected Error"

1

Rodriguez, Alexander John, and alex73@bigpond net au. "Experimental Analysis of Disc Thickness Variation Development in Motor Vehicle Brakes." RMIT University. Aerospace, Mechanical and Manufacturing Engineering, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070209.123739.

Abstract:
Over the past decade, vehicle judder caused by Disc Thickness Variation (DTV) has become a major concern to automobile manufacturers worldwide. Judder is usually perceived by the driver as minor to severe vibrations transferred through the chassis during braking [1-9]. In this research, DTV is investigated via the use of a Smart Brake Pad (SBP). The SBP is a tool that will enable engineers to better understand the processes which occur in the harsh and confined environment that exists between the brake pad and disc whilst braking. It is also a tool that will enable engineers to better understand the causes of DTV and stick-slip, the initiators of low- and high-frequency vibration in motor vehicle brakes. Furthermore, the technology can equally be used to solve many other remaining mysteries in automotive, aerospace, and rail applications, or anywhere two surfaces come into contact. The SBP consists of sensors embedded into an automotive brake pad, enabling it to measure pressure between the brake pad and disc whilst braking. The two sensor technologies investigated were Thick Film (TF) and Fibre Optic (FO) technologies. Each type was tested individually using a Material Testing System (MTS) at room and elevated temperatures. The chosen SBP was then successfully tested in simulated driving conditions. A preliminary mathematical model was developed and tested for the TF sensor and a novel Finite Element Analysis (FEA) model for the FO sensor. A new method called the Total Expected Error (TEE) method was also developed to simplify the sensor specification process and to ensure consistent comparisons are made between sensors. Most importantly, our achievement will lead to improved comfort levels for the motorist.
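The abstract does not spell out the Total Expected Error (TEE) method itself, so the sketch below shows only the generic kind of total-error combination of datasheet terms that a sensor-specification method of this sort would standardize, contrasting a worst-case sum with a root-sum-square; every spec term is hypothetical:

```python
import math

# Hypothetical datasheet terms in % of full scale; not taken from the thesis.
errors_pct_fs = {
    "nonlinearity": 0.5,
    "hysteresis": 0.3,
    "repeatability": 0.2,
    "thermal_drift": 0.4,   # over the rated temperature band
}

# Worst case assumes all terms peak together; RSS assumes independence.
worst_case = sum(errors_pct_fs.values())
rss = math.sqrt(sum(e**2 for e in errors_pct_fs.values()))
print(f"worst case {worst_case:.2f} %FS, RSS {rss:.2f} %FS")
```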

Book chapters on the topic "Total Expected Error"

1

Ahmad, Rauf, and Silvelyn Zwanzig. "On Total Least Squares Estimation for Longitudinal Errors-in-Variables Models." In Measurement Error in Longitudinal Data, 359–80. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198859987.003.0015.

Abstract:
The objective of this study is to evaluate the total least squares (TLS) estimator for the linear mixed model when the design matrix is subject to measurement errors, with special focus on models for longitudinal or repeated-measures data. We consider measurement errors only in the design matrix concerning the fixed part of the model and estimate its corresponding parameter vector under the TLS setup. After treating two variants of the general case, the random coefficient model is discussed as a special case. We evaluate conditions, on the design matrices as well as on the variance component parameters, under which a reasonable TLS estimator can be expected in such models. Analysis of a real data example is also provided.
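The classical TLS building block the chapter starts from (the SVD solution of Golub and Van Loan, perturbing both the design matrix and the response) can be sketched directly; the chapter's extension to mixed models for longitudinal data is beyond this snippet:

```python
import numpy as np

def tls(A, b):
    """Classical total least squares: find the minimal perturbation of both
    A and b that makes the system consistent. The solution comes from the
    right singular vector of [A | b] with the smallest singular value."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]
    return -v[:n] / v[n]

rng = np.random.default_rng(2)
x_true = np.array([2.0, -1.0])
A = rng.normal(size=(200, 2))
b = A @ x_true
A_noisy = A + rng.normal(0, 0.1, A.shape)   # errors in the design matrix
b_noisy = b + rng.normal(0, 0.1, b.shape)

# OLS is biased toward zero under design-matrix error; TLS corrects for it.
print("OLS:", np.linalg.lstsq(A_noisy, b_noisy, rcond=None)[0])
print("TLS:", tls(A_noisy, b_noisy))
```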
2

Fox, Dov. "Damage Awards." In Birth Rights and Wrongs, 87–96. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190675721.003.0007.

Abstract:
Two questions should guide award determinations for procreation deprived, imposed, and confounded: First, how serious is a plaintiff’s reproductive loss? The answer goes to the nature and duration of that loss’s practical consequences for the plaintiff’s life. The second question asks how likely any future loss is to come about, and the extent to which its cause can be traced to a defendant’s misconduct, as opposed to some other factor for which the defendant isn’t to blame. The severity of reproductive injuries calls for objective inquiry into how a reasonable person in the plaintiff’s shoes would be affected. Permanent injuries tend to be more severe than temporary ones because they can be expected to cause greater disruption to major life activities like education, work, marriage, friendships, and emotional well-being. The question isn’t what plaintiffs would have done if they’d known that negligence would dash their efforts—it’s how much those injuries can be expected to impair their lives, from the perspective of their own ideals and circumstances. The causation element of this damages inquiry asks: What are the odds that plaintiffs would have suffered the complained-of reproductive outcome if it hadn’t been for the professional misconduct? Preexisting infertility, contraceptive user error, and genetic uncertainty can deprive, impose, or confound procreation just the same in the absence of any wrongdoing. Probabilistic recovery starts with the award total corresponding to the absolute loss in question, and reduces it by the extent to which the loss was caused by outside forces.
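One minimal reading of the chapter's two-step inquiry, severity then causation, is a base award scaled by the seriousness of the loss and discounted by the probability that the defendant's misconduct (rather than, say, preexisting infertility or user error) caused the outcome. The numbers are invented for illustration:

```python
def probabilistic_recovery(base_award, severity_factor, p_caused_by_defendant):
    """Scale a base award by loss severity, then discount by the estimated
    probability that the defendant's misconduct caused the outcome."""
    return base_award * severity_factor * p_caused_by_defendant

# hypothetical: $100k base, 1.5x for a permanent loss, 60% causation odds
print(probabilistic_recovery(100_000, 1.5, 0.6))   # -> 90000.0
```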

Conference papers on the topic "Total Expected Error"

1

Perez, Ethan, Ryan T. Kelly, Kotaro Matsui, Naoki Tani, and Aleksandar Jemcov. "Analysis of the Convergence Rate of Turbulence Model Uncertainties for Transonic Axial Compressor Simulation." In ASME Turbo Expo 2020: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/gt2020-15716.

Abstract:
Numerical experiments were performed to assess the effect of numerical discretization error on the convergence rate of polynomial chaos (PC) approximations for a transonic axial compressor stage. A random variable with a uniform distribution and expected value of one was introduced into the expression for turbulent viscosity of the k-ω SST turbulence model. Model uncertainty was quantified from the expected value and standard deviation estimates obtained via univariate non-intrusive polynomial chaos. Spectral projection and point collocation were both used and their results were compared. The effect of discretization error on convergence of the PC approximation was investigated using a grid refinement study with four grids. The PC expansion was computed for each grid while maintaining the same boundary conditions, basis functions, model evaluations, random variable distribution, and polynomial order. The quantities of interest (QOIs) were total–to–total pressure ratio, total–to–total temperature, and adiabatic efficiency. The grid resolution was found to have an influence on resulting surrogate models and the estimates of expected value and standard deviation for all QOIs. However, the estimates converged towards final values as the mesh was refined. Point collocation provided different estimates from spectral projection and the difference was also found to depend on the mesh size.
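A one-dimensional version of the paper's setup is easy to sketch: a uniform random variable with expected value one multiplies the turbulent viscosity, a Legendre polynomial chaos is fit both by spectral projection (quadrature) and by least-squares point collocation, and the two estimates of mean and standard deviation are compared. The stand-in "solver" below is an arbitrary smooth function, not a CFD result:

```python
import numpy as np
from numpy.polynomial import legendre as L

def pc_legendre(f, order=4, n_quad=8, n_colloc=40, seed=0):
    """1-D non-intrusive polynomial chaos for xi ~ U(-1, 1); the viscosity
    multiplier could be 1 + 0.5*xi to have expected value one. Returns
    (mean, std) from spectral projection and from point collocation, so the
    two approaches can be compared as in the paper. f stands in for the CFD
    solver: quantity of interest as a function of xi."""
    # spectral projection: c_k = (2k+1)/2 * integral f(x) P_k(x) dx
    xq, wq = L.leggauss(n_quad)
    fq = np.array([f(x) for x in xq])
    c_proj = np.array([(2*k + 1) / 2 * np.sum(wq * fq * L.legval(xq, [0]*k + [1]))
                       for k in range(order + 1)])
    # point collocation: least-squares fit on random samples
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-1, 1, n_colloc)
    V = np.column_stack([L.legval(xs, [0]*k + [1]) for k in range(order + 1)])
    c_coll = np.linalg.lstsq(V, np.array([f(x) for x in xs]), rcond=None)[0]

    def stats(c):
        # E[P_k^2] = 1/(2k+1) under the uniform density on [-1, 1]
        var = np.sum(c[1:]**2 / (2*np.arange(1, order + 1) + 1))
        return c[0], np.sqrt(var)
    return stats(c_proj), stats(c_coll)

# stand-in QoI responding mildly nonlinearly to the viscosity multiplier
print(pc_legendre(lambda xi: 1.9 + 0.05*xi + 0.02*xi**2))
```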
2

Worthingham, Robert, Tom Morrison, and Guy Desjardins. "Comparison of Estimates From a Growth Model 5 Years After the Previous Inspection." In 2000 3rd International Pipeline Conference. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/ipc2000-208.

Abstract:
A corrosion growth modelling procedure using repeated inline inspection data has been employed as part of the maintenance program planning for a pipeline in the Alberta portion of the TransCanada system. The methodology of matching corrosion features between the different in-line inspections, and estimating their severity at a future date, is shown to be an excellent proactive cost saving methodology. Throughout this paper estimated 80% confidence intervals for tool measurement error, total prediction error and growth methodology error are given. In this abstract the values have been rounded. For maximum penetration, for the features reported on three inspections, the confidence interval for total prediction error varies from ±12% to ±17%, and for the growth methodology from ±8% to ±10% of the wall thickness (for the 1998 and 1999 dig programs respectively). For features reported on two inspections the confidence interval varies from ±19% to ±22% for total prediction error (1998 and 1999 digs respectively), and is about ±17% for the growth methodology (for both dig programs). The estimated confidence interval for prediction error in failure pressure is about ±560 kPa for the 1998 dig program. For the 1999 dig program a good estimate of the confidence interval for total prediction error could not be obtained. Assuming the failure pressure data obtained from field measurements were perfect, the estimate of the maximum confidence interval was ±850 kPa. For the laser profile measurement field tool, compared to an ultrasonic pencil probe, the confidence interval for penetration is less than ±2% of the wall thickness. The true confidence interval values in some cases are expected to be smaller than reported above for several reasons discussed in this paper.
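If the tool-measurement and growth-methodology errors are taken as independent and quoted at the same confidence level, the total prediction interval combines in quadrature, which is consistent in magnitude with the intervals quoted above. This decomposition is an illustrative assumption, not the paper's stated procedure:

```python
import math

def combined_ci(ci_tool, ci_growth):
    """Combine independent error terms quoted at the same confidence level
    in quadrature to get the total prediction confidence interval."""
    return math.hypot(ci_tool, ci_growth)

# e.g. about +/-9% tool sizing with +/-8% growth methodology -> ~+/-12%
# total, consistent in magnitude with the 1998 dig-program figures above
print(combined_ci(9.0, 8.0))
```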
3

Weathers, J. B., B. T. Marvel, K. K. Srinivasan, P. J. Mago, L. M. Chamra, and W. G. Steele. "Error Propagation in Heat Release Analysis of Pilot Ignited Natural Gas Combustion." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-42144.

Abstract:
Uncertainty within measured variables, and how such errors propagate throughout a given equation or set of equations, can greatly affect the accuracy and understanding of the result of a given experiment. The major motivation for performing a detailed uncertainty analysis before beginning an experiment is to identify the variables or parameters that would have the greatest and least impact on the total uncertainty of the result. The scope of this study is to perform a detailed uncertainty analysis on estimates of net heat release in a compression ignition engine. The analysis examines each term of the net heat release rate equation, which is routinely estimated using a single-zone thermodynamic model, and evaluates the respective Uncertainty Magnification Factors (UMF) and Uncertainty Percentage Contributions (UPC). Since the net work output from the engine is directly related to in-cylinder pressure data, it is important to evaluate the uncertainties associated with cylinder pressure measurement. The primary objective of this paper is to analyze the effect of bias and precision uncertainties associated with the measured cylinder pressure data on the rate of heat release (ROHR) of a pilot-ignited natural gas engine. Sensitivity analyses of other parameters, such as the correct estimation of the compression ratio and the use of appropriate thermodynamic properties of the combustion gases, are also discussed. The estimates from this analysis are expected to aid the development of a detailed experimental matrix for analyzing the nature of energy release and the performance of combustion engines.
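The two sensitivity measures, UMF and UPC, can be computed numerically for the single-zone net heat release rate dQ = γ/(γ−1)·p·dV + 1/(γ−1)·V·dp. The nominal values and uncertainties below are assumptions for illustration, not the paper's data:

```python
import numpy as np

def net_rohr(gamma, p, dV, V, dp):
    """Single-zone net rate of heat release per crank-angle step:
    dQ = gamma/(gamma-1) * p*dV + 1/(gamma-1) * V*dp."""
    return gamma/(gamma - 1)*p*dV + 1/(gamma - 1)*V*dp

def umf_upc(x, u, f, eps=1e-6):
    """Numerical UMF_i = |(x_i/r) dr/dx_i| and the percentage contribution
    of each input's uncertainty u_i to the squared result uncertainty."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    r = f(*x)
    grad = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps * abs(x[i])
        grad[i] = (f(*xp) - r) / (eps * abs(x[i]))   # forward difference
    umf = np.abs(grad * x / r)
    contrib = (grad * u)**2
    return umf, 100 * contrib / contrib.sum()        # UMF, UPC (%)

# nominal inputs: gamma, p [kPa], dV [m^3/deg], V [m^3], dp [kPa/deg]
x = [1.32, 3500.0, 2.0e-5, 4.0e-4, 45.0]
u = [0.02, 35.0, 2.0e-7, 4.0e-6, 2.0]    # assumed standard uncertainties
print(umf_upc(x, u, net_rohr))
```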
4

Singh, Pushpendra, and Nadine Aubry. "Direct Simulation of Electrorheological Suspensions." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-61527.

Abstract:
A numerical scheme based on the distributed Lagrange multiplier method (DLM) is used to study the motion of particles of a dielectric suspension subjected to uniform and nonuniform electric fields. The Maxwell stress tensor method is used for computing electrostatic forces. In the point dipole approximation, the total electrostatic force acting on a particle can be divided into two distinct contributions: one due to dielectrophoresis and the second due to particle-particle interactions. The former is zero when the applied electric field is uniform, and the latter depends on the distance between the particles. In the Maxwell stress tensor approach these two contributions appear together. Simulations show that, as expected, the error in the point dipole approximation decreases as the distance between the particles increases.
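The dielectrophoretic part of the point-dipole force has the standard closed form F = 2π·ε_m·R³·K·∇|E|², with K the Clausius-Mossotti factor; the particle-particle interaction term the abstract mentions is omitted here, and all values are illustrative:

```python
import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def dep_force(R, eps_p, eps_m, grad_E2):
    """Point-dipole dielectrophoretic force on a sphere of radius R (m) with
    relative permittivity eps_p in a medium of relative permittivity eps_m:
    F = 2*pi*(eps_m*EPS0)*R^3 * K * grad(|E|^2), where
    K = (eps_p - eps_m)/(eps_p + 2*eps_m) is the Clausius-Mossotti factor.
    The particle-particle interaction contribution is not included."""
    K = (eps_p - eps_m) / (eps_p + 2*eps_m)
    return 2*np.pi*eps_m*EPS0*R**3 * K * np.asarray(grad_E2)

# 10-micron particle, eps_p=2.5 in water-like eps_m=78, grad|E|^2 of 1e13
print(dep_force(5e-6, 2.5, 78.0, [1e13, 0.0, 0.0]))   # force in newtons
```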
5

Esakkimuthu, T., Marykutty Abraham, and S. Akila. "Application of Artificial Neural Network to Predict TDS Concentrations of the River Thamirabarani, India." In Intelligent Computing and Technologies Conference. AIJR Publisher, 2021. http://dx.doi.org/10.21467/proceedings.115.6.

Abstract:
River water quality modeling is of prime importance in predicting the health of rivers and, in turn, warning society about future water problems in an area. Total dissolved solids (TDS) is a prominent parameter used to assess the quality of river water. In this study, artificial neural network models were developed to predict the concentrations of total dissolved solids in the river Thamirabarani in India. The Neural Network toolbox of MATLAB 2017 was used to create and train the models. Monthly data from 2016 to 2019 at four different sites near the Thamirabarani river were procured from the Tamilnadu pollution control board. Many artificial neural network architectures were built, and the best-performing architecture was selected for this study. With parameters such as pH, chloride, turbidity, hardness, and dissolved oxygen as inputs and total dissolved solids as the output parameter, the model was trained over many iterations, and a final architecture was arrived at which predicts future TDS concentrations of the Thamirabarani more accurately. The predicted and expected values were very close to each other. The root mean square error (RMSE) values for the selected stations of Papanasam, Cheranmahadevi, Tirunelveli and Punnaikayal were 0.565, 0.591, 0.648 and 0.67, respectively.
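A scikit-learn stand-in (the study used MATLAB's Neural Network toolbox) shows the shape of the exercise: the five water-quality inputs, a small hidden layer, and a hold-out RMSE. The synthetic data below replace the pollution-control-board records:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# synthetic stand-in for the monthly records: pH, chloride, turbidity,
# hardness, dissolved oxygen -> TDS (mg/L)
X = rng.normal(size=(200, 5))
y = 300 + 40*X[:, 1] + 25*X[:, 3] + rng.normal(0, 5, 200)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X[:160], y[:160])                      # train on the first 160 months

rmse = np.sqrt(np.mean((model.predict(X[160:]) - y[160:])**2))
print(f"hold-out RMSE: {rmse:.2f} mg/L")
```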
6

Rigo, H. Gregor. "Dancing the Emissions Limitation Limbo: How Low Dare You Go?" In 10th Annual North American Waste-to-Energy Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/nawtec10-1022.

Abstract:
After promulgation of the New Source Performance Standards (NSPS) and Emissions Guidelines (EG) for Large and Small Municipal Waste Combustors (MWCs), the Environmental Protection Agency (EPA) entered a new regulatory arena – regulating the remaining risks to public health and the environment after Maximum Available Control Technology (MACT) is applied. The residual risk from MWCs is expected to be negligible; however, the public and some state and local regulators are now looking for ways to assure continuation of the exemplary emissions performance being measured at many of these retrofit sources. Hence, the question now becomes: how low can an achievable emissions limitation be? Confidence should not be placed in a source’s ability to continually meet the low emissions limitations embodied in the MWC EGs and NSPSs. Contrary to assertions in the Response to Comments for the Small MWC regulations [1], the Environmental Protection Agency could not have properly considered and incorporated measurement uncertainty into its dioxin guidelines; no one knew the uncertainty of total dioxin measurements above 28 ng/dsm3 corrected to 7 percent O2 until 2001, when the work supporting this paper was performed. When the 13 ng/dsm3 corrected to 7 percent O2 NSPS for MWCs was developed, the data needed to determine the measurement uncertainty of most Section 129 pollutants had not even been collected. Further, asserting that the data used to derive the NSPS emissions limitations include measurement error, and that therefore any data-derived emissions limitations inherently consider that error, is only true if the measurement error is much smaller (say, less than 10 percent) than the short- and long-term variations in emissions performance. Beginning with a set of three total dioxin measurements that averaged 4 ng/dsm3 corrected to 7 percent O2, the emissions limitation meeting the 95 percent statistical confidence level criterion underlying many NSPS is almost 15 ng/dsm3 corrected to 7 percent O2. If the statistical criterion is changed to inclusion of “almost all” the expected results when these facilities continue to emit as they did during the original data acquisition, the emissions limitation becomes almost 18 ng/dsm3 corrected to 7 percent O2. Consequently, sources must not agree to standards that do not properly consider measurement method precision if they want to avoid exceedances when everything is working properly.
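One plausible reading of the 95 percent criterion is an upper prediction limit for a future test-run average computed from the original compliance-test runs; the paper's exact statistic may differ. With three hypothetical runs averaging 4 ng/dsm3, the limit lands well above the mean, illustrating why limits set at measured performance without a variability allowance invite exceedances:

```python
import numpy as np
from scipy import stats

def upper_prediction_limit(x, p=0.95):
    """One-sided upper prediction limit for a single future observation,
    assuming normal i.i.d. runs: xbar + t_{p, n-1} * s * sqrt(1 + 1/n).
    One plausible form of the 'include 95% of expected results' criterion."""
    x = np.asarray(x, float)
    n = x.size
    t = stats.t.ppf(p, n - 1)
    return x.mean() + t * x.std(ddof=1) * np.sqrt(1 + 1/n)

# three hypothetical dioxin runs averaging 4 ng/dsm3 @ 7% O2
runs = np.array([2.0, 4.0, 6.0])
print(upper_prediction_limit(runs))   # limit well above the 4 ng/dsm3 mean
```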
7

Murray, Steve J., Rose M. Ray, and Helene L. Grossman. "Using Weibull Analysis for Cases With an Unknown Susceptible Population." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-43867.

Abstract:
Weibull analysis is a powerful predictive tool for studying failure trends of engineering systems. [1] One noted shortcoming is that traditional techniques require the size of the susceptible population to be known. The method described in this paper allows for estimation of the size of the susceptible population using only failure data and no assumptions about total population size or susceptible portion. In the analysis of failures of mass-produced products, a large amount of failure data may be available, but all the conditions that define the susceptible population may never be known. For example, units with a particular usage condition may be expected to fail over time following a Weibull model, but the number of units subjected to that usage condition may never be known. To assume that the entire population is susceptible to the failure mode would greatly over-predict future failures, and the model could not be used to guide decision-making. By doing a least squares fit to the trend of failures versus time, a Weibull model can be fit to the data and then used to estimate the total number of susceptible units expected in the population. The ability to accurately estimate the size of the susceptible sub-population from failure data will be explored as a function of the size of the data set used, for known sets of failure data. For example, for a failure distribution that has increased, peaked, and then decreased to zero, almost the entire population has failed, so an estimate of the size of the susceptible population from this data is likely to be accurate. On the contrary, for only a few data points that show an increasing failure rate over time, little can be determined. Monte Carlo simulations will be used in order to estimate the error associated with this technique. Our analysis will show that predictions of total susceptible populations become similar to the actual susceptible populations when the predicted mean time to failure (MTTF) from the observed data is shorter than the observation time. In effect, predictions become accurate when it is clear to the observer that the number of failures per unit time has peaked.
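A minimal sketch of the method: fit the cumulative failure count to N·(1−exp(−(t/η)^β)) by least squares, treating the susceptible population size N as a free parameter. The synthetic fleet below is observed past its failure-rate peak, the regime where the abstract says the estimate of N becomes accurate:

```python
import numpy as np
from scipy.optimize import curve_fit

def cum_failures(t, N, eta, beta):
    """Expected cumulative failures when only N units (unknown) are
    susceptible and their lives are Weibull(scale eta, shape beta)."""
    return N * (1 - np.exp(-(t / eta) ** beta))

# synthetic field data: 500 susceptible units, eta=40 months, beta=2.2,
# observed for 60 months (past the MTTF of ~35 months)
rng = np.random.default_rng(4)
lives = 40 * rng.weibull(2.2, 500)
t_obs = np.arange(1, 61)
data = np.array([(lives <= t).sum() for t in t_obs])

popt, pcov = curve_fit(cum_failures, t_obs, data, p0=[max(data), 30, 1.5])
N_hat, eta_hat, beta_hat = popt
print(f"susceptible N ~ {N_hat:.0f}, eta ~ {eta_hat:.1f}, beta ~ {beta_hat:.2f}")
```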
8

Chen, Chang-Nian, Ji-Tian Han, Li Shao, Tien-Chien Jen, and Yi-Hsin Yen. "Design of Equipment for Manufacturing Helically-Coiled Tubes and its Automatic Control System." In ASME 2010 International Mechanical Engineering Congress and Exposition. ASMEDC, 2010. http://dx.doi.org/10.1115/imece2010-37146.

Abstract:
A simple but accurate method for manufacturing helically-coiled tubes was proposed, and the manufacturing equipment and its automatic control system were designed. The main geometric parameters of helically-coiled tubes are determined exactly based on the theorem that three given points determine a circle and on the definition of the helix angle of helically-coiled tubes. The finished equipment primarily consists of the mechanical body and the automatic control system. In this design, three die wheels A, B and C made of wear-resistant steel are used to adjust the positions of the raw material in order to obtain the product geometric parameters specified in advance. Three servo motors, working with precision linear sliding rheostats and PID closed-loop control, drive the three wheels mentioned above in different directions. The parameter e, which determines the base-circle diameter of the coil, is obtained by adjusting the position of wheel C up and down, and the parameter e', which determines the helix angle, is obtained by adjusting the relative distance between wheel B and wheel A in the helical axis direction. The whole manufacturing process is automatically controlled by software written in Visual Basic, covering material feeding and cutting, wheel installation and calibration, motor control, tube bending, and product inspection. The design parameters for manufacturing helically-coiled tubes using SUS304 stainless steel or other similar materials are tube diameters of 6–50 mm, coil diameters of 100–700 mm and helical pitches of 10–50 mm. A total of fourteen finished products were selected as random samples for inspection. The results showed that the average working velocity was about 0.6 m/min; the root mean square errors (RMSE) of the coil diameter and helical pitch of the finished products were 3.85 mm and 0.97 mm, respectively; and the maximum roundness error of the tubes was only 0.09 mm.
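The geometric core, recovering the coil's base circle from the three wheel-contact points, is the circumcircle of three points; the sketch below solves the two linear equidistance equations. The contact coordinates are toy values:

```python
import numpy as np

def circle_from_three_points(p1, p2, p3):
    """Centre and diameter of the circle through three points: the
    'three given points determine a circle' basis of the design, where
    the positions of die wheels A, B and C fix three contact points and
    hence the base-circle (coil) diameter."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # equidistance from the centre gives two linear equations in (cx, cy)
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    cx, cy = np.linalg.solve(A, b)
    r = np.hypot(cx - x1, cy - y1)
    return (cx, cy), 2 * r

# toy contact points consistent with a 400 mm coil diameter
print(circle_from_three_points((0, 0), (400, 0), (200, 200)))
```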
9

Taylor, Katherine, Susannah Turner, and Graham Goodfellow. "A Case Study on the Application of Structural Reliability Analysis to Assess Integrity for Internal Corrosion of Unpiggable Pipelines." In 2016 11th International Pipeline Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/ipc2016-64341.

Abstract:
Operators wish to understand the condition of their pipelines to manage ongoing integrity. Information on the condition of a pipeline along its entire length can be obtained using in-line inspection (ILI). However, some pipelines cannot be internally inspected due, for example, to tee connections, tight bends, low flow or a lack of launcher and receiver facilities. The condition of these ‘unpiggable’ lines can sometimes be largely unknown. To aid the understanding of the pipeline condition without ILI data, operators will often rely on alternative sources of information, such as localised external inspections, model predictions and company and individual experience. However, there may be significant uncertainty associated with these alternative data sources when using them to assess the condition of the entire pipeline. This uncertainty may be understood by applying a probabilistic approach to the assessment of pipeline integrity using structural reliability analysis (SRA) methods. An SRA approach applies probabilistic input parameters to a failure prediction model for a defined limit state function. Previous IPC papers [1,2,3] have presented guidance on probabilistic assessments to model pipeline failure. Recommended probability distributions are presented which account for uncertainties associated with line pipe properties, defect sizing and the error associated with the failure prediction model. However, there is little published guidance readily available on recommended defect characteristic distributions specific to internal corrosion features; the parameter distributions recommended for defect sizing are based on empirical data mainly from external corrosion features. In this paper, a case study is used to present a practical application of an SRA methodology for the assessment of pipeline integrity with respect to internal corrosion. Discussion is presented on alternative sources of information for the assessment when ILI data are unavailable, including targeted external inspections of unpiggable lines and data sets from comparable piggable lines. Probability distributions are derived from the available inspection data for the internal corrosion feature size and corrosion rate input parameters to the SRA. Probabilistic analysis is used to account for the expected population of unknown features in the uninspected parts of the pipelines. The expected feature size, corrosion rate and feature density are used in the SRA to estimate the total probability of failure due to internal corrosion over time for the entire length of the pipeline. Recommendations are provided on the application of an SRA methodology to assess pipeline failure due to internal corrosion.
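A minimal Monte Carlo SRA sketch for internal corrosion: sample feature depth and growth rate from fitted distributions, push them through a part-wall burst model with multiplicative model error, and count limit-state exceedances. The burst model is a simplified stand-in for whichever validated model an operator would use, and every distribution parameter below is invented:

```python
import numpy as np

rng = np.random.default_rng(5)

def failure_pressure(depth_frac, wt=7.1, D=610.0, smts=450.0, L=100.0):
    """Simplified part-wall burst model (modified-B31G flavour) in MPa/mm
    units: flow stress from SMTS, Folias factor M for defect length L.
    A stand-in, not a specific code's exact equation."""
    M = np.sqrt(1 + 0.6275*L**2/(D*wt) - 0.003375*L**4/(D*wt)**2)
    flow = smts + 68.95
    return (2*wt*flow/D) * (1 - depth_frac) / (1 - depth_frac/M)

n, years, p_op = 200_000, 10, 7.0          # samples, horizon, MPa operating
depth0 = rng.gamma(2.0, 0.05, n).clip(0, 0.8)     # depth/wt from inspection fit
rate = rng.lognormal(np.log(0.01), 0.5, n)        # growth, wt-fraction per year
depth = np.clip(depth0 + rate*years, 0, 0.95)
model_error = rng.normal(1.0, 0.1, n)             # multiplicative model uncertainty

# limit state g = (model error) * (burst pressure) - (operating pressure) < 0
pof = np.mean(model_error * failure_pressure(depth) < p_op)
print(f"probability of failure per feature after {years} years ~ {pof:.1e}")
```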
10

Lenzi, Giulio, Andrea Fioravanti, Giovanni Ferrara, and Lorenzo Ferrari. "Development of an Innovative Multi-Sensor Waveguide Probe With Improved Measurement Capabilities." In ASME Turbo Expo 2014: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/gt2014-26425.

Abstract:
Waveguide probes are currently widely used in several turbomachinery applications, ranging from the analysis of flow instabilities to the investigation of thermoacoustic phenomena. There are many advantages to using a waveguide probe: for example, the same sensor can be adopted for different measurement points, thus reducing the total number of sensors, or a cheaper sensor with a lower operating-temperature capability can be used instead of a more expensive one in high-temperature applications. Typically, a waveguide probe is made up of a transmitting duct, which connects the measurement point with a sensor housing, and a damping duct, which attenuates the pressure fluctuations reflected by the duct end. If properly designed (i.e. with a very long damping duct), the theoretical response of a waveguide has a monotone trend, with an attenuation factor that increases with the frequency and the length of the transmitting duct. Unfortunately, the real geometry of the waveguide components and the type of connection between them have a strong influence on the behavior of the system. Even the smallest discontinuity in the duct connections can lead to a very complex frequency response and a reduced operating range. The geometry of the sensor housing itself is another element which contributes to increasing the differences between the expected and real frequency responses of a waveguide, since its impedance is generally unknown. Previous studies by the authors have demonstrated that replacing the damping duct with a properly designed termination can increase the waveguide operating range and center it on the frequencies of interest. In detail, the termination can be used to balance the detrimental effects of discontinuities and sensor presence. In this paper an innovative waveguide system leading to a further increase of the operating range is proposed and tested. The system is based on the measurement of the pressure oscillations propagating in the transmitting duct by means of three sensors placed at different distances from the pressure tap. The pressures measured by the three sensors are then combined and processed to calculate the pressure at the transmitting duct inlet. The arrangement of the sensing elements and the geometry of the termination are designed to minimize the error of this estimation. The frequency response achieved with the proposed arrangement turns out to be very flat over a wide range of frequencies. Thanks to the small errors in the estimation of pressure modulus and phase, the probe is also suitable for signal reconstruction in both the frequency and time domains.
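The three-sensor principle can be sketched as a least-squares plane-wave decomposition: with p(x) = A·e^(−ikx) + B·e^(+ikx), three measured complex amplitudes over-determine A and B, from which the inlet pressure is reconstructed. This lossless-duct sketch ignores the visco-thermal attenuation and termination impedance that a real design must handle; positions and frequency are arbitrary:

```python
import numpy as np

def inlet_pressure(p_meas, x_sens, freq, c=343.0, x_inlet=0.0):
    """Least-squares estimate of the complex pressure amplitude at the
    transmitting-duct inlet from sensors along the duct, assuming plane
    waves p(x) = A e^{-ikx} + B e^{+ikx} (incident + reflected).
    p_meas: complex FFT amplitudes at one frequency, one per sensor."""
    k = 2*np.pi*freq / c
    E = np.exp(np.outer(np.asarray(x_sens), [-1j*k, 1j*k]))   # 3x2 model matrix
    (A, B), *_ = np.linalg.lstsq(E, np.asarray(p_meas), rcond=None)
    return A*np.exp(-1j*k*x_inlet) + B*np.exp(1j*k*x_inlet)

# synthetic check at 800 Hz: build a field from known A, B and recover p(0)
k = 2*np.pi*800/343
xs = [0.10, 0.25, 0.47]                  # sensor positions along the duct, m
A0, B0 = 1.0, 0.35*np.exp(1j*0.8)
p = [A0*np.exp(-1j*k*x) + B0*np.exp(1j*k*x) for x in xs]
print(inlet_pressure(p, xs, 800.0), A0 + B0)   # the two should match
```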