Academic literature on the topic 'Weighted ratio estimator'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Weighted ratio estimator.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Weighted ratio estimator"

1

Bhushan, Shashi, Anoop Kumar, Amer Ibrahim Al-Omari, and Ghadah A. Alomani. "Mean Estimation for Time-Based Surveys Using Memory-Type Logarithmic Estimators." Mathematics 11, no. 9 (April 30, 2023): 2125. http://dx.doi.org/10.3390/math11092125.

Abstract:
This article examines the issue of population mean estimation utilizing past and present data in the form of an exponentially weighted moving average (EWMA) statistic under simple random sampling (SRS). We suggest memory-type logarithmic estimators and derive their properties, such as mean-square error (MSE) and bias up to a first-order approximation. Using the EWMA statistic, the conventional and novel memory-type estimators are compared. Real and artificial populations are used as examples to illustrate the theoretical findings. According to the empirical findings, memory-type logarithmic estimators are superior to the conventional mean estimator, ratio estimator, product estimator, logarithmic-type estimator, and memory-type ratio and product estimators.
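To make the EWMA mechanism concrete, the sketch below shows how a ratio-style estimate of the current mean could be built on EWMA statistics of the study and auxiliary variables. It is a minimal illustration under assumed data, a simple initialization, and an arbitrary smoothing constant, not the authors' memory-type logarithmic estimator.

```python
import numpy as np

def ewma(series, lam=0.3):
    """EWMA statistic Z_t = lam*y_t + (1 - lam)*Z_{t-1}, initialized at the first value."""
    z = np.empty(len(series))
    z[0] = series[0]
    for t in range(1, len(series)):
        z[t] = lam * series[t] + (1 - lam) * z[t - 1]
    return z

# Hypothetical per-occasion sample means of the study (y) and auxiliary (x) variables
y_means = np.array([10.2, 10.5, 9.9, 10.8, 10.4])
x_means = np.array([5.1, 5.3, 4.9, 5.4, 5.2])
X_bar = 5.0  # assumed known population mean of the auxiliary variable

Zy, Zx = ewma(y_means), ewma(x_means)
memory_ratio_estimate = Zy[-1] * X_bar / Zx[-1]  # ratio-type estimate built on EWMA statistics
print(memory_ratio_estimate)
```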
2

Zarnoch, S. J., and W. A. Bechtold. "Estimating mapped-plot forest attributes with ratios of means." Canadian Journal of Forest Research 30, no. 5 (May 1, 2000): 688–97. http://dx.doi.org/10.1139/x99-247.

Abstract:
The mapped-plot design utilized by the U.S. Department of Agriculture (USDA) Forest Inventory and Analysis and the National Forest Health Monitoring Programs is described. Data from 2458 forested mapped plots systematically spread across 25 states reveal that 35% straddle multiple conditions. The ratio-of-means estimator is developed as a method to obtain estimates of forest attributes from mapped plots, along with measures of variability useful for constructing confidence intervals. Basic inventory statistics from North and South Carolina were examined to see if these data satisfied the conditions necessary to qualify the ratio of means as the best linear unbiased estimator. It is shown that the ratio-of-means estimator is equivalent to the Horvitz-Thompson, the mean-of-ratios, and the weighted-mean-of-ratios estimators under certain situations.
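As a simple numerical illustration of the distinction drawn in this abstract, the sketch below contrasts the ratio-of-means estimator with the mean-of-ratios estimator on made-up mapped-plot data; the variable names and values are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical plot data: y = attribute total tallied on the plot,
# x = proportion (or area) of the plot falling in the condition of interest
y = np.array([12.0, 8.5, 0.0, 15.2, 6.3])
x = np.array([1.0, 0.6, 0.25, 1.0, 0.5])

ratio_of_means = y.mean() / x.mean()   # R_hat = y_bar / x_bar (the estimator developed in the paper)
mean_of_ratios = np.mean(y / x)        # average of the per-plot ratios, shown for comparison
print(ratio_of_means, mean_of_ratios)
```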
3

Panda, K. B., and M. Sen. "Weighted Ratio-cum-Product Estimator for Finite Population Mean." International Journal of Scientific Research in Mathematical and Statistical Sciences 5, no. 4 (August 31, 2018): 354–58. http://dx.doi.org/10.26438/ijsrmss/v5i4.354358.

4

Wada, Kazumi, Keiichiro Sakashita, and Hiroe Tsubaki. "Robust Estimation for a Generalised Ratio Model." Austrian Journal of Statistics 50, no. 1 (February 3, 2021): 74–87. http://dx.doi.org/10.17713/ajs.v50i1.994.

Abstract:
It is known that data such as business sales and household income need transformation prior to regression estimation because the errors are heteroscedastic. However, data transformations make the estimation of means and totals unstable. Therefore, the ratio model is often used for imputation in official statistics to avoid this problem. Our study aims to robustify the estimator of the ratio model by means of M-estimation. Reformulating the conventional ratio model with a homoscedastic quasi-error term provides quasi-residuals that can be used as a measure of outlyingness, just as in a linear regression model. A generalisation of the model, which accommodates error terms with different degrees of heteroscedasticity, is also proposed. Functions for the robustified estimators of the generalised ratio model are implemented with the iteratively re-weighted least squares algorithm in the R environment and illustrated using random datasets. Monte Carlo simulation confirms the accuracy of the proposed estimators, as well as their computational efficiency. A comparison of the scale parameters, the average absolute deviation (AAD) and the median absolute deviation (MAD), is made for Tukey's biweight function; results with Huber's weight function are also provided for reference. The proposed robust estimator of the generalised ratio model is used for imputation of major corporate accounting items of the 2016 Economic Census for Business Activity in Japan.
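The sketch below illustrates the general idea of robustifying a ratio-model fit by iteratively re-weighted least squares with Tukey's biweight applied to quasi-residuals. It follows the description in the abstract only loosely: the MAD scale, tuning constant, starting value, and simulated data are assumptions, not the authors' implementation (which is in R).

```python
import numpy as np

def tukey_biweight(u, c=4.685):
    """Tukey biweight: w(u) = (1 - (u/c)^2)^2 for |u| < c, else 0."""
    w = np.zeros_like(u)
    inside = np.abs(u) < c
    w[inside] = (1.0 - (u[inside] / c) ** 2) ** 2
    return w

def robust_ratio_irls(y, x, c=4.685, max_iter=50, tol=1e-8):
    """IRLS fit of the ratio model y_i = beta*x_i + sqrt(x_i)*e_i.
    Quasi-residuals r_i = (y_i - beta*x_i)/sqrt(x_i) play the role of ordinary residuals."""
    beta = y.sum() / x.sum()                      # classical ratio estimator as the starting value
    for _ in range(max_iter):
        r = (y - beta * x) / np.sqrt(x)
        scale = np.median(np.abs(r - np.median(r))) / 0.6745   # MAD scale (one option discussed)
        w = tukey_biweight(r / scale, c)
        beta_new = np.sum(w * y) / np.sum(w * x)  # weighted ratio update
        if abs(beta_new - beta) <= tol * max(1.0, abs(beta)):
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)
y = 2.0 * x + np.sqrt(x) * rng.normal(size=200)
y[:5] += 50                                        # a few gross outliers
print(robust_ratio_irls(y, x), y.sum() / x.sum())  # robust vs. classical ratio estimate
```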
5

Wang, M., G. Huang, J. Zhang, F. Hua, and L. Lu. "A WEIGHTED COHERENCE ESTIMATOR FOR COHERENT CHANGE DETECTION IN SYNTHETIC APERTURE RADAR IMAGES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 31, 2022): 1369–75. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-1369-2022.

Abstract:
Synthetic aperture radar (SAR) coherent change detection (CCD) often utilizes the degree of coherence to detect changes that have occurred between two data collections. Although they have shown some performance in change detection, many existing coherence estimators are still relatively limited because the change areas do not stand out well from other decorrelation areas due to low clutter-to-noise ratio (CNR) and volume scattering. Moreover, many estimators require the assumption of equal variance between two SAR images of the same scene. However, this assumption is unlikely to be met in regions with significant differences in intensity, such as the change areas. To address these problems, we propose an improved coherence estimator that introduces parameters related to the true-variance ratio as weights. Since these parameters are closely related to the ratio-change statistic used in intensity-based change detection algorithms, their introduction frees the estimator from the equal-variance assumption and allows the detection results to combine the advantages of intensity-based and CCD methods. Experiments on simulated and real SAR image pairs demonstrate the effectiveness of the proposed estimator in highlighting changes, clearly improving the contrast between the change areas and the background.
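For background, the conventional sample coherence estimator that weighted variants of this kind modify is sketched below on simulated complex pixels; the paper's variance-ratio weights are not reproduced here, and the window size and data are assumptions.

```python
import numpy as np

def sample_coherence(f, g):
    """Conventional sample coherence magnitude over a local window of complex SAR pixels."""
    num = np.abs(np.sum(f * np.conj(g)))
    den = np.sqrt(np.sum(np.abs(f) ** 2) * np.sum(np.abs(g) ** 2))
    return num / den

rng = np.random.default_rng(1)
f = rng.normal(size=49) + 1j * rng.normal(size=49)                     # 7x7 window, image 1 (simulated)
g = 0.8 * f + 0.2 * (rng.normal(size=49) + 1j * rng.normal(size=49))   # correlated image 2
print(sample_coherence(f, g))
```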
6

Aanes, Sondre, and Jon Helge Vølstad. "Efficient statistical estimators and sampling strategies for estimating the age composition of fish." Canadian Journal of Fisheries and Aquatic Sciences 72, no. 6 (June 2015): 938–53. http://dx.doi.org/10.1139/cjfas-2014-0408.

Abstract:
Estimates of age compositions of fish populations or catches that are fundamental inputs to analytical stock assessment models are generally obtained from sample surveys, and multistage cluster sampling of fish is the norm. We use simulations and extensive empirical survey data for Northeast Arctic cod (Gadus morhua) to compare the efficiency of estimators that use age–length keys (ALKs) with design-based estimators for estimating age compositions of fish. The design-based weighted ratio estimator produces the most accurate estimates for cluster-correlated data, and an alternative estimator based on a weighted ALK is equivalent under certain constraints. Using simulations to evaluate subsampling strategies, we show that otolith collections from a length-stratified subsample of one fish per 5 cm length bin (∼10 fish total) per haul or trip is sufficient and nearly as efficient as a random subsample of 20 fish. Our study also indicates that the common practice of applying fixed ALKs to length composition data can severely underestimate the variance in estimates of age compositions and that “borrowing” of ALKs developed for other gears, areas, or time periods can cause serious bias.
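For readers unfamiliar with the design-based weighted ratio estimator mentioned here, the sketch below shows one common form for estimating a proportion at age from cluster (haul) samples, with haul-level expansion weights; the weights, counts, and notation are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical haul-level data
catch_size = np.array([1200.0, 800.0, 450.0])  # w_h: total catch (numbers) per haul, used as weight
n_sampled  = np.array([20, 20, 18])            # fish aged per haul
n_at_age_a = np.array([6, 4, 7])               # of those, fish found to be of age a

# Weighted ratio estimator of the proportion at age a:
# p_hat_a = sum_h w_h * (y_h / m_h) / sum_h w_h
p_hat_a = np.sum(catch_size * n_at_age_a / n_sampled) / np.sum(catch_size)
print(p_hat_a)
```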
7

Khan, Hina, Saleh Farooq, Muhammad Aslam, and Masood Amjad Khan. "Exponentially Weighted Moving Average Control Charts for the Process Mean Using Exponential Ratio Type Estimator." Journal of Probability and Statistics 2018 (October 1, 2018): 1–15. http://dx.doi.org/10.1155/2018/9413939.

Abstract:
This study proposes EWMA-type control charts by considering some auxiliary information. The ratio estimation technique for the mean with ranked set sampling design is used in designing the control structure of the proposed charts. We have developed EWMA control charts using two exponential ratio-type estimators based on ranked set sampling for the process mean to obtain specific ARLs, being suitable when small process shifts are of interest.
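As a generic illustration of the charting machinery such proposals rest on, the sketch below applies the standard EWMA recursion and its time-varying control limits to a series of per-sample estimates; the per-sample ratio-type estimator itself is assumed to be computed elsewhere, and the smoothing and width constants are conventional defaults, not the paper's design.

```python
import numpy as np

def ewma_chart(estimates, mu0, sigma, lam=0.2, L=3.0):
    """EWMA charting statistic and time-varying control limits.
    'estimates' are per-sample ratio-type estimates of the process mean (assumed given);
    mu0 and sigma are the in-control mean and standard deviation of those estimates."""
    z = mu0
    points, limits = [], []
    for t, est in enumerate(estimates, start=1):
        z = lam * est + (1 - lam) * z
        half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        points.append(z)
        limits.append((mu0 - half, mu0 + half))
    return np.array(points), limits
```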
8

Naz, Farah, Tahir Nawaz, Tianxiao Pang, and Muhammad Abid. "Use of Nonconventional Dispersion Measures to Improve the Efficiency of Ratio-Type Estimators of Variance in the Presence of Outliers." Symmetry 12, no. 1 (December 19, 2019): 16. http://dx.doi.org/10.3390/sym12010016.

Abstract:
The use of auxiliary information in survey sampling to enhance the efficiency of the estimators of population parameters is a common phenomenon. Generally, the ratio and regression estimators are developed by using the known information on conventional parameters of the auxiliary variables, such as variance, coefficient of variation, coefficient of skewness, coefficient of kurtosis, or correlation between the study and auxiliary variable. The efficiency of these estimators is dubious in the presence of outliers in the data and a nonsymmetrical population. This study presents improved variance estimators under simple random sampling without replacement with the assumption that the information on some nonconventional dispersion measures of the auxiliary variable is readily available. These auxiliary variables can be the inter-decile range, sample inter-quartile range, probability-weighted moment estimator, Gini mean difference estimator, Downton’s estimator, median absolute deviation from the median, and so forth. The algebraic expressions for the bias and mean square error of the proposed estimators are obtained and the efficiency conditions are derived to compare with the existing estimators. The percentage relative efficiencies are used to numerically compare the results of the proposed estimators with the existing estimators by using real datasets, indicating the supremacy of the suggested estimators.
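One generic ratio-type form from this literature is sketched below, with a nonconventional dispersion measure of the auxiliary variable (here an assumed median absolute deviation) plugged in as the known population constant. This is an illustration of the family of estimators, not the specific proposals of the paper, and all numbers are invented.

```python
import numpy as np

def ratio_type_variance(y_sample, x_sample, Sx2, delta):
    """Generic ratio-type estimator of the population variance of y:
    S2_hat = s_y^2 * (S_x^2 + delta) / (s_x^2 + delta),
    where S_x^2 and delta (a dispersion measure of x) are known for the population."""
    sy2 = np.var(y_sample, ddof=1)
    sx2 = np.var(x_sample, ddof=1)
    return sy2 * (Sx2 + delta) / (sx2 + delta)

# Hypothetical known population information on the auxiliary variable
Sx2 = 4.0      # population variance of x
mad_x = 1.3    # median absolute deviation of x (a nonconventional dispersion measure)

rng = np.random.default_rng(2)
x = rng.normal(10, 2, 30)
y = 1.5 * x + rng.normal(0, 1, 30)
print(ratio_type_variance(y, x, Sx2, mad_x))
```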
9

Schoch, Tobias. "On the Strong Law of Large Numbers for Nonnegative Random Variables. With an Application in Survey Sampling." Austrian Journal of Statistics 50, no. 3 (July 5, 2021): 1–12. http://dx.doi.org/10.17713/ajs.v50i3.631.

Abstract:
Strong laws of large numbers with arbitrary norming sequences for nonnegative not necessarily independent random variables are obtained. From these results we establish, among other things, stability results for weighted sums of nonnegative random variables. A survey sampling application is provided on strong consistency of the Horvitz--Thompson estimator and the ratio estimator.
10

Agarwal, Ankush, and Sandeep Juneja. "Nearest Neighbor Based Estimation Technique for Pricing Bermudan Options." International Game Theory Review 17, no. 01 (March 2015): 1540002. http://dx.doi.org/10.1142/s0219198915400022.

Abstract:
A Bermudan option allows the holder to exercise at pre-specified time instants, with the aim of maximizing the expected payoff upon exercise. In most practical cases, the underlying dimensionality of Bermudan options is high, and numerical methods for solving the partial differential equations satisfied by the price process become inapplicable. In the absence of an analytical formula, a popular approach is to solve the Bermudan option pricing problem approximately using dynamic programming via estimation of the so-called continuation value function. In this paper we develop a nearest-neighbor-estimator-based technique which gives biased estimators for the true option price. We provide algorithms for calculating lower and upper biased estimators which can be used to construct valid confidence intervals. The computation of the lower biased estimator is straightforward and relies on a suboptimal exercise policy generated using the nearest neighbor estimate of the continuation value function. The upper biased estimator is similarly obtained using likelihood-ratio-weighted nearest neighbors. We analyze the convergence properties of the mean square error of the lower biased estimator. We develop an order-of-magnitude relationship between the simulation parameters and the computational budget in an asymptotic regime as the computational budget increases to infinity.

Dissertations / Theses on the topic "Weighted ratio estimator"

1

Holmin, Jessica Marie. "Aging and Weight-Ratio Estimation." TopSCHOLAR®, 2012. http://digitalcommons.wku.edu/theses/1143.

Abstract:
Many researchers have explored the way younger people perceive weight ratios using a variety of methodologies; however, very few researchers have used a more direct ratio estimation procedure, in which participants estimate an actual ratio between two or more weights. Of the few researchers who have used a direct method, the participants who were recruited were invariably younger adults. To date, there has been no research performed to examine how older adults perceive weight-ratios, using direct estimation or any other technique. Past research has provided evidence that older adults have more difficulty than younger adults in perceiving small differences in weight (i.e., the difference threshold for older adults is higher than that of younger adults). Given this result, one might expect that older adults would demonstrate similar impairments in weight ratio estimation compared to younger adults. The current experiment compared the abilities of 17 younger and 17 older adults to estimate weight ratios, using a direct ratio estimation procedure. On any given trial, participants were presented with two weights, and were asked to provide a direct estimate of the ratio, with the heavier in relation to the lighter. The results showed that the participants’ perceived weight ratios increased as a linear function of the actual weight ratios and that compared to younger adults, the older adults overestimated the weight ratios. The age-related overestimation was especially pronounced at higher weight ratios.
2

Hsieh, Mei-hua, and 謝美華. "A Study of Foreign Exchange Futures Optimal Hedge Ratios Estimation: Exponentially Weighted Moving Average." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/08707982456580457297.

Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Graduate Institute of Financial Management
93
Risk managers are generally concerned with the use of futures to hedge portfolio exposure. Investors, arbitragers, and hedgers can manage their assets well in the standardized futures market. This study discusses how to use a simple approach to estimate the optimal hedge ratio (OHR) of foreign exchange futures for good hedging performance. Under the most widely used minimum-variance framework, the OHR depends on estimates of the covariance between spot and futures returns and the variance of the futures return. This thesis replaces more complex models (such as the multivariate GARCH model) with the EWMA (exponentially weighted moving average) estimator for estimating the OHR. Additionally, whether different settings of the decay factor influence the hedging performance is examined. The OHRs of five foreign exchange futures (GBP, EUR, JPY, AUD, CAD) are computed, and the relative performance under different decay factors is compared to choose an optimal setting of the EWMA estimator. The data consist of 1,000 daily samples covering the period from 2001/2/13 to 2004/12/31. The empirical findings indicate that the highest hedging efficiency is achieved when the decay factor is 0.99 and declines as the decay factor is lowered. However, a further test of statistical significance shows that, at the 1% significance level, there are no significant differences between the hedging performances obtained with the various settings of the decay factor. This implies that the choice of decay factor does not materially influence the hedging performance.
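A minimal sketch of the EWMA-based minimum-variance hedge ratio described in this abstract is given below; the return series and the initialization of the moment recursions are assumptions, and the decay factor 0.99 simply mirrors the setting reported as best.

```python
import numpy as np

def ewma_hedge_ratio(spot_ret, fut_ret, lam=0.99):
    """Minimum-variance OHR h = Cov(spot, futures) / Var(futures), with both moments
    tracked by EWMA recursions (initialized at the first observation, an arbitrary choice)."""
    cov = spot_ret[0] * fut_ret[0]
    var = fut_ret[0] ** 2
    for s, f in zip(spot_ret[1:], fut_ret[1:]):
        cov = lam * cov + (1 - lam) * s * f
        var = lam * var + (1 - lam) * f * f
    return cov / var

rng = np.random.default_rng(3)
fut = rng.normal(0, 0.006, 1000)               # hypothetical daily futures returns
spot = 0.9 * fut + rng.normal(0, 0.002, 1000)  # correlated spot returns
print(ewma_hedge_ratio(spot, fut))
```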
3

HSU, TAI-YU, and 許玳瑜. "Empirical Likelihood Ratio Tests with Smoothing Estimators and a Weighted Approach for Two Sample Comparison under Current Status Data." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/sdx9z5.

Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Statistical Science, Department of Mathematics
106
We introduce three types of non-parametric two-sample tests of the survival function of the failure time under current status data. We construct empirical likelihood ratio tests with a weighted approach for two-sample comparison based on the MLE, SMLE, and MSLE. A bootstrap method is given for constructing the null-hypothesis distribution and determining the p-value. In a simulation study, we examine the finite-sample performance of the proposed approach and compare it with the method of Groeneboom (2012), which is also a two-sample test for current status data. Finally, we give an example to illustrate the proposed method.
4

Al-Saleh, Mohammad A. "Nonlinear Parameter Estimation for Multiple Site-Type Polyolefin Catalysts Using an Integrated Microstructure Deconvolution Methodology." Thesis, 2011. http://hdl.handle.net/10012/5818.

Abstract:
The microstructure of polyolefins determines their macroscopic properties. Consequently, it is essential to predict how polymerization conditions will affect polyolefin microstructure. The most important microstructural distributions of ethylene/α-olefin copolymers made with coordination catalysts are their molecular weight (MWD), chemical composition (CCD), and comonomer sequence length (CSLD). Several mathematical models have been developed to predict these microstructural distributions; reliable techniques to estimate parameters for these models, however, are still poorly developed, especially for catalysts that have multiple site types, such as heterogeneous Ziegler-Natta complexes. Most commercial polyolefins are made with heterogeneous Ziegler-Natta catalysts, which make polyolefins with broad MWD, CCD, and CSLD. This behavior is attributed to the presence of several active site types, leading to a final product that can be seen as a blend of polymers made on the different catalyst site types. The main objective of this project is to develop a methodology to estimate the most important parameters needed to describe the microstructure of ethylene/α-olefin copolymers made with these multiple site-type catalysts. To accomplish this objective, we developed the Integrated Deconvolution Estimation Model (IDEM). IDEM estimates ethylene/α-olefin reactivity ratios for each site type in two steps. In the first step, the copolymer MWD, measured by high-temperature gel permeation chromatography, is deconvoluted into several Flory most probable distributions to determine the number of site types and the weight fractions of copolymer made on each of them. In the second estimation step, the model uses the MWD deconvolution information to fit the copolymer triad distributions measured by 13C NMR and estimate the reactivity ratios per site type. This is the first time that MWD and triad distribution information is integrated to estimate the reactivity ratio per site type of multiple site-type catalysts used to make ethylene/α-olefin copolymers. IDEM was applied to two sets of ethylene-co-1-butene copolymers made with a commercial Ziegler-Natta catalyst, covering a wide range of 1-butene fractions. In the first set of samples (EBH), hydrogen was used as a chain transfer agent, whereas it was absent in the second set (EB). Comparison of the reactivity ratio estimates for the sets of samples permitted the quantification of the hydrogen effect on the reactivity ratios of the different site types present in the Ziegler-Natta catalyst used in this thesis. Since 13C NMR is an essential analytical step in IDEM, triad distributions for the EB and EBH copolymers were measured in two different laboratories (Department of Chemistry at the University of Waterloo, and Dow Chemical Research Center at Freeport, Texas). IDEM was applied to both sets of triad measurements to find out the effect of interlaboratory 13C NMR analysis on reactivity ratio estimation.

Books on the topic "Weighted ratio estimator"

1

Gleeson, Simon. Part I The Elements of Bank Financial Supervision, 5 Bank Capital Requirements. Oxford University Press, 2018. http://dx.doi.org/10.1093/law/9780198793410.003.0005.

Abstract:
This chapter begins by discussing the three overlapping capital requirements that banks are subject to. The first is the orthodox Basel capital requirement. The second is the Leverage Ratio, which is simply a non-risk-weighted capital requirement. The third is the stress test requirement. This has historically been the largest of the three. Stress testing identifies a particular probable state of the world, estimates the total loss which would occur if that state of the world were to eventuate, and requires capital sufficient to ensure that the bank retains sufficient capital after suffering the projected losses. The remainder of the chapter covers Pillar 2 assessment, capital floor, and capital buffers.
2

Ferro, Charles J., and Khai Ping Ng. Recommendations for management of high renal risk chronic kidney disease. Edited by David J. Goldsmith. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199592548.003.0099.

Abstract:
Poorer renal function is associated with increasing morbidity and mortality. In the wider population this is mainly as a consequence of cardiovascular disease. Renal patients are more likely to progress to end-stage renal disease, but also have high cardiovascular risk. Aiming to reduce progression of renal impairment and aiming to reduce cardiovascular disease are not contradictory goals. Focusing on the management of high-risk patients with proteinuria and reduced glomerular filtration rates, it is recommended that blood pressure should be kept below 140/90, or 130/80 if proteinuria is > 1 g/24 h (protein:creatinine ratio (PCR) > 100 mg/mmol or 0.9 g/g). These targets may be modified according to age and other factors. Angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor antagonists should form part of the therapy for patients with proteinuria > 0.5 g/24 h (PCR > 50 mg/mmol or 0.45 g/g). Use of ACEIs or angiotensin receptor blockers in patients with lower levels of proteinuria may be indicated in some patient groups even in the absence of hypertension, notably in diabetic nephropathy. Evidence that other agents that reduce proteinuria bring additional benefits is weak at present. The best studies of 'dual-blockade' with various combinations of ACEIs, ARBs, and renin inhibitors have shown additional hazard with little evidence of additional benefit. Hyperlipidaemia: regardless of lipid levels, statin therapy is indicated in secondary cardiovascular prevention, and in primary prevention where cardiovascular risk is high, noting that current risk estimation tools do not adequately account for the increased risk of patients with CKD. There is no substantial evidence that lipid-lowering therapy impacts on average rates of loss of GFR in progressive CKD. Non-drug lifestyle interventions to reduce cardiovascular risk, including stopping smoking, are important for all. Acidosis: in more advanced CKD it is justified to treat acidosis with oral sodium bicarbonate. Diet: sodium restriction to < 100 mmol/day (6 g/day) and avoidance of excessive dietary protein are justified in early to moderate CKD. Recommendations to limit levels of protein to 0.8 g/kg body weight are suggested by some, but additional protective effects of this are likely to be slight in patients who are otherwise well managed. Low-protein diets may carry some risk. Lower-protein diets may however be used to prevent symptoms in advanced CKD not treated by dialysis.
3

Skiba, Grzegorz. Fizjologiczne, żywieniowe i genetyczne uwarunkowania właściwości kości rosnących świń. The Kielanowski Institute of Animal Physiology and Nutrition, Polish Academy of Sciences, 2020. http://dx.doi.org/10.22358/mono_gs_2020.

Abstract:
Bones are multifunctional passive organs of movement that support soft tissue and directly attached muscles. They also protect internal organs and are a reserve of calcium, phosphorus and magnesium. Each bone is covered with periosteum, and the adjacent bone surfaces are covered by articular cartilage. Histologically, the bone is an organ composed of many different tissues. The main component is bone tissue (cortical and spongy) composed of a set of bone cells and intercellular substance (mineral and organic); it also contains fat, hematopoietic (bone marrow) and cartilaginous tissue. Bone is a tissue that even in adult life retains the ability to change shape and structure depending on changes in its mechanical and hormonal environment, as well as self-renewal and repair capabilities. This process is called bone turnover. The basic processes of bone turnover are: bone modeling (incessant changes in bone shape during individual growth), following resorption and tissue formation at various locations (e.g. bone marrow formation) to increase mass and skeletal morphology, a process that occurs in the bones of growing individuals and stops after reaching puberty; and bone remodeling (processes involved in maintaining bone tissue by resorbing and replacing old bone tissue with new tissue in the same place, e.g. repairing micro fractures), a process involving the removal and internal remodeling of existing bone that is responsible for maintaining tissue mass and architecture of mature bones. Bone turnover is regulated by two types of transformation: osteoclastogenesis, i.e. formation of cells responsible for bone resorption, and osteoblastogenesis, i.e. formation of cells responsible for bone formation (bone matrix synthesis and mineralization). Bone maturity can be defined as the completion of basic structural development and mineralization leading to maximum mass and optimal mechanical strength. The highest rate of increase in pig bone mass is observed in the first twelve weeks after birth. This period of growth is considered crucial for optimizing the growth of the skeleton of pigs, because the degree of bone mineralization in later life stages (adulthood) depends largely on the amount of bone minerals accumulated in the early stages of their growth. Advances in technique allow the condition of the skeletal system (or of individual bones) to be determined in living animals by methods used in human medicine, or after their slaughter. For in vivo determination of bone properties, dual-energy X-ray absorptiometry or computed tomography scanning techniques are used. Both methods allow the quantification of mineral content and bone mineral density. The most important property from a practical point of view is the bone's bending strength, which is directly determined by the maximum bending force.
The most important factors affecting bone strength are: age (growth period); gender and the associated hormonal balance; genotype and modification of genes responsible for bone growth; chemical composition of the body (protein and fat content, and the proportion between these components); physical activity and the related bone load; and nutritional factors, namely protein intake influencing synthesis of the organic matrix of bone, content of minerals in the feed (Ca, P, Zn, Ca/P, Mg, Mn, Na, Cl, K, Cu ratio) influencing synthesis of the inorganic matrix of bone, the mineral/protein ratio in the diet (Ca/protein, P/protein, Zn/protein), feed energy concentration, energy source (content of saturated fatty acids, SFA, and of polyunsaturated fatty acids, PUFA, in particular ALA, EPA, DPA, DHA), feed additives, in particular enzymes (e.g. phytase releasing minerals bound in phytin complexes) and probiotics and prebiotics (e.g. inulin improving the function of the digestive tract by increasing absorption of nutrients), and vitamin content that regulates metabolism and biochemical changes occurring in bone tissue (e.g. vitamins D3, B6, C and K). This study was based on the results of research experiments from the available literature, and on studies on growing pigs carried out at the Kielanowski Institute of Animal Physiology and Nutrition, Polish Academy of Sciences. The tests were performed in total on 300 pigs of the Duroc, Pietrain and Puławska breeds, line 990 and hybrids (Great White × Duroc, Great White × Landrace), and PIC pigs, slaughtered at different body weights during the growth period from 15 to 130 kg. Bones for biomechanical tests were collected after slaughter from each pig. Their length, mass and volume were determined. Based on these measurements, the specific weight (density, g/cm3) was calculated. Then each bone was cut in the middle of the shaft and the outer and inner diameters were measured both horizontally and vertically. Based on these measurements, the following indicators were calculated: cortical thickness, cortical surface and cortical index. Bone strength was tested by a three-point bending test. The obtained data enabled the determination of: bending force (the magnitude of the maximum force at which disintegration and disruption of bone structure occurs), strength (the amount of maximum force needed to break/crack the bone), and stiffness (the quotient of the force acting on the bone and the amount of displacement occurring under the influence of this force). Investigation of changes in the physical and biomechanical features of bones during growth was performed on pigs of the synthetic 990 line growing from 15 to 130 kg body weight. The animals were slaughtered successively at a body weight of 15, 30, 40, 50, 70, 90, 110 and 130 kg. After slaughter, the following bones were separated from the right half-carcass: humerus, 3rd and 4th metacarpal bone, femur, tibia and fibula, as well as 3rd and 4th metatarsal bone. The features of the bones were determined using the methods described in the methodology. Describing bone growth with the Gompertz equation, it was found that the earliest slowdown of the bone growth curve was observed for the metacarpal and metatarsal bones. This means that these bones matured the most quickly. The established data also indicate that the rib is the slowest maturing bone. The femur, humerus, tibia and fibula were between the values of these features for the metatarsal, metacarpal and rib bones.
The rate of increase in bone mass and length differed significantly between the examined bones, but in all cases it was lower (coefficient b < 1) than the growth rate of the whole body of the animal. The fastest growth rate was estimated for the rib mass (coefficient b = 0.93). Among the long bones, the humerus (coefficient b = 0.81) was characterized by the fastest rate of weight gain, while the femur had the smallest (coefficient b = 0.71). The lowest rate of bone mass increase was observed in the foot bones, with the metacarpal bones having a slightly higher value of coefficient b than the metatarsal bones (0.67 vs 0.62). The third bone had a lower growth rate than the fourth bone, regardless of whether they were metatarsal or metacarpal. The value of the bending force increased as the animals grew. Regardless of the growth point tested, the highest values were observed for the humerus, tibia and femur, smaller values for the metatarsal and metacarpal bones, and the lowest for the fibula and rib. The rate of change in the value of this indicator increased at a similar rate as the body weight of the animals in the case of the fibula and the fourth metacarpal bone (b = 0.98), more slowly in the case of the metatarsal bones, the third metacarpal bone and the tibia (b = 0.81–0.85), and the slowest for the femur, humerus and rib (b = 0.60–0.66). Bone stiffness increased as the animals grew. Regardless of the growth point tested, the highest values were observed for the humerus, tibia and femur, smaller values for the metatarsal and metacarpal bones, and the lowest for the fibula and rib. The rate of change in the value of this indicator changed at a faster rate than the increase in weight of the pigs in the case of the metacarpal and metatarsal bones (b = 1.01–1.22), slightly slower in the case of the fibula (b = 0.92), and definitely slower in the case of the tibia (b = 0.73), ribs (b = 0.66), femur (b = 0.59) and humerus (b = 0.50). Bone strength increased as the animals grew. Regardless of the growth point tested, bone strength was as follows: femur > tibia > humerus > 4th metacarpal > 3rd metacarpal > 3rd metatarsal > 4th metatarsal > rib > fibula. The rate of increase in strength of all examined bones was greater than the rate of weight gain of the pigs (coefficient b = 2.04–3.26). As the animals grew, bone density increased. However, the growth rate of this indicator for the majority of bones was slower than the rate of weight gain (the value of the coefficient b ranged from 0.37 for the humerus to 0.84 for the fibula). The exception was the rib, whose density increased at a similar pace to the increasing body weight of the animals (b = 0.97). The study on the influence of breed and feeding intensity on bone characteristics (physical and biomechanical) was performed on pigs of the Duroc, Pietrain and synthetic 990 breeds during a growth period of 15 to 70 kg body weight. Animals were fed ad libitum or by a dosed system. After slaughter at a body weight of 70 kg, three bones were taken from the right half-carcass, the femur, 3rd metatarsal and 3rd metacarpal, and subjected to the determinations described in the methodology. The weight of the bones of animals fed ad libitum was significantly lower than in pigs fed restrictively. All bones of the Duroc breed were significantly heavier and longer than Pietrain and line 990 pig bones.
The average values of bending force for the examined bones took the following order: 3rd metatarsal bone (63.5 kg) < 3rd metacarpal bone (77.9 kg) < femur (271.5 kg). The feeding system and breed of pigs had no significant effect on the value of this indicator. The average values of bone strength took the following order: 3rd metatarsal bone (92.6 kg) < 3rd metacarpal bone (107.2 kg) < femur (353.1 kg). Feeding intensity and breed of animals had no significant effect on the value of this feature of the bones tested. The average bone density took the following order: femur (1.23 g/cm3) < 3rd metatarsal bone (1.26 g/cm3) < 3rd metacarpal bone (1.34 g/cm3). The density of the bones of animals fed ad libitum was higher (P < 0.01) than in animals fed with a dosing system. The density of the examined bones within the breeds took the following order: Pietrain > line 990 > Duroc. The differences between the "extreme" breeds were: 7.2% (3rd metatarsal bone), 8.3% (3rd metacarpal bone) and 8.4% (femur). The average bone stiffness took the following order: 3rd metatarsal bone (35.1 kg/mm) < 3rd metacarpal bone (41.5 kg/mm) < femur (60.5 kg/mm). This indicator did not differ between the groups of pigs fed at different intensities, except for the metacarpal bone, which was stiffer in pigs fed ad libitum (P < 0.05). The femur of animals fed ad libitum showed a tendency (P < 0.09) to be stiffer, requiring a force greater by 4.5 kg for its displacement by 1 mm. Breed differences in stiffness were found for the femur (P < 0.05) and 3rd metacarpal bone (P < 0.05). For the femur, the highest value of this indicator was found in Pietrain pigs (64.5 kg/mm), lower in pigs of line 990 (61.6 kg/mm) and the lowest in Duroc pigs (55.3 kg/mm). In turn, the 3rd metacarpal bone of Duroc and Pietrain pigs had similar stiffness (39.0 and 40.0 kg/mm, respectively), which was smaller than that of line 990 pigs (45.4 kg/mm). The thickness of the cortical bone layer took the following order: 3rd metatarsal bone (2.25 mm) < 3rd metacarpal bone (2.41 mm) < femur (5.12 mm). The feeding system did not affect this indicator. Breed differences (P < 0.05) for this trait were found only for the femur: Duroc (5.42 mm) > line 990 (5.13 mm) > Pietrain (4.81 mm). The cross-sectional area of the examined bones was arranged in the following order: 3rd metatarsal bone (84 mm2) < 3rd metacarpal bone (90 mm2) < femur (286 mm2). The feeding system had no effect on the value of this bone trait, with the exception of the femur, which in animals fed by the dosing system was 4.7% larger (P < 0.05) than in pigs fed ad libitum. Breed differences (P < 0.01) in the cross-sectional area were found only in the femur and 3rd metatarsal bone. The value of this indicator was the highest in Duroc pigs, lower in line 990 animals and the lowest in Pietrain pigs. The cortical index of individual bones was in the following order: 3rd metatarsal bone (31.86) < 3rd metacarpal bone (33.86) < femur (44.75). However, its value did not significantly depend on the intensity of feeding or the breed of pigs.

Book chapters on the topic "Weighted ratio estimator"

1

Zou, Junping, and Jiexian Wang. "Real-Time Estimation of GPS Satellite Clock Errors and Its Precise Point Positioning Performance." In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications, 823–30. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_83.

Abstract:
The current stochastic model in GNSS processing is constructed based on prior experience; for example, the ratio of the weights of the pseudorange and phase observations is generally determined as 1:10000. Such methods ignore the precision differences between different GNSS receivers and observation environments. In this paper, the standard deviations of the differenced ionosphere-free pseudorange and phase observations are computed from dual-frequency observations, and the weight ratio of the pseudorange and phase observations is then obtained from the computed standard deviations. This method is introduced into satellite clock estimation and the data are processed. The results show that the presented method is feasible and that it improves the accuracy of the estimated satellite clocks. The estimated satellite clocks are further adopted in PPP, and the positioning results of 10 users validate that the satellite clocks estimated with the presented method accelerate the convergence of PPP compared with the traditional method.
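The weighting idea described here amounts to inverse-variance weighting of code and phase observations; a toy sketch under assumed standard deviations is shown below (the numbers are illustrative, not from the paper).

```python
# Hypothetical standard deviations estimated from differenced ionosphere-free observations
sigma_code = 0.30    # metres, pseudorange
sigma_phase = 0.003  # metres, carrier phase

# Inverse-variance weights and their ratio (replacing a fixed prior ratio such as 1:10000)
w_code, w_phase = 1 / sigma_code**2, 1 / sigma_phase**2
print(w_phase / w_code)  # empirically determined phase-to-code weight ratio
```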
2

Panesso, M., M. Ettrichrätz, S. Gebhardt, O. Georgi, C. Rüger, M. Gnauck, and W. G. Drossel. "Design and Characterization of Piezoceramic Thick Film Sensor for Measuring Cutting Forces in Turning Processes." In Lecture Notes in Mechanical Engineering, 30–39. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-28839-5_4.

Abstract:
Cutting forces in turning processes usually correlate with tool conditions. For this reason, the acquisition of force signals is of key importance for monitoring purposes. Despite the robustness of current piezoelectric measuring platforms, their large weight ratio relative to standalone tool-holder systems limits their effective usable bandwidth for analyzing force signals. Further limitations include high costs and lack of flexibility for general purpose turning operations. Due to this, such systems fail to find acceptance in practical applications and are mainly limited to research activities. To improve these aspects, this work investigates the use of an alternative integration concept using a piezoceramic thick film sensor for performing near-process cutting force measurements at the tool-holder. The charge output of the sensor was estimated using a coupled structural-piezoelectric simulation for its design. The modelled prototype was assembled and characterized by means of a static calibration and an impact hammer test. Following these, a first implementation of the system under dry cutting conditions took place.
3

Hankin, David G., Michael S. Mohr, and Ken B. Newman. "Multi-phase sampling." In Sampling Theory, 200–218. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198815792.003.0010.

Abstract:
Attention is restricted to two-phase or double sampling. A large first-phase sample is used to generate a very good estimate of the mean or total of an auxiliary variable, x, which is relatively cheap to measure. Then, a second-phase sample is selected, usually from the first-phase sample, and both auxiliary and target variables are measured in selected second-phase population units. Two-phase ratio or regression estimators can be used effectively in this context. Errors of estimation reflect first-phase uncertainty in the mean or total of the auxiliary variable, and second-phase errors reflect the nature of the relation and correlation between auxiliary and target variables. Accuracy of the two-phase estimator of a proportion depends on sensitivity and specificity. Sensitivity is the probability that a unit possessing a trait (y = 1) will be correctly classified as such whenever the auxiliary variable, x, has value 1, whereas specificity is the probability that a unit not possessing a trait (y = 0) will be correctly classified as such whenever the auxiliary variable, x, has value 0. Optimal allocation results for estimation of means, totals, and proportions allow the most cost-effective allocation of total sampling effort to the first- and second-phases. In double sampling with stratification, a large first-phase sample estimates stratum weights, a second-phase sample estimates stratum means, and a stratified estimator gives an estimate of the overall population mean or total.
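A small numerical sketch of the two-phase (double-sampling) ratio estimator described in this chapter summary is given below; the data are invented, and the second-phase units are assumed to be drawn from the first-phase sample.

```python
import numpy as np

# Phase 1: large sample, only the cheap auxiliary variable x is measured
x_phase1 = np.array([4.1, 3.8, 5.0, 4.4, 4.9, 3.7, 4.2, 5.1, 4.0, 4.6])

# Phase 2: small subsample, both x and the target variable y are measured
x_phase2 = np.array([4.1, 5.0, 4.9, 4.2])
y_phase2 = np.array([8.3, 9.8, 9.9, 8.6])

r_hat = y_phase2.mean() / x_phase2.mean()  # ratio estimated from the second phase
y_bar_two_phase = r_hat * x_phase1.mean()  # two-phase ratio estimate of the population mean of y
print(y_bar_two_phase)
```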
4

Tu, Yundong. "Entropy-Based Model Averaging Estimation of Nonparametric Models." In Advances in Info-Metrics, 493–506. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190636685.003.0018.

Abstract:
In this chapter, I propose a model averaging estimation of nonparametric models based on Shannon's entropy measure. The choice of weights in the averaging estimator is implemented by maximizing Shannon's entropy measure, which aggregates both model uncertainty and data uncertainty. Finite-sample simulation studies show that the proposed averaging estimator outperforms the local linear least squares estimator in terms of mean-squared error and outperforms the Mallows averaging estimator of Hansen (2007) when the signal-to-noise ratio is low. An empirical example applying the proposed estimator to the wage equation is provided and illustrates its superiority in out-of-sample forecasts.
5

Harbaugh, John W., and Johannes Wendebourg. "Risk Analysis Of Petroleum Prospects." In Computers in Geology - 25 Years of Progress. Oxford University Press, 1994. http://dx.doi.org/10.1093/oso/9780195085938.003.0012.

Abstract:
Risk analysis of an oil or gas prospect requires a probability distribution with two components, a dry-hole probability plus a distribution of oil or gas volumes if there is a discovery. While these components should be estimated objectively, risk analysis as currently practiced is mostly guesswork. Geologists assign outcome probabilities without appropriate procedures or data for objective estimation. Valid estimates require frequency data on regional exploratory drilling-success ratios, frequency distributions of oil and gas field volumes, and systematic tabulations of geological variables on a prospect-by-prospect basis. Discriminant functions can be used to analyze relationships between geological variables and hydrocarbons, leading to outcome probabilities conditional on discriminant scores. These probabilities can be incorporated in risk-analysis tables to yield risk-weighted financial forecasts. Computers are required for all procedures. Prior to drilling a petroleum prospect, the likelihood of good outcomes must be weighed against the bad to obtain a risked financial estimate that combines all possibilities. Some oil operators simply contrast the value of discovery that is expected, versus the cost of a dry hole. A cashflow projection yields an estimate of the revenue that will be received if a discovery is made. This assumes an initial producing rate and an ultimate cumulative production for the operator's net revenue interest, and an oil price. When the stream of revenue is discounted and costs for the lease, the completed well, and operating expenses and taxes are subtracted, the net present value is obtained. If the hole is dry, its cost is readily estimated. Only two monetary estimates coupled with an intuitive guess about the likelihood of a producer versus a dry hole form the basis for a decision. A great deal of oil has been found by both independent operators and major oil companies using such simple decision systems. Oil companies generally use more advanced methods at present. Many require their geologists to supply probability estimates for a spectrum of outcomes for each individual prospect, ranging from the probability of a dry hole through the probability of a small discovery, a medium-sized discovery, and various magnitudes of large discoveries.
6

A. Guinee, Richard. "Novel Application of Fast Simulated Annealing Method in Brushless Motor Drive (BLMD) Dynamical Parameter Identification for Electric Vehicle Propulsion." In Self-driving Vehicles and Enabling Technologies [Working Title]. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.97370.

Abstract:
Permanent magnet brushless motor drives (BLMD) are extensively used in electric vehicle (EV) propulsion systems because of their high power and torque to weight ratio, virtually maintenance free operation with precision control of torque, speed and position. An accurate dynamical parameter identification strategy is an essential feature in the adaptive control of such BLMD-EV systems where sensorless current feedback is employed for reliable torque control, with multi-modal penalty cost surfaces, in EV high performance tracking and target ranging. Application of the classical Powell Conjugate Direction optimization method is first discussed and its inaccuracy in dynamical parameter identification is illustrated for multimodal cost surfaces. This is used for comparison with the more accurate Fast Simulated Annealing/Diffusion (FSD) method, presented here, in terms of the returned parameter estimates. Details of the FSD development and application to the BLMD parameter estimation problem based on the minimum quantized parameter step sizes from noise considerations are provided. The accuracy of global parameter convergence estimates returned, cost function evaluation and the algorithm run time are presented. Validation of the FSD identification strategy is provided by excellent correlation of BLMD model simulation trace coherence with experimental test data at the optimal estimates and from cost surface simulation.
7

Kumari, Nisha, and Kaushik Kumar. "Lower Body Orthotic Calipers With Composite Braces." In Design and Optimization of Mechanical Engineering Products, 133–51. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-3401-3.ch007.

Abstract:
Composite materials are finding application across a broad research and engineering spectrum due to their better mechanical properties (strength and stiffness), inherent surface finish, ease of fabrication and installation, and corrosion resistance. They are very strong and stiff, yet very light in weight, so a lower weight-to-volume ratio can be achieved, and their stiffness-to-weight ratio is 1.5 times greater than that of non-ferrous materials such as aluminum. The work is undertaken in two parts: first, modeling and virtual estimation of mechanical properties using CREO and ANSYS for currently used aluminum-based calipers; second, fabrication and testing of the composites. A comparison is performed between the virtual and experimental results, and the effectiveness of composite-based calipers over aluminum ones is studied. Two polymer-based composites, one thermoplastic-based and one thermoset-based, are proposed for fabrication. The braces are modeled using solid modeling software, CREO, and tested using ANSYS.
8

Liu, Meifeng, Guoyun Zhong, Yueshun He, Kai Zhong, Hongmao Chen, and Mingliang Gao. "Fast HEVC Inter-Prediction Algorithm Based on Matching Block Features." In Research Anthology on Recent Trends, Tools, and Implications of Computer Programming, 253–76. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3016-0.ch012.

Abstract:
A fast inter-prediction algorithm based on matching block features is proposed in this article. The position of the matching block of the current CU in the previous frame is found by the motion vector estimated by the corresponding located CU in the previous frame. Then, a weighted motion vector computation method is presented to compute the motion vector of the matching block of the current CU according to the motions of the PUs the matching block covers. A binary decision tree is built to decide the CU depths and PU mode for the current CU. Four training features are drawn from the characteristics of the CUs and PUs the matching block covers. Simulation results show that the proposed algorithm achieves average 1.1% BD-rate saving, 14.5% coding time saving and 0.01-0.03 dB improvement in peak signal-to-noise ratio (PSNR), compared to the present fast inter-prediction algorithm in HEVC.
9

Danaraj, Jonathan J., and Augustine S. Lee. "Asthma in the Critically Ill Patient." In Mayo Clinic Critical and Neurocritical Care Board Review, edited by Eelco F. M. Wijdicks, James Y. Findlay, William D. Freeman, and Ayan Sen, 150–56. Oxford University Press, 2019. http://dx.doi.org/10.1093/med/9780190862923.003.0021.

Abstract:
Asthma is a common condition that affects an estimated 24 million children and adults in the United States (prevalence, 8%-10%). Globally, over 300 million people are affected and the number is expected to increase. The age distribution is bimodal, but in most patients, asthma is diagnosed before age 18 years (male to female ratio, 2:1 in children; 1:1 in adults). Susceptibility to asthma is multifactorial with both genetic and environmental factors. The strongest risk factor is atopy, a sensitivity to the development of immunoglobulin E (IgE) to specific allergens. A person with atopy is 3- to 4-fold more likely to have asthma than a person without atopy. Other risk factors include birth weight, prematurity, tobacco use (including secondary exposure), and obesity.
10

Rossner, Stefan. "Obesity as a health problem." In Oxford Textbook of Endocrinology and Diabetes, 1637–39. Oxford University Press, 2011. http://dx.doi.org/10.1093/med/9780199235292.003.1205.

Abstract:
Obesity is defined as an excess of body fat that is sufficient to adversely affect health. The prevalence of obesity has been difficult to study because many countries have had their own specific criteria for the classification of different degrees of overweight. However, during the 1990s, the body mass index (weight in kg/height in metres squared), or BMI, became a universally accepted measure of the degree of overweight and now identical limits are recommended. The most frequently accepted classification of overweight and obesity in adults by the WHO is shown in Table 12.1.1.1 (1). In many community studies in affluent societies this scheme has been simplified and cut-off points of 25 and 30 kg/m2 are used for descriptive purposes of overweight and obesity. Both the prevalence of very low BMI (below 18.5 kg/m2) and very high BMI (40 kg/m2 or higher) are usually low, in the order of 1–2% or less. There are some indications that the limits used to designate obesity or overweight in Asian populations may be lowered by several units of BMI; this would greatly affect estimates of the prevalence of obesity. In countries such as China and India with each over a billion inhabitants, small changes in the criteria for overweight or obesity potentially increase the world estimate of obesity by several hundred million (currently estimates are about 250 million worldwide). The distribution of abdominal fat should be considered for an accurate classification of overweight and obesity with respect to the health risks (Table 12.1.1.2). Traditionally this has been indicated by a relatively high waist-to-hip circumference ratio; however, the waist circumference alone may be a better and simpler measure of abdominal fatness (2). In 1998 the National Institutes of Health adopted the BMI classification and combined this with limits for waist measurement (3). This classification proposes that the combination of overweight (BMI between 25 and 30 kg/m2) and moderate obesity (BMI between 30 and 35 kg/m2) with a large waist circumference (greater than or equal to 102 cm in men or greater than or equal to 88 cm in women) carries additional risk (3).

Conference papers on the topic "Weighted ratio estimator"

1

Oh, Kyeung Heub, Jin Kwon Hwang, and Chul Ki Song. "Fuzzy Estimation of Vehicle Speed." In ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/detc2005-84029.

Abstract:
The absolute longitudinal speed of a vehicle is estimated by using data from an accelerometer on the vehicle and the wheel speed sensors of a standard 50-tooth antilock braking system. An intuitive solution to this problem is: "When wheel slip is low, calculate the vehicle velocity from the wheel speeds; when wheel slip is high, calculate the vehicle speed by integrating the accelerometer signal." A speed estimator weighted with fuzzy logic is introduced to implement this concept, which is formulated as an estimation method. The method is further refined through experiments on how to calculate speed from the acceleration signal and slip ratios. Its usefulness for estimating vehicle speed is verified experimentally, and the experimental results show that the estimated vehicle longitudinal speed has only a 6% worst-case error during a hard braking maneuver lasting a few seconds.
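The weighting idea quoted in this abstract can be sketched as follows: a slip-dependent weight blends the wheel-speed-based estimate with the accelerometer-integrated estimate. The membership function, thresholds, and numbers below are invented for illustration; the paper's actual fuzzy rules are not reproduced.

```python
def slip_weight(slip_ratio, low=0.05, high=0.20):
    """Weight given to the wheel-speed estimate: 1 at low slip, 0 at high slip, linear in between."""
    if slip_ratio <= low:
        return 1.0
    if slip_ratio >= high:
        return 0.0
    return (high - slip_ratio) / (high - low)

def fused_speed(v_wheel, v_integrated, slip_ratio):
    """Fuzzy-style weighted blend of the two speed estimates."""
    w = slip_weight(slip_ratio)
    return w * v_wheel + (1.0 - w) * v_integrated

print(fused_speed(v_wheel=19.0, v_integrated=20.5, slip_ratio=0.15))  # hard-braking example
```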
2

Yazici, Murat. "The weighted least squares ratio (WLSR) method to M-estimators." In 2016 SAI Computing Conference (SAI). IEEE, 2016. http://dx.doi.org/10.1109/sai.2016.7556018.

3

Patrick, Ronald S., and J. David Powell. "A Technique for the Real-Time Estimation of Air-Fuel Ratio Using Molecular Weight Ratios." In International Congress & Exposition. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 1990. http://dx.doi.org/10.4271/900260.

4

"EMBEDDING RATIO ESTIMATION BASED ON WEIGHTED STEGO IMAGE FOR EMBEDDING IN 2LSB." In International Conference on Security and Cryptography. SciTePress - Science and and Technology Publications, 2011. http://dx.doi.org/10.5220/0003465600970104.

5

Zhang, Xiang-Song, Wei-Xin Gao, and Shi-Ling Zhu. "Research on Noise Reduction and Enhancement of Weld Image." In 9th International Conference on Signal, Image Processing and Pattern Recognition (SPPR 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101902.

Full text
Abstract:
In order to eliminate mixed salt-and-pepper and Gaussian noise in X-ray weld images, the extreme-value characteristics of salt-and-pepper noise are used to separate the mixed noise, and a non-local means filtering algorithm is used for denoising. Because the exponential weighting kernel is overly smooth and tends to blur image details, a weighted Gaussian kernel with a cosine coefficient is adopted instead, giving an improved non-local means denoising algorithm. Experimental results show that the new algorithm reduces noise while retaining the details of the original image, and the peak signal-to-noise ratio is increased by 1.5 dB. An adaptive salt-and-pepper noise removal algorithm is also proposed, which automatically adjusts the filtering window and identifies the noise probability. First, a median filter is applied to the image, and the filtered result is compared with the unfiltered image to locate noise points. The weighted average of the middle three groups of data in each filtering window is then used to estimate the image noise probability. Before filtering, obvious noise points are removed by thresholding, and the central pixel is then estimated using weights equal to the reciprocal of the squared distance from the centre of the window. Finally, according to Takagi-Sugeno (T-S) fuzzy rules, the output estimates of the different models are fused using the noise probability. Experimental results show that the algorithm can estimate the noise automatically and adapt the window size; after filtering, the standard mean square deviation is reduced by more than 20% and the speed is more than doubled. In the enhancement part, a nonlinear image enhancement method is proposed that adjusts its parameters adaptively and automatically enhances the weld area rather than the background, achieving good visual quality. Compared with the traditional method, the enhancement effect is better and more in line with the needs of the industrial field.
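One concrete step in this abstract, replacing a detected noise pixel by a distance-weighted average of its neighbours, can be sketched as follows. This is a minimal illustration of the reciprocal-squared-distance weighting only; the noise detection, adaptive window sizing and T-S fuzzy fusion described in the paper are not reproduced.

```python
import numpy as np

# Hedged sketch: estimate the centre pixel of a window from the other pixels,
# weighted by the reciprocal of the squared distance to the centre.

def inverse_distance_estimate(window):
    """window: 2-D array of odd size with the (noisy) pixel at the centre."""
    h, w = window.shape
    cy, cx = h // 2, w // 2
    num, den = 0.0, 0.0
    for y in range(h):
        for x in range(w):
            if (y, x) == (cy, cx):
                continue
            weight = 1.0 / ((y - cy) ** 2 + (x - cx) ** 2)
            num += weight * window[y, x]
            den += weight
    return num / den

patch = np.array([[10, 12, 11], [13, 255, 12], [11, 10, 13]], dtype=float)
estimate = inverse_distance_estimate(patch)  # ignores the salt spike at the centre
```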
APA, Harvard, Vancouver, ISO, and other styles
6

Weng, Jing-hang, Lin Chen, Li-Min Sun, Yi-qing Zou, Hui Guo, and Ying Zhu. "A Fully Automated and Noncontact Method for Force Identification of Cables Based on Microwave Radar." In IABSE Congress, Nanjing 2022: Bridges and Structures: Connection, Integration and Harmonisation. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2022. http://dx.doi.org/10.2749/nanjing.2022.1527.

Full text
Abstract:
This study proposes a fully automated, non-contact cable force identification method based on microwave radar. Several algorithms are presented for data processing. The time-domain data recorded by the microwave radar are first transformed into the frequency domain by the fast Fourier transform. The eigen-frequencies are then identified simultaneously with the proposed fast sieve method. Subsequently, a novel algorithm using a hash map and weighted voting is applied to estimate the orders of the eigen-frequencies. Finally, the average ratio between the eigen-frequencies and their orders is estimated by the weighted least squares method, and the cable force is calculated using cable frequency formulas. The method has been validated by field tests.
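The final two steps, fitting the fundamental frequency by weighted least squares and converting it to a cable force, can be sketched as below. The taut-string formula T = 4 m L^2 f1^2 and the example weights are assumptions; the paper's weighting scheme and frequency formulas may differ.

```python
import numpy as np

# Hedged sketch: the fundamental frequency f1 is the weighted least-squares
# slope of the identified eigen-frequencies against their orders (f_n ~ n * f1),
# then the force follows from the taut-string relation T = 4 * m * L^2 * f1^2.

def cable_force(orders, freqs, weights, length_m, mass_per_m):
    n = np.asarray(orders, dtype=float)
    f = np.asarray(freqs, dtype=float)
    w = np.asarray(weights, dtype=float)
    f1 = np.sum(w * n * f) / np.sum(w * n ** 2)        # WLS slope through the origin
    tension = 4.0 * mass_per_m * length_m ** 2 * f1 ** 2
    return f1, tension

# Example: first three identified modes of a 100 m cable with 60 kg/m
f1, T = cable_force([1, 2, 3], [1.02, 2.05, 3.01], [1.0, 0.8, 0.5], 100.0, 60.0)
```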
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Yin-Song, Jun-Sheng Yu, and Han-Lin Mou. "Cuff-Less blood pressure estimation from Photoplethysmography via weighted visibility graph." In 2022 Cross Strait Radio Science & Wireless Technology Conference (CSRSWTC). IEEE, 2022. http://dx.doi.org/10.1109/csrswtc56224.2022.10098292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Y. "Post-stack Quality Factor Estimation Using Regularized Spectral Ratio Method after Similarity-weighted Stacking." In 77th EAGE Conference and Exhibition 2015. Netherlands: EAGE Publications BV, 2015. http://dx.doi.org/10.3997/2214-4609.201413062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gadgil, Ashwin A., and Robert E. Randall. "Two Phase Annular Flow Approximation Using 1-D Flow Equations Coupled With a Drift Flux Model for Concurrent Flow in Vertical or Near Vertical Channels." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-61480.

Full text
Abstract:
Annular flow is a two-phase gas-liquid flow regime dominated by a high gas flow rate moving through the centre of the pipe (the gas core). In this paper we develop and study a phenomenological model which combines the weighted mean value approach of the Zuber and Findlay drift flux model [1] with the 1-D flow approximation equations. The flow is described in terms of a distribution parameter and an averaged local velocity difference between the phases across the pipe cross-section. The average void fraction is calculated as a function of the ratio of the weighted mean gas velocity to the weighted mean liquid velocity (the slip ratio) and the drift flux velocity. The void fraction thus estimated is then applied to the 1-D continuity, momentum and energy equations, which are solved simultaneously to obtain the pressure gradient. Lastly, the liquid film thickness is obtained using the triangular hydrodynamic relationship between the liquid flow rate, the pressure gradient and the liquid film thickness. The film thickness obtained is then used to verify the original estimate of the void fraction, and an iterative procedure is used to match the original estimate to the final value. The results from this study were validated against PipeSIM© software and two sets of field measurements from a wet-gas field in Brazil. As opposed to conventional drift flux models based on four simultaneous equations, this model relies on three, significantly reducing the computational resources required, and is more accurate because it accounts for variable velocities and void fractions across the pipe cross-section.
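The void-fraction step in this abstract follows the classic drift-flux relation, which can be written down directly. The parameter values below are illustrative defaults, not the paper's correlations, and the coupling with the 1-D conservation equations is not shown.

```python
# Hedged sketch of the Zuber-Findlay drift-flux relation:
#   <alpha> = j_g / (C0 * j + v_gj)
# with j the total superficial velocity, C0 the distribution parameter and
# v_gj the weighted mean drift velocity. Values here are illustrative only.

def drift_flux_void_fraction(j_gas, j_liquid, c0=1.2, v_gj=0.25):
    j_total = j_gas + j_liquid             # total superficial velocity (m/s)
    return j_gas / (c0 * j_total + v_gj)   # average void fraction

alpha = drift_flux_void_fraction(j_gas=8.0, j_liquid=0.5)  # about 0.77 for a gas-dominated core
```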
APA, Harvard, Vancouver, ISO, and other styles
10

Nasser, Ahmed, Maha Elsabrouty, and Osamu Muta. "Weighted fast iterative shrinkage thresholding for 3D massive MIMO channel estimation." In 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). IEEE, 2017. http://dx.doi.org/10.1109/pimrc.2017.8292556.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Weighted ratio estimator"

1

Rosero-Bixby, Luis, and Tim Miller. The mathematics of the reproduction number R for Covid-19: A primer for demographers. Verlag der Österreichischen Akademie der Wissenschaften, December 2021. http://dx.doi.org/10.1553/populationyearbook2022.res1.3.

Full text
Abstract:
The reproduction number R is a key indicator used to monitor the dynamics of Covid-19 and to assess the effects of infection control strategies, which frequently have high social and economic costs. Despite having an analog in demography's "net reproduction rate", which has been routinely computed for a century, demographers may not be familiar with the concept and measurement of R in the context of Covid-19. This article is intended as a primer for understanding and estimating R in demography. We show that R can be estimated as the ratio of the number of new cases today to a weighted average of cases on previous days. We present two alternative derivations of these weights based on how risks have changed over time: constant versus exponential decay. We then provide estimates of these weights and demonstrate their use in calculating R to trace the course of the first pandemic year in 53 countries.
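The estimator described in this abstract is simple enough to sketch. The exponential-decay weights below (with an assumed mean generation time `tau`) are a stand-in for the weights derived in the paper.

```python
import numpy as np

# Hedged sketch: R on day t = new cases on day t divided by a weighted average
# of cases on the previous `window` days; weights decay exponentially and are
# normalised to sum to one. `tau` and `window` are illustrative assumptions.

def estimate_R(cases, tau=5.0, window=14):
    s = np.arange(1, window + 1)
    w = np.exp(-s / tau)
    w = w / w.sum()
    cases = np.asarray(cases, dtype=float)
    R = np.full(len(cases), np.nan)
    for t in range(window, len(cases)):
        denom = np.dot(w, cases[t - s])   # weighted average of earlier cases
        if denom > 0:
            R[t] = cases[t] / denom
    return R

daily_cases = [100, 110, 120, 135, 150, 160, 175, 190, 210, 230, 250, 270, 300, 330, 360]
print(estimate_R(daily_cases)[-1])   # R for the last day of this growing series
```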
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Jing, Yuanmei Chen, Die Liu, Fang Ye, Qi Sun, Qiang Huang, Jing Dong Dong, Tao Pei, Yuan He, and Qi Zhang. Prenatal exposure to particulate matter and term low birth weight:systematic review and meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, August 2022. http://dx.doi.org/10.37766/inplasy2022.8.0064.

Full text
Abstract:
Review question / Objective: To assess the effects of particulate matter exposure during various periods of pregnancy on low birth weight and term low birth weight. Population: pregnant women and their singleton live births. Exposure: maternal exposure to ambient PM2.5 and PM10 during the entire pregnancy or during each trimester, estimated from ground-level atmospheric pollution monitoring stations or validated exposure models (μg/m3). Comparator(s): risk estimates presented as hazard ratios (HRs) or odds ratios (ORs) with their 95% confidence intervals (95% CIs) per specified increment in PM2.5. Outcomes: term LBW (≥37 weeks and <2500 g) or LBW (<2500 g), defined as dichotomous variables.
APA, Harvard, Vancouver, ISO, and other styles
3

Balani, Suman, Hetashvi Sudani, Sonali Nawghare, and Nitin Kulkarni. ESTIMATION OF FETAL WEIGHT BY CLINICAL METHOD, ULTRASONOGRAPHY AND ITS CORRELATION WITH ACTUAL BIRTH WEIGHT IN TERM PREGNANCY. World Wide Journals, February 2023. http://dx.doi.org/10.36106/ijar/6907486.

Full text
Abstract:
Introduction: Accurate estimation of foetal weight is of paramount importance in modern obstetrics for the management of labour and delivery. During the past two decades, estimated foetal weight has been incorporated into the standard routine antepartum evaluation of high-risk pregnancies and deliveries. The present study was conducted to estimate fetal weight by a clinical method and by ultrasonography, and to find out their correlation with actual birth weight in term pregnancy. Material and Methods: This cross-sectional observational study was conducted in the outpatient and inpatient obstetric sections of the Department of Obstetrics & Gynaecology and the USG section of the Department of Radio-diagnosis of A.C.P.M. Medical College and Hospital, Dhule, Maharashtra. Observations & Results: Most of the study subjects were between 24-28 years of age (53.5%), with a mean age of 24.71 years. The mean Hadlock weight was 2705 ± 469 g, while the actual birth weight was 2805 ± 465 g; the difference was statistically significant (p<0.05). The mean difference for Dare's clinical method was 73.3 ± 49.8 g, while the Hadlock difference was 103.1 ± 77.4 g. There was a very strong, positive, statistically significant correlation between the Dare weight and the actual weight (p<0.05), and likewise between the Hadlock weight and the actual weight (p<0.05). Conclusion: The major finding of this study is that clinical estimation of fetal weight is as accurate as the ultrasonographic method of estimation within the normal birth weight range. This has important implications in a developing country like India, where ultrasound is not available in many health care delivery systems, especially in rural areas, and where the clinical method is easy, cost-effective, simple, accurate and can be used even by midwives.
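Of the two estimators compared in this report, Dare's clinical formula is simple enough to show directly. The sketch below assumes the commonly quoted form (fundal height times abdominal circumference, both in cm, giving grams) and does not reproduce the ultrasonographic Hadlock regression.

```python
# Hedged sketch of Dare's clinical formula: estimated fetal weight (g)
# is approximately symphysio-fundal height (cm) x abdominal circumference (cm).
# The Hadlock ultrasound estimate uses regression coefficients not shown here.

def dare_fetal_weight(fundal_height_cm, abdominal_circumference_cm):
    return fundal_height_cm * abdominal_circumference_cm  # grams

efw = dare_fetal_weight(34.0, 92.0)   # about 3128 g, within the normal term range
```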
APA, Harvard, Vancouver, ISO, and other styles
4

Over, Thomas, Riki Saito, Andrea Veilleux, Padraic O’Shea, Jennifer Sharpe, David Soong, and Audrey Ishii. Estimation of Peak Discharge Quantiles for Selected Annual Exceedance Probabilities in Northeastern Illinois. Illinois Center for Transportation, June 2016. http://dx.doi.org/10.36501/0197-9191/16-014.

Full text
Abstract:
This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, regional skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. The skew coefficient values for each streamgage were then computed as the variance-weighted average of at-site and regional skew coefficients. The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at ungaged sites and to improve flood-quantile estimates at and near a gaged site; (2) the urbanization-adjusted annual maximum peak discharges and peak discharge quantile estimates at streamgages from 181 watersheds including the 117 study watersheds and 64 additional watersheds in the study region that were originally considered for use in the study but later deemed to be redundant. The urbanization-adjustment equations, spatial regression equations, and peak discharge quantile estimates developed in this study will be made available in the web-based application StreamStats, which provides automated regression-equation solutions for user-selected stream locations. Figures and tables comparing the observed and urbanization-adjusted peak discharge records by streamgage are provided at http://dx.doi.org/10.3133/sir20165050 for download.
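One step in this abstract, combining at-site and regional skew coefficients by a variance-weighted average, can be sketched as below, interpreting "variance-weighted" as inverse-variance weighting. Variable names and values are illustrative; the report's full procedure has details not shown here.

```python
# Hedged sketch: inverse-variance weighted average of an at-site and a regional
# skew coefficient. The variances (mean square errors) are assumed inputs.

def weighted_skew(g_site, var_site, g_regional, var_regional):
    w_site = 1.0 / var_site
    w_reg = 1.0 / var_regional
    return (w_site * g_site + w_reg * g_regional) / (w_site + w_reg)

g = weighted_skew(g_site=0.35, var_site=0.12, g_regional=0.10, var_regional=0.09)
# the more precise (lower-variance) regional skew pulls the estimate toward 0.10
```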
APA, Harvard, Vancouver, ISO, and other styles