
Journal articles on the topic 'Weighted ratio estimator'



Consult the top 50 journal articles for your research on the topic 'Weighted ratio estimator.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Bhushan, Shashi, Anoop Kumar, Amer Ibrahim Al-Omari, and Ghadah A. Alomani. "Mean Estimation for Time-Based Surveys Using Memory-Type Logarithmic Estimators." Mathematics 11, no. 9 (April 30, 2023): 2125. http://dx.doi.org/10.3390/math11092125.

Abstract:
This article examines the issue of population mean estimation utilizing past and present data in the form of an exponentially weighted moving average (EWMA) statistic under simple random sampling (SRS). We suggest memory-type logarithmic estimators and derive their properties, such as mean-square error (MSE) and bias up to a first-order approximation. Using the EWMA statistic, the conventional and novel memory-type estimators are compared. Real and artificial populations are used as examples to illustrate the theoretical findings. According to the empirical findings, memory-type logarithmic estimators are superior to the conventional mean estimator, ratio estimator, product estimator, logarithmic-type estimator, and memory-type ratio and product estimators.
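To make the idea concrete, here is a minimal Python sketch of the EWMA statistic and a memory-type ratio estimator of the kind this literature builds on. The inputs (per-occasion sample means of the study and auxiliary variables, and a known auxiliary population mean X_bar) are assumptions for illustration; the logarithmic estimators proposed in the paper are not reproduced.

```python
import numpy as np

def ewma(means, lam=0.25):
    # EWMA statistic: blends the current sample mean with all past ones
    z = means[0]
    for m in means[1:]:
        z = lam * m + (1 - lam) * z
    return z

def memory_ratio_estimator(y_means, x_means, X_bar, lam=0.25):
    # Memory-type ratio estimator of the population mean: the EWMA of the
    # study-variable means is rescaled by the known auxiliary population
    # mean X_bar relative to the EWMA of the auxiliary sample means.
    return ewma(y_means, lam) * X_bar / ewma(x_means, lam)
```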
2

Zarnoch, S. J., and W. A. Bechtold. "Estimating mapped-plot forest attributes with ratios of means." Canadian Journal of Forest Research 30, no. 5 (May 1, 2000): 688–97. http://dx.doi.org/10.1139/x99-247.

Abstract:
The mapped-plot design utilized by the U.S. Department of Agriculture (USDA) Forest Inventory and Analysis and the National Forest Health Monitoring Programs is described. Data from 2458 forested mapped plots systematically spread across 25 states reveal that 35% straddle multiple conditions. The ratio-of-means estimator is developed as a method to obtain estimates of forest attributes from mapped plots, along with measures of variability useful for constructing confidence intervals. Basic inventory statistics from North and South Carolina were examined to see if these data satisfied the conditions necessary to qualify the ratio of means as the best linear unbiased estimator. It is shown that the ratio-of-means estimator is equivalent to the Horvitz–Thompson, the mean-of-ratios, and the weighted-mean-of-ratios estimators under certain situations.
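As a quick illustration of the estimators being compared, the following sketch (with hypothetical plot data) shows why the weighted mean of ratios, with plot areas as weights, coincides algebraically with the ratio of means:

```python
import numpy as np

# y: attribute total observed on each mapped plot; a: forested area of each plot
y = np.array([12.0, 0.0, 7.5, 30.2])
a = np.array([1.0, 0.4, 0.6, 1.0])

ratio_of_means = y.sum() / a.sum()                      # pooled per-area estimate
mean_of_ratios = np.mean(y / a)                         # unweighted mean of plot ratios
weighted_mean_of_ratios = np.average(y / a, weights=a)  # equals ratio_of_means
```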
3

Panda, K. B., and M. Sen. "Weighted Ratio-cum-Product Estimator for Finite Population Mean." International Journal of Scientific Research in Mathematical and Statistical Sciences 5, no. 4 (August 31, 2018): 354–58. http://dx.doi.org/10.26438/ijsrmss/v5i4.354358.

4

Wada, Kazumi, Keiichiro Sakashita, and Hiroe Tsubaki. "Robust Estimation for a Generalised Ratio Model." Austrian Journal of Statistics 50, no. 1 (February 3, 2021): 74–87. http://dx.doi.org/10.17713/ajs.v50i1.994.

Abstract:
It is known that data such as business sales and household income need transformation prior to regression estimation because the errors are heteroscedastic. However, data transformations make the estimation of the mean and total unstable. Therefore, the ratio model is often used for imputation in the field of official statistics to avoid this problem. Our study aims to robustify the estimator following the ratio model by means of M-estimation. Reformulating the conventional ratio model with a homoscedastic quasi-error term provides quasi-residuals which can be used as a measure of outlyingness, just as in a linear regression model. A generalisation of the model, which accommodates varied error terms with different heteroscedasticity, is also proposed. Functions for the robustified estimators of the generalised ratio model are implemented by the iteratively re-weighted least squares algorithm in the R environment and illustrated using random datasets. Monte Carlo simulation confirms the accuracy of the proposed estimators, as well as their computational efficiency. A comparison of the scale parameters between the average absolute deviation (AAD) and median absolute deviation (MAD) is made regarding Tukey's biweight function. The results with Huber's weight function are also provided for reference. The proposed robust estimator of the generalised ratio model is used for imputation of major corporate accounting items of the 2016 Economic Census for Business Activity in Japan.
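A minimal sketch of the approach described, assuming the generalised ratio model y_i = b·x_i + x_i^gamma·e_i (gamma = 0.5 gives the classical ratio model), IRLS with Tukey's biweight, and the MAD as scale estimate; the authors' actual R functions are not reproduced here.

```python
import numpy as np

def tukey_biweight(u, c=4.685):
    # Weight implied by Tukey's biweight function for standardized residuals u
    w = np.zeros_like(u, dtype=float)
    inside = np.abs(u) < c
    w[inside] = (1.0 - (u[inside] / c) ** 2) ** 2
    return w

def robust_ratio(y, x, gamma=0.5, max_iter=50, tol=1e-10):
    b = y.sum() / x.sum()  # classical ratio-of-means start value
    for _ in range(max_iter):
        r = (y - b * x) / x ** gamma                      # quasi-residuals
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
        w = tukey_biweight(r / s)
        # Weighted least squares update for the slope of the ratio model
        b_new = np.sum(w * y * x ** (1 - 2 * gamma)) / np.sum(w * x ** (2 - 2 * gamma))
        if abs(b_new - b) < tol * abs(b):
            break
        b = b_new
    return b
```

With gamma = 0.5 and all weights equal to one, the update reduces to sum(y)/sum(x), the ordinary ratio estimator, which is why quasi-residual downweighting robustifies it directly.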
5

Wang, M., G. Huang, J. Zhang, F. Hua, and L. Lu. "A WEIGHTED COHERENCE ESTIMATOR FOR COHERENT CHANGE DETECTION IN SYNTHETIC APERTURE RADAR IMAGES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 31, 2022): 1369–75. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-1369-2022.

Abstract:
Synthetic aperture radar (SAR) coherent change detection (CCD) often utilizes the degree of coherence to detect changes that have occurred between two data collections. Although they have shown some promise in change detection, many existing coherence estimators are still limited because the change areas do not stand out well from other decorrelation areas due to low clutter-to-noise ratio (CNR) and volume scattering. Moreover, many estimators require an equal-variance assumption between two SAR images of the same scene. However, this assumption is unlikely to be met in regions with significant differences in intensity, such as the change areas. To address these problems, we propose an improved coherence estimator that introduces parameters based on the true-variance ratio as weights. Since these parameters are closely related to the ratio-change statistic in intensity-based change detection algorithms, their introduction frees the estimator from the equal-variance assumption and enables detection results that largely combine the advantages of intensity-based and CCD methods. Experiments on simulated and real SAR image pairs demonstrate the effectiveness of the proposed estimator in highlighting change, clearly improving the contrast between the change and the background.
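For context, the conventional sample coherence magnitude that such CCD methods start from is computed over a local window of co-registered complex SAR pixels; a minimal sketch follows (the paper's variance-ratio weighting is not reproduced):

```python
import numpy as np

def sample_coherence(f, g):
    # f, g: complex-valued pixel windows from the two SAR acquisitions.
    # Returns the classical coherence magnitude in [0, 1]; change areas
    # decorrelate and show low coherence.
    num = np.abs(np.vdot(g, f))  # |sum of f * conj(g)| over the window
    den = np.sqrt(np.sum(np.abs(f) ** 2) * np.sum(np.abs(g) ** 2))
    return num / den
```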
6

Aanes, Sondre, and Jon Helge Vølstad. "Efficient statistical estimators and sampling strategies for estimating the age composition of fish." Canadian Journal of Fisheries and Aquatic Sciences 72, no. 6 (June 2015): 938–53. http://dx.doi.org/10.1139/cjfas-2014-0408.

Abstract:
Estimates of age compositions of fish populations or catches that are fundamental inputs to analytical stock assessment models are generally obtained from sample surveys, and multistage cluster sampling of fish is the norm. We use simulations and extensive empirical survey data for Northeast Arctic cod (Gadus morhua) to compare the efficiency of estimators that use age–length keys (ALKs) with design-based estimators for estimating age compositions of fish. The design-based weighted ratio estimator produces the most accurate estimates for cluster-correlated data, and an alternative estimator based on a weighted ALK is equivalent under certain constraints. Using simulations to evaluate subsampling strategies, we show that otolith collections from a length-stratified subsample of one fish per 5 cm length bin (∼10 fish total) per haul or trip is sufficient and nearly as efficient as a random subsample of 20 fish. Our study also indicates that the common practice of applying fixed ALKs to length composition data can severely underestimate the variance in estimates of age compositions and that “borrowing” of ALKs developed for other gears, areas, or time periods can cause serious bias.
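A minimal sketch of a design-based weighted ratio estimator of proportions-at-age from cluster (haul) samples. The inputs are assumptions for illustration: aged-fish counts per haul, the number of fish sampled per haul, and haul catch used to build the raising weights.

```python
import numpy as np

def proportion_at_age(counts_at_age, sampled, catch_weight):
    # counts_at_age: (hauls x ages) matrix of aged fish per haul
    # sampled: number of fish sampled in each haul
    # catch_weight: total catch per haul, used to build the ratio weights
    counts_at_age = np.asarray(counts_at_age, dtype=float)
    w = np.asarray(catch_weight, dtype=float) / np.asarray(sampled, dtype=float)
    est_total = (counts_at_age * w[:, None]).sum(axis=0)  # estimated totals at age
    return est_total / est_total.sum()                    # ratio of weighted totals
```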
7

Khan, Hina, Saleh Farooq, Muhammad Aslam, and Masood Amjad Khan. "Exponentially Weighted Moving Average Control Charts for the Process Mean Using Exponential Ratio Type Estimator." Journal of Probability and Statistics 2018 (October 1, 2018): 1–15. http://dx.doi.org/10.1155/2018/9413939.

Abstract:
This study proposes EWMA-type control charts by considering some auxiliary information. The ratio estimation technique for the mean with ranked set sampling design is used in designing the control structure of the proposed charts. We have developed EWMA control charts using two exponential ratio-type estimators based on ranked set sampling for the process mean to obtain specific ARLs, being suitable when small process shifts are of interest.
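The EWMA charting mechanics referred to here are standard; a minimal sketch follows. In the paper the plotted statistic would be the exponential ratio-type estimate of the mean from each ranked set sample, which this sketch simply takes as input.

```python
import numpy as np

def ewma_chart(stats, mu0, sigma0, lam=0.2, L=3.0):
    # stats: sequence of per-sample mean estimates; mu0, sigma0: in-control
    # mean and standard deviation of the plotted statistic.
    z, out = mu0, []
    for i, x in enumerate(stats, start=1):
        z = lam * x + (1 - lam) * z  # EWMA recursion
        # Time-varying control limits with the exact EWMA variance factor
        half = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        out.append((z, mu0 - half, mu0 + half, abs(z - mu0) > half))
    return out  # (statistic, LCL, UCL, out-of-control flag) per sample
```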
8

Naz, Farah, Tahir Nawaz, Tianxiao Pang, and Muhammad Abid. "Use of Nonconventional Dispersion Measures to Improve the Efficiency of Ratio-Type Estimators of Variance in the Presence of Outliers." Symmetry 12, no. 1 (December 19, 2019): 16. http://dx.doi.org/10.3390/sym12010016.

Abstract:
The use of auxiliary information in survey sampling to enhance the efficiency of estimators of population parameters is a common phenomenon. Generally, ratio and regression estimators are developed by using known information on conventional parameters of the auxiliary variables, such as the variance, coefficient of variation, coefficient of skewness, coefficient of kurtosis, or the correlation between the study and auxiliary variables. The efficiency of these estimators is dubious in the presence of outliers in the data and a nonsymmetrical population. This study presents improved variance estimators under simple random sampling without replacement, with the assumption that information on some nonconventional dispersion measures of the auxiliary variable is readily available. These measures include the inter-decile range, the sample inter-quartile range, the probability-weighted moment estimator, the Gini mean difference estimator, Downton's estimator, the median absolute deviation from the median, and so forth. The algebraic expressions for the bias and mean square error of the proposed estimators are obtained, and efficiency conditions are derived for comparison with the existing estimators. Percentage relative efficiencies are used to numerically compare the proposed estimators with the existing ones on real datasets, indicating the superiority of the suggested estimators.
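For reference, a few of the nonconventional dispersion measures named above can be computed as follows. This is a sketch; formula conventions (e.g., the Gini mean difference denominator) vary between texts.

```python
import numpy as np

def dispersion_measures(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    q1, q3 = np.percentile(x, [25, 75])
    d1, d9 = np.percentile(x, [10, 90])
    i = np.arange(1, n + 1)
    # Gini mean difference: mean |x_i - x_j| over distinct pairs, via sorted data
    gini_md = 2.0 * np.sum((2 * i - n - 1) * x) / (n * (n - 1))
    mad = np.median(np.abs(x - np.median(x)))  # median absolute deviation
    return {"inter-quartile range": q3 - q1,
            "inter-decile range": d9 - d1,
            "Gini mean difference": gini_md,
            "MAD": mad}
```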
9

Schoch, Tobias. "On the Strong Law of Large Numbers for Nonnegative Random Variables. With an Application in Survey Sampling." Austrian Journal of Statistics 50, no. 3 (July 5, 2021): 1–12. http://dx.doi.org/10.17713/ajs.v50i3.631.

Abstract:
Strong laws of large numbers with arbitrary norming sequences for nonnegative not necessarily independent random variables are obtained. From these results we establish, among other things, stability results for weighted sums of nonnegative random variables. A survey sampling application is provided on strong consistency of the Horvitz--Thompson estimator and the ratio estimator.
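For readers unfamiliar with the two survey estimators named, a standard formulation is the following (the paper's precise consistency conditions are not reproduced here):

```latex
% Horvitz--Thompson estimator of a population total, and the ratio estimator,
% for a sample s with first-order inclusion probabilities \pi_i:
\hat{Y}_{\mathrm{HT}} = \sum_{i \in s} \frac{y_i}{\pi_i},
\qquad
\hat{R} = \frac{\hat{Y}_{\mathrm{HT}}}{\hat{X}_{\mathrm{HT}}}
        = \frac{\sum_{i \in s} y_i / \pi_i}{\sum_{i \in s} x_i / \pi_i}.
```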
10

Agarwal, Ankush, and Sandeep Juneja. "Nearest Neighbor Based Estimation Technique for Pricing Bermudan Options." International Game Theory Review 17, no. 01 (March 2015): 1540002. http://dx.doi.org/10.1142/s0219198915400022.

Abstract:
A Bermudan option is an option which allows the holder to exercise at pre-specified time instants, where the aim is to maximize the expected payoff upon exercise. In most practical cases, the underlying dimensionality of Bermudan options is high, and numerical methods for solving the partial differential equations satisfied by the price process become inapplicable. In the absence of an analytical formula, a popular approach is to solve the Bermudan option pricing problem approximately using dynamic programming via estimation of the so-called continuation value function. In this paper we develop a nearest-neighbor-based estimation technique which gives biased estimators for the true option price. We provide algorithms for calculating lower and upper biased estimators which can be used to construct valid confidence intervals. The computation of the lower biased estimator is straightforward and relies on a suboptimal exercise policy generated using the nearest neighbor estimate of the continuation value function. The upper biased estimator is similarly obtained using likelihood-ratio-weighted nearest neighbors. We analyze the convergence properties of the mean square error of the lower biased estimator. We develop an order-of-magnitude relationship between the simulation parameters and the computational budget in an asymptotic regime as the computational budget increases to infinity.
11

Pervaiz, Hafiz Zain, Syed Muhammad Muslim Raza, Muhammad Moeen Butt, Saira Sharif, and Muhammad Haider. "A New Hybrid Exponentially Weighted Moving Average control chart using Mixture Ratio Estimator of Mean." Proceedings of the Pakistan Academy of Sciences: A. Physical and Computational Sciences 58, no. 2 (December 27, 2021): 45–57. http://dx.doi.org/10.53560/ppasa(58-2)606.

Abstract:
In this paper, we propose a Hybrid Exponentially Weighted Moving Average (HEWMA) control chart based on a mixture ratio estimator of the mean using a single auxiliary variable and a single auxiliary attribute (Moeen et al., [1]). We call it the Z-HEWMA control chart. The proposed control chart's performance is evaluated using the out-of-control average run length (ARL1). The control limits of the proposed chart are based on the estimator and its mean square error. A simulated example is used to compare the proposed Z-HEWMA, the traditional/simple EWMA, and the CUSUM control charts. The study reveals that the Z-HEWMA control chart gives more efficient results than the traditional/simple EWMA and CUSUM control charts. The Z-HEWMA chart can be used for efficient monitoring of production processes in manufacturing industries where auxiliary information about a numerical variable and an attribute is available.
12

Marzjarani, Morteza. "A Comparison of a General Linear Model and the Ratio Estimator." International Journal of Statistics and Probability 9, no. 3 (April 15, 2020): 54. http://dx.doi.org/10.5539/ijsp.v9n3p54.

Abstract:
In data analysis, selecting a proper statistical model is a challenging issue, and beyond model selection there are other important factors impacting the results. In this article, two statistical models, a General Linear Model (GLM) and the Ratio Estimator, are compared. Where applicable, issues such as heteroscedasticity and outliers, and the role they play in data analysis, are studied. To reduce the severity of heteroscedasticity, Weighted Least Squares (WLS), Generalized Least Squares (GLS), and Feasible Generalized Least Squares (FGLS) are deployed, and a revised version of FGLS is introduced. Since these issues are data dependent, shrimp effort data collected in the Gulf of Mexico for the years 2005 through 2018 are used, and it is shown that the revised FGLS reduces the impact of heteroscedasticity significantly compared with FGLS. The datasets are also checked for outliers, and corrections are made where applicable. It is concluded that these issues play a significant role in data analysis and must be taken seriously.
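One common FGLS recipe of the kind compared in the paper is sketched below. This is a generic textbook version using statsmodels, not the author's revised FGLS: fit OLS, model the log squared residuals to estimate the variance function, then re-fit by WLS with inverse estimated variances.

```python
import numpy as np
import statsmodels.api as sm

def fgls(y, X):
    # Step 1: OLS fit on a design matrix with an intercept
    X1 = sm.add_constant(X)
    ols = sm.OLS(y, X1).fit()
    # Step 2: auxiliary regression of log squared residuals estimates
    # the variance function (small constant guards against log(0))
    aux = sm.OLS(np.log(ols.resid ** 2 + 1e-12), X1).fit()
    # Step 3: WLS with weights proportional to inverse estimated variances
    w = 1.0 / np.exp(aux.fittedvalues)
    return sm.WLS(y, X1, weights=w).fit()
```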
13

Purhadi, Anita Rahayu, and Gabriella Hillary Wenur. "Geographically Weighted Three-Parameters Bivariate Gamma Regression and Its Application." Symmetry 13, no. 2 (January 26, 2021): 197. http://dx.doi.org/10.3390/sym13020197.

Abstract:
This study discusses model development for response variables following a bivariate gamma distribution with three parameters, namely shape, scale and location, paying attention to spatial effects so as to produce different parameter estimates for each location. This model is called geographically weighted bivariate gamma regression (GWBGR). The method used for parameter estimation is maximum-likelihood estimation (MLE) with the Berndt–Hall–Hall–Hausman (BHHH) algorithm. Parameter testing consisted of a simultaneous test using the maximum-likelihood ratio test (MLRT) and a partial test using the Wald test. The results of three-parameter GWBGR modeling with a fixed bisquare kernel weight showed that the variables that significantly affect the rate of infant mortality (RIM) and the rate of maternal mortality (RMM) are the percentage of poor people, the percentage of obstetric complications treated, the percentage of pregnant mothers who received Fe3, and the percentage of first-time pregnant mothers under seventeen years of age. The percentage of households with a clean and healthy lifestyle is significant only in several regencies and cities.
14

Lynch, Thomas B., and Robert F. Wittwer. "n-Tree distance sampling for per-tree estimates with application to unequal-sized cluster sampling of increment core data." Canadian Journal of Forest Research 33, no. 7 (July 1, 2003): 1189–95. http://dx.doi.org/10.1139/x03-036.

Abstract:
Samples from the n trees nearest to a point or plot center are sometimes used to estimate per-tree values such as age or growth from increment cores. Clutter et al. (J.L. Clutter, J.C. Fortson, L.V. Pienaar, G.H. Brister, and R.L. Bailey. 1983. Timber management: a quantitative approach. John Wiley & Sons, New York) indicated that this procedure can be biased because it is more likely to sample large trees occupying large amounts of space. This sampling procedure falls into the category of n-tree distance sampling in which the nth closest tree to a point defines a plot radius that can be used to estimate number of trees or amount of volume per hectare. When a ratio of n-tree per-hectare estimates is used to estimate per-tree attributes, the resulting estimator is a weighted average in which weights are the inverse of the n-tree sampling plot size. Since this ratio estimator essentially weights observations inversely with plot size, it is not subject to the objections of Clutter et al. (1983). This estimator is used to estimate age by diameter at breast height class for eastern cottonwood (Populus deltoides Bartr. ex Marsh.) on the Cimarron National Grassland.
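A hedged sketch of the estimator described: the per-tree attribute (e.g., age) is a ratio of n-tree per-hectare estimates, so each observation is effectively weighted by the inverse of its point's plot area. Inputs (attribute arrays per point, distances to the n-th tree in metres) are illustrative assumptions.

```python
import numpy as np

def ntree_per_tree_mean(attributes, radii):
    # attributes: list of length-n arrays (attribute of each sampled tree
    #             at one sample point)
    # radii: distance from each point to its n-th nearest tree (m); this
    #        defines that point's plot radius
    num = den = 0.0
    for a, r in zip(attributes, radii):
        area_ha = np.pi * r ** 2 / 10_000.0  # plot area in hectares
        num += np.sum(a) / area_ha           # attribute total per hectare
        den += len(a) / area_ha              # trees per hectare
    return num / den                         # per-tree mean, inverse-area weighted
```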
15

Natasha Latifatu Soliha, Dian Lestari, and Yekti Widyaningsih. "Analisis Faktor-Faktor yang Menjelaskan Kasus AIDS Provinsi Jawa Timur Menggunakan Model Geographically Weighted Logistic Regression (GWLR)." Jurnal Statistika dan Aplikasinya 7, no. 1 (June 30, 2023): 37–48. http://dx.doi.org/10.21009/jsa.07104.

Abstract:
AIDS is the most advanced phase of HIV infection, in which the immune system is weakened. In 2020, East Java Province had the most HIV infections in Indonesia and the third-highest total number of AIDS cases. The purpose of this research is to build a model using Geographically Weighted Logistic Regression (GWLR) and to group regencies/cities using K-means clustering analysis. The variables used in this research are the Gini Ratio, L Index of Per Capita Expenditure, Gender Ratio, Dependency Ratio, Gender Development Index, and the number of Pos Pelayanan KB Desa. The proportion of AIDS cases is categorized into two levels based on a specified cut-point: 0 for the low level, where the proportion of AIDS cases is less than 0.0006, and 1 for the high level, where the proportion is greater than or equal to 0.0006. Parameter estimation for GWLR uses the Maximum Likelihood Estimation (MLE) method with a fixed Gaussian kernel as the weighting function, and the optimum bandwidth is determined using Akaike's Information Criterion Corrected (AICc). Z-scores of the most suitable model, where the Z-score is the parameter estimate divided by its standard error, are grouped using K-means clustering analysis. The grouping results indicate that cluster 1 members tend to be regencies/cities where the gender ratio and dependency ratio are significant variables, while cluster 2 members tend to be regencies/cities where only the dependency ratio is significant.
16

Hoque, Z., B. Billah, and S. Khan. "On the Size Corrected Tests in Improved Estimation." Calcutta Statistical Association Bulletin 57, no. 3-4 (September 2005): 143–60. http://dx.doi.org/10.1177/0008068320050301.

Abstract:
In this paper we propose a shrinkage preliminary test estimator (SPTE) of the coefficient vector in the multiple linear regression model based on the size-corrected Wald (W), likelihood ratio (LR) and Lagrange multiplier (LM) tests. The correction factors used are those obtained from degrees-of-freedom corrections to the estimate of the error variance and those obtained from the second-order Edgeworth approximations to the exact distributions of the test statistics. The bias and weighted mean squared error (WMSE) functions of the estimators are derived. With respect to WMSE, the relative efficiencies of the SPTEs relative to the maximum likelihood estimator are calculated. This study shows that the amount of conflict can be substantial when the three tests are based on the same asymptotic chi-square critical value. The conflict among the SPTEs is due to the asymptotic tests not having the correct significance level. The Edgeworth size-corrected W, LR and LM tests reduce the conflict remarkably.
17

Viwatwongkasem, Chukiat, Sutthisak Srisawad, Pichitpong Soontornpipit, Jutatip Sillabutra, Pratana Satitvipawee, Prasong Kitidamrongsuk, and Hathaikan Chootrakool. "Minimum MSE Weighted Estimator to Make Inferences for a Common Risk Ratio across Sparse Meta-Analysis Data." Open Journal of Statistics 12, no. 01 (2022): 49–69. http://dx.doi.org/10.4236/ojs.2022.121004.

18

Lu, Yixiang, Qingwei Gao, Dong Sun, Dexiang Zhang, Yi Xia, and Hui Wang. "A Novel Directionlet-Based Image Denoising Method Using MMSE Estimator and Laplacian Mixture Distribution." Journal of Electrical and Computer Engineering 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/459285.

Abstract:
A novel method based on directionlet transform is proposed for image denoising under Bayesian framework. In order to achieve noise removal, the directionlet coefficients of the uncorrupted image are modeled independently and identically by a two-state Laplacian mixture model with zero mean. The expectation-maximization algorithm is used to estimate the parameters that characterize the assumed prior model. Within the framework of Bayesian theory, the directionlet coefficients of noise-free image are estimated by a nonlinear shrinkage function based on weighted average of the minimum mean square error estimator. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the proposed method is very competitive when compared with other methods in terms of both peak signal-to-noise ratio and visual quality.
19

Silvapulle, Mervyn J. "On Limited Dependent Variable Models: Maximum Likelihood Estimation and Test of One-sided Hypothesis." Econometric Theory 7, no. 3 (September 1991): 385–95. http://dx.doi.org/10.1017/s0266466600004527.

Abstract:
The limited dependent variable models with errors having log-concave density functions are studied here. For such models with normal errors, the asymptotic normality of the maximum likelihood estimator was established by Amemiya [1]. We show, when the density of the error distribution is log-concave, that the maximum likelihood estimator exists with arbitrarily large probability for large sample sizes, and is asymptotically normal. The general theory presented here includes the important special cases of normal, logistic, and extreme value error distributions. The main results are established under rather weak conditions. It is also shown that, under the null hypothesis, the asymptotic distribution of the likelihood ratio statistic for testing a one-sided alternative hypothesis is a weighted sum of chi-squares.
20

Zhao, Qingyuan, Yang Chen, Jingshu Wang, and Dylan S. Small. "Powerful three-sample genome-wide design and robust statistical inference in summary-data Mendelian randomization." International Journal of Epidemiology 48, no. 5 (July 11, 2019): 1478–92. http://dx.doi.org/10.1093/ije/dyz142.

Abstract:
Background: Summary-data Mendelian randomization (MR) has become a popular research design to estimate the causal effect of risk exposures. With the sample size of GWAS continuing to increase, it is now possible to use genetic instruments that are only weakly associated with the exposure. Development: We propose a three-sample genome-wide design where typically 1000 independent genetic instruments across the whole genome are used. We develop an empirical partially Bayes statistical analysis approach where instruments are weighted according to their strength; thus weak instruments bring less variation to the estimator. The estimator is highly efficient with many weak genetic instruments and is robust to balanced and/or sparse pleiotropy. Application: We apply our method to estimate the causal effect of body mass index (BMI) and major blood lipids on cardiovascular disease outcomes, and obtain substantially shorter confidence intervals (CIs). In particular, the estimated causal odds ratio of BMI on ischaemic stroke is 1.19 (95% CI: 1.07–1.32, P-value <0.001); the estimated causal odds ratio of high-density lipoprotein cholesterol (HDL-C) on coronary artery disease (CAD) is 0.78 (95% CI: 0.73–0.84, P-value <0.001). However, the estimated effect of HDL-C attenuates and becomes statistically non-significant when we only use strong instruments. Conclusions: A genome-wide design can greatly improve the statistical power of MR studies. Robust statistical methods may alleviate but not solve the problem of horizontal pleiotropy. Our empirical results suggest that the relationship between HDL-C and CAD is heterogeneous, and it may be too soon to completely dismiss the HDL hypothesis.
21

Li, Wenxing, Xiaojun Mao, Zhuqun Zhai, and Yingsong Li. "High Performance Robust Adaptive Beamforming in the Presence of Array Imperfections." International Journal of Antennas and Propagation 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/3743509.

Abstract:
A high performance robust beamforming scheme is proposed to combat model mismatch. Our method lies in the novel construction of interference-plus-noise (IPN) covariance matrix. The IPN covariance matrix consists of two parts. The first part is obtained by utilizing the Capon spectrum estimator integrated over a region separated from the direction of the desired signal and the second part is acquired by removing the desired signal component from the sample covariance matrix. Then a weighted summation of these two parts is utilized to reconstruct the IPN matrix. Moreover, a steering vector estimation method based on orthogonal constraint is also proposed. In this method, the presumed steering vector is corrected via orthogonal constraint under the condition where the estimation does not converge to any of the interference steering vectors. To further improve the proposed method in low signal-to-noise ratio (SNR), a hybrid method is proposed by incorporating the diagonal loading method into the IPN matrix reconstruction. Finally, various simulations are performed to demonstrate that the proposed beamformer provides strong robustness against a variety of array mismatches. The output signal-to-interference-plus-noise ratio (SINR) improvement of the beamformer due to the proposed method is significant.
22

An, Qi, Zi-shu He, Hui-yong Li, and Yong-hua Li. "Phase Clustering Based Modulation Classification Algorithm for PSK Signal over Wireless Environment." Mobile Information Systems 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/2398464.

Abstract:
Promptitude and accuracy of signals' non-data-aided (NDA) identification is one of the key technology demands in noncooperative wireless communication networks, especially in information monitoring and other electronic warfare. Against this background, this paper proposes a new signal classifier for phase shift keying (PSK) signals. The periodicity of the signal's phase is utilized as the sorting characteristic, from which a fractional function is constructed for phase clustering. Classification and the modulation order of intercepted signals can be obtained from the Fast Fourier Transform (FFT) of the phase clustering function. Frequency offset is also considered for practical conditions. The accuracy of frequency offset estimation has a direct impact on its correction, so a feasible solution is supplied. In this paper, an advanced estimator is proposed for estimating the frequency offset while balancing estimation accuracy and range under low signal-to-noise ratio (SNR) conditions. The influence on estimation range brought by the maximum correlation interval is removed through the differential operation of the autocorrelation of the normalized baseband signal raised to the power of Q. Then, a weighted summation is adopted for an effective frequency estimate. Details of the equations and relevant simulations are presented. The proposed estimator can reach an estimation accuracy of 10^-4 even when the SNR is as low as -15 dB. Analytical formulas are given, and the corresponding simulations illustrate that the proposed classifier is more efficient than its counterparts even at low SNRs.
23

Noma, Hisashi, and Shiro Tanaka. "Analysis of case-cohort designs with binary outcomes: Improving efficiency using whole-cohort auxiliary information." Statistical Methods in Medical Research 26, no. 2 (October 26, 2014): 691–706. http://dx.doi.org/10.1177/0962280214556175.

Abstract:
The case-cohort design has been widely adopted for reducing the cost of covariate measurements in large prospective cohort studies. Under the case-cohort design, complete covariate data are collected only on randomly sampled cases and a subcohort randomly selected from the whole cohort. For the analysis of case-cohort studies with binary outcomes, logistic regression analysis has been routinely used. However, in many applications, certain covariates are readily measured on all samples from the whole cohort, and the case-cohort design may be regarded as a two-phase sampling design. Using this auxiliary covariate information, estimators for the regression parameters can be substantially improved. In this article, we discuss the theoretical basis of the case-cohort design derived from the formulation of the two-phase design and the improved estimators using whole-cohort auxiliary variable information. In particular, we show that the sampling scheme of the case-cohort design is substantially equivalent to that of conventional two-phase case-control studies (also known as two-stage case-control studies for epidemiologists), i.e., the methodologies of two-phase case-control studies can be directly applied to case-cohort data. Under this framework, we review and apply the following improved estimators to the case-cohort design with binary outcomes: (i) weighted estimators, (ii) a semiparametric maximum likelihood estimator, and (iii) a multiple imputation estimator. In addition, based on the framework of the two-phase design, we can obtain risk ratio and risk difference estimators without the rare-disease assumption. We illustrate these methodologies via simulations and the National Wilms Tumor Study data.
24

Zhu, Xiaoxiang, Zhen Dong, Anxi Yu, Manqing Wu, Dexin Li, and Yongsheng Zhang. "New Approaches for Robust and Efficient Detection of Persistent Scatterers in SAR Tomography." Remote Sensing 11, no. 3 (February 11, 2019): 356. http://dx.doi.org/10.3390/rs11030356.

Abstract:
Persistent scatterer interferometry (PSI) has the ability to acquire submeter-scale digital elevation model (DEM) and millimeter-scale deformation. A limitation to the application of PSI is that only single persistent scatterers (SPSs) are detected, and pixels with multiple dominant scatterers from different sources are discarded in PSI processing. Synthetic aperture radar (SAR) tomography is a promising technique capable of resolving layovers. In this paper, new approaches based on a novel two-tier network aimed at robust and efficient detection of persistent scatterers (PSs) are presented. The calibration of atmospheric phase screen (APS) and the detection of PSs can be jointly implemented in the novel two-tier network. A residue-to-signal ratio (RSR) estimator is proposed to evaluate whether the APS is effectively calibrated and to select reliable PSs with accurate estimation. In the first-tier network, a Delaunay triangulation network is constructed for APS calibration and SPS detection. RSR thresholding is used to adjust the first-tier network by discarding arcs and SPS candidates (SPSCs) with inaccurate estimation, yielding more than one main network in the first-tier network. After network adjustment, we attempt to establish reliable SPS arcs to connect the main isolated networks, and the expanded largest connected network is then formed with more manual structure information subtracted. Furthermore, rather than the weighted least square (WLS) estimator, a network decomposition WLS (ND-WLS) estimator is proposed to accelerate the retrieval of absolute parameters from the expanded largest connected network, which is quite useful for large network inversion. In the second-tier network, the remaining SPSs and all the double PSs (DPSs) are detected and estimated with reference to the expanded largest connected network. Compared with traditional two-tier network-based methods, more PSs can be robustly and efficiently detected by the proposed new approaches. Experiments on interferometric high resolution TerraSAR-X SAR images are given to demonstrate the merits of the new approaches.
25

Pagoni, Panagiota, Niki L. Dimou, Neil Murphy, and Evie Stergiakouli. "Using Mendelian randomisation to assess causality in observational studies." Evidence Based Mental Health 22, no. 2 (April 12, 2019): 67–71. http://dx.doi.org/10.1136/ebmental-2019-300085.

Abstract:
Objective: Mendelian randomisation (MR) is a technique that aims to assess causal effects of exposures on disease outcomes. The paper aims to present the main assumptions that underlie MR, the statistical methods used to estimate causal effects, and how to account for potential violations of the key assumptions. Methods: We discuss the key assumptions that should be satisfied in an MR setting. We list the statistical methodologies used in two-sample MR when summary data are available to estimate causal effects (i.e., the Wald ratio estimator, inverse-variance weighted and maximum likelihood methods) and to identify/adjust for potential violations of MR assumptions (i.e., MR-Egger regression and the weighted median approach). We also present statistical methods and graphical tools used to evaluate the presence of heterogeneity. Results: As an illustrative example, we use a published two-sample MR study investigating the causal association of body mass index with three psychiatric disorders (i.e., bipolar disorder, schizophrenia and major depressive disorder). We highlight the importance of assessing the results of all available methods rather than each method alone. We also demonstrate the impact of heterogeneity on the estimation of causal effects. Conclusions: MR is a useful tool to assess causality of risk factors in medical research. Assessment of the key assumptions underlying MR is crucial for a valid interpretation of the results.
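Two of the summary-data estimators listed in the Methods can be sketched in a few lines: per-variant Wald ratios and their inverse-variance weighted (IVW) combination with first-order weights. The inputs (per-SNP exposure and outcome effect estimates with outcome standard errors) are hypothetical.

```python
import numpy as np

def wald_ratios(beta_x, beta_y):
    # Per-variant causal estimate: SNP-outcome effect over SNP-exposure effect
    return beta_y / beta_x

def ivw_estimate(beta_x, beta_y, se_y):
    ratios = wald_ratios(beta_x, beta_y)
    w = (beta_x / se_y) ** 2              # first-order inverse-variance weights
    est = np.sum(w * ratios) / np.sum(w)  # weighted average of Wald ratios
    se = 1.0 / np.sqrt(np.sum(w))         # fixed-effect standard error
    return est, se
```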
26

Simpson, Bryan, David Reith, Natalie Medlicott, and Alesha Smith. "Choice of Renal Function Estimator Influences Adverse Outcomes with Dabigatran Etexilate in Patients with Atrial Fibrillation." TH Open 02, no. 04 (October 2018): e420-e427. http://dx.doi.org/10.1055/s-0038-1676356.

Abstract:
Background: The clinical significance of dosing dabigatran with different estimates of renal function for the treatment of atrial fibrillation (AF) is unknown. Renal function is routinely estimated by the chronic kidney disease epidemiology initiative equation (CKD-EPI) and used to guide dosing. The aim of this study was to investigate the risk of adverse outcomes for patients with AF when different estimators of renal function are used. Material and Methods: AF patient data were extracted from national administrative databases. Renal function was estimated using Cockcroft–Gault, CKD-EPI, and CKD-EPI adjusted for body surface area (CKD-EPI-BSA). Outcomes of cerebrovascular accident (CVA), systemic embolism (SE), and hemorrhage were extracted. Results: In total, 2,425 patients were identified, among whom there were hospitalizations for 138 (5.7%) hemorrhagic events, 45 (1.9%) CVA/SE, and 33 (1.4%) unspecified CVA. The level of agreement of Cockcroft–Gault with CKD-EPI and CKD-EPI-BSA yielded weighted kappa statistics of 0.47 and 0.71, respectively. CKD-EPI and CKD-EPI-BSA significantly overestimated renal function in elderly patients, resulting in higher recommended doses compared with Cockcroft–Gault. The hazard ratio for a hemorrhagic event was 2.32 (95% confidence interval, 1.22–4.42; p = 0.01) when a high dose was given compared with the normal dose based on Cockcroft–Gault. Conclusion: Both the CKD-EPI and CKD-EPI-BSA equations significantly overestimated renal function in the elderly population compared with the Cockcroft–Gault equation. This may lead to dose selection errors for dabigatran, particularly for those with severe impairment, increasing the risk of adverse outcomes. Hence, the CKD-EPI and CKD-EPI-BSA equations should not be substituted for the Cockcroft–Gault equation in the elderly for the purpose of renal dosage adjustments.
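Of the renal-function equations compared, Cockcroft–Gault is simple enough to state inline (a sketch; the piecewise CKD-EPI equations are omitted here):

```python
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female):
    # Cockcroft-Gault creatinine clearance in mL/min; serum creatinine
    # in mg/dL. The 0.85 correction factor applies for women.
    crcl = (140.0 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl
```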
27

Brannath, Werner, Matthias Brückner, Meinhard Kieser, and Geraldine Rauch. "The Average Hazard Ratio – A Good Effect Measure for Time-to-event Endpoints when the Proportional Hazard Assumption is Violated?" Methods of Information in Medicine 57, no. 03 (May 2018): 089–100. http://dx.doi.org/10.3414/me17-01-0058.

Abstract:
Background: In many clinical trial applications, the endpoint of interest corresponds to a time-to-event endpoint. In this case, group differences are usually expressed by the hazard ratio. Group differences are commonly assessed by the logrank test, which is optimal under the proportional hazard assumption. However, there are many situations in which this assumption is violated. Especially in applications where a full population and several subgroups, or a composite time-to-first-event endpoint and several components, are considered, the proportional hazard assumption usually does not simultaneously hold true for all test problems under investigation. As an alternative effect measure, Kalbfleisch and Prentice proposed the so-called 'average hazard ratio'. The average hazard ratio is based on a flexible weighting function to modify the influence of time and has a meaningful interpretation even in the case of non-proportional hazards. Despite this favorable property, it is hardly ever used in practice, whereas the standard hazard ratio is commonly reported in clinical trials regardless of whether the proportional hazard assumption holds true or not. Objectives: There exist two main approaches to construct corresponding estimators and tests for the average hazard ratio, the first relying on weighted Cox regression and the second on a simple plug-in estimator. The aim of this work is to give a systematic comparison of these two approaches and the standard logrank test for different time-to-event settings with proportional and non-proportional hazards, and to illustrate the pros and cons in application. Methods: We conduct a systematic comparative study based on Monte Carlo simulations and a real clinical trial example. Results: Our results suggest that the properties of the average hazard ratio depend on the underlying weighting function. The two approaches to construct estimators and related tests show very similar performance for adequately chosen weights. In general, the average hazard ratio defines a more valid effect measure than the standard hazard ratio under non-proportional hazards, and the corresponding tests provide a power advantage over the common logrank test. Conclusions: As non-proportional hazards are often met in clinical practice and the average hazard ratio tests often outperform the common logrank test, this approach should be used more routinely in applications.
28

Hu, Mengjin, Xiaoning Wang, Jiangshan Tan, Jingang Yang, Xiaojin Gao, and Yuejin Yang. "Causal Associations between Paternal Longevity and Risks of Cardiovascular Diseases." Journal of Cardiovascular Development and Disease 9, no. 8 (July 26, 2022): 233. http://dx.doi.org/10.3390/jcdd9080233.

Abstract:
Background: Observational studies have suggested that paternal longevity is associated with reduced risks of cardiovascular diseases, yet the causal association remains to be determined. Objectives: To investigate whether Mendelian randomization (MR) results support a causal role of paternal longevity for risks of cardiovascular diseases. Methods: Genetic variants associated with paternal longevity and cardiovascular diseases were obtained from public genome-wide association study data. We used inverse variance weighted MR under a random-effects model to provide causal estimates between paternal longevity and cardiovascular diseases. Results: Paternal longevity was associated with decreased risks of coronary heart disease (odds ratio (OR): 0.08; 95% confidence interval (CI): 0.02–0.37; p = 0.001) and peripheral artery disease (OR: 0.15; 95% CI: 0.03–0.65; p = 0.011). No significant differences were observed in hypertension, atrial fibrillation, heart failure, transient ischemic attack, ischemic stroke, or cardiac death. The weighted median method revealed consistent results between genetically instrumented paternal longevity and decreased risk of coronary heart disease and peripheral artery disease. No significant differences were observed in the MR-Egger results. Multivariable MR consistently indicated causal associations between paternal longevity and decreased cardiovascular diseases. The leave-one-out analysis suggested that the causal associations were not affected by individual single-nucleotide polymorphisms. The intercept of the MR-Egger estimator and funnel plot revealed no indication of horizontal pleiotropic effects. Conclusions: Our MR analyses supported a causal role of paternal longevity for decreased risks of coronary heart disease and peripheral artery disease, which highlighted the need for better monitoring and intervention of cardiovascular diseases in populations with premature paternal death.
29

Wang, Ziteng, Junfeng Li, and Yonghong Yan. "Target Speaker Localization Based on the Complex Watson Mixture Model and Time-Frequency Selection Neural Network." Applied Sciences 8, no. 11 (November 21, 2018): 2326. http://dx.doi.org/10.3390/app8112326.

Abstract:
Common sound source localization algorithms focus on localizing all the active sources in the environment. While the source identities are generally unknown, retrieving the location of a speaker of interest requires extra effort. This paper addresses the problem of localizing a speaker of interest from a novel perspective by first performing time-frequency selection before localization. The speaker of interest, namely the target speaker, is assumed to be sparsely active in the signal spectra. The target speaker-dominant time-frequency regions are separated by a speaker-aware Long Short-Term Memory (LSTM) neural network, and they are sufficient to determine the Direction of Arrival (DoA) of the target speaker. Speaker-awareness is achieved by utilizing a short target utterance to adapt the hidden layer outputs of the neural network. The instantaneous DoA estimator is based on the probabilistic complex Watson Mixture Model (cWMM), and a weighted maximum likelihood estimation of the model parameters is accordingly derived. Simulative experiments show that the proposed algorithm works well in various noisy conditions and remains robust when the signal-to-noise ratio is low and when a competing speaker exists.
30

Toyama, Naoki, Daisuke Ekuni, Daisuke Matsui, Teruhide Koyama, Masahiro Nakatochi, Yukihide Momozawa, Michiaki Kubo, and Manabu Morita. "Comprehensive Analysis of Risk Factors for Periodontitis Focusing on the Saliva Microbiome and Polymorphism." International Journal of Environmental Research and Public Health 18, no. 12 (June 14, 2021): 6430. http://dx.doi.org/10.3390/ijerph18126430.

Abstract:
Few studies have exhaustively assessed relationships among polymorphisms, the microbiome, and periodontitis. The objective of the present study was to assess associations simultaneously among polymorphisms, the microbiome, and periodontitis. We used propensity score matching with a 1:1 ratio to select subjects, and then 22 individuals (mean age ± standard deviation, 60.7 ± 9.9 years) were analyzed. After saliva collection, V3-4 regions of the 16S rRNA gene were sequenced to investigate microbiome composition, alpha diversity (Shannon index, Simpson index, Chao1, and abundance-based coverage estimator) and beta diversity using principal coordinate analysis (PCoA) based on weighted and unweighted UniFrac distances. A total of 51 single-nucleotide polymorphisms (SNPs) related to periodontitis were identified. The frequencies of SNPs were collected from Genome-Wide Association Study data. The PCoA of unweighted UniFrac distance showed a significant difference between periodontitis and control groups (p < 0.05). There were no significant differences in alpha diversity and PCoA of weighted UniFrac distance (p > 0.05). Two families (Lactobacillaceae and Desulfobulbaceae) and one species (Porphyromonas gingivalis) were observed only in the periodontitis group. No SNPs showed significant expression. These results suggest that periodontitis was related to the presence of P. gingivalis and the families Lactobacillaceae and Desulfobulbaceae but not SNPs.
31

Lin, Jinzhi, Yun Zhang, Na Li, and Hongling Jiang. "Joint Source-Channel Decoding of Polar Codes for HEVC-Based Video Streaming." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (November 30, 2022): 1–23. http://dx.doi.org/10.1145/3502208.

Abstract:
Ultra High-Definition (UHD) and Virtual Reality (VR) video streaming over 5G networks are emerging, in which High-Efficiency Video Coding (HEVC) is used as source coding to compress videos more efficiently and polar codes are used as channel coding to transmit the bitstream reliably over an error-prone channel. In this article, a novel Joint Source-Channel Decoding (JSCD) scheme of polar codes for HEVC-based video streaming is presented to improve streaming reliability and visual quality. Firstly, a Kernel Density Estimation (KDE) fitting approach is proposed to estimate the positions of erroneous channel-decoded bits. Secondly, a modified polar decoder called R-SCFlip is designed to improve channel decoding accuracy. Finally, to combine the KDE estimator and the R-SCFlip decoder, the JSCD scheme is implemented in an iterative process. Extensive experimental results reveal that, compared with conventional methods without JSCD, the error data-frame correction ratios are increased. On average, 1.07% and 1.11% Frame Error Ratio (FER) improvements are achieved for Additive White Gaussian Noise (AWGN) and Rayleigh fading channels, respectively. Meanwhile, the quality of the recovered videos is significantly improved. For the 2D videos, the average Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) gains reach 14% and 34%, respectively. For the 360° videos, the average improvements in terms of Weighted-to-Spherically-uniform PSNR (WS-PSNR) and Voronoi-based Video Multimethod Assessment Fusion (VI-VMAF) reach 21% and 7%, respectively.
32

Steenweg, Claas, Jonas Habicht, and Kerstin Wohlgemuth. "Continuous Isolation of Particles with Varying Aspect Ratios up to Thin Needles Achieving Free-Flowing Products." Crystals 12, no. 2 (January 19, 2022): 137. http://dx.doi.org/10.3390/cryst12020137.

Abstract:
The continuous vacuum screw filter (CVSF) for small-scale continuous product isolation of suspensions was operated for the first time with cuboid-shaped and needle-shaped particles. These high aspect ratio particles are very common in pharmaceutical manufacturing processes and provide challenges in filtration, washing, and drying processes. Moreover, the flowability decreases and undesired secondary processes of attrition, breakage, and agglomeration may occur intensively. Nevertheless, in this study, it is shown that even cuboid and needle-shaped particles (l-alanine) can be processed within the CVSF preserving the product quality in terms of particle size distribution (PSD) and preventing breakage or attrition effects. A dynamic image analysis-based approach combining axis length distributions (ALDs) with a kernel-density estimator was used for evaluation. This approach was extended with a quantification of the center of mass of the density-weighted ALDs, providing a measure to analyze the preservation of the inlet PSD statistically. Moreover, a targeted residual moisture below 1% could be achieved by adding a drying module (Tdry = 60 °C) to the modular setup of the CVSF.
33

Bayar, Yilmaz. "Macroeconomic, Institutional and Bank-Specific Determinants of Non-Performing Loans in Emerging Market Economies: A Dynamic Panel Regression Analysis." Journal of Central Banking Theory and Practice 8, no. 3 (September 1, 2019): 95–110. http://dx.doi.org/10.2478/jcbtp-2019-0026.

Abstract:
The banking sector matters for various macroeconomic and microeconomic variables in terms of mobilizing funds, increasing savings, providing alternative investment instruments suited to every person, minimizing the risks of adverse selection and moral hazard, allocating funds to the most productive projects, and diversifying risk. Therefore, the sound functioning of the banking sector is critical, especially for emerging and developing countries. This study explores the macroeconomic, institutional, and bank-specific factors behind nonperforming banking loans, as an indicator of banking sector functioning, in emerging market economies over the 2000-2013 period by employing the system GMM dynamic panel data estimator. Results of the dynamic panel regression analysis showed that economic growth, inflation, economic freedom (institutional development), return on assets and equity, regulatory capital to risk-weighted assets, and noninterest income to total income affected nonperforming loans negatively, while unemployment, public debt, credit growth, lagged values of nonperforming loans, the cost-to-income ratio, and financial crises affected nonperforming loans positively.
34

Kasana, E., K. Chauhan, and B. P. Sahoo. "Impact of monetary policy on bank loans in India." Finance: Theory and Practice 27, no. 3 (July 12, 2023): 56–64. http://dx.doi.org/10.26794/2587-5671-2023-27-3-56-64.

Abstract:
This research paper investigates monetary transmission in India through the bank lending channel, to determine whether a change in monetary policy affects bank loans. A balanced panel of 50 commercial banks covering a timeframe of 11 years, from 2009 to 2020, is used. The dynamic panel is estimated with the Generalized Method of Moments (GMM) estimator developed by Arellano and Bond and by Blundell and Bover. The results indicate that the bank lending channel has improved banks' resilience to monetary shocks. This paper finds that bank characteristics like size, liquidity, and capital have a substantial impact on bank lending. The study concludes that the repo rate, cash reserve ratio, and weighted average call rate are imperative instruments of monetary policy transmission. Banks with small size, capital, and liquidity are more sensitive to any variation in monetary policy as compared to large banks.
35

Aymon, R., B. Gilbert, D. Mongin, E. Nham, C. Laedermann, R. Müller, K. Lauper, D. Courvoisier, and A. Finckh. "POS1420 DOUBLY ROBUST ESTIMATOR FOR AVERAGE TREATMENT EFFECT AS SENSITIVITY ANALYSIS FOR COMPARATIVE EFFECTIVENESS RESEARCH. AN EXAMPLE COMPARING DRUG MAINTENANCE BETWEEN BARICITINIB AND ALTERNATIVE BIOLOGIC DMARDS." Annals of the Rheumatic Diseases 81, Suppl 1 (May 23, 2022): 1053. http://dx.doi.org/10.1136/annrheumdis-2022-eular.1822.

Abstract:
Background: Drug maintenance is a common outcome measure in real-world effectiveness studies, because it combines a measure of drug effectiveness with its tolerance and safety. Major hurdles of observational studies are potential selection biases and confounding. Cox proportional hazards models address this issue by adjusting for potential confounders, but misspecification of the model may lead to biased estimates. Augmented Inverse Probability Treatment Weighting (AIPTW) has the attractive property of being doubly robust, meaning that only one of the two underlying models has to be correctly specified to obtain consistent estimates. It can be used as a sensitivity analysis for Cox models when analyzing time-to-event data.
Objectives: To evaluate the AIPTW estimator and test the robustness of the results obtained by a Cox model.
Methods: Previous analyses in the Swiss rheumatoid arthritis (RA) registry (SCQM) had demonstrated that time to all-cause discontinuation was significantly longer in RA patients on baricitinib (BARI, N = 273) compared to TNF inhibitors (TNFi, N = 473), but not compared to biologics with other modes of action (OMA, N = 378) [1], in an adjusted Cox regression including age, gender, BMI, concomitant csDMARD, prednisone, CDAI score, disease duration, smoking status, line of therapy, and seropositivity. Here we repeat the same analysis using AIPTW, including the same potential confounders. We combine a propensity score from a logistic regression model with an inverse probability weighted Cox regression. Two implementations of the AIPTW estimator are considered: first, the RiskRegression package in R to obtain risk ratios; second, a manual implementation of AIPTW to obtain the average treatment effect as the difference in median survival time.
Results: Time to treatment discontinuation was significantly longer for RA patients on BARI compared to patients on TNFi according to the adjusted Cox model (HR = 1.79), and a similar non-significant trend existed when compared to OMA (HR = 1.29). When considering 90-day treatment discontinuation measured with AIPTW, the results were qualitatively very similar: the risk ratio between the BARI and TNFi groups was statistically significant (RR = 2.51), while that of BARI against OMA was larger than one (RR = 1.47) but not statistically significant. Confidence intervals were larger with the AIPTW estimation.
Table 1. Cox regression HRs and AIPTW risk ratios
                Cox regression hazard ratio (95% CI)   AIPTW estimate of 90-day risk ratio of treatment discontinuation (95% CI)
BARI vs. TNFi   1.79* (1.34–2.38)                      2.51* (1.19–3.83)
BARI vs. OMA    1.29 (0.96–1.73)                       1.47 (0.76–2.18)
Legend: BARI: baricitinib; TNFi: TNF inhibitors; OMA: biologics with other modes of action; AIPTW: Augmented Inverse Probability Treatment Weighting; 95% CI: 95% confidence interval.
*: statistically significant result at the p < 0.05 level.
Figure 1. Absolute risk of treatment discontinuation over time between patients on baricitinib and patients on TNF inhibitors, estimated with AIPTW.
Conclusion: The AIPTW estimates were qualitatively consistent with the adjusted Cox model, supporting the robustness of the original findings.
Conflict of interest: This analysis has been made possible by financial support of Eli Lilly (Suisse) SA to the Geneva University Hospitals (HUG).
References: [1] Ann Rheum Dis, supplement 1, year 2021. DOI: 10.1136/annrheumdis-2021-eular.1781
Disclosure of interests: Romain Aymon: None declared; Benoit Gilbert: None declared; Denis Mongin: None declared; Eric Nham: None declared; Cedric Laedermann: Employee of Eli Lilly; Rüdiger Müller: Consultant of Streuli Pharma, Gebro Pharma, AbbVie; Kim Lauper: Speakers bureau: Pfizer, Viatris and Celltrion, Consultant of Pfizer; Delphine Courvoisier: None declared; Axel Finckh: None declared.
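For readers unfamiliar with the doubly robust estimator used here, a minimal sketch of the generic AIPTW formula for a binary 90-day discontinuation outcome, on synthetic data (the variables and model specifications below are hypothetical stand-ins, not the authors' registry analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for registry data: X = confounders, a = treatment
# (1 = BARI, 0 = comparator), y = 90-day discontinuation indicator.
n = 800
X = rng.normal(size=(n, 4))
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * a + X[:, 1]))))

# Model 1: propensity score e(X) = P(A=1 | X).
ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]

# Model 2: outcome regressions m1(X), m0(X) from a single model with A.
Xa = np.column_stack([X, a])
om = LogisticRegression().fit(Xa, y)
m1 = om.predict_proba(np.column_stack([X, np.ones(n)]))[:, 1]
m0 = om.predict_proba(np.column_stack([X, np.zeros(n)]))[:, 1]

# AIPTW (doubly robust) counterfactual risks: consistent if either the
# propensity model or the outcome model is correctly specified.
risk1 = np.mean(a * y / ps - (a - ps) / ps * m1)
risk0 = np.mean((1 - a) * y / (1 - ps) + (a - ps) / (1 - ps) * m0)
print(f"AIPTW risk ratio (treated vs comparator): {risk1 / risk0:.2f}")
```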
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Huabin, Junbin Huang, Wenhua Liao, Jiannan Xu, Zhongyuan He, Yong Liu, Zhijie He, and Chun Chen. "Prognostic Value of the Red Cell Distribution Width in Patients with Sepsis-Induced Acute Respiratory Distress Syndrome: A Retrospective Cohort Study." Disease Markers 2021 (June 2, 2021): 1–8. http://dx.doi.org/10.1155/2021/5543822.

Full text
Abstract:
Objective. The prognostic value of the red cell distribution width (RDW) in patients with sepsis-induced acute respiratory distress syndrome (ARDS) is still elusive. This study is aimed at determining whether RDW is a prognostic indicator of sepsis-induced ARDS. Methods. This retrospective cohort study included 1161 patients with sepsis-induced ARDS. The datasets were acquired from the Medical Information Mart for Intensive Care III database. The locally weighted scatter-plot smoothing technique, Cox regression, Kaplan-Meier estimator, and subgroup analysis were carried out to evaluate the association between RDW and 90-day mortality. Results. The RDW and mortality had a roughly linear increasing relationship. The Cox regression model results were as follows: for level 2 (14.5% < RDW < 16.2%), hazard ratio HR = 1.35, 95% confidence interval CI = 1.03–1.77; for level 3 (RDW ≥ 16.2%), HR = 2.07, 95% CI = 1.59–2.69. When RDW was treated as a continuous variable, HR = 1.11, 95% CI = 1.06–1.15. The P values of the interaction between the RDW and covariates were greater than 0.05. Conclusion. RDW is a new independent prognostic marker for patients with sepsis-induced ARDS.
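As an illustration of the kind of Cox model reported above, a minimal sketch with the lifelines library on synthetic data (the variable names, effect sizes, and censoring scheme are hypothetical, not the MIMIC-III cohort):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)

# Hypothetical stand-in for the cohort: RDW, follow-up time (days,
# administratively censored at 90), and a death indicator.
n = 500
rdw = rng.normal(15.0, 1.5, n)
hazard = 0.01 * np.exp(0.10 * (rdw - 15.0))   # higher RDW -> higher hazard
time = np.minimum(rng.exponential(1 / hazard), 90.0)
event = (time < 90.0).astype(int)

df = pd.DataFrame({"rdw": rdw, "time": time, "event": event})

# Cox proportional hazards model with RDW as a continuous covariate,
# mirroring the HR-per-unit-RDW analysis reported in the abstract.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```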
APA, Harvard, Vancouver, ISO, and other styles
37

Manceau, Alain, Matthew Marcus, and Thomas Lenoir. "Estimating the number of pure chemical components in a mixture by X-ray absorption spectroscopy." Journal of Synchrotron Radiation 21, no. 5 (July 31, 2014): 1140–47. http://dx.doi.org/10.1107/s1600577514013526.

Full text
Abstract:
Principal component analysis (PCA) is a multivariate data analysis approach commonly used in X-ray absorption spectroscopy to estimate the number of pure compounds in multicomponent mixtures. This approach seeks to describe a large number of multicomponent spectra as weighted sums of a smaller number of component spectra. These component spectra are in turn considered to be linear combinations of the spectra from the actual species present in the system from which the experimental spectra were taken. The dimension of the experimental dataset is given by the number of meaningful abstract components, as estimated by the cascade or variance of the eigenvalues (EVs), the factor indicator function (IND), or the F-test on reduced EVs. It is shown on synthetic and real spectral mixtures that the performance of the IND and F-test critically depends on the amount of noise in the data, and may result in considerable underestimation or overestimation of the number of components even for a signal-to-noise (s/n) ratio of the order of 80 (σ = 20) in a XANES dataset. For a given s/n ratio, the accuracy of the component recovery from a random mixture depends on the size of the dataset and the number of components, which is not known in advance, and deteriorates for larger datasets because the analysis picks up more noise components. The scree plot of the EVs for the components yields one or two values close to the significant number of components, but the result can be ambiguous and its uncertainty is unknown. A new estimator, NSS-stat, which incorporates the experimental error into XANES data analysis, is introduced and tested. It is shown that NSS-stat produces superior results compared with the three traditional forms of PCA-based component-number estimation. A user-friendly graphical interface for the calculation of EVs, IND, F-test and NSS-stat from a XANES dataset has been developed under LabVIEW for Windows and is supplied in the supporting information. Its possible application to EXAFS data is discussed, and several XANES and EXAFS datasets are also included for download.
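As an illustration of the eigenvalue-based estimators the abstract evaluates, a sketch of the eigenvalue cascade and one common textbook form of Malinowski's indicator function IND on a synthetic mixture dataset (the paper's NSS-stat estimator is not reproduced; conventions for the IND formula vary slightly across the literature, so treat this as an assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "XANES-like" dataset: 40 spectra (rows), 200 energy points
# (columns), mixtures of k = 3 pure components plus noise.
nspec, nchan, k = 40, 200, 3
components = rng.normal(size=(k, nchan)).cumsum(axis=1)   # smooth-ish spectra
weights = rng.dirichlet(np.ones(k), size=nspec)
D = weights @ components + rng.normal(scale=0.05, size=(nspec, nchan))

# Eigenvalue cascade from the singular values of the data matrix.
ev = np.linalg.svd(D, compute_uv=False) ** 2

# One textbook form of Malinowski's real error RE(n) and indicator
# function IND(n), with r the larger and c the smaller matrix dimension:
#   RE(n)  = sqrt( sum_{j>n} ev_j / (r * (c - n)) )
#   IND(n) = RE(n) / (c - n)^2
# The estimated number of components is the n that minimizes IND(n).
r, c = max(D.shape), min(D.shape)
n = np.arange(1, c)
re = np.sqrt(np.array([ev[m:].sum() for m in n]) / (r * (c - n)))
ind = re / (c - n) ** 2
print("leading eigenvalues:", np.round(ev[:6], 2))
print("IND minimum at n =", n[np.argmin(ind)])
```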
APA, Harvard, Vancouver, ISO, and other styles
38

Andrews, Donald W. K., and Patrik Guggenberger. "ASYMPTOTIC SIZE OF KLEIBERGEN’S LM AND CONDITIONAL LR TESTS FOR MOMENT CONDITION MODELS." Econometric Theory 33, no. 5 (October 3, 2016): 1046–80. http://dx.doi.org/10.1017/s0266466616000347.

Full text
Abstract:
An influential paper by Kleibergen (2005, Econometrica 73, 1103–1123) introduces Lagrange multiplier (LM) and conditional likelihood ratio-like (CLR) tests for nonlinear moment condition models. These procedures aim to have good size performance even when the parameters are unidentified or poorly identified. However, the asymptotic size and similarity (in a uniform sense) of these procedures have not been determined in the literature. This paper does so. It shows that the LM test has correct asymptotic size and is asymptotically similar for a suitably chosen parameter space of null distributions. It shows that the CLR tests also have these properties when the dimension p of the unknown parameter θ equals 1. When p ≥ 2, however, the asymptotic size properties are found to depend on how the conditioning statistic, upon which the CLR tests depend, is weighted. Two weighting methods have been suggested in the literature. The paper shows that the CLR tests are guaranteed to have correct asymptotic size when p ≥ 2 when the weighting is based on an estimator of the variance of the sample moments, i.e., moment-variance weighting, combined with the Robin and Smith (2000, Econometric Theory 16, 151–175) rank statistic. The paper also determines a formula for the asymptotic size of the CLR test when the weighting is based on an estimator of the variance of the sample Jacobian. However, the results of the paper do not guarantee correct asymptotic size when p ≥ 2 with Jacobian-variance weighting combined with the Robin and Smith rank statistic, because two key sample quantities are not necessarily asymptotically independent under some identification scenarios. Analogous results for confidence sets are provided. Even for the special case of a linear instrumental variable regression model with two or more right-hand side endogenous variables, the results of the paper are new to the literature.
APA, Harvard, Vancouver, ISO, and other styles
39

Fan, K., J. Hu, J. Hou, Q. Yu, P. F. He, X. LI, and S. X. Zhang. "POS0394 THE CAUSAL RELATIONSHIPS BETWEEN MEMBRANE-ASSOCIATED PROGESTERONE RECEPTOR AND OSTEOPOROSIS: EVIDENCE FROM HUMAN STUDIES." Annals of the Rheumatic Diseases 82, Suppl 1 (May 30, 2023): 452.2–453. http://dx.doi.org/10.1136/annrheumdis-2023-eular.2565.

Full text
Abstract:
Background: Observational studies have shown that progesterone is associated with osteoporosis (OP) [1], and progesterone has sometimes been used as a therapy in osteoporosis treatment [2]. However, controversy over causality remains unresolved [3].
Objectives: To reveal the causal association between progesterone and OP.
Methods: We performed two-sample Mendelian randomization (MR) analyses. The summary statistics for the membrane-associated progesterone receptor were obtained from the INTERVAL study, which characterized the genetic architecture of 1478 human plasma proteins [4]. The genetic associations of femoral neck bone mineral density (FN-BMD, 32735 samples), forearm bone mineral density (FA-BMD, 8143 samples), lumbar spine bone mineral density (LS-BMD, 28498 samples), and heel bone mineral density (eBMD, 426824 samples) were obtained from the GEnetic Factors for OSteoporosis Consortium (GEFOS) [5] and UK Biobank (426824 individuals) [6]. Independent variants in low linkage disequilibrium (r2 < 0.001) at the threshold p < 5e-6 were selected as single nucleotide polymorphism (SNP) instrumental variables. Inverse variance weighted (IVW), weighted median estimator (WME), and MR-Egger regression (MR-ER) methods were used in the Mendelian randomization analysis, followed by sensitivity analyses. MR Pleiotropy Residual Sum and Outlier (MR-PRESSO) was used to detect and correct outliers in the IVW linear regression [7].
Results: MR analyses found that an increased level of membrane-associated progesterone receptor was causally linked to decreased FN-BMD (odds ratio = 0.95, 95% confidence interval: 0.92-0.99, P = 0.0086). No heterogeneity (Q = 9.77, P_Q = 0.551) or horizontal pleiotropy (Egger regression intercept = -0.008, P = 0.424) was detected. No significant causal relationship was found between progesterone and the other bone mineral density measures (FA-BMD, LS-BMD, and eBMD) (Figure 1).
Conclusion: This study supports a detrimental causal effect of progesterone on OP risk. Further randomized controlled trials are needed to clarify the specific mechanisms.
References: [1] Recker RR, Davies KM, Dowd RM, Heaney RP. The effect of low-dose continuous estrogen and progesterone therapy with calcium and vitamin D on bone in elderly women. A randomized, controlled trial. Ann Intern Med. 1999;130(11):897-904. [2] Prior JC. Progesterone for the prevention and treatment of osteoporosis in women. Climacteric. 2018;21(4):366-74. [3] Leonetti HB, Longo S, Anasti JN. Transdermal progesterone cream for vasomotor symptoms and postmenopausal bone loss. Obstet Gynecol. 1999;94(2):225-8. [4] Sun BB, Maranville JC, Peters JE, Stacey D, Staley JR, Blackshaw J, et al. Genomic atlas of the human plasma proteome. Nature. 2018;558(7708):73-9. [5] Zheng HF, Forgetta V, Hsu YH, Estrada K, Rosello-Diez A, Leo PJ, et al. Whole-genome sequencing identifies EN1 as a determinant of bone density and fracture. Nature. 2015;526(7571):112-7. [6] Morris JA, Kemp JP, Youlten SE, Laurent L, Logan JG, Chai RC, et al. An atlas of genetic influences on osteoporosis in humans and mice. Nat Genet. 2019;51(2):258-66. [7] Verbanck M, Chen CY, Neale B, Do R. Detection of widespread horizontal pleiotropy in causal relationships inferred from Mendelian randomization between complex traits and diseases. Nat Genet. 2018;50(5):693-8.
Figure 1. Forest plot of the results of MR analysis between membrane-associated progesterone receptor and OP. IVW, inverse variance weighted; MR-ER, MR-Egger regression; WME, weighted median estimator; OP, osteoporosis; CI, confidence interval; SNP, single nucleotide polymorphism; FN-BMD, femoral neck bone mineral density; FA-BMD, forearm bone mineral density; LS-BMD, lumbar spine bone mineral density; eBMD, heel bone mineral density.
Acknowledgements: NIL.
Disclosure of Interests: None declared.
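A minimal sketch of two of the MR estimators named above, IVW and a simplified (non-interpolated) weighted median, applied to synthetic two-sample summary statistics (all numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic two-sample MR summary statistics for J independent SNPs:
# bx, bxse = SNP-exposure effects; by, byse = SNP-outcome effects.
J, beta_true = 30, -0.05
bx = rng.uniform(0.05, 0.2, J)
byse = np.full(J, 0.01)
by = beta_true * bx + rng.normal(0, byse)

# Per-SNP Wald ratios and their first-order standard errors.
ratio = by / bx
ratio_se = byse / np.abs(bx)

# Inverse variance weighted (IVW) estimate: a precision-weighted mean.
w = 1 / ratio_se**2
beta_ivw = np.sum(w * ratio) / np.sum(w)
se_ivw = np.sqrt(1 / np.sum(w))

# Weighted median estimator (WME), simplified: the ratio at which the
# normalized cumulative weight of the sorted ratios first reaches 50%.
order = np.argsort(ratio)
cumw = np.cumsum(w[order]) / np.sum(w)
beta_wme = ratio[order][np.searchsorted(cumw, 0.5)]

print(f"IVW: {beta_ivw:.3f} (SE {se_ivw:.3f}); weighted median: {beta_wme:.3f}")
```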
APA, Harvard, Vancouver, ISO, and other styles
40

Dusingize, Jean Claude, Catherine M. Olsen, Jiyuan An, Nirmala Pandeya, Matthew H. Law, Bridie S. Thompson, Alisa M. Goldstein, et al. "Body mass index and height and risk of cutaneous melanoma: Mendelian randomization analyses." International Journal of Epidemiology 49, no. 4 (February 18, 2020): 1236–45. http://dx.doi.org/10.1093/ije/dyaa009.

Full text
Abstract:
Background: Height and body mass index (BMI) have both been positively associated with melanoma risk, although findings for BMI have been less consistent than for height. It remains unclear, however, whether these associations reflect causality or are due to residual confounding by environmental and lifestyle risk factors. We re-evaluated these associations using a two-sample Mendelian randomization (MR) approach.
Methods: We identified single nucleotide polymorphisms (SNPs) for BMI and height from separate genome-wide association study (GWAS) meta-analyses. We obtained melanoma SNPs from the most recent melanoma GWAS meta-analysis comprising 12 874 cases and 23 203 controls. We used the inverse variance-weighted estimator to derive separate causal risk estimates across all SNP instruments for BMI and height.
Results: Based on the combined estimate derived from 730 SNPs for BMI, we found no evidence of an association between genetically predicted BMI and melanoma [odds ratio (OR) per one standard deviation (1 SD) (4.6 kg/m2) increase in BMI 1.00, 95% confidence interval (CI): 0.91–1.11]. In contrast, we observed a positive association between genetically predicted height (derived from a pooled estimate of 3290 SNPs) and melanoma risk [OR 1.08, 95% CI: 1.02–1.13, per 1 SD (9.27 cm) increase in height]. Sensitivity analyses using two alternative MR methods yielded similar results.
Conclusions: These findings provide no evidence for a causal association between higher BMI and melanoma, but support the notion that height is causally associated with melanoma risk. Mechanisms through which height influences melanoma risk remain unclear, and it remains possible that the effect could be mediated through diverse pathways including growth factors and even socioeconomic status.
APA, Harvard, Vancouver, ISO, and other styles
41

de Souza, Talita Araujo, Karen Kaline Teixeira, Reginaldo Lopes Santana, Cinthia Barros Penha, Arthur de Almeida Medeiros, Kenio Costa de Lima, and Isabelle Ribeiro Barbosa. "Intra-urban differentials of congenital and acquired syphilis and syphilis in pregnant women in an urban area in northeastern Brazil." Transactions of The Royal Society of Tropical Medicine and Hygiene 115, no. 9 (February 6, 2021): 1010–18. http://dx.doi.org/10.1093/trstmh/trab011.

Full text
Abstract:
Background: Currently, syphilis is considered an epidemic disease worldwide. The objective of this study was to identify intra-urban differentials in the occurrence of congenital and acquired syphilis and syphilis in pregnant women in the city of Natal, in northeast Brazil.
Methods: Cases of syphilis recorded by the municipal surveillance system from 1 January 2011 to 30 December 2018 were analysed. Spatial statistical analyses were performed using the weighted kernel density estimator with a quadratic smoothing function. SaTScan software was applied for the calculation of risk based on a discrete Poisson model.
Results: There were 2163 cases of acquired syphilis, 738 cases of syphilis in pregnant women and 1279 cases of congenital syphilis. Kernel density maps showed that cases are more prevalent in peripheral areas and in areas with more precarious urban infrastructure. In 2011–2014 and 2015–2018, seven statistically significant clusters of acquired syphilis were identified. From 2011 to 2014, the most likely cluster had a relative risk of 3.54 (log likelihood ratio [LLR] 38 895; p<0.001) and from 2015 to 2018 the relative risk was 0.54 (LLR 69 955; p<0.001).
Conclusions: In the municipality of Natal, there was a clustered pattern of spatial distribution of syphilis, with some areas presenting greater risk for the occurrence of new cases.
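The "quadratic" smoothing function in GIS kernel density tools commonly refers to the quartic (biweight) kernel; assuming that form, a minimal sketch of a weighted kernel density surface (coordinates, case counts, and bandwidth below are hypothetical):

```python
import numpy as np

def quartic_kde(points, weights, grid, bandwidth):
    """Weighted 2-D kernel density with the quartic (biweight) kernel,
    K(u) = (3/pi) * (1 - u^2)^2 for u <= 1, a common interpretation of the
    'quadratic smoothing function' in disease mapping."""
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
    u = d / bandwidth
    k = np.where(u <= 1.0, (3.0 / np.pi) * (1.0 - u**2) ** 2, 0.0)
    return (k * weights[None, :]).sum(axis=1) / bandwidth**2

rng = np.random.default_rng(5)
cases = rng.uniform(0, 10, size=(100, 2))            # case locations (km)
counts = rng.integers(1, 5, size=100).astype(float)  # cases per location
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

density = quartic_kde(cases, counts, grid, bandwidth=1.5)
print("peak smoothed intensity:", density.max().round(2))
```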
APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Xuefei, Min Feng, Hong Zhang, Chao Wang, Yixian Tang, Jinhao Xu, Dezhao Yan, and Chunling Wang. "Detecting Rock Glacier Displacement in the Central Himalayas Using Multi-Temporal InSAR." Remote Sensing 13, no. 23 (November 23, 2021): 4738. http://dx.doi.org/10.3390/rs13234738.

Full text
Abstract:
Rock glaciers represent typical periglacial landscapes and are distributed widely in alpine mountain environments. Rock glacier activity is a critical indicator of the state of water reserves, permafrost distribution, and landslide susceptibility. The dynamics of rock glacier activity in alpine periglacial environments are poorly quantified, especially in the central Himalayas. Multi-temporal Interferometric Synthetic Aperture Radar (MT-InSAR) has been shown to be a useful technique for detecting rock glacier deformation. In this study, we developed a multi-baseline persistent scatterer (PS) and distributed scatterer (DS) combined MT-InSAR method to monitor the activity of rock glaciers in the central Himalayas. In periglacial landforms, the application of the PS interferometry (PSI) method is restricted by insufficient PS points due to large temporal baseline intervals and temporal decorrelation, which hinders comprehensive measurement of rock glaciers. Thus, we first evaluated the rock glacier interferometric coherence of all possible interferometric combinations and determined a multi-baseline network based on rock glacier coherence; then, we constructed a Delaunay triangulation network (DTN) by exploiting both PS and DS points. To improve the robustness of deformation parameter estimation in the DTN, we combined the Nelder–Mead algorithm with the M-estimator method to estimate the deformation rate variation at the arcs of the DTN, and introduced a ridge-estimator-based weighted least squares (WLR) method for the inversion of the deformation rate from the deformation rate variation. We applied our method to Sentinel-1A ascending and descending geometry data (May 2018 to January 2019) and obtained deformation measurements for 4327 rock glaciers over the central Himalayas, at least 15% more than can be detected with single-geometry data alone. The line-of-sight (LOS) deformation of rock glaciers in the central Himalayas ranged from −150 mm to 150 mm. We classified the active deformation area (ADA) of each rock glacier with a threshold determined by the standard deviation of the deformation map. The results show that 49% of the detected rock glaciers (monitoring rate greater than 30%) are highly active, with an ADA ratio greater than 10%. After projecting the LOS deformation onto the steep slope direction and classifying rock glacier activity following the IPA Action Group guideline, 12% of the identified rock glaciers were classified as active and 86% as transitional. This research is the first application of a multi-baseline, PS- and DS-network-based MT-InSAR method to detecting rock glacier activity at large scale.
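The ridge-estimator-based weighted least squares (WLR) inversion step can be illustrated generically: arcs of the network observe differences of point-wise deformation rates, and a ridge-regularized WLS recovers the rates. A sketch on a toy network (the design matrix, weights, and ridge parameter below are hypothetical, not the paper's implementation):

```python
import numpy as np

def ridge_wls(A, y, w, k):
    """Ridge-regularized weighted least squares:
       x_hat = (A' W A + k I)^(-1) A' W y,
    where A maps point-wise rates to arc-wise rate differences, w are
    arc weights (e.g. coherence-based), and k is the ridge parameter."""
    W = np.diag(w)
    n = A.shape[1]
    return np.linalg.solve(A.T @ W @ A + k * np.eye(n), A.T @ W @ y)

rng = np.random.default_rng(6)

# Toy network: 5 points, 8 arcs; each arc observes the difference of the
# deformation rates at its two endpoints (point 0 fixed as reference).
n_pts = 5
arcs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4), (0, 4)]
rates = rng.normal(0, 10, n_pts)   # mm/yr, ground truth
rates[0] = 0.0                     # reference point
A = np.zeros((len(arcs), n_pts - 1))
for i, (p, q) in enumerate(arcs):  # columns exclude the reference point 0
    if p > 0:
        A[i, p - 1] = -1.0
    if q > 0:
        A[i, q - 1] = 1.0
y = A @ rates[1:] + rng.normal(0, 0.5, len(arcs))  # noisy arc observations
w = np.ones(len(arcs))

print("estimated rates:", ridge_wls(A, y, w, k=0.1).round(2))
print("true rates:     ", rates[1:].round(2))
```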
APA, Harvard, Vancouver, ISO, and other styles
43

Chaudhry, Nazir Ahmed. "The weighted jackknife for ratio estimation." Communications in Statistics - Theory and Methods 19, no. 9 (January 1990): 3283–313. http://dx.doi.org/10.1080/03610929008830382.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Monserud, Robert A., and John D. Marshall. "Allometric crown relations in three northern Idaho conifer species." Canadian Journal of Forest Research 29, no. 5 (May 1, 1999): 521–35. http://dx.doi.org/10.1139/x99-015.

Full text
Abstract:
Allometric equations predicting individual branch and total crown leaf area, leaf mass, and branch wood mass were developed for Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco var. glauca), ponderosa pine (Pinus ponderosa Dougl. ex Laws.), and western white pine (Pinus monticola Dougl. ex D. Don) on the Priest River Experimental Forest in northern Idaho. Whole crowns were weighed fresh in the field by crown quarter. Two antithetic random branches were sampled from each crown quarter, weighed fresh in the field, and returned to the laboratory for detailed analysis. Nonlinear weighted regression with the general allometric equation was used to estimate all parameters. For the branches, branch diameter and length, foliated length, and position in the crown explain 82-97% of the variation. Specific leaf area (leaf area/mass) differs significantly among species and increases with distance from the tree top. For whole trees, sapwood area at breast height, crown ratio and length, and crown competition factor (CCF) explain 94-99% of the variation. The assumption of linearity and constant ratio between leaf area and sapwood area held rather generally. Differences between two alternative estimators (branch summation vs. crown weighing) of total crown biomass and leaf area were not statistically significant. For stands, stand totals were estimated from the whole-tree equations and stand-inventory data. Generally, these stand estimates were intermediate between coastal forests west of the Cascades and drier forests in the rain shadow of the Rocky Mountain crest.
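A minimal sketch of fitting the general allometric equation y = a·x^b by weighted nonlinear regression, as described above, using scipy on synthetic branch data (variables and coefficient values are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def allometric(x, a, b):
    """General allometric equation y = a * x**b, e.g. branch leaf area
    as a function of branch diameter."""
    return a * np.power(x, b)

rng = np.random.default_rng(7)

# Synthetic branch data: diameter (cm) and leaf area (m^2), with error
# variance increasing with size, which motivates weighted fitting.
diam = rng.uniform(0.5, 6.0, 80)
leaf_area = allometric(diam, 0.15, 2.0) * np.exp(rng.normal(0, 0.2, 80))

# sigma encodes the weights: errors assumed proportional to the mean.
sigma = 0.2 * allometric(diam, 0.15, 2.0)
(a, b), cov = curve_fit(allometric, diam, leaf_area, p0=(0.1, 2.0), sigma=sigma)
print(f"a = {a:.3f}, b = {b:.3f} (true values: 0.15, 2.0)")
```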
APA, Harvard, Vancouver, ISO, and other styles
45

Swords, Douglas S., Ignacio Garrido-Laguna, Sean J. Mulvihill, Gregory J. Stoddard, Matthew A. Firpo, and Courtney L. Scaife. "Association of adjuvant chemotherapy with overall survival in resected pancreatic adenocarcinoma previously treated with neoadjuvant therapy." Journal of Clinical Oncology 36, no. 4_suppl (February 1, 2018): 404. http://dx.doi.org/10.1200/jco.2018.36.4_suppl.404.

Full text
Abstract:
404 Background: Guidelines for adjuvant chemotherapy in patients with resected pancreatic adenocarcinoma (PDAC) who received neoadjuvant chemotherapy are equivocal. A lymph node ratio (LNR) ≥ 0.15 may predict lack of benefit, but conflicting results have been reported. Methods: The National Cancer Database was searched to identify patients who were resected after neoadjuvant chemotherapy in 2006-2013. Exclusions: metastases at surgery, 90-day postoperative mortality, adjuvant radiation, and outlier interval from diagnosis to surgery (<2.5 or >10 months). The association between adjuvant chemotherapy and overall survival (OS) from diagnosis was examined using multivariable Cox regression and inverse probability of treatment weighted (IPTW) Cox regression. An IPTW-based estimator of the average treatment effect (ATE) was used to quantify the population-average survival benefit of treatment. Outcomes were examined in all patients and in those with LNR < 0.15 and ≥ 0.15. Results: 681/2488 patients (27%) received adjuvant chemotherapy. In multivariable Cox regression, adjuvant chemotherapy was associated with improved OS in the overall cohort and in patients with LNR < 0.15. A trend towards improved OS was also observed for those with LNR ≥ 0.15. After accounting for indication bias using IPTW, a significant survival benefit was observed only for patients with LNR < 0.15. The ATE among LNR < 0.15 patients was 3.3 (95% CI 1.0, 5.7) months, indicating that the average survival of the population would be 3.3 months longer if all patients received treatment. Conclusions: Adjuvant chemotherapy in resected PDAC patients who received neoadjuvant therapy appears to be beneficial in patients with negative lymph nodes or minimal nodal burden. A high LNR after neoadjuvant therapy may indicate adverse tumor biology that is less likely to derive a therapeutic benefit. [Table: see text]
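A hedged sketch of the IPTW machinery described here: stabilized weights from a logistic propensity model, then weighted Kaplan-Meier curves whose difference in medians is an ATE-type summary (synthetic data and hypothetical variable names, not the NCDB analysis):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# Hypothetical stand-in for the cohort: X = covariates, a = adjuvant
# chemotherapy indicator, time = overall survival (months), event = death.
n = 1000
X = rng.normal(size=(n, 3))
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
time = rng.exponential(24 * np.exp(0.2 * a - 0.3 * X[:, 1]))
event = rng.binomial(1, 0.8, n)

# Stabilized inverse probability of treatment weights:
# w = P(A=a) / P(A=a | X), with the propensity from logistic regression.
ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]
w = np.where(a == 1, a.mean() / ps, (1 - a.mean()) / (1 - ps))

# Weighted Kaplan-Meier curves per arm; the difference in median survival
# is an ATE-type summary of the population-average benefit.
medians = {}
for arm in (0, 1):
    kmf = KaplanMeierFitter()
    m = a == arm
    kmf.fit(time[m], event[m], weights=w[m])
    medians[arm] = kmf.median_survival_time_
print(f"weighted median OS difference: {medians[1] - medians[0]:.1f} months")
```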
APA, Harvard, Vancouver, ISO, and other styles
46

Austin, Peter C. "Inverse probability weighted estimation of the marginal odds ratio." Statistics in Medicine 27, no. 26 (November 20, 2008): 5560–63. http://dx.doi.org/10.1002/sim.3385.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Tchetgen Tchetgen, Eric J. "Inverse odds ratio-weighted estimation for causal mediation analysis." Statistics in Medicine 32, no. 26 (June 7, 2013): 4567–80. http://dx.doi.org/10.1002/sim.5864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Jiang, Yu Ying, and Xiao Feng Zhu. "Weighted-Correct Empirical likelihood for Linear EV Models." Advanced Materials Research 524-527 (May 2012): 3884–87. http://dx.doi.org/10.4028/www.scientific.net/amr.524-527.3884.

Full text
Abstract:
Empirical likelihood inference based on weighted correction in the linear errors-in-variables (EV) model with missing responses is studied. A weighted-correct empirical likelihood method is developed, and the weighted-correct empirical likelihood ratio is shown to be asymptotically standard chi-square. This result can be used directly to construct asymptotic confidence regions for the unknown parameters. The estimation procedure is relatively simple, and estimation efficiency is greatly improved.
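The paper's weighted-correct empirical likelihood for EV models with missing responses is more involved than can be shown here; as a minimal illustration of the empirical likelihood ratio machinery it builds on, here is Owen's EL for a scalar mean, with the -2 log ratio referred to a chi-square(1) distribution:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """Owen's empirical likelihood: returns -2 log R(mu) for a scalar mean.
    Solves sum_i z_i / (1 + lam * z_i) = 0 with z_i = x_i - mu for the
    Lagrange multiplier lam; then p_i = 1 / (n * (1 + lam * z_i))."""
    z = x - mu
    # lam must keep all 1 + lam*z_i > 0; bracket the root accordingly
    # (assumes mu lies strictly inside the range of the data).
    lo = (-1 / z.max()) * (1 - 1e-9)
    hi = (-1 / z.min()) * (1 - 1e-9)
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log(1 + lam * z))   # -2 log R(mu)

rng = np.random.default_rng(9)
x = rng.normal(0.3, 1.0, 100)

stat = el_log_ratio(x, mu=0.0)               # test H0: mu = 0
print(f"-2 log EL ratio = {stat:.2f}, chi2(1) p-value = {chi2.sf(stat, 1):.3f}")
```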
APA, Harvard, Vancouver, ISO, and other styles
49

Saruhashi, Takumi, Masato Ohkubo, and Yasushi Nagata. "Study on Likelihood-Ratio-Based Multivariate EWMA Control Chart Using Lasso." Quality Innovation Prosperity 25, no. 1 (March 31, 2021): 3–15. http://dx.doi.org/10.12776/qip.v25i1.1552.

Full text
Abstract:
Purpose: When applying multivariate exponentially weighted moving average (EWMA) control charts in statistical process control, in many cases only some elements of the controlled parameters change. In such situations, control charts applying the Lasso are useful. This study proposes a novel multivariate control chart that assumes only a few elements of the controlled parameters change.
Methodology/Approach: We applied the Lasso to the conventional likelihood-ratio-based EWMA chart; specifically, we considered a multivariate control chart based on a log-likelihood ratio with sparse estimators of the mean vector and variance-covariance matrix.
Findings: The results show that 1) it is possible to identify which elements have changed by inspecting each sparse parameter estimate, and 2) the proposed procedure outperforms the conventional likelihood-ratio-based EWMA chart regardless of the number of parameter elements that change.
Research Limitation/Implication: We perform sparse estimation under the assumption that the regularization parameters are known. However, the regularization parameters are often unknown in practice; therefore, it is necessary to discuss how to determine them.
Originality/Value of paper: The study provides a natural extension of the conventional likelihood-ratio-based EWMA chart that improves interpretability and detection accuracy. Our procedure is expected to solve the challenges created by changes in a few elements of the population mean vector and population variance-covariance matrix.
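As background for the proposal, a sketch of the classical multivariate EWMA (MEWMA) chart with known in-control parameters; the paper's contribution replaces the plug-in estimates of the shifted mean vector and covariance with sparse Lasso estimates, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(10)

# In-control parameters (assumed known here; the paper instead plugs in
# sparse Lasso estimates of the shifted mean and covariance).
p = 5
mu0 = np.zeros(p)
sigma0 = np.eye(p)
lam = 0.2                       # EWMA smoothing constant

# Simulated process: the mean of ONE element shifts at t = 50.
n = 100
x = rng.multivariate_normal(mu0, sigma0, n)
x[50:, 2] += 1.5

# MEWMA recursion  Z_t = lam*(x_t - mu0) + (1-lam)*Z_{t-1}  and chart
# statistic  T2_t = Z_t' Sigma_Z^{-1} Z_t, using the asymptotic
# covariance  Sigma_Z = lam/(2-lam) * Sigma_0.
sigma_z_inv = np.linalg.inv(lam / (2 - lam) * sigma0)
z = np.zeros(p)
t2 = np.empty(n)
for t in range(n):
    z = lam * (x[t] - mu0) + (1 - lam) * z
    t2[t] = z @ sigma_z_inv @ z

print("mean T2 before shift:", t2[:50].mean().round(2))
print("mean T2 after shift: ", t2[50:].mean().round(2))
```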
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Haoye, and Bo Xu. "Improving Pulse Phase Estimation Accuracy with Sampling and Weighted Averaging." Journal of Navigation 72, no. 04 (February 5, 2019): 1007–20. http://dx.doi.org/10.1017/s0373463318001066.

Full text
Abstract:
The accuracy of an X-ray pulsar-based navigation system depends mainly on the accuracy of the pulse phase estimation. In this paper, a novel method is proposed which combines an epoch folding process and a cross-correlation method with the idea of "averaging multiple measurements". In this method, the pulse phase is estimated multiple times on sampled subsets of the arriving photons' time tags, and a final estimate is obtained as the weighted average of these estimates. Two explanations of how the proposed method improves accuracy are provided: a Signal-to-Noise Ratio (SNR)-based explanation and an "error-difference trade-off" explanation. Numerical simulations show that the accuracy of pulse phase estimation can be improved with the proposed algorithm.
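A hedged sketch of the sample-and-average idea: estimate the pulse phase by epoch folding plus circular cross-correlation on random subsets of photon time tags, then average the subset estimates (equal weights here; the paper derives data-dependent weights; all parameters below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(11)

nbins, nphot = 64, 20000       # profile bins, number of photons
true_phase = 0.31

# Template pulse profile (peak at phase 0.5) and synthetic photon phases
# drawn from the template shifted to the true phase.
centers = (np.arange(nbins) + 0.5) / nbins
template = 1.0 + 0.8 * np.exp(-0.5 * ((centers - 0.5) / 0.05) ** 2)
pdf = np.roll(template, int((true_phase - 0.5) * nbins))
phases = rng.choice(centers, size=nphot, p=pdf / pdf.sum())

def fold_and_correlate(ph):
    """Epoch folding plus circular cross-correlation phase estimate,
    accurate up to the bin resolution 1/nbins."""
    profile, _ = np.histogram(ph, bins=nbins, range=(0.0, 1.0))
    xc = np.fft.irfft(np.fft.rfft(profile) * np.conj(np.fft.rfft(template)))
    return (np.argmax(xc) / nbins + 0.5) % 1.0   # template peak is at 0.5

# Sample-and-average: estimate on random half-size subsets of the time
# tags, then average (the paper uses a weighted average).
subsets = [rng.choice(nphot, size=nphot // 2, replace=False) for _ in range(20)]
ests = np.array([fold_and_correlate(phases[s]) for s in subsets])
print(f"true phase {true_phase}, averaged estimate {ests.mean():.3f}")
```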
APA, Harvard, Vancouver, ISO, and other styles