Journal articles on the topic "Matrix linear regression"

Follow this link to see other types of publications on the topic: Matrix linear regression.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

See the top 50 journal articles for your research on the topic "Matrix linear regression".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse journal articles from many different fields of science and compile an accurate bibliography.

1

Lutay, V. N., and N. S. Khusainov. "The selective regularization of a linear regression model". Journal of Physics: Conference Series 2099, no. 1 (November 1, 2021): 012024. http://dx.doi.org/10.1088/1742-6596/2099/1/012024.

Full text of the source
Abstract:
This paper discusses constructing a linear regression model with regularization of the system matrix of the normal equations. In contrast to conventional ridge regression, where a positive parameter is added to all diagonal terms of the matrix, the proposed method increases only those diagonal entries that correspond to highly correlated data. This lowers the condition number of the matrix and, therefore, reduces the corresponding coefficients of the regression equation. The selection of the entries to be increased is based on the triangular decomposition of the correlation matrix of the original dataset. The effectiveness of the method is tested on a known dataset, and the comparison is performed not only with ridge regression but also with the results of applying the widespread LARS and Lasso algorithms.
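The contrast with conventional ridge regression can be sketched in a few lines of numpy. This is only an illustration, not the paper's algorithm: the paper selects the entries via a triangular decomposition of the correlation matrix, while the demo below uses a plain correlation threshold (0.9, an arbitrary choice) to decide which diagonal entries to increase.

```python
import numpy as np

# Synthetic data with two nearly collinear regressors (x1, x2) and one
# independent regressor (x3).
rng = np.random.default_rng(0)
n, lam = 200, 10.0
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)        # highly correlated with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = x1 + x3 + 0.1 * rng.normal(size=n)

G = X.T @ X                                 # system matrix of the normal equations
R = np.corrcoef(X, rowvar=False)

# Conventional ridge: add lam to every diagonal entry.
beta_ridge = np.linalg.solve(G + lam * np.eye(3), X.T @ y)

# Selective variant: add lam only where a regressor is involved in a
# correlation above the threshold (here x1 and x2).
mask = (np.abs(R - np.eye(3)) > 0.9).any(axis=1)
beta_sel = np.linalg.solve(G + lam * np.diag(mask.astype(float)), X.T @ y)
```

Only the coefficients of the correlated pair are shrunk by the selective variant; the coefficient of the independent regressor stays essentially at its least-squares value.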
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
2

Nakonechnyi, Alexander G., Grigoriy I. Kudin, Petr N. Zinko and Taras P. Zinko. "Perturbation Method in Problems of Linear Matrix Regression". Journal of Automation and Information Sciences 52, no. 1 (2020): 1–12. http://dx.doi.org/10.1615/jautomatinfscien.v52.i1.10.

3

Zhang, Jiawei, Peng Wang and Ning Zhang. "Distribution Network Admittance Matrix Estimation With Linear Regression". IEEE Transactions on Power Systems 36, no. 5 (September 2021): 4896–99. http://dx.doi.org/10.1109/tpwrs.2021.3090250.

4

Ivashnev, L. I. "Methods of linear multiple regression in a matrix form". Izvestiya MGTU MAMI 9, no. 4-4 (August 20, 2015): 35–41. http://dx.doi.org/10.17816/2074-0530-67011.

Abstract:
The article contains a summary of three basic and two weighted linear multiple regression techniques in matrix form, which, together with Gauss's method of least squares, constitute a new tool for regression analysis. The article presents matrix formulas that can be used to obtain equations of linear multiple regression, the basic weighted least-squares method, regression equations without a constant term, and the method of obtaining regression equations of general form. The article provides an example of the use of matrix methods to obtain the coefficients of a regression equation of the general form, i.e., an equation of equal performance.
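The matrix formulas summarized above are the standard normal-equations machinery; a minimal numpy sketch on synthetic data (the dimensions, weights, and noise level are arbitrary choices for the demo):

```python
import numpy as np

# Matrix form of least squares: b = (X'X)^(-1) X'y for ordinary least squares
# and b = (X'W X)^(-1) X'W y for the weighted variant (W diagonal).
rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # constant term included
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.05 * rng.normal(size=n)

b_ols = np.linalg.solve(X.T @ X, X.T @ y)

W = np.diag(rng.uniform(0.5, 2.0, size=n))   # example observation weights
b_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# A regression without the constant term simply drops the column of ones.
b_noconst = np.linalg.solve(X[:, 1:].T @ X[:, 1:], X[:, 1:].T @ y)
```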
5

Mahaboob, B., J. P. Praveen, B. V. A. Rao, Y. Harnath, C. Narayana and G. B. Prakash. "A STUDY ON MULTIPLE LINEAR REGRESSION USING MATRIX CALCULUS". Advances in Mathematics: Scientific Journal 9, no. 7 (August 2, 2020): 4863–72. http://dx.doi.org/10.37418/amsj.9.7.52.

6

Aubin, Elisete da Conceição Q., and Gauss M. Cordeiro. "BIAS in linear regression models with unknown covariance matrix". Communications in Statistics - Simulation and Computation 26, no. 3 (January 1997): 813–28. http://dx.doi.org/10.1080/03610919708813413.

7

Bargiela, Andrzej, and Joanna K. Hartley. "Orthogonal linear regression algorithm based on augmented matrix formulation". Computers & Operations Research 20, no. 8 (October 1993): 829–36. http://dx.doi.org/10.1016/0305-0548(93)90104-q.

8

Livadiotis, George. "Linear Regression with Optimal Rotation". Stats 2, no. 4 (September 28, 2019): 416–25. http://dx.doi.org/10.3390/stats2040028.

Abstract:
The paper shows how the linear regression depends on the selection of the reference frame. The slope of the fitted line and the corresponding Pearson’s correlation coefficient are expressed in terms of the rotation angle. The correlation coefficient is found to be maximized for a certain optimal angle, for which the slope attains a special optimal value. The optimal angle, the value of the optimal slope, and the corresponding maximum correlation coefficient were expressed in terms of the covariance matrix, but also in terms of the values of the slope, derived from the fitting at the nonrotated and right-angle-rotated axes. The potential of the new method is to improve the derived values of the fitting parameters by detecting the optimal rotation angle, that is, the one that maximizes the correlation coefficient. The presented analysis was applied to the linear regression of density and temperature measurements characterizing the proton plasma in the inner heliosheath, the outer region of our heliosphere.
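The effect can be reproduced numerically: rotate the axes, recompute Pearson's correlation in the rotated frame, and locate the angle that maximizes it. The grid search below stands in for the paper's closed-form expressions, and the data are synthetic:

```python
import numpy as np

# Synthetic correlated cloud.
rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = 0.7 * x + 0.4 * rng.normal(size=500)
pts = np.vstack([x, y])

def corr_after_rotation(theta):
    # Coordinates of the same points in axes rotated by theta.
    c, s = np.cos(theta), np.sin(theta)
    xr, yr = np.array([[c, s], [-s, c]]) @ pts
    return np.corrcoef(xr, yr)[0, 1]

# Brute-force search for the angle maximizing |correlation|.
thetas = np.linspace(-np.pi / 4, np.pi / 4, 2001)
corrs = np.array([corr_after_rotation(t) for t in thetas])
theta_opt = thetas[np.argmax(np.abs(corrs))]
```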
9

Klen, Kateryna, Vadym Martynyuk and Mykhailo Yaremenko. "Prediction of the wind speed change function by linear regression method". Computational Problems of Electrical Engineering 9, no. 2 (November 10, 2019): 28–33. http://dx.doi.org/10.23939/jcpee2019.02.028.

Abstract:
In the article the approximation of the function of wind speed changes by linear functions based on Walsh functions and the prediction of function values by linear regression method is made. It is shown that under the condition of a linear change of the internal resistance of the wind generator over time, it is advisable to introduce the wind speed change function with linear approximation. The system of orthonormal linear functions based on Walsh functions is given. As an example, the approximation of the linear-increasing function with a system of 4, 8 and 16 linear functions based on the Walsh functions is given. The result of the approximation of the wind speed change function with a system of 8 linear functions based on Walsh functions is shown. Decomposition coefficients, mean-square and average relative approximation errors for such approximation are calculated. In order to find the parameters of multiple linear regression the method of least squares is applied. The regression equation in matrix form is given. The example of application of the prediction method of linear regression to simple functions is shown. The restoration result for wind speed change function is shown. Decomposition coefficients, mean-square and average relative approximation errors for restoration of wind speed change function with linear regression method are calculated.
10

Srivastava, A. K. "Estimation of linear regression model with rank deficient observations matrix under linear restrictions". Microelectronics Reliability 36, no. 1 (January 1996): 109–10. http://dx.doi.org/10.1016/0026-2714(95)00018-w.

11

Tanveer, M. "Linear programming twin support vector regression". Filomat 31, no. 7 (2017): 2123–42. http://dx.doi.org/10.2298/fil1707123t.

Abstract:
In this paper, a new linear programming formulation of a 1-norm twin support vector regression is proposed whose solution is obtained by solving a pair of dual exterior penalty problems as unconstrained minimization problems using the Newton method. The idea of the formulation is to reformulate TSVR as a strongly convex problem by incorporating a regularization technique and then derive a new 1-norm linear programming formulation for TSVR to improve robustness and sparsity. The approach has the advantage that a pair of matrix equations of order equal to the number of input examples is solved at each iteration of the algorithm. The algorithm converges from any starting point and can be easily implemented in MATLAB without using any optimization packages. The efficiency of the proposed method is demonstrated by experimental results on a number of interesting synthetic and real-world datasets.
12

Abdullah, Lazim, and Nadia Zakaria. "Matrix Driven Multivariate Fuzzy Linear Regression Model in Car Sales". Journal of Applied Sciences 12, no. 1 (December 15, 2011): 56–63. http://dx.doi.org/10.3923/jas.2012.56.63.

13

Trenkler, Götz. "Mean square error matrix comparisons of estimators in linear regression". Communications in Statistics - Theory and Methods 14, no. 10 (January 1985): 2495–509. http://dx.doi.org/10.1080/03610928508829058.

14

Mustafa Nadhim Lattef and Mustafa I ALheety. "Study of Some Kinds of Ridge Regression Estimators in Linear Regression Model". Tikrit Journal of Pure Science 25, no. 5 (December 18, 2020): 130–42. http://dx.doi.org/10.25130/tjps.v25i5.301.

Abstract:
In the linear regression model, biased estimation is one of the most commonly used methods to reduce the effect of multicollinearity. In this paper, a simulation study is performed to compare the relative efficiency of several kinds of biased estimators, as well as twelve estimators of the ridge parameter (k) proposed in the literature. We propose some new adjustments to estimate the ridge parameter. Finally, we consider a real data set in economics to illustrate the results based on the estimated mean squared error (MSE) criterion. According to the results, all the proposed estimators of (k) are superior to the ordinary least squares (OLS) estimator, and the superiority among them, based on the minimum MSE matrix, changes according to the sample under consideration.
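As a sketch of the kind of ridge-parameter estimator being compared, here is the classical Hoerl-Kennard-Baldwin choice k = p·s²/(b'b) (a standard literature estimator; whether it is among the twelve studied in the paper is not stated here) applied to synthetic collinear data:

```python
import numpy as np

# Synthetic data with induced multicollinearity between the first two columns.
rng = np.random.default_rng(3)
n, p = 80, 3
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)
y = X @ np.array([1.0, 1.0, -0.5]) + 0.2 * rng.normal(size=n)

# OLS fit and residual variance estimate.
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ b_ols
s2 = resid @ resid / (n - p)

# Hoerl-Kennard-Baldwin ridge parameter and the resulting ridge estimate.
k = p * s2 / (b_ols @ b_ols)
b_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
```

The ridge estimate has strictly smaller Euclidean norm than the OLS estimate whenever k > 0, which is the shrinkage effect these estimators trade against bias.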
15

Dorugade, Ashok V. "A modified two-parameter estimator in linear regression". Statistics in Transition new series 15, no. 1 (March 3, 2014): 23–36. http://dx.doi.org/10.59170/stattrans-2014-002.

Abstract:
In this article, a modified two-parameter estimator is introduced for the vector of parameters in the linear regression model when data exists with multicollinearity. The properties of the proposed estimator are discussed and the performance in terms of the matrix mean square error criterion over the ordinary least squares (OLS) estimator, a new two-parameter estimator (NTP), an almost unbiased two-parameter estimator (AUTP) and other well known estimators reviewed in this article is investigated. A numerical example and simulation study are finally conducted to illustrate the superiority of the proposed estimator.
16

Jokubaitis, Saulius, and Remigijus Leipus. "Asymptotic Normality in Linear Regression with Approximately Sparse Structure". Mathematics 10, no. 10 (May 12, 2022): 1657. http://dx.doi.org/10.3390/math10101657.

Abstract:
In this paper, we study the asymptotic normality in high-dimensional linear regression. We focus on the case where the covariance matrix of the regression variables has a KMS structure, in asymptotic settings where the number of predictors, p, is proportional to the number of observations, n. The main result of the paper is the derivation of the exact asymptotic distribution for the suitably centered and normalized squared norm of the product between predictor matrix, X, and outcome variable, Y, i.e., the statistic ∥X′Y∥22, under rather unrestrictive assumptions for the model parameters βj. We employ variance-gamma distribution in order to derive the results, which, along with the asymptotic results, allows us to easily define the exact distribution of the statistic. Additionally, we consider a specific case of approximate sparsity of the model parameter vector β and perform a Monte Carlo simulation study. The simulation results suggest that the statistic approaches the limiting distribution fairly quickly even under high variable multi-correlation and relatively small number of observations, suggesting possible applications to the construction of statistical testing procedures for the real-world data and related problems.
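The statistic itself is inexpensive to compute; a sketch with synthetic data (the centering and normalizing constants derived in the paper are omitted, and the dimensions are chosen so that p is proportional to n):

```python
import numpy as np

# High-dimensional regression data with an approximately sparse coefficient vector.
rng = np.random.default_rng(4)
n, p = 400, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 0.5
Y = X @ beta + rng.normal(size=n)

# The statistic studied in the paper: the squared Euclidean norm of X'Y.
stat = np.linalg.norm(X.T @ Y) ** 2
```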
17

Efremov, A. "General Forms of a Class of Multivariable Regression Models". Information Technologies and Control 11, no. 2 (October 2, 2014): 22–28. http://dx.doi.org/10.2478/itc-2013-0009.

Abstract:
There are two possible general forms of multiple input multiple output (MIMO) regression models, which are either linear with respect to their parameters or non-linear; in the latter case it can be assumed, at a certain stage of parameter estimation, that they are linear. This is in fact the basic assumption behind the linear approach to parameter estimation. There are two possible representations of a MIMO model which at a certain level can be fictitiously presented as linear functions of its parameters. In one representation the parameters are collected in a matrix and, hence, the regressors are in a vector. In the other, the parameters are in a vector and the regressors at a given instant are placed in a matrix. Both types of representations are considered in the paper. Their advantages and disadvantages are summarized, and their applicability within the whole experimental modelling process is also discussed.
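The equivalence of the two representations can be checked in a few lines (a numpy sketch with arbitrary dimensions):

```python
import numpy as np

# Representation (a): parameters in a matrix Theta, regressors in a vector phi,
#   y = Theta' phi.
# Representation (b): parameters stacked in a vector theta = vec(Theta),
#   regressors arranged in a matrix Phi = I kron phi', y = Phi theta.
rng = np.random.default_rng(5)
m, r = 2, 3                              # number of outputs, number of regressors
Theta = rng.normal(size=(r, m))
phi = rng.normal(size=r)

y_a = Theta.T @ phi                      # representation (a)

theta = Theta.flatten(order="F")         # vec(Theta), column-major stacking
Phi = np.kron(np.eye(m), phi)            # regressor matrix, shape (m, m*r)
y_b = Phi @ theta                        # representation (b)
```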
18

Weitao Fang, Zhengbin Cheng and Dan Yang. "Face Recognition based on Non-Negative Matrix Factorization and Linear Regression". International Journal of Advancements in Computing Technology 4, no. 16 (September 30, 2012): 93–100. http://dx.doi.org/10.4156/ijact.vol4.issue16.11.

19

Fan, Xitao, and William G. Jacoby. "BOOTSREG: An SAS Matrix Language Program for Bootstrapping Linear Regression Models". Educational and Psychological Measurement 55, no. 5 (October 1995): 764–68. http://dx.doi.org/10.1177/0013164495055005007.

20

Isotalo, Jarkko, Simo Puntanen and George P. H. Styan. "A Useful Matrix Decomposition and Its Statistical Applications in Linear Regression". Communications in Statistics - Theory and Methods 37, no. 9 (March 11, 2008): 1436–57. http://dx.doi.org/10.1080/03610920701666328.

21

Tang, Ying. "Beyond EM: A faster Bayesian linear regression algorithm without matrix inversions". Neurocomputing 378 (February 2020): 435–40. http://dx.doi.org/10.1016/j.neucom.2019.10.061.

22

Kong, Dehan, Baiguo An, Jingwen Zhang and Hongtu Zhu. "L2RM: Low-Rank Linear Regression Models for High-Dimensional Matrix Responses". Journal of the American Statistical Association 115, no. 529 (April 30, 2019): 403–24. http://dx.doi.org/10.1080/01621459.2018.1555092.

23

He, Xi-Lei, Zhen-Hua He, Rui-Liang Wang, Xu-Ben Wang and Lian Jiang. "Calculations of rock matrix modulus based on a linear regression relation". Applied Geophysics 8, no. 3 (September 2011): 155–62. http://dx.doi.org/10.1007/s11770-011-0290-4.

24

Furno, Marilena. "The information matrix test in the linear regression with ARMA errors". Journal of the Italian Statistical Society 5, no. 3 (December 1996): 369–85. http://dx.doi.org/10.1007/bf02589097.

25

Kostov, Philip. "Choosing the Right Spatial Weighting Matrix in a Quantile Regression Model". ISRN Economics 2013 (January 28, 2013): 1–16. http://dx.doi.org/10.1155/2013/158240.

Abstract:
This paper proposes computationally tractable methods for selecting the appropriate spatial weighting matrix in the context of a spatial quantile regression model. This selection is a notoriously difficult problem even in linear spatial models and is even more difficult in a quantile regression setup. The proposal is illustrated by an empirical example and manages to produce tractable models. One important feature of the proposed methodology is that, by allowing different degrees and forms of spatial dependence across quantiles, it further relaxes the usual quantile restriction attributable to linear quantile regression. In this way we can obtain a model that is more robust with regard to potential functional misspecification, while nevertheless preserving the parametric rate of convergence and the established inferential apparatus associated with the linear quantile regression approach.
26

Gilyén, András, Zhao Song and Ewin Tang. "An improved quantum-inspired algorithm for linear regression". Quantum 6 (June 30, 2022): 754. http://dx.doi.org/10.22331/q-2022-06-30-754.

Abstract:
We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters '09] for low-rank matrices [Wossnig, Zhao, and Prakash, Physical Review Letters '18], when the input matrix A is stored in a data structure applicable for QRAM-based state preparation. Namely, suppose we are given an A ∈ C^(m×n) with minimum non-zero singular value σ which supports certain efficient ℓ2-norm importance sampling queries, along with a b ∈ C^m. Then, for some x ∈ C^n satisfying ‖x − A⁺b‖ ≤ ε‖A⁺b‖, we can output a measurement of |x⟩ in the computational basis and output an entry of x with classical algorithms that run in Õ(‖A‖_F^6 ‖A‖^6 / (σ^12 ε^4)) and Õ(‖A‖_F^6 ‖A‖^2 / (σ^8 ε^4)) time, respectively. This improves on previous "quantum-inspired" algorithms in this line of research by at least a factor of ‖A‖^16 / (σ^16 ε^2) [Chia, Gilyén, Li, Lin, Tang, and Wang, STOC '20]. As a consequence, we show that quantum computers can achieve at most a factor-of-12 speedup for linear regression in this QRAM data structure setting and related settings. Our work applies techniques from sketching algorithms and optimization to the quantum-inspired literature. Unlike earlier works, this is a promising avenue that could lead to feasible implementations of classical regression in a quantum-inspired setting, for comparison against future quantum computers.
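The ℓ2-norm importance-sampling access assumed by these algorithms can be mimicked classically. The toy sketch below (not the paper's algorithm) samples rows of A with probability proportional to their squared norms, rescales them, and solves the reduced least-squares system:

```python
import numpy as np

# Tall, noiseless synthetic system A x = b.
rng = np.random.default_rng(6)
m, n, s = 5000, 10, 400                # s = number of sampled rows
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true

# Length-squared (l2-norm importance) sampling of rows.
row_norms2 = (A * A).sum(axis=1)
probs = row_norms2 / row_norms2.sum()
idx = rng.choice(m, size=s, p=probs)
scale = 1.0 / np.sqrt(s * probs[idx])  # importance-sampling rescaling

# Solve the much smaller sampled regression problem.
A_s = A[idx] * scale[:, None]
b_s = b[idx] * scale
x_hat = np.linalg.lstsq(A_s, b_s, rcond=None)[0]
```

Because the system here is noiseless and the sampled submatrix has full column rank, the reduced problem recovers the exact solution; with noise, the sketch gives an approximate one.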
27

Wang, Lei, and Su Feng Guo. "Measurement Based on the "Group Linear Regression Method" of Harmonic Impedance". Advanced Materials Research 354-355 (October 2011): 1051–57. http://dx.doi.org/10.4028/www.scientific.net/amr.354-355.1051.

Abstract:
Accurate estimation of the harmonic impedance is the key to harmonic pollution monitoring and control, and thus to an accurate assessment of the harmonic emission level at points of common coupling. This paper analyzes star-connected and delta-connected three-phase asymmetrical power systems and establishes a harmonic impedance matrix estimation model for a three-phase unbalanced power system. Based on a large amount of simulated data for the system and the load at the point of common coupling, the model separates the real and imaginary parts of the harmonic impedance and uses a grouped multiple linear regression method to estimate the value of each element of the system-side harmonic impedance matrix. Simulation and error analysis show that the model obtained is accurate and effective.
28

Bazilevskiy, M. P. "Solving an Optimization Problem for Estimating Fully connected Linear Regression Models". Моделирование и анализ данных 14, no. 1 (April 11, 2024): 121–34. http://dx.doi.org/10.17759/mda.2024140108.

Abstract:
This article is devoted to the problem of estimating fully connected linear regression models using the maximum likelihood method. Previously, a special numerical method was developed for this purpose, based on solving a nonlinear system using the method of simple iterations. At the same time, the issues of choosing initial approximations and fulfilling sufficient conditions for convergence were not studied. This article proposes a new method for solving the optimization problem of estimating fully connected regressions, similar to the method of estimating orthogonal regressions. It has been proven that, with equal error variances of interconnected variables, estimates of b-parameters of fully connected regression are equal to the components of the eigenvector corresponding to the smallest eigenvalue of the inverse covariance matrix. And if the ratios of the error variances of the variables are equal to the ratios of the variances of the variables, then the b-parameter estimates are equal to the components of the eigenvector corresponding to the smallest eigenvalue of the inverse correlation matrix, multiplied by the specific ratios of the standard deviations of the variables. A numerical experiment was carried out to confirm the correctness of the developed mathematical apparatus. The proposed method for solving the optimization problem of estimating fully connected regressions can be effectively used when solving problems of constructing multiple fully connected linear regressions.
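The orthogonal-regression analogue that the method parallels can be sketched directly: for a line fitted orthogonally to a centered two-variable cloud, the normal vector is the eigenvector of the sample covariance matrix belonging to its smallest eigenvalue. This is the classical total-least-squares fact, shown here on synthetic data; the paper's fully connected setting differs in its error-variance assumptions.

```python
import numpy as np

# Synthetic cloud around the line y = 2x.
rng = np.random.default_rng(7)
x = rng.normal(size=300)
y = 2.0 * x + 0.1 * rng.normal(size=300)
Z = np.column_stack([x, y]) - [x.mean(), y.mean()]   # center the data

S = Z.T @ Z / (len(x) - 1)            # sample covariance matrix
w, V = np.linalg.eigh(S)              # eigenvalues in ascending order
normal = V[:, 0]                      # eigenvector of the smallest eigenvalue
slope = -normal[0] / normal[1]        # slope of the orthogonal-regression line
```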
29

Barnabani, Marco. "A Parametric Test to Discriminate Between a Linear Regression Model and a Linear Latent Growth Model". International Journal of Statistics and Probability 6, no. 3 (May 14, 2017): 157. http://dx.doi.org/10.5539/ijsp.v6n3p157.

Abstract:
In longitudinal studies with subjects measured repeatedly across time, an important problem is how to select a model generating data by choosing between a linear regression model and a linear latent growth model. Approaches based both on information criteria and on asymptotic hypothesis tests of the variances of "random" components are widely used but not completely satisfactory. We propose a test statistic based on the trace of the product of an estimate of a variance covariance matrix defined when data come from a linear regression model and a sample variance covariance matrix. We studied the sampling distribution of the test statistic, giving a representation in terms of an infinite series of generalized F-distributions. Knowledge about this distribution allows us to make inference within a classical hypothesis testing framework. The test statistic can be used by itself to discriminate between the two models and/or, if duly modified, it can be used to test randomness on single components. Moreover, in conjunction with some model selection criteria, it gives additional information which can help in choosing the model.
30

Lipovetsky, Stan. "MEANINGFUL REGRESSION COEFFICIENTS BUILT BY DATA GRADIENTS". Advances in Adaptive Data Analysis 02, no. 04 (October 2010): 451–62. http://dx.doi.org/10.1142/s1793536910000574.

Abstract:
Multiple regression's coefficients define the change in the dependent variable due to a predictor's change while all other predictors are held constant. Rearranging the data into paired differences of observations and keeping only the biggest changes yields a matrix of single-variable changes, which is close to an orthogonal design, so there is no impact of multicollinearity on the regression. A similar approach is used for meaningful coefficients of nonlinear regressions with coefficients of half-elasticity, elasticity, and odds' elasticity due to the gradients in each predictor. In contrast to regular linear and nonlinear regressions, the suggested technique produces interpretable coefficients not prone to multicollinearity effects.
31

Zhai, Qianru, Ye Tian and Jingyue Zhou. "Linear Twin Quadratic Surface Support Vector Regression". Mathematical Problems in Engineering 2020 (April 4, 2020): 1–18. http://dx.doi.org/10.1155/2020/3238129.

Abstract:
Twin support vector regression (TSVR) generates two nonparallel hyperplanes by solving a pair of smaller-sized problems instead of a single larger-sized problem as in the standard SVR. Due to its efficiency, TSVR is frequently applied in various areas. In this paper, we propose a totally new version of TSVR named Linear Twin Quadratic Surface Support Vector Regression (LTQSSVR), which directly uses two quadratic surfaces in the original space for regression. It is worth noting that the new approach not only avoids the notoriously difficult and time-consuming task of searching for a suitable kernel function and its corresponding parameters in the traditional SVR-based method but also achieves a better generalization performance. Besides, in order to further improve the efficiency and robustness of the model, we introduce the 1-norm to measure the error. The linear programming structure of the new model skips the matrix inverse operation and makes it solvable for huge-sized problems; the capability of handling large-sized problems is very important in this big data era. In addition, to verify the effectiveness and efficiency of our model, we compare it with some well-known methods. The numerical experiments on 2 artificial data sets and 12 benchmark data sets demonstrate the validity and applicability of our proposed method.
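How a 1-norm error criterion turns into a linear program with no matrix inverse can be seen on the simpler least-absolute-deviations problem. This is a sketch of the general idea, not the LTQSSVR model itself, and it assumes scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Least absolute deviations: min_b sum_i |y_i - x_i'b|, written as an LP with
# variables [b (free, size p), t (size n, t_i >= |residual_i|)].
rng = np.random.default_rng(8)
n, p = 60, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 3.0]) + 0.05 * rng.laplace(size=n)

c = np.concatenate([np.zeros(p), np.ones(n)])          # minimize sum of t
A_ub = np.block([[-X, -np.eye(n)],                     # y - Xb <= t
                 [X, -np.eye(n)]])                     # Xb - y <= t
b_ub = np.concatenate([-y, y])
bounds = [(None, None)] * p + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
b_lad = res.x[:p]
```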
32

Nzabanita, Joseph, Dietrich von Rosen and Martin Singull. "Bilinear regression model with Kronecker and linear structures for the covariance matrix". Afrika Statistika 10, no. 2 (December 31, 2015): 837–27. http://dx.doi.org/10.16929/as/2015.827.77.

33

Cribari-Neto, Francisco, and Wilton Bernardino da Silva. "A new heteroskedasticity-consistent covariance matrix estimator for the linear regression model". AStA Advances in Statistical Analysis 95, no. 2 (November 4, 2010): 129–46. http://dx.doi.org/10.1007/s10182-010-0141-2.

34

Drygas, Hilmar. "A note on the inverse-partitioned-matrix method in linear regression analysis". Linear Algebra and its Applications 67 (June 1985): 275–77. http://dx.doi.org/10.1016/0024-3795(85)90201-0.

35

Akdeniz, Fikri, and Hamza Erol. "Mean Squared Error Matrix Comparisons of Some Biased Estimators in Linear Regression". Communications in Statistics - Theory and Methods 32, no. 12 (January 12, 2003): 2389–413. http://dx.doi.org/10.1081/sta-120025385.

36

Pan, Nang-Fei, Tzu-Chieh Lin and Nai-Hsin Pan. "Estimating bridge performance based on a matrix-driven fuzzy linear regression model". Automation in Construction 18, no. 5 (August 2009): 578–86. http://dx.doi.org/10.1016/j.autcon.2008.12.005.

37

Auerbach, Eric. "Identification and Estimation of a Partially Linear Regression Model Using Network Data". Econometrica 90, no. 1 (2022): 347–65. http://dx.doi.org/10.3982/ecta19794.

Abstract:
I study a regression model in which one covariate is an unknown function of a latent driver of link formation in a network. Rather than specify and fit a parametric network formation model, I introduce a new method based on matching pairs of agents with similar columns of the squared adjacency matrix, the ijth entry of which contains the number of other agents linked to both agents i and j. The intuition behind this approach is that for a large class of network formation models the columns of the squared adjacency matrix characterize all of the identifiable information about individual linking behavior. In this paper, I describe the model, formalize this intuition, and provide consistent estimators for the parameters of the regression model.
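The central object, the squared adjacency matrix, is easy to compute on a toy network:

```python
import numpy as np

# Small undirected network on four agents (symmetric adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])

# The ij-th entry of A @ A counts the agents linked to both i and j;
# the diagonal entries are the agents' degrees.
A2 = A @ A
```

Agents whose columns of A2 are similar have similar linking behavior, which is what the matching step exploits.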
38

Wei, Chuanhua, and Xiaonan Wang. "Principal Components Regression Estimation in Semiparametric Partially Linear Additive Models". International Journal of Statistics and Probability 5, no. 1 (November 25, 2015): 46. http://dx.doi.org/10.5539/ijsp.v5n1p46.

Abstract:
Partially linear additive model is useful in statistical modelling as a multivariate nonparametric fitting technique. This paper considers statistical inference for the semiparametric model in the presence of multicollinearity. Based on the profile least-squares approach, we propose a novel principal components regression estimator for the parametric component, and provide the asymptotic bias and covariance matrix of the proposed estimator. Some simulations are conducted to examine the performance of our proposed estimators and the results are satisfactory.
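A bare-bones principal-components-regression step (ignoring the profile least-squares machinery and the nonparametric additive part of the model above) can be sketched as:

```python
import numpy as np

# Synthetic collinear regressors: the first two columns are nearly identical.
rng = np.random.default_rng(9)
n = 200
z = rng.normal(size=n)
X = np.column_stack([z, z + 0.01 * rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 1.0, 0.5]) + 0.1 * rng.normal(size=n)

# Project onto the leading principal components and run least squares there.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                   # keep two components
T = Xc @ Vt[:k].T                       # component scores
gamma = np.linalg.solve(T.T @ T, T.T @ (y - y.mean()))
beta_pcr = Vt[:k].T @ gamma             # map back to the original coordinates
```

Dropping the near-degenerate third component removes the direction responsible for the multicollinearity while losing almost no signal.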
39

Kannan, Ravindran, and Santosh Vempala. "Randomized algorithms in numerical linear algebra". Acta Numerica 26 (May 1, 2017): 95–135. http://dx.doi.org/10.1017/s0962492917000058.

Abstract:
This survey provides an introduction to the use of randomization in the design of fast algorithms for numerical linear algebra. These algorithms typically examine only a subset of the input to solve basic problems approximately, including matrix multiplication, regression and low-rank approximation. The survey describes the key ideas and gives complete proofs of the main results in the field. A central unifying idea is sampling the columns (or rows) of a matrix according to their squared lengths.
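Length-squared sampling, the survey's unifying idea, in a miniature approximate matrix multiplication example (sampling columns of A to approximate A @ A.T; all sizes are arbitrary demo choices):

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.normal(size=(50, 2000))
B = A.T

# Sample columns of A (and matching rows of B) with probability proportional
# to their squared lengths, rescale, and form the small product.
col_norms2 = (A * A).sum(axis=0)
probs = col_norms2 / col_norms2.sum()
s = 1000                                    # number of sampled columns
idx = rng.choice(A.shape[1], size=s, p=probs)
scale = 1.0 / np.sqrt(s * probs[idx])
AB_approx = (A[:, idx] * scale) @ (B[idx, :] * scale[:, None])

rel_err = np.linalg.norm(AB_approx - A @ B) / np.linalg.norm(A @ B)
```

The estimator is unbiased, and its expected squared error scales like ‖A‖_F²‖B‖_F²/s, so the approximation sharpens as more columns are sampled.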
40

Shah Alam, Bhaskar Bakshi, Rupjit Maity, Sulekha Das and Avijit Kumar Chaudhuri. "Heart Disease Diagnosis and Prediction using Multi Linear Regression". International Journal of Engineering Technology and Management Sciences 7, no. 2 (2023): 210–21. http://dx.doi.org/10.46647/ijetms.2023.v07i02.025.

Abstract:
The correct prediction of heart disease can prevent life threats, while incorrect prediction can prove fatal. In this paper a machine learning algorithm is applied to compare the results and analysis of a primary dataset. The dataset consists of 46 attributes, among which information gain is used to select 24 features for performing the analysis. Various promising results are achieved and are validated using accuracy and a confusion matrix. The dataset contains some irrelevant features which are handled, and the data are also normalized for getting better results. Using the machine learning approach, 77.78% accuracy was obtained. Multiple linear regression is used to construct and validate the prediction system. Our experimental results show that multiple linear regression is suitable for modelling and predicting cholesterol.
41

Nakonechnyi, Alexander, Grigoriy Kudin, Taras Zinko and Petr Zinko. "MINIMAX ROOT–MEAN–SQUARE ESTIMATES OF MATRIX PARAMETERS IN LINEAR REGRESSION PROBLEMS UNDER UNCERTAINTY". Journal of Automation and Information Sciences 4 (July 1, 2021): 28–37. http://dx.doi.org/10.34229/1028-0979-2021-4-3.

Full text of the source
Abstract:
The issues of parameter estimation in linear regression problems with random matrix coefficients were studied. Given that random linear functions of unknown matrices are observed with random errors that have unknown correlation matrices, the problems of guaranteed mean square estimation of linear functions of matrices were investigated. Estimates of the upper and lower guaranteed mean square errors of linear estimates of observations of linear functions of matrices were obtained for the case when the sets containing the unknown matrices and the correlation matrices of the observation errors are known. It was proved that in some special cases such estimates are exact. Assuming that the sets are bounded, convex and closed, more accurate two-sided estimates were obtained for the guaranteed errors. Conditions under which the guaranteed mean square errors approach zero as the number of observations increases were found. Necessary and sufficient conditions for the unbiasedness of linear estimates of linear functions of matrices were provided. The notion of quasi-optimal estimates for linear functions of matrices was introduced, and it was proved that in the class of unbiased estimates quasi-optimal estimates exist and are unique. For such estimates, conditions for the convergence to zero of the guaranteed mean square errors were obtained. For linear estimates of unknown matrices, the concept of quasi-minimax estimates was also introduced, and it was confirmed that they are unbiased. For special sets containing the unknown matrix and the correlation matrices of the observation errors, such estimates were expressed through the solution of linear operator equations in a finite-dimensional space. For quasi-minimax estimates, under certain assumptions, the form of the guaranteed mean square error of the unknown matrix was found. It was shown that such errors are bounded by the sum of the traces of known matrices. An example of finding a minimax unbiased linear estimate was given for a special type of random matrices appearing in the observation equation.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
42

Teng, Guangqiang, Boping Tian, Yuanyuan Zhang and Sheng Fu. "Asymptotics of Subsampling for Generalized Linear Regression Models under Unbounded Design". Entropy 25, no. 1 (December 31, 2022): 84. http://dx.doi.org/10.3390/e25010084.

Full text of the source
Abstract:
Optimal subsampling is a statistical methodology for generalized linear models (GLMs) that allows fast inference about parameter estimation in massive-data regression. The existing literature considers only bounded covariates. In this paper, the asymptotic normality of the subsampling M-estimator based on the Fisher information matrix is obtained. We then study the asymptotic properties of subsampling estimators for unbounded GLMs with non-natural links, including both conditional and unconditional asymptotic properties.
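The basic subsampling idea can be illustrated as follows (a plain uniform subsample with a Newton-fitted logistic regression; the paper's optimal subsampling probabilities and unbounded-design asymptotics are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, w, iters=25):
    """Weighted logistic-regression M-estimator fitted by Newton's method."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))
        hess = (X * (w * p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta

# Large synthetic GLM dataset; the Gaussian covariates are unbounded,
# matching the setting the paper studies.
n = 20000
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta_true = np.array([0.5, 1.0, -1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

# Uniform subsampling with equal weights: a consistent (though not
# optimal) subsample estimator of the full-data M-estimator.
s = 2000
idx = rng.choice(n, size=s, replace=False)
beta_sub = fit_logistic(X[idx], y[idx], np.ones(s))
```

Optimal subsampling replaces the uniform probabilities with data-dependent ones and reweights the subsample accordingly; the code above only shows the baseline subsample estimator whose asymptotics such papers analyse.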
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
43

Pohar, Maja, Mateja Blas and Sandra Turk. "Comparison of logistic regression and linear discriminant analysis". Advances in Methodology and Statistics 1, no. 1 (January 1, 2004): 143–61. http://dx.doi.org/10.51936/ayrt6204.

Full text of the source
Abstract:
Two of the most widely used statistical methods for analyzing categorical outcome variables are linear discriminant analysis and logistic regression. While both are appropriate for the development of linear classification models, linear discriminant analysis makes more assumptions about the underlying data. Hence, it is assumed that logistic regression is the more flexible and more robust method in case of violations of these assumptions. In this paper we consider the problem of choosing between the two methods, and set some guidelines for proper choice. The comparison between the methods is based on several measures of predictive accuracy. The performance of the methods is studied by simulations. We start with an example where all the assumptions of the linear discriminant analysis are satisfied and observe the impact of changes regarding the sample size, covariance matrix, Mahalanobis distance and direction of distance between group means. Next, we compare the robustness of the methods towards categorisation and non-normality of explanatory variables in a closely controlled way. We show that the results of LDA and LR are close whenever the normality assumptions are not too badly violated, and set some guidelines for recognizing these situations. We discuss the inappropriateness of LDA in all other cases.
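The comparison described in the abstract can be sketched with a small simulation in which the LDA assumptions hold exactly; both classifiers then reach nearly the same accuracy (illustrative code, not the authors' simulation design):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two Gaussian classes with a common identity covariance -- the setting
# where LDA's assumptions are satisfied exactly.
n = 500
X0 = rng.multivariate_normal([0.0, 0.0], np.eye(2), n)
X1 = rng.multivariate_normal([1.5, 1.5], np.eye(2), n)
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# LDA: linear discriminant from the class means and pooled covariance.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
S = 0.5 * (np.cov(X0.T) + np.cov(X1.T))
w = np.linalg.solve(S, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2.0
lda_acc = ((X @ w > threshold).astype(float) == y).mean()

# Logistic regression on the same data, fitted by Newton's method.
Xd = np.column_stack([np.ones(2 * n), X])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-Xd @ beta))
    hess = (Xd * (p * (1 - p))[:, None]).T @ Xd
    beta += np.linalg.solve(hess, Xd.T @ (y - p))
lr_acc = ((Xd @ beta > 0).astype(float) == y).mean()
```

When the normality assumption is violated (e.g. categorised or skewed covariates), the LDA estimate of the boundary degrades while logistic regression remains valid, which is the regime the paper's guidelines address.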
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
44

Hidayah Mohamed Isa, Noor, Mahmod Othman and Samsul Ariffin Abdul Karim. "Multivariate Matrix for Fuzzy Linear Regression Model to Analyse The Taxation in Malaysia". International Journal of Engineering & Technology 7, no. 4.33 (December 9, 2018): 78. http://dx.doi.org/10.14419/ijet.v7i4.33.23490.

Full text of the source
Abstract:
A multivariate matrix is proposed to find the best factor for fuzzy linear regression (FLR) with symmetric triangular fuzzy numbers (TFNs). The goal of this paper is to select the factor that best influences tax revenue among four variables. Eighteen years of data on these variables were taken from IndexMundi and World Bank Data. The model successfully explains the relationship between the independent variables and the response variable: sixty-six percent of the variance of tax revenue is explained by Gross Domestic Product, Inflation, Unemployment and Merchandise Trade. This introduction of a multivariate matrix for fuzzy linear regression in taxation is a first attempt to analyse the relationship between tax revenue and the independent variables.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
45

Abdelhadi, Yaser. "Linear modeling and regression for exponential engineering functions by a generalized ordinary least squares method". International Journal of Engineering & Technology 3, no. 2 (April 20, 2014): 174. http://dx.doi.org/10.14419/ijet.v3i2.2023.

Full text of the source
Abstract:
Linear transformations are performed for selected exponential engineering functions. The optimum values of the parameters of the linear model equation that fits a set of experimental or simulated data points are determined by the linear least squares method. The classical and matrix forms of ordinary least squares are illustrated.
Keywords: Exponential Functions; Linear Modeling; Ordinary Least Squares; Parametric Estimation; Regression Steps.
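The linearization described above can be sketched as follows: taking logarithms of y = a·exp(b·x) gives ln y = ln a + b·x, which the matrix form of ordinary least squares fits directly (synthetic data; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponential model y = a * exp(b x); taking logarithms gives the
# linear form ln y = ln a + b x.
a_true, b_true = 2.5, 0.8
x = np.linspace(0.0, 4.0, 40)
y = a_true * np.exp(b_true * x) * np.exp(rng.normal(scale=0.02, size=x.size))

# Matrix form of ordinary least squares: beta = (X'X)^{-1} X' ln(y).
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ np.log(y))
a_hat, b_hat = np.exp(beta[0]), beta[1]
```

Note that least squares in the log domain minimizes multiplicative (relative) error in the original units, which is often the appropriate choice for exponential growth data.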
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
46

du Toit, A. S., J. Booysen and J. J. Human. "Use of linear regression and correlation matrix in the evaluation of CERES3 (Maize)". South African Journal of Plant and Soil 14, no. 4 (January 1997): 177–82. http://dx.doi.org/10.1080/02571862.1997.10635104.

Full text of the source
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
47

Gaffke, N., and R. Schwabe. "Quasi-Newton algorithm for optimal approximate linear regression design: Optimization in matrix space". Journal of Statistical Planning and Inference 198 (January 2019): 62–78. http://dx.doi.org/10.1016/j.jspi.2018.03.005.

Full text of the source
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
48

Bertsimas, Dimitris, and Martin S. Copenhaver. "Characterization of the equivalence of robustification and regularization in linear and matrix regression". European Journal of Operational Research 270, no. 3 (November 2018): 931–42. http://dx.doi.org/10.1016/j.ejor.2017.03.051.

Full text of the source
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
49

Peña, Daniel, and Victor J. Yohai. "The Detection of Influential Subsets in Linear Regression by Using an Influence Matrix". Journal of the Royal Statistical Society: Series B (Methodological) 57, no. 1 (January 1995): 145–56. http://dx.doi.org/10.1111/j.2517-6161.1995.tb02020.x.

Full text of the source
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
50

Peña, Daniel, and Victor J. Yohai. "The Detection of Influential Subsets in Linear Regression by Using an Influence Matrix". Journal of the Royal Statistical Society: Series B (Methodological) 57, no. 3 (September 1995): 611. http://dx.doi.org/10.1111/j.2517-6161.1995.tb02051.x.

Full text of the source
Citation styles: ABNT, Harvard, Vancouver, APA, etc.