
Journal articles on the topic 'Matrix linear regression'


Consult the top 50 journal articles for your research on the topic 'Matrix linear regression.'


1

Lutay, V. N., and N. S. Khusainov. "The selective regularization of a linear regression model." Journal of Physics: Conference Series 2099, no. 1 (November 1, 2021): 012024. http://dx.doi.org/10.1088/1742-6596/2099/1/012024.

Abstract:
This paper discusses constructing a linear regression model with regularization of the system matrix of the normal equations. In contrast to conventional ridge regression, where a positive parameter is added to every diagonal term of the matrix, the proposed method increases only those diagonal entries that correspond to highly correlated data. This reduces the matrix condition number and, therefore, the corresponding coefficients of the regression equation. The entries to be increased are selected using the triangular decomposition of the correlation matrix of the original dataset. The effectiveness of the method is tested on a known dataset, with comparisons not only against ridge regression but also against the results of the widely used LARS and Lasso algorithms.
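As a rough, hedged illustration of this idea (not the authors' exact procedure), the sketch below flags near-dependent columns through small pivots in the Cholesky (triangular) factor of the correlation matrix and increases only the matching diagonal entries of the normal-equation matrix; `alpha` and `pivot_tol` are hypothetical tuning parameters, not values from the paper.

```python
import numpy as np

def selective_ridge(X, y, alpha=1.0, pivot_tol=0.1):
    """Selective regularization sketch: penalize only columns that the
    triangular decomposition of the correlation matrix flags as highly
    correlated (a small Cholesky pivot signals near linear dependence)."""
    X = np.asarray(X, dtype=float)
    R = np.corrcoef(X, rowvar=False)          # correlation matrix of the data
    L = np.linalg.cholesky(R + 1e-10 * np.eye(R.shape[0]))  # jitter for safety
    flagged = np.diag(L) < pivot_tol          # near-dependent columns
    G = X.T @ X                               # system matrix of the normal equations
    G[flagged, flagged] += alpha              # selective diagonal increase
    return np.linalg.solve(G, X.T @ y)
```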
2

Nakonechnyi, Alexander G., Grigoriy I. Kudin, Petr N. Zinko, and Taras P. Zinko. "Perturbation Method in Problems of Linear Matrix Regression." Journal of Automation and Information Sciences 52, no. 1 (2020): 1–12. http://dx.doi.org/10.1615/jautomatinfscien.v52.i1.10.

3

Zhang, Jiawei, Peng Wang, and Ning Zhang. "Distribution Network Admittance Matrix Estimation With Linear Regression." IEEE Transactions on Power Systems 36, no. 5 (September 2021): 4896–99. http://dx.doi.org/10.1109/tpwrs.2021.3090250.

4

Ivashnev, L. I. "Methods of linear multiple regression in a matrix form." Izvestiya MGTU MAMI 9, no. 4-4 (August 20, 2015): 35–41. http://dx.doi.org/10.17816/2074-0530-67011.

Abstract:
The article summarizes three basic and two weighted linear multiple regression techniques in matrix form, which, together with Gauss's method of least squares, constitute a new tool for regression analysis. It presents matrix formulas that can be used to obtain linear multiple regression equations by the basic and weighted least-squares methods, to obtain regression equations without a constant term, and to obtain regression equations of general form. The article provides an example of using the matrix methods to obtain the coefficients of a regression equation of general form, i.e., an equation of equal performance.
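Since the article's subject is exactly the matrix formulation of least squares, a compact numerical sketch may be useful; the data here are synthetic, and the weighted variant simply assumes a diagonal weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # intercept + 2 regressors
y = X @ np.array([2.0, 0.5, -1.5]) + 0.1 * rng.normal(size=50)

# Basic least squares in matrix form: b = (X'X)^(-1) X'y
b = np.linalg.solve(X.T @ X, X.T @ y)

# Weighted least squares with diagonal weights W: b_w = (X'WX)^(-1) X'Wy
w = rng.uniform(0.5, 2.0, size=50)
b_w = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# Regression without a constant term: drop the column of ones
b0 = np.linalg.solve(X[:, 1:].T @ X[:, 1:], X[:, 1:].T @ y)
```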
5

Mahaboob, B., J. P. Praveen, B. V. A. Rao, Y. Harnath, C. Narayana, and G. B. Prakash. "A Study on Multiple Linear Regression Using Matrix Calculus." Advances in Mathematics: Scientific Journal 9, no. 7 (August 2, 2020): 4863–72. http://dx.doi.org/10.37418/amsj.9.7.52.

6

Aubin, Elisete da Conceição Q., and Gauss M. Cordeiro. "Bias in linear regression models with unknown covariance matrix." Communications in Statistics - Simulation and Computation 26, no. 3 (January 1997): 813–28. http://dx.doi.org/10.1080/03610919708813413.

7

Bargiela, Andrzej, and Joanna K. Hartley. "Orthogonal linear regression algorithm based on augmented matrix formulation." Computers & Operations Research 20, no. 8 (October 1993): 829–36. http://dx.doi.org/10.1016/0305-0548(93)90104-q.

8

Livadiotis, George. "Linear Regression with Optimal Rotation." Stats 2, no. 4 (September 28, 2019): 416–25. http://dx.doi.org/10.3390/stats2040028.

Abstract:
The paper shows how the linear regression depends on the selection of the reference frame. The slope of the fitted line and the corresponding Pearson’s correlation coefficient are expressed in terms of the rotation angle. The correlation coefficient is found to be maximized for a certain optimal angle, for which the slope attains a special optimal value. The optimal angle, the value of the optimal slope, and the corresponding maximum correlation coefficient were expressed in terms of the covariance matrix, but also in terms of the values of the slope, derived from the fitting at the nonrotated and right-angle-rotated axes. The potential of the new method is to improve the derived values of the fitting parameters by detecting the optimal rotation angle, that is, the one that maximizes the correlation coefficient. The presented analysis was applied to the linear regression of density and temperature measurements characterizing the proton plasma in the inner heliosheath, the outer region of our heliosphere.
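The maximization described in the abstract can be emulated numerically. Below is a sketch that simply grid-searches the rotation angle maximizing |r| for two measured variables `x` and `y`; the paper itself derives closed-form expressions from the covariance matrix rather than searching.

```python
import numpy as np

def optimal_rotation(x, y, n_grid=3600):
    """Rotate the reference frame and return the angle that maximizes the
    (absolute) Pearson correlation, plus the slope fitted in that frame."""
    best_r, best_theta, best_slope = -np.inf, None, None
    for theta in np.linspace(0.0, np.pi, n_grid, endpoint=False):
        xr = np.cos(theta) * x + np.sin(theta) * y
        yr = -np.sin(theta) * x + np.cos(theta) * y
        r = abs(np.corrcoef(xr, yr)[0, 1])
        if r > best_r:
            best_r = r
            best_theta = theta
            best_slope = np.polyfit(xr, yr, 1)[0]  # slope in the rotated frame
    return best_theta, best_slope, best_r
```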
9

Klen, Kateryna, Vadym Martynyuk, and Mykhailo Yaremenko. "Prediction of the wind speed change function by linear regression method." Computational Problems of Electrical Engineering 9, no. 2 (November 10, 2019): 28–33. http://dx.doi.org/10.23939/jcpee2019.02.028.

Abstract:
The article approximates the wind speed change function by linear functions based on Walsh functions and predicts the function values by the linear regression method. It is shown that, under the condition of a linear change of the wind generator's internal resistance over time, it is advisable to represent the wind speed change function with a linear approximation. The system of orthonormal linear functions based on Walsh functions is given. As an example, a linearly increasing function is approximated with systems of 4, 8 and 16 linear functions based on the Walsh functions. The result of approximating the wind speed change function with a system of 8 linear functions based on Walsh functions is shown, and the decomposition coefficients, mean-square and average relative approximation errors are calculated for this approximation. To find the parameters of the multiple linear regression, the method of least squares is applied, and the regression equation is given in matrix form. An example of applying the linear regression prediction method to simple functions is shown, along with the restoration result for the wind speed change function; the decomposition coefficients, mean-square and average relative approximation errors for restoring the wind speed change function with the linear regression method are calculated.
10

Srivastava, A. K. "Estimation of linear regression model with rank deficient observations matrix under linear restrictions." Microelectronics Reliability 36, no. 1 (January 1996): 109–10. http://dx.doi.org/10.1016/0026-2714(95)00018-w.

11

Tanveer, M. "Linear programming twin support vector regression." Filomat 31, no. 7 (2017): 2123–42. http://dx.doi.org/10.2298/fil1707123t.

Abstract:
In this paper, a new linear programming formulation of a 1-norm twin support vector regression (TSVR) is proposed, whose solution is obtained by solving a pair of dual exterior penalty problems as unconstrained minimization problems using the Newton method. The idea of the formulation is to recast TSVR as a strongly convex problem by incorporating a regularization technique, and then to derive a new 1-norm linear programming formulation for TSVR that improves robustness and sparsity. The approach has the advantage that a pair of matrix equations, of order equal to the number of input examples, is solved at each iteration of the algorithm. The algorithm converges from any starting point and can be easily implemented in MATLAB without using any optimization packages. The efficiency of the proposed method is demonstrated by experimental results on a number of interesting synthetic and real-world datasets.
12

Abdullah, Lazim, and Nadia Zakaria. "Matrix Driven Multivariate Fuzzy Linear Regression Model in Car Sales." Journal of Applied Sciences 12, no. 1 (December 15, 2011): 56–63. http://dx.doi.org/10.3923/jas.2012.56.63.

13

Trenkler, Götz. "Mean square error matrix comparisons of estimators in linear regression." Communications in Statistics - Theory and Methods 14, no. 10 (January 1985): 2495–509. http://dx.doi.org/10.1080/03610928508829058.

14

Lattef, Mustafa Nadhim, and Mustafa I. ALheety. "Study of Some Kinds of Ridge Regression Estimators in Linear Regression Model." Tikrit Journal of Pure Science 25, no. 5 (December 18, 2020): 130–42. http://dx.doi.org/10.25130/tjps.v25i5.301.

Abstract:
In the linear regression model, biased estimation is one of the most commonly used methods to reduce the effect of multicollinearity. In this paper, a simulation study is performed to compare the relative efficiency of several kinds of biased estimators, as well as twelve estimators of the ridge parameter (k) proposed in the literature. We propose some new adjustments for estimating the ridge parameter. Finally, we consider a real data set in economics to illustrate the results based on the estimated mean squared error (MSE) criterion. According to the results, all the proposed estimators of (k) are superior to the ordinary least squares (OLS) estimator, and the ranking among them based on the minimum MSE matrix changes with the sample under consideration.
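For context, one standard member of this family of ridge-parameter estimators is the Hoerl–Kennard–Baldwin choice k = p·σ̂²/(β̂′β̂); the sketch below implements it as a representative example only — the paper's twelve estimators and its new adjustments are not reproduced here.

```python
import numpy as np

def hoerl_kennard_k(X, y):
    """Classical Hoerl-Kennard-Baldwin ridge parameter: k = p * s2 / (b'b),
    where s2 is the residual variance and b the OLS estimate."""
    n, p = X.shape
    b = np.linalg.solve(X.T @ X, X.T @ y)
    s2 = np.sum((y - X @ b) ** 2) / (n - p)
    return p * s2 / (b @ b)

def ridge(X, y, k):
    """Ridge estimator b(k) = (X'X + kI)^(-1) X'y."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)
```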
15

Dorugade, Ashok V. "A modified two-parameter estimator in linear regression." Statistics in Transition new series 15, no. 1 (March 3, 2014): 23–36. http://dx.doi.org/10.59170/stattrans-2014-002.

Abstract:
In this article, a modified two-parameter estimator is introduced for the vector of parameters in the linear regression model when the data exhibit multicollinearity. The properties of the proposed estimator are discussed, and its performance in terms of the matrix mean square error criterion is investigated relative to the ordinary least squares (OLS) estimator, a new two-parameter estimator (NTP), an almost unbiased two-parameter estimator (AUTP) and other well-known estimators reviewed in this article. A numerical example and a simulation study are finally conducted to illustrate the superiority of the proposed estimator.
16

Jokubaitis, Saulius, and Remigijus Leipus. "Asymptotic Normality in Linear Regression with Approximately Sparse Structure." Mathematics 10, no. 10 (May 12, 2022): 1657. http://dx.doi.org/10.3390/math10101657.

Abstract:
In this paper, we study asymptotic normality in high-dimensional linear regression. We focus on the case where the covariance matrix of the regression variables has a KMS structure, in asymptotic settings where the number of predictors, p, is proportional to the number of observations, n. The main result of the paper is the derivation of the exact asymptotic distribution of the suitably centered and normalized squared norm of the product between the predictor matrix, X, and the outcome variable, Y, i.e., the statistic ‖X′Y‖₂², under rather unrestrictive assumptions on the model parameters βj. We employ the variance-gamma distribution in order to derive the results, which, along with the asymptotic results, allows us to easily define the exact distribution of the statistic. Additionally, we consider a specific case of approximate sparsity of the model parameter vector β and perform a Monte Carlo simulation study. The simulation results suggest that the statistic approaches the limiting distribution fairly quickly even under high variable multi-correlation and a relatively small number of observations, suggesting possible applications to the construction of statistical testing procedures for real-world data and related problems.
17

Efremov, A. "General Forms of a Class of Multivariable Regression Models." Information Technologies and Control 11, no. 2 (October 2, 2014): 22–28. http://dx.doi.org/10.2478/itc-2013-0009.

Abstract:
There are two possible general forms of multiple input multiple output (MIMO) regression models, which are either linear with respect to their parameters or non-linear, but in order to estimate their parameters it can be assumed at a certain stage that they are linear. This is in fact the basic assumption behind the linear approach to parameter estimation. There are two possible representations of a MIMO model which, at a certain level, can be fictitiously presented as linear functions of its parameters. In one representation the parameters are collected in a matrix and, hence, the regressors are in a vector. In the other, the parameters are in a vector, and the regressors at a given instant are placed in a matrix. Both types of representations are considered in the paper, their advantages and disadvantages are summarized, and their applicability within the whole experimental modelling process is also discussed.
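A toy example may make the two representations concrete: the sketch below writes the same one-instant MIMO model once with the parameters collected in a matrix, and once with the parameters stacked in a vector and the regressors arranged in a matrix via a Kronecker product (the dimensions are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
m, r = 3, 4                          # m outputs, r regressors
Theta = rng.normal(size=(r, m))      # representation 1: parameters in a matrix
phi = rng.normal(size=r)             # ... and regressors in a vector
y1 = Theta.T @ phi                   # model output at one instant

# Representation 2: parameters in a vector, regressors in a matrix.
theta_vec = Theta.flatten(order="F")     # vec(Theta), columns stacked
Phi = np.kron(np.eye(m), phi)            # m x (m*r) regressor matrix
y2 = Phi @ theta_vec

assert np.allclose(y1, y2)               # both forms give the same output
```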
18

Fang, Weitao, Zhengbin Cheng, and Dan Yang. "Face Recognition based on Non-Negative Matrix Factorization and Linear Regression." International Journal of Advancements in Computing Technology 4, no. 16 (September 30, 2012): 93–100. http://dx.doi.org/10.4156/ijact.vol4.issue16.11.

19

Fan, Xitao, and William G. Jacoby. "BOOTSREG: An SAS Matrix Language Program for Bootstrapping Linear Regression Models." Educational and Psychological Measurement 55, no. 5 (October 1995): 764–68. http://dx.doi.org/10.1177/0013164495055005007.

20

Isotalo, Jarkko, Simo Puntanen, and George P. H. Styan. "A Useful Matrix Decomposition and Its Statistical Applications in Linear Regression." Communications in Statistics - Theory and Methods 37, no. 9 (March 11, 2008): 1436–57. http://dx.doi.org/10.1080/03610920701666328.

21

Tang, Ying. "Beyond EM: A faster Bayesian linear regression algorithm without matrix inversions." Neurocomputing 378 (February 2020): 435–40. http://dx.doi.org/10.1016/j.neucom.2019.10.061.

22

Kong, Dehan, Baiguo An, Jingwen Zhang, and Hongtu Zhu. "L2RM: Low-Rank Linear Regression Models for High-Dimensional Matrix Responses." Journal of the American Statistical Association 115, no. 529 (April 30, 2019): 403–24. http://dx.doi.org/10.1080/01621459.2018.1555092.

23

He, Xi-Lei, Zhen-Hua He, Rui-Liang Wang, Xu-Ben Wang, and Lian Jiang. "Calculations of rock matrix modulus based on a linear regression relation." Applied Geophysics 8, no. 3 (September 2011): 155–62. http://dx.doi.org/10.1007/s11770-011-0290-4.

24

Furno, Marilena. "The information matrix test in the linear regression with ARMA errors." Journal of the Italian Statistical Society 5, no. 3 (December 1996): 369–85. http://dx.doi.org/10.1007/bf02589097.

25

Kostov, Philip. "Choosing the Right Spatial Weighting Matrix in a Quantile Regression Model." ISRN Economics 2013 (January 28, 2013): 1–16. http://dx.doi.org/10.1155/2013/158240.

Abstract:
This paper proposes computationally tractable methods for selecting the appropriate spatial weighting matrix in the context of a spatial quantile regression model. This selection is a notoriously difficult problem even in linear spatial models and is even more difficult in a quantile regression setup. The proposal is illustrated by an empirical example and manages to produce tractable models. One important feature of the proposed methodology is that, by allowing different degrees and forms of spatial dependence across quantiles, it further relaxes the usual quantile restriction attributable to linear quantile regression. In this way we can obtain a model that is more robust with regard to potential functional misspecification, while nevertheless preserving the parametric rate of convergence and the established inferential apparatus associated with the linear quantile regression approach.
26

Gilyén, András, Zhao Song, and Ewin Tang. "An improved quantum-inspired algorithm for linear regression." Quantum 6 (June 30, 2022): 754. http://dx.doi.org/10.22331/q-2022-06-30-754.

Abstract:
We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters '09] for low-rank matrices [Wossnig, Zhao, and Prakash, Physical Review Letters '18], when the input matrix A is stored in a data structure applicable for QRAM-based state preparation. Namely, suppose we are given an A ∈ C^(m×n) with minimum non-zero singular value σ which supports certain efficient ℓ2-norm importance sampling queries, along with a b ∈ C^m. Then, for some x ∈ C^n satisfying ‖x − A⁺b‖ ≤ ε‖A⁺b‖, we can output a measurement of |x⟩ in the computational basis and output an entry of x with classical algorithms that run in Õ(‖A‖_F⁶‖A‖⁶/(σ¹²ε⁴)) and Õ(‖A‖_F⁶‖A‖²/(σ⁸ε⁴)) time, respectively. This improves on previous "quantum-inspired" algorithms in this line of research by at least a factor of ‖A‖¹⁶/(σ¹⁶ε²) [Chia, Gilyén, Li, Lin, Tang, and Wang, STOC '20]. As a consequence, we show that quantum computers can achieve at most a factor-of-12 speedup for linear regression in this QRAM data structure setting and related settings. Our work applies techniques from sketching algorithms and optimization to the quantum-inspired literature. Unlike earlier works, this is a promising avenue that could lead to feasible implementations of classical regression in quantum-inspired settings, for comparison against future quantum computers.
27

Wang, Lei, and Su Feng Guo. "Measurement Based on the "Group Linear Regression Method" of Harmonic Impedance." Advanced Materials Research 354-355 (October 2011): 1051–57. http://dx.doi.org/10.4028/www.scientific.net/amr.354-355.1051.

Abstract:
Harmonic pollution monitoring and control depends on an accurate estimate of the harmonic impedance, and thus on an accurate assessment of the harmonic emission level at the point of common coupling. This paper analyzes star- and delta-connected three-phase asymmetrical power systems and establishes a harmonic impedance matrix estimation model for three-phase unbalanced power systems. Based on a large number of simulated system and load data at the point of common coupling, the real and imaginary parts of the harmonic impedance are separated, and each element of the system-side harmonic impedance matrix is estimated using a grouped multiple linear regression method. Simulation and error analysis show that the resulting model is accurate and effective.
28

Bazilevskiy, M. P. "Solving an Optimization Problem for Estimating Fully Connected Linear Regression Models." Моделирование и анализ данных (Modelling and Data Analysis) 14, no. 1 (April 11, 2024): 121–34. http://dx.doi.org/10.17759/mda.2024140108.

Abstract:
This article is devoted to the problem of estimating fully connected linear regression models using the maximum likelihood method. Previously, a special numerical method was developed for this purpose, based on solving a nonlinear system using the method of simple iterations; at the same time, the issues of choosing initial approximations and fulfilling sufficient conditions for convergence were not studied. This article proposes a new method for solving the optimization problem of estimating fully connected regressions, similar to the method of estimating orthogonal regressions. It has been proven that, with equal error variances of the interconnected variables, the estimates of the b-parameters of the fully connected regression are equal to the components of the eigenvector corresponding to the smallest eigenvalue of the inverse covariance matrix. And if the ratios of the error variances of the variables are equal to the ratios of the variances of the variables, then the b-parameter estimates are equal to the components of the eigenvector corresponding to the smallest eigenvalue of the inverse correlation matrix, multiplied by the specific ratios of the standard deviations of the variables. A numerical experiment was carried out to confirm the correctness of the developed mathematical apparatus. The proposed method can be effectively used when solving problems of constructing multiple fully connected linear regressions.
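In the equal-error-variance case, the stated result reduces the estimation to an eigenvector computation. The sketch below follows the abstract's prescription literally; it illustrates the result only and is not the author's full numerical method.

```python
import numpy as np

def fully_connected_b(X):
    """b-parameter estimates for equal error variances, as stated in the
    abstract: the eigenvector belonging to the smallest eigenvalue of the
    inverse covariance matrix of the observed variables."""
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    eigvals, eigvecs = np.linalg.eigh(S_inv)   # eigenvalues in ascending order
    return eigvecs[:, 0]                       # eigenvector of the smallest one
```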
29

Barnabani, Marco. "A Parametric Test to Discriminate Between a Linear Regression Model and a Linear Latent Growth Model." International Journal of Statistics and Probability 6, no. 3 (May 14, 2017): 157. http://dx.doi.org/10.5539/ijsp.v6n3p157.

Abstract:
In longitudinal studies with subjects measured repeatedly across time, an important problem is how to select the model generating the data, choosing between a linear regression model and a linear latent growth model. Approaches based both on information criteria and on asymptotic hypothesis tests of the variances of the 'random' components are widely used but not completely satisfactory. We propose a test statistic based on the trace of the product of an estimate of a variance covariance matrix defined when data come from a linear regression model and a sample variance covariance matrix. We studied the sampling distribution of the test statistic, giving a representation in terms of an infinite series of generalized F-distributions. Knowledge about this distribution allows us to make inference within a classical hypothesis testing framework. The test statistic can be used by itself to discriminate between the two models and/or, if duly modified, it can be used to test randomness of single components. Moreover, in conjunction with some model selection criteria, it gives additional information which can help in choosing the model.
30

Lipovetsky, Stan. "Meaningful Regression Coefficients Built by Data Gradients." Advances in Adaptive Data Analysis 2, no. 4 (October 2010): 451–62. http://dx.doi.org/10.1142/s1793536910000574.

Abstract:
Multiple regression coefficients define the change in the dependent variable due to a change in one predictor while all other predictors are held constant. Rearranging the data into paired differences of observations and keeping only the biggest changes yields a matrix in which essentially a single variable changes, which is close to an orthogonal design, so there is no impact of multicollinearity on the regression. A similar approach is used to obtain meaningful coefficients for nonlinear regressions, with coefficients of half-elasticity, elasticity, and odds elasticity due to the gradients in each predictor. In contrast to regular linear and nonlinear regressions, the suggested technique produces interpretable coefficients that are not prone to multicollinearity effects.
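A minimal sketch of the paired-differences idea, under the (assumed) reading that for each coefficient one keeps only the pairs in which that predictor's change dominates the others; `dominance` is a hypothetical threshold, not a value from the paper.

```python
import numpy as np

def gradient_coefficients(X, y, dominance=3.0):
    """For each predictor, regress paired differences of y on paired
    differences of that predictor, using only pairs where its change
    dominates all other predictors' changes (a near-orthogonal design)."""
    n, p = X.shape
    i, j = np.triu_indices(n, k=1)
    dX, dy = X[i] - X[j], y[i] - y[j]
    coefs = np.empty(p)
    for k in range(p):
        other = np.abs(np.delete(dX, k, axis=1)).max(axis=1, initial=0.0)
        keep = np.abs(dX[:, k]) > dominance * other
        if not keep.any():
            keep = slice(None)                 # fall back to all pairs
        coefs[k] = dy[keep] @ dX[keep, k] / (dX[keep, k] @ dX[keep, k])
    return coefs
```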
31

Zhai, Qianru, Ye Tian, and Jingyue Zhou. "Linear Twin Quadratic Surface Support Vector Regression." Mathematical Problems in Engineering 2020 (April 4, 2020): 1–18. http://dx.doi.org/10.1155/2020/3238129.

Abstract:
Twin support vector regression (TSVR) generates two nonparallel hyperplanes by solving a pair of smaller-sized problems instead of a single larger-sized problem as in the standard SVR. Due to its efficiency, TSVR is frequently applied in various areas. In this paper, we propose a totally new version of TSVR named Linear Twin Quadratic Surface Support Vector Regression (LTQSSVR), which directly uses two quadratic surfaces in the original space for regression. It is worth noting that our new approach not only avoids the notoriously difficult and time-consuming task of searching for a suitable kernel function and its corresponding parameters, as in traditional SVR-based methods, but also achieves better generalization performance. Besides, in order to further improve the efficiency and robustness of the model, we introduce the 1-norm to measure the error. The linear programming structure of the new model skips the matrix inverse operation and makes the model solvable for huge-sized problems. The capability of handling large-sized problems is very important in this big-data era. In addition, to verify the effectiveness and efficiency of our model, we compare it with some well-known methods. The numerical experiments on 2 artificial data sets and 12 benchmark data sets demonstrate the validity and applicability of our proposed method.
32

Nzabanita, Joseph, Dietrich von Rosen, and Martin Singull. "Bilinear regression model with Kronecker and linear structures for the covariance matrix." Afrika Statistika 10, no. 2 (December 31, 2015): 837–27. http://dx.doi.org/10.16929/as/2015.827.77.

33

Cribari-Neto, Francisco, and Wilton Bernardino da Silva. "A new heteroskedasticity-consistent covariance matrix estimator for the linear regression model." AStA Advances in Statistical Analysis 95, no. 2 (November 4, 2010): 129–46. http://dx.doi.org/10.1007/s10182-010-0141-2.

34

Drygas, Hilmar. "A note on the inverse-partitioned-matrix method in linear regression analysis." Linear Algebra and its Applications 67 (June 1985): 275–77. http://dx.doi.org/10.1016/0024-3795(85)90201-0.

35

Akdeniz, Fikri, and Hamza Erol. "Mean Squared Error Matrix Comparisons of Some Biased Estimators in Linear Regression." Communications in Statistics - Theory and Methods 32, no. 12 (January 12, 2003): 2389–413. http://dx.doi.org/10.1081/sta-120025385.

36

Pan, Nang-Fei, Tzu-Chieh Lin, and Nai-Hsin Pan. "Estimating bridge performance based on a matrix-driven fuzzy linear regression model." Automation in Construction 18, no. 5 (August 2009): 578–86. http://dx.doi.org/10.1016/j.autcon.2008.12.005.

37

Auerbach, Eric. "Identification and Estimation of a Partially Linear Regression Model Using Network Data." Econometrica 90, no. 1 (2022): 347–65. http://dx.doi.org/10.3982/ecta19794.

Abstract:
I study a regression model in which one covariate is an unknown function of a latent driver of link formation in a network. Rather than specify and fit a parametric network formation model, I introduce a new method based on matching pairs of agents with similar columns of the squared adjacency matrix, the ijth entry of which contains the number of other agents linked to both agents i and j. The intuition behind this approach is that for a large class of network formation models the columns of the squared adjacency matrix characterize all of the identifiable information about individual linking behavior. In this paper, I describe the model, formalize this intuition, and provide consistent estimators for the parameters of the regression model.
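The matching ingredient can be sketched in a few lines: square the adjacency matrix, then pair each agent with the agents whose columns of A² are closest in Euclidean distance. This shows only the matching step, not the paper's full identification and estimation procedure.

```python
import numpy as np

def nearest_by_squared_adjacency(A, k=1):
    """For each agent, return the k agents whose columns of A @ A (counts
    of common neighbours) are closest in Euclidean distance. The O(n^3)
    pairwise-distance tensor is fine for small networks only."""
    A2 = A @ A
    D = np.linalg.norm(A2[:, :, None] - A2[:, None, :], axis=0)
    np.fill_diagonal(D, np.inf)               # rule out self-matching
    return np.argsort(D, axis=1)[:, :k]
```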
38

Wei, Chuanhua, and Xiaonan Wang. "Principal Components Regression Estimation in Semiparametric Partially Linear Additive Models." International Journal of Statistics and Probability 5, no. 1 (November 25, 2015): 46. http://dx.doi.org/10.5539/ijsp.v5n1p46.

Abstract:
The partially linear additive model is useful in statistical modelling as a multivariate nonparametric fitting technique. This paper considers statistical inference for the semiparametric model in the presence of multicollinearity. Based on the profile least-squares approach, we propose a novel principal components regression estimator for the parametric component, and provide the asymptotic bias and covariance matrix of the proposed estimator. Some simulations are conducted to examine the performance of the proposed estimators, and the results are satisfactory.
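For the parametric component, the principal components regression step looks roughly as follows. This is a generic PCR sketch on a plain design matrix, assuming the nonparametric additive part has already been profiled out; the paper's profile least-squares machinery is not reproduced.

```python
import numpy as np

def pcr(X, y, n_comp):
    """Generic principal components regression: regress y on the leading
    principal-component scores, then map back to original coordinates."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_comp].T                         # leading component scores
    gamma = np.linalg.solve(Z.T @ Z, Z.T @ (y - y.mean()))
    beta = Vt[:n_comp].T @ gamma                   # slope vector in X-space
    intercept = y.mean() - X.mean(axis=0) @ beta
    return beta, intercept
```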
39

Kannan, Ravindran, and Santosh Vempala. "Randomized algorithms in numerical linear algebra." Acta Numerica 26 (May 1, 2017): 95–135. http://dx.doi.org/10.1017/s0962492917000058.

Abstract:
This survey provides an introduction to the use of randomization in the design of fast algorithms for numerical linear algebra. These algorithms typically examine only a subset of the input to solve basic problems approximately, including matrix multiplication, regression and low-rank approximation. The survey describes the key ideas and gives complete proofs of the main results in the field. A central unifying idea is sampling the columns (or rows) of a matrix according to their squared lengths.
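The survey's central idea, length-squared sampling, fits in a few lines: sample columns of A (and the matching rows of B) with probability proportional to their squared lengths, and rescale so that the sampled product is unbiased. A minimal sketch:

```python
import numpy as np

def approx_matmul(A, B, s, rng=None):
    """Approximate A @ B from s sampled columns/rows, drawn with
    probability proportional to the squared column lengths of A."""
    if rng is None:
        rng = np.random.default_rng()
    p = (A * A).sum(axis=0)
    p = p / p.sum()                              # length-squared probabilities
    idx = rng.choice(A.shape[1], size=s, p=p)
    scale = 1.0 / (s * p[idx])                   # importance-sampling weights
    return (A[:, idx] * scale) @ B[idx, :]
```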
40

Shah Alam, Bhaskar Bakshi, Rupjit Maity, Sulekha Das, and Avijit Kumar Chaudhuri. "Heart Disease Diagnosis and Prediction using Multi Linear Regression." International Journal of Engineering Technology and Management Sciences 7, no. 2 (2023): 210–21. http://dx.doi.org/10.46647/ijetms.2023.v07i02.025.

Abstract:
Correct prediction of heart disease can prevent life threats, while incorrect prediction can prove fatal. In this paper a machine learning algorithm is applied to compare results and analysis on a primary dataset. The dataset consists of 46 attributes, among which information gain is used to select 24 features for the analysis. Various promising results are achieved and are validated using accuracy and a confusion matrix. The dataset contains some irrelevant features, which are handled, and the data are normalized to obtain better results. Using the machine learning approach, 77.78% accuracy was obtained. Multiple linear regression is used to construct and validate the prediction system. Our experimental results show that multiple linear regression is suitable for modelling and predicting cholesterol.
41

Nakonechnyi, Alexander, Grigoriy Kudin, Taras Zinko, and Petr Zinko. "Minimax Root-Mean-Square Estimates of Matrix Parameters in Linear Regression Problems under Uncertainty." Journal of Automation and Information Sciences 4 (July 1, 2021): 28–37. http://dx.doi.org/10.34229/1028-0979-2021-4-3.

Abstract:
The paper investigates parameter estimation in linear regression problems with random matrix coefficients. Given that random linear functions of unknown matrices are observed with random errors that have unknown correlation matrices, the problems of guaranteed mean square estimation of linear functions of matrices are studied. Estimates of the upper and lower guaranteed standard errors of linear estimates of observations of linear functions of matrices are obtained in the case when the sets containing the unknown matrices and the correlation matrices of the observation errors are known, and it is proved that for some particular cases such estimates are exact. Assuming that the sets are bounded, convex and closed, more accurate two-sided estimates are obtained for the guaranteed errors. Conditions are found under which the guaranteed mean squared errors approach zero as the number of observations increases. Necessary and sufficient conditions for the unbiasedness of linear estimates of linear functions of matrices are provided. The notion of quasi-optimal estimates for linear functions of matrices is introduced, and it is proved that in the class of unbiased estimates quasi-optimal estimates exist and are unique; for such estimates, conditions for the convergence to zero of the guaranteed mean-square errors are obtained. Also, for linear estimates of unknown matrices, the concept of quasi-minimax estimates is introduced and it is confirmed that they are unbiased. For special sets, which include an unknown matrix and the correlation matrices of the observation errors, such estimates are expressed through the solution of linear operator equations in a finite-dimensional space. For quasi-minimax estimates, under certain assumptions, the form of the guaranteed mean squared error of the unknown matrix is found, and it is shown that such errors are bounded by the sum of the traces of known matrices. An example of finding a minimax unbiased linear estimate is given for a special type of random matrices included in the observation equation.
42

Teng, Guangqiang, Boping Tian, Yuanyuan Zhang, and Sheng Fu. "Asymptotics of Subsampling for Generalized Linear Regression Models under Unbounded Design." Entropy 25, no. 1 (December 31, 2022): 84. http://dx.doi.org/10.3390/e25010084.

Abstract:
Optimal subsampling is a statistical methodology for generalized linear models (GLMs) that allows quick inference about parameter estimation in massive-data regression. The existing literature only considers bounded covariates. In this paper, the asymptotic normality of the subsampling M-estimator based on the Fisher information matrix is obtained. Then, we study the asymptotic properties of subsampling estimators of unbounded GLMs with nonnatural links, including conditional and unconditional asymptotic properties.
43

Pohar, Maja, Mateja Blas, and Sandra Turk. "Comparison of logistic regression and linear discriminant analysis." Advances in Methodology and Statistics 1, no. 1 (January 1, 2004): 143–61. http://dx.doi.org/10.51936/ayrt6204.

Abstract:
Two of the most widely used statistical methods for analyzing categorical outcome variables are linear discriminant analysis and logistic regression. While both are appropriate for the development of linear classification models, linear discriminant analysis makes more assumptions about the underlying data. Hence, it is assumed that logistic regression is the more flexible and more robust method in case of violations of these assumptions. In this paper we consider the problem of choosing between the two methods, and set some guidelines for proper choice. The comparison between the methods is based on several measures of predictive accuracy. The performance of the methods is studied by simulations. We start with an example where all the assumptions of the linear discriminant analysis are satisfied and observe the impact of changes regarding the sample size, covariance matrix, Mahalanobis distance and direction of distance between group means. Next, we compare the robustness of the methods towards categorisation and non-normality of explanatory variables in a closely controlled way. We show that the results of LDA and LR are close whenever the normality assumptions are not too badly violated, and set some guidelines for recognizing these situations. We discuss the inappropriateness of LDA in all other cases.
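A quick way to reproduce this kind of comparison, assuming scikit-learn is available (the paper's own simulations are far more controlled), is to cross-validate both classifiers on Gaussian data with a shared covariance matrix:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
cov = [[1.0, 0.5], [0.5, 1.0]]                   # shared covariance matrix
X = np.vstack([rng.multivariate_normal([0, 0], cov, 200),
               rng.multivariate_normal([1, 1], cov, 200)])
y = np.repeat([0, 1], 200)

for model in (LinearDiscriminantAnalysis(), LogisticRegression()):
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(type(model).__name__, round(acc, 3))   # similar accuracy expected here
```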
44

Hidayah Mohamed Isa, Noor, Mahmod Othman, and Samsul Ariffin Abdul Karim. "Multivariate Matrix for Fuzzy Linear Regression Model to Analyse The Taxation in Malaysia." International Journal of Engineering & Technology 7, no. 4.33 (December 9, 2018): 78. http://dx.doi.org/10.14419/ijet.v7i4.33.23490.

Abstract:
A multivariate matrix is proposed to find the best factor for fuzzy linear regression (FLR) with symmetric triangular fuzzy numbers (TFNs). The goal of this paper is to select, among four variables, the factor that best explains tax revenue. Eighteen years of data on the variables were taken from IndexMundi and World Bank Data. The model is found to successfully explain the relationship between the independent variables and the response variable: sixty-six percent of the variance of tax revenue is explained by Gross Domestic Product, Inflation, Unemployment and Merchandise Trade. The introduction of a multivariate matrix for fuzzy linear regression in taxation is a first attempt to analyse the relationship of tax revenue with the independent variables.
45

Abdelhadi, Yaser. "Linear modeling and regression for exponential engineering functions by a generalized ordinary least squares method." International Journal of Engineering & Technology 3, no. 2 (April 20, 2014): 174. http://dx.doi.org/10.14419/ijet.v3i2.2023.

Abstract:
Linear transformations are performed for selected exponential engineering functions. The optimum values of the parameters of the linear model equation that fits the set of experimental or simulated data points are determined by the linear least squares method. The classical and matrix forms of ordinary least squares are illustrated.
Keywords: Exponential Functions; Linear Modeling; Ordinary Least Squares; Parametric Estimation; Regression Steps.
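As a worked example of the linearization-plus-OLS recipe, the sketch below fits y = a·e^(bx) by regressing ln y on x; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 4.0, 40)
y = 3.0 * np.exp(0.7 * x) * np.exp(0.05 * rng.normal(size=x.size))  # noisy y = a*e^(bx)

# Linearize: ln y = ln a + b x, then ordinary least squares in matrix form.
X = np.column_stack([np.ones_like(x), x])
coef = np.linalg.solve(X.T @ X, X.T @ np.log(y))
a_hat, b_hat = np.exp(coef[0]), coef[1]          # back-transform the intercept
```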
46

du Toit, A. S., J. Booysen, and J. J. Human. "Use of linear regression and correlation matrix in the evaluation of CERES3 (Maize)." South African Journal of Plant and Soil 14, no. 4 (January 1997): 177–82. http://dx.doi.org/10.1080/02571862.1997.10635104.

47

Gaffke, N., and R. Schwabe. "Quasi-Newton algorithm for optimal approximate linear regression design: Optimization in matrix space." Journal of Statistical Planning and Inference 198 (January 2019): 62–78. http://dx.doi.org/10.1016/j.jspi.2018.03.005.

48

Bertsimas, Dimitris, and Martin S. Copenhaver. "Characterization of the equivalence of robustification and regularization in linear and matrix regression." European Journal of Operational Research 270, no. 3 (November 2018): 931–42. http://dx.doi.org/10.1016/j.ejor.2017.03.051.

49

Peña, Daniel, and Victor J. Yohai. "The Detection of Influential Subsets in Linear Regression by Using an Influence Matrix." Journal of the Royal Statistical Society: Series B (Methodological) 57, no. 1 (January 1995): 145–56. http://dx.doi.org/10.1111/j.2517-6161.1995.tb02020.x.

50

Peña, Daniel, and Victor J. Yohai. "The Detection of Influential Subsets in Linear Regression by Using an Influence Matrix." Journal of the Royal Statistical Society: Series B (Methodological) 57, no. 3 (September 1995): 611. http://dx.doi.org/10.1111/j.2517-6161.1995.tb02051.x.
