Journal articles on the topic 'Backfitting algorithm'

To see the other types of publications on this topic, follow the link: Backfitting algorithm.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 15 journal articles for your research on the topic 'Backfitting algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Ansley, Craig F., and Robert Kohn. "Convergence of the backfitting algorithm for additive models." Journal of the Australian Mathematical Society. Series A. Pure Mathematics and Statistics 57, no. 3 (December 1994): 316–29. http://dx.doi.org/10.1017/s1446788700037721.

Abstract:
The backfitting algorithm is an iterative procedure for fitting additive models in which, at each step, one component is estimated keeping the other components fixed, the algorithm proceeding component by component and iterating until convergence. Convergence of the algorithm has been studied by Buja, Hastie, and Tibshirani (1989). We give a simple, but more general, geometric proof of the convergence of the backfitting algorithm when the additive components are estimated by penalized least squares. Our treatment covers spline smoothers and structural time series models, and we give a full discussion of the degenerate case. Our proof is based on Halperin's (1962) generalization of von Neumann's alternating projection theorem.
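To make the iteration concrete, here is a minimal, illustrative Python sketch of backfitting for an additive model. It uses a crude running-mean smoother as a stand-in for the penalized least-squares or spline smoothers analysed in the paper; all function names, the toy data, and the parameter choices below are ours, not the authors'.

```python
import numpy as np

def running_mean_smoother(x, r, window=21):
    # Crude stand-in for a spline / penalized least-squares smoother:
    # smooth the partial residuals r against the covariate x.
    order = np.argsort(x)
    smoothed_sorted = np.convolve(r[order], np.ones(window) / window, mode="same")
    out = np.empty_like(r)
    out[order] = smoothed_sorted
    return out

def backfit_additive(X, y, n_iter=50, tol=1e-6):
    # Cycle over components: re-estimate one f_j at a time, holding the others fixed.
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((p, n))
    for _ in range(n_iter):
        f_old = f.copy()
        for j in range(p):
            partial_residual = y - alpha - f.sum(axis=0) + f[j]
            f[j] = running_mean_smoother(X[:, j], partial_residual)
            f[j] -= f[j].mean()  # centre each component for identifiability
        if np.max(np.abs(f - f_old)) < tol:
            break
    return alpha, f

# Toy example: y = sin(x1) + x2^2 + noise
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(500)
alpha, f_hat = backfit_additive(X, y)
```

When the smoothers are linear operators, as in the penalized least-squares setting studied in the paper, each full sweep of this loop is a cycle of projections, which is what makes Halperin's alternating projection theorem applicable.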
2

Mendes, Jérôme, Francisco Souza, Rui Araújo, and Saeid Rastegar. "Neo-fuzzy neuron learning using backfitting algorithm." Neural Computing and Applications 31, no. 8 (December 30, 2017): 3609–18. http://dx.doi.org/10.1007/s00521-017-3301-4.

3

Härdle, W., and P. Hall. "On the backfitting algorithm for additive regression models." Statistica Neerlandica 47, no. 1 (March 1993): 43–57. http://dx.doi.org/10.1111/j.1467-9574.1993.tb01405.x.

4

Jacobs, Robert A., Wenxin Jiang, and Martin A. Tanner. "Factorial Hidden Markov Models and the Generalized Backfitting Algorithm." Neural Computation 14, no. 10 (October 1, 2002): 2415–37. http://dx.doi.org/10.1162/08997660260293283.

Abstract:
Previous researchers developed new learning architectures for sequential data by extending conventional hidden Markov models through the use of distributed state representations. Although exact inference and parameter estimation in these architectures is computationally intractable, Ghahramani and Jordan (1997) showed that approximate inference and parameter estimation in one such architecture, factorial hidden Markov models (FHMMs), is feasible in certain circumstances. However, the learning algorithm proposed by these investigators, based on variational techniques, is difficult to understand and implement and is limited to the study of real-valued data sets. This chapter proposes an alternative method for approximate inference and parameter estimation in FHMMs based on the perspective that FHMMs are a generalization of a well-known class of statistical models known as generalized additive models (GAMs; Hastie & Tibshirani, 1990). Using existing statistical techniques for GAMs as a guide, we have developed the generalized backfitting algorithm. This algorithm computes customized error signals for each hidden Markov chain of an FHMM and then trains each chain one at a time using conventional techniques from the hidden Markov models literature. Relative to previous perspectives on FHMMs, we believe that the viewpoint taken here has a number of advantages. First, it places FHMMs on firm statistical foundations by relating them to a class of models that are well studied in the statistics community, yet it generalizes this class of models in an interesting way. Second, it leads to an understanding of how FHMMs can be applied to many different types of time-series data, including Bernoulli and multinomial data, not just data that are real valued. Finally, it leads to an effective learning procedure for FHMMs that is easier to understand and easier to implement than existing learning procedures. Simulation results suggest that FHMMs trained with the generalized backfitting algorithm are a practical and powerful tool for analyzing sequential data.
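The chain-by-chain training loop described above can be sketched compactly. The toy code below assumes an additive Gaussian FHMM (the observation is the sum of the chains' contributions plus noise) and uses the hmmlearn package's GaussianHMM as the "conventional technique" for refitting a single chain; the residual that each chain is fitted to plays the role of the customized error signal. This is our simplified reading of the idea, not the authors' implementation, and the toy data generator uses i.i.d. states merely as stand-ins for Markov chains.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def generalized_backfitting_fhmm(y, n_chains=2, n_states=2, n_sweeps=5):
    """Backfitting-style training of an additive Gaussian FHMM: refit one chain
    at a time against the residual left by the other chains' current fits."""
    y = y.reshape(-1, 1)
    chains = [GaussianHMM(n_components=n_states, n_iter=50) for _ in range(n_chains)]
    contrib = np.zeros((n_chains, len(y)))          # each chain's expected contribution
    for _ in range(n_sweeps):
        for m in range(n_chains):
            # Customized error signal for chain m: data minus the other chains' fits.
            target = y[:, 0] - (contrib.sum(axis=0) - contrib[m])
            chains[m].fit(target.reshape(-1, 1))
            post = chains[m].predict_proba(target.reshape(-1, 1))
            contrib[m] = post @ chains[m].means_[:, 0]   # posterior-expected mean
    return chains, contrib

# Toy sequence: two hidden binary processes, each adding its own offset.
rng = np.random.default_rng(0)
T = 400
s1 = (rng.random(T) < 0.5).astype(float)   # i.i.d. stand-ins for Markov chains
s2 = (rng.random(T) < 0.5).astype(float)
y = 2.0 * s1 + 0.7 * s2 + 0.2 * rng.standard_normal(T)
chains, contrib = generalized_backfitting_fhmm(y)
```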
5

ABEL, MARKUS. "NONPARAMETRIC MODELING AND SPATIOTEMPORAL DYNAMICAL SYSTEMS." International Journal of Bifurcation and Chaos 14, no. 06 (June 2004): 2027–39. http://dx.doi.org/10.1142/s0218127404010382.

Abstract:
This article describes how to use statistical data analysis to obtain models directly from data. The focus is on finding nonlinearities within a generalized additive model. These models are found by means of backfitting or more general algorithms, such as the alternating conditional expectation (ACE) algorithm. The method is illustrated on numerically generated data. As an application, the example of vortex ripple dynamics, a highly complex fluid-granular system, is treated.
6

Yang, Ting, and Zhiqiang Tan. "Backfitting algorithms for total-variation and empirical-norm penalized additive modelling with high-dimensional data." Stat 7, no. 1 (2018): e198. http://dx.doi.org/10.1002/sta4.198.

7

Ting, Jo-Anne, Aaron D'Souza, Sethu Vijayakumar, and Stefan Schaal. "Efficient Learning and Feature Selection in High-Dimensional Regression." Neural Computation 22, no. 4 (April 2010): 831–86. http://dx.doi.org/10.1162/neco.2009.02-08-702.

Abstract:
We present a novel algorithm for efficient learning and feature selection in high-dimensional regression problems. We arrive at this model through a modification of the standard regression model, enabling us to derive a probabilistic version of the well-known statistical regression technique of backfitting. Using the expectation-maximization algorithm, along with variational approximation methods to overcome intractability, we extend our algorithm to include automatic relevance detection of the input features. This variational Bayesian least squares (VBLS) approach retains its simplicity as a linear model, but offers a novel statistically robust black-box approach to generalized linear regression with high-dimensional inputs. It can be easily extended to nonlinear regression and classification problems. In particular, we derive the framework of sparse Bayesian learning, the relevance vector machine, with VBLS at its core, offering significant computational and robustness advantages for this class of methods. The iterative nature of VBLS makes it most suitable for real-time incremental learning, which is crucial especially in the application domain of robotics, brain-machine interfaces, and neural prosthetics, where real-time learning of models for control is needed. We evaluate our algorithm on synthetic and neurophysiological data sets, as well as on standard regression and classification benchmark data sets, comparing it with other competitive statistical approaches and demonstrating its suitability as a drop-in replacement for other generalized linear regression techniques.
8

Skhosana, Sphiwe B., Salomon M. Millard, and Frans H. J. Kanfer. "A Novel EM-Type Algorithm to Estimate Semi-Parametric Mixtures of Partially Linear Models." Mathematics 11, no. 5 (February 22, 2023): 1087. http://dx.doi.org/10.3390/math11051087.

Abstract:
Semi- and non-parametric mixtures of normal regression models are a flexible class of mixture-of-regression models. These models assume that the component mixing proportions, regression functions and/or variances are non-parametric functions of the covariates. Among this class of models, the semi-parametric mixture of partially linear models (SPMPLMs) combines the desirable interpretability of a parametric model and the flexibility of a non-parametric model. However, local-likelihood estimation of the non-parametric term poses a computational challenge. Traditional EM optimisation of the local-likelihood functions is not appropriate due to the label-switching problem. Separately applying the EM algorithm on each local-likelihood function will likely result in non-smooth function estimates. This is because the local responsibilities calculated at the E-step of each local EM are not guaranteed to be aligned. To prevent this, the EM algorithm must be modified so that the same (global) responsibilities are used at each local M-step. In this paper, we propose a one-step backfitting EM-type algorithm to estimate the SPMPLMs and effectively address the label-switching problem. The proposed algorithm estimates the non-parametric term using each set of local responsibilities in turn and then incorporates a smoothing step to obtain the smoothest estimate. In addition, to reduce the computational burden imposed by the use of the partial-residuals estimator of the parametric term, we propose a plug-in estimator. The performance and practical usefulness of the proposed methods were tested using a simulated dataset and two real datasets, respectively. Our finite sample analysis revealed that the proposed methods are effective at solving the label-switching problem and producing reasonable and interpretable results in a reasonable amount of time.
9

GHOSH, ANIL KUMAR, and SMARAJIT BOSE. "FEATURE EXTRACTION FOR CLASSIFICATION USING STATISTICAL NETWORKS." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1103–26. http://dx.doi.org/10.1142/s0218001407005855.

Abstract:
In a classification problem, quite often the dimension of the measurement vector is large. Some of these measurements may not be important for separating the classes. Removal of these measurement variables not only reduces the computational cost but also leads to better understanding of class separability. There are some methods in the existing literature for reducing the dimensionality of a classification problem without losing much of the separability information. However, these dimension reduction procedures usually work well for linear classifiers. In the case where competing classes are not linearly separable, one has to look for ideal "features", which could be transformations of one or more measurements. In this paper, we attempt to tackle both problems, dimension reduction and feature extraction, by considering a projection pursuit regression model. The single hidden layer perceptron model and some other popular models can be viewed as special cases of this model. An iterative algorithm based on backfitting is proposed to select the features dynamically, and cross-validation is used to select the ideal number of features. We carry out an extensive simulation study to show the effectiveness of this fully automatic method.
10

Łabęda-Grudziak, Zofia M. "The Disturbance Detection in the Outlet Temperature of a Coal Dust–Air Mixture on the Basis of the Statistical Model." Energies 15, no. 19 (October 4, 2022): 7302. http://dx.doi.org/10.3390/en15197302.

Abstract:
The reliability of a coal mill's operation is strongly connected with optimizing the combustion process. Monitoring the temperature of a dust–air mixture significantly increases the coal mill's operational efficiency and safety. Reliable and accurate information about disturbances can help with optimization actions. The article describes the application of an additive regression model and data mining techniques for the identification of the temperature model of a dust–air mixture at the outlet of a coal mill. This is a new approach to the problem of power unit modeling, which extends the possibilities of multivariate and nonlinear estimation by using the backfitting algorithm with flexible nonparametric smoothing techniques. The designed model was used to construct a detection system for disturbances in the position of the hot and cold air dampers. To make the detection system robust, statistical measures of the differences between the real and modeled temperature signals of the dust–air mixture were used. The research was conducted on real measurement data recorded in a Polish power unit with a capacity of 200 MW. The high quality of the identified model confirms the correctness of the presented method. The model is highly sensitive to any disturbances in the cold and hot air damper position. The results show that the suggested method improves the usability of statistical modeling, which creates good prospects for future applications of additive models in diagnosing faults and cyber-attacks in power systems.
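The residual-based detection logic sketched in this abstract can be illustrated in a few lines of code: compare the measured temperature with the model's prediction and raise a flag when a windowed residual statistic drifts away from its nominal level. The window length, threshold, and signal names below are illustrative choices of ours, not the statistics used in the paper.

```python
import numpy as np

def detect_disturbances(measured, predicted, window=30, z_threshold=4.0):
    """Flag samples where a windowed mean of the model residuals drifts away
    from its nominal level. Window length and threshold are illustrative."""
    residuals = measured - predicted
    # Nominal residual level and spread estimated from the start of the record,
    # which is assumed (for this sketch only) to be disturbance-free.
    nominal = residuals[: 5 * window]
    mu, sigma = nominal.mean(), nominal.std() + 1e-12
    windowed_mean = np.convolve(residuals, np.ones(window) / window, mode="same")
    z = (windowed_mean - mu) / (sigma / np.sqrt(window))
    return np.abs(z) > z_threshold

# Synthetic demo: a step disturbance enters the measured temperature halfway through.
rng = np.random.default_rng(1)
t = np.arange(2000)
predicted = 80 + 5 * np.sin(2 * np.pi * t / 500)   # additive model's prediction (°C)
measured = predicted + rng.normal(0.0, 0.5, t.size)
measured[1200:] += 3.0                             # simulated damper disturbance
flags = detect_disturbances(measured, predicted)
print("first flagged sample:", int(np.argmax(flags)))
```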
11

Amato, Umberto, Anestis Antoniadis, Italia De Feis, and Yannig Goude. "Estimation and group variable selection for additive partial linear models with wavelets and splines." South African Statistical Journal 51, no. 2 (2022). http://dx.doi.org/10.37920/sasj.2017.51.2.1.

Abstract:
In this paper we study sparse, high-dimensional additive partial linear models with nonparametric additive components of heterogeneous smoothness. We review several existing algorithms that have been developed for this problem in the recent literature, highlighting the connections between them, and present some computationally efficient algorithms for fitting such models. To achieve optimal rates in large-sample situations we use hybrid P-splines and block wavelet penalisation techniques combined with adaptive (group) LASSO-like procedures for selecting the additive components in the nonparametric part of the models. Hence, component selection and estimation in the nonparametric part may be viewed as a functional version of estimation and grouped variable selection. This allows us to take advantage of several oracle results which yield asymptotic optimality of estimators in high-dimensional but sparse additive models. Numerical implementations of our procedures for proximal-like algorithms are discussed. Large-sample properties of the estimates and of the model selection are presented, and the results are illustrated with simulated examples and a real data analysis.
Keywords: Additive models, Backfitting, Penalisation, Proximal algorithms, Squared group-LASSO, Splines, Wavelets
12

Ahmed, Syed Ejaz, Dursun Aydın, and Ersin Yılmaz. "A survey of smoothing techniques based on a backfitting algorithm in estimation of semiparametric additive models." WIREs Computational Statistics, December 25, 2022. http://dx.doi.org/10.1002/wics.1605.

13

Hiabu, M., J. P. Nielsen, and T. H. Scheike. "Nonsmooth backfitting for the excess risk additive regression model with two survival time scales." Biometrika, July 8, 2020. http://dx.doi.org/10.1093/biomet/asaa058.

Abstract:
We consider an extension of Aalen’s additive regression model that allows covariates to have effects that vary on two different time scales. The two time scales considered are equal up to a constant for each individual and vary across individuals, such as follow-up time and age in medical studies or calendar time and age in longitudinal studies. The model was introduced in Scheike (2001), where it was solved using smoothing techniques. We present a new backfitting algorithm for estimating the structured model without having to use smoothing. Estimators of the cumulative regression functions on the two time scales are suggested by solving local estimating equations jointly on the two time scales. We provide large-sample properties and simultaneous confidence bands. The model is applied to data on myocardial infarction, providing a separation of the two effects stemming from time since diagnosis and age.
14

Gámiz, María Luz, Anton Kalén, Rafael Nozal-Cañadas, and Rocío Raya-Miranda. "Statistical supervised learning with engineering data: a case study of low frequency noise measured on semiconductor devices." International Journal of Advanced Manufacturing Technology, April 4, 2022. http://dx.doi.org/10.1007/s00170-022-08949-z.

Abstract:
Our practical motivation is the analysis of potential correlations between spectral noise current and threshold voltage from common on-wafer MOSFETs. The usual strategy leads to the use of standard techniques based on Normal linear regression, easily accessible in all statistical software (both free and commercial). However, these statistical methods are not appropriate because the assumptions they rely on are not met. More sophisticated methods are required. A new strategy is proposed, based on modern nonparametric techniques that are data-driven and thus free from questionable parametric assumptions. A backfitting algorithm accounting for random effects and nonparametric regression is designed and implemented. The nature of the correlation between threshold voltage and noise is examined by conducting a statistical test, which is based on a novel technique that summarizes all the relevant information in the data in a color map. The way the results are presented in the plot makes it easy for a non-expert in data analysis to understand what underlies the data. The good performance of the method is proven through simulations, and it is applied to a data case in a field where these modern statistical techniques are novel and prove very efficient.
15

Ocampo, Shirlee, and Erniel Barrios. "Sparse Spatial Autoregressive and Spatio-temporal Models for COVID-19 Incidence in the Philippines." Philippine Journal of Science 151, no. 5 (August 8, 2022). http://dx.doi.org/10.56899/151.05.35.

Abstract:
Philippine COVID-19 data contain many gaps resulting from the lack of mass testing, late reporting of test results, and unreported cases, leading to considerable noise and sparsity. A sparse spatial autoregressive model linking COVID-19 incidence and mortality rates to the healthcare system, demographic and economic indicators, disease prevalence, vaccination, urbanity, and environmental factors is proposed. The model, which allows for irregular spatial units and accounts for temporal dependencies within a neighborhood, was estimated using the Cochrane-Orcutt procedure embedded into the backfitting algorithm. Daily COVID-19 cases and deaths across provinces and cities in the National Capital Region (NCR) from 01 April 2020–15 September 2021 show a significant association of the COVID-19 prevalence rate with the number of health workers, revenue of the local government unit (LGU), and prevalence of tuberculosis (TB). On the other hand, COVID-19 mortality rates are associated with the number of health workers, number of licensed COVID-19 testing laboratories, number of cities in an LGU, revenue of the LGU, prevalence rate of cancer, and prevalence rate of TB. The models emphasize the importance of resources available in the LGU that can boost the capabilities of the health care system. Pre-existing health conditions (co-morbidities) in the communities also determine the prevalence and mortality rates of COVID-19 in the Philippines.
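For context on the estimation step mentioned above, the sketch below shows a bare-bones Cochrane-Orcutt iteration for a linear model with AR(1) errors: alternate between estimating the regression coefficients on quasi-differenced data and re-estimating the autocorrelation from the residuals. Embedding such a step inside a backfitting loop over spatial and nonparametric components, as the paper does, is not reproduced here, and all names and values are illustrative.

```python
import numpy as np

def cochrane_orcutt(X, y, n_iter=20, tol=1e-8):
    """Bare-bones Cochrane-Orcutt: alternate OLS on quasi-differenced data with
    re-estimation of the AR(1) coefficient rho from the residuals."""
    X1 = np.column_stack([np.ones(len(y)), X])        # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)     # initial OLS fit
    rho = 0.0
    for _ in range(n_iter):
        resid = y - X1 @ beta
        rho_new = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
        y_star = y[1:] - rho_new * y[:-1]             # quasi-difference the response
        X_star = X1[1:] - rho_new * X1[:-1]           # ... and the regressors
        beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
        if abs(rho_new - rho) < tol:
            rho = rho_new
            break
        rho = rho_new
    return beta, rho

# Toy data with AR(1) errors: y = 1 + 2x + e,  e_t = 0.6 e_{t-1} + u_t
rng = np.random.default_rng(2)
n = 300
x = rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + e
beta_hat, rho_hat = cochrane_orcutt(x.reshape(-1, 1), y)
print("beta:", np.round(beta_hat, 2), "rho:", round(rho_hat, 2))
```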