Academic literature on the topic 'Backfitting algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Backfitting algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Backfitting algorithm"

1

Ansley, Craig F., and Robert Kohn. "Convergence of the backfitting algorithm for additive models." Journal of the Australian Mathematical Society. Series A. Pure Mathematics and Statistics 57, no. 3 (December 1994): 316–29. http://dx.doi.org/10.1017/s1446788700037721.

Abstract:
The backfitting algorithm is an iterative procedure for fitting additive models in which, at each step, one component is estimated keeping the other components fixed, the algorithm proceeding component by component and iterating until convergence. Convergence of the algorithm has been studied by Buja, Hastie, and Tibshirani (1989). We give a simple, but more general, geometric proof of the convergence of the backfitting algorithm when the additive components are estimated by penalized least squares. Our treatment covers spline smoothers and structural time series models, and we give a full discussion of the degenerate case. Our proof is based on Halperin's (1962) generalization of von Neumann's alternating projection theorem.
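The abstract above describes the core loop precisely: estimate one component on the partial residuals while the others are held fixed, and cycle until convergence. As an illustration only (not code from the paper), here is a minimal Python sketch of backfitting for an additive model; the function names and the crude running-mean smoother, standing in for the penalized least-squares smoothers the paper analyzes, are our own assumptions.

```python
import numpy as np

def backfit(X, y, smoother, n_iter=20, tol=1e-8):
    """Backfitting for an additive model y ~ alpha + sum_j f_j(x_j).

    At each step one component is re-estimated on the partial residuals
    while the other components are held fixed, cycling component by
    component until the fits stop changing.
    """
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((p, n))                 # f[j, i] = current fit f_j(x_ij)
    for _ in range(n_iter):
        change = 0.0
        for j in range(p):
            # partial residual: subtract every component except f_j
            r = y - alpha - f.sum(axis=0) + f[j]
            new_fj = smoother(X[:, j], r)
            new_fj -= new_fj.mean()      # centre for identifiability
            change = max(change, np.abs(new_fj - f[j]).max())
            f[j] = new_fj
        if change < tol:                 # converged over a full sweep
            break
    return alpha, f

def running_mean_smoother(x, r, k=5):
    """Crude nearest-neighbour running mean, for illustration only."""
    order = np.argsort(x)
    out = np.empty_like(r)
    for rank, i in enumerate(order):
        lo, hi = max(0, rank - k), min(len(x), rank + k + 1)
        out[i] = r[order[lo:hi]].mean()
    return out
```

On data such as y = x₁² + sin(3x₂) + noise, a few sweeps recover the two component functions up to centring; the paper's contribution is a proof, via alternating projections, that exactly this kind of loop converges when the smoothers are penalized least-squares operators.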
2

Mendes, Jérôme, Francisco Souza, Rui Araújo, and Saeid Rastegar. "Neo-fuzzy neuron learning using backfitting algorithm." Neural Computing and Applications 31, no. 8 (December 30, 2017): 3609–18. http://dx.doi.org/10.1007/s00521-017-3301-4.

3

Härdle, W., and P. Hall. "On the backfitting algorithm for additive regression models." Statistica Neerlandica 47, no. 1 (March 1993): 43–57. http://dx.doi.org/10.1111/j.1467-9574.1993.tb01405.x.

4

Jacobs, Robert A., Wenxin Jiang, and Martin A. Tanner. "Factorial Hidden Markov Models and the Generalized Backfitting Algorithm." Neural Computation 14, no. 10 (October 1, 2002): 2415–37. http://dx.doi.org/10.1162/08997660260293283.

Abstract:
Previous researchers developed new learning architectures for sequential data by extending conventional hidden Markov models through the use of distributed state representations. Although exact inference and parameter estimation in these architectures is computationally intractable, Ghahramani and Jordan (1997) showed that approximate inference and parameter estimation in one such architecture, factorial hidden Markov models (FHMMs), is feasible in certain circumstances. However, the learning algorithm proposed by these investigators, based on variational techniques, is difficult to understand and implement and is limited to the study of real-valued data sets. This chapter proposes an alternative method for approximate inference and parameter estimation in FHMMs based on the perspective that FHMMs are a generalization of a well-known class of statistical models known as generalized additive models (GAMs; Hastie & Tibshirani, 1990). Using existing statistical techniques for GAMs as a guide, we have developed the generalized backfitting algorithm. This algorithm computes customized error signals for each hidden Markov chain of an FHMM and then trains each chain one at a time using conventional techniques from the hidden Markov models literature. Relative to previous perspectives on FHMMs, we believe that the viewpoint taken here has a number of advantages. First, it places FHMMs on firm statistical foundations by relating them to a class of models that are well studied in the statistics community, yet it generalizes this class of models in an interesting way. Second, it leads to an understanding of how FHMMs can be applied to many different types of time-series data, including Bernoulli and multinomial data, not just data that are real valued. Finally, it leads to an effective learning procedure for FHMMs that is easier to understand and easier to implement than existing learning procedures. Simulation results suggest that FHMMs trained with the generalized backfitting algorithm are a practical and powerful tool for analyzing sequential data.
5

Abel, Markus. "Nonparametric Modeling and Spatiotemporal Dynamical Systems." International Journal of Bifurcation and Chaos 14, no. 06 (June 2004): 2027–39. http://dx.doi.org/10.1142/s0218127404010382.

Abstract:
This article describes how to use statistical data analysis to obtain models directly from data. The focus is put on finding nonlinearities within a generalized additive model. These models are found by means of backfitting or more general algorithms, like the alternating conditional expectation value one. The method is illustrated by numerically generated data. As an application, the example of vortex ripple dynamics, a highly complex fluid-granular system, is treated.
6

Yang, Ting, and Zhiqiang Tan. "Backfitting algorithms for total-variation and empirical-norm penalized additive modelling with high-dimensional data." Stat 7, no. 1 (2018): e198. http://dx.doi.org/10.1002/sta4.198.

7

Ting, Jo-Anne, Aaron D'Souza, Sethu Vijayakumar, and Stefan Schaal. "Efficient Learning and Feature Selection in High-Dimensional Regression." Neural Computation 22, no. 4 (April 2010): 831–86. http://dx.doi.org/10.1162/neco.2009.02-08-702.

Abstract:
We present a novel algorithm for efficient learning and feature selection in high-dimensional regression problems. We arrive at this model through a modification of the standard regression model, enabling us to derive a probabilistic version of the well-known statistical regression technique of backfitting. Using the expectation-maximization algorithm, along with variational approximation methods to overcome intractability, we extend our algorithm to include automatic relevance detection of the input features. This variational Bayesian least squares (VBLS) approach retains its simplicity as a linear model, but offers a novel statistically robust black-box approach to generalized linear regression with high-dimensional inputs. It can be easily extended to nonlinear regression and classification problems. In particular, we derive the framework of sparse Bayesian learning, the relevance vector machine, with VBLS at its core, offering significant computational and robustness advantages for this class of methods. The iterative nature of VBLS makes it most suitable for real-time incremental learning, which is crucial especially in the application domain of robotics, brain-machine interfaces, and neural prosthetics, where real-time learning of models for control is needed. We evaluate our algorithm on synthetic and neurophysiological data sets, as well as on standard regression and classification benchmark data sets, comparing it with other competitive statistical approaches and demonstrating its suitability as a drop-in replacement for other generalized linear regression techniques.
8

Skhosana, Sphiwe B., Salomon M. Millard, and Frans H. J. Kanfer. "A Novel EM-Type Algorithm to Estimate Semi-Parametric Mixtures of Partially Linear Models." Mathematics 11, no. 5 (February 22, 2023): 1087. http://dx.doi.org/10.3390/math11051087.

Abstract:
Semi- and non-parametric mixture of normal regression models are a flexible class of mixture of regression models. These models assume that the component mixing proportions, regression functions and/or variances are non-parametric functions of the covariates. Among this class of models, the semi-parametric mixture of partially linear models (SPMPLMs) combine the desirable interpretability of a parametric model and the flexibility of a non-parametric model. However, local-likelihood estimation of the non-parametric term poses a computational challenge. Traditional EM optimisation of the local-likelihood functions is not appropriate due to the label-switching problem. Separately applying the EM algorithm on each local-likelihood function will likely result in non-smooth function estimates. This is because the local responsibilities calculated at the E-step of each local EM are not guaranteed to be aligned. To prevent this, the EM algorithm must be modified so that the same (global) responsibilities are used at each local M-step. In this paper, we propose a one-step backfitting EM-type algorithm to estimate the SPMPLMs and effectively address the label-switching problem. The proposed algorithm estimates the non-parametric term using each set of local responsibilities in turn and then incorporates a smoothing step to obtain the smoothest estimate. In addition, to reduce the computational burden imposed by the use of the partial-residuals estimator of the parametric term, we propose a plug-in estimator. The performance and practical usefulness of the proposed methods was tested using a simulated dataset and two real datasets, respectively. Our finite sample analysis revealed that the proposed methods are effective at solving the label-switching problem and producing reasonable and interpretable results in a reasonable amount of time.
9

Ghosh, Anil Kumar, and Smarajit Bose. "Feature Extraction for Classification Using Statistical Networks." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1103–26. http://dx.doi.org/10.1142/s0218001407005855.

Abstract:
In a classification problem, quite often the dimension of the measurement vector is large. Some of these measurements may not be important for separating the classes. Removal of these measurement variables not only reduces the computational cost but also leads to better understanding of class separability. There are some methods in the existing literature for reducing the dimensionality of a classification problem without losing much of the separability information. However, these dimension reduction procedures usually work well for linear classifiers. In the case where competing classes are not linearly separable, one has to look for ideal "features" which could be some transformations of one or more measurements. In this paper, we make an attempt to tackle both, the problems of dimension reduction and feature extraction, by considering a projection pursuit regression model. The single hidden layer perceptron model and some other popular models can be viewed as special cases of this model. An iterative algorithm based on backfitting is proposed to select the features dynamically, and cross-validation method is used to select the ideal number of features. We carry out an extensive simulation study to show the effectiveness of this fully automatic method.
10

Łabęda-Grudziak, Zofia M. "The Disturbance Detection in the Outlet Temperature of a Coal Dust–Air Mixture on the Basis of the Statistical Model." Energies 15, no. 19 (October 4, 2022): 7302. http://dx.doi.org/10.3390/en15197302.

Abstract:
The reliability of a coal mill's operation is strongly connected with optimizing the combustion process. Monitoring the temperature of a dust–air mixture significantly increases the coal mill's operational efficiency and safety. Reliable and accurate information about disturbances can help with optimization actions. The article describes the application of an additive regression model and data mining techniques for the identification of the temperature model of a dust–air mixture at the outlet of a coal mill. This is a new approach to the problem of power unit modeling, which extends the possibilities of multivariate and nonlinear estimation by using the backfitting algorithm with flexible nonparametric smoothing techniques. The designed model was used to construct a disturbance detection system in the position of hot and cold air dampers. In order to achieve the robust properties of the detection systems, statistical measures of the differences between the real and modeled temperature signal of dust–air mixtures were used. The research has been conducted on the basis of the real measuring data registered in the Polish power unit with a capacity of 200 MW. The obtained high-quality model identification confirms the correctness of the presented method. The model is characterized by high sensitivity to any disturbances in the cold and hot air damper position. The results show that the suggested method improves the usability of the statistical modeling, which creates good prospects for future applications of additive models in the issues of diagnosing faults and cyber-attacks in power systems.
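The detection step this abstract describes, comparing measured and modelled signals and flagging statistically unusual differences, can be illustrated with a simple residual rule. The sketch below is hypothetical and far simpler than the article's statistical measures: it assumes a fault-free reference window (the `window` parameter and the function name are our invention) and flags residuals outside a k-sigma band.

```python
import numpy as np

def detect_disturbances(residuals, k=4.0, window=100):
    """Flag samples whose model residual leaves a k-sigma band.

    The band is estimated on an initial window assumed to be fault-free;
    in the article's setting, the residuals would be the differences
    between the measured and model-predicted outlet temperature.
    """
    residuals = np.asarray(residuals, dtype=float)
    ref = residuals[:window]              # assumed fault-free reference
    mu, sigma = ref.mean(), ref.std()
    return np.abs(residuals - mu) > k * sigma
```

A persistent run of flags would indicate a disturbance such as a stuck air damper; replacing the mean and standard deviation with robust estimates (median, MAD) is a common refinement when the reference window may itself contain outliers.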

Dissertations / Theses on the topic "Backfitting algorithm"

1

Jégou, Nicolas. "Régression isotonique itérée." PhD thesis, Université Rennes 2, 2012. http://tel.archives-ouvertes.fr/tel-00776627.

Abstract:
This work falls within the framework of univariate non-parametric regression. Assuming the regression function to be of bounded variation, and starting from the result that any such function decomposes into the sum of an increasing function and a decreasing function, we propose to construct and study a new estimator combining the estimation techniques of additive models with those of estimation under monotonicity constraints. More precisely, our method consists of iterating isotonic regression according to the backfitting algorithm. At each iteration we thus obtain an estimator of the regression function given by the sum of an increasing part and a decreasing part.

The first chapter surveys the literature on the tools just mentioned. The next chapter is devoted to the theoretical study of iterated isotonic regression. First, we show that, for a fixed sample size, increasing the number of iterations leads to interpolation of the data. We identify the limits of the individual terms of the sum by showing that our algorithm is equivalent to iterating isotonic regression according to an iterated-bias-reduction scheme. Finally, we establish the consistency of the estimator.

The third chapter is devoted to the practical study of the estimator. Since increasing the number of iterations leads to overfitting, it is not desirable to iterate the method until convergence. We examine stopping rules based on adaptations of criteria commonly used with linear smoothing methods (AIC, BIC, ...), as well as criteria assuming prior knowledge of the number of modes of the regression function. The method shows interesting behaviour when the regression function has breakpoints. We then apply the algorithm to real CGH-microarray data, where breakpoint detection is of crucial interest. Finally, an application to the estimation of unimodal functions and to mode detection is proposed.
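The estimator this thesis studies can be illustrated with the classical pool-adjacent-violators algorithm (PAVA). The sketch below is our own reading of the abstract, not code from the thesis: `pava` and `iterated_isotonic` are hypothetical names, and the responses are assumed already ordered by the covariate. Each pass fits the increasing part by isotonic regression on the residual of the decreasing part and vice versa; as the abstract notes, iterating to convergence interpolates the data, so the number of passes acts as the smoothing parameter.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    vals, counts = [], []
    for v in np.asarray(y, dtype=float):
        vals.append(v)
        counts.append(1)
        # merge adjacent blocks while they violate monotonicity
        while len(vals) > 1 and vals[-2] > vals[-1]:
            total = counts[-2] + counts[-1]
            vals[-2] = (vals[-2] * counts[-2] + vals[-1] * counts[-1]) / total
            counts[-2] = total
            del vals[-1], counts[-1]
    return np.repeat(vals, counts)

def iterated_isotonic(y, n_passes=3):
    """Backfit y ~ u + d with u nondecreasing and d nonincreasing."""
    u = np.zeros(len(y))
    d = np.zeros(len(y))
    for _ in range(n_passes):
        u = pava(y - d)             # isotonic (increasing) step
        d = -pava(-(y - u))         # antitonic step: negate, fit, negate
    return u, d
```

On a unimodal signal such as sin(πx), u captures the rise and d the fall; more passes drive the fit toward interpolation, which is why the thesis investigates stopping rules (AIC/BIC-type criteria) rather than iterating to convergence.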
2

Vannucci, Giulia. "Interpretable semilinear regression trees." Doctoral thesis, 2019. http://hdl.handle.net/2158/1150170.

Abstract:
Tree-based methods refer to a class of predictive models largely employed in many scientific areas. Regression trees partition the variable space into a set of hyper-rectangles and fit a model within each of them. They are conceptually simple, apparently easy to interpret, and capable of dealing with non-linearities and interactions. Random forests are an ensemble of regression trees constructed on subsamples of statistical units and on randomly selected subsets of explanatory variables; the prediction is a combination of such trees. Despite the loss in interpretability, random forests have achieved great success thanks to their high predictive performance. The aim of this thesis is to propose a class of models combining a linear component and a tree, able to discover the relevant variables directly influencing a response. The proposal is a semilinear model that can handle linear and non-linear dependencies and maintains good predictive performance, while ensuring a simple and intuitive interpretation in a generative-model sense. Moreover, two estimation algorithms are proposed: a two-stage procedure based on a backfitting algorithm, and one based on evolutionary algorithms.

Conference papers on the topic "Backfitting algorithm"

1

Wang, Lanruo, Xianjue Luo, and Wei Zhang. "Unsupervised energy disaggregation with factorial hidden Markov models based on generalized backfitting algorithm." In TENCON 2013 - 2013 IEEE Region 10 Conference. IEEE, 2013. http://dx.doi.org/10.1109/tencon.2013.6718469.

