Scientific literature on the topic "Backfitting algorithm"


Journal articles on the topic "Backfitting algorithm"

1

Ansley, Craig F., and Robert Kohn. "Convergence of the backfitting algorithm for additive models". Journal of the Australian Mathematical Society. Series A. Pure Mathematics and Statistics 57, no. 3 (December 1994): 316–29. http://dx.doi.org/10.1017/s1446788700037721.

Abstract:
The backfitting algorithm is an iterative procedure for fitting additive models in which, at each step, one component is estimated keeping the other components fixed, the algorithm proceeding component by component and iterating until convergence. Convergence of the algorithm has been studied by Buja, Hastie, and Tibshirani (1989). We give a simple, but more general, geometric proof of the convergence of the backfitting algorithm when the additive components are estimated by penalized least squares. Our treatment covers spline smoothers and structural time series models, and we give a full discussion of the degenerate case. Our proof is based on Halperin's (1962) generalization of von Neumann's alternating projection theorem.
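Since this abstract spells out the procedure — estimate one component while holding the others fixed, cycle through the components, iterate until convergence — a minimal sketch may help fix ideas. The code below is an illustration only, not the authors' implementation; `bin_smoother` is a deliberately crude stand-in for the penalized least-squares and spline smoothers the paper actually treats:

```python
import numpy as np

def backfit(X, y, smoother, n_iter=20, tol=1e-6):
    """Backfitting for an additive model y ~ alpha + sum_j f_j(x_j).

    Each pass re-estimates one component f_j by smoothing the partial
    residuals against x_j while the other components are held fixed,
    and the cycle repeats until the component fits stop changing.
    """
    n, p = X.shape
    alpha = y.mean()                      # intercept
    f = np.zeros((p, n))                  # current component fits
    for _ in range(n_iter):
        f_old = f.copy()
        for j in range(p):
            # partial residuals: subtract every component except the j-th
            r = y - alpha - f.sum(axis=0) + f[j]
            f[j] = smoother(X[:, j], r)
            f[j] -= f[j].mean()           # center each fit for identifiability
        if np.max(np.abs(f - f_old)) < tol:
            break
    return alpha, f

def bin_smoother(x, r, bins=10):
    """Crude running-mean smoother: average r within bins of sorted x."""
    order = np.argsort(x)
    out = np.empty_like(r)
    for idx in np.array_split(order, bins):
        out[idx] = r[idx].mean()
    return out
```

Any linear smoother (e.g. a smoothing spline per coordinate) can be dropped in for `bin_smoother`; the convergence results discussed in the paper concern exactly such penalized least-squares smoothers.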
2

Mendes, Jérôme, Francisco Souza, Rui Araújo, and Saeid Rastegar. "Neo-fuzzy neuron learning using backfitting algorithm". Neural Computing and Applications 31, no. 8 (30 December 2017): 3609–18. http://dx.doi.org/10.1007/s00521-017-3301-4.

3

Härdle, W., and P. Hall. "On the backfitting algorithm for additive regression models". Statistica Neerlandica 47, no. 1 (March 1993): 43–57. http://dx.doi.org/10.1111/j.1467-9574.1993.tb01405.x.

4

Jacobs, Robert A., Wenxin Jiang, and Martin A. Tanner. "Factorial Hidden Markov Models and the Generalized Backfitting Algorithm". Neural Computation 14, no. 10 (1 October 2002): 2415–37. http://dx.doi.org/10.1162/08997660260293283.

Abstract:
Previous researchers developed new learning architectures for sequential data by extending conventional hidden Markov models through the use of distributed state representations. Although exact inference and parameter estimation in these architectures is computationally intractable, Ghahramani and Jordan (1997) showed that approximate inference and parameter estimation in one such architecture, factorial hidden Markov models (FHMMs), is feasible in certain circumstances. However, the learning algorithm proposed by these investigators, based on variational techniques, is difficult to understand and implement and is limited to the study of real-valued data sets. This chapter proposes an alternative method for approximate inference and parameter estimation in FHMMs based on the perspective that FHMMs are a generalization of a well-known class of statistical models known as generalized additive models (GAMs; Hastie & Tibshirani, 1990). Using existing statistical techniques for GAMs as a guide, we have developed the generalized backfitting algorithm. This algorithm computes customized error signals for each hidden Markov chain of an FHMM and then trains each chain one at a time using conventional techniques from the hidden Markov models literature. Relative to previous perspectives on FHMMs, we believe that the viewpoint taken here has a number of advantages. First, it places FHMMs on firm statistical foundations by relating them to a class of models that are well studied in the statistics community, yet it generalizes this class of models in an interesting way. Second, it leads to an understanding of how FHMMs can be applied to many different types of time-series data, including Bernoulli and multinomial data, not just data that are real valued. Finally, it leads to an effective learning procedure for FHMMs that is easier to understand and easier to implement than existing learning procedures. Simulation results suggest that FHMMs trained with the generalized backfitting algorithm are a practical and powerful tool for analyzing sequential data.
5

ABEL, MARKUS. "NONPARAMETRIC MODELING AND SPATIOTEMPORAL DYNAMICAL SYSTEMS". International Journal of Bifurcation and Chaos 14, no. 06 (June 2004): 2027–39. http://dx.doi.org/10.1142/s0218127404010382.

Abstract:
This article describes how to use statistical data analysis to obtain models directly from data. The focus is on finding nonlinearities within a generalized additive model. Such models are fitted by means of backfitting or by more general algorithms, such as alternating conditional expectations. The method is illustrated on numerically generated data. As an application, the example of vortex ripple dynamics, a highly complex fluid-granular system, is treated.
6

Yang, Ting, and Zhiqiang Tan. "Backfitting algorithms for total-variation and empirical-norm penalized additive modelling with high-dimensional data". Stat 7, no. 1 (2018): e198. http://dx.doi.org/10.1002/sta4.198.

7

Ting, Jo-Anne, Aaron D'Souza, Sethu Vijayakumar, and Stefan Schaal. "Efficient Learning and Feature Selection in High-Dimensional Regression". Neural Computation 22, no. 4 (April 2010): 831–86. http://dx.doi.org/10.1162/neco.2009.02-08-702.

Abstract:
We present a novel algorithm for efficient learning and feature selection in high-dimensional regression problems. We arrive at this model through a modification of the standard regression model, enabling us to derive a probabilistic version of the well-known statistical regression technique of backfitting. Using the expectation-maximization algorithm, along with variational approximation methods to overcome intractability, we extend our algorithm to include automatic relevance detection of the input features. This variational Bayesian least squares (VBLS) approach retains its simplicity as a linear model, but offers a novel statistically robust black-box approach to generalized linear regression with high-dimensional inputs. It can be easily extended to nonlinear regression and classification problems. In particular, we derive the framework of sparse Bayesian learning, the relevance vector machine, with VBLS at its core, offering significant computational and robustness advantages for this class of methods. The iterative nature of VBLS makes it most suitable for real-time incremental learning, which is crucial especially in the application domain of robotics, brain-machine interfaces, and neural prosthetics, where real-time learning of models for control is needed. We evaluate our algorithm on synthetic and neurophysiological data sets, as well as on standard regression and classification benchmark data sets, comparing it with other competitive statistical approaches and demonstrating its suitability as a drop-in replacement for other generalized linear regression techniques.
8

Skhosana, Sphiwe B., Salomon M. Millard, and Frans H. J. Kanfer. "A Novel EM-Type Algorithm to Estimate Semi-Parametric Mixtures of Partially Linear Models". Mathematics 11, no. 5 (22 February 2023): 1087. http://dx.doi.org/10.3390/math11051087.

Abstract:
Semi- and non-parametric mixtures of normal regression models are a flexible class of mixture of regression models. These models assume that the component mixing proportions, regression functions and/or variances are non-parametric functions of the covariates. Among this class of models, the semi-parametric mixture of partially linear models (SPMPLMs) combine the desirable interpretability of a parametric model and the flexibility of a non-parametric model. However, local-likelihood estimation of the non-parametric term poses a computational challenge. Traditional EM optimisation of the local-likelihood functions is not appropriate due to the label-switching problem. Separately applying the EM algorithm on each local-likelihood function will likely result in non-smooth function estimates. This is because the local responsibilities calculated at the E-step of each local EM are not guaranteed to be aligned. To prevent this, the EM algorithm must be modified so that the same (global) responsibilities are used at each local M-step. In this paper, we propose a one-step backfitting EM-type algorithm to estimate the SPMPLMs and effectively address the label-switching problem. The proposed algorithm estimates the non-parametric term using each set of local responsibilities in turn and then incorporates a smoothing step to obtain the smoothest estimate. In addition, to reduce the computational burden imposed by the use of the partial-residuals estimator of the parametric term, we propose a plug-in estimator. The performance and practical usefulness of the proposed methods were tested using a simulated dataset and two real datasets, respectively. Our finite sample analysis revealed that the proposed methods are effective at solving the label-switching problem and producing reasonable and interpretable results in a reasonable amount of time.
9

GHOSH, ANIL KUMAR, and SMARAJIT BOSE. "FEATURE EXTRACTION FOR CLASSIFICATION USING STATISTICAL NETWORKS". International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1103–26. http://dx.doi.org/10.1142/s0218001407005855.

Abstract:
In a classification problem, quite often the dimension of the measurement vector is large. Some of these measurements may not be important for separating the classes. Removal of these measurement variables not only reduces the computational cost but also leads to better understanding of class separability. There are some methods in the existing literature for reducing the dimensionality of a classification problem without losing much of the separability information. However, these dimension reduction procedures usually work well for linear classifiers. In the case where competing classes are not linearly separable, one has to look for ideal "features" which could be some transformations of one or more measurements. In this paper, we make an attempt to tackle both problems, dimension reduction and feature extraction, by considering a projection pursuit regression model. The single hidden layer perceptron model and some other popular models can be viewed as special cases of this model. An iterative algorithm based on backfitting is proposed to select the features dynamically, and cross-validation is used to select the ideal number of features. We carry out an extensive simulation study to show the effectiveness of this fully automatic method.
10

Łabęda-Grudziak, Zofia M. "The Disturbance Detection in the Outlet Temperature of a Coal Dust–Air Mixture on the Basis of the Statistical Model". Energies 15, no. 19 (4 October 2022): 7302. http://dx.doi.org/10.3390/en15197302.

Abstract:
The reliability of a coal mill's operation is strongly connected with optimizing the combustion process. Monitoring the temperature of a dust–air mixture significantly increases the coal mill's operational efficiency and safety. Reliable and accurate information about disturbances can help with optimization actions. The article describes the application of an additive regression model and data mining techniques for the identification of the temperature model of a dust–air mixture at the outlet of a coal mill. This is a new approach to the problem of power unit modeling, which extends the possibilities of multivariate and nonlinear estimation by using the backfitting algorithm with flexible nonparametric smoothing techniques. The designed model was used to construct a disturbance detection system in the position of hot and cold air dampers. In order to achieve the robust properties of the detection systems, statistical measures of the differences between the real and modeled temperature signal of dust–air mixtures were used. The research was conducted on the basis of real measurement data recorded in a Polish power unit with a capacity of 200 MW. The high quality of the identified model confirms the correctness of the presented method. The model is characterized by high sensitivity to any disturbances in the cold and hot air damper position. The results show that the suggested method improves the usability of statistical modeling, which creates good prospects for future applications of additive models in the issues of diagnosing faults and cyber-attacks in power systems.

Theses on the topic "Backfitting algorithm"

1

Jégou, Nicolas. "Régression isotonique itérée". PhD thesis, Université Rennes 2, 2012. http://tel.archives-ouvertes.fr/tel-00776627.

Abstract:
This work is set in the framework of univariate non-parametric regression. Assuming that the regression function has bounded variation, and starting from the result that such a function decomposes into the sum of an increasing function and a decreasing function, we propose to construct and study a new estimator combining the estimation techniques of additive models with those of estimation under monotonicity constraints. More precisely, our method consists in iterating isotonic regression according to the backfitting algorithm. At each iteration we thus have an estimator of the regression function given by the sum of an increasing part and a decreasing part. The first chapter surveys the literature on the tools just mentioned. The next chapter is devoted to the theoretical study of iterated isotonic regression. First, we show that, for a fixed sample size, increasing the number of iterations leads to interpolation of the data. We identify the limits of the individual terms of the sum by showing that our algorithm is equivalent to iterating isotonic regression according to an iterated bias-reduction scheme. Finally, we establish the consistency of the estimator. The third chapter is devoted to the practical study of the estimator. Since increasing the number of iterations leads to overfitting, it is not desirable to iterate the method until convergence. We examine stopping rules based on adaptations of criteria usually employed for linear smoothing methods (AIC, BIC, ...), as well as criteria assuming prior knowledge of the number of modes of the regression function. The method turns out to behave interestingly when the regression function has breakpoints. We then apply the algorithm to real CGH-microarray data, where breakpoint detection is of crucial interest. Finally, an application to the estimation of unimodal functions and to mode detection is proposed.
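The construction described in this abstract — alternately refitting an increasing and a decreasing component on partial residuals — can be sketched in a few lines. This is an illustrative reconstruction, not the thesis code; `pava` is a standard pool-adjacent-violators routine, and the iteration is deliberately stopped early since, as the abstract notes, iterating to convergence interpolates the data:

```python
import numpy as np

def pava(y, increasing=True):
    """Pool-adjacent-violators: least-squares monotone fit to the sequence y."""
    y = np.asarray(y, dtype=float)
    if not increasing:
        # a decreasing fit is an increasing fit of the reversed sequence
        return pava(y[::-1], increasing=True)[::-1]
    means, counts = [], []
    for v in y:
        means.append(float(v))
        counts.append(1)
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(means) > 1 and means[-2] > means[-1]:
            total = means[-2] * counts[-2] + means[-1] * counts[-1]
            counts[-2] += counts[-1]
            means[-2] = total / counts[-2]
            means.pop()
            counts.pop()
    return np.repeat(means, counts)

def iterated_isotonic(y, n_iter=3):
    """Backfit an increasing part u and a decreasing part d so that u + d fits y.

    Only a few iterations are performed: iterating to convergence would
    interpolate the data, so a stopping rule is needed in practice.
    """
    u = np.zeros(len(y))
    d = np.zeros(len(y))
    for _ in range(n_iter):
        u = pava(y - d, increasing=True)    # refit increasing part on residuals
        d = pava(y - u, increasing=False)   # refit decreasing part on residuals
    return u, d
```

The choice of `n_iter` plays the role of the stopping rules (AIC/BIC-type criteria, or prior knowledge of the number of modes) studied in the thesis.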
2

Vannucci, Giulia. "Interpretable semilinear regression trees". Doctoral thesis, 2019. http://hdl.handle.net/2158/1150170.

Abstract:
Tree-based methods refer to a class of predictive models widely employed in many scientific areas. Regression trees partition the variable space into a set of hyper-rectangles and fit a model within each of them. They are conceptually simple, apparently easy to interpret, and capable of dealing with nonlinearities and interactions. Random forests are an ensemble of regression trees constructed on subsamples of statistical units and on randomly selected subsets of explanatory variables. The prediction is a combination of trees of this kind. Despite the loss in interpretability, thanks to their high predictive performance, random forests have achieved great success. The aim of this thesis is to propose a class of models combining a linear component and a tree, able to discover the relevant variables directly influencing a response. The proposal is a semilinear model that can handle linear and nonlinear dependencies and maintains good predictive performance, while ensuring a simple and intuitive interpretation in a generative-model sense. Moreover, two estimation algorithms are proposed: a two-stage procedure based on a backfitting algorithm, and one based on evolutionary algorithms.

Conference papers on the topic "Backfitting algorithm"

1

Wang, Lanruo, Xianjue Luo, and Wei Zhang. "Unsupervised energy disaggregation with factorial hidden Markov models based on generalized backfitting algorithm". In TENCON 2013 - 2013 IEEE Region 10 Conference. IEEE, 2013. http://dx.doi.org/10.1109/tencon.2013.6718469.

