Academic literature on the topic 'LASSO algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'LASSO algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "LASSO algorithm"

1

Kaneda, Yasuaki, and Yasuharu Irizuki. "Recursive Algorithm for LASSO." IEEJ Transactions on Electronics, Information and Systems 136, no. 7 (2016): 915–22. http://dx.doi.org/10.1541/ieejeiss.136.915.

2

Jain, Rahi, and Wei Xu. "HDSI: High dimensional selection with interactions algorithm on feature selection and testing." PLOS ONE 16, no. 2 (February 16, 2021): e0246159. http://dx.doi.org/10.1371/journal.pone.0246159.

Abstract:
Feature selection on high-dimensional data, along with interaction effects, is a critical challenge for classical statistical learning techniques. Existing feature selection algorithms such as random LASSO leverage the LASSO's capability to handle high-dimensional data. However, the technique has two main limitations, namely the inability to consider interaction terms and the lack of a statistical test for determining the significance of selected features. This study proposes the High Dimensional Selection with Interactions (HDSI) algorithm, a new feature selection method which can handle high-dimensional data, incorporate interaction terms, provide statistical inference for selected features, and leverage the capability of existing classical statistical techniques. The method allows the application of any statistical technique, such as LASSO or subset selection, on multiple bootstrapped samples, each containing randomly selected features. Each bootstrap sample incorporates interaction terms for the randomly sampled features. The selected features from each model are pooled and their statistical significance is determined. The statistically significant features are used as the final output of the approach, and their final coefficients are estimated using appropriate statistical techniques. The performance of HDSI is evaluated using both simulated data and real studies. In general, HDSI outperforms commonly used algorithms such as LASSO, subset selection, adaptive LASSO, random LASSO, and group LASSO.
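As a rough sketch of the bootstrap-and-pool idea described in this abstract (not the authors' reference implementation; the function name, feature-subset size, penalty, and majority-vote rule are illustrative assumptions, and the published HDSI additionally tests the statistical significance of pooled terms):

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso

def hdsi_style_selection(X, y, n_boot=50, n_feat=5, alpha=0.1, seed=0):
    """Pool lasso selections over bootstrap samples with interaction terms."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = {}
    for _ in range(n_boot):
        rows = rng.integers(0, n, size=n)                  # bootstrap rows
        cols = rng.choice(p, size=n_feat, replace=False)   # random feature subset
        Xb = X[np.ix_(rows, cols)]
        pairs = list(combinations(range(n_feat), 2))
        inter = [Xb[:, i] * Xb[:, j] for i, j in pairs]    # pairwise interactions
        names = [f"x{c}" for c in cols] + [f"x{cols[i]}*x{cols[j]}" for i, j in pairs]
        Zb = np.column_stack([Xb] + inter)
        coef = Lasso(alpha=alpha).fit(Zb, y[rows]).coef_
        for name, c in zip(names, coef):
            if abs(c) > 1e-8:                              # term selected this round
                counts[name] = counts.get(name, 0) + 1
    # keep terms selected in a majority of bootstraps (illustrative rule)
    return sorted(t for t, k in counts.items() if 2 * k >= n_boot)
```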
3

Yau, Chun Yip, and Tsz Shing Hui. "LARS-type algorithm for group lasso." Statistics and Computing 27, no. 4 (May 23, 2016): 1041–48. http://dx.doi.org/10.1007/s11222-016-9669-7.

4

Alghamdi, Maryam A., Mohammad Ali Alghamdi, Naseer Shahzad, and Hong-Kun Xu. "Properties and Iterative Methods for the Q-Lasso." Abstract and Applied Analysis 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/250943.

Abstract:
We introduce the Q-lasso, which generalizes the well-known lasso of Tibshirani (1996), with Q a closed convex subset of a Euclidean m-space for some integer m ≥ 1. This set Q can be interpreted as the set of errors within a given tolerance level when linear measurements are taken to recover a signal/image via the lasso. Solutions of the Q-lasso depend on a tuning parameter γ. In this paper, we obtain basic properties of the solutions as a function of γ. Because of ill-posedness, we also apply l1-l2 regularization to the Q-lasso. In addition, we discuss iterative methods for solving the Q-lasso, which include the proximal-gradient algorithm and the projection-gradient algorithm.
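For the ordinary lasso (the special case where Q reduces to a point), the proximal-gradient iteration mentioned in this abstract becomes iterative soft thresholding (ISTA); a minimal sketch, with the step size taken from the spectral norm and the iteration count chosen for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, b, gamma, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + gamma*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)         # gradient of the smooth part
        x = soft_threshold(x - grad / L, gamma / L)
    return x
```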
5

Wang, Jin-Jia, and Yang Lu. "Coordinate Descent Based Hierarchical Interactive Lasso Penalized Logistic Regression and Its Application to Classification Problems." Mathematical Problems in Engineering 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/430201.

Abstract:
We present a hierarchical interactive lasso penalized logistic regression using the coordinate descent algorithm, based on hierarchy theory and variable interactions. We define the interaction model based on geometric algebra and hierarchical constraint conditions, and then use the coordinate descent algorithm to solve for the coefficients of the hierarchical interactive lasso model. We provide the results of experiments based on UCI datasets, the Madelon dataset from NIPS2003, and daily activities of the elderly. The experimental results show that variable interactions and hierarchy contribute significantly to classification. The hierarchical interactive lasso combines the advantages of the lasso and the interactive lasso.
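The basic cyclic coordinate descent update that such methods build on can be sketched for the plain lasso objective (without the paper's hierarchical constraints; penalty and sweep count are illustrative):

```python
import numpy as np

def cd_lasso(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent for min_b 0.5/n*||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ b                        # running residual
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]          # remove coordinate j from the fit
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]          # put the updated coordinate back
    return b
```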
6

Liu, Yashu, Jie Wang, and Jieping Ye. "An Efficient Algorithm For Weak Hierarchical Lasso." ACM Transactions on Knowledge Discovery from Data 10, no. 3 (February 24, 2016): 1–24. http://dx.doi.org/10.1145/2791295.

7

Kim, Jinseog, Yuwon Kim, and Yongdai Kim. "A Gradient-Based Optimization Algorithm for LASSO." Journal of Computational and Graphical Statistics 17, no. 4 (December 2008): 994–1009. http://dx.doi.org/10.1198/106186008x386210.

8

Wang, Hao. "Coordinate descent algorithm for covariance graphical lasso." Statistics and Computing 24, no. 4 (February 23, 2013): 521–29. http://dx.doi.org/10.1007/s11222-013-9385-5.

9

Li, Yahui, Yang Li, and Yuanyuan Sun. "Online Static Security Assessment of Power Systems Based on Lasso Algorithm." Applied Sciences 8, no. 9 (August 23, 2018): 1442. http://dx.doi.org/10.3390/app8091442.

Abstract:
As one important means of ensuring secure operation of a power system, contingency selection and ranking methods need to be rapid and accurate. A novel method based on the least absolute shrinkage and selection operator (Lasso) algorithm is proposed in this paper for online static security assessment (OSSA). The assessment is based on a security index, which is applied to select and screen contingencies. First, the multi-step adaptive Lasso (MSA-Lasso) regression algorithm, whose predictive performance has an advantage, is introduced. Then, an OSSA module is proposed to evaluate and select contingencies under different load conditions. In addition, the Lasso algorithm is employed to predict the security index of each power system operating state, taking bus voltages and power flows into consideration, according to Newton–Raphson load flow (NRLF) analysis in post-contingency states. Finally, numerical results from applying the proposed approach to the IEEE 14-bus, 118-bus, and 300-bus test systems demonstrate the accuracy and rapidity of OSSA.
10

Keerthi, S. S., and S. Shevade. "A Fast Tracking Algorithm for Generalized LARS/LASSO." IEEE Transactions on Neural Networks 18, no. 6 (November 2007): 1826–30. http://dx.doi.org/10.1109/tnn.2007.900229.


Dissertations / Theses on the topic "LASSO algorithm"

1

Zhang, Han. "Detecting Rare Haplotype-Environmental Interaction and Nonlinear Effects of Rare Haplotypes using Bayesian LASSO on Quantitative Traits." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu149969433115895.

2

Asif, Muhammad Salman. "Primal dual pursuit: a homotopy based algorithm for the Dantzig selector." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24693.

Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Romberg, Justin; Committee Member: McClellan, James; Committee Member: Mersereau, Russell
3

Soret, Perrine. "Régression pénalisée de type Lasso pour l’analyse de données biologiques de grande dimension : application à la charge virale du VIH censurée par une limite de quantification et aux données compositionnelles du microbiote." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0254.

Abstract:
In clinical studies, and thanks to technological progress, the amount of information collected for the same patient continues to grow, leading to situations where the number of explanatory variables exceeds the number of individuals. The Lasso method has proved appropriate for circumventing overfitting problems in high-dimensional settings. This thesis is devoted to the application and development of Lasso-penalized regression for clinical data presenting particular structures. First, in patients with the human immunodeficiency virus, mutations in the virus's genes may be related to the development of resistance to a given treatment. Predicting the viral load from the (potentially numerous) mutations helps guide treatment choice. Below a threshold, the viral load is undetectable; such data are left-censored. We propose two new Lasso approaches based on the Buckley-James algorithm, which imputes censored values by a conditional expectation. By reversing the response, we obtain a right-censoring problem, for which non-parametric estimates of the conditional expectation have been proposed in survival analysis. Finally, we propose a parametric estimation based on a Gaussian hypothesis. Second, we are interested in the role of the microbiota in the deterioration of respiratory health. Microbiota data take the form of relative abundances (the proportion of each species per individual, known as compositional data) and have a phylogenetic structure. We survey the state of the art of statistical methods for analyzing microbiota data. Because of their novelty, few recommendations exist on the applicability and effectiveness of the proposed methods. A simulation study allowed us to compare the selection capability of penalization methods proposed specifically for this type of data. We then apply this research to the analysis of the association between bacteria/fungi and the decline of pulmonary function in patients with cystic fibrosis from the MucoFong project.
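A schematic version of the Buckley-James-style imputation under the Gaussian working hypothesis described above (a sketch only: the function name, penalty, and iteration count are assumptions, and scikit-learn's Lasso stands in for the thesis's estimators):

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

def censored_lasso(X, y, censored, limit, alpha=0.1, n_iter=10):
    """Alternate lasso fits with conditional-expectation imputation of
    left-censored responses, under a Gaussian residual assumption.

    `censored` is a boolean mask marking observations only known to
    satisfy y < limit (e.g., a viral load below the detection limit).
    """
    y_work = y.astype(float).copy()
    y_work[censored] = limit             # crude start at the detection limit
    model = Lasso(alpha=alpha)
    for _ in range(n_iter):
        model.fit(X, y_work)
        mu = model.predict(X)
        sigma = np.std(y_work[~censored] - mu[~censored])
        a = (limit - mu[censored]) / sigma
        # E[y | y < limit] for a Gaussian: mu - sigma * phi(a) / Phi(a)
        y_work[censored] = mu[censored] - sigma * norm.pdf(a) / norm.cdf(a)
    return model
```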
4

Loth, Manuel. "Algorithmes d'Ensemble Actif pour le LASSO." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00845441.

Abstract:
This thesis addresses the computation of the LASSO (Least Absolute Shrinkage and Selection Operator), and associated problems, in the regression setting. This operator has drawn growing attention since its introduction by Robert Tibshirani in 1996, owing to its ability to produce or identify sparse linear models from noisy observations, sparsity meaning that only a few among many explanatory variables appear in the proposed model. This selection is produced by adding to the least-squares method a constraint or penalty on the sum of the absolute values of the linear coefficients, also called the l1 norm of the coefficient vector. After a review of the motivations, principles, and problems of regression, linear estimators, the least-squares method, model selection, and regularization, the two equivalent formulations of the LASSO, constrained and regularized, are presented; both define a non-trivial computational problem for associating an estimator with a set of observations and a selection parameter. A brief history of the algorithms solving this problem is given, and the two approaches for handling the non-differentiability of the l1 norm are presented, along with the equivalence of these problems with a quadratic program. The second part focuses on the practical side of algorithms for solving the LASSO. One of them, proposed by Michael Osborne in 2000, is reformulated. This reformulation consists of giving a general definition and explanation of the active-set method, which generalizes the simplex algorithm to convex programming, then specializing it progressively to LASSO programming, and addressing questions of optimizing the algebraic computations. Although it essentially describes the same algorithm as Michael Osborne's, the presentation given here aims to expose its mechanisms clearly, and uses different variables. Besides helping to better understand this evidently underestimated algorithm, the angle from which it is presented highlights the new fact that the same method applies naturally to the regularized formulation of the LASSO, and not only to the constrained formulation. The popular homotopy method (or LAR-LASSO, or LARS) is then presented as a derivation of the active-set method, leading to an alternative and somewhat simplified formulation of this algorithm that provides the LASSO solutions for every value of its parameter. It is shown that, contrary to the results of a recent study by Jerome H. Friedman, implementations of these algorithms following these reformulations are more efficient in terms of computation time than a coordinate descent method. The third part studies to what extent these three algorithms (active set, homotopy, and coordinate descent) can handle certain special cases, and can be applied to extensions of the LASSO or other similar problems. The special cases include degeneracies, such as the presence of linearly dependent variables, or the simultaneous selection/deselection of variables. This last issue, which was neglected in previous work, is explained more fully here and a simple and effective solution is provided. Another special case is LASSO selection from a very large, even infinite, number of variables, a case for which the active-set method presents a major advantage. One extension of the LASSO is its transposition to an online learning setting, where it is desirable or necessary to solve the problem on a set of observations that evolves over time. Again, the limited flexibility of the homotopy method disqualifies it in favor of the other two. Another extension is the use of the l1 penalty with cost functions other than the l2 norm of the residual, or in combination with other penalties, and it is recalled or established to what extent and in what way each algorithm can be transposed to these problems.
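The homotopy/LARS algorithm discussed in this thesis traces the full piecewise-linear lasso solution path as the selection parameter varies; scikit-learn's standard `lars_path` function (not the thesis code; the toy data are illustrative) shows the idea:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]              # sparse ground truth
y = X @ beta + 0.1 * rng.normal(size=100)

# alphas: breakpoints of the piecewise-linear path;
# coefs[:, k]: lasso coefficients at the k-th breakpoint
alphas, active, coefs = lars_path(X, y, method="lasso")
print("order of entry into the active set:", active[:5])
```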
5

Ounaissi, Daoud. "Méthodes quasi-Monte Carlo et Monte Carlo : application aux calculs des estimateurs Lasso et Lasso bayésien." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10043/document.

Abstract:
The thesis contains six chapters. The first chapter contains an introduction to linear regression and to the Lasso and Bayesian Lasso problems. Chapter 2 recalls convex optimization algorithms and presents the FISTA algorithm for computing the Lasso estimator. The convergence statistics of this algorithm are also given in this chapter using the entropy estimator and the Pitman-Yor estimator. Chapter 3 is devoted to the comparison of Monte Carlo and quasi-Monte Carlo methods in numerical computations of the Bayesian Lasso. This comparison shows that Hammersley points give the best results. Chapter 4 gives a geometric interpretation of the partition function of the Bayesian Lasso, expressed in terms of the incomplete Gamma function. This allowed us to give a convergence criterion for the Metropolis-Hastings algorithm. Chapter 5 presents the Bayesian estimator as the limiting law of a multivariate stochastic differential equation. This allowed us to compute the Bayesian Lasso using semi-implicit and explicit Euler numerical schemes together with Monte Carlo, multilevel Monte Carlo (MLMC), and the Metropolis-Hastings algorithm. A comparison of the computational costs shows that the pair (semi-implicit Euler scheme, MLMC) wins against the other (scheme, method) pairs. Finally, in Chapter 6 we establish the rate of convergence of the Bayesian Lasso to the Lasso when the signal-to-noise ratio is constant and the noise tends to 0. This allowed us to give new criteria for the convergence of the Metropolis-Hastings algorithm.
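FISTA, recalled in Chapter 2 of this thesis, accelerates the proximal-gradient iteration with a momentum term; a compact sketch for the lasso objective (step size from the spectral norm, iteration count illustrative):

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=300):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (accelerated ISTA)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of A^T(Ax - b)
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = z - A.T @ (A @ z - b) / L    # gradient step from extrapolated point
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```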
6

Denoyelle, Quentin. "Theoretical and Numerical Analysis of Super-Resolution Without Grid." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLED030/document.

Abstract:
This thesis studies the noisy sparse spikes super-resolution problem for positive measures using the BLASSO, an infinite-dimensional convex optimization problem generalizing the LASSO to measures. First, we show that the support stability of the BLASSO for N clustered spikes is governed by an object called the (2N-1)-vanishing-derivatives pre-certificate. When it is non-degenerate, solving the BLASSO leads to exact support recovery of the initial measure, in a low-noise regime whose size is controlled by the minimal separation distance of the spikes. In a second part, we propose the Sliding Frank-Wolfe algorithm, based on the Frank-Wolfe algorithm with an added step that moves the amplitudes and positions of the spikes continuously, to solve the BLASSO. We show that, under mild assumptions, it converges in a finite number of iterations. We apply this algorithm to a 3D fluorescence microscopy problem, comparing three models based on the PALM/STORM techniques.
7

Huynh, Bao Tuyen. "Estimation and feature selection in high-dimensional mixtures-of-experts models." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC237.

Abstract:
This thesis deals with the problem of modeling and estimating high-dimensional mixtures-of-experts (MoE) models, towards effective density estimation, prediction, and clustering of such heterogeneous and high-dimensional data. We propose new strategies based on regularized maximum-likelihood estimation (MLE) of MoE models to overcome the limitations of standard methods, including MLE with expectation-maximization (EM) algorithms, and to simultaneously perform feature selection so that sparse models are encouraged in such a high-dimensional setting. We first introduce a mixture-of-experts parameter estimation and variable selection methodology, based on l1 (lasso) regularization and the EM framework, for regression and clustering suited to high-dimensional contexts. Then, we extend the method to regularized mixtures of experts for discrete data, including classification. We develop efficient algorithms to maximize the proposed l1-penalized observed-data log-likelihood function. Our proposed strategies enjoy efficient monotone maximization of the optimized criterion and, unlike previous approaches, do not rely on approximations of the penalty functions, avoid matrix inversion, and exploit the efficiency of the coordinate ascent algorithm, particularly within the proximal Newton-based approach.
8

Singh, Kevin. "Comparing Variable Selection Algorithms On Logistic Regression – A Simulation." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446090.

Abstract:
When we try to understand why some schools perform worse than others, whether Covid-19 has struck some demographics harder, or whether income correlates with increased happiness, we may turn to regression to better understand how these variables are correlated. To capture the true relationship between variables, we may use variable selection methods to ensure that the variables which have an actual effect are included in the model. Choosing the right method for variable selection is vital: without it there is a risk of including variables which have little to do with the dependent variable, or of excluding variables that are important. Failing to capture the true effects would paint a picture disconnected from reality and give a false impression of what reality really looks like. To mitigate this risk, a simulation study has been conducted to find out which variable selection algorithms to apply in order to make more accurate inference. The algorithms tested are stepwise regression, backward elimination, and lasso regression. Lasso performed worst when applied to a small sample but best when applied to larger samples, while backward elimination and stepwise regression had very similar results.
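A minimal version of such a simulation might look as follows (sample sizes, effect strengths, and the support-recovery criterion are illustrative; scikit-learn's cross-validated l1-penalized logistic regression stands in for the thesis's lasso setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(1)
n, p, truth = 500, 20, {0, 1, 2}         # true active predictors
hits = 0
for _ in range(50):                      # 50 simulated datasets
    X = rng.normal(size=(n, p))
    logit = X[:, 0] + X[:, 1] - X[:, 2]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    fit = LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5).fit(X, y)
    selected = set(np.flatnonzero(np.abs(fit.coef_[0]) > 1e-8))
    hits += selected == truth            # count exact support recovery
print("exact recovery rate:", hits / 50)
```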
9

Fang, Zaili. "Some Advanced Model Selection Topics for Nonparametric/Semiparametric Models with High-Dimensional Data." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/40090.

Abstract:
Model and variable selection have attracted considerable attention in areas of application where datasets usually contain thousands of variables. Variable selection is a critical step to reduce the dimension of high-dimensional data by eliminating irrelevant variables. The general objective of variable selection is not only to obtain a set of cost-effective predictors but also to improve prediction and prediction variance. We have made several contributions to this issue through a range of advanced topics: providing a graphical view of Bayesian Variable Selection (BVS), recovering sparsity in multivariate nonparametric models, and proposing a testing procedure for evaluating nonlinear interaction effects in a semiparametric model. To address the first topic, we propose a new Bayesian variable selection approach via the graphical model and the Ising model, which we refer to as the "Bayesian Ising Graphical Model" (BIGM). There are several advantages of our BIGM: it is easy to (1) employ the single-site updating and cluster updating algorithms, both of which are suitable for problems with small sample sizes and a larger number of variables, (2) extend this approach to nonparametric regression models, and (3) incorporate graphical prior information. In the second topic, we propose a Nonnegative Garrote on a Kernel machine (NGK) to recover sparsity of input variables in smoothing functions. We model the smoothing function by a least squares kernel machine and construct a nonnegative garrote on the kernel model as a function of the similarity matrix. An efficient coordinate descent/backfitting algorithm is developed. The third topic involves a specific genetic pathway dataset in which the pathways interact with the environmental variables. We propose a semiparametric method to model the pathway-environment interaction. We then employ a restricted likelihood ratio test and a score test to evaluate the main pathway effect and the pathway-environment interaction.
Ph. D.
10

Sanchez, Merchante Luis Francisco. "Learning algorithms for sparse classification." Phd thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-00868847.

Abstract:
This thesis deals with the development of estimation algorithms with embedded feature selection in the context of high-dimensional data, in the supervised and unsupervised frameworks. The contributions of this work are materialized by two algorithms: GLOSS for the supervised domain and Mix-GLOSS for the unsupervised counterpart. Both algorithms are based on solving an optimal scoring regression regularized with a quadratic formulation of the group-Lasso penalty, which encourages the removal of uninformative features. The theoretical foundations proving that a group-Lasso penalized optimal scoring regression can be used to solve a linear discriminant analysis have first been developed in this work. The theory that adapts this technique to the unsupervised domain by means of the EM algorithm is not new, but it has never been clearly exposed for a sparsity-inducing penalty. This thesis solidly demonstrates that the use of group-Lasso penalized optimal scoring regression inside an EM algorithm is possible. Our algorithms have been tested on real and artificial high-dimensional databases with impressive results in terms of parsimony without compromising prediction performance.

Book chapters on the topic "LASSO algorithm"

1

Loth, Manuel, and Philippe Preux. "The Iso-regularization Descent Algorithm for the LASSO." In Neural Information Processing. Theory and Algorithms, 454–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17537-4_56.

2

Walrand, Jean. "Speech Recognition: B." In Probability in Electrical Engineering and Computer Science, 217–42. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-49995-2_12.

Abstract:
Online learning algorithms update their estimates as additional observations are made. Section 12.1 explains a simple example: online linear regression. The stochastic gradient projection algorithm is a general technique for updating estimates based on additional observations; it is widely used in machine learning. Section 12.2 presents the theory behind that algorithm. When analyzing large amounts of data, one faces the problems of identifying the most relevant data and of using the available data efficiently. Section 12.3 explains three examples of how these questions are addressed: the LASSO algorithm, compressed sensing, and the matrix completion problem. Section 12.4 discusses deep neural networks, for which the stochastic gradient projection algorithm is easy to implement.
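An online lasso in the spirit of this chapter can be sketched as one proximal stochastic-gradient step per observation (a sketch under assumed constants: the step size `eta` and penalty `lam` are illustrative, and this is not the book's notation):

```python
import numpy as np

def online_lasso(stream, p, lam=0.05, eta=0.01):
    """Online lasso sketch: one proximal stochastic-gradient step per sample.

    `stream` yields (x, y) pairs one at a time, with x a length-p array.
    """
    w = np.zeros(p)
    for x, y in stream:
        grad = (w @ x - y) * x           # gradient of 0.5*(w.x - y)^2
        w = w - eta * grad               # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)  # shrinkage step
    return w
```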
3

Wu, Kai, and Jing Liu. "Learning of Sparse Fuzzy Cognitive Maps Using Evolutionary Algorithm with Lasso Initialization." In Lecture Notes in Computer Science, 385–96. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68759-9_32.

4

Chen, Qian, and Lianbing Huang. "Research on Prediction Model of Gas Emission Based on Lasso Penalty Regression Algorithm." In Lecture Notes in Electrical Engineering, 165–72. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0187-6_19.

5

Yanagihara, Hirokazu, and Ryoya Oda. "Coordinate Descent Algorithm for Normal-Likelihood-Based Group Lasso in Multivariate Linear Regression." In Intelligent Decision Technologies, 429–39. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-2765-1_36.

6

AlKindy, Bassam, Christophe Guyeux, Jean-François Couchot, Michel Salomon, Christian Parisod, and Jacques M. Bahi. "Hybrid Genetic Algorithm and Lasso Test Approach for Inferring Well Supported Phylogenetic Trees Based on Subsets of Chloroplastic Core Genes." In Algorithms for Computational Biology, 83–96. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21233-3_7.

7

Md Shahri, Nur Huda Nabihan, and Susana Conde. "Modelling Multi-dimensional Contingency Tables: LASSO and Stepwise Algorithms." In Proceedings of the Third International Conference on Computing, Mathematics and Statistics (iCMS2017), 563–70. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-7279-7_70.

8

Pawlak, Mirosław, and Jiaqing Lv. "Analysis of Large Scale Power Systems via LASSO Learning Algorithms." In Artificial Intelligence and Soft Computing, 652–62. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20912-4_59.

9

Boulesteix, Anne-Laure, Adrian Richter, and Christoph Bernau. "Complexity Selection with Cross-validation for Lasso and Sparse Partial Least Squares Using High-Dimensional Data." In Algorithms from and for Nature and Life, 261–68. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-00035-0_26.

10

Gnad, Daniel, Jan Eisenhut, Alberto Lluch Lafuente, and Jörg Hoffmann. "Model Checking $$\omega $$-Regular Properties with Decoupled Search." In Computer Aided Verification, 411–34. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_19.

Abstract:
Decoupled search is a state space search method originally introduced in AI planning. Similar to partial-order reduction methods, decoupled search exploits the independence of components to tackle the state explosion problem. Similar to symbolic representations, it does not construct the explicit state space; instead, sets of states are represented in a compact manner, exploiting component independence. Given the success of both partial-order reduction and symbolic representations when model checking liveness properties, our goal is to add decoupled search to the toolset of liveness checking methods. Specifically, we show how decoupled search can be applied to liveness verification for composed Büchi automata by adapting, and showing correct, a standard algorithm for detecting lassos (i.e., infinite accepting runs), namely nested depth-first search. We evaluate our approach using a prototype implementation.
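The standard nested depth-first search that this chapter adapts can be sketched as follows (a textbook version over an explicit successor function, not decoupled search itself; recursion limits aside, it reports whether an accepting lasso exists):

```python
def nested_dfs(succ, init, accepting):
    """Textbook nested depth-first search for an accepting lasso.

    `succ` maps a state to an iterable of successors; returns True iff an
    accepting state reachable from `init` lies on a cycle, i.e. the Büchi
    automaton has an infinite accepting run.
    """
    visited, flagged = set(), set()

    def inner(seed, s):                  # second DFS: find a cycle through seed
        for t in succ(s):
            if t == seed:
                return True
            if t not in flagged:
                flagged.add(t)
                if inner(seed, t):
                    return True
        return False

    def outer(s):                        # first DFS; inner search in postorder
        visited.add(s)
        for t in succ(s):
            if t not in visited and outer(t):
                return True
        return s in accepting and inner(s, s)

    return outer(init)
```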

Conference papers on the topic "LASSO algorithm"

1

Lin, Daxuan, Zan Yang, Jiuwei Chen, Jiaxin Dong, Wei Nai, and Dan Li. "Lasso Regression with Quantum Whale Optimization Algorithm." In 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). IEEE, 2020. http://dx.doi.org/10.1109/icsess49938.2020.9237739.

2

Liu, Yashu, Jie Wang, and Jieping Ye. "An efficient algorithm for weak hierarchical lasso." In KDD '14: The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2623330.2623665.

3

Zhang, Jianming, Mingkun Du, and Keyang Cheng. "Pedestrian detection based on efficient fused lasso algorithm." In 2012 5th International Congress on Image and Signal Processing (CISP). IEEE, 2012. http://dx.doi.org/10.1109/cisp.2012.6469937.

4

Chen, Kai, and Yang Jin. "An ensemble learning algorithm based on Lasso selection." In 2010 IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS 2010). IEEE, 2010. http://dx.doi.org/10.1109/icicisys.2010.5658515.

5

Mo, Weike, Jiaqing Lv, Mirosław Pawlak, U. D. Annakkage, Haoyong Chen, and Yiping Chen. "Power System Online Sensitivity Identification Based on Lasso Algorithm." In 2020 IEEE Power & Energy Society General Meeting (PESGM). IEEE, 2020. http://dx.doi.org/10.1109/pesgm41954.2020.9281724.

6

Costa, M. A., and A. P. Braga. "Optimization of Neural Networks with Multi-Objective LASSO Algorithm." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.247329.

7

Alissou, Simplice A., and Ye Zhang. "Hyperspectral data compression using lasso algorithm for spectral decorrelation." In SPIE Sensing Technology + Applications, edited by Bormin Huang, Chein-I. Chang, and José Fco López. SPIE, 2014. http://dx.doi.org/10.1117/12.2053265.

8

Fujiwara, Yasuhiro, Naoki Marumo, Mathieu Blondel, Koh Takeuchi, Hideaki Kim, Tomoharu Iwata, and Naonori Ueda. "SVD-Based Screening for the Graphical Lasso." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/233.

Abstract:
The graphical lasso is the most popular approach to estimating the inverse covariance matrix of high-dimensional data. It iteratively estimates each row and column of the matrix in a round-robin style until convergence. However, the graphical lasso is infeasible for large datasets due to its high computation cost. This paper proposes Sting, a fast approach to the graphical lasso. In order to reduce the computation cost, it efficiently identifies blocks in the estimated matrix that have nonzero elements before entering the iterations by exploiting the singular value decomposition of the data matrix. In addition, it selectively updates elements of the estimated matrix expected to have nonzero values. Theoretically, it is guaranteed to converge to the same result as the original algorithm of the graphical lasso. Experiments show that our approach is faster than existing approaches.
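For reference, the unscreened graphical lasso estimate that Sting accelerates is available in scikit-learn; a toy usage (the penalty value and data are illustrative):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # toy data: 200 samples, 10 variables

# alpha is the l1 penalty; larger values yield a sparser inverse covariance
model = GraphicalLasso(alpha=0.2).fit(X)
precision = model.precision_             # estimated inverse covariance matrix
off_diag = (np.abs(precision) > 1e-8).sum() - precision.shape[0]
print("nonzero off-diagonal entries:", off_diag)
```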
9

Liu, Jun, Lei Yuan, and Jieping Ye. "An efficient algorithm for a class of fused lasso problems." In the 16th ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1835804.1835847.

10

Farokhmanesh, Fatemeh, and Mohammad Taghi Sadeghi. "Deep Feature Selection using an Enhanced Sparse Group Lasso Algorithm." In 2019 27th Iranian Conference on Electrical Engineering (ICEE). IEEE, 2019. http://dx.doi.org/10.1109/iraniancee.2019.8786386.
