Academic literature on the topic 'LASSO algoritmus'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'LASSO algoritmus.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "LASSO algoritmus"

1

Gaines, Brian R., Juhyun Kim, and Hua Zhou. "Algorithms for Fitting the Constrained Lasso." Journal of Computational and Graphical Statistics 27, no. 4 (2018): 861–71. http://dx.doi.org/10.1080/10618600.2018.1473777.

2

Bonnefoy, Antoine, Valentin Emiya, Liva Ralaivola, and Remi Gribonval. "Dynamic Screening: Accelerating First-Order Algorithms for the Lasso and Group-Lasso." IEEE Transactions on Signal Processing 63, no. 19 (2015): 5121–32. http://dx.doi.org/10.1109/tsp.2015.2447503.

3

Zhou, Helper, and Victor Gumbo. "Supervised Machine Learning for Predicting SMME Sales: An Evaluation of Three Algorithms." African Journal of Information and Communication, no. 27 (May 31, 2021): 1–21. http://dx.doi.org/10.23962/10539/31371.

Abstract:
The emergence of machine learning algorithms presents the opportunity for a variety of stakeholders to perform advanced predictive analytics and to make informed decisions. However, to date there have been few studies in developing countries that evaluate the performance of such algorithms—with the result that pertinent stakeholders lack an informed basis for selecting appropriate techniques for modelling tasks. This study aims to address this gap by evaluating the performance of three machine learning techniques: ordinary least squares (OLS), least absolute shrinkage and selection operator (L
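
As a concrete companion to the Zhou and Gumbo comparison above, the sketch below fits ordinary least squares and the LASSO on the same data with scikit-learn and compares hold-out error. The synthetic data, the alpha value, and the train/test split are illustrative assumptions, not the study's setup.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the SMME sales records used in the study.
X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

ols = LinearRegression().fit(X_train, y_train)
lasso = Lasso(alpha=1.0).fit(X_train, y_train)   # alpha is the l1 penalty weight

print("OLS test MSE:  ", mean_squared_error(y_test, ols.predict(X_test)))
print("LASSO test MSE:", mean_squared_error(y_test, lasso.predict(X_test)))
print("Nonzero LASSO coefficients:", int(np.sum(lasso.coef_ != 0)))
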
4

Wu, Tong Tong, and Kenneth Lange. "Coordinate descent algorithms for lasso penalized regression." Annals of Applied Statistics 2, no. 1 (2008): 224–44. http://dx.doi.org/10.1214/07-aoas147.

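
The cyclic coordinate-descent strategy named in the Wu and Lange title reduces the lasso problem to a sequence of one-dimensional updates, each solved in closed form by soft-thresholding. The sketch below is a generic illustration of that idea under simplifying assumptions (fixed iteration count, no standardization or convergence test), not the authors' implementation.

import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding, the closed-form solution of the univariate lasso step."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iters=100):
    """Minimize (1/(2n))||y - X b||^2 + lam * ||b||_1 by cycling over coordinates."""
    n, p = X.shape
    beta = np.zeros(p)
    col_scale = (X ** 2).sum(axis=0) / n      # per-coordinate curvature
    residual = y.astype(float).copy()         # residual for beta = 0
    for _ in range(n_iters):
        for j in range(p):
            residual += X[:, j] * beta[j]     # remove coordinate j's contribution
            rho = X[:, j] @ residual / n      # partial least-squares fit for j
            beta[j] = soft_threshold(rho, lam) / col_scale[j]
            residual -= X[:, j] * beta[j]     # put the updated contribution back
        # a convergence check on the change in beta would normally go here
    return beta

# Tiny demo on synthetic data with a sparse ground truth.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta_true = np.array([1.5, -2.0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float)
y = X @ beta_true + 0.1 * rng.normal(size=100)
print(np.round(lasso_coordinate_descent(X, y, lam=0.1), 2))
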
5

Tsiligkaridis, Theodoros, Alfred O. Hero III, and Shuheng Zhou. "On Convergence of Kronecker Graphical Lasso Algorithms." IEEE Transactions on Signal Processing 61, no. 7 (2013): 1743–55. http://dx.doi.org/10.1109/tsp.2013.2240157.

6

Muchisha, Nadya Dwi, Novian Tamara, Andriansyah Andriansyah, and Agus M. Soleh. "Nowcasting Indonesia’s GDP Growth Using Machine Learning Algorithms." Indonesian Journal of Statistics and Its Applications 5, no. 2 (2021): 355–68. http://dx.doi.org/10.29244/ijsa.v5i2p355-368.

Abstract:
GDP is very important to monitor in real time because of its usefulness for policy making. We built and compared ML models to forecast real-time Indonesia's GDP growth. We used 18 variables consisting of a number of quarterly macroeconomic and financial market statistics. We evaluated the performance of six popular ML algorithms (Random Forest, LASSO, Ridge, Elastic Net, Neural Networks, and Support Vector Machines) in real-time forecasting of GDP growth over the 2013:Q3 to 2019:Q4 period. We used the RMSE, MAD, and Pearson correlation coefficient as measurements of forecast…
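
The accuracy measures listed in the abstract above (RMSE, MAD, and the Pearson correlation coefficient) are straightforward to compute; a minimal sketch follows, with MAD read as the mean absolute deviation of the forecast errors and with hypothetical growth figures in place of the paper's data.

import numpy as np

def forecast_metrics(actual, predicted):
    """Return RMSE, mean absolute deviation of errors, and Pearson correlation."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    errors = predicted - actual
    rmse = float(np.sqrt(np.mean(errors ** 2)))
    mad = float(np.mean(np.abs(errors)))
    pearson = float(np.corrcoef(actual, predicted)[0, 1])
    return rmse, mad, pearson

# Hypothetical quarterly GDP growth rates (percent), not the paper's series.
actual = [5.1, 5.0, 4.9, 5.2, 5.0, 4.8]
nowcast = [5.0, 5.2, 4.7, 5.1, 5.1, 4.9]
print(forecast_metrics(actual, nowcast))
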
7

Jain, Rahi, and Wei Xu. "HDSI: High dimensional selection with interactions algorithm on feature selection and testing." PLOS ONE 16, no. 2 (2021): e0246159. http://dx.doi.org/10.1371/journal.pone.0246159.

Abstract:
Feature selection on high-dimensional data, along with the interaction effects, is a critical challenge for classical statistical learning techniques. Existing feature selection algorithms such as random LASSO leverage the LASSO's capability to handle high-dimensional data. However, the technique has two main limitations, namely the inability to consider interaction terms and the lack of a statistical test for determining the significance of selected features. This study proposes a High Dimensional Selection with Interactions (HDSI) algorithm, a new feature selection method, which can handle high-dimensional…
8

Qin, Zhiwei, Katya Scheinberg, and Donald Goldfarb. "Efficient block-coordinate descent algorithms for the Group Lasso." Mathematical Programming Computation 5, no. 2 (2013): 143–69. http://dx.doi.org/10.1007/s12532-013-0051-x.

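
The elementary operation inside block-coordinate descent methods for the group Lasso, such as the one analysed by Qin, Scheinberg, and Goldfarb, is the groupwise proximal map (block soft-thresholding), which shrinks a whole group of coefficients and zeroes it out when its norm is small. A minimal sketch of that operator, not the paper's specific algorithm:

import numpy as np

def group_soft_threshold(z, lam):
    """Proximal operator of lam * ||.||_2 for one coefficient group."""
    norm = np.linalg.norm(z)
    if norm <= lam:
        return np.zeros_like(z)        # the entire group is removed from the model
    return (1.0 - lam / norm) * z      # otherwise the group is shrunk toward zero

# A three-coefficient group is either shrunk (small penalty) or zeroed (large penalty).
print(group_soft_threshold(np.array([0.4, -0.2, 0.1]), lam=0.3))
print(group_soft_threshold(np.array([0.4, -0.2, 0.1]), lam=1.0))
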
9

Johnson, Karl M., and Thomas P. Monath. "Imported Lassa Fever — Reexamining the Algorithms." New England Journal of Medicine 323, no. 16 (1990): 1139–41. http://dx.doi.org/10.1056/nejm199010183231611.

10

Zhao, Yingdong, and Richard Simon. "Development and Validation of Predictive Indices for a Continuous Outcome Using Gene Expression Profiles." Cancer Informatics 9 (January 2010): CIN.S3805. http://dx.doi.org/10.4137/cin.s3805.

Abstract:
There have been relatively few publications using linear regression models to predict a continuous response based on microarray expression profiles. Standard linear regression methods are problematic when the number of predictor variables exceeds the number of cases. We have evaluated three linear regression algorithms that can be used for the prediction of a continuous response based on high dimensional gene expression data. The three algorithms are the least angle regression (LAR), the least absolute shrinkage and selection operator (LASSO), and the averaged linear regression method (ALM). A

Dissertations / Theses on the topic "LASSO algoritmus"

1

Loth, Manuel. "Algorithmes d'Ensemble Actif pour le LASSO." PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00845441.

Abstract:
This thesis addresses the computation of the LASSO (Least Absolute Shrinkage and Selection Operator) and the problems associated with it, in the field of regression. This operator has attracted growing attention since its introduction by Robert Tibshirani in 1996, owing to its ability to produce or identify sparse linear models from noisy observations, sparsity meaning that only a few of the many explanatory variables appear in the proposed model. This selection is produced by adding to the least-squares method…
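
For reference, the criterion the abstract describes, least squares augmented with an l1 penalty (Tibshirani, 1996), is, in standard notation,

\hat{\beta}^{\mathrm{lasso}} = \underset{\beta \in \mathbb{R}^{p}}{\arg\min}\ \tfrac{1}{2}\,\lVert y - X\beta \rVert_{2}^{2} + \lambda \lVert \beta \rVert_{1}, \qquad \lambda \ge 0,

where larger values of \lambda drive more coefficients exactly to zero, which is the sparsity the abstract refers to.
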
2

Singh, Kevin. "Comparing Variable Selection Algorithms On Logistic Regression – A Simulation." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446090.

Abstract:
When we try to understand why some schools perform worse than others, if Covid-19 has struck harder on some demographics or whether income correlates with increased happiness, we may turn to regression to better understand how these variables are correlated. To capture the true relationship between variables we may use variable selection methods in order to ensure that the variables which have an actual effect have been included in the model. Choosing the right model for variable selection is vital. Without it there is a risk of including variables which have little to do with the dependent va
3

Sanchez, Merchante Luis Francisco. "Learning algorithms for sparse classification." PhD thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-00868847.

Abstract:
This thesis deals with the development of estimation algorithms with embedded feature selection in the context of high-dimensional data, in the supervised and unsupervised frameworks. The contributions of this work are materialized by two algorithms, GLOSS for the supervised domain and Mix-GLOSS for the unsupervised counterpart. Both algorithms are based on solving an optimal scoring regression regularized with a quadratic formulation of the group-Lasso penalty, which encourages the removal of uninformative features. The theoretical foundations that prove that a group-Lasso penalized optimal scoring…
4

Huynh, Bao Tuyen. "Estimation and feature selection in high-dimensional mixtures-of-experts models." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC237.

Abstract:
This thesis deals with the modelling and estimation of high-dimensional mixtures-of-experts models, with a view to efficient density estimation, prediction, and classification of such data, which are complex because they are heterogeneous and high-dimensional. We propose new strategies based on regularized maximum-likelihood estimation of the models to overcome the limitations of standard methods, including maximum-likelihood estimation with expectation-maximization (EM) algorithms, and to simultaneously select the relevant variables so as to encourage sparse solutions…
5

Wang, Bo. "Variable Ranking by Solution-path Algorithms." Thesis, 2012. http://hdl.handle.net/10012/6496.

Abstract:
Variable Selection has always been a very important problem in statistics. We often meet situations where a huge data set is given and we want to find out the relationship between the response and the corresponding variables. With a huge number of variables, we often end up with a big model even if we delete those that are insignificant. There are two reasons why we are unsatisfied with a final model with too many variables. The first reason is the prediction accuracy. Though the prediction bias might be small under a big model, the variance is usually very high. The second reason is interpret
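
The ranking idea described in the abstract, ordering variables by when they become active along a regularization path, can be sketched with scikit-learn's LARS/lasso path routine. The synthetic data and the use of lars_path are assumptions for illustration, not the thesis's procedure.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

# Hypothetical data; in practice X and y come from the problem at hand.
X, y = make_regression(n_samples=150, n_features=20, n_informative=4,
                       noise=5.0, random_state=1)

# coefs has one row per feature and one column per step of the path.
alphas, _, coefs = lars_path(X, y, method="lasso")

# Rank features by the step at which their coefficient first becomes nonzero;
# features that never enter the path are pushed to the end of the ranking.
nonzero = coefs != 0
entry_step = np.where(nonzero.any(axis=1), nonzero.argmax(axis=1), np.inf)
ranking = np.argsort(entry_step)
print("Variables ranked by order of entry along the path:", ranking[:10].tolist())
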
6

Noro, Catarina Vieira. "Determinants of households' consumption in Portugal - a machine learning approach." Master's thesis, 2021. http://hdl.handle.net/10362/121884.

Abstract:
Machine Learning has been widely adopted by researchers in several academic fields. Although at a slow pace, the field of economics has also started to acknowledge the possibilities of these algorithm-based methods for complementing or even replacing traditional Econometric approaches. This research aims to apply Machine Learning data-driven variable selection models for assessing the determinants of Portuguese households' consumption using the Household Finance and Consumption Survey. I found that LASSO Regression and Elastic Net have the best performance in this setting and that wealth-related…
7

He, Zangdong. "Variable selection and structural discovery in joint models of longitudinal and survival data." Thesis, 2014. http://hdl.handle.net/1805/6365.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Joint models of longitudinal and survival outcomes have been used with increasing frequency in clinical investigations. Correct specification of fixed and random effects, as well as their functional forms, is essential for practical data analysis. However, no existing methods have been developed to meet this need in a joint model setting. In this dissertation, I describe a penalized likelihood-based method with adaptive least absolute shrinkage and selection operator (ALASSO) penalty functions for model selection. By reparameterizing…

Book chapters on the topic "LASSO algoritmus"

1

Loth, Manuel, and Philippe Preux. "The Iso-regularization Descent Algorithm for the LASSO." In Neural Information Processing. Theory and Algorithms. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17537-4_56.

2

Md Shahri, Nur Huda Nabihan, and Susana Conde. "Modelling Multi-dimensional Contingency Tables: LASSO and Stepwise Algorithms." In Proceedings of the Third International Conference on Computing, Mathematics and Statistics (iCMS2017). Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-7279-7_70.

3

Walrand, Jean. "Speech Recognition: B." In Probability in Electrical Engineering and Computer Science. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-49995-2_12.

Abstract:
Online learning algorithms update their estimates as additional observations are made. Section 12.1 explains a simple example: online linear regression. The stochastic gradient projection algorithm is a general technique to update estimates based on additional observations; it is widely used in machine learning. Section 12.2 presents the theory behind that algorithm. When analyzing large amounts of data, one faces the problems of identifying the most relevant data and of how to use efficiently the available data. Section 12.3 explains three examples of how these questions are addressed: the LASSO algorithm, compressed sensing, and the matrix completion problem. Section 12.4 discusses deep neural networks for which the stochastic gradient projection algorithm is easy to implement.
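
Section 12.3 of Walrand's chapter treats the LASSO alongside compressed sensing; one standard way to compute the LASSO estimate with a first-order method is proximal gradient descent (ISTA), sketched below. The step size, iteration count, and data are illustrative assumptions, not the book's code.

import numpy as np

def ista_lasso(X, y, lam, n_iters=500):
    """Proximal-gradient (ISTA) sketch for (1/2)||y - X b||^2 + lam * ||b||_1."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1/L for the quadratic term's gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        grad = X.T @ (X @ beta - y)             # gradient step on the smooth part
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return beta

# Tiny illustrative example with a sparse ground truth.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.normal(size=60)
print(np.round(ista_lasso(X, y, lam=1.0), 2))
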
4

Pawlak, Mirosław, and Jiaqing Lv. "Analysis of Large Scale Power Systems via LASSO Learning Algorithms." In Artificial Intelligence and Soft Computing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20912-4_59.

5

AlKindy, Bassam, Christophe Guyeux, Jean-François Couchot, Michel Salomon, Christian Parisod, and Jacques M. Bahi. "Hybrid Genetic Algorithm and Lasso Test Approach for Inferring Well Supported Phylogenetic Trees Based on Subsets of Chloroplastic Core Genes." In Algorithms for Computational Biology. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21233-3_7.

6

Boulesteix, Anne-Laure, Adrian Richter, and Christoph Bernau. "Complexity Selection with Cross-validation for Lasso and Sparse Partial Least Squares Using High-Dimensional Data." In Algorithms from and for Nature and Life. Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-00035-0_26.

7

Yamada, Isao, and Masao Yamagishi. "Hierarchical Convex Optimization by the Hybrid Steepest Descent Method with Proximal Splitting Operators—Enhancements of SVM and Lasso." In Splitting Algorithms, Modern Operator Theory, and Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25939-6_16.

8

Hao, Yuhan, Gary M. Weiss, and Stuart M. Brown. "Identification of Candidate Genes Responsible for Age-Related Macular Degeneration Using Microarray Data." In Biotechnology. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8903-7.ch038.

Abstract:
A DNA microarray can measure the expression of thousands of genes simultaneously, and this enables us to study the molecular pathways underlying Age-related Macular Degeneration. Previous studies have not determined which genes are responsible for the process of AMD. The authors address this deficiency by applying modern data mining and machine learning feature selection algorithms to the AMD microarray dataset. In this paper four methods are utilized to perform feature selection: Naïve Bayes, Random Forest, Random Lasso, and Ensemble Feature Selection. Functional Annotation of 20 final selected genes suggests that most of them are responsible for signal transduction in an individual cell or between cells. The top seven genes, five protein-coding genes and two non-coding RNAs, are explored from their signaling pathways, functional interactions and associations with retinal pigment epithelium cells. The authors conclude that Pten/PI3K/Akt pathway, NF-kappaB pathway, JNK cascade, Non-canonical Wnt Pathway, and two biological processes of cilia are likely to play important roles in AMD pathogenesis.

Conference papers on the topic "LASSO algoritmus"

1

Jin, Yuzhe, and Bhaskar D. Rao. "MultiPass lasso algorithms for sparse signal recovery." In 2011 IEEE International Symposium on Information Theory - ISIT. IEEE, 2011. http://dx.doi.org/10.1109/isit.2011.6033773.

2

Qian, Wang. "A Comparison of Three Numeric Algorithms for Lasso Solution." In 2020 International Conference on Computing and Data Science (CDS). IEEE, 2020. http://dx.doi.org/10.1109/cds49703.2020.00019.

3

Kong, Deguang, and Chris Ding. "Efficient Algorithms for Selecting Features with Arbitrary Group Constraints via Group Lasso." In 2013 IEEE International Conference on Data Mining (ICDM). IEEE, 2013. http://dx.doi.org/10.1109/icdm.2013.168.

4

Marins, Matheus, Rafael Chaves, Vinicius Pinho, Rebeca Cunha, and Marcello Campos. "Tackling Fingerprinting Indoor Localization Using the LASSO and the Conjugate Gradient Algorithms." In XXXIV Simpósio Brasileiro de Telecomunicações. Sociedade Brasileira de Telecomunicações, 2016. http://dx.doi.org/10.14209/sbrt.2016.47.

5

Gu, Bin, Xingwang Ju, Xiang Li, and Guansheng Zheng. "Faster Training Algorithms for Structured Sparsity-Inducing Norm." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/299.

Abstract:
Structured-sparsity regularization is popular for sparse learning because of its flexibility in encoding feature structures. This paper considers a generalized version of structured-sparsity regularization (especially for the l1/l-infinity norm) with arbitrary group overlap. Due to the group overlap, it is time-consuming to solve the associated proximal operator. Although Mairal et al. (2010) have proposed a network-flow algorithm to solve the proximal operator, it is still time-consuming, especially in the high-dimensional setting. To address this challenge, in this paper, we…
6

Maya, Haroldo C., and Guilherme A. Barreto. "A GA-Based Approach for Building Regularized Sparse Polynomial Models for Wind Turbine Power Curves." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4455.

Abstract:
In this paper, the classical polynomial model for wind turbine power curve estimation is revisited, aiming at an automatic and parsimonious design. In this regard, using genetic algorithms we introduce a methodology for estimating a suitable order for the polynomial as well as its relevant terms. The proposed methodology is compared with the state of the art in estimating the power curve of wind turbines, such as logistic models (with 4 and 5 parameters), artificial neural networks, and weighted polynomial regression. We also show that the proposed approach performs better than the standard LASSO…
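
The baseline the abstract compares against, a LASSO-regularized polynomial power-curve model, can be sketched with scikit-learn as below. The wind-speed and power values, the polynomial degree, and the penalty strength are hypothetical placeholders, not the paper's setup.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Hypothetical wind speeds (m/s) and normalized power outputs for illustration.
rng = np.random.default_rng(0)
speed = np.linspace(3.0, 15.0, 60).reshape(-1, 1)
power = np.clip((speed.ravel() - 3.0) ** 3 / 1200.0, 0.0, 1.0) + rng.normal(0.0, 0.02, 60)

# High-order polynomial basis, standardized, with an l1 penalty so that
# irrelevant terms are driven exactly to zero (a sparse polynomial model).
model = make_pipeline(PolynomialFeatures(degree=9, include_bias=False),
                      StandardScaler(),
                      Lasso(alpha=0.01, max_iter=100000))
model.fit(speed, power)
print("Nonzero polynomial terms:", int(np.sum(model.named_steps["lasso"].coef_ != 0)))
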
7

Kato, Masaya, Miho Ohsaki, and Kei Ohnishi. "Genetic Algorithms Using Neural Network Regression and Group Lasso for Dynamic Selection of Crossover Operators." In 2020 Joint 11th International Conference on Soft Computing and Intelligent Systems and 21st International Symposium on Advanced Intelligent Systems (SCIS-ISIS). IEEE, 2020. http://dx.doi.org/10.1109/scisisis50064.2020.9322697.

8

Idogun, Akpevwe Kelvin, Ruth Oyanu Ujah, and Lesley Anne James. "Surrogate-Based Analysis of Chemical Enhanced Oil Recovery – A Comparative Analysis of Machine Learning Model Performance." In SPE Nigeria Annual International Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/208452-ms.

Abstract:
Abstract Optimizing decision and design variables for Chemical EOR is imperative for sensitivity and uncertainty analysis. However, these processes involve multiple reservoir simulation runs which increase computational cost and time. Surrogate models are capable of overcoming this impediment as they are capable of mimicking the capabilities of full field three-dimensional reservoir simulation models in detail and complexity. Artificial Neural Networks (ANN) and regression-based Design of Experiments (DoE) are common methods for surrogate modelling. In this study, a comparative analysis of dat
9

Ahmadov, Jamal. "Utilizing Data-Driven Models to Predict Brittleness in Tuscaloosa Marine Shale: A Machine Learning Approach." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/208628-stu.

Abstract:
Abstract The Tuscaloosa Marine Shale (TMS) formation is a clay- and liquid-rich emerging shale play across central Louisiana and southwest Mississippi with recoverable resources of 1.5 billion barrels of oil and 4.6 trillion cubic feet of gas. The formation poses numerous challenges due to its high average clay content (50 wt%) and rapidly changing mineralogy, making the selection of fracturing candidates a difficult task. While brittleness plays an important role in screening potential intervals for hydraulic fracturing, typical brittleness estimation methods require the use of geomechanical
10

Orta Aleman, Dante, and Roland Horne. "Well Interference Detection from Long-Term Pressure Data Using Machine Learning and Multiresolution Analysis." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/206354-ms.

Abstract:
Abstract Knowledge of reservoir heterogeneity and connectivity is fundamental for reservoir management. Methods such as interference tests or tracers have been developed to obtain that knowledge from dynamic data. However, detecting well connectivity using interference tests requires long periods of time with a stable reservoir pressure and constant flow-rate conditions. Conversely, the long duration and high frequency of well production data have high value for detecting connectivity if noise, abrupt changes in flow-rate and missing data are dealt with. In this work, a methodology to detect i