Academic literature on the topic 'LASSO algoritmus'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'LASSO algoritmus.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "LASSO algoritmus"

1

Gaines, Brian R., Juhyun Kim, and Hua Zhou. "Algorithms for Fitting the Constrained Lasso." Journal of Computational and Graphical Statistics 27, no. 4 (August 7, 2018): 861–71. http://dx.doi.org/10.1080/10618600.2018.1473777.

2

Bonnefoy, Antoine, Valentin Emiya, Liva Ralaivola, and Remi Gribonval. "Dynamic Screening: Accelerating First-Order Algorithms for the Lasso and Group-Lasso." IEEE Transactions on Signal Processing 63, no. 19 (October 2015): 5121–32. http://dx.doi.org/10.1109/tsp.2015.2447503.

3

Zhou, Helper, and Victor Gumbo. "Supervised Machine Learning for Predicting SMME Sales: An Evaluation of Three Algorithms." African Journal of Information and Communication, no. 27 (May 31, 2021): 1–21. http://dx.doi.org/10.23962/10539/31371.

Abstract:
The emergence of machine learning algorithms presents the opportunity for a variety of stakeholders to perform advanced predictive analytics and to make informed decisions. However, to date there have been few studies in developing countries that evaluate the performance of such algorithms—with the result that pertinent stakeholders lack an informed basis for selecting appropriate techniques for modelling tasks. This study aims to address this gap by evaluating the performance of three machine learning techniques: ordinary least squares (OLS), least absolute shrinkage and selection operator (LASSO), and artificial neural networks (ANNs). These techniques are evaluated in respect of their ability to perform predictive modelling of the sales performance of small, medium and micro enterprises (SMMEs) engaged in manufacturing. The evaluation finds that the ANNs algorithm’s performance is far superior to that of the other two techniques, OLS and LASSO, in predicting the SMMEs’ sales performance.
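Hedged sketch: the three-way comparison described above can be reproduced in miniature with scikit-learn. The data here are synthetic (the SMME sales dataset is not public), and all model settings are illustrative rather than the study's.

```python
# Sketch: comparing OLS, LASSO, and a neural network on a regression task,
# in the spirit of the study above. Synthetic data stands in for the SMME
# sales dataset.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=20, noise=10.0, random_state=0)

models = {
    "OLS": LinearRegression(),
    "LASSO": LassoCV(cv=5),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,),
                                      max_iter=2000, random_state=0)),
}
for name, model in models.items():
    # Negative MSE is sklearn's scoring convention; flip the sign to report MSE.
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: cross-validated MSE = {mse:.1f}")
```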
4

Wu, Tong Tong, and Kenneth Lange. "Coordinate descent algorithms for lasso penalized regression." Annals of Applied Statistics 2, no. 1 (March 2008): 224–44. http://dx.doi.org/10.1214/07-aoas147.

5

Tsiligkaridis, Theodoros, Alfred O. Hero III, and Shuheng Zhou. "On Convergence of Kronecker Graphical Lasso Algorithms." IEEE Transactions on Signal Processing 61, no. 7 (April 2013): 1743–55. http://dx.doi.org/10.1109/tsp.2013.2240157.

6

Muchisha, Nadya Dwi, Novian Tamara, Andriansyah Andriansyah, and Agus M. Soleh. "Nowcasting Indonesia’s GDP Growth Using Machine Learning Algorithms." Indonesian Journal of Statistics and Its Applications 5, no. 2 (June 30, 2021): 355–68. http://dx.doi.org/10.29244/ijsa.v5i2p355-368.

Abstract:
GDP is very important to monitor in real time because of its usefulness for policy making. We built and compared machine learning (ML) models to nowcast Indonesia's GDP growth in real time. We used 18 variables consisting of quarterly macroeconomic and financial market statistics. We evaluated the performance of six popular ML algorithms (Random Forest, LASSO, Ridge, Elastic Net, Neural Networks, and Support Vector Machines) in producing real-time forecasts of GDP growth over the period 2013:Q3 to 2019:Q4. We used the RMSE, MAD, and Pearson correlation coefficient as measures of forecast accuracy. The results showed that all of these models outperformed the AR(1) benchmark. The individual model with the best performance was random forest. To obtain more accurate forecasts, we ran forecast combinations using equal weighting and lasso regression. The best model was obtained from the forecast combination using lasso regression with selected ML models, namely Random Forest, Ridge, Support Vector Machine, and Neural Network.
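A minimal sketch of the forecast-combination step described above, using scikit-learn's LassoCV to learn combination weights over a matrix of individual model forecasts. The forecast matrix is simulated, and the non-negativity constraint (positive=True) is a common convention in forecast combination, not necessarily the paper's choice.

```python
# Sketch: combining individual model forecasts with a lasso regression.
# In the paper, the columns would hold out-of-sample GDP forecasts from
# Random Forest, Ridge, SVM, Neural Network, etc.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
actual = rng.normal(5.0, 1.0, size=26)          # quarterly growth, 2013:Q3-2019:Q4
forecasts = actual[:, None] + rng.normal(0, 0.5, size=(26, 6))  # 6 models

combiner = LassoCV(cv=5, positive=True).fit(forecasts, actual)
print("combination weights:", combiner.coef_)    # a zero weight drops that model
combined = combiner.predict(forecasts)
print("RMSE:", np.sqrt(np.mean((combined - actual) ** 2)))
```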
7

Jain, Rahi, and Wei Xu. "HDSI: High dimensional selection with interactions algorithm on feature selection and testing." PLOS ONE 16, no. 2 (February 16, 2021): e0246159. http://dx.doi.org/10.1371/journal.pone.0246159.

Abstract:
Feature selection on high dimensional data along with interaction effects is a critical challenge for classical statistical learning techniques. Existing feature selection algorithms such as random LASSO leverage the LASSO's capability to handle high dimensional data. However, the technique has two main limitations, namely the inability to consider interaction terms and the lack of a statistical test for determining the significance of selected features. This study proposes the High Dimensional Selection with Interactions (HDSI) algorithm, a new feature selection method which can handle high-dimensional data, incorporate interaction terms, provide statistical inferences for the selected features, and leverage the capability of existing classical statistical techniques. The method allows the application of any statistical technique, such as LASSO or subset selection, on multiple bootstrapped samples, each containing randomly selected features. Each bootstrap sample incorporates interaction terms for the randomly sampled features. The features selected from each model are pooled and their statistical significance is determined. The statistically significant features are used as the final output of the approach, and their final coefficients are estimated using appropriate statistical techniques. The performance of HDSI is evaluated using both simulated data and real studies. In general, HDSI outperforms commonly used algorithms such as LASSO, subset selection, adaptive LASSO, random LASSO and group LASSO.
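A loose sketch of the HDSI idea under stated assumptions: bootstrap samples, random feature subsets augmented with pairwise interactions, a lasso fit per draw, and pooling of selection counts. Subset sizes, the penalty level, and the final significance test are simplified away.

```python
# Loose sketch of HDSI-style pooled selection over bootstrapped lasso fits.
import numpy as np
from collections import Counter
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 3 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1] + rng.normal(size=n)

votes = Counter()
for _ in range(100):
    rows = rng.integers(0, n, size=n)               # bootstrap sample
    cols = rng.choice(p, size=10, replace=False)    # random feature subset
    poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
    Xb = poly.fit_transform(X[np.ix_(rows, cols)])
    names = poly.get_feature_names_out([f"x{c}" for c in cols])
    coef = Lasso(alpha=0.1).fit(Xb, y[rows]).coef_
    votes.update(name for name, c in zip(names, coef) if abs(c) > 1e-8)

print(votes.most_common(5))   # x0 and x1 should rise toward the top
```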
8

Qin, Zhiwei, Katya Scheinberg, and Donald Goldfarb. "Efficient block-coordinate descent algorithms for the Group Lasso." Mathematical Programming Computation 5, no. 2 (March 31, 2013): 143–69. http://dx.doi.org/10.1007/s12532-013-0051-x.

9

Johnson, Karl M., and Thomas P. Monath. "Imported Lassa Fever — Reexamining the Algorithms." New England Journal of Medicine 323, no. 16 (October 18, 1990): 1139–41. http://dx.doi.org/10.1056/nejm199010183231611.

10

Zhao, Yingdong, and Richard Simon. "Development and Validation of Predictive Indices for a Continuous Outcome Using Gene Expression Profiles." Cancer Informatics 9 (January 2010): CIN.S3805. http://dx.doi.org/10.4137/cin.s3805.

Abstract:
There have been relatively few publications using linear regression models to predict a continuous response based on microarray expression profiles. Standard linear regression methods are problematic when the number of predictor variables exceeds the number of cases. We have evaluated three linear regression algorithms that can be used for the prediction of a continuous response based on high dimensional gene expression data: least angle regression (LAR), the least absolute shrinkage and selection operator (LASSO), and the averaged linear regression method (ALM). All methods are tested using simulations based on a real gene expression dataset, analyses of two sets of real gene expression data, and an unbiased complete cross-validation approach. Our results show that the LASSO algorithm often provides a model with somewhat lower prediction error than the LAR method, but both of them perform more efficiently than the ALM predictor. We have developed a plug-in for BRB-ArrayTools that implements the LAR and LASSO algorithms with complete cross-validation.
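The two path algorithms compared above are both available through scikit-learn's lars_path; a minimal sketch:

```python
# Sketch: LAR versus LASSO paths. method="lar" gives least angle regression;
# method="lasso" adds the sign-consistency modification that yields exact
# lasso solutions along the path.
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

X, y = make_regression(n_samples=100, n_features=30, n_informative=5,
                       noise=5.0, random_state=0)

alphas_lar, _, coefs_lar = lars_path(X, y, method="lar")
alphas_lasso, _, coefs_lasso = lars_path(X, y, method="lasso")
print("LAR steps:", coefs_lar.shape[1], " LASSO steps:", coefs_lasso.shape[1])
# The lasso path can take more steps because variables may leave the active
# set and re-enter, which plain LAR does not allow.
```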

Dissertations / Theses on the topic "LASSO algoritmus"

1

Loth, Manuel. "Algorithmes d'Ensemble Actif pour le LASSO." PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00845441.

Abstract:
This thesis addresses the computation of the LASSO (Least Absolute Shrinkage and Selection Operator) and the problems associated with it, in the setting of regression. The operator has attracted growing attention since its introduction by Robert Tibshirani in 1996, thanks to its ability to produce or identify sparse linear models from noisy observations, sparsity meaning that only a few of many explanatory variables appear in the resulting model. The selection is achieved by adding to the least-squares method a constraint or penalty on the sum of the absolute values of the linear coefficients, also called the l1 norm of the coefficient vector. After a review of the motivations, principles and issues of regression, linear estimators, the least-squares method, model selection and regularization, the two equivalent formulations of the LASSO, constrained and regularized, are presented; both define a non-trivial computational problem for associating an estimator with a set of observations and a selection parameter. A brief history of the algorithms solving this problem is given, the two approaches to handling the non-differentiability of the l1 norm are presented, and the equivalence of these problems with a quadratic program is shown. The second part focuses on the practical side of LASSO algorithms. One of them, proposed by Michael Osborne in 2000, is reformulated. The reformulation gives a general definition and explanation of the active-set method, which generalizes the simplex algorithm to convex programming, then progressively specializes it to the LASSO and addresses the optimization of the algebraic computations. Although it essentially describes the same algorithm as Osborne's, the presentation given here aims to expose its mechanisms clearly and uses different variables. Besides helping to better understand this apparently underrated algorithm, the chosen angle highlights the new observation that the same method applies naturally to the regularized formulation of the LASSO, not only to the constrained one. The popular homotopy method (also known as LAR-LASSO or LARS) is then presented as a derivation of the active-set method, leading to an alternative and somewhat simplified formulation of this algorithm, which provides the LASSO solutions for every value of its parameter. It is shown that, contrary to the results of a recent study by Jerome H. Friedman, implementations of these algorithms following these reformulations are more efficient in terms of computation time than a coordinate-descent method. The third part studies the extent to which these three algorithms (active set, homotopy, and coordinate descent) can handle certain special cases and can be applied to extensions of the LASSO or other similar problems. The special cases include degeneracies, such as the presence of linearly dependent variables, and the simultaneous selection/deselection of variables. The latter issue, neglected in previous work, is explained at greater length here and a simple, efficient solution is provided.
Another special case is LASSO selection from a very large, even infinite, number of variables, for which the active-set method has a major advantage. One extension of the LASSO is its transposition to an online learning setting, where it is desirable or necessary to solve the problem on a set of observations that evolves over time. Again, the limited flexibility of the homotopy method disqualifies it in favor of the other two. Another extension is the use of the l1 penalty with loss functions other than the l2 norm of the residual, or in combination with other penalties; the extent to which, and the way in which, each algorithm can be transposed to these problems is recalled or established.
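A rough, hedged probe of the thesis's timing comparison, using scikit-learn's homotopy (lars_path with method="lasso") and coordinate descent (lasso_path) implementations. This is only suggestive: results depend heavily on problem shape and implementation details, which is precisely the point under dispute.

```python
# Rough timing probe: homotopy path versus coordinate-descent path.
import time
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path, lasso_path

X, y = make_regression(n_samples=500, n_features=200, n_informative=20,
                       noise=1.0, random_state=0)

t0 = time.perf_counter()
lars_path(X, y, method="lasso")       # homotopy / LARS-based lasso path
t1 = time.perf_counter()
lasso_path(X, y)                      # coordinate descent over an alpha grid
t2 = time.perf_counter()
print(f"homotopy: {t1 - t0:.3f}s  coordinate descent: {t2 - t1:.3f}s")
```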
2

Singh, Kevin. "Comparing Variable Selection Algorithms On Logistic Regression – A Simulation." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446090.

Abstract:
When we try to understand why some schools perform worse than others, whether Covid-19 has struck some demographics harder, or whether income correlates with increased happiness, we may turn to regression to better understand how these variables are correlated. To capture the true relationship between variables, we may use variable selection methods to ensure that the variables which have an actual effect are included in the model. Choosing the right method for variable selection is vital. Without it, there is a risk of including variables which have little to do with the dependent variable, or of excluding variables that are important. Failing to capture the true effects would paint a picture disconnected from reality and give a false impression of what reality really looks like. To mitigate this risk, a simulation study was conducted to find out which variable selection algorithms to apply in order to make more accurate inferences. The algorithms tested are stepwise regression, backward elimination and lasso regression. Lasso performed worst when applied to a small sample but performed best when applied to larger samples. Backward elimination and stepwise regression had very similar results.
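A small sketch of two of the compared selection strategies on a logistic model, assuming scikit-learn: SequentialFeatureSelector serves as a greedy stand-in for classical p-value-based backward elimination, and an L1-penalized logistic regression plays the role of the lasso.

```python
# Sketch: backward elimination versus lasso selection for logistic regression.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=15, n_informative=4,
                           random_state=0)

backward = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     direction="backward",
                                     n_features_to_select=4).fit(X, y)
print("backward keeps:", np.flatnonzero(backward.get_support()))

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("lasso keeps:   ", np.flatnonzero(lasso.coef_[0]))
```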
3

Sanchez, Merchante Luis Francisco. "Learning algorithms for sparse classification." PhD thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-00868847.

Abstract:
This thesis deals with the development of estimation algorithms with embedded feature selection in the context of high dimensional data, in the supervised and unsupervised frameworks. The contributions of this work are materialized by two algorithms: GLOSS for the supervised domain and Mix-GLOSS for its unsupervised counterpart. Both algorithms are based on solving an optimal scoring regression regularized with a quadratic formulation of the group-Lasso penalty, which encourages the removal of uninformative features. The theoretical foundations proving that a group-Lasso penalized optimal scoring regression can be used to solve linear discriminant analysis were first developed in this work. The theory that adapts this technique to the unsupervised domain by means of the EM algorithm is not new, but it had never been clearly exposed for a sparsity-inducing penalty. This thesis solidly demonstrates that using group-Lasso penalized optimal scoring regression inside an EM algorithm is possible. Our algorithms have been tested on real and artificial high dimensional databases, with impressive results in terms of parsimony without compromising prediction performance.
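The group-Lasso penalty at the core of GLOSS acts through block soft-thresholding; a minimal numpy sketch of that proximal operator for non-overlapping groups:

```python
# Minimal sketch of block soft-thresholding, the proximal operator behind
# group-lasso penalized methods: a group's whole coefficient block is shrunk,
# and dropped entirely once its norm falls below the threshold.
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of lam * sum_g ||beta_g||_2 (non-overlapping groups)."""
    out = np.zeros_like(beta)
    for g in groups:
        norm = np.linalg.norm(beta[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * beta[g]   # shrink the whole block
    return out

beta = np.array([3.0, -4.0, 0.1, 0.2, 2.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4])]
print(group_soft_threshold(beta, groups, lam=1.0))
# First group is shrunk, second group is zeroed out entirely, third is shrunk.
```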
4

Huynh, Bao Tuyen. "Estimation and feature selection in high-dimensional mixtures-of-experts models." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC237.

Abstract:
This thesis deals with the problem of modeling and estimating high-dimensional mixtures-of-experts (MoE) models, towards effective density estimation, prediction and clustering of heterogeneous, high-dimensional data. We propose new strategies based on regularized maximum-likelihood estimation (MLE) of MoE models to overcome the limitations of standard methods, including MLE with Expectation-Maximization (EM) algorithms, and to simultaneously perform feature selection so that sparse models are encouraged in such a high-dimensional setting. We first introduce a mixture-of-experts parameter estimation and variable selection methodology, based on l1 (lasso) regularization and the EM framework, for regression and clustering suited to high-dimensional contexts. Then we extend the method to regularized mixtures of experts for discrete data, including classification. We develop efficient algorithms to maximize the proposed l1-penalized observed-data log-likelihood function. Our proposed strategies enjoy efficient monotone maximization of the optimized criterion and, unlike previous approaches, do not rely on approximations of the penalty functions, avoid matrix inversion, and exploit the efficiency of the coordinate ascent algorithm, particularly within the proximal Newton-based approach.
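A very simplified sketch of the underlying idea, assuming scikit-learn and scipy: an EM loop for a two-component mixture of linear regressions whose M-step performs lasso-penalized weighted fits. A real mixture of experts would also fit a gating network and update the noise variances; a fixed noise scale is used here purely for brevity.

```python
# Very simplified sketch: EM for a two-component mixture of linear
# regressions with lasso-penalized M-steps (Lasso.fit supports sample_weight).
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 400, 10
X = rng.normal(size=(n, p))
z = rng.integers(0, 2, size=n)                      # latent component labels
y = np.where(z == 0, 3 * X[:, 0], -3 * X[:, 1]) + rng.normal(0, 0.5, size=n)

models = [Lasso(alpha=0.05), Lasso(alpha=0.05)]
resp = rng.uniform(size=n)                          # soft assignments to component 0
pi, sigma = 0.5, 1.0                                # mixing weight, fixed noise scale
for _ in range(30):
    # M-step: lasso-penalized weighted least-squares fit per component.
    models[0].fit(X, y, sample_weight=resp)
    models[1].fit(X, y, sample_weight=1 - resp)
    # E-step: posterior responsibility of component 0 for each observation.
    lik0 = pi * norm.pdf(y, models[0].predict(X), sigma)
    lik1 = (1 - pi) * norm.pdf(y, models[1].predict(X), sigma)
    resp = lik0 / (lik0 + lik1)
    pi = resp.mean()

print("component 0 coefficients:", np.round(models[0].coef_, 2))
print("component 1 coefficients:", np.round(models[1].coef_, 2))
```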
5

Wang, Bo. "Variable Ranking by Solution-path Algorithms." Thesis, 2012. http://hdl.handle.net/10012/6496.

Abstract:
Variable selection has always been a very important problem in statistics. We often meet situations where a huge data set is given and we want to find out the relationship between the response and the corresponding variables. With a huge number of variables, we often end up with a big model even if we delete those that are insignificant. There are two reasons why we are unsatisfied with a final model with too many variables. The first reason is prediction accuracy: though the prediction bias might be small under a big model, the variance is usually very high. The second reason is interpretation: with a large number of variables in the model, it is hard to determine a clear relationship and explain the effects of the variables we are interested in. Many variable selection methods have been proposed. However, one disadvantage of variable selection is that different model sizes require different tuning parameters, which are hard to choose for non-statisticians. Xin and Zhu advocate variable ranking instead of variable selection. Once variables are ranked properly, we can make the selection by adopting a threshold rule. In this thesis, we try to rank the variables using Least Angle Regression (LARS). Some shrinkage methods, like Lasso and LARS, can shrink coefficients to zero. The advantage of this kind of method is that it gives a solution path which describes the order in which variables enter the model. This provides an intuitive way to rank variables based on the path. However, Lasso can sometimes be difficult to apply to variable ranking directly, because in a Lasso solution path variables might enter the model and then get dropped. This dropping issue makes it hard to rank based on the order of entrance. LARS, which is a modified version of Lasso, does not have this problem. We make use of this property and rank variables using the LARS solution path.
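A minimal sketch of ranking by order of entry along the path, assuming scikit-learn's lars_path; with method="lar" no variable is ever dropped, so the returned active list doubles as a ranking.

```python
# Sketch: variable ranking from the LARS entry order.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

X, y = make_regression(n_samples=200, n_features=25, n_informative=5,
                       noise=5.0, random_state=1)

_, entry_order, _ = lars_path(X, y, method="lar")
print("variables ranked by LARS entry order:", entry_order)
# A threshold rule then keeps, say, the first k entrants.
```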
6

Noro, Catarina Vieira. "Determinants of households' consumption in Portugal - a machine learning approach." Master's thesis, 2021. http://hdl.handle.net/10362/121884.

Abstract:
Machine learning has been widely adopted by researchers in several academic fields. Although at a slow pace, the field of economics has also started to acknowledge the possibilities of these algorithm-based methods for complementing or even replacing traditional econometric approaches. This research aims to apply machine learning data-driven variable selection models to assess the determinants of Portuguese households' consumption using the Household Finance and Consumption Survey. I found that LASSO regression and Elastic Net have the best performance in this setting, and that wealth-related variables have the highest impact on households' consumption levels, followed by income, household characteristics, and debt and consumption credit.
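A hedged sketch of the kind of comparison reported above, on synthetic data standing in for the restricted-access Household Finance and Consumption Survey:

```python
# Sketch: cross-validated lasso versus elastic net on a regression task.
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, ElasticNetCV
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=400, n_features=40, n_informative=8,
                       noise=10.0, random_state=0)

for name, model in [("LASSO", LassoCV(cv=5)),
                    ("Elastic Net", ElasticNetCV(cv=5, l1_ratio=[0.2, 0.5, 0.8]))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.3f}")
```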
7

He, Zangdong. "Variable selection and structural discovery in joint models of longitudinal and survival data." Thesis, 2014. http://hdl.handle.net/1805/6365.

Abstract:
Joint models of longitudinal and survival outcomes have been used with increasing frequency in clinical investigations. Correct specification of fixed and random effects, as well as their functional forms is essential for practical data analysis. However, no existing methods have been developed to meet this need in a joint model setting. In this dissertation, I describe a penalized likelihood-based method with adaptive least absolute shrinkage and selection operator (ALASSO) penalty functions for model selection. By reparameterizing variance components through a Cholesky decomposition, I introduce a penalty function of group shrinkage; the penalized likelihood is approximated by Gaussian quadrature and optimized by an EM algorithm. The functional forms of the independent effects are determined through a procedure for structural discovery. Specifically, I first construct the model by penalized cubic B-spline and then decompose the B-spline to linear and nonlinear elements by spectral decomposition. The decomposition represents the model in a mixed-effects model format, and I then use the mixed-effects variable selection method to perform structural discovery. Simulation studies show excellent performance. A clinical application is described to illustrate the use of the proposed methods, and the analytical results demonstrate the usefulness of the methods.
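The ALASSO penalty itself has a compact form in the plain regression setting; a sketch via the usual column-rescaling trick (the dissertation's joint-model machinery, Gaussian quadrature, and EM steps are far beyond this illustration):

```python
# Sketch of the adaptive lasso: weight each coefficient's l1 penalty by the
# inverse of a pilot estimate, implemented by rescaling the design columns.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=20, n_informative=4,
                       noise=5.0, random_state=0)

pilot = Ridge(alpha=1.0).fit(X, y).coef_        # pilot estimate
w = 1.0 / (np.abs(pilot) + 1e-6)                # adaptive weights
lasso = Lasso(alpha=1.0).fit(X / w, y)          # rescaled design: X_ij / w_j
beta = lasso.coef_ / w                          # map back to the original scale
print("nonzero coefficients:", np.flatnonzero(beta))
```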

Book chapters on the topic "LASSO algoritmus"

1

Loth, Manuel, and Philippe Preux. "The Iso-regularization Descent Algorithm for the LASSO." In Neural Information Processing. Theory and Algorithms, 454–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17537-4_56.

2

Md Shahri, Nur Huda Nabihan, and Susana Conde. "Modelling Multi-dimensional Contingency Tables: LASSO and Stepwise Algorithms." In Proceedings of the Third International Conference on Computing, Mathematics and Statistics (iCMS2017), 563–70. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-7279-7_70.

3

Walrand, Jean. "Speech Recognition: B." In Probability in Electrical Engineering and Computer Science, 217–42. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-49995-2_12.

Abstract:
Online learning algorithms update their estimates as additional observations are made. Section 12.1 explains a simple example: online linear regression. The stochastic gradient projection algorithm is a general technique for updating estimates based on additional observations; it is widely used in machine learning. Section 12.2 presents the theory behind that algorithm. When analyzing large amounts of data, one faces the problems of identifying the most relevant data and of using the available data efficiently. Section 12.3 explains three examples of how these questions are addressed: the LASSO algorithm, compressed sensing, and the matrix completion problem. Section 12.4 discusses deep neural networks, for which the stochastic gradient projection algorithm is easy to implement.
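A numpy sketch of the proximal-gradient (soft-thresholding) view of the LASSO that treatments like this build on, often called ISTA:

```python
# Sketch: ISTA for the lasso, i.e. a gradient step on the least-squares loss
# followed by soft-thresholding.
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Minimize 0.5*||y - X b||^2 + lam*||b||_1 by proximal gradient."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)              # gradient of the smooth part
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [4.0, -2.0, 3.0]
y = X @ beta_true + 0.1 * rng.normal(size=100)
print(np.round(ista_lasso(X, y, lam=5.0), 2))   # recovers the sparse support
```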
4

Pawlak, Mirosław, and Jiaqing Lv. "Analysis of Large Scale Power Systems via LASSO Learning Algorithms." In Artificial Intelligence and Soft Computing, 652–62. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20912-4_59.

5

AlKindy, Bassam, Christophe Guyeux, Jean-François Couchot, Michel Salomon, Christian Parisod, and Jacques M. Bahi. "Hybrid Genetic Algorithm and Lasso Test Approach for Inferring Well Supported Phylogenetic Trees Based on Subsets of Chloroplastic Core Genes." In Algorithms for Computational Biology, 83–96. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21233-3_7.

6

Boulesteix, Anne-Laure, Adrian Richter, and Christoph Bernau. "Complexity Selection with Cross-validation for Lasso and Sparse Partial Least Squares Using High-Dimensional Data." In Algorithms from and for Nature and Life, 261–68. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-00035-0_26.

7

Yamada, Isao, and Masao Yamagishi. "Hierarchical Convex Optimization by the Hybrid Steepest Descent Method with Proximal Splitting Operators—Enhancements of SVM and Lasso." In Splitting Algorithms, Modern Operator Theory, and Applications, 413–89. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25939-6_16.

8

Hao, Yuhan, Gary M. Weiss, and Stuart M. Brown. "Identification of Candidate Genes Responsible for Age-Related Macular Degeneration Using Microarray Data." In Biotechnology, 969–1001. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8903-7.ch038.

Abstract:
A DNA microarray can measure the expression of thousands of genes simultaneously, and this enables us to study the molecular pathways underlying Age-related Macular Degeneration (AMD). Previous studies have not determined which genes are responsible for the process of AMD. The authors address this deficiency by applying modern data mining and machine learning feature selection algorithms to the AMD microarray dataset. In this paper, four methods are utilized to perform feature selection: Naïve Bayes, Random Forest, Random Lasso, and Ensemble Feature Selection. Functional annotation of the 20 finally selected genes suggests that most of them are responsible for signal transduction within an individual cell or between cells. The top seven genes, five protein-coding genes and two non-coding RNAs, are explored in terms of their signaling pathways, functional interactions and associations with retinal pigment epithelium cells. The authors conclude that the Pten/PI3K/Akt pathway, the NF-kappaB pathway, the JNK cascade, the non-canonical Wnt pathway, and two biological processes of cilia are likely to play important roles in AMD pathogenesis.
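A hedged sketch of ensemble feature selection in the spirit described above, with common stand-in selectors (not the chapter's exact four methods) and a consensus rule over their top-ranked features:

```python
# Sketch: score features under several selectors, keep the consensus.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=40, n_informative=5,
                           random_state=0)

scores = {
    "random forest": RandomForestClassifier(random_state=0)
                     .fit(X, y).feature_importances_,
    "L1 logistic": np.abs(LogisticRegression(penalty="l1", solver="liblinear",
                                             C=0.5).fit(X, y).coef_[0]),
    "mutual info": mutual_info_classif(X, y, random_state=0),
}
top = {name: set(np.argsort(s)[-10:]) for name, s in scores.items()}
consensus = set.intersection(*top.values())
print("features in every selector's top 10:", sorted(consensus))
```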

Conference papers on the topic "LASSO algoritmus"

1

Jin, Yuzhe, and Bhaskar D. Rao. "MultiPass lasso algorithms for sparse signal recovery." In 2011 IEEE International Symposium on Information Theory - ISIT. IEEE, 2011. http://dx.doi.org/10.1109/isit.2011.6033773.

2

Qian, Wang. "A Comparison of Three Numeric Algorithms for Lasso Solution." In 2020 International Conference on Computing and Data Science (CDS). IEEE, 2020. http://dx.doi.org/10.1109/cds49703.2020.00019.

3

Kong, Deguang, and Chris Ding. "Efficient Algorithms for Selecting Features with Arbitrary Group Constraints via Group Lasso." In 2013 IEEE International Conference on Data Mining (ICDM). IEEE, 2013. http://dx.doi.org/10.1109/icdm.2013.168.

4

Marins, Matheus, Rafael Chaves, Vinicius Pinho, Rebeca Cunha, and Marcello Campos. "Tackling Fingerprinting Indoor Localization Using the LASSO and the Conjugate Gradient Algorithms." In XXXIV Simpósio Brasileiro de Telecomunicações. Sociedade Brasileira de Telecomunicações, 2016. http://dx.doi.org/10.14209/sbrt.2016.47.

5

Gu, Bin, Xingwang Ju, Xiang Li, and Guansheng Zheng. "Faster Training Algorithms for Structured Sparsity-Inducing Norm." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/299.

Abstract:
Structured-sparsity regularization is popular for sparse learning because of its flexibility in encoding feature structures. This paper considers a generalized version of structured-sparsity regularization (especially for the l1/l∞ norm) with arbitrary group overlap. Due to the group overlap, it is time-consuming to solve the associated proximal operator. Although Mairal (2010) proposed a network-flow algorithm to solve the proximal operator, it is still time-consuming, especially in the high-dimensional setting. To address this challenge, we have developed a more efficient solution for the l1/l∞ group lasso with arbitrary group overlap using an inexact proximal-gradient method. In each iteration, our algorithm only requires calculating an inexact solution to the proximal sub-problem, which can be done efficiently. On the theoretical side, the proposed algorithm enjoys the same global convergence rate as exact proximal methods. Experiments demonstrate that our algorithm is much more efficient than the network-flow algorithm, while retaining similar generalization performance.
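The difficulty the paper addresses begins with the proximal operator of the l∞ norm itself; for a single group it follows from Moreau decomposition plus an l1-ball projection. A numpy sketch using the sort-based projection of Duchi et al.:

```python
# Sketch: prox of the l_inf norm on one group, via Moreau decomposition
# (prox of lam*||.||_inf equals v minus the projection of v onto lam*B_1).
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto {x : ||x||_1 <= radius}."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                      # sorted magnitudes
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    return v - project_l1_ball(v, lam)

v = np.array([3.0, -1.0, 2.5, 0.2])
print(prox_linf(v, lam=2.0))   # the largest entries are pulled to a common level
```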
6

Maya, Haroldo C., and Guilherme A. Barreto. "A GA-Based Approach for Building Regularized Sparse Polynomial Models for Wind Turbine Power Curves." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4455.

Abstract:
In this paper, the classical polynomial model for wind turbine power curve estimation is revisited, aiming at an automatic and parsimonious design. Using genetic algorithms, we introduce a methodology for estimating a suitable order for the polynomial as well as its relevant terms. The proposed methodology is compared with the state of the art in estimating the power curve of wind turbines, such as logistic models (with 4 and 5 parameters), artificial neural networks and weighted polynomial regression. We also show that the proposed approach performs better than the standard LASSO approach for building regularized sparse models. The results indicate that the proposed methodology consistently outperforms all the evaluated alternative methods.
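A sketch of the standard-LASSO baseline that the GA-based method is compared against: a dense polynomial basis pruned by an l1 penalty. The cubic "true" curve and all settings are toy stand-ins for a real turbine power curve.

```python
# Sketch: sparse polynomial fit via LASSO on a toy power-curve problem.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
wind = rng.uniform(3, 15, size=300)[:, None]             # wind speed, m/s
power = 0.5 * wind[:, 0] ** 3 - 2 * wind[:, 0] + rng.normal(0, 20, size=300)

model = make_pipeline(PolynomialFeatures(degree=9, include_bias=False),
                      StandardScaler(),
                      LassoCV(cv=5, max_iter=50000))
model.fit(wind, power)
coefs = model.named_steps["lassocv"].coef_
print("polynomial degrees kept:", np.flatnonzero(coefs) + 1)
```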
7

Kato, Masaya, Miho Ohsaki, and Kei Ohnishi. "Genetic Algorithms Using Neural Network Regression and Group Lasso for Dynamic Selection of Crossover Operators." In 2020 Joint 11th International Conference on Soft Computing and Intelligent Systems and 21st International Symposium on Advanced Intelligent Systems (SCIS-ISIS). IEEE, 2020. http://dx.doi.org/10.1109/scisisis50064.2020.9322697.

8

Idogun, Akpevwe Kelvin, Ruth Oyanu Ujah, and Lesley Anne James. "Surrogate-Based Analysis of Chemical Enhanced Oil Recovery – A Comparative Analysis of Machine Learning Model Performance." In SPE Nigeria Annual International Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/208452-ms.

Abstract:
Optimizing decision and design variables for chemical EOR is imperative for sensitivity and uncertainty analysis. However, these processes involve multiple reservoir simulation runs, which increase computational cost and time. Surrogate models can overcome this impediment, as they can mimic the capabilities of full field three-dimensional reservoir simulation models in detail and complexity. Artificial Neural Networks (ANN) and regression-based Design of Experiments (DoE) are common methods for surrogate modelling. In this study, a comparative analysis of data-driven surrogate model performance on Recovery Factor (RF) for surfactant-polymer flooding is investigated with seven input variables: Kv/Kh ratio, polymer concentration in polymer drive, surfactant slug size, surfactant concentration in surfactant slug, polymer concentration in surfactant slug, polymer drive size and salinity of polymer drive. Eleven machine learning models, including Multiple Linear Regression (MLR), Ridge and Lasso regression, Support Vector Regression (SVR), ANN, and Classification and Regression Tree (CART) based algorithms including Decision Trees, Random Forest, eXtreme Gradient Boosting (XGBoost), Gradient Boosting and Extremely Randomized Trees (ERT), are applied to a dataset consisting of 202 data points. The results obtained indicate high model performance and accuracy for SVR, ANN and CART-based ensemble techniques like Extremely Randomized Trees, Gradient Boosting and XGBoost regression, with high R2 values and the lowest Mean Squared Error (MSE) values for the training and test datasets. Unlike other studies on chemical EOR surrogate modelling, where sensitivity was analyzed with statistical DoE, we rank the input features using decision-tree-based algorithms, while model interpretability is achieved with Shapley values. Results from feature ranking indicate that surfactant concentration and slug size are the most influential parameters on the RF. Other important factors, though with less influence, are the polymer concentration in the surfactant slug, the polymer concentration in the polymer drive and the polymer drive size. The salinity of the polymer drive and the Kv/Kh ratio both have a negative effect on the RF, with a corresponding least level of significance.
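A small sketch of decision-tree-based feature ranking as described above, assuming scikit-learn; the feature names are illustrative stand-ins for the paper's seven design variables, and the synthetic response is unrelated to real chemical EOR physics.

```python
# Sketch: ranking input features with a gradient-boosting model's importances.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

names = ["surf_conc", "slug_size", "poly_conc_slug", "poly_conc_drive",
         "drive_size", "salinity", "kv_kh"]
X, y = make_regression(n_samples=202, n_features=7, n_informative=4,
                       noise=5.0, random_state=0)

gbr = GradientBoostingRegressor(random_state=0).fit(X, y)
for i in np.argsort(gbr.feature_importances_)[::-1]:
    print(f"{names[i]:>15}: {gbr.feature_importances_[i]:.3f}")
```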
9

Ahmadov, Jamal. "Utilizing Data-Driven Models to Predict Brittleness in Tuscaloosa Marine Shale: A Machine Learning Approach." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/208628-stu.

Abstract:
The Tuscaloosa Marine Shale (TMS) formation is a clay- and liquid-rich emerging shale play across central Louisiana and southwest Mississippi with recoverable resources of 1.5 billion barrels of oil and 4.6 trillion cubic feet of gas. The formation poses numerous challenges due to its high average clay content (50 wt%) and rapidly changing mineralogy, making the selection of fracturing candidates a difficult task. While brittleness plays an important role in screening potential intervals for hydraulic fracturing, typical brittleness estimation methods require the use of geomechanical and mineralogical properties from costly laboratory tests. Machine Learning (ML) can be employed to generate synthetic brittleness logs and may therefore serve as an inexpensive and fast alternative to the current techniques. In this paper, we propose the use of machine learning to predict the brittleness index of the Tuscaloosa Marine Shale from conventional well logs. We trained ML models on a dataset containing conventional and brittleness index logs from 8 wells. The latter were estimated either from geomechanical logs or log-derived mineralogy. Moreover, to ensure mechanical data reliability, dynamic-to-static conversion ratios were applied to Young's modulus and Poisson's ratio. The predictor features included neutron porosity, density and compressional slowness logs to account for the petrophysical and mineralogical character of TMS. The brittleness index was predicted using algorithms such as Linear, Ridge and Lasso Regression, K-Nearest Neighbors, Support Vector Machine (SVM), Decision Tree, Random Forest, AdaBoost and Gradient Boosting. Models were shortlisted based on the Root Mean Square Error (RMSE) value and fine-tuned using the Grid Search method with a specific set of hyperparameters for each model. Overall, Gradient Boosting and Random Forest outperformed the other algorithms and showed an average error reduction of 5%, a normalized RMSE of 0.06 and an R-squared value of 0.89. Gradient Boosting was chosen to evaluate the test set and successfully predicted the brittleness index with a normalized RMSE of 0.07 and an R-squared value of 0.83. This paper presents the practical use of machine learning to evaluate brittleness in a cost- and time-effective manner and can further provide valuable insights into the optimization of completion in TMS. The proposed ML model can be used as a tool for initial screening of fracturing candidates and selection of fracturing intervals in other clay-rich and heterogeneous shale formations.
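A sketch of the tuning step described above, assuming scikit-learn: a small grid over gradient-boosting hyperparameters scored by RMSE. The grid values are illustrative, not the paper's.

```python
# Sketch: grid search over gradient-boosting hyperparameters, scored by RMSE.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=300, n_features=3, noise=5.0, random_state=0)

grid = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3],
                "learning_rate": [0.05, 0.1]},
    scoring="neg_root_mean_squared_error", cv=5)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("CV RMSE:", -grid.best_score_)
```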
10

Orta Aleman, Dante, and Roland Horne. "Well Interference Detection from Long-Term Pressure Data Using Machine Learning and Multiresolution Analysis." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/206354-ms.

Abstract:
Knowledge of reservoir heterogeneity and connectivity is fundamental for reservoir management. Methods such as interference tests or tracers have been developed to obtain that knowledge from dynamic data. However, detecting well connectivity using interference tests requires long periods of time with a stable reservoir pressure and constant flow-rate conditions. Conversely, the long duration and high frequency of well production data make them highly valuable for detecting connectivity, provided that noise, abrupt changes in flow rate and missing data are dealt with. In this work, a methodology to detect interference from long-term pressure and flow-rate data was developed using multiresolution analysis in combination with machine learning algorithms. The methodology shows high accuracy and robustness to noise while requiring little to no data preprocessing. It builds on previous work using the Maximal Overlap Discrete Wavelet Transform (MODWT) to analyze long-term pressure data. The new approach uses the ability of the MODWT to capture, synthesize and discriminate the relevant reservoir response for each individual well at different time scales while still honoring the relevant flow physics. By first applying the MODWT to the flow-rate history, a machine learning algorithm was used to estimate the pressure response of each well as it would be in isolation. Interference can be detected by comparing the output of the machine learning model with the unprocessed pressure data. A set of machine learning and deep learning algorithms were tested, including Kernel Ridge Regression, Lasso Regression and Recurrent Neural Networks. The machine learning models were able to detect interference at different distances, even in the presence of high noise and missing data. The results were validated by comparing the machine learning output with the theoretical pressure response of wells in isolation. Additionally, it was shown that applying MODWT multiresolution analysis to pressure and flow-rate data creates a set of "virtual wells" that still follow the diffusion equation and allow for a simplified analysis. By using production data, the proposed methodology allows for the detection of interference effects without the need for a stabilized pressure field. This allows for a significant cost reduction with no operational overhead, because the detection does not require well shut-ins and can be done regardless of operational opportunities or project objectives. Moreover, the long-term nature of production data makes it possible to detect connectivity even at long distances, and even in the presence of noise and incomplete data.
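A loose sketch of the preprocessing idea, assuming PyWavelets: a shift-invariant (stationary) wavelet transform, a close relative of the MODWT, decomposes the flow-rate history into per-scale bands that then feed a simple regression against pressure. Everything here is synthetic and drastically simplified relative to the paper's workflow.

```python
# Loose sketch: per-scale wavelet features of the flow rate feeding a
# regression of the pressure signal.
import numpy as np
import pywt
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
rate = np.cumsum(rng.normal(size=512))                 # synthetic flow rate
pressure = -0.8 * rate + rng.normal(0, 0.5, size=512)  # toy linear response

# pywt.swt needs a length divisible by 2**level; each level contributes one
# approximation band and one detail band, all the same length as the input.
bands = pywt.swt(rate, "db4", level=3)
features = np.column_stack([c for pair in bands for c in pair])

model = Ridge(alpha=1.0).fit(features, pressure)
print("R^2 on the training signal:", model.score(features, pressure))
```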