Academic literature on the topic 'Smoothed Ordered Weighted L1'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Smoothed Ordered Weighted L1.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Smoothed Ordered Weighted L1"

1

Turgut, Bülent, Merve Ateş, and Halil Akıncı. "Determining the Soil Quality Index in the Batumi Delta, Georgia." Agrociencia 55, no. 1 (February 17, 2021): 1–18. http://dx.doi.org/10.47163/agrociencia.v55i1.2344.

Abstract:
The soil quality index is a quantitative assessment concept used in the evaluation of ecosystem components. Because of their high potential for agriculture and biodiversity, deltas are among the most valuable parts of the ecosystem. This study aimed to determine the soil quality index (SQI) in the Batumi Delta, Georgia. For this purpose, the study area was divided into five plots according to their morphological positions (L1, L2, L3, L4, and L5). A total of 125 soil samples were taken and analyzed for clay content (CC), silt content (SC), sand content (SaC), mean weight diameter (MWD), aggregate stability (AS), amount of water retained under -33 kPa (FC) and -1500 kPa (WP) pressures, and organic matter content (OM). These properties were used as the main criteria, and the Analytic Hierarchy Process (AHP) and Factor Analysis were used for weighting them. Sub-criteria were scored using expert opinion and linear score functions, such as “more is better” and “optimum value”. For determining SQI, the additive method (SQIA), the weighted method with AHP (SQIAHP), and the weighted method with factor analysis (SQIFA) were used. The resulting SQI scores of the three methods were ordered as SQIAHP>SQIA>SQIFA, but these differences were not significant. However, the SQI scores of the plots (p≤0.01) showed statistically significant differences and were ordered as L5>L4>L3>L2>L1.
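The additive and weighted index methods the abstract contrasts can be sketched in a few lines. This is a rough illustration only; the criterion names, scores, and weights below are hypothetical, not values from the study.

```python
# Hypothetical sub-criterion scores (scaled to 0-1) and AHP-style weights.
scores = {"clay": 0.8, "silt": 0.6, "sand": 0.7, "organic_matter": 0.9}
weights = {"clay": 0.30, "silt": 0.20, "sand": 0.15, "organic_matter": 0.35}

# Additive method (SQIA): the unweighted mean of the criterion scores.
sqi_additive = sum(scores.values()) / len(scores)

# Weighted method (e.g. SQIAHP): weights summing to 1 scale each criterion.
sqi_weighted = sum(weights[k] * scores[k] for k in scores)

print(round(sqi_additive, 3))  # 0.75
print(round(sqi_weighted, 3))  # 0.78
```

With a different weighting scheme (e.g. weights from factor analysis instead of AHP), only the `weights` dictionary changes; the aggregation step is identical.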
2

Ballini, R., and R. R. Yager. "Linear Decaying Weights for Time Series Smoothing: An Analysis." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 22, no. 01 (February 2014): 23–40. http://dx.doi.org/10.1142/s0218488514500020.

Abstract:
In this paper, we investigate the use of weighted averaging aggregation operators as techniques for time series smoothing. We analyze the moving average, exponential smoothing methods, and a new class of smoothing operators based on linearly decaying weights from the perspective of ordered weights averaging to estimate a constant model. We examine two important features associated with the smoothing processes: the average age of the data and the expected variance, both defined in terms of the associated weights. We show that there exists a fundamental conflict between keeping the variance small while using the freshest data. We illustrate the flexibility of the smoothing methods with real datasets; that is, we evaluate the aggregation operators with respect to their minimal attainable variance versus average age. We also examine the efficiency of the smoothed models in time series smoothing, considering real datasets. Good smoothing generally depends upon the underlying method's ability to select appropriate weights to satisfy the criteria of both small variance and recent data.
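The two weight diagnostics the abstract defines, average age of the data and expected variance, are simple functions of the weight vector. The sketch below compares uniform moving-average weights with linearly decaying weights; the window size is an arbitrary assumption, and the formulas (age = Σ i·wᵢ with index 0 the newest point, variance factor = Σ wᵢ²) follow the abstract's description, not the paper's exact notation.

```python
n = 5  # assumed window size

# Moving average: uniform weights over the window.
ma = [1.0 / n] * n

# Linearly decaying weights: w_i proportional to (n - i), newest point first.
raw = [n - i for i in range(n)]
lin = [r / sum(raw) for r in raw]

def avg_age(w):
    """Average age of the data under weights w (index 0 = newest)."""
    return sum(i * wi for i, wi in enumerate(w))

def variance_factor(w):
    """Variance of the smoothed estimate, up to the noise variance."""
    return sum(wi * wi for wi in w)

# Uniform weights minimize the variance factor but use the oldest data on
# average; decaying weights trade a little variance for fresher data.
print(avg_age(ma), variance_factor(ma))    # 2.0 0.2
print(avg_age(lin), variance_factor(lin))
```

Running this shows the conflict the paper analyzes: the linear scheme has a smaller average age than the moving average but a larger variance factor, so no scheme minimizes both at once.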
3

Mazza-Anthony, Cody, Bogdan Mazoure, and Mark Coates. "Learning Gaussian Graphical Models with Ordered Weighted L1 Regularization." IEEE Transactions on Signal Processing, 2020, 1. http://dx.doi.org/10.1109/tsp.2020.3038480.


Dissertations / Theses on the topic "Smoothed Ordered Weighted L1"

1

Sankaran, Raman. "Structured Regularization Through Convex Relaxations Of Discrete Penalties." Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5456.

Abstract:
Motivation. Empirical risk minimization (ERM) is a popular framework for learning predictive models from data, and it has been used in domains such as computer vision, text processing, bioinformatics, neurobiology, and temporal point processes, to name a few. We consider cases where one has a priori information regarding the model structure, the simplest being the sparsity of the model. The desired sparsity structure can be imposed on ERM problems using a regularizer, which is typically a norm on the model vector. Popular choices of regularizers include the ℓ1 norm (LASSO), which encourages sparse solutions, and block-ℓ1, which encourages block-level sparsity, among many others that induce more complicated structures. To impose the structural prior, many recent studies have considered combinatorial functions on the model's support as the regularizer. But this leads to an NP-hard problem, which motivates us to consider convex relaxations of the corresponding combinatorial functions, which are tractable. Existing work and research gaps. The convex relaxations of combinatorial functions have been studied recently in the context of structured sparsity, but they still lead to inefficient computational procedures in general: even when the combinatorial function is submodular, whose convex relaxations are well studied and easier to work with than the general ones, the resulting problem is computationally expensive (the proximal operator takes O(d^6) time for d variables). These high costs have limited research interest in such regularizers, even though the submodular functions that generate them are expressive enough to encourage many of the structures mentioned before. Hence it remains open to design efficient optimization procedures for submodular penalties, and for combinatorial penalties in general.
It is also desirable that the optimization algorithms be applicable across the possible choices of loss functions arising from various applications. We identify four such problems from these research gaps and address them through the contributions listed below. We provide the list of publications related to this thesis following this abstract. Contributions. First, we propose a novel kernel learning technique termed Variable Sparsity Kernel Learning (VSKL) for support vector machines (SVMs), applicable when there is a priori information regarding the grouping among the kernels. In such cases, we propose a novel mixed-norm regularizer, which encourages sparse selection of the kernels within a group while selecting all groups. This regularizer can also be viewed as the convex relaxation of a specific discrete penalty on the model's support. The resulting non-smooth optimization problem is difficult, and off-the-shelf techniques are not applicable. We propose a mirror-descent-based optimization algorithm to solve this problem, which has a guaranteed convergence rate of O(1/√T) over T iterations. We demonstrate the efficiency of the proposed algorithm in various scaling experiments and the applicability of the regularizer in an object recognition task. Second, we introduce a family of regularizers termed Smoothed Ordered Weighted L1 (SOWL) norms, which are derived as the convex relaxation of non-decreasing cardinality-based submodular penalties, an important special class of the general discrete penalties. Considering linear regression, where the presence of correlated predictors causes traditional regularizers such as the Lasso to fail to recover the true support, SOWL has the property of selecting correlated predictors as groups. While Ordered Weighted ℓ1 (OWL) norms address similar use cases, they are biased toward promoting undesirable piecewise-constant solutions, a bias SOWL does not share.
SOWL is also shown to be equivalent to the group Lasso, but without specifying the groups explicitly. We develop efficient proximal operators for SOWL and illustrate its computational and theoretical benefits through various simulations. Third, we discuss Hawkes-OWL, an application of OWL regularizers to the setting of multidimensional Hawkes processes. A Hawkes process is a multidimensional point process (a collection of multiple event streams) with self- and mutual influences between the event streams. While the popular ℓ1 regularizer fails to recover the true models in the presence of strongly correlated event streams, OWL regularization addresses this issue and groups the correlated predictors. This is the first instance in the literature where OWL norms, which have predominantly been studied with simple loss functions such as the squared loss, are extended to the Hawkes process with similar theoretical and computational guarantees. In the fourth part, we discuss generic first-order algorithms for learning with subquadratic norms, a special sub-family of convex relaxations of submodular penalties. We consider subquadratic-norm regularization in a very general setting, covering all loss functions, and propose different reformulations of the original problem. The reformulations enable us to propose two different primal-dual algorithms, one CP-based and one ADMM-based, both with a guaranteed convergence rate of O(1/T). This study thus provides the first algorithms with efficient convergence rates for learning with subquadratic norms.
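The OWL norm at the center of this thesis and of several entries on this page has a compact definition: Ω_w(x) = Σᵢ wᵢ |x|₍ᵢ₎, where |x|₍₁₎ ≥ |x|₍₂₎ ≥ … are the entries of x sorted by absolute value and w is a non-negative, non-increasing weight vector. A minimal evaluation sketch, with illustrative values not taken from the thesis:

```python
def owl_norm(x, w):
    """Ordered Weighted L1 norm: pair the i-th largest magnitude of x
    with the i-th largest weight and sum the products. Assumes w is
    non-negative, non-increasing, and at least as long as x."""
    mags = sorted((abs(v) for v in x), reverse=True)
    return sum(wi * mi for wi, mi in zip(w, mags))

x = [3.0, -1.0, 2.0]
w = [1.0, 0.5, 0.25]  # non-increasing weights

# Largest magnitude gets the largest weight: 1*3 + 0.5*2 + 0.25*1 = 4.25
print(owl_norm(x, w))  # 4.25
```

Two familiar special cases fall out of the weight choice: uniform weights reduce OWL to a scaled ℓ1 norm, while linearly decreasing weights give the OSCAR penalty, which is what lets OWL-type norms cluster correlated predictors.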

Conference papers on the topic "Smoothed Ordered Weighted L1"

1

Oswal, Urvashi, and Robert Nowak. "Scalable Sparse Subspace Clustering via Ordered Weighted l1 Regression." In 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2018. http://dx.doi.org/10.1109/allerton.2018.8635965.

2

Wu, Jwo-Yuh, Liang-Chi Huang, Ming-Hsun Yang, Ling-Hua Chang, and Chun-Hung Liu. "Sparse Subspace Clustering With Sequentially Ordered and Weighted L1-Minimization." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803440.

