Follow this link to see other types of publications on the topic: Regularized approaches.

Journal articles on the topic "Regularized approaches"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


See the top 50 journal articles for studies on the topic "Regularized approaches".

Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever it is available in the metadata.

Browse journal articles from a wide variety of scientific fields and compile an accurate bibliography.

1

G.V., Suresh, and Srinivasa Reddy E.V. "Uncertain Data Analysis with Regularized XGBoost". Webology 19, no. 1 (January 20, 2022): 3722–40. http://dx.doi.org/10.14704/web/v19i1/web19245.

Abstract:
Uncertainty is a ubiquitous element in available knowledge about the real world. Data sampling error, obsolete sources, network latency, and transmission error are all factors that contribute to the uncertainty. These kinds of uncertainty have to be handled cautiously, or else the classification results could be unreliable or even erroneous. There are numerous methodologies developed to comprehend and control uncertainty in data. Uncertainty has many faces: inconsistency, imprecision, ambiguity, incompleteness, vagueness, unpredictability, noise, and unreliability. Missing information is inevitable in real-world data sets. While some conventional multiple imputation approaches are well studied and have shown empirical validity, they entail limitations in processing large datasets with complex data structures. In addition, these standard approaches tend to be computationally inefficient for medium and large datasets. In this paper, we propose a scalable multiple imputation framework based on XGBoost, bootstrapping, and regularization. XGBoost, one of the fastest implementations of gradient boosted trees, is able to automatically retain interactions and non-linear relations in a dataset while achieving high computational efficiency with the aid of bootstrapping and regularized methods. In the context of high-dimensional data, this methodology provides less biased estimates and reflects imputation variability more acceptably than previous regression approaches. We validate our adaptive imputation approaches against standard methods on numerical and real data sets and show promising results.
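The imputation scheme sketched in this abstract lends itself to a short illustration. The following is a hedged sketch, not the authors' implementation: it assumes the xgboost and numpy packages, and the function name, loop structure, and hyperparameters are illustrative choices only.

```python
import numpy as np
from xgboost import XGBRegressor  # assumed available

def impute_column_once(X, col, rng):
    """One bootstrap imputation of a single numeric column: resample the
    observed rows, fit a regularized XGBoost model, fill the missing cells."""
    miss = np.isnan(X[:, col])
    obs_idx = np.flatnonzero(~miss)
    features = np.delete(X, col, axis=1)       # XGBoost tolerates NaNs here
    boot = rng.choice(obs_idx, size=obs_idx.size, replace=True)
    model = XGBRegressor(n_estimators=100, max_depth=3, reg_lambda=1.0)
    model.fit(features[boot], X[boot, col])
    X_filled = X.copy()
    X_filled[miss, col] = model.predict(features[miss])
    return X_filled

# Calling this m times with different random states yields m completed
# datasets, i.e., multiple imputation with bootstrap-induced variability.
```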
2

Taniguchi, Michiaki, and Volker Tresp. "Averaging Regularized Estimators". Neural Computation 9, no. 5 (July 1, 1997): 1163–78. http://dx.doi.org/10.1162/neco.1997.9.5.1163.

Abstract:
We compare the performance of averaged regularized estimators. We show that the improvement in performance that can be achieved by averaging depends critically on the degree of regularization which is used in training the individual estimators. We compare four different averaging approaches: simple averaging, bagging, variance-based weighting, and variance-based bagging. In any of the averaging methods, the greatest degree of improvement—if compared to the individual estimators—is achieved if no or only a small degree of regularization is used. Here, variance-based weighting and variance-based bagging are superior to simple averaging or bagging. Our experiments indicate that better performance for both individual estimators and for averaging is achieved in combination with regularization. With increasing degrees of regularization, the two bagging-based approaches (bagging and variance-based bagging) outperform the individual estimators, simple averaging, and variance-based weighting. Bagging and variance-based bagging seem to be the overall best combining methods over a wide range of degrees of regularization.
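As a concrete illustration of one of the four combining rules, the following minimal numpy sketch implements variance-based weighting, with members weighted by the inverse of their estimated error variances; how those variances are estimated (e.g., on held-out data) is left open, and the function is illustrative rather than taken from the paper.

```python
import numpy as np

def variance_weighted_average(predictions, variances):
    """Combine ensemble members with weights proportional to 1/variance.

    predictions: (n_estimators, n_samples) array of member outputs
    variances:   (n_estimators,) estimated error variance of each member
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()                          # normalize weights to sum to one
    return w @ np.asarray(predictions)    # combined prediction per sample
```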
3

Luft, Daniel, and Volker Schulz. "Simultaneous shape and mesh quality optimization using pre-shape calculus". Control and Cybernetics 50, no. 4 (December 1, 2021): 473–520. http://dx.doi.org/10.2478/candc-2021-0028.

Abstract:
Computational meshes arising from shape optimization routines commonly suffer from a decrease of mesh quality or even destruction of the mesh. In this work, we provide an approach to regularize general shape optimization problems to increase both shape and volume mesh quality. For this, we employ pre-shape calculus as established in Luft and Schulz (2021). Existence of regularized solutions is guaranteed. Further, consistency of modified pre-shape gradient systems is established. We present pre-shape gradient system modifications, which permit simultaneous shape optimization with mesh quality improvement. Optimal shapes to the original problem are left invariant under regularization. The computational burden of our approach is limited, since additional solution of possibly larger (non-)linear systems for regularized shape gradients is not necessary. We implement and compare pre-shape gradient regularization approaches for a 2D problem, which is prone to mesh degeneration. As our approach does not depend on the choice of metrics representing shape gradients, we employ and compare several different metrics.
4

Ebadat, Afrooz, Giulio Bottegal, Damiano Varagnolo, Bo Wahlberg, and Karl H. Johansson. "Regularized Deconvolution-Based Approaches for Estimating Room Occupancies". IEEE Transactions on Automation Science and Engineering 12, no. 4 (October 2015): 1157–68. http://dx.doi.org/10.1109/tase.2015.2471305.

5

Feng, Hesen, Lihong Ma, and Jing Tian. "A Dynamic Convolution Kernel Generation Method Based on Regularized Pattern for Image Super-Resolution". Sensors 22, no. 11 (June 1, 2022): 4231. http://dx.doi.org/10.3390/s22114231.

Abstract:
Image super-resolution aims to reconstruct a high-resolution image from its low-resolution counterparts. Conventional image super-resolution approaches share the same spatial convolution kernel for the whole image in the upscaling modules, which neglect the specificity of content information in different positions of the image. In view of this, this paper proposes a regularized pattern method to represent spatially variant structural features in an image and further exploits a dynamic convolution kernel generation method to match the regularized pattern and improve image reconstruction performance. To be more specific, first, the proposed approach extracts features from low-resolution images using a self-organizing feature mapping network to construct regularized patterns (RP), which describe different contents at different locations. Second, the meta-learning mechanism based on the regularized pattern predicts the weights of the convolution kernels that match the regularized pattern for each different location; therefore, it generates different upscaling functions for images with different content. Extensive experiments are conducted using the benchmark datasets Set5, Set14, B100, Urban100, and Manga109 to demonstrate that the proposed approach outperforms the state-of-the-art super-resolution approaches in terms of both PSNR and SSIM performance.
6

Robitzsch, Alexander. "Implementation Aspects in Regularized Structural Equation Models". Algorithms 16, no. 9 (September 18, 2023): 446. http://dx.doi.org/10.3390/a16090446.

Abstract:
This article reviews several implementation aspects in estimating regularized single-group and multiple-group structural equation models (SEM). It is demonstrated that approximate estimation approaches that rely on a differentiable approximation of non-differentiable penalty functions perform similarly to the coordinate descent optimization approach of regularized SEMs. Furthermore, using a fixed regularization parameter can sometimes be superior to an optimal regularization parameter selected by the Bayesian information criterion when it comes to the estimation of structural parameters. Moreover, the widespread penalty functions of regularized SEM implemented in several R packages were compared with the estimation based on a recently proposed penalty function in the Mplus software. Finally, we also investigate the performance of a clever replacement of the optimization function in regularized SEM with a smoothed differentiable approximation of the Bayesian information criterion proposed by O’Neill and Burke in 2023. The findings were derived through two simulation studies and are intended to guide the practical implementation of regularized SEM in future software implementations.
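The "differentiable approximation of non-differentiable penalty functions" mentioned above can be made concrete with a standard smoothing of the lasso penalty; this is one common surrogate, assumed here for illustration and not necessarily the exact one used in the article.

```python
import numpy as np

def smooth_abs(x, eps=1e-4):
    """Differentiable surrogate for |x|; tends to |x| as eps -> 0."""
    return np.sqrt(x**2 + eps)

def smooth_abs_grad(x, eps=1e-4):
    """Gradient of the surrogate; approaches sign(x) away from zero,
    enabling gradient-based optimization of penalized SEM objectives."""
    return x / np.sqrt(x**2 + eps)
```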
7

Robitzsch, Alexander. "Comparing Robust Linking and Regularized Estimation for Linking Two Groups in the 1PL and 2PL Models in the Presence of Sparse Uniform Differential Item Functioning". Stats 6, no. 1 (January 25, 2023): 192–208. http://dx.doi.org/10.3390/stats6010012.

Abstract:
In the social sciences, the performance of two groups is frequently compared based on a cognitive test involving binary items. Item response models are often utilized for comparing the two groups. However, the presence of differential item functioning (DIF) can impact group comparisons. In order to avoid biased estimation of group differences, appropriate statistical methods for handling differential item functioning are required. This article compares the performance of regularized estimation and several robust linking approaches in three simulation studies that address the one-parameter logistic (1PL) and two-parameter logistic (2PL) models, respectively. It turned out that robust linking approaches are at least as effective as the regularized estimation approach in most of the conditions in the simulation studies.
8

Leen, Todd K. "From Data Distributions to Regularization in Invariant Learning". Neural Computation 7, no. 5 (September 1995): 974–81. http://dx.doi.org/10.1162/neco.1995.7.5.974.

Abstract:
Ideally pattern recognition machines provide constant output when the inputs are transformed under a group G of desired invariances. These invariances can be achieved by enhancing the training data to include examples of inputs transformed by elements of G, while leaving the corresponding targets unchanged. Alternatively the cost function for training can include a regularization term that penalizes changes in the output when the input is transformed under the group. This paper relates the two approaches, showing precisely the sense in which the regularized cost function approximates the result of adding transformed examples to the training data. We introduce the notion of a probability distribution over the group transformations, and use this to rewrite the cost function for the enhanced training data. Under certain conditions, the new cost function is equivalent to the sum of the original cost function plus a regularizer. For unbiased models, the regularizer reduces to the intuitively obvious choice—a term that penalizes changes in the output when the inputs are transformed under the group. For infinitesimal transformations, the coefficient of the regularization term reduces to the variance of the distortions introduced into the training data. This correspondence provides a simple bridge between the two approaches.
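The regularization term related here to data augmentation can be sketched directly: penalize the change in the model's output when inputs are transformed under the group. The model f and transform T below are placeholders for illustration, not objects from the paper.

```python
import numpy as np

def invariance_penalty(f, T, X, lam=0.1):
    """lam times the mean squared change of f's output under transform T.

    f: callable mapping a batch of inputs to outputs
    T: callable applying one group transformation to the batch
    """
    diff = f(T(X)) - f(X)
    return lam * np.mean(diff**2)
```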
9

Feng, Huijie, Chunpeng Wu, Guoyang Chen, Weifeng Zhang, and Yang Ning. "Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3858–65. http://dx.doi.org/10.1609/aaai.v34i04.5798.

Abstract:
Recently, smoothing deep neural network based classifiers via isotropic Gaussian perturbation has been shown to be an effective and scalable way to provide a state-of-the-art probabilistic robustness guarantee against ℓ2 norm bounded adversarial perturbations. However, how to train a good base classifier that is accurate and robust when smoothed has not been fully investigated. In this work, we derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart when training the base classifier. It is computationally efficient and can be implemented in parallel with other empirical defense methods. We discuss how to implement it under both standard (non-adversarial) and adversarial training schemes. At the same time, we also design a new certification algorithm, which can leverage the regularization effect to provide a tighter robustness lower bound that holds with high probability. Our extensive experimentation demonstrates the effectiveness of the proposed training and certification approaches on the CIFAR-10 and ImageNet datasets.
10

Zhang, Hong, Dong Lai Hao, and Xiang Yang Liu. "A Precoding Strategy for Massive MIMO System". Applied Mechanics and Materials 568-570 (June 2014): 1278–81. http://dx.doi.org/10.4028/www.scientific.net/amm.568-570.1278.

Abstract:
The computational complexity of precoding increases with the system dimensions in a massive multiple-input multiple-output system. A precoding scheme based on the truncated polynomial expansion is proposed, and its hardware implementation is described to show the superiority of the algorithm over conventional regularized zero-forcing precoding. Finally, under different channel conditions, the simulation results show that, as the order increases, the average achievable rate approaches that of regularized zero-forcing precoding, and the polynomial order does not need to scale with the system dimensions.
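For orientation, a hedged numpy sketch of the contrast drawn in the abstract: exact regularized zero-forcing (RZF) precoding versus a truncated polynomial approximation of the matrix inverse. The simple Neumann-series variant below stands in for the paper's scheme; its optimized coefficients are not reproduced.

```python
import numpy as np

def rzf_precoder(H, alpha):
    """W = H^H (H H^H + alpha I)^(-1), for H of shape (users, antennas)."""
    K = H.shape[0]
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))

def tpe_precoder(H, alpha, order):
    """Approximate the RZF inverse with a truncated polynomial:
    A^(-1) ~ (1/beta) * sum_{l=0..order} (I - A/beta)^l."""
    K = H.shape[0]
    A = H @ H.conj().T + alpha * np.eye(K)
    beta = np.trace(A).real          # upper bound on the largest eigenvalue
    M = np.eye(K) - A / beta
    S = np.eye(K)
    term = np.eye(K)
    for _ in range(order):
        term = term @ M              # accumulate the next polynomial term
        S = S + term
    return H.conj().T @ (S / beta)
```

As the order grows, tpe_precoder approaches rzf_precoder, mirroring the convergence behavior reported in the abstract.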
11

Kanzawa, Yuchi. "Entropy-Regularized Fuzzy Clustering for Non-Euclidean Relational Data and Indefinite Kernel Data". Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 7 (November 20, 2012): 784–92. http://dx.doi.org/10.20965/jaciii.2012.p0784.

Abstract:
In this paper, an entropy-regularized fuzzy clustering approach for non-Euclidean relational data and indefinite kernel data is developed that has not previously been discussed. It is important because relational data and kernel data are not always Euclidean and positive semi-definite, respectively. It is theoretically determined that an entropy-regularized approach for both non-Euclidean relational data and indefinite kernel data can be applied without using a β-spread transformation, and that two other options make the clustering results crisp for both data types. These results are in contrast to those from the standard approach. Numerical experiments are employed to verify the theoretical results, and the clustering accuracy of three entropy-regularized approaches for non-Euclidean relational data, and three for indefinite kernel data, is compared.
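For orientation, one alternating update of entropy-regularized fuzzy c-means on plain vectorial data is sketched below; the relational and kernel variants analyzed in the paper build on the same membership rule. The parameter lam is the entropy regularization weight, and the function is illustrative only.

```python
import numpy as np

def entropy_regularized_fcm_step(X, centers, lam):
    """One update of memberships U and cluster centers."""
    # Squared Euclidean distances of each point to each center.
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=-1)
    # Entropy regularization turns the membership update into a softmax.
    U = np.exp(-d2 / lam)
    U /= U.sum(axis=1, keepdims=True)
    # Centers become membership-weighted means of the data.
    centers_new = (U.T @ X) / U.sum(axis=0)[:, None]
    return U, centers_new
```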
12

van Erp, Sara. "Bayesian Regularized SEM: Current Capabilities and Constraints". Psych 5, no. 3 (August 3, 2023): 814–35. http://dx.doi.org/10.3390/psych5030054.

Abstract:
An important challenge in statistical modeling is to balance how well our model explains the phenomenon under investigation with the parsimony of this explanation. In structural equation modeling (SEM), penalization approaches that add a penalty term to the estimation procedure have been proposed to achieve this balance. An alternative to the classical penalization approach is Bayesian regularized SEM in which the prior distribution serves as the penalty function. Many different shrinkage priors exist, enabling great flexibility in terms of shrinkage behavior. As a result, different types of shrinkage priors have been proposed for use in a wide variety of SEMs. However, the lack of a general framework and the technical details of these shrinkage methods can make it difficult for researchers outside the field of (Bayesian) regularized SEM to understand and apply these methods in their own work. Therefore, the aim of this paper is to provide an overview of Bayesian regularized SEM, with a focus on the types of SEMs in which Bayesian regularization has been applied as well as available software implementations. Through an empirical example, various open-source software packages for (Bayesian) regularized SEM are illustrated and all code is made available online to aid researchers in applying these methods. Finally, reviewing the current capabilities and constraints of Bayesian regularized SEM identifies several directions for future research.
13

Koné, N’Golo. "Regularized Maximum Diversification Investment Strategy". Econometrics 9, no. 1 (December 29, 2020): 1. http://dx.doi.org/10.3390/econometrics9010001.

Abstract:
The maximum diversification portfolio has been shown in the literature to depend on the vector of asset volatilities and the inverse of the covariance matrix of asset returns. In practice, these two quantities need to be replaced by their sample statistics. The estimation error associated with the use of these sample statistics may be amplified due to (near) singularity of the covariance matrix in financial markets with many assets. This, in turn, may lead to the selection of portfolios that are far from optimal with respect to standard portfolio performance measures of the financial market. To address this problem, we investigate three regularization techniques, namely the ridge, the spectral cut-off, and the Landweber–Fridman approaches, in order to stabilize the inverse of the covariance matrix. These regularization schemes involve a tuning parameter that needs to be chosen. In light of this fact, we propose a data-driven method for selecting the tuning parameter. We show that the selected portfolio by regularization is asymptotically efficient with respect to the diversification ratio. In empirical and Monte Carlo experiments, the resulting regularized rules are compared to several strategies, such as the most diversified portfolio, the target portfolio, the global minimum variance portfolio, and the naive 1/N strategy in terms of in-sample and out-of-sample Sharpe ratio performance, and it is shown that our method yields significant Sharpe ratio improvements.
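Of the three schemes, the ridge one is the easiest to sketch: the unregularized maximum-diversification weights are proportional to the covariance inverse applied to the volatility vector, and ridge regularization stabilizes that inverse. The data-driven choice of the tuning parameter and the spectral cut-off and Landweber–Fridman schemes are not shown.

```python
import numpy as np

def ridge_max_diversification_weights(Sigma, lam):
    """Maximum-diversification weights with a ridge-stabilized inverse.

    Sigma: sample covariance matrix of asset returns
    lam:   ridge tuning parameter (chosen data-driven in the paper)
    """
    vol = np.sqrt(np.diag(Sigma))                         # asset volatilities
    w = np.linalg.solve(Sigma + lam * np.eye(len(vol)), vol)
    return w / w.sum()                                    # full-investment scaling
```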
14

Chahboun, Souhaila, and Mohamed Maaroufi. "Principal Component Analysis and Machine Learning Approaches for Photovoltaic Power Prediction: A Comparative Study". Applied Sciences 11, no. 17 (August 27, 2021): 7943. http://dx.doi.org/10.3390/app11177943.

Abstract:
Nowadays, in the context of the industrial revolution 4.0, considerable volumes of data are being generated continuously from intelligent sensors and connected objects. The proper understanding and use of these amounts of data are crucial levers of performance and innovation. Machine learning is the technology that allows the full potential of big datasets to be exploited. As a branch of artificial intelligence, it enables us to discover patterns and make predictions from data based on statistics, data mining, and predictive analysis. The key goal of this study was to use machine learning approaches to forecast the hourly power produced by photovoltaic panels. A comparison analysis of various predictive models including elastic net, support vector regression, random forest, and Bayesian regularized neural networks was carried out to identify the models providing the best predicting results. The principal components analysis used to reduce the dimensionality of the input data revealed six main factor components that could explain up to 91.95% of the variation in all variables. Finally, performance metrics demonstrated that Bayesian regularized neural networks achieved the best results, giving an accuracy of R2 = 99.99% and RMSE = 0.002 kW.
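The dimensionality-reduction step reported above (components explaining roughly 92% of the variance) maps directly onto scikit-learn; in this sketch a random matrix merely stands in for the photovoltaic sensor data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))              # placeholder for the sensor inputs
X_std = StandardScaler().fit_transform(X)   # PCA expects standardized data
pca = PCA(n_components=0.92)                # keep ~92% of the total variance
X_reduced = pca.fit_transform(X_std)
print(pca.n_components_, pca.explained_variance_ratio_.sum())
```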
15

Lao, Qicheng, Xiang Jiang, and Mohammad Havaei. "Hypothesis Disparity Regularized Mutual Information Maximization". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8243–51. http://dx.doi.org/10.1609/aaai.v35i9.17003.

Abstract:
We propose a hypothesis disparity regularized mutual information maximization (HDMI) approach to tackle unsupervised hypothesis transfer---as an effort towards unifying hypothesis transfer learning (HTL) and unsupervised domain adaptation (UDA)---where the knowledge from a source domain is transferred solely through hypotheses and adapted to the target domain in an unsupervised manner. In contrast to the prevalent HTL and UDA approaches that typically use a single hypothesis, HDMI employs multiple hypotheses to leverage the underlying distributions of the source and target hypotheses. To better utilize the crucial relationship among different hypotheses---as opposed to unconstrained optimization of each hypothesis independently---while adapting to the unlabeled target domain through mutual information maximization, HDMI incorporates a hypothesis disparity regularization that coordinates the target hypotheses to jointly learn better target representations while preserving more transferable source knowledge with better-calibrated prediction uncertainty. HDMI achieves state-of-the-art adaptation performance on benchmark datasets for UDA in the context of HTL, without the need to access the source data during adaptation.
16

Herrera, Roberto H., Sergey Fomel, and Mirko van der Baan. "Automatic approaches for seismic to well tying". Interpretation 2, no. 2 (May 1, 2014): SD9–SD17. http://dx.doi.org/10.1190/int-2013-0130.1.

Abstract:
Tying the synthetic trace to the actual seismic trace at the well location is a labor-intensive task that relies on the interpreter’s experience and the similarity metric used. The traditional seismic to well tie suffers from subjectivity by visually matching major events and using global crosscorrelation to measure the quality of that tying. We compared two automatic techniques that will decrease the subjectivity in the entire process. First, we evaluated the dynamic time warping method, and then, we used the local similarity attribute based on regularized shaping filters. These two methods produced a guided stretching and squeezing process to find the best match between the two signals. We explored the proposed methods using real well log examples and compared to the manual method, showing promising results with both semiautomatic approaches.
17

Schmid, Matthias, Olaf Gefeller, Elisabeth Waldmann, Andreas Mayr, and Tobias Hepp. "Approaches to Regularized Regression – A Comparison between Gradient Boosting and the Lasso". Methods of Information in Medicine 55, no. 05 (May 2016): 422–30. http://dx.doi.org/10.3414/me16-01-0033.

Abstract:
Background: Penalization and regularization techniques for statistical modeling have attracted increasing attention in biomedical research due to their advantages in the presence of high-dimensional data. A special focus lies on algorithms that incorporate automatic variable selection, like the least absolute shrinkage and selection operator (lasso) or statistical boosting techniques. Objectives: Focusing on the linear regression framework, this article compares the two most common techniques for this task, the lasso and gradient boosting, both from a methodological and a practical perspective. Methods: We describe these methods, highlighting under which circumstances their results will coincide in low-dimensional settings. In addition, we carry out extensive simulation studies comparing the performance in settings with more predictors than observations and investigate multiple combinations of noise-to-signal ratio and number of true non-zero coefficients. Finally, we examine the impact of different tuning methods on the results. Results: Both methods carry out penalization and variable selection for possibly high-dimensional data, often resulting in very similar models. An advantage of the lasso is its faster run-time; a strength of the boosting concept is its modular nature, making it easy to extend to other regression settings. Conclusions: Although following different strategies with respect to optimization and regularization, both methods imply similar constraints on the estimation problem, leading to comparable performance regarding prediction accuracy and variable selection in practice.
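A hedged scikit-learn sketch of this comparison in a p > n setting follows; note that the library's gradient boosting is tree-based rather than the componentwise linear boosting studied in the article, so the sketch only mirrors the experimental setup.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor

# More predictors than observations, few true non-zero coefficients.
X, y = make_regression(n_samples=100, n_features=200, n_informative=10,
                       noise=5.0, random_state=0)
lasso = LassoCV(cv=5).fit(X, y)                        # penalty tuned by CV
boost = GradientBoostingRegressor(n_estimators=300,
                                  learning_rate=0.1).fit(X, y)
print("lasso selected", int((lasso.coef_ != 0).sum()), "predictors")
```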
18

Iosifidis, Alexandros, Anastasios Tefas, and Ioannis Pitas. "Human Action Recognition Based on Multi-View Regularized Extreme Learning Machine". International Journal on Artificial Intelligence Tools 24, no. 05 (October 2015): 1540020. http://dx.doi.org/10.1142/s0218213015400205.

Abstract:
In this paper, we employ multiple Single-hidden Layer Feedforward Neural Networks for multi-view action recognition. We propose an extension of the Extreme Learning Machine algorithm that is able to exploit multiple action representations and scatter information in the corresponding ELM spaces for the calculation of the networks’ parameters and the determination of optimized network combination weights. The proposed algorithm is evaluated by using two state-of-the-art action video representation approaches on five publicly available action recognition databases designed for different application scenarios. Experimental comparison of the proposed approach with three commonly used video representation combination approaches and related classification schemes illustrates that ELM networks employing a supervised view combination scheme generally outperform those exploiting unsupervised combination approaches, as well as that the exploitation of scatter information in ELM-based neural network training enhances the network’s performance.
19

Guo, Zheng-Chu, and Yiming Ying. "Guaranteed Classification via Regularized Similarity Learning". Neural Computation 26, no. 3 (March 2014): 497–522. http://dx.doi.org/10.1162/neco_a_00556.

Abstract:
Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Although many approaches to similarity metric learning have been proposed, there has been little theoretical study of the links between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose a regularized similarity learning formulation associated with general matrix norms and establish their generalization bounds. We show that the generalization error of the resulting linear classifier can be bounded by the derived generalization bound of similarity learning. This shows that a good generalization of the learned similarity function guarantees a good classification of the resulting linear classifier. Our results extend and improve those obtained by Bellet, Habrard, and Sebban (2012). Due to the techniques dependent on the notion of uniform stability (Bousquet & Elisseeff, 2002), the bound obtained there holds true only for the Frobenius matrix-norm regularization. Our techniques using the Rademacher complexity (Bartlett & Mendelson, 2002) and its related Khinchin-type inequality enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including the sparse L1-norm and mixed (2,1)-norm.
20

PANTOJA, N. R., and H. RAGO. "DISTRIBUTIONAL SOURCES IN GENERAL RELATIVITY: TWO POINT-LIKE EXAMPLES REVISITED". International Journal of Modern Physics D 11, no. 09 (October 2002): 1479–99. http://dx.doi.org/10.1142/s021827180200213x.

Abstract:
A regularization procedure that allows one to relate singularities of curvature to those of the Einstein tensor without some of the shortcomings of previous approaches is proposed. This regularization is obtained by requiring that (i) the density [Formula: see text], associated with the Einstein tensor [Formula: see text] of the regularized metric, rather than the Einstein tensor itself, be a distribution and (ii) the regularized metric be a continuous metric with a discontinuous extrinsic curvature across a non-null hypersurface of codimension one. In this paper, the curvature and Einstein tensors of the geometries associated with point sources in (2 + 1)-dimensional gravity and the Schwarzschild spacetime are considered. In both examples the regularized metrics are continuous regular metrics, as defined by Geroch and Traschen, with well defined distributional curvature tensors at all the intermediate steps of the calculation. The limit in which the support of these curvature tensors tends to the singular region of the original spacetime is studied and the results are contrasted with the ones obtained in previous works.
21

Stevens, Abby, Rebecca Willett, Antonios Mamalakis, Efi Foufoula-Georgiou, Alejandro Tejedor, James T. Randerson, Padhraic Smyth, and Stephen Wright. "Graph-Guided Regularized Regression of Pacific Ocean Climate Variables to Increase Predictive Skill of Southwestern U.S. Winter Precipitation". Journal of Climate 34, no. 2 (January 2021): 737–54. http://dx.doi.org/10.1175/jcli-d-20-0079.1.

Abstract:
Understanding the physical drivers of seasonal hydroclimatic variability and improving predictive skill remains a challenge with important socioeconomic and environmental implications for many regions around the world. Physics-based deterministic models show limited ability to predict precipitation as the lead time increases, due to imperfect representation of physical processes and incomplete knowledge of initial conditions. Similarly, statistical methods drawing upon established climate teleconnections have low prediction skill due to the complex nature of the climate system. Recently, promising data-driven approaches have been proposed, but they often suffer from overparameterization and overfitting due to the short observational record, and they often do not account for spatiotemporal dependencies among covariates (i.e., predictors such as sea surface temperatures). This study addresses these challenges via a predictive model based on a graph-guided regularizer that simultaneously promotes similarity of predictive weights for highly correlated covariates and enforces sparsity in the covariate domain. This approach both decreases the effective dimensionality of the problem and identifies the most predictive features without specifying them a priori. We use large ensemble simulations from a climate model to construct this regularizer, reducing the structural uncertainty in the estimation. We apply the learned model to predict winter precipitation in the southwestern United States using sea surface temperatures over the entire Pacific basin, and demonstrate its superiority compared to other regularization approaches and statistical models informed by known teleconnections. Our results highlight the potential to combine optimally the space–time structure of predictor variables learned from climate models with new graph-based regularizers to improve seasonal prediction.
22

Ward, Eric J., Kristin Marshall, and Mark D. Scheuerell. "Regularizing priors for Bayesian VAR applications to large ecological datasets". PeerJ 10 (November 8, 2022): e14332. http://dx.doi.org/10.7717/peerj.14332.

Abstract:
Using multi-species time series data has long been of interest for estimating inter-specific interactions with vector autoregressive models (VAR) and state space VAR models (VARSS); these methods are also described in the ecological literature as multivariate autoregressive models (MAR, MARSS). To date, most studies have used these approaches on relatively small food webs where the total number of interactions to be estimated is relatively small. However, as the number of species or functional groups increases, the length of the time series must also increase to provide enough degrees of freedom with which to estimate the pairwise interactions. To address this issue, we use Bayesian methods to explore the potential benefits of using regularized priors, such as the Laplace and regularized horseshoe, on estimating interspecific interactions with VAR and VARSS models. We first perform a large-scale simulation study, examining the performance of alternative priors across various levels of observation error. Results from these simulations show that for sparse matrices, the regularized horseshoe prior minimizes the bias and variance across all inter-specific interactions. We then apply the Bayesian VAR model with regularized priors to output from a large marine food web model (37 species) from the west coast of the USA. Results from this analysis indicate that regularization improves predictive performance of the VAR model, while still identifying important inter-specific interactions.
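Fitting the horseshoe itself requires a full Bayesian sampler and is not reproduced here; as a loose frequentist analogue of the shrinkage these priors induce (a Gaussian prior corresponds to a ridge penalty), a closed-form penalized VAR(1) fit can be sketched as follows.

```python
import numpy as np

def ridge_var1(Y, lam):
    """Shrunk interaction matrix B in y_t = B y_{t-1} + e_t.

    Y:   (time, species) matrix of (log) abundances
    lam: shrinkage strength; larger values pull interactions toward zero
    """
    X, Z = Y[:-1], Y[1:]                    # lagged and current states
    p = X.shape[1]
    coef = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Z)  # equals B^T
    return coef.T
```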
23

Khattab, Mahmoud M., Akram M. Zeki, Ali A. Alwan, Belgacem Bouallegue, Safaa S. Matter, and Abdelmoty M. Ahmed. "Regularized Multiframe Super-Resolution Image Reconstruction Using Linear and Nonlinear Filters". Journal of Electrical and Computer Engineering 2021 (December 18, 2021): 1–16. http://dx.doi.org/10.1155/2021/8309910.

Abstract:
The primary goal of the multiframe super-resolution image reconstruction is to produce an image with a higher resolution by integrating information extracted from a set of corresponding images with low resolution, which is used in various fields. However, super-resolution image reconstruction approaches are typically affected by annoying restorative artifacts, including blurring, noise, and staircasing effect. Accordingly, it is always difficult to balance between smoothness and edge preservation. In this paper, we intend to enhance the efficiency of multiframe super-resolution image reconstruction in order to optimize both analysis and human interpretation processes by improving the pictorial information and enhancing the automatic machine perception. As a result, we propose new approaches that firstly rely on estimating the initial high-resolution image through preprocessing of the reference low-resolution image based on median, mean, Lucy-Richardson, and Wiener filters. This preprocessing stage is used to overcome the degradation present in the reference low-resolution image, which is a suitable kernel for producing the initial high-resolution image to be used in the reconstruction phase of the final image. Then, L2 norm is employed for the data-fidelity term to minimize the residual among the predicted high-resolution image and the observed low-resolution images. Finally, bilateral total variation prior model is utilized to restrict the minimization function to a stable state of the generated HR image. The experimental results of the synthetic data indicate that the proposed approaches have enhanced efficiency visually and quantitatively compared to other existing approaches.
24

Ahrens, Achim, Christian B. Hansen, and Mark E. Schaffer. "lassopack: Model selection and prediction with regularized regression in Stata". Stata Journal: Promoting communications on statistics and Stata 20, no. 1 (March 2020): 176–235. http://dx.doi.org/10.1177/1536867x20909697.

Abstract:
In this article, we introduce lassopack, a suite of programs for regularized regression in Stata. lassopack implements lasso, square-root lasso, elastic net, ridge regression, adaptive lasso, and postestimation ordinary least squares. The methods are suitable for the high-dimensional setting, where the number of predictors p may be large and possibly greater than the number of observations, n. We offer three approaches for selecting the penalization (“tuning”) parameters: information criteria (implemented in lasso2), K-fold cross-validation and h-step-ahead rolling cross-validation for cross-section, panel, and time-series data (cvlasso), and theory-driven (“rigorous” or plugin) penalization for the lasso and square-root lasso for cross-section and panel data (rlasso). We discuss the theoretical framework and practical considerations for each approach. We also present Monte Carlo results to compare the performances of the penalization approaches.
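For readers working outside Stata, the first two tuning routes have close scikit-learn analogues; the sketch below pairs information-criterion selection (cf. lasso2) with K-fold cross-validation (cf. cvlasso), while the rigorous/plugin penalization of rlasso has no direct counterpart here.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoLarsIC, LassoCV

X, y = make_regression(n_samples=150, n_features=50, n_informative=8,
                       noise=3.0, random_state=1)
ic_fit = LassoLarsIC(criterion='bic').fit(X, y)   # BIC-selected penalty
cv_fit = LassoCV(cv=10).fit(X, y)                 # K-fold cross-validation
print(ic_fit.alpha_, cv_fit.alpha_)               # the two chosen penalties
```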
25

Jain, Subit K., Deepak Kumar, Manoj Thakur, and Rajendra K. Ray. "Proximal Support Vector Machine-Based Hybrid Approach for Edge Detection in Noisy Images". Journal of Intelligent Systems 29, no. 1 (March 27, 2019): 1315–28. http://dx.doi.org/10.1515/jisys-2017-0566.

Abstract:
We propose a novel edge detector in the presence of Gaussian noise with the use of a proximal support vector machine (PSVM). The edges of a noisy image are detected using a two-stage architecture: smoothing of the image is first performed using regularized anisotropic diffusion, followed by classification using PSVM, termed the regularized anisotropic diffusion-based PSVM (RAD-PSVM) method. In this process, a feature vector is formed for a pixel using the denoised coefficient’s class and the local orientations to detect edges in all possible directions in images. From the experiments, conducted on both synthetic and benchmark images, it is observed that our RAD-PSVM approach outperforms the other state-of-the-art edge detection approaches, both qualitatively and quantitatively.
26

Park, Minsu, Tae-Hun Kim, Eun-Seok Cho, Heebal Kim, and Hee-Seok Oh. "Genomic Selection for Adjacent Genetic Markers of Yorkshire Pigs Using Regularized Regression Approaches". Asian-Australasian Journal of Animal Sciences 27, no. 12 (October 16, 2014): 1678–83. http://dx.doi.org/10.5713/ajas.2014.14236.

27

Pavan Kumar Varma Kothapalli, et al. "A Linear Regularized Normalized Model for Dyslexia and ADHD Prediction Using Learning Approaches". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 11 (December 31, 2023): 560–71. http://dx.doi.org/10.17762/ijritcc.v11i11.9994.

Abstract:
A learning disability called dyslexia typically affects school-age children, who have trouble spelling, reading, and writing words. Children who experience this problem often struggle with negative emotions, anger, frustration, and low self-esteem. Consequently, a dyslexia prediction system is required to assist children in overcoming the risk. There are many current ways of predicting dyslexia; however, they fail to provide sufficiently high prediction accuracy. This work also concentrates on another disorder, known as Attention-Deficit Hyperactivity Disorder (ADHD), whose prediction is more challenging and can show negative consequences. The data are typically gathered from online resources for the prediction process. This study examines how the predictor model predicts dyslexia and ADHD using learning strategies. Here, the most important features for accurately classifying dyslexia, non-dyslexia, and ADHD are extracted using a new Support Vector Machine (SVM) for norm-based feature selection. Based on the weighted values, the predictive model provides improved subset features from the internet-accessible dataset. Accuracy, precision, F1-score, specificity, sensitivity, and execution time are all examined here using 10-fold cross-validation. The maximum accuracy reached with this feature subset during the prediction process is carefully reviewed. The experimental results imply that the proposed model accurately predicts the disorders and can serve as a tool for a CDSS. Recently, dyslexia and ADHD prediction has been greatly aided by computer-based predictor systems. The proposed model also effectively fits the experimental design, bridging the gap between feature selection and classification.
28

Bröcker, Jochen. "Regularized Logistic Models for Probabilistic Forecasting and Diagnostics". Monthly Weather Review 138, no. 2 (February 1, 2010): 592–604. http://dx.doi.org/10.1175/2009mwr3126.1.

Abstract:
Logistic models are studied as a tool to convert dynamical forecast information (deterministic and ensemble) into probability forecasts. A logistic model is obtained by setting the logarithmic odds ratio equal to a linear combination of the inputs. As with any statistical model, logistic models will suffer from overfitting if the number of inputs is comparable to the number of forecast instances. Computational approaches to avoid overfitting by regularization are discussed, and efficient techniques for model assessment and selection are presented. A logit version of the lasso (originally a linear regression technique) is discussed. In lasso models, less important inputs are identified and the corresponding coefficient is set to zero, providing an efficient and automatic model reduction procedure. For the same reason, lasso models are particularly appealing for diagnostic purposes.
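A minimal lasso-logit sketch with scikit-learn illustrates the model-reduction property described above; the synthetic inputs merely stand in for deterministic and ensemble forecast quantities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # stand-ins for forecast inputs
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# The L1 penalty drives coefficients of unimportant inputs to exactly zero,
# the automatic model-reduction behavior discussed in the abstract.
model = LogisticRegression(penalty='l1', solver='liblinear', C=0.1).fit(X, y)
prob = model.predict_proba(X)[:, 1]     # probability forecasts
print(int((model.coef_ != 0).sum()), "inputs retained")
```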
29

Anastasiadis, Johannes, and Michael Heizmann. "GAN-regularized augmentation strategy for spectral datasets". tm - Technisches Messen 89, no. 4 (February 5, 2022): 278–88. http://dx.doi.org/10.1515/teme-2021-0109.

Abstract:
Artificial neural networks are used in various fields, including spectral unmixing, which is used to determine the proportions of substances involved in a mixture, and achieve promising results. This is especially true if there is a non-linear relationship between the spectra of mixtures and the spectra of the substances involved (pure spectra). To achieve sufficient results, neural networks need lots of representative training data. We present a method that extends existing training data for spectral unmixing consisting of spectra of mixtures by learning the mixing characteristic using an artificial neural network. Spectral variability is considered by random inputs. The network structure used is a generative adversarial net that takes the dependence on the abundances of pure substances into account by an additional term in its objective function, which is minimized during training. After training, further data for abundance vectors for which there is no real measurement data in the original training dataset can be generated. A neural network trained with the augmented training dataset shows better performance in spectral unmixing compared to being trained with the original dataset. The presented network structure improves already existing results obtained with a generative convolutional neural network, which is superior to model-based approaches.
30

Thibault, Alexis, Lénaïc Chizat, Charles Dossal, and Nicolas Papadakis. "Overrelaxed Sinkhorn–Knopp Algorithm for Regularized Optimal Transport". Algorithms 14, no. 5 (April 30, 2021): 143. http://dx.doi.org/10.3390/a14050143.

Abstract:
This article describes a set of methods for quickly computing the solution to the regularized optimal transport problem. It generalizes and improves upon the widely used iterative Bregman projections algorithm (or Sinkhorn–Knopp algorithm). We first proposed to rely on regularized nonlinear acceleration schemes. In practice, such approaches lead to fast algorithms, but their global convergence is not ensured. Hence, we next proposed a new algorithm with convergence guarantees. The idea is to overrelax the Bregman projection operators, allowing for faster convergence. We proposed a simple method for establishing global convergence by ensuring the decrease of a Lyapunov function at each step. An adaptive choice of the overrelaxation parameter based on the Lyapunov function was constructed. We also suggested a heuristic to choose a suitable asymptotic overrelaxation parameter, based on a local convergence analysis. Our numerical experiments showed a gain in convergence speed by an order of magnitude in certain regimes.
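A minimal numpy sketch of the overrelaxed scaling updates is given below: omega = 1 recovers the classical Sinkhorn–Knopp iteration, values in (1, 2) overrelax it, and the paper's adaptive, Lyapunov-guarded choice of omega is not reproduced.

```python
import numpy as np

def overrelaxed_sinkhorn(a, b, C, eps, omega=1.5, iters=500):
    """Entropically regularized OT between histograms a and b, cost C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = u**(1 - omega) * (a / (K @ v))**omega      # overrelaxed update
        v = v**(1 - omega) * (b / (K.T @ u))**omega
    return u[:, None] * K * v[None, :]   # approximate transport plan
```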
31

Authier, Matthieu, Anders Galatius, Anita Gilles, and Jérôme Spitz. "Of power and despair in cetacean conservation: estimation and detection of trend in abundance with noisy and short time-series". PeerJ 8 (August 7, 2020): e9436. http://dx.doi.org/10.7717/peerj.9436.

Abstract:
Many conservation instruments rely on detecting and estimating a population decline in a target species to take action. Trend estimation is difficult because of small sample size and relatively large uncertainty in abundance/density estimates of many wild populations of animals. Focusing on cetaceans, we performed a prospective analysis to estimate power, type-I, sign (type-S), and magnitude (type-M) error rates of detecting a decline in short time-series of abundance estimates with different signal-to-noise ratios. We contrasted results from both unregularized (classical) and regularized approaches. The latter allows prior information to be incorporated when estimating a trend. Power to detect a statistically significant estimate was in general lower than 80%, except for large declines. The unregularized approach (status quo) had inflated type-I error rates and gave biased (either over- or under-) estimates of a trend. The regularized approach with a weakly-informative prior offered the best trade-off in terms of bias, statistical power, type-I, type-S and type-M error rates, and confidence interval coverage. To facilitate timely conservation decisions, we recommend using the regularized approach with a weakly-informative prior for detecting and estimating trends in short and noisy time-series of abundance estimates.
32

Chen, Jiangjie, Qiaoben Bao, Changzhi Sun, Xinbo Zhang, Jiaze Chen, Hao Zhou, Yanghua Xiao, and Lei Li. "LOREN: Logic-Regularized Reasoning for Interpretable Fact Verification". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10482–91. http://dx.doi.org/10.1609/aaai.v36i10.21291.

Abstract:
Given a natural language statement, how to verify its veracity against a large-scale textual knowledge source like Wikipedia? Most existing neural models make predictions without giving clues about which part of a false claim goes wrong. In this paper, we propose LOREN, an approach for interpretable fact verification. We decompose the verification of the whole claim at phrase-level, where the veracity of the phrases serves as explanations and can be aggregated into the final verdict according to logical rules. The key insight of LOREN is to represent claim phrase veracity as three-valued latent variables, which are regularized by aggregation logical rules. The final claim verification is based on all latent variables. Thus, LOREN enjoys the additional benefit of interpretability --- it is easy to explain how it reaches certain results with claim phrase veracity. Experiments on a public fact verification benchmark show that LOREN is competitive against previous approaches while enjoying the merit of faithful and accurate interpretability. The resources of LOREN are available at: https://github.com/jiangjiechen/LOREN.
33

Wang, Bingyuan, Yao Zhang, Dongyuan Liu, Xuemei Ding, Mai Dan, Tiantian Pan, Huijuan Zhao, and Feng Gao. "Sparsity-regularized approaches to directly reconstructing hemodynamic response in brain functional diffuse optical tomography". Applied Optics 58, no. 4 (January 25, 2019): 863. http://dx.doi.org/10.1364/ao.58.000863.

34

Hepp, Tobias, Matthias Schmid, Olaf Gefeller, Elisabeth Waldmann, and Andreas Mayr. "Addendum to: Approaches to Regularized Regression – A Comparison between Gradient Boosting and the Lasso". Methods of Information in Medicine 58, no. 01 (January 11, 2019): 060. http://dx.doi.org/10.1055/s-0038-1669389.

35

Voronin, Sergey, Dylan Mikesell, and Guust Nolet. "Compression approaches for the regularized solutions of linear systems from large-scale inverse problems". GEM - International Journal on Geomathematics 6, no. 2 (May 19, 2015): 251–94. http://dx.doi.org/10.1007/s13137-015-0073-9.

36

Abdulsamad, Hany, Oleg Arenz, Jan Peters, and Gerhard Neumann. "State-Regularized Policy Search for Linearized Dynamical Systems". Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 419–24. http://dx.doi.org/10.1609/icaps.v27i1.13853.

Abstract:
Trajectory-Centric Reinforcement Learning and Trajectory Optimization methods optimize a sequence of feedback-controllers by taking advantage of local approximations of model dynamics and cost functions. Stability of the policy update is a major issue for these methods, rendering them hard to apply for highly nonlinear systems. Recent approaches combine classical Stochastic Optimal Control methods with information-theoretic bounds to control the step-size of the policy update and could even be used to train nonlinear deep control policies. These methods bound the relative entropy between the new and the old policy to ensure a stable policy update. However, despite the bound in policy space, the state distributions of two consecutive policies can still differ significantly, rendering the used local approximate models invalid. To alleviate this issue we propose enforcing a relative entropy constraint not only on the policy update, but also on the update of the state distribution, around which the dynamics and cost are being approximated. We present a derivation of the closed-form policy update and show that our approach outperforms related methods on two nonlinear and highly dynamic simulated systems.
37

Chen, Jiqiang, Jie Wan, and Litao Ma. "Regularized Discrete Optimal Transport for Class-Imbalanced Classifications". Mathematics 12, no. 4 (February 7, 2024): 524. http://dx.doi.org/10.3390/math12040524.

Abstract:
Imbalanced class data are commonly observed in pattern analysis, machine learning, and various real-world applications. Conventional approaches often resort to resampling techniques in order to address the imbalance, which inevitably alter the original data distribution. This paper proposes a novel classification method that leverages optimal transport for handling imbalanced data. Specifically, we establish a transport plan between training and testing data without modifying the original data distribution, drawing upon the principles of optimal transport theory. Additionally, we introduce a non-convex interclass regularization term to establish connections between testing samples and training samples with the same class labels. This regularization term forms the basis of a regularized discrete optimal transport model, which is employed to address imbalanced classification scenarios. Subsequently, in line with the concept of maximum minimization, a maximum minimization algorithm is introduced for regularized discrete optimal transport. Subsequent experiments on 17 Keel datasets with varying levels of imbalance demonstrate the superior performance of the proposed approach compared to 11 other widely used techniques for class-imbalanced classification. Additionally, the application of the proposed approach to water quality evaluation confirms its effectiveness.
38

Fang, Qiang, Wenzhuo Zhang, and Xitong Wang. "Visual Navigation Using Inverse Reinforcement Learning and an Extreme Learning Machine". Electronics 10, no. 16 (August 18, 2021): 1997. http://dx.doi.org/10.3390/electronics10161997.

Abstract:
In this paper, we focus on the challenges of training efficiency, the design of reward functions, and generalization in reinforcement learning for visual navigation, and propose a regularized extreme learning machine-based inverse reinforcement learning approach (RELM-IRL) to improve navigation performance. Our contributions are mainly three-fold: First, a framework combining an extreme learning machine with inverse reinforcement learning is presented. This framework can improve sample efficiency, obtain the reward function directly from the image information observed by the agent, and improve generalization to new targets and new environments. Second, the extreme learning machine is regularized by multi-response sparse regression and the leave-one-out method, which can further improve the generalization ability. Simulation experiments in the AI-THOR environment showed that the proposed approach outperformed previous end-to-end approaches, thus demonstrating the effectiveness and efficiency of our approach.
39

Yang, Pengcheng, Boxing Chen, Pei Zhang, and Xu Sun. "Visual Agreement Regularized Training for Multi-Modal Machine Translation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9418–25. http://dx.doi.org/10.1609/aaai.v34i05.6484.

Abstract:
Multi-modal machine translation aims at translating the source sentence into a different language in the presence of the paired image. Previous work suggests that additional visual information only provides dispensable help to translation, which is needed in several very special cases such as translating ambiguous words. To make better use of visual information, this work presents visual agreement regularized training. The proposed approach jointly trains the source-to-target and target-to-source translation models and encourages them to share the same focus on the visual information when generating semantically equivalent visual words (e.g. “ball” in English and “ballon” in French). Besides, a simple yet effective multi-head co-attention model is also introduced to capture interactions between visual and textual features. The results show that our approaches can outperform competitive baselines by a large margin on the Multi30k dataset. Further analysis demonstrates that the proposed regularized training can effectively improve the agreement of attention on the image, leading to better use of visual information.
40

LARSEN, CHRISTOPHER J., CHRISTOPH ORTNER, and ENDRE SÜLI. "EXISTENCE OF SOLUTIONS TO A REGULARIZED MODEL OF DYNAMIC FRACTURE". Mathematical Models and Methods in Applied Sciences 20, no. 07 (July 2010): 1021–48. http://dx.doi.org/10.1142/s0218202510004520.

Abstract:
Existence and convergence results are proved for a regularized model of dynamic brittle fracture based on the Ambrosio–Tortorelli approximation. We show that the sequence of solutions to the time-discrete elastodynamics, proposed by Bourdin, Larsen & Richardson as a semidiscrete numerical model for dynamic fracture, converges, as the time-step approaches zero, to a solution of the natural time-continuous elastodynamics model, and that this solution satisfies an energy balance. We emphasize that these models do not specify crack paths a priori, but predict them, including such complicated behavior as kinking, crack branching, and so forth, in any spatial dimension.
41

Jahan, Sohana, Moriyam Akter, Sifta Yeasmin, and Farhana Ahmed Simi. "Facial Expression Identification using Regularized Supervised Distance Preserving Projection". Dhaka University Journal of Science 69, no. 2 (December 1, 2021): 70–75. http://dx.doi.org/10.3329/dujs.v69i2.56485.

Abstract:
With the rapid development of computer vision and artificial intelligence, facial expression recognition has become one of the most reliable and key technologies of advanced human-computer interaction. Nowadays, there is growing interest in improving expression recognition techniques. In most cases, the efficiency of an automatic recognition system depends on the facial expression features it represents: even the best classifier may fail to achieve a good recognition rate if inadequate features are provided. Therefore, feature extraction is a crucial step of the facial expression recognition process. In this paper, we use Regularized Supervised Distance Preserving Projection to extract the best features of the images. Numerical experiments show that this technique outperforms many state-of-the-art approaches in terms of recognition rate.
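As a hedged sketch of the kind of objective behind (regularized) supervised distance preserving projections: within an input-space neighborhood N(x_i), distances between projected points are matched to the corresponding response-space distances, and a regularization term stabilizes the projection P; the precise formulation and penalty used in the paper may differ.

    \min_P \; \sum_i \sum_{x_j \in N(x_i)} \left( \| P^\top x_i - P^\top x_j \|^2 - d(y_i, y_j)^2 \right)^2 + \lambda \, \| P \|_F^2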
42

Zhang, Hong, Dong Lai Hao, and Xiang Yang Liu. "A Precoding Algorithm Based on Truncated Polynomial Expansion for Massive MIMO System". Advanced Materials Research 945-949 (June 2014): 2315–18. http://dx.doi.org/10.4028/www.scientific.net/amr.945-949.2315.

Abstract:
A precoding algorithm based on truncated polynomial expansion (TPE) is proposed for massive multiple-input multiple-output systems. Using random matrix theory, the optimal precoding weight coefficients are derived to trade off system throughput against precoding complexity. Simulation results under different channel conditions show that, as the polynomial order increases, the average achievable rate approaches that of regularized zero-forcing precoding, and the optimized scheme outperforms the TPE scheme without optimization.
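A minimal sketch of the idea: TPE replaces the matrix inverse in regularized zero-forcing (RZF) with a low-order matrix polynomial, trading rate for complexity. The weight coefficients below are placeholders; in the paper they are optimized via random matrix theory.

    import numpy as np

    def rzf_precoder(H, alpha):
        # Regularized zero-forcing: P = H^H (H H^H + alpha I)^{-1},
        # H has shape (n_users, n_antennas).
        K = H.shape[0]
        return H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))

    def tpe_precoder(H, weights):
        # Truncated polynomial expansion: P = sum_l w_l (H^H H)^l H^H.
        # Only matrix products are needed; no matrix inversion.
        G = H.conj().T @ H
        M = np.eye(G.shape[0])
        P = np.zeros_like(H.conj().T)
        for w in weights:
            P = P + w * (M @ H.conj().T)
            M = M @ G
        return P

With well-chosen weights, the TPE precoder converges to the RZF solution as the polynomial order grows, which is the limiting behavior the simulations report.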
43

Robitzsch, Alexander. "Model-Robust Estimation of Multiple-Group Structural Equation Models". Algorithms 16, no. 4 (April 17, 2023): 210. http://dx.doi.org/10.3390/a16040210.

Abstract:
Structural equation models (SEM) are widely used in the social sciences. They model the relationships between latent variables in structural models, while defining the latent variables by observed variables in measurement models. Frequently, it is of interest to compare particular parameters in an SEM as a function of a discrete grouping variable. Multiple-group SEM is employed to compare structural relationships between groups. In this article, estimation approaches for multiple-group SEM are reviewed. We focus on comparing different estimation strategies in the presence of local model misspecifications (i.e., model errors). In detail, maximum likelihood and weighted least-squares estimation approaches are compared with a newly proposed robust Lp loss function and regularized maximum likelihood estimation. The latter methods are referred to as model-robust estimators because they show some resistance to model errors. In particular, we focus on the performance of the different estimators in the presence of unmodelled residual error correlations and measurement noninvariance (i.e., group-specific item intercepts). The performance of the different estimators is compared in two simulation studies and an empirical example. It turned out that the robust loss function approach is computationally much less demanding than regularized maximum likelihood estimation while showing similar statistical performance.
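As a sketch of the robust loss idea: replace the squared residuals of a least-squares discrepancy between sample moments s_j and model-implied moments \sigma_j(\theta) with a power p < 2, so that a few large residuals caused by local model errors are downweighted. The notation below is illustrative rather than the article's exact fitting function.

    F_p(\theta) = \sum_j \left| s_j - \sigma_j(\theta) \right|^p, \qquad 0 < p \le 2,

with p = 2 recovering unweighted least squares and smaller p increasing robustness to model error.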
44

CHEN, WEN-SHENG, PONG CHI YUEN, JIAN HUANG, and BIN FANG. "TWO-STEP SINGLE PARAMETER REGULARIZATION FISHER DISCRIMINANT METHOD FOR FACE RECOGNITION". International Journal of Pattern Recognition and Artificial Intelligence 20, no. 02 (March 2006): 189–207. http://dx.doi.org/10.1142/s0218001406004600.

Abstract:
In face recognition tasks, Fisher discriminant analysis (FDA) is one of the promising methods for dimensionality reduction and discriminant feature extraction. The objective of FDA is to find an optimal projection matrix that maximizes the between-class distance while simultaneously minimizing the within-class distance. The main limitation of traditional FDA is the so-called small sample size (3S) problem: it renders the within-class scatter matrix singular, so traditional FDA fails to perform directly for pattern classification. To overcome the 3S problem, this paper proposes a novel two-step single parameter regularization Fisher discriminant (2SRFD) algorithm for face recognition. The first, semi-regularized step is based on a rank lifting theorem and adjusts both the projection directions and their corresponding weights. Our previous three-to-one parameter regularized technique is exploited in the second stage, which changes only the weights of the projection directions. It is shown that the final regularized within-class scatter matrix approaches the original within-class scatter matrix as the single parameter tends to zero. The method also has good computational complexity. The proposed method has been tested and evaluated on three publicly available databases, namely the ORL, CMU PIE, and FERET face databases. Compared with existing state-of-the-art FDA-based methods for the 3S problem, the proposed 2SRFD approach gives the best performance.
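A hedged sketch of the single-parameter regularization idea in its simplest form: lift the singular within-class scatter with a small multiple of the identity before inverting. The two-step 2SRFD scheme (rank lifting plus three-to-one reweighting) is more refined than this; the sketch only shows the baseline it builds on.

    import numpy as np

    def regularized_fda(X, y, lam=1e-3):
        # Fisher discriminant with the within-class scatter regularized as
        # S_w + lam * I, which is invertible even when n_samples < n_features.
        classes = np.unique(y)
        d = X.shape[1]
        mu = X.mean(axis=0)
        Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            Sb += len(Xc) * np.outer(mc - mu, mc - mu)
        evals, evecs = np.linalg.eig(np.linalg.solve(Sw + lam * np.eye(d), Sb))
        order = np.argsort(-evals.real)[: len(classes) - 1]
        return evecs[:, order].real               # projection directions

As lam tends to zero, S_w + lam * I approaches S_w, mirroring the limit property stated in the abstract.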
45

Bender, Philipp, Dirk Honecker, Mathias Bersweiler, Rocio Costo, Tamara Kahmann, Frank Ludwig, Jon Leiner, and Johanna K. Jochum. "Robust approaches for model-free small-angle scattering data analysis". Journal of Applied Crystallography 55, no. 3 (May 28, 2022): 586–91. http://dx.doi.org/10.1107/s1600576722004356.

Abstract:
The small-angle neutron scattering data of nanostructured magnetic samples contain information regarding their chemical and magnetic properties. Often, the first step to access characteristic magnetic and structural length scales is a model-free investigation. However, due to measurement uncertainties and a restricted q range, a direct Fourier transform usually fails and results in ambiguous distributions. To circumvent these problems, different methods have been introduced to derive regularized, more stable correlation functions, with the indirect Fourier transform being the most prominent approach. Here, the indirect Fourier transform is compared with the singular value decomposition and an iterative algorithm. These approaches are used to determine the correlation function from magnetic small-angle neutron scattering data of a powder sample of iron oxide nanoparticles; it is shown that with all three methods, in principle, the same correlation function can be derived. Each method has certain advantages and disadvantages, and thus the recommendation is to combine these three approaches to obtain robust results.
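For illustration, a minimal Tikhonov-regularized inversion in the spirit of the indirect Fourier transform: discretize the scattering integral into a linear system and stabilize it with a smoothness penalty. Real IFT implementations use spline bases and carefully chosen stabilizers, and the kernel below (the spherically averaged sin(qr)/(qr) form) is an assumption made for the sketch.

    import numpy as np

    def invert_sas(q, I, r, lam=1e-3):
        # I(q) ~ A @ c, with A_ij = sin(q_i r_j) / (q_i r_j);
        # solve min ||A c - I||^2 + lam * ||D c||^2, D = second differences.
        A = np.sinc(np.outer(q, r) / np.pi)       # np.sinc(x) = sin(pi x)/(pi x)
        D = np.diff(np.eye(len(r)), n=2, axis=0)  # smoothness operator
        return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ I)

The regularization weight lam plays the same stabilizing role as the Lagrange multiplier in the classical indirect Fourier transform.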
46

WU, XIAN, JIANHUANG LAI, and XILIN CHEN. "RANK-1 TENSOR PROJECTION VIA REGULARIZED REGRESSION FOR ACTION CLASSIFICATION". International Journal of Wavelets, Multiresolution and Information Processing 09, no. 06 (November 2011): 1025–41. http://dx.doi.org/10.1142/s0219691311004420.

Abstract:
This paper proposes a novel classification method using rank-1 tensor projection via regularized regression to map a tensor example directly to its label, in contrast to the usual procedure of classifying a compact representation of the original data. An action can naturally be considered a third-order tensor, where the first two dimensions are spatial and the third is temporal. Applying this method to multi-label action classification, based on problem transformation and a subset embedding technique, we obtain results comparable to state-of-the-art approaches on the popular Weizmann and KTH action datasets. Our experimental results are also considerably robust to viewpoint changes, partial occlusion, and irregularities in motion style.
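Concretely, a rank-1 projection scores a third-order action tensor with one vector per mode, and the vectors are fit by regularized regression; the ridge-type penalty shown below is a generic choice for illustration, not necessarily the paper's exact regularizer.

    \hat{y} = \mathcal{X} \times_1 u \times_2 v \times_3 w
            = \sum_{i,j,k} u_i \, v_j \, w_k \, \mathcal{X}_{ijk},
    \qquad
    \min_{u,v,w} \; \sum_n \left( y_n - \mathcal{X}_n \times_1 u \times_2 v \times_3 w \right)^2
        + \lambda \left( \|u\|^2 + \|v\|^2 + \|w\|^2 \right)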
47

Alenezi, Fayadh, and K. C. Santosh. "Geometric Regularized Hopfield Neural Network for Medical Image Enhancement". International Journal of Biomedical Imaging 2021 (January 22, 2021): 1–12. http://dx.doi.org/10.1155/2021/6664569.

Abstract:
One of the major shortcomings of the Hopfield neural network (HNN) is that the network may not always converge to a fixed point; HNN is predominantly limited to local optimization during training to achieve network stability. In this paper, the convergence problem is addressed using two approaches: (a) by sequencing the activation of a continuous modified HNN (MHNN) based on the geometric correlation of features within various image hyperplanes via pixel gradient vectors, and (b) by regulating geometric pixel gradient vectors. These are achieved by regularizing the proposed MHNNs under cohomology, which enables them to act as an unconventional filter for pixel spectral sequences. This shifts the focus to both local and global optimization, strengthening feature correlations within each image subspace, and as a result enhances edges, information content, contrast, and resolution. The proposed algorithm was tested on fifteen different medical images, with evaluations based on entropy, visual information fidelity (VIF), weighted peak signal-to-noise ratio (WPSNR), contrast, and homogeneity. Our results confirmed superiority compared with four existing benchmark enhancement methods.
48

Hadj-Rabah, Karima, Gilda Schirinzi, Alessandra Budillon, Faiza Hocine, and Aichouche Belhadj-Aissa. "Non-Parametric Tomographic SAR Reconstruction via Improved Regularized MUSIC". Remote Sensing 15, no. 6 (March 15, 2023): 1599. http://dx.doi.org/10.3390/rs15061599.

Abstract:
Height estimation of scatterers in complex environments via the Tomographic Synthetic Aperture Radar (TomoSAR) technique is still a valuable research field. The parametric spectral estimation approach constitutes a powerful tool for identifying superimposed scatterers with different complex reflectivities located at different heights in the same range–azimuth resolution cell. Unfortunately, this approach requires prior knowledge of the number of scatterers in each pixel, which is not available in practical situations. In this paper, we propose a method that analyzes the scree plot generated from the spectral decomposition of the multidimensional covariance matrix in order to estimate the number of scatterers for each resolution cell automatically. In this context, a properly improved regularization step is included during the reconstruction process, transforming the parametric MUSIC estimator into a non-parametric method. Experimental results on two data sets acquired by the TerraSAR-X satellite, covering tall towers with different facade coating characteristics, highlight the effectiveness of the proposed regularized MUSIC for the reconstruction of tall man-made structures compared with classical approaches.
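For orientation, a standard MUSIC pseudospectrum for the TomoSAR elevation problem is sketched below, with the number of scatterers passed in as a known quantity; the paper's contributions (scree-plot-based model-order selection and the improved regularization step) are precisely what this baseline lacks. The steering-vector phase uses the usual 4*pi*b*s/(lambda*r) approximation.

    import numpy as np

    def music_spectrum(Y, baselines, wavelength, rng_dist, heights, n_scat):
        # Y: (n_acq, n_looks) stack of complex SAR samples for one pixel.
        R = Y @ Y.conj().T / Y.shape[1]           # sample covariance matrix
        evals, evecs = np.linalg.eigh(R)          # eigenvalues ascending
        En = evecs[:, : Y.shape[0] - n_scat]      # noise subspace
        spec = np.empty(len(heights))
        for k, s in enumerate(heights):
            a = np.exp(1j * 4 * np.pi * baselines * s / (wavelength * rng_dist))
            a /= np.linalg.norm(a)
            spec[k] = 1.0 / np.real(a.conj() @ (En @ En.conj().T) @ a)
        return spec                               # peaks ~ scatterer heights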
49

Zuo, Xin, Hong-Zhi Wei, and Chun-Rong Chen. "Continuity Results and Error Bounds on Pseudomonotone Vector Variational Inequalities via Scalarization". Journal of Function Spaces 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/7297854.

Abstract:
Continuity (both lower and upper semicontinuity) results for the Pareto/efficient solution mapping of a parametric vector variational inequality with a polyhedral constraint set are established via scalarization approaches, within the framework of strict pseudomonotonicity assumptions. As a direct application, the continuity of the solution mapping of a parametric weak Minty vector variational inequality is also discussed. Furthermore, error bounds for the weak vector variational inequality in terms of two known regularized gap functions are obtained under strong pseudomonotonicity assumptions.
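For context, one standard regularized gap function (in the Fukushima sense) for a variational inequality with map F and constraint set K is

    g_\alpha(x) = \max_{y \in K} \left\{ \langle F(x), x - y \rangle - \frac{\alpha}{2} \, \| x - y \|^2 \right\}, \qquad \alpha > 0,

which is nonnegative on K and vanishes exactly at solutions; error bounds of this general shape relate the distance from x to the solution set to a multiple of \sqrt{g_\alpha(x)} under strong (pseudo)monotonicity. The gap functions used in the paper for the weak vector variational inequality are vector-valued analogues of this scalar construction.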
50

Lee, Ching-pei, and Kai-Wei Chang. "Distributed block-diagonal approximation methods for regularized empirical risk minimization". Machine Learning 109, no. 4 (December 18, 2019): 813–52. http://dx.doi.org/10.1007/s10994-019-05859-2.

Abstract:
In recent years, there has been a growing need to train machine learning models on huge volumes of data; designing efficient distributed optimization algorithms for empirical risk minimization (ERM) has therefore become an active and challenging research topic. In this paper, we propose a flexible framework for distributed ERM training through solving the dual problem, which provides a unified description and comparison of existing methods. Our approach requires only approximate solutions of the sub-problems involved in the optimization process and is versatile enough to be applied to many large-scale machine learning problems, including classification, regression, and structured prediction. We show that our framework enjoys global linear convergence for a broad class of non-strongly-convex problems, and that some specific choices of the sub-problems can achieve much faster convergence than existing approaches via a refined analysis. This improved convergence rate is also reflected in the superior empirical performance of our method.
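A heavily simplified sketch of one member of this family: CoCoA-style distributed dual coordinate ascent for the L2-regularized hinge-loss (SVM) dual, where each partition of the data plays the role of one machine, solves its local sub-problem approximately, and the primal iterate is synchronized once per round. The averaging step and all parameter choices are illustrative assumptions, not the paper's exact framework.

    import numpy as np

    def distributed_dual_cd(X, y, parts, lam=0.1, rounds=50):
        # Dual: max_a sum(a) - (1/(2*lam)) ||sum_i a_i y_i x_i||^2,
        # with 0 <= a_i <= 1, labels y in {-1, +1}, and primal iterate
        # w = (1/lam) * sum_i a_i y_i x_i.
        n, d = X.shape
        alpha, w = np.zeros(n), np.zeros(d)
        for _ in range(rounds):
            d_alpha, d_w = np.zeros(n), np.zeros(d)
            for idx in parts:                   # one block per "machine"
                dw = np.zeros(d)
                for i in idx:                   # local coordinate ascent pass
                    grad = 1.0 - y[i] * (X[i] @ (w + dw))
                    step = lam * grad / (X[i] @ X[i] + 1e-12)
                    a_new = np.clip(alpha[i] + step, 0.0, 1.0)
                    dw += (a_new - alpha[i]) * y[i] * X[i] / lam
                    d_alpha[i] = a_new - alpha[i]
                d_w += dw
            K = len(parts)
            alpha += d_alpha / K                # conservative averaged update
            w += d_w / K                        # keeps w consistent with alpha
        return w, alpha

Only the aggregated update crosses machine boundaries once per round, which is the communication pattern that block-diagonal approximations of the dual Hessian are designed to exploit.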