Selected scholarly literature on the topic "Regularized approaches"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Regularized approaches".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are included in the metadata.

Journal articles on the topic "Regularized approaches"

1

G.V., Suresh, und Srinivasa Reddy E.V. „Uncertain Data Analysis with Regularized XGBoost“. Webology 19, Nr. 1 (20.01.2022): 3722–40. http://dx.doi.org/10.14704/web/v19i1/web19245.

Annotation:
Uncertainty is a ubiquitous element in available knowledge about the real world. Data sampling error, obsolete sources, network latency, and transmission error are all factors that contribute to the uncertainty. These kinds of uncertainty have to be handled cautiously, or else the classification results could be unreliable or even erroneous. There are numerous methodologies developed to comprehend and control uncertainty in data. There are many faces of uncertainty, i.e., inconsistency, imprecision, ambiguity, incompleteness, vagueness, unpredictability, noise, and unreliability. Missing information is inevitable in real-world data sets. While some conventional multiple imputation approaches are well studied and have shown empirical validity, they entail limitations in processing large datasets with complex data structures. In addition, these standard approaches tend to be computationally inefficient for medium and large datasets. In this paper, we propose a scalable multiple imputation framework based on XGBoost, bootstrapping, and a regularized method. XGBoost, one of the fastest implementations of gradient boosted trees, is able to automatically retain interactions and non-linear relations in a dataset while achieving high computational efficiency with the aid of bootstrapping and regularized methods. In the context of high-dimensional data, this methodology provides less biased estimates and more acceptable imputation variability than previous regression approaches. We validate our adaptive imputation approaches against standard methods on numerical and real data sets and show promising results.
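
For readers who want to experiment with the idea, the following minimal Python sketch combines a regularized XGBoost regressor with bootstrapping to impute a missing column. It is not the authors' framework: the data, column names, and hyperparameters (reg_alpha, reg_lambda, subsample) are illustrative assumptions.

    import numpy as np
    import pandas as pd
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.normal(size=(500, 4)), columns=["x1", "x2", "x3", "target"])
    df.loc[rng.random(500) < 0.2, "target"] = np.nan   # introduce ~20% missing values

    observed = df.dropna()
    missing = df[df["target"].isna()]
    features = ["x1", "x2", "x3"]

    # reg_alpha / reg_lambda are the L1 / L2 penalties on the boosted trees;
    # refitting on bootstrap resamples yields several imputations per missing cell.
    draws = []
    for b in range(5):
        boot = observed.sample(frac=1.0, replace=True, random_state=b)
        model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1,
                             reg_alpha=1.0, reg_lambda=5.0, subsample=0.8)
        model.fit(boot[features], boot["target"])
        draws.append(model.predict(missing[features]))

    imputed = np.mean(draws, axis=0)   # or keep all draws for multiple-imputation inference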
2

Taniguchi, Michiaki, und Volker Tresp. „Averaging Regularized Estimators“. Neural Computation 9, Nr. 5 (01.07.1997): 1163–78. http://dx.doi.org/10.1162/neco.1997.9.5.1163.

Annotation:
We compare the performance of averaged regularized estimators. We show that the improvement in performance that can be achieved by averaging depends critically on the degree of regularization which is used in training the individual estimators. We compare four different averaging approaches: simple averaging, bagging, variance-based weighting, and variance-based bagging. In any of the averaging methods, the greatest degree of improvement—if compared to the individual estimators—is achieved if no or only a small degree of regularization is used. Here, variance-based weighting and variance-based bagging are superior to simple averaging or bagging. Our experiments indicate that better performance for both individual estimators and for averaging is achieved in combination with regularization. With increasing degrees of regularization, the two bagging-based approaches (bagging and variance-based bagging) outperform the individual estimators, simple averaging, and variance-based weighting. Bagging and variance-based bagging seem to be the overall best combining methods over a wide range of degrees of regularization.
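
As a minimal Python sketch of the comparison described above, the snippet below contrasts a single ridge-regularized estimator with a bagged ensemble of such estimators over several degrees of regularization; the synthetic data and the alpha grid are assumptions for illustration, not the paper's experimental setup.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.utils import resample

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 10))
    w_true = rng.normal(size=10)
    y = X @ w_true + rng.normal(scale=1.0, size=100)
    X_test = rng.normal(size=(1000, 10))
    y_test = X_test @ w_true

    def mse(pred):
        return float(np.mean((pred - y_test) ** 2))

    for alpha in [0.01, 0.1, 1.0, 10.0]:           # degree of regularization
        single = Ridge(alpha=alpha).fit(X, y).predict(X_test)
        bagged = np.mean(
            [Ridge(alpha=alpha).fit(*resample(X, y, random_state=b)).predict(X_test)
             for b in range(20)], axis=0)          # average over bootstrap replicates
        print(f"alpha={alpha}: single {mse(single):.3f}, bagged {mse(bagged):.3f}")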
3

Luft, Daniel, und Volker Schulz. „Simultaneous shape and mesh quality optimization using pre-shape calculus“. Control and Cybernetics 50, Nr. 4 (01.12.2021): 473–520. http://dx.doi.org/10.2478/candc-2021-0028.

Annotation:
Computational meshes arising from shape optimization routines commonly suffer from decrease of mesh quality or even destruction of the mesh. In this work, we provide an approach to regularize general shape optimization problems to increase both shape and volume mesh quality. For this, we employ pre-shape calculus as established in Luft and Schulz (2021). Existence of regularized solutions is guaranteed. Further, consistency of modified pre-shape gradient systems is established. We present pre-shape gradient system modifications, which permit simultaneous shape optimization with mesh quality improvement. Optimal shapes to the original problem are left invariant under regularization. The computational burden of our approach is limited, since additional solution of possibly larger (non-)linear systems for regularized shape gradients is not necessary. We implement and compare pre-shape gradient regularization approaches for a 2D problem, which is prone to mesh degeneration. As our approach does not depend on the choice of metrics representing shape gradients, we employ and compare several different metrics.
4

Ebadat, Afrooz, Giulio Bottegal, Damiano Varagnolo, Bo Wahlberg und Karl H. Johansson. „Regularized Deconvolution-Based Approaches for Estimating Room Occupancies“. IEEE Transactions on Automation Science and Engineering 12, Nr. 4 (Oktober 2015): 1157–68. http://dx.doi.org/10.1109/tase.2015.2471305.

5

Feng, Hesen, Lihong Ma und Jing Tian. „A Dynamic Convolution Kernel Generation Method Based on Regularized Pattern for Image Super-Resolution“. Sensors 22, Nr. 11 (01.06.2022): 4231. http://dx.doi.org/10.3390/s22114231.

Annotation:
Image super-resolution aims to reconstruct a high-resolution image from its low-resolution counterparts. Conventional image super-resolution approaches share the same spatial convolution kernel for the whole image in the upscaling modules, which neglect the specificity of content information in different positions of the image. In view of this, this paper proposes a regularized pattern method to represent spatially variant structural features in an image and further exploits a dynamic convolution kernel generation method to match the regularized pattern and improve image reconstruction performance. To be more specific, first, the proposed approach extracts features from low-resolution images using a self-organizing feature mapping network to construct regularized patterns (RP), which describe different contents at different locations. Second, the meta-learning mechanism based on the regularized pattern predicts the weights of the convolution kernels that match the regularized pattern for each different location; therefore, it generates different upscaling functions for images with different content. Extensive experiments are conducted using the benchmark datasets Set5, Set14, B100, Urban100, and Manga109 to demonstrate that the proposed approach outperforms the state-of-the-art super-resolution approaches in terms of both PSNR and SSIM performance.
6

Robitzsch, Alexander. „Implementation Aspects in Regularized Structural Equation Models“. Algorithms 16, Nr. 9 (18.09.2023): 446. http://dx.doi.org/10.3390/a16090446.

Annotation:
This article reviews several implementation aspects in estimating regularized single-group and multiple-group structural equation models (SEM). It is demonstrated that approximate estimation approaches that rely on a differentiable approximation of non-differentiable penalty functions perform similarly to the coordinate descent optimization approach of regularized SEMs. Furthermore, using a fixed regularization parameter can sometimes be superior to an optimal regularization parameter selected by the Bayesian information criterion when it comes to the estimation of structural parameters. Moreover, the widespread penalty functions of regularized SEM implemented in several R packages were compared with the estimation based on a recently proposed penalty function in the Mplus software. Finally, we also investigate the performance of a clever replacement of the optimization function in regularized SEM with a smoothed differentiable approximation of the Bayesian information criterion proposed by O’Neill and Burke in 2023. The findings were derived through two simulation studies and are intended to guide the practical implementation of regularized SEM in future software pieces.
7

Robitzsch, Alexander. „Comparing Robust Linking and Regularized Estimation for Linking Two Groups in the 1PL and 2PL Models in the Presence of Sparse Uniform Differential Item Functioning“. Stats 6, Nr. 1 (25.01.2023): 192–208. http://dx.doi.org/10.3390/stats6010012.

Annotation:
In the social sciences, the performance of two groups is frequently compared based on a cognitive test involving binary items. Item response models are often utilized for comparing the two groups. However, the presence of differential item functioning (DIF) can impact group comparisons. In order to avoid biased estimation of group differences, appropriate statistical methods for handling differential item functioning are required. This article compares the performance of regularized estimation and of several robust linking approaches in three simulation studies that address the one-parameter logistic (1PL) and two-parameter logistic (2PL) models. It turned out that robust linking approaches are at least as effective as the regularized estimation approach in most of the conditions in the simulation studies.
8

Leen, Todd K. „From Data Distributions to Regularization in Invariant Learning“. Neural Computation 7, Nr. 5 (September 1995): 974–81. http://dx.doi.org/10.1162/neco.1995.7.5.974.

Annotation:
Ideally pattern recognition machines provide constant output when the inputs are transformed under a group G of desired invariances. These invariances can be achieved by enhancing the training data to include examples of inputs transformed by elements of G, while leaving the corresponding targets unchanged. Alternatively the cost function for training can include a regularization term that penalizes changes in the output when the input is transformed under the group. This paper relates the two approaches, showing precisely the sense in which the regularized cost function approximates the result of adding transformed examples to the training data. We introduce the notion of a probability distribution over the group transformations, and use this to rewrite the cost function for the enhanced training data. Under certain conditions, the new cost function is equivalent to the sum of the original cost function plus a regularizer. For unbiased models, the regularizer reduces to the intuitively obvious choice—a term that penalizes changes in the output when the inputs are transformed under the group. For infinitesimal transformations, the coefficient of the regularization term reduces to the variance of the distortions introduced into the training data. This correspondence provides a simple bridge between the two approaches.
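
The regularization route described above can be sketched in a few lines: add a penalty on the change of the model output when the input is transformed. In the hypothetical example below the model is linear and the group element is a small fixed shift of the input, so the penalty simply drives the weight vector to be orthogonal to the shift direction; the data, penalty weight, and step size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.1, size=200)
    shift = np.array([0.1, 0.0, 0.0, -0.1, 0.0])   # transformation: x -> x + shift

    w = np.zeros(5)
    lam, lr = 5.0, 0.05
    for _ in range(3000):
        grad_fit = 2 * X.T @ (X @ w - y) / len(y)
        # penalty lam * (f(x) - f(x + shift))^2 = lam * (w @ shift)^2 for a linear model
        grad_inv = 2 * lam * (w @ shift) * shift
        w -= lr * (grad_fit + grad_inv)

    print("output change under the transformation:", w @ shift)   # close to zero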
9

Feng, Huijie, Chunpeng Wu, Guoyang Chen, Weifeng Zhang und Yang Ning. „Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness“. Proceedings of the AAAI Conference on Artificial Intelligence 34, Nr. 04 (03.04.2020): 3858–65. http://dx.doi.org/10.1609/aaai.v34i04.5798.

Annotation:
Recently smoothing deep neural network based classifiers via isotropic Gaussian perturbation is shown to be an effective and scalable way to provide state-of-the-art probabilistic robustness guarantee against ℓ2 norm bounded adversarial perturbations. However, how to train a good base classifier that is accurate and robust when smoothed has not been fully investigated. In this work, we derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart when training the base classifier. It is computationally efficient and can be implemented in parallel with other empirical defense methods. We discuss how to implement it under both standard (non-adversarial) and adversarial training scheme. At the same time, we also design a new certification algorithm, which can leverage the regularization effect to provide tighter robustness lower bound that holds with high probability. Our extensive experimentation demonstrates the effectiveness of the proposed training and certification approaches on CIFAR-10 and ImageNet datasets.
10

Zhang, Hong, Dong Lai Hao und Xiang Yang Liu. „A Precoding Strategy for Massive MIMO System“. Applied Mechanics and Materials 568-570 (Juni 2014): 1278–81. http://dx.doi.org/10.4028/www.scientific.net/amm.568-570.1278.

Annotation:
Precoding complexity grows with the system dimensions in massive multiple-input multiple-output systems. A precoding scheme based on truncated polynomial expansion is proposed, and a hardware implementation is described to show the advantages of the algorithm over conventional regularized zero-forcing precoding. Finally, simulation results under different channel conditions show that the average achievable rate approaches that of regularized zero-forcing precoding as the polynomial order increases, and that the polynomial order does not need to scale with the system dimensions.
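
As a rough illustration of the idea (not the paper's algorithm), the following Python sketch compares exact regularized zero-forcing precoding with a low-order matrix-polynomial (Neumann-series) approximation of the required inverse; the antenna count, user count, polynomial order, and regularization parameter are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    M, K, alpha = 64, 8, 0.1                       # antennas, users, regularization
    H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

    A = H @ H.conj().T + alpha * np.eye(K)
    W_rzf = H.conj().T @ np.linalg.inv(A)          # exact RZF precoder

    # polynomial approximation of A^{-1}, avoiding the matrix inverse
    c = np.linalg.norm(A, 2)                       # scaling so the series converges
    A_inv_approx = np.zeros_like(A)
    term = np.eye(K, dtype=complex)
    for _ in range(8):                             # truncation order J = 8
        A_inv_approx += term / c
        term = term @ (np.eye(K) - A / c)
    W_tpe = H.conj().T @ A_inv_approx

    print("relative error of the polynomial precoder:",
          np.linalg.norm(W_tpe - W_rzf) / np.linalg.norm(W_rzf))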

Dissertations on the topic "Regularized approaches"

1

Schwarz, Stephan [Verfasser], Philipp [Gutachter] Junker und Klaus [Gutachter] Hackl. „Efficient approaches for regularized damage models : variational modeling and numerical treatment / Stephan Schwarz ; Gutachter: Philipp Junker, Klaus Hackl ; Fakultät für Maschinenbau“. Bochum : Ruhr-Universität Bochum, 2019. http://d-nb.info/1195220863/34.

2

Spagnoli, Lorenzo. „COVID-19 prognosis estimation from CAT scan radiomics: comparison of different machine learning approaches for predicting patients survival and ICU Admission“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23926/.

Annotation:
Since the start of 2020 Sars-COVID19 has given rise to a world-wide pandemic. In an attempt to slow down the spreading of this disease various prevention and diagnostic methods have been developed. In this thesis the attention has been put on Machine Learning to predict prognosis based on data originating from radiological images. Radiomics has been used to extract information from images segmented using a software from the hospital which provided both the clinical data and images. The usefulness of different families of variables has then been evaluated through their performance in the methods used, i.e. Lasso regularized regression and Random Forest. The first chapter is introductory in nature, the second will contain a theoretical overview of the necessary concepts that will be needed throughout this whole work. The focus will be then shifted on methods and instruments used in the development of this thesis. The third chapter will report the results and finally some conclusions will be derived from the previously presented results. It will be concluded that the segmentation and feature extraction step is of pivotal importance in driving the performance of the predictions. In fact, in this thesis, it seems that the information from the images achieves the same predictive power that can be derived from the clinical data. This can be interpreted in three ways: first it can be taken as a symptom of the fact that even the more complex Sars-COVID19 cases can be segmented automatically, or semi-automatically by untrained personnel, leading to results competing with other methodologies. Secondly it can be taken to show that the performance of clinical variables can be reached by radiomic features alone in a semi-automatic pipeline, which could aid in reducing the workload imposed on medical professionals in case of pandemic. Finally it can be taken as proof that the method implemented has room to improve by more carefully investing in the segmentation phase
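
A minimal Python sketch of the two model families used in the thesis, an L1-regularized (lasso-type) logistic regression and a random forest, evaluated by cross-validation on synthetic stand-in features; the dataset and hyperparameters are assumptions and do not reproduce the thesis pipeline.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # stand-in for radiomic plus clinical features and a binary prognosis label
    X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                               random_state=0)

    lasso_like = LogisticRegression(penalty="l1", solver="liblinear", C=0.5, max_iter=5000)
    forest = RandomForestClassifier(n_estimators=300, random_state=0)

    for name, model in [("L1 logistic regression", lasso_like), ("random forest", forest)]:
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: mean cross-validated AUC = {auc:.3f}")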
3

Savino, Mary Edith. „Statistical learning methods for nonlinear geochemical problems“. Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM032.

Annotation:
In this thesis, we propose two function estimation methods and a variable selection method in a multivariate nonparametric model as part of numerical simulations of geochemical systems, for a deep geological disposal facility of highly radioactive waste. More specifically, in Chapter 2, we present an active learning procedure using Gaussian processes to approximate unknown functions having several input variables. This method allows for the computation of the global uncertainty of the function estimation at each iteration and thus, cunningly selects the most relevant observation points at which the function to estimate has to be evaluated. Consequently, the number of observations needed to obtain a satisfactory estimation of the underlying function is reduced, limiting calls to geochemical reaction equations solvers and reducing calculation times. Additionally, in Chapter 3, we propose a non sequential function estimation method called GLOBER consisting in approximating the function to estimate by a linear combination of B-splines. In this approach, since the knots of the B-splines can be seen as changes in the derivatives of the function to estimate, they are selected using the generalized lasso. In Chapter 4, we introduce a novel variable selection method in a multivariate nonparametric model, ABSORBER, to identify the variables the unknown function really depends on, thereby simplifying the geochemical system. In this approach, we assume that the function can be approximated by a linear combination of B-splines and their pairwise interaction terms. The coefficients of each term of the linear combination are estimated using the usual least squares criterion penalized by the l2-norms of the partial derivatives with respect to each variable. The introduced approaches were evaluated and validated through numerical experiments and were all applied to geochemical systems of varying complexity. Comparisons with state-of-the-art methods demonstrated that our methods outperformed the others. In Chapter 5, the function estimation and variable selection methods were applied in the context of a European project, EURAD, and compared to methods devised by other scientific teams involved in the projet. This application highlighted the performance of our methods, particularly when only the relevant variables selected with ABSORBER were considered. The proposed methods have been implemented in R packages: glober and absorber which are available on the CRAN (Comprehensive R Archive Network)
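
The active-learning idea of Chapter 2 can be sketched as a loop that fits a Gaussian process, queries the candidate point with the largest predictive standard deviation, and evaluates the expensive function there. The one-dimensional target function, kernel, and iteration count below are illustrative assumptions, not the thesis implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    f = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2   # stand-in for the solver
    candidates = np.linspace(-2, 2, 200).reshape(-1, 1)

    X_train = np.array([[-1.5], [0.0], [1.5]])
    y_train = f(X_train)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)

    for _ in range(10):
        gp.fit(X_train, y_train)
        _, std = gp.predict(candidates, return_std=True)
        x_new = candidates[[np.argmax(std)]]       # point with largest uncertainty
        X_train = np.vstack([X_train, x_new])
        y_train = np.append(y_train, f(x_new))

    print("evaluated points:", np.sort(X_train.ravel()).round(2))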
4

Mak, Rachel Y. C. „Reducing Complexity| A Regularized Non-negative Matrix Approximation (NNMA) Approach to X-ray Spectromicroscopy Analysis“. Thesis, Northwestern University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3669280.

Annotation:

X-ray absorption spectromicroscopy combines microscopy and spectroscopy to provide rich information about the chemical organization of materials down to the nanoscale. But with richness also comes complexity: natural materials such as biological or environmental science specimens can be composed of complex spectroscopic mixtures of different materials. The challenge becomes how we could meaningfully simplify and interpret this information. Approaches such as principal component analysis and cluster analysis have been used in previous studies, but with some limitations that we will describe. This leads us to develop a new approach based on a development of non-negative matrix approximation (NNMA) analysis with both sparseness and spectra similarity regularizations. We apply this new technique to simulated spectromicroscopy datasets as well as a preliminary study of the large-scale biochemical organization of a human sperm cell. NNMA analysis is able to select major features of the sperm cell without the physically erroneous negative weightings or thicknesses in the calculated image which appeared in previous approaches.

5

Yu, Lixi. „Regularized efficient score estimation and testing (reset) approach in low-dimensional and high-dimensional GLM“. Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2301.

Annotation:
Due to the rapid development and growing need for information technologies, more and more researchers start to focus on high-dimensional data. Much work has been done on problems like point estimation possessing oracle inequalities, coefficient estimation, variable selection in high-dimensional regression models. However, with respect to the statistical inference for the regression coefficients, there have been few studies. Therefore, we propose a regularized efficient score estimation and testing (RESET) approach for treatment effects in the presence of nuisance parameters, either low-dimensional or high-dimensional, in generalized linear models (GLMs). Based on the RESET method, we are also able to develop another two-step approach related to the same problem. The RESET approach is based on estimating the efficient score function of the treatment parameters. This means we are trying to remove the influence of nuisance parameters on the treatment parameters and construct an efficient score function which could be used for estimating and testing for the treatment effect. The RESET approach can be used in both low-dimensional and high-dimensional settings. As the simulation results show, it is comparable with the commonly used maximum likelihood estimators in most low-dimensional cases. We will prove that the RESET estimator is consistent under some regularity conditions, either in the low-dimensional or the high-dimensional linear models. Also, it is shown that the efficient score function of the treatment parameters follows a chi-square distribution, based on which the regularized efficient score tests are constructed to test for the treatment effect, in both low-dimensional and high-dimensional GLMs. The two-step approach is mainly used for high-dimensional inference. It combines the RESET approach with a first step of selecting "promising" variables for the purpose of reducing the dimension of the regression model. The minimax concave penalty is adopted for its oracle property, which means it tends to choose "correct" variables asymptotically. The simulation results show that some improvement is still required for this approach, which will be part of our future research direction. Finally, both the RESET and the two-step approaches are implemented with a real data example to demonstrate their application, followed by a conclusion for all the problems investigated here and a discussion for the directions of future research.
6

Gürol, Selime. „Solving regularized nonlinear least-squares problem in dual space with application to variational data assimilation“. Thesis, Toulouse, INPT, 2013. http://www.theses.fr/2013INPT0040/document.

Annotation:
This thesis investigates the conjugate-gradient method and the Lanczos method for the solution of under-determined nonlinear least-squares problems regularized by a quadratic penalty term. Such problems often result from a maximum likelihood approach, and involve a set of m physical observations and n unknowns that are estimated by nonlinear regression. We suppose here that n is large compared to m. These problems are encountered for instance when three-dimensional fields are estimated from physical observations, as is the case in data assimilation in Earth system models. A widely used algorithm in this context is the Gauss-Newton (GN) method, known in the data assimilation community under the name of incremental four dimensional variational data assimilation. The GN method relies on the approximate solution of a sequence of linear least-squares problems in which the nonlinear least-squares cost function is approximated by a quadratic function in the neighbourhood of the current nonlinear iterate. However, it is well known that this simple variant of the Gauss-Newton algorithm does not ensure a monotonic decrease of the cost function and that convergence is not guaranteed. Removing this difficulty is typically achieved by using a line-search (Dennis and Schnabel, 1983) or trust-region (Conn, Gould and Toint, 2000) strategy, which ensures global convergence to first order critical points under mild assumptions. We consider the second of these approaches in this thesis. Moreover, taking into consideration the large-scale nature of the problem, we propose here to use a particular trust-region algorithm relying on the Steihaug-Toint truncated conjugate-gradient method for the approximate solution of the subproblem (Conn, Gould and Toint, 2000, pp. 133-139). Solving this subproblem in the n-dimensional space (by CG or Lanczos) is referred to as the primal approach. Alternatively, a significant reduction in the computational cost is possible by rewriting the quadratic approximation in the m-dimensional space associated with the observations. This is important for large-scale applications such as those solved daily in weather prediction systems. This approach, which performs the minimization in the m-dimensional space using CG or variants thereof, is referred to as the dual approach. The first proposed dual approach (Courtier, 1997), known as the Physical-space Statistical Analysis System (PSAS) in the data assimilation community starts by solving the corresponding dual cost function in m-dimensional space by a standard preconditioned CG (PCG), and then recovers the step in n-dimensional space through multiplication by an n by m matrix. Technically, the algorithm consists of recurrence formulas involving m-vectors instead of n-vectors. However, the use of PSAS can be unduly costly as it was noticed that the linear least-squares cost function does not monotonically decrease along the nonlinear iterations when applying standard termination. Another dual approach has been proposed by Gratton and Tshimanga (2009) and is known as the Restricted Preconditioned Conjugate Gradient (RPCG) method. It generates the same iterates in exact arithmetic as those generated by the primal approach, again using recursion formula involving m-vectors. The main interest of RPCG is that it results in significant reduction of both memory and computational costs while maintaining the desired convergence property, in contrast with the PSAS algorithm. 
The relation between these two dual approaches and the question of deriving efficient preconditioners (Gratton, Sartenaer and Tshimanga, 2011), essential when large-scale problems are considered, was not addressed in Gratton and Tshimanga (2009)
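
The primal/dual distinction summarized above rests on a standard matrix identity for the regularized linear least-squares step: the n-by-n primal system and the m-by-m observation-space (dual) system yield the same increment. The following Python sketch checks this numerically with direct solves in place of conjugate-gradient iterations; dimensions and covariances are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    n, m = 500, 30                       # unknowns vs. observations, n >> m
    H = rng.normal(size=(m, n))
    B = np.eye(n)                        # background / prior covariance
    R = 0.5 * np.eye(m)                  # observation-noise covariance
    y = rng.normal(size=m)

    # primal: solve an n x n system
    x_primal = np.linalg.solve(np.linalg.inv(B) + H.T @ np.linalg.solve(R, H),
                               H.T @ np.linalg.solve(R, y))

    # dual: solve an m x m system in observation space, then map back
    x_dual = B @ H.T @ np.linalg.solve(H @ B @ H.T + R, y)

    print("max difference between primal and dual solutions:",
          np.abs(x_primal - x_dual).max())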
7

Pröchtel, Patrick. „Anisotrope Schädigungsmodellierung von Beton mit adaptiver bruchenergetischer Regularisierung Anisotropic damage modeling of concrete regularized by means of the adaptive fracture energy approach /“. [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1224751435667-29771.

8

TESEI, CLAUDIA. „Nonlinear analysis of masonry and concrete structures under monotonic and cyclic loading: a regularized multidirectional d+/d− damage model“. Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2710141.

Annotation:
A rigorous structural analysis is fundamental in the safety assessment of the built heritage and in its efficient conservation and rehabilitation. In line with the necessity of refined techniques, the objective of the present thesis is to develop and validate, in a displacement-based finite element framework, a nonlinear model apt for the study of masonry and concrete structures under monotonic and cyclic loading. The proposed constitutive law adopts two independent scalar damage variables, d+ and d−, in combination with the spectral decomposition of the elastic strain tensor, to simulate the pronounced dissimilar response under tension and compression, typical of these materials. The assumption of energy-equivalence between the damaged solid and the effective (undamaged) one is considered for representing the orthotropy induced in the material by the degradation process, with the consequence that a thermodynamically consistent constitutive operator, positive definite, symmetric and strain-driven, is derived. The formulation is integrated with a multidirectional damage procedure, addressed to extend the microcrack closure-reopening (MCR) capabilities to generic cyclic conditions, especially shear cyclic conditions, making the model suitable for dealing with seismic actions. Maintaining unaltered the dependence of the constitutive law from d+ and d−, this approach activates or deactivates a tensile (compressive) damage value on the base of the current maximum (minimum) principal strain direction. In correspondence with damage activation (crack opening) or deactivation (crack closure), a smooth transition is introduced, in order to avoid abrupt changes in stiffness and enhance the numerical performance and robustness of the multidirectional procedure. Moreover, the mesh-objectivity of the numerical solutions is ensured by resorting to a nonlocal regularization technique, based on the adoption of damage variables driven by an averaged elastic strain tensor. To perform the averaging of the strain tensor, an internal length lRG is considered in the continuum. The strategy chosen to define the parameters affecting the softening behaviour consists in the modification of the local softening law on the base of the internal length, with the intent of ensuring the proper evaluation of the correct fracture energy Gf. The adequacy of the proposed constitutive model in reproducing experimental results is proven for both monotonic and cyclic loading conditions. Under monotonic loads, unreinforced concrete notched elements subjected to pure tension, pure bending and mixed-mode bending are studied. The two examples of application involving cyclic loads, a masonry and a reinforced concrete wall under in-plane cyclic shear, constitute a validation of the multidirectional damage approach, showing how the suitable representation of unilateral effects and permanent deformations is essential to model the observed structural response in terms of maximum resistance and dissipation capacity. The effectiveness of the regularized damage formulation is proven by successfully studying a masonry arch and reinforced and unreinforced concrete elements. 
Besides the validation of the numerical results with experimental or analytical data, each application is exploited to highlight one or more features of the formulation: the mesh-size and mesh-bias independence of the results, the effect of the choice of the variable to be averaged, the possibility to reproduce structural size effects, the influence of the internal length lRG. On this latter aspect, the almost null dependence of the regularized solutions on the internal length in terms of force-displacement curves, achieved thanks to the calibration strategy adopted to define the energy dissipation, suggests the interpretation of the internal length as a regularization parameter. On the one hand, this implies an analogy between the role played by the nonlocal internal length in a nonlocal model and the one’s of the mesh size in the crack band approach (Bažant and Oh, 1983). On the other hand, this translates in the versatility of the regularized damage model, which requires only the identification of the standard material properties (elastic constants, fracture energies and strengths). Finally, the d+/d− damage model is successfully applied to the study of a three-span masonry arch bridge subjected to a concentrated vertical load, in order to evaluate its carrying capacity and its failure mechanism. Numerical issues, usually neglected in large-scale applications, are also addressed proving the reliability of the regularized approach to provide mesh-independent results and its applicability.
9

Olaya, Bucaro Orlando. „Exploring relevant features associated with measles nonvaccination using a machine learning approach“. Thesis, Stockholms universitet, Sociologiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-184577.

Annotation:
Measles is resurging around the world, and large outbreaks have been observed in several parts of the world. In 2019 the Philippines suffered a major measles outbreak partly due to low immunization rates in certain parts of the population. There is currently limited research on how to identify and reach pockets of unvaccinated individuals effectively. This thesis aims to find important factors associated with non-vaccination against measles using a machine learning approach, using data from the 2017 Philippine National Demographic and Health Survey. In the analyzed sample (n = 4006), 74.84% of children aged 9 months to 3 years had received their first dose of measles vaccine, and 25.16% had not. Logistic regression with all 536 candidate features was fit with the regularized regression method Elastic Net, capable of automatically selecting relevant features. The final model consists of 32 predictors, and these are related to access and contact with healthcare, the region of residence, wealth, education, religion, ethnicity, sanitary conditions, the ideal number of children, husbands’ occupation, age and weight of the child, and features relating to pre and postnatal care. Total accuracy of the final model is 79.02% [95% confidence interval: (76.37%, 81.5%)], sensitivity: 97.73%, specificity: 23.41% and area under receiver operating characteristic curve: 0.81. The results indicate that socioeconomic differences determine to a degree measles vaccination. However, the difficulty in classifying non-vaccinated children, the low specificity, using only health and demographic characteristics suggests other factors than what is available in the analyzed data, possibly vaccine hesitation, could have a large effect on measles non-vaccination. Based on the results, efforts should be made to ensure access to facility-based delivery for all mothers regardless of socioeconomic status, to improve measles vaccination rates in the Philippines.
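
A minimal Python sketch of an elastic-net-penalized logistic regression of the kind described above, with the coefficients that survive the penalty read off as the selected features; the synthetic data and hyperparameters are assumptions and do not reproduce the survey analysis.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegressionCV

    # stand-in for 536 candidate health and demographic features
    X, y = make_classification(n_samples=4000, n_features=536, n_informative=30,
                               weights=[0.25, 0.75], random_state=0)

    model = LogisticRegressionCV(penalty="elasticnet", solver="saga",
                                 l1_ratios=[0.5], Cs=5, cv=5, max_iter=5000,
                                 scoring="roc_auc").fit(X, y)

    selected = np.flatnonzero(model.coef_[0])      # features kept by the penalty
    print(f"{selected.size} of {X.shape[1]} candidate features retained")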
10

Salgado, Patarroyo Ivan Camilo. „Spatially Regularized Spherical Reconstruction: A Cross-Domain Filtering Approach for HARDI Signals“. Thesis, 2013. http://hdl.handle.net/10012/7847.

Annotation:
Despite the immense advances of science and medicine in recent years, several aspects regarding the physiology and the anatomy of the human brain are yet to be discovered and understood. A particularly challenging area in the study of human brain anatomy is that of brain connectivity, which describes the intricate means by which different regions of the brain interact with each other. The study of brain connectivity is deeply dependent on understanding the organization of white matter. The latter is predominantly comprised of bundles of myelinated axons, which serve as connecting pathways between approximately 10¹¹ neurons in the brain. Consequently, the delineation of fine anatomical details of white matter represents a highly challenging objective, and it is still an active area of research in the fields of neuroimaging and neuroscience, in general. Recent advances in medical imaging have resulted in a quantum leap in our understanding of brain anatomy and functionality. In particular, the advent of diffusion magnetic resonance imaging (dMRI) has provided researchers with a non-invasive means to infer information about the connectivity of the human brain. In a nutshell, dMRI is a set of imaging tools which aim at quantifying the process of water diffusion within the human brain to delineate the complex structural configurations of the white matter. Among the existing tools of dMRI high angular resolution diffusion imaging (HARDI) offers a desirable trade-off between its reconstruction accuracy and practical feasibility. In particular, HARDI excels in its ability to delineate complex directional patterns of the neural pathways throughout the brain, while remaining feasible for many clinical applications. Unfortunately, HARDI presents a fundamental trade-off between its ability to discriminate crossings of neural fiber tracts (i.e., its angular resolution) and the signal-to-noise ratio (SNR) of its associated images. Consequently, given that the angular resolution is of fundamental importance in the context of dMRI reconstruction, there is a need for effective algorithms for de-noising HARDI data. In this regard, the most effective de-noising approaches have been observed to be those which exploit both the angular and the spatial-domain regularity of HARDI signals. Accordingly, in this thesis, we propose a formulation of the problem of reconstruction of HARDI signals which incorporates regularization assumptions on both their angular and their spatial domains, while leading to a particularly simple numerical implementation. Experimental evidence suggests that the resulting cross-domain regularization procedure outperforms many other state of the art HARDI de-noising methods. Moreover, the proposed implementation of the algorithm supersedes the original reconstruction problem by a sequence of efficient filters which can be executed in parallel, suggesting its computational advantages over alternative implementations.

Book chapters on the topic "Regularized approaches"

1

Pillonetto, Gianluigi, Tianshi Chen, Alessandro Chiuso, Giuseppe De Nicolao und Lennart Ljung. „Regularization in Reproducing Kernel Hilbert Spaces“. In Regularized System Identification, 181–246. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95860-2_6.

Annotation:
Methods for obtaining a function g in a relationship y = g(x) from observed samples of y and x are the building blocks for black-box estimation. The classical parametric approach discussed in the previous chapters uses a function model that depends on a finite-dimensional vector, like, e.g., a polynomial model. We have seen that an important issue is the model order choice. This chapter describes some regularization approaches which permit to reconcile flexibility of the model class with well-posedness of the solution exploiting an alternative paradigm to traditional parametric estimation. Instead of constraining the unknown function to a specific parametric structure, the function will be searched over a possibly infinite-dimensional functional space. Overfitting and ill-posedness are circumvented by using reproducing kernel Hilbert spaces as hypothesis spaces and related norms as regularizers. Such kernel-based approaches thus permit to cast all the regularized estimators based on quadratic penalties encountered in the previous chapters as special cases of a more general theory.
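
A compact illustration of the chapter's theme is kernel ridge regression, i.e., regularized least squares in a reproducing kernel Hilbert space, where the representer theorem reduces the infinite-dimensional search to a finite linear system. The Gaussian kernel width, regularization parameter, and test function in the sketch below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.uniform(-3, 3, size=60)
    y = np.sinc(x) + rng.normal(scale=0.1, size=60)        # noisy samples of g

    def gauss_kernel(a, b, width=0.5):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

    lam = 1e-2                                             # regularization parameter
    K = gauss_kernel(x, x)
    coef = np.linalg.solve(K + lam * len(x) * np.eye(len(x)), y)   # representer theorem

    x_grid = np.linspace(-3, 3, 200)
    g_hat = gauss_kernel(x_grid, x) @ coef                 # estimate of g on a grid
    print("training RMS residual:", np.sqrt(np.mean((K @ coef - y) ** 2)))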
2

Pillonetto, Gianluigi, Tianshi Chen, Alessandro Chiuso, Giuseppe De Nicolao und Lennart Ljung. „Numerical Experiments and Real World Cases“. In Regularized System Identification, 343–69. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95860-2_9.

Annotation:
This chapter collects some numerical experiments to test the performance of kernel-based approaches for discrete-time linear system identification. Using Monte Carlo simulations, we will compare the performance of kernel-based methods with the classical PEM approaches described in Chap. 2. Simulated and real data are included, concerning a robotic arm, a hairdryer and a problem of temperature prediction. We conclude the chapter by introducing the so-called multi-task learning where several functions (tasks) are simultaneously estimated. This problem is significant if the tasks are related to each other so that measurements taken on a function are informative with respect to the other ones. A problem involving real pharmacokinetics data, related to the so-called population approaches, is then illustrated. Results will often be illustrated by using MATLAB boxplots. As already mentioned in Sect. 7.2, when commenting Fig. 7.8, the median is given by the central mark while the box edges are the 25th and 75th percentiles. The whiskers extend to the most extreme fits not seen as outliers. Then, the outliers are plotted individually.
3

Graham, Lamar A. „Chapter 4. Derived verbs and future-conditional stem regularization in written Spanish in synchrony and diachrony“. In Innovative Approaches to Research in Hispanic Linguistics, 82–105. Amsterdam: John Benjamins Publishing Company, 2023. http://dx.doi.org/10.1075/ihll.38.04gra.

Annotation:
Spanish verbs derived via prefixation prescriptively retain the morphological complexity of their root verbs. However, some verbs derived from decir and hacer show allomorphic variation in the future and conditional, which is documented by the RAE for decir but not at all for hacer. The results of this study of historical variation suggest decreased morphological transparency of some verbs but not of others. Verbs derived from hacer continue to resist regularization, with the notable exception of satisfacer. The set of decir-derived verbs is much more complex in its tendencies. This may be attributable to either (a) perceived opacity of contradecir or (b) increased analogical pressure from maldecir and bendecir which are completely regularized in modern usage. The presence of regularized bendecir and its possible effects on etymologically related verbs contrasts with the resistance of regularization of hacer-derived verbs and the consequent absence of analogical pressure.
4

Ito, Kazufumi, und Bangti Jin. „Regularized Linear Inversion with Randomized Singular Value Decomposition“. In Mathematical and Numerical Approaches for Multi-Wave Inverse Problems, 45–72. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-48634-1_5.

5

Lombardi, Michele, Federico Baldo, Andrea Borghesi und Michela Milano. „An Analysis of Regularized Approaches for Constrained Machine Learning“. In Trustworthy AI - Integrating Learning, Optimization and Reasoning, 112–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73959-1_11.

6

de Campos Souza, Paulo Vitor, Augusto Junio Guimaraes, Vanessa Souza Araujo, Thiago Silva Rezende und Vinicius Jonathan Silva Araujo. „Using Fuzzy Neural Networks Regularized to Support Software for Predicting Autism in Adolescents on Mobile Devices“. In Smart Network Inspired Paradigm and Approaches in IoT Applications, 115–33. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8614-5_7.

7

Schulz, Volker H., und Kathrin Welker. „Shape Optimization for Variational Inequalities of Obstacle Type: Regularized and Unregularized Computational Approaches“. In International Series of Numerical Mathematics, 397–420. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-79393-7_16.

8

Pillonetto, Gianluigi, Tianshi Chen, Alessandro Chiuso, Giuseppe De Nicolao und Lennart Ljung. „Bias“. In Regularized System Identification, 1–15. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95860-2_1.

Annotation:
Adopting a quadratic loss, the performance of an estimator can be measured in terms of its mean squared error which decomposes into a variance and a bias component. This introductory chapter contains two linear regression examples which describe the importance of designing estimators able to well balance these two components. The first example will deal with estimation of the means of independent Gaussians. We will review the classical least squares approach which, at first sight, could appear the most appropriate solution to the problem. Remarkably, we will instead see that this unbiased approach can be dominated by a particular biased estimator, the so-called James–Stein estimator. Within this book, this represents the first example of regularized least squares, an estimator which will play a key role in subsequent chapters. The second example will deal with a classical system identification problem: impulse response estimation. A simple numerical experiment will show how the variance of least squares can be too large, hence leading to unacceptable system reconstructions. The use of an approach, known as ridge regression, will give first simple intuitions on the usefulness of regularization in the system identification scenario.
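
The James-Stein example mentioned above is easy to reproduce numerically: shrinking the observed vector toward zero lowers the total squared-error risk relative to plain least squares whenever the dimension exceeds two. The dimension, noise level, and number of Monte Carlo trials in this sketch are assumptions.

    import numpy as np

    rng = np.random.default_rng(6)
    p, sigma, trials = 20, 1.0, 2000
    theta = rng.normal(size=p)                         # true means

    risk_ls = risk_js = 0.0
    for _ in range(trials):
        y = theta + sigma * rng.normal(size=p)         # least-squares estimate is y itself
        shrink = max(0.0, 1.0 - (p - 2) * sigma**2 / np.sum(y**2))
        theta_js = shrink * y                          # positive-part James-Stein estimate
        risk_ls += np.sum((y - theta) ** 2) / trials
        risk_js += np.sum((theta_js - theta) ** 2) / trials

    print(f"least-squares risk ~ {risk_ls:.2f}, James-Stein risk ~ {risk_js:.2f}")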
9

Pillonetto, Gianluigi, Tianshi Chen, Alessandro Chiuso, Giuseppe De Nicolao und Lennart Ljung. „Bayesian Interpretation of Regularization“. In Regularized System Identification, 95–134. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95860-2_4.

Annotation:
In the previous chapter, it has been shown that the regularization approach is particularly useful when information contained in the data is not sufficient to obtain a precise estimate of the unknown parameter vector and standard methods, such as least squares, yield poor solutions. The fact itself that an estimate is regarded as poor suggests the existence of some form of prior knowledge on the degree of acceptability of candidate solutions. It is this knowledge that guides the choice of the regularization penalty that is added as a corrective term to the usual sum of squared residuals. In the previous chapters, this design process has been described in a deterministic setting where only the measurement noises are random. In this chapter, we will see that an alternative formalization of prior information is obtained if a subjective/Bayesian estimation paradigm is adopted. The major difference is that the parameters, rather than being regarded as deterministic, are now treated as a random vector. This stochastic setting permits the definition of new powerful tools for both priors selection, e.g., through the maximum entropy principle, and for regularization parameters tuning, e.g., through the empirical Bayes approach and its connection with the concept of equivalent degrees of freedom.
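
The Bayesian reading of regularization can be checked in a few lines: with a Gaussian prior of covariance (sigma^2/lambda) I on the parameters and Gaussian noise of variance sigma^2, the posterior mean coincides with the ridge-regularized least-squares estimate with penalty lambda. The problem sizes in the sketch below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    n, p, sigma, lam = 40, 10, 0.3, 2.0
    H = rng.normal(size=(n, p))
    x_true = rng.normal(size=p)
    y = H @ x_true + sigma * rng.normal(size=n)

    # regularized (ridge) least squares
    x_ridge = np.linalg.solve(H.T @ H + lam * np.eye(p), H.T @ y)

    # Bayesian posterior mean with prior covariance P = (sigma^2 / lam) I
    P = (sigma**2 / lam) * np.eye(p)
    x_post = P @ H.T @ np.linalg.solve(H @ P @ H.T + sigma**2 * np.eye(n), y)

    print("max difference between ridge and posterior mean:",
          np.abs(x_ridge - x_post).max())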
10

Luo, Ruiyan, Alejandra D. Herrera-Reyes, Yena Kim, Susan Rogowski, Diana White und Alexandra Smirnova. „Estimation of Time-Dependent Transmission Rate for COVID-19 SVIRD Model Using Predictor–Corrector Algorithm“. In Mathematical Modeling for Women’s Health, 213–37. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58516-6_7.

Annotation:
Stable parameter estimation is an ongoing challenge within biomathematics, especially in epidemiology. Oftentimes epidemiological models are composed of large numbers of equations and parameters. Due to high dimensionality, classic parameter estimation approaches, such as least square fitting, are computationally expensive. Additionally, the presence of observational noise and reporting errors that accompany real-time data can make these parameter estimation problems ill-posed and unstable. The recent COVID-19 pandemic highlighted the need for efficient parameter estimation tools. In this chapter, we develop a modified version of a regularized predictor–corrector algorithm aimed at stable low-cost reconstruction of infectious disease parameters. This method is applied to a new compartmental model describing COVID-19 dynamics, which accounts for vaccination and immunity loss (from vaccinated and recovered populations). Numerical simulations are carried out with synthetic and real data for the COVID-19 pandemic. Based on the reconstructed disease transmission rates (and known mitigation measures), observations on historical trends of COVID-19 in the states of Georgia and California are presented. Such observations can be used to provide insights into future COVID policies.

Conference papers on the topic „Regularized approaches“

1

Safari, Habibollah, and Mona Bavarian. „Enhancing Polymer Reaction Engineering Through the Power of Machine Learning“. In Foundations of Computer-Aided Process Design, 367–72. Hamilton, Canada: PSE Press, 2024. http://dx.doi.org/10.69997/sct.157792.

Annotation:
Copolymers are commonplace in various industries. Nevertheless, fine-tuning their properties carries significant cost and effort, so the ability to predict polymer properties a priori can significantly reduce costs and the need for extensive experimentation. Given that the physical and chemical characteristics of copolymers are correlated with molecular arrangement and chain topology, understanding the reactivity ratios of monomers, which determine the copolymer composition and the sequence distribution of monomers in a chain, is important for accelerating research and cutting R&D costs. In this study, the prediction accuracy of two Artificial Neural Network (ANN) approaches, namely the Multi-layer Perceptron (MLP) and the Graph Attention Network (GAT), is compared. The results highlight the potency and accuracy of intrinsically interpretable ML approaches in predicting the molecular structures of copolymers. Our data indicate that even a well-regularized MLP cannot predict the reactivity ratios of copolymers as accurately as a GAT. This is attributed to the compatibility of the GAT with the data structure of molecules, which are naturally represented as graphs.
2

Brault, Dylan, Thomas Olivier, Ferréol Soulez and Corinne Fournier. „Automation of Gram stain imaging with multispectral in-line holography“. In Digital Holography and Three-Dimensional Imaging, M3B.2. Washington, D.C.: Optica Publishing Group, 2024. http://dx.doi.org/10.1364/dh.2024.m3b.2.

Annotation:
We propose an approach to automate the imaging of stained microbiological samples using multispectral in-line holography. The approach is based on a self-calibrated, regularized inverse-problem reconstruction.
3

Budillon, Alessandra, Loic Denis, Clement Rambour, Gilda Schirinzi and Florence Tupin. „Regularized SAR Tomography Approaches“. In IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2020. http://dx.doi.org/10.1109/igarss39084.2020.9323807.

4

Xiao, Yichi, Zhe Li, Tianbao Yang and Lijun Zhang. „SVD-free Convex-Concave Approaches for Nuclear Norm Regularization“. In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/436.

Annotation:
Minimizing a convex function of matrices regularized by the nuclear norm arises in many applications such as collaborative filtering and multi-task learning. In this paper, we study the general setting where the convex function can be non-smooth. When the size of the data matrix, denoted by m x n, is very large, existing optimization methods are inefficient because in each iteration they need to perform a singular value decomposition (SVD), which takes O(m^2 n) time. To reduce the computational cost, we exploit the dual characterization of the nuclear norm to introduce a convex-concave optimization problem and design a subgradient-based algorithm that avoids the SVD. In each iteration, the proposed algorithm only computes the largest singular vector, reducing the time complexity from O(m^2 n) to O(mn). To the best of our knowledge, this is the first SVD-free convex optimization approach for nuclear-norm regularized problems that does not rely on the smoothness assumption. Theoretical analysis shows that the proposed algorithm converges at an optimal O(1/√T) rate, where T is the number of iterations. We also extend our algorithm to the stochastic case, where only stochastic subgradients of the convex function are available, and to a special case that contains an additional non-smooth regularizer (e.g., an L1-norm regularizer). We conduct experiments on robust low-rank matrix approximation and link prediction to demonstrate the efficiency of our algorithms.
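The computational primitive behind this idea is easy to illustrate: each iteration only needs the leading singular pair of an m x n matrix, which power iteration delivers in roughly O(mn) work per sweep instead of the O(m^2 n) cost of a full SVD. The sketch below is a generic illustration of that primitive, not the authors' complete algorithm.

```python
import numpy as np

# Generic power iteration for the leading singular pair of A (illustration only):
# alternating matrix-vector products cost O(mn) per sweep, versus O(m^2 n) for a full SVD.
def top_singular_pair(A, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = A @ v
        u /= np.linalg.norm(u)
        v = A.T @ u
        v /= np.linalg.norm(v)
    sigma = u @ A @ v        # leading singular value at convergence
    return sigma, u, v

A = np.random.default_rng(1).standard_normal((500, 200))
sigma1, u1, v1 = top_singular_pair(A)
print("power iteration:", sigma1)
print("full SVD:       ", np.linalg.svd(A, compute_uv=False)[0])
```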
5

Meshgi, Kourosh, Maryam Sadat Mirzaei and Satoshi Sekine. „Uncertainty Regularized Multi-Task Learning“. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.wassa-1.8.

6

Zhang, Lefei, Qian Zhang, Bo Du, Jane You and Dacheng Tao. „Adaptive Manifold Regularized Matrix Factorization for Data Clustering“. In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/475.

Annotation:
Data clustering is the task of grouping data samples into clusters based on the relationships between samples and the structures hidden in the data, and it is a fundamental and important topic in data mining and machine learning. In the literature, spectral clustering is one of the most popular approaches and has acquired many variants in recent years. However, the performance of spectral clustering is determined by the affinity matrix, which is usually computed by a predefined model (e.g., a Gaussian kernel function) with a carefully tuned parameter combination and may be far from optimal in practice. In this paper, we cast clustering of the observed data as a robust matrix factorization problem and simultaneously learn an affinity matrix to regularize the factorization. The solution of the proposed adaptive manifold regularized matrix factorization (AMRMF) is obtained by a novel Augmented Lagrangian Multiplier (ALM) based algorithm. Experimental results on standard clustering datasets demonstrate superior performance over existing alternatives.
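As background, the manifold (graph) regularizer used by methods of this kind typically takes the Laplacian form sketched below; this is a generic illustration of the penalty term, not the AMRMF objective or its ALM solver, and the variable names are assumptions.

```python
import numpy as np

# Generic manifold-regularization term (illustration only): a graph Laplacian L
# built from an affinity matrix W penalizes factor rows that differ across
# strongly connected samples, since trace(H^T L H) = 0.5 * sum_ij W_ij ||H_i - H_j||^2.
def manifold_penalty(H, W):
    """H: n x k factor matrix, W: n x n symmetric affinity matrix."""
    L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
    return np.trace(H.T @ L @ H)
```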
7

Rajput, Shayam Singh, Deepak Rai and K. V. Arya. „Robust Image watermarking using Tikhonov regularized image reconstruction technique“. In 2024 IEEE International Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI). IEEE, 2024. http://dx.doi.org/10.1109/iatmsi60426.2024.10502853.

8

Narayan, Jyotindra, Hassène Gritli and Santosha K. Dwivedy. „Lower Limb Joint Torque Estimation via Bayesian Regularized Backpropagation Neural Networks“. In 2024 IEEE International Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI). IEEE, 2024. http://dx.doi.org/10.1109/iatmsi60426.2024.10502709.

9

Li, Jianxin, Haoyi Zhou, Pengtao Xie and Yingchun Zhang. „Improving the Generalization Performance of Multi-class SVM via Angular Regularization“. In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/296.

Annotation:
In the multi-class support vector machine (MSVM) for classification, one core issue is regularizing the coefficient vectors to reduce overfitting. Various regularizers have been proposed, such as the L2, L1, and trace norms. In this paper, we introduce a new type of regularization approach, angular regularization, which encourages the coefficient vectors to have larger pairwise angles so that class regions can be widened to flexibly accommodate unseen samples. We propose a novel angular regularizer based on the singular values of the coefficient matrix, where uniformity of the singular values reduces the correlation among different classes and drives the angles between coefficient vectors to increase. In a generalization error analysis, we show that decreasing this regularizer effectively reduces the generalization error bound. On various datasets, we demonstrate the efficacy of the regularizer in reducing overfitting.
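To give a rough feel for the idea, one simple surrogate that rewards uniform singular values of the coefficient matrix, and hence larger angles between the class coefficient vectors, is to penalize their spread; the sketch below is an illustrative stand-in, not the paper's exact regularizer.

```python
import numpy as np

# Illustrative stand-in (not the paper's regularizer): the variance of the
# singular values of W is small when the K class coefficient vectors (columns
# of W) have comparable norms and are close to mutually orthogonal, i.e.,
# have large pairwise angles.
def singular_value_uniformity_penalty(W):
    """W: d x K matrix whose columns are the per-class coefficient vectors."""
    s = np.linalg.svd(W, compute_uv=False)
    return np.var(s)
```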
10

Tomboulides, A., S. M. Aithal, P. F. Fischer, E. Merzari and A. Obabko. „A Novel Variant of the K-ω URANS Model for Spectral Element Methods: Implementation, Verification, and Validation in Nek5000“. In ASME 2014 4th Joint US-European Fluids Engineering Division Summer Meeting collocated with the ASME 2014 12th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/fedsm2014-21926.

Annotation:
Unsteady Reynolds-averaged Navier-Stokes (URANS) models can provide good engineering estimates of wall shear and heat flux at a significantly lower computational cost than large-eddy simulations. In this paper, we discuss the implementation of two novel variants of the k-ω turbulence model, the regularized standard k-ω model and the regularized k-ω SST model, in the spectral element code Nek5000. We present a formulation for the specific dissipation rate (ω) in the standard k-ω model that obviates the need for ad hoc boundary conditions on ω at the wall. The regularized approach is designed to lead to grid-independent solutions as the resolution is increased. We present a detailed comparison of these novel methods for several standard problems, including the T-junction benchmark problem. The two approaches compare very well with the standard k-ω model and with experimental data for all the cases studied.

Reports of organizations on the topic „Regularized approaches“

1

Da Gama Torres, Haroldo. Environmental Implications of Peri-urban Sprawl and the Urbanization of Secondary Cities in Latin America. Inter-American Development Bank, March 2011. http://dx.doi.org/10.18235/0008841.

Annotation:
This paper examines the environmental and social implications of peri-urban growth in small to medium-sized cities in Latin America and the Caribbean and proposes approaches to address this challenge. Among other recommendations, it argues that cities should pursue strategies for compact growth and that efforts to regularize existing irregular settlements should be strongly supported.
2

S.R. Hudson. A Regularized Approach for Solving Magnetic Differential Equations and a Revised Iterative Equilibrium Algorithm. Office of Scientific and Technical Information (OSTI), October 2010. http://dx.doi.org/10.2172/990749.
