
Journal articles on the topic 'Implicit regularization'


Consult the top 50 journal articles for your research on the topic 'Implicit regularization.'


1

Ceng, Lu-Chuan, Qamrul Hasan Ansari, and Ching-Feng Wen. "Implicit Relaxed and Hybrid Methods with Regularization for Minimization Problems and Asymptotically Strict Pseudocontractive Mappings in the Intermediate Sense." Abstract and Applied Analysis 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/854297.

Abstract:
We first introduce an implicit relaxed method with regularization for finding a common element of the set of fixed points of an asymptotically strict pseudocontractive mapping S in the intermediate sense and the set of solutions of the minimization problem (MP) for a convex and continuously Fréchet differentiable functional in the setting of Hilbert spaces. The implicit relaxed method with regularization is based on three well-known methods: the extragradient method, the viscosity approximation method, and the gradient projection algorithm with regularization. We derive a weak convergence theorem for two sequences generated by this method. On the other hand, we also prove a new strong convergence theorem by an implicit hybrid method with regularization for the MP and the mapping S. The implicit hybrid method with regularization is based on four well-known methods: the CQ method, the extragradient method, the viscosity approximation method, and the gradient projection algorithm with regularization.
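The gradient projection algorithm with regularization that both schemes build on can be sketched in a few lines. This is a minimal illustrative sketch assuming a toy quadratic objective, a Euclidean-ball constraint, and example step sizes, not the authors' relaxed or hybrid method:

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Projection onto a Euclidean ball, standing in for the closed convex set C.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def regularized_gradient_projection(grad_f, x0, steps=500, lam=0.1):
    # x_{k+1} = P_C(x_k - lam * (grad_f(x_k) + alpha_k * x_k)),
    # with a Tikhonov weight alpha_k that vanishes as k grows.
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        alpha_k = 1.0 / (k + 1)
        x = project_ball(x - lam * (grad_f(x) + alpha_k * x))
    return x

# Minimize f(x) = 0.5 * ||x - b||^2 over the unit ball for a point b outside it.
b = np.array([3.0, 4.0])
sol = regularized_gradient_projection(lambda x: x - b, np.zeros(2))
```

With b = (3, 4) outside the unit ball, the iterates settle on the projection of b onto the ball, (0.6, 0.8).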
2

Fargnoli, H. G., A. P. Baêta Scarpelli, L. C. T. Brito, B. Hiller, Marcos Sampaio, M. C. Nemes, and A. A. Osipov. "Ultraviolet and Infrared Divergences in Implicit Regularization: A Consistent Approach." Modern Physics Letters A 26, no. 04 (February 10, 2011): 289–302. http://dx.doi.org/10.1142/s0217732311034773.

Abstract:
Implicit Regularization is a four-dimensional regularization initially conceived to treat ultraviolet divergences. It has been successfully tested in several instances in the literature, more specifically in those where Dimensional Regularization does not apply. In the present contribution, we extend the method to handle infrared divergences as well. We show that the essential steps which rendered Implicit Regularization adequate in the case of ultraviolet divergences have their counterpart for infrared ones. Moreover, we show that a new scale appears, typically an infrared scale which is completely independent of the ultraviolet one. Examples are given.
3

Sampaio, Marcos, A. P. Baêta Scarpelli, J. E. Ottoni, and M. C. Nemes. "Implicit Regularization and Renormalization of QCD." International Journal of Theoretical Physics 45, no. 2 (February 2006): 436–57. http://dx.doi.org/10.1007/s10773-006-9045-z.

4

Al-Tam, Faroq, António dos Anjos, and Hamid Reza Shahbazkia. "Iterative illumination correction with implicit regularization." Signal, Image and Video Processing 10, no. 5 (December 11, 2015): 967–74. http://dx.doi.org/10.1007/s11760-015-0847-4.

5

Dandi, Yatin, Luis Barba, and Martin Jaggi. "Implicit Gradient Alignment in Distributed and Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6454–62. http://dx.doi.org/10.1609/aaai.v36i6.20597.

Abstract:
A major obstacle to achieving global convergence in distributed and federated learning is the misalignment of gradients across clients or mini-batches due to heterogeneity and stochasticity of the distributed data. In this work, we show that data heterogeneity can in fact be exploited to improve generalization performance through implicit regularization. One way to alleviate the effects of heterogeneity is to encourage the alignment of gradients across different clients throughout training. Our analysis reveals that this goal can be accomplished by utilizing the right optimization method that replicates the implicit regularization effect of SGD, leading to gradient alignment as well as improvements in test accuracies. Since the existence of this regularization in SGD completely relies on the sequential use of different mini-batches during training, it is inherently absent when training with large mini-batches. To obtain the generalization benefits of this regularization while increasing parallelism, we propose a novel GradAlign algorithm that induces the same implicit regularization while allowing the use of arbitrarily large batches in each update. We experimentally validate the benefits of our algorithm in different distributed and federated learning settings.
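As a rough illustration of the alignment idea (not the paper's GradAlign algorithm), one can track how far per-client gradients sit from their mean while running averaged updates; the two-client least-squares setup and all names below are invented for the sketch:

```python
import numpy as np

def lsq_grad(w, X, y):
    # Gradient of one client's least-squares loss ||X w - y||^2 / (2 n).
    return X.T @ (X @ w - y) / len(y)

def gradient_misalignment(grads):
    # 0.5 * sum_i ||g_i - g_mean||^2: zero exactly when all clients agree.
    g_mean = np.mean(grads, axis=0)
    return 0.5 * float(sum(np.sum((g - g_mean) ** 2) for g in grads))

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(20, 3)), rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
clients = [(X1, X1 @ w_true), (X2, X2 @ w_true)]

w = np.zeros(3)
for _ in range(500):
    grads = [lsq_grad(w, X, y) for X, y in clients]
    w -= 0.2 * np.mean(grads, axis=0)  # FedAvg-style averaged step

final_grads = [lsq_grad(w, X, y) for X, y in clients]
```

Since both clients share the same underlying solution here, the misalignment vanishes at the optimum; heterogeneous client objectives would leave it positive, which is the quantity the alignment-encouraging methods act on.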
6

Lin, Huangxing, Yihong Zhuang, Xinghao Ding, Delu Zeng, Yue Huang, Xiaotong Tu, and John Paisley. "Self-Supervised Image Denoising Using Implicit Deep Denoiser Prior." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1586–94. http://dx.doi.org/10.1609/aaai.v37i2.25245.

Abstract:
We devise a new regularization for denoising with self-supervised learning. The regularization uses a deep image prior learned by the network, rather than a traditional predefined prior. Specifically, we treat the output of the network as a ``prior'' that we again denoise after ``re-noising.'' The network is updated to minimize the discrepancy between the twice-denoised image and its prior. We demonstrate that this regularization enables the network to learn to denoise even if it has not seen any clean images. The effectiveness of our method is based on the fact that CNNs naturally tend to capture low-level image statistics. Since our method utilizes the image prior implicitly captured by the deep denoising CNN to guide denoising, we refer to this training strategy as an Implicit Deep Denoiser Prior (IDDP). IDDP can be seen as a mixture of learning-based methods and traditional model-based denoising methods, in which regularization is adaptively formulated using the output of the network. We apply IDDP to various denoising tasks using only observed corrupted data and show that it achieves better denoising results than other self-supervised denoising methods.
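The denoise, re-noise, denoise-again loop can be sketched as below; the box-filter `toy_denoiser` is a hypothetical stand-in for the denoising CNN, and in the actual method the network's weights would be updated to minimize this loss:

```python
import numpy as np

def toy_denoiser(img):
    # Placeholder for the denoising CNN: a 3x3 box filter (illustrative only).
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di : 1 + di + h, 1 + dj : 1 + dj + w]
    return out / 9.0

def iddp_loss(noisy, sigma, rng):
    # One IDDP-style step: denoise -> treat the output as the "prior" ->
    # re-noise it -> denoise again -> penalize the discrepancy.
    prior = toy_denoiser(noisy)
    renoised = prior + rng.normal(scale=sigma, size=prior.shape)
    twice_denoised = toy_denoiser(renoised)
    return float(np.mean((twice_denoised - prior) ** 2))

rng = np.random.default_rng(0)
noisy = rng.normal(size=(16, 16))  # stand-in for an observed noisy image
loss = iddp_loss(noisy, sigma=0.1, rng=rng)
```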
7

Liu, Yuan, Yanzhi Song, Zhouwang Yang, and Jiansong Deng. "Implicit surface reconstruction with total variation regularization." Computer Aided Geometric Design 52-53 (March 2017): 135–53. http://dx.doi.org/10.1016/j.cagd.2017.02.005.

8

Li, Zhemin, Tao Sun, Hongxia Wang, and Bao Wang. "Adaptive and Implicit Regularization for Matrix Completion." SIAM Journal on Imaging Sciences 15, no. 4 (November 22, 2022): 2000–2022. http://dx.doi.org/10.1137/22m1489228.

9

Belytschko, T., S. P. Xiao, and C. Parimi. "Topology optimization with implicit functions and regularization." International Journal for Numerical Methods in Engineering 57, no. 8 (2003): 1177–96. http://dx.doi.org/10.1002/nme.824.

10

Rosado, R. J. C., A. Cherchiglia, M. Sampaio, and B. Hiller. "An Implicit Regularization Approach to Chiral Models." Acta Physica Polonica B Proceedings Supplement 17, no. 6 (2024): 1. http://dx.doi.org/10.5506/aphyspolbsupp.17.6-a15.

11

Arias-Perdomo, Dafne Carolina, Adriano Cherchiglia, Brigitte Hiller, and Marcos Sampaio. "A Brief Review of Implicit Regularization and Its Connection with the BPHZ Theorem." Symmetry 13, no. 6 (May 27, 2021): 956. http://dx.doi.org/10.3390/sym13060956.

Abstract:
Quantum Field Theory, as the keystone of particle physics, has offered great insights into deciphering the core of Nature. Despite its striking success, by adhering to local interactions, Quantum Field Theory suffers from the appearance of divergent quantities in intermediate steps of the calculation, which entails the need for some regularization/renormalization prescription. As an alternative to traditional methods based on the analytic extension of the space–time dimension, frameworks that stay in the physical dimension have emerged; Implicit Regularization is one among them. We briefly review the method, aiming to illustrate how Implicit Regularization complies with the BPHZ theorem, which implies that it respects unitarity and locality to arbitrary loop order. We also pedagogically discuss how the method complies with gauge symmetry using one- and two-loop examples in QED and QCD.
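At the algebraic heart of the method is an identity that, applied recursively to a propagator, separates the divergent content of an amplitude without evaluating the divergent integrals explicitly; schematically, for a massive propagator (a standard form in the IReg literature):

```latex
\frac{1}{(k+p)^2 - m^2}
  = \frac{1}{k^2 - m^2}
  - \frac{p^2 + 2\,p\cdot k}{(k^2 - m^2)\left[(k+p)^2 - m^2\right]},
\qquad
I_{\log}(m^2) \equiv \int \frac{d^4 k}{(2\pi)^4}\,\frac{1}{(k^2 - m^2)^2}.
```

Recursive application isolates the divergences in basic integrals of the loop momentum only, such as $I_{\log}(m^2)$, leaving all external-momentum dependence in finite integrals.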
12

Dias, E. W., A. P. Baêta Scarpelli, L. C. T. Brito, and H. G. Fargnoli. "Multiloop calculations with implicit regularization in massless theories." Brazilian Journal of Physics 40, no. 2 (June 2010): 228–34. http://dx.doi.org/10.1590/s0103-97332010000200018.

13

Karageorgos, Konstantinos, Anastasios Dimou, Federico Alvarez, and Petros Daras. "Implicit and Explicit Regularization for Optical Flow Estimation." Sensors 20, no. 14 (July 10, 2020): 3855. http://dx.doi.org/10.3390/s20143855.

Abstract:
In this paper, two novel and practical regularizing methods are proposed to improve existing neural network architectures for monocular optical flow estimation. The proposed methods aim to alleviate deficiencies of current methods, such as flow leakage across objects and motion consistency within rigid objects, by exploiting contextual information. More specifically, the first regularization method utilizes semantic information during the training process to explicitly regularize the produced optical flow field. The novelty of this method lies in the use of semantic segmentation masks to teach the network to implicitly identify the semantic edges of an object and better reason about the local motion flow. A novel loss function is introduced that takes into account the objects’ boundaries as derived from the semantic segmentation mask to selectively penalize motion inconsistency within an object. The method is architecture agnostic and can be integrated into any neural network without modifying or adding complexity at inference. The second regularization method adds spatial awareness to the input data of the network in order to improve training stability and efficiency. The coordinates of each pixel are used as an additional feature, breaking the invariance properties of the neural network architecture. The additional features are shown to implicitly regularize the optical flow estimation, enforcing a consistent flow, while improving both the performance and the convergence time. Finally, the combination of both regularization methods further improves the performance of existing cutting-edge architectures in a complementary way, both quantitatively and qualitatively, on popular flow estimation benchmark datasets.
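The second method, feeding pixel coordinates as extra input features, can be sketched as below, assuming channel-last image tensors; this mirrors the general coordinate-channel idea rather than the authors' exact implementation:

```python
import numpy as np

def append_pixel_coords(batch):
    # batch: (N, H, W, C) image tensor; appends normalized (y, x) coordinate
    # channels in [-1, 1], breaking translation invariance as described.
    n, h, w, _ = batch.shape
    ys = np.linspace(-1.0, 1.0, h)
    xs = np.linspace(-1.0, 1.0, w)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    coords = np.stack([yy, xx], axis=-1)             # (H, W, 2)
    coords = np.broadcast_to(coords, (n, h, w, 2))
    return np.concatenate([batch, coords], axis=-1)  # (N, H, W, C + 2)
```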
14

Scolnik, H. D., N. E. Echebest, and M. T. Guardarucci. "Implicit regularization of the incomplete oblique projections method." International Transactions in Operational Research 16, no. 4 (July 2009): 525–46. http://dx.doi.org/10.1111/j.1475-3995.2009.00694.x.

15

Zhang, Zhe, and Xiaoyang Tan. "An Implicit Trust Region Approach to Behavior Regularized Offline Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16944–52. http://dx.doi.org/10.1609/aaai.v38i15.29637.

Abstract:
We revisit behavior regularization, a popular approach to mitigate the extrapolation error in offline reinforcement learning (RL), showing that current behavior regularization may suffer from unstable learning and hinder policy improvement. Motivated by this, a novel reward shaping-based behavior regularization method is proposed, where the log-probability ratio between the learned policy and the behavior policy is monitored during learning. We show that this is equivalent to an implicit but computationally lightweight trust region mechanism, which is beneficial to mitigate the influence of estimation errors of the value function, leading to more stable performance improvement. Empirical results on the popular D4RL benchmark verify the effectiveness of the presented method with promising performance compared with some state-of-the-art offline RL algorithms.
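The reward-shaping view of behavior regularization fits in a few lines; this is a generic sketch of a log-ratio penalty, with the coefficient `alpha` and the log-probability inputs as illustrative choices rather than the paper's exact formulation:

```python
import numpy as np

def shaped_reward(r, logp_pi, logp_behavior, alpha=0.1):
    # r - alpha * log(pi(a|s) / beta(a|s)): actions that the learned policy
    # assigns far more probability than the behavior policy did are
    # penalized, acting as an implicit trust region around the data.
    return r - alpha * (logp_pi - logp_behavior)
```

An action equally likely under both policies keeps its reward unchanged, while one the learned policy strongly over-selects is discounted.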
16

Plusch, Grigoriy, Sergey Arsenyev-Obraztsov, and Olga Kochueva. "The Weights Reset Technique for Deep Neural Networks Implicit Regularization." Computation 11, no. 8 (August 1, 2023): 148. http://dx.doi.org/10.3390/computation11080148.

Abstract:
We present a new regularization method called Weights Reset, which involves periodically resetting a random portion of layer weights during the training process using predefined probability distributions. This technique was applied and tested on several popular classification datasets: Caltech-101, CIFAR-100, and Imagenette. We compare these results with other traditional regularization methods. The test results demonstrate that the Weights Reset method is competitive, achieving the best performance on the Imagenette dataset and the challenging and unbalanced Caltech-101 dataset. The method also shows potential to prevent vanishing and exploding gradients. However, this analysis is preliminary, and further comprehensive studies are needed to gain a deep understanding of the computational potential and limitations of the Weights Reset method. The observed results suggest that Weights Reset can be regarded as an effective extension of the traditional regularization methods and can help to improve model performance and generalization.
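The core operation can be sketched as below; the Gaussian re-initialization and the 20% reset fraction are illustrative choices, since the abstract leaves the predefined distributions open:

```python
import numpy as np

def weights_reset(weights, reset_fraction, rng, init_std=0.05):
    # Re-draw a random portion of a layer's weights from a predefined
    # distribution (a Gaussian here; other choices are possible).
    mask = rng.random(weights.shape) < reset_fraction
    fresh = rng.normal(scale=init_std, size=weights.shape)
    return np.where(mask, fresh, weights), mask

rng = np.random.default_rng(1)
w = np.ones((64, 32))                      # toy layer weights
w_new, mask = weights_reset(w, 0.2, rng)   # e.g. call every k training steps
```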
17

Wang, Li, Zhiguo Fu, Yingcong Zhou, and Zili Yan. "The Implicit Regularization of Momentum Gradient Descent in Overparametrized Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10149–56. http://dx.doi.org/10.1609/aaai.v37i8.26209.

Abstract:
The study of the implicit regularization induced by gradient-based optimization in deep learning is a long-standing pursuit. In the present paper, we characterize the implicit regularization of momentum gradient descent (MGD) in the continuous-time view, the so-called momentum gradient flow (MGF). We show that the components of the weight vector are learned at different evolution rates for deep linear neural networks, and that this evolution gap increases with the depth. Firstly, we show that if the depth equals one, the evolution gap between the weight vector components is linear, which is consistent with the behavior of ridge regression. In particular, we establish a tight coupling between MGF and ridge for least squares regression: when the regularization parameter of ridge is inversely proportional to the square of the time parameter of MGF, the risk of MGF is no more than 1.54 times that of ridge, and their relative Bayesian risks are almost indistinguishable. Secondly, if the model becomes deeper, i.e., the depth is greater than or equal to 2, the evolution gap becomes more significant, which implies an implicit bias towards sparse solutions. Numerical experiments strongly support our theoretical results.
18

Arruda, M. R. T., M. Trombini, and A. Pagani. "Implicit to Explicit Algorithm for ABAQUS Standard User-Subroutine UMAT for a 3D Hashin-Based Orthotropic Damage Model." Applied Sciences 13, no. 2 (January 15, 2023): 1155. http://dx.doi.org/10.3390/app13021155.

Abstract:
This study examines a new approach to facilitate the convergence of upcoming user-subroutines UMAT when the secant material matrix is applied rather than the conventional tangent (also known as Jacobian) material matrix. This algorithm makes use of the viscous regularization technique to stabilize the numerical solution of softening material models. The Newton–Raphson algorithm predictor-corrector of ABAQUS then applies this type of viscous regularization to a UMAT using only the secant matrix. When the time step is smaller than the viscosity parameter, this type of regularization may be unsuitable for a predictor-corrector with the secant matrix because its implicit convergence is incorrect, transforming the algorithm into an undesirable explicit version that may cause convergence problems. A novel 3D orthotropic damage model with residual stresses is proposed for this study, and it is analyzed using a new algorithm. The method’s convergence is tested using the proposed implicit-to-explicit secant matrix as well as the traditional implicit and explicit secant matrices. Furthermore, all numerical models are compared to experimental data. It was concluded that both the new 3D orthotropic damage model and the new proposed time step algorithm were stable and robust.
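The viscous regularization the abstract refers to can be illustrated with a Duvaut–Lions-style update for a scalar damage variable (a generic sketch, not the authors' UMAT): integrating d_v' = (d − d_v)/η implicitly over one step Δt gives

```python
def viscous_update(d_v, d_target, dt, eta):
    # Implicit (backward Euler) step of d_v' = (d - d_v) / eta:
    # the regularized damage d_v lags the instantaneous damage d_target.
    return (eta * d_v + dt * d_target) / (eta + dt)

# Drive the regularized variable toward a damage jump d = 1.
d_v = 0.0
for _ in range(100):
    d_v = viscous_update(d_v, 1.0, dt=0.01, eta=0.05)
```

The ratio Δt/η controls how strongly the viscosity smooths the response; here Δt < η, the regime in which the abstract warns the secant predictor-corrector can effectively degenerate into an explicit scheme.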
19

Dubikovsky, A. I., and P. K. Silaev. "On an alternative, implicit renormalization procedure for the Casimir energy." Modern Physics Letters A 33, no. 22 (July 19, 2018): 1850129. http://dx.doi.org/10.1142/s0217732318501298.

Abstract:
We propose a procedure for the renormalization of Casimir energy that is based on the implicit versions of standard steps of renormalization procedure — regularization, subtraction and removing the regularization. The proposed procedure is based on the calculation of a set of convergent sums, which are related to the original divergent sum for the non-renormalized Casimir energy. Then, we construct a linear equation system that connects this set of convergent sums with the renormalized Casimir energy, which is a solution to this system of equations. This procedure slightly reduces the indeterminacy that arises in the standard procedure when we choose the particular regularization and the explicit form of the counterterm. The proposed procedure can be applied not only to systems with the explicit transcendental equation for the spectrum but also to systems with the spectrum that can be obtained only numerically. However, to perform the proposed procedure, we need a parameter of the system that satisfies two conditions: (i) we can obtain explicit analytical expressions (as a function of our parameter) for coefficients for all divergent and unphysical terms in divergent sum for Casimir energy; (ii) infinite value of this parameter should be the “natural” renormalization point, i.e. Casimir energy must tend to zero when parameter tends to infinity.
20

Fanuel, Michael, Joachim Schreurs, and Johan Suykens. "Diversity Sampling is an Implicit Regularization for Kernel Methods." SIAM Journal on Mathematics of Data Science 3, no. 1 (January 2021): 280–97. http://dx.doi.org/10.1137/20m1320031.

21

Pontes, Carlos R., A. P. Baêta Scarpelli, Marcos Sampaio, and M. C. Nemes. "Implicit regularization beyond one-loop order: scalar field theories." Journal of Physics G: Nuclear and Particle Physics 34, no. 10 (September 12, 2007): 2215–34. http://dx.doi.org/10.1088/0954-3899/34/10/011.

22

Dias, E. W., A. P. Baêta Scarpelli, L. C. T. Brito, M. Sampaio, and M. C. Nemes. "Implicit regularization beyond one-loop order: gauge field theories." European Physical Journal C 55, no. 4 (May 17, 2008): 667–81. http://dx.doi.org/10.1140/epjc/s10052-008-0614-6.

23

Cherchiglia, A. L., Marcos Sampaio, and M. C. Nemes. "Systematic Implementation of Implicit Regularization for Multiloop Feynman Diagrams." International Journal of Modern Physics A 26, no. 15 (June 20, 2011): 2591–635. http://dx.doi.org/10.1142/s0217751x11053419.

Abstract:
Implicit Regularization (IReg) is a candidate to become an invariant framework in momentum space to perform Feynman diagram calculations to arbitrary loop order. In this work we present a systematic implementation of our method that automatically displays the terms to be subtracted by Bogoliubov's recursion formula. Therefore, we achieve a twofold objective: we show that the IReg program respects unitarity, locality and Lorentz invariance and we show that our method is consistent since we are able to display the divergent content of a multiloop amplitude in a well-defined set of basic divergent integrals in one-loop momentum only which is the essence of IReg. Moreover, we conjecture that momentum routing invariance in the loops, which has been shown to be connected with gauge symmetry, is a fundamental symmetry of any Feynman diagram in a renormalizable quantum field theory.
24

Tian, Ming, and Jun-Ying Gong. "Strong Convergence of Modified Algorithms Based on the Regularization for the Constrained Convex Minimization Problem." Abstract and Applied Analysis 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/870102.

Abstract:
As is known, the regularization method plays an important role in solving constrained convex minimization problems. Based on the idea of regularization, implicit and explicit iterative algorithms are proposed in this paper and the sequences generated by the algorithms can converge strongly to a solution of the constrained convex minimization problem, which also solves a certain variational inequality. As an application, we also apply the algorithm to solve the split feasibility problem.
25

Kim, Woosung, Donghyeon Ki, and Byung-Jun Lee. "Relaxed Stationary Distribution Correction Estimation for Improved Offline Policy Optimization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13185–92. http://dx.doi.org/10.1609/aaai.v38i12.29218.

Abstract:
One of the major challenges of offline reinforcement learning (RL) is dealing with distribution shifts that stem from the mismatch between the trained policy and the data collection policy. Stationary distribution correction estimation algorithms (DICE) have addressed this issue by regularizing the policy optimization with f-divergence between the state-action visitation distributions of the data collection policy and the optimized policy. While such regularization naturally integrates to derive an objective to get optimal state-action visitation, such an implicit policy optimization framework has shown limited performance in practice. We observe that the reduced performance is attributed to the biased estimate and the properties of conjugate functions of f-divergence regularization. In this paper, we improve the regularized implicit policy optimization framework by relieving the bias and reshaping the conjugate function by relaxing the constraints. We show that the relaxation adjusts the degree of involvement of the sub-optimal samples in optimization, and we derive a new offline RL algorithm that benefits from the relaxed framework, improving from a previous implicit policy optimization algorithm by a large margin.
26

Hilali, Youssef, Bouazza Braikat, Hassane Lahmam, and Noureddine Damil. "An implicit algorithm for the dynamic study of nonlinear vibration of spur gear system with backlash." Mechanics & Industry 19, no. 3 (2018): 310. http://dx.doi.org/10.1051/meca/2017006.

Abstract:
In this work, we propose some regularization techniques to adapt the implicit high order algorithm based on the coupling of the asymptotic numerical methods (ANM) (Cochelin et al., Méthode Asymptotique Numérique, Hermès-Lavoisier, Paris, 2007; Mottaqui et al., Comput. Methods Appl. Mech. Eng. 199 (2010) 1701–1709; Mottaqui et al., Math. Model. Nat. Phenom. 5 (2010) 16–22) and the implicit Newmark scheme for solving the non-linear problem of dynamic model of a two-stage spur gear system with backlash. The regularization technique is used to overcome the numerical difficulties of singularities existing in the considered problem as in the contact problems (Abichou et al., Comput. Methods Appl. Mech. Eng. 191 (2002) 5795–5810; Aggoune et al., J. Comput. Appl. Math. 168 (2004) 1–9). This algorithm combines a time discretization technique, a homotopy method, Taylor series expansions technique and a continuation method. The performance and effectiveness of this algorithm will be illustrated on two examples of one-stage and two-stage gears with spur teeth. The obtained results are compared with those obtained by the Newton–Raphson method coupled with the implicit Newmark scheme.
27

Pontes, C. R., A. P. Baêta Scarpelli, Marcos Sampaio, J. L. Acebal, and M. C. Nemes. "On the equivalence between implicit regularization and constrained differential renormalization." European Physical Journal C 53, no. 1 (October 27, 2007): 121–31. http://dx.doi.org/10.1140/epjc/s10052-007-0437-x.

28

Zreid, Imadeddin, and Michael Kaliske. "Regularization of microplane damage models using an implicit gradient enhancement." International Journal of Solids and Structures 51, no. 19-20 (October 2014): 3480–89. http://dx.doi.org/10.1016/j.ijsolstr.2014.06.020.

29

Laurent, Gautier. "Iterative Thickness Regularization of Stratigraphic Layers in Discrete Implicit Modeling." Mathematical Geosciences 48, no. 7 (June 14, 2016): 811–33. http://dx.doi.org/10.1007/s11004-016-9637-y.

30

Mohsin, Yasir Q., Sajan Goud Lingala, Edward DiBella, and Mathews Jacob. "Accelerated dynamic MRI using patch regularization for implicit motion compensation." Magnetic Resonance in Medicine 77, no. 3 (April 19, 2016): 1238–48. http://dx.doi.org/10.1002/mrm.26215.

31

Zhang, Meng, Jiaxin Li, Chengcheng Yang, and Quan Chen. "Deflated Restarting of Exponential Integrator Method with an Implicit Regularization for Efficient Transient Circuit Simulation." Electronics 10, no. 9 (May 10, 2021): 1124. http://dx.doi.org/10.3390/electronics10091124.

Abstract:
Exponential integrator (EI) method based on Krylov subspace approximation is a promising method for large-scale transient circuit simulation. However, it suffers from the singularity problem and consumes large subspace dimensions for stiff circuits when using the ordinary Krylov subspace. Restarting schemes are commonly applied to reduce the subspace dimension, but they also slow down the convergence and degrade the overall computational efficiency. In this paper, we first devise an implicit and sparsity-preserving regularization technique to tackle the singularity problem facing EI in the ordinary Krylov subspace framework. Next, we analyze the root cause of the slow convergence of the ordinary Krylov subspace methods when applied to stiff circuits. Based on the analysis, we propose a deflated restarting scheme, compatible with the above regularization technique, to accelerate the convergence of restarted Krylov subspace approximation for EI methods. Numerical experiments demonstrate the effectiveness of the proposed regularization technique, and up to 50% convergence improvements for Krylov subspace approximation compared to the non-deflated version.
32

Zheng, Bin, Junfeng Liu, Zhenyu Zhao, Zhihong Dou, and Benxue Gong. "A Generalized Iterated Tikhonov Method in the Fourier Domain for Determining the Unknown Source of the Time-Fractional Diffusion Equation." Symmetry 16, no. 7 (July 8, 2024): 864. http://dx.doi.org/10.3390/sym16070864.

Abstract:
In this paper, an inverse problem of determining a source in a time-fractional diffusion equation is investigated. A Fourier extension scheme is used to approximate the solution to avoid the impact on smoothness caused by directly using singular system eigenfunctions for approximation. A modified implicit iteration method is proposed as a regularization technique to stabilize the solution process. The convergence rates are derived when a discrepancy principle serves as the principle for choosing the regularization parameters. Numerical tests are provided to further verify the efficacy of the proposed method.
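The plain (non-generalized, Fourier-free) iterated Tikhonov iteration that the proposed method modifies can be sketched as follows; the matrix A stands in for the forward operator, and in practice the stopping index would come from the discrepancy principle rather than a fixed count:

```python
import numpy as np

def iterated_tikhonov(A, b, alpha=0.1, iters=50):
    # x_{k+1} = argmin ||A x - b||^2 + alpha ||x - x_k||^2
    #         = (A^T A + alpha I)^{-1} (A^T b + alpha x_k),
    # which converges to the least-squares solution as k grows.
    AtA, Atb = A.T @ A, A.T @ b
    n = A.shape[1]
    M = np.linalg.inv(AtA + alpha * np.eye(n))
    x = np.zeros(n)
    for _ in range(iters):
        x = M @ (Atb + alpha * x)
    return x

A = np.array([[2.0, 0.0], [0.0, 0.5], [1.0, 1.0]])
x_true = np.array([1.0, -1.0])
x = iterated_tikhonov(A, A @ x_true)  # noiseless toy data
```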
33

Reshniak, Viktor, and Clayton G. Webster. "Robust Learning with Implicit Residual Networks." Machine Learning and Knowledge Extraction 3, no. 1 (December 31, 2020): 34–55. http://dx.doi.org/10.3390/make3010003.

Abstract:
In this effort, we propose a new deep architecture utilizing residual blocks inspired by implicit discretization schemes. As opposed to the standard feed-forward networks, the outputs of the proposed implicit residual blocks are defined as the fixed points of appropriately chosen nonlinear transformations. We show that this choice leads to improved stability of both forward and backward propagations, has a favorable impact on the generalization power, and allows for controlling the robustness of the network with only a few hyperparameters. In addition, the proposed reformulation of ResNet does not introduce new parameters and can potentially lead to a reduction in the number of required layers due to improved forward stability. Finally, we derive a memory-efficient training algorithm, propose a stochastic regularization technique, and provide numerical results in support of our findings.
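A minimal sketch of such an implicit residual block, with the fixed point found by plain Picard iteration (the paper's actual solver and architecture may differ); the scaling of W keeps the map contractive so the iteration converges:

```python
import numpy as np

def implicit_residual_block(x, W, iters=100):
    # The output y solves the fixed-point equation y = x + tanh(W y),
    # instead of the explicit residual update y = x + tanh(W x).
    y = x.copy()
    for _ in range(iters):
        y = x + np.tanh(W @ y)
    return y

rng = np.random.default_rng(0)
W = 0.3 * rng.normal(size=(5, 5)) / np.sqrt(5)  # small norm => contraction
x = rng.normal(size=5)
y = implicit_residual_block(x, W)
```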
34

Zhao, Jiajia, and Zuoliang Xu. "Calibration of time-dependent volatility for European options under the fractional Vasicek model." AIMS Mathematics 7, no. 6 (2022): 11053–69. http://dx.doi.org/10.3934/math.2022617.

Abstract:
In this paper, we calibrate the time-dependent volatility function for European options under the fractional Vasicek interest rate model. A fully implicit finite difference method is applied to solve the partial differential equation of option pricing numerically. To find the volatility function, we minimize a cost function that is the sum of the squared errors between the theoretical prices and market prices, with Tikhonov $L_2$ regularization and $L_{1/2}$ regularization respectively. Finally, numerical experiments with simulated and real market data verify the efficiency of the proposed methods.
35

Guo, Minghao, and Yan Gao. "Adaptive Approximate Implicitization of Planar Parametric Curves via Asymmetric Gradient Constraints." Symmetry 15, no. 9 (September 11, 2023): 1738. http://dx.doi.org/10.3390/sym15091738.

Abstract:
Converting a parametric curve into implicit form, which is called implicitization, has always been a popular but challenging problem in geometric modeling and related applications. However, existing methods mostly suffer from the problems of maintaining geometric features and choosing a reasonable implicit degree. The present paper makes two contributions. We first introduce a new regularization constraint (called the asymmetric gradient constraint) for both polynomial and non-polynomial curves, which effectively preserves geometric shape. We then propose two adaptive algorithms of approximate implicitization for polynomial and non-polynomial curves respectively, which find the “optimal” implicit degree based on the behavior of the asymmetric gradient constraint. More precisely, the idea is to gradually increase the implicit degree until there is no obvious improvement in the asymmetric gradient loss of the outputs. Experimental results show the effectiveness and high quality of our proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
36

Gobira, S. R., and M. C. Nemes. "N-Loop Treatment of Overlapping Diagrams by the Implicit Regularization Technique." International Journal of Theoretical Physics 42, no. 11 (November 2003): 2765–95. http://dx.doi.org/10.1023/b:ijtp.0000005983.70240.34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Zhu, Yulian, Songcan Chen, and Qing Tian. "Spatial regularization in subspace learning for face recognition: implicit vs. explicit." Neurocomputing 173 (January 2016): 1554–64. http://dx.doi.org/10.1016/j.neucom.2015.09.028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ottoni, J. E., A. P. Baêta Scarpelli, Marcos Sampaio, and M. C. Nemes. "Supergravity corrections to the (g−2)l factor by implicit regularization." Physics Letters B 642, no. 3 (November 2006): 253–62. http://dx.doi.org/10.1016/j.physletb.2006.09.025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Kim, Hyungmin, Sungho Suh, Sunghyun Baek, Daehwan Kim, Daun Jeong, Hansang Cho, and Junmo Kim. "AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation." Knowledge-Based Systems 293 (June 2024): 111692. http://dx.doi.org/10.1016/j.knosys.2024.111692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Tan, Zhaorui, Xi Yang, and Kaizhu Huang. "Semantic-Aware Data Augmentation for Text-to-Image Synthesis." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5098–107. http://dx.doi.org/10.1609/aaai.v38i6.28315.

Full text
Abstract:
Data augmentation has recently been leveraged as an effective regularizer in various vision-language deep neural networks. However, in text-to-image synthesis (T2Isyn), current augmentation wisdom still suffers from semantic mismatch between augmented paired data. Even worse, semantic collapse may occur when generated images are less semantically constrained. In this paper, we develop a novel Semantic-aware Data Augmentation (SADA) framework dedicated to T2Isyn. In particular, we propose to augment texts in the semantic space via an Implicit Textual Semantic Preserving Augmentation, in conjunction with a specifically designed Image Semantic Regularization Loss as Generated Image Semantic Conservation, to cope well with semantic mismatch and collapse. As one major contribution, we theoretically show that Implicit Textual Semantic Preserving Augmentation can certify better text-image consistency, while the Image Semantic Regularization Loss, by regularizing the semantics of generated images, avoids semantic collapse and enhances image quality. Extensive experiments validate that SADA enhances text-image consistency and improves image quality significantly in T2Isyn models across various backbones. Especially, incorporating SADA during the tuning process of Stable Diffusion models also yields performance improvements.
APA, Harvard, Vancouver, ISO, and other styles
41

Hu, Wei. "Understanding Surprising Generalization Phenomena in Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22669. http://dx.doi.org/10.1609/aaai.v38i20.30285.

Full text
Abstract:
Deep learning has exhibited a number of surprising generalization phenomena that are not captured by classical statistical learning theory. This talk will survey some of my work on the theoretical characterizations of several such intriguing phenomena: (1) Implicit regularization: A major mystery in deep learning is that deep neural networks can often generalize well despite their excessive expressive capacity. Towards explaining this mystery, it has been suggested that commonly used gradient-based optimization algorithms enforce certain implicit regularization which effectively constrains the model capacity. (2) Benign overfitting: In certain scenarios, a model can perfectly fit noisily labeled training data, but still achieves near-optimal test error at the same time, which is very different from the classical notion of overfitting. (3) Grokking: In certain scenarios, a model initially achieves perfect training accuracy but no generalization (i.e. no better than a random predictor), and upon further training, transitions to almost perfect generalization. Theoretically establishing these properties often involves making appropriate high-dimensional assumptions on the problem as well as a careful analysis of the training dynamics.
APA, Harvard, Vancouver, ISO, and other styles
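One concrete instance of the implicit regularization described in point (1) above can be demonstrated in a few lines (a standard textbook example, not taken from the talk itself): plain gradient descent on an underdetermined least-squares problem, started from zero, converges to the minimum-$L_2$-norm interpolant even though no regularizer appears in the objective.

```python
import numpy as np

# Gradient descent on ||A x - b||^2 with more unknowns than equations,
# started at zero; no explicit regularizer appears anywhere.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 10))   # 3 equations, 10 unknowns
b = rng.standard_normal(3)

x = np.zeros(10)
for _ in range(20000):
    x -= 0.01 * A.T @ (A @ x - b)  # plain gradient step

# The minimum-L2-norm interpolant, computed via the pseudoinverse.
x_min_norm = A.T @ np.linalg.solve(A @ A.T, b)
print(np.linalg.norm(x - x_min_norm))
```

The mechanism is that the iterates never leave the row space of `A` when started at zero, which implicitly selects the smallest-norm solution among all interpolants.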
42

Liu, Jiangxin, Lijian Wu, Kexin Yin, Changjun Song, Xiaolin Bian, and Shengting Li. "Methods for Solving Finite Element Mesh-Dependency Problems in Geotechnical Engineering—A Review." Sustainability 14, no. 5 (March 3, 2022): 2982. http://dx.doi.org/10.3390/su14052982.

Full text
Abstract:
The instabilities of soil specimens in the laboratory, or of soil-made geotechnical structures in the field, are commonly simulated numerically with classical continuum-mechanics-based constitutive models and the finite element method. However, finite element mesh-dependency problems are inevitably encountered when strain-localized failure occurs, especially in the post-bifurcation regime. In this paper, an attempt is made to summarize the main numerical regularization techniques used to alleviate mesh-dependency problems, i.e., viscosity theory, nonlocal theory, high-order gradient theory and micropolar theory. Their fundamentals as well as their advantages and limitations are presented, based on which combinations of two or more regularization techniques are also suggested. For all the regularization techniques, at least one implicit or explicit parameter with a length scale is necessary to preserve the ellipticity of the governing partial differential equations. It is worth noting, however, that the physical meanings of the length parameters in different regularization techniques, and the relations between them, are still an open question and need to be further studied. Therefore, the micropolar theory, or its combinations with other numerical methods, is promising for the future.
APA, Harvard, Vancouver, ISO, and other styles
43

Yuldashev, T. K., Z. K. Eshkuvatov, and N. M. A. Nik Long. "Nonlinear the first kind Fredholm integro-differential first-order equation with degenerate kernel and nonlinear maxima." Mathematical Modeling and Computing 9, no. 1 (2022): 74–82. http://dx.doi.org/10.23939/mmc2022.01.074.

Full text
Abstract:
In this note, the problems of solvability and construction of solutions for a nonlinear first-order Fredholm integro-differential equation with degenerate kernel and nonlinear maxima are considered. Using the method of degenerate kernel combined with the method of regularization, we obtain an implicit first-order functional-differential equation with nonlinear maxima. Initial boundary conditions are used to ensure uniqueness of the solution. In order to use the method of successive approximations and prove unique solvability, the obtained implicit functional-differential equation is transformed into a nonlinear Volterra-type integro-differential equation with nonlinear maxima.
APA, Harvard, Vancouver, ISO, and other styles
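The method of successive approximations invoked in the abstract above can be illustrated on a toy Volterra equation of the second kind (deliberately simpler than the paper's equation, which involves nonlinear maxima): $y(t) = 1 + \int_0^t y(s)\,ds$, whose exact solution is $e^t$.

```python
import numpy as np

# Successive (Picard) approximations for y(t) = 1 + integral_0^t y(s) ds,
# discretized with the trapezoidal rule; the exact solution is exp(t).
t = np.linspace(0.0, 1.0, 101)
h = t[1] - t[0]
y = np.ones_like(t)                        # initial guess y_0(t) = 1
for _ in range(30):
    increments = (y[1:] + y[:-1]) / 2 * h  # trapezoidal panels
    integral = np.concatenate(([0.0], np.cumsum(increments)))
    y = 1.0 + integral                     # next Picard iterate
err = float(np.max(np.abs(y - np.exp(t))))
print(err)
```

Each iterate adds one more term of the exponential series, so thirty iterations reduce the fixed-point error far below the quadrature error of the trapezoidal rule.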
44

Lin Cui, Lin Cui, Caiyin Wang Lin Cui, Zhiwei Zhang Caiyin Wang, Xiaoyong Yu Zhiwei Zhang, and Fanghui Zha Xiaoyong Yu. "Fusing Dual Geo-Social Relationship and Deep Implicit Interest Topic Similarity for POI Recommendation." 網際網路技術學刊 23, no. 4 (July 2022): 791–99. http://dx.doi.org/10.53106/160792642022072304014.

Full text
Abstract:
Nowadays, POI recommendation has become a hot research area, with most work based on incomplete social relationships and geographical influence. However, little research simultaneously focuses on refined social relationships and deep implicit user topic similarity within a reachable region. Against this background, a novel Dual Geo-Social Relationship and Deep Implicit Interest Topic Similarity mining under a Reachable Region for POI Recommendation (DDR-PR) is proposed. DDR-PR first adopts kernel density estimation to compute the user's reachable check-in area. Within the reachable area, a combined relationship similarity based on the link relationship and the common check-in social relationship is computed. Then, the deep implicit interest topic similarity between users is mined using the proposed topic model RTAU-TCP. We formulate the combined relationship similarity and the implicit interest topic similarity as two regularization terms incorporated into matrix factorization, which can recommend new POIs for a user within his or her reachable area. Extensive experiments prove the superiority of DDR-PR.
APA, Harvard, Vancouver, ISO, and other styles
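The general device of adding similarity regularization terms to matrix factorization (the skeleton of DDR-PR's final step, with all data and weights invented here for illustration) can be sketched as follows: the loss combines a squared reconstruction error, a standard L2 penalty, and a graph-Laplacian term that pulls the latent vectors of similar users together.

```python
import numpy as np

# Matrix factorization R ~ U V^T with an L2 penalty and a similarity term
# beta * sum_ij S_ij ||U_i - U_j||^2 (synthetic data; DDR-PR's actual
# similarities come from geo-social relationships and topic models).
rng = np.random.default_rng(3)
R = rng.random((5, 6))                        # toy user-POI matrix
S = rng.random((5, 5)); S = (S + S.T) / 2     # symmetric user similarity
np.fill_diagonal(S, 0.0)
U = 0.1 * rng.standard_normal((5, 2))
V = 0.1 * rng.standard_normal((6, 2))
lam, beta, lr = 0.01, 0.01, 0.02
Lap = np.diag(S.sum(axis=1)) - S              # graph Laplacian of S

def loss(U, V):
    fit = np.sum((R - U @ V.T) ** 2)
    reg = lam * (np.sum(U ** 2) + np.sum(V ** 2))
    sim = beta * 2.0 * np.trace(U.T @ Lap @ U)  # = beta*sum_ij S_ij||U_i-U_j||^2
    return fit + reg + sim

loss_start = loss(U, V)
for _ in range(300):                          # joint gradient descent
    E = U @ V.T - R
    gU = 2 * E @ V + 2 * lam * U + 4 * beta * Lap @ U
    gV = 2 * E.T @ U + 2 * lam * V
    U -= lr * gU
    V -= lr * gV
print(loss_start, loss(U, V))
```

The Laplacian identity $\sum_{ij} S_{ij}\lVert U_i - U_j\rVert^2 = 2\,\mathrm{tr}(U^\top L U)$ is what makes the similarity term cheap to differentiate.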
45

Badriev, I. B., O. A. Zadvornov, and L. N. Ismagilov. "On Iterative Regularization Methods for Variational Inequalities of the Second Kind with Pseudomonotone Operators." Computational Methods in Applied Mathematics 3, no. 2 (2003): 223–34. http://dx.doi.org/10.2478/cmam-2003-0015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Zhongzhan, Senwei Liang, Mingfu Liang, and Haizhao Yang. "DIANet: Dense-and-Implicit Attention Network." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4206–14. http://dx.doi.org/10.1609/aaai.v34i04.5842.

Full text
Abstract:
Attention networks have successfully boosted performance in various vision problems. Previous works lay emphasis on designing a new attention module and individually plugging it into the network. Our paper proposes a novel and simple framework that shares an attention module throughout different network layers to encourage the integration of layer-wise information; this parameter-sharing module is referred to as the Dense-and-Implicit-Attention (DIA) unit. Many choices of module can be used in the DIA unit. Since Long Short-Term Memory (LSTM) has the capacity to capture long-distance dependencies, we focus on the case when the DIA unit is a modified LSTM (called DIA-LSTM). Experiments on benchmark datasets show that the DIA-LSTM unit is capable of emphasizing layer-wise feature interrelation and leads to a significant improvement in image classification accuracy. We further empirically show that DIA-LSTM has a strong regularization ability, stabilizing the training of deep networks in experiments with the removal of skip connections (He et al. 2016a) or Batch Normalization (Ioffe and Szegedy 2015) from the whole residual network.
APA, Harvard, Vancouver, ISO, and other styles
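The layer-wise parameter-sharing idea, stripped of the LSTM, can be shown in a minimal numpy sketch (a hypothetical toy, not the DIA-LSTM): a single channel-attention module, holding one shared weight matrix, recalibrates the features at every layer.

```python
import numpy as np

# One attention module instance, with a single weight matrix, is reused
# at every "layer" of a toy network (all names here are invented).
class SharedChannelAttention:
    def __init__(self, channels, rng):
        self.W = 0.1 * rng.standard_normal((channels, channels))  # shared weights
    def __call__(self, feat):
        pooled = feat.mean(axis=0)                        # pool over positions
        gate = 1.0 / (1.0 + np.exp(-(self.W @ pooled)))   # sigmoid channel gate
        return feat * gate                                # recalibrate channels

rng = np.random.default_rng(0)
attn = SharedChannelAttention(channels=4, rng=rng)  # one parameter set
x = rng.standard_normal((8, 4))                     # 8 positions, 4 channels
for _ in range(3):                                  # reused across 3 layers
    x = attn(x)
print(x.shape)
```

Because the same `attn` instance serves every layer, its parameters see gradients from all depths at once, which is the integration of layer-wise information the abstract describes.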
47

Iqbal, Sajad, and Yujie Wei. "Recovery of the time-dependent implied volatility of time fractional Black–Scholes equation using linearization technique." Journal of Inverse and Ill-posed Problems 29, no. 4 (January 22, 2021): 599–610. http://dx.doi.org/10.1515/jiip-2020-0105.

Full text
Abstract:
This paper examines the recovery of the time-dependent implied volatility coefficient from market prices of options for the time fractional Black–Scholes equation (TFBSM) with double-barrier options. We apply the linearization technique and transform the direct problem into an inverse source problem. As a result, we get a Volterra integral equation for the unknown linear functional, which is then solved by the regularization method. We use the $L_1$-forward difference implicit approximation for the forward problem. Numerical results using the $L_1$-forward difference implicit approximation ($L_1$-FDIA) for the inverse problem are also discussed briefly.
APA, Harvard, Vancouver, ISO, and other styles
48

Arthern, Robert J. "Exploring the use of transformation group priors and the method of maximum relative entropy for Bayesian glaciological inversions." Journal of Glaciology 61, no. 229 (2015): 947–62. http://dx.doi.org/10.3189/2015jog15j050.

Full text
Abstract:
Ice-sheet models can be used to forecast ice losses from Antarctica and Greenland, but to fully quantify the risks associated with sea-level rise, probabilistic forecasts are needed. These require estimates of the probability density function (PDF) for various model parameters (e.g. the basal drag coefficient and ice viscosity). To infer such parameters from satellite observations it is common to use inverse methods. Two related approaches are in use: (1) minimization of a cost function that describes the misfit to the observations, often accompanied by explicit or implicit regularization, or (2) use of Bayes’ theorem to update prior assumptions about the probability of parameters. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. One way to specify prior PDFs more objectively is by deriving transformation group priors that are invariant to symmetries of the problem, and then maximizing relative entropy, subject to any additional constraints. Here we investigate the application of these methods to the derivation of priors for a Bayesian approach to an idealized glaciological inverse problem.
APA, Harvard, Vancouver, ISO, and other styles
49

Boffi, Nicholas M., and Jean-Jacques E. Slotine. "Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction." Neural Computation 33, no. 3 (March 2021): 590–673. http://dx.doi.org/10.1162/neco_a_01360.

Full text
Abstract:
Stable concurrent learning and control of dynamical systems is the subject of adaptive control. Despite being an established field with many practical applications and a rich theory, much of the development in adaptive control for nonlinear systems revolves around a few key algorithms. By exploiting strong connections between classical adaptive nonlinear control techniques and recent progress in optimization and machine learning, we show that there exists considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction. We begin by introducing first-order adaptation laws inspired by natural gradient descent and mirror descent. We prove that when there are multiple dynamics consistent with the data, these non-Euclidean adaptation laws implicitly regularize the learned model. Local geometry imposed during learning thus may be used to select parameter vectors, out of the many that will achieve perfect tracking or prediction, for desired properties such as sparsity. We apply this result to regularized dynamics predictor and observer design, and as concrete examples, we consider Hamiltonian systems, Lagrangian systems, and recurrent neural networks. We subsequently develop a variational formalism based on the Bregman Lagrangian. We show that its Euler–Lagrange equations lead to natural gradient and mirror descent-like adaptation laws with momentum, and we recover their first-order analogues in the infinite friction limit. We illustrate our analyses with simulations demonstrating our theoretical results.
APA, Harvard, Vancouver, ISO, and other styles
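A first-order adaptation law of the classical Euclidean kind discussed in this abstract can be sketched in discrete time (a generic textbook law, not the paper's natural-gradient or mirror-descent variants; the true parameters, gain, and regressor stream are all invented here): the parameters of a linear predictor are driven by the instantaneous prediction error.

```python
import numpy as np

# Online adaptation of a linear predictor y_hat = theta . phi from the
# prediction error; with a persistently exciting regressor, theta
# converges to the true parameter vector.
rng = np.random.default_rng(2)
theta_true = np.array([2.0, -1.0, 0.5])
theta = np.zeros(3)
gamma = 0.1                              # adaptation gain
for _ in range(5000):
    phi = rng.standard_normal(3)         # persistently exciting regressor
    e = theta @ phi - theta_true @ phi   # prediction error
    theta -= gamma * e * phi             # gradient adaptation law
print(np.round(theta, 3))
```

The paper's point is that replacing this Euclidean gradient step with a mirror-descent step changes which of the data-consistent parameter vectors is selected, e.g. favoring sparse ones.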
50

Sohn, G., Y. Jwa, J. Jung, and H. Kim. "AN IMPLICIT REGULARIZATION FOR 3D BUILDING ROOFTOP MODELING USING AIRBORNE LIDAR DATA." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences I-3 (July 20, 2012): 305–10. http://dx.doi.org/10.5194/isprsannals-i-3-305-2012.

Full text
APA, Harvard, Vancouver, ISO, and other styles