
Journal articles on the topic 'Gradient implicite'


Consult the top 50 journal articles for your research on the topic 'Gradient implicite.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Weller, Hilary, and Ava Shahrokhi. "Curl-Free Pressure Gradients over Orography in a Solution of the Fully Compressible Euler Equations with Implicit Treatment of Acoustic and Gravity Waves." Monthly Weather Review 142, no. 12 (2014): 4439–57. http://dx.doi.org/10.1175/mwr-d-14-00054.1.

Full text
Abstract:
Steep orography can cause noisy solutions and instability in models of the atmosphere. A new technique for modeling flow over orography is introduced that guarantees curl-free gradients on arbitrary grids, implying that the pressure gradient term is not a spurious source of vorticity. This mimetic property leads to better hydrostatic balance and better energy conservation on test cases using terrain-following grids. Curl-free gradients are achieved by using the covariant components of velocity over orography rather than the usual horizontal and vertical components. In addition, gravit…
2

Askes, Harm, and Miguel A. Gutiérrez. "Implicit gradient elasticity." International Journal for Numerical Methods in Engineering 67, no. 3 (2006): 400–416. http://dx.doi.org/10.1002/nme.1640.

Full text
3

Haugh, Jason M., Franca Codazzi, Mary Teruel, and Tobias Meyer. "Spatial Sensing in Fibroblasts Mediated by 3′ Phosphoinositides." Journal of Cell Biology 151, no. 6 (2000): 1269–80. http://dx.doi.org/10.1083/jcb.151.6.1269.

Full text
Abstract:
The directed movement of fibroblasts towards locally released platelet-derived growth factor (PDGF) is a critical event in wound healing. Although recent studies have implicated polarized activation of phosphoinositide (PI) 3-kinase in G protein-mediated chemotaxis, the role of 3′ PI lipids in tyrosine kinase-triggered chemotaxis is not well understood. Using evanescent wave microscopy and green fluorescent protein–tagged Akt pleckstrin homology domain (GFP–AktPH) as a molecular sensor, we show that application of a shallow PDGF gradient triggers a markedly steeper gradient in 3′ PI lipids in…
4

Minervini, Pasquale, Luca Franceschi, and Mathias Niepert. "Adaptive Perturbation-Based Gradient Estimation for Discrete Latent Variable Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (2023): 9200–9208. http://dx.doi.org/10.1609/aaai.v37i8.26103.

Full text
Abstract:
The integration of discrete algorithmic components in deep learning architectures has numerous applications. Recently, Implicit Maximum Likelihood Estimation, a class of gradient estimators for discrete exponential family distributions, was proposed by combining implicit differentiation through perturbation with the path-wise gradient estimator. However, due to the finite difference approximation of the gradients, it is especially sensitive to the choice of the finite difference step size, which needs to be specified by the user. In this work, we present Adaptive IMLE (AIMLE), the first adapti…
5

Dong, Mingrong, Yixuan Yang, Kai Zeng, Qingwang Wang, and Tao Shen. "Implicit Sharpness-Aware Minimization for Domain Generalization." Remote Sensing 16, no. 16 (2024): 2877. http://dx.doi.org/10.3390/rs16162877.

Full text
Abstract:
Domain generalization (DG) aims to learn knowledge from multiple related domains to achieve a robust generalization performance in unseen target domains, which is an effective approach to mitigate domain shift in remote sensing image classification. Although the sharpness-aware minimization (SAM) method enhances DG capability and improves remote sensing image classification performance by promoting the convergence of the loss minimum to a flatter loss surface, the perturbation loss (maximum loss within the neighborhood of a local minimum) of SAM fails to accurately measure the true sharpness o…
6

Bertei, Antonio, Andrea Lamorgese, and Roberto Mauri. "Constitutive Relations of Thermal and Mass Diffusion." Journal of Non-Equilibrium Thermodynamics 45, no. 1 (2020): 27–38. http://dx.doi.org/10.1515/jnet-2019-0055.

Full text
Abstract:
Non-equilibrium thermodynamics provides a general framework for the description of mass and thermal diffusion, thereby including also cross-thermal and material diffusion effects, which are generally modeled through the Onsager coupling terms within the constitutive equations relating heat and mass flux to the gradients of temperature and chemical potential. These so-called Soret and Dufour coefficients are not uniquely defined, though, as they can be derived by adopting one of the several constitutive relations satisfying the principles of non-equilibrium thermodynamics. Therefore, ma…
7

Nishikawa, Hiroaki. "From hyperbolic diffusion scheme to gradient method: Implicit Green–Gauss gradients for unstructured grids." Journal of Computational Physics 372 (November 2018): 126–60. http://dx.doi.org/10.1016/j.jcp.2018.06.019.

Full text
8

Gourmel, Olivier, Loic Barthe, Marie-Paule Cani, et al. "A gradient-based implicit blend." ACM Transactions on Graphics 32, no. 2 (2013): 1–12. http://dx.doi.org/10.1145/2451236.2451238.

Full text
9

Dandi, Yatin, Luis Barba, and Martin Jaggi. "Implicit Gradient Alignment in Distributed and Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6454–62. http://dx.doi.org/10.1609/aaai.v36i6.20597.

Full text
Abstract:
A major obstacle to achieving global convergence in distributed and federated learning is the misalignment of gradients across clients or mini-batches due to heterogeneity and stochasticity of the distributed data. In this work, we show that data heterogeneity can in fact be exploited to improve generalization performance through implicit regularization. One way to alleviate the effects of heterogeneity is to encourage the alignment of gradients across different clients throughout training. Our analysis reveals that this goal can be accomplished by utilizing the right optimization method that…
10

Nelson, J. Bradley. "An alternate derivation of the three‐dimensional Hilbert transform relations from first principles." GEOPHYSICS 51, no. 4 (1986): 1014–15. http://dx.doi.org/10.1190/1.1442141.

Full text
Abstract:
Several techniques for determining the location, geometry, and strength of a source are based on a knowledge of the magnetic gradients generated by that source. Hood (1965), Bhattacharyya (1966), and Rao et al. (1981) detailed three of these gradient methods. For many years, geophysicists have used the two‐dimensional (2-D) Hilbert transform to approximate the vertical gradient from measurements of the horizontal gradient in the magnetic‐field intensity (Nabighian, 1972; Stanley and Green, 1976; Stanley, 1977; Mohan et al., 1982). This technique is of limited applicability because of the impli…
11

Küçükseyhan, Tuğba. "Optimal control of the gradient systems." GSC Advanced Research and Reviews 21, no. 2 (2024): 382–91. http://dx.doi.org/10.30574/gscarr.2024.21.2.0453.

Full text
Abstract:
In this paper we consider the optimal control problems governed by the gradient systems for the Ginzburg-Landau free energy, where F(y) denotes a potential function and ε is the diffusivity. One example of a gradient system is the Schlögl equation, arising in chemical waves, with a quartic potential function F(y). Gradient systems are characterized by an energy decreasing property. Numerical integrators that preserve the energy decreasing property in the discrete setting are called energy or gradient stable. It is known that the implicit Euler method is a first-order unconditionally energy stable method.
12

Goodhill, Geoffrey J., and Herwig Baier. "Axon Guidance: Stretching Gradients to the Limit." Neural Computation 10, no. 3 (1998): 521–27. http://dx.doi.org/10.1162/089976698300017638.

Full text
Abstract:
Neuronal growth cones, the sensory-motile structures at the tips of developing axons, navigate to their targets over distances that can be many times greater than their diameter. They may accomplish this impressive task by following spatial gradients of axon guidance molecules in their environment (Bonhoeffer & Gierer, 1984; Tessier-Lavigne & Placzek, 1991; Baier & Bonhoeffer, 1994). We calculate the optimal shape of a gradient and the distance over which it can be detected by a growth cone for two competing mechanistic models of axon guidance. The results are surprisingly simple: …
13

Brown, James William L., Azmain Chowdhury, Baris Kanber, et al. "Magnetisation transfer ratio abnormalities in primary and secondary progressive multiple sclerosis." Multiple Sclerosis Journal 26, no. 6 (2019): 679–87. http://dx.doi.org/10.1177/1352458519841810.

Full text
Abstract:
Background: In relapse-onset multiple sclerosis (MS), tissue abnormality – as assessed with magnetisation transfer ratio (MTR) imaging – is greater in the outer cortical and inner periventricular layers. The cause of this remains unknown but meningeal inflammation has been implicated, particularly lymphoid follicles, which are seen in secondary progressive (SP) but not primary progressive (PP) MS. Cortical and periventricular MTR gradients might, therefore, differ in PPMS and SPMS if these follicles are responsible. Objective: We assessed cortical and periventricular MTR gradients in PPMS, and…
14

Simone, Angelo. "Explicit and implicit gradient-enhanced damage models." Revue Européenne de Génie Civil 11, no. 7-8 (2007): 1023–44. http://dx.doi.org/10.1080/17747120.2007.9692975.

Full text
15

Cao, Qianfeng, Ning Dai, Siyuan Yang, and Hongqing Dai. "Implicit conformal design for gradient architected materials." Materials & Design 248 (December 2024): 113507. https://doi.org/10.1016/j.matdes.2024.113507.

Full text
16

Yang, Zhuang. "Variance reduced optimization with implicit gradient transport." Knowledge-Based Systems 212 (January 2021): 106626. http://dx.doi.org/10.1016/j.knosys.2020.106626.

Full text
17

Simone, Angelo. "Explicit and implicit gradient-enhanced damage models." Revue européenne de génie civil 11, no. 7-8 (2007): 1023–44. http://dx.doi.org/10.3166/regc.11.1023-1044.

Full text
18

Liu, Jinze. "The advance of underdetermined linear regression optimization based on implicit bias." Journal of Physics: Conference Series 2580, no. 1 (2023): 012008. http://dx.doi.org/10.1088/1742-6596/2580/1/012008.

Full text
Abstract:
The gradient descent method has the characteristics of easy realization and simple structure. The traditional gradient descent method has many advantages, especially in solving convex optimization problems. In recent years, some researchers have noticed that the gradient descent algorithm is helpful to solve the problem of underdetermined linear regression optimization. Therefore, in order to explore the specific relationship between gradient descent and under-determined linear regression optimization, this paper focuses on a case with a unique finite root loss function and discusses…
19

Kolahdouz, Ebrahim M., and David Salac. "A Semi-implicit Gradient Augmented Level Set Method." SIAM Journal on Scientific Computing 35, no. 1 (2013): A231–A254. http://dx.doi.org/10.1137/120871237.

Full text
20

Titscher, Thomas, Javier Oliver, and Jörg F. Unger. "Implicit–Explicit Integration of Gradient-Enhanced Damage Models." Journal of Engineering Mechanics 145, no. 7 (2019): 04019040. http://dx.doi.org/10.1061/(asce)em.1943-7889.0001608.

Full text
21

Burago, N. G., and I. S. Nikitin. "Matrix-Free Conjugate Gradient Implementation of Implicit Schemes." Computational Mathematics and Mathematical Physics 58, no. 8 (2018): 1247–58. http://dx.doi.org/10.1134/s0965542518080043.

Full text
22

Askes, H., and L. J. Sluys. "Explicit and implicit gradient series in damage mechanics." European Journal of Mechanics - A/Solids 21, no. 3 (2002): 379–90. http://dx.doi.org/10.1016/s0997-7538(02)01214-7.

Full text
23

Giarré, L., and E. Mosca. "Implicit predictive adaptive control with stochastic gradient identifiers." IEEE Transactions on Automatic Control 38, no. 7 (1993): 1135–39. http://dx.doi.org/10.1109/9.231472.

Full text
24

Vernay, Aurélia, Sébastien Schaub, Isabelle Guillas, Martine Bassilana, and Robert A. Arkowitz. "A steep phosphoinositide bis-phosphate gradient forms during fungal filamentous growth." Journal of Cell Biology 198, no. 4 (2012): 711–30. http://dx.doi.org/10.1083/jcb.201203099.

Full text
Abstract:
Membrane lipids have been implicated in many critical cellular processes, yet little is known about the role of asymmetric lipid distribution in cell morphogenesis. The phosphoinositide bis-phosphate PI(4,5)P2 is essential for polarized growth in a range of organisms. Although an asymmetric distribution of this phospholipid has been observed in some cells, long-range gradients of PI(4,5)P2 have not been observed. Here, we show that in the human pathogenic fungus Candida albicans a steep, long-range gradient of PI(4,5)P2 occurs concomitant with emergence of the hyphal filament. Both sufficient…
25

Ellison, David, Andrew Mugler, Matthew D. Brennan, et al. "Cell–cell communication enhances the capacity of cell ensembles to sense shallow gradients during morphogenesis." Proceedings of the National Academy of Sciences 113, no. 6 (2016): E679–E688. http://dx.doi.org/10.1073/pnas.1516503113.

Full text
Abstract:
Collective cell responses to exogenous cues depend on cell–cell interactions. In principle, these can result in enhanced sensitivity to weak and noisy stimuli. However, this has not yet been shown experimentally, and little is known about how multicellular signal processing modulates single-cell sensitivity to extracellular signaling inputs, including those guiding complex changes in the tissue form and function. Here we explored whether cell–cell communication can enhance the ability of cell ensembles to sense and respond to weak gradients of chemotactic cues. Using a combination of experimen…
26

Oropeza-Navarro, Osvaldo Andres, Ahmad Chihadeh, Jakob Platen, and Michael Kaliske. "An implicit gradient-enhanced microplane damage material model in the coupled implicit MPM-FEM." Computers & Structures 305 (December 2024): 107559. http://dx.doi.org/10.1016/j.compstruc.2024.107559.

Full text
27

Fan, Xiaomeng, Yuwei Wu, Zhi Gao, Yunde Jia, and Mehrtash Harandi. "Efficient Riemannian Meta-Optimization by Implicit Differentiation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (2022): 3733–40. http://dx.doi.org/10.1609/aaai.v36i4.20287.

Full text
Abstract:
To solve optimization problems with nonlinear constrains, the recently developed Riemannian meta-optimization methods show promise, which train neural networks as an optimizer to perform optimization on Riemannian manifolds. A key challenge is the heavy computational and memory burdens, because computing the meta-gradient with respect to the optimizer involves a series of time-consuming derivatives, and stores large computation graphs in memory. In this paper, we propose an efficient Riemannian meta-optimization method that decouples the complex computation scheme from the meta-gradient. We de…
28

Hickel, S., and N. A. Adams. "Implicit LES applied to zero-pressure-gradient and adverse-pressure-gradient boundary-layer turbulence." International Journal of Heat and Fluid Flow 29, no. 3 (2008): 626–39. http://dx.doi.org/10.1016/j.ijheatfluidflow.2008.03.008.

Full text
29

Poh, L. H., R. H. J. Peerlings, M. G. D. Geers, and S. Swaddiwudhipong. "An implicit tensorial gradient plasticity model – Formulation and comparison with a scalar gradient model." International Journal of Solids and Structures 48, no. 18 (2011): 2595–604. http://dx.doi.org/10.1016/j.ijsolstr.2011.05.019.

Full text
30

Buell, Thomas J., Daniel M. S. Raper, Dale Ding, et al. "Concurrent Venous Stenting of the Transverse and Occipito-Marginal Sinuses: An Analogy with Parallel Hemodynamic Circuits." Journal of Neurosciences in Rural Practice 10, no. 02 (2019): 334–38. http://dx.doi.org/10.4103/jnrp.jnrp_259_18.

Full text
Abstract:
Nonthrombotic intracranial venous occlusive disease (NIVOD) has been implicated in the pathophysiology of idiopathic intracranial hypertension (IIH) and various non-IIH headache syndromes. Endovascular stenting of stenotic, dominant transverse sinuses (TSs) may reduce trans-stenosis pressure gradients, decrease intracranial pressure, and alleviate symptoms in a subset of NIVOD patients. We present a case in which concurrent stenting of the occipito-marginal sinus obliterated the residual trans-stenosis pressure gradient across an initially stented dominant TS. We hypothesize that this…
31

Paramanantham, SalaiSargunan S., Dong-Hyun Kim, and Warn-Gyu Park. "Numerical Study of Bubble Behavior under Gradient Flows during Subcooled Flow Boiling in Vertical Flow Channel." Symmetry 12, no. 4 (2020): 611. http://dx.doi.org/10.3390/sym12040611.

Full text
Abstract:
In this study, we examined the condensing behavior of single and multiple bubbles of pure steam in a subcooled liquid phase using a fully compressible two-phase homogeneous mixture method that is solved by an implicit dual-time preconditioned technique. The interface between the liquid and vapor phases was determined by the advection equations using a compressive high-resolution interface capturing method. To reduce the spurious current near the interface, a smoothing filter is applied to the curvature calculation. A sensitivity study was carried out to predict the empirical constant by…
32

Li, Ye, Song-Can Chen, and Sheng-Jun Huang. "Implicit Stochastic Gradient Descent for Training Physics-Informed Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (2023): 8692–700. http://dx.doi.org/10.1609/aaai.v37i7.26046.

Full text
Abstract:
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems, but they are still trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features. In this paper, we propose to employ implicit stochastic gradient descent (ISGD) method to train PINNs for improving the stability of training process. We heuristically analyze how ISGD overcome stiffness in the gradient flow dynamics of PINNs, especially for problems with multi-scale solutions. We theoretically prove t…
33

Duan, Guangtao, and Bin Chen. "Comparison of parallel solvers for Moving Particle Semi-Implicit method." Engineering Computations 32, no. 3 (2015): 834–62. http://dx.doi.org/10.1108/ec-02-2014-0029.

Full text
Abstract:
Purpose – The purpose of this paper is to find the best solver for parallelizing particle methods based on solving Pressure Poisson Equation (PPE) by taking Moving Particle Semi-Implicit (MPS) method as an example because the solution for PPE is usually the most time-consuming part difficult to parallelize. Design/methodology/approach – To find the best solver, the authors compare six Krylov solvers, namely, Conjugate Gradient method (CG), Scaled Conjugate Gradient method (SCG), Bi-Conjugate Gradient Stabilized (BiCGStab) method, Conjugate Gradient Squared (CGS) method with Symmetric Lanczos A…
34

Singh, Manish K., N. Munikrishna, V. Ramesh, and N. Balakrishnan. "Implicit gradient reconstruction (IGR) method for compressible flow simulation." Journal of Physics: Conference Series 822 (April 11, 2017): 012030. http://dx.doi.org/10.1088/1742-6596/822/1/012030.

Full text
35

Fang, Yixin. "Scalable statistical inference for averaged implicit stochastic gradient descent." Scandinavian Journal of Statistics 46, no. 4 (2019): 987–1002. http://dx.doi.org/10.1111/sjos.12378.

Full text
36

Feng, Decheng, Xiaodan Ren, and Jie Li. "Implicit Gradient Delocalization Method for Force-Based Frame Element." Journal of Structural Engineering 142, no. 2 (2016): 04015122. http://dx.doi.org/10.1061/(asce)st.1943-541x.0001397.

Full text
37

Tovo, R., and P. Livieri. "An implicit gradient application to fatigue of complex structures." Engineering Fracture Mechanics 75, no. 7 (2008): 1804–14. http://dx.doi.org/10.1016/j.engfracmech.2007.01.020.

Full text
38

Zreid, Imadeddin, and Michael Kaliske. "An implicit gradient formulation for microplane Drucker-Prager plasticity." International Journal of Plasticity 83 (August 2016): 252–72. http://dx.doi.org/10.1016/j.ijplas.2016.04.013.

Full text
39

Bui, Q. V. "Initiation of damage with implicit gradient-enhanced damage models." International Journal of Solids and Structures 47, no. 18-19 (2010): 2425–35. http://dx.doi.org/10.1016/j.ijsolstr.2010.05.003.

Full text
40

Hadjidimos, A., and M. Lapidakis. "Optimal Alternating Direction Implicit Preconditioners for Conjugate Gradient methods." Applied Mathematics and Computation 183, no. 1 (2006): 559–74. http://dx.doi.org/10.1016/j.amc.2006.05.101.

Full text
41

Zaitzeff, Alexander, Selim Esedoḡlu, and Krishna Garikipati. "Variational Extrapolation of Implicit Schemes for General Gradient Flows." SIAM Journal on Numerical Analysis 58, no. 5 (2020): 2799–817. http://dx.doi.org/10.1137/19m1283963.

Full text
42

Geers, M. G. D., R. H. J. Peerlings, W. A. M. Brekelmans, and R. de Borst. "Phenomenological nonlocal approaches based on implicit gradient-enhanced damage." Acta Mechanica 144, no. 1-2 (2000): 1–15. http://dx.doi.org/10.1007/bf01181824.

Full text
43

Simone, A., H. Askes, R. H. J. Peerlings, and L. J. Sluys. "Interpolation requirements for implicit gradient-enhanced continuum damage models." Communications in Numerical Methods in Engineering 19, no. 7 (2003): 563–72. http://dx.doi.org/10.1002/cnm.597.

Full text
44

Simone, A., H. Askes, R. H. J. Peerlings, and L. J. Sluys. "‘Interpolation requirements for implicit gradient-enhanced continuum damage models’." Communications in Numerical Methods in Engineering 20, no. 2 (2004): 163–65. http://dx.doi.org/10.1002/cnm.663.

Full text
45

Xing, Weiwei, Yue Cheng, Hongzhu Yi, et al. "LCGC: Learning from Consistency Gradient Conflicting for Class-Imbalanced Semi-Supervised Debiasing." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 21697–706. https://doi.org/10.1609/aaai.v39i20.35474.

Full text
Abstract:
Classifiers often learn to be biased corresponding to the class-imbalanced dataset under the semi-supervised learning (SSL) set. While previous work tries to appropriately re-balance the classifiers by subtracting a class-irrelevant image's logit, we further utilize a cheaper form of consistency gradients, which can be widely applicable to various class-imbalanced SSL (CISSL) models. We theoretically analyze that the process of refining pseudo-labels with a baseline image (solid color image without any patterns) in the basic SSL algorithm implicitly utilizes integrated gradient flow training,…
46

Vardi, Gal. "On the Implicit Bias in Deep-Learning Algorithms." Communications of the ACM 66, no. 6 (2023): 86–93. http://dx.doi.org/10.1145/3571070.

Full text
47

Guo, Minghao, and Yan Gao. "Adaptive Approximate Implicitization of Planar Parametric Curves via Asymmetric Gradient Constraints." Symmetry 15, no. 9 (2023): 1738. http://dx.doi.org/10.3390/sym15091738.

Full text
Abstract:
Converting a parametric curve into the implicit form, which is called implicitization, has always been a popular but challenging problem in geometric modeling and related applications. However, existing methods mostly suffer from the problems of maintaining geometric features and choosing a reasonable implicit degree. The present paper has two contributions. We first introduce a new regularization constraint (called the asymmetric gradient constraint) for both polynomial and non-polynomial curves, which efficiently possesses shape-preserving. We then propose two adaptive algorithms of approxim…
48

Jagtap, Ameya D., Kenji Kawaguchi, and George Em Karniadakis. "Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 476, no. 2239 (2020): 20200334. http://dx.doi.org/10.1098/rspa.2020.0334.

Full text
Abstract:
We propose two approaches of locally adaptive activation functions namely, layer-wise and neuron-wise locally adaptive activation functions, which improve the performance of deep and physics-informed neural networks. The local adaptation of activation function is achieved by introducing a scalable parameter in each layer (layer-wise) and for every neuron (neuron-wise) separately, and then optimizing it using a variant of stochastic gradient descent algorithm. In order to further increase the training speed, an activation slope-based slope recovery term is added in the loss function, which furt…
49

Lim, Jinwoo, Suhyun Kim, and Soo-Mook Moon. "Convergence Analysis of Federated Learning Methods Using Backward Error Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 18719–27. https://doi.org/10.1609/aaai.v39i18.34060.

Full text
Abstract:
Backward error analysis allows finding a modified loss function, which the parameter updates really follow under the influence of an optimization method. The additional loss terms included in this modified function are called the implicit regularizer. In this paper, we attempt to find the implicit regularizer for various federated learning algorithms on non-IID data distribution, and explain why each method shows different convergence behavior. We first show that the implicit regularizer of FedAvg disperses the gradient of each client from the average gradient, thus increasing the gradient variance…
50

Amid, Ehsan, and Manfred K. Warmuth. "An Implicit Form of Krasulina's k-PCA Update without the Orthonormality Constraint." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3179–86. http://dx.doi.org/10.1609/aaai.v34i04.5715.

Full text
Abstract:
We shed new insights on the two commonly used updates for the online k-PCA problem, namely, Krasulina's and Oja's updates. We show that Krasulina's update corresponds to a projected gradient descent step on the Stiefel manifold of orthonormal k-frames, while Oja's update amounts to a gradient descent step using the unprojected gradient. Following these observations, we derive a more implicit form of Krasulina's k-PCA update, i.e. a version that uses the information of the future gradient as much as possible. Most interestingly, our implicit Krasulina update avoids the costly QR-decomposition s…