Journal articles on the topic "Lipschitz neural network"

Listed below are the top 50 journal articles for research on the topic "Lipschitz neural network", with an abstract for each publication where one is available in the metadata.

1

Zhu, Zelong, Chunna Zhao, and Yaqun Huang. "Fractional order Lipschitz recurrent neural network with attention for long time series prediction". Journal of Physics: Conference Series 2813, no. 1 (August 1, 2024): 012015. http://dx.doi.org/10.1088/1742-6596/2813/1/012015.

Abstract:
Time series data prediction holds significant importance in various applications. In this study, we concentrate specifically on long time series prediction. Recurrent Neural Networks are widely recognized as a fundamental architecture for processing time series data effectively, but they encounter the gradient vanishing or gradient explosion problem on long series. To resolve the gradient problem and improve accuracy, the Fractional Order Lipschitz Recurrent Neural Network (FOLRNN) model is proposed in this paper to predict long time series. The proposed method uses Lipschitz continuity to alleviate the gradient problem and applies fractional order integration to compute the hidden states of the recurrent network; the intricate dynamics of long time series can be captured by fractional order calculus, giving more accurate predictions than Lipschitz Recurrent Neural Network models. Self-attention is then used to improve the feature representation: it describes the correlation of features and improves prediction performance. Experiments show that the FOLRNN model achieves better results than other methods.
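The abstract leans on fractional-order integration for the hidden-state update but does not spell out a discretization. A common numerical choice for fractional-order terms is the Grünwald–Letnikov approximation; the sketch below is a hypothetical illustration of that operator, not the authors' code.

```python
import numpy as np

def gl_weights(alpha, n):
    # Grünwald–Letnikov coefficients w_k = (-1)^k * C(alpha, k), built with
    # the stable recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k).
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_fractional_derivative(y, alpha, h):
    # Discrete order-alpha derivative of the sampled signal y (step h):
    # D^alpha y(t) ~ h^(-alpha) * sum_k w_k * y(t - k*h).
    y = np.asarray(y, dtype=float)
    w = gl_weights(alpha, len(y))
    out = np.array([np.dot(w[: t + 1], y[t::-1]) for t in range(len(y))])
    return out / h**alpha

# Sanity check: alpha = 1 reduces to the backward difference (y[t] - y[t-1]) / h.
print(gl_fractional_derivative([0.0, 1.0, 4.0, 9.0], alpha=1.0, h=1.0))
```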
2

Zhang, Huan, Pengchuan Zhang, and Cho-Jui Hsieh. "RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5757–64. http://dx.doi.org/10.1609/aaai.v33i01.33015757.

Abstract:
The Jacobian matrix (or the gradient for single-output networks) is directly related to many important properties of neural networks, such as the function landscape, stationary points, (local) Lipschitz constants and robustness to adversarial attacks. In this paper, we propose a recursive algorithm, RecurJac, to compute both upper and lower bounds for each element in the Jacobian matrix of a neural network with respect to the network's input, where the network can contain a wide range of activation functions. As a byproduct, we can efficiently obtain a (local) Lipschitz constant, which plays a crucial role in neural network robustness verification, as well as in the training stability of GANs. Experiments show that (local) Lipschitz constants produced by our method are of better quality than those from previous approaches, thus providing better robustness verification results. Our algorithm has polynomial time complexity, and its computation time is reasonable even for relatively large networks. Additionally, we use our bounds on the Jacobian matrix to characterize the landscape of the neural network, for example, to determine whether there exist stationary points in a local neighborhood.
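For context on what such algorithms tighten: the coarsest global Lipschitz upper bound for a feed-forward network with 1-Lipschitz activations (ReLU, tanh) is the product of the layers' spectral norms. A minimal sketch of that baseline bound, under the stated assumption on the activations; RecurJac's recursive Jacobian bounds are substantially tighter than this.

```python
import numpy as np

def product_lipschitz_bound(weight_matrices):
    # Coarse global bound: Lip(f) <= prod_i ||W_i||_2 when every activation
    # is 1-Lipschitz. Local analyses (RecurJac-style) improve on this.
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value
    return bound

weights = [np.random.randn(64, 32), np.random.randn(10, 64)]
print(product_lipschitz_bound(weights))
```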
3

Araujo, Alexandre, Benjamin Negrevergne, Yann Chevaleyre, and Jamal Atif. "On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6661–69. http://dx.doi.org/10.1609/aaai.v35i8.16824.

Abstract:
This paper tackles the problem of Lipschitz regularization of Convolutional Neural Networks. Lipschitz regularity is now established as a key property of modern deep learning with implications in training stability, generalization, robustness against adversarial examples, etc. However, computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard. Recent attempts from the literature introduce upper bounds to approximate this constant that are either efficient but loose or accurate but computationally expensive. In this work, by leveraging the theory of Toeplitz matrices, we introduce a new upper bound for convolutional layers that is both tight and easy to compute. Based on this result we devise an algorithm to train Lipschitz regularized Convolutional Neural Networks.
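A simple special case helps to see why structured-matrix theory makes such bounds cheap: a single-channel convolution with circular padding is a circulant operator, diagonalized by the 2D DFT, so its exact spectral norm is the largest magnitude of the padded kernel's Fourier transform. The sketch below covers only this circulant, single-channel case, not the paper's multi-channel Toeplitz bound.

```python
import numpy as np

def circular_conv_spectral_norm(kernel, input_shape):
    # A circular convolution is diagonalized by the 2D DFT, so its largest
    # singular value is max |FFT2(kernel zero-padded to the input size)|.
    # Exact for circular padding and a single channel only.
    transfer = np.fft.fft2(kernel, s=input_shape)
    return np.abs(transfer).max()

kernel = np.random.randn(3, 3)
print(circular_conv_spectral_norm(kernel, (32, 32)))
```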
4

Xu, Yuhui, Wenrui Dai, Yingyong Qi, Junni Zou, and Hongkai Xiong. "Iterative Deep Neural Network Quantization With Lipschitz Constraint". IEEE Transactions on Multimedia 22, no. 7 (July 2020): 1874–88. http://dx.doi.org/10.1109/tmm.2019.2949857.

5

Mohammad, Ibtihal J. "Neural Networks of the Rational r-th Powers of the Multivariate Bernstein Operators". BASRA JOURNAL OF SCIENCE 40, no. 2 (September 1, 2022): 258–73. http://dx.doi.org/10.29072/basjs.20220201.

Abstract:
In this study, a novel neural network for the rational powers of the multivariate Bernstein operators was developed; a positive integer is required by these networks. In the space of all real-valued continuous functions, the pointwise and uniform approximation theorems are introduced and examined first. After that, the Lipschitz space is used to study two key theorems. Additionally, some numerical examples are provided to demonstrate how well these neural networks approximate two test functions. The numerical outcomes demonstrate that as the input grows, the neural network provides a better approximation. Finally, the graphs used to represent these neural network approximations show the average error between the approximation and the test function.
6

Ibtihal.J.M and Ali J. Mohammad. "Neural Network of Multivariate Square Rational Bernstein Operators with Positive Integer Parameter". European Journal of Pure and Applied Mathematics 15, no. 3 (July 31, 2022): 1189–200. http://dx.doi.org/10.29020/nybg.ejpam.v15i3.4425.

Abstract:
This research defines a new neural network (NN) that depends upon a positive integer parameter using the multivariate square rational Bernstein polynomials. Some theorems for this network are proved, such as the pointwise and the uniform approximation theorems. Firstly, the absolute moment for a function that belongs to the Lipschitz space is defined to estimate the order of the NN. Secondly, some numerical applications for this NN are given by taking two test functions. Finally, the numerical results for this network are compared with the classical neural networks (NNs). The results show that the new network is better than the classical one.
7

Liu, Kanglin, and Guoping Qiu. "Lipschitz constrained GANs via boundedness and continuity". Neural Computing and Applications 32, no. 24 (May 24, 2020): 18271–83. http://dx.doi.org/10.1007/s00521-020-04954-z.

Abstract:
One of the challenges in the study of generative adversarial networks (GANs) is the difficulty of controlling their performance. The Lipschitz constraint is essential to guarantee training stability for GANs. Although heuristic methods such as weight clipping, gradient penalty and spectral normalization have been proposed to enforce the Lipschitz constraint, it is still difficult to achieve a solution that is both practically effective and theoretically proven to satisfy a Lipschitz constraint. In this paper, we introduce the boundedness and continuity (BC) conditions to enforce the Lipschitz constraint on the discriminator functions of GANs. We prove theoretically that GANs with discriminators meeting the BC conditions satisfy the Lipschitz constraint. We present a practically very effective implementation of a GAN based on a convolutional neural network (CNN) by forcing the CNN to satisfy the BC conditions (BC-GAN). We show that, compared to recent techniques including gradient penalty and spectral normalization, BC-GANs have not only better performance but also lower computational complexity.
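Of the enforcement techniques this abstract compares against, spectral normalization is the easiest to sketch: divide each weight matrix by a power-iteration estimate of its largest singular value, so the linear map becomes approximately 1-Lipschitz. A minimal numpy sketch of that idea (not of BC-GAN itself):

```python
import numpy as np

def spectral_normalize(W, n_iter=20):
    # Power iteration estimates sigma_max(W); dividing by it makes the map
    # x -> (W / sigma) x approximately 1-Lipschitz in the l2 norm.
    u = np.random.randn(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v  # Rayleigh-quotient estimate of sigma_max
    return W / sigma

W = np.random.randn(128, 64)
print(np.linalg.norm(spectral_normalize(W), ord=2))  # close to 1.0
```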
8

Othmani, S., N. E. Tatar, and A. Khemmoudj. "Asymptotic behavior of a BAM neural network with delays of distributed type". Mathematical Modelling of Natural Phenomena 16 (2021): 29. http://dx.doi.org/10.1051/mmnp/2021023.

Abstract:
In this paper, we examine a Bidirectional Associative Memory neural network model with distributed delays. Using a result due to Cid [J. Math. Anal. Appl. 281 (2003) 264–275], we prove an exponential stability result in the case when the standard Lipschitz continuity condition is violated. Indeed, we deal with activation functions which may not be Lipschitz continuous, so the standard Halanay inequality is not applicable and we use a nonlinear version of it. In the end, the differential inequality that should imply exponential stability turns out to be 'state dependent'; that is, the usual constant depends in this case on the state itself. This adds some difficulties, which we overcome by a suitable argument.
9

Xia, Youshen. "An Extended Projection Neural Network for Constrained Optimization". Neural Computation 16, no. 4 (April 1, 2004): 863–83. http://dx.doi.org/10.1162/089976604322860730.

Abstract:
Recently, a projection neural network has been shown to be a promising computational model for solving variational inequality problems with box constraints. This letter presents an extended projection neural network for solving monotone variational inequality problems with linear and nonlinear constraints. In particular, the proposed neural network can include the projection neural network as a special case. Compared with the modified projection-type methods for solving constrained monotone variational inequality problems, the proposed neural network has a lower complexity and is suitable for parallel implementation. Furthermore, the proposed neural network is theoretically proven to be exponentially convergent to an exact solution without a Lipschitz condition. Illustrative examples show that the extended projection neural network can be used to solve constrained monotone variational inequality problems.
10

Li, Peiluan, Yuejing Lu, Changjin Xu, and Jing Ren. "Bifurcation Phenomenon and Control Technique in Fractional BAM Neural Network Models Concerning Delays". Fractal and Fractional 7, no. 1 (December 22, 2022): 7. http://dx.doi.org/10.3390/fractalfract7010007.

Abstract:
In this study, we formulate a new kind of fractional BAM neural network model with five neurons and time delays. First, we explore the existence and uniqueness of the solution of the formulated fractional delayed BAM neural network models via the Lipschitz condition. Second, we study the boundedness of the solution to the formulated fractional delayed BAM neural network models using a proper function. Third, we set up a novel sufficient criterion for the stability and the onset of Hopf bifurcation of the formulated fractional BAM neural network models by virtue of the stability criterion and bifurcation principle of fractional delayed dynamical systems. Fourth, a delayed feedback controller is applied to control the time of occurrence of the bifurcation and the stability domain of the formulated fractional delayed BAM neural network models. Lastly, software simulation figures are provided to verify the key outcomes. The theoretical outcomes obtained through this exploration can play a vital role in controlling and devising networks.
11

Wei Bian y Xiaojun Chen. "Smoothing Neural Network for Constrained Non-Lipschitz Optimization With Applications". IEEE Transactions on Neural Networks and Learning Systems 23, n.º 3 (marzo de 2012): 399–411. http://dx.doi.org/10.1109/tnnls.2011.2181867.

12

Chen, Xin, Yujuan Si, Zhanyuan Zhang, Wenke Yang, and Jianchao Feng. "Improving Adversarial Robustness of ECG Classification Based on Lipschitz Constraints and Channel Activation Suppression". Sensors 24, no. 9 (May 6, 2024): 2954. http://dx.doi.org/10.3390/s24092954.

Abstract:
Deep neural networks (DNNs) are increasingly important in the medical diagnosis of electrocardiogram (ECG) signals. However, research has shown that DNNs are highly vulnerable to adversarial examples, which can be created by carefully crafted perturbations; this vulnerability can lead to potential medical accidents and poses new challenges for the application of DNNs to the medical diagnosis of ECG signals. This paper proposes a novel network, the Channel Activation Suppression with Lipschitz Constraints Net (CASLCNet), which employs the Channel-wise Activation Suppressing (CAS) strategy to dynamically adjust the contribution of different channels to the class prediction and uses a 1-Lipschitz ℓ∞-distance network as a robust classifier to reduce the impact of adversarial perturbations on the model itself, increasing the adversarial robustness of the model. The experimental results demonstrate that CASLCNet achieves ACCrobust scores of 91.03% and 83.01% when subjected to PGD attacks on the MIT-BIH and CPSC2018 datasets, respectively, which proves that the proposed method enhances the model's adversarial robustness while maintaining a high accuracy rate.
13

Zhang, Chi, Wenjie Ruan, and Peipei Xu. "Reachability Analysis of Neural Network Control Systems". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15287–95. http://dx.doi.org/10.1609/aaai.v37i12.26783.

Abstract:
Neural network controllers (NNCs) have shown great promise in autonomous and cyber-physical systems. Despite the various verification approaches for neural networks, the safety analysis of NNCs remains an open problem. Existing verification approaches for neural network control systems (NNCSs) either can only work on a limited type of activation functions, or result in non-trivial over-approximation errors as time evolves. This paper proposes a verification framework for NNCSs based on Lipschitzian optimisation, called DeepNNC. We first prove the Lipschitz continuity of closed-loop NNCSs by unrolling and eliminating the loops. We then reveal the working principles of applying Lipschitzian optimisation to NNCS verification and illustrate it by verifying an adaptive cruise control model. Compared to state-of-the-art verification approaches, DeepNNC shows superior performance in terms of efficiency and accuracy over a wide range of NNCs. We also provide a case study to demonstrate the capability of DeepNNC to handle a real-world, practical, and complex system. Our tool DeepNNC is available at https://github.com/TrustAI/DeepNNC.
14

Yu, Hongshan, Jinzhu Peng, and Yandong Tang. "Identification of Nonlinear Dynamic Systems Using Hammerstein-Type Neural Network". Mathematical Problems in Engineering 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/959507.

Abstract:
The Hammerstein model has been widely applied to identify nonlinear systems. In this paper, a Hammerstein-type neural network (HTNN) is derived to formulate the well-known Hammerstein model. The HTNN consists of a nonlinear static gain in cascade with a linear dynamic part. First, the Lipschitz criterion for order determination is derived. Second, the backpropagation algorithm for updating the network weights is presented, and a stability analysis is also given. Finally, simulation results show that the HTNN identification approach achieves good identification performance.
15

Xin, YU, WU Lingzhen, XIE Mian, WANG Yanlin, XU Liuming, LU Huixia, and XU Chenhua. "Smoothing Neural Network for Non‐Lipschitz Optimization with Linear Inequality Constraints". Chinese Journal of Electronics 30, no. 4 (July 2021): 634–43. http://dx.doi.org/10.1049/cje.2021.05.005.

16

Zhao, Chunna, Junjie Ye, Zelong Zhu, and Yaqun Huang. "FLRNN-FGA: Fractional-Order Lipschitz Recurrent Neural Network with Frequency-Domain Gated Attention Mechanism for Time Series Forecasting". Fractal and Fractional 8, no. 7 (July 22, 2024): 433. http://dx.doi.org/10.3390/fractalfract8070433.

Abstract:
Time series forecasting has played an important role in different industries, including economics, energy, weather, and healthcare. RNN-based methods have shown promising potential due to their strong ability to model the interaction of time and variables. However, they are prone to gradient issues like gradient explosion and vanishing gradients, and their prediction accuracy is limited. To address these issues, this paper proposes a Fractional-order Lipschitz Recurrent Neural Network with a Frequency-domain Gated Attention mechanism (FLRNN-FGA). There are three major components: the Fractional-order Lipschitz Recurrent Neural Network (FLRNN), a frequency module, and a gated attention mechanism. In the FLRNN, fractional-order integration is employed to describe the dynamic systems accurately; it can capture long-term dependencies and improve prediction accuracy. Lipschitz weight matrices are applied to alleviate the gradient issues. In the frequency module, temporal data are transformed into the frequency domain by the Fourier transform; frequency-domain processing reduces the computational complexity of the model. In the gated attention mechanism, the gated structure regulates attention information transmission to reduce the number of model parameters. Extensive experimental results on five real-world benchmark datasets demonstrate the effectiveness of FLRNN-FGA compared with state-of-the-art methods.
17

Liang, Youwei, and Dong Huang. "Large Norms of CNN Layers Do Not Hurt Adversarial Robustness". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8565–73. http://dx.doi.org/10.1609/aaai.v35i10.17039.

Abstract:
Since the Lipschitz properties of convolutional neural networks (CNNs) are widely considered to be related to adversarial robustness, we theoretically characterize the L-1 norm and L-infinity norm of 2D multi-channel convolutional layers and provide efficient methods to compute the exact L-1 norm and L-infinity norm. Based on our theorem, we propose a novel regularization method termed norm decay, which can effectively reduce the norms of convolutional layers and fully-connected layers. Experiments show that norm-regularization methods, including norm decay, weight decay, and singular value clipping, can improve generalization of CNNs. However, they can slightly hurt adversarial robustness. Observing this unexpected phenomenon, we compute the norms of layers in the CNNs trained with three different adversarial training frameworks and surprisingly find that adversarially robust CNNs have comparable or even larger layer norms than their non-adversarially robust counterparts. Furthermore, we prove that under a mild assumption, adversarially robust classifiers can be achieved using neural networks, and an adversarially robust neural network can have an arbitrarily large Lipschitz constant. For this reason, enforcing small norms on CNN layers may be neither necessary nor effective in achieving adversarial robustness. The code is available at https://github.com/youweiliang/norm_robustness.
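For a dense layer the analogous norms have simple closed forms, which is what makes exact computation plausible: the ℓ∞→ℓ∞ operator norm of a matrix is its maximum absolute row sum, and the ℓ1→ℓ1 norm its maximum absolute column sum. A minimal sketch of these standard identities (the dense-layer analogue, not the paper's multi-channel convolution formulas):

```python
import numpy as np

def linf_operator_norm(W):
    # ||W||_inf = max absolute row sum: worst-case output amplification
    # for inputs bounded in the l-infinity norm.
    return np.abs(W).sum(axis=1).max()

def l1_operator_norm(W):
    # ||W||_1 = max absolute column sum.
    return np.abs(W).sum(axis=0).max()

W = np.random.randn(10, 20)
print(linf_operator_norm(W), l1_operator_norm(W))
```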
18

Zhuo, Li’an, Baochang Zhang, Chen Chen, Qixiang Ye, Jianzhuang Liu, and David Doermann. "Calibrated Stochastic Gradient Descent for Convolutional Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9348–55. http://dx.doi.org/10.1609/aaai.v33i01.33019348.

Abstract:
In stochastic gradient descent (SGD) and its variants, the optimized gradient estimators may be as expensive to compute as the true gradient in many scenarios. This paper introduces a calibrated stochastic gradient descent (CSGD) algorithm for deep neural network optimization. A theorem is developed to prove that an unbiased estimator for the network variables can be obtained in a probabilistic way based on the Lipschitz hypothesis. Our work is significantly distinct from existing gradient optimization methods in providing a theoretical framework for unbiased variable estimation in the deep learning paradigm to optimize the model parameter calculation. In particular, we develop a generic gradient calibration layer which can be easily used to build convolutional neural networks (CNNs). Experimental results demonstrate that CNNs with our CSGD optimization scheme can improve the state-of-the-art performance for natural image classification, digit recognition, ImageNet object classification, and object detection tasks. This work opens new research directions for developing more efficient SGD updates and analyzing the backpropagation algorithm.
19

Lippl, Samuel, Benjamin Peters, and Nikolaus Kriegeskorte. "Can neural networks benefit from objectives that encourage iterative convergent computations? A case study of ResNets and object classification". PLOS ONE 19, no. 3 (March 21, 2024): e0293440. http://dx.doi.org/10.1371/journal.pone.0293440.

Abstract:
Recent work has suggested that feedforward residual neural networks (ResNets) approximate iterative recurrent computations. Iterative computations are useful in many domains, so they might provide good solutions for neural networks to learn. However, principled methods for measuring and manipulating iterative convergence in neural networks remain lacking. Here we address this gap by 1) quantifying the degree to which ResNets learn iterative solutions and 2) introducing a regularization approach that encourages the learning of iterative solutions. Iterative methods are characterized by two properties: iteration and convergence. To quantify these properties, we define three indices of iterative convergence. Consistent with previous work, we show that, even though ResNets can express iterative solutions, they do not learn them when trained conventionally on computer-vision tasks. We then introduce regularizations to encourage iterative convergent computation and test whether this provides a useful inductive bias. To make the networks more iterative, we manipulate the degree of weight sharing across layers using soft gradient coupling. This new method provides a form of recurrence regularization and can interpolate smoothly between an ordinary ResNet and a “recurrent” ResNet (i.e., one that uses identical weights across layers and thus could be physically implemented with a recurrent network computing the successive stages iteratively across time). To make the networks more convergent we impose a Lipschitz constraint on the residual functions using spectral normalization. The three indices of iterative convergence reveal that the gradient coupling and the Lipschitz constraint succeed at making the networks iterative and convergent, respectively. To showcase the practicality of our approach, we study how iterative convergence impacts generalization on standard visual recognition tasks (MNIST, CIFAR-10, CIFAR-100) or challenging recognition tasks with partial occlusions (Digitclutter). We find that iterative convergent computation, in these tasks, does not provide a useful inductive bias for ResNets. Importantly, our approach may be useful for investigating other network architectures and tasks as well and we hope that our study provides a useful starting point for investigating the broader question of whether iterative convergence can help neural networks in their generalization.
20

Feyzdar, Mahdi, Ahmad Reza Vali, and Valiollah Babaeipour. "Identification and Optimization of Recombinant E. coli Fed-Batch Fermentation Producing γ-Interferon Protein". International Journal of Chemical Reactor Engineering 11, no. 1 (June 18, 2013): 123–34. http://dx.doi.org/10.1515/ijcre-2012-0081.

Abstract:
A novel approach to the identification of fed-batch cultivation of E. coli BL21 (DE3) is presented. The process has been identified in a system designed for maximum production of γ-interferon protein. The dynamic order of the process has been determined by the Lipschitz test. A multilayer perceptron neural network has been used for process identification from experimental data. The optimal brain surgeon method is used to reduce the model complexity so that the model can be easily implemented. Validation results, based on the autocorrelation function of the residuals, show good performance of the neural network and make it possible to use it in process analysis.
21

Stamova, Ivanka, Trayan Stamov, and Gani Stamov. "Lipschitz stability analysis of fractional-order impulsive delayed reaction-diffusion neural network models". Chaos, Solitons & Fractals 162 (September 2022): 112474. http://dx.doi.org/10.1016/j.chaos.2022.112474.

22

Chen, Yu-Wen, Ming-Li Chiang, and Li-Chen Fu. "Adaptive Formation Control for Multiple Quadrotors with Nonlinear Uncertainties Using Lipschitz Neural Network". IFAC-PapersOnLine 56, no. 2 (2023): 8714–19. http://dx.doi.org/10.1016/j.ifacol.2023.10.053.

23

Li, Wenjing, Wei Bian, and Xiaoping Xue. "Projected Neural Network for a Class of Non-Lipschitz Optimization Problems With Linear Constraints". IEEE Transactions on Neural Networks and Learning Systems 31, no. 9 (September 2020): 3361–73. http://dx.doi.org/10.1109/tnnls.2019.2944388.

24

Akrour, Riad, Asma Atamna, and Jan Peters. "Convex optimization with an interpolation-based projection and its application to deep learning". Machine Learning 110, no. 8 (July 19, 2021): 2267–89. http://dx.doi.org/10.1007/s10994-021-06037-z.

Abstract:
Convex optimizers have found many applications as differentiable layers within deep neural architectures. One application of these convex layers is to project points into a convex set. However, both forward and backward passes of these convex layers are significantly more expensive to compute than those of a typical neural network. We investigate in this paper whether an inexact, but cheaper, projection can drive a descent algorithm to an optimum. Specifically, we propose an interpolation-based projection that is computationally cheap and easy to compute given a convex, domain-defining function. We then propose an optimization algorithm that follows the gradient of the composition of the objective and the projection and prove its convergence for linear objectives and arbitrary convex and Lipschitz domain-defining inequality constraints. In addition to the theoretical contributions, we demonstrate empirically the practical interest of the interpolation projection when used in conjunction with neural networks in a reinforcement learning and a supervised learning setting.
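The abstract does not give the interpolation rule itself, but the idea of a cheap, inexact projection onto {z : g(z) ≤ 0} can be sketched as a line search between the query point and a known interior point of the set. The bisection below is a hypothetical illustration under the assumptions that g is convex, g(x_interior) < 0, and g(x) > 0; it is not the paper's exact construction.

```python
import numpy as np

def interpolation_projection(g, x_interior, x, tol=1e-8):
    # Bisect along the segment [x_interior, x] for the boundary point of
    # the convex set {z : g(z) <= 0}; cheaper than an exact Euclidean
    # projection, at the cost of some inexactness.
    lo, hi = 0.0, 1.0  # lo stays feasible, hi stays infeasible
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g((1.0 - mid) * x_interior + mid * x) <= 0.0:
            lo = mid
        else:
            hi = mid
    return (1.0 - lo) * x_interior + lo * x

# Example: "project" a point onto the unit ball {z : ||z||^2 - 1 <= 0}.
g = lambda z: float(z @ z) - 1.0
print(interpolation_projection(g, np.zeros(3), np.array([2.0, 2.0, 1.0])))
```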
25

Humphries, Usa, Grienggrai Rajchakit, Pramet Kaewmesri, Pharunyou Chanthorn, Ramalingam Sriraman, Rajendran Samidurai, and Chee Peng Lim. "Global Stability Analysis of Fractional-Order Quaternion-Valued Bidirectional Associative Memory Neural Networks". Mathematics 8, no. 5 (May 14, 2020): 801. http://dx.doi.org/10.3390/math8050801.

Abstract:
We study the global asymptotic stability problem with respect to the fractional-order quaternion-valued bidirectional associative memory neural network (FQVBAMNN) models in this paper. Whether the real and imaginary parts of quaternion-valued activation functions are expressed implicitly or explicitly, they are considered to meet the global Lipschitz condition in the quaternion field. New sufficient conditions are derived by applying the principle of homeomorphism, Lyapunov fractional-order method and linear matrix inequality (LMI) approach for the two cases of activation functions. The results confirm the existence, uniqueness and global asymptotic stability of the system’s equilibrium point. Finally, two numerical examples with their simulation results are provided to show the effectiveness of the obtained results.
26

Bensidhoum, Tarek, Farah Bouakrif, and Michel Zasadzinski. "Iterative learning radial basis function neural networks control for unknown multi input multi output nonlinear systems with unknown control direction". Transactions of the Institute of Measurement and Control 41, no. 12 (February 19, 2019): 3452–67. http://dx.doi.org/10.1177/0142331219826659.

Abstract:
In this paper, an iterative learning radial basis function neural network (RBF NN) control algorithm is developed for a class of unknown multi input multi output (MIMO) nonlinear systems with unknown control directions. The proposed control scheme is very simple in the sense that we use just a P-type iterative learning control (ILC) updating law in which an RBF neural network term is added to approximate the unknown nonlinear function, together with an adaptive law for the weights of the RBF neural network. We chose the RBF NN because it has universal approximation capabilities and can approximate any continuous function. Another advantage of our scheme is that it is applicable to a class of nonlinear systems without the need to satisfy the global Lipschitz continuity condition; we assume only that the unstructured uncertainty is norm-bounded by an unknown function. Moreover, unlike other works on ILC, we do not need any prior knowledge of the control directions for the MIMO nonlinear system: the Nussbaum-type function is used to solve the problem of unknown control directions. In order to prove the asymptotic stability of the closed-loop system, a Lyapunov-like positive definite sequence is used, which is shown to be monotonically decreasing under the control design scheme. Finally, an illustrative example is provided to demonstrate the effectiveness of the proposed control scheme.
27

Zhang, Fan, Heng-You Lan, and Hai-Yang Xu. "Generalized Hukuhara Weak Solutions for a Class of Coupled Systems of Fuzzy Fractional Order Partial Differential Equations without Lipschitz Conditions". Mathematics 10, no. 21 (October 30, 2022): 4033. http://dx.doi.org/10.3390/math10214033.

Abstract:
As is well known, the Lipschitz condition, which is very important for guaranteeing existence and uniqueness of solutions of differential equations, is frequently not satisfied in real-world problems. In this paper, without the Lipschitz condition, we explore a novel kind of coupled system of fuzzy Caputo generalized Hukuhara type (in short, gH-type) fractional partial differential equations. First and foremost, based on a series of notions of relative compactness in fuzzy number spaces, and using the Schauder fixed point theorem in Banach semilinear spaces, it is natural to prove the existence of two classes of gH-weak solutions for the coupled systems of fuzzy fractional partial differential equations. We then give an example to illustrate our main conclusions vividly and intuitively. As applications, combining the relevant definitions of fuzzy projection operators, and under some suitable conditions, existence results for two categories of gH-weak solutions of a class of new fuzzy fractional partial differential coupled projection neural network systems are also proposed, which differ from already published work. Finally, we outline some directions for future research.
28

Laurel, Jacob, Rem Yang, Shubham Ugare, Robert Nagel, Gagandeep Singh, and Sasa Misailovic. "A general construction for abstract interpretation of higher-order automatic differentiation". Proceedings of the ACM on Programming Languages 6, OOPSLA2 (October 31, 2022): 1007–35. http://dx.doi.org/10.1145/3563324.

Abstract:
We present a novel, general construction to abstractly interpret higher-order automatic differentiation (AD). Our construction allows one to instantiate an abstract interpreter for computing derivatives up to a chosen order. Furthermore, since our construction reduces the problem of abstractly reasoning about derivatives to abstractly reasoning about real-valued straight-line programs, it can be instantiated with almost any numerical abstract domain, both relational and non-relational. We formally establish the soundness of this construction. We implement our technique by instantiating our construction with both the non-relational interval domain and the relational zonotope domain to compute both first and higher-order derivatives. In the latter case, we are the first to apply a relational domain to automatic differentiation for abstracting higher-order derivatives, and hence we are also the first abstract interpretation work to track correlations across not only different variables, but different orders of derivatives. We evaluate these instantiations on multiple case studies, namely robustly explaining a neural network and more precisely computing a neural network’s Lipschitz constant. For robust interpretation, first and second derivatives computed via zonotope AD are up to 4.76× and 6.98× more precise, respectively, compared to interval AD. For Lipschitz certification, we obtain bounds that are up to 11,850× more precise with zonotopes, compared to the state-of-the-art interval-based tool.
29

Tatar, Nasser-Eddine. "Long Time Behavior for a System of Differential Equations with Non-Lipschitzian Nonlinearities". Advances in Artificial Neural Systems 2014 (September 14, 2014): 1–7. http://dx.doi.org/10.1155/2014/252674.

Abstract:
We consider a general system of nonlinear ordinary differential equations of first order. The nonlinearities involve distributed delays in addition to the states. In turn, the distributed delays involve nonlinear functions of the different variables and states. An explicit bound for solutions is obtained under some rather reasonable conditions. Several special cases of this system may be found in neural network theory. As a direct application of our result it is shown how to obtain global existence and, more importantly, convergence to zero at an exponential rate in a certain norm. All these nonlinearities (including the activation functions) may be non-Lipschitz and unbounded.
30

Li, Jia, Cong Fang, and Zhouchen Lin. "Lifted Proximal Operator Machines". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4181–88. http://dx.doi.org/10.1609/aaai.v33i01.33014181.

Abstract:
We propose a new optimization method for training feedforward neural networks. By rewriting the activation function as an equivalent proximal operator, we approximate a feedforward neural network by adding the proximal operators to the objective function as penalties; hence we call it the lifted proximal operator machine (LPOM). LPOM is block multiconvex in all layer-wise weights and activations. This allows us to use block coordinate descent to update the layer-wise weights and activations. Most notably, we only use the mapping of the activation function itself, rather than its derivative, thus avoiding the gradient vanishing or blow-up issues in gradient-based training methods. So our method is applicable to various non-decreasing Lipschitz continuous activation functions, which can be saturating and non-differentiable. LPOM does not require more auxiliary variables than the layer-wise activations, thus using roughly the same amount of memory as stochastic gradient descent (SGD) does. Its parameter tuning is also much simpler. We further prove the convergence of updating the layer-wise weights and activations and point out that the optimization could be made parallel by asynchronous update. Experiments on the MNIST and CIFAR-10 datasets testify to the advantages of LPOM.
31

Cantarini, Marco, Lucian Coroianu, Danilo Costarelli, Sorin G. Gal, and Gianluca Vinti. "Inverse Result of Approximation for the Max-Product Neural Network Operators of the Kantorovich Type and Their Saturation Order". Mathematics 10, no. 1 (December 25, 2021): 63. http://dx.doi.org/10.3390/math10010063.

Abstract:
In this paper, we consider the max-product neural network operators of the Kantorovich type based on certain linear combinations of sigmoidal and ReLU activation functions. In general, it is well-known that max-product type operators have applications in problems related to probability and fuzzy theory, involving both real and interval/set valued functions. In particular, here we face inverse approximation problems for the above family of sub-linear operators. We first establish their saturation order for a certain class of functions; i.e., we show that if a continuous and non-decreasing function f can be approximated by a rate of convergence higher than 1/n, as n goes to +∞, then f must be a constant. Furthermore, we prove a local inverse theorem of approximation; i.e., assuming that f can be approximated with a rate of convergence of 1/n, then f turns out to be a Lipschitz continuous function.
32

Zhao, Liquan, and Yan Liu. "Spectral Normalization for Domain Adaptation". Information 11, no. 2 (January 27, 2020): 68. http://dx.doi.org/10.3390/info11020068.

Abstract:
The transfer learning method is used to extend our existing model to more difficult scenarios, thereby accelerating the training process and improving learning performance. The conditional adversarial domain adaptation method proposed in 2018 is a particular type of transfer learning. It uses the domain discriminator to identify which images the extracted features belong to. The features are obtained from the feature extraction network. The stability of the domain discriminator directly affects the classification accuracy. Here, we propose a new algorithm to improve the predictive accuracy. First, we introduce the Lipschitz constraint condition into domain adaptation. If the constraint condition can be satisfied, the method will be stable. Second, we analyze how to make the gradient satisfy the condition, thereby deducing the modified gradient via the spectrum regularization method. The modified gradient is then used to update the parameter matrix. The proposed method is compared to the ResNet-50, deep adaptation network, domain adversarial neural network, joint adaptation network, and conditional domain adversarial network methods using the datasets that are found in Office-31, ImageCLEF-DA, and Office-Home. The simulations demonstrate that the proposed method has a better performance than other methods with respect to accuracy.
33

Pantoja-Garcia, Luis, Vicente Parra-Vega, Rodolfo Garcia-Rodriguez, and Carlos Ernesto Vázquez-García. "A Novel Actor—Critic Motor Reinforcement Learning for Continuum Soft Robots". Robotics 12, no. 5 (October 9, 2023): 141. http://dx.doi.org/10.3390/robotics12050141.

Abstract:
Reinforcement learning (RL) is explored for motor control of a novel pneumatic-driven soft robot modeled after continuum media with a varying density. This model complies with closed-form Lagrangian dynamics, which fulfills the fundamental structural property of passivity, among others. Then, the question arises of how to synthesize a passivity-based RL model to control the unknown continuum soft robot dynamics to exploit its input–output energy properties advantageously throughout a reward-based neural network controller. Thus, we propose a continuous-time Actor–Critic scheme for tracking tasks of the continuum 3D soft robot subject to Lipschitz disturbances. A reward-based temporal difference leads to learning with a novel discontinuous adaptive mechanism of Critic neural weights. Finally, the reward and integral of the Bellman error approximation reinforce the adaptive mechanism of Actor neural weights. Closed-loop stability is guaranteed in the sense of Lyapunov, which leads to local exponential convergence of tracking errors based on integral sliding modes. Notably, it is assumed that dynamics are unknown, yet the control is continuous and robust. A representative simulation study shows the effectiveness of our proposal for tracking tasks.
34

Van, Mien. "Higher-order terminal sliding mode controller for fault accommodation of Lipschitz second-order nonlinear systems using fuzzy neural network". Applied Soft Computing 104 (June 2021): 107186. http://dx.doi.org/10.1016/j.asoc.2021.107186.

35

Jiao, Yulin, Feng Xiao, Wenjuan Zhang, Shujuan Huang, Hao Lu, and Zhaoting Lu. "Image Inpainting based on Gated Convolution and spectral Normalization". Frontiers in Computing and Intelligent Systems 6, no. 2 (December 5, 2023): 96–100. http://dx.doi.org/10.54097/wkezn917.

Abstract:
Traditional deep-learning-based image inpainting methods, because of the characteristics of models built from convolutional layers, do not sufficiently discriminate between the missing region and global information during feature extraction. At the same time, traditional generative adversarial networks often suffer from training difficulty and mode collapse. To solve these problems and improve the repair quality of the model, this paper proposes a dual-discriminator image inpainting model based on a generative adversarial network that combines gated convolution and spectral normalization. The model is mainly composed of an image inpainting module and an image recognition module. Traditional inpainting models treat all input pixels as valid when extracting features of the image to be inpainted, which is unreasonable for the inpainting task; to address this, gated convolutions replace ordinary convolutions in the inpainting module, providing learnable dynamic feature-selection mechanisms for each channel at each spatial location in all layers. To stabilize training, a spectral normalization mechanism is introduced into the convolutional layers of the discriminator module: enforcing a Lipschitz continuity constraint through the spectral norm of each layer's parameter matrix makes the network less sensitive to input perturbations, so the training process is more stable and easier to converge, alleviating the mode collapse and training difficulty of GAN-based inpainting models. Finally, qualitative and quantitative experiments show that the proposed model solves the above problems and that the inpainted images have reasonable texture structure and contextual semantic information.
36

Li, Cuiying, Rui Wu, and Ranzhuo Ma. "Existence of solutions for Caputo fractional iterative equations under several boundary value conditions". AIMS Mathematics 8, no. 1 (2022): 317–39. http://dx.doi.org/10.3934/math.2023015.

Abstract:
In this paper, we investigate the existence and uniqueness of solutions for nonlinear quadratic iterative equations in the sense of the Caputo fractional derivative with different boundary conditions. Under a one-sided Lipschitz condition on the nonlinear term, the existence and uniqueness of a solution for the boundary value problems of Caputo fractional iterative equations with arbitrary order is demonstrated by applying the Leray-Schauder fixed point theorem and topological degree theory, where the solution for the case of fractional order greater than 1 is monotonic. Then, the existence and uniqueness of a solution for the periodic and integral boundary value problems of Caputo fractional quadratic iterative equations in R^N are also demonstrated. Furthermore, the well-posedness of the control problem of a nonlinear iteration system with a disturbance is established by applying set-valued theory, and the existence of solutions for a neural network iterative system is guaranteed. As an application, an example is provided at the end.
37

Tong, Qingbin, Feiyu Lu, Ziwei Feng, Qingzhu Wan, Guoping An, Junci Cao, and Tao Guo. "A Novel Method for Fault Diagnosis of Bearings with Small and Imbalanced Data Based on Generative Adversarial Networks". Applied Sciences 12, no. 14 (July 21, 2022): 7346. http://dx.doi.org/10.3390/app12147346.

Abstract:
The data-driven intelligent fault diagnosis method of rolling bearings has strict requirements regarding the number and balance of fault samples. However, in practical engineering application scenarios, mechanical equipment is usually in a normal state, and small and imbalanced (S & I) fault samples are common, which seriously reduces the accuracy and stability of the fault diagnosis model. To solve this problem, an auxiliary classifier generative adversarial network with spectral normalization (ACGAN-SN) is proposed in this paper. First, a generation module based on a deconvolution layer is built to generate fake data from Gaussian noise. Second, to enhance the training stability of the model, the data label information is used to impose label constraints on the generated fake data under the basic GAN framework. Spectral normalization constraints are imposed on the output of each layer of the discriminator's neural network to realize the Lipschitz continuity condition and avoid vanishing or exploding gradients. Finally, based on the generated data and the original S & I dataset, seven kinds of bearing fault datasets are built, and the prediction results of the Bi-directional Long Short-Term Memory (BiLSTM) model are verified. The results show that the data generated by ACGAN-SN can significantly improve the performance of the fault diagnosis model under S & I fault samples.
38

Pauli, Patricia, Anne Koch, Julian Berberich, Paul Kohler, and Frank Allgower. "Training Robust Neural Networks Using Lipschitz Bounds". IEEE Control Systems Letters 6 (2022): 121–26. http://dx.doi.org/10.1109/lcsys.2021.3050444.

39

Negrini, Elisa, Giovanna Citti, and Luca Capogna. "System identification through Lipschitz regularized deep neural networks". Journal of Computational Physics 444 (November 2021): 110549. http://dx.doi.org/10.1016/j.jcp.2021.110549.

40

Zou, Dongmian, Radu Balan, and Maneesh Singh. "On Lipschitz Bounds of General Convolutional Neural Networks". IEEE Transactions on Information Theory 66, no. 3 (March 2020): 1738–59. http://dx.doi.org/10.1109/tit.2019.2961812.

41

Laurel, Jacob, Rem Yang, Gagandeep Singh, and Sasa Misailovic. "A dual number abstraction for static analysis of Clarke Jacobians". Proceedings of the ACM on Programming Languages 6, POPL (January 16, 2022): 1–30. http://dx.doi.org/10.1145/3498718.

Abstract:
We present a novel abstraction for bounding the Clarke Jacobian of a Lipschitz continuous, but not necessarily differentiable function over a local input region. To do so, we leverage a novel abstract domain built upon dual numbers, adapted to soundly over-approximate all first derivatives needed to compute the Clarke Jacobian. We formally prove that our novel forward-mode dual interval evaluation produces a sound, interval domain-based over-approximation of the true Clarke Jacobian for a given input region. Due to the generality of our formalism, we can compute and analyze interval Clarke Jacobians for a broader class of functions than previous works supported – specifically, arbitrary compositions of neural networks with Lipschitz, but non-differentiable perturbations. We implement our technique in a tool called DeepJ and evaluate it on multiple deep neural networks and non-differentiable input perturbations to showcase both the generality and scalability of our analysis. Concretely, we can obtain interval Clarke Jacobians to analyze Lipschitz robustness and local optimization landscapes of both fully-connected and convolutional neural networks for rotational, contrast variation, and haze perturbations, as well as their compositions.
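The paper lifts dual numbers to interval-valued coefficients; the scalar forward-mode core that it generalizes can be sketched in a few lines. This is the classical dual-number construction, shown only as background, with the interval lifting omitted:

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    val: float  # function value
    dot: float  # derivative with respect to the seeded input

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # Product rule carried by the dual component.
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def tanh(x: Dual) -> Dual:
    t = math.tanh(x.val)
    return Dual(t, (1.0 - t * t) * x.dot)  # chain rule

# d/dx [x * tanh(x)] at x = 0.5, seeded with dot = 1.
x = Dual(0.5, 1.0)
y = x * tanh(x)
print(y.val, y.dot)
```

Roughly speaking, replacing the float fields with outward-rounded intervals is what yields sound enclosures in the non-differentiable setting the paper targets.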
42

García Cabello, Julia. "Mathematical Neural Networks". Axioms 11, no. 2 (February 17, 2022): 80. http://dx.doi.org/10.3390/axioms11020080.

Abstract:
ANNs succeed in several tasks for real scenarios due to their high learning abilities. This paper focuses on theoretical aspects of ANNs to enhance the capacity to implement those modifications that make ANNs absorb the defining features of each scenario. This work may also be encompassed within the trend devoted to providing mathematical explanations of ANN performance, with special attention to activation functions. The base algorithm has been mathematically decoded to analyse the required features of activation functions regarding their impact on the training process and on the applicability of the Universal Approximation Theorem. In particular, significant new results to identify those activation functions which undergo some usual failings (gradient preserving) are presented here. This is the first paper (to the best of the author's knowledge) that stresses the role of injectivity for activation functions, which has received scant attention in the literature but has a great impact on ANN performance. In this line, a characterization of injective activation functions is provided, related to monotonic functions which satisfy the classical contractive condition as a particular case of Lipschitz functions. A summary table is also provided, documenting how to select the best activation function for each situation.
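The contractive condition mentioned here is a Lipschitz bound with constant strictly below one. A crude numerical sanity check of a scalar activation's Lipschitz constant on an interval can be done with finite differences; the sketch below is such a check, not the paper's analytic characterization.

```python
import numpy as np

def empirical_lipschitz(f, lo=-5.0, hi=5.0, n=100001):
    # Finite-difference estimate of sup |f'(x)| on [lo, hi]; this is a
    # lower bound on the true Lipschitz constant over that interval.
    xs = np.linspace(lo, hi, n)
    slopes = np.abs(np.diff(f(xs)) / np.diff(xs))
    return slopes.max()

print(empirical_lipschitz(np.tanh))                      # ~1.0: 1-Lipschitz, not contractive
print(empirical_lipschitz(lambda x: 0.5 * np.tanh(x)))   # ~0.5: contractive
```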
43

Ma, Shuo, and Yanmei Kang. "Exponential synchronization of delayed neutral-type neural networks with Lévy noise under non-Lipschitz condition". Communications in Nonlinear Science and Numerical Simulation 57 (April 2018): 372–87. http://dx.doi.org/10.1016/j.cnsns.2017.10.012.

44

Neumayer, Sebastian, Alexis Goujon, Pakshal Bohra, and Michael Unser. "Approximation of Lipschitz Functions Using Deep Spline Neural Networks". SIAM Journal on Mathematics of Data Science 5, no. 2 (May 15, 2023): 306–22. http://dx.doi.org/10.1137/22m1504573.

45

Song, Xueli, and Jigen Peng. "Global Asymptotic Stability of Impulsive CNNs with Proportional Delays and Partially Lipschitz Activation Functions". Abstract and Applied Analysis 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/832892.

Abstract:
This paper studies the global asymptotic stability of impulsive cellular neural networks with proportional delays and partially Lipschitz activation functions. Firstly, by means of the transformation v_i(t) = u_i(e^t), the impulsive cellular neural networks with proportional delays are transformed into impulsive cellular neural networks with variable coefficients and constant delays. Secondly, we provide novel criteria for the uniqueness and exponential stability of the equilibrium point of the latter by the relative nonlinear measure, and prove that exponential stability of the equilibrium point of the latter implies asymptotic stability of the equilibrium point of the former. We furthermore obtain a sufficient condition for the uniqueness and global asymptotic stability of the equilibrium point of the former. Our method does not require conventional assumptions on global Lipschitz continuity, boundedness, and monotonicity of activation functions. Our results are generalizations and improvements of some existing ones. Finally, an example and its simulations are provided to illustrate the correctness of our analysis.
46

Han, Fangfang, Bin Liu, Junchao Zhu, and Baofeng Zhang. "Algorithm Design for Edge Detection of High-Speed Moving Target Image under Noisy Environment". Sensors 19, no. 2 (January 16, 2019): 343. http://dx.doi.org/10.3390/s19020343.

Abstract:
For some measurement and detection applications based on video (image sequences), if the exposure time of the camera is not matched to the motion speed of the photographed target, fuzzy edges are produced in the image, and poor lighting conditions aggravate this edge blur. In particular, the noise present in industrial field environments makes extracting fuzzy edges a harder problem when analyzing the posture of a high-speed moving target. Because noise and edges are both high-frequency information, it is difficult to separate them by frequency band alone. In this paper, a noise-tolerant edge detection method based on the correlation between layers of wavelet transform coefficients is proposed. The goal of the paper is not to recover a clean image from a noisy observation, but to discriminate noise from edge signal directly according to the characteristics of the wavelet transform coefficients, so as to extract edge information from a noisy image directly. According to the wavelet coefficient tree and the Lipschitz exponent property of noise, the idea of a neural network activation function is adopted to design an activation judgment method for the wavelet coefficients, so that significant wavelet coefficients are retained while non-significant coefficients are removed by a judgment method for isolated coefficients. On the other hand, based on the design of Daubechies orthogonal compactly supported wavelets, rational-coefficient wavelet filters can be designed by introducing free variables. By reducing the vanishing moments of the wavelet filters, more high-frequency information can be retained in the wavelet transform domain, which benefits edge detection. For a noisy image of a high-speed moving target with fuzzy edges, using the length 8-4 rational-coefficient biorthogonal wavelet filters and the algorithm proposed in this paper, edge information can be detected clearly. Results of multiple groups of comparative experiments show that the proposed algorithm has a clear advantage in edge detection.
47

Becktor, Jonathan, Frederik Schöller, Evangelos Boukas, Mogens Blanke, and Lazaros Nalpantidis. "Lipschitz Constrained Neural Networks for Robust Object Detection at Sea". IOP Conference Series: Materials Science and Engineering 929 (November 27, 2020): 012023. http://dx.doi.org/10.1088/1757-899x/929/1/012023.

48

Aziznejad, Shayan, Harshit Gupta, Joaquim Campos, and Michael Unser. "Deep Neural Networks With Trainable Activations and Controlled Lipschitz Constant". IEEE Transactions on Signal Processing 68 (2020): 4688–99. http://dx.doi.org/10.1109/tsp.2020.3014611.

49

Delaney, Blaise, Nicole Schulte, Gregory Ciezarek, Niklas Nolte, Mike Williams, and Johannes Albrecht. "Applications of Lipschitz neural networks to the Run 3 LHCb trigger system". EPJ Web of Conferences 295 (2024): 09005. http://dx.doi.org/10.1051/epjconf/202429509005.

Abstract:
The operating conditions defining the current data taking campaign at the Large Hadron Collider, known as Run 3, present unparalleled challenges for the real-time data acquisition workflow of the LHCb experiment at CERN. To address the anticipated surge in luminosity and consequent event rate, the LHCb experiment is transitioning to a fully software-based trigger system. This evolution necessitated innovations in hardware configurations, software paradigms, and algorithmic design. A significant advancement is the integration of monotonic Lipschitz Neural Networks into the LHCb trigger system. These deep learning models offer certified robustness against detector instabilities, and the ability to encode domain-specific inductive biases. Such properties are crucial for the inclusive heavy-flavour triggers and, most notably, for the topological triggers designed to inclusively select b-hadron candidates by exploiting the unique kinematic and decay topologies of beauty decays. This paper describes the recent progress in integrating Lipschitz Neural Networks into the topological triggers, highlighting the resulting enhanced sensitivity to highly displaced multi-body candidates produced within the LHCb acceptance.
50

Mallat, Stéphane, Sixin Zhang, and Gaspar Rochette. "Phase harmonic correlations and convolutional neural networks". Information and Inference: A Journal of the IMA 9, no. 3 (November 5, 2019): 721–47. http://dx.doi.org/10.1093/imaiai/iaz019.

Abstract:
A major issue in harmonic analysis is to capture the phase dependence of frequency representations, which carries important signal properties. It seems that convolutional neural networks have found a way. Over time series and images, convolutional networks often learn a first layer of filters that are well localized in the frequency domain, with different phases. We show that a rectifier then acts as a filter on the phase of the resulting coefficients. It computes signal descriptors that are local in space, frequency and phase. The nonlinear phase filter becomes a multiplicative operator over phase harmonics computed with a Fourier transform along the phase. We prove that it defines a bi-Lipschitz and invertible representation. The correlations of phase harmonic coefficients characterize coherent structures from their phase dependence across frequencies. For wavelet filters, we show numerically that signals having sparse wavelet coefficients can be recovered from few phase harmonic correlations, which provide a compressive representation.