Selected scientific literature on the topic "Lipschitz neural network"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles.

Browse the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Lipschitz neural network".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, when one is available in the metadata.

Journal articles on the topic "Lipschitz neural network"

1

Zhu, Zelong, Chunna Zhao, and Yaqun Huang. "Fractional order Lipschitz recurrent neural network with attention for long time series prediction". Journal of Physics: Conference Series 2813, no. 1 (August 1, 2024): 012015. http://dx.doi.org/10.1088/1742-6596/2813/1/012015.

Abstract:
Time series prediction holds significant importance in various applications. In this study, we concentrate on long time series prediction. Recurrent Neural Networks are widely recognized as a fundamental architecture for effectively processing time-series data, but they encounter vanishing or exploding gradients on long series. To resolve the gradient problem and improve accuracy, this paper proposes the Fractional Order Lipschitz Recurrent Neural Network (FOLRNN) model for long time series prediction. The proposed method uses Lipschitz continuity to alleviate the gradient problem and applies fractional order integration to compute the hidden states of the recurrent network; the intricate dynamics of long time series can be captured by fractional order calculus, yielding more accurate predictions than Lipschitz Recurrent Neural Network models. Self-attention is then used to improve feature representation: it describes the correlations among features and improves prediction performance. Experiments show that the FOLRNN model achieves better results than other methods.
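As context for the entry above: recurrent cells with controlled Lipschitz constants are usually built by constraining the hidden-to-hidden dynamics. Below is a minimal PyTorch sketch in the spirit of Lipschitz recurrent architectures, not the paper's FOLRNN; the skew-symmetric parametrization, the gamma shift, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LipschitzRNNCell(nn.Module):
    """Sketch of a Lipschitz-constrained recurrent cell. The hidden
    state follows a discretized ODE
        h_{t+1} = h_t + dt * (A h_t + tanh(W h_t + U x_t)),
    where A is the skew-symmetric part of M minus a small diagonal
    shift, which keeps the recurrent dynamics stable and bounds the
    cell's Lipschitz constant."""

    def __init__(self, input_size, hidden_size, dt=0.1, gamma=0.5):
        super().__init__()
        self.M = nn.Parameter(torch.randn(hidden_size, hidden_size) / hidden_size**0.5)
        self.W = nn.Linear(hidden_size, hidden_size, bias=False)
        self.U = nn.Linear(input_size, hidden_size)
        self.dt, self.gamma = dt, gamma

    def forward(self, x, h):
        # Skew-symmetric matrices have purely imaginary eigenvalues;
        # the -gamma*I shift moves them into the stable half-plane.
        A = 0.5 * (self.M - self.M.t()) - self.gamma * torch.eye(h.shape[-1])
        return h + self.dt * (h @ A.t() + torch.tanh(self.W(h) + self.U(x)))
```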
2

Zhang, Huan, Pengchuan Zhang, and Cho-Jui Hsieh. "RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5757–64. http://dx.doi.org/10.1609/aaai.v33i01.33015757.

Abstract:
The Jacobian matrix (or the gradient for single-output networks) is directly related to many important properties of neural networks, such as the function landscape, stationary points, (local) Lipschitz constants, and robustness to adversarial attacks. In this paper, we propose a recursive algorithm, RecurJac, to compute both upper and lower bounds for each element of the Jacobian matrix of a neural network with respect to the network's input, where the network can contain a wide range of activation functions. As a byproduct, we can efficiently obtain a (local) Lipschitz constant, which plays a crucial role in neural network robustness verification as well as the training stability of GANs. Experiments show that the (local) Lipschitz constants produced by our method are of better quality than those of previous approaches, thus providing better robustness verification results. Our algorithm has polynomial time complexity, and its computation time is reasonable even for relatively large networks. Additionally, we use our bounds on the Jacobian matrix to characterize the landscape of the neural network, for example, to determine whether there exist stationary points in a local neighborhood.
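RecurJac computes certified bounds. By way of contrast, a crude empirical lower bound (not an upper bound) on a local Lipschitz constant can be obtained by sampling gradient norms around a point. The sketch below assumes a scalar-output PyTorch model; all names are illustrative.

```python
import torch

def local_lipschitz_lower_bound(model, x0, radius=0.1, n_samples=256):
    """Crude empirical lower bound on the local Lipschitz constant of a
    scalar-output model around x0: the maximum gradient norm over random
    points in an L2 ball. Certified methods such as RecurJac instead
    compute guaranteed upper bounds."""
    best = 0.0
    for _ in range(n_samples):
        delta = torch.randn_like(x0)
        delta = radius * torch.rand(()) * delta / delta.norm()
        x = (x0 + delta).requires_grad_(True)
        y = model(x).sum()  # scalar output assumed
        (g,) = torch.autograd.grad(y, x)
        best = max(best, g.norm().item())
    return best
```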
3

Araujo, Alexandre, Benjamin Negrevergne, Yann Chevaleyre, and Jamal Atif. "On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6661–69. http://dx.doi.org/10.1609/aaai.v35i8.16824.

Abstract:
This paper tackles the problem of Lipschitz regularization of Convolutional Neural Networks. Lipschitz regularity is now established as a key property of modern deep learning with implications in training stability, generalization, robustness against adversarial examples, etc. However, computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard. Recent attempts from the literature introduce upper bounds to approximate this constant that are either efficient but loose or accurate but computationally expensive. In this work, by leveraging the theory of Toeplitz matrices, we introduce a new upper bound for convolutional layers that is both tight and easy to compute. Based on this result we devise an algorithm to train Lipschitz regularized Convolutional Neural Networks.
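The classical baseline that such work tightens is the product of per-layer spectral norms: easy to compute but loose. A sketch, assuming a torch.nn.Sequential of Linear/Conv2d layers with 1-Lipschitz activations; for convolutions the flattened-kernel norm is only a proxy for the true operator norm, which is exactly what the Toeplitz-based analysis above improves on.

```python
import torch
import torch.nn as nn

def naive_lipschitz_upper_bound(model: nn.Sequential) -> float:
    """Classical (loose) upper bound: product of layer spectral norms.
    1-Lipschitz activations (ReLU, tanh) contribute a factor of 1.
    For Conv2d, the spectral norm of the flattened kernel is only a
    proxy; exact treatment needs the full linear operator (cf. the
    Toeplitz-based analysis in the paper above)."""
    bound = 1.0
    for layer in model:
        if isinstance(layer, (nn.Linear, nn.Conv2d)):
            w = layer.weight.flatten(1)  # (out, in*k*k) for convolutions
            bound *= torch.linalg.matrix_norm(w, ord=2).item()
    return bound

net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
print(naive_lipschitz_upper_bound(net))
```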
4

Xu, Yuhui, Wenrui Dai, Yingyong Qi, Junni Zou, and Hongkai Xiong. "Iterative Deep Neural Network Quantization With Lipschitz Constraint". IEEE Transactions on Multimedia 22, no. 7 (July 2020): 1874–88. http://dx.doi.org/10.1109/tmm.2019.2949857.

5

Mohammad, Ibtihal J. "Neural Networks of the Rational r-th Powers of the Multivariate Bernstein Operators". BASRA JOURNAL OF SCIENCE 40, no. 2 (September 1, 2022): 258–73. http://dx.doi.org/10.29072/basjs.20220201.

Abstract:
In this study, a novel neural network for the rational powers of the multivariate Bernstein operators is developed; these networks depend on a positive integer parameter. The pointwise and uniform approximation theorems are first introduced and examined in the space of all real-valued continuous functions. After that, the Lipschitz space is used to study two key theorems. Additionally, some numerical examples are provided to demonstrate how well these neural networks approximate two test functions. The numerical outcomes demonstrate that as the input grows, the neural network provides a better approximation. Finally, graphs of these neural network approximations show the average error between the approximation and the test function.
6

Mohammad, Ibtihal J., and Ali J. Mohammad. "Neural Network of Multivariate Square Rational Bernstein Operators with Positive Integer Parameter". European Journal of Pure and Applied Mathematics 15, no. 3 (July 31, 2022): 1189–200. http://dx.doi.org/10.29020/nybg.ejpam.v15i3.4425.

Abstract:
This research defines a new neural network (NN) that depends on a positive integer parameter, using the multivariate square rational Bernstein polynomials. Some theorems for this network are proved, such as the pointwise and uniform approximation theorems. First, the absolute moment of a function belonging to the Lipschitz space is defined to estimate the order of the NN. Second, some numerical applications of this NN are given using two test functions. Finally, the numerical results for this network are compared with those of classical neural networks (NNs); the results show that the new network outperforms the classical one.
7

Liu, Kanglin, and Guoping Qiu. "Lipschitz constrained GANs via boundedness and continuity". Neural Computing and Applications 32, no. 24 (May 24, 2020): 18271–83. http://dx.doi.org/10.1007/s00521-020-04954-z.

Abstract:
One of the challenges in the study of generative adversarial networks (GANs) is the difficulty of controlling their performance. A Lipschitz constraint is essential in guaranteeing training stability for GANs. Although heuristic methods such as weight clipping, gradient penalty, and spectral normalization have been proposed to enforce the Lipschitz constraint, it is still difficult to achieve a solution that is both practically effective and theoretically proven to satisfy a Lipschitz constraint. In this paper, we introduce boundedness and continuity (BC) conditions to enforce the Lipschitz constraint on the discriminator functions of GANs. We prove theoretically that GANs with discriminators meeting the BC conditions satisfy the Lipschitz constraint. We present a practically very effective implementation of a GAN based on a convolutional neural network (CNN) by forcing the CNN to satisfy the BC conditions (BC-GAN). We show that, compared with recent techniques including gradient penalty and spectral normalization, BC-GANs not only perform better but also have lower computational complexity.
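For reference, the two heuristics most often compared against in this line of work, spectral normalization and the WGAN-GP gradient penalty, are short in PyTorch. This is a generic sketch, not the paper's BC conditions; the critic architecture, sizes, and penalty weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Spectral normalization: divides each weight by an estimate of its
# largest singular value on every forward pass, making each layer
# (approximately) 1-Lipschitz.
critic = nn.Sequential(
    nn.utils.spectral_norm(nn.Linear(64, 128)), nn.LeakyReLU(0.2),
    nn.utils.spectral_norm(nn.Linear(128, 1)),
)

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP penalty: pushes the critic's gradient norm toward 1 on
    random interpolates between real and fake samples."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    x = (eps * real + (1 - eps) * fake).requires_grad_(True)
    (grad,) = torch.autograd.grad(critic(x).sum(), x, create_graph=True)
    return lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()
```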
8

Othmani, S., N. E. Tatar, and A. Khemmoudj. "Asymptotic behavior of a BAM neural network with delays of distributed type". Mathematical Modelling of Natural Phenomena 16 (2021): 29. http://dx.doi.org/10.1051/mmnp/2021023.

Abstract:
In this paper, we examine a Bidirectional Associative Memory neural network model with distributed delays. Using a result due to Cid [J. Math. Anal. Appl. 281 (2003) 264–275], we prove an exponential stability result in the case where the standard Lipschitz continuity condition is violated. Indeed, we deal with activation functions that may not be Lipschitz continuous, so the standard Halanay inequality is not applicable and we use a nonlinear version of it. In the end, the differential inequality that should imply exponential stability turns out to be 'state dependent': the usual constant depends on the state itself. This adds some difficulties, which we overcome by a suitable argument.
9

Xia, Youshen. "An Extended Projection Neural Network for Constrained Optimization". Neural Computation 16, no. 4 (April 1, 2004): 863–83. http://dx.doi.org/10.1162/089976604322860730.

Abstract:
Recently, a projection neural network has been shown to be a promising computational model for solving variational inequality problems with box constraints. This letter presents an extended projection neural network for solving monotone variational inequality problems with linear and nonlinear constraints. In particular, the proposed neural network can include the projection neural network as a special case. Compared with the modified projection-type methods for solving constrained monotone variational inequality problems, the proposed neural network has a lower complexity and is suitable for parallel implementation. Furthermore, the proposed neural network is theoretically proven to be exponentially convergent to an exact solution without a Lipschitz condition. Illustrative examples show that the extended projection neural network can be used to solve constrained monotone variational inequality problems.
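A toy numerical illustration of the underlying idea: a projection neural network for a box-constrained variational inequality can be simulated by Euler integration of its dynamics. The paper's extended network handles general linear and nonlinear constraints; the sketch below covers only the box case, and all names are illustrative.

```python
import numpy as np

def solve_box_vi(F, lo, hi, x0, dt=0.05, steps=5000):
    """Euler integration of the projection neural network
        dx/dt = P_box(x - F(x)) - x,
    whose equilibria solve the variational inequality
        (y - x*)^T F(x*) >= 0  for all y in [lo, hi]."""
    x = x0.astype(float)
    for _ in range(steps):
        x += dt * (np.clip(x - F(x), lo, hi) - x)
    return x

# Example: the VI induced by minimizing 0.5*||x - c||^2 over the box,
# i.e. F(x) = x - c; the solution is the projection of c onto the box.
c = np.array([2.0, -3.0])
print(solve_box_vi(lambda x: x - c, -1.0, 1.0, np.zeros(2)))  # ~[1, -1]
```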
10

Li, Peiluan, Yuejing Lu, Changjin Xu, and Jing Ren. "Bifurcation Phenomenon and Control Technique in Fractional BAM Neural Network Models Concerning Delays". Fractal and Fractional 7, no. 1 (December 22, 2022): 7. http://dx.doi.org/10.3390/fractalfract7010007.

Abstract:
In this study, we formulate a new fractional BAM neural network model with five neurons and time delays. First, we explore the existence and uniqueness of the solution of the formulated fractional delayed BAM neural network model via the Lipschitz condition. Second, we study the boundedness of the solution using a suitable function. Third, we establish a novel sufficient criterion for the stability and onset of Hopf bifurcation of the model by virtue of the stability criterion and bifurcation principle of fractional delayed dynamical systems. Fourth, a delayed feedback controller is applied to control the onset time of the bifurcation and the stability domain of the model. Lastly, simulation figures are provided to verify the key outcomes. The theoretical results obtained through this exploration can play a vital role in controlling and designing networks.

Theses on the topic "Lipschitz neural network"

1

Béthune, Louis. "Apprentissage profond avec contraintes Lipschitz". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES014.

Abstract:
This thesis explores the characteristics and applications of Lipschitz networks in machine learning tasks. First, the framework of "optimization as a layer" is presented, showcasing various applications, including the parametrization of Lipschitz-constrained layers. Then, the expressiveness of these networks in classification tasks is investigated, revealing an accuracy/robustness tradeoff controlled by entropic regularization of the loss, accompanied by generalization guarantees. Subsequently, the research delves into the use of signed distance functions as the solution to a regularized optimal transport problem, showcasing their efficacy in robust one-class learning and the construction of neural implicit surfaces. Next, the thesis demonstrates the adaptability of the back-propagation algorithm to propagate bounds instead of vectors, enabling differentially private training of Lipschitz networks without incurring runtime or memory overhead. Finally, it goes beyond Lipschitz constraints and explores the use of convexity constraints for multivariate quantiles.
2

Gupta, Kavya. "Stability Quantification of Neural Networks". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST004.

Abstract:
Artificial neural networks are at the core of recent advances in Artificial Intelligence. One of the main challenges faced today, especially by companies like Thales designing advanced industrial systems, is to ensure the safety of new generations of products using these technologies. In 2013, in a key observation, neural networks were shown to be sensitive to adversarial perturbations, raising serious concerns about their applicability in safety-critical environments. In recent years, publications studying the various aspects of this robustness of neural networks, and raising questions such as "Why do adversarial attacks occur?", "How can we make the neural network more robust to adversarial noise?", and "How can we generate stronger attacks?", have grown exponentially. The contributions of this thesis aim to tackle such problems. The adversarial machine learning community concentrates mainly on classification scenarios, whereas studies on regression tasks are scarce. Our contributions bridge this significant gap between adversarial machine learning and regression applications.

The first contribution, in Chapter 3, proposes a white-box attacker designed to attack regression models. The presented adversarial attacker is derived from the algebraic properties of the Jacobian of the network. We show that our attacker successfully fools the neural network and measure its effectiveness in reducing the estimation performance. We present our results on various open-source and real industrial tabular datasets. Our analysis relies on the quantification of the fooling error as well as different error metrics. Another noteworthy feature of our attacker is that it allows us to optimally attack a subset of inputs, which may help analyze the sensitivity of some specific inputs. We also show the effect of this attacker on spectrally normalized trained models, which are known to be more robust in handling attacks.

The second contribution (Chapter 4) presents a multivariate Lipschitz constant analysis of neural networks. The Lipschitz constant is widely used in the literature to study the internal properties of neural networks, but most works perform a single-parameter analysis, which does not quantify the effect of individual inputs on the output. We propose a multivariate Lipschitz constant-based stability analysis of fully connected neural networks, allowing us to capture the influence of each input or group of inputs on the neural network's stability. Our approach relies on a suitable re-normalization of the input space, intended to perform a more precise analysis than the one provided by a global Lipschitz constant. We display the results of this analysis with a new representation designed for machine learning practitioners and safety engineers, termed a Lipschitz star. We perform experiments on various open-access tabular datasets and an actual Thales Air Mobility industrial application subject to certification requirements.

The use of spectral normalization in designing a stability control loop is discussed in Chapter 5. A critical requirement for the optimal model is to behave according to specified performance and stability targets while in operation, but imposing tight Lipschitz constant constraints while training the models usually leads to a reduction of their accuracy. Hence, we design an algorithm to train "stable-by-design" neural network models using our spectral normalization approach, which optimizes the model by taking into account both performance and stability targets. We focus on Small Unmanned Aerial Vehicles (UAVs); more specifically, we present a novel application of neural networks to detect elevon positioning faults in real time, allowing the remote pilot to take the actions necessary to ensure safety.
3

Neacșu, Ana-Antonia. "Robust Deep learning methods inspired by signal processing algorithms". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST212.

Abstract:
Understanding the importance of defense strategies against adversarial attacks has become paramount in ensuring the trustworthiness and resilience of neural networks. While traditional security measures focus on protecting data and software from external threats, the unique challenge posed by adversarial attacks lies in their ability to exploit the inherent vulnerabilities of the underlying machine learning algorithms themselves. The first part of the thesis proposes new constrained learning strategies that ensure robustness against adversarial perturbations by controlling the Lipschitz constant of a classifier. We focus on nonnegative neural networks, for which accurate Lipschitz bounds can be derived, and we propose different spectral norm constraints offering robustness guarantees from a theoretical viewpoint. We validate our solution in the context of gesture recognition based on surface electromyographic (sEMG) signals. In the second part of the thesis, we propose a new class of neural networks (ACNN) which can be viewed as establishing a link between fully connected and convolutional networks, and we propose an iterative algorithm to control their robustness during training. Next, we extend our solution to the complex plane and address the problem of designing robust complex-valued neural networks by proposing a new architecture (RCFF-Net), for which we derive tight Lipschitz constant bounds. Both solutions are validated for audio denoising. In the last part, we introduce ABBA Networks, a novel class of (almost) non-negative neural networks, which we show to be universal approximators. We derive tight Lipschitz bounds for both linear and convolutional layers, and we propose an algorithm to train robust ABBA networks. We show the effectiveness of the proposed approach in the context of image classification.

Book chapters on the topic "Lipschitz neural network"

1

Shang, Yuzhang, Dan Xu, Bin Duan, Ziliang Zong, Liqiang Nie, and Yan Yan. "Lipschitz Continuity Retained Binary Neural Network". In Lecture Notes in Computer Science, 603–19. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20083-0_36.

2

Yi, Zhang, and K. K. Tan. "Delayed Recurrent Neural Networks with Global Lipschitz Activation Functions". In Network Theory and Applications, 119–70. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4757-3819-3_6.

3

Tang, Zaiyong, Kallol Bagchi, Youqin Pan, and Gary J. Koehler. "Pruning Feedforward Neural Network Search Space Using Local Lipschitz Constants". In Advances in Neural Networks – ISNN 2012, 11–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31346-2_2.

4

Stamova, Ivanka, Trayan Stamov, and Gani Stamov. "Lipschitz Quasistability of Impulsive Cohen–Grossberg Neural Network Models with Delays and Reaction-Diffusion Terms". In Nonlinear Systems and Complexity, 59–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-42689-6_3.

5

Mangal, Ravi, Kartik Sarangmath, Aditya V. Nori, and Alessandro Orso. "Probabilistic Lipschitz Analysis of Neural Networks". In Static Analysis, 274–309. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65474-0_13.

6

Nie, Xiaobing, and Jinde Cao. "Dynamics of Competitive Neural Networks with Inverse Lipschitz Neuron Activations". In Advances in Neural Networks – ISNN 2010, 483–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13278-0_62.

7

Usama, Muhammad, and Dong Eui Chang. "Towards Robust Neural Networks with Lipschitz Continuity". In Digital Forensics and Watermarking, 373–89. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11389-6_28.

8

Bungert, Leon, René Raab, Tim Roith, Leo Schwinn, and Daniel Tenbrinck. "CLIP: Cheap Lipschitz Training of Neural Networks". In Lecture Notes in Computer Science, 307–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75549-2_25.

9

Fu, Chaojin, and Ailong Wu. "Global Exponential Stability of Delayed Neural Networks with Non-Lipschitz Neuron Activations and Impulses". In Advances in Computation and Intelligence, 92–100. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04843-2_11.

10

Ricalde, Luis J., Glendy A. Catzin, Alma Y. Alanis, and Edgar N. Sanchez. "Time Series Forecasting via a Higher Order Neural Network Trained with the Extended Kalman Filter for Smart Grid Applications". In Artificial Higher Order Neural Networks for Modeling and Simulation, 254–74. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2175-6.ch012.

Abstract:
This chapter presents the design of a neural network that combines higher order terms in its input layer and an Extended Kalman Filter (EKF)-based algorithm for its training. The neural network-based scheme is defined as a Higher Order Neural Network (HONN), and its applicability is illustrated by means of time series forecasting for three important variables present in smart grids: Electric Load Demand (ELD), Wind Speed (WS), and Wind Energy Generation (WEG). The proposed model is trained and tested using real data values taken from a microgrid system in the UADY School of Engineering. The length of the regression vector is determined via the Lipschitz quotients methodology.
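The Lipschitz-quotient methodology mentioned in the last abstract (an order-determination heuristic in the style of He and Asada) can be sketched as follows. This is a simplified, illustrative version, assuming a scalar series y and tracking the geometric mean of the p largest quotients per candidate lag; the published index uses a slightly different normalization.

```python
import numpy as np

def lipschitz_quotient_index(y, max_lag=10, p=20):
    """For each candidate lag n, build regressors x_t = (y_{t-1}, ...,
    y_{t-n}), compute quotients |y_i - y_j| / ||x_i - x_j|| over sample
    pairs, and summarize the p largest. The index stops dropping
    sharply once n reaches an adequate regression-vector length."""
    indices = []
    for n in range(1, max_lag + 1):
        X = np.stack([y[n - k - 1 : len(y) - k - 1] for k in range(n)], axis=1)
        t = y[n:]
        q = []
        for i in range(len(t)):
            d = np.linalg.norm(X - X[i], axis=1)
            d[d == 0] = np.inf  # guard the self-pair and duplicate inputs
            q.extend(np.abs(t - t[i]) / d)
        top = np.sort(np.asarray(q))[-p:]  # p largest quotients
        indices.append(np.exp(np.mean(np.log(top))))
    return indices
```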

Conference papers on the topic "Lipschitz neural network"

1

Ozcan, Neyir. "New results for global stability of neutral-type delayed neural networks". In The 11th International Conference on Integrated Modeling and Analysis in Applied Control and Automation. CAL-TEK srl, 2018. http://dx.doi.org/10.46354/i3m.2018.imaaca.004.

Abstract:
"This paper deals with the stability analysis of the class of neutral-type neural networks with constant time delay. By using a suitable Lyapunov functional, some delay independent sufficient conditions are derived, which ensure the global asymptotic stability of the equilibrium point for this this class of neutral-type neural networks with time delays with respect to the Lipschitz activation functions. The presented stability results rely on checking some certain properties of matrices. Therefore, it is easy to verify the validation of the constraint conditions on the network parameters of neural system by simply using some basic information of the matrix theory."
2

Ruan, Wenjie, Xiaowei Huang, and Marta Kwiatkowska. "Reachability Analysis of Deep Neural Networks with Provable Guarantees". In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/368.

Abstract:
Verifying correctness for deep neural networks (DNNs) is challenging. We study a generic reachability problem for feed-forward DNNs which, for a given set of inputs to the network and a Lipschitz-continuous function over its outputs, computes the lower and upper bounds on the function values. Because the network and the function are Lipschitz continuous, all values in the interval between the lower and upper bound are reachable. We show how to obtain the safety verification problem, the output range analysis problem, and a robustness measure by instantiating the reachability problem. We present a novel algorithm based on adaptive nested optimisation to solve the reachability problem. The technique has been implemented and evaluated on a range of DNNs, demonstrating its efficiency, scalability, and ability to handle a broader class of networks than state-of-the-art verification approaches.
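The key idea, that Lipschitz continuity turns range computation into global optimisation with certified bounds, can be illustrated in one dimension with a Shubert–Piyavskii-style branch-and-bound. The paper's algorithm is an adaptive nested scheme over many input dimensions; this sketch is a minimal one-dimensional analogue under that assumption.

```python
import numpy as np

def lipschitz_minimum(f, a, b, L, tol=1e-3):
    """Certified minimum of an L-Lipschitz f over [a, b], within tol.
    Branch-and-bound: on [lo, hi], f(m) - L*(hi - lo)/2 at the midpoint
    m under-estimates f on the whole interval; split intervals until the
    gap between the best evaluation and the under-estimate is below tol."""
    best = min(f(a), f(b))
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        m = 0.5 * (lo + hi)
        fm = f(m)
        best = min(best, fm)
        lower = fm - L * (hi - lo) / 2   # valid lower bound on [lo, hi]
        if best - lower > tol:           # interval may still hide a lower value
            stack += [(lo, m), (m, hi)]
    return best

print(lipschitz_minimum(np.cos, 0.0, 6.0, L=1.0))  # ~ -1.0
```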
3

Wu, Zheng-Fan, Yi-Nan Feng, and Hui Xue. "Automatically Gating Multi-Frequency Patterns through Rectified Continuous Bernoulli Units with Theoretical Principles". In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/499.

Abstract:
Different nonlinearities are suited to responding to different frequency signals. The locally-responding ReLU is incapable of modeling high-frequency features due to the spectral bias, whereas the globally-responding sinusoidal function struggles to represent low-frequency concepts cheaply owing to an optimization dilemma. Moreover, nearly all practical tasks are composed of complex multi-frequency patterns, and there is little prospect of designing or searching a heterogeneous network containing various types of neurons matched to those frequencies, because of the exponentially-increasing number of combinatorial states. In this paper, our contributions are three-fold: 1) we propose a general Rectified Continuous Bernoulli (ReCB) unit, paired with an efficient variational Bayesian learning paradigm, to automatically detect/gate/represent different frequency responses; 2) our numerically-tight theoretical framework proves that ReCB-based networks can achieve the optimal representation ability, which is O(m^{η/(d^2)}) times better than that of popular neural networks, for a hidden dimension of m, an input dimension of d, and a Lipschitz constant of η; 3) we provide comprehensive empirical evidence showing that ReCB-based networks can keenly learn multi-frequency patterns and push the state-of-the-art performance.
4

Bose, Sarosij. "Lipschitz Bound Analysis of Neural Networks". In 2022 13th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2022. http://dx.doi.org/10.1109/icccnt54827.2022.9984441.

5

Mediratta, Ishita, Snehanshu Saha, and Shubhad Mathur. "LipARELU: ARELU Networks aided by Lipschitz Acceleration". In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533853.

6

Pauli, Patricia, Anne Koch, Julian Berberich, Paul Kohler, and Frank Allgower. "Training Robust Neural Networks Using Lipschitz Bounds". In 2021 American Control Conference (ACC). IEEE, 2021. http://dx.doi.org/10.23919/acc50511.2021.9482773.

7

Hasan, Mahmudul, and Sachin Shetty. "Sentiment Analysis With Lipschitz Recurrent Neural Networks". In 2023 International Symposium on Networks, Computers and Communications (ISNCC). IEEE, 2023. http://dx.doi.org/10.1109/isncc58260.2023.10323619.

8

Bedregal, Benjamin, and Ivan Pan. "Some typical classes of t-norms and the 1-Lipschitz Condition". In 2006 Ninth Brazilian Symposium on Neural Networks (SBRN'06). IEEE, 2006. http://dx.doi.org/10.1109/sbrn.2006.38.

9

Guo, Yuhua, Yiran Li, and Amin Farjudian. "Validated Computation of Lipschitz Constant of Recurrent Neural Networks". In ICMLSC 2023: The 7th International Conference on Machine Learning and Soft Computing. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583788.3583795.

10

Hasan, Mahmudul, and Sachin Shetty. "Sentiment Analysis With Lipschitz Recurrent Neural Networks Based Generative Adversarial Networks". In 2024 International Conference on Computing, Networking and Communications (ICNC). IEEE, 2024. http://dx.doi.org/10.1109/icnc59896.2024.10555933.
