Selection of scientific literature on the topic "Lipschitz neural network"
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Lipschitz neural network".
Journal articles on the topic "Lipschitz neural network"
Zhu, Zelong, Chunna Zhao, and Yaqun Huang. "Fractional order Lipschitz recurrent neural network with attention for long time series prediction". Journal of Physics: Conference Series 2813, no. 1 (August 1, 2024): 012015. http://dx.doi.org/10.1088/1742-6596/2813/1/012015.
Zhang, Huan, Pengchuan Zhang, and Cho-Jui Hsieh. "RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5757–64. http://dx.doi.org/10.1609/aaai.v33i01.33015757.
Araujo, Alexandre, Benjamin Negrevergne, Yann Chevaleyre, and Jamal Atif. "On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6661–69. http://dx.doi.org/10.1609/aaai.v35i8.16824.
Xu, Yuhui, Wenrui Dai, Yingyong Qi, Junni Zou, and Hongkai Xiong. "Iterative Deep Neural Network Quantization With Lipschitz Constraint". IEEE Transactions on Multimedia 22, no. 7 (July 2020): 1874–88. http://dx.doi.org/10.1109/tmm.2019.2949857.
Mohammad, Ibtihal J. "Neural Networks of the Rational r-th Powers of the Multivariate Bernstein Operators". BASRA JOURNAL OF SCIENCE 40, no. 2 (September 1, 2022): 258–73. http://dx.doi.org/10.29072/basjs.20220201.
Ibtihal.J.M, and Ali J. Mohammad. "Neural Network of Multivariate Square Rational Bernstein Operators with Positive Integer Parameter". European Journal of Pure and Applied Mathematics 15, no. 3 (July 31, 2022): 1189–200. http://dx.doi.org/10.29020/nybg.ejpam.v15i3.4425.
Liu, Kanglin, and Guoping Qiu. "Lipschitz constrained GANs via boundedness and continuity". Neural Computing and Applications 32, no. 24 (May 24, 2020): 18271–83. http://dx.doi.org/10.1007/s00521-020-04954-z.
Othmani, S., N. E. Tatar, and A. Khemmoudj. "Asymptotic behavior of a BAM neural network with delays of distributed type". Mathematical Modelling of Natural Phenomena 16 (2021): 29. http://dx.doi.org/10.1051/mmnp/2021023.
Xia, Youshen. "An Extended Projection Neural Network for Constrained Optimization". Neural Computation 16, no. 4 (April 1, 2004): 863–83. http://dx.doi.org/10.1162/089976604322860730.
Li, Peiluan, Yuejing Lu, Changjin Xu, and Jing Ren. "Bifurcation Phenomenon and Control Technique in Fractional BAM Neural Network Models Concerning Delays". Fractal and Fractional 7, no. 1 (December 22, 2022): 7. http://dx.doi.org/10.3390/fractalfract7010007.
Dissertations on the topic "Lipschitz neural network"
Béthune, Louis. "Apprentissage profond avec contraintes Lipschitz". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES014.
This thesis explores the characteristics and applications of Lipschitz networks in machine learning tasks. First, the framework of "optimization as a layer" is presented, showcasing various applications, including the parametrization of Lipschitz-constrained layers. Then, the expressiveness of these networks in classification tasks is investigated, revealing an accuracy/robustness tradeoff controlled by entropic regularization of the loss, accompanied by generalization guarantees. Subsequently, the research delves into the use of signed distance functions as the solution to a regularized optimal transport problem, showcasing their efficacy in robust one-class learning and the construction of neural implicit surfaces. Next, the thesis demonstrates that the back-propagation algorithm can be adapted to propagate bounds instead of vectors, enabling differentially private training of Lipschitz networks without incurring runtime or memory overhead. Finally, it goes beyond Lipschitz constraints and explores the use of convexity constraints for multivariate quantiles.
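As background for the Lipschitz-constrained layers discussed in this abstract: for a feedforward network with 1-Lipschitz activations (e.g. ReLU), the product of the spectral norms of the weight matrices is a standard, if coarse, upper bound on the network's Lipschitz constant. A minimal sketch of that bound (the function name and toy shapes are illustrative, not taken from the thesis):

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Coarse Lipschitz upper bound for x -> W_L s(... s(W_1 x)) with
    1-Lipschitz activations s: the product of the layers' spectral
    norms (largest singular values)."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Toy example: a diagonal 2-layer "network" whose bound is easy to check.
W1 = np.diag([3.0, 1.0])   # spectral norm 3.0
W2 = np.diag([0.5, 0.5])   # spectral norm 0.5
bound = lipschitz_upper_bound([W1, W2])
print(bound)  # 3.0 * 0.5 = 1.5
```

The bound is coarse because it ignores how the activations select subspaces between layers; much of the literature above is about computing tighter certificates.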
Gupta, Kavya. "Stability Quantification of Neural Networks". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST004.
Artificial neural networks are at the core of recent advances in Artificial Intelligence. One of the main challenges faced today, especially by companies like Thales that design advanced industrial systems, is to ensure the safety of new generations of products using these technologies. In a key observation made in 2013, neural networks were shown to be sensitive to adversarial perturbations, raising serious concerns about their applicability in safety-critical environments. In recent years, publications studying the various aspects of this robustness of neural networks, and raising questions such as "Why do adversarial attacks occur?", "How can we make a neural network more robust to adversarial noise?", and "How can we generate stronger attacks?", have grown exponentially. The contributions of this thesis aim to tackle such problems. The adversarial machine learning community concentrates mainly on classification scenarios, whereas studies on regression tasks are scarce. Our contributions bridge this significant gap between adversarial machine learning and regression applications.

The first contribution, in Chapter 3, proposes a white-box attacker designed to attack regression models. The presented adversarial attacker is derived from the algebraic properties of the Jacobian of the network. We show that our attacker successfully fools the neural network and measure its effectiveness in reducing the estimation performance. We present our results on various open-source and real industrial tabular datasets. Our analysis relies on the quantification of the fooling error as well as different error metrics. Another noteworthy feature of our attacker is that it allows us to optimally attack a subset of inputs, which may help to analyze the sensitivity of some specific inputs. We also show the effect of this attacker on spectrally normalized trained models, which are known to be more robust in handling attacks.

The second contribution (Chapter 4) presents a multivariate Lipschitz constant analysis of neural networks. The Lipschitz constant is widely used in the literature to study the internal properties of neural networks, but most works perform a single parametric analysis, which does not quantify the effect of individual inputs on the output. We propose a multivariate Lipschitz constant-based stability analysis of fully connected neural networks that captures the influence of each input, or group of inputs, on the network's stability. Our approach relies on a suitable re-normalization of the input space, intended to perform a more precise analysis than the one provided by a global Lipschitz constant. We display the results of this analysis through a new representation designed for machine learning practitioners and safety engineers, termed a Lipschitz star. We perform experiments on various open-access tabular datasets and on an actual Thales Air Mobility industrial application subject to certification requirements.

The use of spectral normalization in designing a stability control loop is discussed in Chapter 5. A critical requirement is that the model behave according to specified performance and stability targets while in operation, but imposing tight Lipschitz constant constraints during training usually reduces accuracy. Hence, we design an algorithm to train "stable-by-design" neural network models using our spectral normalization approach, which optimizes the model by taking both performance and stability targets into account. We focus on Small Unmanned Aerial Vehicles (UAVs). More specifically, we present a novel application of neural networks to detect elevon positioning faults in real time, allowing the remote pilot to take the actions necessary to ensure safety.
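Spectral normalization, which this abstract repeatedly invokes, is commonly implemented by estimating each layer's largest singular value with power iteration and rescaling the weights by it, making the layer 1-Lipschitz. A hedged sketch of that building block (function names and the iteration count are illustrative, not the thesis's own code):

```python
import numpy as np

def spectral_norm(W, n_iter=100):
    """Estimate the largest singular value of W by power iteration:
    alternately apply W and W.T to a unit vector and renormalize."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def spectral_normalize(W, target=1.0):
    """Rescale W so its spectral norm is (approximately) `target`,
    i.e. the linear layer becomes `target`-Lipschitz."""
    return W * (target / spectral_norm(W))

W = np.array([[4.0, 0.0],
              [0.0, 1.0]])      # spectral norm 4.0
Wn = spectral_normalize(W)
print(np.linalg.norm(Wn, 2))   # approximately 1.0
```

In training loops the per-step cost is kept low by reusing the power-iteration vectors across steps rather than restarting from a random vector each time.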
Neacșu, Ana-Antonia. "Robust Deep learning methods inspired by signal processing algorithms". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST212.
Understanding the importance of defense strategies against adversarial attacks has become paramount in ensuring the trustworthiness and resilience of neural networks. While traditional security measures focus on protecting data and software from external threats, the unique challenge posed by adversarial attacks lies in their ability to exploit the inherent vulnerabilities of the underlying machine learning algorithms themselves.

The first part of the thesis proposes new constrained learning strategies that ensure robustness against adversarial perturbations by controlling the Lipschitz constant of a classifier. We focus on nonnegative neural networks, for which accurate Lipschitz bounds can be derived, and we propose different spectral norm constraints offering robustness guarantees from a theoretical viewpoint. We validate our solution in the context of gesture recognition based on surface electromyographic (sEMG) signals.

In the second part of the thesis, we propose a new class of neural networks (ACNN) that can be viewed as establishing a link between fully connected and convolutional networks, and we propose an iterative algorithm to control their robustness during training. Next, we extend our solution to the complex plane and address the problem of designing robust complex-valued neural networks by proposing a new architecture (RCFF-Net), for which we derive tight Lipschitz constant bounds. Both solutions are validated for audio denoising.

In the last part, we introduce ABBA Networks, a novel class of (almost) nonnegative neural networks, which we show to be universal approximators. We derive tight Lipschitz bounds for both linear and convolutional layers, and we propose an algorithm to train robust ABBA networks. We show the effectiveness of the proposed approach in the context of image classification.
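On why nonnegative networks admit the "accurate Lipschitz bounds" this abstract mentions: as we understand this line of work (following Combettes and Pesquet), for networks with nonnegative weights the spectral norm of the end-to-end product of the weight matrices remains a valid Lipschitz bound, and by submultiplicativity it can only be tighter than the usual product of per-layer norms. A small numerical illustration of that comparison (toy matrices, not from the thesis):

```python
import numpy as np

def product_of_norms(weights):
    """Naive bound: product of the per-layer spectral norms."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

def norm_of_product(weights):
    """Tighter quantity: spectral norm of the end-to-end product
    W_L @ ... @ W_1 (a valid Lipschitz bound for nonnegative
    networks in the sense of the literature cited above)."""
    P = weights[0]
    for W in weights[1:]:
        P = W @ P
    return float(np.linalg.norm(P, 2))

# Nonnegative toy layers: x -> W2 @ relu(W1 @ x).
rng = np.random.default_rng(1)
weights = [np.abs(rng.standard_normal((6, 4))),
           np.abs(rng.standard_normal((3, 6)))]
tight, naive = norm_of_product(weights), product_of_norms(weights)
print(tight <= naive)  # submultiplicativity: always True
```

The gap between the two quantities is exactly what makes nonnegative (and ABBA-style) architectures attractive for certified robustness.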
Book chapters on the topic "Lipschitz neural network"
Shang, Yuzhang, Dan Xu, Bin Duan, Ziliang Zong, Liqiang Nie, and Yan Yan. "Lipschitz Continuity Retained Binary Neural Network". In Lecture Notes in Computer Science, 603–19. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20083-0_36.
Yi, Zhang, and K. K. Tan. "Delayed Recurrent Neural Networks with Global Lipschitz Activation Functions". In Network Theory and Applications, 119–70. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4757-3819-3_6.
Tang, Zaiyong, Kallol Bagchi, Youqin Pan, and Gary J. Koehler. "Pruning Feedforward Neural Network Search Space Using Local Lipschitz Constants". In Advances in Neural Networks – ISNN 2012, 11–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31346-2_2.
Stamova, Ivanka, Trayan Stamov, and Gani Stamov. "Lipschitz Quasistability of Impulsive Cohen–Grossberg Neural Network Models with Delays and Reaction-Diffusion Terms". In Nonlinear Systems and Complexity, 59–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-42689-6_3.
Mangal, Ravi, Kartik Sarangmath, Aditya V. Nori, and Alessandro Orso. "Probabilistic Lipschitz Analysis of Neural Networks". In Static Analysis, 274–309. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65474-0_13.
Nie, Xiaobing, and Jinde Cao. "Dynamics of Competitive Neural Networks with Inverse Lipschitz Neuron Activations". In Advances in Neural Networks - ISNN 2010, 483–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13278-0_62.
Usama, Muhammad, and Dong Eui Chang. "Towards Robust Neural Networks with Lipschitz Continuity". In Digital Forensics and Watermarking, 373–89. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11389-6_28.
Bungert, Leon, René Raab, Tim Roith, Leo Schwinn, and Daniel Tenbrinck. "CLIP: Cheap Lipschitz Training of Neural Networks". In Lecture Notes in Computer Science, 307–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75549-2_25.
Fu, Chaojin, and Ailong Wu. "Global Exponential Stability of Delayed Neural Networks with Non-lipschitz Neuron Activations and Impulses". In Advances in Computation and Intelligence, 92–100. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04843-2_11.
Ricalde, Luis J., Glendy A. Catzin, Alma Y. Alanis, and Edgar N. Sanchez. "Time Series Forecasting via a Higher Order Neural Network trained with the Extended Kalman Filter for Smart Grid Applications". In Artificial Higher Order Neural Networks for Modeling and Simulation, 254–74. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2175-6.ch012.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Lipschitz neural network"
Ozcan, Neyir. "New results for global stability of neutral-type delayed neural networks". In The 11th International Conference on Integrated Modeling and Analysis in Applied Control and Automation. CAL-TEK srl, 2018. http://dx.doi.org/10.46354/i3m.2018.imaaca.004.
Ruan, Wenjie, Xiaowei Huang, and Marta Kwiatkowska. "Reachability Analysis of Deep Neural Networks with Provable Guarantees". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/368.
Wu, Zheng-Fan, Yi-Nan Feng, and Hui Xue. "Automatically Gating Multi-Frequency Patterns through Rectified Continuous Bernoulli Units with Theoretical Principles". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/499.
Bose, Sarosij. "Lipschitz Bound Analysis of Neural Networks". In 2022 13th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2022. http://dx.doi.org/10.1109/icccnt54827.2022.9984441.
Mediratta, Ishita, Snehanshu Saha, and Shubhad Mathur. "LipARELU: ARELU Networks aided by Lipschitz Acceleration". In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533853.
Pauli, Patricia, Anne Koch, Julian Berberich, Paul Kohler, and Frank Allgower. "Training Robust Neural Networks Using Lipschitz Bounds". In 2021 American Control Conference (ACC). IEEE, 2021. http://dx.doi.org/10.23919/acc50511.2021.9482773.
Hasan, Mahmudul, and Sachin Shetty. "Sentiment Analysis With Lipschitz Recurrent Neural Networks". In 2023 International Symposium on Networks, Computers and Communications (ISNCC). IEEE, 2023. http://dx.doi.org/10.1109/isncc58260.2023.10323619.
Bedregal, Benjamin, and Ivan Pan. "Some typical classes of t-norms and the 1-Lipschitz Condition". In 2006 Ninth Brazilian Symposium on Neural Networks (SBRN'06). IEEE, 2006. http://dx.doi.org/10.1109/sbrn.2006.38.
Guo, Yuhua, Yiran Li, and Amin Farjudian. "Validated Computation of Lipschitz Constant of Recurrent Neural Networks". In ICMLSC 2023: 2023 The 7th International Conference on Machine Learning and Soft Computing. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583788.3583795.
Hasan, Mahmudul, and Sachin Shetty. "Sentiment Analysis With Lipschitz Recurrent Neural Networks Based Generative Adversarial Networks". In 2024 International Conference on Computing, Networking and Communications (ICNC). IEEE, 2024. http://dx.doi.org/10.1109/icnc59896.2024.10555933.