Selected scientific literature on the topic "Lipschitz neural network"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Lipschitz neural network".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read its abstract online, when it is available in the metadata.
Journal articles on the topic "Lipschitz neural network"
Zhu, Zelong, Chunna Zhao, and Yaqun Huang. "Fractional order Lipschitz recurrent neural network with attention for long time series prediction". Journal of Physics: Conference Series 2813, no. 1 (August 1, 2024): 012015. http://dx.doi.org/10.1088/1742-6596/2813/1/012015.
Zhang, Huan, Pengchuan Zhang, and Cho-Jui Hsieh. "RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5757–64. http://dx.doi.org/10.1609/aaai.v33i01.33015757.
Araujo, Alexandre, Benjamin Negrevergne, Yann Chevaleyre, and Jamal Atif. "On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6661–69. http://dx.doi.org/10.1609/aaai.v35i8.16824.
Xu, Yuhui, Wenrui Dai, Yingyong Qi, Junni Zou, and Hongkai Xiong. "Iterative Deep Neural Network Quantization With Lipschitz Constraint". IEEE Transactions on Multimedia 22, no. 7 (July 2020): 1874–88. http://dx.doi.org/10.1109/tmm.2019.2949857.
Mohammad, Ibtihal J. "Neural Networks of the Rational r-th Powers of the Multivariate Bernstein Operators". Basra Journal of Science 40, no. 2 (September 1, 2022): 258–73. http://dx.doi.org/10.29072/basjs.20220201.
Ibtihal, J. M., and Ali J. Mohammad. "Neural Network of Multivariate Square Rational Bernstein Operators with Positive Integer Parameter". European Journal of Pure and Applied Mathematics 15, no. 3 (July 31, 2022): 1189–200. http://dx.doi.org/10.29020/nybg.ejpam.v15i3.4425.
Liu, Kanglin, and Guoping Qiu. "Lipschitz constrained GANs via boundedness and continuity". Neural Computing and Applications 32, no. 24 (May 24, 2020): 18271–83. http://dx.doi.org/10.1007/s00521-020-04954-z.
Othmani, S., N. E. Tatar, and A. Khemmoudj. "Asymptotic behavior of a BAM neural network with delays of distributed type". Mathematical Modelling of Natural Phenomena 16 (2021): 29. http://dx.doi.org/10.1051/mmnp/2021023.
Xia, Youshen. "An Extended Projection Neural Network for Constrained Optimization". Neural Computation 16, no. 4 (April 1, 2004): 863–83. http://dx.doi.org/10.1162/089976604322860730.
Li, Peiluan, Yuejing Lu, Changjin Xu, and Jing Ren. "Bifurcation Phenomenon and Control Technique in Fractional BAM Neural Network Models Concerning Delays". Fractal and Fractional 7, no. 1 (December 22, 2022): 7. http://dx.doi.org/10.3390/fractalfract7010007.
Theses / dissertations on the topic "Lipschitz neural network"
Béthune, Louis. "Apprentissage profond avec contraintes Lipschitz". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES014.
This thesis explores the characteristics and applications of Lipschitz networks in machine learning tasks. First, the framework of "optimization as a layer" is presented, showcasing various applications, including the parametrization of Lipschitz-constrained layers. Then, the expressiveness of these networks in classification tasks is investigated, revealing an accuracy/robustness tradeoff controlled by entropic regularization of the loss, accompanied by generalization guarantees. Subsequently, the research delves into the use of signed distance functions as solutions to a regularized optimal transport problem, showcasing their efficacy in robust one-class learning and the construction of neural implicit surfaces. Next, the thesis demonstrates how the back-propagation algorithm can be adapted to propagate bounds instead of vectors, enabling differentially private training of Lipschitz networks without incurring runtime or memory overhead. Finally, it goes beyond Lipschitz constraints and explores the use of convexity constraints for multivariate quantiles.
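A recurring object in the works listed here is the Lipschitz constant of a network. For a feedforward network with 1-Lipschitz activations (e.g., ReLU), the product of the spectral norms of the weight matrices is the standard naive upper bound that much of this literature refines. A minimal numpy sketch of that bound (illustrative only, not code from the cited thesis):

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Upper bound on the l2 Lipschitz constant of an MLP
    x -> W_n s(... s(W_1 x)) with 1-Lipschitz activations s:
    the product of the layers' spectral norms."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value of W
    return bound

# Toy 3-layer network with layer sizes 8 -> 16 -> 16 -> 1.
rng = np.random.default_rng(0)
ws = [rng.standard_normal((16, 8)),
      rng.standard_normal((16, 16)),
      rng.standard_normal((1, 16))]
print(naive_lipschitz_bound(ws))
```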
Gupta, Kavya. "Stability Quantification of Neural Networks". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST004.
Artificial neural networks are at the core of recent advances in Artificial Intelligence. One of the main challenges faced today, especially by companies like Thales that design advanced industrial systems, is to ensure the safety of new generations of products using these technologies. In a key observation in 2013, neural networks were shown to be sensitive to adversarial perturbations, raising serious concerns about their applicability in safety-critical environments. In recent years, publications studying the various aspects of neural network robustness, raising questions such as "Why do adversarial attacks occur?", "How can we make neural networks more robust to adversarial noise?", and "How do we generate stronger attacks?", have grown exponentially. The contributions of this thesis aim to tackle such problems. The adversarial machine learning community concentrates mainly on classification scenarios, whereas studies on regression tasks are scarce. Our contributions bridge this significant gap between adversarial machine learning and regression applications.

The first contribution, in Chapter 3, proposes a white-box attacker designed to attack regression models. The presented adversarial attacker is derived from the algebraic properties of the Jacobian of the network. We show that our attacker successfully fools the neural network, and we measure its effectiveness in reducing estimation performance. We present our results on various open-source and real industrial tabular datasets. Our analysis relies on the quantification of the fooling error as well as different error metrics. Another noteworthy feature of our attacker is that it allows us to optimally attack a subset of inputs, which may help analyze the sensitivity of specific inputs. We also show the effect of this attacker on spectrally normalized models, which are known to be more robust to attacks.

The second contribution of this thesis (Chapter 4) presents a multivariate Lipschitz constant analysis of neural networks. The Lipschitz constant is widely used in the literature to study the internal properties of neural networks, but most works perform a single-parameter analysis, which does not allow quantifying the effect of individual inputs on the output. We propose a multivariate Lipschitz-constant-based stability analysis of fully connected neural networks, allowing us to capture the influence of each input or group of inputs on the network's stability. Our approach relies on a suitable re-normalization of the input space, intended to provide a more precise analysis than the one given by a global Lipschitz constant. We display the results of this analysis through a new representation designed for machine learning practitioners and safety engineers, termed a Lipschitz star. We perform experiments on various open-access tabular datasets and an actual Thales Air Mobility industrial application subject to certification requirements.

The use of spectral normalization in designing a stability control loop is discussed in Chapter 5. A critical requirement is that the model behave according to specified performance and stability targets while in operation, but imposing tight Lipschitz constraints during training usually reduces accuracy. Hence, we design an algorithm to train "stable-by-design" neural network models using our spectral normalization approach, which optimizes the model by taking into account both performance and stability targets. We focus on Small Unmanned Aerial Vehicles (UAVs). More specifically, we present a novel application of neural networks to detect elevon positioning faults in real time, allowing the remote pilot to take the actions necessary to ensure safety.
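The spectral normalization mechanism invoked in this abstract can be sketched compactly: power iteration estimates a layer's largest singular value, and the weight matrix is rescaled so its spectral norm stays below a target, which caps the layer's Lipschitz constant. A generic numpy illustration of the technique (a sketch under common assumptions, not the thesis's actual training loop):

```python
import numpy as np

def spectral_normalize(W, target=1.0, n_iter=30, eps=1e-12):
    """Estimate the largest singular value of W by power iteration,
    then rescale W so its spectral norm does not exceed `target`."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    sigma = u @ W @ v  # estimated spectral norm of W
    return W * min(1.0, target / (sigma + eps))

W = np.random.default_rng(1).standard_normal((64, 32))
print(np.linalg.norm(spectral_normalize(W), ord=2))  # approximately <= 1.0
```

Applied to every layer of a network with 1-Lipschitz activations, this keeps the product-of-norms Lipschitz bound at or below target**n_layers.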
Neacșu, Ana-Antonia. "Robust Deep learning methods inspired by signal processing algorithms". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST212.
Understanding the importance of defense strategies against adversarial attacks has become paramount in ensuring the trustworthiness and resilience of neural networks. While traditional security measures focus on protecting data and software from external threats, the unique challenge posed by adversarial attacks lies in their ability to exploit the inherent vulnerabilities of the underlying machine learning algorithms themselves.

The first part of the thesis proposes new constrained learning strategies that ensure robustness against adversarial perturbations by controlling the Lipschitz constant of a classifier. We focus on nonnegative neural networks, for which accurate Lipschitz bounds can be derived, and we propose different spectral norm constraints offering robustness guarantees from a theoretical viewpoint. We validate our solution in the context of gesture recognition based on Surface Electromyographic (sEMG) signals.

In the second part of the thesis, we propose a new class of neural networks (ACNN) that can be viewed as establishing a link between fully connected and convolutional networks, and we propose an iterative algorithm to control their robustness during training. Next, we extend our solution to the complex plane and address the problem of designing robust complex-valued neural networks by proposing a new architecture (RCFF-Net), for which we derive tight Lipschitz constant bounds. Both solutions are validated for audio denoising.

In the last part, we introduce ABBA Networks, a novel class of (almost) non-negative neural networks, which we show to be universal approximators. We derive tight Lipschitz bounds for both linear and convolutional layers, and we propose an algorithm to train robust ABBA networks. We show the effectiveness of the proposed approach in the context of image classification.
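On why nonnegativity permits the "accurate Lipschitz bounds" mentioned above: for a network whose weights are entrywise nonnegative and whose activations are monotone and 1-Lipschitz, the spectral norm of the product of the weight matrices is itself a Lipschitz bound, and it is never larger than the product of the individual norms. A numpy sketch under those assumptions (an illustration of the general idea, not the cited authors' code):

```python
import numpy as np
from functools import reduce

def nonneg_lipschitz_bound(weights):
    """Spectral norm of W_n @ ... @ W_1, a Lipschitz bound for networks
    with entrywise nonnegative weights and monotone 1-Lipschitz
    activations; by submultiplicativity it never exceeds the naive
    product-of-norms bound."""
    product = reduce(lambda acc, W: W @ acc, weights)  # W_n @ ... @ W_1
    return np.linalg.norm(product, ord=2)

rng = np.random.default_rng(2)
ws = [np.abs(rng.standard_normal((16, 8))),
      np.abs(rng.standard_normal((16, 16))),
      np.abs(rng.standard_normal((1, 16)))]
print(nonneg_lipschitz_bound(ws))                   # tighter bound
print(np.prod([np.linalg.norm(W, 2) for W in ws]))  # naive bound
```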
Book chapters on the topic "Lipschitz neural network"
Shang, Yuzhang, Dan Xu, Bin Duan, Ziliang Zong, Liqiang Nie, and Yan Yan. "Lipschitz Continuity Retained Binary Neural Network". In Lecture Notes in Computer Science, 603–19. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20083-0_36.
Yi, Zhang, and K. K. Tan. "Delayed Recurrent Neural Networks with Global Lipschitz Activation Functions". In Network Theory and Applications, 119–70. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4757-3819-3_6.
Tang, Zaiyong, Kallol Bagchi, Youqin Pan, and Gary J. Koehler. "Pruning Feedforward Neural Network Search Space Using Local Lipschitz Constants". In Advances in Neural Networks – ISNN 2012, 11–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31346-2_2.
Stamova, Ivanka, Trayan Stamov, and Gani Stamov. "Lipschitz Quasistability of Impulsive Cohen–Grossberg Neural Network Models with Delays and Reaction-Diffusion Terms". In Nonlinear Systems and Complexity, 59–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-42689-6_3.
Mangal, Ravi, Kartik Sarangmath, Aditya V. Nori, and Alessandro Orso. "Probabilistic Lipschitz Analysis of Neural Networks". In Static Analysis, 274–309. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65474-0_13.
Nie, Xiaobing, and Jinde Cao. "Dynamics of Competitive Neural Networks with Inverse Lipschitz Neuron Activations". In Advances in Neural Networks - ISNN 2010, 483–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13278-0_62.
Usama, Muhammad, and Dong Eui Chang. "Towards Robust Neural Networks with Lipschitz Continuity". In Digital Forensics and Watermarking, 373–89. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11389-6_28.
Bungert, Leon, René Raab, Tim Roith, Leo Schwinn, and Daniel Tenbrinck. "CLIP: Cheap Lipschitz Training of Neural Networks". In Lecture Notes in Computer Science, 307–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75549-2_25.
Fu, Chaojin, and Ailong Wu. "Global Exponential Stability of Delayed Neural Networks with Non-lipschitz Neuron Activations and Impulses". In Advances in Computation and Intelligence, 92–100. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04843-2_11.
Ricalde, Luis J., Glendy A. Catzin, Alma Y. Alanis, and Edgar N. Sanchez. "Time Series Forecasting via a Higher Order Neural Network trained with the Extended Kalman Filter for Smart Grid Applications". In Artificial Higher Order Neural Networks for Modeling and Simulation, 254–74. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2175-6.ch012.
Texto completo da fonteTrabalhos de conferências sobre o assunto "Lipschitz neural network"
Ozcan, Neyir. "New results for global stability of neutral-type delayed neural networks". In The 11th International Conference on Integrated Modeling and Analysis in Applied Control and Automation. CAL-TEK srl, 2018. http://dx.doi.org/10.46354/i3m.2018.imaaca.004.
Ruan, Wenjie, Xiaowei Huang, and Marta Kwiatkowska. "Reachability Analysis of Deep Neural Networks with Provable Guarantees". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/368.
Wu, Zheng-Fan, Yi-Nan Feng, and Hui Xue. "Automatically Gating Multi-Frequency Patterns through Rectified Continuous Bernoulli Units with Theoretical Principles". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/499.
Bose, Sarosij. "Lipschitz Bound Analysis of Neural Networks". In 2022 13th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2022. http://dx.doi.org/10.1109/icccnt54827.2022.9984441.
Mediratta, Ishita, Snehanshu Saha, and Shubhad Mathur. "LipARELU: ARELU Networks aided by Lipschitz Acceleration". In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533853.
Pauli, Patricia, Anne Koch, Julian Berberich, Paul Kohler, and Frank Allgower. "Training Robust Neural Networks Using Lipschitz Bounds". In 2021 American Control Conference (ACC). IEEE, 2021. http://dx.doi.org/10.23919/acc50511.2021.9482773.
Hasan, Mahmudul, and Sachin Shetty. "Sentiment Analysis With Lipschitz Recurrent Neural Networks". In 2023 International Symposium on Networks, Computers and Communications (ISNCC). IEEE, 2023. http://dx.doi.org/10.1109/isncc58260.2023.10323619.
Bedregal, Benjamin, and Ivan Pan. "Some typical classes of t-norms and the 1-Lipschitz Condition". In 2006 Ninth Brazilian Symposium on Neural Networks (SBRN'06). IEEE, 2006. http://dx.doi.org/10.1109/sbrn.2006.38.
Guo, Yuhua, Yiran Li, and Amin Farjudian. "Validated Computation of Lipschitz Constant of Recurrent Neural Networks". In ICMLSC 2023: 2023 The 7th International Conference on Machine Learning and Soft Computing. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583788.3583795.
Hasan, Mahmudul, and Sachin Shetty. "Sentiment Analysis With Lipschitz Recurrent Neural Networks Based Generative Adversarial Networks". In 2024 International Conference on Computing, Networking and Communications (ICNC). IEEE, 2024. http://dx.doi.org/10.1109/icnc59896.2024.10555933.