Ready-made bibliography on the topic "Mini-Batch Optimization"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Mini-Batch Optimization".
An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, provided the relevant parameters are available in the work's metadata.
Journal articles on the topic "Mini-Batch Optimization"
Gultekin, San, Avishek Saha, Adwait Ratnaparkhi, and John Paisley. "MBA: Mini-Batch AUC Optimization". IEEE Transactions on Neural Networks and Learning Systems 31, no. 12 (December 2020): 5561–74. http://dx.doi.org/10.1109/tnnls.2020.2969527.
Feyzmahdavian, Hamid Reza, Arda Aytekin, and Mikael Johansson. "An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization". IEEE Transactions on Automatic Control 61, no. 12 (December 2016): 3740–54. http://dx.doi.org/10.1109/tac.2016.2525015.
Banerjee, Subhankar, and Shayok Chakraborty. "Deterministic Mini-batch Sequencing for Training Deep Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6723–31. http://dx.doi.org/10.1609/aaai.v35i8.16831.
Simanungkalit, F. R. J., H. Hanifah, G. Ardaneswari, N. Hariadi, and B. D. Handari. "Prediction of students’ academic performance using ANN with mini-batch gradient descent and Levenberg-Marquardt optimization algorithms". Journal of Physics: Conference Series 2106, no. 1 (November 1, 2021): 012018. http://dx.doi.org/10.1088/1742-6596/2106/1/012018.
van Herwaarden, Dirk Philip, Christian Boehm, Michael Afanasiev, Solvi Thrastarson, Lion Krischer, Jeannot Trampert, and Andreas Fichtner. "Accelerated full-waveform inversion using dynamic mini-batches". Geophysical Journal International 221, no. 2 (February 21, 2020): 1427–38. http://dx.doi.org/10.1093/gji/ggaa079.
Ghadimi, Saeed, Guanghui Lan, and Hongchao Zhang. "Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization". Mathematical Programming 155, no. 1-2 (December 11, 2014): 267–305. http://dx.doi.org/10.1007/s10107-014-0846-1.
Kervazo, C., T. Liaudat, and J. Bobin. "Faster and better sparse blind source separation through mini-batch optimization". Digital Signal Processing 106 (November 2020): 102827. http://dx.doi.org/10.1016/j.dsp.2020.102827.
Dimitriou, Neofytos, and Ognjen Arandjelović. "Sequential Normalization: Embracing Smaller Sample Sizes for Normalization". Information 13, no. 7 (July 12, 2022): 337. http://dx.doi.org/10.3390/info13070337.
Bakurov, Illya, Marco Buzzelli, Mauro Castelli, Leonardo Vanneschi, and Raimondo Schettini. "General Purpose Optimization Library (GPOL): A Flexible and Efficient Multi-Purpose Optimization Library in Python". Applied Sciences 11, no. 11 (May 23, 2021): 4774. http://dx.doi.org/10.3390/app11114774.
Li, Zhiyuan, Xun Jian, Yue Wang, Yingxia Shao, and Lei Chen. "DAHA: Accelerating GNN Training with Data and Hardware Aware Execution Planning". Proceedings of the VLDB Endowment 17, no. 6 (February 2024): 1364–76. http://dx.doi.org/10.14778/3648160.3648176.
Pełny tekst źródłaRozprawy doktorskie na temat "Mini-Batch Optimization"
Bensaid, Bilel. "Analyse et développement de nouveaux optimiseurs en Machine Learning". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0218.
Over the last few years, developing an explainable and frugal artificial intelligence (AI) has become a fundamental challenge, especially as AI is deployed in safety-critical systems and demands ever more energy. The problem is compounded by the huge number of hyperparameters that must be tuned to make the models work. Among these parameters, the optimizer and its associated settings are the most important levers for improving these models [196]. This thesis focuses on the analysis of the learning process/optimizer for neural networks, identifying mathematical properties closely related to these two challenges. First, undesirable behaviors that prevent the design of explainable and frugal networks are identified. These behaviors are then explained using two tools: Lyapunov stability and geometric integrators. Numerical experiments show that stabilizing the learning process improves overall performance and allows the design of shallow networks. Theoretically, the suggested point of view makes it possible to derive convergence guarantees for classical deep learning optimizers. The same approach is valuable for mini-batch optimization, where unwelcome phenomena proliferate: the concept of a balanced splitting scheme becomes essential for understanding the learning process and improving its robustness. This study paves the way for the design of new adaptive optimizers that exploit the deep relation between robust optimization and invariant-preserving schemes for dynamical systems.
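For context on the technique shared by the works listed here, the following is a minimal, illustrative sketch of plain mini-batch gradient descent on a least-squares objective. It is a generic NumPy example, not the algorithm of any specific work above; the function name and toy data are hypothetical.

```python
import numpy as np

def minibatch_gd(X, y, batch_size=32, lr=0.1, epochs=100, seed=0):
    """Mini-batch gradient descent for the least-squares loss
    f(w) = ||Xw - y||^2 / (2n), reshuffling the data once per epoch."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)  # sample mini-batches without replacement
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ w - yb) / len(idx)  # mini-batch gradient estimate
            w -= lr * grad  # plain SGD step: fixed steplength, no momentum
    return w

# Toy usage: recover known weights from noisy linear data.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.01 * rng.normal(size=500)
print(minibatch_gd(X, y))  # approximately [1. 2. 3. 4. 5.]
```

The batch_size parameter is the knob many of the works above analyze: smaller batches are cheaper per step but yield noisier gradient estimates, while larger batches approach full-gradient descent.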
Book chapters on the topic "Mini-Batch Optimization"
Chauhan, Vinod Kumar. "Mini-batch Block-coordinate Newton Method". In Stochastic Optimization for Large-scale Machine Learning, 117–22. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003240167-9.
Chauhan, Vinod Kumar. "Mini-batch and Block-coordinate Approach". In Stochastic Optimization for Large-scale Machine Learning, 51–66. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003240167-5.
Franchini, Giorgia, Valeria Ruggiero, and Luca Zanni. "Steplength and Mini-batch Size Selection in Stochastic Gradient Methods". In Machine Learning, Optimization, and Data Science, 259–63. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_22.
Panteleev, Andrei V., and Aleksandr V. Lobanov. "Application of Mini-Batch Adaptive Optimization Method in Stochastic Control Problems". In Advances in Theory and Practice of Computational Mechanics, 345–61. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8926-0_23.
Liu, Jie, and Martin Takáč. "Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme Under Weak Strong Convexity Assumption". In Modeling and Optimization: Theory and Applications, 95–117. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66616-7_7.
Panteleev, A. V., and A. V. Lobanov. "Application of the Zero-Order Mini-Batch Optimization Method in the Tracking Control Problem". In SMART Automatics and Energy, 573–81. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8759-4_59.
Kim, Hee-Seung, Lingyi Zhang, Adam Bienkowski, Krishna R. Pattipati, David Sidoti, Yaakov Bar-Shalom, and David L. Kleinman. "Sequential Mini-Batch Noise Covariance Estimator". In Kalman Filter - Engineering Applications [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.108917.
Surono, Sugiyarto, Aris Thobirin, Zani Anjani Rafsanjani Hsm, Asih Yuli Astuti, Berlin Ryan Kp, and Milla Oktavia. "Optimization of Fuzzy System Inference Model on Mini Batch Gradient Descent". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220387.
Luo, Kangyang, Kunkun Zhang, Shengbo Zhang, Xiang Li, and Ming Gao. "Decentralized Local Updates with Dual-Slow Estimation and Momentum-Based Variance-Reduction for Non-Convex Optimization". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230445.
Wu, Lilei, and Jie Liu. "Contrastive Learning with Diverse Samples". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230575.
Pełny tekst źródłaStreszczenia konferencji na temat "Mini-Batch Optimization"
Li, Mu, Tong Zhang, Yuqiang Chen, and Alexander J. Smola. "Efficient mini-batch training for stochastic optimization". In KDD '14: The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2623330.2623612.
Feyzmahdavian, Hamid Reza, Arda Aytekin, and Mikael Johansson. "An asynchronous mini-batch algorithm for regularized stochastic optimization". In 2015 54th IEEE Conference on Decision and Control (CDC). IEEE, 2015. http://dx.doi.org/10.1109/cdc.2015.7402404.
Joseph, K. J., Vamshi Teja R, Krishnakant Singh, and Vineeth N. Balasubramanian. "Submodular Batch Selection for Training Deep Neural Networks". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/372.
Xie, Xin, Chao Chen, and Zhijian Chen. "Mini-batch Quasi-Newton optimization for Large Scale Linear Support Vector Regression". In 2015 4th International Conference on Mechatronics, Materials, Chemistry and Computer Engineering. Paris, France: Atlantis Press, 2015. http://dx.doi.org/10.2991/icmmcce-15.2015.503.
Naganuma, Hiroki, and Rio Yokota. "A Performance Improvement Approach for Second-Order Optimization in Large Mini-batch Training". In 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). IEEE, 2019. http://dx.doi.org/10.1109/ccgrid.2019.00092.
Yang, Hui, and Bangyu Wu. "Full waveform inversion based on mini-batch gradient descent optimization with geological constrain". In International Geophysical Conference, Qingdao, China, 17-20 April 2017. Society of Exploration Geophysicists and Chinese Petroleum Society, 2017. http://dx.doi.org/10.1190/igc2017-104.
Oktavia, Milla, and Sugiyarto Surono. "Optimization Takagi Sugeno Kang fuzzy system using mini-batch gradient descent with uniform regularization". In PROCEEDINGS OF THE 3RD AHMAD DAHLAN INTERNATIONAL CONFERENCE ON MATHEMATICS AND MATHEMATICS EDUCATION 2021. AIP Publishing, 2023. http://dx.doi.org/10.1063/5.0140144.
Khan, Muhammad Waqas, Muhammad Zeeshan, and Muhammad Usman. "Traffic Scheduling Optimization in Cognitive Radio based Smart Grid Network Using Mini-Batch Gradient Descent Method". In 2019 14th Iberian Conference on Information Systems and Technologies (CISTI). IEEE, 2019. http://dx.doi.org/10.23919/cisti.2019.8760693.
Zheng, Feng, Xin Miao, and Heng Huang. "Fast Vehicle Identification in Surveillance via Ranked Semantic Sampling Based Embedding". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/514.
Zhang, Wenyu, Li Shen, Wanyue Zhang, and Chuan-Sheng Foo. "Few-Shot Adaptation of Pre-Trained Networks for Domain Shift". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/232.