Contents
Selection of scholarly literature on the topic "Mini-Batch Optimization"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of recent articles, books, dissertations, reports, and other scholarly sources on the topic "Mini-Batch Optimization".
Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work is formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Mini-Batch Optimization"
Gultekin, San, Avishek Saha, Adwait Ratnaparkhi, and John Paisley. "MBA: Mini-Batch AUC Optimization." IEEE Transactions on Neural Networks and Learning Systems 31, no. 12 (December 2020): 5561–74. http://dx.doi.org/10.1109/tnnls.2020.2969527.
Feyzmahdavian, Hamid Reza, Arda Aytekin, and Mikael Johansson. "An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization." IEEE Transactions on Automatic Control 61, no. 12 (December 2016): 3740–54. http://dx.doi.org/10.1109/tac.2016.2525015.
Banerjee, Subhankar, and Shayok Chakraborty. "Deterministic Mini-batch Sequencing for Training Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6723–31. http://dx.doi.org/10.1609/aaai.v35i8.16831.
Simanungkalit, F. R. J., H. Hanifah, G. Ardaneswari, N. Hariadi, and B. D. Handari. "Prediction of students' academic performance using ANN with mini-batch gradient descent and Levenberg-Marquardt optimization algorithms." Journal of Physics: Conference Series 2106, no. 1 (November 1, 2021): 012018. http://dx.doi.org/10.1088/1742-6596/2106/1/012018.
van Herwaarden, Dirk Philip, Christian Boehm, Michael Afanasiev, Solvi Thrastarson, Lion Krischer, Jeannot Trampert, and Andreas Fichtner. "Accelerated full-waveform inversion using dynamic mini-batches." Geophysical Journal International 221, no. 2 (February 21, 2020): 1427–38. http://dx.doi.org/10.1093/gji/ggaa079.
Ghadimi, Saeed, Guanghui Lan, and Hongchao Zhang. "Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization." Mathematical Programming 155, no. 1-2 (December 11, 2014): 267–305. http://dx.doi.org/10.1007/s10107-014-0846-1.
Kervazo, C., T. Liaudat, and J. Bobin. "Faster and better sparse blind source separation through mini-batch optimization." Digital Signal Processing 106 (November 2020): 102827. http://dx.doi.org/10.1016/j.dsp.2020.102827.
Dimitriou, Neofytos, and Ognjen Arandjelović. "Sequential Normalization: Embracing Smaller Sample Sizes for Normalization." Information 13, no. 7 (July 12, 2022): 337. http://dx.doi.org/10.3390/info13070337.
Bakurov, Illya, Marco Buzzelli, Mauro Castelli, Leonardo Vanneschi, and Raimondo Schettini. "General Purpose Optimization Library (GPOL): A Flexible and Efficient Multi-Purpose Optimization Library in Python." Applied Sciences 11, no. 11 (May 23, 2021): 4774. http://dx.doi.org/10.3390/app11114774.
Li, Zhiyuan, Xun Jian, Yue Wang, Yingxia Shao, and Lei Chen. "DAHA: Accelerating GNN Training with Data and Hardware Aware Execution Planning." Proceedings of the VLDB Endowment 17, no. 6 (February 2024): 1364–76. http://dx.doi.org/10.14778/3648160.3648176.
Dissertations on the topic "Mini-Batch Optimization"
Bensaid, Bilel. "Analyse et développement de nouveaux optimiseurs en Machine Learning." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0218.
Over the last few years, developing an explainable and frugal artificial intelligence (AI) has become a fundamental challenge, especially when AI is used in safety-critical systems and demands ever more energy. The issue is all the more serious given the huge number of hyperparameters that must be tuned to make the models work. Among these parameters, the optimizer and its associated tunings appear to be the most important levers for improving these models [196]. This thesis focuses on the analysis of the learning process and optimizer for neural networks, identifying mathematical properties closely related to these two challenges. First, undesirable behaviors that prevent the design of explainable and frugal networks are identified. These behaviors are then explained using two tools: Lyapunov stability and geometric integrators. Numerical experiments show that stabilizing the learning process improves overall performance and allows the design of shallow networks. Theoretically, the suggested point of view makes it possible to derive convergence guarantees for classical deep learning optimizers. The same approach is valuable for mini-batch optimization, where unwelcome phenomena proliferate: the concept of a balanced splitting scheme becomes essential to improve understanding of the learning process and its robustness. This study paves the way for the design of new adaptive optimizers, by exploiting the deep relation between robust optimization and invariant-preserving schemes for dynamical systems.
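As background to the works listed here: mini-batch optimization replaces the full-data gradient at each step with the gradient computed on a small random subset of the data. The following Python/NumPy sketch illustrates the generic technique for a least-squares model; it is a minimal illustrative example under assumed synthetic data and hyperparameters, not the algorithm of any specific work cited on this page (in particular, it is not the balanced splitting scheme mentioned in the abstract above).

import numpy as np

def mini_batch_gradient_descent(X, y, batch_size=32, lr=0.01, epochs=200, seed=0):
    """Plain mini-batch gradient descent for least-squares regression (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)                                   # model parameters
    for _ in range(epochs):
        perm = rng.permutation(n)                     # reshuffle the data each epoch
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]                   # current mini-batch
            grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)   # gradient of the batch mean-squared error
            w -= lr * grad                            # gradient step
    return w

# Hypothetical usage on synthetic data
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.1 * rng.standard_normal(1000)
print(mini_batch_gradient_descent(X, y))              # should approximate w_true

Compared with full-batch gradient descent, each update uses only batch_size samples, which lowers the per-step cost at the price of gradient noise; much of the literature above concerns choosing the batch size, step length, and update rule to control that noise.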
Book chapters on the topic "Mini-Batch Optimization"
Chauhan, Vinod Kumar. "Mini-batch Block-coordinate Newton Method." In Stochastic Optimization for Large-scale Machine Learning, 117–22. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003240167-9.
Chauhan, Vinod Kumar. "Mini-batch and Block-coordinate Approach." In Stochastic Optimization for Large-scale Machine Learning, 51–66. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003240167-5.
Franchini, Giorgia, Valeria Ruggiero, and Luca Zanni. "Steplength and Mini-batch Size Selection in Stochastic Gradient Methods." In Machine Learning, Optimization, and Data Science, 259–63. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_22.
Panteleev, Andrei V., and Aleksandr V. Lobanov. "Application of Mini-Batch Adaptive Optimization Method in Stochastic Control Problems." In Advances in Theory and Practice of Computational Mechanics, 345–61. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8926-0_23.
Liu, Jie, and Martin Takáč. "Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme Under Weak Strong Convexity Assumption." In Modeling and Optimization: Theory and Applications, 95–117. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66616-7_7.
Panteleev, A. V., and A. V. Lobanov. "Application of the Zero-Order Mini-Batch Optimization Method in the Tracking Control Problem." In SMART Automatics and Energy, 573–81. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8759-4_59.
Kim, Hee-Seung, Lingyi Zhang, Adam Bienkowski, Krishna R. Pattipati, David Sidoti, Yaakov Bar-Shalom, and David L. Kleinman. "Sequential Mini-Batch Noise Covariance Estimator." In Kalman Filter - Engineering Applications [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.108917.
Surono, Sugiyarto, Aris Thobirin, Zani Anjani Rafsanjani Hsm, Asih Yuli Astuti, Berlin Ryan Kp, and Milla Oktavia. "Optimization of Fuzzy System Inference Model on Mini Batch Gradient Descent." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220387.
Luo, Kangyang, Kunkun Zhang, Shengbo Zhang, Xiang Li, and Ming Gao. "Decentralized Local Updates with Dual-Slow Estimation and Momentum-Based Variance-Reduction for Non-Convex Optimization." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230445.
Wu, Lilei, and Jie Liu. "Contrastive Learning with Diverse Samples." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230575.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Mini-Batch Optimization"
Li, Mu, Tong Zhang, Yuqiang Chen, and Alexander J. Smola. "Efficient mini-batch training for stochastic optimization." In KDD '14: The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2623330.2623612.
Feyzmahdavian, Hamid Reza, Arda Aytekin, and Mikael Johansson. "An asynchronous mini-batch algorithm for regularized stochastic optimization." In 2015 54th IEEE Conference on Decision and Control (CDC). IEEE, 2015. http://dx.doi.org/10.1109/cdc.2015.7402404.
Joseph, K. J., Vamshi Teja R, Krishnakant Singh, and Vineeth N. Balasubramanian. "Submodular Batch Selection for Training Deep Neural Networks." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/372.
Xie, Xin, Chao Chen, and Zhijian Chen. "Mini-batch Quasi-Newton optimization for Large Scale Linear Support Vector Regression." In 2015 4th International Conference on Mechatronics, Materials, Chemistry and Computer Engineering. Paris, France: Atlantis Press, 2015. http://dx.doi.org/10.2991/icmmcce-15.2015.503.
Naganuma, Hiroki, and Rio Yokota. "A Performance Improvement Approach for Second-Order Optimization in Large Mini-batch Training." In 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). IEEE, 2019. http://dx.doi.org/10.1109/ccgrid.2019.00092.
Yang, Hui, and Bangyu Wu. "Full waveform inversion based on mini-batch gradient descent optimization with geological constrain." In International Geophysical Conference, Qingdao, China, 17-20 April 2017. Society of Exploration Geophysicists and Chinese Petroleum Society, 2017. http://dx.doi.org/10.1190/igc2017-104.
Oktavia, Milla, and Sugiyarto Surono. "Optimization Takagi Sugeno Kang fuzzy system using mini-batch gradient descent with uniform regularization." In Proceedings of the 3rd Ahmad Dahlan International Conference on Mathematics and Mathematics Education 2021. AIP Publishing, 2023. http://dx.doi.org/10.1063/5.0140144.
Khan, Muhammad Waqas, Muhammad Zeeshan, and Muhammad Usman. "Traffic Scheduling Optimization in Cognitive Radio based Smart Grid Network Using Mini-Batch Gradient Descent Method." In 2019 14th Iberian Conference on Information Systems and Technologies (CISTI). IEEE, 2019. http://dx.doi.org/10.23919/cisti.2019.8760693.
Zheng, Feng, Xin Miao, and Heng Huang. "Fast Vehicle Identification in Surveillance via Ranked Semantic Sampling Based Embedding." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/514.
Zhang, Wenyu, Li Shen, Wanyue Zhang, and Chuan-Sheng Foo. "Few-Shot Adaptation of Pre-Trained Networks for Domain Shift." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/232.