A selection of scholarly literature on the topic "Mini-Batch Optimization"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Mini-Batch Optimization".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, provided these are available in the metadata.

Journal articles on the topic "Mini-Batch Optimization"

1

Gultekin, San, Avishek Saha, Adwait Ratnaparkhi, and John Paisley. "MBA: Mini-Batch AUC Optimization." IEEE Transactions on Neural Networks and Learning Systems 31, no. 12 (December 2020): 5561–74. http://dx.doi.org/10.1109/tnnls.2020.2969527.

2

Feyzmahdavian, Hamid Reza, Arda Aytekin, and Mikael Johansson. "An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization." IEEE Transactions on Automatic Control 61, no. 12 (December 2016): 3740–54. http://dx.doi.org/10.1109/tac.2016.2525015.

3

Banerjee, Subhankar, and Shayok Chakraborty. "Deterministic Mini-batch Sequencing for Training Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6723–31. http://dx.doi.org/10.1609/aaai.v35i8.16831.

Abstract:
Recent advancements in the field of deep learning have dramatically improved the performance of machine learning models in a variety of applications, including computer vision, text mining, speech processing and fraud detection among others. Mini-batch gradient descent is the standard algorithm to train deep models, where mini-batches of a fixed size are sampled randomly from the training data and passed through the network sequentially. In this paper, we present a novel algorithm to generate a deterministic sequence of mini-batches to train a deep neural network (rather than a random sequence). Our rationale is to select a mini-batch by minimizing the Maximum Mean Discrepancy (MMD) between the already selected mini-batches and the unselected training samples. We pose the mini-batch selection as a constrained optimization problem and derive a linear programming relaxation to determine the sequence of mini-batches. To the best of our knowledge, this is the first research effort that uses the MMD criterion to determine a sequence of mini-batches to train a deep neural network. The proposed mini-batch sequencing strategy is deterministic and independent of the underlying network architecture and prediction task. Our extensive empirical analyses on three challenging datasets corroborate the merit of our framework over competing baselines. We further study the performance of our framework on two other applications besides classification (regression and semantic segmentation) to validate its generalizability.
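The MMD criterion at the heart of this selection scheme is straightforward to prototype. The sketch below is only an illustration under simplifying assumptions: it uses an RBF-kernel MMD estimate and a greedy heuristic instead of the paper's linear-programming relaxation, and all function and parameter names (rbf_kernel, mmd2, next_minibatch_greedy, gamma) are ours, not the authors'.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

def next_minibatch_greedy(selected, pool, batch_size, gamma=1.0):
    """Greedily grow the next mini-batch so that (selected so far + batch)
    stays close, in MMD, to the remaining unselected samples."""
    batch_idx = []
    for _ in range(batch_size):
        best_i, best_score = None, np.inf
        for i in range(len(pool)):
            if i in batch_idx:
                continue
            cand = pool[batch_idx + [i]]
            if len(selected):
                cand = np.vstack([selected, cand])
            rest = np.delete(pool, batch_idx + [i], axis=0)
            score = mmd2(cand, rest, gamma)
            if score < best_score:
                best_i, best_score = i, score
        batch_idx.append(best_i)
    return batch_idx

rng = np.random.default_rng(0)
pool = rng.normal(size=(64, 8))                     # toy "unselected training samples"
first_batch = next_minibatch_greedy(np.empty((0, 8)), pool, batch_size=8)
print(first_batch)
```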
4

Simanungkalit, F. R. J., H. Hanifah, G. Ardaneswari, N. Hariadi, and B. D. Handari. "Prediction of students’ academic performance using ANN with mini-batch gradient descent and Levenberg-Marquardt optimization algorithms." Journal of Physics: Conference Series 2106, no. 1 (November 1, 2021): 012018. http://dx.doi.org/10.1088/1742-6596/2106/1/012018.

Abstract:
Online learning indirectly increases stress, reducing social interaction among students and leading to physical and mental fatigue, which in turn lowers students’ academic performance. Early prediction of academic performance is therefore needed to identify at-risk students whose performance is declining. In this paper, we use artificial neural networks (ANN) to predict this performance. ANNs with two optimization algorithms, mini-batch gradient descent and Levenberg-Marquardt, are applied to students’ learning activity data in course X, which is recorded in LMS UI. The data contain 232 students and cover two periods: the first month and the second month of study. Before the ANNs are trained, the data are normalized and ADASYN is applied. The results of the ANN implementation with the two optimization algorithms, over 10 trials each, are compared on the average accuracy, sensitivity, and specificity values. We then determine the best period for correctly predicting unsuccessful students. The results show that both algorithms give better predictions over two months than over one. ANN with mini-batch gradient descent has an average sensitivity of 78%; the corresponding value for ANN with Levenberg-Marquardt is 75%. Therefore, ANN with mini-batch gradient descent as its optimization algorithm is more suitable for predicting students who are at risk of failing.
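As background, the mini-batch gradient descent training loop referred to here can be sketched in plain numpy. This is a generic illustration, not the authors' network, data, or hyperparameters; the toy features, labels, layer sizes, and learning rate below are placeholders of our choosing.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for normalized student-activity features and binary pass/fail labels.
X = rng.normal(size=(232, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer with sigmoid activations, trained by mini-batch gradient descent.
W1 = rng.normal(scale=0.1, size=(10, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1));  b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, batch_size = 0.1, 32
for epoch in range(200):
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        # Forward pass.
        h = sigmoid(xb @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Backward pass for binary cross-entropy loss.
        dz2 = (p - yb) / len(xb)
        dW2 = h.T @ dz2; db2 = dz2.sum(0)
        dh = dz2 @ W2.T * h * (1 - h)
        dW1 = xb.T @ dh; db1 = dh.sum(0)
        # Parameter update on this mini-batch only.
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

accuracy = ((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```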
5

van Herwaarden, Dirk Philip, Christian Boehm, Michael Afanasiev, Solvi Thrastarson, Lion Krischer, Jeannot Trampert, and Andreas Fichtner. "Accelerated full-waveform inversion using dynamic mini-batches." Geophysical Journal International 221, no. 2 (February 21, 2020): 1427–38. http://dx.doi.org/10.1093/gji/ggaa079.

Abstract:
We present an accelerated full-waveform inversion based on dynamic mini-batch optimization, which naturally exploits redundancies in observed data from different sources. The method rests on the selection of quasi-random subsets (mini-batches) of sources, used to approximate the misfit and the gradient of the complete data set. The size of the mini-batch is dynamically controlled by the desired quality of the gradient approximation. Within each mini-batch, redundancy is minimized by selecting sources with the largest angular differences between their respective gradients, and spatial coverage is maximized by selecting candidate events with Mitchell’s best-candidate algorithm. Information from sources not included in a specific mini-batch is incorporated into each gradient calculation through a quasi-Newton approximation of the Hessian, and a consistent misfit measure is achieved through the inclusion of a control group of sources. By design, the dynamic mini-batch approach has several main advantages: (1) The use of mini-batches with adaptive size ensures that an optimally small number of sources is used in each iteration, thus potentially leading to significant computational savings; (2) curvature information is accumulated and exploited during the inversion, using a randomized quasi-Newton method; (3) new data can be incorporated without the need to re-invert the complete data set, thereby enabling an evolutionary mode of full-waveform inversion. We illustrate our method using synthetic and real-data inversions for upper-mantle structure beneath the African Plate. In these specific examples, the dynamic mini-batch approach requires around 20 per cent of the computational resources in order to achieve data and model misfits that are comparable to those achieved by a standard full-waveform inversion where all sources are used in each iteration.
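The adaptive control of mini-batch size described above can be illustrated with a toy sketch. The code below shows only the size-growing idea (enlarge the mini-batch until its mean gradient agrees with a control-group gradient within a tolerance); it omits the angular-difference source selection, Mitchell's best-candidate sampling, and the quasi-Newton machinery, and it uses random vectors in place of real per-source waveform gradients. All names and thresholds are our own assumptions.

```python
import numpy as np

def angle_deg(g1, g2):
    """Angle between two gradient estimates, in degrees."""
    cos = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def dynamic_minibatch(per_source_grads, control_idx, start_size=4, max_angle=20.0):
    """Grow the mini-batch until its mean gradient agrees, within `max_angle`
    degrees, with the mean gradient of a fixed control group of sources."""
    control_set = {int(c) for c in control_idx}
    control_grad = per_source_grads[control_idx].mean(axis=0)
    candidates = [i for i in range(len(per_source_grads)) if i not in control_set]
    np.random.default_rng(0).shuffle(candidates)
    size = start_size
    while True:
        batch = candidates[:size]
        g = per_source_grads[batch].mean(axis=0)
        if angle_deg(g, control_grad) <= max_angle or size >= len(candidates):
            return batch, g
        size *= 2                                   # approximation too poor: double the batch

# Toy example: 100 "sources", each contributing a noisy 50-parameter gradient.
rng = np.random.default_rng(1)
true_grad = rng.normal(size=50)
grads = true_grad + 0.8 * rng.normal(size=(100, 50))
batch, g_est = dynamic_minibatch(grads, control_idx=np.arange(10))
print(len(batch), round(angle_deg(g_est, true_grad), 1))
```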
6

Ghadimi, Saeed, Guanghui Lan, and Hongchao Zhang. "Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization." Mathematical Programming 155, no. 1-2 (December 11, 2014): 267–305. http://dx.doi.org/10.1007/s10107-014-0846-1.

7

Kervazo, C., T. Liaudat, and J. Bobin. "Faster and better sparse blind source separation through mini-batch optimization." Digital Signal Processing 106 (November 2020): 102827. http://dx.doi.org/10.1016/j.dsp.2020.102827.

8

Dimitriou, Neofytos, and Ognjen Arandjelović. "Sequential Normalization: Embracing Smaller Sample Sizes for Normalization." Information 13, no. 7 (July 12, 2022): 337. http://dx.doi.org/10.3390/info13070337.

Abstract:
Normalization as a layer within neural networks has over the years demonstrated its effectiveness in neural network optimization across a wide range of different tasks, with one of the most successful approaches being that of batch normalization. The consensus is that better estimates of the BatchNorm normalization statistics (μ and σ2) in each mini-batch result in better optimization. In this work, we challenge this belief and experiment with a new variant of BatchNorm known as GhostNorm that, despite independently normalizing batches within the mini-batches, i.e., μ and σ2 are independently computed and applied to groups of samples in each mini-batch, outperforms BatchNorm consistently. Next, we introduce sequential normalization (SeqNorm), the sequential application of the above type of normalization across two dimensions of the input, and find that models trained with SeqNorm consistently outperform models trained with BatchNorm or GhostNorm on multiple image classification data sets. Our contributions are as follows: (i) we uncover a source of regularization that is unique to GhostNorm, and not simply an extension from BatchNorm, and illustrate its effects on the loss landscape, (ii) we introduce sequential normalization (SeqNorm) a new normalization layer that improves the regularization effects of GhostNorm, (iii) we compare both GhostNorm and SeqNorm against BatchNorm alone as well as with other regularization techniques, (iv) for both GhostNorm and SeqNorm models, we train models whose performance is consistently better than our baselines, including ones with BatchNorm, on the standard image classification data sets of CIFAR–10, CIFAR-100, and ImageNet ((+0.2%, +0.7%, +0.4%), and (+0.3%, +1.7%, +1.1%) for GhostNorm and SeqNorm, respectively).
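The core operation of GhostNorm, independently normalizing groups of samples within each mini-batch, can be sketched in a few lines of numpy. This is an illustrative version for 4-D feature maps only, without the learnable scale and shift parameters, and is not the authors' implementation; ghost_norm and its arguments are names of our choosing. SeqNorm would apply this kind of normalization sequentially along two input dimensions.

```python
import numpy as np

def ghost_norm(x, num_groups, eps=1e-5):
    """Split a mini-batch (N, C, H, W) into `num_groups` ghost batches along N
    and apply batch normalization to each ghost batch independently."""
    n, c, h, w = x.shape
    assert n % num_groups == 0, "mini-batch must divide evenly into ghost batches"
    xg = x.reshape(num_groups, n // num_groups, c, h, w)
    # Per-ghost-batch, per-channel statistics (mean/var over samples and spatial dims).
    mu = xg.mean(axis=(1, 3, 4), keepdims=True)
    var = xg.var(axis=(1, 3, 4), keepdims=True)
    out = (xg - mu) / np.sqrt(var + eps)
    return out.reshape(n, c, h, w)

x = np.random.default_rng(0).normal(size=(32, 8, 4, 4))
y_bn = ghost_norm(x, num_groups=1)    # equivalent to plain BatchNorm statistics
y_gn = ghost_norm(x, num_groups=4)    # 4 ghost batches of 8 samples each
print(y_bn.std(), y_gn.std())
```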
9

Bakurov, Illya, Marco Buzzelli, Mauro Castelli, Leonardo Vanneschi, and Raimondo Schettini. "General Purpose Optimization Library (GPOL): A Flexible and Efficient Multi-Purpose Optimization Library in Python." Applied Sciences 11, no. 11 (May 23, 2021): 4774. http://dx.doi.org/10.3390/app11114774.

Abstract:
Several interesting libraries for optimization have been proposed. Some focus on individual optimization algorithms, or limited sets of them, while others focus on limited sets of problems. Frequently, the implementations do not precisely follow the formal definitions, and they are difficult to customize and compare, which makes it hard to perform comparative studies and to propose novel approaches. In this paper, we propose to solve these issues with the General Purpose Optimization Library (GPOL): a flexible and efficient multipurpose optimization library that covers a wide range of stochastic iterative search algorithms and whose flexible and modular implementation allows solving many different problem types from the fields of continuous and combinatorial optimization and supervised machine learning. Moreover, the library supports full-batch and mini-batch learning and allows carrying out computations on a CPU or GPU. The package is distributed under an MIT license. Source code, installation instructions, demos and tutorials are publicly available in our code hosting platform (the reference is provided in the Introduction).
10

Li, Zhiyuan, Xun Jian, Yue Wang, Yingxia Shao, and Lei Chen. "DAHA: Accelerating GNN Training with Data and Hardware Aware Execution Planning." Proceedings of the VLDB Endowment 17, no. 6 (February 2024): 1364–76. http://dx.doi.org/10.14778/3648160.3648176.

Abstract:
Graph neural networks (GNNs) have been gaining a reputation for effective modeling of graph data. Yet, it is challenging to train GNNs efficiently. Many frameworks have been proposed but most of them suffer from high batch preparation cost and data transfer cost for mini-batch training. In addition, existing works have limitations on the device utilization pattern, which results in fewer opportunities for pipeline parallelism. In this paper, we present DAHA, a GNN training framework with data and hardware aware execution planning to accelerate end-to-end GNN training. We first propose a data and hardware aware cost model that is lightweight and gives accurate estimates on per-operation time cost for arbitrary input and hardware settings. Based on the cost model, we further explore the optimal execution plan for the data and hardware with three optimization strategies with pipeline parallelism: (1) group-based in-turn pipelining of batch preparation and neural training to explore more optimization opportunities and prevent batch preparation bottlenecks; (2) data and hardware aware rewriting for intra-batch execution planning to improve computation efficiency and create more opportunities for pipeline parallelism; and (3) inter-batch scheduling to further boost the training efficiency. Extensive experiments demonstrate that DAHA can consistently and significantly accelerate end-to-end GNN training and generalize to different message-passing GNN models.

Dissertations on the topic "Mini-Batch Optimization"

1

Bensaid, Bilel. "Analyse et développement de nouveaux optimiseurs en Machine Learning." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0218.

Abstract:
Over the last few years, developing an explainable and frugal artificial intelligence (AI) has become a fundamental challenge, especially as AI is integrated into safety-critical or embedded systems and demands ever more energy. The issue is all the more serious given the huge number of hyperparameters that must be tuned to make the models work. Among these parameters, the optimizer and its associated settings are among the most important levers for improving these models [196]. This thesis focuses on the analysis of learning algorithms/optimizers for neural networks, identifying mathematical properties that echo these two challenges and are necessary for a robust learning process. First, undesirable behaviours of the learning process, which run counter to an explainable and frugal AI, are identified. These behaviours are then explained through two tools: Lyapunov stability and geometric integrators. Empirically, stabilizing the learning process improves overall performance and allows the design of more economical models. Theoretically, the suggested point of view makes it possible to derive convergence guarantees for the optimizers classically used to train networks. The same approach is followed for mini-batch optimization, where undesirable behaviours abound: the notion of balanced splitting then becomes central to explaining and improving performance. This study paves the way for the development of new adaptive optimizers, drawing on the deep relation between robust optimization and numerical schemes that preserve the invariants of dynamical systems.

Book chapters on the topic "Mini-Batch Optimization"

1

Chauhan, Vinod Kumar. "Mini-batch Block-coordinate Newton Method." In Stochastic Optimization for Large-scale Machine Learning, 117–22. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003240167-9.

2

Chauhan, Vinod Kumar. "Mini-batch and Block-coordinate Approach." In Stochastic Optimization for Large-scale Machine Learning, 51–66. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003240167-5.

3

Franchini, Giorgia, Valeria Ruggiero, and Luca Zanni. "Steplength and Mini-batch Size Selection in Stochastic Gradient Methods." In Machine Learning, Optimization, and Data Science, 259–63. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_22.

4

Panteleev, Andrei V., and Aleksandr V. Lobanov. "Application of Mini-Batch Adaptive Optimization Method in Stochastic Control Problems." In Advances in Theory and Practice of Computational Mechanics, 345–61. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8926-0_23.

5

Liu, Jie, and Martin Takáč. "Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme Under Weak Strong Convexity Assumption." In Modeling and Optimization: Theory and Applications, 95–117. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66616-7_7.

6

Panteleev, A. V., and A. V. Lobanov. "Application of the Zero-Order Mini-Batch Optimization Method in the Tracking Control Problem." In SMART Automatics and Energy, 573–81. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8759-4_59.

7

Kim, Hee-Seung, Lingyi Zhang, Adam Bienkowski, Krishna R. Pattipati, David Sidoti, Yaakov Bar-Shalom, and David L. Kleinman. "Sequential Mini-Batch Noise Covariance Estimator." In Kalman Filter - Engineering Applications [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.108917.

Abstract:
Noise covariance estimation in an adaptive Kalman filter is a problem of significant practical interest in a wide array of industrial applications. Reliable algorithms for their estimation are scarce, and the necessary and sufficient conditions for identifiability of the covariances were in dispute until very recently. This chapter presents the necessary and sufficient conditions for the identifiability of noise covariances, and then develops sequential mini-batch stochastic optimization algorithms for estimating them. The optimization criterion involves the minimization of the sum of the normalized temporal cross-correlations of the innovations; this is based on the property that the innovations of an optimal Kalman filter are uncorrelated over time. Our approach enforces the structural constraints on noise covariances and ensures the symmetry and positive definiteness of the estimated covariance matrices. Our approach is applicable to non-stationary and multiple model systems, where the noise covariances can occasionally jump up or down by an unknown level. The validation of the proposed method on several test cases demonstrates its computational efficiency and accuracy.
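The optimization criterion described here, the sum of normalized temporal cross-correlations of the innovations, can be written compactly. The sketch below only evaluates such a criterion for a given innovation sequence; it is not the chapter's sequential mini-batch estimator, and the function, lag choice, and variable names are our own assumptions.

```python
import numpy as np

def innovation_correlation_cost(innovations, max_lag=5):
    """Sum of squared normalized temporal cross-correlations of the innovations.
    For an optimal Kalman filter the innovations are white, so this cost is small."""
    e = innovations - innovations.mean(axis=0)      # (T, m) innovation sequence
    T = len(e)
    C0 = e.T @ e / T                                 # lag-0 covariance
    norm = np.sqrt(np.outer(np.diag(C0), np.diag(C0)))
    cost = 0.0
    for k in range(1, max_lag + 1):
        Ck = e[k:].T @ e[:-k] / (T - k)              # lag-k cross-covariance
        cost += np.sum((Ck / norm) ** 2)             # normalized, squared, summed
    return cost

rng = np.random.default_rng(0)
white = rng.normal(size=(500, 2))                    # ~uncorrelated innovations: low cost
colored = np.cumsum(white, axis=0) * 0.1             # strongly autocorrelated: high cost
print(innovation_correlation_cost(white), innovation_correlation_cost(colored))
```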
8

Surono, Sugiyarto, Aris Thobirin, Zani Anjani Rafsanjani Hsm, Asih Yuli Astuti, Berlin Ryan Kp, and Milla Oktavia. "Optimization of Fuzzy System Inference Model on Mini Batch Gradient Descent." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220387.

Abstract:
Optimization is one of the factors in machine learning that helps model training during backpropagation. It is carried out by adjusting the weights to minimize the loss function and to overcome dimensionality problems. The gradient descent method is a simple approach in the backpropagation model for solving minimization problems. Mini-batch gradient descent (MBGD) is one of the methods proven to be powerful for large-scale learning. Adding several approaches to MBGD, such as AB, BN, and UR, can accelerate the convergence process, so the algorithm becomes faster and more effective. These added methods perform optimization on the processed data rules, which serve as the objective function. The processing results showed that the MBGD-AB-BN-UR method has a more stable computational time on the three data sets than the other methods. For model evaluation, this research used RMSE, MAE, and MAPE.
9

Luo, Kangyang, Kunkun Zhang, Shengbo Zhang, Xiang Li, and Ming Gao. "Decentralized Local Updates with Dual-Slow Estimation and Momentum-Based Variance-Reduction for Non-Convex Optimization." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230445.

Abstract:
Decentralized learning (DL) has recently employed local updates to reduce the communication cost for general non-convex optimization problems. Specifically, local updates require each node to perform multiple update steps on the parameters of the local model before communicating with others. However, most existing methods could be highly sensitive to data heterogeneity (i.e., non-iid data distribution) and adversely affected by the stochastic gradient noise. In this paper, we propose DSE-MVR to address these problems. Specifically, DSE-MVR introduces a dual-slow estimation strategy that utilizes the gradient tracking technique to estimate the global accumulated update direction for handling the data heterogeneity problem; also for stochastic noise, the method uses the mini-batch momentum-based variance-reduction technique. We theoretically prove that DSE-MVR can achieve optimal convergence results for general non-convex optimization in both iid and non-iid data distribution settings. In particular, the leading terms in the convergence rates derived by DSE-MVR are independent of the stochastic noise for large-batches or large partial average intervals (i.e., the number of local update steps). Further, we put forward DSE-SGD and theoretically justify the importance of the dual-slow estimation strategy in the data heterogeneity setting. Finally, we conduct extensive experiments to show the superiority of DSE-MVR against other state-of-the-art approaches. We provide our code here: https://anonymous.4open.science/r/DSE-MVR-32B8/.
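The mini-batch momentum-based variance-reduction estimator mentioned here can be sketched generically on a toy finite-sum problem. This shows only the single-node building block (a STORM-style corrected-momentum update), not the decentralized DSE-MVR algorithm with gradient tracking; the problem, step sizes, and names below are placeholders of our choosing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum problem: f(x) = mean_i 0.5 * ||A_i x - b_i||^2.
A = rng.normal(size=(256, 5, 5))
b = rng.normal(size=(256, 5))

def minibatch_grad(x, idx):
    """Stochastic gradient of f on the mini-batch `idx`: mean of A_i^T (A_i x - b_i)."""
    residual = np.einsum('nij,j->ni', A[idx], x) - b[idx]
    return np.einsum('nji,nj->i', A[idx], residual) / len(idx)

x, x_prev, d = np.zeros(5), np.zeros(5), None
lr, a, batch_size = 0.05, 0.1, 16
for t in range(300):
    idx = rng.choice(len(A), size=batch_size, replace=False)
    g = minibatch_grad(x, idx)
    if d is None:
        d = g
    else:
        # Momentum-based variance reduction: correct the momentum with the
        # gradient difference evaluated on the *same* mini-batch.
        d = g + (1 - a) * (d - minibatch_grad(x_prev, idx))
    x_prev = x.copy()
    x = x - lr * d

print(np.linalg.norm(minibatch_grad(x, np.arange(len(A)))))  # full-gradient norm after training
```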
10

Wu, Lilei, and Jie Liu. "Contrastive Learning with Diverse Samples." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230575.

Abstract:
Unsupervised visual representation learning has gained much attention from the computer vision community because of the recent contrastive learning achievements. Current work mainly adopts instance discrimination as the pretext task, which treats every single instance as a different class (negative) and uses a collection of data augmentation techniques to generate more examples (positive) for each class. The idea is straightforward and efficient but will generally cause similar instances to be classified into different classes. This problem has been defined as “class collision” in some previous works and is shown to hurt the representation ability. Motivated by this observation, we present a solution to address this issue by filtering similar negative examples from each mini-batch. Concretely, we model the problem as a Determinantal Point Process (DPP) so that similar instances can be filtered stochastically, and diverse samples are expected to be sampled for contrastive training. Besides, we further introduce a priority term for each instance, which indicates the hardness of its positives, so that instances with more hard positives are more likely to be sampled for contributing to the optimization. Our sampling can be efficiently implemented in a feed-forward manner and further accelerated by our encouraged complement DPP. Extensive experimental results demonstrate our superiority over the standard setup of contrastive learning.

Conference papers on the topic "Mini-Batch Optimization"

1

Li, Mu, Tong Zhang, Yuqiang Chen, and Alexander J. Smola. "Efficient mini-batch training for stochastic optimization." In KDD '14: The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2623330.2623612.

2

Feyzmahdavian, Hamid Reza, Arda Aytekin, and Mikael Johansson. "An asynchronous mini-batch algorithm for regularized stochastic optimization." In 2015 54th IEEE Conference on Decision and Control (CDC). IEEE, 2015. http://dx.doi.org/10.1109/cdc.2015.7402404.

3

Joseph, K. J., Vamshi Teja R, Krishnakant Singh, and Vineeth N. Balasubramanian. "Submodular Batch Selection for Training Deep Neural Networks." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/372.

Abstract:
Mini-batch gradient descent based methods are the de facto algorithms for training neural network architectures today. We introduce a mini-batch selection strategy based on submodular function maximization. Our novel submodular formulation captures the informativeness of each sample and diversity of the whole subset. We design an efficient, greedy algorithm which can give high-quality solutions to this NP-hard combinatorial optimization problem. Our extensive experiments on standard datasets show that the deep models trained using the proposed batch selection strategy provide better generalization than Stochastic Gradient Descent as well as a popular baseline sampling strategy across different learning rates, batch sizes, and distance metrics.
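A standard way to realize the greedy submodular maximization the abstract refers to is the facility-location objective with the classic greedy (1 - 1/e)-approximation. The sketch below uses that generic objective, not the paper's exact informativeness-plus-diversity formulation; the toy similarity matrix and all names are our own.

```python
import numpy as np

def greedy_facility_location(sim, batch_size):
    """Greedily maximize F(S) = sum_i max_{j in S} sim[i, j], a monotone
    submodular coverage objective, to pick an informative and diverse batch."""
    selected = []
    best_cover = np.zeros(sim.shape[0])     # current max similarity of each point to S
    for _ in range(batch_size):
        # Marginal gain of each candidate j: how much total coverage improves.
        gains = np.maximum(sim, best_cover[:, None]).sum(axis=0) - best_cover.sum()
        if selected:
            gains[selected] = -np.inf       # never re-select an element
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                        # toy feature embeddings
sim = X @ X.T
sim = (sim - sim.min()) / (sim.max() - sim.min())     # nonnegative similarities
print(greedy_facility_location(sim, batch_size=10))
```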
4

Xie, Xin, Chao Chen, and Zhijian Chen. "Mini-batch Quasi-Newton optimization for Large Scale Linear Support Vector Regression." In 2015 4th International Conference on Mechatronics, Materials, Chemistry and Computer Engineering. Paris, France: Atlantis Press, 2015. http://dx.doi.org/10.2991/icmmcce-15.2015.503.

5

Naganuma, Hiroki, and Rio Yokota. "A Performance Improvement Approach for Second-Order Optimization in Large Mini-batch Training." In 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). IEEE, 2019. http://dx.doi.org/10.1109/ccgrid.2019.00092.

6

Yang, Hui, and Bangyu Wu. "Full waveform inversion based on mini-batch gradient descent optimization with geological constrain." In International Geophysical Conference, Qingdao, China, 17-20 April 2017. Society of Exploration Geophysicists and Chinese Petroleum Society, 2017. http://dx.doi.org/10.1190/igc2017-104.

7

Oktavia, Milla, and Sugiyarto Surono. "Optimization Takagi Sugeno Kang fuzzy system using mini-batch gradient descent with uniform regularization." In PROCEEDINGS OF THE 3RD AHMAD DAHLAN INTERNATIONAL CONFERENCE ON MATHEMATICS AND MATHEMATICS EDUCATION 2021. AIP Publishing, 2023. http://dx.doi.org/10.1063/5.0140144.

8

Khan, Muhammad Waqas, Muhammad Zeeshan, and Muhammad Usman. "Traffic Scheduling Optimization in Cognitive Radio based Smart Grid Network Using Mini-Batch Gradient Descent Method." In 2019 14th Iberian Conference on Information Systems and Technologies (CISTI). IEEE, 2019. http://dx.doi.org/10.23919/cisti.2019.8760693.

9

Zheng, Feng, Xin Miao, and Heng Huang. "Fast Vehicle Identification in Surveillance via Ranked Semantic Sampling Based Embedding." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/514.

Abstract:
Identifying vehicles across cameras in traffic surveillance is fundamentally important for public safety purposes. However, despite some preliminary work, the rapid vehicle search in large-scale datasets has not been investigated. Moreover, modelling a view-invariant similarity between vehicle images from different views is still highly challenging. To address the problems, in this paper, we propose a Ranked Semantic Sampling (RSS) guided binary embedding method for fast cross-view vehicle Re-IDentification (Re-ID). The search can be conducted by efficiently computing similarities in the projected space. Unlike previous methods using random sampling, we design tree-structured attributes to guide the mini-batch sampling. The ranked pairs of hard samples in the mini-batch can improve the convergence of optimization. By minimizing a novel ranked semantic distance loss defined according to the structure, the learned Hamming distance is view-invariant, which enables cross-view Re-ID. The experimental results demonstrate that RSS outperforms the state-of-the-art approaches and the learned embedding from one dataset can be transferred to achieve the task of vehicle Re-ID on another dataset.
10

Zhang, Wenyu, Li Shen, Wanyue Zhang, and Chuan-Sheng Foo. "Few-Shot Adaptation of Pre-Trained Networks for Domain Shift." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/232.

Abstract:
Deep networks are prone to performance degradation when there is a domain shift between the source (training) data and target (test) data. Recent test-time adaptation methods update batch normalization layers of pre-trained source models deployed in new target environments with streaming data. Although these methods can adapt on-the-fly without first collecting a large target domain dataset, their performance is dependent on streaming conditions such as mini-batch size and class-distribution which can be unpredictable in practice. In this work, we propose a framework for few-shot domain adaptation to address the practical challenges of data-efficient adaptation. Specifically, we propose a constrained optimization of feature normalization statistics in pre-trained source models supervised by a small target domain support set. Our method is easy to implement and improves source model performance with as little as one sample per class for classification tasks. Extensive experiments on 5 cross-domain classification and 4 semantic segmentation datasets show that our proposed method achieves more accurate and reliable performance than test-time adaptation, while not being constrained by streaming conditions.
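The underlying idea of re-estimating feature normalization statistics from a small support set can be sketched very simply. The code below is only a plain interpolation between stored source statistics and support-set statistics, not the paper's constrained optimization; the momentum value and all names are assumptions of our own.

```python
import numpy as np

def adapt_bn_stats(source_mean, source_var, support_feats, momentum=0.3):
    """Blend pre-trained (source) batch-norm statistics with statistics
    estimated from a small target-domain support set."""
    target_mean = support_feats.mean(axis=0)
    target_var = support_feats.var(axis=0)
    new_mean = (1 - momentum) * source_mean + momentum * target_mean
    new_var = (1 - momentum) * source_var + momentum * target_var
    return new_mean, new_var

def normalize(x, mean, var, eps=1e-5):
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
source_mean, source_var = np.zeros(64), np.ones(64)       # stats stored in the source model
support = 2.0 + 0.5 * rng.normal(size=(5, 64))            # 5 shifted target-domain samples
mean, var = adapt_bn_stats(source_mean, source_var, support)
test = 2.0 + 0.5 * rng.normal(size=(100, 64))
print(np.abs(normalize(test, mean, var).mean()))          # closer to zero than with source stats
```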