A selection of scholarly literature on the topic "Scaled gradient descent"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Scaled gradient descent."
Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the item's metadata.
Journal articles on the topic "Scaled gradient descent"
Farhana Husin, Siti, Mustafa Mamat, Mohd Asrul Hery Ibrahim, and Mohd Rivaie. "A modification of steepest descent method for solving large-scaled unconstrained optimization problems." International Journal of Engineering & Technology 7, no. 3.28 (August 17, 2018): 72. http://dx.doi.org/10.14419/ijet.v7i3.28.20969.
Okoubi, Firmin Andzembe, and Jonas Koko. "Parallel Nesterov Domain Decomposition Method for Elliptic Partial Differential Equations." Parallel Processing Letters 30, no. 01 (March 2020): 2050004. http://dx.doi.org/10.1142/s0129626420500048.
Maduranga, Kehelwala D. G., Kyle E. Helfrich, and Qiang Ye. "Complex Unitary Recurrent Neural Networks Using Scaled Cayley Transform." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4528–35. http://dx.doi.org/10.1609/aaai.v33i01.33014528.
Bayati. "New Scaled Sufficient Descent Conjugate Gradient Algorithm for Solving Unconstraint Optimization Problems." Journal of Computer Science 6, no. 5 (May 1, 2010): 511–18. http://dx.doi.org/10.3844/jcssp.2010.511.518.
Al-batah, Mohammad Subhi, Mutasem Sh Alkhasawneh, Lea Tien Tay, Umi Kalthum Ngah, Habibah Hj Lateh, and Nor Ashidi Mat Isa. "Landslide Occurrence Prediction Using Trainable Cascade Forward Network and Multilayer Perceptron." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/512158.
Al-Naemi, Ghada M., and Ahmed H. Sheekoo. "New scaled algorithm for non-linear conjugate gradients in unconstrained optimization." Indonesian Journal of Electrical Engineering and Computer Science 24, no. 3 (December 1, 2021): 1589. http://dx.doi.org/10.11591/ijeecs.v24.i3.pp1589-1595.
Arthur, C. K., V. A. Temeng, and Y. Y. Ziggah. "Performance Evaluation of Training Algorithms in Backpropagation Neural Network Approach to Blast-Induced Ground Vibration Prediction." Ghana Mining Journal 20, no. 1 (July 7, 2020): 20–33. http://dx.doi.org/10.4314/gm.v20i1.3.
Abbaspour-Gilandeh, Yousef, Masoud Fazeli, Ali Roshanianfard, Mario Hernández-Hernández, Iván Gallardo-Bernal, and José Luis Hernández-Hernández. "Prediction of Draft Force of a Chisel Cultivator Using Artificial Neural Networks and Its Comparison with Regression Model." Agronomy 10, no. 4 (March 25, 2020): 451. http://dx.doi.org/10.3390/agronomy10040451.
Sra, Suvrit. "On the Matrix Square Root via Geometric Optimization." Electronic Journal of Linear Algebra 31 (February 5, 2016): 433–43. http://dx.doi.org/10.13001/1081-3810.3196.
Hamed, Eman T., Rana Z. Al-Kawaz, and Abbas Y. Al-Bayati. "New Investigation for the Liu-Story Scaled Conjugate Gradient Method for Nonlinear Optimization." Journal of Mathematics 2020 (January 25, 2020): 1–12. http://dx.doi.org/10.1155/2020/3615208.
Повний текст джерелаДисертації з теми "Scaled gradient descent"
Doan, Thanh-Nghi. "Large scale support vector machines algorithms for visual classification." Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S083/document.
We have proposed a novel method for combining multiple different features for image classification. For large-scale learning of classifiers, we have developed parallel versions of both state-of-the-art linear and nonlinear SVMs. We have also proposed a novel algorithm that extends stochastic gradient descent SVM to large-scale learning. A class of large-scale incremental SVM classifiers has been developed to perform classification tasks on large datasets with a very large number of classes, where the training data cannot fit into memory.
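To make the SGD-SVM approach mentioned in this abstract concrete, here is a minimal, illustrative sketch of a primal stochastic gradient descent solver for a linear SVM with hinge loss, using Pegasos-style decaying step sizes. It is a toy under stated assumptions, not the parallel or incremental algorithms developed in the thesis:

```python
import numpy as np

def sgd_linear_svm(X, y, lam=1e-4, epochs=5, seed=0):
    """Primal SGD for a linear SVM with hinge loss (Pegasos-style).

    X: (n, d) array of features; y: labels in {-1, +1}.
    Minimizes lam/2 * ||w||^2 + mean(max(0, 1 - y_i * w.x_i)).
    Illustrative sketch only, not the thesis implementation.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            # subgradient step: the hinge term is active when the margin < 1
            if y[i] * X[i].dot(w) < 1:
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w
```

Because each update touches a single example, memory usage is independent of the dataset size, which is what makes this family of methods attractive when the training data cannot fit into memory.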
Akata, Zeynep. "Contributions à l'apprentissage grande échelle pour la classification d'images." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM003/document.
Building algorithms that classify images on a large scale is an essential task, given the difficulty of searching the massive amounts of unlabeled visual data available on the Internet. We aim to classify images based on their content, to make such large-scale collections easier to manage. Large-scale image classification is a difficult problem because datasets are large with respect to both the number of images and the number of classes. Some of these classes are fine-grained, and some may not contain any labeled representatives. In this thesis, we use state-of-the-art image representations and focus on efficient learning methods. Our contributions are (1) a benchmark of learning algorithms for large-scale image classification, and (2) a novel learning algorithm based on label embedding for learning with scarce training data. Firstly, we propose a benchmark of learning algorithms for large-scale image classification in the fully supervised setting. It compares several objective functions for learning linear classifiers, such as one-vs-rest, multiclass, ranking, and weighted average ranking, using stochastic gradient descent optimization. The output of this benchmark is a set of recommendations for large-scale learning. We experimentally show that online learning is well suited to large-scale image classification. With simple data rebalancing, one-vs-rest performs better than all other methods. Moreover, in online learning, using a small enough step size with respect to the learning rate is sufficient for state-of-the-art performance. Finally, regularization through early stopping results in fast training and good generalization performance. Secondly, when dealing with thousands of classes, it is difficult to collect sufficient labeled training data for each class; for some classes we might not even have a single training example. We propose a novel algorithm for this zero-shot learning scenario. Our algorithm uses side information, such as attributes, to embed classes in a Euclidean space. We also introduce a function to measure the compatibility between an image and a label; the parameters of this function are learned using a ranking objective. Our algorithm outperforms the state of the art for zero-shot learning. It is flexible and can accommodate other sources of side information, such as hierarchies, and it allows for a smooth transition from zero-shot to few-shot learning.
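The label-embedding idea in this abstract can be made concrete with a bilinear compatibility function F(x, y) = x^T W phi(y), where phi(y) is the attribute embedding of class y. The sketch below shows one SGD step on a simplified pairwise ranking loss; the function names and the plain hinge loss are our assumptions for illustration, not the exact weighted ranking objective of the thesis:

```python
import numpy as np

def ranking_sgd_step(W, x, y_true, Phi, eta=0.01):
    """One SGD step for a bilinear compatibility F(x, y) = x.T @ W @ phi(y).

    W: (d, e) parameter matrix; x: (d,) image features;
    Phi: (K, e) matrix whose rows are class attribute embeddings.
    Uses a plain pairwise hinge loss for illustration only.
    """
    scores = (x @ W) @ Phi.T  # compatibility of x with every class
    for y in range(Phi.shape[0]):
        # a wrong class violating the margin pushes W toward the true class
        if y != y_true and 1.0 + scores[y] - scores[y_true] > 0:
            W = W + eta * np.outer(x, Phi[y_true] - Phi[y])
    return W
```

At test time, an unseen class needs only its attribute vector phi(y): prediction is argmax over y of F(x, y), which is what enables classification with zero labeled examples.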
Silveti, Falls Antonio. "First-order noneuclidean splitting methods for large-scale optimization : deterministic and stochastic algorithms." Thesis, Normandie, 2021. http://www.theses.fr/2021NORMC204.
In this work we develop and examine two novel first-order splitting algorithms for solving large-scale composite optimization problems in infinite-dimensional spaces. Such problems are ubiquitous in many areas of science and engineering, particularly in data science and imaging sciences. Our work focuses on relaxing the Lipschitz-smoothness assumptions generally required by first-order splitting algorithms, by replacing the Euclidean energy with a Bregman divergence. This development allows one to solve problems with more exotic geometry than the usual Euclidean setting. One algorithm is a hybridization of the conditional gradient algorithm, which uses a linear minimization oracle at each iteration, with an augmented Lagrangian algorithm, allowing for affine constraints. The other is a primal-dual splitting algorithm that incorporates Bregman divergences in computing the associated proximal operators. For both algorithms, our analysis shows convergence of the Lagrangian values, subsequential weak convergence of the iterates to solutions, and rates of convergence. In addition to these deterministic algorithms, we also introduce and study their stochastic extensions through a perturbation perspective. Our results in this part include almost-sure convergence of all the same quantities as in the deterministic setting, again with rates. Finally, we tackle new problems that become accessible only under the relaxed assumptions our algorithms allow. We demonstrate numerical efficiency and verify our theoretical results on problems such as low-rank, sparse matrix completion, inverse problems on the simplex, and entropically regularized Wasserstein inverse problems.
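The Bregman divergence behind this relaxation is D_h(x, y) = h(x) - h(y) - <grad h(y), x - y> for a chosen convex function h. Taking h(x) = sum_i x_i log x_i on the probability simplex turns the Bregman proximal step into a multiplicative (exponentiated-gradient) update. The following is a minimal mirror-descent sketch under that choice; it illustrates the geometry only and is not the primal-dual or conditional-gradient hybrid algorithms of the thesis:

```python
import numpy as np

def mirror_descent_simplex(grad, x0, step=0.1, iters=100):
    """Mirror descent on the probability simplex with the entropy
    Bregman divergence (exponentiated gradient).

    grad: callable returning the gradient of the smooth objective;
    x0: starting point with positive entries. Illustrative sketch only.
    """
    x = x0 / x0.sum()
    for _ in range(iters):
        # multiplicative update = Bregman proximal step with entropic h
        x = x * np.exp(-step * grad(x))
        x /= x.sum()  # re-normalize onto the simplex
    return x

# Example: minimize f(x) = 0.5 * ||x - p||^2 over the simplex.
# p = np.array([0.2, 0.5, 1.3])
# sol = mirror_descent_simplex(lambda x: x - p, np.ones(3))
```

The multiplicative form keeps the iterates strictly positive and feasible at every step, without a Euclidean projection, which is precisely the benefit of matching the divergence to the constraint geometry.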
Book chapters on the topic "Scaled gradient descent"
Bottou, Léon. "Large-Scale Machine Learning with Stochastic Gradient Descent." In Proceedings of COMPSTAT'2010, 177–86. Heidelberg: Physica-Verlag HD, 2010. http://dx.doi.org/10.1007/978-3-7908-2604-3_16.
Shi, Ziqiang, and Rujie Liu. "Large Scale Optimization with Proximal Stochastic Newton-Type Gradient Descent." In Machine Learning and Knowledge Discovery in Databases, 691–704. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23528-8_43.
Yating, Chen. "Cooperation Coevolution Differential Evolution with Gradient Descent Strategy for Large Scale." In Lecture Notes in Computer Science, 429–39. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61824-1_47.
Sharma, Sweta, and Reshma Rastogi. "Stochastic Conjugate Gradient Descent Twin Support Vector Machine for Large Scale Pattern Classification." In AI 2018: Advances in Artificial Intelligence, 590–602. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03991-2_54.
Bottou, Léon. "Large-Scale Machine Learning with Stochastic Gradient Descent." In Chapman & Hall/CRC Computer Science & Data Analysis, 17–25. Chapman and Hall/CRC, 2011. http://dx.doi.org/10.1201/b11429-4.
Surono, Sugiyarto, Aris Thobirin, Zani Anjani Rafsanjani Hsm, Asih Yuli Astuti, Berlin Ryan Kp, and Milla Oktavia. "Optimization of Fuzzy System Inference Model on Mini Batch Gradient Descent." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220387.
Повний текст джерела"Large-Scale Machine Learning with Stochastic Gradient Descent Léon Bottou." In Statistical Learning and Data Science, 33–42. Chapman and Hall/CRC, 2011. http://dx.doi.org/10.1201/b11429-6.
Naik, Bighnaraj, Janmenjoy Nayak, and H. S. Behera. "A Hybrid Model of FLANN and Firefly Algorithm for Classification." In Handbook of Research on Natural Computing for Optimization Problems, 491–522. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-5225-0058-2.ch021.
Mennour, Rostom, and Mohamed Batouche. "Novel Scalable Deep Learning Approaches for Big Data Analytics Applied to ECG Processing." In Deep Learning and Neural Networks, 633–53. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0414-7.ch035.
Amitab, Khwairakpam, Debdatta Kandar, and Arnab K. Maji. "Speckle Noise Filtering Using Back-Propagation Multi-Layer Perceptron Network in Synthetic Aperture Radar Image." In Research Advances in the Integration of Big Data and Smart Computing, 280–301. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-8737-0.ch016.
Повний текст джерелаТези доповідей конференцій з теми "Scaled gradient descent"
Mishra, Bamdev, and Rodolphe Sepulchre. "Scaled stochastic gradient descent for low-rank matrix completion." In 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. http://dx.doi.org/10.1109/cdc.2016.7798689.
Повний текст джерела"SCALED GRADIENT DESCENT LEARNING RATE - Reinforcement learning with light-seeking robot." In First International Conference on Informatics in Control, Automation and Robotics. SciTePress - Science and and Technology Publications, 2004. http://dx.doi.org/10.5220/0001138600030011.
Повний текст джерелаWu, Jui-Yu, and Pei-Ci Liu. "Identifying a Default of Credit Card Clients by using a LSTM Method: A Case Study." In 8th International Conference on Artificial Intelligence (ARIN 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121012.
Повний текст джерелаZhou, Fan, and Guojing Cong. "On the Convergence Properties of a K-step Averaging Stochastic Gradient Descent Algorithm for Nonconvex Optimization." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/447.
Повний текст джерелаTang, Jingjing, Yingjie Tian, Guoqiang Wu, and Dewei Li. "Stochastic gradient descent for large-scale linear nonparallel SVM." In WI '17: International Conference on Web Intelligence 2017. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3106426.3109427.
Повний текст джерелаGemulla, Rainer, Erik Nijkamp, Peter J. Haas, and Yannis Sismanis. "Large-scale matrix factorization with distributed stochastic gradient descent." In the 17th ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2020408.2020426.
Повний текст джерелаMu, Yang, Wei Ding, Tianyi Zhou, and Dacheng Tao. "Constrained stochastic gradient descent for large-scale least squares problem." In KDD' 13: The 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2013. http://dx.doi.org/10.1145/2487575.2487635.
Повний текст джерелаGupta, Suyog, Wei Zhang, and Fei Wang. "Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A Systematic Study." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/681.
Повний текст джерелаLi, Fengan, Lingjiao Chen, Yijing Zeng, Arun Kumar, Xi Wu, Jeffrey F. Naughton, and Jignesh M. Patel. "Tuple-oriented Compression for Large-scale Mini-batch Stochastic Gradient Descent." In SIGMOD/PODS '19: International Conference on Management of Data. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3299869.3300070.
Повний текст джерелаZhang, Tong. "Solving large scale linear prediction problems using stochastic gradient descent algorithms." In Twenty-first international conference. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/1015330.1015332.