Journal articles on the topic "Distributed optimization and learning"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles.

Below are the top 50 journal articles on the topic "Distributed optimization and learning".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, when available in the metadata.

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

Kamalesh, Kamalesh, and Dr Gobi Natesan. "Machine Learning-Driven Analysis of Distributed Computing Systems: Exploring Optimization and Efficiency". International Journal of Research Publication and Reviews 5, no. 3 (March 9, 2024): 3979–83. http://dx.doi.org/10.55248/gengpi.5.0324.0786.

2

Mertikopoulos, Panayotis, E. Veronica Belmega, Romain Negrel, and Luca Sanguinetti. "Distributed Stochastic Optimization via Matrix Exponential Learning". IEEE Transactions on Signal Processing 65, no. 9 (May 1, 2017): 2277–90. http://dx.doi.org/10.1109/tsp.2017.2656847.

3

Gratton, Cristiano, Naveen K. D. Venkategowda, Reza Arablouei, and Stefan Werner. "Privacy-Preserved Distributed Learning With Zeroth-Order Optimization". IEEE Transactions on Information Forensics and Security 17 (2022): 265–79. http://dx.doi.org/10.1109/tifs.2021.3139267.

4

Blot, Michael, David Picard, Nicolas Thome, and Matthieu Cord. "Distributed optimization for deep learning with gossip exchange". Neurocomputing 330 (February 2019): 287–96. http://dx.doi.org/10.1016/j.neucom.2018.11.002.

5

Young, M. Todd, Jacob D. Hinkle, Ramakrishnan Kannan, and Arvind Ramanathan. "Distributed Bayesian optimization of deep reinforcement learning algorithms". Journal of Parallel and Distributed Computing 139 (May 2020): 43–52. http://dx.doi.org/10.1016/j.jpdc.2019.07.008.

6

Nedic, Angelia. "Distributed Gradient Methods for Convex Machine Learning Problems in Networks: Distributed Optimization". IEEE Signal Processing Magazine 37, no. 3 (May 2020): 92–101. http://dx.doi.org/10.1109/msp.2020.2975210.

7

Lin, I.-Cheng. "Learning and Optimization over Robust Networked Systems". ACM SIGMETRICS Performance Evaluation Review 52, no. 3 (January 9, 2025): 23–26. https://doi.org/10.1145/3712170.3712179.

Abstract:
Networked systems are ubiquitous in our daily lives, playing a critical role across a wide range of scientific fields, including communication, machine learning, optimization, control, biology, economics, and social sciences. In machine learning and optimization, distributed learning and distributed optimization exploit the structure of such networked systems, which enables training models across multiple users [1] and solving complex problems by partitioning tasks among interconnected agents [2]. This significantly enhances both the scalability and the efficiency of the learning/optimization task. In control systems, for example, networked architectures synchronize the operations of different parts of the system, such as sensors and controllers, across distributed environments, ensuring the overall system's stability and performance [3]. In economics, networked systems have been used to capture the interdependencies between different parts of the economic system, allowing further insight into market dynamics [4]. In the social sciences, social networks in particular are studied for the propagation of ideas, behaviors, and opinions, offering a deeper understanding of societal structures and their dynamics [5].
8

Gao, Hongchang. "Distributed Stochastic Nested Optimization for Emerging Machine Learning Models: Algorithm and Theory". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15437. http://dx.doi.org/10.1609/aaai.v37i13.26804.

Abstract:
Traditional machine learning models can be formulated as the expected risk minimization (ERM) problem: min_{w∈R^d} E_ξ[l(w; ξ)], where w ∈ R^d denotes the model parameter, ξ represents training samples, and l(·) is the loss function. Numerous optimization algorithms, such as stochastic gradient descent (SGD), have been developed to solve the ERM problem. However, a wide range of emerging machine learning models are beyond this class of optimization problems, such as model-agnostic meta-learning (Finn, Abbeel, and Levine 2017). Of particular interest to my research is the stochastic nested optimization (SNO) problem, whose objective function has a nested structure. Specifically, I have been focusing on two instances of this kind of problem: stochastic compositional optimization (SCO) problems, which cover meta-learning, area-under-the-precision-recall-curve optimization, contrastive self-supervised learning, etc., and stochastic bilevel optimization (SBO) problems, which can be applied to meta-learning, hyperparameter optimization, neural network architecture search, etc. With the emergence of large-scale distributed data, such as the user data generated on mobile devices or intelligent hardware, it is imperative to develop distributed optimization algorithms for SNO (Distributed SNO). A significant challenge for optimizing distributed SNO problems lies in the fact that the stochastic (hyper-)gradient is a biased estimate of the full gradient. Thus, existing distributed optimization algorithms, when applied to them, suffer from slow convergence rates. In this talk, I will discuss my recent works on distributed SCO (Gao and Huang 2021; Gao, Li, and Huang 2022) and distributed SBO (Gao, Gu, and Thai 2022; Gao 2022) under both centralized and decentralized settings, including algorithmic details about reducing the bias of the stochastic gradient, theoretical convergence rates, and practical machine learning applications, and then highlight challenges for future research.
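For orientation, the objectives mentioned in this abstract can be written generically as follows; this is only a schematic summary of standard formulations, and the cited works may use different assumptions and notation.

```latex
% Expected risk minimization (ERM)
\min_{w \in \mathbb{R}^d} \; \mathbb{E}_{\xi}\!\left[\ell(w;\xi)\right]

% Stochastic compositional optimization (SCO): one expectation nested inside another
\min_{w \in \mathbb{R}^d} \; \mathbb{E}_{\xi}\!\left[ f\!\left(\mathbb{E}_{\zeta}\!\left[g(w;\zeta)\right];\, \xi\right) \right]

% Stochastic bilevel optimization (SBO): the outer objective is evaluated at the inner minimizer
\min_{w} \; \mathbb{E}_{\xi}\!\left[ F\!\left(w,\, y^{*}(w);\, \xi\right) \right]
\quad \text{s.t.} \quad
y^{*}(w) \in \arg\min_{y} \; \mathbb{E}_{\zeta}\!\left[ G(w, y;\zeta) \right]
```

The nested expectations are what make the naive stochastic gradient a biased estimator, which is the difficulty the abstract highlights for the distributed setting.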
9

Choi, Dojin, Jiwon Wee, Sangho Song, Hyeonbyeong Lee, Jongtae Lim, Kyoungsoo Bok, and Jaesoo Yoo. "k-NN Query Optimization for High-Dimensional Index Using Machine Learning". Electronics 12, no. 11 (May 24, 2023): 2375. http://dx.doi.org/10.3390/electronics12112375.

Abstract:
In this study, we propose three k-nearest neighbor (k-NN) optimization techniques for a distributed, in-memory-based, high-dimensional indexing method to speed up content-based image retrieval. The proposed techniques perform distributed, in-memory, high-dimensional indexing-based k-NN query optimization: a density-based optimization technique that performs k-NN optimization using data distribution; a cost-based optimization technique using query processing cost statistics; and a learning-based optimization technique using a deep learning model, based on query logs. The proposed techniques were implemented on Spark, which supports a master/slave model for large-scale distributed processing. We showed the superiority and validity of the proposed techniques through various performance evaluations, based on high-dimensional data.
10

Yang, Peng, and Ping Li. "Distributed Primal-Dual Optimization for Online Multi-Task Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6631–38. http://dx.doi.org/10.1609/aaai.v34i04.6139.

Abstract:
Conventional online multi-task learning algorithms suffer from two critical limitations: 1) Heavy communication caused by delivering high velocity of sequential data to a central machine; 2) Expensive runtime complexity for building task relatedness. To address these issues, in this paper we consider a setting where multiple tasks are geographically located in different places, where one task can synchronize data with others to leverage knowledge of related tasks. Specifically, we propose an adaptive primal-dual algorithm, which not only captures task-specific noise in adversarial learning but also carries out a projection-free update with runtime efficiency. Moreover, our model is well-suited to decentralized periodic-connected tasks as it allows the energy-starved or bandwidth-constraint tasks to postpone the update. Theoretical results demonstrate the convergence guarantee of our distributed algorithm with an optimal regret. Empirical results confirm that the proposed model is highly effective on various real-world datasets.
11

Shokoohi, Maryam, Mohsen Afsharchi, and Hamed Shah-Hoseini. "Dynamic distributed constraint optimization using multi-agent reinforcement learning". Soft Computing 26, no. 8 (March 16, 2022): 3601–29. http://dx.doi.org/10.1007/s00500-022-06820-7.

12

Lee, Jaehwan, Hyeonseong Choi, Hyeonwoo Jeong, Baekhyeon Noh, and Ji Sun Shin. "Communication Optimization Schemes for Accelerating Distributed Deep Learning Systems". Applied Sciences 10, no. 24 (December 10, 2020): 8846. http://dx.doi.org/10.3390/app10248846.

Abstract:
In a distributed deep learning system, a parameter server and workers must communicate to exchange gradients and parameters, and the communication cost increases as the number of workers increases. This paper presents a communication data optimization scheme to mitigate the decrease in throughput due to communication performance bottlenecks in distributed deep learning. To optimize communication, we propose two methods. The first is a layer dropping scheme to reduce communication data. The layer dropping scheme we propose compares the representative values of each hidden layer with a threshold value. Furthermore, to guarantee the training accuracy, we store the gradients that are not transmitted to the parameter server in the worker’s local cache. When the value of gradients stored in the worker’s local cache is greater than the threshold, the gradients stored in the worker’s local cache are transmitted to the parameter server. The second is an efficient threshold selection method. Our threshold selection method computes the threshold by replacing the gradients with the L1 norm of each hidden layer. Our data optimization scheme reduces the communication time by about 81% and the total training time by about 70% in a 56 Gbit network environment.
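As an illustration only (not the authors' code), the following minimal sketch shows the kind of threshold-based layer dropping with a local gradient cache that the abstract describes; the representative-value computation, the L1-norm-based threshold selection, and the parameter-server transport are all simplified, and every name here is hypothetical.

```python
import numpy as np

def select_layers_to_send(gradients, cache, threshold):
    """Per layer, either transmit the accumulated gradient to the parameter
    server or keep accumulating it in the worker's local cache.

    gradients, cache: dicts mapping layer name -> np.ndarray of equal shape.
    Returns (to_send, cache), where to_send contains only the layers whose
    accumulated update exceeds the threshold.
    """
    to_send = {}
    for name, grad in gradients.items():
        accumulated = grad + cache.get(name, np.zeros_like(grad))
        # Representative value of the layer's update; the paper derives the
        # threshold from the L1 norm of each hidden layer, simplified here.
        if np.abs(accumulated).mean() >= threshold:
            to_send[name] = accumulated          # transmit and clear the cache
            cache[name] = np.zeros_like(grad)
        else:
            cache[name] = accumulated            # defer: accumulate locally
    return to_send, cache
```

Deferred gradients are not lost: they keep accumulating until they cross the threshold, which is how such a scheme can preserve training accuracy while cutting communication.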
13

Pugh, Jim, and Alcherio Martinoli. "Distributed scalable multi-robot learning using particle swarm optimization". Swarm Intelligence 3, no. 3 (May 27, 2009): 203–22. http://dx.doi.org/10.1007/s11721-009-0030-z.

14

Kazhmaganbetova, Zarina, Shnar Imangaliyev, and Altynbek Sharipbay. "Machine Learning for the Communication Optimization in Distributed Systems". International Journal of Engineering & Technology 7, no. 4.1 (September 12, 2018): 47. http://dx.doi.org/10.14419/ijet.v7i4.1.19491.

Abstract:
The objective of the work presented in this paper was the optimization of communication and the detection of computing-resource performance degradation [1, 2] using machine learning techniques. Computer networks transmit payload data and meta-data from numerous sources to a vast number of destinations, especially in multi-tenant environments [3, 4]. Meta-data describes the payload and can be analyzed to detect anomalies in communication patterns. Communication patterns depend on the payload itself and on the technical protocol used. These technical patterns are the research target, since their analysis can spotlight vulnerable behavior, for example unusual traffic or extra transported load. Big data was used to train a model with supervised machine learning. The dataset was collected from the network interfaces of the distributed application infrastructure. Machine learning tools were obtained from the cloud services provider Amazon Web Services. The stochastic gradient descent technique was utilized for model training, so that the model could represent the communication patterns in the system. The learning target was packet length; regression was performed to understand the relationship between packet meta-data (timestamp, protocol, source server) and its length. The root mean square error was used to evaluate learning efficiency. After the model was prepared on the training dataset, it was tested on the test dataset and then applied to the target dataset (the dataset for prediction) to check whether it was capable of detecting anomalies. The experimental part showed the applicability of machine learning to communication optimization in a distributed application environment. By means of the trained model, it was possible to predict target parameters of traffic and computing-resource usage in order to avoid service degradation. Additionally, anomalies in the traffic transferred between application components could be revealed. The application of these techniques is envisioned in information security and in efficient network resource planning. Further research could apply machine learning techniques to more complicated distributed environments and enlarge the number of protocols used to prepare communication patterns.
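For orientation only, here is a minimal sketch (with hypothetical column names) of the kind of workflow the abstract outlines: regress packet length on packet meta-data with stochastic gradient descent, measure RMSE, and flag records whose prediction error is unusually large. The original work used Amazon Web Services machine-learning tooling rather than scikit-learn.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

FEATURES = ["protocol", "source_server", "timestamp"]  # hypothetical meta-data columns

def fit_traffic_model(train: pd.DataFrame):
    """Train an SGD regression model that predicts packet length from meta-data."""
    preprocess = ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["protocol", "source_server"]),
        ("num", StandardScaler(), ["timestamp"]),
    ])
    model = make_pipeline(preprocess, SGDRegressor(max_iter=2000, tol=1e-4))
    model.fit(train[FEATURES], train["packet_length"])
    return model

def flag_anomalies(model, data: pd.DataFrame, k: float = 3.0) -> pd.DataFrame:
    """Return records whose observed length deviates from the prediction by
    more than k times the RMSE measured on this data set."""
    predicted = model.predict(data[FEATURES])
    rmse = np.sqrt(mean_squared_error(data["packet_length"], predicted))
    return data[np.abs(data["packet_length"] - predicted) > k * rmse]
```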
15

Medyakov, D., G. Molodtsov, A. Beznosikov, and A. Gasnikov. "Optimal Data Splitting in Distributed Optimization for Machine Learning". Doklady Mathematics 108, S2 (December 2023): S465–S475. http://dx.doi.org/10.1134/s1064562423701600.

16

Yang, Dezhi, Xintong He, Jun Wang, Guoxian Yu, Carlotta Domeniconi, and Jinglin Zhang. "Federated Causality Learning with Explainable Adaptive Optimization". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16308–15. http://dx.doi.org/10.1609/aaai.v38i15.29566.

Abstract:
Discovering the causality from observational data is a crucial task in various scientific domains. With increasing awareness of privacy, data are not allowed to be exposed, and it is very hard to learn causal graphs from dispersed data, since these data may have different distributions. In this paper, we propose a federated causal discovery strategy (FedCausal) to learn the unified global causal graph from decentralized heterogeneous data. We design a global optimization formula to naturally aggregate the causal graphs from client data and constrain the acyclicity of the global graph without exposing local data. Unlike other federated causal learning algorithms, FedCausal unifies the local and global optimizations into a complete directed acyclic graph (DAG) learning process with a flexible optimization objective. We prove that this optimization objective has a high interpretability and can adaptively handle homogeneous and heterogeneous data. Experimental results on synthetic and real datasets show that FedCausal can effectively deal with non-independently and identically distributed (non-iid) data and has a superior performance.
17

Mar’i, Farhanna, and Ahmad Afif Supianto. "A conceptual approach of optimization in federated learning". Indonesian Journal of Electrical Engineering and Computer Science 37, no. 1 (January 1, 2025): 288. http://dx.doi.org/10.11591/ijeecs.v37.i1.pp288-299.

Abstract:
Federated learning (FL) is an emerging approach to distributed learning from decentralized data, designed with privacy concerns in mind. FL has been successfully applied in several fields, such as the internet of things (IoT), human activity recognition (HAR), and natural language processing (NLP), showing remarkable results. However, the development of FL in real-world applications still faces several challenges. Recent optimizations of FL have been made to address these issues and enhance the FL settings. In this paper, we categorize the optimization of FL into five main challenges: Communication Efficiency, Heterogeneity, Privacy and Security, Scalability, and Convergence Rate. We provide an overview of various optimization frameworks for FL proposed in previous research, illustrated with concrete examples and applications based on these five optimization goals. Additionally, we propose two optional integrated conceptual frameworks (CFs) for optimizing FL by combining several optimization methods to achieve the best implementation of FL that addresses the five challenges.
18

Shi, Junjie, Jiang Bian, Jakob Richter, Kuan-Hsun Chen, Jörg Rahnenführer, Haoyi Xiong, and Jian-Jia Chen. "MODES: model-based optimization on distributed embedded systems". Machine Learning 110, no. 6 (June 2021): 1527–47. http://dx.doi.org/10.1007/s10994-021-06014-6.

Abstract:
The predictive performance of a machine learning model highly depends on the corresponding hyper-parameter setting. Hence, hyper-parameter tuning is often indispensable. Normally such tuning requires the dedicated machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. However, in a distributed machine learning scenario, it is not always possible to collect all the data from all nodes due to privacy concerns or storage limitations. Moreover, if data has to be transferred through low bandwidth connections, it reduces the time available for tuning. Model-Based Optimization (MBO) is one state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models or federated learning lacks research. This work proposes a framework, MODES, that allows MBO to be deployed on resource-constrained distributed embedded systems. Each node trains an individual model based on its local data. The goal is to optimize the combined prediction accuracy. The presented framework offers two optimization modes: (1) MODES-B considers the whole ensemble as a single black box and optimizes the hyper-parameters of each individual model jointly, and (2) MODES-I considers all models as clones of the same black box, which allows it to efficiently parallelize the optimization in a distributed setting. We evaluate MODES by conducting experiments on the optimization of the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that, with an improvement in terms of mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability for both modes, MODES outperforms the baseline, i.e., carrying out tuning with MBO on each node individually with its local sub-data set.
19

Zhang, Chongjie, and Victor Lesser. "Coordinated Multi-Agent Reinforcement Learning in Networked Distributed POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 764–70. http://dx.doi.org/10.1609/aaai.v25i1.7886.

Abstract:
In many multi-agent applications such as distributed sensor nets, a network of agents act collaboratively under uncertainty and local interactions. Networked Distributed POMDP (ND-POMDP) provides a framework to model such cooperative multi-agent decision making. Existing work on ND-POMDPs has focused on offline techniques that require accurate models, which are usually costly to obtain in practice. This paper presents a model-free, scalable learning approach that synthesizes multi-agent reinforcement learning (MARL) and distributed constraint optimization (DCOP). By exploiting structured interaction in ND-POMDPs, our approach distributes the learning of the joint policy and employs DCOP techniques to coordinate distributed learning to ensure the global learning performance. Our approach can learn a globally optimal policy for ND-POMDPs with a property called groupwise observability. Experimental results show that, with communication during learning and execution, our approach significantly outperforms the nearly-optimal non-communication policies computed offline.
20

Veerappa, Praveena Mydolalu, and Ajeet Annarao Chikkamannur. "Prime Learning – Ant Colony Optimization Technique for Query Optimization in Distributed Database System". International Journal of Engineering Trends and Technology 70, no. 8 (August 31, 2022): 158–65. http://dx.doi.org/10.14445/22315381/ijett-v70i8p216.

21

Zhang, Xin, and Ahmed Eldawy. "Spatial Query Optimization With Learning". Proceedings of the VLDB Endowment 17, no. 12 (August 2024): 4245–48. http://dx.doi.org/10.14778/3685800.3685846.

Abstract:
Query optimization is a key component in database management systems (DBMS) and distributed data processing platforms. Recent research in the database community incorporated techniques from artificial intelligence to enhance query optimization. Various learning models have been extended and applied to the query optimization tasks, including query execution plan, query rewriting, and cost estimation. The tasks involved in query optimization differ based on the type of data being processed, such as relational data or spatial geometries. This tutorial reviews recent learning-based approaches for spatial query optimization tasks. We go over methods designed specifically for spatial data, as well as solutions proposed for high-dimensional data. Additionally, we present learning-based spatial indexing and spatial partitioning methods, which are also vital components in spatial data processing. We also identify several open research problems in these fields.
22

Xian, Wenhan, Feihu Huang, and Heng Huang. "Communication-Efficient Frank-Wolfe Algorithm for Nonconvex Decentralized Distributed Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10405–13. http://dx.doi.org/10.1609/aaai.v35i12.17246.

Abstract:
Recently decentralized optimization attracts much attention in machine learning because it is more communication-efficient than the centralized fashion. Quantization is a promising method to reduce the communication cost via cutting down the budget of each single communication using the gradient compression. To further improve the communication efficiency, more recently, some quantized decentralized algorithms have been studied. However, the quantized decentralized algorithm for nonconvex constrained machine learning problems is still limited. Frank-Wolfe (a.k.a., conditional gradient or projection-free) method is very efficient to solve many constrained optimization tasks, such as low-rank or sparsity-constrained models training. In this paper, to fill the gap of decentralized quantized constrained optimization, we propose a novel communication-efficient Decentralized Quantized Stochastic Frank-Wolfe (DQSFW) algorithm for non-convex constrained learning models. We first design a new counterexample to show that the vanilla decentralized quantized stochastic Frank-Wolfe algorithm usually diverges. Thus, we propose DQSFW algorithm with the gradient tracking technique to guarantee the method will converge to the stationary point of non-convex optimization safely. In our theoretical analysis, we prove that to achieve the stationary point our DQSFW algorithm achieves the same gradient complexity as the standard stochastic Frank-Wolfe and centralized Frank-Wolfe algorithms, but has much less communication cost. Experiments on matrix completion and model compression applications demonstrate the efficiency of our new algorithm.
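As a generic point of reference (not the exact DQSFW recursion, which additionally quantizes the exchanged vectors), decentralized Frank-Wolfe with the gradient-tracking technique mentioned in the abstract can be sketched as follows, with a doubly stochastic mixing matrix W, constraint set C, and step size γ_t:

```latex
% Each agent i keeps an iterate x_i^t and a gradient tracker y_i^t
v_i^{t}   = \arg\min_{v \in \mathcal{C}} \langle v,\, y_i^{t} \rangle
            \qquad \text{(linear minimization oracle, no projection)} \\
x_i^{t+1} = \sum_{j} W_{ij}\, x_j^{t} + \gamma_t \bigl(v_i^{t} - x_i^{t}\bigr) \\
y_i^{t+1} = \sum_{j} W_{ij}\, y_j^{t} + \nabla f_i\bigl(x_i^{t+1}\bigr) - \nabla f_i\bigl(x_i^{t}\bigr)
```

The tracker y_i^t approximates the network-wide average gradient, which, per the abstract, is what allows the quantized variant to converge where the vanilla scheme can diverge.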
23

Alistarh, Dan. "Distributed Computing Column 85 Elastic Consistency". ACM SIGACT News 53, no. 2 (June 10, 2022): 63. http://dx.doi.org/10.1145/3544979.3544990.

Abstract:
Overview. In this column, we attempt to cover a new and exciting area in distributed computing, that is, distributed and parallel machine learning. Together with my students Giorgi Nadiradze and Ilia Markov, I share an expository version of a recent conference paper [30] which attempts to provide a "consistency condition" for optimization problems which appear in distributed computing and machine learning applications.
24

Qin, Yude, Ji Ke, Biao Wang, and Gennady Fedorovich Filaretov. "Energy optimization for regional buildings based on distributed reinforcement learning". Sustainable Cities and Society 78 (March 2022): 103625. http://dx.doi.org/10.1016/j.scs.2021.103625.

25

Yu, Javier, Joseph A. Vincent, and Mac Schwager. "DiNNO: Distributed Neural Network Optimization for Multi-Robot Collaborative Learning". IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 1896–903. http://dx.doi.org/10.1109/lra.2022.3142402.

26

Chen, Jianshu, and Ali H. Sayed. "Diffusion Adaptation Strategies for Distributed Optimization and Learning Over Networks". IEEE Transactions on Signal Processing 60, no. 8 (August 2012): 4289–305. http://dx.doi.org/10.1109/tsp.2012.2198470.

27

Lee, Hoon, Sang Hyun Lee, and Tony Q. S. Quek. "Deep Learning for Distributed Optimization: Applications to Wireless Resource Management". IEEE Journal on Selected Areas in Communications 37, no. 10 (October 2019): 2251–66. http://dx.doi.org/10.1109/jsac.2019.2933890.

28

Wen, Jing. "Distributed reinforcement learning-based optimization of resource scheduling for telematics". Computers and Electrical Engineering 118 (September 2024): 109464. http://dx.doi.org/10.1016/j.compeleceng.2024.109464.

29

Zhang, Zhaojuan, Wanliang Wang, and Gaofeng Pan. "A Distributed Quantum-Behaved Particle Swarm Optimization Using Opposition-Based Learning on Spark for Large-Scale Optimization Problem". Mathematics 8, no. 11 (October 23, 2020): 1860. http://dx.doi.org/10.3390/math8111860.

Abstract:
In the era of big data, the size and complexity of the data are increasing especially for those stored in remote locations, and whose difficulty is further increased by the ongoing rapid accumulation of data scale. Real-world optimization problems present new challenges to traditional intelligent optimization algorithms since the traditional serial optimization algorithm has a high computational cost or even cannot deal with it when faced with large-scale distributed data. Responding to these challenges, a distributed cooperative evolutionary algorithm framework using Spark (SDCEA) is first proposed. The SDCEA can be applied to address the challenge due to insufficient computing resources. Second, a distributed quantum-behaved particle swarm optimization algorithm (SDQPSO) based on the SDCEA is proposed, where the opposition-based learning scheme is incorporated to initialize the population, and a parallel search is conducted on distributed spaces. Finally, the performance of the proposed SDQPSO is tested. In comparison with SPSO, SCLPSO, and SALCPSO, SDQPSO can not only improve the search efficiency but also search for a better optimum with almost the same computational cost for the large-scale distributed optimization problem. In conclusion, the proposed SDQPSO based on the SDCEA framework has high scalability, which can be applied to solve the large-scale optimization problem.
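As an illustrative, single-machine sketch of the opposition-based learning initialization the abstract mentions (the paper's SDQPSO runs on Spark with quantum-behaved particle updates, which are not shown here):

```python
import numpy as np

def obl_initialize(fitness, n_particles, lb, ub, seed=None):
    """Opposition-based initialization: draw a random population, form the
    opposite of each candidate (lb + ub - x), and keep the fitter half."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    population = rng.uniform(lb, ub, size=(n_particles, lb.size))
    opposites = lb + ub - population                  # opposite point of each candidate
    candidates = np.vstack([population, opposites])
    scores = np.array([fitness(x) for x in candidates])
    keep = np.argsort(scores)[:n_particles]           # minimization assumed
    return candidates[keep]

# Example: 20 particles for a 10-dimensional sphere function
swarm = obl_initialize(lambda x: float(np.sum(x ** 2)), 20,
                       lb=np.full(10, -5.0), ub=np.full(10, 5.0))
```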
30

Gunuganti, Anvesh. "Federated Learning". Journal of Artificial Intelligence & Cloud Computing 1, no. 2 (June 30, 2022): 1–6. http://dx.doi.org/10.47363/jaicc/2022(1)360.

Abstract:
Federated Learning (FL) serves as one of the groundbreaking approaches in the present society, particularly in smart mobile applications, for designing a distributed environment for clients' model training without compromising data ownership. This paper narrows down the focus to how FL emerged, how it fits in distributed systems, and its usefulness in different fields. Research findings derived from thematic analysis include FL's contribution to improving the functionality of mobile applications and managing data privacy issues. Recommendations for the actual FL application underline such aspects as data management and protection. Optimization, privacy, and novelty areas of FL are the areas for further study in the field, as per the conclusion of the study.
31

Xu, Wencai. "Efficient Distributed Image Recognition Algorithm of Deep Learning Framework TensorFlow". Journal of Physics: Conference Series 2066, no. 1 (November 1, 2021): 012070. http://dx.doi.org/10.1088/1742-6596/2066/1/012070.

Abstract:
Deep learning requires training on massive data to gain the ability to deal with unfamiliar data in the future, but it is not easy to obtain a good model from training on such data, and dedicated deep learning frameworks have emerged to meet the requirements of these tasks. This article studies efficient distributed image recognition algorithms for the deep learning framework TensorFlow. It reviews TensorFlow itself and the theory behind its parallel execution, which lays the foundation for the design and implementation of TensorFlow distributed parallel optimization algorithms, and it designs and implements different optimization algorithms for TensorFlow data parallelism and model parallelism. Through multiple sets of comparative experiments, the paper verifies the effectiveness of the two implemented optimization algorithms in improving the speed of TensorFlow distributed parallel iteration. The results show that the 12 sets of experiments achieved stable model accuracy, with each set reaching an accuracy above 97%. A distributed algorithm built on a suitable deep learning framework such as TensorFlow can therefore effectively reduce model training time without reducing the accuracy of the final model.
32

Fattahi, Salar, Nikolai Matni, and Somayeh Sojoudi. "Efficient Learning of Distributed Linear-Quadratic Control Policies". SIAM Journal on Control and Optimization 58, no. 5 (January 2020): 2927–51. http://dx.doi.org/10.1137/19m1291108.

33

Wang, Yibo, Yuanyu Wan, Shimao Zhang, and Lijun Zhang. "Distributed Projection-Free Online Learning for Smooth and Convex Losses". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10226–34. http://dx.doi.org/10.1609/aaai.v37i8.26218.

Abstract:
We investigate the problem of distributed online convex optimization with complicated constraints, in which the projection operation could be the computational bottleneck. To avoid projections, distributed online projection-free methods have been proposed and attain an O(T^{3/4}) regret bound for general convex losses. However, they cannot utilize the smoothness condition, which has been exploited in the centralized setting to improve the regret. In this paper, we propose a new distributed online projection-free method with a tighter regret bound of O(T^{2/3}) for smooth and convex losses. Specifically, we first provide a distributed extension of Follow-the-Perturbed-Leader so that the smoothness can be utilized in the distributed setting. Then, we reduce the computational cost via sampling and blocking techniques. In this way, our method only needs to solve one linear optimization per round on average. Finally, we conduct experiments on benchmark datasets to verify the effectiveness of our proposed method.
34

Wang, Shikai, Haotian Zheng, Xin Wen, and Shang Fu. "Distributed High-Performance Computing Methods for Accelerating Deep Learning Training". Journal of Knowledge Learning and Science Technology 3, no. 3 (September 25, 2024): 108–26. http://dx.doi.org/10.60087/jklst.v3.n4.p22.

Abstract:
This paper comprehensively analyzes distributed high-performance computing methods for accelerating deep learning training. We explore the evolution of distributed computing architectures, including data parallelism, model parallelism, and pipeline parallelism, and their hybrid implementations. The study delves into optimization techniques crucial for large-scale training, such as distributed optimization algorithms, gradient compression, and adaptive learning rate methods. We investigate communication-efficient algorithms, including Ring All Reduce variants and decentralized training approaches, which address the scalability challenges in distributed systems. The research examines hardware acceleration and specialized systems, focusing on GPU clusters, custom AI accelerators, high-performance interconnects, and distributed storage systems optimized for deep learning workloads. Finally, we discuss this field's challenges and future directions, including scalability-efficiency trade-offs, fault tolerance, energy efficiency in large-scale training, and emerging trends like federated learning and neuromorphic computing. Our findings highlight the synergy between advanced algorithms, specialized hardware, and optimized system designs in pushing the boundaries of large-scale deep learning, paving the way for future breakthroughs in artificial intelligence.
35

Wang, Shikai, Haotian Zheng, Xin Wen, and Shang Fu. "Distributed High-Performance Computing Methods for Accelerating Deep Learning Training". Journal of Knowledge Learning and Science Technology 3, no. 3 (September 25, 2024): 108–26. http://dx.doi.org/10.60087/jklst.v3.n3.p108-126.

Abstract:
This paper comprehensively analyzes distributed high-performance computing methods for accelerating deep learning training. We explore the evolution of distributed computing architectures, including data parallelism, model parallelism, and pipeline parallelism, and their hybrid implementations. The study delves into optimization techniques crucial for large-scale training, such as distributed optimization algorithms, gradient compression, and adaptive learning rate methods. We investigate communication-efficient algorithms, including Ring All Reduce variants and decentralized training approaches, which address the scalability challenges in distributed systems. The research examines hardware acceleration and specialized systems, focusing on GPU clusters, custom AI accelerators, high-performance interconnects, and distributed storage systems optimized for deep learning workloads. Finally, we discuss this field's challenges and future directions, including scalability-efficiency trade-offs, fault tolerance, energy efficiency in large-scale training, and emerging trends like federated learning and neuromorphic computing. Our findings highlight the synergy between advanced algorithms, specialized hardware, and optimized system designs in pushing the boundaries of large-scale deep learning, paving the way for future breakthroughs in artificial intelligence.
36

Deng, Yanchen, Shufeng Kong, and Bo An. "Pretrained Cost Model for Distributed Constraint Optimization Problems". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9331–40. http://dx.doi.org/10.1609/aaai.v36i9.21164.

Abstract:
Distributed Constraint Optimization Problems (DCOPs) are an important subclass of combinatorial optimization problems, where information and controls are distributed among multiple autonomous agents. Previously, Machine Learning (ML) has been largely applied to solve combinatorial optimization problems by learning effective heuristics. However, existing ML-based heuristic methods are often not generalizable to different search algorithms. Most importantly, these methods usually require full knowledge about the problems to be solved, which are not suitable for distributed settings where centralization is not realistic due to geographical limitations or privacy concerns. To address the generality issue, we propose a novel directed acyclic graph representation schema for DCOPs and leverage the Graph Attention Networks (GATs) to embed graph representations. Our model, GAT-PCM, is then pretrained with optimally labelled data in an offline manner, so as to construct effective heuristics to boost a broad range of DCOP algorithms where evaluating the quality of a partial assignment is critical, such as local search or backtracking search. Furthermore, to enable decentralized model inference, we propose a distributed embedding schema of GAT-PCM where each agent exchanges only embedded vectors, and show its soundness and complexity. Finally, we demonstrate the effectiveness of our model by combining it with a local search or a backtracking search algorithm. Extensive empirical evaluations indicate that the GAT-PCM-boosted algorithms significantly outperform the state-of-the-art methods in various benchmarks.
37

Taheri, Seyed Iman, Mohammadreza Davoodi, and Mohd Hasan Ali. "A Simulated-Annealing-Quasi-Oppositional-Teaching-Learning-Based Optimization Algorithm for Distributed Generation Allocation". Computation 11, no. 11 (November 2, 2023): 214. http://dx.doi.org/10.3390/computation11110214.

Abstract:
Conventional evolutionary optimization techniques often struggle with finding global optima, getting stuck in local optima instead, and can be sensitive to initial conditions and parameter settings. Efficient Distributed Generation (DG) allocation in distribution systems hinges on streamlined optimization algorithms that handle complex energy operations, support real-time decisions, adapt to dynamics, and improve system performance, considering cost and power quality. This paper proposes the Simulated-Annealing-Quasi-Oppositional-Teaching-Learning-Based Optimization Algorithm to efficiently allocate DGs within a distribution test system. The study focuses on wind turbines, photovoltaic units, and fuel cells as prominent DG due to their growing usage trends. The optimization goals include minimizing voltage losses, reducing costs, and mitigating greenhouse gas emissions in the distribution system. The proposed algorithm is implemented and evaluated on the IEEE 70-bus test system, with a comparative analysis conducted against other evolutionary methods such as Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Honey Bee Mating Optimization (HBMO), and Teaching-Learning-Based Optimization (TLBO) algorithms. Results indicate that the proposed algorithm is effective in allocating the DGs. Statistical testing confirms significant results (probability < 0.1), indicating superior optimization capabilities for this specific problem. Crucially, the proposed algorithm excels in both accuracy and computational speed compared to other methods studied.
38

Dai, Wei, Wei Wang, Zhongtian Mao, Ruwen Jiang, Fudong Nian, and Teng Li. "Distributed Policy Evaluation with Fractional Order Dynamics in Multiagent Reinforcement Learning". Security and Communication Networks 2021 (September 3, 2021): 1–7. http://dx.doi.org/10.1155/2021/1020466.

Abstract:
The main objective of multiagent reinforcement learning is to achieve a global optimal policy. It is difficult to evaluate the value function with high-dimensional state space. Therefore, we transfer the problem of multiagent reinforcement learning into a distributed optimization problem with constraint terms. In this problem, all agents share the space of states and actions, but each agent only obtains its own local reward. Then, we propose a distributed optimization with fractional order dynamics to solve this problem. Moreover, we prove the convergence of the proposed algorithm and illustrate its effectiveness with a numerical example.
39

Li, Xinhang, Yiying Yang, Qinwen Wang, Zheng Yuan, Chen Xu, Lei Li, and Lin Zhang. "A distributed multi-vehicle pursuit scheme: generative multi-adversarial reinforcement learning". Intelligence & Robotics 3, no. 3 (September 13, 2023): 436–52. http://dx.doi.org/10.20517/ir.2023.25.

Abstract:
Multi-vehicle pursuit (MVP) is one of the most challenging problems for intelligent traffic management systems due to multi-source heterogeneous data and its mission nature. While many reinforcement learning (RL) algorithms have shown promising abilities for MVP in structured grid-pattern roads, their lack of dynamic and effective traffic awareness limits pursuing efficiency. The sparse reward of pursuing tasks still hinders the optimization of these RL algorithms. Therefore, this paper proposes a distributed generative multi-adversarial RL for MVP (DGMARL-MVP) in urban traffic scenes. In DGMARL-MVP, a generative multi-adversarial network is designed to improve the Bellman equation by generating the potential dense reward, thereby properly guiding strategy optimization of distributed multi-agent RL. Moreover, a graph neural network-based intersecting cognition is proposed to extract integrated features of traffic situations and relationships among agents from multi-source heterogeneous data. These integrated and comprehensive traffic features are used to assist RL decision-making and improve pursuing efficiency. Extensive experimental results show that the DGMARL-MVP can reduce the pursuit time by 5.47% compared with proximal policy optimization and improve the pursuing average success rate up to 85.67%. Codes are open-sourced in Github.
40

Agrawal, Shaashwat, Sagnik Sarkar, Mamoun Alazab, Praveen Kumar Reddy Maddikunta, Thippa Reddy Gadekallu, and Quoc-Viet Pham. "Genetic CFL: Hyperparameter Optimization in Clustered Federated Learning". Computational Intelligence and Neuroscience 2021 (November 18, 2021): 1–10. http://dx.doi.org/10.1155/2021/7156420.

Abstract:
Federated learning (FL) is a distributed model for deep learning that integrates client-server architecture, edge computing, and real-time intelligence. FL has the capability of revolutionizing machine learning (ML) but lacks in the practicality of implementation due to technological limitations, communication overhead, non-IID (independent and identically distributed) data, and privacy concerns. Training a ML model over heterogeneous non-IID data highly degrades the convergence rate and performance. The existing traditional and clustered FL algorithms exhibit two main limitations, including inefficient client training and static hyperparameter utilization. To overcome these limitations, we propose a novel hybrid algorithm, namely, genetic clustered FL (Genetic CFL), that clusters edge devices based on the training hyperparameters and genetically modifies the parameters clusterwise. Then, we introduce an algorithm that drastically increases the individual cluster accuracy by integrating the density-based clustering and genetic hyperparameter optimization. The results are bench-marked using MNIST handwritten digit dataset and the CIFAR-10 dataset. The proposed genetic CFL shows significant improvements and works well with realistic cases of non-IID and ambiguous data. An accuracy of 99.79% is observed in the MNIST dataset and 76.88% in CIFAR-10 dataset with only 10 training rounds.
41

Mantri, Arjun. "Advanced ML (Machine Learning) Techniques for Optimizing ETL Workflows with Apache Spark and Snowflake". Journal of Artificial Intelligence & Cloud Computing 2, no. 3 (September 30, 2023): 1–6. http://dx.doi.org/10.47363/jaicc/2023(2)339.

Abstract:
This paper examines the optimization of ETL (Extract, Transform, Load) pipelines using Apache Spark and Snowflake. Apache Spark is a powerful open-source distributed data processing platform, while Snowflake is a cloud-native data warehousing solution. The paper discusses the challenges and solutions in tuning Spark configurations using machine learning techniques and in optimizing Snowflake's architecture for cost efficiency and performance. Experimental results demonstrate significant performance gains and cost savings through these optimizations.
42

Jamian, Jasrul Jamani, Hazlie Mokhlis, Mohd Wazir Mustafa, Mohd Noor Abdullah, and Muhammad Ariff Baharudin. "Comparative learning global particle swarm optimization for optimal distributed generations' output". Turkish Journal of Electrical Engineering & Computer Sciences 22 (2014): 1323–37. http://dx.doi.org/10.3906/elk-1212-173.

43

Zhang, Jilin, Hangdi Tu, Yongjian Ren, Jian Wan, Li Zhou, Mingwei Li, Jue Wang, Lifeng Yu, Chang Zhao, and Lei Zhang. "A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors". Sensors 17, no. 10 (September 21, 2017): 2172. http://dx.doi.org/10.3390/s17102172.

44

Ikebou, Shigeya, Fei Qian, and Hironori Hirata. "A Parallel Distributed Learning Automaton Computing Model for Function Optimization Problems". IEEJ Transactions on Electronics, Information and Systems 121, no. 2 (2001): 476–77. http://dx.doi.org/10.1541/ieejeiss1987.121.2_476.

45

Mai, Tianle, Haipeng Yao, Ni Zhang, Wenji He, Dong Guo, and Mohsen Guizani. "Transfer Reinforcement Learning Aided Distributed Network Slicing Optimization in Industrial IoT". IEEE Transactions on Industrial Informatics 18, no. 6 (June 2022): 4308–16. http://dx.doi.org/10.1109/tii.2021.3132136.

46

He, Haibo, and He Jiang. "Deep Learning Based Energy Efficiency Optimization for Distributed Cooperative Spectrum Sensing". IEEE Wireless Communications 26, no. 3 (June 2019): 32–39. http://dx.doi.org/10.1109/mwc.2019.1800397.

47

Raju, Leo, Sibi Sankar, and R. S. Milton. "Distributed Optimization of Solar Micro-grid Using Multi Agent Reinforcement Learning". Procedia Computer Science 46 (2015): 231–39. http://dx.doi.org/10.1016/j.procs.2015.02.016.

48

Simon, Dan, Arpit Shah, and Carré Scheidegger. "Distributed learning with biogeography-based optimization: Markov modeling and robot control". Swarm and Evolutionary Computation 10 (June 2013): 12–24. http://dx.doi.org/10.1016/j.swevo.2012.12.003.

49

Yuan, Kun, Bicheng Ying, Xiaochuan Zhao, and Ali H. Sayed. "Exact Diffusion for Distributed Optimization and Learning—Part II: Convergence Analysis". IEEE Transactions on Signal Processing 67, no. 3 (February 1, 2019): 724–39. http://dx.doi.org/10.1109/tsp.2018.2875883.

50

Yuan, Kun, Bicheng Ying, Xiaochuan Zhao, and Ali H. Sayed. "Exact Diffusion for Distributed Optimization and Learning—Part I: Algorithm Development". IEEE Transactions on Signal Processing 67, no. 3 (February 1, 2019): 708–23. http://dx.doi.org/10.1109/tsp.2018.2875898.